The ability to withstand drying-out has been an important challenge that plants have overcome during the course of evolution. Plants evolved in water and the colonization of the land involved the evolution of characteristics to help them avoid desiccation.
The green colour on the north face of trees and walls is caused by the growth of a tiny green plant called Pleurococcus. Pleurococcus is an alga, the closest relatives of which grow in lakes and rivers, yet it withstands the drying effects of exposure to the air and, during its lifetime, is never fully immersed in water.
When a colonising Pleurococcus lands on a surface, it releases a carbohydrate-rich slime that sticks the cell in place and stops it being washed away. Once safely attached this pioneering cell reproduces by cell division (mitosis), where each cell makes an identical copy of itself and colony growth begins.
Since each of the new Pleurococcus cells releases sticky slime, the colony soon sits on extracellular glue that not only attaches the ever-expanding colony to its substrate but also forms a matrix that can be colonized by other organisms, such as bacteria. Soon this matrix becomes a complex environment, containing a large array of different chemical substances and rich in a variety of algae and bacteria.
These sheets of diverse microorganisms, suspended in a mass of carbohydrate-rich chemicals, are called biofilms. The ability to form biofilms allows Pleurococcus, whose closest relatives live in the water, to survive on the dry, exposed surfaces of the land environment, surrounded by air rather than water.
The ability of algae to grow on land is likely to be a very ancient characteristic. There is fossil evidence that shows that mats of algae existed as far back as the Proterozoic, the geological eon that ended 540 million years ago. These mats would have had the characteristics of biofilms, similar to those formed by Pleurococcus today.
The green hue that you see on damp, exposed surfaces is therefore a reminder of a process that occurred many times in Earth history - colonization of the land by life.
Many organisms left the water and got a foothold on the dry, continental surfaces of the planet. Some diversified once established on land, giving rise to a seemingly endless diversity of forms. Others, like the ancestor of Pleurococcus, have changed very little since they first left the water over half a billion years ago.
Graham JE et al 2009. Algae. Benjamin Cummings.
Guiry MD, Guiry GM 2014. AlgaeBase. World-wide electronic publication. National University of Ireland, Galway. |
Early childhood education is becoming more and more important for children's success in the world today. The world has become fast paced, and children are exposed to far more now than ever before. It is important to give your children a quality preschool education where they can gain a strong educational foundation to build on. This foundation will prepare them for future education as well as the challenges that life presents as they grow.
At the preschool level, children are able to perform tasks that help develop their concentration, sense of order, independence, responsibility, and self-esteem. Young children are working on building themselves as individuals. Academics will come much more easily to the preschool child once these attributes are in place.
The Montessori Method of education is a great choice for children at the start of their early education. A Montessori classroom offers children the opportunity to explore through hands-on learning. The Montessori approach is highly developmental, individualized, and self-paced. Children move through the classroom choosing their own “work” with guidance from the Montessori teacher. All the Montessori materials were designed by Maria Montessori with a specific purpose, and all of them are referred to as “work”.
With the Montessori method, the child is the center of the education. The teacher’s role is to carefully prepare the classroom with materials that will entice and spark interest in the child. The teacher observes what the child is choosing and how he or she is doing the work. It is the teacher’s job to be a facilitator and guide the child to work that will aid his or her development.
For example, if the teacher observes a young child struggling to hold a pencil correctly and struggling with writing skills, she knows that the child needs to be directed to activities that will help strengthen the child’s fingers and hands. The muscles need to be stronger and more developed before the child can grasp the pencil firmly. Once the muscles are stronger, it’s time to direct the child back to learning writing skills. At this point the child can be successful. A Montessori teacher’s goal is always to support the child in being successful.
The Montessori Method of education is a holistic approach. There is so much more to this method than meets the eye! Children are working on themselves from the inside out. They are developing as human beings with support from the teacher. They are gaining a foundation in academics and are preparing to go out into the world.
The Montessori method of teaching is truly a good choice for any child’s early childhood education! |
For many climate change activists, the latest rallying cry has been, “Keep it in the ground,” a call to slow and stop drilling for fossil fuels. But for a new generation of land stewards, the cry is becoming, “Put it back in the ground!”
As an avid gardener and former organic farmer, I know the promise that soil holds: Every ounce supports a plethora of life. Now, evidence suggests that soil may also be a key to slowing and reversing climate change.
“I think the future is really bright,” said Loren Poncia, an energetic Northern Californian cattle rancher. Poncia’s optimism stems from the hope he sees in carbon farming, which he has implemented on his ranch. Carbon farming uses land management techniques that increase the rate at which carbon is absorbed from the atmosphere and stored in soils. Scientists, policy makers, and land stewards alike are hopeful about its potential to mitigate climate change.
Carbon is the key ingredient to all life. It is absorbed by plants from the atmosphere as carbon dioxide and, with the energy of sunlight, converted into simple sugars that build more plant matter. Some of this carbon is consumed by animals and cycled through the food chain, but much of it is held in soil as roots or decaying plant matter. Historically, soil has been a carbon sink, a place of long-term carbon storage.
But many modern land management techniques, including deforestation and frequent tilling, expose soil-bound carbon to oxygen, limiting the soil’s absorption and storage potential. In fact, carbon released from soil is estimated to contribute one-third of global greenhouse gas emissions, according to the Food and Agriculture Organization of the United Nations.
Ranchers and farmers have the power to address that issue. Pastures make up 3.3 billion hectares (about 8 billion acres), or 67 percent, of the world’s farmland. Carbon farming techniques can sequester up to 50 tons of carbon per hectare (about 2.5 acres) over a pasture’s lifetime. This motivates some ranchers and farmers to do things a little differently.
“It’s what we think about all day, every day,” said Sallie Calhoun of Paicines Ranch on California’s central coast. “Sequestering soil carbon is essentially creating more life in the soil, since it’s all fed by photosynthesis. It essentially means more plants into every inch of soil.”
Calhoun’s ranch sits in fertile, rolling California pastureland about an hour’s drive east of Monterey Bay. She intensively manages her cattle’s grazing, moving them every few days across 7,000 acres. This avoids compaction, which decreases soil productivity, and also allows perennial grasses to grow back between grazing. Perennial grasses, like sorghum and bluestems, have long root systems that sequester far more carbon than their annual cousins.
By starting with a layer of compost, Calhoun has also turned her new vineyard into an effective carbon sink. Compost is potent for carbon sequestration because of how it enhances otherwise unhealthy soil, enriching it with nutrients and microbes that increase its capacity to harbor plant growth. Compost also increases water-holding capacity, which helps plants thrive even in times of drought. She plans to till the land only once, when she plants the grapes, to avoid releasing stored carbon back into the atmosphere.
Managed grazing and compost application are just a few common practices of the 35 that the Natural Resources Conservation Service recommends for carbon sequestration. All 35 methods have been proven to sequester carbon, though some are better documented than others.
David Lewis, director of the University of California Cooperative Extension, says the techniques Calhoun uses, as well as stream restoration, are some of the most common. Lewis has worked with the Marin Carbon Project, a collaboration of researchers, ranchers, and policy makers, to study and implement carbon farming in Marin County, California. The research has been promising: They found that one application of compost doubled the production of grass and increased carbon sequestration by up to 70 percent. Similarly, stream and river ecosystems, which harbor lots of dense, woody vegetation, can sequester up to one ton of carbon, or as much as a car emits in a year, in just a few feet along their beds.
On his ranch, Poncia has replanted five miles of streams with native shrubs and trees, and has applied compost to all of his 800 acres of pasture. The compost-fortified grasses are more productive and have allowed him to double the number of cattle his land supports. This has had financial benefits. Ten years ago, Poncia was selling veterinary pharmaceuticals to subsidize his ranch. But, with the increase in cattle, he has been able to take up ranching full time. Plus, his ranch sequesters the same amount of carbon each year as is emitted by 81 cars.
Much of the research on carbon farming focuses on rangelands, which are open grasslands, because they make up such a large portion of ecosystems across the planet. They are also, after all, where we grow a vast majority of our food.
“Many of the skeptics of carbon farming think we should be planting forests instead,” Poncia said. “I think forests are a no-brainer, but there are millions of acres of rangelands across the globe and they are not sequestering as much carbon as they could be.”
The potential of carbon farming lies in wide-scale implementation. The Carbon Cycle Institute, which grew out of the Marin Carbon Project with the ambition of applying the research and lessons to other communities in California and nationally, is taking up that task.
“It really all comes back to this,” said Torri Estrada, pointing to a messy white board with the words SOIL CARBON scrawled in big letters. Estrada is managing director of the Carbon Cycle Institute, where he is working to attract more ranchers and farmers to carbon farming. The white board maps the intricate web of organizations and strategies the institute works with. They provide technical assistance and resources to support land stewards in making the transition.
For interested stewards, implementation, and the costs associated with it, vary. It could be as simple as a one-time compost application or as intensive as a lifetime of managing different techniques. But for all, the process starts by first assessing a land’s sequestration potential and deciding which techniques fit a steward’s budget and goals. COMET-Farm, an online tool produced by the U.S. Department of Agriculture, can help estimate a ranch’s carbon input and output.
The institute also works with state and national policy makers to provide economic incentives for these practices. “If the U.S. government would buy carbon credits from farmers, we would produce them,” Poncia said. These credits are one way the government could pay farmers to mitigate climate change. “Farmers overproduce everything. So, if they can fund that, we will produce them,” he said. While he is already sequestering carbon, Poncia says that he could do more, if given the funding.
Estrada sees the bigger potential of carbon farming to help spur a more fundamental conversation about how we relate to the land. “We’re sitting down with ranchers and having a conversation, and carbon is just the medium for that,” he said. Through this work, Estrada has watched ranchers take a more holistic approach to their management.
On his ranch, Poncia has shifted from thinking about himself as a grass farmer growing feed for his cattle to a soil farmer with the goal of increasing the amount of life in every inch of soil.
• Sally Neas wrote this article for YES! Magazine, a national, nonprofit media organization that fuses powerful ideas with practical actions. Sally is a freelance writer and community educator based in Santa Cruz, California. She has a background in permaculture, sustainable agriculture, and community development, and she covers social and environmental issues. |
Draw a Pirate Ship and Create an Ocean Collage
Students will draw a pirate ship from a model and then create an ocean scene using torn or cut out pieces of paper.
1. We will begin by going over basic geometric shapes and drawing them for our warm up exercise.
2. We will then talk about how all objects are created with shapes.
3. Next, we will look at a ship model and talk about the shapes they see.
4. I will then walk them through drawing the shapes in order to create the ship.
5. We will talk about the horizon line.
6. We will talk about collage.
7. We will cut construction paper to fill in our pirate ships and create our ocean collage.
Colors and background will be up to each learner. This is where their individual creativity is encouraged.
The class will be fun and learners should feel free to ask questions and make comments about the process.
Some knowledge of geometric shapes such as squares, rectangles, circles, and triangles will help but is not necessary.
Benefits of art activity
- Students will review geometric shapes and learn new ones they might not know.
- Students will practice making geometric shapes.
- Students will use fine-motor skills to draw, cut or tear, and glue.
- Students will practice drawing what they see instead of what they think something should look like.
- Students will learn how shapes create objects.
- Students will learn how to use cut out or torn shapes to create a collage. |
Asthma is an inflammatory disease of the airways to the lungs. It is a chronic, or long-term, condition that intermittently inflames and narrows the airways in the lungs. Patients experience difficulty breathing, which can make some physical activities difficult to perform.
Asthma affects people of all ages, and it often starts during childhood. Asthma causes periods of wheezing, chest tightness, coughing, and shortness of breath. Symptoms can range from mild to severe and may occur rarely or every day. When symptoms worsen, it is called an asthma attack, exacerbation, or flare-up. Uncontrolled asthma can damage the lungs.
Asthma attacks are episodes that occur when symptoms get much worse. Asthma attacks can happen suddenly and may be life-threatening. People who have severe asthma experience asthma attacks frequently. Certain things can set off asthma symptoms; these are known as triggers. Common triggers include smoking, stress, polluted places, perfumes, flowers, animal fur, certain food items (such as chocolate and ice cream), and colds and flu.
Several tests may be done to help determine if asthma is likely to be the cause of symptoms. These tests include:
Pulmonary function tests
Spirometry with bronchodilator (BD) test
Peak expiratory flow (PEF)
The patient may need to undergo the following tests to rule out other conditions if symptoms include:
Shortness of breath with dizziness, light-headedness, or tingling in your hands or feet
A cough without other breathing issues
Coughing up mucus
Difficult and noisy breathing during exercise
To rule out other medical conditions, the following tests are done:
Chest X-ray to rule out lung infections, such as tuberculosis, or a foreign substance, such as an object that was inhaled by accident.
Sleep Study to rule out Sleep Apnea.
Endoscopy to rule out gastroesophageal reflux disease (GERD).
Electrocardiogram (EKG) to rule out heart failure or arrhythmia while in emergency care.
Treatment of Asthma:
Treatments for asthma are divided into three primary categories:
Breathing Exercises: A pulmonologist or an occupational therapist helps the patient learn breathing exercises for asthma. These exercises can help to get more air into and out of the lungs, which eventually helps increase lung capacity and cut down on severe asthma symptoms.
Rescue or First Aid Treatment: Rescue medications are used in the event of an asthma attack and provide quick relief from the symptoms. Rescue inhalers and nebulizers are used with medications that need to be inhaled deep into the lungs. These include bronchodilators, which work to relax the tightened muscles in the lungs, and anti-inflammatories, which target the lung inflammation that impedes breathing. If symptoms persist for more than 20 minutes even after a second round of medication, seek medical attention.
Long-term asthma control medications: Long-term asthma control medication should be taken on a daily basis to prevent symptoms. The dosage of these medications is adjusted over time based on the patient’s asthma symptoms. The right treatment, or combination of treatments, is determined based on the type of asthma, the patient’s age, and trigger factors.
The goal of asthma management is to achieve control with an asthma action plan. An asthma action plan may include monitoring, avoiding triggers, and using medicines. |
Scientists at the University of Birmingham have developed a method to visualize gold on the nanoscale by using a special probe beam to image 20 atoms of gold bound together to make a cluster. The research is published in the Royal Society of Chemistry’s journal Nanoscale.
Physicists have theorized for many years how atoms of gold and other elements would be arranged and ten years ago the structure of a 20-atom tetrahedral pyramid was proposed by scientists in the US.
Birmingham physicists can now reveal this atomic arrangement for the first time by imaging the cluster with an electron microscope.
Gold is a noble metal which is unreactive and thus resistant to contamination in our everyday experience, but at the nanoscale it becomes highly active chemically and can be used as a catalyst for controlling chemical reactions.
Clusters of metal atoms are used in catalysis in various industries including oil refining, the food industry, fine chemicals, perfumery and pharmaceuticals as well as in fuel cells for clean power systems for cars.
Richard Palmer, the University of Birmingham’s Professor of Experimental Physics, Head of the Nanoscale Physics Research Laboratory, and lead investigator, said: ‘We are working to drive up the rate of production of these very precisely defined nano-objects to supply to companies for applications such as catalysis. Selective processes generate less waste and avoid harmful by-products - this is green chemistry using gold.’ |
Boxwood is one of the most useful shrubs available because of its year-round green color and very high tolerance to shearing and shaping. Common boxwood (Buxus sempervirens) grows in U.S. Department of Agriculture plant hardiness zones 5 through 8. Although it grows relatively easily in a variety of locations, it occasionally suffers from diseases and pests. Boxwood blight is a serious fungal disease that plagues boxwood plants in some areas. It causes boxwood leaves to drop off, and can also make the bushes die back.
Types and Symptoms
Several different fungi can cause boxwood leaf blight. Cylindrocladium buxicola occurs on all species of boxwoods and often causes foliage to fall off. Volutella fungus causes dieback on all types of boxwoods. Macrophoma causes a leaf blight and discolored foliage. When boxwoods suffer from blight, they sometimes have a combination of these different fungi. In general, blight causes yellowish or pinkish foliage discoloration, a spindly appearance and dead or dying branches.
Boxwoods are less likely to get leaf blight or other diseases if they are properly cared for. They can grow in sunny or partially shady areas, but they require some irrigation during dry summer weather. Gardeners should plant boxwoods in well-drained soils. Pruning boxwoods to thin them and remove dead branches increases air circulation and decreases humidity and fungal problems. Also avoid over-fertilization and overwatering.
To get rid of leaf blight, prune away dead and severely infected branches. It is also helpful to mulch with uninfected mulch material. Check to make sure the soil is not overly soaked, and reduce irrigation if it is. Gardeners who are unsure what the disease is can send leaf and soil samples to a nursery or laboratory for a confirmation test. Soil samples can also reveal whether soil nematodes could be causing the problem and whether the boxwoods need more or less fertilizer.
To keep blight from spreading, remove fallen infected boxwood leaves from the garden area. It is also a good idea to destroy infected branches after pruning. To avoid spreading the fungi during pruning, sterilize the pruning shears in a solution of 1 part bleach and 9 parts water. Sterilize them before pruning and throughout the pruning process. If boxwoods in the garden die from boxwood blight, do not plant English boxwoods in the same area. Instead, plant a different type of hedge or use a disease-resistant American boxwood variety.
|
What is Diabetes? The amount of sugar in the blood is controlled by a hormone called insulin, which is produced by the pancreas. When food is digested and enters the bloodstream, insulin moves glucose out of the blood and into cells, where it is broken down to produce energy. If you have diabetes, your body is unable to break down glucose into energy. This is because there’s either not enough insulin to move the glucose or the insulin produced doesn’t work properly.

In type 1, your body’s immune system attacks and destroys the cells that produce insulin. As no insulin is produced, your glucose levels increase, which can seriously damage the body’s organs. Type 1 is often known as insulin-dependent diabetes. It’s also sometimes known as juvenile diabetes because it usually develops before the age of 40, often during the teenage years. Type 1 is less common than Type 2. In the UK, it affects about 10% of all adults with diabetes. If you’re diagnosed with type 1, you’ll need insulin injections for the rest of your life. You’ll also need to pay close attention to certain aspects of your lifestyle and health to ensure your blood glucose levels stay balanced.

Type 2 occurs when your body doesn’t produce enough insulin, or the body’s cells don’t react to insulin. This is known as insulin resistance. If you’re diagnosed with type 2, you may be able to control or even reverse your symptoms simply by eating a healthy diet, exercising regularly and monitoring your blood glucose levels. However, as type 2 is a progressive condition, you may eventually need medication, usually in the form of tablets. Type 2 is often associated with obesity. Obesity-related diabetes is sometimes referred to as maturity-onset diabetes because it’s more common in older people.

During pregnancy, some women have such high levels of blood glucose that their body is unable to produce enough insulin to absorb it all. This is known as gestational diabetes and affects up to 18 in 100 women during pregnancy. Pregnancy can also make existing type 1 diabetes worse. Gestational diabetes can increase the risk of health problems developing in an unborn baby, so it’s important to keep your blood glucose levels under control. In most cases, gestational diabetes develops during the second trimester of pregnancy (weeks 14 to 26) and disappears after the baby is born. However, women who have gestational diabetes are at an increased risk (30%) of developing type 2 later in life (compared with a 10% risk for the general population). |
The trillions of microbes, including bacteria, fungi, and viruses, in one environment (for example your gut), are collectively known as a microbiome. These microbes may be commensal, symbiotic, or pathogenic.
Researchers estimate the microbiome to account for one to three percent of the human body mass, which can total as high as three pounds. Bacterial DNA was once considered to outnumber human DNA by nearly 10 times; however, today that number is estimated to be around 3 times. So if you think about it, you are more bacteria than you are human!
Your first reaction to this information might be disgust; however, there is a common misconception surrounding bacteria and their potential for disease. Many of the bacteria and other microbes in and on the human body are actually considered commensals, which means that there is no harm to the human host. In addition, a majority of the microbes may also play symbiotic roles, which is a mutually beneficial relationship for the microbe and human host. The balance of gut microbes playing a role in our health is a revolutionary idea that is literally changing the medical field and how we think about and treat disease.
The human gut microbiome is considered one of the most important microbial environments in regards to the role it plays in human health; however, we still have a lot to learn. Research on the gut microbiome only began in the late 1990s. Some scientists have hailed the gut microbiome as a new organ, and believe it plays a major role in immune health. Some disease states, including obesity, diabetes, asthma, autism, and many more, have been linked to specific balances of microbes, or dysbiosis. |
Ch 2 Introduction Developing Quality Virtual Courses: Selecting Instructional Models (Sharon Johnston, Spokane Virtual Learning). A development model is a plan or flowchart structuring instruction and producing a unified series of related instructional activities (Frank Schneemann).
Ch 2 Essential Questions about Online Instruction • Essential Questions about Online Instruction • To what extent should online teaching mirror face-to-face teaching? • How can educators use their own experience as well as the precepts of educational theorists to create quality online courses? • To what extent will the institution's instructional principles be reflected in the online courses?
Ch 2 Gagne's Nine Events of Instruction Gagne's Nine Events of Instruction 1. Gaining Attention (Reception) 2. Informing Learners of the Objective (Expectations) 3. Stimulating Recall of Prior Learning (Retrieval) 4. Presenting the Stimulus (Selective Perception) 5. Providing Learning Guidance (Semantic Encoding) 6. Eliciting Performance (Responding) 7. Providing Feedback (Reinforcement) 8. Assessing Performance (Retrieval) 9. Enhancing Retention and Transfer (Generalization) These events are closely tied to cognitive theory and research on how the brain uses and stores information
Ch 2 Keller's Motivation Model • Keller's Motivation Model • Attention—refers to establishing and maintaining curiosity and learner arousal • Relevance—refers to linking the learning situation to the needs and motives of the learner • Confidence—refers to learners' attributing positive learning experiences to their individual behavior • Satisfaction—refers to developing the desire to continue the pursuit of similar goals
Ch 2 Benjamin Bloom's Hierarchy of Thinking Skills Benjamin Bloom's Hierarchy of Thinking Skills There is more than one type of learning. A committee of colleges, led by Benjamin Bloom, identified three domains of educational activities: Cognitive: mental skills (Knowledge) Affective: growth in feelings or emotional areas (Attitude) Psychomotor: manual or physical skills (Skills) Trainers often refer to these three domains as KSA (Knowledge, Skills, and Attitude). This taxonomy of learning behaviors can be thought of as "the goals of the training process." That is, after the training session, the learner should have acquired new skills, knowledge, and/or attitudes.
Ch 2 Benjamin Bloom's Hierarchy of Thinking Skills Benjamin Bloom's Hierarchy of Thinking Skills Knowledge: arrange, define, duplicate, label, list, memorize, name, order, recognize, relate, recall, repeat, reproduce, state. Comprehension: classify, describe, discuss, explain, express, identify, indicate, locate, recognize, report, restate, review, select, translate. Application: apply, choose, demonstrate, dramatize, employ, illustrate, interpret, operate, practice, schedule, sketch, solve, use, write. Analysis: analyze, appraise, calculate, categorize, compare, contrast, criticize, differentiate, discriminate, distinguish, examine, experiment, question, test. Synthesis: arrange, assemble, collect, compose, construct, create, design, develop, formulate, manage, organize, plan, prepare, propose, set up, write. Evaluation: appraise, argue, assess, attach, choose, compare, defend, estimate, judge, predict, rate, score, select, support, value, evaluate.
Ch 2 Grant Wiggins and Jay McTighe's Understanding by Design (UBD) Grant Wiggins and Jay McTighe's Understanding by Design (UBD) The Three Stages of Backward Design Identify desired results Determine acceptable evidence Plan learning experiences & instruction McTighe Wiggins Click the link below for a summary of UBD PowerPoint by Wiggins and McTighe
Ch 2 Instructional Design Principles Developing quality curriculum for the virtual environment (as it is for face-to-face environments) is challenging and time consuming. Finding an appropriate model or design plan with solid pedagogy can make it much easier to develop curriculum that engages the learner and has consistency and quality.
Ch 2 Four basic principles of instructional design (1) Four basic principles simplify the complexities of instructional design (Kemmis, Atkin, & Wright, 1977): 1. Frequency of Interaction. Increasing the frequency of interaction between the learner and online lesson-learning materials generally increases a student's engagement and retention of content.
Ch 2 Four basic principles of instructional design (2) • Four basic principles simplify the complexities of instructional design • 2. Complexity of Interaction. Interactions in an online learning environment vary in complexity and sophistication and generally fall into the following five categories: • Simple recognition (true/false or yes/no) • Recall (fill-in, free recall, or matching) • Comprehension (multiple choice, substitution, paraphrase, or short answer) • Problem solving (simulations or modeling) • Knowledge construction (project-based outcomes, research, or products from creative activity)
Ch 2 Four basic principles of instructional design (3) Four basic principles simplify the complexities of instructional design 3. Feedback Content and Quality. Online courses should offer students substantial feedback on all tests and work products. Online feedback provided in the online learning environment can be simple judgments indicating correct or incorrect answers, or it can be complex responses that include diagnosis or remediation, or both. Diagnostic or remedial online feedback promotes better outcomes than feedback simply signaling that a response is right or wrong.
Ch 2 Four basic principles of instructional design (4) Four basic principles simplify the complexities of instructional design 4. Balancing Comprehension and Significance. Information provided in an online learning environment can be either easy or difficult to comprehend based on its density and complexity. In general, screens displaying too much information (text or graphics) can be difficult or confusing to read or interpret. However, information that is overly simplified may be perceived by the reader to be trivial or even irrelevant. Achieving a reasonable balance between excessive complexity and trivial simplicity seemingly has more in common with judgments about aesthetic worth that might be applied by artists and artisans than it does with any kind of objective science (Blomeyer, 2002).
Ch 2 Elements of a quality online course Elements of a quality online course • Interaction • Easy access • Ease of use • Clear objectives • Course syllabus • Measurable objectives • Quality evaluation • Outline of time management • Estimated time for each activity • Effective virtual reality/simulations for real-life skills • Links and resources • Current and relevant content • Multiple modalities • Engaging, robust curriculum • Choices • Prerequisites • Student access • Audience-appropriate material • Timely feedback • Tech help desk/human contact • Learning resources (Web libraries) • Built-in monitoring systems (self-checks by students and teacher) • Links to student services (tutorials, writing labs, etc.) • Layered content • Student authenticity/academic integrity • Student evaluation/feedback on course
Ch 2 Future Trends Future Trends For a good portion of their youth, our current students have used computers, e-mail, the Web, interactive multimedia, cell phones, and instant messaging in almost all facets of their daily lives. While young students may not think of these everyday tools as 'technology,' it's easy to recognize the influence these technologies have had on their personalities, attitudes, expectations, and learning strategies. They multitask and expect 24/7 access to information with zero tolerance for delays. They think nonlinearly and learn through lurking, discovering, experimenting, and experiencing. (n.p.) In describing incoming higher education students, Gautsch also characterizes the students leaving high school. K-12 teachers must be aware of the millennial generation and present provocative, engaging online learning options, not textbooks online.
Ch 2 Conclusion Conclusion Today's technologies are changing how we learn and teach, but for quality instruction a fundamental design process (such as the one described in Gagne's Nine Events of Instruction) remains a bastion of learning and teaching. Continual advancements in technology will equip the online teacher with abundant resources for the creation of student-centered learning environments. Most important, though, these new technologies should be coupled with solid learning theories to ensure that K-12 online teachers offer quality instruction that accommodates the unique abilities, interests, and needs of all students. A student-centered classroom is not an "incidental pedagogical choice but a choice that shapes how and what students learn and crucially how they learn to learn" (Katz, 1993, pp. 2-3).
Ch 2 Thanks Thanks for your attention |
Deadly Olive Disease
By: Rodrigo Almeida and Milton Schroth
In 2013, the bacterium Xylella fastidiosa infected olive trees in the southern Italian region of Apulia. This soon developed into one of the most destructive plant disease epidemics in over a hundred years and has wreaked havoc in Italy and other European countries that grow olives. The disease has caused a significant drop in olive oil production, economic loss, and social turmoil in Italy.
This bacterial plant pathogen requires insect vectors to transmit it from plant to plant. In the Americas the main insect vectors are sharpshooter leafhoppers, whereas in Europe spittlebugs are the most important vectors. An interesting aspect of the biology of X. fastidiosa is that it must colonize both plants and insects to spread the disease. It damages trees by clogging the water-conducting xylem vessels, causing symptoms of water stress.
Scientists using genomic data determined that X. fastidiosa originated in Central America. Trees die in a couple of years after showing typical symptoms of scorching and dying of leaves. The bacterium is a significant threat to Europe’s agriculture and landscape, depending on the strains that show up. Xylella has been in the US for a long time and infects many plants such as grapevines (Pierce’s disease), citrus, almonds, olives, and a large number of trees such as oaks, elms, and sycamores. But it may take years to kill them depending on the plant. So far the US has not had the destruction seen in Europe.
Olive trees are intertwined with Italian life and are an important source of income and tourism as well as a keystone plant in the landscape. Management of these kinds of diseases requires removal of infected trees, including symptomatic and asymptomatic plants. But such eradication methods are not acceptable for many farmers and other inhabitants. Five years have now passed and this conflict has yet to be resolved. The pathogen has spread northwards from a small area in Apulia to the entire ‘heel’ of the boot of Italy. It is not clear if eradicating X. fastidiosa was feasible when the pathogen was first reported in 2013. It is evident that the disease may continue to spread because of lack of communication among involved parties, mistrust of science and scientists, and the unwillingness of stakeholders to agree on efforts to limit the impacts of this pathogen. |
Concepts Of Universalism And Cultural Relativism International Law Essay
“Human Rights” is a relatively new expression, having come into international law only after World War II and the establishment of United Nations. Universal Declaration of Human Rights, adopted and proclaimed by the General Assembly of the United Nations on December 10, 1948 is a milestone document in the history of human rights. And the debate, which arose along with the internationalization of human rights, is whether all human rights are universal, or there are certain rights and freedoms, which can be avoided for the cultural features. This essay examines the debate through the contradiction of concepts of Universalism and Cultural Relativism.
International law, which actually started developing with the first states, underwent significant changes, especially during the period between the Peace of Westphalia (1648) and World War I. Traditional international law is a law of power; that is, war is considered to be an important attribute of state sovereignty. One of the essential qualitative differences between traditional international law and contemporary international law is the prohibition of aggressive wars and the idea of international protection of human rights. In other words, contemporary international law takes the rights of man under its patronage. The international protection of human rights is a revolutionary idea, and traditional disciplines of international law have nothing to do with it at all. It had been an accepted doctrine that international law was to regulate the relations between nation-states, not individuals. Thus Oppenheim, the leading authority on international law in the United Kingdom, wrote that “the so-called rights of man not only do not, but cannot enjoy any protection under international law, because that law is concerned solely with the relations between states and cannot confer rights on individuals.”
Shortly after the atrocities of World War II, the first step was taken to establish and recognize the universality of human rights in international law. It was proclaimed in the Purposes of the UN Charter that human rights and fundamental freedoms are “for all without distinction as to race, sex, language, or religion.” The adoption and proclamation of the Universal Declaration of Human Rights was another major step in the process of universalizing human rights. The UDHR Preamble clearly states that “The General Assembly proclaims This Universal Declaration Of Human Rights as a common standard of achievement for all peoples and all nations…” Later, the principles of the UN Charter and the UDHR were developed and affirmed in the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, both adopted by General Assembly resolution 2200 (XXI) of 16 December 1966, and in a number of other international treaties and agreements. As a result, a universal system of rules was established for the protection of human rights.
The dilemma of international protection of human rights is the ideological conflict between Universalism and Cultural Relativism. Simply put, the concept of Universalism holds that each human being possesses certain inalienable rights simply because he or she is a human, regardless of national background, religious or political views, gender, or age. The proponents of this concept claim that “the international human rights like rights to equal protection, physical security, free speech, freedom of religion and free association are and must be the same everywhere.” The concept of Universalism is based on three fundamental jurisprudential theories: the natural law theory, the theory of rationalism, and the theory of positivism. The roots of natural law theory go back to ancient times. The main point of this theory is that natural law stands above man-made positive law and defines the inalienable human rights, which are necessary for all nation-states. Rationalism, a closely related concept, “is a theory of universal laws based on a belief in the universal human capacity to reason and think rationally.” Rationalism supersedes the idea of the divine origin of natural law with the theory that each individual is endowed with certain rights due to the universal capacity of all individuals to think rationally. Both natural law theory and the theory of rationalism consider universal human rights not to depend on cultural diversities and particularities. The theory of positivism demonstrates the existence of universal human rights by noting the acceptance and ratification of human rights instruments by the vast majority of states regardless of their cultural background. It appears that the concept of Universalism, with its supporting theories of natural law, rationalism, and positivism, finds the source of human rights in international law, rather than in individual cultures. Human rights are extracultural.
Cultural relativism is the assertion that human values, far from being universal, vary a great deal according to different cultural perspectives. From my point of view, one of the major drawbacks of the theory of Cultural Relativism is the perception of “culture” as something unchanging and stable. In fact, all types of Cultural Relativism, be it Strong or Weak Cultural Relativism, are based on a static conception of culture, which fails to recognize the flexibility of culture for social changes and ideological innovations. I strongly support the idea that culture is an ongoing process of historical development, adaptation, and evolution. Opponents of this theory argue that Cultural Relativism can be dangerous for the effectiveness of international protection of human rights, since the nature of the theory fundamentally justifies human rights abuses by linking them to the customs and traditions of the society. The Indian tradition of sati is a striking example of a human rights violation with a cultural basis. An eighteen-year-old Rajput girl committed sati in 1987 on her husband’s funeral pyre. She was a university student and her marriage had been insisted upon by her parents. There is no evidence whether she committed sati voluntarily or under pressure; however, this case provoked a strong response in Rajput society. As a sign of protest, many human rights activists, both men and women, organized marches against the tradition of sati, while many others came out in support of the tradition, claiming that sati is a significant part of their ethnic culture. They not only made the young girl a symbol of the devoted wife, but also erected a shrine in her honor. The human rights defenders and activists were branded as Western imperialists who were superseding old Indian traditions with Western ones. Obviously, the theory of Cultural Relativism leads to the idea that the main social unit is the community, not the individual. A question arises: does the community have the right to impose its will on an individual, or does it have the right to limit any inalienable right of an individual?
As one of the ancient nations, Armenians have their own unique cultural traditions and scope of ethics, though our traditions are flexible enough to meet the challenges of time. I do not hesitate to underline that Armenian traditions are quite humanistic, since they are largely inspired by the ideology of the Armenian Apostolic Church. One of the greatest miracles of the Armenian Apostolic Church is that there is no separate church and separate people; our church and people together are one whole unity, like a huge “cathedral”. And this “cathedral” carries inside it all the human values, such as conscience and kindness.
Investigating the concepts of Universalism and Cultural Relativism, I came to the conclusion that in many societies, or, better to say, in many communities, social relations are regulated through native traditional norms. Indeed, rejection of international human rights may lead to systematic abuses of human rights within societies or communities; still, international protection of human rights can sometimes be used for political purposes. Human rights violations are sometimes a pretext for the intervention of one country’s armed forces into another country’s territory. From this point of view, cultural relativism is not justified. Instead, I justify the existence of Cultural Relativism. In my opinion, Cultural Relativism is a result of natural historical development; it is a problem which could not be avoided. |
Billion-Year-Old Fungus Found Below The Arctic Could Change What We Know Of Evolution
Scientists love finding fossils, because they tell us so much about creatures alive billions of years ago and how they evolved.
Now, scientists have discovered fossils of a fungus that lived roughly a billion years ago, the oldest of its kind ever to be found.
Named Ourasphaira giraldae, the fungus was found in the Grassy Bay Formation in the Canadian region of the Arctic by a group of researchers led by Corentin Loron from the Université de Liège. The team identified the fungus in micrometer-scale fossils from the region, announcing their discovery this week.
The newly discovered fungus is about a billion years old, which handily beats the previous record for the oldest fungus fossil of 600 million years. That also means that other eukaryotic organisms, including multicellular life, may have formed around the same time as the fungus in the mid-Proterozoic age.
"Fungi are, in the 'tree of life,' the closest relative to animals," Loron told Motherboard. "This is reshaping our vision of the world because those two groups, as well as other eukaryotic groups like algae, are still present today. Therefore, this distant past, although very different from today, may have been much more 'modern' than we thought."
Thanks to this discovery, scientists can now confirm that fungi did in fact originate much earlier than previously thought. However, it has other consequences as well.
Fungi play an important part in decomposing dead matter. Their existence so much earlier means there was a high likelihood that complex multicellular creatures also existed at the time, creating the need for fungi to evolve into the ecosystem.
This fungus fossil then could help narrow down the timelines within which creatures first appeared that resemble those alive today. |
What exactly is cloud computing? A Layman’s Guide to the Cloud.
Cloud computing is a combination of technologies that make up a network for the delivery of computing services. It requires hardware for infrastructural purposes and software to deliver the on-demand services over the internet. Users of cloud services are not actively involved in the management of the network.
Read further for a bit more detail.
Cloud computing allows people to use online services that are generally available through any device with an internet connection. This means that the user does not need to be at a certain location or take care of their own costly infrastructure. The very basis for the cloud to exist is the internet (which is not the same as the web, but more on that later).
‘The Cloud’ is everywhere these days. Some examples of companies that provide services through the cloud:
- Airbnb, hospitality services
- Netflix, video streaming
- Zynga, online games
- Spotify, music streaming
- Slack, team collaboration tools
- Adobe Systems, creativity software
Perhaps you use some of the above services and have never realized that these are cloud computing services.
The examples all have a few things in common:
- The user is in no way involved with the active management of the service;
- To gain access you don’t need to be physically present at the service’ network location;
- The services are on-demand, meaning that they are available whenever the user wants it.
Some networks use computing power to run online applications, others provide a service such as web hosting, and still others are used only for storing data. Generally speaking, we can divide all of the different services into three layers of the cloud. The primary objective of a cloud computing network depends on which layers its users have access to.
Essentially, the cloud consists of three distinctive layers: Infrastructure, platforms, applications. The Infrastructure of any cloud network is the most fundamental hosting layer that anything else is built on. It is also the most physical layer, including hardware such as servers, storage capacity and memory. The people running this layer are usually IT administrators.
Built on the infrastructure is the Platform layer. Platforms are locations in the cloud where applications are built, identities are managed, and files are executed. Traditionally, software developers are the most common users of this layer. With the introduction of low-code development platforms, the building capabilities are simplified so that anyone can develop applications.
The most varied layer is where the Applications reside. This is already made clear with the examples of cloud services above. There is an endless list of possibilities here, including applications for content, collaboration, communication, finance, or the management of any of these.
The service models of the cloud
Cloud providers offer different types of service. Currently, the three most common cloud models are: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
Infrastructure as a Service (IaaS): The most basic service is IaaS. The organization is provided with remote servers, storage capacity and other hardware in the cloud. It saves the organization office space and large hardware investments. Additionally, it is easy to scale up (or down) and you only pay for what you use.
Platform as a Service (PaaS): The customers of PaaS providers can build, run and manage applications by themselves, without too much complexity. The organization does not need to invest in building and maintaining the architecture of the platform, this is incorporated into the service. The building blocks that are required for creating an application are already present in the platform, making it faster to assemble an application than with traditional software building.
Software as a Service (SaaS): Ready-to-use applications that are accessed online via subscription fall under the category Software as a Service. Many SaaS applications run directly from a web browser or (mobile) app and do not require a local installation, but this is not a requirement. It is also known as on-demand software, which is perhaps easiest to comprehend with the examples of streaming services in mind.
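To make the division of responsibility between these three models a little more concrete, here is a minimal sketch in Python. It is purely illustrative and not tied to any particular vendor: the exact split of duties varies from provider to provider, so treat the mapping below as an assumption made for the sake of the example.

```python
# Illustrative sketch: which layers the provider typically manages under each
# service model. The split shown here is an assumption for illustration only;
# real providers differ in the details.

LAYERS = ["infrastructure", "platform", "application"]

PROVIDER_MANAGED = {
    "IaaS": {"infrastructure"},                             # servers, storage, networking
    "PaaS": {"infrastructure", "platform"},                 # plus runtimes and build tooling
    "SaaS": {"infrastructure", "platform", "application"},  # ready-to-use software
}

def responsibilities(model: str) -> dict:
    """Return which layers the provider and the customer manage for a service model."""
    provider = PROVIDER_MANAGED[model]
    customer = [layer for layer in LAYERS if layer not in provider]
    return {"provider": [layer for layer in LAYERS if layer in provider],
            "customer": customer}

if __name__ == "__main__":
    for model in ("IaaS", "PaaS", "SaaS"):
        split = responsibilities(model)
        print(f"{model}: provider manages {split['provider']}, "
              f"customer manages {split['customer'] or 'nothing extra'}")
```

Running it prints, for instance, that under PaaS the provider looks after the infrastructure and platform layers while the customer manages only the application, which matches the descriptions above.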
Difference between the internet and the web
The internet and the web are not the same thing. Surprise! (If you knew this already just skip the paragraph.)
The internet is a network of networks, lots of them. They include the networks at schools and businesses, mobile networks, even satellites. The internet is what ties them all together. The World Wide Web, or just the web, allows us to access all these networks through the internet. Other systems that use the internet for transferring information include email and messaging apps like WhatsApp and Facebook Messenger. And let’s not forget blockchain technologies such as bitcoin, which also distribute data over the internet. There are many more of these systems, like, for example, the cloud! The information that you consume through the cloud has been delivered to you over the internet. At the same time, this information might be retrieved from a website that is hosted on the web.
So it is all very much connected and interdependent.
Privacy, Security and Compliance
A common misconception about cloud computing as opposed to on-premise resources is that cloud services are not as secure. There are also oftentimes privacy and compliance concerns, especially when the organization works with financial, personal or other sensitive data.
Tripwire has published an excellent article on this that I would like to quote: “The cloud is certainly different from on-premises resources, so it makes sense that security would be different, too. It follows that organizations must sometimes rethink how they’re currently doing things with respect to implementing security in the cloud.”
Vital to making the cloud secure are the security controls. These can include (but are not limited to) encryption of connections, data, and banking information, and segregation of duties in the application or on the platform. If you’re seriously comparing cloud providers, also make sure to check their certifications.
Conclusion ‘What is the Cloud’?
The cloud is a network of servers that vendors provide for a wide range of online (on-demand) services. We recognize three different layers of cloud services:
- Infrastructure (servers, storage capacity, and other hardware)
- Platform (the building blocks developers can use, without the hassle of maintaining the network architecture)
- Applications (software)
Cloud service providers offer these layers as three different service models: Infrastructure as a Service, Platform as a Service, and Software as a Service. This concludes the very basics of cloud computing. It’s a technology that is still relatively new; as adoption grows, so too will the range of products and services.
This article was last updated on March 14, 2019.
|
A high-quality preschool, kindergarten, and first grade education, particularly in reading, is crucial to building the foundation for a student’s academic career. Children who fail to develop basic literacy skills by the time they enter school are three to four times more likely to drop out of school in later years. Furthermore, children who are read to at least 20 minutes a day are exposed to 1.2 million words of text every year.
These facts are what motivate Quackenworth to provide high-quality apps that prepare young children for preschool and beyond. In this post we discuss: 1) the basics of reading and literacy and 2) how to use our suite of literacy-readiness educational apps. The focus of these educational apps is on long and short vowel sounds, rhyming, sight words, and beginning/ending consonants, all important building blocks of learning how to read. We believe our apps strongly support teachers who want to provide proven supplemental reading materials and parents who want to supply a literacy-rich home environment. We are also of the opinion that there are important milestones that should be met in preschool, kindergarten, and first grade. These milestones include hearing more than a million words per year and being introduced to stories, the alphabet, vowel sounds, blends, sight words, and more. But before I proceed with a discussion of our educational apps, I would like to cover some basic facts about reading and literacy fundamentals.
Among early grade teachers it is well understood that well-constructed school district literacy programs generally have these five components:
1) Phonemic awareness — the knowledge and manipulation of sounds in spoken words.
2) Phonics — the relationship between written and spoken letters and sounds.
3) Reading fluency, including oral reading skills — the ability to read with accuracy, and with appropriate rate, expression, and phrasing.
4) Vocabulary development—the knowledge of words, their definitions, and context.
5) Reading comprehension strategies—the understanding of meaning in text.
In addition to these five important components, schools should also include diagnostic reading assessments, ongoing professional development for teachers, a plan for building students' motivation to read, and a strategy for integrating technology that assists students in learning to read. Integrating these five components and additional strategies will help children in the early grades become strong and confident readers.
Over the last 50 years, scientists have found empirical evidence demonstrating that reading to children positively affects oral language readiness, vocabulary, and literacy. Teachers and academics have long known the positive effects of reading to preschool-age children, and science has recently used modern technology to back this long-held belief: for the first time, researchers have found that reading affects brain function. In a study published in Pediatrics, researchers used magnetic resonance imaging (MRI) to study the brains of nineteen 3- to 5-year-old children. According to the Huffington Post, "The MRIs revealed that children from more stimulating home reading environments had greater activity in the parts of the brain that help with narrative comprehension and visual imagery. Their brains showed greater activity in those key areas while they listened to stories."
Although the study was done on a small scale, it provides strong evidence that reading to children has positive biological effects on a child's brain. Because of this and other studies, the American Academy of Pediatrics recommends that parents read to their kids as infants, even if it's just for a few minutes. The positive effects are now well known and are supported not only by teachers and parents, but by science.
Most early grade teachers are familiar with the fluency basics and why fluency is important. In its most basic form, fluency is the ability to read with speed, accuracy, and appropriate expression. At each grade level there are certain expectations for a student's reading fluency.
The primary metric used by teachers and research professionals is called words per minute (wpm). Each student is expected to have a satisfactory wpm score to be considered reading on grade level. There are a number of different fluency studies with varying wpm targets by grade level. For example, the Rasinski scale suggests that a first grader should read 80 wpm by the end of the year, Harris & Sipay recommend between 60-90 wpm, and Manzo suggests 30-54. The 2006 Hasbrouck & Tindal Oral Reading Fluency study goes a step further and considers several different factors, including reading percentile; fall, winter, and spring reading assessments; and weekly improvement.
The graph below provides you with ways to measure your child's reading fluency expectation.
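If you want to compute a words-per-minute score yourself, the arithmetic is simple: count the words read aloud and divide by the minutes elapsed. Below is a minimal sketch; the benchmark numbers are the end-of-first-grade figures quoted above (Rasinski roughly 80 wpm, Harris & Sipay 60-90 wpm), and the function names are only illustrative.

```python
# Minimal sketch: compute a words-per-minute (wpm) score and compare it to the
# end-of-first-grade benchmarks quoted above (Rasinski ~80 wpm, Harris & Sipay
# 60-90 wpm). The figures and function names are illustrative only.

def words_per_minute(words_read: int, seconds: float) -> float:
    """wpm = words read divided by minutes elapsed."""
    return words_read / (seconds / 60.0)

def first_grade_feedback(wpm: float) -> str:
    if wpm >= 80:    # Rasinski end-of-year target quoted above
        return "at or above the Rasinski end-of-year target"
    if wpm >= 60:    # lower bound of the Harris & Sipay range quoted above
        return "within the Harris & Sipay 60-90 wpm range"
    return "below the quoted first-grade benchmarks; keep practicing"

if __name__ == "__main__":
    score = words_per_minute(words_read=72, seconds=60)
    print(f"{score:.0f} wpm:", first_grade_feedback(score))
```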
Since the 1970s there have been numerous literacy tools and concepts intended to help children become better readers. And while many of these resources are effective, there are some fundamental and proven strategies that will help your child not only learn to read, but also become a better reader. You can do this with just yourself, a book, and your child. And with just 15 minutes, a couple of times a week, your child will become a faster reader. Improving a child's reading fluency, no matter the age, will boost their confidence and ultimately their reading skills. Below are three simple and effective strategies that help improve a child's reading fluency:
• Echo Reading– You read a sentence and then the child reads the same sentence.
• Choral Reading– You both read together.
• Partner Reading– You read a sentence and the child reads the next sentence.
That's it! Don't forget to ask lots of questions before and after you read the story. Also, have them make inferences from the images in the story to help with comprehension. We suggest using these proven methods for just 15 minutes a day, three times a week.
There are obvious benefits to using interactive books. Increasing motivation to read, providing feedback, and the ability to include animation and sounds are just a few. Our Learn to Read: Vowel Stories interactive book app is a great place to start when it comes to learning how to read. It contains colorful characters, animations, and sound, and will increase interest in reading. While we recognize the benefits of using technology to teach children to read, we also believe that teachers and parents should use proven traditional tools alongside technology. We believe that contemporary educators should use multiple tools (visual, audio, kinesthetic) to teach and reinforce basic reading concepts.
Below are 5 non-tablet activities/ideas to use in combination with our Learn to Read: Vowel Stories interactive book app:
1) Use our Vowel Stories worksheets that reinforce long/short vowel sounds and sight words.
2) Use a digital camera to take pictures of 10 short/long vowel sounds found in the stories.
3) Use a word processing program (e.g., Word) to type ten sight words your child remembered from the story. Print out the assignment, sign it, and post it on your refrigerator!
4) Have a verbal spelling contest where you ask your child to spell certain words. Spell ten words correctly and someone gets ice cream!
5) Draw a picture of ten sight words that are nouns (person, place, or thing).
In this section we will demonstrate some quick tips for our Learn to Read: Sight Word app, an app with over 500 words including common sight words, beginning consonants, ending consonants, and word families. The purpose of the app is to help improve the vocabulary of young readers.
• Tip #1 - As with most of our apps, we suggest that you work at least 15 minutes, 3 times a week. If the child is unable to work for 15 minutes, start with five minutes and work your way up to 15 minutes or more.
• Tip #2 - When you first start using the app work together with your child by modeling the behavior. For example, as the word appears you repeat the word. Continue this and then have the child repeat after you if they do not know the word. Modeling behavior helps the child focus on how to use the app.
• Tip #3 - To swipe or not to swipe - With this feature you can elect to control the pace of the words. For example, on automatic mode the words will appear every 4 seconds. In manual mode, the child swipes at their own pace.
• Tip #4 - Words completed - In this section you can track the words that the child has seen.
Step into any successful preschool or kindergarten classroom and you will find an environment rich with rhyming books, songs, and visual cues. It is well known among educators that rhyming builds phonological awareness, or a child's ability to distinguish between different sounds. Additionally, the benefits of rhyming are supported heavily by years of academic research. A 1987 study (Maclean, Bryant, and Bradley) of three-year-olds found that the more nursery rhymes a child knew, the better their phonological awareness was when they were older. A 1994 study (McDougall, Hulme, Ellis and Monk) found that children's reading success depended on their level of rhyming awareness: children with superior rhyming awareness were more successful readers than children with weaker rhyming skills.
Benefits of rhyming:
• Learning rhymes early increases a child's chances of becoming a strong, confident reader.
• Rhyming builds phonemic awareness, or the ability to distinguish between sounds.
• Rhyming helps children recognize word patterns and teaches word families.
• Rhyming is fun for children and motivates them to read more.
• Rhyming helps children’s spelling ability.
Our Learn to Read: Rhyme Stories series is designed to help children build early literacy skills by providing rhyming stories and other literacy-rich activities. The stories include Joe has a Sore Toe, Pete Has Big Feet, Kim Likes to Swim, Jenny Found a Penny, Mike has a Bike, and Paul is Tall. The Learn to Read: Rhyme Stories series provides parents and educators with an arsenal of literacy tools to help build phonological awareness, with the ultimate goal of making children stronger, more confident readers.
Quackenworth specializes in mobile games for kids and educational websites. Our mission is to develop fun products that teachers and parents can use to educate and enrich the lives of children and young adults. Click here to learn more about Quackenworth's apps for kids. |
The Asteroids
Between the orbits of Mars and Jupiter are an estimated 30,000 pieces of rocky debris, known collectively as the asteroids, or planetoids. The first and, incidentally, the largest (Ceres), was discovered during the New Year's night of 1801 by the Italian astronomer Father Piazzi (1746–1826), and its orbit was calculated by the German mathematician Karl Friedrich Gauss (1777–1855). Gauss invented a new method of calculating orbits on that occasion. A few asteroids do not move in orbits beyond the orbit of Mars, but in orbits that cross the orbit of Mars. The first of them was named Eros because of this peculiar orbit. It had become the rule to bestow female names on the asteroids, but when it was found that Eros crossed the orbit of a major planet, it received a male name. These orbit-crossing asteroids are often referred to as the "male asteroids." A few of them—Albert, Adonis, Apollo, Amor, and Icarus—cross the orbit of Earth, and two of them may come closer than our Moon; but the crossing is like a bridge crossing a highway, not like two highways intersecting. Hence there is very little danger of collision from these bodies. They are all small, 3 to 5 miles (4.8 to 8.0 kilometers) in diameter, and therefore very difficult objects to identify, even when quite close. Some scientists believe the asteroids represent the remains of an exploded planet.
On Oct. 29, 1991, the Galileo spacecraft took a historic photograph of asteroid 951 Gaspra from a distance of 10,000 miles (16,000 kilometers) away. It was the first close-up photo ever taken of an asteroid in space.
Gaspra is an irregular, potato-shaped object about 12.5 miles (20 kilometers) by 7.5 miles (12 kilometers) by 7 miles (11.2 kilometers) in size. Its surface is covered with a layer of loose rubble and its terrain is covered with several dozen small craters.
Close-up photos of Asteroid 243 Ida taken by the Galileo spacecraft on Aug. 28, 1993, revealed that Ida had a tiny egg-shaped moon measuring 0.9 miles by 0.7 miles (1.44 by 1.12 kilometers). The moon has been named Dactyl.
NASA's Near-Earth Asteroid Rendezvous spacecraft was launched on Feb. 17, 1996. (Near-Earth asteroids come within 121 million miles [195 million kilometers] of the Sun. Their orbits come close enough that one could eventually hit Earth.) It flew within 750 miles (1,200 kilometers) of minor planet 253 Mathilde on June 27, 1997, and took spectacular images of the dark, crater-battered world. The asteroid's mean diameter was found to be 33 miles (52.8 kilometers). The NEAR spacecraft discovered that the carbon-rich Mathilde is one of the darkest objects in the solar system, only reflecting about 3% of the Sun's light, making it twice as dark as a chunk of charcoal. The asteroid is almost completely cratered, and at least five of its craters just on the lighted side are larger than 12 miles (19.2 kilometers).
The spacecraft reached asteroid 433 Eros in Dec. 1998, but aborted its mission due to engine problems. NEAR measured Eros to be 21 miles (33.6 kilometers) long by 8 miles (12.8 kilometers) wide and 8 miles (12.8 kilometers) deep. It rotates once every 5.27 hours and has no visible moons.
The spacecraft was renamed NEAR Shoemaker in honor of geologist Dr. Eugene M. Shoemaker (1928–1997), who influenced research on asteroids and comets in shaping the planets. It made a successful rendezvous with Eros on Feb. 14, 2000, and began a year-long orbit of the asteroid. NEAR Shoemaker data showed that the asteroid's ancient surface is covered with craters, ridges, boulders, and other complex features.
The First Ten Minor Planets (Asteroids): orbital distances in millions of miles (table omitted).
Biologists have figured out the most efficient way to destroy an ecosystem — and it’s based on the Google search algorithm.
Scientists have long known that the extinction of key species in a food web can cause collapse of the entire system, but the vast number of interactions between species makes it difficult to guess which animals and plants are the most important. Now, computational biologists have adapted the Google search algorithm, called PageRank, to the problem of predicting ecological collapse, and they’ve created a startlingly accurate model.
“While several previous studies have looked at the robustness of food webs to a variety of sequences of species loss, none of them have come up with a way to identify the most devastating sequence of extinctions,” said food web biologist Jennifer Dunne of the Santa Fe Institute, who was not involved in the research. Using a modified version of PageRank, Dunne said, the researchers were able to identify which species extinctions within a food web would lead to the biggest chain reaction of species deaths.
“If we can find the way of removing species so that the destruction of the ecosystem is the fastest, it means we’re ranking species by their importance,” said ecologist Stefano Allesina of the University of California, Santa Barbara, who co-authored the paper published Friday in PLoS Computational Biology.
Unlike previous solutions to the coextinction problem, the Google solution takes into account not only the number of connections between species, but also their relative importance. “In PageRank, you’re an important website if important websites point to you,” Allesina said. “We took that idea and reversed it: Species are important if they support important species.”
In other words, grass is important because it’s eaten by gazelles, and gazelles are important because they’re eaten by lions.
When the researchers tested the Google algorithm against existing models for predicting ecosystem collapse, they found that the new solution outperformed the old ones in each of the 12 food webs they looked at. “In every case that we tested, the algorithm returned either the best possible solution, out of the billions of possibilities, or very close to it,” Allesina said. In this case, the “best possible solution” is the one that predicts total ecosystem collapse using the fewest number of species extinctions.
To make the circular PageRank algorithm work for food webs, which are traditionally considered unidirectional, the researchers had to solve the problem of what to do with dead ends: Not much eats a lion, but that doesn’t necessarily mean lions aren’t critical to the food chain. The scientists solved this problem by adding what Allesina calls a “root node,” which is based on the idea that all living creatures contribute to the food chain through their excrement and eventual decay.
“What we found is that the importance of a species can be connected to the amount of matter that flows to it,” Allesina said. “If species eat a lot of things, and a lot of things eat them, they tend to be important.” Previous solutions to the problem tended to underestimate the importance of species that are lower on the food chain, Allesina said, and he hopes the new solution will encourage conservation biologists to take a broader view of species extinctions.
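As a rough illustration of the idea described above (and not the authors' published algorithm), the sketch below builds a matter-flow graph from a toy "who eats whom" table, adds a root node that every species flows into through excretion and decay and that feeds back to the primary producers, and then runs a standard PageRank power iteration to rank species. The toy food web, the damping factor, and all names are assumptions for illustration.

```python
# Minimal sketch of the idea described above, not the published PLoS algorithm.
# Matter flows prey -> predator; every species also flows into a root (detritus)
# node, and the root feeds back to the primary producers so nothing dead-ends.
# A standard PageRank power iteration then ranks species by incoming flow.

import numpy as np

def rank_species(diet, producers, damping=0.85, iterations=200):
    """diet: predator -> list of prey; producers: species fed by the root node."""
    species = sorted(set(diet) | {p for prey_list in diet.values() for p in prey_list})
    root = "__root__"
    nodes = species + [root]
    idx = {name: i for i, name in enumerate(nodes)}
    n = len(nodes)

    # Outgoing links: prey -> each predator that eats it, plus every species -> root.
    out_links = {s: [root] for s in species}
    for predator, prey_list in diet.items():
        for prey in prey_list:
            out_links[prey].append(predator)
    out_links[root] = list(producers)  # decay/nutrients feed back to producers

    # Column-stochastic transition matrix for the power iteration.
    M = np.zeros((n, n))
    for src, targets in out_links.items():
        for t in targets:
            M[idx[t], idx[src]] = 1.0 / len(targets)

    r = np.full(n, 1.0 / n)
    for _ in range(iterations):
        r = (1 - damping) / n + damping * (M @ r)

    scores = {s: float(r[idx[s]]) for s in species}
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    toy_web = {"lion": ["gazelle"], "gazelle": ["grass"]}  # predator -> prey
    print(rank_species(toy_web, producers=["grass"]))      # most important first
```

In the toy example the ranking puts grass near the top, which matches the intuition quoted above: grass is important because gazelles depend on it, and gazelles are important because lions depend on them.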
“What I hope is that people will pick up interest and start thinking about conservation in a more network-based way,” Allesina said. “Right now, most conservationists are focused on a single species, and they just study that species. But you really have to take into account that this species is not independent, it’s really tangled in a network of multi-species interactions.”
For ecosystems on the brink of collapse, such as marine environments taxed by overfishing, Allesina said a network-based approach to conservation could make all the difference.
Image: Composite of PLOS Computational Biology illustration and photo from Flickr/fusion68k.
Medieval communes in the European Middle Ages were sworn allegiances of mutual defense (both physical defense and defense of traditional freedoms) among the citizens of a town or city. They took many forms and varied widely in organization and makeup. Communes are first recorded in the late 11th and early 12th centuries, thereafter becoming a widespread phenomenon. They reached their greatest development in central and northern Italy, where they became genuine city-states based on partial democracy, while in Germany they became free cities, independent from the local nobility.
The English and French word "commune" appears in Latin records in various forms. The classical Latin communio means an association. In some cases the classical Latin commune was used to mean people with a common interest. Ultimately, the roots are cum (with or together) and munire (to wall), literally 'to wall together' (i.e., a shared fortification). More frequently the Low Latin communia was used from which the Romance commune was derived. When independence of rule was won through violent uprising and overthrow, they were often called conspiratio (a conspiracy).
During the 10th century in several parts of Western Europe, peasants began to gravitate towards walled population centers, as advances in agriculture (the three-field system) resulted in greater productivity and intense competition. In central and northern Italy, and in Provence and Septimania, most of the old Roman cities had survived—even if grass grew in their streets—largely as administrative centers for a diocese or for the local representative of a distant kingly or imperial power. In the Low Countries, some new towns were founded upon long-distance trade, where the staple was the woolen cloth-making industry. The sites for these ab ovo towns, more often than not, were the fortified burghs of counts, bishops or territorial abbots. Such towns were also founded in the Rhineland. Other towns were simply market villages, local centers of exchange.
Such townspeople needed physical protection from lawless nobles and bandits, part of the motivation for gathering behind communal walls, but the struggle to establish their liberties, the freedom to conduct and regulate their own affairs and security from arbitrary taxation and harassment from the bishop, abbot, or count in whose jurisdiction these obscure and ignoble social outsiders lay, was a long process of struggling to obtain charters that guaranteed such basics as the right to hold a market. Such charters were often purchased at exorbitant rates, or granted, not by the local power, which was naturally jealous of prerogatives, but by the king or the emperor, who came thereby to hope to enlist the towns as allies in the struggle to centralize power that was arising in tandem with the rise of the communes. "The burghers of the tenth and eleventh centuries were ruthlessly harassed, blackmailed, subjected to oppressive taxes and humiliated. This drove the bourgeois back upon their own resources, and it accounts for the intensely corporate and excessively organized character of medieval cities."
The walled city represented protection from direct assault at the price of corporate interference on the pettiest levels, but once a townsman left the city walls, he (for women scarcely travelled) was at the mercy of often violent and lawless nobles in the countryside. Because much of medieval Europe lacked central authority to provide protection, each city had to provide its own protection for citizens both inside the city walls, and outside. Thus towns formed communes, a legal basis for turning the cities into self-governing corporations. Although in most cases the development of communes was connected with that of the cities, there were rural communes, notably in France and England, that were formed to protect the common interests of villagers.
Every town had its own commune and no two communes were alike, but at their heart, communes were sworn allegiances of mutual defense. When a commune was formed, all participating members gathered and swore an oath in a public ceremony, promising to defend each other in times of trouble, and to maintain the peace within the city proper.
The commune movement started in the 10th century, with a few earlier ones like Forlì (possibly 889), and gained strength in the 11th century in northern Italy which had the most urbanized population of Europe at the time. It then spread in the early 12th century to France, Germany and Spain and elsewhere. The English state was already very centralized, so the communal movement mainly manifested itself in parishes, craftsmen's and merchants' guilds and monasteries. State officialdom expanded in England and France from the 12th century onwards, while the Holy Roman Empire was ruled by communal coalitions of cities, knights, farmer republics, prince-bishops and the large domains of the imperial lords.
According to an English cleric of the late 10th century, society was composed of the three orders: those who fight (the nobles), those who pray (the clergy) and those who work (the peasants). In theory this was a balance between spiritual and secular peers, with the third order providing labour for the other two. The urban communes were a break in this order. The Church and King both had mixed reactions to communes. On the one hand, they agreed safety and protection from lawless nobles was in everyone's best interest. The communes' intention was to keep the peace through the threat of revenge, and the Church was sympathetic to the end result of peace. However, the Church had its own ways to enforce peace, such as the Peace and Truce of God movement. On the other hand, communes disrupted the order of medieval society; the methods the communes used, an eye for an eye and violence begets violence, were generally not acceptable to Church or King. Furthermore, there was a sense that communes threatened the medieval social order. Only the noble lords were allowed by custom to fight, and ostensibly the merchant townspeople were workers, not warriors. As such, the nobility and the clergy sometimes accepted communes, but other times did not. One of the most famous cases of a commune being suppressed and the resulting defiant urban revolt occurred in the French town of Laon in 1112.
The development of medieval rural communes arose more from a need to collaborate in managing the commons than from defensive needs. In times of a weak central government, communes typically formed to ensure safety on the roads through their territory and so enable commerce (Landfrieden). Perhaps the most successful such medieval community was the one in the alpine valleys north of the Gotthard Pass: it later resulted in the formation of the Old Swiss Confederacy. The Swiss had numerous written acts of alliance: for each new canton that joined the confederacy, a new contract was written. Besides the Swiss Eidgenossenschaft, there were similar rural alpine communes in the County of Tyrol, but these were suppressed by the House of Habsburg. Other such rural communes developed in the Graubünden, in the French Alps (Briançon), in the Pyrenees, in northern France (Roumare), in northern Germany (Frisia and Dithmarschen), and also in Sweden and Norway. The colonization of the Walser is also related. The southern medieval communes were most probably influenced by the Italian precedent, but the northern ones (and even the Swiss communes north of the Gotthard Pass) may well have developed concurrently and independently from the Italian ones. Only very few of these medieval rural communes ever attained imperial immediacy, where they would have been subject only to the king or emperor; most remained subjects of some more or less distant liege.
Evolution in Italy and decline in Europe
During the 11th century in northern Italy a new political and social structure emerged, and the medieval communes developed into city-states. The civic culture which arose from this urbs was remarkable. In most places where communes arose (e.g. France, Britain and Flanders) they were absorbed by the monarchical state as it emerged. Almost uniquely, they survived in northern and central Italy to become independent and powerful city-states. The breakaway from their feudal overlords by these communes occurred in the late 12th and 13th centuries, during the Investiture Controversy between the Pope and the Holy Roman Emperor: Milan led the Lombard cities against the Holy Roman Emperors and defeated them, gaining independence (battles of Legnano, 1176, and Parma, 1248 - see Lombard League). Meanwhile, the Republic of Venice, Pisa and Genoa were able to conquer their naval empires on the Mediterranean sea (in 1204 Venice conquered one-fourth of the Byzantine Empire in the Fourth Crusade). Cities such as Parma, Ferrara, Verona, Padua, Lucca, Siena, Mantua and others were able to create stable states at the expense of their neighbors, some of which lasted until modern times. In southern and insular Italy, autonomous communes were rarer, Sassari in Sardinia being one example.
In the Holy Roman Empire, the emperors always had to face struggles with other powerful players: the land princes on the one hand, but also the cities and communes on the other. The emperors thus invariably fought political (not always military) battles to strengthen their position and that of the imperial monarchy. In the Golden Bull of 1356, emperor Charles IV outlawed any conjurationes, confederationes, and conspirationes, meaning in particular the leagues of towns but also the rural communal leagues that had sprung up. Most leagues of towns were subsequently dissolved, sometimes forcibly, and where refounded, their political influence was much reduced. Nevertheless, some of these communes (such as Frankfurt, Nuremberg, and Hamburg) were able to survive in Germany for centuries and became almost independent city-states, vassals of the Holy Roman Emperors (see Free imperial city).
- Lombard League
- Hanseatic League
- Communalism before 1800
- Italian city-states
- Free imperial city
- Such examples provided Henri Pirenne (Medieval Cities: Their Origins and the Revival of Trade (1927), Mohammed and Charlemagne (1937)) with a thesis he perhaps too widely applied.
- Cantor, 1993, p. 231.
- Im Hof, Ulrich (2007). Geschichte der Schweiz. W. Kohlhammer Verlag. ISBN 978-3-17-019912-5.
- Cantor, Norman E. 1993. The Civilization of the Middle Ages (New York: HarperCollins)
- Jones, Philip. 1997. The Italian City-State: From Commune to Signoria. (Oxford: Oxford University Press)
- Lansing, Carol, 1992. The Florentine Magnates: Lineage and Faction in a Medieval Commune. (Princeton: Princeton University Press)
- Sella, Pietro, "The Statutes of the Commune of Bugelle (Biella)" 1904. 14th-century statutes of a Piedmontese commune (Latin and English translations), express the nature of the commune in vivid detail, productions of medieval society and the medieval personality.
- Tabacco, Giovanni, 1989. The Struggle for Power in Medieval Italy: Structures of Political Rule, 400-1400, translated by Rosalind Brown Jensen (New York: Cambridge University Press)
- Waley, Donald, 1969 etc. The Italian City-Republics (3rd ed. New York: Longman, 1988.)
- Guelph University, "The Urban Past: IV. The Medieval City" A bibliography.
- Encyclopaedia Britannica 1911: "Medieval commune"
- (Italian) Itinerari medievali: risorse per lo studio del Medioevo |
How do archeologists identify artifacts?
Artifacts from the Locher/Poffenberger cabin site at Antietam National Battlefield. (National Capital Region, Regional Archeology Program, NPS)
Once archeology was almost totally artifact-oriented. Archeologists collected artifacts and categorized them based almost solely on their physical attributes and functions. Gradually, archeologists have shifted objectives, realizing that understanding the people behind the artifacts is more compelling than the artifacts themselves. Today's archeology has turned from simply filling museum cabinets to discovering how people in the past actually lived. To do this, archeologists use various studies to link artifacts, ecofacts, and features with the human behavior that produced them (Thomas 1998:229).
Learn how archeologists are using artifacts they have excavated to reconstruct the trade network between Jamestown's English colonists and the local Indians. |
History of Zanzibar
People have lived in Zanzibar for 20,000 years. History proper starts when the islands became a base for traders voyaging between the African Great Lakes, the Arabian peninsula, and the Indian subcontinent. Unguja offered a protected and defensible harbor, so although the archipelago had few products of value, Omanis and Yemenis settled in what became Zanzibar City (Stone Town) as a convenient point from which to trade with towns on the Swahili Coast. They established garrisons on the islands and built the first mosques in the African Great Lakes.
During the Age of Exploration, the Portuguese Empire was the first European power to gain control of Zanzibar, and kept it for nearly 200 years. In 1698, Zanzibar fell under the control of the Sultanate of Oman, which developed an economy of trade and cash crops, with a ruling Arab elite and a Bantu general population. Plantations were developed to grow spices; hence, the moniker of the Spice Islands (a name also used of the Dutch colony of the Moluccas, now part of Indonesia). Another major trade good was ivory, the tusks of elephants that were killed on the Tanganyika mainland. The third pillar of the economy was slaves, which gave Zanzibar an important place in the Arab slave trade, the Indian Ocean equivalent of the better-known Triangular Trade. The Omani Sultan of Zanzibar controlled a substantial portion of the African Great Lakes coast, known as Zanj, as well as extensive inland trading routes.
Sometimes gradually, sometimes by fits and starts, control of Zanzibar came into the hands of the British Empire. Part of the political impetus for this was the movement for the abolition of the slave trade. In 1890, Zanzibar became a British protectorate. The death of one sultan and the succession of another of whom the British did not approve later led to the Anglo-Zanzibar War, also known as the shortest war in history.
The islands gained independence from Britain in December 1963 as a constitutional monarchy. A month later, the bloody Zanzibar Revolution, in which several thousand Arabs and Indians were killed and thousands more expelled and expropriated, led to the Republic of Zanzibar and Pemba. That April, the republic merged with the mainland Tanganyika, or more accurately, was subsumed into Tanzania, of which Zanzibar remains a semi-autonomous region. Zanzibar was most recently in the international news with a January 2001 massacre, following contested elections.
Zanzibar has been inhabited, perhaps not continuously, since the Paleolithic. A 2005 excavation at Kuumbi Cave in southeastern Zanzibar found heavy-duty stone tools that show occupation of the site at least 22,000 years ago. Archaeological work at a limestone cave used radiocarbon techniques to demonstrate more recent occupation, from around 2800 BC to around the start of the common era (Chami 2006). Traces of these communities include objects such as glass beads from around the Indian Ocean, suggesting early trans-oceanic trade networks, although some writers have expressed pessimism about this possibility.
No cave sites on Zanzibar have revealed pottery fragments used by early and later Bantu farming and iron-working communities who lived on the islands (Zanzibar, Mafia) during the first millennium AD. On Zanzibar, the evidence for the later farming and iron-working communities dating from the mid-first millennium AD is much stronger and indicates the beginning of urbanism there when settlements were built with mud-timber structures (Juma 2004). This is somewhat earlier than the existing evidence for towns in other parts of the Swahili Coast, given as the 9th century AD. The first permanent residents of Zanzibar seem to have been the ancestors of the Hadimu and Tumbatu, who began arriving from the African Great Lakes mainland around 1000 AD. They had belonged to various Bantu ethnic groups from the mainland, and on Zanzibar they lived in small villages and failed to coalesce to form larger political units. Because they lacked central organization, they were easily subjugated by outsiders.
Early Iranian & Arab rule
Ancient pottery demonstrates that trade routes with Zanzibar existed as far back as ancient Sumer and Assyria. An ancient pendant discovered near Eshnunna, dated to ca. 2500-2400 BC, has been traced to copal imported from the Zanzibar region.
Traders from Arabia (mostly Yemen), the Persian Gulf region of Iran (especially Shiraz), and west India probably visited Zanzibar as early as the 1st century AD. They used the monsoon winds to sail across the Indian Ocean and landed at the sheltered harbor located on the site of present-day Zanzibar Town. Although the islands had few resources of interest to the traders, they offered a good location from which to make contact and trade with the towns of the Swahili Coast. A phase of urban development associated with the introduction of stone material to the construction industry of the African Great Lakes littoral began from the 10th century AD.
Traders began to settle in small numbers on Zanzibar in the late 11th or 12th century, intermarrying with the indigenous Africans. Eventually a hereditary ruler (known as the Mwenyi Mkuu or Jumbe), emerged among the Hadimu, and a similar ruler, called the Sheha, was set up among the Tumbatu. Neither had much power, but they helped solidify the ethnic identity of their respective peoples.
Villages were also present in which lineage groups were common.
Vasco da Gama's visit in 1499 marked the beginning of European influence. In 1503 or 1504, Zanzibar became part of the Portuguese Empire when Captain Ruy Lourenço Ravasco Marques landed and demanded and received tribute from the sultan in exchange for peace. Zanzibar remained a possession of Portugal for almost two centuries.
Later Arab rule
In 1698, Zanzibar became part of the overseas holdings of Oman, falling under the control of the Sultan of Oman. The Portuguese were expelled and a lucrative trade in slaves and ivory thrived, along with an expanding plantation economy centring on cloves. The Arabs established garrisons at Zanzibar, Pemba, and Kilwa. The height of Arab rule came during the reign of Seyyid Said (more fully, Sayyid Said bin Sultan al-Busaid), who in 1840 moved his capital from Muscat in Oman to Stone Town. He established a ruling Arab elite and encouraged the development of clove plantations, using the island's slave labour. Zanzibar's commerce fell increasingly into the hands of traders from the Indian subcontinent, whom Said encouraged to settle on the island. After his death in 1856, his sons struggled over the succession. On April 6, 1861, Zanzibar and Oman were divided into two separate principalities. Sayyid Majid bin Said Al-Busaid (1834/5–1870), his sixth son, became the Sultan of Zanzibar, while the third son, Sayyid Thuwaini bin Said al-Said, became the Sultan of Oman.
The Sultan of Zanzibar controlled a large portion of the African Great Lakes Coast, known as Zanj, as well as trading routes extending much further across the continent, as far as Kindu on the Congo River. In November 1886, a German-British border commission established the Zanj as a ten-nautical mile (19 km) wide strip along most of the coast of the African Great Lakes, stretching from Cape Delgado (now in Mozambique) to Kipini (now in Kenya), including Mombasa and Dar es Salaam, and several offshore Indian Ocean islands. However, from 1887 to 1892, all of these mainland possessions were lost to the colonial powers of the United Kingdom, Germany, and Italy, with Britain gaining control of Mombasa in 1963.
In the late 1800s, the Omani Sultan of Zanzibar also briefly acquired nominal control over parts of Mogadishu in the Horn region to the north. However, power on the ground remained in the hands of the Somali Geledi Sultanate (which, also holding sway over the Shebelle region in Somalia's interior, was at its zenith). In 1892, Ali bin Said leased the city to Italy. The Italians eventually purchased the executive rights in 1905, and made Mogadishu the capital of the newly established Italian Somaliland.
Zanzibar was famous worldwide for its spices and its slaves. It was the African Great Lakes' main slave-trading port, and in the 19th century as many as 50,000 slaves were passing through the slave markets of Zanzibar each year. (David Livingstone estimated that 80,000 Africans died each year before ever reaching the island.) Tippu Tip was the most notorious slaver, under several sultans, and also a trader, plantation owner and governor. Zanzibar's spices attracted ships from as far away as the United States, which established a consulate in 1837. The United Kingdom's early interest in Zanzibar was motivated by both commerce and the determination to end the slave trade. In 1822, the British signed the first of a series of treaties with Sultan Said to curb this trade, but not until 1876 was the sale of slaves finally prohibited.
Zanzibar had the distinction of having the first steam locomotive in the African Great Lakes, when Sultan Bargash bin Said ordered a tiny 0-4-0 tank engine to haul his regal carriage from town to his summer palace at Chukwani.
British influence and rule
The British Empire gradually took over; the relationship was formalized by the 1890 Heligoland-Zanzibar Treaty, in which Germany pledged, among other things, not to interfere with British interests in Zanzibar. This treaty made Zanzibar and Pemba a British protectorate (not colony), and the Caprivi Strip (in what is now Namibia) part of German South-West Africa. British rule through a sultan (vizier) remained largely unchanged.
The death of Hamad bin Thuwaini on 25 August 1896 saw Khalid bin Bargash, eldest son of the second sultan, Barghash ibn Sa'id, take over the palace and declare himself the new ruler. This was contrary to the wishes of the British government, which favoured Hamoud bin Mohammed. This led to a showdown, later called the Anglo-Zanzibar War, on the morning of 27 August, when ships of the Royal Navy destroyed the Beit al Hukum Palace, having given Khalid a one-hour ultimatum to leave. He refused, and at 9 am the ships opened fire. Khalid's troops returned fire and he fled to the German consulate. A ceasefire was declared 45 minutes after the action had begun, giving the bombardment the title of The Shortest War in History. Hamoud was declared the new ruler and peace was restored once more. Acquiescing to British demands, he brought an end in 1897 to Zanzibar's role as a centre for the centuries-old eastern slave trade by banning slavery and freeing the slaves, compensating their owners. Hamoud's son and heir apparent, Ali, was educated in Britain.
From 1913 until independence in 1963, the British appointed their own residents (essentially governors).
Independence and revolution
On 10 December 1963, Zanzibar received its independence from the United Kingdom as a constitutional monarchy under the Sultan. This state of affairs was short-lived, as the Sultan and the democratically elected government were overthrown on 12 January 1964 in the Zanzibar Revolution led by John Okello, a Ugandan citizen who invaded Zanzibar with British-trained Ugandan soldiers from the mainland. Sheikh Abeid Amani Karume was named president of the newly created People's Republic of Zanzibar and Pemba. Several thousand ethnic Arab (5,000-12,000 Zanzibaris of Arab descent) and Indian civilians were murdered and thousands more detained or expelled, with their property confiscated or destroyed. The film Africa Addio documents the violence and massacre of unarmed ethnic Arab civilians.
The revolutionary government nationalized the local operations of the two foreign banks in Zanzibar, Standard Bank and National and Grindlays Bank. These nationalized operations may have provided the foundation for the newly created Peoples Bank of Zanzibar. Jetha Lila, the one locally owned bank in Zanzibar, closed. It was owned by Indians and although the revolutionary government of Zanzibar urged it to continue functioning, the loss of its customer base as Indians left the island made it impossible to continue.
Union with Tanganyika
On 26 April 1964, the mainland colony of Tanganyika united with Zanzibar to form the United Republic of Tanganyika and Zanzibar; this lengthy name was compressed into a portmanteau, the United Republic of Tanzania, on 29 October 1964. After unification, local affairs were controlled by President Abeid Amani Karume, while foreign affairs were handled by the United Republic in Dar es Salaam. Zanzibar remains a semi-autonomous region of Tanzania.
Lists of rulers
Sultans of Zanzibar
- Majid bin Said (1856–1870)
- Barghash bin Said (1870–1888)
- Khalifah bin Said (1888–1890)
- Ali bin Said (1890–1893)
- Hamad bin Thuwaini (1893–1896)
- Khalid bin Barghash (1896)
- Hamud bin Muhammed (1896–1902)
- Ali bin Hamud (1902–1911) (abdicated)
- Khalifa bin Harub (1911–1960)
- Abdullah bin Khalifa (1960–1963)
- Jamshid bin Abdullah (1963–1964)
- Sir Lloyd William Matthews, (1890 to 1901)
- A.S. Rogers, (1901 to 1906)
- Arthur Raikes, (1906 to 1908)
- Francis Barton, (1906 to 1913)
- Francis Pearce, (1913 to 1922)
- John Sinclair, (1922 to 1923)
- Alfred Hollis, (1923 to 1929)
- Richard Rankine, (1929 to 1937)
- John Hall, (1937 to 1940)
- Henry Pilling, (1940 to 1946)
- Vincent Glenday, (1946 to 1951)
- John Rankine, (1952 to 1954)
- Henry Steven Potter, (1955 to 1959)
- Arthur George Mooring, (1959 to 1963)
- "Excavations at Kuumbi Cave on Zanzibar 2005", The African Archaeology Network: Research in Progress, Paul Sinclair (Uppsala University), Abdurahman Juma, Felix Chami, 2006
- Ingrams, William Harold (1967). Zanzibar, its history and its people. Routledge. pp. 43–46. ISBN 0-7146-1102-6.
- Meyer, Carol; Joan Markley Todd; Curt W. Beck. "From Zanzibar to Zagros: A Copal Pendant from Eshnunna". Journal of Near Eastern Studies 50 (4): 289–298. doi:10.1086/373516. JSTOR 545490.
- Zanzibar: Its History and Its People, W. H. Ingrams, Frank Cass and Company Ltd., Abingdon, United Kingdom, 1931, page 99
- Lewis, I. M. (1988). A Modern History of Somalia: Nation and State in the Horn of Africa. Westview Press. p. 38. ISBN 978-0-8133-7402-4.
- Hamilton, Janice (1 January 2007). Somalia in Pictures. Twenty-First Century Books. p. 28. ISBN 978-0-8225-6586-4.
- http://ngm.nationalgeographic.com/ngm/data/2001/10/01/html/ft_20011001.6.html National Geographic article
- http://news.bbc.co.uk/1/hi/world/africa/6510675.stm Remembering East African slave raids
- Excerpt from: Race, Revolution and the Struggle for Human Rights in Zanzibar by Thomas Burgess
- Steere, Edward (1869). Some account of the town of Zanzibar. London: Charles Cull. |
Work presented today at the Goldschmidt Geochemistry Conference in Sacramento, California shows that the timing of the giant impact between Earth's ancestor and a planet-sized body occurred around 40 million years after the start of solar system formation. This means that the final stage of Earth's formation is around 60 million years older than previously thought.
Geochemists from the University of Lorraine in Nancy, France have discovered an isotopic signal which indicates that previous age estimates for both the Earth and the Moon are underestimates. Looking back into "deep time" it becomes more difficult to put a date on early Earth events. In part this is because there is little "classical geology" dating from the time of the formation of the Earth -- no rock layers, etc. So geochemists have had to rely on other methods to estimate early Earth events. One of the standard methods is measuring the changes in the proportions of different gases (isotopes) which survive from the early Earth.
Guillaume Avice and Bernard Marty analysed xenon gas found in South African and Australian quartz, which had been dated to 3.4 and 2.7 billion years respectively. The gas sealed in this quartz is preserved as in a "time capsule," allowing Avice and Marty to compare the current isotopic ratios of xenon, with those which existed billions of years ago. Recalibrating dating techniques using the ancient gas allowed them to refine the estimate of when the Earth began to form. This allows them to calculate that the Moon-forming impact is around 60 million years (+/- 20 m. y.) older than had been thought.
Previously, the time of formation of the Earth's atmosphere had been estimated at around 100 million years after solar system formation. As the atmosphere would not have survived the Moon-forming impact, this revision puts the timing at around 40 million years after solar system formation, making the atmosphere around 60 million years older than previously thought.
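To make the revision concrete, here is a small arithmetic sketch using only figures quoted in this article (the roughly 4,568-million-year age of the oldest solar-system rocks comes from the footnote below); treating that rock age as the start of solar system formation is a simplification.

```python
# Minimal arithmetic sketch of the revised timeline, using only figures quoted in
# this article: the oldest solar-system rocks at ~4,568 million years (my), the
# old ~100 my estimate, and the revised ~40 my (+/- 20 my) for the impact timing.

SOLAR_SYSTEM_START_MY = 4568        # age of the oldest solar-system rocks, my ago
OLD_ESTIMATE_AFTER_START = 100      # previous estimate: impact ~100 my after start
NEW_ESTIMATE_AFTER_START = 40       # revised estimate (+/- 20 my)

old_impact_age = SOLAR_SYSTEM_START_MY - OLD_ESTIMATE_AFTER_START  # ~4,468 my ago
new_impact_age = SOLAR_SYSTEM_START_MY - NEW_ESTIMATE_AFTER_START  # ~4,528 my ago

print(f"Previous estimate: impact ~{old_impact_age} my ago")
print(f"Revised estimate:  impact ~{new_impact_age} my ago "
      f"({new_impact_age - old_impact_age} my older than previously thought)")
```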
According to Guillaume Avice: "It is not possible to give an exact date for the formation of the Earth*. What this work does is to show that the Earth is older than we thought, by around 60 my.
"The composition of the gases we are looking at changes according the conditions they are found in, which of course depend on the major events in Earth's history. The gas sealed in these quartz samples has been handed down to us in a sort of "time capsule." We are using standard methods to compute the age of the Earth, but having access to these ancient samples gives us new data, and allows us to refine the measurement."
"The xenon gas signals allow us to calculate when the atmosphere was being formed, which was probably at the time the Earth collided with a planet-sized body, leading to the formation of the Moon. Our results mean that both the Earth and the Moon are older than we had thought."
Bernard Marty added "This might seem a small difference, but it is important. These differences set time boundaries on how the planets evolved, especially through the major collisions in deep time which shaped the solar system."
*The oldest rocks of the solar system have been dated to 4,568 my ago, so the Earth is younger than that.
Scientists were stunned when the Cassini spacecraft transmitted images of a geyser-like plume spouting from a tiny moon of Saturn no wider than Arizona. Huge geysers seem to be erupting from fissures in Enceladus's south pole, pouring tons of water vapor and ice into a thin ring around Saturn.
To understand this improbable water fountain in space and whether we should expect life to be teeming beneath, we spoke with John Spencer, an expert on the moons of giant planets who is part of the Cassini team, working at the Southwest Research Institute in Boulder, Colorado.
What are the geysers like?
JS: They are jets of water vapor and dust—really very fine water ice particles that are coming out of large fractures in the south pole of Enceladus. They were discovered last summer when we flew very close to Enceladus. We actually flew through this plume of water vapor and were able to sample it directly with the Cassini spacecraft.
So liquid water has definitely been discovered?
JS: This is not a direct detection of liquid water. It's an inference from the characteristics of this cloud of ice grains that is now being seen by the cameras. Liquid water is not certain, but it's probably the most plausible interpretation. But we don't know for sure.
How big is the plume?
JS: It's several hundred miles, certainly. It is as big as Enceladus itself, and probably bigger. It's not as though the stuff goes up and falls back to the surface, as happens with the eruptions of volcanoes on Io. This stuff just goes up and keeps on going, so defining the top of it is kind of tricky. The gravity is tiny on Enceladus, and this vapor is going fast enough that it will escape the gravity. It will blend out into the dusty environment of the E ring [the outermost ring of Saturn].
What is causing this eruption?
JS: You need quite a lot of energy if you are going to produce these geysers, and water boiling off into a vacuum is a good way to do that. If you expose liquid water to a vacuum, it boils explosively and it freezes at the same time. And so it's a pretty violent event. The boiling point becomes equal to the freezing point. So you get this very explosive boiling and freezing. So that's plenty of power to shoot stuff off the moon.
Was anything found besides (frozen or liquid) H2O?
JS: Because we have actually flown through the plume, the mass spectrometer on board of Cassini got a direct composition measurement and found some carbon dioxide in the plume and traces of nitrogen and methane. We did not see ammonia, which is a big surprise because ammonia has always been theorized to be kind of an anti-freeze that would be in these very cold moons. Ammonia makes it easier for liquids to melt and for geological activity to be produced. That was what everyone expected to see, and that was not seen.
Why are these geysers such a huge surprise?
JS: Everything else in the solar system the size of Enceladus is geologically dead or at least dormant. Enceladus is only 300 miles across. We don't expect such small objects to have much internal heat. Also, it's ten times further from the sun than we are, so it's awfully cold. But we measured quite a lot of heat coming out of these fractures. We expected temperatures of around 70 degrees Kelvin [- 300 degrees Fahrenheit] above absolute zero, but we actually saw temperatures in some small areas that were at least 145 degrees Kelvin [- 200 degrees Fahrenheit]. That's much warmer than we expected. Even though that's extremely cold by terrestrial standards, that's a pretty major addition of heat. Our instruments can't look deep down into the fissures, so we couldn't directly see the even higher temperatures you'd expect for the liquid water that might be powering the geysers.
Were there any hints that Enceladus could be odd?
JS: We knew Enceladus was strange ever since Voyager flew past in the early 1980s. We knew it had an extremely bright surface – it's the brightest object in the solar system. The surface is as bright as freshly fallen snow. It has a very strange and complicated geology. The surface has been torn up by internal forces, and there are very few craters. So we knew geological activity had taken place recently. But usually when we say "recently" we are thinking 100 million years ago, not right now. The other interesting thing is that there is a ring of dust or ice particles that surround Saturn right at the orbit of Enceladus. It appeared that debris was coming off Enceladus and feeding into the ring. So we had much circumstantial evidence that something was going on, but I think we were all taken aback to see this very active phenomenon going on under our noses.
What is causing the surprising heat?
JS: It almost certainly has something to do with tidal heating, which is the mechanism that powers the volcanoes on Io. Basically this moon is in an orbit that's not quite circular around Saturn. So sometimes it's closer, sometimes it's further away. When it's closer, the shape is distorted by the tidal forces of Saturn. And then as it moves away it relaxes a little bit. So its shape is changing all the time. That produces a lot of friction inside. It produces a lot of heat. And so that's probably the mechanism that produces enough heat to melt water or do whatever it is that it's actually doing down there. We don't understand the details. One of the odd things is that there is another moon right next door called Mimas, where tidal heating should be operating much more efficiently than on Enceladus. But Mimas is completely dead. So something is making this happen on Enceladus and not Mimas. We don't have a solution yet.
How probable is it that life exists on Enceladus?
JS: Well, obviously it's wildly speculative. But we know that life needs liquid water, at least the life we can imagine. We know it needs energy. And we've got both of those on Enceladus. Or at least we have some possibility of liquid water – I don't think it is a done deal. It's not impossible that there is life. But it's an awfully small place. The area that's active is less than a hundred miles across. For life to have developed and remained, you would have to have liquid water in that tiny area continually for possibly billions of years. So who knows? But it's certainly going to get people very interested in trying to figure out whether life could survive there. The other place where we have been excited about the possibility of life has been Europa, a moon of Jupiter, which is a far bigger place. We think it has a thick liquid water ocean which is thousands of times more voluminous than any water on Enceladus. And so in that case you might not have as much energy – you don't see it at the surface, you don't see it spouting off into space like we do with Enceladus – but you have a lot more space for things to happen and more places for things to hide. But in this one respect of having energy and having water at the surface, which could be a source of energy, Enceladus is looking very interesting.
Besides the surprise aspect, why is this discovery significant?
JS: This is a very exclusive club of bodies on which we can see current internal geological activity. There's the Earth, there's Jupiter's moon Io, there's Enceladus, and that's it. So that's very exciting in itself. We see evidence in many worlds in the outer solar system for ice vulcanism, if you want to call it that, where you have a surface that is mostly ice and there seem to be eruptions and disturbances and fracturing of the surface. And so to actually have a living example to study is really more than we could have hoped for. You can learn so much more when you can see something happening, rather than just picking over the bones of it millions of years later.
NASA is under fire right now for proposing cutting programs for unmanned space exploration so that they can invest in getting people to the moon and then Mars, and to maintain the space station. Could you comment on the value of unmanned missions like Cassini?
JS: This discovery is an example of the tremendous science you can get out of unmanned space flights. The return on these things is incredible. I hope the unmanned space program maintains a funding level that is commensurate with its enormous return, particularly given the meager scientific returns we are getting with things like the space station. You can do spectacular science without humans. I think the arguments for humans in space are not really scientific arguments. They are political. They are: We want humans in space because we think humans should be in space, not because it is the best way to do science. |
A blister is a bubble of fluid under the outer layer of skin. The fluid may be clear or filled with blood or pus. There are many possible causes of blisters including a burn, disease, an allergic reaction, or from your skin rubbing against something. Blisters caused by your skin rubbing against something are called friction blisters and most commonly occur on feet or hands.
You may get blisters on your feet if your shoes or socks don’t fit well. Athletes and hikers often get foot blisters. You may also get blisters on your hands when you work with tools for a long time (such as digging or raking). Gymnasts and baseball players often get blisters on their hands or fingers.
Blisters usually occur at the start of a new sports season or exercise program, after wearing new shoes, or when the weather is hot and humid.
When the skin becomes irritated, fluid collects underneath the outer layer of skin. This can be quite painful. The surrounding area may be red, sore, or swollen. Blisters can be very small or quite large.
Most blisters are filled with clear fluid. If the fluid is bloody, it usually means that a lot of force caused the blister. If the blister is filled with pus, it is probably infected. Both the blister and the tissue around it can become infected. Infected blisters are very painful; they may be swollen and warm to the touch, and you may even have a fever.
It is best to leave most small blisters alone. They should be kept clean and covered with an antibiotic ointment and a bandage. Putting a little petroleum jelly around the blister or the part of a shoe that causes the irritation may reduce friction.
Moleskin can be used to protect a blister. You can buy moleskin at a drug store. Cut a round piece of moleskin that is bigger than the blister and cut a hole in the center. Then put the moleskin on your skin with the hole over the blister. Cover the moleskin with a bandage.
Blisters usually drain by themselves. The overlying skin is a natural protective layer. It should be left in place until it is very dry and the underlying skin has become tough and painless. Then you can trim off the layer of dry skin.
Large blisters may need to be drained. It is important to do this in a way that does NOT cause an infection. Always use a sterilized needle to drain a blister. The needle should be sterilized by heating it with a flame until it is red hot and then allowed to cool. You can also sterilize a needle with rubbing alcohol. Use the needle to puncture the edge of the blister in several places. Make the punctures wide enough so they do not reseal. Cover the area with antibiotic ointment and a bandage.
If you have a blister that becomes infected, you need to come to Reddy Urgent Care, as you may need an antibiotic.
Most blisters last about 3 to 7 days. You can continue with your activities (such as hiking or landscaping), as long as you can tolerate the discomfort of the blisters, and they are protected. If your blisters are infected, stop your activities, and come to Reddy Urgent Care. Do not resume activity until the infection is gone.
Try to minimize rubbing against your skin using the following guidelines.
Make sure that your shoes fit well and don’t wear wet shoes.
Wear two pairs of socks to protect your feet.
Wear gloves to protect your hands.
Put petroleum jelly (Vaseline) on spots that tend to rub or use a foot powder.
Put athletic tape or a bandage over sore spots. |
Communication is a process that allows organisms to exchange information by several methods. Exchange requires feedback. The word communication is also used in contexts where little or no feedback is expected, such as broadcasting, or where feedback may be delayed because the sender and receiver use different methods, technologies, timing and means for feedback.
Specialized fields focus on various aspects of communication, and include Mass communication, Communication studies, Organizational Communication, Sociolinguistics, Conversation analysis, Cognitive linguistics, Linguistics, Pragmatics, Semiotics, and Discourse analysis.
Communication is the articulation of sending a message, whether verbal or nonverbal, so long as a being transmits a thought-provoking idea, gesture, or action.
Communication can be defined as the process of meaningful interaction among human beings. It is the act of passing information and the process by which meanings are exchanged so as to produce understanding.
Communication is the process by which any message is given or received through talking, writing, or making gestures.
There are auditory means, such as speaking, singing and sometimes tone of voice, and nonverbal, physical means, such as body language, sign language, paralanguage, touch, eye contact, or the use of writing.
Communication happens at many levels (even for one single action), in many different ways, and for most beings, as well as certain machines. Several, if not all, fields of study dedicate a portion of attention to communication, so when speaking about communication it is very important to be sure about what aspects of communication one is speaking about. Definitions of communication range widely, some recognizing that animals can communicate with each other as well as human beings, and some are more narrow, only including human beings within the parameters of human symbolic interaction.
Nonetheless, communication is usually described along a few major dimensions:
- Content (what type of things are communicated)
- Source/Emitter/Sender/Encoder (by whom)
- Form (in which form)
- Channel (through which medium)
- Destination/Receiver/Target/Decoder (to whom)
- Purpose/Pragmatic aspect
Between parties, communication includes acts that confer knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, in one of the various manners of communication. The form depends on the abilities of the group communicating. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person or being, another entity (such as a corporation or group of beings).
Depending on the focus (who, what, in which form, to whom, to which effect), there exist various classifications. Some of those systematical questions are elaborated in Communication theory.
Content, form, and destination of human communication
Communication as a named and unified discipline has a history of contestation that goes back to the Socratic dialogues, in many ways making it the first and most contestatory of all early sciences and philosophies. Seeking to define "communication" as a static word or unified discipline may not be as important as understanding communication as a family of resemblances with a plurality of definitions, as Ludwig Wittgenstein had put forth. Some definitions are broad, recognizing that animals can communicate, and some are more narrow, only including human beings within the parameters of human symbolic interaction.
Nonetheless, communication is usually described along three major dimensions: content, form, and destination. Examples of communication content include acts that declare knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, including social cues and trappings, gestures (nonverbal communication, sign language and body language), writing, or verbal speaking. The form depends on the symbol systems used. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person (in interpersonal communication), or another entity (such as a corporation or group).
There are many theories of communication, and a commonly held assumption is that communication must be directed towards another person or entity. This essentially ignores intrapersonal communication (note intra-, not inter-) via diaries or self-talk.
Interpersonal conversation can occur in dyads and groups of various sizes, and the size of the group impacts the nature of the talk. Small-group communication takes place in settings of between three and twelve individuals, and differs from large-group interaction in companies or communities. This form of communication, in a dyad or larger group, is sometimes referred to as the psychological model of communication, wherein a message is sent by a sender through a channel to a receiver. At the largest level, mass communication describes messages sent to huge numbers of individuals through mass media, although there is debate about whether this counts as interpersonal communication.
Communication as information transmission
Communication: transmitting a message with the expectation of some kind of response. This can be interpersonal or intrapersonal. The transmission of messages is governed by three levels of semiotic rules:
- Syntactic (formal properties of signs and symbols),
- Pragmatic (concerned with the relations between signs/expressions and their users), and
- Semantic (study of relationships between signs and symbols and what they represent).
Therefore, communication is social interaction where at least two interacting agents share a common set of signs and a common set of semiotic rules. (This commonly held rule in some sense ignores autocommunication, including intrapersonal communication via diaries or self-talk).
In a simple model, information or content (e.g. a message in natural language) is sent in some form (as spoken language) from an emitter/sender/encoder to a destination/receiver/decoder. In a slightly more complex form, a sender and a receiver are linked reciprocally.
A particular instance of communication is called a speech act. In the presence of "communication noise" on the transmission channel (air, in this case), reception and decoding of content may be faulty, and thus the speech act may not achieve the desired effect.
Dialogue is a form of communication in which both the parties are involved in sending and receiving information.
Theories of coregulation describe communication as a creative and dynamic continuous process, rather than a discrete exchange of information.
Nonverbal communication is the act of imparting or interchanging thoughts, posture, opinions or information without the use of words, using gestures, sign language, facial expressions and body language instead.
- Main article: Models of communication
The wide range of theories about communication make summarization difficult. However, a basic model of communication describes communication as a five-step output-input process that entails a sender's creation (or encoding) of a message, and the message's transmission through a channel or medium. This message is received and then interpreted. Finally this message is responded to, which completes the process of communication. This model is based on a model of signal transmission known as the Shannon-Weaver model. A related model can be seen in the work of Roman Jakobson.
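As a rough illustration of this five-step model (a sketch only: the function names and the simple bit-flip noise model below are illustrative assumptions, not part of any standard formulation), the encode-transmit-receive-interpret sequence can be written as a small pipeline:

```python
import random

def encode(message: str) -> list[int]:
    """Sender: turn the message into a stream of bits (the 'signal')."""
    return [int(b) for ch in message.encode("utf-8") for b in f"{ch:08b}"]

def channel(bits: list[int], noise: float) -> list[int]:
    """Channel/medium: each bit may be flipped by 'communication noise'."""
    return [b ^ 1 if random.random() < noise else b for b in bits]

def decode(bits: list[int]) -> str:
    """Receiver: reassemble the bytes and interpret them as text."""
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

sent = "hello"
received = decode(channel(encode(sent), noise=0.02))
print(sent, "->", received)   # with noise above zero the received message may be garbled
```

With the noise level set to zero the message arrives intact; raising it shows how "communication noise" degrades the received meaning, and a reply from the receiver would complete the loop.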
Our indebtedness to the Ancient Romans in the field of communication does not end with the Latin root "communicare". They devised what might be described as the first real mail or postal system in order to centralize control of the empire from Rome. This allowed for personal letters and for Rome to gather knowledge about events in its many widespread provinces.
In the last century, a revolution in telecommunications has greatly altered communication by providing new media for long distance communication. The first transatlantic two-way radio broadcast occurred on July 25, 1920 and led to common communication via analogue and digital media:
- Analog telecommunications include traditional Telephony, radio, and TV broadcasts.
- Digital telecommunications allow for computer-mediated communication, telegraphy, and computer networks.
Communications media impact more than the reach of messages. They impact content and customs; for example, Thomas Edison had to discover that hello was the least ambiguous greeting by voice over a distance; previous greetings such as hail tended to be garbled in the transmission. Similarly, the terseness of e-mail and chat rooms produced the need for the emoticon.
Modern communication media now allow for intense long-distance exchanges between larger numbers of people (many-to-many communication via e-mail, Internet forums). On the other hand, many traditional broadcast media and mass media favor one-to-many communication (television, cinema, radio, newspaper, magazines).
The adoption of a dominant communication medium is important enough that historians have folded civilization into "ages" according to the medium most widely used. A book titled "Five Epochs of Civilization" by William McGaughey (Thistlerose, 2000) divides history into the following stages: Ideographic writing produced the first civilization; alphabetic writing, the second; printing, the third; electronic recording and broadcasting, the fourth; and computer communication, the fifth.
While it could be argued that these "Epochs" are just a historian's construction, digital and computer communication shows concrete evidence of changing the way humans organize. The latest trend in communication, termed smartmobbing, involves ad-hoc organization through mobile devices, allowing for effective many-to-many communication and social networking.
The following factors can impede human communication:
- Not understanding the language
- Verbal and non-verbal messages are in a different language. This includes not understanding the jargon or idioms used by another sub-culture or group.
- Not understanding the context
- Not knowing the history of the occasion, relationship, or culture.
- Intentionally delivering an obscure or confusing message.
- Inadequate attention to processing a message. This is not limited to live conversations or broadcasts. Any person may improperly process any message if they do not focus adequately.
- Improper feedback and clarification
- In asynchronous communication, neglecting to give immediate feedback may lead to larger misunderstandings. Questions and acknowledgment such as ("what?") or ("I see") are typical feedback mechanisms.
- Lack of time
- There is not enough time to communicate with everyone.
- Physical barriers to the transmission of messages, such as background noise, facing the wrong way, talking too softly, and physical distance.
- Medical issues
- Hearing loss and various brain conditions can hamper communication.
- World-views may discourage one person from listening to another.
- Fear and anxiety associated with communication are known by some psychologists as communication apprehension. Besides apprehension, communication can be impaired via processes such as bypassing, indiscrimination, and polarization.
Other examples of communication
Almost all communication involves periods of silence or an equivalent (e.g. spaces in written communication). However, computer or electronic communication is less reliant on such delimiters.
In certain contexts, silence can convey its own meaning, e.g. reverence, indifference, emotional coldness, rudeness, thoughtfulness, humility, etc.
Also see the Prisoners and hats puzzle
- Jungle drums
- Smoke signals
- Morse code
- Semaphores (use of devices to increase the distance "hand" signals can be seen from by increasing the size of the movable object)
- Voyager Golden Record (sent on Voyager 1 into interstellar space)
- Art (including Theatre Arts)
Information exchange between living organisms
Communication in many of its facets is not limited to humans, or even to primates. Every information exchange between living organisms — i.e. transmission of signals involving a living sender and receiver — can be considered a form of communication. Thus, there is the broad field of animal communication, which encompasses most of the issues in ethology. On a more basic level, there is cell signaling, cellular communication, and chemical communication between primitive organisms like bacteria, and within the plant and fungal kingdoms. All of these communication processes are sign-mediated interactions with a great variety of distinct coordinations.
A language is a system of arbitrary signals, such as voice sounds, gestures or written symbols which communicate thoughts or feelings.
Human spoken and written languages can be described as a system of symbols (sometimes known as lexemes) and the grammars (rules) by which the symbols are manipulated. The word "language" is also used to refer to common properties of languages.
Language learning is normal in human childhood. Most human languages use patterns of sound or gesture for symbols which enable communication with others around them. There are thousands of human languages, and these seem to share certain properties, even though many shared properties have exceptions.
Humans and computer programs have also constructed other languages, including constructed languages such as Esperanto, Ido, Interlingua, Klingon, programming languages, and various mathematical formalisms. These languages are not necessarily restricted to the properties shared by human languages.
For effective communication in specialized contexts, certain strategies can be taken that will help people achieve their goals and can be seen as techniques for attaining the purpose of communication.
Below is a list with explanations of communication strategies used in marketing and selling:
- Adaptive Innovation
- Building or improving products, services, and processes while working with a customer versus building products or services outside a customer engagement. Relates to service companies working with large enterprises.
- Entrepreneurial Management
- Describes a business where the employees are expected to work and relate to each other as self driven business partners versus expecting to be mentored by a command and control management structure. This assumes the phrase, "be the leader you seek."
- One Voice
- A skill used to manage customer team meetings where one person is designated the leader and other team members direct all their comments and questions through the designated OneVoice speaker rather than to the customer(s).
- A term related to business people being "on stage" at all times during a meeting or customer visit.
- Strategic speed
- A term related to working fast and smart, constantly looking for opportunities to improve and innovate.
- Discipline of Dialogue
- A term related to controlling your words and conversations during a business meeting or presentation.
SOLER (Egan, 1986) is a technique used by care workers. It helps clients or patients to feel safe and to trust the care-giver, and assists in effective communication. SOLER means:
- S – Sit squarely in relation to the patient
- O – Open position
- L – Lean slightly towards the patient
- E – Eye contact
- R – Relax
Metacommunication is the process of communicating about communication, for example, to discuss a past conversation and to determine the meanings behind certain words, phrases, etc. It can be used as a tool for sense making, or for better understanding events, places, people, relationships, etc. The ability to communicate on the meta-level requires introspection and, more specifically, what is called metacommunicative competence. It is not a distinct form of communication as seen from the five aspects mentioned in the introduction.
- Episodic Level Metacommunication
The events occurring within a given communicative episode help the participants make relational sense out of the experience, e.g. "This is an order", "Please", or "I am joking". People can reflect on their communication at different levels: (1) labelling what kind of message is being sent and how serious it is; (2) saying why the message was sent; (3) saying why it was sent by referring to the other's wishes; (4) saying why it was sent by referring to a request of the other; (5) saying why it was sent by referring to the kind of response the sender was trying to elicit; and (6) saying what the sender was trying to get the other to do.
Mass media is a term used to denote, as a class, that section of the media specifically conceived and designed to reach a very large audience (typically at least as large as the whole population of a nation state). It was coined in the 1920s with the advent of nationwide radio networks and of mass-circulation newspapers and magazines. The mass-media audience has been viewed by some commentators as forming a mass society with special characteristics, notably atomization or lack of social connections, which render it especially susceptible to the influence of modern mass-media techniques such as advertising and propaganda.
Animal communication is any behaviour on the part of one animal that has an effect on the current or future behavior of another animal. Of course, human communication can be subsumed as a highly developed form of animal communication. The study of animal communication, called zoosemiotics (distinguishable from anthroposemiotics, the study of human communication), has played an important part in the development of ethology, sociobiology, and the study of animal cognition. This is evident in the fact that humans can communicate with some animals, especially dolphins and other animals used in circuses, although these animals have to learn a special means of communication.
Animal communication, and indeed the understanding of the animal world in general, is a rapidly growing field, and even in the 21st century so far, many prior understandings related to diverse fields such as personal symbolic name use, animal emotions, animal culture and learning, and even sexual conduct, long thought to be well understood, have been revolutionized.
- Written and spoken language
- Hand signals and body language
- Territorial marking (used by animals such as dogs to signal "stay away from my territory")
- Pheromones, which communicate (among other things) readiness to mate; a well-known example is moth traps, which contain pheromones to attract moths.
- List of basic communication topics
- Augmentative communication
- Animal communication
- Anxiety/Uncertainty Management
- Cognitive linguistics
- Communication skills
- Communication skills training
- Communication systems
- Communication theory
- Communications media
- Corporate communications
- Development communication
- Diffusion of innovations
- Discourse analysis
- Electronic communication
- Emotional content
- Environmental communication
- Information theory
- Interpersonal communication
- Interspecies communication
- Mass communication
- Mass media
- Media studies
- Nonverbal communication
- Organizational communication
- Persuasive communication
- Privileged communications
- Professional communication
- Scientific communication
- Technical communication
- Technical writing
- Verbal communication
- A brief history of communication across ages
- Communicating for change and impact
- How Human Communication Fails (Tampere University of Technology)
- The Transmission Model of Communication (Daniel Chandler)
This page uses Creative Commons Licensed content from Wikipedia.
Modern cryptographic systems include symmetric-key algorithms (such as DES and AES) and public-key algorithms (such as RSA). Symmetric-key algorithms use a single shared key; keeping data secret requires keeping this key secret. Public-key algorithms use a public key and a private key. The public key is made available to anyone (often by means of a digital certificate). A sender encrypts data with the public key; only the holder of the private key can decrypt this data.
Since public-key algorithms tend to be much slower than symmetric-key algorithms, modern systems such as TLS and SSH use a combination of the two: one party receives the other's public key, and encrypts a small piece of data (either a symmetric key or some data used to generate it). The remainder of the conversation uses a (typically faster) symmetric-key algorithm for encryption.
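A minimal sketch of this hybrid pattern, using the third-party Python `cryptography` package (the message text and key size here are illustrative choices, not a production configuration):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Receiver's long-term key pair; the public half could be published in a certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: generate a fresh symmetric session key and wrap it with the public key.
session_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)
ciphertext = Fernet(session_key).encrypt(b"the rest of the conversation")

# Receiver: unwrap the session key with the private key, then decrypt quickly.
recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == b"the rest of the conversation"
```

Only the small session key is handled by the slow public-key step; the bulk of the data goes through the faster symmetric cipher, which mirrors how TLS and SSH structure a session.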
Computer cryptography uses integers for keys. In some cases keys are randomly generated using a random number generator (RNG) or pseudorandom number generator (PRNG). A PRNG is a computer algorithm that produces data that appears random under analysis. PRNGs that use system entropy to seed data generally produce better results, since this makes the initial conditions of the PRNG much more difficult for an attacker to guess. In other situations, the key is derived deterministically using a passphrase and a key derivation function.
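Both approaches are available in Python's standard library; a brief sketch (the passphrase, salt size, and iteration count are illustrative assumptions):

```python
import os
import secrets
import hashlib

# Randomly generated key: 32 bytes (256 bits) drawn from the OS entropy source.
random_key = secrets.token_bytes(32)

# Deterministically derived key: passphrase plus salt through PBKDF2, a key derivation function.
salt = os.urandom(16)  # stored alongside the ciphertext; it need not be secret
derived_key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 600_000)

print(len(random_key), len(derived_key))  # both are 32-byte keys suitable for a 256-bit cipher
```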
The simplest method to read encrypted data is a brute force attack—simply attempting every number, up to the maximum length of the key. Therefore, it is important to use a sufficiently long key length; longer keys take exponentially longer to attack, rendering a brute force attack impractical. Key lengths of 128 or 256 bits (for symmetric-key algorithms) and 2048 bits or more (for public-key algorithms) are common today; 1024-bit public keys are no longer considered adequate.
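The exponential growth is easy to see with a back-of-the-envelope calculation (the guess rate below is an arbitrary assumption about the attacker):

```python
guesses_per_second = 1e12              # assume a very fast attacker: one trillion keys per second
seconds_per_year = 60 * 60 * 24 * 365

for bits in (40, 64, 128):
    worst_case_years = 2 ** bits / guesses_per_second / seconds_per_year
    print(f"{bits}-bit key: up to {worst_case_years:.3g} years")
# Each extra bit doubles the search space, so a 128-bit key is far beyond brute force.
```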
- Distributed key generation: For some protocols, no party should be in the sole possession of the secret key. Rather, during distributed key generation, every party obtains a share of the key. A threshold number of the participating parties must cooperate to achieve a cryptographic task, such as decrypting a message.
Chromatophores are pigment-containing cells. If the pigment is melanin, they are called melanophores. Chromatophores are common in animals such as fish, cephalopods, crustaceans, and amphibians.
Chromatophores are often used for camouflage. This picture (courtesy of the Field Museum of Natural History) shows a winter flounder resting on a checkerboard pattern.
The chromatophores of cephalopods change size (expand and contract) as a result of activity of muscle fibers and the motor neurons that terminate at them.
In crustaceans and amphibians, the chromatophores have a fixed shape. Color change comes about through the dispersal (darkening) or aggregation (lightening) of granules within the cell. This is under hormonal control.
(See also: an illustrated description of the melanophores of the frog and their hormonal control.)
Peripheral nerves are those nerves that lie outside of the central nervous system or CNS (brain and spinal cord). These nerves may carry sensory information (sensory nerves) to the CNS or motor signals (motor nerves) to the muscles. Nerves are bundles of fibers from neurons, the basic nerve cells, and like any part of the body they are prone to a wide range of diseases. When a nerve is affected by any disease process, its function may be affected to varying degrees. This may be seen as a change in sensation, abnormal sensations or altered muscle activity.
What is neuritis?
Neuritis is a broad term used to describe various diseases involving the inflammation of a nerve or a group of nerves. It is often associated with pain, changes in sensation, weakness, numbness, paralysis or muscle wasting. Neuritis, along with other diseases that damage peripheral nerves, falls under the group of conditions collectively known as neuropathies.
Types of Neuritis
Several types of neuritis have been identified. The most common types of neuritis are peripheral neuritis and optic neuritis. There are several other less common varieties of neuritis including :
- brachial neuritis
- polyneuritis multiplex
- intercostal neuritis
- ulnar neuritis
- lumbosacral neuritis
- occipital neuritis
- vestibular neuritis
- cranial neuritis
- arsenic neuritis
- sensory motor polyneuropathy
- granulomatous neuritis of leprosy
Causes of Neuritis
In most patients with neuritis the exact cause remains uncertain. Neuritis is more commonly seen with advancing age (55 years and older) and in women. Diseases affecting blood supply to the nerves and deficiencies of certain nutrients are prominent among the factors that can contribute to the development of neuritis.
The various causes of neuritis include :
Injury to a nerve causes inflammation, and subsequently the symptoms of neuritis arise. There are various types of injuries; these are generally localized and involve single nerves. The various agents causing injury to nerves are:
- Physical injury. Compression of a nerve, or direct damage from a penetrating injury, can lead to inflammation. Carpal tunnel syndrome is a typical example of compression of a nerve and subsequent nerve injury. This can result in pain and numbness of the thumb and the index finger. Use of high-heeled shoes leading to compression of the nerves that supply the toes is another example of a compression injury. This can result in pain and numbness of the affected toes.
- Chemical injury. Nerve injury may arise secondary to damage to adjacent structures and can result in the release of noxious substances that lead to chemical neuritis. Administration of some medicines by injections can cause chemical injury to the nerves lying in close proximity to the injection site. Neuritis may develop as a side effect of certain drugs used in chemotherapy. Chemical neuritis can also result from metallic poisoning like arsenic poisoning.
- Radiation injury. Radiation nerve injuries can develop following radiotherapy for various cancers. Brachial neuritis or plexopathy is a known complication of radiotherapy of the upper chest area.
Neuritis is considered to be commonly associated with various nutritional deficiencies. Vitamin B deficiencies like vitamin B1 (thiamine), B2 (riboflavin), B6 (pyridoxine) or B12 (cyanocobalamin) are often associated with peripheral neuritis.
Various infections can result in neuritis.
- Lyme disease
- Cat scratch disease
- Herpes simplex infection
Several disease conditions may lead to neuritis. These include:
- Diabetes mellitus
- Autoimmune diseases like multiple sclerosis, sarcoidosis and systemic lupus erythematosus
- Beriberi (caused by thiamine deficiency)
- Pernicious anemia
- Chronic acidosis
- Certain types of cancers
Optic neuritis is more commonly associated with the autoimmune diseases.
A few types of neuritis are genetically transmitted including :
- Leber’s hereditary optic neuropathy (LHON)
- Amyloid polyneuropathy
- Charcot-Marie-Tooth disease
Toxins and Medication
Neuritis can develop as a result of the toxicity of certain environmental pollutants, metals, drugs and other chemicals. Insecticides (like endosulfan and DDT), mercury, lead, arsenic, methanol, chronic alcoholism and ethambutol (an antibiotic) are some of the exposures that can lead to neuritis through their toxic effects. It may also be seen as a long-term side effect of some drugs used for cholesterol (statins), blood pressure and arthritis. Excessive intake of pyridoxine is also associated with neuritis.
Signs and Symptoms
The symptoms of neuritis depend on the nerve or group of nerves affected. The common symptoms of neuritis, usually localized to the affected area, include:
- pain – stabbing or pricking type
- muscle weakness (paresis)
- paresthesia (abnormal sensation) can be in the form of tingling or a burning sensation
In patients with severe forms of neuritis, numbness, loss of sensation (anesthesia), swelling, redness of the skin, paralysis, muscle wasting and loss of muscle reflexes may be seen.
Patients suffering from optic neuritis can have visual disturbances of varying degrees. It may be blurred or distorted vision in some patients while it may be loss of vision in others. Some patients may suffer from loss of color vision or pain in the eye. Some patients may have problems with adjustment to bright light or darkness. |
The Carolina Bays are large shallow elliptical depressions and wetlands with raised rims which are found east of the Rocky Mountains but are concentrated mainly along the Atlantic seaboard. The bays were discovered in the 1930s from the first aerial photographs of the Atlantic coast. The elliptical shape and alignment toward the Great Lakes was not discovered before aerial reconnaissance because the bays are very large and their shape cannot be easily determined from ground level. The Carolina bays are also called Delmarva Bays, Maryland Basins or Nebraska Rainwater basins.
LiDAR, first developed in the early 1960s, later revealed a great number of additional bays by emphasizing small differences in elevation. It is estimated that there are approximately 500,000 Carolina Bays. In some areas, the bays are so dense that all the ground is completely covered by them.
Characteristics of the Carolina Bays
The Carolina Bays are elliptical depressions that occur in sandy soil along the Atlantic coast of the United States and in some Midwestern states like Nebraska. The bays on the East Coast have a northwest/southeast alignment, whereas those in Nebraska have a northeast/southwest alignment. The bays point toward the Great Lakes. The main morphologic characteristics of the Carolina Bays were summarized by Eyton (1975).
Carolina Bays also occur in the gravels of Midlothian, Virginia at elevations that vary from 91 to 122 meters above sea level (Johnson and Goodwin, 1967). Midlothian is located approximately 27 kilometers west of Richmond.
Although many Carolina Bays have been destroyed by erosion, the structural preservation of the Carolina Bays may be partly due to the fact that they are found on flat porous landscape that allows rain water to quickly filter underground thereby preventing lateral water flow. The bays in Nebraska and Kansas occur in what once were the shores of the Western Interior Seaway of North America. This seaway disappeared by the Paleocene Epoch 60 million years ago after the Laramide Orogeny uplifted the Rocky Mountain region. The Midwestern bays are at elevations ranging from 400 to 900 meters above sea level and about 2000 kilometers from both the Pacific and Atlantic coasts. Fewer bays can be observed in the Midwestern states because only the larger bays have endured the erosion by water and the accumulation of layers of wind-blown dust and silt (loess).
Ages of the bays
Tests on the terrain of the Carolina Bays have produced a wide range of ages. A study from the University of South Carolina (Brooks, 2010) reported that the dates obtained by Optically Stimulated Luminescence (OSL) indicate that wind processes had modified the shorelines of the bays in five stages dating from 12,000 to 140,000 years ago.
OSL dating estimates the time since last exposure to sunlight for quartz sand and similar materials. Cosmic rays and ionizing radiation from naturally occurring radioactive elements in the earth causes electrons to become trapped in the crystal structures of buried quartz and other minerals. OSL frees the trapped electrons to produce luminescence in proportion to how long the quartz has been buried.
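In its simplest form, an OSL age is the total radiation dose recorded by the quartz divided by the annual dose rate at the site. The numbers below are invented for illustration and are not measurements from any Carolina Bay study:

```python
# Hypothetical example values for a buried quartz sample.
equivalent_dose_gray = 36.0       # total dose stored in the quartz grains (Gy)
dose_rate_gray_per_year = 0.003   # local dose rate from radioactivity and cosmic rays (Gy/yr)

burial_age_years = equivalent_dose_gray / dose_rate_gray_per_year
print(f"Approximate time since last exposure to sunlight: {burial_age_years:,.0f} years")
# 36 Gy accumulated at 3 mGy per year gives roughly 12,000 years.
```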
Bay Formation Theories
One of the first proposals for the formation of the Carolina Bays was made by Melton and Schriever from the University of Oklahoma in 1933. They suggested that a meteorite shower or a colliding comet coming from the northwest could have created the bays. Surface structures created by impacts only became accepted around 1960, when geologist Eugene M. Shoemaker presented criteria for establishing that Meteor Crater in Arizona was the result of an extraterrestrial impact and not the caldera of an extinct volcano.
Using the criteria established by Shoemaker, scientists concluded that the Carolina Bays could not have been created by an asteroid or comet. Analysis of the Carolina Bays showed no evidence of a hyperspeed impact.
Extraterrestrial impacts would have melted the target material, and impacts large enough to create the bays should have penetrated the soil, excavated bedrock, and produced distinctive signs of intense disturbance in the bedrock. The fact that there is no bedrock ejecta around the bays excludes the possibility that they were formed by extraterrestrial impacts.
Geologists have proposed substrate dissolution, action of wind, marine waves and currents that reduce the volume of karst-like depressions which are later modified by wind or ice-push processes (May and Warne, 1999). The bays have also been compared to thermokarst or thaw lakes that are circular or elliptical in shape and are often aligned with the prevailing wind (Melosh 2011). The problem with these terrestrial formation hypotheses is that they do not provide a mechanism for the formation of elliptical bays that radiate from a common point to account for the orientation of the Carolina Bays. In addition, the eolian and lacustrine processes fail to explain how bays at elevations of 200 meters or higher could have formed on ground that had not been close to the sea for millions of years. None of the terrestrial processes can explain why the bays have elliptical shapes with very similar aspect ratios.
It has also been suggested that the Carolina Bays could have formed when marshy ground dried up like the Australian salt lakes, but this does not create the raised rims or overlapping ellipses which are characteristic of the Carolina Bays.
The Younger Dryas Impact Hypothesis
In 2007, Richard Firestone proposed that an extraterrestrial comet airburst 12,900 years ago caused the late Pleistocene megafaunal extinctions and the Younger Dryas cooling event. In 2009, Firestone extended his argument by pointing out the orientation of the Carolina Bays and proposed that the comet impact could have struck the ice sheet that covered North America. His hypothesis proposed that fragments of the comet could have created the Great Lakes, but he did not explain how the bays were formed.
The Younger Dryas Impact Hypothesis was soundly rejected by the scientific community (Pinter, et al. 2011) because Firestone and his colleagues had not provided the type of impact evidence established by Shoemaker for hypervelocity impacts. In addition, the claim that the Carolina Bays could be used as evidence of an impact was rejected because there was no crater from an extraterrestrial impact and the bays had different dates which meant that they could not have been created by a single event.
A new impact hypothesis
The new impact hypothesis proposes that the Carolina Bays were made by impacts of ice ejected from a glacier by an extraterrestrial impact. The formation of the bays by secondary impacts is consistent with the physical characteristics of the bays.
The impacts should have occurred on soft ground so that the ice could penetrate and form oblique conical cavities that would later transform into elliptical bays.
The following table has the coordinates and sizes of 23 Carolina Bays. The bays have clearly defined borders that make it possible to measure the major and minor axes accurately using Google Earth with a LiDAR overlay.
(The table itself is not reproduced here; for each of the 23 bays it lists the coordinates, the major-axis length in meters (L), and the minor-axis width in meters (W).)
The dimensions of the bays correspond to impact angles from 31 to 41 degrees.
The interpretation of the elliptical bays as conic sections whose eccentricity depends on the angle of impact is a reasonable explanation for the geometrical regularity of the bays. Thus far, oblique impacts by ice offer the best mechanistic explanation for the formation and orientation of the Carolina Bays, but it will take additional research to establish this conclusively. |
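Under this conic-section interpretation, a commonly used approximation treats the cavity as an inclined cone, so the width-to-length ratio of the resulting ellipse is roughly the sine of the impact angle. The sketch below applies that approximation to hypothetical bay dimensions, not to the measurements in the table above:

```python
import math

def impact_angle_degrees(length_m: float, width_m: float) -> float:
    """Estimate the impact angle from a bay's major (L) and minor (W) axes,
    assuming width/length ~ sin(angle) for an oblique conical cavity."""
    return math.degrees(math.asin(width_m / length_m))

for L, W in [(1000, 520), (800, 500), (1200, 780)]:   # hypothetical bay dimensions in meters
    print(f"L={L} m, W={W} m -> angle ~ {impact_angle_degrees(L, W):.1f} degrees")
# Width-to-length ratios near 0.52-0.66 correspond to angles of roughly 31-41 degrees.
```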
Inquiry-based learning, or IBL, is an approach to teaching that can position Meaningful Student Involvement in the center of any topic, in any grade level. The essence of IBL is that it lets students work with a learning challenge until they fully understand it with learning activities and projects driving the process.
In classrooms where IBL is used, learning centers on challenging students to solve problems through problem-posing, experimenting, exploration, creation and communication. Instead of giving students a linear, straight path towards finding answers, IBL allows educators to guide, mentor and facilitate learning through well-designed challenges meant to engage students in identifying the tools or topics they need to learn to solve them.
You know you’re in an IBL classroom when there are:
- Clear, deep and meaningful challenges facing students
- Significant, sustained opportunities for student-to-student collaboration
- Educators co-engaged in the learning process
- Projects underway that address substantive issues
That last bullet point is where Meaningful Student Involvement can be infused tightly with IBL. Focused on real issues in education, IBL can be a powerful driver for students to learn through involvement. Ensuring that those projects address real educational challenges moves IBL from being an average school method toward becoming a meaningful method for learning, teaching and leadership throughout the education system.
Using the text Deutsch Aktuell, students gain a deeper appreciation for the history and cultural heritage of today's German-speaking countries. This is brought to light through discussions of such topics as the environment, social problems, health, media and technology. New grammar points are integrated within a wide variety of text types including cartoons, poems, short stories, biographies and cultural articles. The highly illustrated text provides ample partner and group activities to strengthen speaking skills. The audio program allows students to sharpen their listening skills and with the video series, Junge Leute, students will follow the lifestyles and daily experiences of several young people from Germany and Austria. The accompanying workbook provides for a wide variety of written exercises including use of the Internet. A unit on the Amish (Pennsylvania Deutsch) in the spring term culminates with a field trip to Lancaster County, Pennsylvania. |
The St. Louis Box Turtle Project Team, led by Dr. Sharon Deem who is a wildlife veterinarian and epidemiologist at the Saint Louis Zoo, tracks box turtles and studies their health to better understand environmental factors that may be affecting the health of wildlife and humans alike. It is a fun and exciting way to help connect young people with nature while gaining an understanding of what it means to co-exist with animals and the importance of keeping them from going extinct. Take for instance bats. According to Popular Science magazine, it is estimated that windmills and wind turbines kill between 600,000 and 900,000 bats every year. Although bats may not appeal to humans for several reasons, it is important that we understand their role. Bats eat mosquitoes that could possibly give us West Nile or the Zika virus, and they also help with pest control. The same thing goes for understanding plants such as invasive species and the effects they have on our local environment. Bees and colony collapse are another example. So, you can see the importance of co-existing with other species.
It is an exciting time for young people who may be interested in environmental issues, as well as the more traditional fields of medicine, whether human or veterinary. Dr. Deem says, "Whether it's called One Health, One Medicine, Conservation Medicine, Planetary Health, etc., it is really just this concept that we need disciplines across the spectrum – including human doctors, veterinarians, environmental scientists, educators, economists, politicians, journalists – to help bring awareness to the challenges of species co-existing." The St. Louis Box Turtle Project helps bring awareness to local students by using box turtles as an outreach tool to introduce students to nature and to get them started thinking about our world and the importance of conservation.
Using VHF technology, the St. Louis Box Turtle Project tracks box turtles in local Forest Park and Tyson Research Center. Currently, Dr. Deem says they have about 18 box turtles with trackers attached to their shells.
The team attaches the trackers using a small amount of non-toxic plumber's epoxy, and each tracker emits its own unique signal. Using VHF radio receivers and a special antenna, students learn to track and locate these turtles while mapping the turtles' home ranges.
Thanks to the St. Louis Box Turtle Project, students learn not only how to use telemetry and GPS, but also how to weigh the turtles, take measurements, observe the general condition of the turtles, and sample for some of the diseases the turtles might get. Speaking of box turtle health issues, be sure to read about Georgette and how she has overcome serious health challenges.
Having lived in the Galapagos Islands for a time, Dr. Deem and her husband now travel there for 4-6 weeks each summer to help save the giant tortoises and iguanas. Dr. Deem was actually in the Galapagos Islands when she so graciously interviewed with me. Dr. Deem was really excited because this is the first year that one of the local St. Louis educators has been able to make the trip, and they hope to have some from Galapagos travel to St. Louis in the near future. While in the Galapagos Islands, they will track the giant tortoises using telemetry tags much like in St. Louis. In Galapagos, they have 86 tagged tortoises in total now and use them to research the impact humans have on giant tortoises in the Galapagos Islands.
To further explore conservation medicine, check with your local zoo. Zoos play major roles in conservation medicine, for example, scientists at zoos conduct clinical, nutritional, pathological and epidemiological studies of diseases of conservation concern; provide healthcare to the wildlife in their care, thus ensuring successful zoo breeding programs that contribute to the sustainability of biodiversity; monitor diseases in free-living wild animals where they interface with domestic animals and humans; and perform studies that contribute to the fields of comparative medicine and the discovery of all life forms, from invertebrates and vertebrate species to parasites and pathogens. To continue following Dr. Deem and the St. Louis Box Turtle Project, please follow them on Facebook. |
The lymphatic system is very complex. It is made up of lymphoid organs, lymph nodes, lymph ducts, lymph capillaries, and lymph vessels that make and transport lymph fluid from the tissues to the circulatory system. The network of lymphatic vessels carries lymph, a clear, watery fluid that contains protein molecules, salts, glucose, urea, and other substances, throughout the body.
The lymphatic system is not a closed system, and lymph fluid moves under low pressure, driven by peristalsis, valves, and the milking action of skeletal muscles. Lymph fluid only ever travels in one direction. Lymph fluid drains into lymph capillaries, which are tiny vessels. The fluid is then pushed along when a person breathes or the muscles contract. The lymph capillaries are very thin, and they have many tiny openings that allow gases, water, and nutrients to pass through to the surrounding cells, nourishing them and taking away waste products. When fluid leaks out in this way it is called interstitial fluid. Lymph vessels collect the interstitial fluid and then return it to the bloodstream by emptying it into large veins in the upper chest, near the neck.
As the lymph fluid moves through the body, it collects waste products and toxins and disposes of them through the bladder, bowel, lungs, and skin. The lymphatic system is vital for both detoxification and the immune system, and if it is not working properly, then a wide range of illnesses can develop.
The lymphatic system helps defend the body against germs like viruses, bacteria, and fungi that can cause illnesses. Those germs are filtered out in the lymph nodes (small masses of tissue located along the network of lymph vessels). These nodes contain lymphocytes, a type of white blood cell. Some of those lymphocytes make antibodies (special proteins that fight off germs and stop infections from spreading by trapping disease-causing germs and destroying them). Hence the lymphatic system is considered to be a part of the immune system. |
Building on their creation of the first-ever mechanical device that can measure the mass of individual molecules, one at a time, a team of Caltech scientists and their colleagues have created nanodevices that can also reveal their shape. Such information is crucial when trying to identify large protein molecules or complex assemblies of protein molecules.
Michael Roukes, the Robert M. Abbey Professor of Physics, Applied Physics, and Bioengineering at Caltech and the co-corresponding author, explains:
“You can imagine that with large protein complexes made from many different, smaller subunits there are many ways for them to be assembled. These can end up having quite similar masses while actually being different species with different biological functions. This is especially true with enzymes, proteins that mediate chemical reactions in the body, and membrane proteins that control a cell’s interactions with its environment.”
One foundation of the genomics revolution has been the ability to replicate DNA or RNA molecules en masse using the polymerase chain reaction to create the many millions of copies necessary for typical sequencing and analysis. However, the same mass-production technology does not work for copying proteins.
Right now, if you want to properly identify a particular protein, you need a lot of it — typically millions of copies of just the protein of interest, with very few other extraneous proteins as contaminants.
The average mass of this molecular population is then evaluated with a technique called mass spectrometry, in which the molecules are ionized – so that they attain an electrical charge – and then allowed to interact with an electromagnetic field. By analyzing this interaction, scientists can deduce the molecular mass-to-charge ratio.
But mass spectrometry often cannot discriminate subtle but crucial differences in molecules having similar mass-to-charge ratios.
“With mass spectrometry today,” explains Roukes, “large molecules and molecular complexes are first chopped up into many smaller pieces, that is, into smaller molecule fragments that existing instruments can handle. These different fragments are separately analyzed, and then bioinformatics – involving computer simulations – is used to piece the puzzle back together. But this reassembly process can be thwarted if pieces of different complexes are mixed up together.”
With their devices, Roukes and his colleagues can measure the mass of an individual intact molecule.
Each device — which is only a couple millionths of a meter in size or smaller — consists of a vibrating structure called a nanoelectromechanical system (NEMS) resonator.
When a particle or molecule lands on the nanodevice, the added mass changes the frequency at which the structure vibrates, much like putting drops of solder on a guitar string would change the frequency of its vibration and resultant tone. The induced shifts in frequency provide information about the mass of the particle.
But they also, as described in the new paper, can be used to determine the three-dimensional spatial distribution of the mass: i.e., the particle’s shape.
“A guitar string doesn’t just vibrate at one frequency,” Roukes says. “There are harmonics of its fundamental tone, or so-called vibrational modes. What distinguishes a violin string from a guitar string is really the different admixtures of these different harmonics of the fundamental tone. The same applies here. We have a whole bunch of different tones that can be excited simultaneously on each of our nanodevices, and we track many different tones in real time. It turns out that when the molecule lands in different orientations, those harmonics are shifted differently. We can then use the inertial imaging theory that we have developed to reconstruct an image in space of the shape of the molecule.”
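In the simplest single-mode picture, the frequency shift translates into a mass estimate through a textbook approximation: a small mass landing at the point of maximum displacement lowers the resonant frequency in proportion to its share of the resonator's effective mass. The sketch below uses that approximation with invented numbers; it is not the multimode inertial-imaging method described in the paper.

```python
def added_mass(f0_hz: float, f_shifted_hz: float, effective_mass_kg: float) -> float:
    """Estimate the adsorbed mass from a resonance-frequency shift,
    using the small-mass, single-mode approximation df/f ~ -dm / (2 * m_eff)."""
    fractional_shift = (f_shifted_hz - f0_hz) / f0_hz
    return -2.0 * effective_mass_kg * fractional_shift

# Hypothetical NEMS resonator: 10 MHz resonance, 1e-16 kg (100 femtogram) effective mass.
dm = added_mass(f0_hz=10_000_000.0, f_shifted_hz=9_999_995.0, effective_mass_kg=1e-16)
print(f"Added mass ~ {dm:.2e} kg")  # ~1e-22 kg, on the order of a very large (megadalton-scale) assembly
```

Tracking several modes at once, as the Caltech team does, adds the positional information needed to reconstruct the particle's mass distribution rather than just its total mass.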
Professor Mehmet Selim Hanay of Bilkent University in Ankara, Turkey, a former postdoctoral researcher in the Roukes lab and co-first author of the paper, says:
“The new technique uncovers a previously unrealized capability of mechanical sensors. Previously we’ve identified molecules, such as the antibody IgM, based solely on their molecular weights. Now, by enabling both the molecular weight and shape information to be deduced for the same molecule simultaneously, the new technique can greatly enhance the identification process, and this is of significance both for basic research and the pharmaceutical industry.”
Currently, molecular structures are deciphered using X-ray crystallography, an often laborious technique that involves isolating, purifying, and then crystallizing molecules, and then evaluating their shape based on the diffraction patterns produced when x-rays interact with the atoms that together form the crystals.
However, many complex biological molecules are difficult if not impossible to crystallize. And, even when they can be crystallized, the molecular structure obtained represents the molecule in the crystalline state, which can be very different from the structure of the molecule in its biologically active form.
“You can imagine situations where you don’t know exactly what you are looking for — where you are in discovery mode, and you are trying to figure out the body’s immune response to a particular pathogen, for example,” Roukes says. In these cases, the ability to carry out single-molecule detection and to get as many separate bits of information as possible about that individual molecule greatly improves the odds of making a unique identification.
"We say that cancer begins often with a single aberrant cell, and what that means is that even though it might be one of a multiplicity of similar cells, there is something unique about the molecular composition of that one cell. With this technique, we potentially have a new tool to figure out what is unique about it."
So far, the new technique has been validated using particles of known sizes and shapes, such as polymer nanodroplets.
Roukes and colleagues show that with today’s state-of-the-art nanodevices, the approach can provide molecular-scale resolution — that is, provide the ability to see the molecular subcomponents of individual, intact protein assemblies. The group’s current efforts are now focused on such explorations.
Top Illustration: Multimode nanoelectromechanical systems (NEMS) based mass sensor; the main figure schematically depicts a doubly-clamped beam vibrating in fundamental mode (1). Conceptual “snapshots” of the first six vibrational modes are shown below (1-6), colors indicate high (red) to low (blue) strain. The inset shows a colorized electron micrograph of a piezoelectric NEMS resonator fabricated in Caltech’s Kavli Nanoscience Institute. Credit: M. Matheny, L.G. Villanueva, P. Hung, J. Li and M. Roukes/Caltech |
Learn how evidence of past tropical cyclones collected in an underground cave in Central America is being used to predict where future North Atlantic hurricanes will strike, in this video from NOVA: Killer Hurricanes. Amy Frappier is studying stalagmites, mineral deposits that form inside caves. She and her research team collected them in Belize. Stalagmite growth layers contain chemical traces of past hurricanes. Because hurricane rain contains lower levels of oxygen-18 (the heavier isotope) relative to oxygen-16 (“light” oxygen), a low ratio of oxygen-18 to oxygen-16 present in a stalagmite is evidence of a past hurricane. With data from caves across Central America and the Caribbean, Frappier’s analysis suggests a trend that she continues to explore: over a 450-year period, hurricanes have been moving northward from the equator toward the continental United States. This resource is part of the NOVA Collection.
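The isotope signal is usually reported in the standard delta notation relative to a reference ratio; the sketch below shows the arithmetic with invented sample ratios, not values from Frappier's stalagmites:

```python
R_STANDARD = 0.0020052  # 18O/16O ratio of the VSMOW reference standard

def delta_18o_permil(ratio_sample: float) -> float:
    """Convert an 18O/16O ratio into delta-18O, in parts per thousand (per mil)."""
    return (ratio_sample / R_STANDARD - 1.0) * 1000.0

# Hypothetical stalagmite layers: ordinary rainfall versus a layer fed by hurricane rain.
print(delta_18o_permil(0.0019972))  # ~ -4 per mil: typical rain
print(delta_18o_permil(0.0019851))  # ~ -10 per mil: strongly depleted, a candidate hurricane layer
```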
The hypoblast is a thin monolayer of small cuboidal cells that make up the lower layer of the bilaminar embryonic disc. This layer can be distinguished from the overlying epiblast as early as day 4.5 in the mouse embryo (and around day 7 in the human). The cells of the hypoblast are not thought to contribute to the embryo proper, but they do contribute to several extraembryonic structures. In rodents, the hypoblast cells form the parietal endoderm, the parietal and visceral yolk sac, and extraembryonic membranes such as the extraembryonic endoderm, the yolk sac and the stalk that links it to the endodermal digestive tube.
In human embryos, hypoblast cells migrate and line the blastocoelic cavity of the blastocyst, forming the primary yolk sac and Heuser's membrane. Although both the murine parietal yolk sac and the human primary yolk sac are transient structures, they are not developmental equivalents. A second wave of hypoblast cells migrates to form the definitive yolk sac, which displaces the primary yolk sac and is equivalent to the murine visceral yolk sac. The primary yolk sac breaks up into small vesicles that can persist at the mural (abembryonic) pole.
The 3rd grade students were able to combine old and modern Aboriginal Art into one piece of art. Students added their handprint (which originates from the cave art of Australia) and then filled their space with dots and symbols (which come from modern Australian Art). Students also learned about warm and cool colors and had to choose one color group for their art. Aboriginal art originated in Australia over 30,000 years ago. Aborigines began painting on rocks and inside caves with earth-tone or neutral colors. Students learned that back then they couldn't go to the store to buy paint; they had to find things in nature to make their own paint. Today Aboriginal Art has three main characteristics: dots, symbols and/or animals. Aborigines filled their space and did not leave any negative (empty) space in their art. The Aborigines used symbols in their art to document their travels and experiences. Students learned that different countries/cultures have different meanings for different symbols. Students were also introduced to REPETITION and UNITY and how to create and find them in a piece of art.
The gut microbiota plays a crucial role in maintaining our body’s overall health. New research shows what happens if we do not feed our gut microbes the fibre they need to survive.
Our gut microbiota contains at least 1,000 different species of known bacteria, together comprising some 3 million genes.
We share one third of our gut bacteria with other people, while the remaining two thirds is unique to each of us.
The gut microbiota is important to our health because it contributes to a healthy immune system by acting as a barrier against harmful microorganisms.
It also helps with digesting foods that the stomach and small intestine have not been able to digest, as well as producing some vitamins.
We have always been told by healthcare professionals and nutritionists that fibre is important to a healthy diet.
But new research examines exactly what happens if our intestinal microbes do not receive the appropriate amount of fibre.
Studying the behaviour of gut bacteria in mice
The study was carried out by an international team of researchers led by Eric Martens, Ph.D., associate professor of microbiology at the University of Michigan Medical School, and Mahesh Desai, Ph.D., from the Luxembourg Institute of Health.
Researchers bred mice especially for the study. Mice were born and raised without any gut bacteria of their own. They then received a transplant of 14 bacteria that normally live in the human gut.
Knowing the genetic signature of each bacterium, scientists were able to track the evolution of each one of them over time. They used a germ-free lab facility and genetic techniques that allowed them to see what bacteria were present and active under different dietary conditions.
Researchers infected the mice with a bacterial strain whose effect in mice is equivalent to that of E. coli in humans. They then examined the impact of diets with varying amounts of fibre, as well as a diet with no fibre at all.
Researchers tried a diet that was 15 per cent fibre, made from minimally-processed grains and plants. They also tried a diet that was rich in prebiotic fibre – a purified form of soluble fibre that is similar to what some processed foods and dietary supplements contain.
Gut microbes really need their fibre
As revealed by the study – published in the journal Cell – the induced infection did not fully spread in mice that received the diet containing 15 per cent fibre. Their mucus layer remained thick, protecting them against the infection.
But when scientists replaced the diet with one that lacked fibre altogether, gut microbes started eating the mucus. Even a few days of fibre deprivation led the bacteria to start invading the colon wall.
Gut microbes rely on fibre for their food, and when they do not get it, they start eating away at your gut. This makes the gut more prone to infections.
The diet rich in supplement-like prebiotic fibre had the same results as the diet lacking fibre completely. The mucus layer started eroding as a result of the action of microbes.
“The lesson we’re learning from studying the interaction of fibre, gut microbes and the intestinal barrier system is that if you don’t feed them, they can eat you.”
Researchers were also able to see what fibre-digesting enzymes the bacteria were making. They found 1,600 different enzymes that digest carbohydrates – a complexity similar to the one found in the human gut.
A lack of fibre also triggered a higher production of such mucus-degrading enzymes.
Scientists were able to look at images of the “goblet” cells on the colon wall that produce mucus. They could clearly see how the mucus layer got progressively thinner as the mice received less fibre.
In a normal gut, mucus is being produced and degraded at a steady pace. But on a fibre-deprived diet, mucus was degraded at a much higher pace than it was produced.
Examining the gut tissue of infected mice, researchers could see inflammation across a wide area of thinned, and even patchy, tissue.
Infected mice that received a diet rich in fibre also displayed inflammation but across a much smaller area.
Future research to study different diets
In the future, Martens and Desai hope to study the effect of different prebiotic combinations over a longer period of time, as well as the impact of an intermittent natural fibre diet.
Researchers would also like to find the biomarkers that signal the state of the mucus layer in human guts, such as the number of mucus-degrading bacteria.
Martens and Desai also wish to study the impact of a low-fibre diet on chronic illnesses such as inflammatory bowel disease.
“While this work was in mice, the take-home message for humans amplifies everything that doctors and nutritionists have been telling us for decades: Eat a lot of fibre from diverse natural sources.
Your diet directly influences your microbiota, and from there it may influence the status of your gut's mucus layer and tendency toward disease. But it's an open question whether we can cure our cultural lack of fibre with something more purified and easier to ingest than a lot of broccoli.”
Source: Medical News Today |
In the modern world, cultural diversity has become commonplace. But children who come from a different cultural and linguistic background can feel as though they are moving to a different world when they go from home to school, because their teachers often have a different culture and language from the child's family. For these children, the expectations and patterns of communication in the classroom may also differ from those at home.
Teachers cannot hope to understand the students who sit before them unless they connect with the families and communities and understand the culture from which the children have come. The cultural difference between teacher and children can make the classroom an uncomfortable experience for both. Teachers should be able to understand culturally diverse students and to interact appropriately with members of different cultures in a variety of situations. This approach requires the teacher to understand the viewpoint of children who come from different cultural backgrounds, to accommodate them, and to make them feel comfortable within the cultural setting of the school.
Conclusion – most culturally and linguistically diverse families have types of education, beliefs and customs that differ from those reflected in the expectations of mainstream schools. It is the task of the teacher to understand and appreciate cultural difference and the heterogeneity of knowledge that each culture possesses. A teacher who appreciates children's cultural backgrounds is more likely to provide an enriching and responsive learning environment that celebrates and capitalizes on children's cultural differences. Teachers should also conduct workshops, interact with parents, and build an understanding with culturally diverse families.
John Hunter & new hospitals
During the 18th century medicine made slow progress. Doctors still did not know what caused disease and some continued to believe in the pseudo-science of four humors (blood, yellow bile, black bile & phlegm), although belief in this theory declined during the 18th century.
Other doctors thought disease was caused by miasmas (odourless gases or particles in the atmosphere). "Miasma theory" prevailed well into the 19th century until John Snow realised, during the infamous outbreak of cholera in Victorian London in 1854, that it was not invisible chemicals in the air that had caused the outbreak but contaminated water at a single pump in Broad Street, Soho. His "germ theory" was regarded with great scepticism because it was another 30 years before the cholera bacterium was identified by Robert Koch.
Surgery and urology
"Travelling lithotomists" still practised both in England and abroad, removing bladder stones. Amongst them, the best known was William Cheselden whose 50% mortality rate for perineal lithotomy was considered outstandingly good. In 1744, Cheselden became the first Warden of the Company of Surgeons (later to be the Royal College of Surgeons) and, the following year, presided over its separation from the Company of Barber-Surgeons, thereby establishing surgery as a speciality in its own right.
Surgical techniques, however, did see some progress during this period. The famous 18th century surgeon and anatomist, John Hunter (1728-1793, pictured) has sometimes been called the "Founder of Scientific Surgery". He described, practised and taught many new procedures, including operations such as tracheostomy which are, nowadays, regarded as "routine".
The Museum at the Royal College of Surgeons of England is named after him and contains many of his surgical and anatomical preparations, originally purchased by the College in 1799. Although part of his specimen collection was destroyed when the College was hit by a German bomb on 10 May 1941, the majority was undamaged and remains on display in the Hunterian Museum to this day.
The founding of major hospitals
This was also an era when some of the country's major hospitals appeared. During the century, many teaching hospitals were founded:
- Guy's in 1724 (pictured right, with a bequest from Thomas Guy, a wealthy merchant),
- St George's London and Bristol (1733),
- York (1740),
- Exeter (1741),
- The Middlesex and Liverpool (1745), and
- in 1751, the first American hospital (in Philadelphia).
Medical dispensaries in the 18th century
John Coakley Lettsom, a Quaker, philanthropist and physician, defined the concept of a dispensary in 1770 and started to set them up around the capital. He justified his original idea by stating:
“... and notwithstanding the many excellent charities, already subsisting for relief of the sick, in and about this great metropolis, yet, when it is considered how many poor, from the nature of their circumstances and disorders, are still necessarily confined to their wretched dwellings, and perish through want of proper assistance, the utility of this institution becomes obvious ... "
By the end of the century, dispensaries had been set up in many towns. In London alone, there were 13, sharing the following characteristics:
- voluntary subscribers, known as governors, funded the operations and designed the rules,
- governors filled out a letter of recommendation to enable poor patients to receive care,
- physicians treated these patients either in the dispensary building or in their homes, and
- physicians, apothecaries (and sometimes surgeons) offered outpatient medical relief (mainly self-help advice or herbal medication).
Many of the medicines available were relatively ineffective, although laudanum (made by using alcohol to extract morphine from raw opium, pictured) was widely used not just to relieve pain but to treat other conditions for which there was no available cure (e.g. tuberculosis, syphilis).
← Back to Time Corridor |
Who were the Mongols?
The Mongols were originally a nomadic tribe from the Mongolian plateau. Although the name is older, the Mongols were first unified as a group by Kabul Khan, Genghis Khan’s great-grandfather. Like other tribes from the same area their lifestyle centered on herding horses, cattle, sheep, camels, and goats. These animals were a source of food, materials and the main method of transport for the Mongols. The Mongols also traded with the wealthy dynasties in China to acquire other goods. Mongol families lived in large round tents called Gers. These were built with collapsible wooden frames that were covered in felt and animal skins. The family was the most important unit in Mongol society although each family would be part of a larger clan with links to a major tribe.
When Genghis Khan was born, the Mongols were just one of the many independent tribal groups in the region. Some of the other powerful groups were the Tartars, the Naiman and the Khereits. Raiding and warfare between the groups was common. However, between 1186 and 1206 Genghis Khan conquered the other tribes of the region and created a unified Mongol nation.
This changed the Mongols for ever. Men from every tribal group rose to prominence in the Mongol Empire. The nomadic lifestyle remained, but society was very different. The conquests that created the empire brought the Mongols into contact with a host of new cultures and over time they settled in the areas they conquered and adopted many of the ideas they found there.
What did they do?
Between 1206 and 1279 the Mongols conquered a vast empire that stretched from the Pacific coast of China to Eastern Europe and from Siberia to Vietnam. These conquests started under the leadership of Genghis Khan and were extended by his successors. The Mongol conquests destroyed the reigning dynasties in China and the powerful Muslim states that had dominated Central Asia and the Middle East for hundreds of years.
On his death Genghis split the empire between his sons, with Ogedei becoming Great Khan, the supreme ruler. This form of government continued, with the different branches of the family competing for the throne. In the late 13th century the empire began to splinter into separate states. Although the empire was never unified again, the successor kingdoms left behind were extremely powerful. Kublai Khan founded the Yuan dynasty, the first foreign dynasty to rule all of China, which lasted until 1368. The Golden Horde in Russia lasted until the end of the 14th century, and the Mongol influence over Persia was continued by Timur well into the 15th century.
The Mongol conquests brought huge changes in their wake. The initial invasions were notable for their violence and destruction. The conquests in areas like Persia and Russia were so violent that they caused major demographic changes and the destruction of ancient cities. In contrast, the stable Mongol administration encouraged a flourishing of global trade and communication never seen before. The 'Pax Mongolica', as it is often called, opened up links between China, Europe and the Middle East, making goods available through a network of trading posts. One unfortunate effect of this new freedom was that better communication encouraged the spread of the Black Death (plague carried by rats) across Europe and Asia. This disease would devastate populations from China to England during the 14th century.
It is fair to say that the modern world would not be the same without the Mongol conquests. |
Native to eastern North America, Flowering Dogwood is recognized by its gray to dark reddish-brown bark; the surface is broken into small, scale-like squares. In autumn and winter, it has terminal buttonlike buds and clusters of red fruits. Blooming in Tuscaloosa in mid-April, flowers have 4 large white petals, and make up an inflorescence, a dense head of about 20 flowers, that is 2″ to 4″ in diameter. Flowering dogwoods grow up to 15 m tall, and have several large, wide-spreading branches that form a low, dense head. Its deciduous leaves are light green and somewhat hairy, and become scarlet in the fall. Growing best in moist acidic soils near streams and on slopes, it is usually found in the shade of other hardwoods, but also found on open slopes and ridges.
The hard, dense wood is extremely shock resistant, but fairly limited in economic value, with uses such as mallet heads, jeweler's blocks, tool handles, and golf club heads. In colonial times, a brew made from the bark was used to treat fever. The bright red fruit, ripening in late summer, is an important source of food to dozens of bird species, which then distribute the seeds. The flowering dogwood is exceptional as an ornamental because of its hardiness, moderate size, prominent flower clusters in spring, and red leaves and fruits in autumn.
A few trees here at the Arboretum have been infected by the disease Dogwood Anthracnose, and are at varying stages of decline. An anthracnose fungus, Discula sp., has been identified as the causal agent, but the origin of the disease is unknown. It began in the northeastern U.S. about 25 years ago and has migrated south, becoming increasingly harmful to the vitality of Alabama's flowering dogwoods during the past several years.
Prehistoric humans who lived in the Middle East may have been among the world's first farmers.
Researchers from the Weizmann Institute and the Israel Antiquities Authority reached the surprising conclusion based on the discovery of what they now believe are the world's oldest fava seeds at Neolithic archaeological sites in the Galilee. The seeds – found husked in storage pits and dated to between 10,125 and 10,200 years old – suggest that the inhabitants' diet consisted mainly of fava beans, as well as lentils and various types of peas and chickpeas.
The storage of the seeds and their uniform size, according to the researchers, also suggest they were cultivated and harvested at the same time – offering some of the first signs of long-term planning for agriculture. The seeds, the researchers concluded, were intended not only for food but also to ensure future crops in the coming years.
“The identification of the places where plant species that are today an integral part of our diet were first domesticated is of great significance to research,” the researchers said in a statement.
“Despite the importance of cereals in nutrition that continues to this day, it seems that in the region we examined west of the Jordan River, it was the legumes, full of flavor and protein, which were actually the first species to be domesticated,” they said. “A phenomenon known as the agriculture revolution took place throughout the region at this time: different species of animals and plants were domesticated in the Levant and it is now clear that the area that is today the Galilee was the main producer of legumes in prehistoric times.” |
Science Module. 8th Grade. TEKS 8.8 A and B. The student knows that matter is composed of atoms. The student is expected to: describe the structure and parts of an atom; and identify the properties of an atom, including mass and electrical charge. Objectives.
The student knows that matter is composed of atoms. The student is expected to:
Because atoms are so small and strange to us, we represent them by models. The following activity on modeling brings to light some of the limitations of models.
Activity in Chapter One on making models
Cut a piece of paper in half as many times as you can.
Paper Cutting Link
The “Black” box activity is important on several levels:
Who knew? is an activity designed to help students understand how science is built upon the investigations of previous scientists and determined in large part by the technology available.
The animation explains how the theory of the atom moved from the Bohr model to modern theories. It also shows the position and structure of the atom and its components.
Atomic Theory Animation
This animation shows the Scanning Tunneling Microscope (STM) mapping atomic clouds in the silicon material. The STM uses a needle with a single atom on the tip to “see” the silicon atoms by their charged electron clouds.
The atom is composed of mostly empty space.
Atomic Field Video
Electrons are very small compared to a proton.
If the proton were the mass of an automobile, the electron would have the mass of a bag of potato chips. |
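To put a rough number on that analogy: the proton-to-electron mass ratio is about 1,836. The short Python sketch below is only a classroom-style illustration; the 1,500 kg car mass is an assumed figure, not something taken from the module.

```python
# Rough check of the car vs. bag-of-chips analogy for proton and electron masses.
PROTON_MASS_KG = 1.6726e-27
ELECTRON_MASS_KG = 9.1094e-31

ratio = PROTON_MASS_KG / ELECTRON_MASS_KG      # about 1836
car_mass_kg = 1500.0                           # assumed mass of a typical automobile
scaled_electron_kg = car_mass_kg / ratio       # roughly 0.8 kg

print(f"proton/electron mass ratio: {ratio:.0f}")
print(f"scaled-down 'electron': {scaled_electron_kg:.2f} kg (about a large bag of snacks)")
```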
Zinc remains a mystery metal to scientists who study its role in health problems. They are just beginning to fathom how the body keeps levels of zinc under the precise control that spells the difference between health and disease.
“The question of how much zinc is available in a cell has emerged at the forefront of chemical biology”, said Amy R. Barrios, Ph.D., of the University of Southern California, Los Angeles.
“We believe this new technique can help us understand how zinc is involved in plaque formation in Alzheimer’s disease, how prolonged seizures or stroke kill brain cells, and how the cell normally allocates zinc to different proteins,” said Richard B. Thompson, Ph.D. of the department of biochemistry and molecular biology, University of Maryland School of Medicine, Baltimore.
Thompson explained that almost all zinc inside cells is incorporated into proteins, where it plays many vital roles. “We know that if there is much zinc in the cell that is not attached to protein or otherwise encapsulated — so-called ‘free zinc’ — the cell is stressed or may be undergoing programmed cell death. This has been observed in animal models of epilepsy and stroke.”
The technique uses a special protein molecule that has been re-engineered to report when zinc becomes stuck to it as a change in luminescence that can be seen in the microscope. This protein, originally found in blood cells, is very selective, recognizing tiny levels of free zinc even in the presence of the million-fold higher levels of other metals present in cells, such as calcium or magnesium.
Because proper zinc levels are so important in health and disease, scientists have been seeking ways of measuring zinc inside and outside of cells for more than a decade.
COMPAMED.de; Source: American Chemical Society |
See literature on the Ancient Near East.
Contents of this section
History starts, by definition, with the invention of writing (around 3200 BCE). However, the first written documents are scarce, difficult to read and mostly economic in nature, and thus reveal little about the political situation. Most of the oldest records are still undeciphered. The earliest historic period is often called protohistory, the period of scarce written documents. In Europe, for example, the period of the Merovingians and Carolingians in the early Middle Ages is called protohistory.
Some scholars emphasize the literary component of those societies and speak of the Protoliterate period, which is divided into several phases, called A, B, C and D.
The small number of available documents is supplemented by texts written many centuries later but referring to these early stages. In combination with archeological records, these should be taken seriously: a legendary king becomes real when, for example, votive inscriptions carrying his name are found.
The most important aspects of society in protohistory are the beginning of monumental buildings (temples, palaces, fortifications), the accumulation of capital, and the economic use of metal and writing, leading to the first city states. Just as the Neolithic is spoken of in terms of an agricultural revolution, this age witnessed the urban revolution.
Sumerian protohistory is divided into the Jemdet Nasr period (the foundation of the first city states), for which no contemporary records are available, and the Old Sumerian period. The Old Sumerian period lasts until the seizure of power by the Semitic king Sargon of Akkad (around 2350 BCE). The period is divided into dynasties determined by the hegemony of a certain city.
The Sumerians were very conscious of their civilization and held a high opinion of it. The urban revolution starting around 3100 BCE impressed the Sumerians themselves. It was a heroic age, and the circumstances of those times are a source for many myths and legends. An epic tradition started with heroic poems going back to real social phenomena. Events that were (at least in part) historical in origin were chanted and told from generation to generation, with additions and deletions made with literary freedom. Many centuries later, stories from the oral tradition were written down, usually schematically and as loose fragments. Still later (a millennium on, in the Old Babylonian period, 19th century BCE) the fragments were arranged and composed into complete epics. They became standardized into canonical literature as they were written and copied by generations of scribes (often in schools). There is a general analogy with other 'heroic ages' in later times (Homer, the Indian Mahabharata, the Germanic Heroic Age). The similarity probably reflects a common political and social structure.
The Sumerian epics bear witness to a political structure in which a leader, as king of a city or small state (a city with subordinate cities), maintains hegemony by personal courage. The king has a retinue of armed and loyal supporters. Kings of different city states are in competition, but basically have good relations. They form an aristocracy, separated from ordinary people. The divine world is structured in a similar way.
Sumerian King Lists
Some early texts are the Sumerian King Lists, known in ancient times by the first line or the first few opening words: (Sumerian) nam.lugal, meaning 'kingship', with lugal 'king'; the sign nam introduces an abstract noun in Sumerian (and later appears in Akkadian compound logograms). These lists were composed in the 22nd century BCE, many centuries after the times they refer to. The lists were copied by generations of scribes and standardized in this process, until in the Old Babylonian period a canonical version existed, extended with kings up to that time. The Lists were first studied by Jacobsen and published in 1939. They are a basic tool for the earliest history of Mesopotamia. The purpose of these lists was probably to show that Sumer and Akkad ''always'' served under one kingship, and consequently they may have distorted the truth to serve that purpose. The lists sometimes contradict other epic stories: certain kings should be contemporaneous, for example, yet the King Lists do not show them as such.
In the lists Kingship is seen as a divine institution: it descended from heaven. The opening line of the text is:
'When kingship was lowered from heaven, the kingship was in Eridu.'
Because of this, kingship is seen as an institution that is shared by different cities. Each city takes its turn during a certain period. The Sumerian sign for 'government' or 'year(s) of government' is the same sign as for 'turn', bala, taken as a loan word by the Akkadians as palû. It is written with the sign BAL, which takes a different form in later Neo-Assyrian orthography. In Akkadian it is used as a logogram. The sign developed from a pictogram of the shuttle of a loom (the rotating part used to weave tissue; together with the determinative for 'wood' it still means 'shuttle of a loom') and was used for words meaning 'to rotate' or 'turn', and thus also 'government'. The hegemony of a city in the Sumerian King Lists does not always mean that the cited kings really had supremacy over kings in neighboring city states.
From the lists an important caesura becomes apparent: the great Flood or Deluge. Names and events are either antediluvian or postdiluvian. In later epics the Flood signals the end of mythological times, when things were formed, and inaugurates the beginning of historical times. About eight (in other versions ten) antediluvian kings are mentioned, together with the lengths of their reigns. Extremely long reigns were attributed to the kings before the Flood: added together they would have ruled for 241,200 years. The antediluvian period is also seen as the era of divine revelations, such as the invention of agriculture, the invention of writing and so on. Some of the antediluvian cities mentioned are Eridu, Sippar and Šuruppak.
Eridu, the first city mentioned, is the city of the water god Enki/Ea (one of the top three deities in the Sumerian pantheon). It is situated in the extreme south of Mesopotamia near the sea or a lagoon. It is said that the 'principle of agriculture' was revealed by a god to the first king of Eridu: Emmeduranki.
Sippar was to become the city of the sun god, Sumerian utu, later called Šamaš in Akkadian. It is said that the secrets of divination were shown to a king of Sippar, also by divine revelation. Gods make their will, intentions and answers known to people by supernatural means: numerous omens and signs that need explanation. The exegesis of omens was seen as a discipline ('science') for inquiring of the gods. It was an official institution, used by the king to collect information; no decision of any importance was taken without proper consultation. The sun god utu is particularly connected with the discipline of divination: he is in a position to oversee everything, and so also the future.
Šuruppak is a city on the banks of the Euphrates, near modern Fara. The last king of Šuruppak was the hero of the Flood story.
The Flood story
The motif of the Flood, a ''world-wide'' catastrophe, circulates throughout antiquity. All kinds of versions of the catastrophe were passed down from generation to generation and from country to country. There are Sumerian, Akkadian, Ugaritic and Hittite versions, and probably independent ones in much of the world's folklore elsewhere. When the first texts about the Flood (Akkadian abūbum, a devastating storm surge) were discovered in 1872 by George Smith, it made headline news in all the papers because of the similarities with the story in the bible (dated almost two millennia later). Fantasy was further stirred by the English archeologist Sir L. Woolley, who in excavations (1929) found a deposit of silt a few meters thick, under which artifacts dated to the 5th millennium were found. These deposits, however, are always localized to a small area, as Woolley himself later discovered. The time, place and extent of this flood are inconsistent with the literary tradition; a local breakthrough of the river is a sufficient explanation.
All alluvial plains and river deltas in the world have suffered from major floods. A series of floods in the 15th century CE, called the St. Elisabeth Floods, shaped part of the Netherlands in the Rhine delta. Millions of people even now are in constant danger from the threat of flooding, so it is not surprising that the story still appeals to the imagination. There is no doubt that floods had a great impact on the Mesopotamian civilization, and that some of them occurred around 2900 BCE.
The various versions and fragments of the epic point to different traditions in Flood stories. The Sumerian Flood hero (the early Noah) is called Ubar-Tutu ('Friend of the god Tutu'), in other versions Ziusudra ('Life of long days'). In the Akkadian version he is called Utnapištim ('he has found (everlasting) life'), elsewhere also Atrahasīs ('exceedingly wise'). The epic named after the latter is very famous; in its Old Babylonian form it is dated to 1635 BCE, and it also exists in later traditions.
The urban revolution, the building of the first cities, took place in 3100-2900 BCE, in the transition from prehistory to history. The change in human settlement pattern from isolated settlements to larger village communities, described before, continued. The dry climate at the end of the 4th millennium now allowed habitation of the great plains in the extreme south of Mesopotamia, the area later called Sumer. Inadequate rainfall stimulated the continuing development of irrigation works. The production of bronze, an alloy of copper and other metals, mainly tin, allowed the manufacture of new weapons, against which protection was sought through the construction of fortifications around villages and walls around cities.
The bloom and further development of the city states is called the Early Dynastic period (2900-2400 BCE) or Old Sumerian period. It is divided into three periods in which different cities dominate. The Old Sumerian period is characterized by strong rivalry between city states and an increasing division between state and religion. Monumental buildings that should be called palaces, as opposed to temples, are attested for the first time. Despite the rivalry there are strong similarities in architecture, building materials, ornamental motifs and so on; the people shared a common religion and spoke the same language. So, in general, one can speak of a Sumerian art and culture.
Old Sumerian is the language used in the Old Sumerian age. A large fraction of the texts in Old Sumerian, and most of our knowledge of the language, derives from texts found before 1900 CE in Nippur, a holy city, the religious capital of Sumer and seat of Enlil, the supreme god of the Sumerian pantheon. These tablets (more than 30,000) can now be found in Istanbul, Jena and Philadelphia. They include the oldest versions of literary works, such as the Gilgamesh Epic and the Creation Story, as well as administrative, legal, medical and business records, and school texts.
3.1 The city Kish, Early Dynastic-I, the Golden Age (2900-2700 BCE)
Kish, a city in the north of Babylonia near modern Tell el-Uhaimir, is the first postdiluvial city mentioned in the Sumerian King Lists. After the great Flood, 'kingship was lowered again from heaven'. The first kings had Semitic names. It was an age in which 'the four quarters of the world' lived in harmony.
From excavations it appears that Kish was indeed an important city. It is the center of the first Sumerian dynasty, called Early Dynastic-I. The findings point to a specialization of labor and a high quality of craftsmanship, which must have been the result of a long tradition. Beautiful golden daggers and other artifacts have been found in tombs. In Kish archeologists found the first monumental building that must have been a palace rather than a temple: the king, and not the en (the high priest), was in power.
The title King of Kish. The importance attached to Kish is also shown in the title 'King of Kish', in Akkadian šar kiššati. This title was used by kings even many centuries later to show prestige, as if it meant 'king of the whole world'. The title was used even when another king was actually the king of Kish, and long after Kish had ceased to be the seat of kingship. It is possible that the title was more than just prestige. Kish is situated in the north of the plains of southern Mesopotamia, at a critical spot on the Euphrates river. A breakthrough of the river to the lowlands toward the south west (toward modern An Najaf, where the Euphrates flows nowadays) would mean that a whole system of irrigation channels would be left without water supply. The control of the Euphrates in the neighborhood of Kish was thus of vital importance to the rulers in the south of Mesopotamia. The title 'King of Kish' could have indicated the ruler who exercised this control.
3.2 The city Uruk, Early Dynastic-II, the Heroic Age (2700-2500 BCE)
Uruk (Sumerian unug, in the bible Erech) is situated near modern Warka (which still shows the same root consonants *'rk, but with a different vocalization). This period under the hegemony of Uruk is also called the Heroic Age. Its dynasties are known from epics written some time later. Uruk is the city of the goddess Inanna and the supreme god An. Kings of Uruk are called en, 'lord'. A reconstruction from later mythology shows this period to have been a primitive democracy: major decisions were taken by the king after consulting a council of elders.
Enmerkar, king of Uruk and Kullub, has the epithet 'he who built Uruk' and is known from two epics. There is no known inscription or plaque that bears his name, so there is no archeological proof of his existence. The texts refer to commercial and military contacts with a city called Aratta (not yet localized, probably in Iran), where the Sumerian goddess Inanna (later the Akkadian Ištar) and Dumuzi were also worshiped. These epics are seen as proof of trade contacts, e.g. the trade in precious stones like lapis lazuli. According to legend, Enmerkar was the first to write on clay tablets.
Lugalbanda (lugal 'king', banda 'small', so 'junior king') was the third king of the first dynasty of Uruk, and also features in heroic-epic Sumerian poems, the so-called Lugalbanda epic (two parts, together about 900 lines).
Gilgameš is the grandson of Enmerkar. His fame spread over a large region through the Gilgamesh epic. An Assyrian version was found in the library of Aššurbanipal (around 650 BCE) and probably dates back to 1700 BCE. Smaller Sumerian fragments of only a few hundred lines are dated to around 2000 BCE. The spread in time and location indicates that the epic was known for more than 15 centuries across a large region reaching up to Anatolia. It is nowadays (as one of the few Mesopotamian epics) still performed on stage. The Gilgamesh epic is further explained elsewhere on the Web.
Gilgamesh was responsible for the construction of the city walls of Uruk. Indeed, it appears from archeological records that these walls were expanded around 2700 BCE, with their typical plano-convex bricks. Archeologists take the use of this material as marking the start of Early Dynastic-II. There is no archeological evidence for the existence of Gilgamesh. Another royal name in this dynasty, Mesannepada, has been found written on a golden plate (dated to 2600 BCE) with a votive inscription.
3.3 The cities Ur and Lagash, Early Dynastic-III (2500-2350 BCE)
The Early Dynastic-III period is outside protohistory and is usually considered part of history. Many sources and archives are known. One of them, contemporaneous with the archeological stratum Uruk-IVa with its archaic pictographic texts, was found in Šuruppak (modern Fara). Another site is known only by its modern name, the village Abu Salabikh, with Old Sumerian texts. The majority of these texts are economic/administrative in nature.
Ur. Officially, according to the Sumerian King Lists, Ur held the hegemony in this era, Early Dynastic-III. In practice this 'hegemony' was probably fairly marginal. Ur is a port with a connection to the Persian Gulf.
Lagaš and the religious metropolis Girsu are both cities in the extreme south of Mesopotamia. Many Old Sumerian texts have been found here, mostly on hard materials like alabaster, copper and gold, e.g. the royal inscriptions of Lagash and texts about the eternal border conflicts between Lagash and the nearby city of Umma. The conflicts often concerned water rights and were sometimes settled by mediation of the king of Kish.
The patron deity of Lagash is Ningirsu, later associated with Ninurta, a warrior god central to the elimination of demons. Some of the kings of Lagash are:
Eannatum (E-ana-tum), the first king who called himself 'King of Kish', 'he who overrules the countries'. He boasts that his territory extends from Kish in the north to Mari in the west, Uruk in the south and Elam in the east, although it is not clear what 'ruling' over these cities actually meant. He had a long reign, but after it his territory was reduced again to its original size.
Famous is the victory depicted on the so-called Vulture Stela of Eannatum (see the figure of the Vulture Stela at UCLA Art History). It is the oldest direct witness to the political and military power of a king; about one third of it is preserved. The text announces new borders and the victory of Eannatum of Lagash over the ruler of Umma. It depicts military highlights, the imprisonment of enemies, the burial of the dead and the vultures that escape with the bones of the dead, shown as a series of unrelated scenes. It is either an artist's impression of a historical battle or simply expresses the intention of such a battle.
Urukagina, also called Uru-inimgina, was the last, and a pious, king of the first dynasty of Lagash. The name is written with the sign ka 'mouth', which also stands for inim 'word'; proper names often do not give enough context to know the correct reading of a sign. He introduced many reforms (the 'social reforms of Uru-inimgina') and enacted edicts addressing the problem of enslavement caused by running up debts: extortionate interest rates on capital (often 33.3 percent) had to be paid by enslaving one's children until the debts were paid off. Uru-inimgina remitted these debts by decree.
Fat or Skinny Questions?
In this fat or skinny questions worksheet, students identify whether questions are simple or intricate for the book The Rainbow Fish. Students analyze ten sentences.
“Overlapping triangle codes” as drawn in English crop pictures: the extra-terrestrials are teaching us a novel kind of mathematics, which they have used to encode numbers such as 13, 26, 52 or 104 from Mayan calendars
Many novel kinds of mathematics have been drawn in English crop pictures, for example “pi to ten digits” at Barbury Castle on June 1, 2008 (see www.dailymail.co.uk or www.telegraph.co.uk). That puzzle was not solved for over a week, while images of the crop picture spread widely over the Internet to millions of people around the world. Finally Mike Reed, a retired astrophysicist living in North Carolina, realized what it meant, then informed Linda Howe and myself.
Here we will show four other mathematical puzzles which were drawn in crops during the summers of 2000, 2009 or 2012. No one has realized what they mean until now. They are all based on complex triangular geometries.
The first “triangle puzzle” to appear in crops was near Silbury Hill in June of 2000
The first “triangle puzzle” to appear in crops was drawn near Silbury Hill in June of 2000. It came down in two separate phases on June 2 then June 4:
This puzzle relates the number of vertical levels in Pascal’s Triangle, to the number of small overlapping triangles which may be counted inside (see jwilson.coe.uga.edu). The crop picture shows an example for n = 3 and 13 triangles. Please study this clever animation to see the results for n = 4 and 27 triangles, which are directly comparable (see www.transum.org or http://krexy.com/how-many-triangles):
I am calling this “Pascal’s Triangle”, because we will return to that particular geometry in other slides below.
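As a check on those counts, here is a short Python sketch; it is my own illustration, not something encoded in the crop pictures. It counts every upward- and downward-pointing triangle in a triangular grid subdivided into n rows and compares the tally with the closed-form expression floor(n(n+2)(2n+1)/8), reproducing the 13 triangles quoted for n = 3 and the 27 triangles for n = 4.

```python
# Count all upward- and downward-pointing triangles in an n-row triangular grid,
# then compare with the closed-form floor(n(n+2)(2n+1)/8).

def count_triangles(n):
    total = 0
    for k in range(1, n + 1):                       # k = side length of a sub-triangle
        total += (n - k + 1) * (n - k + 2) // 2     # upward-pointing triangles of size k
        if n >= 2 * k:                              # downward-pointing ones only fit if n >= 2k
            total += (n - 2 * k + 1) * (n - 2 * k + 2) // 2
    return total

def closed_form(n):
    return n * (n + 2) * (2 * n + 1) // 8

for n in range(1, 7):
    print(n, count_triangles(n), closed_form(n))    # n = 3 gives 13, n = 4 gives 27
```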
How many small triangles can we count within a large triangle that has been subdivided into n parts?
The next two “triangle puzzles” to be drawn in crops during 2009 or 2012 were more difficult to understand. In order to explain what they mean, first we need to show an example of overlapping triangular geometries. How many small triangles can you count, within a large triangle that has been subdivided into n parts?
The answer is usually n cubed, for example 8 triangles for n = 2, or 27 triangles for n = 3, or 64 triangles for n = 4, or 125 triangles for n = 5 (see http://scienceblogs.com). The case of n = 4 is shown in detail on the right, where all 64 of its small, overlapping triangles are counted carefully.
A “triangular pyramid” which appeared in crops near Chesterton Windmill on July 9, 2009 was subdivided as n = 2 or n = 3 at different vertices
Now everything is going to get a bit harder! A “triangular pyramid” which appeared in crops near Chesterton Windmill on July 9, 2009 was subdivided on each face as n = 2 or n = 3 at different vertices:
Ideally for this example, we might expect to count 15 overlapping triangles along each face, which is a value intermediate between 8 triangles for n = 2 at both vertices, or 27 triangles for n = 3 at both vertices. Yet the crop artist has some surprises in store! This particular pyramidal shape was subdivided into an asymmetric set of panels, which resemble the blades of a wind turbine:
Now one can count just 13 triangles on each face, instead of the expected 15 for ideal symmetry:
There are 12 small triangles which subdivide each face. Then we must add one large triangle for the whole face, to give a total of 12 +1 =13.
This crop picture was drawn in a field near Chesterton Windmill, along with a small “key” motif which pointed towards the windmill. It showed six tiny lines on the top, which resemble a six-lined vertex from the triangular crop picture:
The “key” to understanding this puzzle is to realize that the triangular crop picture would possess four faces in three dimensions, just like the four blades of that windmill:
The total number of small, overlapping triangles therefore equals 4 x 13 = 52, even though a lower face of the pyramid with 13 triangles remains “hidden”. Could there be some intent here to refer to “52 months” which separate July 2009 and December 2013, the latter date of which has been coded in other crop pictures? (see /fringe2013k)
Those two coded numbers of “13” or “52” have distinct connotations in terms of ancient Mayan culture. There are 13 baktuns in one part of the Mayan Long Count calendar, while there are 52 panels on each face of the Pyramid of Quetzalcoatl in Chichen Itza:
Those 52 panels on each face of the Mayan pyramid represent “52 years” in their Sun-Venus calendar. Once again, might the Chesterton Windmill crop picture be referring to the “52 months” between July 2009 to December 2013? Will we perhaps see a large triangular UFO then?
A “large triangular UFO” was drawn in crops at West Kennett Long Barrow on August 13, 2013. It looks much like the crop picture drawn at Chesterton Windmill on July 9, 2009. Later in that same month of July 2009, another “triangular pyramid” drawn in crops also resembled a triangular or pyramidal UFO.
Two complex geometry diagrams from 2012 suggest “extra dimensions”
We will show other interesting “triangle puzzles” below, but first we need to prepare the way with further explanatory information. Two different geometry diagrams from the summer of 2012 suggested “extra dimensions”. The first at Liddington Castle on July 1, 2012 (then July 21) showed the image of a cube passing through our 3-D space from other dimensions (see fringe2013b):
In their first diagram on July 1, we can see one corner of that cube “rising above” the plane of the crop field. Then in their second diagram on July 21, we can see a corner of that cube “falling below” the plane of the crop field.
Later on August 26, 2012, they drew a very intricate 3-D cube which shows a similar change of perspective. We can apparently visualize the front corner of that cube by either “looking down” or “looking up” (see also hackpenhill3). How did the crop artists create such an amazing illusion? As shown in the next slide, they achieved this by changing the nature of visual perspective in its central parts:
One of those complex geometry diagrams contains another “triangle puzzle”
Now quite remarkably, one of those geometry diagrams from the summer of 2012 also contains a “triangle puzzle”, like the example from Chesterton Windmill which we just studied above. We will be able to study this new triangle more easily if we “stretch” it away from the curved shape drawn in crops, back into a simple linear shape:
This new triangle is a clever variation away from the idealized case of n = 4 as discussed above, which contains 64 small, overlapping triangles. In this new case, line 1 remains the same, but lines 2 and 3 which subdivide the triangle are connected differently to their opposite sides. When we count carefully, we find that the modified n = 4 triangle contains only 26 overlapping triangles, instead of 64 as for the ideal case:
Again we can count 25 small, overlapping triangles within each face. Then we need to add one more for the whole face to reach a total of 25 + 1 = 26.
One small line near the centre of the triangle has been subtly omitted, or else we would find a total of 30 triangles rather than 26.
Finally, in order to switch our linearized triangle back into its curved shape as was drawn in crops, we can apply a mathematical method known as “Steiner’s deltoid” (see /blog.zacharyabel.com):
The highly curved triangle which was drawn in crops appears “hyperbolic”, in order to symbolize a hyperbolic spatial geometry for “extra dimensions”.
Pascal’s Triangle with a code for “26 triangles” appeared in crops at Waden Hill on the same day of July 1, 2012
During the day of July 1, 2012, we were inspecting a new crop picture near Stanton St. Bernard with Paul Jacobs and Stuart Dike. Then a call came in, to inform us that a new crop picture had just been found on top of Waden Hill, not far from Avebury or Silbury Hill. When we arrived there, the new crop picture was still pristine, because only three people were in it on the ground.
One was Ross Holcombe, who was busy sketching a complex code which he had found. Six of its 36 triangular spaces showed standing tufts, while the other 30 spaces showed centres which were swirled just slightly off the ground. In the slide below, we have added six “red asterisks” to denote where six standing tufts of crop were located:
Once we saw an aerial photograph, we realized that the Waden Hill crop picture shows “Pascal’s Triangle”, using the artistic style for a “flower of life”. The total number of triangular spaces on each level of the triangle equalled 1, 3, 5, 7, 9 or 11. Yet when studied in the form of Pascal’s Triangle, only 1, 2, 3, 4, 5 or 6 spaces on each level were available for coding. Those six standing tufts match individual numbers of 1, 4, 5, 10, 5 or 1 from Pascal’s Triangle, and add to a total of 26.
Now the other crop picture which appeared at Liddington Castle on July 1, 2012 also showed “26 triangles”, within each face of a triangular pyramid. Could the coded triangle at Waden Hill have been meant to help us read a different coded triangle at Liddington? Most of the individual numbers at Waden Hill match the number of small triangles of each kind, which were drawn on the same day at Liddington: 10 triangles based on the lowest pair of subdividing lines, 6 (or 5 +1) triangles based on the middle pair of subdividing lines, 4 triangles based on the highest pair of subdividing lines, 5 symmetric triangles in the centre, and 1 large triangle for the entire face.
Around the outside of Waden Hill, we can see 2, 1 or 1 standing tufts which add to 4. Might those four outer tufts represent the four faces of a triangular pyramid which was drawn at Liddington?
Comparing the triangle puzzle from Chesterton Windmill 2009 with a triangle puzzle from Liddington Castle 2012
When we compare both crop pictures as seen from above, their similarities become striking:
Both crop pictures show large triangular pyramids, which have been coded on each face with either 13 or 26 small, overlapping triangles. When viewed in three dimensions, each triangular pyramid would possess four faces.
We may summarize our current results as follows:
The triangle puzzle from Chesterton Windmill on July 9, 2009 shows 4 x 13 = 52 small, overlapping triangles. The triangle puzzle from Liddington Castle on July 1, 2012 shows 4 x 26 = 104 small, overlapping triangles. Both of those numbers “52” or “104” match the number of panels found on one or two faces of the Pyramid of Quetzalcoatl in Chichen Itza. Both crop pictures also resemble a “large triangular UFO”, which was drawn in crops at West Kennett Long Barrow on August 13, 2013.
Who knows what will happen next? All of these spectacular field images must be leading somewhere. The past is prologue.
Red Collie (Dr. Horace R. Drew, Caltech 1976-81, MRC LMB Cambridge 1982-86, CSIRO Australia 1987-2010)
P.S. We would like to thank Marina Sassi for the “yellow windmill” image of a crop picture at Chesterton Windmill on July 9, 2009, and also Sarah Susanka for her careful analysis of the “cube” from Hackpen Hill on August 26, 2012 (see hackpenhill3 or Sarah_Susanka):
“As an architect, I draw window frames in perspective all of the time. So it is very apparent to me that the way in which these lines of see-through cubes have been drawn is actually ‘upside-down and inside-out’, relative to what our eyes would expect to see at the upper, outward-pointing corner of the cube. This tells me that we are being shown an upside-down world beyond the boundaries of that cube, or on its other side.”
Indeed, the crop artists seem to come from a “mirror universe” which is upside-down relative to ours (see fringe2013k). If an architect and best-selling author such as Sarah can get it right, why not the scientists or academics in our schools? As Joni Mitchell sings in “Shine”, they have tunnel vision (see www.youtube.com). |
A robot deceives an enemy soldier by creating a false trail and hiding so that it will not be caught. While this sounds like a scene from one of the Terminator movies, it's actually the scenario of an experiment conducted by researchers at the Georgia Institute of Technology as part of what is believed to be the first detailed examination of robot deception.
"We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered," said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing.
The results of robot experiments and theoretical and cognitive deception modeling were published online on September 3 in the International Journal of Social Robotics. Because the researchers explored the phenomena of robot deception from a general perspective, the study's results apply to robot-robot and human-robot interactions. This research was funded by the Office of Naval Research.
In the future, robots capable of deception may be valuable for several different areas, including military and search and rescue operations. A search and rescue robot may need to deceive in order to calm or receive cooperation from a panicking victim. Robots on the battlefield with the power of deception will be able to successfully hide and mislead the enemy to keep themselves and valuable information safe.
"Most social robots will probably rarely use deception, but it's still an important tool in the robot's interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception," said the study's co-author, Alan Wagner, a research engineer at the Georgia Tech Research Institute.
For this study, the researchers focused on the actions, beliefs and communications of a robot attempting to hide from another robot to develop programs that successfully produced deceptive behavior. Their first step was to teach the deceiving robot how to recognize a situation that warranted the use of deception. Wagner and Arkin used interdependence theory and game theory to develop algorithms that tested the value of deception in a specific situation. A situation had to satisfy two key conditions to warrant deception -- there must be conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception.
Once a situation was deemed to warrant deception, the robot carried out a deceptive act by providing a false communication to benefit itself. The technique developed by the Georgia Tech researchers based a robot's deceptive action selection on its understanding of the individual robot it was attempting to deceive.
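The article does not reproduce the researchers' actual algorithms, but the two stated conditions (a conflict with the seeker, and a benefit to the deceiver) can be sketched in a few lines. Everything in the Python below, from the function names to the payoff values and the simple probability model of the seeker, is an illustrative assumption rather than the Georgia Tech implementation.

```python
# Illustrative sketch of the two-step logic described above:
# 1) decide whether the situation warrants deception at all, and
# 2) if it does, pick the false communication expected to work best
#    against this particular seeker.

def deception_warranted(true_outcome, deceived_outcome, seeker_gains_if_found):
    """Deceive only if there is conflict AND the deceiver benefits."""
    conflict = seeker_gains_if_found              # the seeker profits at the hider's expense
    benefit = deceived_outcome > true_outcome     # lying improves the hider's expected outcome
    return conflict and benefit

def choose_false_trail(hiding_spot, paths, seeker_beliefs):
    """Pick the decoy path the seeker is most likely to believe, excluding the real one."""
    decoys = [p for p in paths if p != hiding_spot]
    return max(decoys, key=lambda p: seeker_beliefs.get(p, 0.0))

# Toy example: three marked pathways; the hider actually goes left.
paths = ["left", "center", "right"]
seeker_beliefs = {"left": 0.2, "center": 0.3, "right": 0.5}   # assumed model of the seeker

if deception_warranted(true_outcome=0.25, deceived_outcome=0.75, seeker_gains_if_found=True):
    decoy = choose_false_trail("left", paths, seeker_beliefs)
    print(f"Knock over the markers on the {decoy} path, then hide on the left.")
```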
To test their algorithms, the researchers ran 20 hide-and-seek experiments with two autonomous robots. Colored markers were lined up along three potential pathways to locations where the robot could hide. The hider robot randomly selected a hiding location from the three location choices and moved toward that location, knocking down colored markers along the way. Once it reached a point past the markers, the robot changed course and hid in one of the other two locations. The presence or absence of standing markers indicated the hider's location to the seeker robot.
"The hider's set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position in an attempt to say, for example, that it was going to the right and then actually go to the left," explained Wagner.
The hider robots were able to deceive the seeker robots in 75 percent of the trials, with the failed experiments resulting from the hiding robot's inability to knock over the correct markers to produce the desired deceptive communication.
"The experimental results weren't perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment," said Wagner. "The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot."
While there may be advantages to creating robots with the capacity for deception, there are also ethical implications that need to be considered to ensure that these creations are consistent with the overall expectations and well-being of society, according to the researchers.
"We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects," explained Arkin. "We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems."
This work was funded by Grant No. N00014-08-1-0696 from the Office of Naval Research (ONR). The content is solely the responsibility of the principal investigator and does not necessarily represent the official view of ONR.
Wind power has been around for decades, but the U.S. has only recently seen a significant percent of power come from wind. What's changed?
The cost of energy for wind turbines is now reduced quite dramatically [compared to] what it was five or 10 years ago. It turns out that bigger wind turbines are more cost effective for a number of reasons. And the companies in the wind industry have grown in size so they can do things in greater volume, which brings down the cost of energy. But the main thing is that the amount of energy produced by wind turbines has gone up.
A wind turbine cannot operate at full power all the time. That would require 25-mph wind all the time, and there's no place on the earth that has that sort of wind. So we measure this by capacity factor: the energy the turbine produces in a year divided by the energy it could produce if it ran at full power throughout the year. In the past wind turbines in the Americas would be doing well if they had capacity factors in the low-30-percent [range]. That is what would have been a good product in the 1980s. In 1985 we built one wind farm that had about 42 percent capacity factor and that was really, really impressive then.
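(As a concrete illustration of the capacity-factor arithmetic just described, here is a minimal Python sketch; the rated power and annual output figures are made up for the example rather than taken from any particular turbine.)

```python
# Capacity factor = energy actually produced in a year divided by the energy
# the turbine would produce if it ran at full rated power the whole year.
# All numbers below are illustrative only.

rated_power_mw = 3.0            # hypothetical rated power of the turbine
hours_per_year = 8760
annual_output_mwh = 13_000      # hypothetical measured annual energy output

max_possible_mwh = rated_power_mw * hours_per_year   # 26,280 MWh if run flat out all year
capacity_factor = annual_output_mwh / max_possible_mwh
print(f"Capacity factor: {capacity_factor:.0%}")     # about 49% for these numbers
```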
The most common type of turbine we installed in America in 2012 has an average capacity factor of about 50 percent, and the highest-producing turbine is at almost 60 percent. So that means windmills now are producing power at roughly a 50 percent better rate than windmills did in the 1980s.
And the key was bigger wind turbines?
Yes, and there have also been specific advances, mainly in aerodynamics; the blades are a lot more efficient than they were 20 years ago.
It's basically the shape (not the material) that gives the aerodynamics performance. Twenty years ago blades were made with airfoils designed for airplanes. But during the last 10 years, we have created airfoil shapes specifically for wind turbines. They are thinner and they are curved differently.
You recently designed a wind turbine that did away with gearboxes, which are a key component in current wind turbines. Why?
In the mid-1990s, we had many, many problems with gearboxes in windmills. It's not like a gearbox in a car, either stick shift or an automatic. A gearbox in a wind turbine has a fixed ratio, typically around 1:100, and it changes the low speed of the rotor shaft to the high speed of the generator shaft. It's a complicated piece of equipment. I got pretty frustrated about it back in the late '90s. The radical solution if you have a problem with a component is to just not have that component. The way to do that is to take the assertion in the engineering process, "You need to do it this way," and ask in turn, "Why?"
Having asked this question and decided to act, we did it the proper way. We tested equipment, put out prototypes, and so on. All of that led to the development of the direct-drive wind turbine. We have made a quantum leap in terms of simplicity.
Modern wind turbines have rotors that are 100 meters [328 feet] or more in diameter. Our offshore bestseller has a 120-meter [394-foot] rotor. Such a big machine has a gearbox weighing 35 metric tons, containing 13 gear wheels and 22 bearings. The direct-drive machine, which we use for even bigger turbines, has a 154-meter (505-foot) rotor with blades of 250 feet. This gearless turbine has zero gear wheels and one bearing. By this simplification, you get a lot of benefit. You simply reduce the risk of anything going wrong.
What's the next big tech problem for the wind industry?
One of the things I'm doing quite a lot of work on is electricity storage, because wind turbines produce energy, to some extent, as the wind is blowing. If you have a low-wind period, you could miss energy when you need it. So you need backup power, such as gas turbines or nuclear.
One way to solve that problem is to have wind turbines spread over a large geographical area: chances are, when the wind isn't blowing in one area, it's blowing in another. But in a longer perspective, we need to be able to store electricity for when there's no wind.
That cannot be done in the way most often thought about, which is just to store it in batteries; that would be way too costly, and there would be a shortage of raw materials. If you could store part of the energy until somebody needed it, you would be free of the question, "What if there's no wind today?"
What other challenges face the wind industry in the U.S.?
One is that there's a certain disconnect between where you have good wind resources and where the load centers are. The good wind resources are in Texas and in the Midwest. The load centers are on the East Coast and the West Coast, and in industrial areas such as Chicago. There's a certain shortage of energy transmission capacity in the U.S. If you could store energy from the high-wind areas and then transmit the electricity during low-wind periods, that would to some extent solve the problem. Hopefully, the U.S. system would gradually be built out so that the load centers can be connected to where the wind resources are.
Another challenge is the cost. For a long while it looked like wind would soon be cheaper than the other energy resources, but now gas, due to unconventional, so-called shale gas exploration, has become a lot cheaper in the last few years. Compared with other energy sources, wind has the big benefit [in] that it is super clean, although you have to build and transport wind turbines, and that takes some energy.
We are not there today, but I am confident that in the future we will be cheaper than any other energy system.
Math Stars: A Problem-Solving Newsletter Grade 1
Keep the skills of your young mathematicians up-to-date with this series of newsletter worksheets. Offering a wide array of basic arithmetic, geometry, and problem solving exercises, this resource is a great way to develop the critical...
1st - 3rd Math CCSS: Adaptable
Getting Into Shapes: Identifying and Describing Two-Dimensional Shapes
Young scholars examine their classroom to find examples of various types of shapes. After identifying and describing the various shapes, they draw as many as they can on a piece of paper. They organize them into an image based on their...
K - 2nd Math
Impress Yourself: Textured Pendants
Students experiment with textures and polymer clay while creating individual pendants. This lesson is suitable for all ages and includes ideas for adding "personal touches" to each pendant, baking the pendants, and applying texture and...
K - 12th Visual & Performing Arts
Let's Count to 20 Lesson 4 - Building Sets of 15 and 16
Pupils use a variety of techniques to demonstrate sets of 15 and 16 items. They build trains with cubes, group bean sticks, and show 15 and 16 using ten frames. All of the resources you need to implement the instructional activity are...
K - 2nd Math CCSS: Adaptable
Many physical workout programs place a great deal of emphasis on “strengthening your core” or strengthening the muscles located at the center of your body. This isn’t a whole lot different from the Common Core method of education. Utah has had its own core standards, but now the state has joined the national standards. The National Common Core is a hot topic, with 47 states having adopted the Common Core Standards (CCS) as of July 2013.
We have been particularly interested in CCS because we work to help our students realize grade-level accomplishments. It took only a minute to get online and look up the CCS for Utah; however, the documents themselves would take about half an hour to print. Stacked on top of each other, the CCS would be about as thick as a phone book! But the basics are this: each grade, starting with kindergarten and going through high school, has benchmarks for math and language arts that should be met by the time the student completes that grade. If they are not met, the student may not pass and move up to the next grade level. Math and language arts are emphasized because these are skills that are used in life outside of formal schooling, and they provide a good foundation for success in life after high school.
These new standards come with high expectations for all, so it’s important that parents are aware of where their child should be. We have outlined just a few of the Kindergarten and First Grade expectations.
Kindergarten Math Benchmarks
- Count to 100 by ones and by tens.
- Count forward beginning from a given number within the known sequence.
- Write numbers from 0 to 20. Represent a number of objects with a written numeral 1-20.
- Describe measurable attributes of objects, such as length or weight. Describe several measurable attributes of a single object.
- Directly compare two objects with a measurable attribute in common, to see which object has “more of” or “less of” the attribute, like taller/shorter.
- Classify objects into given categories; count the number of objects in each category and sort the categories by count.
- Add and subtract small numbers.
- Recognize and name 10 shapes.
Kindergarten Language Arts Benchmarks
- Write the letters and know all of the letter sounds.
- Read and spell 100 sight words.
- Recognize and produce rhyming words.
- Count, pronounce, blend, and segment syllables in spoken words.
- Isolate and pronounce the initial, medial vowel, and final sounds (phonemes) in three-phoneme words.
- Add or substitute individual sounds (phonemes) in simple, one-syllable words to make new words.
First Grade Math Benchmarks
- Add within 100, including adding a two-digit number and a one-digit number.
- Understand that in adding two-digit numbers, one adds tens and tens, ones and ones; and sometimes it is necessary to compose a ten.
- Given a two-digit number, mentally find 10 more or 10 less than the number, without having to count; explain the reasoning used.
- Subtract multiples of 10 in the range 10-90 from the multiples of 10 in the range 10-90.
- Understand word problems that involve adding and subtracting.
First Grade Language Arts Benchmarks
- Recognize the distinguishing features of a sentence (e.g., first word, capitalization, ending punctuation).
- Use phonic skills to read and write unfamiliar words.
- Identify the main idea and recall details in a story.
- Write about a topic with a good opening and closing thought.
- Read with sufficient accuracy and fluency to support comprehension.
- Read grade-level text with purpose and understanding.
If you suspect your child may not be up to par with these standards, it may be time to put a “workout plan” in place in order to get your child into great Core shape!
Latin is the language spoken by the ancient Romans who lived in Italy and created an Empire all over Europe about 2,000 years ago.
Latin is interesting and enjoyable for its own sake, but we study it because it:
- Boosts our English vocabulary
- Helps us to learn modern languages like French, Italian and Spanish which are based on Latin
- Helps us think logically
- Opens up the history of Europe
All Year 8, 9 and 10 pupils have the opportunity to study Latin. You can study it throughout the school at every level. There is even a trip to Italy where you can see the Roman ruins!
What Is the Greenhouse Effect?
Sunlight passes through the atmosphere and warms the Earth’s surface. Some of this solar radiation is reflected by the Earth and the atmosphere. Greenhouse gases in the atmosphere, such as carbon dioxide (CO2), absorb heat and further warm the surface of the Earth. This is called the greenhouse effect.
As more greenhouse gases are emitted into the atmosphere, heat that would normally be radiated into space is trapped within the Earth’s atmosphere, causing the Earth’s temperature to increase.
EPA. 2008q. Climate Change Web site. Climate Change—Science. Accessed August 31, 2009.
(1723?–70). The first American to die at the Boston Massacre, Crispus Attucks was probably an escaped slave. He became a powerful symbol as a martyr in the American colonists’ struggle against the British.
Attucks’s life prior to the day of his death is still shrouded in mystery. Nothing is known for certain, but historians generally agree that Attucks was of mixed ancestry, of both African and Natick Indian descent. It is also believed that Attucks was the runaway slave described in a notice that ran in the Boston Gazette in 1750. In the 20-year interval between his escape from slavery and his death at the hands of British soldiers, Attucks probably spent a good deal of time aboard whaling ships.
Attucks reappeared on March 5, 1770, the day he would die as a martyr for the American cause. Two British regiments had been stationed in Boston, Massachusetts, after colonists had protested new British taxes, and resentment had been building. Toward evening of that day, a crowd of colonists gathered and taunted a small group of British soldiers, some pelting the soldiers with snowballs. Tension mounted rapidly. A group of men from the docks approached, carrying sticks, with Attucks in the lead. The outnumbered soldiers opened fire. The first to fall was Attucks, his chest pierced by two bullets. Two other Americans were killed instantly and two more were mortally wounded.
The bodies of Attucks and 17-year-old ship’s mate James Caldwell, neither of whom lived in Boston, were carried to Faneuil Hall, where they lay in state until March 8. On the day of the funeral of Attucks and three others, shops closed, and thousands of residents followed the procession to the Granary burial ground, where the men were buried in a common grave. The event was a galvanizing one for the colonists chafing under British rule. Pamphleteers and propagandists quickly dubbed it a “massacre.”
During the trial of the British soldiers, John Adams, who went on to become the second U.S. president, served as the defense lawyer. Adams painted Attucks as a troublemaker who was to blame for the soldiers’ attack. Testimony varied, with some witnesses saying that Attucks had grabbed at the bayonet of one of the soldiers and was shot in the ensuing struggle; however, others said Attucks was leaning on a stick when shot. The British captain and six of the group were acquitted, including the soldier who had been charged with killing Attucks; two more were found guilty and branded on the thumb.
Attucks was the only victim of the Boston Massacre whose name was widely remembered. For years the people of Boston marked each March 5 as Crispus Attucks Day to commemorate the turning point in the struggle against the British. In 1888 the Crispus Attucks monument was unveiled in Boston.
A Colorado-based research team recently completed a major wind study using Second Wind’s Triton® Sonic Wind Profiler to learn more about one of wind power’s biggest unknowns, the wake effect, and its impact on turbine productivity. Triton is one of several remote sensing technologies that TWICS (the Turbine Wake and Inflow Characterization Study) has used to create a detailed 3D model of the turbulence caused when wind passes over rotating turbine blades. Turbulence can damage turbines downstream and undermine productivity. The project’s goal is to understand how to enhance wind farms’ productivity. Turbine inflow and wake observations will be integrated into a wind energy forecasting model. Understanding how gusts and rapid changes in wind direction affect turbine operations will enable turbine manufacturers to improve design standards and increase efficiency, which will ultimately reduce the cost of energy.
The study is aimed at capturing turbulence and other wake effects in a broad wedge of air up to 7km (4.3 miles) long and 1km (3,280 feet) high in front of and behind a multi-MW wind turbine. Triton, along with tower-mounted sensors and other remote sensing systems, profiles the winds in front of and behind a 130-meter high wind turbine located at the National Renewable Energy Laboratory’s (NREL’s) National Wind Technology center near Boulder, Colorado. NREL, the University of Colorado at Boulder, the Cooperative Institute for Research in Environmental Sciences (CIRES), and the National Oceanic and Atmospheric Administration (NOAA) have teamed up to conduct the study.
“The NREL site is prone to complicated wind patterns, so we needed several remote sensing instruments. The site is flat, but it’s located just five kilometers from El Dorado Canyon on Colorado’s Front Range and the canyon funnels air into the site,” says Julie Lundquist, assistant professor of atmospheric and oceanic sciences at the University of Colorado at Boulder. “The Triton is a good instrument for this study. It will provide us with anchor points in the study by profiling selected slices of a larger wedge of the atmosphere over a long period of time.”
Triton is an advanced remote sensing system that uses sodar (sound detection and ranging) technology to measure wind in the areas that most affect a wind turbine’s performance. By measuring wind speeds at the turbine rotor’s hub height and beyond, Triton reduces uncertainty in annual energy production (AEP) forecasts. Easy to install and capable of autonomous operation, Tritons are being used throughout the wind industry, alone or in conjunction with met towers, to streamline the wind farm development process and to improve wind farm operations.
“Turbine wake effects are a huge unknown in the wind industry. To fully realize the potential of wind energy, with large-scale wind farms, we need to know how turbulence from one turbine affects those around it,” says Second Wind CEO Larry Letteney. “We’re confident that Triton will make a significant contribution to understanding wind farm conditions, which will lead to a more productive and efficient wind power industry.”
Second Wind provides the wind energy industry with the intelligence required to plan, finance and operate highly efficient, profitable wind generation facilities. Learn more at www.secondwind.com.
It is widely believed that the origins of Friesian cattle can be traced back to the 18th century. At this time, small black and white cattle were brought into Friesland and the north of Holland and crossed with the native cattle. The Netherlands herdbook was established in 1873 and the Friesland herdbook in 1879.
Prior to the establishment of these herdbooks, red-pied animals were kept separately to black-pied cattle. The latter were preferred in America and today only a small number of red-pied cattle exist in Holland.
In the 1970s, Holstein cattle were imported from America to improve the milk production of the Friesian. The effects of this cross-breeding are keenly felt today. Many Friesian cattle are 25 – 75 percent Holstein.
The Friesian is able to graze on low lying and upland grassland. Selective breeding over the last century has resulted in an animal well able to sustain itself over many lactations. Milk protein levels are around 3.4%.
Unsurprisingly, the Friesian is similar in size to the Holstein. Friesian cows typically weigh around 580 kilos. They have excellent conception rates and the ability to calve frequently in their lifetime. Indeed, Friesians produce more calves per lifetime than any other cattle of their kind.
Like other states of the American South, Tennessee has a history which includes both slavery and racial segregation. In some ways, however, the history of the relationship between the races in the Volunteer State more closely resembles that of a border state than those of the Deep South. Although chattel slavery and the social attitude that undergirded it existed in Tennessee, slavery never achieved as much of a stranglehold upon the state as it did in most places of the South. Indeed, some parts of Tennessee reflected a hostility toward the institution, and portions of the state objected strongly to participation in a Civil War in which slavery played a prominent role. Nevertheless, the bloody clash between North and South did not alter most white Tennesseans’ belief in the racial inferiority of African Americans, no matter where they lived. Even as a border state, however, Tennessee witnessed its share of occasional violence and brutality, even lynching and race riots, and the state could take little pride in being the birthplace of the Ku Klux Klan, founded in Pulaski shortly after the war.
Following the Civil War, Tennessee moved quickly to reenter the Union and, consequently, race did not muddy the political waters as dramatically as it did in the other ten Southern states that had left the Union in 1860-61. Even though legal segregation of the races did not appear immediately after the war, established social customs and tightly fixed racial etiquette dictated private and public contact between blacks and whites. White Tennesseans expected blacks to “know their place” and to stay within prescribed political, social, and economic boundaries. The violation of custom by blacks carried the terrible risk of embarrassment at least and even possible bodily harm. Black Tennesseans briefly glimpsed the sight of freedom in the early years following the war, but the appearance of more conservative forces on the political horizon soon dashed their bright hopes for the future.
Near the end of the nineteenth century, white racial attitudes in Tennessee and the country hardened. Various legislatures framed laws not only to regulate race relations but to control many aspects of the African American community. “Jim Crow” had arrived. In 1896 the U.S. Supreme Court gave legal sanction to segregation in the historic Plessy v. Ferguson decision that established the principle of “separate but equal.” For nearly sixty years Plessy displayed remarkably enduring strength, producing in Tennessee and in America an unquestionably separate but decidedly unequal society. Blacks, however, did not quietly succumb to racial oppression, and for the next half-century they carried out both overt and covert attempts to defeat Jim Crow. Activists such as newspaper publisher Ida Wells-Barnett of Memphis and other local leaders fought for black civil rights. Black Tennesseans who were able to vote used the ballot as a weapon in their own behalf, often punishing those who ignored the interests of the black community. Unfortunately, restrictions on the franchise stifled progress within the black community and delayed democratic equality in the state.
The World War II period helped to fuel a powerful movement in Tennessee and in other parts of the United States that moved the country away from Plessy and the discrimination that usually accompanied it. Participation in that conflict acquainted Americans with the visible results of racial and religious bigotry and its consequences for the country’s national fiber. The performance of black soldiers, including thousands of Tennesseans, during the war and the patriotic support of African Americans on the home front argued powerfully against an old system that kept racism alive and black persons second-class citizens.
Black Tennesseans contributed to the success of the war effort, and they also took part in the intellectual assault that led to the eventual demise of segregation. In Tennessee a number of public school teachers and college professors became involved with the Association for the Study of Negro Life and History in an attempt to counter the effects of the shoddy scholarship that ignored black contributions to American history or that deliberately misrepresented the race. No scholar in Tennessee played a more crucial role in helping to bring about reform in race relations than sociologist Charles S. Johnson of Fisk University. Johnson came to Fisk in 1927, established the Race Relations Institute in 1934, and became the university’s president in 1937. He moved quietly but forcefully in his approach to the race problem. His research efforts and conferences, designed to bring blacks and whites together, had a meaningful impact on racial attitudes in Tennessee and in other parts of America where committed reformers worked for good race relations.
The new spirit generated by World War II and the efforts of scholars such as Johnson brought about some social changes in Tennessee before the mid-1950s. Although African Americans in the Volunteer State chipped away doggedly at the edifice of Jim Crow, white attitudes changed slowly. Blacks waged a vigorous and persistent attack upon racial oppression through a number of self-help and protest organizations including the National Association for the Advancement of Colored People (NAACP). The NAACP and a number of local groups worked to equalize teacher salaries, to abolish segregated public accommodations, and to invalidate the hated Tennessee poll tax, which restricted the black franchise in several parts of the state. Where blacks did have the ballot, they often used it wisely to exact gains from urban politicians in Nashville, Knoxville, and Chattanooga. In Memphis blacks represented a powerful part of the political machine that controlled the city for many years. Ironically, their activity provided whites with an argument against the poll tax as well, as opponents of the political leadership in Memphis during the era of Edward H. “Boss” Crump alleged that his machine paid poll taxes for blacks and dictated their vote. Some Tennessee blacks found themselves at odds with members of their own race because of the close alliance between some Memphis blacks and Crump, who often flexed his muscle in statewide politics.
Segregation was a strong and resilient social and political force. In the 1950s Jim Crow remained intact, despite the new spirit that prevailed, and its death would come painfully, slowly. More than ever, black Tennesseans now questioned their mandated role in society. They sometimes walked off jobs or went on strike when treated unfairly in employment, challenged private citizens and municipalities in court for alleged wrongs, and pressured the state to abolish racially exclusive laws. The Supreme Court’s decision in Brown v. Board of Education of Topeka (1954) further emboldened black Tennesseans, for it not only mandated desegregation of the public schools, but it also destroyed the props that gave support to racial segregation and discrimination in general. No other development in the social sphere threatened to create as much possible disruption as the case that turned Plessy upside down and abolished the legal principle of separate but equal.
Brown generated hostility among many white Tennesseans and other southerners who visualized chaos within their land. To them the case meant not only desegregated schools but also social integration throughout society; and increased social contact between the races on equal terms, they believed, would lead inevitably to interracial courtship and even marriage between the races. A haunting fear gripped those who believed in the old order. To insist that young white children become pawns in a broad social experiment proved unacceptable to most white Tennesseans, who had never known anything other than a society guided by the Plessy decision. Federal court intervention, they said, had gone too far in the lives of the people, had exceeded constitutional limits by infringing upon the rights of the states, and ultimately threatened to sacrifice the well-being of their children.
For Tennesseans, then, Brown represented more than a mere legal abstraction. Although the case disturbed most white citizens, they moved cautiously in their response to the decree. The city of Knoxville provided a good index to the attitudes of whites in the state at that time. A city of less than 10 percent black population in 1954, some observers regarded Knoxville as the least racially sensitive of the state’s largest cities. An opinion poll in 1958, however, revealed that 90 percent of white citizens strongly disapproved of desegregation. It showed further that not a single white person of the 167 polled would agree to enrolling even one white child in a black school, and nearly 72 percent would oppose sending a black child to a white school. Ninety-four percent of those polled opposed sexually and racially mixed classes. Such figures were a powerful commentary. Ironically, however, those whites who had defended Plessy as the law of the land now found themselves painted into a legal corner in a country that supposedly honored the rule of law.
For blacks, desegregation of the schools pointed toward progress in education, but Brown also brought some unexpected problems. Many black Tennesseans, including a number who had fought relentlessly to overthrow segregation and racism, had not pondered how thoroughly their lives had become culturally and economically interwoven with the African American school. As an institution, only the family and the church were more central to black community life. Because of its social attractions and its many extracurricular activities, the black school served as a powerful agent for racial cohesiveness. Although many African Americans in Tennessee applauded the death of Jim Crow in education, they lamented the passing of special school activities that had once fostered a vital sense of community among them–athletic contests, choral renditions, dramatic productions, and other functions. Desegregation raised in an unexpected and striking manner not only educational questions, but cultural ones as well.
Despite white dissatisfaction with Brown, desegregation proceeded with less recalcitrance and violence than in most other states of the South. But violence was not totally foreign to the state, since public schools in Nashville and Clinton did experience damage to two of their schools from bomb blasts. In the case of Clinton, tensions remained high until Governor Frank G. Clement called out the National Guard to restore calm. Although most Tennessee localities, and the state government itself, connived at ways to slow desegregation, they faced a losing battle. Various plans to stall desegregation or to delay implementation of court orders to achieve integration ultimately failed when they confronted federal action, the resistance of the black community, and a state newspaper press that, by and large, encouraged obedience to the law.
Some citizens hoped that the institution of cross-town busing would offer a panacea for desegregation of the public schools, especially in urban areas of Tennessee. They were wrong. By the mid-1990s opposition to busing still remained strong, and some cities, such as Nashville, had begun to study other approaches to desegregation. Paradoxically, integration had made some of its most notable advances where persons had least suspected–rural areas and where hard-core racial attitudes had once prevailed. Rural white children had ridden buses for long distances to school for many years, and the debate over busing took on a slightly different meaning in places where an entire surrounding area constituted a “neighborhood.”
If desegregation of schools offered Tennessee a difficult challenge, so did direct-action protest techniques to abolish segregation in public facilities. Through the efforts of young black students, especially in Nashville at Tennessee State University, Fisk University, Meharry Medical College, and American Baptist Theological Seminary, the state would bequeath to the national civil rights campaign valuable lessons in protests while raising to prominence a number of notable young black leaders. In Knoxville African Americans and a small number of white protesters would also wage a determined campaign against public Jim Crow. Before the mid-1950s black Tennesseans had already begun to use their institutions as vehicles to fight the forces of segregation. By the time of Brown, then, a social consciousness, anathema to inequality and the idea of black inferiority, existed in the state among African Americans. In the 1960s–during a period often alluded to as the “Movement” days of civil rights–black college students and their allies aided reform with an idealistic crusade that found segregation intolerable and public demonstrations a legitimate technique for battling societal wrongs.
The Nashville civil rights demonstrations stood out among the most noted sit-in activities in Tennessee. But in other cities of the state during the sixties and early seventies, indigenous black leadership contributed to the abolition of societal restraints that made democracy more real for many Tennesseans. Significantly, the movement in Nashville developed from a rather old political base and institutional arrangements that gave protest more of a possibility of success.
Sit-ins by African Americans, of course, predated the modern Civil Rights movement that followed Brown. In Nashville, they owed much of their success to effective leadership of the black clergy. Although historians now realize that a much larger leadership base existed throughout Tennessee than was once known, two persons in particular stand out in the history of protest in the city and the state–Kelly Miller Smith and James Lawson. Smith was the youthful pastor of First Baptist Church, Capitol Hill. A handsome, articulate, charismatic figure, he had a powerful appeal to both young and old. In early January 1958 Smith and other black activists organized the Nashville Christian Leadership Conference (NCLC), an affiliate of Martin Luther King Jr.’s Southern Christian Leadership Conference (SCLC), with which Smith was already affiliated. King, of course, was the acknowledged leader of the Civil Rights movement from the mid-1950s until his untimely death at Memphis 4 April 1968. The NCLC and SCLC had as their ultimate objective a frontal attack on the immorality of discrimination through the unification of ministers and laymen in a common effort to bring about “reconciliation and love” in a racially just society.
Lawson, another young clergyman, worked to hone the technique of nonviolent resistance that finally triumphed in Nashville and other places in Tennessee, as well as all over the South. He had come south from Ohio to attend the Divinity School at Vanderbilt University. A serious student of nonviolence who had spent time in India, Lawson began conducting workshops in 1958 at Smith’s First Baptist Church. As he and Smith developed a strategy for social action through NCLC, they recognized that the city of Nashville could serve as an important laboratory for testing nonviolent protest methods and become a model for other such activities. Lawson knew that segregated society was vulnerable to the power of a grassroots movement if participants willingly sacrificed and suffered to defeat the evil of injustice. When Vanderbilt University ordered Lawson to cease his activity or face suspension, he persevered, and the university dismissed him.
During the later part of 1959 the NCLC commenced its direct-action campaign against downtown stores with two so-called test sit-ins. When whites-only establishments refused blacks service, youthful demonstrators left the facilities after discussing with management the injustice of segregation and their denial of service. What these early public efforts demonstrated was the clear presence of discrimination in the city of Nashville. By January 1960 Smith, Lawson, and student protesters had decided to launch a full-scale, nonviolent attack against businesses that discriminated against blacks if they did not voluntarily alter their policies. Before the students could act, however, other protesters in Greensboro, North Carolina, initiated sit-ins in that city which precipitated demonstrations in other southern cities. Leaders of the Nashville movement now decided to move decisively. Led by Diane Nash, John Lewis, James Bevel, Cordell Reagon, Matthew Jones, and Bernard Lafayette, students from the city’s black colleges began more intense sit-in activity that led in May 1960 to the initial desegregation of some Nashville businesses. A number of factors accounted for their success. Undoubtedly the students’ determined and courageous efforts played a pivotal role in tearing down restrictive racial barriers, but a highly effective economic boycott of downtown Nashville stores by the black community also had a measurable impact. Furthermore, the city’s mayor, Ben West, openly acknowledged that the obvious immorality of discrimination helped to create a more moderate climate following violence against the students and some other black citizens.
Other cities in Tennessee also struggled with changes on the civil rights front in the 1960s and 1970s. Indigenous leadership in Knoxville produced considerable civil rights activity in that East Tennessee city. After black citizens failed to negotiate successfully the end to segregated public facilities, a protest movement led by students at Knoxville College and Merrill Proudfoot, a white minister at that institution, set out to change laws and customs in a city that prided itself on “healthy” race relations. A broad-based movement, the Knoxville campaign drew support from a large number of African Americans who lived in the city, a sizable number of white moderates, and city officials willing to listen and act with reasonable restraint. Following the initiation of sit-ins in June 1960, white city leaders and politicians of Knoxville convinced businessmen to desegregate by mid-July.
The courageous acts of young black demonstrators and their supporters united the black community in Tennessee and challenged the consciences of those whites who casually–sometimes unthinkingly–accepted the laws and customs of contemporary society. The African Americans of Tennessee made an important contribution to the reform tradition in America by assisting the birth of a powerful social movement that changed the country, finally fulfilling the promise of the Declaration of Independence and the Constitution of the United States, and substantiated the faith in individual growth and progress that so many other persons had sought in Tennessee. Tennesseans did much to foster the idea of nonviolent protests during the era of Martin Luther King; how ironic it is that the movement era came to an end with the assassination of King in April 1968, when James Earl Ray’s bullet ripped through his body.
The four decades since Brown have produced measurable progress in black civil rights in Tennessee and the nation. The country saw the passage of a civil rights bill, a voting rights act, and housing legislation, and the federal government has instituted a number of social programs designed to redress past grievances. In 1952 the courts forced the state to admit its first blacks to its graduate, professional, and special schools. Nine years later, the University of Tennessee admitted undergraduates to its Knoxville campus, although six blacks were already matriculating at the institution’s Nashville campus, which under court order later became part of previously all-black Tennessee State University. By 1965 the state could announce that all seven of its institutions of higher learning had technically integrated, and that color was not a precondition for admittance. High school graduation rates, too, were encouraging. At the time of the first major sit-in in Tennessee, less than 9 percent of African Americans had finished high school, but by the mid-1990s that figure exceeded 40 percent.
Strides in the political arena also proved impressive. In 1964 Tennessee elected its first black state legislator since the late nineteenth century, A. W. Willis Jr.; two years later the first African American woman, Dr. Dorothy Brown, won a seat in that body. By 1993 twelve African Americans sat in the legislature, and a total of 168 blacks served in various political positions in a state where 16 percent of the citizens were African American. In January 1992 black votes in Memphis helped to catapult the city’s first black mayor, Willie W. Herenton, into office. In the fast-growing metropolis of Nashville, an African American, Emmet Turner, earned the right to head the Metropolitan Police force in 1996.
In recent years, however, disturbing signs have appeared on the civil rights horizon. Despite recognizable progress, overcoming the past effects of discrimination and the achievement of full rights of citizenship remains a daunting task for African Americans. A number of programs designed to aid blacks, such as affirmative action, faced biting criticism in the mid-1990s. Notable discrepancies still existed between black and white incomes, and the federal courts threatened to dilute the effect of black political power with some of their decisions. As late as July 1996 the United States Commission on Civil Rights noted that Tennessee was sitting on “powder kegs” of tensions which could ignite into violence.
The Civil Rights Commission did not misread the times. But it may not have accounted for the considerable number of blacks and whites who wanted to create a better Tennessee, citizens who had the determination to fight the racial conservatism that threatened to damage the state and its reputation. Opinion polls bore out the contention that most Tennesseans did not want to overturn the fundamental changes made since Brown or to return to the ugly days of harsh segregation. But it was hard for many of them to make the personal or political sacrifices necessary to adjust the wrongs of the past. Yet a desegregated, pluralist society remained a healthy ideal for most Tennesseans, especially for those born in the post-Brown era. In the words of one hopeful native who lived through the era of segregation, the state has moved “too far to turn back.” That optimism, more than anything else, characterized the spirit of the African American community at the time of the state’s Bicentennial. It also registered the hope of more than a few white Tennesseans of good will who had come to decipher the real meaning of democracy, justice, and fair play.
Cynthia G. Fleming, “We Shall Overcome: Tennessee and the Civil Rights Movement,” Tennessee Historical Quarterly 54 (1995): 232-45; Hugh D. Graham, Crisis in Print: Desegregation and the Press in Tennessee (1967); Lester C. Lamon, Black Tennesseans, 1900-1930 (1977); Merrill Proudfoot, Diary of a Sit-In (1990); Linda T. Wynn, “The Dawning of a New Day: The Nashville Sit-Ins, February 13-May 10, 1960,” Tennessee Historical Quarterly 50 (1991): 42-54.
If I were hypothetically wearing a spacesuit and sitting on one of the Voyager space probes at their current positions in space, how much light would I have? (Intermediate)
The Voyager 1 and 2 space probes are rushing further away every day, but they are currently about 22 billion kilometres (13 billion miles) and 18 billion kilometres (11 billion miles) away from the Sun, respectively. For comparison, the Earth is about 150 million kilometres (93 million miles) away from the Sun, so the Sun’s light will certainly be much dimmer at the Voyager space probes than at the Earth.
The apparent brightness of a light source decreases in proportion to the square of the distance from the light source. This means that a light source viewed from a distance of three metres away will appear to be 3² = 9 times fainter than if it were viewed from a distance of 1 metre away. Using this reasoning, we can calculate the strength of light at the Voyager space probes!
A unit for quantifying the brightness of light, as seen by the human eye, is the lux. The brightness of a sunny day is about 10,000 lux, while twilight is about 10 lux. A dark night with a full Moon is about 0.1 lux, while a dark night with only starlight is about 0.001 lux.
The brightness of the Sun at the Voyager 1 and 2 space probes is about 6 lux and 9 lux, respectively. So if you were sitting on one of the Voyager space probes, the Sun itself would appear to be roughly as bright as a point on the sky at twilight.
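To make the inverse-square arithmetic concrete, here is a minimal Python sketch that reproduces these figures. The starting value of roughly 130,000 lux for direct overhead sunlight at Earth is an assumed figure (it is a commonly quoted value for looking toward the Sun itself, and is much higher than the roughly 10,000 lux ambient 'sunny day' figure above); the distances are the approximate ones quoted earlier.

```python
# Inverse-square estimate of how bright the Sun looks from the Voyager probes.
# Assumption (not from the article): direct sunlight at Earth is ~130,000 lux.

EARTH_SUN_KM = 150e6                 # Earth-Sun distance, ~150 million km
SUNLIGHT_AT_EARTH_LUX = 130_000      # assumed direct-sunlight illuminance at Earth

probes = {
    "Voyager 1": 22e9,   # ~22 billion km from the Sun
    "Voyager 2": 18e9,   # ~18 billion km from the Sun
}

for name, distance_km in probes.items():
    times_farther = distance_km / EARTH_SUN_KM       # how many times farther than Earth
    lux = SUNLIGHT_AT_EARTH_LUX / times_farther**2   # brightness falls off as 1/distance^2
    print(f"{name}: {times_farther:.0f}x Earth's distance -> about {lux:.1f} lux")
```

Run as written, this gives roughly 6 lux for Voyager 1 and 9 lux for Voyager 2, matching the values above.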
However, it would actually seem to be much darker than twilight on Earth. For one thing, the Sun will only appear to be a pinprick point source of light. While it is dangerous to look directly at the Sun from Earth, it has an angular size there of about 1900 arcseconds (where 1 arcsecond is 1/3600 of 1 angular degree). Coincidentally, the Moon also has a similar angular size, which makes it possible for it to completely cover the Sun during a total solar eclipse. The smallest angular distance that the human eye can resolve is about 15 arcseconds, so anything smaller than this will appear to be a point source of light. The angular size of the Sun as seen from both Voyager 1 and 2 is about 7 arcseconds, which is well below the limit of what the human eye can resolve. So, the Sun would appear as a tiny pinprick point of light that is no larger than any other star! However, you would be able to identify it as the Sun because it would be much brighter than any other star.
Furthermore, on Earth during the day, the sky appears to be bright in all directions because molecules in Earth’s atmosphere scatter light in all directions. However, there is no atmosphere surrounding the Voyager space probes, so you will only see sunlight if you are looking directly at the Sun (which would be safe to do from that distance) or if the Voyager probe that you were sitting on reflected some sunlight back into your eyes.
Only a fraction of the sunlight that shines on the Voyager space probe will reflect back into your eyes (with the fraction depending on how reflective the surface is), but you would likely be able to faintly see at least the most reflective parts of the space probe.
So in conclusion, if you were sitting on a Voyager space probe, it would be very dark. However, the pinprick point source that is the Sun would be much brighter than any other star (roughly as bright as a point on the sky at twilight) and you would likely be able to see some faint sunlight reflecting off the Voyager space probe that you were hypothetically sitting on!
Whether they came as servants, slaves, religious refugees, or powerful planters, the men and women of colonial settlements created new worlds. The first victims in this process were Native Americans who saw fledgling settlements turn into unstoppable invasion forces, increasingly monopolizing resources and remaking the land into something foreign and deadly. As colonial societies developed in the seventeenth and eighteenth centuries, fluid labor arrangements and racial categories solidified into the race-based chattel slavery that dominated the economy of the British Empire. The North American mainland originally held a small place in that empire, as the output of even its most prosperous colonies paled before the tremendous wealth of the Caribbean sugar islands. Despite their economic and political unimportance, the backwaters of the North American mainland, ignored by many imperial officials, were deeply tied into larger networks of Atlantic exchange. These networks tied together the continents of Europe, Africa, and the Americas, and these ties would drive the development of colonial societies as men and women struggled to survive in harsh conditions.
Events across the ocean continued to influence the lives of colonists. Britain’s seventeenth century was fraught as civil war, religious conflict, and nation building wracked and remade societies on both sides of the ocean. These transformations brought considerable resistance from both within and without, but colonial settlements developed into powerful societies capable of making war against Native Americans and subduing internal upheavals in equal measure. Patterns established throughout this process would echo for centuries. In unfolding these patterns, the story of colonial American history must begin with power and labor. These colonies developed one of the most brutal labor regimes in human history.
II. Slavery and the Making of Race
Arriving in Charles Town, Carolina, in 1706, Reverend Francis Le Jau grew horrified almost immediately. He met enslaved Africans brutalized by the Middle Passage, Indians traveling south to enslave enemy villages, and colonists terrified of invasions from French Louisiana and Spanish Florida. Slavery and death surrounded him.
Still, Le Jau’s stiffest words were aimed at his own countrymen, the English. White servants lazed about, “good for nothing at all.” Elites were no better, unwilling to concede “that Negroes and Indians are otherwise than Beasts.” Although the minister thought otherwise and baptized several hundred slaves after teaching them to read, his angst is revealing.
The 1660s marked a turning point for black men and women in southern colonies like Virginia. New laws created the expectation that African descended peoples would remain enslaved for life. The permanent deprivation of freedom facilitated the maintenance of strict racial barriers. Skin color became more than superficial difference; it became the marker of a transcendent, all-encompassing division between two distinct peoples, two races, white and black.
Racial prejudice against African descended peoples co-evolved with Anglo-American slavery, but blacks were certainly not the only slaves, nor whites the only slaveholders. For most of the seventeenth century, as it had been for many thousands of years, Native Americans controlled almost the entire North American continent. Only after more than a century of Anglo-American contact and observations of so many Indians decimated by diseases did settlers come to see themselves as somehow more naturally “American” than the continent’s first human occupiers.
Not all seventeenth-century racial thought pointed directly towards modern classifications of racial hierarchy. Captain Thomas Phillips, master of a slave-ship in 1694, did not justify his work with any such creed: “I can’t think there is any intrinsic value in one color more than another, nor that white is better than black, only we think it so because we are so.” For Phillips, the profitability of slavery was the only justification he needed.
British colonists in the Caribbean made extensive use of Indian slaves as well as imported Africans. Before the intrusion of colonists, warring indigenous societies might take prisoners of war from enemy tribes to be ceremonially killed, traded to allied Indian groups as gifts, or incorporated into the societies of their captors. Throughout the colonial period, in many parts of the Americas, Europeans exploited these systems of indigenous captivity. Colonists purchased captives from Indian traders with guns, metal goods (like knives), alcohol, or other manufactured goods. Colonists turned the purchased Indian captives into slaves who served on plantations in diverse functions: as fishermen, hunters, field laborers, domestic workers, and concubines. As the Indian slave trade became more valuable, illegal raids, rather than purchases, became more common. Courts might also punish convicted Indians by selling them into slavery.
Wars offered the most common means for colonists to acquire Native American slaves. Seventeenth-century European legal thought held that enslaving prisoners of war was not only legal, but more merciful than killing the captives outright. After the Pequot War (1636-1637), Massachusetts Bay colonists sold hundreds of North American Indians to the West Indies. A few years later, Dutch colonists in New Netherland (New York and New Jersey) enslaved Algonquian Indians during both Governor Kieft's War (1641-1645) and the two Esopus Wars (1659-1664). The Dutch similarly sent these Indians to English-settled Bermuda, and also Curaçao, a Dutch plantation-colony in the southern Caribbean. An even larger number of Indian slaves were captured during King Philip's War (1675-1678), a pan-Indian rebellion against the encroachments of the New England colonies. Hundreds of defeated Indians were bound and shipped into slavery. The New England colonists also tried to send Indian slaves to Barbados, but the Barbados assembly refused to import the New England Indians for fear they would encourage rebellion.
In the eighteenth century, wars in Florida, South Carolina, and the Mississippi Valley produced even more Indian slaves. Some wars emerged from contests between Indians and colonists for land, while others were manufactured as pretenses for acquiring captives. Some were not wars at all, but merely illegal raids performed by slave traders. Historians estimate that between 24,000 and 51,000 Native Americans were enslaved throughout the South in the period 1670-1715. Some Indian slaves stayed in the southern colonies, but many were exported through Charlestown, South Carolina, to other ports in the British Atlantic, most likely to Barbados, Jamaica, and Bermuda. Slave raids and Indian slavery upset the many settlers who wished to claim land in frontier territories. By the eighteenth century, colonial governments often discouraged the practice, although it never ceased entirely as long as slavery was, in general, a legal institution.
Native American slaves died quickly, mostly from disease, but also from starvation, exposure, or simply murder. The demands of colonial plantation economies required a more reliable labor force, and the transatlantic slave trade met the demand. European slavers transported millions of Africans across the ocean in a horrific journey, known as the Middle Passage. Writing at the end of the eighteenth century, Olaudah Equiano recalled the fearsomeness of the crew, the filth and gloom of the hold, the inadequate provisions allotted for the captives, and the desperation that led some slaves to suicide. Equiano claimed to have been born in Igboland (in modern-day Nigeria), but he may have been born in colonial South Carolina and collected memories of the Middle Passage from African-born slaves. Also in the 1780s, Alexander Falconbridge, a slave ship surgeon, described the sufferings of slaves from shipboard infections and close quarters in the hold. Dysentery, known as “the bloody flux,” left captives lying in pools of excrement. Chained in small spaces in the hold, slaves could lose so much skin and flesh from chafing against metal and timber that their bones protruded. Other sources attest to shipboard abuse like rape and whippings as well as to diseases like smallpox and conjunctivitis.
“Middle” had various meanings in the Atlantic slave trade. For the captains and crews of slave ships, the Middle Passage was one leg in the maritime trade in sugar and other semi-finished American goods, manufactured European goods, and African slaves. For the enslaved Africans, the Middle Passage was the middle leg of three distinct journeys from Africa to the Americas. First was an overland journey to a coastal slave-trading factory, often a trek of hundreds of miles. Second—and middle—was an oceanic trip lasting from one to six months in a slaver. Third was acculturation (known as “seasoning”) and transportation to the mine, plantation, or other location where new slaves were forced into labor.
Recent estimates count between 11 and 12 million Africans forced across the Atlantic, with about 2 million deaths at sea as well as an additional several million dying in the trade’s overland African leg or during seasoning. Conditions in all three legs of the slave trade were horrible, but the first abolitionists focused on the abuses of the Middle Passage.
¶ 17 Leave a comment on paragraph 17 1 Europeans took the first steps toward an Atlantic slave trade in the 1440s, when Portuguese sailors landed in West Africa in search of gold, spices, and allies against the Muslims who dominated Mediterranean trade. Beginning in the 1440s, ship captains carried African slaves to Portugal. These Africans were valued only as domestic servants, since Western Europe had a surplus of peasant labor. European expansion into the Americas introduced both settlers and European authorities to a new situation—an abundance of land and a scarcity of labor. Portuguese, Dutch, and English ships became the conduits for Africans forced to America. The western coast of Africa, the Gulf of Guinea, and the west central coast were sources of African captives. Wars of expansion and raiding parties produced captives who could be sold in coastal factories. African slave traders bartered for European finished goods such as beads, cloth, rum, firearms, and metal wares.
¶ 19 Leave a comment on paragraph 19 3 Slavers often landed in the British West Indies, where slaves were seasoned in places like Barbados. Charleston, South Carolina, became the leading entry point for the slave trade on the mainland. Sugar and tobacco became crazes, even near-addictions, in Europe in the early colonial period, but rice, indigo, and rum were also profitable plantation exports. In the middle of the eighteenth century, after trade wars with the Dutch, English slavers became the most active carriers of Africans across the Atlantic. Brazil was the most common destination for slaves, receiving more than four million of them. English slavers, however, brought approximately two million slaves to the British West Indies. About 450,000 Africans landed in British North America, seemingly a small portion of the 11 to 12 million victims of the trade. Enslaved women made up a larger share of the slave population in North America than elsewhere in the Americas, and they bore more children than their counterparts in the Caribbean or South America. A 1662 Virginia law stated that an enslaved woman’s children inherited the “condition” of their mother. This meant that all children born to slave women would be slaves for life, whether the father was white or black, enslaved or free.
¶ 20 Leave a comment on paragraph 20 0 American culture contains many resonances of the Middle Passage and the Atlantic slave trade. Many foods associated with Africans, such as cassava, were imported to West Africa as part of the slave trade, then adopted by African cooks before being brought to the Americas, where they are still eaten. West African rhythms and melodies live in new forms today in music as varied as religious spirituals and synthesized drumbeats. African influences appear in the basket making and language of the Gullah people on the Carolina Coastal Islands.
¶ 21 Leave a comment on paragraph 21 2 Most fundamentally, the modern notion of race emerged as a result of the slave trade. Before the Atlantic slave trade, neither Europeans nor West Africans had a strong notion of race. Indeed, African slave traders lacked a firm category of race that might have led them to think that they were selling their own people. Similarly, most Englishmen felt no racial identification with the Irish or even the Welsh. Modern notions of race emerged only after Africans of different ethnic groups were mixed together in the slave trade and as Europeans began enslaving only Africans and Native Americans.
¶ 22 Leave a comment on paragraph 22 0 In the early years of slavery, especially in the South, the distinction between indentured servants and slaves was unclear. In 1643, a law was passed in Virginia that made African women “tithable.” This, in effect, associated African women’s work with hard, agricultural labor. There was no similar tax levied on white women. This law was an attempt to disassociate white and African women. The English ideal was to have enough hired hands and servants working on a farm so that wives and daughters did not have to partake in manual labor. Instead, white women were expected to labor in dairy sheds, small gardens, and kitchens. Of course, due to the labor shortage in early America, white women did participate in field labor. But this idealized gendered division of labor contributed to Englishmen’s conception of themselves as better than other groups who did not divide labor in this fashion, including the West Africans who arrived in slave ships to the colonies. For white colonists, the association of a gendered division of labor with Englishness was a key formulation in determining that Africans would be enslaved and subordinate to whites.
¶ 23 Leave a comment on paragraph 23 0 Ideas about the rule of the household were informed by legal understandings of marriage and the home in England. A man was expected to hold “paternal dominion” over his household, which included his wife, children, servants, and slaves. White men could expect to rule over their subordinates. In contrast, slaves were not legally seen as masters of a household, and were therefore subject to the authority of the white master. Slave marriages were not legally recognized. Some enslaved men and women married “abroad”; that is, they married individuals who were not owned by the same master and did not live on the same plantation. These husbands and wives had to travel miles at a time, typically only once a week on Sundays, to visit their spouses. Legal or religious authority did not protect these marriages, and masters could refuse to let their slaves visit a spouse, or even sell a slave to a new master hundreds of miles away from their spouse and children. In addition to distance that might have separated family members, the work of keeping children fed and clothed often fell to enslaved women. They performed essential work during the hours that they were not expected to work for the master. They produced clothing and food for their husbands and children, and performed other work like religious and educational instruction.
III. Turmoil in Britain
¶ 25 Leave a comment on paragraph 25 0 Religious violence plagued sixteenth-century England. While Spain plundered the New World and built an empire, England struggled as Catholic and Protestant monarchs vied for supremacy and attacked their opponents as heretics. Queen Elizabeth cemented Protestantism as the official religion of the realm, but questions endured as to what kind of Protestantism would hold sway. Many Puritans looked to the New World as an opportunity to create a beacon of Calvinist Christianity, while others continued the struggle in England. By the 1640s, political conflicts between Parliament and the Crown merged with long-simmering religious tensions. The result was a bloody civil war. Colonists reacted in a variety of ways as England waged war on itself, but all were affected by these decades of turmoil.
¶ 26 Leave a comment on paragraph 26 1 The outbreak of civil war between the King and Parliament in 1642 opened an opportunity for the English state to consolidate its hold over the American colonies. The conflict erupted as Charles I called a parliament in 1640 to assist him in suppressing a rebellion in Scotland. The Irish rebelled the following year, and by 1642 strained relations between Charles and Parliament produced a civil war in England. Parliament won, Charles I was executed, and England transformed into a republic and protectorate under Oliver Cromwell. These changes redefined England’s relationship with its American colonies.
¶ 27 Leave a comment on paragraph 27 0 In 1642, no permanent British North American colony was more than 35 years old. The crown and various proprietors controlled most of the colonies, but settlers from Barbados to Maine enjoyed a great deal of independence. This was especially true in Massachusetts Bay, where Puritan settlers governed themselves according to the colony’s 1629 charter. Trade in tobacco and naval stores tied the colonies to England economically, as did religion and political culture, but in general the English left the colonies to their own devices.
¶ 28 Leave a comment on paragraph 28 0 The English civil war forced settlers in America to reconsider their place within the empire. Older colonies like Virginia and proprietary colonies like Maryland sympathized with the crown. Newer colonies like Massachusetts Bay, populated by religious dissenters taking part in the Great Migration of the 1630s, tended to favor Parliament. Yet, during the war the colonies remained neutral, fearing that support for either side could involve them in war. Even Massachusetts Bay, which nurtured ties to radical Protestants in Parliament, remained neutral.
¶ 30 Leave a comment on paragraph 30 0 Charles’s execution in 1649 altered that neutrality. Six colonies, including Virginia and Barbados, declared open allegiance to the dead monarch’s son, Charles II. Parliament responded with an Act in 1650 imposing an economic embargo on the rebelling colonies, forcing them to accept Parliament’s authority. Parliament argued in the Act that America had been “planted at the Cost, and settled” by the English nation, and that it, as the embodiment of that commonwealth, possessed ultimate jurisdiction over the colonies. It followed up the embargo with the Navigation Act of 1651, which compelled merchants in every colony to ship goods directly to England in English ships. Parliament sought to bind the colonies more closely to England and to prevent other European nations, especially the Dutch, from interfering with its American possessions.
¶ 31 Leave a comment on paragraph 31 0 Over the next few years colonists’ unease about Parliament’s actions reinforced their own sense of English identity, one that was predicated on notions of rights and liberties. When the colonists declared allegiance to Charles II after the Parliamentarian state collapsed in 1659 and England became a monarchy once more in 1660, however, the new king dashed any hopes that he would reverse Parliament’s consolidation efforts. The revolution that had killed his father enabled Charles II to begin the next phase of empire building in English America.
¶ 33 Leave a comment on paragraph 33 1 Charles II ruled effectively, but his successor James II made several crucial mistakes. Eventually, Parliament again overthrew the authority of their king, this time turning to the Dutch Prince William of Orange and his English bride Mary, the daughter of James II. This relatively peaceful coup was called the Glorious Revolution. English colonists in the era of the Glorious Revolution experienced religious and political conflict that reflected transformations in Europe. It was a time of great anxiety for the colonists. In the 1670s, King Charles II tightened English control over America. For example, he created the royal colony of New Hampshire in 1678, and in 1684 transformed Bermuda into a crown colony. The King’s death in 1685 and subsequent rebellions in England and Scotland against the new Catholic monarch, James II, threw Bermuda into crisis. Irregular reports made it unclear who was winning or who would protect their island. Bermudians were not alone in their wish for greater protection. On the mainland, Native Americans led by Metacom, or as the English called him, King Philip, had devastated New England between 1675 and 1678 while Indian conflicts helped trigger Bacon’s Rebellion in Virginia in 1676. Equally troubling, New France loomed, and many remained wary of Catholics in Maryland. In the colonists’ view, Catholics and Indians sought to destroy English America.
¶ 34 Leave a comment on paragraph 34 0 James II worked to place the colonies on a firmer defensive footing by creating the Dominion of New England in 1686. Colonists had accepted him as king despite his religion but began to suspect him of possessing absolutist ambitions. The Dominion consolidated the New England colonies plus New York and New Jersey into one administrative unit to counter French Canada, but colonists decried the loss of their individual provinces. The Dominion’s governor, Sir Edmund Andros, did little to assuage fears of arbitrary power when he impressed colonists into military service for a campaign against Maine Indians in early 1687.
¶ 35 Leave a comment on paragraph 35 0 In England, James’s push for religious toleration brought him into conflict with Parliament and the Anglican establishment. Fearing that James meant to destroy Protestantism, a group of bishops and Parliamentarians asked William of Orange, the Protestant Dutch Stadtholder, and James’s son-in-law, to invade the country in 1688. When the king fled to France in December, Parliament invited William and Mary to take the throne, and colonists in America declared allegiance to the new monarchs. They did so in part to maintain order in their respective colonies. As one Virginia official explained, if there was “no King in England, there was no Government here.” A declaration of allegiance was therefore a means toward stability.
¶ 36 Leave a comment on paragraph 36 0 More importantly, colonists declared for William and Mary because they believed their ascension marked the rejection of absolutism and confirmed the centrality of Protestantism in English life. Settlers joined in the revolution by overthrowing the Dominion government, restoring the provinces to their previous status, and forcing out the Catholic-dominated Maryland government. They launched several assaults against French Canada as part of “King William’s War,” and rejoiced in Parliament’s 1689 passage of a Bill of Rights, which curtailed the power of the monarchy and cemented Protestantism in England. For English colonists, it was indeed a “glorious” revolution as it united them in a Protestant empire that stood counter to Catholic tyranny, absolutism, and French power.
IV. New Colonies
¶ 38 Leave a comment on paragraph 38 0 Despite the turmoil in Britain, colonial settlement grew considerably throughout the seventeenth century, and the two original colonies of Virginia and Massachusetts were joined by several others.
¶ 39 Leave a comment on paragraph 39 0 In 1632, Charles I set aside a tract of about 12 million acres of land at the northern tip of the Chesapeake Bay for a second colony in America. Named for the monarch’s queen, Henrietta Maria, Maryland was granted to Charles’s friend and political ally Cecilius Calvert, the second Lord Baltimore. Calvert hoped to gain additional wealth from the colony, as well as create a haven for fellow Catholics. In England, many of that faith found themselves harassed by the Protestant majority and more than a few considered migrating to America. Charles I, a Catholic sympathizer, was in favor of Lord Baltimore’s plan to create a colony that would demonstrate that Catholics and Protestants could live together in toleration.
¶ 40 Leave a comment on paragraph 40 0 In late 1633, settlers of both the Protestant and Catholic faiths left England for the Chesapeake, arriving in Maryland in March 1634. Men of middling means found greater opportunities in Maryland and it prospered as a tobacco colony without the growing pains suffered by Virginia.
¶ 41 Leave a comment on paragraph 41 0 Unfortunately, Lord Baltimore’s hopes of a diverse Christian colony were dashed. Most colonists were Protestants relocating from Virginia. These Protestants were radical Quakers and Puritans who were tired of Virginia’s efforts to force adherence to the Anglican faith. In 1650, Puritans revolted, setting up a new government that prohibited both Catholicism and Anglicanism. Governor William Stone attempted to put down the revolt in 1655, but order was not restored until 1658. Two years after the Glorious Revolution (1688-1689), the Calverts lost control of Maryland and the colony became a royal colony.
¶ 42 Leave a comment on paragraph 42 0 Religion was implicated in the creation of several other colonies as well, including the New England colonies of Connecticut and Rhode Island. The settlements that would eventually comprise Connecticut grew out of settlements in Saybrook and New Haven. Thomas Hooker and his congregation left Massachusetts for Connecticut because the area around Boston was becoming increasingly crowded. The Connecticut River Valley was large enough for more cattle and agriculture. In June 1636, Hooker, one hundred people, and a variety of livestock settled in an area they called Newtown (later Hartford).
¶ 43 Leave a comment on paragraph 43 0 New Haven Colony had a more directly religious origin. The founders attempted a new experiment in Puritanism. In 1638, John Davenport, Theophilus Eaton, and other supporters of the Puritan faith settled in the Quinnipiac (New Haven) area. In 1643, New Haven Colony was officially organized, with Eaton named governor. In the early 1660s, three men who had signed the death warrant for Charles I were concealed in New Haven. This did not win the colony any favors, and it became increasingly poorer and weaker. In 1665, New Haven was absorbed into Connecticut, but its singular religious tradition endured in the creation of Yale College.
¶ 44 Leave a comment on paragraph 44 0 Religious rogues similarly founded Rhode Island. Roger Williams, after being exiled from Massachusetts, created a settlement called Providence in 1636. He negotiated for the land with the local Narragansett sachems Canonicus and Miantonomi. Williams and his fellow settlers agreed on an egalitarian constitution and established religious and political freedom in the colony. The following year, another Massachusetts castoff, Anne Hutchinson, and her followers settled near Providence. Soon, others followed, and the settlers were granted a charter by the Long Parliament in 1644. Persistently independent, the settlers refused a governor and instead elected a president and council. These separate plantations passed laws abolishing witchcraft trials, imprisonment for debt, and, in 1652, chattel slavery. Because of the colony’s policy of toleration, it became a haven for Quakers, Jews, and other persecuted religious groups. In 1663, Charles II granted the colony a royal charter establishing the colony of Rhode Island and Providence Plantations.
¶ 45 Leave a comment on paragraph 45 0 Until the middle of the seventeenth century, the English neglected the settlement of the area between Virginia and New England, despite obvious environmental advantages. The climate was healthier than the Chesapeake and more temperate than New England. The mid-Atlantic had three highly navigable rivers: the Susquehanna, Delaware, and Hudson. Because the English failed to colonize the area, the Swedes and Dutch established their own colonies: New Sweden in the Delaware Valley and New Netherland in the Hudson Valley.
¶ 46 Leave a comment on paragraph 46 0 Compared to other Dutch colonies around the globe, the settlements on the Hudson River were relatively minor. The Dutch West India Company realized that, in order to secure its fur trade in the area, it needed to establish a greater presence in the colony. Toward this end, the company formed New Amsterdam on Manhattan Island in 1625.
¶ 47 Leave a comment on paragraph 47 0 Although the Dutch extended religious tolerance to those who settled in New Netherland, the population remained small. This left the colony vulnerable to English attack during the 1650s and 1660s, resulting in the eventual hand-over of New Netherland to England in 1667. The new colony of New York was named for its proprietor, James, the Duke of York, brother to Charles II, who had funded the expedition against the Dutch in 1664. The Dutch resisted assimilation into English culture well into the eighteenth century, prompting New York Anglicans to note that the colony was “rather like a conquered foreign province.”
¶ 48 Leave a comment on paragraph 48 0 After the acquisition of New Netherland, Charles II and the Duke of York wished to strengthen English control over the Atlantic seaboard. In theory, this was to better tax the colonies, but in practice, the awarding of the new proprietary colonies of New Jersey, Pennsylvania, and the Carolinas was a payoff of debts and political favors.
¶ 49 Leave a comment on paragraph 49 0 In 1664, the Duke of York granted the area between the Hudson and Delaware rivers to two English noblemen. These lands were split into two distinct colonies, East Jersey and West Jersey. William Penn was one of West Jersey’s proprietors. The ambitious Penn wanted his own, larger colony, the lands for which would be granted by both Charles II and the Duke of York. Pennsylvania consisted of about 45,000 square miles west of the Delaware River and the former New Sweden. Penn was a Quaker, and he intended his colony to be a “colony of Heaven for the children of Light.” Like New England’s aspirations to be a City Upon a Hill, Pennsylvania was to be an example of godliness. But Penn’s dream was to create, not a colony of unity, but rather a colony of harmony. He noted in 1685 that “the people are a collection of diverse nations in Europe, as French, Dutch, Germans, Swedes, Danes, Finns, Scotch, and English….” Because Quakers in Pennsylvania extended to others in America the same rights they had demanded for themselves in England, the colony attracted a diverse collection of migrants. Slavery was particularly troublesome for the pacifist Quakers of Pennsylvania on the grounds that it required violence. In 1688, Quakers of the Germantown Meeting signed a petition protesting the institution of slavery.
¶ 50 Leave a comment on paragraph 50 0 The Pennsylvania soil did not lend itself to the slave-based agriculture of the Chesapeake, but other colonies would depend heavily on slavery from their very foundations. The creation of the colony of Carolina, later divided into North and South Carolina and Georgia, was part of Charles II’s scheme to strengthen the English hold on the eastern seaboard and pay off political and cash debts. The Lords Proprietor of Carolina—eight very powerful favorites of the king—used the model of the colonization of Barbados to settle the area. In 1670, three ships of colonists from Barbados arrived at the mouth of the Ashley River where they founded Charles Town. This defiance of the Spanish claim to the area signified England’s growing confidence as a colonial power.
¶ 51 Leave a comment on paragraph 51 0 To attract colonists, the Lords Proprietor offered alluring incentives: religious tolerance, political representation by assembly, exemption from quitrents, and large land grants. These incentives worked, and Carolina grew quickly, attracting not only middling farmers and artisans, but also wealthy planters. Settlers who could pay their own way to Carolina were granted 150 acres per family member. The Lords Proprietor allowed for slaves to be counted as members of the family. This encouraged the creation of large rice and indigo plantations along the coast of Carolina, which were more stable commodities than the deerskin and Indian slave trade. Because of the size of Carolina, the authority of the Lords Proprietor was especially weak in the northern reaches on the Albemarle Sound. This region had been settled by Virginians in the 1650s, and its residents were increasingly resistant to Carolina authority. As a result, the separate province of North Carolina was founded by the Lords Proprietor in 1691.
V. Riot, Rebellion, and Revolt
¶ 53 Leave a comment on paragraph 53 0 The seventeenth century saw the establishment and solidification of the British North American colonies, but this process did not occur peacefully. Explosions of violence rocked nearly all of the English settlements on the continent.
¶ 54 Leave a comment on paragraph 54 1 In May 1637, an armed contingent of English Puritans from Massachusetts Bay, Plymouth, and Connecticut colonies trekked into the New England wilderness. Referring to themselves as the “Sword of the Lord,” this military force intended to attack “that insolent and barbarous Nation, called the Pequots.” In the resulting violence, Puritans put the Mystic community to the torch, beginning with the north and south ends of the town. As Pequot men, women, and children tried to escape the blaze, other soldiers waited with swords and guns. One commander estimated that of the “four hundred souls in this Fort…not above five of them escaped out of our hands,” although another counted near “six or seven hundred” dead. In a span of less than two months, the English Puritans boasted that the Pequot “were drove out of their country, and slain by the sword, to the number of fifteen hundred.”
¶ 55 Leave a comment on paragraph 55 1 The foundations of the war lay in the rivalry among the Pequot, the Narragansett, and the Mohegan, who battled for control of the fur and wampum trades. This rivalry eventually forced the English and Dutch to choose sides. The war remained a conflict of Native interests and initiative, especially as the Mohegan hedged their bets on the English and reaped the rewards that came with displacing the Pequot.
¶ 56 Leave a comment on paragraph 56 1 Victory over the Pequot not only provided security and stability for the English colonies, but also propelled the Mohegan to new heights of political and economic influence as the primary power in New England. Ironically, history seemingly repeated itself as the Mohegan, desperate for a remedy to their diminishing power, joined the Wampanoag war against the Puritans, which produced a more violent conflict in 1675 known as King Philip’s War, bringing a decisive end to “Indian Power” in New England.
¶ 57 Leave a comment on paragraph 57 1 In the winter of 1675, the body of John Sassamon, a Christian, Harvard-educated Wampanoag, was found under the ice of a nearby pond. A fellow Christian Indian informed English authorities that three warriors under the local sachem named Metacom, known to the English as King Philip, had killed Sassamon, who had previously accused Metacom of planning an insurrection against the English. The three alleged killers appeared before the Plymouth court in June 1675, were found guilty of murder, and executed. Several weeks later, a group of Wampanoags killed nine English colonists in the town of Swansea.
¶ 58 Leave a comment on paragraph 58 0 Metacom—like most other New England sachems—had entered into covenants of “submission” to various colonies, viewing the arrangements as relationships of protection and reciprocity rather than subjugation. Indians and English lived, traded, worshiped, and arbitrated disputes in close proximity before 1675, but the execution of three of Metacom’s men at the hands of Plymouth Colony epitomized what many Indians viewed as a growing inequality of that relationship. The Wampanoags who attacked Swansea may have been seeking to restore balance, or to retaliate for the recent executions. Neither they nor anyone else sought to engulf all of New England in war. Yet that is what happened. Authorities in Plymouth sprang into action, enlisting help from the neighboring colonies of Connecticut and Massachusetts.
¶ 59 Leave a comment on paragraph 59 1 Metacom and his followers eluded colonial forces in the summer of 1675, striking more Plymouth towns as they moved northwest. Some groups joined his forces, while others remained neutral or supported the English. The war badly divided some Indian communities. Metacom himself had little control over events, as panic and violence spread throughout New England in the autumn of 1675. English mistrust of neutral Indians, sometimes accompanied by demands they surrender their weapons, pushed many into open war. By the end of 1675, most of the Indians of western and central Massachusetts had entered the war, laying waste to nearby English towns like Deerfield, Hadley, and Brookfield. Hapless colonial forces, spurning the military assistance of Indian allies such as the Mohegans, proved unable to locate more mobile native villages or intercept Indian attacks.
¶ 60 Leave a comment on paragraph 60 0 The English compounded their problems by attacking the powerful and neutral Narragansetts of Rhode Island in December 1675. In an action called the Great Swamp Fight, 1,000 Englishmen put the main Narragansett village to the torch, gunning down as many as 1,000 Narragansett men, women and children as they fled the maelstrom. The surviving Narragansetts joined the Indians already in rebellion against the English. Between February and April 1676, rebel forces devastated a succession of English towns closer and closer to Boston.
¶ 61 Leave a comment on paragraph 61 0 In the spring of 1676 the tide turned. The New England colonies took the advice of men like Benjamin Church, who urged the greater use of Native allies to find and fight the mobile rebels. Unable to plant crops and forced to live off the land, the rebels’ will to fight waned as companies of English and Native allies pursued them. Growing numbers of rebels fled the region, switched sides, or surrendered in the spring and summer. The English sold many of the latter group into slavery. Colonial forces finally caught up with Metacom in August 1676, and the sachem was slain by a Christian Indian fighting with the English.
¶ 62 Leave a comment on paragraph 62 0 The war permanently altered the political and demographic landscape of New England. Between 800 and 1,000 English, and at least 3,000 Indians perished in the 14-month conflict. Thousands of other Indians fled the region or were sold into slavery. In 1670, Native Americans comprised roughly 25% of New England’s population. A decade later, they made up perhaps 10%. The war’s brutality also encouraged a growing hatred of all Indians among many New England colonists. Though the fighting ceased in 1676, the bitter legacy of King Philip’s War lived on.
¶ 63 Leave a comment on paragraph 63 0 Native American communities in Virginia had already been decimated by wars in 1622 and 1644. But in the same year that New Englanders crushed Metacom’s forces, a new clash arose in Virginia. This conflict, known as Bacon’s Rebellion, grew out of tensions between Native Americans and English settlers as well as tensions between wealthy English landowners and the poor settlers who continually pushed west into Indian territory.
¶ 64 Leave a comment on paragraph 64 0 Bacon’s Rebellion began, appropriately enough, with an argument over a pig. In the summer of 1675, a group of Doeg Indians visited Thomas Mathew on his plantation in northern Virginia to collect a debt that he owed them. When Mathew refused to pay, they took some of his pigs to settle the debt. This “theft” sparked a series of raids and counter-raids. The Susquehannock Indians were caught in the crossfire when the militia mistook them for Doegs, leaving fourteen dead. A similar pattern of escalating violence then repeated: the Susquehannocks retaliated by killing colonists in Virginia and Maryland, the English marshaled their forces and laid siege to the Susquehannocks. The conflict became uglier after the militia executed a delegation of Susquehannock ambassadors under a flag of truce. A few parties of warriors intent on revenge launched raids along the frontier and killed dozens of English colonists.
¶ 65 Leave a comment on paragraph 65 1 The sudden and unpredictable violence of the Susquehannock War triggered a political crisis in Virginia. Panicked colonists fled en masse from the vulnerable frontiers, flooding into coastal communities and begging the government for help. But the cautious governor, Sir William Berkeley, did not send an army after the Susquehannocks. He worried that a full-scale war would inevitably drag other Indians into the conflict, turning allies into deadly enemies. Berkeley therefore insisted on a defensive strategy centered around a string of new fortifications to protect the frontier and strict instructions not to antagonize friendly Indians. It was a sound military policy but a public relations disaster. Terrified colonists condemned Berkeley. When building contracts for the forts went to Berkeley’s wealthy friends, who conveniently decided that their own plantations were the most strategically vital, colonists also condemned the government as a corrupt band of oligarchs more interested in lining their pockets than protecting their people.
¶ 66 Leave a comment on paragraph 66 0 By the spring of 1676, a small group of frontier colonists took matters into their own hands. Naming the charismatic young Nathaniel Bacon as their leader, these self-styled “volunteers” proclaimed that they took up arms in defense of their homes and families. They took pains to assure Berkeley that they intended no disloyalty, but Berkeley feared a coup and branded the volunteers as traitors. Berkeley finally mobilized an army—not to pursue Susquehannocks, but to crush their rebellion. His drastic response catapulted a small band of anti-Indian vigilantes into full-fledged rebels whose survival necessitated bringing down the colonial government.
¶ 67 Leave a comment on paragraph 67 1 Bacon and the rebels stalked the Susquehannock as well as friendly Indians like the Pamunkeys and the Occaneechis. The rebels became convinced that there was a massive Indian conspiracy to destroy the English and presented themselves as heroes to frightened Virginians. Berkeley’s stubborn persistence in defending friendly Indians and destroying the Indian-fighting rebels led Bacon to accuse the governor of conspiring with a “powerful cabal” of elite planters and with “the protected and darling Indians” to slaughter his English enemies.
¶ 68 Leave a comment on paragraph 68 1 In the early summer of 1676, Bacon’s neighbors elected him their burgess and sent him to Jamestown to confront Berkeley. The governor promptly arrested him and forced him into the humiliating position of publicly begging forgiveness for his treason. Bacon swallowed this indignity, but turned the tables by gathering an army of followers and surrounding the State House, demanding that Berkeley name him the General of Virginia and bless his universal war against Indians. Instead, the 70-year-old governor stepped onto the field in front of the crowd of angry men, unafraid, and called Bacon a traitor to his face. Then he tore open his shirt and dared Bacon to shoot him in the heart, if he was so intent on overthrowing his government. “Here!” he shouted before the crowd, “Shoot me, before God, it is a fair mark. Shoot!” When Bacon hesitated, Berkeley drew his sword and challenged the young man to a duel, knowing that Bacon could neither back down from a challenge without looking like a coward nor kill him without making himself into a villain. Instead, Bacon resorted to bluster and blasphemy. Threatening to slaughter the entire Assembly if necessary, he cursed, “God damn my blood, I came for a commission, and a commission I will have before I go.” Berkeley stood defiant, but the cowed burgesses finally prevailed upon him to grant Bacon’s request. Virginia had its general, and Bacon had his war.
¶ 69 Leave a comment on paragraph 69 0 After this dramatic showdown in Jamestown, Bacon’s Rebellion quickly spiraled out of control. Berkeley slowly rebuilt his loyalist army, forcing Bacon to divert his attention to the coasts and away from the Indians. But most rebels were more interested in defending their homes and families than in fighting other Englishmen, and deserted Bacon in droves at every rumor of Indian activity. In many places, the “rebellion” was less an organized military campaign than a collection of local grievances and personal rivalries. Both rebels and loyalists smelled the opportunities for plunder, seizing their rivals’ estates and confiscating their property.
¶ 70 Leave a comment on paragraph 70 0 For a small but vocal minority of rebels, however, the rebellion became an ideological revolution: Sarah Drummond, wife of rebel leader William Drummond, advocated independence from England and the formation of a Virginian Republic, declaring “I fear the power of England no more than a broken straw.” Others struggled for a different kind of independence: white servants and black slaves fought side by side in both armies after promises of freedom for military service. Everyone accused everyone else of treason, rebels and loyalists switched sides depending on which side was winning, and the whole Chesapeake disintegrated into a confused melee of secret plots and grandiose crusades, sordid vendettas and desperate gambits, with Indians and English alike struggling for supremacy and survival. One Virginian summed up the rebellion as “our time of anarchy.”
¶ 71 Leave a comment on paragraph 71 0 The rebels steadily lost ground and ultimately suffered a crushing defeat. Bacon died of typhus in the autumn of 1676, and his successors surrendered to Berkeley in January 1677. Berkeley summarily tried and executed the rebel leadership in a succession of kangaroo courts-martial. Before long, however, a royal fleet arrived bearing over 1,000 red-coated troops and a royal commission of investigation charged with restoring order to the colony. The commissioners replaced the governor and dispatched Berkeley to London, where he died in disgrace.
¶ 72 Leave a comment on paragraph 72 1 But the conclusion of Bacon’s Rebellion was uncertain, and the maintenance of order remained precarious for years afterward. The garrison of royal troops discouraged both incursion by hostile Indians and insurrection by discontented colonists, allowing the king to continue profiting from tobacco revenues. The end of armed resistance did not mean a resolution to the underlying tensions destabilizing colonial society. Indians inside Virginia remained an embattled minority, and Indians outside Virginia remained a terrifying threat. Elite planters continued to grow rich by exploiting their indentured servants and marginalizing small farmers. The vast majority of Virginians continued to resent their exploitation with a simmering fury and meaningful reform was nowhere on the horizon. Bacon’s Rebellion, in the words of one historian, was “a rebellion with abundant causes but without a cause,” and its legacy was little more than a return to the status quo. However, the conflict between poor farmers and wealthy planters may have persuaded a few leaders to look for a less volatile labor force. Indentured servants eventually became free farmers, competing for land and power, while African slaves did not. For this reason Bacon’s Rebellion further motivated the turn to slave labor in the Chesapeake.
¶ 73 Leave a comment on paragraph 73 0 Just a few years after Bacon’s Rebellion, the Spanish experienced their own tumult in the area of contemporary New Mexico. The Spanish had been maintaining control partly by suppressing Native American beliefs. Friars aggressively enforced Catholic practice, burning native idols and masks and other sacred objects and banishing traditional spiritual practices. In 1680 the Pueblo religious leader Popé, who had been arrested and whipped for “sorcery” five years earlier, led various Puebloan groups in rebellion. Several thousand Pueblo warriors razed the Spanish countryside and besieged Santa Fe. They killed 400, including 21 Franciscan priests, and allowed 2,000 other Spaniards and Christian Pueblos to flee. It was perhaps the greatest act of Indian resistance in North American history.
¶ 75 Leave a comment on paragraph 75 0 In New Mexico, the Pueblos eradicated all traces of Spanish rule. They destroyed churches and threw themselves into rivers to wash away their Christian baptisms. “The God of the Christians is dead,” they proclaimed, before reassuming traditional spiritual practices. The Spanish were exiled for twelve years. They returned in 1692, weakened, to reconquer New Mexico.
¶ 76 Leave a comment on paragraph 76 0 The late seventeenth century was a time of great violence and turmoil. Bacon’s Rebellion turned white Virginians against one another, King Philip’s War shattered Indian resistance in New England, and the Pueblo Revolt struck a major blow to Spanish power. It would take several decades after these conflicts before similar patterns erupted in Carolina and Pennsylvania, but the constant advance of European settlements provoked conflict in these areas as well.
¶ 77 Leave a comment on paragraph 77 0 In 1715, the Yamasees, Carolina’s closest allies and most lucrative trading partners, turned against the colony and very nearly destroyed it. Writing from Carolina to London, the settler George Rodd believed they wanted nothing less than “the whole continent and to kill us or chase us all out.” Yamasees would eventually advance within miles of Charles Town.
¶ 78 Leave a comment on paragraph 78 0 The Yamasee War’s first victims were traders. The governor had dispatched two of the colony’s most prominent men to visit and pacify a Yamasee council following rumors of native unrest. Yamasees quickly proved the fears well founded by killing the emissaries and every English trader they could corral.
¶ 79 Leave a comment on paragraph 79 0 Yamasees, like many other Indians, had come to depend on English courts as much as the flintlock rifles and ammunition traders offered them for slaves and animal skins. Feuds between English agents in Indian country had crippled the court of trade and shut down all diplomacy, provoking the violent Yamasee reprisal. Most Indian villages in the southeast sent at least a few warriors to join what quickly became a pan-Indian cause against the colony.
¶ 80 Leave a comment on paragraph 80 0 Yet Charles Town ultimately survived the onslaught by preserving one crucial alliance with the Cherokees. By 1717, the conflict had largely dried up, and the only remaining menace was a scattering of roaming Yamasee bands operating from Spanish Florida. Most Indian villages returned to terms with Carolina and resumed trading. The lucrative trade in Indian slaves, however, which had consumed 50,000 souls in five decades, largely dwindled after the war. The danger was too high for traders, and the colony discovered even greater profits by importing Africans to work new rice plantations. Herein lies the birth of the “Old South,” that horde of plantations that created untold wealth and misery. Indians retained the strongest militaries in the region, but they never again threatened the survival of English colonies.
¶ 81 Leave a comment on paragraph 81 0 If there were a colony where peace with Indians might continue, it would be in Pennsylvania, where William Penn created a religious imperative for the peaceful treatment of Indians. His successors, his sons John, Thomas, and Richard, continued the practice, but rising immigration and booming land speculation increased the demand for land. The Walking Purchase of 1737, a deal made between Delaware Indians and the proprietary government in an effort to secure a large tract of land for the colony north of Philadelphia in the Delaware and Lehigh River valleys, became emblematic of both colonials’ desire for cheap land and the changing relationship between Pennsylvanians and their Native neighbors.
¶ 82 Leave a comment on paragraph 82 0 Through treaty negotiation in 1737, native Delaware leaders agreed to sell Pennsylvania all of the land that a man could walk in a day and a half, a common measurement utilized by Delawares in evaluating distances. John and Thomas Penn, joined by the land speculator James Logan, hired a team of skilled runners to complete the “walk” on a prepared trail. The runners traveled from Wrightstown to present-day Jim Thorpe, and proprietary officials then drew the new boundary line perpendicular to the runners’ route, extending northeast to the Delaware River. The colonial government thus measured out a tract much larger than Delawares had originally intended to sell, roughly 1,200 square miles. As a result, Delaware-proprietary relations suffered. Many Delawares left the lands in question and migrated westward to join Shawnees and Delawares already living in the Ohio Valley. There, they established diplomatic and trade relationships with the French. Memories of the suspect purchase endured into the 1750s and became a chief point of contention between the Pennsylvanian government and Delawares during the upcoming Seven Years’ War.
¶ 84 Leave a comment on paragraph 84 0 The seventeenth century saw the creation and maturation of British North American colonies. Colonists endured a century of struggle against unforgiving climates, hostile natives, and imperial intrigue. They did so largely through ruthless expressions of power. Colonists conquered Indians, attacked European rivals, and joined a highly lucrative transatlantic economy rooted in slavery. These slave-based economies funneled considerable wealth into British coffers, but the North American colonies still remained an afterthought, especially when compared to the tremendous riches of the Caribbean sugar colonies.
¶ 85 Leave a comment on paragraph 85 0 The violence of the seventeenth century echoed into the eighteenth, as new cultural expressions began to create significant changes in colonial North America. After surviving a century of desperation and war, British North American colonists fashioned increasingly complex societies with unique religious cultures, economic ties, and even political traditions. These societies would come to shape not only North America, but also the entirety of the Atlantic World. |
(ORDO NEWS) — Scientists at the University of Arizona have revealed why images of black holes in the galaxy M87 and the Milky Way are very similar, despite the huge differences in the mass of both objects. The astronomers’ explanation is given in a press release posted on the Phys.org website.
The M87 galaxy, located 53.5 million light-years from Earth, hosts a supermassive black hole, Powehi (or M87*), with a mass of about 6.5 billion solar masses, which corresponds to an event horizon larger than the solar system.
It was considered the largest object of its kind until supermassive black holes with masses of 9.7 and 27 billion solar masses were discovered. In 2019, an image of Powehi was first released showing a glowing ring around a dark space known as the black hole’s “shadow”.
On May 12, 2022, the Event Horizon Telescope (EHT) collaboration published an image of the second supermassive black hole, Sgr A*, which sits at the center of the Milky Way and is about 1,500 times less massive than Powehi; its size is comparable to that of Mercury’s orbit.
Although the masses of the two black holes are very different, the image of Sgr A* is nearly identical in shape to the image of the black hole in M87. According to the researchers, this confirms a fundamental prediction of Einstein’s theory of gravity, according to which the apparent size of a black hole depends only on its mass and its distance from the observer.
Because M87 is about 2,000 times farther away than Sgr A*, the two objects appear nearly identical in angular size on the sky, despite the enormous difference in their physical sizes.
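This can be made concrete with a small calculation: relativity predicts a shadow whose physical diameter is roughly ten times GM/c^2, so the apparent (angular) size scales with mass divided by distance. The Python sketch below is illustrative only; the masses and distances used (about 6.5 billion solar masses at 16.8 Mpc for M87*, and about 4.15 million solar masses at 8.1 kpc for Sgr A*) are assumed values based on widely reported EHT figures, not numbers taken from this article.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
PARSEC = 3.086e16  # meters per parsec
UAS_PER_RAD = math.degrees(1.0) * 3600.0 * 1e6  # microarcseconds per radian

def shadow_angular_diameter_uas(mass_solar, distance_pc):
    """Approximate shadow angular diameter: 2*sqrt(27)*GM/c^2 divided by distance."""
    mass_kg = mass_solar * M_SUN
    shadow_diameter_m = 2.0 * math.sqrt(27.0) * G * mass_kg / C**2
    return shadow_diameter_m / (distance_pc * PARSEC) * UAS_PER_RAD

# Assumed, illustrative inputs (mass in solar masses, distance in parsecs).
print(f"M87*:   {shadow_angular_diameter_uas(6.5e9, 16.8e6):.0f} microarcseconds")
print(f"Sgr A*: {shadow_angular_diameter_uas(4.15e6, 8.1e3):.0f} microarcseconds")
```

Despite a mass ratio of roughly 1,500, both results land in the neighborhood of 40 to 50 microarcseconds, which is why the two rings look so similar on the sky.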
According to Dimitrios Psaltis, professor of astronomy and physics at the University of Arizona, if scientists could image a black hole with a mass of only about ten times that of the Sun (in reality this is impossible because no telescope has the resolution to capture it), the picture would look very similar to the image of M87.
The luminous ring surrounding the shadow of a black hole is formed by photons and photon-emitting particles moving in circular orbits in curved space-time, as predicted by Einstein’s equations. That is why, in all cases, black holes should look like luminous donuts, the scientist says.
In 2000 and 2001, scientists identified M87* and Sgr A* as the only feasible targets for an Earth-sized virtual telescope such as the EHT, because they sit in environments that leave their shadows sufficiently distinguishable for observation.
However, analyzing and validating the final image of Sgr A* required petabytes of data, sophisticated algorithms and years of research, including the development of models that should help determine the physical properties of the black hole and its surroundings.
Coal-fired power plants provide nearly 50% of all electricity in the U.S. While coal is a cheap and abundant natural resource, its continued use contributes to rising carbon dioxide (CO2) levels in the atmosphere. Capturing and storing this CO2 would reduce atmospheric greenhouse gas levels while allowing power plants to continue using inexpensive coal. Carbon capture and storage represents a significant cost to power plants that must retrofit their existing facilities to accommodate new technologies. Reducing these costs is the primary objective of the IMPACCT program.
Project Innovation + Advantages:
Researchers at Alliant Techsystems (ATK) and ACENT Laboratories are developing a device that relies on aerospace wind-tunnel technologies to turn CO2 into a condensed solid for collection and capture. ATK's design incorporates a converging-diverging nozzle that expands the flue gas, cooling it enough to turn the CO2 into solid particles, which are then removed from the system by a cyclonic separator. This technology is mechanically simple, contains no moving parts, and generates no chemical waste, making it inexpensive to construct and operate, readily scalable, and easily integrated into existing facilities. The added cost to coal-fired power plants from introducing this system would be 50% less than that of current technologies.
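To give a feel for the underlying physics, the nozzle cools the gas through supersonic expansion, which the standard isentropic relation T = T0 / (1 + (gamma - 1)/2 * M^2) describes. The sketch below is a rough illustration only: the specific-heat ratio, inlet stagnation temperature, and CO2 frost-point value are assumed round numbers, not figures from ATK's actual design.

```python
# Rough illustration of supersonic-expansion cooling of flue gas.
# All numbers are illustrative assumptions, not ATK design values.

GAMMA = 1.30             # assumed specific-heat ratio of the flue gas
T_STAGNATION = 320.0     # assumed stagnation temperature at the nozzle inlet, K
CO2_FROST_POINT = 195.0  # approximate CO2 desublimation temperature near 1 atm, K

def static_temperature(mach: float) -> float:
    """Static temperature after isentropic expansion to a given Mach number."""
    return T_STAGNATION / (1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2)

for mach in (1.0, 1.5, 2.0, 2.5, 3.0):
    temp = static_temperature(mach)
    status = "cold enough for CO2 to frost out" if temp < CO2_FROST_POINT else "still too warm"
    print(f"Mach {mach:.1f}: T = {temp:5.1f} K  ({status})")
```

Under these assumed conditions the gas only drops below the CO2 frost point at roughly Mach 2.5 and above, which is why a converging-diverging (supersonic) nozzle, rather than a simple subsonic one, is needed.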
If successful, ATK's technology would collect and remove CO2 from coal-fired power plants at half the cost of current-generation carbon capture and storage technologies.
Enabling continued use of domestic coal for electricity generation will preserve the stability of the electric grid.
Carbon capture technology could prevent more than 800 million tons of CO2 from being emitted into the atmosphere each year.
Enabling cost-effective carbon capture systems could accelerate their adoption at existing power plants. |
I have created a range of activities for you to use throughout this lesson. Some of the activities may take more than one day (i.e. Comprehension Crafts).
Vocab Flip-Flap: Students practice writing the vocabulary words in sentences, and writing definitions. I suggest doing this activity prior to reading the story so when they see the vocabulary word in the reading they will have an understanding of it.
ABC Order Spelling Words: Print these pages double-sided. On one page, students write the words in ABC order; on the other, they pick 7 of the words to put into sentences. As an extra challenge you could always have them put two spelling words in each sentence.
Roll a Spelling Word: Students will need dice for this activity. They roll a die to find out which word to write. This can also be done with partners as a race to see who can finish first.
Interactive Journal Page: There is 1 interactive Journal page in this unit for Root Words.
Fact and Opinion Craft: This is an adorable mobile craft where students write out facts and opinions from the week’s selection.
5 Fab Things About Erik: Students write 5 facts about Erik in the hands. This also works as a fast finisher or as a homework option.
Commas in Sentence Worksheet: Students read sentences and passages from the text and put in the missing commas.
Spelling Writing Pages: Students practice tracing, writing, and rainbow writing the spelling words. This could be used during centers or as another homework option.
Comprehension Tri-Fold: I have provided you two comprehension Tri-Folds for this week’s selections. Students answer questions about the week’s selection. They could do this in small groups or individually as an informal assessment.
Summary Page for the week’s selection.
Copyright © Emily Barnes of Emily Education. All rights reserved by author. This product is to be used by the original purchaser only. Copying for more than one teacher, classroom, department, school, or school system is prohibited. This product may not be distributed or displayed digitally for public view. Failure to comply is a copyright infringement and a violation of the Digital Millennium Copyright Act (DMCA). Clipart and elements found in this PDF are copyrighted and cannot be extracted and used outside of this file without permission or license. Intended for classroom and personal use ONLY. See product file for clip-art and font credits.
These materials were prepared by a teacher in Washington State and have neither been developed, reviewed, nor endorsed by Houghton Mifflin Harcourt Publishing Company, publisher of the original Journeys Reading Program work on which this material is based. |
A new research study of children’s growth, published in the September issue of Pediatrics, can help parents and pediatricians determine the risk that a child will be overweight at age 12 by examining the child’s earlier growth. The study demonstrates that children who are overweight at any stage of their growth before age 12 are more likely to be overweight by the time they are 12, and the more times a child is measured as overweight during these growth years, the greater the chance that by 12 the child will be overweight.
For example, the researchers discovered that preschool-age children who were medically determined to be overweight at one of three points of measurement before age 5 were more than five times as likely to be overweight at age 12 as those who were below the 85th percentile for body mass index (BMI) during the same period. BMI is a standard measure calculated from a person’s height and weight.
Philip R. Nader, M.D., Professor Emeritus of Pediatrics at the University of California, San Diego (UCSD) School of Medicine, is primary author of the study, with co-authors from 10 different institutions around the nation. He said the group pursued the study because obesity is a major public health problem in the United States.
According to Centers for Disease Control and Prevention growth standards developed before the obesity epidemic, children are considered to be overweight if their BMI is over the 85th percentile, or falls in the top 15% of children of the same age and gender. The Institute of Medicine considers these children obese if their BMI is over the 95th percentile, or in the top 5%. The rate of obesity among adults and children in the U.S. has nearly tripled over the time that the children in the study were growing up.
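As a point of reference, the BMI arithmetic and the percentile cutoffs described above can be sketched as follows. The code is illustrative: the function names and example numbers are hypothetical, and in practice the BMI-for-age percentile must be looked up in the CDC growth charts for the child's age and sex, which are not reproduced here.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def classify_bmi_percentile(bmi_for_age_percentile: float) -> str:
    """Classify a child's BMI-for-age percentile using the cutoffs cited in the article."""
    if bmi_for_age_percentile >= 95:
        return "obese (top 5%)"
    if bmi_for_age_percentile >= 85:
        return "overweight (top 15%)"
    return "below the 85th percentile"

# Hypothetical example: a child weighing 40 kg who is 1.40 m tall.
print(f"BMI: {bmi(40.0, 1.40):.1f}")   # about 20.4
# The percentile itself (say, the 90th) comes from a CDC growth-chart lookup.
print(classify_bmi_percentile(90.0))   # overweight (top 15%)
```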
“Obesity produces physical and psychological health problems that lead to decreased life expectancy and increased health care costs,” said Nader. “This study is particularly important because it gives parents and health care providers new data on the likelihood a child will become overweight in early adolescence. Once adolescents become overweight, there is a high likelihood that they will remain overweight as adults. This is one reason why doctors are seeing more children and adolescents with Type Two diabetes.”
The study examined more than 1,000 children from ten U.S. locations born in 1991. “They grew up during the period when the ‘obesity epidemic’ began to be noticed,” said Nader.
Nader and colleagues measured the heights and weights of participating children in the study at seven time points: three in the preschool period (two years, three years, and four-and-a-half years), three in the school-age period (seven, nine, and 11 years), and finally at age 12.
The researchers found that during the pre-school and elementary school period, the more times a child was overweight, the greater the likelihood of a child being overweight at age 12.
For example, if a school-age child was overweight at one measurement, the child was 25 times more likely to be overweight at age 12; overweight at two measurements, 159 times more likely; at three, 374 times more likely. The group determined that 60% of children who were overweight at any time during the preschool period and 80% of children who were overweight at any time during the elementary period were overweight at age 12.
“These results suggest that any time a child reaches the 85th percentile for BMI may be an appropriate time for intervention,” cautioned Nader. “For this sample of children who are growing up during a period of increasing obesity rates, it is clear that the longer a child remains in the lower ranges of normal BMI, the less likelihood the child will become overweight by early adolescence. The more times a child entered a higher BMI category or was overweight there was a greater likelihood that the child would remain overweight.”
“In children, height and weight are routinely collected by healthcare providers, and calculation of BMI could be an important step in the path to early detection of a tendency towards becoming obese,” said Nader. “Factors such as parental weight status, genetic tendencies, and lifestyle/environmental issues of healthy diet, TV watching, and opportunities for safe active play are important contributors to the problem of obesity.”
The researchers concluded that, based upon the reported growth data, health care providers can confidently encourage parents of overweight young children to address the child’s eating and activity patterns immediately, rather than delaying in the hope that the excess weight, and the patterns that support continued weight gain, will resolve themselves in due course.
Nader stated that if parents can prevent the escalating rate of weight gain as the child grows taller, parents will be able to keep the child in the lower normal BMI ranges, which will greatly decrease the chances of becoming obese in adolescence and possibly later in life.
“Parents should demand that the environment that their child is exposed to include healthy foods, less exposure to TV and sedentary activities and safe active places for physical activity, including neighborhood parks and quality physical education in schools,” said Nader.
The study, to be published in PEDIATRICS in September 2006, was sponsored by the National Institute of Child Health and Human Development’s Early Childcare Research Network (ECCRN), Study of Early Childcare and Youth Development. Authors are: Philip R. Nader MD, UCSD; Marion O’Brien, Ph.D., UNCG; Renate Houts, Ph.D., RTI International; Robert Bradley, Ph.D. Univ Arkansas; Jay Belsky Ph.D. Birkbeck University London; Robert Crosnoe, Ph.D., Univ Texas; Sarah Friedman, Ph.D., NICHD; Zugeo Mei M.D., CDC; Elizabeth Susman Ph.D. The Pennsylvania State University, and the ECCRN.
# # #
Media Contact: Leslie Franz or Jeffree Itrich, 619-543-6163
People often mix up coordinate systems, datums, projections and spatial references as if they were one and the same, so here’s a recap of them. One of the things you often find people saying is that “my data is in the WGS84 coordinate system”. This doesn’t really make sense, but I will get back to this later.
This is a very confusing subject, and I might have gotten a few things wrong myself, so please add a comment and I’ll update it ASAP.
A coordinate system is, simply put, a way of describing a spatial property relative to a center. There is more than one way of doing this:
- The Geocentric coordinate system is based on a normal (X,Y,Z) coordinate system with the origin at the center of the Earth. This is the system that GPS uses internally for doing its calculations, but since it is very impractical for a human being to work with (due to the lack of the well-known concepts of east, north, up and down), it is rarely displayed to the user and is instead converted to another coordinate system.
- The Spherical or Geographic coordinate system is probably the most well-known. It is based on angles relative to a prime meridian and the Equator, usually expressed as longitude and latitude. Heights are usually given relative to either the mean sea level or the datum (I’ll get back to the datum later).
- The Cartesian coordinate system is defined as a “flat” coordinate system placed on the surface of the Earth. In some projections it is not entirely flat, in the sense that it follows the Earth’s curvature in one direction and has a known scale error in the other direction relative to the distance from the origin. The best-known such coordinate system is the Universal Transverse Mercator (UTM), but surveyors define their own little local flat coordinate systems all the time. It is very easy to work with and fairly accurate over small distances, making measurements such as length, angle and area very straightforward. Cartesian coordinate systems are strongly connected to projections, which I will cover later.
Sidenote: The geocentric coordinate system is strictly speaking a Cartesian coordinate system too, but these are the general terms I've seen used most often when talking about world coordinate systems.
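To make the relationship between the geographic and geocentric systems concrete, here is a minimal Python sketch of the standard conversion from latitude/longitude/ellipsoidal height to geocentric X, Y, Z. The WGS84 semi-major axis and flattening are used purely as example values:

import math

A = 6378137.0              # WGS84 semi-major axis in metres (example values)
F = 1 / 298.257223563      # WGS84 flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_geocentric(lat_deg, lon_deg, h=0.0):
    # Convert geographic coordinates to geocentric (Earth-centred) X, Y, Z
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius of curvature
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_geocentric(55.0, 12.0))   # roughly (3.59e6, 7.62e5, 5.20e6) metres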
Datums and ellipsoids
Some common properties of the above coordinate systems are that they are all relative to the center of the Earth and that, except for the geocentric coordinate system, they use a height system relative to the surface of the Earth.
This poses two immediate problems:
- Where is the center of the Earth?
- What is the shape of the earth?
By now most people should know that the Earth isn’t flat (although there are still some who doubt it). If we define the surface of the Earth as being at the mean sea level (often referred to as the Geoid), we don’t get a spheroid or even an ellipsoid. Because of gravitational variations, often caused by large masses such as mountain ranges, the Earth is actually very irregular, with variations of +/- 100 meters. Since this is not very practical to work with as a model of the Earth, we usually use an ellipsoid as an approximation. The ellipsoid is defined by its semi-major axis, and either the flattening of the ellipsoid or the semi-minor axis.
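As a quick worked example of those defining numbers, using the WGS84 values (semi-major axis and inverse flattening) purely for illustration, the semi-minor axis follows directly from the flattening:

a = 6378137.0             # semi-major axis in metres (WGS84 value, for illustration)
inv_f = 298.257223563     # inverse flattening, a / (a - b)
b = a * (1 - 1 / inv_f)   # semi-minor axis
print(round(b, 3))        # ~6356752.314 m, about 21 km shorter than the semi-major axis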
The center and orientation of the ellipsoid is what we call the datum. So the datum defines an ellipsoid and, through a set of points on the ground that we relate to points on the ellipsoid, the center of the Earth. This poses another problem, because continental drift moves these reference points around all the time. This is why the name of a datum usually has a year in it, often referring to the position of those points on January 1st of that year (although that may vary).
There are a vast amount of datums, some used for measurements all over the world, and other local datums defined so they fit very well with a local area. Some common ones are: World Geodetic Datum 1984 (WGS84), European Datum 1950 (ED50) and North American Datum 1983 (NAD83).
The most well-known is WGS84, used by the GPS systems of today. It is a good approximation of the entire world, with fixed points defined almost all over the world. When it was defined, however, no points were included in Europe, so the Europeans now have their own ETRS89, which is usually referred to as the “realization of WGS84 in Europe”. The problem here was solely continental drift, so they defined a set of points relative to WGS84 in 1989 and keep track of the changes. In most use cases it is of no real importance and you can use one or the other.
I mentioned earlier that people often refer to having their data in WGS84, and you see now why this doesn’t make sense. All you know from that is that the data is defined using the WGS84 datum, but you don’t know which coordinate system it uses.
Read more on Datums and Spheroids.
Projections
The Earth isn’t flat, and there is no simple way of putting it down on a flat paper map (or these days onto a computer screen), so people have come up with all sorts of ingenious solutions, each with their pros and cons. Some preserve area, so all objects have a relative size to each other; others preserve angles (conformal), like the Mercator projection; some try to find a good intermediate mix with only little distortion on several parameters, and so on. Common to them all is that they transform the world onto a flat Cartesian coordinate system, and which one to choose depends on what you are trying to show.
A common statement that I hear in GIS is the following: “My map doesn’t have a projection”, but this is simply not possible (unless you have a good old rotating globe). Often people are referring to data that is in longitude/latitude and displayed on a map without any projection having been specified. What happens is that the system applies the simplest projection it can: mapping longitude directly to X and latitude to Y. This results in an equirectangular projection, also called the “Plate Carree” projection. It results in very heavy distortion, making areas look squashed close to the poles. You could almost say that the “opposite” of the Plate Carree is the Mercator projection, which stretches areas close to the poles in the opposite direction, making them look very big. Mercator is the type of projection you see used on Live Maps and Google Maps, but contrary to what many people mistakenly think, they do NOT use WGS84 for the projected map, although WGS84 is used when you directly input longitude/latitude values using their API (read more on this here).
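A minimal Python sketch of the two projections mentioned here may make the difference clearer. Plate Carree simply passes the angles through, while the sphere-based Mercator used by the web mapping services stretches Y towards the poles (the sphere radius below is the commonly used 6378137 m, assumed here for illustration):

import math

R = 6378137.0   # radius of the sphere used for the web Mercator variant (assumption)

def plate_carree(lon_deg, lat_deg):
    # Equirectangular: longitude maps straight to X, latitude straight to Y
    return lon_deg, lat_deg

def spherical_mercator(lon_deg, lat_deg):
    # Conformal Mercator on a sphere; Y grows without bound towards the poles
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

print(spherical_mercator(0, 60))   # about (0.0, 8400000) m; the poles themselves map to infinity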
More on projected coordinate systems
The spatial reference is a combination of all the above. It defines an ellipsoid, a datum using that ellipsoid, and either a geocentric, geographic or projected coordinate system. A projected coordinate system also always has a geographic coordinate system associated with it. The European Petroleum Survey Group (EPSG) has a huge set of predefined spatial references, each given a unique ID. These IDs are used throughout the industry, and you can download an Access database with all of them from their website, as well as some very good documents on projection (or see the Spatial References website).
So when you hear someone saying they have their data in WGS84, you can often assume that they have longitude/latitude data in WGS84 projected using Plate Carree. The spatial reference ID of this is EPSG:4326.
Spatial References are often defined in a Well-known format defining all these parameters. The Spatial Reference EPSG:4326 can therefore also be written as:
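A commonly seen well-known-text form of EPSG:4326, shown here in a simplified version (exact parameter lists and authority codes vary a little between tools), is roughly:

GEOGCS["WGS 84", DATUM["WGS_1984", SPHEROID["WGS 84", 6378137, 298.257223563]], PRIMEM["Greenwich", 0], UNIT["degree", 0.0174532925199433], AUTHORITY["EPSG", "4326"]]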
As mentioned Live/Google maps use a Mercator projection, but although their datum is based on WGS84, they use a sphere instead of an ellipsoid. This means that they use the same center and orientation as WGS84, but without applying any flattening. The spatial reference string for their projection therefore becomes:
PROJCS["Mercator Spheric", GEOGCS["WGS84based_GCS", DATUM["WGS84based_Datum", SPHEROID["WGS84based_Sphere", 6378137, 0], TOWGS84[0, 0, 0, 0, 0, 0, 0]], PRIMEM["Greenwich", 0, AUTHORITY["EPSG", "8901"]], UNIT["degree", 0.0174532925199433, AUTHORITY["EPSG", "9102"]], AXIS["E", EAST], AXIS["N", NORTH]], PROJECTION["Mercator"], PARAMETER["False_Easting", 0], PARAMETER["False_Northing", 0], PARAMETER["Central_Meridian", 0], PARAMETER["Latitude_of_origin", 0], UNIT["metre", 1, AUTHORITY["EPSG", "9001"]], AXIS["East", EAST], AXIS["North", NORTH]] |
Knowing the answers will help you in school, knowing how to question will help you in life – Warren Berger
The essential thing in life is to keep on questioning, as it is one of the best ways to find out about the things you want to know. Curiosity leads to questioning and, further down the lane, to wisdom and knowledge.
Questioning is the ability to organize our thinking around what we don’t know – The Right Question Institute
What questioning techniques are, and whether they are essential in today’s society, is a common question in everybody’s mind. The thing to remember at this point is that questioning is a necessary tool for learners to attain a better understanding, as well as skills that will prove a blessing in the days to come.
Questioning in itself is a technique that is considered crucial and all-important in the growth and well being of an individual.
Planning for better questioning techniques
Take the following steps to prepare for better questioning techniques
- Identify your intention or purpose for asking questions and link the questions to it
- Determine the levels of questions you want to ask
- Select the content and choose the material that you consider important
- Plan the questions that you want to use
- Make sure you have your arsenal of the key as well as subsidiary questions at hand
- Choose the questioning technique that will fit the situation
- Phrase the questions in a manner that makes it clear to the respondent
- Decide on the timing and order of questioning
- Analyze the given information to come up with an answer
Types of questioning techniques
The various types of questioning techniques are as follows
1. Open question
Open questions are an essential part of questioning techniques, and they deal in the broader discussion, explanations, and elaboration. These are framed in a manner of conversation between two people where questions help to describe the situation.
Open questions help in a better understanding of the topic under discussion as it allows endless questions and answers. It is useful in productive talks so that you can delve deep and extract more information.
When you are asking open questions, the person at the other end feels your interest, and this creates a strong bond. This method of questioning techniques is best for individuals or small groups, not large ones, as it will become challenging to extract information and give personal attention to everyone.
Examples of open questions are
- What happened in the class today?
- What did you do in your office today?
- Who was present at the time of the incident?
- Explain why it happened?
- What are your thoughts about the incident?
- Why did it happen?
2. Closed questions
Closed questions are quick and easy to answer because they deal in short and often one-word answers. These are often yes-or-no answers, asked for affirmations, agreements, disagreements, and understanding of concepts.
These are not part of a conversation, as asking a closed question can put an end to it. Closed questions are easy to compare during statistical analysis.
The limitation of this questioning technique is that it does not allow the respondent to express himself or put forth his views, nor does it facilitate a better understanding between the people involved in the question and answers. Examples of closed questions are
- Do you like hard drinks?
- Do you like soft drinks?
- Did you drive to the office today?
- Are you happy in your office?
- Where do you live?
3. Rhetorical questions
Rhetorical questions are used to engage the audience. They encourage people to think out of the box and come up with innovative ideas. This type of questioning technique is not looking for answers.
These are statements that are phrased as questions to engage the other party and draw him into agreeing with you. Rhetorical questions make the conversation exciting and engaging and are used to persuade people to their way of thinking. Examples of rhetorical questions are-
- Isn’t it nice working in this office?
- Aren’t all the team members cooperative?
- Isn’t this a perfect fit?
4. Leading questions
Leading questions are also known as reflective questions because of their nature. Here an environment is created by leading the respondent towards a specific route. It also encourages people to agree with you and say yes.
This type of questioning technique is useful for closing a deal, building positive rapport, and directing the conversation towards an outcome that you desire. Sometimes leading questions are also used for manipulating situations to your advantage.
- Do you have any issues in your workplace?
- Do you enjoy working with your colleagues?
- Did anyone say anything to you?
- Did anyone misbehave with you?
5. Probing questions
As the name suggests probing questions are used to probe and extract information. It is looking for elaborate answers to clear all the related doubts. This type of questioning technique is most useful for encouraging others to open up and provide more information.
It involves a series of questions that encourage others to talk and give a full picture for better understanding. Examples of probing questions are
- When do you need this information?
- What is this information needed for?
- Where will you use this information?
- Who requires this information and details?
6. Funnel questions
Funnel questions are called by this unique name because of the type of questioning involved. A funnel is wide at the mouth and gradually narrows down to the bottom, and so is the concept in this type of questioning technique.
The person starts with lots of general questions about a specific topic and, with time, narrows it down to one point to arrive at the result. The funnel type of questioning techniques is generally used by people interested in research and investigation, for example, by a policeman investigating a case or a scientist researching his paper.
They ask funnel questions to collect useful information that is later narrowed down to an obvious conclusion. Examples of funnel questions are
- When did you last see him?
- What was he wearing?
- Was he disturbed?
- What did you talk about?
- Did he say anything specific to you?
7. Clarifying questions
As the name suggests clarifying questions are used to verify specific information. In this type of questioning techniques, things are finalized at the end to confirm the matter that was under discussion.
Examples of clarifying questions are
- Am I right in believing that you all broke the hostel curfew?
- Just to confirm that you have asked for an unlimited package for the internet?
8. Loaded questions
Loaded questions are often closed questions that seem pretty straightforward. There is a twist in a loaded question, as it includes an assumption about the other person.
This type of questioning technique is considered tricky. It is used mainly by journalists and lawyers who want to trick the other person in giving answers that they usually would not provide.
Loaded questions are considered fact finders as they are useful in discovering facts that anyone would be reluctant to share. Examples of loaded questions are-
- Have you stopped overeating?
- Have you stopped cheating during an examination?
9. Recall and process questions
The recall questioning technique is used when you want the respondent to remember a particular fact. For instance, a teacher asks the student what 5 multiplied by 5 is, and the student will have to recall and answer 25.
Process questions, on the other hand, encourage the respondent to recall as well as add their opinion and then answer. It tries to test their understanding and knowledge about a specific subject.
The recall and process questions are most useful in developing critical thinking in individuals. They are used during the in-depth evaluation of a topic during discussions, interviews, and tests. Examples of Recall and process questions are-
- What is your account password?
- Why do you consider yourself the right person for the job?
Purpose of questioning techniques
The purpose of questioning techniques is as follows-
- Engage, involve and challenge
- Boost independent learning
- Clear doubts and gain clarity
- Share ideas and start conversations. It also helps to take control of the conversation
- Express interest in others and build rapport
- Promote reasoning power
- Encourage the habit of problem-solving
- Promote thinking on important issues and key concepts
- Helps to evaluate and boost analytical thinking
- Develop an interest
- Recall existing knowledge
- Develop a better understanding of a subject
- Check on the knowledge gained at earlier instances
- Know about the opinions and beliefs of other people
- Obtain information and search for viable solutions
- Explore attitudes and feelings
Strategies for effective questioning techniques
The strategies of questioning techniques in a classroom are-
- Do not ask questions only to those students who are raising their hands and volunteering to answer the question. Make it a no-hands policy where students do not raise hands; instead, the teacher calls on anyone at random
- Introduce a wait time if you are looking for an effective questioning technique in the classroom. The wait time gives the student time to think, process, and rehearse the answer before speaking. This action will bring considerable improvement in the student’s interaction
- The teacher should plan the lesson so that the questions are well prepared. It will keep the experience on track and achieve desired outcomes
- Use various types of questions during the question and answer session as it will give the students a chance to deal with every situation admirably.
- It is essential to encourage the students so that they can ask a question in a classroom. This will foster a better understanding of the lesson
- Prepare the follow-up questions so that you can use it if required
- Do not dismiss any answers as it will discourage students from participating in such activities
The advantages of questioning techniques in teaching are-
- The question-answer method helps develop critical thinking and analytical skills in children from an early age
- The questioning techniques develop the power of expression amongst students
- It helps to analyze the mindset of students and determine whether they can cope in their surroundings
- It is also used to reflect on the behavior and attitude of a student
- The questioning techniques help children to think aloud and have active discussions
- It helps the learners to clear their doubts
- Motivates them to develop various interests
- Empowers students so that they become confident
- Helps teachers to check the understanding level of a student
The disadvantages of questioning techniques in teaching are-
- It is a very time-consuming process that takes a lot of effort
- A teacher will have to be skilled to use this method as an advantage
- It can disturb the atmosphere in the class.
Questioning techniques are referred to as learning skills that encourage asking questions and knowing the right answers. It is used by everyone in all the spheres of life, for instance, at home, at work, at social gatherings, at meetings, amongst friends, family, colleagues, and even in the presence of strangers.
Proper questioning techniques lead to better interpersonal skills and successful communication. |
Study Edge Precalculus
Precalculus is the preparation for calculus. The course is designed to strengthen and enhance conceptual understanding and the mathematical reasoning used when modeling and solving mathematical and real-world problems. Students systematically work with functions and their multiple representations. Precalculus can deepen students' mathematical understanding and fluency with algebra and trigonometry and extends their ability to make connections and apply concepts and procedures at higher levels. Students will investigate and explore mathematical ideas, develop multiple strategies for analyzing complex situations, and use technology to build understanding, make connections between representations, and provide support in solving problems (TAC §111.42(b)(3)).
This video book is brought to you by TEA and Study Edge. It may be used to teach an entire Precalculus course or to supplement traditional Precalculus textbooks.
This open-education-resource instructional material by TEA is licensed under a Creative Commons Attribution 4.0 International Public License in accordance with Chapter 31 of the Texas Education Code.
Please provide feedback on Study Edge's open-education resource instructional materials.
5.01 Radians and Degree Measurements
In this video, students will learn the basics of angle measurements, definitions of various types of angles, radians and degrees, along with arc length and area of a sector.
5.02 Linear and Angular Velocity
In this video, students will learn about angular and linear velocity and how each relates to unit conversions.
5.03 Trigonometric Ratios
In this video, we will define the trigonometric ratios in terms of the sides of a right triangle.
5.04 Trigonometric Angles and the Unit Circle
In this video, students will learn special angles and the unit circle, and learn how to apply them.
5.05 Graphs of Sine and Cosine
In this video, students will learn how to graph sine and cosine and how to interpret graphs of sine and cosine.
5.06 Graphs of Secant and Cosecant
In this video, students will learn how to graph and interpret graphs of secant and cosecant, and how secant and cosecant relate to sine and cosine.
5.07 Graphs of Tangent and Cotangent
In this video, students will learn how to graph and how to interpret graphs of tangent and cotangent.
5.08 Inverse Trigonometric Functions and Graphs
In this video, students will explore the relationship between trigonometric functions and their inverses.
Drawing Conclusions about Three-Dimensional Figures from Nets
Given a net for a three-dimensional figure, the student will make conjectures and draw conclusions about the three-dimensional figure formed by the given net.
Conservation of Momentum
This resource was created to support TEKS IPC(4)(E).
8.01 Conic Sections
In this video, students will learn the definition of a double-napped cone, and how conic sections are formed at the intersection of a plane and a double-napped cone.
In this video, students will learn the analytic definition of an ellipse, the standard form of the equation of an ellipse, and how to graph ellipses.
In this video, students will learn the analytic definition of a hyperbola, the standard form of the equation of a hyperbola, and how to graph hyperbolas.
8.05 Polar Coordinates and Equations
In this video, students will learn about the polar coordinate system and how to convert to and from the rectangular coordinate system.
8.06 Polar Graphs
In this video, students will learn how to graph polar curves.
8.07 Special Polar Graphs
In this video, students will learn the equations and graphs of special polar curves.
8.04 Parametric Equations
In this video, students will learn about parametric equations, how to sketch parametric curves, and the differences between parametric curves and rectangular graphs.
This resource provides flexible alternate or additional learning activities for students learning about the gravitational attraction between objects of different masses at different distances. IPC TEKS (4)(F)
6.01 Trigonometric Identities
In this video, students will learn trigonometric identities, where they are derived from, and how to apply them in problems.
Hydrogen is estimated to account for 90% of all atoms in the universe and close to ⅔ of the atoms in our bodies. HYDROGEN gas, or molecular hydrogen, has been studied since 1975.
Our own microbiome contains bacteria that produce HYDROGEN GAS (H2). Hydrogen, the first element in the creation of the universe, the smallest and most prevalent element, was the key to the beginning of life as we know it.
In 2007, groundbreaking research brought hydrogen to the awareness of scientists for its potential role in human disease. Nature Medicine reported that the inhalation of H2 gas inhibited brain injury. The research found molecular hydrogen rapidly diffuses across membranes, acting as a selective antioxidant, neutralizing cytotoxic reactive oxygen species (ROS).
In the mitochondria, not only does H2 neutralize the most harmful free radicals like peroxynitrite and hydroxyl radicals, but we also find up-regulation of ATP production. This is why hydrogen tablets are so popular among biohackers, high performance athletes and busy professionals looking for peak performance.
HYDROGEN’S SYSTEMIC BENEFITS
The systemic benefits of Molecular Hydrogen go far beyond that. Hydrogen gas (H2) has been found to:
- Inhibit inflammation
- Support nitric oxide, circulation and cardiovascular health
- Modulate the immune response
- Enhance cognitive function
- Preserve mitochondrial function
- Delay the aging process
- Inhibit DNA/RNA damage
- Optimize oxygen utilization
- Modulate gene expression
HOW DO HYDROGEN TABLETS WORK?
By simply dropping one tablet into liquid, the tablet undergoes a reaction and immediately infuses the liquid with hydrogen gas. Plasma levels peak within 5-15 minutes after drinking. The tablet also releases magnesium in an ionic, highly absorbable form. |
At the end of 2018, the LIGO gravitational-wave observatory announced that it had discovered the most distant and massive source of space-time pulsations ever observed: waves caused by pairs of black holes colliding in deep space. Only since 2015 have we been able to observe these invisible astronomical bodies, which can be detected only by their gravitational attraction. The history of our hunt for these mysterious objects dates back to the 18th century, but the decisive stage occurred in a rather dark period of human history – World War II.
The concept of a body that could hold on to light and thus become invisible to the rest of the universe was first considered by the natural philosopher John Michell and then by Pierre-Simon Laplace in the 18th century. They used Newton's gravitational laws to calculate the speed a particle of light would need to escape from a body, predicting the existence of stars so dense that light could not escape from them. Michell called them "dark stars."
But after it was discovered in 1801 that light took the form of a wave, it became unclear how light would be affected by the Newtonian gravitational field, so the idea of dark stars was discarded. It took about 115 years to understand how light in the form of a wave behaves under the influence of a gravitational field, with the help of Albert Einstein's General Theory of Relativity in 1915 and Karl Schwarzschild's solution to the problem a year later.
Schwarzschild also predicted the existence of a critical circumference for a body, a boundary that light cannot cross: the Schwarzschild radius. This idea was similar to Michell's, but now this critical circle was understood as an impenetrable barrier.
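For a sense of scale, the critical radius has the simple form r_s = 2GM/c^2, which is also what Michell's Newtonian escape-velocity argument gives when the escape speed is set equal to the speed of light. A short Python sketch with illustrative constants:

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg (illustrative value)

r_s = 2 * G * M_sun / c ** 2
print(r_s)         # about 2950 m: squeezed below roughly 3 km, the Sun would trap its own light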
Only in 1933 did Georges Lemaitre show that this impenetrability was merely an illusion experienced by a distant observer. Using the now famous illustration of Alice and Bob, the physicist suggested that if Bob stays behind while Alice jumps into a black hole, Bob will see Alice's image slow down and freeze just before reaching the Schwarzschild radius. Lemaitre also showed that Alice actually crosses this barrier: Bob and Alice simply experience the event differently.
Despite this theory, at the time there was not a single known object of this density, or anything even close to a black hole. So no one believed that something like the dark stars Michell had suggested could exist. In fact, no one even dared to take the possibility seriously. Until World War II.
From dark stars to black holes
On September 1, 1939, Nazi Germany invaded Poland, starting the war that forever changed world history. It is noteworthy that on this very day the first scientific article on black holes was published. The now-celebrated article by J. Robert Oppenheimer and Hartland Snyder, two American physicists, "On Continued Gravitational Contraction", was a decisive moment in the history of black holes. The timing seems especially striking given the central place of World War II in the development of the theory of black holes.
This was Oppenheimer's third and final article in astrophysics. In it, he and Snyder predict the continued contraction of a star under the influence of its own gravitational field, creating a body with such an intense gravitational pull that even light cannot escape it. It was the first version of the modern concept of a black hole, an astronomical body so massive that it can be detected only by its gravitational attraction.
In 1939, this idea was too strange to be believed. It would take two decades for the concept to be developed enough for physicists to begin to recognize the consequences of the continued contraction that Oppenheimer described. And the Second World War itself played a decisive role in that development, through the US government's investment in atomic bomb research.
Reborn from the ashes
Oppenheimer, of course, was not only an important character in the history of black holes. He would later become the scientific head of the Manhattan Project, the research effort that led to the development of atomic weapons.
Politicians understood the importance of investing in science in order to gain a military advantage. Consequently, there was significant investment across the board in war-related physics research, nuclear physics, and the development of new technologies. Physicists of all kinds devoted themselves to this research, and as a direct consequence the fields of cosmology and astrophysics were largely forgotten, including the Oppenheimer article.
Despite the decade lost to large-scale astronomical research, the discipline of physics generally flourished as a result of the war; in fact, wartime physics ultimately boosted astronomy. The USA came out of the war as the center of modern physics. The number of physics PhDs increased dramatically, and a new postdoctoral training tradition was created.
By the end of the war, the study of the universe had resumed. A renaissance arose in the once-underestimated general theory of relativity. The war changed the way we do physics, and as a result the fields of cosmology and general relativity gained the recognition they deserve. That was fundamental to accepting and understanding black holes.
Princeton University became the center of a new generation of relativists. It was there that nuclear physicist John A. Wheeler, who later popularized the name “black hole”, first became acquainted with the general theory of relativity and re-analyzed Oppenheimer's work. Initially skeptical, Wheeler was turned into the greatest enthusiast of the prediction published on the day the war began, September 1, 1939, by the influence of close colleagues in relativity and by new advances in computer modeling and radio engineering developed during the war.
Since then, new properties and types of black holes have been theorized and discovered, but all this culminated only in 2015. Measuring gravitational waves created in a black hole binary system was the first concrete proof of the existence of black holes.
Scientists have discovered the largest known black hole collision. |
What Is the Difference Between a Simile and a Metaphor?
A Simile IS a Metaphor
Similes and metaphors are figures of speech used to paint a picture in the mind. Actually, a simile IS a metaphor, but a metaphor is not a simile. What does that mean? Well, a simile is a type of metaphor, just as an apple is a type of fruit. Both compare one item to another, but the difference is in the wording.
The way I always remembered is that a simile is similar. Simile is similar. Simile. Similar. Similar-sounding words there, you see. So, similar items are compared with the words "as" or "like." A metaphor also compares two things to the other, but the words "like" or "as" are left out. A simile says that one item is like another; a metaphor says that one item IS the other. That's the simplest explanation for these two literary terms.
In the movie, Forrest Gump, Forrest describes Jenny and himself in this way: "We were like peas and carrots." They were similar or "like." Therefore, he used a simile to describe himself and his friend. If he had used a metaphor, he would have said, "We were peas and carrots." See the difference?
Both similes and metaphors are used as poetic devices, particularly similes. A metaphor, which compares two things, often unlike things, is actually more forceful, which perhaps is what makes it somewhat less poetic.
Metaphors and similes are used in everyday language, as well as in various types of entertainment mediums. Similes and metaphors abound in music and poetry especially.
Examples of Similes
Here are some examples of similes. Note the phrases that begin with "as" or "like."
The baby was as light as a feather.
Her room was as neat as a pin.
Last night, I slept like a log.
Similes in Poetry:
Oh, my love's like a red, red rose. --Robert Burns
My mistress' eyes are nothing like the sun. --William Shakespeare
And of course in the movie, Forrest Gump:
Life is like a box of chocolates. --Forrest Gump
Time is Money!
Examples of Metaphors
A metaphor compares two things that are unlike but have something in common. No "like" or "as" is used in the comparison. Note that metaphors can be different parts of speech, such as adjectives or verbs as well.
Life is a roller coaster.
Time is money!
You are the apple of my eye.
I'm feeling blue.
She is fishing for compliments
He shot down all my ideas.
Note that none of the examples are literal. You are not an apple in someone's eye; if you paid attention in biology, that would be a pupil.
You don't really feel blue, no more than you feel red, brown, or yellow.
As far as the verbs go, nobody fishes for compliments. I envision someone sitting on the bank of the river, fishing pole in the water, trying to hook statements such as "You're pretty" or "Gee, you're nice."
And I've never seen someone pick up a gun and literally shoot someone's ideas until they fall dead to the ground. (At least not literally.)
Examples of Simile and Metaphor in Songs
Review of Simile and Metaphor
Do you think you understand the difference between a simile and a metaphor, and how you can remember which is which?
Just remember the similar (like, as) comparison in a simile, but that a metaphor compares the two without the "like" or "as."
Simile: He has no manners; he eats like a pig!
Metaphor: He has no manners; he is such a pig! And maybe he really, really is!
Simile in Poetry and Song
Metaphors and Similes: Do you know the difference?
Do you feel you know the difference between metaphors and similes?
Metaphor in Song: Life isn't "like" a highway; Life IS a highway!
Questions & Answers
is there any metaphor which uses the word "like"?
No, metaphors do not use the word "like." Similes do, as explained in the article.
Avian Eye Disorders
Birds can suffer from many different eye disorders. They can be due to an eye injury, or possibly an infection to the area. Occasionally, eye disorders are symptoms of another underlying medical problem. Therefore, if your bird has an eye problem, it should be considered serious and you should consult a veterinarian to rule out any major internal disease.
Symptoms and Types
Conjunctivitis, a common eye disorder, is usually caused by bacteria and can be identified as red and swollen eyelids, and may lead to photosensitivity (avoidance of light) in the bird. Conjunctivitis is also a symptom of many other medical problems, including respiratory infections.
Uveitis causes an inflammation of the inner parts of the eye. However, it is commonly associated with symptoms of other internal diseases in the bird. This particular disorder needs to be treated quickly to avoid cataracts from forming.
Cataracts develop in the bird’s eye when there is a deficiency in vitamin E, an infection with encephalomyelitis, or even from continuous exposure to some artificial lights.
Marek’s disease is a particular type of eye disorder that is caused by a viral infection. This medical condition can lead to irregularly shaped pupils, iris problems, and blindness, and can progress into cancer. Vaccination can prevent this eye disorder from occurring. However, a bird that is already infected with the virus cannot be cured.
Avian Pox is another eye disorder which is found in birds, and is due to a viral infection. Though it is a generalized disease, the eye symptoms include swelling of the eyelids with blister-like formations, and partial or total loss of vision. However, the eyeball is not affected by the infection and the vision usually returns after the infection is treated.
Many eye disorders are caused by bacterial infections (e.g., salmonella). This particular bacterium causes both conjunctivitis and ophthalmitis -- inflammation with pus in the eyeball and conjunctiva -- and possible blindness. In addition, salmonella is contagious and is often spread from parent birds to their young, or through the egg yolk.
Fungal infections of the eye can also lead to bird eye disorders, usually because of moldy feed. One common fungi, Aspergillus, infects the bird’s respiratory system, but can also affect brain and eyes. The infected eye will show yellow plaques under the eyelid. The eye will also have inflammation, and if left untreated, this infection can result in severe eye damage.
Vitamin deficiency is another cause of eye disorders in birds. For instance, a deficiency in vitamin E in the parent can lead to the birth of a blind chick. And vitamin A is required for proper pigmentation and tearing of the eyes. To prevent such deficiencies, give your bird commercial feed.
If your bird shows signs of discomfort or symptoms of any eye disorder -- such as eyes that close, swell, become red, discharge a substance, or blink more than usual -- be sure to get the bird checked by the veterinarian for immediate treatment. Antibiotic eye drops or other medicines can help in dealing with the eye disorder at an early stage.
Prevention of certain types of eye disorders depends on the symptoms found in the bird. But timely medical intervention can save the bird from suffering, as well as from serious eye damage.
Defining brain cell regulation
James Trimmer has identified how potassium ion channels in the brain respond to a variety of stimuli
For years, scientists have attempted to study how nerve cells regulate the function of potassium ion channels because they are crucial to how the body responds to a variety of stimuli, from noise in the environment to chemical messages from different parts of the body.
Researchers have traditionally thought that these pore-like openings, which use potassium ions to conduct weak electric signals across their membranes, operated like an on-off switch. But UC Davis research, published in the August 19 issue of Science , has shown that control of brain-cell activity is much more sophisticated, operating more like volume controls on stereos. The groundbreaking research provides a new model for the behavior of critical gatekeeper proteins found in nerve cell membranes.
“We've shown that brains cells regulate activity in an incremental way, with thousands of different possible levels of activity,” explained James Trimmer, professor of pharmacology and toxicology at UC Davis School of Medicine. He and his colleagues studied an ion channel that controls neuronal activity called Kv2.1, a type of voltage-gated potassium channel that is found in every neuron of the nervous system.
“Our work showed that this channel can exist in millions of different functional states, giving the cell the ability to dial its activity up or down depending on what's going on in the external environment,” said Trimmer. This regulatory phenomenon is called 'homeostatic plasticity' and it refers, in this case, to the channel protein's ability to change its function in order to maintain optimal electrical activity in the neuron in the face of large changes within the brain or the animal's environment. “It's an elegant feedback system,” he added.
The current study is the first to combine mass spectrometry-based proteomics and ion channel biophysics to the study of living brain cells. “This is an important biological question that couldn't have been answered any other way,” Trimmer said.
Most cells in the body can get by with on/off-like switches, allowing them to grow and proliferate when needed. In fact, examples of these 'switches' include the well-studied products of oncogenes, proteins that get stuck in the 'on' position and cause cancer. Brain cells, however, must multi-task, receiving and processing signals from various sources, both inside and outside the body. “This ability to deal with a variety of signals involves some fairly sophisticated and subtle regulation of neuronal activity,” Trimmer said.
Brain cell activity is diminished when potassium channels are open. Closed channels lead to an increase in neuron excitability. Certain kinds of snake venom exploit this mechanism by blocking potassium channels and causing seizures. Likewise, defects in potassium channels have been associated with epilepsy and reduced brain development, as well as neurodegenerative disorders similar to Alzheimer's and Parkinson's diseases.
The type of potassium ion channel examined in the current study, Kv2.1, has been shown in studies by assistant research scientist Hiroaki Misonou to be highly regulated in response to epileptic seizures, stroke and anesthesia.
Trimmer and his colleagues are the first to use a mass spectrometry technique called SILAC (stable isotope labeling with amino acids in cell culture) to study ion channels in brain cells. The problem for researchers has been that while mass spectrometry gives incredibly accurate measures of mass, quantifying amounts of a protein in different samples can be difficult. SILAC allows scientists to add additional atomic weight to one sample so that two different samples can be analyzed in a given run, allowing for precise measurements of quantity. The 'mass tag' separates the two samples — the experimental and control — on the mass spectrometry read out.
Using this technique, postdoctoral fellow Kang-Sik Park revealed 16 sites where the protein is modified by the cell via the addition of a phosphate group. Further study, in which each of the sites was removed to reveal its role in modulation, followed by careful biophysical analyses of channel function by postdoctoral fellow Durga Mohapatra, revealed that seven of these sites were involved in the regulation of neuronal activity. Since each site can be regulated independently on the four channel subunits, the neuron can generate a huge (10^18) number of possible forms of the channel.
Using this mechanism, Kv2.1 channels are quickly modified, even mimicking the activity of other potassium ion channels. “The beauty of doing it with a single protein is that it is already there and can change in a matter of minutes. It would take hours for the cell to produce an entirely different potassium channel,” Trimmer explained.
Based on these results, Trimmer and his colleagues hypothesize that parts of the Kv2.1 channel protein interact in ways that make it either easier or harder for it to change from closed to open. The protein, they believe, can exist either in loose states that require low amounts of energy, or voltage, to change from one state to another, or in a locked-down state that requires lots of energy (high voltage) to open or close. The number and position of phosphate molecules are what determine the amount of voltage required to open the channel.
The next step will be to determine how brain neurons regulate the addition and removal of phosphates at individual sites on the Kv2.1 protein during normal animal behavior. This involves proteomic analysis of Kv2.1 from different brain regions after stimulation with light, sound and with different learning paradigms. Trimmer and colleagues will also explore the pharmacological modulation of Kv2.1 phosphorylation in therapeutic intervention for neurological and psychiatric disorders. |
Various indigenous peoples lived in the territory of the present-day state of Montana for thousands of years. Historic tribes encountered by Europeans and settlers from the United States included the Crow in the south-central area; the Cheyenne in the southeast; the Blackfeet, Assiniboine and Gros Ventres in the central and north-central area; and the Kootenai and Salish in the west. The smaller Pend d' Oreille and Kalispel tribes lived near Flathead Lake and the western mountains, respectively.
The land in Montana east of the continental divide was part of the Louisiana Purchase in 1803. Subsequent to and particularly in the decades following the Lewis and Clark Expedition, American, British and French traders operated a fur trade, typically working with indigenous peoples, in both eastern and western portions of what would become Montana. These dealings were not always peaceful, and though the fur trade brought some material gain for indigenous tribal groups it also brought exposure to European diseases and altered their economic and cultural traditions. Until the Oregon Treaty (1846), land west of the continental divide was disputed between the British and U.S. and was known as the Oregon Country. The first permanent settlement by Euro-Americans in what today is Montana was St. Mary's (1841) near present day Stevensville. In 1847, Fort Benton was established as the uppermost fur-trading post on the Missouri River. In the 1850s, settlers began moving into the Beaverhead and Big Hole valleys from the Oregon Trail and into the Clark's Fork valley.
The first gold discovered in Montana was at Gold Creek near present day Garrison in 1852. A series of major mining discoveries in the western third of the state starting in 1862 found gold, silver, copper, lead, coal (and later oil) that attracted tens of thousands of miners to the area. The richest of all gold placer diggings was discovered at Alder Gulch, where the town of Virginia City was established. Other rich placer deposits were found at Last Chance Gulch, where the city of Helena now stands, Confederate Gulch, Silver Bow, Emigrant Gulch, and Cooke City. Gold output from 1862 through 1876 reached $144 million; silver then became even more important. The largest mining operations were in the city of Butte, which had important silver deposits and gigantic copper deposits.
Prior to the creation of Montana Territory (1864–1889), various parts of what is now Montana were parts of Oregon Territory (1848–1859), Washington Territory (1853–1863),Idaho Territory (1863–1864), and Dakota Territory (1861–1864). Montana became a United States territory (Montana Territory) on May 26, 1864. The first territorial capital was at Bannack. The first territorial governor was Sidney Edgerton. The capital moved to Virginia City in 1865 and to Helena in 1875. In 1870, the non-Indian population of Montana Territory was 20,595. The Montana Historical Society, founded on February 2, 1865, in Virginia City is the oldest such institution west of the Mississippi (excluding Louisiana). In 1869 and 1870 respectively, the Cook–Folsom–Peterson and the Washburn–Langford–Doane Expeditions were launched from Helena into the Upper Yellowstone region and directly led to the creation of Yellowstone National Park in 1872.
As white settlers began populating Montana from the 1850s through the 1870s, disputes with Native Americans ensued, primarily over land ownership and control. In 1855, Washington Territorial Governor Isaac Stevens negotiated the Hellgate treaty between the United States Government and the Salish, Pend d' Oreille, and the Kootenai people of western Montana, which established boundaries for the tribal nations. The treaty was ratified in 1859. While the treaty established what later became the Flathead Indian Reservation, trouble with interpreters and confusion over the terms of the treaty led whites to believe that the Bitterroot Valley was opened to settlement, but the tribal nations disputed those provisions. The Salish remained in the Bitterroot Valley until 1891.
The first U.S. Army post established in Montana was Camp Cooke on the Missouri River in 1866 to protect steamboat traffic going to Fort Benton, Montana. More than a dozen additional military outposts were established in the state. Pressure over land ownership and control increased due to discoveries of gold in various parts of Montana and surrounding states. Major battles occurred in Montana during Red Cloud's War, the Great Sioux War of 1876, the Nez Perce War and in conflicts with Piegan Blackfeet. The most notable of these were the Marias Massacre (1870), Battle of the Little Bighorn (1876), Battle of the Big Hole (1877) and Battle of Bear Paw (1877). The last recorded conflict in Montana between the U.S. Army and Native Americans occurred in 1887 during the Battle of Crow Agency in the Big Horn country. Indian survivors who had signed treaties were generally required to move onto reservations.
Chief Joseph and Col. John Gibbon met again on the Big Hole Battlefield site in 1889.
Simultaneously with these conflicts, bison, a keystone species and the primary protein source that Native people had survived on for centuries, were being destroyed. Some estimates say there were over 13 million bison in Montana in 1870. In 1875, General Philip Sheridan pleaded to a joint session of Congress to authorize the slaughtering of herds in order to deprive the Indians of their source of food. By 1884, commercial hunting had brought bison to the verge of extinction; only about 325 bison remained in the entire United States.
Cattle ranching has been central to Montana's history and economy since Johnny Grant began wintering cattle in the Deer Lodge Valley in the 1850s and traded cattle fattened in fertile Montana valleys with emigrants on the Oregon Trail. Nelson Story brought the first Texas Longhorn cattle into the territory in 1866. Granville Stuart, Samuel Hauser and Andrew J. Davis started a major open range cattle operation in Fergus County in 1879. The Grant-Kohrs Ranch National Historic Site in Deer Lodge is maintained today as a link to the ranching style of the late 19th century. Operated by the National Park Service, it is a 1,900-acre (7.7 km2) working ranch.
Tracks of the Northern Pacific Railroad (NPR) reached Montana from the west in 1881 and from the east in 1882. However, the railroad played a major role in sparking tensions with Native American tribes in the 1870s. Jay Cooke, the NPR president launched major surveys into the Yellowstone valley in 1871, 1872 and 1873 which were challenged forcefully by the Sioux under chief Sitting Bull. These clashes, in part, contributed to the Panic of 1873 which delayed construction of the railroad into Montana. Surveys in 1874, 1875 and 1876 helped spark the Great Sioux War of 1876. The transcontinental NPR was completed on September 8, 1883, at Gold Creek.
Tracks of the Great Northern Railroad (GNR) reached eastern Montana in 1887 and when they reached the northern Rocky Mountains in 1890, the GNR became a significant promoter of tourism to Glacier National Park region. The transcontinental GNR was completed on January 6, 1893, at Scenic, Washington.
In 1881, the Utah and Northern Railway, a branch line of the Union Pacific, completed a narrow gauge line from northern Utah to Butte. A number of smaller spur lines operated in Montana from 1881 into the 20th century, including the Oregon Short Line, Montana Railroad and Milwaukee Road.
Buffalo Soldiers, Ft. Keogh, Montana, 1890. The nickname was given to the "Black Cavalry" by the Native American tribes they fought.
Under Territorial Governor Thomas Meagher, Montanans held a constitutional convention in 1866 in a failed bid for statehood. A second constitutional convention was held in Helena in 1884 that produced a constitution ratified 3:1 by Montana citizens in November 1884. For political reasons, Congress did not approve Montana statehood until 1889. Congress approved Montana statehood in February 1889 and President Grover Cleveland signed an omnibus bill granting statehood to Montana, North Dakota, South Dakota and Washington once the appropriate state constitutions were crafted. In July 1889, Montanans convened their third constitutional convention and produced a constitution acceptable by the people and the federal government. On November 8, 1889 President Benjamin Harrison proclaimed Montana the forty-first state in the union. The first state governor was Joseph K. Toole. In the 1880s, Helena (the current state capital) had more millionaires per capita than any other United States city.
Radar measurements of Mars' polar ice caps reveal that the mostly dry, dusty planet is emerging from an ice age, following multiple rounds of climate change. Understanding the Martian climate will help determine when the planet was habitable in the past, how that changed, and may inform studies of climate change on Earth. Models have suggested that Mars has undergone ice ages in the past, but empirical data to confirm this has been sparse. Here, Isaac Smith and colleagues used radar to analyze layers of ice within the planet's polar ice caps, using the Shallow Radar instrument onboard the Mars Reconnaissance Orbiter spacecraft. As ice erodes, wind can create spiral troughs and other distinct features. Tracing the layers of these features within the ice can reveal changes in ice accumulation and flow - and thus changes in climate - in the past. While the southern ice cap is relatively small and altered by meteorite impacts, the researchers were able to trace the layers within the northern ice cap. They found layers and migration paths that increase in slope abruptly, reverse direction, or are completely buried. Their analysis suggests that the planet is currently emerging from an ice age, in a retreat that began approximately 370,000 years ago. |
Growing Independence & Fluency
SHH! We’re Reading!
Rationale: To be a fluent reader, a child must be able to read both aloud and silently to themselves. To increase reading speed, fluency, and comprehension, students need to learn to read silently, and this lesson teaches techniques for doing so. The children will read a decodable book of their choice (with independent reading level stickers) to practice reading without talking.
1. Yellow (high reading level), red (middle reading level), and blue (lower reading level) stickers for Independent reading levels for books.
2. Classroom library containing books with Independent Reading level stickers on them.
3. Book talks for a few of the books
5. Reading Journals
6. Worksheet for cross-checking (at the end of the lesson)
7. Chalk for chalkboard
8. Sample books for classroom library – Kite Day at Pine Lake by Sheila Cushman, Teach Us, Amelia Bedelia by Patricia Parrish, and Leftover Lily by Sally Warner
9. Checklist for silent reading
1. Begin the lesson by telling the students that they are going to start reading in a different way than they have in the past. We are going to learn how to read silently today. The teacher will then give a few book talks to get the students interested in some of the books they will be able to select from. (Examples: Kite Day at Pine Lake by Sheila Cushman, Teach Us, Amelia Bedelia by Patricia Parrish, and Leftover Lily by Sally Warner).
Sample Book Talk for Leftover Lily – Six-year-old Lily is left out when her friends exclude her. Lily picks a new friend to boss around, but her new friend Hilary does not want to be controlled by Lily! What is Lily going to do now?
2. The teacher will also explain that they will be reading silently so that they can read faster and understand what they are reading more easily. When we read out loud we can sometimes get distracted or distract those around us. When everyone is silent, you are able to concentrate on the book you are reading. Before we start reading silently, we are going to review a few strategies to help your reading be smoother.
3. The teacher will begin this by explaining cross-checking to the students. The teacher will make sure that when the students read silently, they read for comprehension, not just for speed. The teacher will pass out a worksheet with sentences on it like: The cat barked when it found its bone, and Sally the mouse ate a piece of cheese. The children will go through these sentences on their own, then decide which ones make sense to them. Then, the teacher goes over the right answers with the whole class and makes sure that everyone understands. It is important to remember to cross-check, or see if what you are reading makes sense, when you are reading silently.
4. Next, the teacher should review how to do the cover up method. Write the word "mouth" on the board. Ask a student, how would you use the cover up method to read this word? Good, first you would see what sound the vowel makes, then add the first letter to the vowel, and finally add the last sound. (While explaining this, the teacher will show the ou first, then uncover the m, then add the last sound /th/) The teacher should then model how to do some harder words that the children might not know (absolute, numerous, etc.). For numerous, the teacher should uncover nu/mer/ous part by part and explain how to put the sounds together to say the word.
5. Class, I am going to pick up a book and read the first paragraph. Notice how I read silently, not where you can hear me. (Teacher should read silently so students know how it should look while they are reading.)
6. Now we are going to have some of our own silent reading time. Everyone may pick out one book to read from our classroom library. You may pick a book that I gave a book talk on earlier, or you may choose another book. Make sure you pick a book that has the same color sticker as the stickers that were given to you earlier today. If you are reading a chapter book, read as many chapters as you can in the time you are given. We are going to be doing silent reading daily from now on, so you will have plenty of time to finish your book. Then, the students will find a seat wherever they want to in the room and read their book silently.
7. If the teacher believes a student is not reading during this time, ask the student what they read after the silent reading time is over. The teacher can assess the children by observing them while they read. The teacher will look at their silent reading techniques (checklist for all students):
_____ Lips Only
Then, the teacher can allow each child to go to the front of the room and share a sentence or two about their book. This is to make sure that they read it, and that they comprehended it.
8. Have the children write in their journals what they liked and didn’t like about silent reading. Also have them write a sentence or two about what they read in their book. After the children have completed their journal entry they will gather together again and have a discussion on the importance of silent reading. The class can then talk openly about their silent reading experience. They can discuss the problems some may have had while silent reading and how each student can become a better silent reader.
O’Brian, Barclay “Silence for Solo Reading” CTRD Student Spring 2001
Harbour, Mary Ann “Shh…Silent Reading” CTRD Student Spring 2001
Wilson, P. (1992). Nonreaders: Voluntary Reading, Reading Achievement, and the Development of Reading Habits. In C. Temple and P. Collins (Eds.), Stories and Readers: Perspectives on Literature in the Elementary Classroom (pp. 157-169).
Worksheet for Cross-checking
1. The cat barked when it found its bone.
2. Sally the mouse ate a piece of cheese.
3. Johnny does not like her dress.
4. I bought my dog at the grocery store.
5. Mary wore her swimsuit in the summertime. |
It was about 14 million years ago when the peaks of several volcanoes broke the surface of the Pacific Ocean and formed the initial Galapagos Archipelago. The Galapagos hotspot is located in the western part of Galapagos. A hot spot is a place where the magma in the Earth is hotter than usual.
The Galapagos Islands were created by volcanoes over the course of ages, born of the fires deep within the Earth's core. However, the volcanoes in the archipelago are different. The islands sit on the Nazca Plate, one of the plates that form the Earth's crust, in the middle of the Pacific Ocean. A particularity of this plate is that it does not collide with any other, and it moves slowly, around 5 cm per year. Each time the plate moves, new volcanoes rise up. This is how the Galapagos Islands were formed, about 600 miles west of the coast of Ecuador. A single volcano formed each island, with the exception of Isabela Island, the largest of all, which was formed by the union of six different volcanoes above sea level.
Volcanoes, lava and geology are fundamental to understanding the uniqueness of Galapagos. Most people know that the inside of the Earth is made of magma, or molten rock, which is very hot. So why don't we get burned? Because Earth's upper crust is cool and protects us from the heat. But the crust is not one big solid piece, like the coating on a candy M&M. Rather, the crust comes in several different pieces called "plates" that move around and occasionally crash into one another.
In the last 200 years, a remarkable 50-plus eruptions have occurred. They have damaged the unique species of the islands, but they have also created new land there. The islands that are farther from the hot spot are the oldest, and those closest to it are the youngest. For example, San Cristobal Island was formed about 4 million years ago, and the young Fernandina Island is believed to be less than 700,000 years old.
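As a rough back-of-the-envelope illustration (not part of the original article), the quoted plate speed of about 5 cm per year can be combined with those island ages to estimate how far the plate has carried each island away from the hotspot:

# Rough check of plate travel, using only the figures quoted above.
CM_PER_KM = 100_000          # centimeters in a kilometer
plate_speed_cm_per_year = 5  # speed quoted in the text

for name, age_years in [("San Cristobal", 4_000_000), ("Fernandina", 700_000)]:
    distance_km = age_years * plate_speed_cm_per_year / CM_PER_KM
    print(f"{name}: about {distance_km:,.0f} km of plate travel in {age_years:,} years")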
Current Volcanic Activity
The eastern Galapagos Islands are no longer volcanically active. Some of them are very old, and have nearly been reclaimed by the sea. Genovesa, for example, is a small island and it’s all that remains of a once-enormous volcano. It’s no longer active: in fact, you may get to snorkel in the volcano’s crater!
The western islands, on the other hand, are still quite active, as they are still over the hot spot. The volcanoes on Isabela and Fernandina still erupt regularly.
Recent Volcanic Activity
In April of 2009, La Cumbre Volcano on Fernandina erupted, sending smoke, gas and ash high into the sky and endangering thousands of animals including Galapagos Marine Iguanas and Penguins. Although the volcano was erupting, visits to the other side of Fernandina Island continued!
In May of 2008, Cerro Azul on Isabela erupted briefly.
In October of 2005, Sierra Negra (Isabela) erupted, shooting lava and ash into the sky. It had been dormant since 1978.
One noteworthy volcanic event took place in 1954, when a sudden volcanic event caused the underwater geography off of Isabela Island to shift. As a result, Urbina Bay was created when a section of the ocean floor was suddenly pushed above water. Reports from ship’s captains said that the area reeked of dead fish and marine life for weeks. It happened so fast that the sea animals could not escape! It is possible to visit Urbina Bay and you can still see some coral formations there along the trail.
Is the Volcanic Activity Dangerous?
Not really. Most of the tourism sites in Galapagos are far away from any volcano that might be dangerous. The volcanoes can be dangerous for the animals, however: giant tortoises occasionally get burned by lava or hot ash, and other animals may lose habitat. The rare Galapagos pink land iguanas, which inhabit Isabela's Wolf Volcano, are considered at risk because their numbers and habitat are so small that an inopportune eruption could wipe them out.
Most Galapagos visitors come from around the world to see the fearless wildlife or to dive in the crystal blue Galapagos waters.
The Galapagos Islands are a place where you can discover pristine wildlife, geology, and uniqueness. You will have a once-in-a-lifetime experience. |
Newswise — WINSTON-SALEM, N.C. – March 21, 2019 – Congenital heart defects (CHDs) are abnormalities in the structure of the heart that develop before birth. They are the most common type of birth defect, affecting approximately 1 percent of all newborns, or roughly 40,000 per year in this country.
These defects can have a significant impact on blood flow through the heart and out to the rest of the body. Common types are holes in different areas of the heart, too-narrow blood vessels and leaky valves. In more severe forms, parts of the heart may be poorly formed, out of place or missing altogether.
Thanks to significant advances in methods of detection, diagnosis and treatment, these congenital defects are not as deadly as they once were. According to the federal Centers for Disease Control and Prevention, the present-day rates for survival to age 18 are about 95 percent for babies born with mild forms of CHD and about 70 percent for those born with severe defects. (By way of comparison, as recently as 2005 the survival rate to age 1 for newborns with severe CHDs was only 69 percent.)
That means the population of people with congenital heart defects is growing. Researchers estimate that in the United States there are currently about 1 million infants, children and adolescents and 1.4 million adults living with some form of CHD. Most of these people (or their parents) are aware of their condition. Most, but not all.
That’s because while congenital heart defects are present at birth, they don’t necessarily produce signs or symptoms until later in life, sometimes much later. And there’s no good way to tell when – or even if – they’ll become apparent.
“The heart is an amazing organ with many moving parts and there are a lot of possibilities as the heart is developing for something to not connect right, so there are a lot of different congenital defects, probably more than 30,” said Sanjay Gandhi, M.D., an interventional cardiologist at Wake Forest Baptist Medical Center. “Some of these are identifiable right away, or they manifest in early childhood or maybe in adolescence. But otherwise there are too many variables to predict when someone with CHD might experience symptoms or effects.”
These variables include the type of defect, its location, its severity, the function(s) it affects and whether it is a simple or complex defect. Other factors are an individual’s age, overall health and lifestyle habits, especially regarding diet and exercise.
“Many of the lifestyle choices that can increase the risk of other cardiovascular issues can have an impact on congenital defects later in life,” Gandhi said.
And early intervention is no guarantee of immunity: Even congenital heart defects that are identified and treated in childhood can resurface and cause problems in adulthood.
Among the most common problems associated with CHD in adults are arrhythmia (abnormal heart rhythms), stroke, congestive heart failure, pulmonary hypertension (high blood pressure in the lungs) and atherosclerosis (hardening or narrowing of the arteries).
Fortunately, in the majority of cases these complications are preceded by warning signs such as shortness of breath, exertion fatigue, dizziness or fainting and heart palpitations.
“These are also signs of other heart problems and may not have anything to do with a congenital heart defect,” Gandhi said. “But if you have any of these symptoms and they’re life-impacting – you’re having trouble doing any of the simple, everyday things you could do easily a year ago – you should definitely inform your primary health care provider and explore whether you need to see a heart specialist and have some cardiac testing.”
The most frequently employed methods of detecting heart abnormalities are electrocardiogram (EKG), which measures the heart’s electrical activity, and echocardiogram, which uses sound waves (ultrasound) to produce a picture of the heart. Depending on what those tests reveal, doctors may also turn to blood tests, chest X-rays and CT or MRI scans to accurately evaluate an individual’s condition.
And if it is determined that the problem is due to a congenital heart defect?
“Often times the patients are very surprised that the problem is something they’ve been living with their entire lives,” Gandhi said. “On the other hand, sometimes they realize that things weren’t always right with their breathing or stamina, that they couldn’t keep up with their friends on the soccer field or in the swimming pool when they were younger, that maybe this problem with their heart did manifest earlier than they had thought.”
The treatments for CHD vary, based on the nature and severity of the defect. Some patients can be treated with medications, which cannot repair a structural abnormality but can relieve symptoms and reduce the risk of complications. Other individuals require one or more surgical procedures, which can be either minimally invasive (such as percutaneous catheterization) or traditional (open heart surgery). Virtually all instances call for follow-up monitoring and management.
Cardiologists and heart surgeons who treat adults are generally more familiar with acquired heart disorders than CHD. (Gandhi estimated that less than 10 percent of the minimally invasive procedures he performs involve patients with congenital heart problems.) At Wake Forest Baptist, the adult heart specialists work closely with the pediatric cardiology experts at the medical center’s Brenner Children’s Hospital, who are more familiar with congenital defects, in identifying, evaluating and treating these conditions in adults.
“We’ll do whatever’s needed to best fix the problem,” Gandhi said. “And it’s all going to be tailored specifically to the needs of the individual patients.” |
A tarball is a blob of petroleum that has been weathered after floating in the ocean. When crude oil (or a heavier refined product) floats on the ocean surface, its physical characteristics change. During the first few hours of an oil spill, the oil spreads into a thin slick. Winds and waves tear the slick into smaller patches that are scattered over a much wider area. Various physical, chemical, and biological processes change the appearance of the oil. These processes are generally called weathering.
Initially, the lighter components of the oil evaporate much like a small gasoline spill. In the cases of heavier types of oil, such as crude oil or home heating oil, much of the oil remains behind. At the same time, some crude oils mix with water to form an emulsion that often looks like chocolate pudding. This emulsion is much thicker and stickier than the original oil. Winds and waves continue to stretch and tear the oil patches into smaller pieces, or tarballs. While some tarballs may be as large as pancakes, most are coin-sized (a relatively large tarball is shown in the photo above). Tarballs are very persistent in the marine environment and can travel hundreds of miles.
Weathering processes eventually create a tarball that is hard and crusty on the outside and soft and gooey on the inside, not unlike a toasted marshmallow. Turbulence in the water or beach activity from people or animals may break open tarballs, exposing their softer, more fluid centers. Scientists have not been very successful at creating weathered tarballs in the laboratory and measuring the thickness of the crusty outer layer. Therefore, we don't know how much energy is needed to rupture a tarball.
We do know that temperature has an important effect on the stickiness of tarballs. As air and water temperatures increase, tarballs become more fluid and, therefore, sticky--similar to an asphalt road warmed by the summer sun. Another factor influencing stickiness is the amount of particulates and sediments present in the water or on the shoreline, which can adhere to tarballs. The more sand and debris attached to a tarball, the more difficult it is to break the tarball open. These factors make it extremely difficult to predict how long a tarball will remain sticky.
For most people, an occasional brief contact with a small amount of oil, while not recommended, will do no harm. However, some people are especially sensitive to chemicals, including the hydrocarbons found in crude oil and petroleum products. They may have an allergic reaction or develop rashes even from brief contact with oil. In general, we recommend that contact with oil be avoided. If contact occurs, wash the area with soap and water, baby oil, or a widely used, safe cleaning compound such as the cleaning paste sold at auto parts stores. Avoid using solvents, gasoline, kerosene, diesel fuel, or similar products on the skin. These products, when applied to skin, present a greater health hazard than the smeared tarball itself.
There is no magic trick to making tarballs disappear. Once tarballs hit the beaches, they may be picked up by hand or by beach-cleaning machinery. If the impact is severe, the top layer of sand containing the tarballs may be removed and replaced with clean sand. |
Why is Aristotle considered one of the greatest minds in Western history?
The system of philosophy that Aristotle (384–322 B.C.) developed became the foundation for European philosophy, theology, science, and literature. The Aristotelian system may be so much a part of the fabric of Western culture that the only effective way to describe his philosophy is through example.
Among his writings on logic is Organon, meaning “tool” or “instrument.” Here he defines the fundamental rules for making an argument. While other thinkers may well have formulated the argument before Aristotle, no one had made a systematic study of it. In Organon, Aristotle puts forth a method for coming to a conclusion based on circumstantial evidence and prior conclusions rather than on the basis of direct observation. This deductive scheme, called a syllogism, is made up of a major premise, a minor premise, and a conclusion. For example: every virtue is laudable (major premise); courage is a virtue (minor premise); therefore courage is laudable (conclusion). (It is worth noting, however, that the belief in deductive logic was later rejected by English philosopher Sir Francis Bacon [1561–1626] in 1620, in favor of an inductive system, or one that is based on observation.)
In Poetics, Aristotle expounded upon his literary views. He maintained that epic and tragedy portray human beings as nobler than they truly are, while comedy portrays them as less noble than they are. In order to explain how tragedy speaks to the emotions of the spectator, Aristotle introduced the idea of catharsis. He separated tragedy from epic with the distinction that tragedy maintains unity of plot (later translated as unity of plot, time, and place), while the epic does not. Because of the keen understanding evident in Poetics, the work has illuminated literary criticism since antiquity.
In addition to logic and rhetoric, Aristotle wrote on natural science (Physics, On the Heavens, Parts of Animals, and On Plants) and on ethics and politics (Politics). His great philosophical work was Metaphysics, so named because, in the body of his works, it comes after (the Greek word for which is meta) the work Physics. Metaphysics as a philosophy is the study of substance, or the nature and structure of reality. It is considered one of five major branches of Western philosophy. In modern thought, metaphysics can include many disciplines, such as cosmology (the study of the origins and structure of the universe) and theology (the study of religion). Most of the great philosopher’s writings are compilations of notes from lectures he delivered to his students at the Lyceum, also called the Peripatetic School, in Athens. Among his pupils there were Greek leaders, including Alexander the Great (356–323 B.C.). |
Future-focused history is the commonsense idea that knowledge of past historical experience can inform future judgment in the realm of human affairs. It is based on two obvious realities:
a. History is the intellectual discipline that describes past events in the realm of human affairs.
b. Past experience is the primary indicator of future outcomes.
This is a basic tenet of Bayes’ theorem of statistical probability, perhaps the best-accepted theorem in the field of statistics.* Throughout our lives, and on a daily basis, we humans routinely rely on past experience to inform future judgments that direct our subsequent decisions and actions. We reach for a hammer instead of a wrench to drive a nail because this choice has produced favorable outcomes in the past. Similarly, history can supply useful information about what has worked and not worked in the vitally important realm of human affairs.
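For readers unfamiliar with it, here is a minimal numerical sketch of Bayes' theorem (the probabilities below are hypothetical and are not from the original article; they only illustrate how a prior judgment based on past experience is updated by new evidence):

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_h = 0.30            # prior: how often a given approach has worked in the past
p_e_given_h = 0.80    # likelihood of seeing this evidence if the approach works
p_e_given_not_h = 0.20

# total probability of observing the evidence
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# posterior: the revised judgment after taking the new evidence into account
p_h_given_e = p_e_given_h * p_h / p_e
print(f"Prior: {p_h:.2f}  ->  Posterior: {p_h_given_e:.2f}")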
Extending Future-Focused History to Education. Future-Focused History education is based on two additional realities:
a. Formal education (schooling) exists to impart important knowledge of the world that can help students and society to function effectively in the future.
b. School subjects impart such knowledge by identifying general principles of how the world works derived from their subject matter—principles that can be applied in the future, such as addition and subtraction in mathematics, grammar and punctuation in language, and photosynthesis and gravity in science. General principles of knowledge offer the most practical and the most powerful means to learn from past experience. It might be said that disciplines of all kinds, from medicine to fly-fishing to small engine repair, exist for the express purpose of identifying, systematizing, and imparting their general principles of knowledge.
– History education can fulfill the purpose of education as other school subjects do only by imparting general principles of knowledge derived from its subject matter, principles that can be usefully applied in the future.
– History has been supplying humans with useful principles of knowledge for well over two millennia, since the time of Thucydides in Greece and Sun Tzu in China.* The idea that history—like other disciplines—possesses general principles of knowledge was reaffirmed during the Renaissance by such eminent thinkers as Niccolò Machiavelli and David Hume.* Respected contemporary thinkers continue to identify principles of historical knowledge and the recurring patterns upon which they are based. (See “Future-Focused History is alive outside history class.”)
– Nonetheless, general principles of historical knowledge are missing from the official curriculum taught to students in our schools and colleges.*
– Unlike professionals in other intellectual domains—who dedicate their careers to uncovering general principles derived from their fields’ subject matter—historians concentrate on describing events of the past rather than identifying principles useful in the future.
– The agenda of history education has traditionally been set by academic historians, an agenda that does not include general principles of history.
Why might academic historians prefer not to acknowledge principles of historical knowledge?
– To acknowledge the existence of such principles would be to admit that historical learning presently lacks the fundamental component of other intellectual disciplines—the component that makes these disciplines useful in life.
– To acknowledge such principles could be seen to imply that academic historians have been remiss, if not negligent, in overlooking general principles of history up to now.
– To acknowledge such principles would be to recognize that an essential aspect of historical knowledge lies beyond the scope of academic historians, which could throw into question their privileged role in directing the nature of history education.
– History teachers are left to teach about one-time events from the past, most of which have little or no relevance to the future.
– History education fails to effectively fulfill the fundamental purpose of education.
– History is in decline in the nation’s schools and colleges.*
– Students and society may learn about history, but they seldom learn from history, so the cycle of historical ignorance perpetuates indefinitely.
A reasonable response: If academic historians wish to confine their efforts to describing events of the past, that’s their business. Then the task of identifying general principles of historical knowledge falls to history teachers who bear the professional responsibility to impart important knowledge of the world that can help students and society to function effectively in the future. That’s their business.
Under existing conditions, history education may be unable to survive and thrive over the longer term—unless history teachers take charge of history schooling and supply historical learning that is relevant to the future.
. . .
- More about general principles of historical knowledge
- Future-Focused History is alive outside history class
- Sympathy for the historian
- Can we stop the decline of history education?
*Factual evidence for the starred statements in the above article is provided in the book Future-Focused History Teaching: Restoring the Power of Historical Learning by Mike Maxwell. |
MALI TO MECCA, MANSA MUSA MAKES THE HAJJ
Students explore the historical event when Mansa Musa, Ruler of Mali, made the Hajj (or holy pilgrimage) to Mecca in 1324 AD. They create Web pages or PowerPoint presentations, maps, charts, posters, and oral presentations with bibliographies.
See similar resources:
Mansa Musa and Islam in Africa
Delve into the world of Malian ruler Mansa Musa, the development and culture of African kingdoms such as the Swahili civilization, the use of oral tradition, and the spread of Islam across trade routes. The narrator does an excellent job...
10 mins 9th - 12th Social Studies & History CCSS: Adaptable
Mansa Musa, One of the Wealthiest People Who Ever Lived
Discover the complex, rich legacy of fourteenth century African king Mansa Musa, who is said to have been one of the wealthiest individuals who has ever lived. Young historians will learn about trade routes between the Mediterranean and...
4 mins 7th - 12th Social Studies & History CCSS: Adaptable
Trekking to Timbuktu: Restoring the Past
Young scholars investigate the environmental factors that threaten Timbuktu. Students investigate what measures are being taken to restore their mosques, and the condition of their ancient manuscripts. Young scholars discover information...
6th - 8th Social Studies & History
Sundiata, Mali’s Lion King
Learners investigate the history of Mali. In this African cultures lesson, students research the impact of Sundiata Keita as king of Mali, recognize the significance of historical Malian festivals, and create character masks to be worn...
6th - 12th Visual & Performing Arts |
You or your child have probably played charades. In a traditional game of charades, one person draws a word, for example, "angry." Without talking, that person acts out the word - shaking a fist, stomping, growling, any actions that seem angry. Eventually, someone in the group guesses correctly.
How does someone with a visual impairment play charades? They can play it well. Try these versions.
Follow the game's instructions below, or
- Your leader chooses a word and lets all players know it, except one person who is "the actor."
- Everybody else helps the actor act out the word by moving the person's body into the right position or guiding the person to make a movement.
- The actor has to guess what the word or phrase is.
- Your leader chooses a word and lets only two players see it.
- These two players work together. One of the players acts out the word or phrase where only his partner can see him - maybe around a corner or standing behind the other players.
- The partner is the only one who can see what is being acted out and describes the actions to the rest of the players.
- The rest of the players have to guess the answer, based only on the description.
Suggested words for charades.
- Hot Salsa |
|Human Rights Introduction
A Year for Human Rights
In April, in Bogotá, Colombia, the Organization of American States adopted the American Declaration of the Rights and Duties of Man, a document that pre-empted the Universal Declaration of Human Rights by eight months. The Declaration stated that ‘All men are born free and equal, in dignity and in rights, and, being endowed by nature with reason and conscience, they should conduct themselves as brothers one to another’ and affirmed that the juridical and political institutions of all the states of the OAS should work to protect those dignities and rights.
On 9th July, the International Labour Organization adopted the Convention on Freedom of Association and Protection of the Right to Organize. The convention recognizes the right of workers to organize and form federations without discrimination from employers.
On 9th December, the Convention on the Prevention and Punishment of the Crime of Genocide was adopted by the United Nations General Assembly. It built on the work of Polish lawyer Raphael Lemkin and defined genocide as the ‘…intent to destroy, in whole or in part, a national, ethnical, racial or religious group…’ The Convention exhorts all members of the UN to report and punish acts of genocide, whether they are committed during peace or wartime.
On 10th December, the United Nations General Assembly adopted the Universal Declaration of Human Rights. Along with its adoption by the General Assembly, the United Nations charged its member states to disseminate and proclaim the contents of the Declaration within their borders. The preamble of the declaration states that ‘…recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world’. |
Dyspnea (shortness of breath) is found in asthma and COPD and is an important symptom for diagnosis. Shortness of breath severity can range from person to person, but is commonly described as tightening of the chest or feelings of suffocation. Shortness of breath can occur for many reasons, and is often a symptom of many other conditions. However, if it is tied with a respiratory disease like asthma or COPD, it can also be chronic.
In a healthy individual, shortness of breath can occur from strenuous exercise, extreme temperatures, changes in altitude or even from obesity. If shortness of breath occurs suddenly it could be a sign of something more serious and medical attention should be sought right away.
Dyspnea in COPD
COPD – chronic obstructive pulmonary disease – is a condition that causes breathlessness, fatigue and difficulty breathing. Even taking a few steps may be enough to worsen symptoms related to COPD. COPD damages the air sacs in the lungs, causing them to become too large, which leads to breathing difficulties as well as inflammation and irritation in the lining of the lungs – known as emphysema and bronchitis.
Aside from dyspnea, other symptoms of COPD include:
- Tightening and pain in the chest
- Excess mucus in the lungs
- Fatigue and feeling frequently tired
- Frequent respiratory infections
- Sputum when you cough
- Unintended weight loss
Dyspnea in asthma
Asthma is another respiratory disease that can lead to dyspnea. Poor dyspnea perception in asthmatics could increase the risk of asthma attacks, and the more severe the asthma, the poorer dyspnea perception tends to be.
One study looked at 71 participants who underwent induced sputum testing while nitric oxide was measured and breath condensates were collected. Perception of dyspnea was recorded with the BORG–VAS/FEV1 slope, before and after testing, and correlated with the stage of asthma, inflammatory markers, age and a depression scale.
The researchers found that as asthma worsened, dyspnea perception decreased, and that this decrease was correlated with aging and depression. Airway inflammation, too, played a role in the decline of dyspnea perception.
The researchers concluded that age, depression, inflammation and severity of asthma are all factors of a worsened perception of dyspnea.
Causes and symptoms of dyspnea
There are many causes of dyspnea, from everyday activities to respiratory infections or heart-related problems. Causes of dyspnea include:
- Respiratory tract infections like pneumonia
- Allergic reaction
- Blockage in the respiratory tract
- Blood clot
- Collapsed lung
- Interruptions of blood flow to the heart
- Heart failure
- Interstitial lung disease
- Being out of shape, or unfit
- Pulmonary hypertension
- Anxiety, panic attack
How a person describes shortness of breath may vary depending on the cause. Some may describe it as if they are “hungry for air” and others may indicate they “cannot breathe deeply enough.” Along with noticing shortness of breath, you should pay close attention to other symptoms that may be occurring simultaneously to help determine if the cause is serious or acute and will only last for a short matter of time.
Treatment and prevention of dyspnea
Treating dyspnea is most successful when the underlying condition is treated, like losing weight if you are obese, reducing stress and anxiety, or using an inhaler to better control asthma. These are just some examples of how dyspnea can be better managed.
Other treatments may involve the use of medications, such as bronchodilator drugs to open up the airways, anti-inflammatory drugs, opiates, anti-anxiety drugs, and supplementary oxygen.
Non-drug treatments may include respiratory rehabilitation, which involves exercise programs and therapies to help improve the use of the lungs, or even yoga, which utilizes many different breathing techniques and works as a great stress-reliever as well.
In order to determine the best mode of treatment for dyspnea, it’s important to understand what the cause of it is. If you’re concerned about your shortness of breath, speaking with your doctor can help narrow down the culprit.
|Parliament, the Judiciary and the Executive are the three key arms of the state, with well-defined spheres of authority under the Constitution. Parliament represents the law making arm, the Executive is responsible for enforcement of laws, and the Judiciary is in charge of interpretation of the Constitution and laws as well as dispute resolution. In this note, we examine how the relationship between Parliament and the Judiciary has evolved over the years.
Parliament and the Judiciary (1051 KB) |
When the black vine weevil (Otiorhynchus sulcatus), a foe to a wide variety of garden plants, makes an appearance in your landscape, it is time to snap into action. Infestations are serious and may lead to highly visible symptoms and damage in addition to destruction that is virtually impossible to recognize until it is too late. Monitor your plants regularly and take immediate management measures if a black weevil problem occurs to prevent severe repercussions.
Also referred to as the taxus weevil, black vine weevils are small blackish-gray pests with snouted faces and a body length measuring approximately three-eighths of an inch. In their larval form, the weevils display white C-shaped bodies with brown heads and a wrinkled appearance. Although these weevils have wings, they are unable to fly. They feed on a wide variety of plants, including trees, shrubs, vines and flowers. Some common hosts include yew and hemlock trees, rhododendrons and liquidambars.
Black vine weevils feed on plants both in their larval and adult forms. As larvae, these pests begin feeding on plant roots during the spring and continue into summer. As they feed, symptoms and damage include girdling, yellowing of foliage, stunted growth and plant death. Damage caused by adults is much less severe. In their mature form, the weevils chew on foliage, taking notched bites out of leaf margins throughout spring and summer. Although the bites typically cause little harm to the plant, they are unsightly.
Provide optimal care to your garden, as vigorous, healthy plants have a better chance of avoiding and recovering from infestations than those that are neglected. Make efforts to provide your plant with the necessary sun exposure. For example, if a plant requires full sunlight but is partially shaded by an overhanging plant, cut the larger plant back to avoid inhibiting sun absorption. In addition, taking care of your plant's soil is essential. Determine your plant's water needs, as both under-watering and over-watering can be disastrous. Improving drainage is as simple as incorporating organic content, such as compost, into the top layers of your plant's soil. Always select plants that thrive within your region's U.S. Department of Agriculture plant hardiness zones for best development.
Before using chemicals, consider the use of parasitic nematodes. These microscopic organisms are natural enemies of black vine weevils. Applications of nematodes in addition to thorough soil watering provide biological control of larval infestations, while chemical use has not shown substantial control over larval weevils. However, for adult management, applications of foliar insecticides are effective. After first attempting to pluck weevils by hand and dropping them into a bucket of soapy water, treat a continuing problem using an insecticide with an active ingredient such as acephate or fenvalerate. Make applications in the evening, when adults are active, and reapply after three weeks for thorough control.
- Penn State Cooperative Extension: Black Vine Weevil Fact Sheet
- Ohio State University Extension: Black Vine Weevil (And Other Root Weevils)
- University of Rhode Island Landscape Horticulture Program: Black Vine Weevil
- University of Connecticut Integrated Pest Management: The Black Vine Weevil
- University of California IPM Online: Black Vine Weevil -- Otiorhynchus Sulcatus |
Milky Way Galaxy Facts with Pictures
The Milky Way Galaxy is the name of the galaxy where our own solar system is located. In shape, the Milky Way is a kind of spiral, and galaxies like it are the building blocks of our universe; that is, our universe is a conglomeration of numerous galaxies. The name Milky Way comes from the Latin Via Lactea, which has Greek roots and means a pale band of light.
The Size of Milky Way Galaxy
In 1917, Harlow Shapley made an estimate of the size of the Milky Way, and it is still broadly valid. He determined that our galaxy is around 100,000 light years in diameter and that our Sun is almost 30,000 light years from the center of the galaxy. The most distant stars of the galaxy are found to be almost 72,000 light years from the galactic center, which means that a journey at the speed of light from the center of the Milky Way toward its most distant stars would take 72,000 years, and a journey at the same speed from our planet toward the center of the Milky Way would take approximately 30,000 years.
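To put those distances into more familiar units, here is a quick conversion sketch (not part of the original article; it assumes the standard value of roughly 9.46 trillion kilometers per light year):

# Convert the distances quoted above from light years to kilometers.
LIGHT_YEAR_KM = 9.46e12  # approximate kilometers in one light year (assumed standard value)

for label, ly in [("Galaxy diameter", 100_000),
                  ("Sun to galactic center", 30_000),
                  ("Center to most distant stars", 72_000)]:
    print(f"{label}: {ly:,} light years is roughly {ly * LIGHT_YEAR_KM:.2e} km")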
The Mass of Milky Way
Estimating the total mass of a galaxy is an extremely complicated matter, which takes into account the velocity and position of hydrogen gas, its distance from the center, gravitational pull, and so on. The mass is found by constructing a mathematical model of the galaxy considering all the components within it. In 1960, an estimate put the total mass of our galaxy at 200,000,000,000 times the mass of the Sun. However, in 1980, another estimate was made, which included the mass of invisible matter such as dark matter, and the prediction was that the total mass of the Milky Way is around 1,000,000,000,000 times the mass of our Sun, about five times higher than the previous estimate of the Milky Way's mass.
The nature of undetected matter such as dark matter and certain subatomic particles remains an open question, and this uncertainty contributed to the significant change in the 1980 estimate. It will be extremely difficult to identify the mass of all the undetected matter in our universe, which might yet change the latest predicted mass of the Milky Way.
Mass of the Sun: 1.98892 × 10^30 kilograms
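Using the solar mass value above and the two estimates quoted earlier, a quick back-of-the-envelope comparison (an illustrative sketch, not part of the original article) looks like this:

# Compare the 1960 and 1980 estimates of the Milky Way's mass in kilograms.
SOLAR_MASS_KG = 1.98892e30           # value quoted in the article

estimate_1960 = 200_000_000_000      # solar masses
estimate_1980 = 1_000_000_000_000    # solar masses, including dark matter

print(f"1960 estimate: {estimate_1960 * SOLAR_MASS_KG:.2e} kg")
print(f"1980 estimate: {estimate_1980 * SOLAR_MASS_KG:.2e} kg")
print(f"Ratio of the two estimates: {estimate_1980 / estimate_1960:.0f}x")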
Major components of Milky Way
A galaxy is mainly composed of numerous stars and stellar populations. Not all stars have the same characteristics; much like Hollywood stars on our planet, they differ mainly in mass, luminosity and orbital characteristics. That is why not all stars end their lives the same way: only stellar remnants above a critical mass of about 1.44 times the mass of the Sun, known as the Chandrasekhar limit, can collapse further rather than remaining white dwarfs, and only the most massive stars end up as black holes. Though most of the stars in our Milky Way exist as single stars (like the Sun), there are a number of well-defined groups and clusters of stars, some with thousands of member stars. Star formations in a galaxy can be subdivided into three main categories (globular clusters, open clusters and stellar associations), distinguished mainly by age and number of stars. Apart from groupings of stars, there are some other important objects, such as emission nebulas and planetary nebulas. More or less, the following are the major components of the Milky Way galaxy:
- Star clusters and stellar associations
- Emission nebulas
- Planetary nebulas
- Supernova remnants
- Dust clouds
- Interstellar medium
1 Star clusters and stellar associations
Though most of the stars in our Milky Way exist as single stars (like the Sun), there are a number of well-defined groups and clusters of stars, some with thousands of member stars. Star formations in a galaxy can be subdivided into three main categories (globular clusters, open clusters, and stellar associations or moving groups), distinguished mainly by age and number of stars.
1.1 Globular clusters
They are the largest star clusters, containing the most massive collections of stars. There are about 130 globular clusters in the Milky Way, mostly spread spherically around it. These clusters are also the most luminous: a globular cluster's brightness is comparable to around 25,000 Suns. The masses of these objects are roughly 1,000,000 times the mass of the Sun. Almost all the clusters are extremely dense at the center and have radii of 10 to 300 light years.
1.2 Open clusters
They contain fewer stars and are less massive than globular clusters. They are called open clusters because they have a loose appearance in comparison to globular clusters. They are most concentrated at the center and gradually thin out toward the edge. They are less luminous than globular clusters; the brightest open clusters are about 5,000 times as luminous as the Sun. The average mass of an open cluster can be around 50 solar masses, and the number of stars can be anywhere between tens and thousands. An average open cluster is about 20 light years in diameter. Most open clusters are only one or two million years old, and only a few percent have reached ages over 1,000,000,000 years. So far, only about 1,000 open clusters have been found in our galaxy.
1.3 Stellar associations
In terms of age, they are younger than open clusters and consist of very loosely grouped stars. They appear in the regions of the galaxy where star formation takes place, mostly in the spiral arms of the Milky Way. They are very luminous objects, even brighter than globular clusters; their luminosity is about 1,000,000 times that of the Sun. But the lifetime of stars in stellar associations is very short, only a few million years. Stellar associations are very large; their average diameter is over 700 light years, while the mass of an average association is only several hundred solar masses. All stellar associations are so loosely structured that their gravitational pull is not enough to hold them together; after a few million years most disperse and become like unconnected stars of the Milky Way.
Supermassive Black Hole in the Milky Way Galaxy
2. Emission nebulas
Emission nebulas are cloud-like objects consisting of bright, diffuse gases and stars. In such a nebula the gas exists in ionized form; this state is due to the ultraviolet light emitted by hot stars inside the nebula. Since emission nebulas consist mostly of ionized hydrogen, they are also known as H II regions. Many of the emission nebulas are located 10,000 light years away from the center of the Milky Way.
3. Planetary nebulas
Planetary nebulas are gaseous clouds. They are so called because they almost resemble disk-shaped planets when viewed through a telescope. This type of nebula represents the end stage of the stellar life cycle. So far, over 1,000 planetary nebulas have been identified in our galaxy.
4. Supernova Remnants
They are another form of nebulous object found in our galaxy, also in the form of gas. They exist because of supernovas, the explosions that end the lives of massive stars. Though they almost look like planetary nebulas, they have three distinguishing characteristics that set them apart from other types of nebulas: larger mass, higher velocity and a shorter lifetime. An example is the remnant of the supernova observed in AD 1054. Inside these objects, particles move in spiral paths at nearly the speed of light, and their emitted radiation forms a flat spectrum of radio waves; this pattern of radiation is known as synchrotron radiation.
5. Dust clouds
They are located mainly in the plane of the Milky Way and are very conspicuous in the region of the galaxy's spiral arms. Dust clouds more than 2,000 light years from our Sun are not detectable with any optical telescope. Dust clouds can have masses of several hundred solar masses, and their maximum size can be about 200 light years.
Milky Way in 3D
Structure of Milky Way
If we look at our Milky Way in a 3D view, it can be considered a large spiral system with six separate parts:
- Central bulge
- Spiral arms
- Spherical component
- Massive halo
What is in the center of Milky Way
At the center point of our galaxy, there is a supermassive black hole, which is detectable only by radio waves. Near the center of this black hole, activity such as infrared radiation, X-ray emission and rapidly moving gases can be observed if you ever have the opportunity to visit this place. All the radio evidence gathered indicates that this black hole is pulling in material from outside the nucleus of the Milky Way. When gas clouds come near this black hole, its gravitational force turns them into a fast-moving rotating disk, which extends 5-30 light years! And can you guess about the |
TYPES OF TEXTS
1.What is text?
Text is any piece of writing.
This could be a letter, an email, a novel, a poem, a recipe, a note, instructions, an article in a newspaper or magazine, writing on a webpage or an advert.
When you are reading or writing any text think about the purpose of the text or why it has been written.
2. What might the purpose of a text be?
An advert might be trying to persuade you to buy something.
A letter from school might be to inform you about something.
A novel might describe somewhere or someone to you.
A car manual might instruct you how to do something to your car.
Depending on the purpose of the text, different methods will be used to get the message across to the reader.
A persuasive text is a text that really wants you to do something.
An advert might want you to buy something.
You might write a letter to persuade a friend to go on holiday with you, or to try and get off a parking ticket.
Persuasive texts might use:
•text in capital letters
•rhetorical questions (questions where no answer is needed)
•an emotional one-sided argument
SPECIAL OFFER! Buy today! Would you want to miss this SPECIAL offer? Phone NOW...
This is the Best Car you could ever ask for!
An informative text is a text that wants to advise or tell you about something.
A newspaper article might give you information about a health issue like giving up smoking.
A website might give you information about a movie, band or something that you are interested in.
A handout from school might be advising you about what your child will be doing during the next term.
Informative texts usually:
•give information in a clear way - introducing the subject and then developing it
Autumn term: Your child will be covering simple fractions during weeks 1-6.
An instructive text is a text that instructs or tells you how to do something.
A recipe wants to instruct you how to cook something.
A leaflet with a piece of furniture wants to tell you how to put it together or take care of it.
•are written as though the reader is being spoken to -
(although the word 'you' is not usually used)
•language is direct and unnecessary words are left out
•often use 'must' and 'must not'
•sometimes use diagrams or pictures to help understanding
Put all ingredients into bowl together. Mix the salad and the oil.
A descriptive text is a text that wants you to picture what they are describing.
A novel might want you to imagine the characters and see them in your mind.
A travel book will want you to see the country it is describing.
Descriptive texts usually:
•make use of adjectives and adverbs
•use comparisons to help picture it - something is like something
•employ your five senses - how it feels, smells, looks, sounds and tastes
The morning air was crisp and sharp as Sean walked down the road.
The pavement was slippery and cold beneath his feet like a slimy wet fish.
Mark the correct answer from the given options with an X.
1:What is the purpose of the following passage of text?
2.Minimise shock for casualty
3.Prevent infection - for casualty and between yourself and the casualty
4.Arrange for casualty to go to the hospital if necessary
___ To inform the reader that bleeding needs to be controlled.
___ To describe the scene of an accident.
___ To persuade the reader to attend a First Aid course.
___ To instruct the reader on what to do if they come across an accident.
2:What is the purpose of the following passage of text?
Bert Baxter was lying in a filthy-looking bed smoking a cigarette, there was a horrible smell in the room, I think it came from Bert Baxter himself. The bed sheets looked as though they were covered in blood, but Bert said that was caused by the beetroot sandwiches he always eats last thing at night.
___ To inform the reader not to smoke in bed.
___ To persuade the reader to always clean their sheets.
___ To describe Bert Baxter and his room.
___ To instruct the reader how to eat beetroot sandwiches.
3:Why might a personal loan company include the following line in their advertisement?
For under £100 a month you could borrow £5 000 immediately with no questions asked.
___ To instruct you to quickly contact the company to arrange the loan.
___ To persuade you to take out a loan for £5 000.
___ To inform you that your loan application will be accepted.
___ To describe the range of services offered by the loan company.
4:Is the following passage an example of instructive text?
SPECIAL OFFER FOR SHELL REFRIGERATOR SHOPPERS!
2 Tickets for the price of 1 if you book before 16th September.
5:The following passage is an example of informative text. Which of the following is the reader being informed about?
Introduction to Yoga.
FURBY COLLEGE OF CONTINUING EDUCATION
Description of Course:
This course is an introduction to the practice of yoga.
Aimed at beginners, yoga is great for men and women of any age or ability or fitness level.
___ That the course is suitable for complete beginners
___ That the course will lead you on to the Stage 2 course.
___ That yoga is particularly suitable for women.
___ That yoga is not suitable if you have a heart complaint.
6:Which of the following are not normally used in descriptive texts?
___ step by step action to be taken by the reader
___ comparisons to enable the reader to picture something
7:Instructive texts always use images to show what is to be done.
8:You should always consider the intended audience/reader when writing a document to be read by someone else.
9:Which of the following is not a piece of text?
___ an email
___ a newspaper article
___ a map
___ an advertisement
10:Which of the following is not an example of informative text?
___ a church newsletter
___ a recipe book
___ a doctors' surgery leaflet about services provided
___ an obituary in a local newspaper
Look carefully at the following images taken from advertising texts and try to read (or interpret) them:
Samsung “Express Yourself” Advertisement
Dettol Instant Hand Sanitizer Advertisement
“Reserved for Drunk Drivers” Advertisement
Stop’n Grow Bag Advertisement: German product that stops nail biting
3M Security Glass Advertisement: 3M was so sure their Security Glass was unbreakable, they put a large stack of cash behind it and shoved it in a bus stop |
By Derek Whitelock
U.S. cotton is considered to have some of the lowest levels of contamination in the world. However, that reputation is in jeopardy as complaints of contamination from domestic and foreign mills are on the rise.
Of particular concern for the United States is plastic trash that collects in cotton fields, black plastic film used as mulch, plastic twine typically used for hay baling, and plastic film used for round module wrap.
Education And Research Efforts
So, what is the cotton industry doing to keep plastic contamination out of U.S. cotton?
The answer has two parts: education and research.
First, there is a nationwide campaign led by the National Cotton Council to educate the industry about plastic contamination and how to prevent it. Second, Cotton Incorporated supports collaborative research efforts at the U.S. Department of Agriculture ginning laboratories in Texas, New Mexico and Mississippi; the USDA Cotton Structure and Quality lab in New Orleans; and at Texas A&M University, Oklahoma State University and the University of North Texas.
These efforts focus on detection using imaging/optical techniques and separation using physical/electrostatic methods. The Cotton Foundation also has requested proposals and will be assisting with future research.
Contaminants In The Field And Gin
The best way to keep plastics out of U.S. cotton is to prevent them from entering the cotton stream in the first place. One research project is investigating harvester-mounted cameras to detect contamination in the field and warn the operator before it enters the harvester and ends up in the module.
Another research effort gearing up next year will look at using unmanned aerial vehicles or drones mounted with cameras to fly over the cotton field and detect and record the location of contaminants. These coordinates can then be transmitted to the cotton producer’s smartphone and located manually, or an autonomous land vehicle can be dispatched to retrieve the plastics before harvesting.
Research efforts on detection methods at the gin also are underway. A color camera system is being developed to view the backside of the module feeder cylinders to detect plastics wrapped on the spikes or pieces of plastic that slip through.
For contaminants that make their way into gin machinery, a prototype system is being tested and refined. It detects colored contaminants in seed cotton in places where the flow is slower and more spread out, such as in rectangular ducts between cylinder cleaners and stick machines. For neutral colored or transparent plastics, researchers are investigating infrared detectors and light sources that sense differences in chemical composition rather than color.
Assuming these detection methods are successful, what can be done to extract plastic contaminants from the cotton? Previous research shows that with current gin machinery, about 17 percent of plastics that enter the gin end up in the bale. Research is underway to modify this machinery to more effectively remove contaminants.
In addition, extraction methods using jets of air to fluff the cotton and float lighter contaminants away are under investigation. Another innovative concept exploits the differences in the static electric charge that cotton and plastics acquire to develop a “plastic magnet.”
These differences cause plastics and cotton to be attracted to opposite charges when exposed to a high voltage electric field. It is anticipated this behavior can be used to encourage the lighter plastics to move away from the seed cotton.
These projects are all in different levels of development. Whatever the development phase, researchers and scientists working on these concepts at USDA facilities and the universities are committed to helping the U.S. cotton industry find solutions to the current cotton contamination problem.
Derek Whitelock, acting research leader, USDA-ARS Southwestern Cotton Ginning Research Laboratory, Mesilla Park, N.M., contributed this article. |
Summary: Students continue their pyramid-building journey, acting as engineers to determine the appropriate wedge tool to best extract rock from a quarry and cut it into pyramid blocks. Using sample materials (wax, soap, clay, foam) representing rock types that might be found in a quarry, they test a variety of wedges made from different materials and with different degrees of sharpness to determine which is most effective at cutting each type of material.
An important job for any engineer is to assign appropriate tools for building, machining and manufacturing. A wrong choice may result in a poor quality final product and/or a tool that wears out or breaks quickly. For example, a dull wooden wedge used to cut solid rock is only successful for a short time before it breaks or wears down to the point at which it no longer cuts. Engineers calculate the amount of force required to cut through a given material; knowing this helps them choose the most appropriate tool material that will cut through the rock material.
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
General knowledge of pyramids and geometric angles. Familiarity with the six simple machines introduced in Lesson 1 of this unit.
This unit's lesson, Pyramid Building: How to Use a Wedge, does not need to be completed before students do this activity, but the pyramid-building storyline introduced in that lesson helps students understand what they are trying to achieve with this activity.
The pyramid site choosing activity in Simple Machines: Lesson 1, is also not required, but it provides students with a background so they understand what a rock quarry is, and from where they are trying to extract the rock and why.
After this activity, students should be able to:
- Determine how different materials break using differently-angled wedges.
- Describe why simple machines are used and how a wedge exerts a force.
- Demonstrate why material selection and material science is important to engineers.
- Understand that because simple machines do their jobs so well they are still used today.
Each group needs:
- 1 soap or wax block (~5 cm x 5 cm)
- 1 clay block (~5 cm x 5 cm)
- 1 foam block (~5 cm x 5 cm)
- 1 Styrofoam block (~5 cm x 5 cm)
- Wooden wedge (approximately the size of a typical doorstop [5 cm x 10 cm x 5 cm])
- Balsa wood wedge (~3 cm x 3 cm x 6 cm)
- Plastic wedge (~3 cm x 3 cm x 6 cm)
- Styrofoam/foam wedge (~3 cm x 3 cm x 6 cm)
For the entire class to share:
- A variety of demonstration wedges with different angles and materials. For example, a plastic knife, putty knife, table knife, metal screwdriver or chisel (and a small hammer to use with the chisel).
- Some demonstration quarry material. For example, a half stick of cold margarine, a brick, etc.
The wedge is a simple machine that helps make our lives easier. Does anyone know how a wedge helps us do work? (Answer: A wedge allows us to split materials apart much more easily than we could by hand.) The wedge gives us a mechanical advantage. This means that we have to exert less force to complete the task, but we usually have to go a greater distance. For example, when you use an axe to cut through a log, the job is much easier than if you tried to break the log just with your hands. However, you also have to make many axe cuts to get through the wood. The axe allowed you to cut through the wood more easily, but you had to make more cuts to do it.
The wedge is slightly different from other simple machines because when you are using a wedge as a tool, another object is often required to help. For example, if you were using a nail, which is a very sharp-angled (acute) wedge, a hammer would be required to force the nail into the wood. This is similar to the way we believe ancient pyramid construction was done. The pyramid builders found large rock quarries filled with different types of rock. They had to figure out a way to break the rocks away from the quarry wall into large bricks (stone blocks) that could be used to build the pyramid.
Pyramid builders would likely find a variety of different rock types within the quarry. They had to engineer a wedge that could be used to break away the rock from the quarry wall. If they encountered a soft, clay-like material, pyramid engineers designed a large wedge made of wood which could easily cut through clay-like material and successfully make many bricks. But, when they came across hard marble in their quarry, the wooden wedge would not break the material apart well. The wedge quickly wore down and the engineers knew that they must design a more effective wedge. That's where you come in. Today you are going to be design engineers and help research wedge designs that help the pyramid builders cut through each type of rock they encounter.
Show students an excellent animation at a Polish website about transportation methods that do not use a wheel and axle: http://www.swbochnacki.com/ (click on Site Map, then click on Transport on the Ramp). The animation shows how heavy stone blocks might have been systematically moved up an inclined plane (ramp) using many human-powered wedges. The large supply of Egyptian workers would have made this method possible.
Through this activity, you will learn about different types of wedge angles and the different materials from which wedges can be made. You will experiment with a variety of materials so that you will be able to make recommendations to the pyramid engineers about how to best design a wedge. You will also see why it is so important for engineers to understand the design process and material selection when they are working on a project. If you choose the incorrect material for your wedge, the tool will not work well and your project may not be successful.
Angle: The "sharpness" of a wedge.
Design: (verb) To plan out in systematic, often graphic form. To create for a particular purpose or effect. Design a building. (noun) A well thought-out plan.
Mechanical advantage: An advantage gained by using simple machines to accomplish work with less effort. Making the task easier (which means it requires less force), but may require more time or room to work (more distance, rope, etc.). For example, applying a smaller force over a longer distance to achieve the same effect as applying a large force over a small distance. The ratio of the output force exerted by a machine to the input force applied to it.
Quarry: A pit from which rock or stone is removed from the ground.
Simple machine: A machine with few or no moving parts that is used to make work easier (provides a mechanical advantage). For example, a wedge, wheel and axle, lever, inclined plane, screw, or pulley.
Tool: A device used to do work.
Wedge: A simple machine that forces materials apart. Used for splitting, tightening, securing or levering. It is thick at one end and tapered to a thin edge at the other.
Work: Force on an object multiplied by the distance it moves. W = F x d (force multiplied by distance).
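To make the force and distance trade-off concrete, here is a minimal Python sketch (not part of the original activity; the numbers are invented for illustration) built on the work relationship W = F x d defined above: for a fixed amount of work, spreading the push over a longer distance lowers the force required, which is the mechanical advantage a wedge provides.

# Illustrative only: for a fixed amount of work (W = F x d), increasing the
# distance over which the force is applied lowers the force needed.
work_joules = 50.0  # hypothetical amount of work needed to split a block

for distance_m in [0.05, 0.10, 0.20, 0.40]:
    force_newtons = work_joules / distance_m  # rearranged from W = F x d
    print(f"distance = {distance_m:.2f} m -> force required = {force_newtons:.1f} N")

# The output shows the trade-off: a longer, more gradual wedge lets you split the
# same block with less force, but you have to push it a greater distance.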
Before the Activity
- Gather teacher demonstration materials.
- Prepare student group activity materials.
- Before making copies of The Wedge Worksheet, fill in the table column descriptions with the wedge materials the students will be using, for example "wood," "plastic" and "Styrofoam" (or have the students do this). Fill in the row descriptions with the type of rock material the students will be using, for example, "foam," "wax" and "clay" (or have the students do this).
With the Students
- Teacher Demo: To give the students a visual understanding of a wedge and how it can be used as a tool, lead a demonstration using wedges as cutting tools. For the demo, the material being cut should be at a larger scale than what the students will do in their group activity. Use a variety of wedges, such as a chisel and hammer, and a plastic knife, on two very different materials, such as a brick and a stick of cold margarine. Some of the wedges will not be successful at cutting the hard materials, due to the wedge angle being too dull or the wedge material being too weak. Having two very different "rock" samples (brick and margarine) helps students understand why we need to design different types of wedges that are made from different materials, and introduces the idea of the importance of appropriate material selection.
- For the student group activity, divide the class into pairs (although groups of three work well, too).
- Direct each team of students to test their given wedges on each of their given "rock" sample materials. Allow them ~20 minutes to complete the activity.
- Instruct the students to record the performance of each wedge and "rock" sample on their worksheet, using the provided rating scale.
- As a class, conclude the activity by comparing test results among all teams and holding a class discussion. Which wedge/rock combinations were successful? Which were not? Why? How are the wedge angles and points different from each other? Which wedge had the sharpest angle? Which had the biggest cutting surface? How does this make a difference? (Possible answers: A larger cutting surface allows the user to exert more force on the object being cut.) Why do you think engineers design different types of wedges? (Possible answers: Depending upon the characteristics of the material to be cut, they might need to design stronger, but more expensive wedges to cut hard materials.) Why is material selection an important engineering job? (Possible answer: If you choose the incorrect material for your wedge, the tool will not work well and your project may not be successful.)
- While safety equipment is not required, students should be aware that wedges have sharp edges and should take care when handling them.
- Students should not be given metal wedges such as nails.
To keep the desks clean and for ease of clean-up, set materials on a tray, paper or cardboard.
Make wedges by sanding the edge of a piece of material such as plastic or wood to create a tapered edge.
Using foam can be useful as an example of a material that does not work well in a wedge application.
Alternate activity setup: If a limited number of supplies are available, each group could work with only one wedge and one material. At the end, each group could share what they learned with the entire class. Alternatively, group sizes of three work fine for this activity.
If the students have a hard time understanding how a wedge works, present a variety of pictures of a wedge in action.
Know / Want to Know / Learn (KWL) Chart: Before the activity, ask students to write down in the top left corner of a piece of paper (or as a group on the board) under the title, Know, all the things they know about wedges. Next, in the top right corner under the title, Want to Know, ask students to write down anything they want to know about wedges. After the activity, ask students to list in the bottom half of the page under the title, Learned, all of the things that they have learned about wedges.
Activity Embedded Assessment
Material Selection Discussion: After the teacher demonstration, take a few minutes to lead a class discussion about the strengths and weaknesses of each type of wedge material. Write these on the board. This will get the students thinking about material selection before they start their group activity.
Worksheet: Have the students record their test results on The Wedge Worksheet; review their answers to gauge their mastery of the subject.
KWL Chart: Finish the remaining section of the KWL Chart as described in the Pre-Activity Assessment section. After the activity, ask students to list in the bottom half of the page under the title, Learned, all of the things that they have learned about wedges. Ask students to name a few items and list them.
Engineering Recommendations: List two or three rock types on the board while the students are finishing the activity and the worksheet questions. Have the students discuss within their groups recommendations for a wedge design to cut through each of the rocks listed. They should suggest what material the wedge should be made from, and how sharp the wedge needs to be.
Class Discussion: Have the students participate in a concluding class discussion about their group test results and answers to the worksheet questions. Which wedge/rock combinations worked? Which did not? Why? How are the wedge angles and points different from each other? Which wedge had the sharpest angle? Which had the biggest cutting surface? How does this make a difference? Why do you think engineers design different types of wedges? Why is material selection an important engineering job?
Compile the class worksheet data on the board to provide some nice extension possibilities, discussion, math exercises (averages), graphing, etc. Discuss success in terms of the choice of material or wedge.
Have students explore material properties and material use. Engage the class in a discussion on how the pyramid stones had been shaped and what materials were used (metals, harder stones, etc).
Have the students design their own wedge to serve a specific purpose. For example, ask them to design a wedge that moves snow or splits air (such as an airplane wing). This wedge does not have to be sharp because air is not "hard." How is a zipper considered a wedge?
If possible, take a field trip to a local quarry to see how wedges are used and help students better understand the scale of materials that are extracted from a rock quarry.
- For lower grades, a pencil can be used as a wedge to simplify the amount of materials. In this version, students learn how just one material cuts through various types of other materials, such as marshmallows, wax, sandwiches, etc. Have them rate the success of the pencil in cutting each material so they come to understand how a wedge can be used to split materials.
- For higher grades, assign students the task of extracting 4 cubic cm of each material using the provided wedges.
Contributors: Lindsey Wright; Lawrence E. Carlson; Jacquelyn Sullivan; Malinda Schaefer Zarske; Denise Carlson, with design input from the students in the spring 2005 K-12 Engineering Outreach Corps course.
Copyright © 2005 by Regents of the University of Colorado.
Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education, and National Science Foundation GK-12 grant no 0338326. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government. |
Not all new palaeontology discoveries are made on dramatic rocky outcrops. Sometimes dusty drawers in the back-rooms of museums are the source of exciting discoveries. A new study by Dean Lomax, a researcher at the University of Manchester, and colleagues on a previously neglected specimen in the Lapworth Museum of Geology, University of Birmingham, UK, has increased our knowledge of how the youngest ichthyosaurs - a group of extinct marine reptiles - lived and fed.
Ichthyosaurs were a diverse group of marine reptiles, viewed as the Mesozoic equivalent of modern day whales and dolphins (and a prime example of convergent evolution in action). With streamlined bodies, flippers and tail flukes, they evolved from a reptilian ancestor on land to become highly adapted to an exclusively marine lifestyle. Ranging from smaller, agile hunters less than a metre long to 20-metre, toothless suction feeders like Shastasaurus, we already know a fair deal about these animals – which died out about 95 million years ago – from exceptionally well-preserved fossils.
Ichthyosaur fossils often make for dramatic storytelling. Many specimens have been found that show us one of the group’s key adaptations to a fully aquatic life: they gave birth to live young, the skeletons of which can be seen within the mother’s body. Some specimens even appear to show animals caught in the very act of giving birth, although the reality is probably rather less romantic: a build-up of gases in the decaying carcass probably pushed the embryo halfway out of the birth canal (something which can be seen today in stranded pregnant whale carcasses).
The same exquisite preservation of more or less complete skeletons means that we know a lot about ichthyosaur diets too. The gastric contents of ichthyosaurs have been studied in fossils from the early Jurassic (around 190 Ma) shales of Lyme Regis and Whitby in the UK for more than 150 years, and from the slightly younger Posidonia Shale near Holzmaden in Germany. Early Jurassic ichthyosaur stomachs were densely packed with hooklets from the limbs of squids, with some also containing fish scales and other remains. Conversely, coprolites (fossil faeces) thought to have been produced by the same ichthyosaurs contain fish scales and spikes but fewer contain squid hooklets – which presumably built up in the animal’s stomach. By contrast, a younger, early Cretaceous, ichthyosaur graveyard found in Chile shows large numbers of individuals, of all ages, thought to be hunting for fish and belemnites, and a regurgitation pellet containing pterosaur remains.
The specimen which Lomax and colleagues describe is an Ichthyosaurus communis, the type species for Ichthyosaurus, which was initially described back in 1822. And it's small: only 70 cm long, the smallest specimen positively identified as a member of this species. Combine its overall size with a sclerotic ring (eye bones) that fills the orbit space, and bones which are not fully ossified, and you have key indicators that it was very young when it died.
So why is this a newborn and not another ichthyosaur foetus? Firstly, it isn’t within or associated with a larger individual. Secondly, its last meal is also preserved: it has squid hooks in its rib region. This is interesting, because previous studies on Stenopterygius, a similar ichthyosaur from Holzmaden, found that the young of that species fed exclusively on fish, before switching to a squid-rich diet as they matured. Different species preferred different baby foods.
One problem with back-room discoveries is that you don’t always have all the contextual data you would wish for. Sadly, this is the case for this newborn Ichthyosaurus specimen, which lacks any information on where it was collected and from which stratigraphic layer. Lomax and colleagues used another palaeontological technique to help to address this, by getting a small fragment of the rock analysed for microfossils. The ostracods and foraminifera which were found allowed them to pinpoint the age of the specimen to around 199 Ma (very latest Hettangian to very earliest Sinemurian of the Lower Jurassic), though since rocks of this age are widespread across the UK, it hasn’t helped them to establish a likely locality.
Lomax, D. R., Larkin, N. R., Boomer, S., Dey, S. and Copestake, P. 2017. The first known neonate Ichthyosaurus communis skeleton: a rediscovered specimen from the Lower Jurassic, UK. Historical Biology, https://doi.org/10.1080/08912963.2017.1382488 |
Polio is caused by a virus that can lead to a mild or a very serious illness. The virus infects the bowel and from there can attack the nervous system, causing meningitis or paralysis. Most infected people do not have any symptoms. Children who develop paralysis may appear to recover only to become ill again after a few days. The paralysis is usually permanent.
The virus is found in fluids from the nose and throat and is spread by coughing, sneezing, sharing drink bottles and so on. It's also spread by contact with the faeces (poos) of an infected person. This can happen if people don't wash their hands properly after going to the toilet or changing babies' nappies.
From the air, the Amazon forest may just seem like a big swath of green, one tree much like the next. But look closely, and it’s a lot more complicated than that. Depending on the species and its precise location, trees will have different chemical compositions, grow at different speeds, and take on greater and lesser roles in sequestering the carbon that causes climate change.
You get a sense of this arboreal variety in the maps here. Taken in Peru, they’re the latest images from the Carnegie Airborne Observatory, a special twin-engine plane that flies above the most environmentally-important forests in the world. The CAO–which we first covered here–uses pulsing lasers to map forests in 3-D, and an onboard spectrometer that measures for chemical characteristics.
“These maps are colored to show the different chemicals in the forest canopy. By looking at their chemical signatures, we determined that different colors represent different species or communities of species,” says Greg Asner, who leads the research at the Carnegie Institution for Science.
In the image showing the Amazon river system, the trees in red contain more “growth chemicals” than the ones in green and yellow. That means they’re probably sequestering carbon at a faster rate than their neighbors, and therefore can be expected to become larger, relatively speaking, over time. The differences are explained by variations in elevation and underlying soil content. “This new work shows us that forests change in terms of their composition. The chemicals we detect in the canopies give us a rate of carbon dioxide uptake by the trees,” Asner says.
The CAO maps should help make conservation more cost-effective, showing which trees are most important from an environmental point of view. “By knowing where forests change in composition, conservation can focus on saving a portfolio of different forest types instead of making the mistake of saving just one or a few,” Asner says. |
(Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome)
HIV and AIDS – Definition
HIV is a virus that attacks white blood cells called helper T cells (CD4). These cells are part of the immune system. They fight off infections and disease. As a result, an HIV infection can leave you vulnerable to severe illnesses.
AIDS is a late stage of HIV infection. It reflects severe damage to the immune system. One or more opportunistic infections will also likely exist. This is a type of infection that only occurs in people with compromised immune systems. |
Acid-loving plants require rich, organic soil to grow and produce healthy-looking plants that live up to their full potential. Acidic soils tend to occur in moist climates, although one of the best ways to know if your soil offers enough acidity requires taking a soil test to determine the pH balance. For gardeners who need to improve their soil, a mix of Canadian peat moss, soil and sand works well to give the planting area the right acidity.
When it comes to choosing from a variety of acid-loving plants, rhododendrons fit the bill. The genus Rhododendron, consisting of both rhododendrons and azaleas, features more than 1,000 species of plants. Rhododendrons come in all sizes, shapes and blossom colors, with most of the plants blooming in late winter to early summer. Most rhododendrons thrive in moist, acidic soil where they receive dappled sun to full shade. The plants require protection from drying winds. Since flower buds start appearing months before actually blooming, some of the early-blooming rhododendrons may need protection from late spring frosts.
Northern Bayberry (Myrica pensylvanica)
A native of the eastern United States and Canada, northern bayberry grows up to 12 feet in height and width. The slow-growing plant features glossy, dark-green leaves that release a pleasant aroma when crushed. The female plants produce clusters of bluish-black fruits with a waxy coating. The perennial plants prefer to grow in acidic soil in full sun to partial shade, although they tolerate a variety of soil and moisture conditions. The berries are used to make bayberry candles and soap. Birds also find the berries an attractive food source.
Highbush Blueberry (Vaccinium corymbosum)
Highbush blueberry thrives in wet or dry soil and full sun or full shade, as long as the soil remains acidic. This acid-loving plant grows up to 12 feet in height and width, producing blue fruits prized by wildlife and birds. In the spring, reddish-green spring leaves appear first, followed by white or pink bell-shaped flowers forming long dropping clusters. During the summer, the leaves turn bluish-green while the green fruit ripens into mature blueberries that bears, game birds, songbirds and other mammals use as a food source. Deer and rabbits also eat the twigs and foliage of highbush blueberry.
Scotch Heather (Calluna vulgaris)
For changing colors through the seasons, scotch heather works perfectly. In the spring and summer, the acid-loving plant features foliage in shades of silver, yellow, gray and green. New growth comes in shades of pink, yellow and orange. In the fall, the foliage changes to hues of red, orange, bronze and dark green. The flowers also add plenty of color, appearing in late summer and early autumn in long clusters of pink, purple or rose blooms. Scotch heather grows up to 3 feet in height and width, and prefers moist, very well-drained acidic soil in full sun. |
Approach to Fractions Seen as Key Shift in Common Standards
For many elementary teachers, fractions have traditionally brought to mind lessons involving pizzas, pies, and chocolate bars, among other varieties of "wholes" that can be shared. But in what many experts are calling one of the biggest shifts associated with the Common Core State Standards for mathematics, more teachers are now being asked to emphasize fractions as points on a number line, rather than just parts of a whole, to underscore their relationships to integers.
As common-core proponents see it, the new standards do a much better job of putting fractions into context, which will help students make connections across other math concepts.
"The ultimate underlying principle is you want kids to understand that fractions are numbers," said William G. McCallum, a mathematics-education professor at the University of Arizona, in Tucson, and one of the lead writers of the common standards. "They're new, but they're not in a different galaxy."
"I should not have to change what I know about numbers to learn fractions," said Zachary Champagne, an assistant in research at the Florida Center for Research in Science, Technology, Engineering, and Mathematics at Florida State University, in Tallahassee.
Fractions instruction in schools has long been seen as a problem area. In 2010, the U.S. Department of Education's Institute of Education Sciences released a report on effective K-8 fractions instruction as part of its What Works Clearinghouse. The report noted that half of 8th graders could not place three fractions in order from least to greatest on the 2004 National Assessment of Educational Progress in math. It also found that fewer than 30 percent of 17-year-olds could convert 0.029 into a fraction.
The recommendations from that report, as well as those of a 2008 federal study by the National Mathematics Advisory Panel that called difficulty with fractions "pervasive," are reflected in the common core.
In addition to emphasizing fractions as points on the number line, the common standards differ from previous standards in other important ways: They delay arithmetic with fractions until students have a thorough understanding of what a fraction is; they eliminate explicit instruction on lowest common denominators; they do not differentiate between proper and improper fractions; and they place an early emphasis on decimal equivalents.
Under the common core, 1st and 2nd graders learn some basic vocabulary on fractions, including describing parts of shapes as halves and quarters, and 6th graders learn division of fractions. But the bulk of fractions instruction goes on between grades 3 and 5.
The 3rd grade rollout of fractions is intended to be slow and steady. The standards require students to view fractions as divided wholes and as numbers on a number line, as well as to reason about a fraction's size. There's no arithmetic with fractions that year.
"We're allowing time for students to explore and delve deeply into the meaning of fractions," said Denise M. Walston, the director of mathematics for the Washington-based Council of the Great City Schools.
In the past, teachers and textbooks have rushed into operations before students really understood the basics of fractions, said Jonathan A. Wray, the instructional facilitator for secondary mathematics curricular programs in Maryland's Howard County public schools. Mr. Wray, who also helped write the 2010 IES report, said he has seen students make it to middle school still thinking of the numerator and denominator as separate numbers. "They're using algorithms to come up with equivalent fractions, they're using cross-multiplication—that was getting them by, but they didn't understand anything behind it," he said.
Students first need a good grasp of what a unit fraction is—that is, a fraction with a one in the numerator, the basic unit of measurement for larger fractions—before moving on to operations, according to the standards' authors.
"This emphasis on units and unit fractions, and how nonunit fractions are built from unit fractions, they did this very purposefully to help students use the knowledge they have from how whole numbers work," said Diane J. Briars, the president of the Reston, Va.-based National Council of Teachers of Mathematics. "It helps teachers hammer home the point that a denominator is just the label that tells you the size of the partition," she said.
Putting fractions on a number line early can also help solidify students' understanding that fractions can and should be compared to whole numbers, experts say.
Circular and rectangular representations can still be useful, said Mr. McCallum, especially in the context of learning about quarters and halves. But "it's not so easy to divide a pizza into five equal pieces," he said.
In addition, the number line helps ensure students use consistent units. Mr. Wray pointed out that students who are trying to compare fractions with a circular model may end up drawing two circles of significantly different sizes. If they shade one-half of the larger circle and three-fourths of the smaller circle, they could make the argument that one-half is greater than three-fourths, he said.
"The fact that [number lines] are mentioned explicitly as a teaching and learning tool in the common core, I think that has changed the landscape a little bit," said Mr. Wray.
Of course, many teachers have been putting fractions on number lines for years, said Ms. Briars, but the common core makes this "much more of a central representation."
A notable absence from the standards, meanwhile, is any mention of "finding the lowest common denominator."
Students have traditionally spent large amounts of time practicing reducing fractions to their lowest form, and in many classrooms, an answer was marked wrong if it was not simplified.
"The question is, 'Why?' " said Mr. McCallum. "It's not mathematically important."
Students do need to compare equivalent fractions, which means they will need to simplify at times. "But it's not an overriding concern," said Mr. McCallum. "And there are situations where it positively is getting in the way of understanding."
Mr. Champagne, of Florida State University, offered an example. "Fifty-two one-hundredths is more than half," he said. "If you put that in lower terms, it becomes 26/50. If you go further, it's 13/25. But that's way harder to picture than 52/100."
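A quick, illustrative check of that example (not from the article) using Python's built-in fractions module shows that all of these forms name the same number, even though 52/100 is the easiest one to picture as just over half:

from fractions import Fraction

a = Fraction(52, 100)
b = Fraction(26, 50)
c = Fraction(13, 25)

print(a == b == c)         # True: all three are the same point on the number line
print(float(a))            # 0.52
print(a > Fraction(1, 2))  # True: more than one half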
Asking students to reduce a fraction or write it in its lowest terms can give them the impression that a fraction is getting smaller, said Ms. Briars: "I think that not emphasizing that makes a lot of sense. The number you want in the denominator depends on the situation you're in. If I'm working with time, I'm thinking about 60ths, so it makes a lot more sense to work with that than thirds and halves."
And often, students calculate a correct answer but then make a mistake when trying to reduce the fraction. "There's nothing mathematically that says you have to do that," said Mr. Champagne. "Fourteen-sixteenths [can be] a correct answer."
The authors of the common core also chose not to differentiate between proper and improper fractions. "It's part of the same mantra of trying to get people to see fractions as numbers," said Mr. McCallum. "You can start counting forward on the number line by thirds, and there's no law that says you have to stop at 1."
In fact, "a lot of people think a fraction means less than one," he said, when that is far from true.
Instead of spending time having students practice the steps for changing improper fractions into mixed numbers, the common standards presume that students will figure out how to do that on their own through better conceptual grasp.
"The meaning develops out of the work with the number line," said Ms. Walston of the CGCS. "They will see, 'Oh, a quick way I can do it is to divide the numerator by the denominator.' "
Fraction-decimal equivalents can also be demonstrated more easily through the number-line approach, some educators say.
With more traditional methods of instruction, "students end up thinking there are all different types of numbers—fractions, mixed numbers, and decimals—and they're all different animals," said Mr. McCallum. But the number-line approach makes it clear that these can all be the same number: "You're simply writing the number in a different way."
Consistency with the number-line approach across the grades can also help develop conceptual understanding, said Justin Minkel, an elementary teacher in Springdale, Ark., and the 2007 Arkansas Teacher of the Year. "If you use colorful, shaded circles [for fractions] and then jump to money [for decimals], I don't think kids are learning that connection," he said.
All the work with the number line in 3rd grade is meant to lead to a more thorough understanding of what it means to add, subtract, multiply, and divide with fractions. The common standards repeatedly ask students to "apply and extend previous understandings" when working with fractions—that is, if students are able to show addition on a number line, they should also be able to show addition with fractions there.
Challenges for Teachers
What has typically happened in classrooms is "you learn how to add whole numbers, you learn algorithms for adding multidigit whole numbers, and then you learn addition with fractions, and it's completely different," said Mr. McCallum. The number line makes addition "much more like what addition was before. You're just using fifths instead of ones as the unit where you measure things out."
For teachers who've taught fractions as "a different animal" from whole numbers until now, assimilating the new approach will take some time and effort.
"We know the rule and how to get the answer really quickly," said Mr. Champagne, "but that's not going to be enough." For instance, when dividing fractions, teachers can no longer just tell students to "invert and multiply." They also need to explain why the rule works and use it to solve real-world problems, he said. "If you ask any K-12 teacher to write a word problem for dividing 1/2 by 3/4, it's really tough."
In some cases, teachers will need to relearn, or at least get a refresher on, the concepts themselves. "Especially for elementary teachers who are generalists and teach multiple subjects, they may not have had the opportunity to develop their understanding of the mathematics concepts as deeply as the common core is calling for," said Ms. Briars of the NCTM. At her group's annual conference last spring, more than 35 sessions focused on fractions, including a well-attended keynote address.
"Teachers need help with this," said Mr. Wray. "This certainly isn't the way most of us learned fractions."
Video: Teaching Fractions Under the Common Core
Vol. 34, Issue 12, Pages s6,s7,s8 |
Repetition – Individuals with FASD have chronic short-term memory problems; they forget things they want to remember as well as information that has been learned and retained for a period of time. In order for something to make it to long-term memory, it may simply need to be re-taught and re-taught.
Consistency – Because of the difficulty individuals with FASD experience trying to generalize learning from one situation to another, they do best in an environment with few changes. This includes language. For example, teachers and parents can coordinate with each other to use the same words for key phrases and oral directions. |
Momentum and Impulse
Sir Isaac Newton described momentum as "a quantity of motion". Although in class we state Newton's Second Law in the popular form (F = ma), he might have thought it more appropriate to refer to it as a force causing a change in the momentum of an object. We'll take a look at several different ways to approach this problem.
First, let's start by defining momentum (p) as the quantity mass times velocity, or p = mv. This is a vector quantity (the direction that our momentum is being applied in is very important) and the units are kg-m/s, or the newton-second.
Impulse is defined as "a change in momentum". We can write this as Δp, or p - p₀. Anytime we look at a change in something, we will be looking at the final quantity minus the initial quantity. Again, it is important to keep our signs (direction) straight! Impulse may also be the result of applying a force on an object for a specified period of time. Mathematically, we would say that impulse equals force times time, or Ft. So: Impulse = Δp, or Impulse = Ft. We'll combine these and say: Ft = Δp = mv - mv₀.
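A small numerical sketch in Python (illustrative only; the mass, velocities and time are invented) shows the two ways of arriving at the same impulse:

# Impulse computed two ways for the same event (made-up numbers).
mass = 2.0       # kg
v_initial = 3.0  # m/s
v_final = 9.0    # m/s
time = 0.5       # s, duration over which the force acts

impulse = mass * v_final - mass * v_initial  # change in momentum, kg-m/s
force = impulse / time                       # from Impulse = F x t

print(f"Impulse (change in momentum): {impulse} kg-m/s")
print(f"Average force applied over {time} s: {force} N")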
Another idea to discuss is that of "mass flow rate". Mass flow rate is the rate that mass is moving (or changing), measured in kg/sec. When we multiply this by the velocity the mass is moving, we also get the force applied by the moving mass stream. Hold that thought.
Momentum is conserved. OK - not a new concept - we have previously discussed this in Physics. This means that, in an isolated system, the total momentum in the system remains constant, or the momentum before an event is equal to the momentum after an event. How does this apply to our study of rockets? There is a popular misconception that the reason a rocket moves is that the exhaust gases push against the ground or the atmosphere and cause the rocket to be propelled forward. In fact, a rocket is able to move through the vacuum of space where there is nothing for the gases to push against. What's up with this?
The answer is an application of Newton's Third Law and the principle of Conservation of Momentum. I think either explanation is valid. The hot gases from the propulsion system are expelled from the rocket with a great deal of force (equal to the mass flow rate times the velocity of the gases). The reaction pair is the gases pushing the rocket in the opposite direction but with an equal magnitude of force. We can use Conservation of Momentum to help determine the velocity a rocket will have. The force applied to the rocket by the expelled gases is the mass flow rate times the velocity of the gases. Applied for a given amount of time, this equals the momentum of the gases. The momentum of the gases must equal the momentum of the rocket, but in the opposite direction, since these must add to equal zero (the momentum of the system before ignition).
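As a rough, illustrative sketch (the propellant numbers are invented, and gravity, drag and the rocket's changing mass are ignored), conservation of momentum lets us estimate the thrust on a small rocket and the speed it gains from its exhaust:

# Illustrative only: thrust from a mass stream, and the velocity gained by the rocket,
# assuming the rocket's own mass stays roughly constant over a short burn.
mass_flow_rate = 0.05     # kg/s of exhaust gas leaving the rocket
exhaust_velocity = 800.0  # m/s, speed of the gases relative to the rocket
burn_time = 2.0           # s
rocket_mass = 1.5         # kg, assumed roughly constant for this estimate

thrust = mass_flow_rate * exhaust_velocity  # force on the rocket, N
impulse = thrust * burn_time                # momentum carried away by the gases, kg-m/s
rocket_velocity = impulse / rocket_mass     # momentum gained divided by rocket mass

print(f"Thrust: {thrust} N")
print(f"Velocity gained by the rocket: {rocket_velocity:.1f} m/s")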
Atmospheric models are used for every day weather events and short-term forecasting while climate models are used for longer term forecasts. Climate forecasts are generally divided into statistical forecasts, which cover seasonal to annual forecasts, and global climate models, which use equations to simulate the climate across the entire globe over a long time period.
Why do I care? Climate models can be used to predict climate variations for the next year, which can help farmers plan their crops and management procedures for the next growing season, or future climate impacts years from now that could affect farmers and agricultural agents and what kinds of crops they can grow.
I should already be familiar with: Southeast Temperature
Models are simple ways of trying to simulate how the atmosphere behaves. Some are simple black box models with just a few inputs and outputs, and some are highly complex mathematical calculations incorporating the physical equations which describe the atmosphere's behavior, along with large sets of observations of the current atmospheric state and sophisticated computing techniques.
Atmospheric Forecasting Models – Atmospheric models can be thought of as future weather predictors. There are several that are used for forecasting by the National Weather Service and other weather laboratories across the country, and they give forecasters an idea of what kind of weather to expect.
These models use MANY complex mathematical equations defining atmospheric and physical motions. Since these models use so many complex and long equations, it takes very powerful supercomputers to run them. However, despite the complexity of the models, they cannot account for everything that goes on in the atmosphere, and as a result they tend to have errors. These errors continue to grow over time and can give forecasters misleading information. Even the best forecasting models in the world cannot provide meaningful predictions beyond 2 weeks, and any prediction of more than a week ahead is not likely to be very accurate.
Monthly and Seasonal Climate Forecasts
The National Weather Service and meteorological agencies in other countries prepare monthly and seasonal forecasts for up to a year ahead of current conditions. These forecasts are based mainly on statistical relationships between current conditions and expected atmospheric conditions for the next few months. The single strongest statistical predictor for climate conditions over the next few months in the United States is the state of El Niño. This statistical signal is strongest in winter; there is less skill in the climate forecasts in other seasons when El Niño is not as strong or is absent. You can look at differences in expected temperature and rainfall in the Southeast based on El Niño at www.AgroClimate.org. NOAA's Climate Prediction Center uses the El Niño phase and other information on ocean temperatures and soil moisture conditions on the continent to predict the likelihood of above or below normal values up to a year ahead by statistically comparing current conditions to what happened in previous years with similar conditions.
Interpreting seasonal forecasts is done using maps produced by the Climate Prediction Center. For example, the CPC map for June-July-August 2010 temperature shows areas that are more likely to be above normal, areas that are more likely to be below normal, and also some areas with equal chances of above-, below-, and near-normal. The basic assumption is that in an area of equal chances there is a 33.3 percent chance of normal temperature, a 33.3 percent chance of above normal temperature, and a 33.3 percent chance of below normal temperature. In areas labeled above or below normal, the percentages are shifted towards the most likely climate. But even in the Southwest, where there is a center of above 50 percent chance of above normal temperature, there is still a 33 percent chance of normal weather and a 17 percent chance of below normal temperature. Because every year is unique, statistics cannot provide a perfect forecast. However, knowing the odds of getting above or below normal temperature or rainfall can help producers decide what crops to grow and how to manage their planting dates, as well as the likelihood of being affected by some weather-dependent pests and diseases.
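One way to see how the shift works (a simple illustrative sketch, not a Climate Prediction Center product) is that the three category probabilities always sum to 100 percent, so raising the chance of one category pulls probability out of the others:

# Illustrative only: three-category seasonal outlook probabilities.
equal_chances = {"above normal": 100 / 3, "near normal": 100 / 3, "below normal": 100 / 3}

# A shifted outlook like the Southwest example described above:
shifted = {"above normal": 50.0, "near normal": 33.0, "below normal": 17.0}

for outlook in (equal_chances, shifted):
    print({category: round(chance, 1) for category, chance in outlook.items()},
          "sum =", round(sum(outlook.values()), 1))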
Global climate models (GCMs) are used to provide detailed global predictions of the climate. Like weather forecasting models, they are run on supercomputers using a grid superimposed on the globe with calculations of the atmospheric equations of motion over time. They also include simplified ways of simulating rainfall and other small-scale atmospheric processes which cannot be mathematically calculated at scales smaller than the grid they use. The biggest difference between many GCMs and weather forecasting models is that they generally have to include the effects of a changing ocean and atmospheric chemistry (including carbon dioxide), which don't really have an impact on the atmosphere over a few days.
GCMs have been used to study past climates as well as make predictions of future climate under global warming scenarios. By looking at how they behave when predicting past or present climate, we can gain more confidence in the predictions they make for climate 100 years from now. Even though predictions from individual GCMs do not always agree exactly on what the future climate will bring, comparisons of the results of many different climate model forecasts show that warming of the Southeast is likely by 2100. The forecasts for rainfall trends are much less consistent between models, but generally show that rainfall is likely to come in shorter, higher intensity bursts which could increase erosion and allow more rain to run off rather than sink into the ground, reducing soil moisture.
Using the data from NASA's Kepler mission, astronomers have found three alien planets smaller than Earth, orbiting a star much smaller than our Sun.
The planets, which are orbiting a red dwarf star known as KOI-961, are 0.78, 0.73 and 0.57 times the diameter of Earth, making them the smallest alien planets discovered so far.
KOI-961 is located 120 light-years away, in the constellation Cygnus (the Swan). It's approximately one-sixth the size of our sun, which made it possible for scientists to watch for dips in the star's brightness and thus discover the orbiting planets.
The planets are thought to be rocky like Earth, but they are too close to the star to be habitable - at least by our standards.
"It's almost like you took a shrink gun and zapped a planetary system, the whole thing, including the sun," John Johnson, principal investigator at the California Institute of Technology in Pasadena, told Space.com.
Image credit: NASA |
7.EE.1.2 Understand that rewriting an expression in different forms in a problem context can shed light on the problem and how the quantities in it are related.
7.EE.2 Solve real-life and mathematical problems using numerical and algebraic expressions and equations.
7.EE.2.3 Solve multi-step real-life and mathematical problems posed with positive and negative rational numbers in any form (whole numbers, fractions, and decimals), using tools strategically. Apply properties of operations to calculate with numbers in any form; convert between forms as appropriate; and assess the reasonableness of answers using mental computation and estimation strategies.
7.EE.2.4 Use variables to represent quantities in a real-world or mathematical problem, and construct simple equations and inequalities to solve problems by reasoning about the quantities.
7.EE.2.4.a Solve word problems leading to equations of the form px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers. Solve equations of these forms fluently. Compare an algebraic solution to an arithmetic solution, identifying the sequence of the operations used in each approach.
7.EE.2.4.b Solve word problems leading to inequalities of the form px + q > r or px + q < r, where p, q, and r are specific rational numbers. Graph the solution set of the inequality and interpret it in the context of the problem.
7.G.1.2 Draw (freehand, with ruler and protractor, and with technology) geometric shapes with given conditions. Focus on constructing triangles from three measures of angles or sides, noticing when the conditions determine a unique triangle, more than one triangle, or no triangle.
7.G.1.3 Describe the two-dimensional figures that result from slicing three-dimensional figures, as in plane sections of right rectangular prisms and right rectangular pyramids.
7.G.2.6 Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms.
7.RP.1.2 Recognize and represent proportional relationships between quantities.
7.RP.1.2.a Decide whether two quantities are in a proportional relationship, e.g., by testing for equivalent ratios in a table or graphing on a coordinate plane and observing whether the graph is a straight line through the origin.
7.SP.1 Use random sampling to draw inferences about a population.
7.SP.1.1 Understand that statistics can be used to gain information about a population by examining a sample of the population; generalizations about a population from a sample are valid only if the sample is representative of that population. Understand that random sampling tends to produce representative samples and support valid inferences.
7.SP.1.2 Use data from a random sample to draw inferences about a population with an unknown characteristic of interest. Generate multiple samples (or simulated samples) of the same size to gauge the variation in estimates or predictions.
7.SP.2 Draw informal comparative inferences about two populations.
7.SP.2.3 Informally assess the degree of visual overlap of two numerical data distributions with similar variabilities, measuring the difference between the centers by expressing it as a multiple of a measure of variability.
7.SP.2.4 Use measures of center and measures of variability for numerical data from random samples to draw informal comparative inferences about two populations.
7.SP.3 Investigate chance processes and develop, use, and evaluate probability models.
7.SP.3.5 Understand that the probability of a chance event is a number between 0 and 1 that expresses the likelihood of the event occurring. Larger numbers indicate greater likelihood. A probability near 0 indicates an unlikely event, a probability around 1/2 indicates an event that is neither unlikely nor likely, and a probability near 1 indicates a likely event.
7.SP.3.6 Approximate the probability of a chance event by collecting data on the chance process that produces it and observing its long-run relative frequency, and predict the approximate relative frequency given the probability.
7.SP.3.7 Develop a probability model and use it to find probabilities of events. Compare probabilities from a model to observed frequencies; if the agreement is not good, explain possible sources of the discrepancy.
7.SP.3.7.a Develop a uniform probability model by assigning equal probability to all outcomes, and use the model to determine probabilities of events.
7.SP.3.8.b Represent sample spaces for compound events using methods such as organized lists, tables and tree diagrams. For an event described in everyday language (e.g., "rolling double sixes"), identify the outcomes in the sample space which compose the event.
7.NS.1.1.b Understand p + q as the number located a distance |q| from p, in the positive or negative direction depending on whether q is positive or negative. Show that a number and its opposite have a sum of 0 (are additive inverses). Interpret sums of rational numbers by describing real-world contexts.
7.NS.1.1.c Understand subtraction of rational numbers as adding the additive inverse, p - q = p + (-q). Show that the distance between two rational numbers on the number line is the absolute value of their difference, and apply this principle in real-world contexts.
7.NS.1.2 Apply and extend previous understandings of multiplication and division and of fractions to multiply and divide rational numbers.
7.NS.1.2.a Understand that multiplication is extended from fractions to rational numbers by requiring that operations continue to satisfy the properties of operations, particularly the distributive property, leading to products such as (-1)(-1) = 1 and the rules for multiplying signed numbers. Interpret products of rational numbers by describing real-world contexts.
7.NS.1.2.b Understand that integers can be divided, provided that the divisor is not zero, and every quotient of integers (with non-zero divisor) is a rational number. If p and q are integers, then -(p/q) = (-p)/q = p/(-q). Interpret quotients of rational numbers by describing real-world contexts. |
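As a small illustrative check (not part of the standards text), the sign rules named in 7.NS.1.2 above can be verified directly with Python's fractions module; the values of p and q here are arbitrary examples:

from fractions import Fraction

# (-1)(-1) = 1, and -(p/q) = (-p)/q = p/(-q) for integers p and q with q != 0.
print((-1) * (-1) == 1)  # True

p, q = 3, 4  # hypothetical example values
print(-Fraction(p, q) == Fraction(-p, q) == Fraction(p, -q))  # True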