We’ve added a new feature to BrainPOP UK this year which will make using our movies even more interactive – whether you’re in the classroom or at home.
A sizeable number of topics on the site now include typeable and printable activities. And while we aim to have activities on every single topic by September, you can rest assured that all new movies we’ve added from March forward are guaranteed to include activities.
Here are our 5 top tips for classroom and home use.
1. Thinking ahead
All activities are typeable and printable so, while they’re great for kids to fill out on-screen after watching the movie, they’re also a good tool to keep in mind when you’re planning.
Because activities can be printed ahead of time, students can fill them out as they interact with the movie too.
Here’s a great effort at a “Complete the passage” page from a mini BrainPOPper:
Some activities include recall questions, as well as higher order thinking tasks, e.g. “Think about it”. The History of the UK activity sheet above contains a good example:
This activity could be completed as a whole class but would probably work best in small groups or as homework.
2. Get organised
Some activities include creative graphic organisers such as this one from our new Main Idea movie:
We also have blank graphic organisers available on the site which are great for you to use in a variety of lessons, BrainPOP and otherwise.
3. Vocabulary sheet
Every single activity has at least a vocabulary sheet. You can use these with students before watching the movie to assess prior knowledge, focus attention, or demonstrate understanding. Using the vocabulary sheet helps familiarise students with key vocabulary they’ll encounter in the movie.
It can also be used whilst you’re viewing the movie to practise note-taking skills. Remember to pause so students have time to write down definitions. Helpfully, words are listed in the order in which they appear in the movie.
Here’s an example of a vocabulary sheet from the History of the UK movie:
4. Hands-on practical
As well as activities which work on expanding vocabulary and exercising literacy and numeracy skills, we have included activities with a more “hands-on” practical element. For example, our Space Flight topic challenges students to design a real rocket launch:
5. Time to spare?
In addition to integrating BrainPOP into lesson plans, remember that it only takes a few minutes to show a movie, so you can take advantage of those in-between, unplanned times by transforming them into teaching moments. Our class discussion page provides helpful tips on how to easily fit a lesson based on a news event into the curriculum you follow, but you can also rely on activities to fill a quiet five minutes before lunch.
For example, the Elvis Presley topic features a word search:
However you choose to use them, make sure to take advantage of the pedagogical benefits available. A quick summary:
- Activities are typeable and printable
- Activity sheets are a helpful note-taking tool while watching the movies
- Activities can be completed either as a whole class on an interactive whiteboard, individually, or in small groups
- Return to the activity pages after watching the movie
- Assign activities for homework
As ever, if you have any classroom tips for how to use activities in lessons, we’d love to hear from you! Email [email protected] with lesson plans, tips and tricks.
|
Have you heard about water beads? These things grow to about 20 times their size, and they are seriously addictive. Well, I got some for the kids and turned it into a learning experience. The word of the day was “absorb.” Watch our video below, then read through our critical thinking questions.
- What does absorb mean?
- What do you think will happen to the dry water bead once we add water?
- Remove a bead from the water at 4, 6, and 8 hours (a simple way to log the measurements is sketched after this list)
- Which bead is biggest? Why?
- Which bead is smallest? Why?
- Put the beads in order from the least amount of water held to the most amount of water held.
- Name three things that absorb.
- Name three things that do not absorb.
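If you want to make the ordering question quantitative, you can weigh each bead dry and again after soaking. Here is a hypothetical sketch in Python (the bead labels and masses are invented for illustration):

```python
# A hypothetical sketch: log bead masses over time and rank how much
# water each bead absorbed. All numbers here are made up for illustration.
measurements = {
    # bead label: (dry mass in grams, soaked mass in grams)
    "bead A (4 hours)": (0.2, 2.1),
    "bead B (6 hours)": (0.2, 3.4),
    "bead C (8 hours)": (0.2, 4.0),
}

# Water absorbed = soaked mass minus dry mass.
absorbed = {label: wet - dry for label, (dry, wet) in measurements.items()}

# Order the beads from least to most water held, as the activity asks.
for label, grams in sorted(absorbed.items(), key=lambda kv: kv[1]):
    print(f"{label}: absorbed {grams:.1f} g of water")
```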
This is an experiment we will visit again as the kids get older; it would be a great way to teach permeation! Let us know how it goes!
|
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
|
In the mid-1700s, a Unitarian minister named Joseph Priestley became fascinated by science. The brewery next to his home in Leeds, England, made Priestley curious about what makes beer bubbly–and that’s what led him to invent soda. Today, soda is so common that we might never stop to wonder exactly what it is.
Soda is a solution of carbon dioxide gas dissolved in water. When something dissolves, it gets spread out evenly. This even distribution of ingredients is what makes soda a solution. But soda is also a mixture because solutions are a type of mixture.
Keep reading to find out more about solutions, mixtures, and the science of soda. Learn how Priestley made the world’s first fizzy water and how his original process has been modified over time. And finally, discover how you can experiment with soda yourself, at home.
Soda Is Both a Solution and a Mixture
A “mixture” forms when at least two substances are put together without being chemically combined. Key characteristics of mixtures are that the substances inside them:
- Can be combined in varying amounts
- Stay the same individual substances
- Can be mechanically separated
Mixtures can be liquids, solids, and gases. In the table below, you can see some examples of how one substance–carbon dioxide–can be added to other substances to form mixtures.
|  | Mixture | Not a Mixture |
| --- | --- | --- |
| Carbon dioxide and water | x | |
| Carbon dioxide and oxygen | x | |
| Carbon dioxide by itself | | x |
Carbon dioxide can be combined with oxygen in any amount you choose. Because the carbon dioxide doesn’t combine chemically with the oxygen, you can separate it back out, and it will be unchanged. But carbon dioxide itself is a compound, not a mixture. Carbon and oxygen must combine chemically in an exact ratio to make it, and once they are combined, you can’t easily separate them.
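As a quick worked example, using standard atomic masses (carbon 12, oxygen 16; these figures come from basic chemistry, not from the article above): every molecule of carbon dioxide contains one carbon atom and two oxygen atoms, so the mass ratio of oxygen to carbon is fixed:

$$\frac{m_{\mathrm{O}}}{m_{\mathrm{C}}} = \frac{2 \times 16}{12} \approx 2.67$$

Every sample of pure carbon dioxide holds about 2.67 grams of oxygen per gram of carbon, whereas a mixture of carbon dioxide and oxygen gas can have any proportions at all.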
Soda is a mixture because the water and carbon dioxide–as well as any flavoring or coloring that’s added–can be put together in varying amounts. Even though it looks like it’s all one substance, it’s really not. Everybody has experienced a “flat” soda. Sodas can lose their bubbles over time as the carbon dioxide gas escapes the liquid.
Soda is a particular kind of mixture called a solution. In a solution, a solvent is used to dissolve one or more substances–the “solutes.” In soda, the solvent is water, and the solutes are carbon dioxide and sometimes other ingredients. The substances in the solution form a homogeneous mixture–one where all of the parts are distributed equally. Because the solute has dissolved inside the solvent, the solute stays distributed and doesn’t settle out over time.
Carbon dioxide all by itself doesn’t meet these criteria, so it isn’t a solution. Carbon dioxide and oxygen, by themselves, aren’t a solution. But when they are dissolved in a large amount of nitrogen, the result is the solution that you’re breathing right now–air.
|  | Solution | Not a Solution |
| --- | --- | --- |
| Carbon dioxide and water | x | |
| Carbon dioxide and oxygen | | x |
For more information about mixtures and solutions, you can watch this video:
How Soda Is Made
When Joseph Priestley discovered that the fizz in beer comes from dissolved carbon dioxide, he used this knowledge to create bubbles in plain water. He created carbon dioxide by combining chalk with sulphuric acid and trapping the gas in a homemade contraption made from a pig’s bladder. He then dissolved this trapped gas in water, and soda was born.
Unfortunately, contact with a pig’s bladder gave the carbon dioxide a nasty taste. A Scottish doctor, John Nooth, figured out a better way to trap the carbon dioxide–in a glass. Jacob Schweppe, a Swiss watchmaker, developed the first system for creating carbonated water that was efficient enough for commercial applications. This was the beginning of soda as a popular drink.
At first, any flavorings were mixed with soda water either at home or in shops where soda was sold. Today, of course, flavored soda is mostly made in factories. Water makes up 90-99 percent of the average soda, so the process of filtering the water–usually through layers of sand–is very important. Any impurities in the water can negatively impact the taste of the soda. Carbonation from various industrial sources is then introduced. Because carbon dioxide can easily escape the solution, different manufacturers use different processes to create the perfect fizz and flavor.
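How much carbon dioxide the water holds is governed by Henry’s law: the dissolved concentration is proportional to the pressure of the gas above the liquid. As a rough illustration using textbook values (the bottling pressure here is an assumed figure, not one given in the article): with Henry’s constant for CO2 in water of about 0.034 mol/(L·atm) at 25 °C and a bottling pressure of roughly 2.5 atm,

$$c_{\mathrm{CO_2}} = k_H \, p_{\mathrm{CO_2}} \approx 0.034\ \tfrac{\mathrm{mol}}{\mathrm{L \cdot atm}} \times 2.5\ \mathrm{atm} \approx 0.085\ \mathrm{mol/L}$$

Once the bottle is opened, the CO2 partial pressure above the liquid drops to the atmosphere’s tiny value, the equilibrium concentration plummets, and the soda slowly goes flat.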
To see an old-fashioned soda delivery company in action, watch this video:
And if you’re curious about how soda is made in commercial factories, you can have a look here:
Soda Science at Home
If learning about the science and history of the delicious solution we call soda has inspired you, maybe you’ll be interested in some soda science you can do at home.
Make Your Own Soda
The process of creating and trapping carbon dioxide isn’t simple and probably not something to try at home. But you can buy a variety of products that have already taken care of this process for you:
- Zipfizz Healthy Energy Drink Mix: This comes as a powder, but when you dissolve it in water, it becomes a carbonated energy drink full of vitamins and electrolytes.
- Baskiss Soda Siphon Maker: You can use Liss 8 Gram CO2 Cartridges to charge this soda siphon–then just add water to make your own soda. It’s an easy way to make bubbly water over and over.
If you want to flavor your soda, there are many options to choose from.
- Torani Syrup Variety Pack, Soda Flavors contains a variety of Torani’s famous fruit flavors.
- Get extra fancy and try Monin Lavender Syrup or Monin Exotic Citrus Syrup.
Maybe you’ll discover that you really love your homemade creations–and if you decide you want to make them regularly, you may want to check out these options:
- SodaStream Fizzi Sparkling Water Maker Bundle–it’s got everything you need to make your delicious bubbly drinks. It comes with cartridges, flavoring drops, and even bottles to hold the soda.
- Make Your Own Soda: Syrup Recipes for All-Natural Pop, Floats, Cocktails, and More: this book is full of great recipes if you’re looking to really up your soda game.
Experiment with Soda
When you harness the pressure that carbon dioxide creates when it escapes the soda solution, you can create some interesting effects.
- You can use soda as the leavening agent in some baked goods. You need to work quickly if you want to try it, though, because you don’t want the carbon dioxide to have completely escaped before you put your sweet treat in the oven.
- What’s more fun than experimenting with cake? Check out this video to find out:
Soda is a solution and a mixture because it consists of carbon dioxide homogeneously dissolved in water. It was invented in the mid-1700s by Joseph Priestley. Both John Nooth and Jacob Schweppe made important contributions to making this solution commercially available.
Today, the soda industry is huge, and soda is mixed in large factories. It is possible to mix soda at home, though, and there are several products that you can buy to help you with this process.
Any mixture or solution can be taken apart into the substances used to make it–and when we do this with soda by letting the carbon dioxide escape, we can harness the escaping gas to create interesting effects.
|
Question 1: Which of the following is closest to the meaning of the underlined word imperative in paragraph (1)?
Question 2: According to paragraph (2), which of the following statements is true?
① Early routes were created by people who traveled by wheeled carts.
② People’s first routes on land followed the growth of towns and cities.
③ The development of land routes led to progress in many areas of society.
④ The improvement of routes resulted in the invention of the automobile.
Question 3: Why is the example of Edo introduced in paragraph (3)?
① To describe the difficulty of creating routes on the water
② To emphasize the fact that it was an important city
③ To explain the use of water routes to move along the coastlines
④ To illustrate the important roles of water routes for cities
Question 4: What does paragraph (5) tell us about routes?
① Routes can be thought of as existing invisibly in the world.
② Routes that move information can be regarded as dangerous.
③ The fundamental functions of routes are declining.
④ The importance of different kinds of routes is the same.
Question 5: What is the main point of this article?
① Humankind first created various types of convenient routes on land.
② Improvements in transportation have come at great cost.
③ Technology has interfered with opening up routes around the world.
④ The advancement of humanity was aided by the development of routes.
① Creation of roads used by people, animals, and vehicles
② Developing ways for people to fly from place to place
③ Establishment of global paths for information transfer
④ Opening of lanes for ships to travel and transport things
Answers: Question 1 ②① Question 2 ③ Question 3 ④ Question 4 ① Question 5 ④
Question 6: 1 ①, 2 ④, 3 ②, 4 ③
Source: 2019 National Center Test (main administration), Part 6
|
The proposed Constitution, so far from implying an abolition of the State governments, makes them constituent parts of the national sovereignty, by allowing them a direct representation in the Senate, and leaves in their possession certain exclusive and very important portions of sovereign power. This fully corresponds, in every rational import of the terms, with the idea of a federal government.
Alexander Hamilton – The Federalist, #9 (excerpt)
Federalism is a system of government in which sovereignty is shared between regional and national governments. In the United States, it is composed of the federal (national) government and the various state governments. (Local governments, strictly speaking, are not sovereign. They are creations of their state governments.)
It is a structure that is well suited for large countries like ours, because it allows simultaneously for unity in some matters and diversity in others.
When we declared our independence from Britain, we did so as a union of independent states. The Declaration of Independence refers to “the United States,” which was a phrase that meant that thirteen independent states were acting, for this particular purpose, in unity with one another. We were not one country; we were thirteen of them.
Our first attempt to unify these states into a single country was under the Articles of Confederation, which created a strictly limited national government and left most of the power with the states. This proved ineffective, and the federal government was largely paralyzed. Delegates to a constitutional convention gathered in Philadelphia in 1787 and crafted a new U.S. Constitution, which, with twenty-seven amendments, is (at least nominally) still in effect today.
Although the U.S. Constitution provides for a much stronger federal government than the one that existed under the Articles of Confederation, it still retains a distinctively federal structure. The states are the general sovereigns with the primary responsibility for governing. The federal government is only authorized to act in the specific areas delegated to it by the constitution.
Some of the founders — including George Mason, Patrick Henry, and Thomas Jefferson — were concerned that the new constitution did not sufficiently protect the rights of the people or the sovereignty of the states. To allay their concerns, one of the first acts of the new U.S. Congress was to propose new amendments to the constitution: the Bill of Rights. Ten of the proposed amendments were successfully ratified. Nine of them dealt primarily with individual and group rights, but the tenth dealt explicitly with the concept of federalism and the dual-sovereignty arrangement:
The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.
U.S. Constitution, Amendments, Article X
This is not rocket science. The federal (national) government only has the authority to do the things that the constitution specifically delegates to it. All other sovereign rights belong to the states, which enjoy a more general sovereignty, or the people, who are the ultimate sovereigns in every republic.
Any activity performed by the federal government outside of the enumerated, delegated powers is an unconstitutional and unjust activity, violating the principles of self government and encroaching upon the sovereignty of the states.
Difference Breeds Unity
We have the right to change our system of government — self governance is a human right — so we have the right to amend or rewrite our constitution to create a single national government that holds all sovereign authority within the bounds of natural law. We have not done so, of course, but we could.
But this would be a bad idea. Federalism is not just a dusty piece of eighteenth century political theory. There is a reason why some of the most brilliant thinkers in American history built our political system this way, and their reasoning still applies today.
First, we must acknowledge that the United States is a large, diverse country. We are made up of fifty states, each of which has its own culture, its own political leanings, its own economic strengths and weaknesses, and often its own vernacular. You would have to be living under a rock not to see that the country is pretty polarized right now; some states are “deep blue” Democratic, while others are “blood red” Republican.
When it comes to domestic policy, it would be practically impossible to craft a major policy that would make the people in both California and Alabama equally happy. And even if we put aside a state’s political leanings, it is unreasonable to assume that a policy that works well in New York would automatically work equally well in North Dakota.
There was a similar difficulty in the early days of our republic. Though we were all bunched up on the East Coast at the time, there were still deep and seemingly irresolvable political differences between, say, Virginia and Massachusetts. A fully unified nation without a federalist dual-sovereignty system would have failed. Indeed, as the nation became more and more nationalized at the federal level, and became more and more prone to imposing policies on states that did not want them, the resentment soon grew so strong that we literally went to war with ourselves.
(Obviously slavery was a huge part of what led up to that conflict, and obviously slavery was deeply wrong and unacceptable. It was, and still is, a direct affront to the human rights of life and liberty. But at the same time, we cannot ignore the self governance and federalism issues that were also causes of the Civil War.)
The beauty of a healthy federalist system of government is that you don’t have to try and craft national policy in a country that is really fifty smaller nations that all have different beliefs, different approaches, and different priorities. The federal government should limit itself to the areas in its purview — those areas where the states all agreed to delegate authority upwards — and let the states manage their domestic affairs.
Let’s use health care reform as an example. The “Affordable Care Act” was imposed, with the slimmest of congressional majorities, on a country that basically didn’t like it. Those in “deep blue” states felt that it didn’t go far enough, and continued to clamor for a socialized single-payer system. Those in “blood red” states despised the federal interference in their lives and the negative impacts it had on their premiums and on the availability of care.
Maybe a better approach would have been to leave health care reform to the states. Do some general interstate commerce regulation at the federal level. Then let the blue states enact statewide single-payer systems and let the red states slash restrictions and let the free market work. Let the states somewhere in the middle enact hybrid systems. Nobody gets steamrolled, nobody gets resentful, and we get to see what works and what doesn’t.
Over time, the kinds of systems that work well will be adopted by more and more states. The systems that don’t work well will be abandoned. But nobody will feel like some unaccountable dictators in Washington forced them to do something they didn’t want to do. Difference, counterintuitively, can breed unity. But forced unity only breeds resentment.
|
Deconvolutional networks, also known as deconvolutional neural networks, are convolutional neural networks (CNNs) that work in reverse. Although they are very similar in nature to CNNs run backwards, they are a distinct application of artificial intelligence (AI).
Deconvolutional networks strive to recover lost features or signals that a convolutional neural network may previously not have deemed important to its task. A signal may be lost through having been convolved with other signals. The deconvolution of signals can be used in both image synthesis and image analysis.
A convolutional neural network emulates the way a biological brain’s visual system processes images. A deconvolutional neural network constructs upwards from processed data. This backwards function can be seen as a reverse engineering of convolutional neural networks, reconstructing the layers captured from the machine-vision field of view and separating out what has been convolved.
Deconvolutional networks are related to other deep learning methods used to extract features from hierarchical data, such as those found in deep belief networks and hierarchical sparse autoencoders. Deconvolutional networks are primarily used in scientific and engineering fields of study.
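In modern deep learning libraries, this deconvolution operation is usually exposed as a transposed convolution. The sketch below (assuming PyTorch; the layer sizes are invented for illustration) shows a convolution that downsamples an image and a transposed convolution that constructs the feature map back up to the original spatial size:

```python
# A minimal sketch of deconvolution as transposed convolution (assumes PyTorch).
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)        # 32x32 -> 16x16
deconv = nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2,
                            padding=1, output_padding=1)           # 16x16 -> 32x32

x = torch.randn(1, 3, 32, 32)        # dummy batch: one 3-channel 32x32 image
features = conv(x)                   # forward convolution extracts and downsamples
reconstruction = deconv(features)    # "deconvolution" constructs back upwards

print(features.shape)                # torch.Size([1, 16, 16, 16])
print(reconstruction.shape)          # torch.Size([1, 3, 32, 32])
```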
|
Acts 2, the story of the Holy Spirit’s coming at Pentecost, is a text about names. The text centers on names—a list of the many nations represented there in Jerusalem—and crescendos in a name—Peter’s invocation of the one name upon which the many might call and be saved.
Names also form the background to Pentecost in its Old Testament precursor, the story of the Tower of Babel. In Genesis 11:1-9, the settlers of Shinar aspire to “build a city, and a tower with its top in the heavens,” in order that they might “make a name for [themselves].” For their arrogant plans to undermine God, their project receives a name not of their own choosing: “Babel,” a sign of the “confusion” that their pride engendered. Usually, the story of the Tower is interpreted as an explanation for the diversity of human language, with the logic that the many tongues of Babel were God’s punishment for the sin of the builders. Read within its actual context, however, the story suggests not an original oneness of speech from which all languages arose, but an unjust assertion of the language and culture of one people over the many.
How is this so? Although the Tower story begins with the statement that “the whole earth had one language and the same words,” the preceding chapter, known as “the Table of Nations” (Genesis 10), assumes an original (or at least post-flood) diversity of languages. Indeed, in listing “the descendants of Noah’s sons, Shem, Ham and Japheth,” the text sounds the same refrain for each genealogy: “These are the descendants of ..., by their families, their languages, their lands, and their nations.” In other words, according to the biblical narrative, linguistic and cultural diversity existed before the Tower, and, therefore, was not the consequence of human sin but an expression of the God-ordained goodness of creation.
Even so, what then was “the one language and the same words” which the Tower story claims was common to “the whole earth?” Again, the Table of Nations exhibits strong links to the story of the Tower. In the midst of its simple listing of names, Genesis 10 pauses to detail one name, Nimrod, the son of Cush in the line of Ham. Alone in the depth of his characterization, Nimrod was a “mighty warrior” and “hunter”—and the builder of cities, most notably Nineveh and Babel “in the land of Shinar,” where his kingdom began. Consequently, Nimrod in Genesis 10 bears a striking resemblance to the settlers of Shinar in Genesis 11. And just as Nimrod appears as a name above names, so the architects of Shinar devised to “make a name for themselves.”
Given the weight of the evidence in the text, then, it seems likely that “the one language of the whole earth” preceding the building of the Tower was the speech of Nimrod and his kin, an ancient lingua franca, one language that came to occupy a place of privilege among the many due to the conquering, imperialistic power of its native speakers. Seen in this light, what God frustrates is precisely Nimrod’s—or any empire’s—attempt to undermine God and suppress the God-given linguistic and cultural identities of others in the cause of self-glorification. In the face of Nimrod’s prideful project, God protects God’s identity as the God of “every tribe and language and people and nation” (Revelation 5:9). In spite of his own pretensions, Nimrod is but one name among the nations, not the name above all names.
God’s will revealed at Babel carries the same meaning as God’s Spirit unleashed at Pentecost. For indeed, just as God reasserted linguistic diversity in the face of Nimrod’s suppression of others, so the Holy Spirit communicates through the languages of the many nations gathered together at Pentecost. Though the speakers were all “Galileans,” the nations heard about the wonders of God “in their own languages.” Moreover, just as God reasserted God’s own name as a name for others in the face of those who would “make a name for themselves” at others’ expense, so the Holy Spirit, speaking through Peter, pronounces a name upon which all who call will be saved. The Wind of the Spirit gives birth to the Word of Jesus Christ, the “name of the Lord” in whom the many names might find their unity.
Such unity, of course, will not arise with a Tower reaching up to heaven, nor result from its bricks burned thoroughly in the fires of human scheming; rather, the fellowship, the one body made up of many members, is forged in sudden tongues of Spirit-fire, which touch all names and confess the name of Jesus.
|
Ulcerations are a result of a breakdown of the skin. Ulcerations are classified based upon their depth and their cause. Common ulcerations are due to diabetes, ischemia (poor circulation), and venous stasis (varicose veins).
Diabetic ulcerations are by far the most common form of ulceration of the feet. These ulcerations occur in areas of the foot that are exposed to excessive pressure or irritation from the rubbing of the shoes on the skin. Corns and calluses develop as a result of excessive pressure over bony areas of the foot. Over time the thickened callus that forms can act as an irritant that breaks down the skin under the callus, forming an ulceration. This is more likely to occur if the person with diabetes also suffers from diabetic neuropathy. Diabetic neuropathy is a condition that most commonly affects the nerves of the hands and feet. Diabetic neuropathy causes a loss or alteration in the ability to perceive pain associated with excessive pressure, heat or cold, sharp and dull, vibration and position sense. As a consequence, corns and calluses which would normally be painful do not cause pain and, over time, break down the skin, causing ulceration. Quite often, an infection will also occur, which can result in bone infection (osteomyelitis) or deep tissue infection. If the person also has poor circulation, gangrene can develop.
Treatment is geared toward prevention. People with diabetes must learn to inspect their feet daily and obtain medical attention as soon as they notice anything suspicious or an ulceration forming. Calluses which have a black or blue appearance are in the early stages of ulceration. Corns and calluses should be treated regularly by a podiatrist. These areas should be protected from pressure by using pads and/or cushions. Over-the-counter corn removers must be avoided. These home treatments have acid in them, which can burn the skin and cause infection. Once an ulceration has started, every effort must be made to reduce the pressure to the area or it will not heal. Special shoe inserts, called orthotics, are useful in reducing abnormal pressure on the bottom of the foot in areas of calluses or ulcerations. There are also several different topical medications that are used for the treatment of ulcerations. Treatment should be guided and supervised by a physician.
Ischemic ulcerations occur in areas of poor circulation. Commonly they form on the feet, ankles and lower legs. As the circulation gets worse, the skin begins to thin and is less resistant to pressure and friction forces. Spontaneous break down of the skin can occur. These ulcerations tend to be painful, with a whitish or light-pinkish base. Treatment is focused on keeping the ulceration clean and free from infection. By-pass surgery may be indicated to improve the circulation to the area. Hyperbaric oxygen treatments may also be useful. It is important not to use bandages that can cut off the circulation, or adhesive tape, which can tear the skin when removed.
Venous stasis ulcerations occur in areas where the venous circulation is poor. Venous circulation is the blood flow that returns to the heart in the veins. Varicose veins are abnormal veins that do not allow normal blood flow back to the heart. As the veins become more and more damaged, there is a pooling of fluid that accumulates in the feet and ankles. This swelling of the tissue will, over time, cause damage to the skin and can result in open sores or ulceration. These ulcerations tend to weep a clear fluid, have a reddish base and become infected easily.
Treatment is geared toward prevention by reducing the swelling in the legs with the use of support stockings, medications to reduce the swelling, and elevation of the legs. Once ulcerations have developed, treatment consists of keeping the ulcerations clean and free from infection. This often requires the long-term use of oral antibiotics. A common form of treatment consists of wrapping the legs with a dressing called an unna boot. This dressing is a gauze wrap which has zinc oxide impregnated in it. This dressing helps to keep the bacteria in the ulceration from growing and also adds compression to help reduce swelling.
Article provided by PodiatryNetwork.com.
DISCLAIMER: MATERIAL ON THIS SITE IS BEING PROVIDED FOR EDUCATIONAL AND INFORMATION PURPOSES AND IS NOT MEANT TO REPLACE THE DIAGNOSIS OR CARE PROVIDED BY YOUR OWN MEDICAL PROFESSIONAL. This information should not be used for diagnosing or treating a health problem or disease or prescribing any medication. Visit a health care professional to proceed with any treatment for a health problem.
|
A mottled gray shorebird with bright yellow legs, the Lesser Yellowlegs is similar in appearance to the Greater Yellowlegs, with some important differences. The Lesser Yellowlegs is about half the size (in weight) of the Greater Yellowlegs, which is a useful distinction when the two are seen together. The bill of the Lesser Yellowlegs is not significantly longer than the diameter of its head, whereas the Greater Yellowlegs' bill is much longer. The bill of the Lesser Yellowlegs does not become paler at the base during the winter; it is solid black year round. Its bill always appears straight, without the slight upturn sometimes seen on the bill of the Greater Yellowlegs. In flight, the Lesser has a dark back, a white rump, and a dark tip on its tail. Relative to its size, the Lesser’s legs are longer than those of the Greater Yellowlegs, a difference that can be seen in flight (entire toes and tip of tarsus visible behind the tail). Juvenile Lesser Yellowlegs have finer streaking on their breasts than do juvenile Greater Yellowlegs.
Lesser Yellowlegs breed in open boreal woods in the far north. They often use large clearings or burned areas near ponds, and will nest as far north as the southern tundra. During migration and winter, they occur on coasts, in marshes, on mudflats, and lakeshores. In comparison to Greater Yellowlegs, Lessers are typically found in more protected areas, on smaller ponds. They are less common on extensive mudflats than Greater Yellowlegs. When nesting, they generally use drier, more sheltered sites than their larger counterparts.
Lesser Yellowlegs typically occur in tighter and larger flocks than do Greaters, both in flight and while feeding. Like the Greater Yellowlegs, Lessers forage in shallow water outside the breeding season, picking at prey on or just below the water's surface. They are less likely than Greaters to run after their prey, but more likely to scythe their bills back and forth in the water stirring up prey like an avocet. They are typically more approachable than the wary Greater Yellowlegs. They bob the front half of their bodies up and down, a characteristic behavior of this genus. The most common vocalization heard is a two-note flight call.
During the breeding season, insects make up the majority of the diet. The rest of the year, Lesser Yellowlegs also eat small fish and crustaceans.
Lesser Yellowlegs nest in loose colonies. They first breed at one to two years of age. They form monogamous pair bonds, but typically pair with a different mate each year. The nest is located on the ground in a dry spot, usually near water, but sometimes quite far away. The nest is usually well hidden in a densely vegetated area, next to a mossy hummock, fallen branch, or log. It is usually a shallow depression lined with moss, twigs, leaves, grass, and needles. Both parents share incubation duties, and the 4 eggs hatch in 22-23 days. The young leave the nest soon after hatching and feed themselves. Both parents tend and aggressively defend the young. The female usually leaves about 11 days after the young hatch, while the male stays with the chicks until they can fly, about 23-31 days. Pairs raise only one brood per season.
Lesser Yellowlegs are long-distance migrants and follow the classic shorebird migration pattern of traveling north concentrated in the interior of North America, and traveling south spread across the continent. They return to the same general breeding area in successive years and winter from the southernmost coasts of the US south to South America. The Lesser Yellowlegs is one of the earliest fall migrants, showing up by June.
Lesser Yellowlegs were hunted heavily until the Migratory Bird Treaty Act of 1918 banned their hunting. Observers have speculated that the population has recovered since the act took effect, but a lack of information on historical and current population size makes this claim hard to substantiate. Breeding Bird Survey data indicate that there have been significant decreases in numbers between 1980 and 1996, although these numbers come from a small sample size and may not represent the entire population. The International Shorebird Survey (1972-1983) suggests that there has been no significant trend in number of fall migrants at 43 stopover sites along the Atlantic. Christmas Bird Counts indicate that the wintering population in the United States is on the increase. The consensus today is that the population is currently stable, and the Canadian government estimates it at half a million birds.
When and Where to Find in Washington
This species migrates through Washington on both its northward and southward trips, but is most common from mid-July through September. Most of the birds in Washington in July are adults, and the juveniles follow from late July into early October. Lesser Yellowlegs are fairly uncommon after the middle of October. They can be found along the coast and in a variety of wetland habitats throughout Washington's lowlands. In spring, they are uncommon migrants in eastern Washington from mid-April to mid-May, where they are found in freshwater wetlands.
[Seasonal abundance chart, Pacific Northwest Coast ecoregion: ratings range from R (rare) to U (uncommon) through the year.]
- Spotted Sandpiper (Actitis macularius)
- Solitary Sandpiper (Tringa solitaria)
- Gray-tailed Tattler (Tringa brevipes)
- Wandering Tattler (Tringa incana)
- Greater Yellowlegs (Tringa melanoleuca)
- Willet (Tringa semipalmata)
- Lesser Yellowlegs (Tringa flavipes)
- Upland Sandpiper (Bartramia longicauda)
- Little Curlew (Numenius minutus)
- Whimbrel (Numenius phaeopus)
- Bristle-thighed Curlew (Numenius tahitiensis)
- Long-billed Curlew (Numenius americanus)
- Hudsonian Godwit (Limosa haemastica)
- Bar-tailed Godwit (Limosa lapponica)
- Marbled Godwit (Limosa fedoa)
- Ruddy Turnstone (Arenaria interpres)
- Black Turnstone (Arenaria melanocephala)
- Surfbird (Aphriza virgata)
- Great Knot (Calidris tenuirostris)
- Red Knot (Calidris canutus)
- Sanderling (Calidris alba)
- Semipalmated Sandpiper (Calidris pusilla)
- Western Sandpiper (Calidris mauri)
- Red-necked Stint (Calidris ruficollis)
- Little Stint (Calidris minuta)
- Temminck's Stint (Calidris temminckii)
- Least Sandpiper (Calidris minutilla)
- White-rumped Sandpiper (Calidris fuscicollis)
- Baird's Sandpiper (Calidris bairdii)
- Pectoral Sandpiper (Calidris melanotos)
- Sharp-tailed Sandpiper (Calidris acuminata)
- Rock Sandpiper (Calidris ptilocnemis)
- Dunlin (Calidris alpina)
- Curlew Sandpiper (Calidris ferruginea)
- Stilt Sandpiper (Calidris himantopus)
- Buff-breasted Sandpiper (Tryngites subruficollis)
- Ruff (Philomachus pugnax)
- Short-billed Dowitcher (Limnodromus griseus)
- Long-billed Dowitcher (Limnodromus scolopaceus)
- Jack Snipe (Lymnocryptes minimus)
- Wilson's Snipe (Gallinago delicata)
- Wilson's Phalarope (Phalaropus tricolor)
- Red-necked Phalarope (Phalaropus lobatus)
- Red Phalarope (Phalaropus fulicarius)
|
Adhesives and tapes offer unique benefits compared to welds and mechanical fasteners, without sacrificing strength. Learn how adhesive bond strength is tested and measured, and the many factors to consider when choosing an adhesive for an application.
With each of the mechanisms of adhesion playing a role in performance, adhesion scientists investigate the strength of an adhesive bond to determine its ability to perform in an application.
Measuring the “work of adhesion” for a given bond helps to determine the strength of an adhesively bonded assembly. The most common way to measure this is to pull an adhesive bond apart. The force needed to pull the bond apart allows engineers to understand how the adhesive will perform in an application.
Adhesive strength is the interfacial strength between adhesive and substrate, and usually the most important consideration when designing a strong adhesive bonded assembly. However, adhesive strength is not the only factor critical to creating an effective bond. Even when using the world’s toughest adhesive, a bond will fail if the adhesive does not bond to the surface of the substrate.
Cohesive strength is the internal strength of an adhesive - the ability of the adhesive to hold itself together under stress. The higher the cohesive strength, the stronger the adhesive. Cohesive strength is determined by the chemical composition of the adhesive. The strength of adhesives covers a wide range, from pressure sensitive adhesives to structural epoxy and acrylic adhesives.
It’s important to consider the specific types of stress that will act on an adhesive joint. Common stresses include shear, cleavage, peel and tensile. Knowing the magnitude and frequency of the stresses your application will be subjected to is helpful in choosing the adhesive with the best cohesive strength for the task at hand.
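As a simple worked example (the joint dimensions and load here are invented for illustration): a lap joint bonded over a 25 mm × 25 mm area and loaded in shear with 1,000 N sees an average shear stress of

$$\tau = \frac{F}{A} = \frac{1000\ \mathrm{N}}{25\ \mathrm{mm} \times 25\ \mathrm{mm}} = 1.6\ \mathrm{MPa},$$

a figure you can compare directly against an adhesive's published shear strength.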
Surface energy is a physical property of the surface of a material that determines whether an adhesive will make intimate contact. On a material with high surface energy, a liquid will wet out or spread out on the surface; on a material with low surface energy, the liquid will resist flowing and bead up. An adhesive must wet out the substrate to provide a bond.
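Wetting can be quantified by the contact angle a drop of the adhesive makes on the substrate. A standard relation from surface science (not stated in the article above, but widely used) is the Young-Dupré equation, which ties the work of adhesion to that angle:

$$W_a = \gamma_{lv}\,(1 + \cos\theta)$$

where $\gamma_{lv}$ is the liquid's surface tension and $\theta$ is the contact angle. A low contact angle (good wet-out on a high-surface-energy substrate) drives $\cos\theta$ toward 1 and maximizes the work of adhesion.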
To choose the proper adhesive, it’s important to understand the surface energies of all the substrates in your assembly, and how well the adhesive will wet out each one. Surface cleanliness must also be considered as some adhesives require a high degree of substrate cleanliness.
The Science of Adhesion Educational Series is designed to be a comprehensive introduction to Adhesion Science and the use of tapes and adhesives in design applications. Visit the Science of Adhesion to view all articles or choose from the selected topics below.
|
One comprehensive book which homeschooling families (especially those who employ the Charlotte Mason Approach to education) find useful as a guide in nature study is Handbook of Nature Study. Although this book does not inspire one in the area of drawing (like Keeping a Nature Journal does), it is useful in guiding the teacher to discuss specific details and to encourage the children to observe and take notes of them.
The Handbook of Nature Study begins by defining what nature study is.
This is exactly the aim of nature study and is consistent with Charlotte Mason's view of teaching Science - with nature study as the forefront of teaching observational skills.
Nature Study is not the same as the study of Science.
Here are the differences as the author herself writes:
Science: "In elementary science, the work begins with the simplest animals and plants and progresses logically through to the highest forms; at least this is the method pursued in most universities and schools. The object of the study is to give the pupils an outlook over all the forms of life and their relation one to another."
Nature Study: "In nature-study the work begins with any plant or creature which chances to interest the pupil. It begins with the robin when it comes back to us in March, promising spring; or it begins with the maple leaf which flutters to the ground in all the beauty of its autumnal tints."
Science: "A course in biological science leads to the comprehension of all kinds of life upon our globe."
Nature Study: "Nature-study is for the comprehension of the individual life of the bird, insect or plant that is nearest at hand." (p5)
Handbook of Nature Study by Anna Comstock was written in 1911 and revised in 1939. Of course there have been new discoveries since that time that aid us in the study of nature and the skies, but her love of nature is not bounded by time and place. She quotes from poets, includes stories for the teacher's benefit (but I have used these to read aloud to my children) and includes a Leading Thought, Method and Observations for each lesson.
Handbook of Nature Study covers:
There are hundreds of lessons, and she states that no teacher is expected to teach all of them. There is a wide choice, so that teachers, students, location and availability make the decision as to what is studied. Although some species will be endemic only to where she lived, there are enough lessons and ideas to help anyone study nature in their own location. There are house flies, ants, bees, mice and chickens all over the world.
One way to use this book is to plan a study on a specific topic for a period of time, using the lessons from Handbook of Nature Study as a guide to develop questions and to help your children observe with a keen eye. How will you cover the topics?
Here is an example of how the topics can be covered in 4 years:
Year 1
Autumn: Rocks/minerals
Winter: Trees/shrubs, bushes
Summer: Invertebrates (other than insects)
Year 2
Autumn: Flowerless plants
Winter: The brook/stream
Spring: Fish/amphibians
Summer: Cultivated crops
Year 3
Autumn: The soil
Year 4
Autumn: Climate and weather
Winter: Skies and water forms
Spring: Garden flowers
1. The teacher/parent should familiarize herself with the subject of the lesson, the questions and the story; In order to make the lesson an investigation, the teacher should not read the lesson. As Anna Comstock states, "Make the lesson an investigation and make the pupils feel that they are investigators. To tell the story to begin with, inevitably spoils this attitude and quenches interest." However, she does say that the story can be read as a supplement for the facts they have discovered for themselves and a guide and inspiration for further study.
2. The teacher takes note of the "leading thought" and the questions for observation.
3. Before the walk, the teacher gives a talk about the purpose of the walk and the observations that will need to be made. The nature walk can be as short as 10-15 minutes if the purpose of the investigation is clear; alternatively, a field trip may take half a day if desired;
4. Students have a field notebook to record drawings, observations, poetry, writings (which are not to be corrected). The book should be considered personal property of the child and should not be criticized by the teacher except as a matter of encouragement; It should not be considered part of English studies;
5. During the walk, children are encouraged to observe carefully, draw and take notes if desired and investigate to find out the answers;
The Beak of a Bird
Leading thought — Each kind of bird has a beak especially adapted for getting its food. The beak and feet of a bird are its chief weapons and implements.
Methods — Study first the beak of the hen or chick and then that of the duckling or gosling.
1. What kind of food does the hen eat and where and how does she find it in the field or garden? How is her beak adapted to get this food? If her beak were soft like that of a duck, could she peck so hard for seeds and worms? Has the hen any teeth? Does she need any?
2. Compare the bill of the hen with that of the duck. What are the differences in shape? Which is the harder?
3. Note the saw teeth along the edge of the duck's bill. Are these for chewing? Do they act as a strainer? Why does the duck need to strain its food?
4. Could a duck pick up a hen's food from the earth or the hen strain out a duck's food from the water? For what other things than getting food do these fowls use their bills?
5. Can you see the nostrils in the bill of a hen? Do they show plainer in the duck? Do you think the hen can smell as keenly as the duck?
Supplementary reading — The Bird Book, p. 99; The First Book of Birds, pp. 95-7; Mother Nature's Children, Chapter VIII.
"Weather and wind and waning moon.
Plain and hilltop under the sky,
Ev'ning, morning and blazing noon,
Brother of all the world am I .
The pine-tree, linden and the maize.
The insect, squirrel and the kine.
All — natively they live their days —
As they live theirs, so I live mine,
I know not where, I know not what: —
Believing none and doubting none
What'er befalls it counteth not, —
Nature and Time and I are one."
— L. H. Bailey.
Entire Text of Handbook of Nature Study found here.
You can also use this book to explore your children's interests and whatever comes into your home and life in nature walks.
As you take your children outdoors and enjoy nature walks, allow them to observe the weather, the atmosphere, and the animals and plants. Whatever strikes them on the visit can become the topic you look up in the Handbook when you get home. You can explore this further with other resources you have at home, and you may continue to explore the interest over the following weeks, reading more from the Handbook and using the lessons, the leading thoughts and the questions for observation as your basis.
Bring specimens home (if possible) and learn more about them, using this guide. Of course, you will not always find the exact specimen in the book, but you may still learn from the examples and descriptions of the behaviour of similar animals. Otherwise, it's a trip to the local library, or exploring further with other resources you have at home, or using the new discovery as a reason for a trip to a Natural History Museum.
(For those in Sydney, the Explore and Discover section of the Australian Museum is an excellent place to help you and your children identify bugs and bones and whatever else.)
If you are looking for Outdoor Hour Challenges using Handbook of Nature Study, check out this site by Barb McCoy for really helpful ways to make nature study a part of your homeschooling science.
Handbook of Nature Study
Witness the wonders of God's creation with naturalist Comstock as your guide! From dandelions, toads, and fireflies, to robins, rocks, and weather, she takes you on a lively trek through the natural world, vividly describing the habits, habitats, and physical structures of common living and nonliving things. Includes study questions and black-and-white photographs. 887 pages, softcover from Cornell University.
|
The United Kingdom – together with its dominion South Africa and fellow Allied power Belgium – occupied the majority of German East Africa in 1916 during the East African Campaign. Three years later, the British were tasked with administering the Tanganyika Territory as a League of Nations mandate. It was turned into a UN Trust Territory after World War II, when the LN dissolved in 1946 and the United Nations was formed. In 1954, the Tanganyika African Association – which spoke out against British colonial rule – became the Tanganyika African National Union (TANU) under the leadership of Julius Nyerere and Oscar Kambona. The aim of the political party was to attain independence for the territory; its flag was a tricolour consisting of three horizontal green, black and yellow bands. Shortly before independence in 1961, elections were held in Tanganyika. After the TANU won comprehensively, the British colonial leaders advised them to utilize the design of their party’s flag as inspiration for a new national flag. As a result, yellow stripes were added, and Tanganyika became independent on 9 December 1961.
The Sultanate of Zanzibar – which was a British protectorate until 1963 – used a red flag during its reign over the island. The last sultan was overthrown in the Zanzibar Revolution on 12 January 1964, and the Afro-Shirazi Party – the ruling political party of the newly formed People’s Republic of Zanzibar and Pemba – adopted a national flag the next month that was inspired by its own party flag. This consisted of a tricolor with three horizontal blue, black and green bands.
In April 1964, both Tanganyika and Zanzibar united in order to form a single country – the United Republic of Tanzania. Consequently, the flag designs of the two states were amalgamated to establish a new national flag. The green and black colors from the flag of Tanganyika were retained along with the blue from Zanzibar’s flag, with a diagonal design used “for distinctiveness”. This combined design was adopted on 30 June 1964. It was featured on the first set of stamps issued by the newly unified country.
The colors and symbols of the flag carry cultural, political, and regional meanings. The green alludes to the natural vegetation and “rich agricultural resources” of the country, while black represents the Swahili people who are native to Tanzania. The blue epitomizes the Indian Ocean, as well as the nation’s numerous lakes and rivers. The thin stripes stand for Tanzania’s mineral wealth, derived from the “rich deposits” in the land. While Whitney Smith in the Encyclopædia Britannica and Dorling Kindersley's Complete Flags of the World describe the fimbriations as yellow, other sources – such as The World Factbook and Simon Clarke in the journal Azania: Archaeological Research in Africa – contend that it is actually gold.
|
APOD: RCW 86: Historical Supernova Remnant (5/28/22)
In 185 AD, Chinese astronomers recorded the appearance of a new star in the Nanmen asterism. That part of the sky is identified with Alpha and Beta Centauri on modern star charts. The new star was visible for months and is thought to be the earliest recorded supernova. This deep image shows emission nebula RCW 86, understood to be the remnant of that stellar explosion. The narrowband data trace gas ionized by the still expanding shock wave. Space-based images indicate an abundance of the element iron and lack of a neutron star or pulsar in the remnant, suggesting that the original supernova was Type Ia. Unlike the core collapse supernova explosion of a massive star, a Type Ia supernova is a thermonuclear detonation on a white dwarf star that accretes material from a companion in a binary star system. Near the plane of our Milky Way galaxy and larger than a full moon on the sky, this supernova remnant is nevertheless too faint to be seen by eye. RCW 86 is some 8,000 light-years distant and around 100 light-years across.
© Martin Pugh
|
Almost all students of English, native and non-native speakers alike, have to study the works of William Shakespeare. Most do so begrudgingly. Part of this reaction is because, despite reassurances from teachers that Shakespeare was one of the most influential writers in the English language (and in the world), many students don’t understand exactly how profound Shakespeare’s influence was on the development of the English language.
Here’s some food for thought:
- Before Shakespeare’s time, written English was, on the whole, not standardized. His works contributed significantly to the standardization of grammar, spelling, and vocabulary. Shakespeare introduced 1,700 original words into the language, many of which we still use (despite significant changes to the language since Shakespeare’s time). These words include: “lonely,” “frugal,” “dwindle,” and many more.
- In addition to all these words, many phrases that we use daily originated in Shakespeare’s work. When you talk about “breaking the ice” or having a “heart of gold,” or when you use any number of other phrases, you’re using Shakespeare’s language.
- Finally, Shakespeare had a profound impact on poetry and literature that has lasted centuries. He perfected blank verse, which became a standard in poetry. Herman Melville, William Faulkner, Alfred, Lord Tennyson, and Charles Dickens were all heavily influenced by Shakespeare. The impact led George Steiner to conclude that romantic English poets were “feeble variations on Shakespearean themes.”
Because of the profound impact of Shakespeare’s language on the way we speak today, studying the works of Shakespeare is an indispensable part of cultural education. Exploring the thousands of ways we still use Shakespeare’s language and themes is not only worthwhile and fascinating, but also fun.
Did you study Shakespeare’s works? What did you like? What did you dislike?
|
Calendar and goals for Unit 5: Genocide.
1. The Holocaust, Rwanda, and Sudan were avoidable. Each event was the result of government decisions, the compliance of citizens, and the lack of interference from other nations.
2. We see the very best and the very worst of humanity during genocide. We see tremendous suffering and people turning their backs on each other. We also see people who fight back and people who sacrifice their own safety for others.
3. Living in a world where genocide is possible requires the courage to speak out against your government and your peers. It means doing what is right and not necessarily what is easy.
Project 1 - "Road to the Holocaust" poster. On one side of a winding road, students list, describe, and illustrate the following: Nuremberg laws, ghettoes, death camps. On the opposite side of the road, students create cartoons showing how many Germans rationalized these events: scapegoats, race theory, government laws, avoid the camps themselves.
Project 2 - “Those who fought back” pickets. In groups, students research one of the following: King Christian X of Denmark, Warsaw Ghetto uprising, Sobibor, Hiding Jewish families, Treblinka, or Schindler. Each team must create an illustration, a written description, 3 symbols, and a journal entry excerpt (with SOAPS analysis?). Display outside.
Performance Task: Students will create letters to the editor and send them to local (and national?) papers. Letters will encourage readers to write letters to Congressmen regarding the continued genocide in Sudan. Letters must reference lessons learned from the Holocaust and from Rwanda.
This resource is part of Unit 5: Genocide and the Social Studies 7 course.
|
“In a time before time, there was no earth. There was only water. Coyote told the animals and birds living in the sky to dive down and bring up dirt so there would be land. They all tried, but failed. Coyote himself almost died trying. So he asked Earth Diver (Coot) to dive down and bring up some dirt. Coot stayed down all day and finally brought up some dirt. Then there was land and all the animals and birds came down out of the sky.”

The land that Coot brought up in that long-ago time of animal people was one of rivers, rolling savannahs and a vast inland lake environment. Even the animal people were of different forms than the present. On the eastern side of the Kawaiisu homeland, from Sand Canyon to Red Rock Canyon, fossil remains of saber-toothed cats, camels, horses, rhinoceros and elephant-like creatures have been found.
Since that time, dynamic geological forces have dramatically altered the geomorphology. Sometime during the Middle Miocene to Pliocene (2 to 10 million years ago), major folding and faulting lifted these lands to their present elevations. To the south is Tehachapi Peak (Double Mt., elev. 7,988 ft), while to the west are Cummings Mountain (elev. 7,753 ft) and Bear Mountain (elev. 6,895 ft). Piute Mountain (elev. 8,432 ft) lies to the northwest. The land below these peaks is made up of high ridges, deep canyons and wide valleys.
Generally the mountainous land form runs north/south. The Tehachapi Mountains, which are the southern extension of the southern Sierras, have been rotated in a westerly direction, forming a transverse range that runs east/west. This was caused by movement along the Garlock fault which lies just south of Tomo-Kahni, along Oak and Cameron Creeks. The Garlock fault, California's other major fault, runs generally southwest from the Death Valley area and is offset by the San Andreas Fault west of I-5 at Frazier Park. This fault continues to the Coastal Range as the Big Pine Fault.
From the vantage point of the high eastern ridges, Tomo-Kahni appears as a bowl within a larger fifty-square-mile bowl that is Sand Canyon. The rock types are primarily igneous and sedimentary. Metamorphic rock can also be found in the Tehachapis in the form of marble. From Tomo-Kahni, sedimentary limestone deposits can be seen north of the Calaveras cement plant. Within Tomo-Kahni, the igneous rock is the dark, black volcanic basalt in which we find grinding slicks. Sedimentary rock is seen in the lighter tuffaceous sandstone in which we find bedrock mortars (holes used for grinding and pounding). Outside of Sand Canyon, mortars are seen in granite bedrock.
One of the most interesting geological features at Tomo-Kahni is the dark red to black soil. This soil is derived from the breakdown of volcanic basaltic rock. In response to moisture and freezing, the clay content swells and contracts, resulting in a phenomenon known as self-cultivation. In other words, the soil gradually turns itself over, and buried objects don't always stay buried.
The basic sedimentary sandstone materials formed in an inland lake environment during the Miocene epoch, 5 to 20 million years ago. What is primarily seen in Tomo-Kahni and greater Sand Canyon is tuffaceous sandstone (tuff intermixed with sand). This tuff came from local volcanic eruptions. Throughout the province there are exposed layers of brightly colored compacted volcanic ash (tuff) and grayish compacted volcanic mud. Both are typical of volcanic eruptions in a continental setting.
The main drainage of Sand Canyon is Cache Creek. At one time, this was a major river course with gravel deposits estimated to be several thousand feet thick. Forty vertebrate fossil sites have been recorded along this creek and in several branching canyons. As recently as one million years ago, Cache Creek flowed westward into the San Joaquin Valley. Faulting uplifted the land and changed the creek's flow so that it now runs eastward into the Mojave Desert.
“Coyote was very smart. So that fog would cool down his hot, rock house, he would go out on a mountain and play his flute to entice the fog to follow him back to his dwelling. One day he did not run back to the house. The fog came, then it rained, then it snowed and he died.”

Present-day inhabitants of the Tehachapis can sympathize with Coyote. The weather can be extremely fickle and dramatic. Tehachapi is indeed the land of four seasons. Unlike more predictable environments, Tehachapi can have all four seasons in the same day. Geographically, the area is a transition zone between the more moderate Pacific Coast and the more extreme inland environment. The Tehachapi, Piute and Scodie Mountains are subject to all five wind flows: Polar, Pacific, Sub-tropical, Continental and Gulf. With elevations ranging from 2,500 to 8,500 feet, the area is made up of numerous valleys, canyons, and high peaks. As a result, the Kawaiisu core area is an array of micro-climates. Air masses from the southwest, west and northwest must pass over an assemblage of mountains to the west. Because these air masses are partially wrung out by the time they reach the mountains, precipitation is not great and varies greatly across the many micro-climates.
At Tomo-Kahni, average annual precipitation is 8 inches, with temperatures ranging from -15°F to +115°F. In the higher mountains, annual precipitation can reach 50 inches and temperatures may fall to -25°F. Snow is not uncommon from October into May and occasionally falls in the summer. Wind is also a factor, as evidenced by the 5,500 present-day wind turbines visible to the north, east and south of Tomo-Kahni.
As an example of how micro-climates affect weather in different parts of the Tehachapi Valley, consider that twenty miles to the west of Tomo-Kahni, Bear Valley averages 22 inches of precipitation, while the city of Tehachapi averages 12 inches. Just east of Tomo-Kahni, at 6200 feet, average precipitation is 16 inches. The average wind flow across the ridges housing the wind turbines is in excess of 15 MPH, while twenty miles to the west, the average wind flow across the top of Bear Mountain is less than 15 MPH.
Kawaiisu weather shamans were known to have strong powers over the weather. Arboreal moss played a big part in their ability to call for or predict rain. This has credibility since this moss does respond to changes in humidity.
|
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Click here for a complete list of Reading Like a Historian lessons, and click here for a complete list of materials available in Spanish.
|
Taking on Toxoplasma
It invades your body through food and water and has been found nearly everywhere warm-blooded animal populations exist. A mother can unknowingly pass it to her unborn child but may not get sick herself. One of the most common and beloved house pets in many cultures around the world is the parasite’s most prolific spreader, passing it on to adoring owners.
It may sound like something out of a horror movie, but Toxoplasma is very real and infects nearly one-third of the world’s human population. The parasite has been found almost everywhere, but Dr. Chunlei Su, associate professor in the Department of Microbiology, and his lab team are trying to map out and study Toxoplasma (“toxo” for short) to learn how to make it more manageable and less problematic on a global scale.
“[Toxo is] a single-cell pathogen that must infect human or animal cells,” explains Su. “Then they reside inside the cell and replicate, and in that process they cause disease. This parasite can infect mammals and birds in general. They hide in the cells, and then you cannot remove them.”
It is believed that infected people carry the parasite for life, but in most cases, people’s healthy immune systems can suppress this parasite so it won’t replicate as much and thus remains in the body only as a chronic infection.
Sometimes, though, it can cause eye lesions in healthy people. Infection acquired during pregnancy in a healthy mother may spread to the fetus and cause severe damage. In immunocompromised persons, such as those with AIDS or recipients of organ transplants, toxo infections can cause life-threatening inflammation of the brain.
One of the most interesting aspects of this parasite is that it has no host specificity. Toxo can infect almost any mammal, including humans. According to Su, in other parasites, like malaria, a particular species of the parasite can infect only certain hosts.
Toxo’s ability to infect a wide range of hosts presents a particularly difficult challenge for those trying to study and treat it. Because the parasite can live in just about every mammal, it can be extremely prolific on nearly every continent, and because felines, including the common house cat, are the main propagators of toxo, human populations are especially affected by the disease.
Su explains that toxo probably began its spread around the world when humans moved from being hunter-gatherers constantly on the move to becoming more sedentary farmers with agriculture as their main source of food.
When early modern humans started settling in permanent housing, mice moved in as well. This led the always-resourceful humans to adopt cats to keep the mice out of their food. Unfortunately, this resourcefulness also facilitated a cycle of toxo transmission between cat and mouse. In just a few days, a single infected cat can shed hundreds of millions of parasites in their feces, which may contaminate food and water and cause infection in animals—mice, pigs, goats, sheep, cattle, chickens, and humans—if the parasite is ingested. So as agriculture-based human society grew, cat and mouse—and therefore toxo—populations expanded with it.
Su and his lab are trying to track the spread of toxo throughout the world by genotyping the different strains found on different continents and inside various mammals. He hopes eventually to put together a world map showing the different strains of toxo around the globe. This new research has already yielded some very compelling results.
“So now we see a very interesting pattern at the global level,” says Su. “For example, when you look at Europe, Africa, Asia, and North America, there is one particular genotype, type II, which is dominant. You see it eighty percent of the time. The question is why do you see this particular strain? Why is it dominant at the global level?”
One reason Su thinks this strain might be so prolific is its age. He hypothesizes that this type of toxo was around for a longer time than other strains, so it had more time to spread to several continents of the world via cats. Another reason Su thinks type II might be dominant is that it may have a biological advantage over other strains of the organism, so that it can spread more efficiently. Part of Su’s work is to understand whether age or biological advantage is the major factor contributing to the success of type II toxo.
“On the other hand,” says Su, “when you look at toxo strains from South America, they’re very different. You don’t see type II strains there very often.”
One possible explanation for this variation is the effect of the many species of felines in the jungles of South America. The only time toxo can go through sexual recombination is when it is inside the gut of a feline host. So if more feline hosts are eating a wide range of prey animals infected with various types of toxo, then toxo naturally becomes more diverse through sexual recombination.
“A different ecosystem may facilitate a different way of transmission and generate a different population structure,” says Su. “By studying this, we try to understand different transmission patterns and eventually have some idea of how we should control the spread of this parasite.”
The ultimate goal of Su’s research is to find a way to manage, and even manipulate, toxo to keep it from infecting such a large proportion of the world’s creatures. Through understanding the basic biology of the parasite, he and his team might find a way to turn off its particularly virulent aspects and perhaps develop a way to cure the disease entirely.
“If you reduce the possibility of toxo’s infecting livestock, then you can reduce the possibility of its infecting humans. Understanding the mechanism of virulence will help us identify a target to treat the disease,” explains Su. “It’s a long way to get there, but for now we need to understand the basic biology so that toxo can be better controlled.”
|
Orthostatic hypotension is low blood pressure that occurs when a person stands up from a seated or reclining position. For this reason, the condition is also called postural hypotension. Often, orthostatic hypotension is mild and short-lived. Sometimes, the person will be able to identify the reason their blood pressure has dropped and act accordingly to treat the problem at home. However, regular episodes of orthostatic hypotension could be a sign of an underlying health condition.
The most common symptom of orthostatic hypotension is a dizzy or light-headed sensation upon rising from sitting or lying down. The person may feel temporarily confused and have blurred vision. It can also cause nausea. Sometimes, the condition can cause weakness. If blood pressure becomes very low, the person may faint. Symptoms of orthostatic hypotension are temporary and usually last for just a few minutes.
If the person experiences dizziness when standing up only occasionally, this is usually not a cause for concern. However, if one has symptoms of orthostatic hypotension frequently, or the symptoms are severe, this can be a sign of an underlying health condition and requires assessment by a doctor. If the person loses consciousness when they stand up, even if it is for a very brief period, urgent medical care is required. Symptoms that occur at dangerous times, such as while driving, also need to be assessed urgently by a doctor.
When a person stands up, blood temporarily gathers in their legs and abdominal area. This is normal and works to temporarily lower blood pressure. Normally, the body automatically responds to this, and the heart starts to beat faster to raise the blood pressure back to a healthy level. When a person has orthostatic hypotension, the body's ability to correct low blood pressure from standing up is interrupted. This can happen for a variety of reasons.
Dehydration is a common and temporary cause of orthostatic hypotension. A person may become dehydrated because of a stomach bug, not drinking enough fluids, or vigorous exercise. Once healthy levels of hydration are restored, the body regains the ability to correct the blood pressure after standing. Problems with the cardiovascular system can also cause the condition. Endocrine and nervous system disorders often lead to the development of orthostatic hypotension. In seniors, orthostatic hypotension can occur after eating.
If a person is showing signs of orthostatic hypotension, a doctor may want to carry out regular blood pressure monitoring or perform a tilt table test. This involves raising and lowering the person on a tilting surface, allowing the doctor to assess how getting up affects the patient's blood pressure. The doctor may also order blood tests to investigate underlying health conditions that may cause low blood pressure.
Treatment for orthostatic hypotension involves attempting to restore normal blood pressure. If the condition is mild, it can usually be resolved by simply sitting down again during an episode. If an underlying condition causes orthostatic hypotension, medication may help raise blood pressure. A doctor may prescribe medicine to increase blood volume. Compression stockings can also help prevent blood pooling in the legs, which can lead to lower blood pressure when standing.
Simple lifestyle changes may be recommended to prevent and reduce episodes of orthostatic hypotension. Avoiding dehydration by drinking plenty of fluids can be helpful, as can reducing alcohol and salt intake. If the person experiences orthostatic hypotension after eating, consuming smaller meals may improve their condition. Propping the upper body in bed may also be recommended, as this reduces the difference in height between lying and standing. Standing up slowly can also prevent drops in blood pressure.
Orthostatic hypotension is most common in people over 65. Certain medications also make experiencing these events more likely, such as antidepressants. Narcotic and alcohol use can exacerbate the condition. If a person was on long-term bed rest, orthostatic hypotension could occur when they resume moving about. Extreme heat exposure can also cause low blood pressure upon standing because sweating in high temperatures can lead to dehydration.
Pregnant women often experience orthostatic hypotension because the cardiovascular system has to expand very quickly to meet the demands of growing a baby. Often, this leads to temporarily lowered blood pressure. Orthostatic hypotension in pregnancy is normal and isn't usually a cause for concern. Once the woman has given birth, her blood pressure should return to normal.
Usually, mild orthostatic hypotension doesn't cause long-term health problems. However, more severe cases can lead to complications, especially in older adults. One of the most common concerns is fainting and the injuries that could result from a fall. If a person has persistent orthostatic hypotension, they are also at a higher risk of having a stroke or developing cardiovascular problems such as heart failure or arrhythmia.
|
Tree pollens are a major irritant (Image: Jeremy Burgess/SPL)
The distribution and abundance of allergy-inducing pollen are changing, and this could certainly play a role in the hay fever epidemic – perhaps, as a result, more people are becoming sensitised or are seeking treatment for worsening symptoms.
Last September, invasive super-pollens made the news in the UK after they were detected in central England. The offending pollen was ragweed, a long-standing scourge for Americans with hay fever that has recently become established in parts of central and southern Europe – which is presumably where the pollen blew in from. Whether ragweed will establish itself further north is up for debate. “It’s a very plastic plant; it can survive at cooler temperatures, but whether it can prosper is another question,” says Roy Kennedy, director of the National Pollen and Aerobiology Research Unit in Worcester, UK.
Climate change could also be a factor, shifting the distribution of allergenic plants such as olive trees, which are a major cause of hay fever in Spain, and subtropical grasses, which are a problem in northern Australia. “With climate change, we are likely to see a spread and change in distribution of the types of grass pollen, with subtropical species that flower predominantly in summer coming in to play in more temperate regions,” says Janet Davies at the University of Queensland in Brisbane. If such plants then flower at different times, this could extend the hay fever season for people who become sensitised to them.
The flowering seasons of individual species could also change; one recent study found that the annual length of the …
|
A BRIEF HISTORY OF AFRICA
By Tim Lambert
Scientists believe that Africa was the birthplace of mankind. By 100,000 BC modern humans lived by hunting and gathering with stone tools. From Africa they spread to Europe.
By 5,000 BC farming had spread to North Africa. People herded cattle and they grew crops. At that time the Sahara Desert was not a desert. It was a green and fertile area. Gradually it grew drier and became a desert.
Meanwhile about 3,200 BC writing was invented in Northeast Africa, in Egypt. (It is sometimes forgotten that one of the world's oldest and greatest civilizations was African). The Egyptians made tools and weapons of bronze. However by the time Egyptian civilization arose most of Africa was cut off from Egypt and other early civilizations by the Sahara Desert. Africa was also hampered by its lack of good harbors, which made transport by sea difficult.
Farmers in Africa continued to use stone tools and weapons however about 600 BC the use of iron spread in North Africa. It gradually spread south and by 500 AD iron tools and weapons had reached what is now South Africa.
In 814 BC the Phoenicians from what is now Lebanon founded the city of Carthage in Tunisia. Carthage later fought wars with Rome and in 202 BC the Romans defeated the Carthaginians at the battle of Zama. In 146 BC Rome destroyed the city of Carthage and made its territory part of their empire.
Meanwhile Egyptian influence spread along the Nile and the kingdoms of Nubia and Kush arose in what is now Sudan. By 100 AD the kingdom of Axum in Ethiopia was highly civilized. Axum traded with Rome, Arabia and India. Axum became Christian in the 4th century AD.
Meanwhile the Roman Empire continued to expand. In 30 BC Egypt became a province of Rome. Morocco was absorbed in 42 AD. However the rest of Africa was cut off from Rome by the Sahara Desert.
AFRICA IN THE MIDDLE AGES
In 642 the Arabs conquered Egypt. In 698-700 they took Tunis and Carthage and soon they controlled all of the coast of North Africa. The Arabs were Muslims, of course, and soon the whole coast of North Africa converted to Islam. Ethiopia remained Christian but it was cut off from Europe by the Muslims.
After 800 AD organised kingdoms emerged in northern Africa. They traded with the Arabs further north. (Trade with the Arabs led to the spread of Islam to other parts of Africa). Arab merchants brought luxury goods and salt. In return they purchased gold and slaves from the Africans.
One of the earliest African kingdoms was Ghana (it included parts of modern Mali and Mauritania, to the northwest of the modern country of Ghana). By the 9th century Ghana was called the land of gold. However Ghana was destroyed in the 11th century by Africans from further north.
By the 11th century the city of Ife in Southwest Nigeria was the capital of a great kingdom. From the 12th century craftsmen from Ife made terracotta sculptures and bronze heads. However by the 16th century Ife was declining.
Another African state was Benin. (The medieval kingdom of Benin was bigger than the modern country). From the 13th century Benin was rich and powerful.
Meanwhile the kingdom of Mali was founded in the 13th century. By the 14th century Mali was rich and powerful. Its cities included Timbuktu, which was a busy trading center where salt, horses, gold and slaves were sold. However the kingdom of Mali was destroyed by Songhai in the 16th century.
Songhai was a kingdom situated east of Mali on the River Niger from the 14th century to the 16th century. Songhai reached a peak about 1500 AD. However in 1591 they were defeated by the Moroccans and their kingdom broke up.
Another great north African state was Kanem-Bornu, located near Lake Chad. Kanem-Bornu rose to prominence in the 9th century and it remained independent till the 19th century.
Meanwhile the Arabs also sailed down the east coast of Africa. Some of them settled there and they founded states such as Mogadishu. They also settled on Zanzibar.
Inland some people in southern Africa formed organised kingdoms. About 1430 impressive stone buildings were erected at Great Zimbabwe.
Meanwhile in the Middle Ages Ethiopia flourished. The famous church of St George was built about 1200.
Meanwhile the Portuguese were exploring the coast of Africa. In 1431 they reached the Azores. Then in 1482 they reached the mouth of the River Congo. Finally in 1488 the Portuguese sailed around the Cape of Good Hope.
In the 16th century Europeans began to transport African slaves across the Atlantic. However slavery was nothing new in Africa. For centuries Africans had sold other Africans to the Arabs as slaves. However the trans-Atlantic slave trade grew until it was huge.
In the 18th century ships from Britain took manufactured goods to Africa. They took slaves from there to the West Indies and took sugar back to Britain. This was called the Triangular Trade. (Many other European countries were involved in the slave trade).
Some Africans were sold into slavery because they had committed a crime. However many slaves were captured in raids by other Africans. Europeans were not allowed to travel inland to find slaves. Instead Africans brought slaves to the coast. Any slaves who were not sold were either killed or used as slaves by other Africans. The slave trade would have been impossible without the co-operation of Africans many of whom grew rich on the slave trade.
Meanwhile from the 16th to the 18th centuries Barbary pirates from the North African coast robbed Spanish and Portuguese ships.
In the 16th century the Ottoman Turks conquered most of the North African coast. In 1517 they captured Egypt and by 1556 most of the coast was in their hands.
Further south Africans continued to build powerful kingdoms. The empire of Kanem-Bornu expanded in the 16th century using guns bought from the Turks. However in the 16th century Ethiopia declined in power and importance although it survived.
Meanwhile the Europeans founded their first colonies in Africa. In the 16th century the Portuguese settled in Angola and Mozambique while in 1652 the Dutch founded a colony in South Africa.
In the 19th century European states tried to stop the slave trade. Britain banned the slave trade in 1807. On the other hand in the late 19th century Europeans colonized most of Africa!
In 1814 the British took the Dutch colony in South Africa. In 1830 the French invaded northern Algeria. However colonization only became serious in the late 19th century when Europeans 'carved up' Africa. In 1884 the Germans took Namibia, Togo and Cameroon and in 1885 they took Tanzania. In 1885 Belgium took over what is now the Democratic Republic of Congo. The French took Madagascar in 1896. They also expanded their empire in northern Africa. In 1912 the French took Morocco and Italy took Libya. In 1914 the British took control of Egypt. By then all of Africa was in European hands except Liberia and Ethiopia. (The Italians invaded Ethiopia in 1896 but they were defeated by the Ethiopians).
Further south the British took Zimbabwe, Zambia, Malawi, Uganda and Kenya. Angola and Mozambique remained Portuguese.
However in the early 20th century attitudes to imperialism began to change in Europe. Furthermore in Africa churches provided schools and increasing numbers of Africans became educated. They became impatient for independence. The movement for African independence became unstoppable and in the late 1950s and 1960s most African countries became independent. In 1960 alone 17 countries gained their independence. However Mozambique and Angola did not become independent until 1975.
In the early 21st century Africa began to boom. Today the economies of most African countries are growing rapidly. Tourism in Africa is booming and investment is pouring into the continent. Africa is developing rapidly and there is every reason to be optimistic.
A History of Botswana
A History of Egypt
A History of Ethiopia
A History of Gambia
A History of Libya
A History of Malawi
A History of Morocco
A History of Senegal
A History of South Africa
A History of Tanzania
A History of Tunisia
A History of Uganda
A History of Zambia
|
Shirobana spirea requires fertilizing every few years, pruning, and watering. When planting Shirobana spirea, a hole should be dug twice the size of the roots and as deep as the plant was in the container. The plant should be placed in the hole, and the hole should be refilled with dirt and mulch.
Shirobana spirea, scientifically known as Spiraea japonica 'Shirobana', is a deciduous shrub. Its leaves are green and grow up to 3 inches long. The plant produces white and pink flowers in the summer. The species is native to China and Japan.
Shirobana spirea can be pruned by pinching, thinning, shearing or rejuvenating. Pinching involves removing the stem tips from branches. Thinning means removing branches to let in more light and provide better circulation. Shearing involves clipping the surface with shears to maintain the plant's shape, and rejuvenating involves removing old branches to reduce the size of the plant.
Shirobana spirea is affected by aphids, powdery mildew, caterpillars, blight and leaf spots. Aphids are small insects that suck fluid from the plant; they can stunt the plant's growth and transmit viruses to it. Powdery mildew is a fungus that kills leaves and is found on plants with inadequate air circulation. Caterpillars eat the leaves and stems of the plant. Blight is caused by fungi or bacteria, and it kills the plant's tissue. Leaf spots are also caused by fungi or bacteria, and they damage the plant's leaves.
|
By Mario Salazar
For 39 years, the boundary dividing the newly liberated Chile and the Mapuche remained in place, and the treaty conceding the land in the south to the Mapuche was ratified by both Chile and Argentina.
In 1860 the Mapuche nation, headed by the troika of Lonkos Kalipan of Gulumapu, Kalfucura of Puelmapu, and Orélie-Antoine de Tounens (a naturalized Mapuche), established a constitutional monarchy on their lands in the Southern Cone of South America. This nation was recognized by several European nations and was, for all practical purposes, an independent sovereign state, its legitimacy recognized by international treaties.
In 1862 the Chilean and Argentinean governments started a war of genocide against the Mapuche nation in violation of international treaties. These treaties included the original one, and one ratified during the years of independence by both Chile and Argentina. The encounters were very bloody and one-sided, with Chilean and Argentinean forces using superior weapons and having the advantage of numbers. No country in the world came to the aid of the Mapuche nation. Chile and Argentina finally prevailed in 1865, ending the Mapuche nation as an independent country.
For more on tribal sovereignty, see US = "Baby Country" and Modoc Nation Rips UN Declaration.
|
Researchers from MIT, Princeton University, and elsewhere have developed a new technique to monitor the seasonal changes in Greenland’s ice sheet, using seismic vibrations generated by crashing ocean waves. The results, published today in the journal Science Advances, may help scientists pinpoint regions of the ice sheet that are most vulnerable to melting. The technique may also set better constraints on how the world’s ice sheets contribute to global sea-level changes.
“One of the major contributors to sea level rise will be changes to the ice sheets,” says Germán Prieto, the Cecil and Ida Green Career Development Assistant Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT. “With our technique, we can continuously monitor ice sheet volume changes associated with winter and summer. That’s something that global models need to be able to take into account when calculating how much ice will contribute to sea level rise.”
Prieto and his colleagues study the effects of “seismic noise,” such as ocean waves, on the Earth’s crust. As ocean waves crash against the coastline, they continuously create tiny vibrations, or seismic waves.
“They happen 24 hours a day, seven days a week, and they generate a very small signal, which we generally don’t feel,” Prieto says. “But very precise seismic sensors can feel these waves everywhere in the world. Even in the middle of continents, you can see these ocean effects.”
The seismic waves generated by ocean waves can propagate through the Earth’s crust, at speeds that depend in part on the crust’s porosity: The more porous the rocks, the slower seismic waves travel. The scientists reasoned that any substantial overlying mass, such as an ice sheet, may act like a weight on a sponge, squeezing the pores closed or letting them reopen, depending on whether the ice above is shrinking or growing in size.
The team, led by Aurélien Mordret, a postdoc in EAPS, hypothesized that the speed of seismic waves through the Earth’s crust may therefore reflect the volume of ice lying above.
“By looking at velocity changes, we can make predictions of the volume change of the ice sheet mass,” Prieto says. “We can do this continuously over time, day by day, for a particular region where you have seismic data being recorded.”
Scientists typically track changing ice sheets using laser altimetry, in which an airplane flies over a region and sends a laser pulse down and back to measure an ice sheet’s topography. Researchers can also look to data gathered by NASA’s GRACE (Gravity Recovery and Climate Experiment) mission — twin satellites that orbit the Earth, measuring its gravity field, from which scientists can infer an ice sheet’s volume.
As Prieto points out, “you can only do laser altimetry several times a year, and GRACE satellites require about one month to cover the Earth’s surface.”
In contrast, ocean waves and the seismic waves they produce generate signals that sensors can pick up continuously.
“This has very good time resolution, so it can look at melting over short time periods, like summer to winter, with really high precision that other techniques might not have,” Prieto says.
The researchers looked through seismic data collected from January 2012 to January 2014, from a small seismic sensor network situated on the western side of Greenland’s ice sheet. The sensors record seismic vibrations generated by ocean waves along the coast, and they have been used to monitor glaciers and earthquakes. Prieto’s team is the first to use seismic data to monitor the ice sheet itself.
Looking through the seismic data, the scientists were able to detect incredibly small changes in the velocity of seismic waves, of less than 1 percent. They tracked average velocities from January 2012 to 2014, and observed very large seismic velocity decreases in 2012, versus 2013. These measurements mirrored the observations of ice sheet volume made by the GRACE satellites, which recorded abnormally large melting in 2012 versus 2013. The comparison suggested that seismic data may indeed reflect changes in ice sheets.
Using data from the GRACE satellites, the team then developed a model to predict the volume of the ice sheet, given the velocity of the seismic waves within the Earth’s crust. The model’s predictions matched the satellite data with 91 percent accuracy.
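As a rough illustration of the calibration idea, here is a minimal Python sketch. The linear form, the variable names, and every number below are assumptions made for illustration only; the team's actual model and data are in the paper, and the real analysis works with continuous noise-correlation measurements rather than a handful of points.

```python
# Hypothetical sketch: calibrate seismic velocity changes against GRACE
# ice-mass data, then use seismic data alone to estimate ice-sheet mass.
# All numbers below are invented for illustration.
import numpy as np

# Relative seismic velocity changes (dv/v, in percent) from noise monitoring.
dv_over_v = np.array([0.02, -0.15, -0.40, -0.55, -0.30, -0.05, 0.01])

# Coincident ice-mass anomalies from GRACE (gigatonnes), used for calibration.
grace_mass = np.array([5.0, -40.0, -120.0, -165.0, -90.0, -15.0, 2.0])

# Fit a simple linear calibration: mass ~ a * (dv/v) + b.
a, b = np.polyfit(dv_over_v, grace_mass, deg=1)

# Once calibrated, the seismic measurements alone yield a daily mass estimate.
predicted_mass = a * dv_over_v + b
print(f"calibration: mass = {a:.1f} * dv/v + {b:.1f} Gt")
```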
Nikolai Shapiro, research director for the National Center for Scientific Research at the Institut de Physique du Globe de Paris, sees the group’s technique as “a very nice contribution in the direction of developing methods for environmental seismological monitoring.” He adds that such use of seismic data to study ice sheets “will certainly become more and more frequent and will become even more valuable, with an ongoing effort to install seismic networks in the vicinity of ice sheets, both in southern and northern polar areas.”
Toward that end, the team plans next to use available seismic networks to track the seasonal changes in the Antarctic ice sheet.
“Our efforts right now are to use what’s available,” Prieto says. “Nobody has been looking at this particular area using seismic data to monitor ice sheet volume changes.”
If the technique is proven reliable in Antarctica, Prieto hopes to stimulate a large-scale project involving many more seismic sensors distributed along the coasts of Greenland and Antarctica.
“If you have very good coverage, like an array with separations of about 70 kilometers, we could in principle make a map of the regions that have more melting than others, using this monitoring, and maybe better refine models of how ice sheets respond to climate change,” Prieto says.
In addition to MIT and Princeton, the paper’s contributing institutions are Stanford University, Harvard University, and Boise State University. This research was supported, in part, by the National Science Foundation.
Story Image: A photo of the edge of the Greenland ice sheet. “With our technique, we can continuously monitor ice sheet volume changes associated with winter and summer,” Germán Prieto says - Image courtesy: MIT News
|
The "Make the Play for Healthy Habits" kid contest is an extension of BCBSM's ongoing efforts to combat childhood obesity by encouraging kids to share their ideas using creativity and new media. In addition, this week BCBSM announced that elementary schools can apply for a new round of grant funding from Building Healthy Communities, a partnership with the Michigan Fitness Foundation, Wayne State University's College of Education Center for School Health and the United Dairy Institute of Michigan. Since 2009, BCBSM, the program's creator and primary funder, has invested more than $3 million in the Building Healthy Communities program in an effort to promote healthier lifestyles and prevent childhood obesity and its associated health risks.Immediate health effects of childhood obesity:
- Obese youth are more likely to have risk factors for cardiovascular disease, such as high cholesterol or high blood pressure. In a population-based sample of five- to 17-year-olds, 70 percent of obese youth had at least one risk factor for cardiovascular disease.
- Obese adolescents are more likely to have pre-diabetes, a condition in which blood glucose levels indicate a high risk for development of diabetes.
- Children and adolescents who are obese are at greater risk for bone and joint problems, sleep apnea, and social and psychological problems such as stigmatization and poor self-esteem.
- Children and adolescents who are obese are likely to be obese as adults and are therefore more at risk for adult health problems such as heart disease, type 2 diabetes, stroke, several types of cancer and osteoarthritis. One study showed that children who became obese as early as age two were more likely to be obese as adults.
- Overweight and obesity are associated with increased risk for many types of cancer, including cancer of the breast, colon, endometrium, esophagus, kidney, pancreas, gall bladder, thyroid, ovary, cervix and prostate, as well as multiple myeloma and Hodgkin's lymphoma.
- According to the CDC, healthy lifestyle habits, including healthy eating and physical activity, can lower the risk of becoming obese and developing related diseases.
- The dietary and physical activity behaviors of children and adolescents are influenced by many sectors of society, including families, communities, schools, child care settings, medical care providers, faith-based institutions, government agencies and the media, as well as the food, beverage and entertainment industries.
|
From the work done in the last section we can easily derive the principle of conservation of angular momentum. After we have established this principle, we will examine a few examples that illustrate it.
Recall from the last section that τ_ext = dL/dt. In light of this equation, consider the special case in which there is no net torque acting on the system. In this case, dL/dt must be zero, implying that the total angular momentum of the system is constant. We can state this verbally:
If no net external torque acts on a system, the total angular momentum of the system remains constant.

This statement describes the conservation of angular momentum. It is the third of the major conservation laws encountered in mechanics (along with the conservation of energy and of linear momentum).
There is one major difference between the conservation of linear momentum and conservation of angular momentum. In a system of particles, the total mass cannot change. However, the total moment of inertia can. If a set of particles decreases its radius of rotation, it also decreases its moment of inertia. Though angular momentum will be conserved under such circumstances, the angular velocity of the system might not be. We shall explore these concepts through some examples.
Consider a spinning skater. A popular skating move involves beginning a spin with one's arms extended, then moving the arms closer to the body. This motion increases the speed with which the skater rotates. We shall examine why this is the case using our conservation law. When the skater's arms are extended, the moment of inertia of the skater is greater than when the arms are close to the body, since some of the skater's mass is then at a larger radius of rotation. Because we can consider the skater an isolated system, with no net external torque acting, when the moment of inertia of the skater decreases, the angular velocity increases, according to the equation L = Iω.
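As a worked illustration, here is a minimal numerical sketch of the skater example. The moments of inertia and the initial spin rate are invented for illustration; only the conservation relation I₁ω₁ = I₂ω₂ comes from the discussion above.

```python
# Conservation of angular momentum for the spinning skater.
# All numbers are illustrative assumptions.
I_arms_out = 3.0      # kg*m^2, moment of inertia with arms extended
I_arms_in = 1.2       # kg*m^2, moment of inertia with arms pulled in
omega_initial = 2.0   # rad/s, spin rate with arms extended

# No net external torque acts, so L = I * omega is conserved:
# I_arms_out * omega_initial = I_arms_in * omega_final
L = I_arms_out * omega_initial
omega_final = L / I_arms_in

print(f"omega increases from {omega_initial:.1f} to {omega_final:.2f} rad/s")
# -> omega increases from 2.0 to 5.00 rad/s: pulling the arms in speeds the spin.
```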
Another popular example of the conservation of angular momentum is that of a person holding a spinning bicycle wheel while sitting on a rotating chair. When the person turns the bicycle wheel over, the wheel's angular momentum reverses direction; to keep the total angular momentum of the system constant, the person and chair begin to rotate in the direction in which the wheel was originally spinning.
We have now completed our study of angular momentum, and have likewise come to the end of our examination of the mechanics of rotation. Since we have already examined the mechanics of linear motion, we can now describe basically any mechanical situation. The union of rotational and linear mechanics can account for almost any motion in the universe, from the motion of planets to projectiles.
|
Split personality – Multiple Personality Disorder
“Split personality” is the popular name for dissociative identity disorder (DID), formerly called multiple personality disorder (MPD). The disorder is characterized by having at least one “alter” personality that controls behaviour. The “alters” are said to occur spontaneously and involuntarily, and to function more or less independently of each other. Some individuals with DID have been found to have personality states with different ways of reacting, in terms of emotions, pulse, blood pressure, and blood flow to the brain. Dissociative identity disorder is an effect of severe trauma during early childhood, usually extreme, repetitive physical, sexual, or emotional abuse.
Dissociative identity disorder (DID), also known as multiple personality disorder (MPD), is a mental disorder characterized by at least two distinct and relatively enduring identities or dissociated personality states that alternately control a person’s behaviour, and is accompanied by memory impairment for important information not explained by ordinary forgetfulness. These symptoms are not accounted for by substance abuse, seizures, and other medical conditions. Diagnosis is often difficult as there is considerable comorbidity with other mental disorders.
Dissociative identity disorder (DID), formerly called multiple personality disorder (MPD) or split personality, is an illness characterized by the presence of at least two clear personality states, called alters, which may have different reactions, emotions, and body functioning. Key points about signs and symptoms:
- How often DID occurs remains difficult to know; a history of severe abuse is thought to be associated with DID (MPD).
- Signs and symptoms of DID include memory lapses, blackouts, being often accused of lying, finding apparently strange items among one's possessions, having apparent strangers recognize one as someone else, feeling unreal, and feeling like more than one person.
- Individuals with DID often also suffer from other mental illnesses, including post-traumatic stress disorder and borderline and other personality disorders.
- Psychotherapy is the mainstay of treatment of DID and usually involves helping individuals improve their relationships with others, prevent crises, and experience feelings they are not comfortable with having.
DID is one of the most controversial psychiatric disorders with no clear consensus regarding its diagnosis or treatment. Research on treatment effectiveness still focuses mainly on clinical approaches and case studies. Dissociative symptoms range from common lapses in attention, becoming distracted by something else, and daydreaming, to pathological dissociative disorders. No systematic, empirically-supported definition of “dissociation” exists.
Although neither epidemiological surveys nor longitudinal studies have been done, it is thought DID rarely resolves spontaneously. Symptoms are said to vary over time. In general, the prognosis is poor, especially for those with co-morbid disorders. There is little systematic data on the prevalence of DID. The prevalence of DID increased greatly in the latter half of the 20th century, along with the number of identities (often referred to as “alters”) claimed by patients (increasing from an average of two or three to approximately 16).
Specifically, it is thought that one way that some individuals respond to being severely traumatized as a young child is to wall off, in other words to dissociate, those memories. When that reaction becomes extreme, DID may be the result. As with other mental disorders, having a family member with DID may be a risk factor, in that it indicates a potential vulnerability to developing the disorder but does not translate into the condition being literally hereditary. While there’s no “cure” for dissociative identity disorder, long-term treatment is very successful, if the patient stays committed. Effective treatment includes talk therapy or psychotherapy, medications, hypnotherapy, and adjunctive therapies such as art or movement therapy.
|
Recovery of Neandertal mtDNA: an evaluation
The recovery of mitochondrial DNA (mtDNA) from the right arm bone (humerus) of the original Neandertal fossil discovered in 1856 in a cave in the Neander Valley, near Dusseldorf, Germany, has been hailed as a stunning feat of modern biochemistry. Christopher Stringer (Natural History Museum, London) said: ‘For human evolution, this is as exciting as the Mars landing’. The achievement was announced in the July 11, 1997 issue of the journal, Cell.1 There is no question that the accomplishment was both conceptually and experimentally brilliant. However, the brilliance of the methodology does not guarantee the accuracy of the interpretation which the authors of the Cell article have placed on the data.
Based upon the differences between the Neandertal mtDNA and modern human mtDNA, the evolutionary interpretation is that the Neandertal line diverged from the modern human line about 550,000 to 690,000 years ago and that the Neandertals became extinct without contributing mtDNA to modern humans. The implication is that the Neandertals did not evolve into fully modern humans, that they were a different species from modern humans, and that they were just one of many proto-human types that were failed evolutionary experiments. We alone evolved to full humanity.
Two factors make humans unique. We are the only members of our genus, Homo, on the planet; and we are interfertile world-wide. Biologically, we humans are an oddity. Almost all other organisms have many kindred species, some living and some extinct. Since evolutionists believe humans are just a part of nature and of the evolutionary process, they believe that there must have been a number of proto-human species at one time, even though we are now alone.
The fossil record is now being reinterpreted to bring human origins more in line with the rest of nature. Evolutionary trees are out. Evolutionary bushes are in. Homo habilis is being split into two separate species, Homo habilis and Homo rudolfensis. Homo erectus is being split into two separate species, Homo erectus and Homo ergaster. The Neandertals are just one of at least five twigs on the human evolutionary bush. Evolutionists do not know—and say that they may never know—from which of the twigs modern humans evolved. However, the Neandertals—through the interpretation of this mtDNA recovery—have now been eliminated from modern human ancestry. Since 1964 the Neandertals have been considered a sub-species of modern humans. They will now almost certainly be moved out of our species and back into a separate species, Homo neanderthalensis.
The extensive publicity associated with this remarkable biochemistry is certain to give the concept of human evolution added stature. However, there is solid evidence for believing that the Neandertals were fully human and the ancestors of at least some modern humans. The evidence that the Neandertals were members of our species and were fully human falls into three general categories:
(1) the Biblical and cultural evidence,
(2) the fossil evidence for gradations, and
(3) the flawed interpretation of the mtDNA evidence.
Biblical and Cultural Evidence
No one has had a worse public image to overcome than have the Neandertals. When that Neander Valley individual was discovered in a cave in Germany in 1856, the shape of his skull and the curves in the long bones of his body caused evolutionists to believe that the expected link between apes and humans had been found. Evolutionary preconceptions also guided the world-famous anatomist, Marcellin Boule, as he restored the Neandertal skeleton from La Chapelle-aux-Saints, France, to show the world what a Neandertal looked like—a stooped and stupid hunchback. This view of the Neandertal ‘Cave Man’ prevailed for 100 years.
In the 1960s Boule’s glaring mistakes were corrected. It was realized that the Neandertal people, when healthy, stood straight and erect. The physical ‘redemption’ of the Neandertals was accomplished. However, the Neandertals were still considered to be culturally barren. Even the discovery at Shanidar Cave, Iraq, that the Neandertals buried their dead with flowers2 did not improve their general image. Many evolutionists still talk about the Neandertal people as having been culturally stagnant. They say that about 40,000 years ago, ‘The Great Leap Forward’ took place. Anatomically modern humans invaded Europe bringing art, technology, and innovation.3 The Neandertals, being outclassed, disappeared.
In recent years, however, a cultural ‘redemption’ of the Neandertals has begun to take place. The year 1996 saw the publication of discoveries of items of personal ornamentation used by Neandertals4,5 and the first example of a Neandertal musical instrument.6,7 Archaeologist Randall White (New York University) says of the Neandertals: ‘The more this kind of evidence accumulates, the more they look like us’.8 It can now be said that every type of evidence that we can reasonably expect from the fossil and archaeological record showing that the Neandertals were fully human has already been discovered.
One of the strongest evidences that the Neandertals were fully human relates to their reputation as ‘Cave Men’. Since so many of their remains have been found in caves, it was assumed that they lived in caves because they had not evolved enough to invent more sophisticated dwellings. The public is unaware that Neandertal dwellings have been found. Nor is the public aware that thousands of people across the world live in caves today. When Ralph Solecki (Columbia University) excavated Shanidar Cave, Iraq, he discovered that about 80 Kurds had lived in that cave until 1970, during a time of political unrest.9
The book of Genesis sheds light on the activity of early humans regarding the use of caves. The first reference to caves is in Genesis 19:30, which states that Lot and his daughters lived in a cave after fleeing the destruction of Sodom. This is in keeping with the use of caves throughout human history as temporary or permanent shelters. However, all other references to caves in Genesis refer to a usage that is seldom considered today.
Genesis 23:17–20 (NIV) records a business transaction between Abraham and the Hittite, Ephron. Abraham wanted to purchase property in order to bury Sarah.
‘So Ephron’s field in Machpelah near Mamre—both the field and the cave in it, and all the trees within the borders of the field—was deeded to Abraham as his property in the presence of all the Hittites who had come to the gate of the city. Afterward Abraham buried his wife Sarah in the cave in the field of Machpelah near Mamre (which is at Hebron) in the land of Canaan. So the field and the cave in it were deeded to Abraham by the Hittites as a burial site.’
Upon his death (Genesis 25:7–11), Abraham was buried in that same cave. In Genesis 49:29–32, Jacob instructs his sons that he, too, is to be buried in that cave where Abraham and Sarah were buried. We then learn that Jacob buried his wife, Leah there, and that Isaac and Rebekah were buried there also. Abraham and Sarah, Isaac and Rebekah, and Jacob and Leah were all buried in the cave in the field of Machpelah which Genesis 23:20 states Abraham purchased ‘as a burial site’. Only Sarah died in the geographic area of the cave. All of the others had to be transported some distance to be buried there, and Jacob’s body had to be brought up from Egypt. It was important then, as it is today, to be buried with family and loved ones.
The Neandertal fossil evidence shows that the Neandertal practice is in complete accord with the Genesis record. At least 345 Neandertal fossil individuals have been discovered so far at 83 sites in Europe, the Near East, and western Asia. Of these 345 Neandertal individuals, 183 of them (53 per cent) represent burials—all of them burials in caves or rock shelters. Further, it is obvious that caves were used as family burial grounds or cemeteries, as the following sites show:
Krapina Rock Shelter, Croatia—75 (minimum) Neandertals buried.
Arcy-sur-Cure caves, France—26 Neandertals buried.
Kebara Cave, Mount Carmel, Israel—21 Neandertals buried.
Tabun Cave, Mount Carmel, Israel—12 Neandertals buried.
La Ferrassie Rock Shelter, France—8 Neandertals buried.
Shanidar Cave, Iraq—7 Neandertals buried.
Amud Cave, Galilee, Israel—7 Neandertals buried.
Guattari Cave, Monte Circeo, Italy—4 Neandertals buried.
Ksar 'Akil Rock Shelter, Lebanon—3 Neandertals buried.
It is understandable why burial in caves was common in ancient times. Graves in open areas must be marked so that future generations can return to pay homage to their ancestors. However, grave markers or reference points can be changed, destroyed, or moved. Directions to the grave site can become confusing over time. Landscapes can change, and memories of certain features can become clouded. Just as Abraham did not always live in one place, so the Neandertals may have moved seasonally following herds of game. Since caves are usually permanent, it would have been easy to locate the family burial site if it were in a cave. One could be sure that he was at the very spot where his ancestors were buried.
Most anthropologists recognize burial as a very human, and a very religious act. But the strongest evidence that Neandertals were fully human and of our species is that at four sites Neandertals and modern humans were buried together. In all of life, few desires are stronger than the desire to be buried with one’s own people. Jacob lived in Egypt, but wanted to be buried in the family cemetery in the cave of Machpelah. Joseph achieved fame in Egypt, but wanted his bones to be taken back to Israel (Genesis 50:25, Exodus 13:19, Joshua 24:32). Until recently it was the custom to have a cemetery next to the church so that the church family could be buried together. For centuries, many cities had separate cemeteries for Protestants, Roman Catholics, and Jews so that people could be buried with their own kind.
Skhul Cave, Mount Carmel, Israel, is considered to be a burial site of anatomically modern Homo sapiens individuals. Yet, the Skhul IV and Skhul IX fossil skulls are closer to the Neandertal configuration than they are to modern humans.10 Jebel Qafzeh, Galilee, Israel, is also considered to be an anatomically modern burial site. However, Qafzeh skull 6 is clearly Neandertal in its morphology.11 Tabun Cave, Mount Carmel, Israel, is one of the classic Neandertal burial sites. But the Tabun C2 mandible is more closely aligned with modern mandibles found elsewhere.12 The Krapina Rock Shelter, Croatia, is one of the most studied Neandertal burial sites. At least 75 individuals were buried there. However, the remains are fragmentary, making diagnosis difficult. The addition of several newly identified fragments to the Krapina A skull (also known as Krapina 1) reveals it to be much more modern than was previously thought, indicating that it is closer in shape to modern humans than it is to the Neandertals.13
That Neandertals and anatomically modern humans were buried together constitutes strong evidence that they lived together, worked together, intermarried, and were accepted as members of the same family, clan, and community. The false distinction made by evolutionists today was not made by the ancients. To call the Neandertals ‘Cave Men’ is to give a false picture of who they were and why caves were significant in their lives. If genuine mtDNA was recovered from that fossil from the Neander Valley, the results have been misinterpreted. ‘From one man he (God) made every nation of men, that they should inhabit the whole earth.’ (Acts 17:26 NIV).
In comparing the Neandertal burial practice with Genesis, I do not wish to imply that Abraham and his descendants were Neandertals. What the relationship was—if any—between the people of Genesis and the Neandertals we do not know. Young-Earth creationists tend to believe that the Neandertals were a post-Flood people. Evolutionists date the Neandertals from about 300,000 to about 33,000 years ago. What is striking is that the burial practice of the Neandertals seems to be identical with that of the people of Genesis.
Fossil Evidence for Gradations
What is it that makes a Neandertal a Neandertal, in contrast to an anatomically modern Homo sapiens? G.A. Clark (Arizona State University) states the problem: ‘That researchers cannot distinguish a “Neandertal” from a “modern human” might seem surprising to some, but there is little consensus on what these terms mean’.14 Although anthropologists have yet to agree on a formal definition of the Neandertals, there is a set of physical characteristics that are used in referring to a classic Neandertal morphology. They are:
The skull is lower, broader, and elongated in contrast to the higher doming of a modern skull.
The average brain size (cranial capacity) is larger than the average modern human by almost 200 cubic centimetres.
The forehead is low, with heavy brow ridges curving over each eye.
There is a slight projection at the rear of the skull (occipital bun).
The cranial wall is thick compared to modern humans.
The facial architecture is heavy, with the mid-face and the upper jaw projecting forward (prognathism).
The nose is prominent and broad.
The frontal sinuses are expanded.
The lower jaw is large and lacks a definite chin.
The body bones are heavy and thick and the long bones somewhat curved.
Any one of these characteristics, several of them, or perhaps even all of them might be found in some humans living today. There is nothing profoundly distinct about them. In fact, when the first Neandertal was discovered in 1856, even ‘Darwin’s bulldog’, Thomas Huxley, recognized that it was fully human and not an evolutionary ancestor. Donald Johanson, in his book, Lucy’s Child, writes: ‘From a collection of modern human skulls Huxley was able to select a series with features leading “by insensible gradations” from an average modern specimen to the Neandertal skull. In other words, it wasn’t qualitatively different from present-day Homo sapiens’.15
What Huxley was able to do with his collection of skulls more than a century ago, any anthropologist with a respectable collection of modern skulls could do in his laboratory today—show that the Neandertals were not qualitatively different from present-day Homo sapiens.
This same gradation from Neandertals to modern humans can also be seen in the fossil record. We are not referring to an evolutionary transition from earlier Neandertals to later modern humans. We are referring to morphological gradations between Neandertals and modern humans, both having the same dates and living at the same time as contemporaries representing a single human population. Whereas evolutionists have chosen to divide these humans into two categories—Neandertals and anatomically modern Homo sapiens—individual fossils are not always that easy to categorize. There is a wide range of variation among modern humans, and there is variation within the Neandertal category as well. A number of fossils in each group are very close to that subjective line, and could be categorized either way. These fossils constitute a gradation between Neandertals and modern humans, demonstrating that the distinction made by evolutionists is an artificial one.
Among fossils usually classified as Neandertal are at least 25 individuals from five different sites who are clearly close to that subjective line which divides Neandertals from anatomically modern Homo sapiens. These fossils constitute part of that continuum or gradation from Neandertals to modern humans found in the fossil record. Evolutionists recognise these fossils as departing from the classic Neandertal morphology and describe them as ‘progressive’ or ‘advanced’ Neandertals. Their shape is sometimes explained as the result of gene flow (hybridization) with more modern populations. This would refute the interpretation of the mtDNA evidence that the Neandertals and modern humans are not the same species—since reproduction is on the species level. Those sites having ‘advanced’ Neandertals are:
Vindija Cave remains, Croatia—twelve individuals.16
Starosel’e remains, Ukraine, CIS—two individuals.19
Stetten 3 humerus, cave deposits, Germany—one individual.20
Ehringsdorf (Weimar) remains, Germany—nine individuals.21
Completing that continuum or gradation from Neandertals to modern humans are at least 107 individuals from five sites who are usually grouped with fossils of anatomically modern humans. However, since they are close to the line which divides them from the Neandertals, they are often described as ‘archaic moderns’ or stated to have ‘Neandertal affinities’ or ‘Neandertal features’. These five sites are:
Oberkassel remains, Germany—two individuals.22
Bacho Kiro Cave mandibles, Bulgaria—two individuals.28
Pontnewydd Cave remains, Wales—four individuals.29
G.A. Clark summarizes the evidence that the Neandertals are the ancestors of at least some modern humans: ‘Those who would argue that Neandertals became extinct without issue should show how it could have occurred without leaving traces of disjunction in the archaeological record and in the fossils themselves’.14
The mtDNA evidence
Details of Ancient DNA Recovery
DNA is the incredibly complex molecule involved in the genetics of life. Deprived of the repair mechanisms found in the living cell, there is substantial breakdown of DNA within a few hours after the death of the organism. Causes of DNA degrading include water, oxygen, heat, pressure, time, exposure to transition metals (such as zinc), microbe attack, and radiation. This degrading involves the breakage of the cross-linking of the DNA molecules, modification of sugars, alteration of bases, and the breakage of long strands into strands that eventually become so short that no information can be retrieved from them.
It is uncertain how long retrievable DNA will last. It is thought that it might last a few thousand years. To last longer, DNA must be removed from degrading factors soon after biological death and preserved. Under the most favorable conditions, evolutionists estimate that DNA might last ‘tens of thousands of years’.30,31 However, even under ideal conditions, background radiation will eventually erase all genetic information. Sensational reports about the recovery of DNA millions of years old are now discounted because researchers have not been able to repeat the results. Even amber is not the fool-proof preservative it was once thought to be.32
In the past, there was a scarcity of genetic material for experimentation. It was largely inaccessible because it was always embedded in a living system. Kary B. Mullis writes: ‘… it is difficult to get a well-defined molecule of natural DNA from any organism except extremely simple viruses’.33 One of the most remarkable breakthroughs in modern biotechnology was the development in the 1980s of the polymerase chain reaction (PCR). Kary Mullis shared the 1993 Nobel Prize in chemistry for his ‘invention’. The PCR technique can make unlimited copies of a specific DNA sequence independent of the organism from which it came:‘With PCR, tiny bits of embedded, often hidden, genetic information can be amplified into large quantities of accessible, identifiable, and analyzable material’.34
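To get a feel for why PCR is so powerful, and why the contamination problem discussed below is so serious, consider a minimal sketch of idealized amplification. This example is illustrative only and is not from the Cell study; the per-cycle efficiency parameter is a simplifying assumption.

def pcr_copies(initial_templates, cycles, efficiency=1.0):
    # Idealized PCR: each thermal cycle multiplies every template
    # molecule by (1 + efficiency); perfect doubling is efficiency = 1.0.
    return initial_templates * (1.0 + efficiency) ** cycles

print(pcr_copies(1, 30))        # ~1.07e9 copies from a single starting molecule
print(pcr_copies(1, 30, 0.8))   # ~4.6e7 copies at 80% per-cycle efficiency

The same arithmetic applies to a single contaminating modern cell, which is why even trace contamination can come to dominate a reaction.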
In dealing with the Neandertal specimen, the scientific team, led by Svante Pääbo (University of Munich), decided to search for mitochondrial DNA rather than nuclear DNA. Whereas there are only two copies of DNA in the nucleus of each cell, there are 500 to 1,000 copies of mtDNA in each cell. Hence, the possibility was far greater that some of the ancient mtDNA might be preserved. Further, because it has no repair enzymes, mtDNA accumulates mutations at about ten times the rate of nuclear DNA, making it, evolutionists believe, a more fine-grained index of time.
The most serious problem in analyzing ancient DNA is the possibility of contamination from modern DNA. This contamination could come from anyone who has ever handled the fossil since its discovery, from laboratory personnel, from laboratory equipment, and even from the heating and cooling system in the laboratory. Even a single cell of modern human contamination would have its DNA amplified blindly and preferentially by the PCR because of its superior state of preservation over the older material. The PCR technique is ‘notoriously contamination-sensitive’.35 The problem is so serious that some contamination from modern DNA is unavoidable. Ann Gibbons and Patricia Kahn express the problem:
‘Worst, it’s tough to distinguish DNA intrinsic to an ancient sample from the modern DNA that unavoidably contaminates it—the source of many false claims in the past. Ancient human samples are especially tricky, because their sequences might not differ much from that of contaminating modern human DNA, so it’s hard to get a believable result’.36
Since repeatability is at the heart of experimental science, many have suggested that what is needed is to retrieve DNA from a second Neandertal specimen in order to confirm the results of Svante Pääbo and his team. In fact, several other teams have tried unsuccessfully to retrieve Neandertal DNA. One attempt dealt with a Neandertal bone fragment from Shanidar, Iraq.37 Pääbo reports that he and his team have also attempted to retrieve DNA from Neandertal fossils from Zafarraya (Spain), Krapina (Croatia), and La Chaise (France), as well as from a Cro-Magnon fossil from Nerja (Spain), all without success. He suggests that the climate in these areas was too warm for DNA preservation. In contrast, the Neander Valley, Germany, is one of the northernmost Neandertal sites. It is just south of the limit of maximum glaciation during the late Pleistocene (Ice Age). Hence, that fossil was likely to have experienced cold conditions during most of its history. Pääbo states:
‘Therefore, preserved Neandertal DNA is likely to be rare, and the DNA in the type specimen [the 1856 Neander Valley Neandertal fossil] may result from its unique preservation conditions. … Most Neandertal specimens are therefore unlikely to contain amplifiable DNA. …’38
Whether or not genuine Neandertal mtDNA has been retrieved is impossible for an outside observer to say at this time. Knowing the unstable nature of the DNA molecule, if DNA was retrieved from that Neandertal fossil, it is strong evidence that the fossil is not nearly as old as evolutionists claim—30,000 to 100,000 years. From a scientific point of view, the fact that the recovery may never be duplicated on another specimen could add a degree of contingency to the results. As far as the recovery, itself, is concerned, it is possible that the mtDNA is genuine. However, the evolutionary interpretation of those mtDNA sequences—that the Neandertals are a separate species and are not closely related to modern humans—is not scientifically justified.
In the Cell article, Svante Pääbo and his associates explain their findings and their interpretation:
‘The Neandertal sequence was compared to 994 contemporary human mitochondrial lineages, i.e., distinct sequences occurring in one or more individuals, found in 478 Africans, 510 Europeans, 494 Asians, 167 Native Americans and 20 individuals from Australia and Oceania. Whereas these modern human sequences differ among themselves by an average of 8.0 ± 3.1 (range 1–24) substitutions, the difference between the humans and the Neandertal sequence is 27.2 ± 2.2 (range 22–36) substitutions. Thus, the largest difference observed between any two human sequences was two substitutions larger than the smallest difference between a human and the Neandertal’.
When the comparison was extended to 16 common chimpanzee lineages, the number of positions in common among the human and chimpanzee sequences was reduced to 333. This reduced the number of human lineages to 986. The average number of differences among humans is 8.0 ± 3.0 (range 1–24), that between humans and the Neandertal, 25.6 ± 2.2 (range 20–34), and that between humans and chimpanzees, 55.0 ± 3.0 (range 46–67). Thus, the average number of mtDNA sequence differences between modern humans and the Neandertal is about three times that among humans, but about half of that between modern humans and modern chimpanzees.
To estimate the time when the most recent ancestral sequence common to the Neandertal and modern human mtDNA sequences existed, we used an estimated divergence date between humans and chimpanzees of 4–5 million years ago and corrected the observed sequence differences for multiple substitutions at the same nucleotide site. This yielded a date of 550,000 to 690,000 years before present for the divergence of the Neandertal mtDNA and contemporary human mtDNAs. When the age of the modern human mtDNA ancestor is estimated using the same procedure, a date of 120,000 to 150,000 years is obtained, in agreement with previous estimates. Although these dates rely on the calibration point of the chimpanzee-human divergence and have errors of unknown magnitude associated with them, they indicate that the age of the common ancestor of the Neandertal sequence and modern human sequences is about four times greater than that of the common ancestor of modern human mtDNAs.39
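The logic of the date estimate in the passage above can be written schematically. This is a simplified sketch of molecular-clock calibration, not the authors' exact procedure, which also corrects the distances for multiple substitutions at the same site:

\[ \hat{t}_{NH} = \frac{d_{NH}}{d_{CH}} \times T_{CH} \]

where d_NH is the corrected Neandertal-human sequence distance, d_CH the corrected chimpanzee-human distance, and T_CH the assumed chimpanzee-human divergence time (4-5 million years). Because the estimate scales linearly with T_CH, any error in the calibration point propagates directly into the published dates, which is why the authors concede 'errors of unknown magnitude'.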
Flaws in the Neandertal mtDNA Interpretation
(1) The Problem of Statistical ‘Averages’
The Cell article points out that the sequence differences in modern human mtDNA range from one to 24 substitutions, with the average being eight substitutions. The mtDNA sequence differences between modern humans and the Neandertal fossil range from 22 to 36 substitutions, with the average being 27. Thus, the few modern humans who have the largest number of substitutions (24) have two more substitutions in their mtDNA than the smallest number (22) between modern humans and the Neandertal individual. In other words, there is a slight overlap. However, by comparing the modern human ‘average’ of eight substitutions and the Neandertal ‘average’ of 27 substitutions, the false impression is given that the Neandertal mtDNA variation is three times as great as that among modern humans. Using averages allows Kahn and Gibbons to write in Science:
‘These data put the Neandertal sequence outside the statistical range of modern human variation …’.40 (Emphasis added.)
Statistics has been used to cloud a relationship between Neandertals and modern humans. It is improper to use statistical ‘averages’ in a situation where many entities are being compared with only one entity. In this case, 994 sequences from 1669 modern humans are compared with one sequence from one Neandertal. Thus, there cannot be a Neandertal ‘average’, and the comparison is not valid. Although it may not be the intention, the result of such a comparison could not help but be deceptive. The biochemistry in the experiment is brilliant but the mathematics leaves much to be desired.
This inappropriate and deceptive use of averages has carried over into the popular press. Science writer Robert Kunzig, describing the Neandertal mtDNA results in the January 1998 issue of Discover, first states that the Neandertal individual ‘… differed at 27 positions, on average, from the modern human sequences … .’ He then goes on to say:
‘Among themselves the modern sequences differed by an average of only eight places. Picture a crowd of modern humans huddled around a campfire, with nobody more than eight yards from the centre; then the Neanderthal is 27 yards away, well outside the circle, in the shadows at the edge of the woods’.41
Kunzig’s illustration is misleading and totally inaccurate. Since we are dealing with modern human ‘averages’, only a few of the modern humans would be exactly eight yards (eight sequences) from the centre. Almost half of them would be less than eight yards from the centre, with a few of them just one yard (one sequence) from the fire. Instead of ‘nobody more than eight yards from the center’, the rest of them would be more than eight yards from the centre, with a few of them 24 yards (24 sequences) from the centre. Since there is only one Neandertal individual, he would have to be spread out from 22 to 36 yards from the centre. But instead of the large gap between modern humans and the Neandertal that Kunzig was trying to illustrate, there would actually be a slight overlap of the modern humans with the Neandertal.
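The overlap can be checked directly from the summary statistics reported in Cell. A minimal sketch, using only the published ranges rather than the raw sequences:

human_human_range = (1, 24)        # substitutions between pairs of modern humans
human_neandertal_range = (22, 36)  # substitutions, modern humans vs the Neandertal

# The most divergent modern human pairs (24) differ by more than the
# closest human-Neandertal comparison (22):
overlap = human_human_range[1] - human_neandertal_range[0]
print(overlap)  # 2 -> the two ranges overlap by two substitutions

Comparing the averages of 8 and 27 conceals exactly this overlap.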
Science writer Kate Wong, in the January 1998 Scientific American, states that the mtDNA variation between the Neandertal and modern humans was, on average, four times greater than that found between any two modern humans.42 The Cell authors said it was, on average, three times greater. Thus in the two most popular science magazines in the United States, mistakes were made in describing the interpretation of the Neandertal mtDNA. In both cases, the mistakes portrayed the genetic distance between modern humans and the Neandertal as being even greater than was originally reported in Cell. However, both mistakes were the result of the misleading and improper use of ‘averages’ by the Cell authors.
(2) The Problem of Species Distance
Based upon an improper use of statistical averages, the authors of the mtDNA Neandertal study arrive at a fallacious interpretation of the nature of the Neandertals by using mtDNA sequence differences as a measure of species differences. They write:
‘The Neandertal mtDNA sequence thus supports a scenario in which modern humans arose recently in Africa as a distinct species and replaced Neandertals with little or no interbreeding’.43
Modern humans have an average of eight mtDNA substitution differences. The Neandertal individual has a minimum of 22 mtDNA substitution differences from the modern human average. That implies that 14 mtDNA substitution differences delineate a new or different species, and that the Neandertals should be so classified. However, mtDNA substitution differences in modern humans range from 1 to 24. That means that there are a few modern humans who differ by 16 substitutions from the modern human average—two substitutions inside the range of the Neandertal individual. Would not logic demand that those few modern humans living today should also be placed in a separate species? To state the question is to reveal the absurdity of using such differences as a measure of species distinctions. Maryellen Ruvolo (Harvard University) points out that the genetic variation between the modern and Neandertal sequences is within the range of other single species of primates. She goes on to say: ‘… there isn’t a yardstick for genetic difference upon which you can define a species’.40
(3) The Problem of Evolutionary Time and Distance
Based upon their improper use of statistical averages, the authors of the mtDNA Neandertal study arrive at another fallacious conclusion from their experiment. They use mtDNA sequence differences as a measure of evolutionary time and distance. This is a universal practice in evolutionary studies. Hence, the Neandertals are placed in an evolutionary sequence between modern humans and chimpanzees. However, as we saw above, there are a few modern humans living today who have mtDNA substitutions inside the range of the Neandertal individual. Would not logic also demand that we say that there are a few humans living today who are less evolved than were the Neandertals, and who are more closely related to chimpanzees than were the Neandertals?
Australian biochemist John P. Marcus makes a significant observation about a graph in the Cell article. He writes:
‘This graph might lead one to think that Neandertal sequences are somewhere between modern human and chimp sequences. This could then give the impression that Neandertal is a link between chimps and humans. On closer examination, however, this is not the case. As labelled, the graph shows the number of differences between human-human, human-Neandertal, and human-chimp pairs. Significantly, the authors do not show the distribution of Neandertal-chimp differences. The reason they do not show this last of four possible comparisons between the populations is not clear to me. What is clear, however, from the DNA distance comparisons that I performed, is that the Neandertal sequence is actually further away from either of the two chimpanzee sequences than the modern human sequences are. My calculations show that every one of the human isolates that I used was “closer” to chimp than was the Neandertal. The fact that Neandertal and modern human sequences are approximately equidistant from the chimpanzee outgroup seems to be a good indication that Neandertal and modern humans comprise one species. Clearly, the Neandertal is no more related to chimps than any of the humans. If anything, Neandertal is less related to chimps’.44
(4) The Problem of the Molecular ‘Clock’
The basis of the interpretation that modern humans and the Neandertals are separate species is the unconditional acceptance, by evolutionists, of the concept of the molecular ‘clock’. Yet, the authors of the mtDNA Neandertal study admit (in the lengthy quotation cited above) that ‘… these dates rely on the calibration point of the chimpanzee-human divergence and have errors of unknown magnitude associated with them’. Their interpretation assumes the legitimacy of the molecular ‘clock’ as a means of determining the relationship of modern humans to chimpanzees and to Neandertals. G.A. Clark writes:
‘Molecular clock models are full of problematic assumptions. Leaving aside differences of opinion about the rate of base pair substitutions, how to calibrate a molecular clock, and whether or not mtDNA mutations are neutral, the fact that the Neandertal sequence … differs from those of modern humans does not resolve the question of whether or not “moderns” and “Neandertals” were different species’.14
Karl J. Niklas (Cornell University) refers to using mutation rate calibration to determine species relationships as: ‘… a research area that is at present characterized by too much speculation chasing too few data’.45
The most amazing development regarding the molecular ‘clock’ is the possibility that mtDNA may mutate much faster than has been estimated. A recent article in Science states that the ‘clock’ may be in error by as much as twenty-fold. Neil Howell (University of Texas Medical Branch, Galveston) says:
‘We've been treating this like a stop-watch, and I’m concerned that it’s as precise as a sun dial’.46
If the new rates hold up, the results for evolutionary time estimates, such as for ‘mitochondrial Eve’, could be startling. ‘Using the new clock, she would be a mere 6000 years old’.47
[Ed. note: see A shrinking date for Eve]
(5) The Problem of Using mtDNA to Determine Relationships
Evolutionists themselves are questioning the use of mtDNA as a proper method of determining relationships. Geneticist L. Luca Cavalli-Sforza (Stanford University) and his associates write: ‘… the mitochondrial genome represents only a small fraction of an individual’s genetic material and may not be representative of the whole’.48
After testing the assumptions involved in the use of mtDNA to determine primate relationships, D. Melnick and G. Hoelzer (Columbia University) state:
‘Our results suggest serious problems with use of mtDNA to estimate “true” population genetic structure, to date cladogenic events, and in some cases, to construct phylogenies’.49
Jonathan Marks (Yale University) emphasizes the subjectivity involved in using mtDNA to determine relationships. He comments:
‘Most analyses of mitochondrial DNA are so equivocal as to render a clear solution impossible, the preferred phylogeny relying critically on the choice of outgroup and clustering technique’.50
(6) The Possibility of PCR Copying Errors
PCR copying errors on oxygen-damaged residues in the Neandertal mtDNA could result in the Neandertal mtDNA appearing to be more distant from that of modern humans than it actually is. John Marcus sees evidence of this in his own study of the Cell report. He observes possible PCR-induced systematic errors due to a uniform oxidation of particular residues in particular sequence contexts. He explains:
‘When the nature of the differences between the modern human reference sequence and the Neandertal sequence was compared, it was noted that there were 27 differences. Twenty-four of these were transitions (G to A, and C to T) changes. Apparently it is easier for DNA polymerase to make this kind of substitution as it copies the template DNA. Since PCR also makes use of DNA polymerase to amplify the original template DNA, it is possible that the differences seen with the mtDNA from the Neandertal is actually a result of PCR induced errors. Some phenomena in the ancient DNA could actually cause a consistent misamplification of the DNA template present in the Neandertal bone. A possible example of this in the Cell paper can be seen in Figure 4 of the paper. At positions 107 and 108 as well as 111 and 112 there were a number of consistent variations that could be the result of bad copying by the DNA polymerase used. Tomas Lindahl, who writes a mini-review at the beginning of the Cell volume, comments on this. Is it not then possible that a somewhat uniform oxidative process might damage the DNA in such a way that the original information present in the Neandertal mtDNA would be reproducibly “copied” wrongly?’51
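Marcus's point turns on the distinction between transitions and transversions. The helper below is a hypothetical illustration of how such substitutions are classified; it is not code from the Cell paper.

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def is_transition(base1, base2):
    # A transition swaps purine for purine (A<->G) or pyrimidine for
    # pyrimidine (C<->T); any other substitution is a transversion.
    pair = {base1, base2}
    return pair <= PURINES or pair <= PYRIMIDINES

print(is_transition("G", "A"))  # True: the class of change damage and PCR favor
print(is_transition("G", "T"))  # False: a transversion

That 24 of the 27 reported differences fall in the transition class is what suggests, on Marcus's reading, a possible systematic copying artifact rather than genuine divergence.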
(7) The Problem of Philosophical Biases
Little attempt was made by Pääbo and his associates to hide their philosophical biases. These are:
A bias toward molecules over fossils;
A bias toward the more politically correct ‘Out of Africa’ model of modern human origins, which demands a separation of the Neandertals from anatomically modern humans.
(a) The bias toward molecules over fossils. Ever since the advent of molecular taxonomy, palaeontologists have been divided over which method is the better interpreter of evolutionary history. Molecules seem so neat and tidy, so precise and objective. Their use is based upon the unproven assumption that every organism’s evolutionary history is encoded in its genes. Fossils, on the other hand, seem so dirty and messy. Their interpretation is anything but objective. Palaeontologists have felt the sting of the charge that their discipline is ‘non-experimental’ and ‘resistant to falsification’.52 The newer fossil discoveries have not fulfilled their promise to clarify the picture of human origins. Instead, they have brought more confusion. Christopher B. Stringer explains the appeal of the molecules over the fossils:
‘The study of human origins seems to be a field in which each discovery raises the debate to a more sophisticated level of uncertainty. Such appears to be the effect of the Kenyan, Tanzanian, and Ethiopian [fossil] finds’.53
However, the search for objectivity is an elusive one. Although the molecular data appear to be very objective and precise, John Marcus states that the interpretation of the molecular data is just as subjective as is the interpretation of the fossils. Not only is the molecular evidence unfalsifiable, but
‘… the scientist must always choose which piece(s) of DNA he is going to use to do his comparisons. Very often a particular piece of DNA will not give the “right” answer and so it is dismissed as a poor indicator of the evolutionary process’.54
Kenneth A.R. Kennedy (Cornell University) comments:
‘This practice of forcing the paleontological and archaeological data to conform to the evolutionary and genetic models continues in reinterpretations of dates based upon the molecular clock of mitochondrial DNA as well as radiometric samples… .’55
The misinterpretation of the mtDNA data is seen in the work of Pääbo and his associates. We have earlier shown that the Neandertal fossil evidence contradicts their interpretation of the mtDNA evidence.
(b) The bias toward the more politically correct ‘Out of Africa’ model of modern human origins. The popularity of the ‘Out of Africa’ model is due, in part, to its being so politically correct.
Modern humans are said to have originated in Africa, a source of satisfaction to non-Western people who may feel that they have been exploited by Westerners.
The model emphasizes the unity of all humans despite differences in external appearance.
For many people it is an advantage to have the Neandertals removed from their ancestry. After all, who wants to be related to a Neandertal?
A woman, mitochondrial Eve, is the hero of the plot. We all owe our existence to her.
The sudden replacement of the Neandertals by modern humans favors the newer and more popular punctuated equilibrium evolution model.
We seem to be witnessing a classic struggle in palaeoanthropology between the molecules and the fossils. Some palaeoanthropologists themselves are bewildered at how rapidly their fellows have forsaken the fossils for the molecules. It is all the more surprising because the human fossil evidence clearly contradicts the ‘Out of Africa’ model. The European fossil evidence is against it, as we have shown in this paper. The Chinese fossil evidence is strongly against it, as Xinzhi Wu and Frank E. Poirier demonstrate.56 The Javanese and Australian fossils also witness against it.57 With the African fossils, the jury is still out. The reason is that the ‘Out of Africa’ model demands that the fossils fall within a certain time-frame. However, many of the fossils upon which the ‘Out of Africa’ model is based, such as the Border Cave fossils and the Klasies River Mouth Caves fossils, are very difficult to date.
Other possible interpretations of the data
The fossil record clearly supports a close relationship between the Neandertals and modern humans. However, the mtDNA data, if accurate, shows some differences between the two groups. For political reasons, these differences have been over-interpreted by Pääbo and his associates who claim that the Neandertals were a separate species from modern humans. Geneticist Simon Easteal (Australian National University) noting that chimpanzees, gorillas, and other primates have much more within-species mtDNA diversity than modern humans do, states: ‘The amount of diversity between Neanderthals and living humans is not exceptional’.42
Regarding these differences, there are a number of legitimate interpretations of the mtDNA Neandertal data that have been ignored by Pääbo and his associates. Some of these interpretations may be more likely than others, but all are possibilities.
That this particular Neandertal individual was from a small, isolated group. The Neander Valley of Germany is one of the northernmost Neandertal sites, close to the ice-age glaciers. Of the 345 Neandertal individuals discovered to date, only 14 are from Germany, and 12 of them were far to the south of this individual.
That the Neandertals did contribute to the modern gene pool, but that their sequences disappeared through random genetic loss, selection, or both. John Marcus feels that the human race had much greater mtDNA sequence variation in the past. Being genetically stronger, ancient humans were able to cope with greater genetic variation. Today, because of many more mutations, we are a weaker race. Perhaps greater mtDNA variation in this area was deleterious to health and stabilizing selective pressure has reduced the variation.58
That this particular Neandertal from whom sequences were derived was at one extreme end of a diverse spectrum in Neandertals that includes other more modern-like sequences. The recovery of mtDNA from other Neandertal individuals, if possible, may confirm whether or not this is true.
That while Neandertal mothers did not contribute mtDNA to the modern gene pool, Neandertal fathers may have contributed nuclear genes to the modern gene pool. Throughout history, warfare, conducted by men, has been characterized by the victimizing of conquered women. Hence, Neandertal men may have made ‘unsolicited contributions’ to the modern human gene pool. Further, most migrations in history have initially involved men.
That our ancestors underwent a population bottleneck that wiped out a great deal of the original genetic variation. Kahn and Gibbons write:
‘Living humans are strangely homogeneous genetically, presumably because … their ancestors underwent a population bottleneck that wiped out many variations’.59
Iceland illustrates an isolated population whose genetic homogeneity increased when it experienced two bottlenecks, one caused by bubonic plague and the other by famine.60
Future Neandertal mtDNA recovery
The PCR technique is, says Tomas Lindahl, ‘notoriously contamination-sensitive’. What is most needed is an independent test of ancient DNA authenticity. Researchers, including Pääbo, believe they might have devised such a technique, based upon the ratio of amino acid racemization to DNA depurination, to determine if a particular ancient specimen might still contain retrievable DNA. In testing this new method for DNA in ancient specimens, they write: ‘… we excluded human remains because of the inherent difficulty of recognizing contamination from contemporary humans’.61 In other words, it is much easier to recognize modern human DNA contamination in ancient non-human specimens than in ancient human specimens. It is obvious that much of the contaminating DNA would come from modern humans because modern humans are doing the research and handling the ancient DNA. The closer ancient human DNA sequences are to modern ones, the harder it is to tell if they are truly ancient or if they are just the result of modern human contamination.
The fossil evidence shows that the Neandertals were closely related to anatomically modern humans. Since the mtDNA evidence is being used to challenge that relationship, almost all observers recognise the need to obtain mtDNA from other Neandertal specimens. Robert DeSalle (American Museum of Natural History) states: ‘But it’s possible that you could see something quite different if you looked at DNA from another Neanderthal sample’.62 It is at this point that biochemist John P. Marcus sees a problem. He states: ‘Knowing the bias of evolutionists, it would not be surprising if, in the future, true Neandertal mtDNA sequences were rejected on account of their being too close to modern human ones and therefore suspected of arising from modern human mtDNA contamination’.63
Such concerns are justified since most evolutionists involved in mtDNA recovery favor the ‘Out of Africa’ model of human evolution which demands a separation of the Neandertals from anatomically modern humans. Hence, any future mtDNA evidence showing a close relationship of Neandertals to modern humans could be dismissed as contamination from modern human mtDNA and the results not reported. This would perpetuate the false idea that Neandertals and modern humans were not closely related.
The words of anthropologist Robert Foley (University of Cambridge), written about a book by geneticist Luigi Luca Cavalli-Sforza (Stanford University), sum up the work of Svante Pääbo and his team, which, in spite of brilliant biochemistry,
‘… shows plainly the futility of trying to interpret genes without knowing so much more—about selection and drift, about processes of cultural transmission, about history and geography, about fossils, about anthropology, about statistics’.64
After 140 years, the Neandertals are still having to fight for their reputation. If genuine mtDNA has been recovered from the fossil from the Neander Valley, the results have been misinterpreted—both in a statistical and cultural sense. However, within the context of the Biblical record of human history, this individual is likely to post-date the dispersion from Babel (Genesis 11:8,9). This being the case, we can conclude that he, like all human kind, was a direct descendant of one of the sons of Noah (Genesis 9:19, 10:32).
1. Krings, M., Stone, A., Schmitz, R.W., Krainitzki, H., Stoneking, M. and Pääbo, S., 1997. Neandertal DNA sequences and the origin of modern humans. Cell, 90:19–30.
2. Solecki, R.S., 1971. Shanidar: The First Flower People, Alfred A. Knopf, New York.
3. Diamond, J., 1989. The great leap forward. Discover, May 1989, pp. 50–60.
4. Hublin, J.-J., Spoor, F., Braun, M., Zonneveld, F. and Condemi, S., 1996. A late Neanderthal associated with Upper Palaeolithic artefacts. Nature, 381:224–226.
5. Discover, January 1997, p. 33.
6. Wong, K., 1997. Neanderthal notes. Scientific American 277(3):17–18.
7. Science News, November 23, 1996, p. 328.
8. Discover, January 1997, p. 33.
9. Solecki, Ref. 2, p. 69.
10. Corruccini, R.S., 1992. Metrical reconsideration of the Skhul IV and IX and Border Cave 1 crania in the context of modern human origins. American Journal of Physical Anthropology, 87(4):433–445.
11. Corruccini, Ref. 10, pp. 440–442.
12. Quam, R.M. and Smith, F.H., 1996. Reconsideration of the Tabun C2 ‘Neandertal’. American Journal of Physical Anthropology, Supplement 22, p. 192.
13. Minugh-Purvis, N. and Radovcic, J., 1991. Krapina A: Neandertal or not? American Journal of Physical Anthropology, Supplement 12, p. 132.
14. Clark, G.A., 1997. Neandertal genetics. Science, 277:1024.
15. Johanson, D. and Shreeve, J., 1989. Lucy’s Child, William Morrow and Company, New York, p. 49.
16. Ahern, J.C. and Smith, F.H., 1993. The transitional nature of the late Neandertal mandibles from Vindija Cave, Croatia. American Journal of Physical Anthropology, Supplement 16, p. 47.
17. Tattersall, I., Delson, E. and Van Couvering, J. (eds), 1988. Encyclopedia of Human Evolution and Prehistory, Garland Publishing, New York, p. 241.
18. Stringer, C. and Gamble, C., 1993. In Search of the Neanderthals, Thames and Hudson, Inc., New York, pp. 179–180.
19. Ref. 17, p. 56.
20. Oakley, K.P., Campbell, B.G. and Molleson, T.I. (eds), 1971. Catalogue of Fossil Hominids, Trustees of the British Museum (Natural History), London, Part II, p. 209.
21. Wolpoff, M. and Caspari, R., 1997. Race and Human Evolution, Simon and Schuster, New York, pp. 177, 182.
22. Boule, M. and Vallois, H.V., 1957. Fossil Men, The Dryden Press, New York, p. 281.
23. Smith, F.H., Falsetti, A.B. and Liston, M.A., 1989. Morphometric analysis of the Mladec postcranial remains. American Journal of Physical Anthropology, 78(2):305.
24. Wolpoff, M.H. and Jelinek, J., 1987. New discoveries and reconstructions of Upper Pleistocene hominids from the Mladec cave, Moravia, CSSR. American Journal of Physical Anthropology, 72(2):270–271.
25. Minugh, N.S., 1983. The Mladec 3 child: aspects of cranial ontogeny in early anatomically modern Europeans. American Journal of Physical Anthropology, 60(2):228.
26. Smith, F.S., 1976. A fossil hominid frontal from Velika Pecina (Croatia) and a consideration of Upper Pleistocene hominids from Yugoslavia. American Journal of Physical Anthropology, 44:130–131.
27. Ref. 20, Part II, p. 342.
28. Ref. 17, pp. 56, 87.
29. Klein, R.G., 1989. The Human Career: Human Biological and Cultural Origins, The University of Chicago Press, Chicago, pp. 236–237.
30. Lindahl, T., 1993. Instability and decay of the primary structure of DNA. Nature, 362:713.
31. Pääbo, S., 1993. Ancient DNA. Scientific American, November 1993, p. 92.
32. Gibbons, A., 1998. Ancient history. Discover, January 1998, p. 47.
33. Mullis, K.B., 1990. The unusual origin of the polymerase chain reaction. Scientific American, April 1990, p. 56.
34. Koshland, D. Jr. and Guyer, R.L., 1989. Perspective. Science, 22 December 1989, p. 1543. Cited by Rabinow, P., 1996. Making PCR, The University of Chicago Press, Chicago, pp. 5–6.
35. Lindahl, T., 1993. Recovery of antediluvian DNA. Nature, 365:700.
36. Kahn, P. and Gibbons, A., 1997. DNA from an extinct human. Science, 277:176–177.
37. Ross, P.E., 1992. Eloquent remains. Scientific American, May 1992, p. 116.
38. Pääbo, S., Cooper, A., Poinar, H.N., Radovcic, J., Debenath, A., Caparros, M., Barroso-Ruiz, C., Bertranpetit, J., Nielsen-Marsh, C., Hedges, R.E.M. and Sykes, B., 1997. Neandertal genetics. Science, 277:1021–1023.
39. Krings et al., Ref. 1, pp. 24–25.
40. Kahn and Gibbons, Ref. 36, p. 177.
41. Kunzig, R., 1998. Not our mom. Discover, January 1998, p. 33.
42. Wong, K., 1998. Ancestral quandary. Scientific American, January 1998, p. 32.
43. Krings et al., Ref. 1, p. 27.
44. Personal communication. Emphasis mine.
45. Niklas, K.J., 1990. Turning over an old leaf. Nature, 344:587.
46. Gibbons, A., 1998. Calibrating the mitochondrial clock. Science, 279:28.
47. Gibbons, Ref. 46, p. 29.
48. Mountain, J.L., Lin, A.A., Bowcock, A.M. and Cavalli-Sforza, L.L., 1993. Evolution of modern humans: evidence from nuclear DNA polymorphisms. In: The Origin of Modern Humans and the Impact of Chronometric Dating, M.J. Aitken, C.B. Stringer and P.A. Mellars (eds), Princeton University Press, Princeton, p. 69.
49. Melnick, D. and Hoelzer, G., 1992. What in the study of primate evolution is mtDNA good for? American Journal of Physical Anthropology, Supplement 14, p. 122.
50. Marks, J., 1992. Chromosomal evolution in primates. In: The Cambridge Encyclopedia of Human Evolution, S. Jones, R. Martin and D. Pilbeam (eds), Cambridge University Press, Cambridge, p. 302.
51. Personal communication.
52. Niklas, Ref. 45, p. 588.
53. Stringer, C.B., 1993. The legacy of Homo sapiens. Scientific American, May 1993, p. 138.
54. Personal communication.
55. Kennedy, K.A.R., 1992. Continuity or replacement: controversies in Homo sapiens evolution. American Journal of Physical Anthropology, 89(2):271–272.
56. Wu, X. and Poirier, F.E., 1995. Human Evolution in China, Oxford University Press, New York, p. 113.
57. Lubenow, M.L., 1992. Bones of Contention: A Creationist Assessment of the Human Fossils, Baker Book House, Grand Rapids, Michigan, pp. 131–133.
58. Personal communication.
59. Kahn and Gibbons, Ref. 36, p. 178.
60. Marshall, E., 1997. Tapping Iceland’s DNA. Science, 278(5338):566.
61. Poinar, H.N., Höss, M., Bada, J.L. and Pääbo, S., 1996. Amino acid racemization and the preservation of ancient DNA. Science, 272:864.
62. Rocky Mountain News, Denver, July 11, 1997.
63. Personal communication.
64. Foley, R., 1995. Talking genes. Nature, 377:493–494.
|
DNA, RNA or Both? With Answer Key! This paper-saving assignment allows for quick assessment of student understanding of DNA and RNA through compare and contrast. It comes in 10- and 15-question versions, with two assignments to a page.
This is a crossword puzzle that covers the topic of DNA and replication. NOTE: RNA, transcription and translation are not covered by this puzzle.
DNA, RNA, Protein Synthesis Crossword Puzzle. This is a crossword puzzle that covers the topic of DNA, RNA, and protein synthesis. This could be used as a review for a test, as a homework assignment, as a classwork assignment or as a quiz.
For years, the exact means by which DNA codes for the structures, chemicals, and behaviors of living things was a mystery. Scientists now know which mRNA codons match each amino acid. This means that if you know the sequence of DNA or mRNA, you can figure out the sequence of amino acids that makes up a protein.
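A minimal sketch of that codon lookup in Python (the abbreviated table below covers only the codons used in the example; a complete table has 64 entries):

CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    # Read the mRNA three bases (one codon) at a time until a stop codon.
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']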
The entire protein would actually require a much longer sequence of DNA bases.
Whenever a protein needs to be made, the correct DNA sequence for that protein is copied to a molecule called messenger RNA (mRNA).
DNA, RNA, Protein Synthesis Worksheet / Study Guide
In this lesson you learned how the DNA code works to create a sequence of amino acids. You also examined the effect of mutations on this code. Practice your learning below to get ready for the upcoming quiz.
|
Copulation in Frogs
The entry of the sperm into the egg is the basic requirement of sexual reproduction. The eggs are enveloped by liquid, viscous albumin. When this albumin comes in contact with water, it swells by absorbing it and becomes a very thick, transparent, colorless jelly. This jelly does not permit the entry of any sperm through it. Fertilization is external and takes place in water, so it is essential that the sperm be able to enter the egg before the jelly forms. For this purpose the sperm must remain in as close contact with the egg as possible. Copulation is essential in frogs in order to give the maximum possible number of eggs an opportunity for fertilization within a very short time. Frogs reproduce during the monsoon, in dim light. Usually when it is raining the male frogs collect near the bank of a pool or ditch. They produce a croaking sound with the help of their vocal sacs, which act as amplifiers. The croaking is a mating call, and the female frogs it attracts approach the males. The male frog rides over the female and embraces her, holding her firmly with its forelimbs and nuptial pads. The couple remains in this condition for two to three days; frogs take a long time to become sexually excited, as they are cold-blooded animals and lack copulatory organs. The male holds the female more tightly at the climax of this process, when the female discharges a large number of eggs into the water from her ovisac through the cloacal aperture. At the same moment the male discharges its sperm over the eggs as they fall into the water. The two animals then separate from each other.
Fertilization in Frogs
The process of copulation is immediately followed by the process of fertilization. Fertilization is external and takes place in water. Each egg falling into the water has a small conical protuberance at its equatorial plane. This conical protuberance is called the reception cone. The tip of the reception cone is thin-walled and projects out of the albuminous layer. Normally, each egg is surrounded by innumerable sperm. The first sperm that reaches the tip of the reception cone dissolves the tip with the help of an enzyme secreted by the acrosome on its head and makes its way into the egg. The head and middle piece of the sperm enter the egg, while the vibratile tail is cut off and remains outside the egg.
During fertilization a series of changes occur in the egg. The nucleus of the secondary oocyte divides into two nuclei.
(A) One of these nuclei remains in the egg cell. It is known as the female pronucleus. The other nucleus is pushed out through the egg cell membrane and settles as the second polar body, near the first polar body.
(B) The reception cone becomes flattened and closed. The cell content of the egg shrinks within the egg cell membrane and becomes fully separated from it. The egg membrane is now termed the fertilization membrane.
(C) The head of the sperm that has entered the egg changes shape, becoming a round male pronucleus.
(D) The male pronucleus moves towards the female pronucleus and finally fuses with it to form a diploid (2n) nucleus called the zygote nucleus. The melanin granules settle deeper from the surface exactly on the opposite side of the site of entry of the sperm into the egg. As a consequence, a lighter (grey) semilunar patch called the grey crescent is formed. An egg in which the grey crescent has formed is recognized as a fertilized egg, or zygote.
|
Respiratory system flashcards
What does the respiratory system consist of?
tubes that filter incoming air and transport it into the microscopic alveoli where gases are exchanged
The entire process of exchanging gases between the atmosphere and body cells is called __ and consists of the following: __(5)__
- 1) ventilation
- 2) gas exchange btw blood and lungs
- 3) gas transport in the bloodstream
- 4) gas exchange btw the blood and body cells
- 5) cellular respiration
The organs of the respiratory tract can be divided into two groups: the __ and the __.
- upper respiratory tract (nose, nasal cavity, sinuses, and pharynx)
- lower respiratory tract (larynx, trachea, bronchial tree, and lungs)
1) The nose, supported by __ and __, provides an entrance for air in which air is filtered by __ inside the __.
- bone; cartilage; coarse hairs; nostrils
1) The nasal cavity is a space __ to the nose that is divided __ by the __.
2) __ divide the cavity into passageways that are lined with __, and help increase the __ available to __ and __ incoming air.
3) Particles trapped in the __ are carried to the __ by __, swallowed and carried to the __ where __ destroys any microorganisms in the mucus.
1) posterior; medially; nasal septum
2) nasal conchae; mucous membrane; surface area; warm; filter
3) mucus; pharynx; ciliary action; stomach; gastric juice
1) __ are air-filled spaces within the __, __, __, and __ bones of the skull.
2) These spaces open to the __ and are lined with __ that is continuous with that lining the __.
3) What do the sinuses do?
1) sinuses; maxillary; sphenoid; ethmoid; frontal
2) nasal cavity; mucous membrane; nasal cavity
3) reduce the weight of the skull; serve as a resonant chamber to affect the quality of the voice
1) The pharynx is a common passageway for __ and __.
2) What does the pharynx aid in?
1) air/ food
2) in producing sounds for speech
1) The larynx is an __ in the airway superior to the __ and inferior to the __.
2) What does it help do?
3) The larynx is composed of a framework of __ and __ bound by __.
1) enlargement/ trachea; pharynx
2) keep particles from entering the trachea and also houses the vocal cords
3) muscles/ cartilage/ elastic tissue
4) Inside the larynx, two pairs of folds of muscles and connective tissue covered with __ make up the __.
a) The upper pair is the __.
b) The lower pair is the __.
c) Changing tension on the vocal cords controls __, while increasing the __ depends upon increasing the force of __ vibrating the vocal cords.
5) During swallowing, the __ and __ close off the __.
4) mucous membrane/ vocal cords
a) false vocal cords
b) true vocal cords
c) pitch/ loudness/ air
5) false vocal cords/ epiglottis/ glottis
1) The trachea extends downward anterior to the __ and into the __, where it splits into right and left __.
2) The inner wall of the trachea is lined with __ with many __ that serve to trap incoming particles.
3) The tracheal wall is supported by __.
1) esophagus/ thoracic cavity/ bronchi
2) ciliated mucous membrane/ goblet cells
3) 20 incomplete cartilaginous rings
1) The bronchial tree consists of branched tubes leading from the __ to the __.
2) The bronchial tree begins with the __, each leading to a __.
3) The branches of the bronchial tree from the __ are __; these further subdivide until __ give rise to __ which terminate in __.
4) It is through the thin __ of the __ that __ between the blood and air occurs.
1) trachea/ alveoli
2) two primary bronchi/ lung
3) trachea/ right and left primary bronchi/ bronchioles/ alveolar ducts/ alveoli
4) epithelial cells/ alveoli/ gas exchange
1) The right and left soft, spongy, cone-shaped lungs are separated __ by the __ and are enclosed by the __ and __.
2) The __ and __ enter each lung.
3) The __ is attached to the lung, and the __ lines the thoracic cavity; __ lubricates the __ between these two membranes.
4) The __ lung has __ lobes, the __ has __.
1) medially/ mediastinum/ diaphragm/ thoracic cage
2) bronchus/ large blood vessels
3) visceral pleura; parietal pleura; serous fluid; pleural cavity
4) right/ three/ left/ two
|
Art in ancient Egypt
Ancient Egyptian art has survived for over 5000 years and continues to fascinate people from all over the world. An ancient premise has become a modern reality: art is a path to eternal remembrance.
In ancient Egypt, art was magical. Whether in the form of painting, sculpture, carving or script, art had the power to maintain universal order and grant immortal life by appealing to various gods to act on behalf of people – both in life and in death.
Art was everywhere. From 4500 BCE Egypt’s symbolic art was an essential part of public buildings such as temples and palaces. Widely understood symbols formed the basis of this art as it was believed these offered protection from evil influences in this life and the next. It is no surprise then that art was also a crucial inclusion in the elaborate tombs that housed the mummified remains of people.
Tomb art was considered the point of contact between the land of the living and the land of the dead. If certain formulas for the creation of art were followed and the right gods supplicated, all Egyptians from the wealthy to the poor could look forward to completing their earthly life, successfully navigating the dangerous underworld and traversing to the blessed, eternal afterlife.
Egyptian tombs were like secret art galleries that were never meant to be viewed. Instead, these amazing examples of artistic craftsmanship spoke only to an elite group of visitors – the gods.
When representing human figures in a piece of tomb art, it was important to show as much of the body to the gods as possible. That is why both frontal and profile views of a body were integrated into one figure. It wasn’t meant to be naturalistic; it was intended to serve as a sign that stood for ‘human’. This method helped the gods recognise the person and also made the figure a recipient for ritual activity.
Kings were portrayed larger than life to symbolise the ruler’s god-like powers and therefore importance in the afterlife. Similarly, tomb owners, as the most important subject of the design in their tomb, were depicted on a grand scale. In contrast, wives and children, servants and animals were drawn smaller, indicating their lesser importance.
Colour was seen as a kind of universal language that was used to communicate significant meaning to the Egyptian gods. Certain colours were imbued with specific powers or attributes that were linked to various gods. As a result, great power could be contained within an object if it was made or painted in meaningful colours. For example, green and blue were the colours of plants, water and sky and symbolised fertility and prosperity. Gold was the colour of the sun and of the gods’ skin and was linked to immortality.
The most direct way that a tomb owner could communicate with the gods was through the elaborate Egyptian hieroglyph system. These pictograms performed a very specific function – to ensure certain gods were supplicated and rituals performed for all eternity. Hieroglyphs were written in both columns and rows, and could be read from either the left or the right, depending on the design of the text.
Deborah White, Editor
|
Slope-intercept form, y = mx + b, is a general formula for the equation of a line. It lets you read the slope and the y-intercept directly from the equation, which makes graphing a line straightforward and is quite useful for applications of linear equations.
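As a quick illustration (the numbers are invented for this example, not taken from the videos), here is how the slope and intercept read off the equation:

```latex
% General form: slope m and y-intercept b are visible at a glance.
y = mx + b
% Invented example:
y = \tfrac{2}{3}x - 4
% Here m = 2/3 (up 2 for every 3 to the right) and b = -4, so the line
% crosses the y-axis at (0, -4) and also passes through (3, -2).
```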
This video explains how to graph a linear equation given in slope intercept form.
This video provides an example of how to graph a line in slope intercept form with a positive fractional slope and a whole number y-intercept.
This video provides an example of how to graph a line in slope intercept form with a negative fractional slope and a negative y-intercept.
This video provides an example of how to graph a line in slope intercept form with a negative integer slope and a positive integer y-intercept.
Point-Slope Form of a Line
Point-slope form of a line is one method to write the equation of a line. Start with the slope formula, which is the difference between the y-values of two points divided by the difference between their x-values, and then rearrange the terms to obtain the point-slope form of a line. Point-slope form is useful because the slope of the line and a point it passes through can be read directly from the equation.
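A minimal sketch of that rearrangement, writing (x_1, y_1) for the known point and (x, y) for any other point on the line:

```latex
% Slope as a ratio of differences between two points:
m = \frac{y - y_1}{x - x_1}
% Multiply both sides by (x - x_1) to obtain point-slope form:
y - y_1 = m\,(x - x_1)
```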
This video provides two examples of how to graph a linear equation in point-slope form.
Standard Form of a Line
We will commonly see lines expressed in standard form, especially when we look at and write systems of linear equations. The standard form of a line puts the x and y terms on the left-hand side of the equation and makes the coefficient of the x-term positive. While standard form is common, we sometimes rewrite a line in slope-intercept form in order to graph it.
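The rewrite mentioned above takes one line of algebra. As a sketch (the example coefficients are invented), solving Ax + By = C for y recovers slope-intercept form whenever B is nonzero:

```latex
Ax + By = C \;\Longrightarrow\; y = -\tfrac{A}{B}\,x + \tfrac{C}{B} \qquad (B \neq 0)
% Invented example: 3x + 2y = 6 becomes y = -\tfrac{3}{2}x + 3,
% so the slope is -3/2 and the y-intercept is 3.
```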
This video explains how to write a linear equation in standard form, how to determine the equation of a line in standard form, and how to graph a line in standard form.
This video provides an example of how to graph a linear equation in standard form with fraction coefficients by solving for y and writing the equation in slope intercept form.
Applications of Linear Equations
We often see math applied to the real world through word problems, and applications of linear equations appear throughout our math courses after Algebra. To understand these applications, we need an understanding of slope, how to interpret a graph, and how to write an equation. In upper-level Algebra, we apply systems of linear equations to these problems as well.
Given a Linear Model, Interpret the Meaning of the Slope and Make Predictions
In this video, a linear equation is given in slope intercept form to model the descent of the plane. The meaning of the slope is discussed and then the equation is used to answer various questions.
Linear Equation Application (Cost of a Rental Car)
This video provides an example of how to determine how far you can drive a rental car with a specific amount of money to cover the fixed cost and mileage cost.
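The video's exact figures are not reproduced here, so the numbers below are assumed for illustration: suppose the rental charges a $40 fixed fee plus $0.25 per mile, and the budget is $100.

```latex
% Total cost must stay within the budget:
40 + 0.25x \le 100
% Subtract the fixed fee, then divide by the per-mile rate:
0.25x \le 60 \;\Longrightarrow\; x \le 240
% Under these assumed prices, the car can be driven at most 240 miles.
```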
|
Taken from Whitmore, K. F., & Goodman, K. S. (1996). Practicing what we teach: The principles that guide us. In Whitmore, K.F., & Goodman, K.S. (eds.), Whole language voices in teacher education. York, ME: Stenhouse.
- Language is the medium of communication, thought, and learning. It's central to whole language programs.
- Language is authentic when it serves real language purposes in real speech acts and literacy events.
- Language must be whole and functional to be comprehended and learned.
- Written language is language: a parallel semiotic system to oral language.
- Reading and writing are processes of making sense through written language.
- Making sense of print involves three language cue systems: graphophonic, syntactic, and semantic.
- Language learning is universal. All people can think symbolically and share a social need to communicate.
- Invention and convention are two forces that shape language development and concept development.
- Each learner invents language within the convention of the social language.
- Learning language, learning through language, and learning about language take place simultaneously (Halliday, undated).
- Written language is learned like oral language: in the context of its use.
- Learning is an ongoing process. It occurs over time, in a supportive, collaborative context, and is unique for each learner.
- Reflection is a central part of the learning process, and self-evaluation is a major part of the reflection process.
- What you know affects what you learn.
- There is a zone of proximal development (Vygotsky 1978) that develops in learners: the range of what they are capable of learning at any point in time.
- Learners must be trusted to assume responsibility for their own learning.
- Whole language teachers are curriculum makers; they initiate appropriate learning opportunities for their pupils and invite them to participate.
- Whole language teachers mediate learning; they do not intervene and take control of it.
- Whole language teachers are kidwatchers (Y. Goodman 1985); they know their students. Whole language teacher educators are teacher watchers. They also know their students.
- Teachers are sensitive, as kidwatchers, to learners' zones of proximal development and provide enough (but not too much) support and mediation.
- Teachers support learners' ownership over their own learning.
- Teachers must enable students to empower and liberate themselves.
- Teachers need to accept diversity and teach for it.
- Whole language teachers are advocates for their students.
- The whole language curriculum is whole in two senses: it is complete, and it is integrated.
- The whole language curriculum integrates all aspects of the curriculum and the whole student around themes and inquiries.
- The whole language curriculum is a dual curriculum: it builds thought and language at the same time that it builds knowledge and concepts.
- The curriculum starts with learners, building on who they are, what they know and believe, and where they are going.
- The curriculum reflects the culture and realities of the community.
- The whole language curriculum is broad enough to include the interests and needs of all learners and deep enough to support substantive learning at all levels.
- There are no artificial floors and ceilings in whole language. Learners may start where they are and go as far as their interests and needs take them.
- Whole language brings the outside world into the classroom by valuing and then relating learners' life experiences to class room learning experiences.
- Each whole language classroom invents itself as a learning community (Whitmore and Crowell 1994).
- A major aspect of education is being socialized into a community: joining the literacy club (Smith 1988).
- Whole language teachers value collaborative learning communities and consciously work to create a sense of shared involvement.
- Only in democratic classrooms can children learn to be citizens in a democracy. College classrooms and staff development programs need to be democratic, too.
Goodman, Y.M. 1985. "Kidwatching: Observing Children in the Classroom." In Observing the Language Learner (pp. 9-18). A. Jaggar and M.T. Smith-Burke, eds. Newark, DE and Urbana, IL: Co-published by International Reading Association and National Council of Teachers of English.
Halliday, M.A.K. undated. "Three Aspects of Children's Language Development: Learning Language, Learning Through Language, Learning About Language." In Language Research: Impact on Educational Settings. G.S. Pinnell and M. Matlin Haussler, eds. Unpublished manuscript.
Smith, F. 1988. Joining the Literacy Club. Portsmouth, NH: Heinemann.
Vygotsky, L.S. 1978. Mind and Society. Cambridge, MA: Harvard University Press.
Whitmore, K.F., and C.G. Crowell. 1994. Inventing a Classroom: Life in a Bilingual, Whole Language Learning Community. York, ME: Stenhouse.
|
By the end of this section, you will be able to:
- Identify the uses of quotes.
- Correctly use quotes in sentences.
Quotation marks (“ ”) set off a group of words from the rest of the text. Use quotation marks to indicate direct quotations of another person’s words or to indicate a title. Quotation marks always appear in pairs.
A direct quotation is an exact account of what someone said or wrote. To include a direct quotation in your writing, enclose the words in quotation marks. An indirect quotation is a restatement of what someone said or wrote. An indirect quotation does not use the person’s exact words. You do not need to use quotation marks for indirect quotations.
Direct quotation: Carly said, “I’m not ever going back there again.”
Indirect quotation: Carly said that she would never go back there.
WRITING AT WORK
Most word processing software is designed to catch errors in grammar, spelling, and punctuation. While this can be a useful tool, it is better to be well acquainted with the rules of punctuation than to leave the thinking to the computer. Properly punctuated writing will convey your meaning clearly. Consider the subtle shifts in meaning in the following sentences:
- The client said he thought our manuscript was garbage.
- The client said, “He thought our manuscript was garbage.”
The first sentence reads as an indirect quote in which the client does not like the manuscript. But did he actually use the word “garbage”? (This would be alarming!) Or has the speaker paraphrased (and exaggerated) the client’s words?
The second sentence reads as a direct quote from the client. But who is “he” in this sentence? Is it a third party?
Word processing software would not catch this because the sentences are not grammatically incorrect. However, the meanings of the sentences are not the same. Understanding punctuation will help you write what you mean, and in this case, could save a lot of confusion around the office!
PUNCTUATING DIRECT QUOTATIONS
Quotation marks show readers another person’s exact words. Often, you will want to identify who is speaking. You can do this at the beginning, middle, or end of the quote. Notice the use of commas and capitalized words.
Beginning: Madison said, “Let’s stop at the farmers market to buy some fresh vegetables for dinner.”
Middle: “Let’s stop at the farmers market,” Madison said, “to buy some fresh vegetables for dinner.”
End: “Let’s stop at the farmers market to buy some fresh vegetables for dinner,” Madison said.
Speaker not identified: “Let’s stop at the farmers market to buy some fresh vegetables for dinner.”
Always capitalize the first letter of a quote even if it is not the beginning of the sentence. When using identifying words in the middle of the quote, the beginning of the second part of the quote does not need to be capitalized.
Use commas between identifying words and quotes. Quotation marks must be placed after commas and periods. Place quotation marks after question marks and exclamation points only if the question or exclamation is part of the quoted text.
Question is part of quoted text: The new employee asked, “When is lunch?”
Question is not part of quoted text: Did you hear her say you were “the next Picasso”?
Exclamation is part of quoted text: My supervisor beamed, “Thanks for all of your hard work!”
Exclamation is not part of quoted text: He said I “single-handedly saved the company thousands of dollars”!
QUOTATIONS WITHIN QUOTATIONS
Use single quotation marks (‘ ’) to show a quotation within a quotation.
Theresa said, “I wanted to take my dog to the festival, but the man at the gate said, ‘No dogs allowed.’”
“When you say, ‘I can’t help it,’ what exactly does that mean?”
“The instructions say, ‘Tighten the screws one at a time.’”
TITLES
Use quotation marks around titles of short works of writing, such as essays, songs, poems, short stories, and chapters in books. Usually, titles of longer works, such as books, magazines, albums, newspapers, and novels, are italicized.
“Annabel Lee” is one of my favorite romantic poems.
The New York Times has been in publication since 1851.
WRITING AT WORK
In many businesses, the difference between exact wording and a paraphrase is extremely important. For legal purposes, or for the purposes of doing a job correctly, it can be important to know exactly what the client, customer, or supervisor said. Sometimes, important details can be lost when instructions are paraphrased. Use quotes to indicate exact words where needed, and let your coworkers know the source of the quotation (client, customer, peer, etc.).
- Use quotation marks to enclose direct quotes and titles of short works.
- Use single quotation marks to enclose a quote within a quote.
- Do not use any quotation marks for indirect quotations.
1. Copy the following sentences onto your own sheet of paper, and correct them by adding quotation marks where necessary. If the sentence does not need any quotation marks, write OK.
- Yasmin said, I don’t feel like cooking. Let’s go out to eat.
- Where should we go? said Russell.
- Yasmin said it didn’t matter to her.
- I know, said Russell, let’s go to the Two Roads Juice Bar.
- Perfect! said Yasmin.
- Did you know that the name of the Juice Bar is a reference to a poem? asked Russell.
- I didn’t! exclaimed Yasmin. Which poem?
- The Road Not Taken, by Robert Frost Russell explained.
- Oh! said Yasmin, Is that the one that starts with the line, Two roads diverged in a yellow wood?
- That’s the one said Russell.
|
Brief Summary of the Convention on International Trade in Endangered Species (CITES)
David Favre (2002)
CITES is a mature international treaty (a treaty is an agreement between countries) which, as of the Fall of 2002, has over 150 countries as members. [List of Party States] The purpose of the treaty is to control the international movement of wild plants and animals, alive or dead, whole or parts thereof ("specimens" of species), in such a manner as to ensure that the pressures of international trade do not contribute to the endangerment of the listed species. Thus, live birds such as the Imperial Eagle, the shells of sea turtles, elephant ivory and wild orchids are all controlled by this treaty when the items or specimens move from one country to another. International trade, lawful and illegal, in wildlife and plants is at the level of billions of dollars per year. Illegal trade in such items as bear gallbladders, elephant ivory tusks and other products is a significant issue around the world.
CITES has exercised control over the fur of big cats such as leopards, the skins of crocodiles, trade in live birds such as the Hyacinth Macaw, whales and primates.
CITES does not provide protection for the habitat of species or how a species is used within a country, it only controls international movement of the species. If a hunter kills a listed species in Canada and wishes to return to the US then a CITES permit will have to be obtained. Likewise, if a zoo in New York wishes to import an orangutan, a CITES permit will have to be obtained. A permit will not be given if the removal of the animal would be detrimental to the species in the wild, that is, if the removal would impose additional risk on the survival of the species.
If a species is in danger of extinction, then the treaty will impose a ban on the commercial trade of the listed species. These species are listed on Appendix I of the treaty (many countries consider this the endangered species list). If a species raises population-level concerns but is not yet in danger of extinction, then the species is listed on Appendix II, where commercial trade is allowed. Examples of Appendix II species include the North American black bear, the golden eagle, and many orchid species. (Many countries would consider this a list of threatened species.)
Representatives of countries that have signed the treaty meet every two or so years in order to carry out their responsibilities under the treaty. This event is referred to as a Conference of the Parties. In November of 2002 there will be a meeting in Santiago, Chile. There have been eleven of these meetings since 1975. [List of meetings - date and places] At these meetings it is decided which animals and plants should be listed on or removed from Appendix I or Appendix II of the treaty.
An Introduction to the Nature of Treaties
David S. Favre (2002)
The term "treaty" can have at least two connotations in international law. In the narrower, traditional sense, a "treaty" is the title found at the top of a number of important international agreements. In the broader sense of the word, it refers to an entire class of international agreements which may or may not have the word "treaty" in its title. In general, the term is usually used in the broader sense of the word. Thus, under the heading of treaties will be found such documents as Conventions, Agreements, Declarations, Protocols or Acts.
Consider the following definitions of the word "treaty":
(1) Lord McNair, Law of Treaties:
[A] written agreement by which two or more States or international organizations create or intend to create a relationship between themselves operating within the sphere of international law. (Lord McNair, Law of Treaties 4 (Clarendon Press, Oxford) reissued 1986.)
(2) Vienna Convention on the Law of Treaties:
"[T]reaty" means an international agreement concluded between States in written form and governed by international law, whether embodied in a single instrument or in two or more related instruments and whatever its particular designation. (Article 2. U.N. Doc. A/Conf. 39/27, (1969), 63 A.J.I.L. 875 (1969), 8 I.L.M. 679 (1969).
This treaty was drafted with the goal of codifying existing international practice. It entered into force on Jan. 27, 1980. The U.S. has not become a Party to this treaty.
Oral statements, even those between heads of States, do not create international treaties. In effect, there is a customary Statute of Frauds in international law. This custom or tradition is founded on the same policy reasons that support the Common Law and statutory statements of the Statute of Frauds. Agreements in writing provide certainty as to language and objective evidence of the subjective intent of the parties involved.
B. International Capacity and States
Only certain entities are recognized by the international community as having the legal capacity to create a treaty. Traditionally, only sovereign States can enter into treaties. The Vienna Convention, in this tradition, limits the scope of its provisions to agreements between States. Lord McNair's more liberal position allows agreements with or between international organizations to also be referred to as treaties.
A full discussion of what constitutes a "State" is beyond the scope of this article. Briefly, while geographic areas such as the United States, United Kingdom, Brazil, or India are clearly States, there are a number of problems that can arise in deciding Statehood. Neither geographic size, physical isolation nor number of citizens will determine an area's Statehood status. Luxembourg, consisting of 998 sq. miles (2,586 km²) and a population of 380,000 is one of the oldest States in Europe. Greenland, with an area of 840,000 sq. miles (2,175,600 km²) and population of 58,000 is a province of Denmark, and has never been recognized as a State.
People of a particular geographic area often have multiple layers of political organizations seeking to govern them. In the United States a group of people may be part of a village, town or city, as well as the county, state and national levels of government. These people can also be considered part of the United Nations and other international institutions. But any one geographic area can be represented in the international legal arena by only one level of government, referred to as a "State." The determination that a particular level of government constitutes a State is the critical step. But who decides which political organization is a State? There is no court or other authority which can decide the issue. States are recognized as States only by other States. While during several periods of human history States have gone through periods of territorial expansion, the 1990s witnessed a period of territorial break-up. Political organizations that were previously subparts of a State now seek and obtain the status of State. Thus, Lithuania broke off from the USSR. The disintegration of Yugoslavia into a group of States is another example of the process at work.
The wishes of the people within the boundaries of a territory are not necessarily sufficient to transform them into a State. Consider the case of Taiwan. Taiwan has had military self-control and independent foreign relations since World War II. It asserts itself as a State, and has all the normal trappings of a sovereign State. Yet most States consider it to be a part of the State of China, rather than a separate State. Taiwan is thus denied its self-asserted status as a State within the international community. For example, it has not been allowed to become a party to the treaty protecting endangered species, the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), even though Taiwan has significant trade in wildlife. In this treaty, as in many, the term "State" is used without any attempt to define what constitutes a State.
Another example is Hong Kong, a territory denied the status of State. Hong Kong had been a part of the British Protectorate for over ninety years. London, under the provisions of a long-term lease, has controlled its foreign affairs and provided its military protection. Without regard to the wishes of the citizens of Hong Kong, the British Government signed a treaty transferring the territory back to China as of 1997.
A unique problem in this area is that of Antarctica. This continent, having no permanent or historic human population, contains no States. Historically, a number of States have asserted claims over portions of its territory, but these claims have been held in abeyance by the Antarctic Treaty.
Sometimes political subparts of a State have state-like attributes but are not recognized as having international legal capacity. Within the United States, Michigan, Iowa and all the other states can be referred to as sovereign states in domestic jurisprudential analysis. Yet, if the state of Michigan were to sign a Great Lakes Clean Up Agreement with the Province of Ontario, the agreement could not be called a treaty, as neither party has the international capacity to create treaties. Under the U.S. Constitution, Article 1, the power to enter into international obligations has been delegated to the federal government of the United States. (Article 1, Section 10 - "No State shall enter into any Treaty, Alliance or Confederation...") Likewise, Indian tribes within the United States are sometimes referred to and treated as sovereign States under U.S. law. (Under the federal Clean Air Act, both states and Indian tribes are given the power to set the level of air quality within their lands for purposes of the Prevention of Significant Deterioration provisions, 42 U.S.C. § 7474.) Yet, they would not be considered to have the international capacity to enter into treaties. Within Canada, where the provinces have more autonomy of action than states within the United States, the federal government nevertheless is recognized as possessing the treaty-making power for all of that territory. (Because of the historical legal development of Canada, its federal Constitution is silent on the issue of authority to engage in treaty making. See Peter Hogg, Constitutional Law of Canada, ch. 11 Treaties (2d ed. 1985).)
Some organizations other than States are considered to have international legal capacity, the best example being those organizations which are composed of States. Thus, the United Nations and its associated organizations such as the United Nations Environment Programme have treaty capacity. Less clear is the status of economic or military organizations created by States. Does NATO, itself created by treaty, have the legal capacity to enter into treaties? Most likely not. What about the European Economic Community (EEC)? Perhaps yes, for under the provisions of the treaty creating it, the EEC has been delegated authority to act on behalf of its member States within limited areas, such as economic trade. Thus, the EEC, having the power to control the movement of trade between its member States, has sought the status of a voting Party within the treaty which deals with plant and wildlife trade (CITES). As the EEC is not within the traditional definition of a sovereign State, an amendment to the treaty has been proposed, but not yet adopted, which allows economic unions such as the EEC to have voting status within the treaty structure. The Biological Diversity Treaty of 1992 (31 I.L.M. 818 (1992)) contains a specific provision which will allow the EEC to have voting status under the treaty. Article 33 allows that the Convention shall be open for signature by "all States and any regional economic integration organization." While the term State is not defined, the latter phrase of "regional economic integration organization" is defined as "an organization constituted by sovereign States of a given region, to which its member States have transferred competence in respect of matters governed by this Convention and which has been duly authorized, in accordance with its internal procedures, to sign, ratify, accept, approve or accede to it." (Article 2) Thus, the Treaty does not consider the EEC a State, but does acknowledge its international legal status.
International private organizations, whether they are profit seeking, such as Exxon, or nonprofit, such as Greenpeace, do not normally have international legal standing. While a State may enter into agreements with private organizations, such agreements are more in the nature of contracts rather than treaties.
1. The Forest Service of the Department of Interior of the U.S. government meets with its counterpart in Brazil several times, ultimately producing a document which outlines a tropical forest management plan, a timber export control plan for Brazil, and an old growth forestry management plan for the United States. It is signed by both department heads. The United States then refuses to implement the plan. Brazil claims a violation of treaty obligations and seeks remedies in the International Court of Justice. What initial defenses would the U.S. raise?
2. At the annual meeting of the G7 governments (the "Economic Summit") a final declaration is normally issued by the States present. Recently, portions of these declarations have spoken of a duty to preserve the environment. Do these declarations give rise to international obligations or individual rights?
3. You are a U.S. lawyer representing the Hughmongus Oil Co. The President of the Corporation gives you a call and tells you that he just heard that the U.S. has signed a treaty whose goal is the reduction of greenhouse gas emissions by 50% over the next decade. He wants to know what he should tell his plant managers to do. What advice do you give?
4. Did the World Charter of Nature or the Rio Declaration create any legal environmental rights for U.S. citizens?
C. Creation of a Treaty
Any and all environmental topics or concerns have been, or may be made, the subject of an international agreement. There are no specific limitations on the subject matter of treaties. However, the concern of this chapter is not the substance of treaties, but their creation, form and interpretation.
As might be expected, the road to a formal treaty usually starts with much informal discussion: discussions between States, between private interest groups, scientists and others. States, through their foreign affairs offices (i.e., the U.S. State Department), may initiate discussion with any party they choose. Often the initial discussions may be done by a nongovernmental organization (NGO) until some preliminary consensus is found. A State then sponsors the final phase of the creation process. In the case of CITES, the International Union for the Conservation of Nature and Natural Resources (IUCN) was the non-state party that did the initial drafting and consultation with States. The IUCN also prepared the draft of the Polar Bear Agreement discussed below. After a number of draft documents, the United States agreed to sponsor the CITES treaty by holding a formal negotiation session to which all interested States were invited. Thus, in 1973, the United States invited several dozen States to send their official representative to a plenipotentiary conference in Washington D.C. to decide on the final language of the proposed treaty.
At these formal negotiation conferences, State representatives arrive with their authority expressed in writing. This is in the nature of an expressed agency. Several hundred years ago, such representatives might also have had the power to bind the State to a treaty without having to return home. Today, the presence of a representative is not an assurance that a State will ultimately become bound by the treaty drafted at a Conference, only that the State is interested in the drafting process.
Under standard international rules of negotiations, a draft treaty will be finalized when it is adopted by a two-thirds vote of the States at the plenipotentiary conference or, if negotiated in another manner, upon agreement of all parties. (See Article 9, Vienna Convention on the Law of Treaties.) Once the text is finalized, no further compromises in the language of the proposed treaty can be made. At this point the representatives may sign the treaty. This signature is not binding on the State, but represents general support for the treaty and attests to the fact that the signed document is the final, negotiated text.
After the final text is adopted, each State eligible to become a Party State under its terms can begin its review process for deciding whether to ratify the treaty. If the decision is yes, then an instrument of ratification is drafted, signed by the State and delivered to the specified Depositary Government. There is no universal form for this instrument of assent.
Each treaty sets forth those steps necessary to bring the treaty into full legal effect. When all of the steps have been fulfilled, the treaty "comes into force". With bilateral treaties, the ratification of both parties are necessary before the treaty can come into force. With multilateral treaties, a minimum number of parties must agree to be bound by the treaty's provisions. In the case of CITES, Article XXII states:
1. The present Convention shall enter into force 90 days after the date of deposit of the tenth instrument of ratification, acceptance, approval or accession, with the Depository Government.
The Convention on Long-Range Transboundary Air Pollution (1979) required 24 instruments of ratification, acceptance, or approval.
The Vienna Convention on the Law of Treaties, Article 2 states:
(b) "ratification", "acceptance", "approval" and "accession" mean in each case the international act so named whereby a State establishes on the international plane its consent to be bound by a treaty;
"Ratification" is a positive expression of a State's willingness to be bound by a treaty that it helped negotiate. The terms "acceptance" or "approval" are slightly less formal terms, but ultimately mean the same thing as ratification. If, under a State's domestic law, there is more than one way by which the government can decide to be bound, by using the phrase "ratification, acceptance or approval", the treaty is allowing the State to choose whichever method it deems appropriate.
The time period involved in this process is normally one of years. The Polar Bear Agreement, set out below, was signed on November 15, 1973; it went into force May 26, 1976 after three of the signatories had deposited instruments of ratification. It was not until January 25, 1978, over four years from the signing of the agreement, that all five of the signatories had ratified it.
Under the Constitution of the United States, only the President is authorized to negotiate treaties. However, as part of the checks and balances of the U.S. system, the President alone cannot make a treaty binding on the United States. He must have the "advice and consent" of two-thirds of the U.S. Senate to ratify a proposed treaty. In the United Kingdom, the decision of whether to ratify a treaty rests with the Crown, with the "advice" of the appropriate government Minister. (The advice is more in the nature of a directive, as the Crown always accepts the "advice".) In Canada, the power to ratify a treaty is solely within the executive branch, although, on some important issues, the executive branch seeks support in the form of legislative resolutions.
While ratification normally refers to the acts of those States which were signatories to the treaty at the end of the drafting process, accession refers to an act by a State which wishes to become a party to a treaty it did not help negotiate. This was very important to CITES, as only a limited number of States were part of the drafting process; yet, to be effective, CITES needed to be adopted by as many countries as possible. Thus a specific provision for accession is found in Article XXI. There is no time limitation on when a State may become a Party: "the present Convention shall be open indefinitely for accession." As with ratification, the process of accession is internal to each State. The end result of the process is an instrument of accession which declares that the State wishes to join the treaty and agrees to be bound by its provisions. Since 1976 over 150 countries have become Party States to CITES, most by accession. Normally, once a State is a party to a treaty, whether by ratification or accession, the obligations and rights are identical.
As discussed above, when considering the adoption of a treaty, a State may not unilaterally change the language of the treaty. However, there is a method by which a State can either reject some provisions of a treaty or give its own interpretation of the language of the treaty. This is referred to as a "reservation". The Vienna Convention defines the term as follows:
(d) "reservation" means a unilateral statement, however phrased or named, made by a State, when signing, ratifying, accepting, approving or acceding to a treaty, whereby it purports to exclude or to modify the legal effect of certain provisions of the treaty in their application to that State;
The specific provisions of each treaty must be examined to determine whether, or to what degree, reservations are allowed. The extent to which reservations are to be allowed is one of the significant points of negotiation at a plenipotentiary conference. A general reservation is one which might deal with any responsibility or duty under the treaty. Article 18 of the Montreal Protocol on Substances that Deplete the Ozone Layer (1987) states succinctly, "No reservations may be made to this Protocol". Article XXIII of CITES does not allow any general reservations. However, CITES does allow specific reservations as to species listed for protection. When Japan submitted its instrument of ratification, it took reservations on the species of sea turtles protected under CITES. Thus, even though most countries were bound by the treaty limitation that imported sea turtle parts could not be used for commercial purposes, Japan could and did make commercial use of sea turtle products. Because of the formal reservations, Japan was not considered in violation of its international legal obligations. (In 1991, after considerable international pressure, Japan announced its intention to withdraw its reservations relating to sea turtles after existing stock is consumed.)
The depository government is the official keeper of the documents. This obligation does not result in the granting of any executive or legislative powers under a treaty. The depository government must be one that is respected and trusted by all Parties to carry out certain fundamental functions. In Article XX of CITES, the Government of the Swiss Confederation is designated as the Depositary Government. It is the keeper of official documents with duties both to keep documents and provide official notice to the Parties of certain events. The determination of which government shall be the depository government is a point of negotiation at the plenipotentiary conference.
Articles XX and XXI of CITES require that a State's instrument of ratification or accession be deposited with the depository government. Acceptance of the instrument triggers the obligation of compliance with the provisions of CITES. Additionally, under the provisions of Article XXV(2), the Swiss government is under a duty to notify other Party States of the receipt of instruments of ratification, acceptance, approval or accession to the treaty itself or any amendments thereto. It is also from the Swiss government that official word is received as to the taking and withdrawal of reservations, and notification of denunciation. The depository government has the obligation to determine the legal adequacy of any document presented to it by a State.
D. Analysis of a Modest Treaty
The short treaty on conservation of polar bears provides a working example of the structure and operation of a treaty. A treaty usually contains the following elements:
1. Statement of policy and purpose - precatory language.
2. Substantive provisions - action language - that which the States agree to do or not to do.
3. Implementation requirements - what States agree to do within their domestic law system.
4. Internal procedure - the provisions for how the Party States will make future decisions, including the modification of the treaty. This can also include provisions for dispute resolution and enforcement provisions for treaty violations.
5. Creation procedure - those steps necessary to bring the treaty into force and any limitation on who may be a Party State.
Review the provisions of the Polar Bear treaty and decide which article performs which function.
Agreement on the Conservation of Polar Bears, 1973
27 UST 3918, TIAS 8409
The Governments of Canada, Denmark, Norway, the Union of Soviet Socialist Republics, and the United States of America,
Recognizing the special responsibilities and special interests of the States of the Arctic Region in relation to the protection of the fauna and flora of the Arctic Region;
Recognizing that the polar bear is a significant resource of the Arctic Region which requires additional protection;
Having Decided that such protection should be achieved through co-ordinated national measures taken by the States of the Arctic Region;
Desiring to take immediate action to bring further conservation and management measures into effect;
Have Agreed as follows:
Article I
1. The taking of polar bears shall be prohibited except as provided in Article III.
2. For the purposes of this Agreement, the term "taking" includes hunting, killing and capturing.
Article II
Each Contracting Party shall take appropriate action to protect the ecosystems of which polar bears are a part, with special attention to habitat components such as denning and feeding sites and migration patterns, and shall manage polar bear populations in accordance with sound conservation practices based on the best available scientific data.
Article III
1. Subject to the provisions of Articles II and IV, any Contracting Party may allow the taking of polar bears when such taking is carried out:
a) for bona fide scientific purposes; or
b) by that Party for conservation purposes; or
c) to prevent serious disturbance of the management of other living resources, subject to forfeiture to that Party of the skins and other items of value resulting from such taking; or
d) by local people using traditional methods in the exercise of their traditional rights and in accordance with the laws of that Party; or
e) wherever polar bears have or might have been subject to taking by traditional means by its nationals.
2. The skins and other items of value resulting from taking under sub-paragraphs (b) and (c) of paragraph 1 of this Article shall not be available for commercial purposes.
Article IV
The use of aircraft and large motorized vessels for the purpose of taking polar bears shall be prohibited, except where the application of such prohibition would be inconsistent with domestic laws.
Article V
A Contracting Party shall prohibit the exportation from, the importation and delivery into, and traffic within, its territory of polar bears or any part or product thereof taken in violation of this Agreement.
Article VI
1. Each Contracting Party shall enact and enforce such legislation and other measures as may be necessary for the purpose of giving effect to this Agreement.
2. Nothing in this Agreement shall prevent a Contracting Party from maintaining or amending existing legislation or other measures or establishing new measures on the taking of polar bears so as to provide more stringent controls than those required under the provisions of this Agreement.
Article VII
The Contracting Parties shall conduct national research programmes on polar bears, particularly research relating to the conservation and management of the species. They shall as appropriate co-ordinate such research with research carried out by other Parties, consult with other Parties on the management of migrating polar bear populations, and exchange information on research and management programmes, research results and data on bears taken.
Article VIII
Each Contracting Party shall take action as appropriate to promote compliance with the provisions of this Agreement by nationals of States not party to this Agreement.
Article IX
The Contracting Parties shall continue to consult with one another with the object of giving further protection to polar bears.
Article X
1. This Agreement shall be open for signature at Oslo by the Governments of Canada, Denmark, Norway, the Union of Soviet Socialist Republics and the United States of America until 31 March 1974.
2. This Agreement shall be subject to ratification or approval by the signatory Governments. Instruments of ratification or approval shall be deposited with the Government of Norway as soon as possible.
3. This Agreement shall be open for accession by the Governments referred to in paragraph 1 of this Article. Instruments of accession shall be deposited with the Depositary Government.
4. This Agreement shall enter into force ninety days after the deposit of the third instrument of ratification, approval or accession. Thereafter, it shall enter into force for a signatory or acceding Government on the date of deposit of its instrument of ratification, approval or accession.
5. This Agreement shall remain in force initially for a period of five years from its date of entry into force, and unless any Contracting Party during that period requests the termination of the Agreement at the end of that period, it shall continue in force thereafter.
6. On the request addressed to the Depositary Government by any of the Governments referred to in paragraph 1 of this Article, consultations shall be conducted with a view to convening a meeting of representatives of the five Governments to consider the revision or amendment of this Agreement.
7. Any Party may denounce this Agreement by written notification to the Depositary Government at any time after five years from the date of entry into force of this Agreement. The denunciation shall take effect twelve months after the Depositary Government has received the notification.
8. The Depositary Government shall notify the Governments referred to in paragraph 1 of this Article of the deposit of instruments of ratification, approval or accession, of the entry into force of this Agreement and of the receipt of notifications of denunciation and any other communications from a Contracting Party specifically provided for in this Agreement.
9. The original of this Agreement shall be deposited with the Government of Norway which shall deliver certified copies thereof to each of the Governments referred to in paragraph 1 of this Article.
10. The Depositary Government shall transmit certified copies of this Agreement to the Secretary-General of the United Nations for registration and publication in accordance with Article 102 of the Charter of the United Nations.
In Witness Whereof the undersigned, being duly authorized by their Governments, have signed this Agreement.
Done at Oslo, in the English and Russian languages, each text being equally authentic, this fifteenth day of November, 1973.
1. Precatory Language
The "whereas" and "recognizing" clauses set the stage for the treaty by stating, or hinting at, why the treaty is being drafted. In may be full of code words which have to be deciphered. Each word has been carefully chosen. States with often differing interests and perspectives have approved the language. While the language may provide a context for interpretation of subsequent provisions, the language does not create any binding obligation on the Parties. For the Polar Bear Agreement, consider the following:
1. Why the limited number of States? What special responsibilities and interests do these States have? The polar bear is referred to as a "significant resource". Are polar bears the equal of oil fields? Why that phrase? What does this suggest about the drafters' attitudes?
2. The precatory language states that the polar bear "requires additional protection" but it does not say from what. List several reasons why the bear needs additional protection. Why is an international treaty required for this protection? The fourth paragraph suggests the nature of the action will be "through co-ordinated national measures." What does this tell us? What other approaches have implicitly been rejected?
3. In the fifth paragraph, what does the phrase "conservation and management measures" mean? "Immediate" suggests time pressure. Is there really an urgency about the problem?
4. Why did the drafters entitle this document an "Agreement"? Is it not a treaty?
2. Definitions
While the Polar Bear Treaty does not contain any definitions, many treaties follow the precatory language with a set of definitions. Sometimes this is done to provide clarity to an ambiguous term. In the Vienna Convention for the Protection of the Ozone Layer (1985) the key term "ozone layer" is defined as "the layer of atmospheric ozone above the planetary boundary layer," thus clarifying a point of science. The key operative phrase of the Ozone Convention requires the State to take measures that are necessary to protect against "adverse effects". This phrase is, in turn, defined as:
changes in the physical environment or biota, including changes in climate, which have significant deleterious effects on human health or on the composition, resilience and productivity of natural and managed ecosystems, or on materials useful to mankind.
Obviously, this definition gives focus to the phrase but, from a legal point of view, it is still vague and subject to various interpretations. This was the result intended by the drafters.
Often political compromises can be found in the definition section. In the CITES treaty, "species" is defined as "any species, subspecies or geographically separated population". This is not a biological definition. Instead, it establishes a fundamental policy point negotiated by the parties. With this definition, the Party States will be able to list and protect population segments of a species even when the worldwide status of the species would not justify action under the treaty.
3. Action Language
Consider Articles I and II of the Polar Bear Agreement. Article I is a prohibition, while Article II requires affirmative action. It is typical of a well drafted treaty that the critical obligation of the treaty be set out in the initial substantive articles. The starting point of this treaty is the language of Article I (1): "The taking of polar bears shall be prohibited except as provided in Article III." This gives the first sense of the scope or jurisdiction of the treaty. It deals only with the polar bear species, and seeks to control the taking thereof.
In part (2) of Article I there is a clarification of the term "taking". Why would the parties need to do this? Is this an all-inclusive definition? What else might have been said? Is this a minimum or a maximum? Domestic U.S. legislation has a much more extensive list of terms. (Section 3(19) of the Endangered Species Act defines a "taking" as "to harass, harm, pursue, hunt, shoot, wound, kill, trap, or collect, or to attempt to engage in any such conduct.") What if a camera crew of ten people chases a polar bear for a week in order to obtain movies? Is this within the jurisdiction of the treaty?
What is required of the Party States under Article II? By what standard will the actions of the Parties be judged? How might a Party State carry out its promise under Article II? In Articles I and II what does the use of the word "shall" denote?
Consider the exceptions to the prohibition of Article I as set out in Article III. Is the scope of the exceptions clear? Are they capable of being monitored so that they are not abused? Do any of the exceptions represent special interests (constituent parts of the States in question) seeking to be protected? Why do you think Article IV was included?
4. Implementation Requirements
After having established the prohibited actions or required actions, there will usually be a series of articles which set out the scope of the duty of each Party State to domestically implement international obligations. Domestic law is normally required in order to impose binding law on the individual within each State. Examine Articles V - IX of the Polar Bear Treaty. Presume you are requested to draft a memo to the President of the United States setting out what legislative action is required to carry out our international obligations under the treaty. What would you include? What if the President said alternatively, a) "this is not important to me, do as little as possible" or b) "I want to satisfy the environmentalists on this one - we should go the extra mile." What might be the different results? (The polar bear is within the protection of the Marine Mammal Protection Act, 16 U.S.C. §§ 1361-1407.)
A very important aspect of implementation is the question of what will happen under a treaty, and/or international law generally, if a State does not carry out its treaty obligations. What is the "downside risk" to the United States if it decides to do nothing under the Polar Bear Treaty? What interests - legal, political, economic, environmental or social - support implementation and enforcement of treaties?
5. Internal Procedures
The treaty next needs to be examined to determine what the parties contemplated concerning the administration of the treaty's provisions after the treaty comes into effect. The provisions of Articles I - VI are unilateral in nature. No further meetings or discussions between the States are necessary. Article VII contemplates co-ordination of research, which supports and allows contacts between scientists (a significant point of agreement between the US and the USSR during the height of the Cold War).
There is no provision requiring regular meetings of the Parties. Article X (6) does provide a mechanism for the calling of a meeting by any of the Party States. The CITES treaty, however, was drafted with the expectation that the Party States would need to meet on a regular basis once the treaty came into effect. Article XI (1) of CITES requires that a meeting of the Parties be called within two years of the date of its entry into force and every two years thereafter.
Another major issue in the drafting of a treaty is whether to create a structure to help with the administration and implementation of the treaty. The drafters of the Polar Bear Treaty decided not to create any administrative structure for the treaty. On the other hand, the drafters of CITES created the office of the Secretariat, headed by an individual referred to as the Secretary General of the Treaty. This office is not equal to the executive branch of government with independent powers. The power of decision making is in those States which are Parties to the treaty, with the Secretariat being the support staff for the Party States. The role of the Secretariat is usually that of gathering and distributing information, coordinating research projects, and setting up the meetings of the Party States of the treaty. The Secretariat does not have any power to force compliance from States that do not fulfill their obligations under a treaty.
6. Procedural Provisions
The final articles of a treaty will set out the necessary requirements to bring the treaty into force. These articles will also contain the procedures for modification of the treaty and withdrawal from the treaty. Consider the final Article of the Polar Bear Treaty. What is necessary for the treaty to come into force?
In this final section of the treaty, there is usually a provision which states the official languages of the treaty. While French was historically the language of diplomacy and, therefore, of treaties, today English is the usual language of multilateral negotiations. However, pride and international status often result in the translation of the treaty into different languages with official recognition of each version. This always creates the risk of a treaty having slightly different meanings depending on the language being used. The Biological Diversity Treaty lists Arabic, Chinese, English, French, Russian and Spanish as equally authentic texts. (Article 42.)
Questions and Problems
1. Under the provisions of the treaty, would a hunter from Texas be allowed to kill a polar bear in Canada and return to Texas (1) with a personal sport trophy, or (2) for commercial resale of the skin and head?
2. Do the provisions of this treaty have any impact on the plans of the U.S. government to allow oil drilling in the upper portions of Alaska? Could the Sierra Club or Defenders of Wildlife sue the federal government to stop oil drilling in Alaska on the basis of violation of the Polar Bear Agreement?
3. What might be entailed in a "sound conservation practice" (Article II)?
4. How do the first and last portions of Article IV fit together?
5. What would be the obligations of a Party State under the treaty if government scientists reported that the most critical threat to the polar bear was the industrial toxins found in the bears' favorite food: seal blubber?
E. Framework for Analysis
The following questions are presented to provide an analytical framework for understanding any treaty.
1. Who drafted the treaty?
2. Who may become a member?
3. What is the purpose or policy of the treaty?
4. What interests outside the jurisdiction of the treaty may run counter to the purpose of the treaty? Which States would be expected to support or oppose the treaty?
5. Within the provisions of the treaty, which State actions are jointly taken and which are unilateral?
6. Are there any unilateral decision points where conflicts between the member states might be expected (tension points)? How are they to be resolved?
7. Is there a mechanism available for enforcing the requirements of the treaty for States not meeting their responsibilities under the treaty? How does it work?
8. Does a Party State have the option of exempting itself from any of the provisions of the treaty? Why is this allowed? What impact does it have on the overall effectiveness of the treaty?
9. What external problems may limit the ability of the treaty to obtain its goal?
10. Are the legal requirements of the treaty sufficient to accomplish the goal of the treaty?
|
Coastal areas are among the most dynamic, productive environments in the world, but they have been extensively developed for human uses. Future conditions in coastal riverscapes (the mosaic of aquatic and riparian habitats from the headwaters along the river to the estuary) are inherently complex. Current climate projections predict sea-level rise; alterations in the type, timing, and intensity of precipitation; and increases in water temperature. Diverse estuarine habitats are used for varying lengths of time by juvenile salmonids of different life histories. Sea-level rise may flood currently productive salt-marsh habitats, with limited potential for these habitats to shift upstream or into floodplains due to human development. Land managers and citizens lack the spatially explicit data needed to incorporate the effects of climate change and sea-level rise into planning for habitat improvement projects in estuarine areas. Scientists developed simple models using Light Detection and Ranging, or LiDAR, to characterize the geomorphologies of multiple Oregon estuaries. They mapped the margin of current mean high tide, along with contour intervals associated with different potential sea-level increases. They found that some estuaries had increased potential for complex edge habitat for rearing juveniles, whereas others showed a marked decrease. This research can be used to integrate current science into land use management decisions that have broad implications for the future. The scientists plan to work with local watershed groups and the state of Oregon to expand the use of mapping tools to enhance planning for estuarine habitat restoration, recovery, and protection.
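To make the mapping idea concrete, here is a minimal sketch (not the researchers' actual models) of how contour-based inundation flagging can work on a LiDAR-derived elevation grid. The array values, tide level, and rise scenarios below are hypothetical, and a real analysis would use georeferenced rasters rather than a toy array.

    # Flag elevation-grid cells that fall at or below mean high tide
    # under several hypothetical sea-level-rise scenarios.
    import numpy as np

    def inundation_masks(dem_m, mean_high_tide_m, rise_scenarios_m):
        """Map each sea-level-rise scenario (meters) to a boolean mask of
        DEM cells at or below the corresponding tide line."""
        return {rise: dem_m <= (mean_high_tide_m + rise) for rise in rise_scenarios_m}

    # Hypothetical elevation grid (meters above datum) for an estuary margin.
    dem = np.array([
        [0.8, 1.2, 2.5],
        [1.1, 1.9, 3.0],
        [2.2, 2.8, 4.1],
    ])

    masks = inundation_masks(dem, mean_high_tide_m=1.0, rise_scenarios_m=[0.5, 1.0, 1.5])
    for rise, mask in masks.items():
        print(f"+{rise} m rise: {mask.sum()} of {mask.size} cells at or below tide line")

Counting the flagged cells per scenario gives a first-order comparison between estuaries: one whose low-lying margin grows with rising seas may gain complex edge habitat, while one hemmed in by development will not.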
|
Deficiency of vitamin D can, over a period of months, cause rickets in children and osteomalacia in adults – a skeletal demineralization affecting especially the spine, pelvis, and lower extremities. Signs and symptoms of osteomalacia are burning in the mouth and throat, nervousness, diarrhea, and insomnia. Vitamin D is a fat-soluble vitamin, and while some is supplied by the diet, most of it is made in the body. To make vitamin D, cholesterol is necessary. Once cholesterol is available in the body, a slight alteration in the cholesterol molecule occurs, with one change taking place in the skin. This alteration requires ultraviolet light (sunlight). Vitamin D deficiency, as well as rickets and osteomalacia, tends to occur in persons who do not get enough sunlight and who fail to eat foods that are rich in vitamin D.
Once consumed or made within the body, vitamin D is further altered to produce a hormone called 1,25-dihydroxy-vitamin D (1,25-diOH-D). The conversion of vitamin D to 1,25-diOH-D does not occur in the skin, but in the liver and kidneys. First, vitamin D is converted to 25-OH-D in the liver; it then enters the bloodstream, where it is taken up by the kidneys. There it is converted to 1,25-diOH-D. The manufacture of 1,25-diOH-D therefore requires the participation of several organs of the body – the liver, kidneys, and skin.
The purpose of 1,25-diOH-D in the body is to keep the concentration of calcium at a constant level in the bloodstream. The maintenance of calcium at a constant level is absolutely required for human life to exist, since dissolved calcium is required for nerves and muscles to work. One of the ways in which 1,25-diOH-D accomplishes this mission is by stimulating the absorption of dietary calcium by the intestines.
Most foods contain little or no vitamin D. As a result, sunshine is often a deciding factor in whether vitamin D deficiency occurs. Although fortified milk and fortified infant formula contain high levels of vitamin D, human breast milk is rather low in the vitamin.
Rickets continues to be a problem in Africans and Asian Indians who migrate to Canada or Great Britain, especially where these immigrants do not consume fortified products.
Bone growth occurs through the creation of new cartilage, a soft substance at the ends of bones. When the mineral calcium phosphate is deposited onto the cartilage, a hard structure is created. In vitamin D deficiency, though, calcium is not available to create hardened bone, and the result is soft bone. Other symptoms of rickets include particular bony bumps on the ribs called rachitic rosary (beadlike prominences at the junction of the ribs with their cartilages) and knock-knees. Seizures may also occasionally occur in a child with rickets, because of reduced levels of dissolved calcium in the bloodstream.
Although osteomalacia is rare in the United States, symptoms of this disease include reduced bone strength, an increase in bone fractures, and sometimes bone pain, muscle weakness, and a waddling walk.
Rickets is diagnosed by X-ray examination of leg bones. A distinct pattern of irregularities, abnormalities, and a coarse appearance can be clearly seen with rickets. Osteomalacia is also diagnosed through X-ray examination. Measurements of blood plasma 25-OH-D, blood plasma calcium, and blood plasma parathyroid hormone must also be obtained for the diagnosis of these diseases. Parathyroid hormone and 1,25-diOH-D work together in the body to regulate the levels of calcium in the blood.
Rickets may also occur with calcium deficiency, even when a child is regularly exposed to sunshine. This type of rickets has been found in various parts of Africa. The bone deformities are similar to, or are the same as, those that occur in typical rickets; however, calcium deficiency rickets is treated by increasing the amount of calcium in the diet. No amount of vitamin D can cure the rickets of a child with a diet that is extremely low in calcium. For this reason, it is recommended that calcium be given in conjunction with vitamin D supplementation.
Food fortification has almost completely eliminated rickets in the United States. Vitamin D deficiency can be prevented by acquiring the RDA through consuming fortified products or taking supplements in the form of pills. In some older people, a 400 IU supplement may not be enough to result in the normal absorption of calcium; therefore, doses of 10,000 IU per day may be needed. For infants who are fed only breast milk (and rarely exposed to sunshine), a daily supplement of 200-300 IU is recommended.
The sequence of events that can lead to vitamin D deficiency, and then bone disease, is as follows: a lack of vitamin D in the body creates an inability to manufacture 1,25-diOH-D, which results in decreased absorption of dietary calcium and increased loss of calcium in the feces. When this happens, the bones are affected. Vitamin D deficiency results in a lack of bone mineralization (calcification) in growing persons, or in an increased demineralization (decalcification) of bone in adults.
British Medical Journal, January 2010: Those with a higher level of vitamin D in their blood are less likely to develop bowel cancer than those with low levels. A study has concluded that those with the highest levels of the vitamin were at 40% lower risk of developing the disease compared with those with the lowest levels. Researchers at the International Agency for Research on Cancer (IARC) in Lyon, France, and Imperial College London looked at vitamin D quantities in 1,248 people with bowel cancer and 1,248 controls in the largest ever study of the subject.
The body's main source of vitamin D is sunlight, but higher latitudes mean less available sunlight – especially during the winter. At most latitudes in the United States, little or no vitamin D is made in the skin in the late fall (autumn) and early winter. In the most northern regions, the vitamin D blackout lasts for about six months. As a result, it has been estimated that up to 70% of Americans (and Europeans) may be deficient in vitamin D. Only in the last several hundred years have urbanization, industrialization, glass (UVB does not penetrate glass), extensive clothing (UVB does not penetrate clothes) and sunblock greatly lowered levels.
Even if one is taking vitamin D supplements, recommendations keep going up, and it is difficult to take enough – 1,000 IU, for example, may be what is needed. Ideally, sunlight exposure should be one's source.
An Alabama researcher found that lack of enough sunshine exposure may increase risk of hypertension in blacks and other dark-skinned people. Those with greater amounts of pigment in the skin require six times the amount of ultraviolet B (UVB) light to produce the same amount of vitamin D3 found in lighter-skinned people.
Rickets heals promptly with 4,000 IU of oral vitamin D per day administered for approximately one month. During this treatment, the doctor should monitor the levels of 25-OH-D in the plasma to make certain they are raised to a normal value. The bone abnormalities (visible by X-ray) generally disappear gradually over a period of 3-9 months. Parents are instructed to take their infants outdoors for approximately 20 minutes per day with their faces exposed. Children should also be encouraged to play outside.
Osteomalacia is treated by eating 2,500 IU per day of vitamin D for about three months. Measurements of 25-OH-D, calcium, and parathyroid hormone should be obtained after the treatment period to make sure the therapy did, in fact, result in normal blood values.
People should aim to get 10 to 15 minutes of exposure to direct sunlight each day when the weather allows, without sunscreen, to allow adequate synthesis of vitamin D. Most people achieve this simply by going about their daily activities. Those living at higher latitudes (further from the equator) should supplement their diets to ensure they are getting enough vitamin D, particularly during winter. A lack of sun during the winter months means that many people are deficient in this vitamin by December each year.
In the spring and summer, light-skinned adults can make large amounts (20,000 IU) by sunbathing on both sides, without sunblock, for a few minutes (about one-third the time it takes for the skin to begin to slightly redden). Darker-skinned persons need five to 10 times longer depending on the amount of melanin pigment in the skin.
Vitamin D production occurs within minutes and is maximized long before the skin turns red or begins to tan. One does not have to get repeated blood tests when using sun exposure to obtain vitamin D. Toxicity cannot occur even with heavy and continuous sunbathing, because ultraviolet light begins to degrade vitamin D after about 20,000 IU has been made, producing a steady state.
|
Whenever chemical reactions take place, energy is involved. That's because energy is always transferred as chemical bonds are broken and formed.
Some reactions transfer energy from the reacting chemicals to the surroundings. We call these exothermic reactions. The energy transferred from the reacting chemicals often heats up the surroundings. This means that we can measure a rise in temperature as the reaction happens.
Some reactions transfer energy from the surroundings to the reacting chemicals. We call these endothermic reactions. Because they take energy from their surroundings, these reactions cause a drop in temperature as they happen.
Q. What do we call a reaction that gives out heat?
A. An exothermic reaction.
Q. What do we call a chemical reaction that absorbs heat from its surroundings?
A. An endothermic reaction.
Fuels burning are an obvious example of exothermic reactions, but there are others which we often meet in the chemistry lab.
Neutralisation reactions between acid and alkali are exothermic. We can easily measure the rise in temperature using simple apparatus.
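Calorimetry makes this concrete. As a hedged illustration (the readings below are invented, not from any particular experiment), the heat released can be estimated from the temperature rise with the standard relation q = m x c x ΔT:

    # Estimate the heat released by a neutralisation from a measured
    # temperature rise, using q = m * c * dT.
    def heat_released_joules(mass_g, specific_heat, delta_t):
        """q = m * c * dT; specific_heat in J/(g*K), delta_t in K."""
        return mass_g * specific_heat * delta_t

    # Example: 50 mL acid + 50 mL alkali, treated as ~100 g of water,
    # warming from 21.0 C to 27.5 C.
    q = heat_released_joules(100.0, 4.18, 27.5 - 21.0)
    print(f"Approximate energy released: {q:.0f} J ({q/1000:.2f} kJ)")

This prints about 2717 J (2.72 kJ), the sort of value a simple school calorimeter experiment yields.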
Similarly, heat is released…
|
Discovery Expands Search for Earthlike Planets
Newly spotted frozen world orbits in a binary star system
Published on July 03, 2014
Image credit: Cheongho Han, Chungbuk National University, Republic of Korea
COLUMBUS, Ohio—A newly discovered planet in a binary star system located 3,000 light-years from Earth is expanding astronomers’ notions of where Earth-like—and even potentially habitable—planets can form, and how to find them.
At twice the mass of Earth, the planet orbits one of the stars in the binary system at almost exactly the same distance from which Earth orbits the sun. However, because the planet’s host star is much dimmer than the sun, the planet is much colder than the Earth—a little colder, in fact, than Jupiter’s icy moon Europa.
The study provides the first evidence that terrestrial planets can form in orbits similar to Earth’s, even in a binary star system where the stars are not very far apart. Although this planet itself is too cold to be habitable, the same planet orbiting a sun-like star in such a binary system would be in the so-called “habitable zone”—the region where conditions might be right for life.
“This greatly expands the potential locations to discover habitable planets in the future,” said Scott Gaudi, professor of astronomy at Ohio State. “Half the stars in the galaxy are in binary systems. We had no idea if Earth-like planets in Earth-like orbits could even form in these systems.”
Very rarely, the gravity of a star focuses the light from a more distant star and magnifies it like a lens. Even more rarely, the signature of a planet appears within that magnified light signal. The technique astronomers use to find such planets is called gravitational microlensing, and computer modeling of these events is complicated enough when only one star and its planet are acting as the lens, much less two stars.
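For the simplest case - a single lens star and no planet - the magnification follows a standard textbook formula, A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)), where u is the lens-source separation in units of the Einstein radius. The sketch below evaluates it; this is a single-lens illustration only, not the two-star modeling described above.

    import math

    def magnification(u):
        """Point-source, point-lens microlensing magnification for a
        lens-source separation u, in Einstein radii."""
        return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

    # Magnification rises sharply as the source track approaches the lens.
    for u in [1.0, 0.5, 0.1, 0.01]:
        print(f"u = {u:5.2f} -> A = {magnification(u):8.2f}")

At u = 1 the magnification is about 1.34, and it grows roughly as 1/u for small separations, which is why close alignments produce such dramatic brightening.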
When the astronomers succeeded in detecting this new planet, they were able to document that it produced two separate signatures—the primary one, which they typically use to detect planets, and a secondary one that had previously been only hypothesized to exist.

Searching for planets within binary systems is tricky for most techniques, because the light from the second star complicates the interpretation of the data. "But in gravitational microlensing," explained Ohio State astronomer Andrew Gould, "we don't even look at the light from the star-planet system. We just observe how its gravity affects light from a more distant, unrelated star. This gives us a new tool to search for planets in binary star systems."
The first was a brief dimming of light, as the planet’s gravity disrupted one of the magnified images of the source star. But the second effect was an overall distortion of the light signal.
“Even if we hadn’t seen the initial signature of the planet, we could still have detected it from the distortion alone,” Gould said, pointing to a graph of the light signal. “The effect is not obvious. You can't see it by eye, but the signal is unmistakable in the computer modeling.”
Gaudi explained the implications.
“Now we know that with gravitational microlensing, it’s actually possible to infer the existence of a planet—and to know its mass, and its distance from a star—without directly detecting the dimming due to the planet,” he said. “We thought we could do that in principle, but now that we have empirical evidence, we can use this method to find planets in the future.”
The nature of these distortions is still somewhat of a mystery, he admitted.
“We don't have an intuitive understanding of why it works. We have some idea, but at this point, I think it would be fair to say that it’s at the frontier of our theoretical work.”
The planet, called OGLE-2013-BLG-0341LBb, first appeared as a “dip” in the line tracing the brightness data taken by the OGLE (Optical Gravitational Lensing Experiment) telescope on April 11, 2013. The planet briefly disrupted one of the images formed by the star it orbits as the system crossed in front of a much more distant star 20,000 light-years away in the constellation Sagittarius.
“Before the dip, this was just another microlensing event,” Gould said. It was one of approximately 2,000 discovered every year by OGLE, with its new large-format camera that monitors 100 million stars many times per night searching for such events.
“It’s really the new OGLE-IV survey that made this discovery possible,” he added. “They got a half dozen measurements of that dip and really nailed it.” From the form of the dip, whose “wings” were traced out in MOA (Microlensing Observations in Astrophysics) data, they could see that the source was headed directly toward the central star.
Then, for two weeks, astronomers watched the magnified light continue to brighten from telescopes in Chile, New Zealand, Israel and Australia. The teams included OGLE, MOA, MicroFUN (the Microlensing Follow Up Network), and the Wise Observatory.
Even then, they still didn’t know that the planet’s host star had another companion—a second star locked into orbit with it. But because they were already paying close attention to the signal, the astronomers noticed when the binary companion unexpectedly caused a huge eruption of light called a caustic crossing.
By the time they realized that the lens was not one star, but two, they had captured a considerable amount of data—and made a surprising discovery: the distortion.
Weeks after all signs of the planet had faded, the light from the binary-lens caustic crossing became distorted, as if there were a kind of echo of the original planet signal.
Intensive computer analysis by professor Cheongho Han at Chungbuk National University in Korea revealed that the distortion contained information about the planet—its mass, separation from its star, and orientation—and that information matched perfectly with what astronomers saw during their direct observation of the dip due to the planet. So the same information can be captured from the distortion alone.
This detailed analysis showed that the planet is twice the mass of Earth, and orbits its star from an Earth-like distance, around 90 million miles. But its star is 400 times dimmer than our sun, so the planet is very cold—around 60 Kelvin (-352 degrees Fahrenheit or -213 Celsius), which makes it a little colder than Jupiter’s moon Europa. The second star in the star system is only as far from the first star as Saturn is from our sun. But this binary companion, too, is very dim.
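The quoted figures are easy to sanity-check. A short, hedged snippet (conversion constants only; no data beyond what the article states):

    # Check the unit conversions quoted above.
    def kelvin_to_celsius(k):
        return k - 273.15

    def kelvin_to_fahrenheit(k):
        return k * 9.0 / 5.0 - 459.67

    MILES_PER_AU = 92_955_807.3  # one astronomical unit, in miles

    print(f"60 K = {kelvin_to_celsius(60):.0f} C = {kelvin_to_fahrenheit(60):.0f} F")
    print(f"90 million miles = {90e6 / MILES_PER_AU:.2f} AU")

This confirms 60 K is about -213 C (-352 F), and that 90 million miles is roughly 0.97 AU - almost exactly Earth's distance from the sun.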
Still, binary star systems composed of dim stars like these are the most common type of star system in our galaxy, the astronomers said. So this discovery suggests that there may be many more terrestrial planets out there—some possibly warmer, and possibly harboring life.
Three other planets have been discovered in binary systems that have similar separations, but using a different technique. This is the first one close to Earth-like size that follows an Earth-like orbit, and its discovery within a binary system by gravitational microlensing was by chance.
“Normally, once we see that we have a binary, we stop observing. The only reason we took such intensive observations of this binary is that we already knew there was a planet,” Gould said. “In the future we’ll change our strategy.”
In particular, Gould singled out the work of amateur astronomer and frequent collaborator Ian Porritt of Palmerston North, New Zealand, who watched for gaps in the clouds on the night of April 24 to get the first few critical measurements of the jump in the light signal that revealed that the planet was in a binary system. Six other amateurs from New Zealand and Australia contributed as well.
Other project collaborators hailed from Ohio State, Warsaw University Observatory, Chungbuk National University, Harvard-Smithsonian Center for Astrophysics, University of Cambridge, Universidad de Concepción, Auckland Observatory, Auckland University of Technology, University of Canterbury, Texas A&M University, Korea Astronomy and Space Science Institute, Solar-Terrestrial Environment Laboratory, University of Notre Dame, Massey University, University of Auckland, National Astronomical Observatory of Japan, Osaka University, Nagano National College of Technology, Tokyo Metropolitan College of Aeronautics, Victoria University, Mt. John University Observatory, Kyoto Sangyo University, Tel-Aviv University and the University of British Columbia.
Funding came from the National Science Foundation, NASA (including a NASA Sagan Fellowship), the European Research Council, the Polish Ministry of Science and Higher Education, the National Research Foundation of Korea, the U.S.-Israel Binational Science Foundation, the Japan Society for the Promotion of Science, the Marsden Fund of the Royal Society of New Zealand and the Israeli Centers of Research Excellence.
|
April 6, 2012
Long-Term Study Sheds New Light On Climate Change Impact
Scientists working on an ongoing study investigating the impact of climate change on various ecosystems have revealed that inhabitants of areas that typically experience ice and snow during the winter months are the most threatened by increasing global temperatures.
The finding comes after more than three decades' worth of study as part of the Long Term Ecological Research (LTER) Network, a National Science Foundation (NSF) initiative that features more than 1,800 scientists and students conducting long-term investigations at 26 diverse ecosystems located in the continental US, Alaska, Antarctica, and islands in the Caribbean and the Pacific. The findings were published Friday in the journal BioScience.

According to an April 6 press release, the research team reported that, as the average temperatures of areas that typically experience winter-time snow and ice increased, a "significant" amount of water that typically enters streams and is used for human consumption and irrigation in semi-arid locales winds up being lost to the atmosphere instead.
"The vulnerability of cool, wet areas to climate change is striking," Julia Jones, one of the lead authors of the BioScience study, said in a statement. "Streams in dry forested ecosystems seem more resilient to warming. These ecosystems conserve more water as the climate warms, keeping streamflow within expected bounds“¦ This research shows both the vulnerability and resilience of headwater streams. Such nuanced insights are crucial to effective management of public water supplies."
The LTER research also discovered that the warming climate is affecting the cryosphere, the high-latitude regions where water is frozen for at least one month out of the year, in ways experts had not previously realized. A second press release, this one originating from BioScience publishers the American Institute of Biological Sciences (AIBS), revealed that it isn't only larger animals such as the polar bear and the penguin that are being affected by climate change.
The article, which was written by Andrew G. Fountain of Portland State University and five coauthors, "describes how decreasing snowfall in many areas threatens burrowing animals and makes plant roots more susceptible to injury, because snow acts as an insulator," the AIBS said. Furthermore, "because microbes such as diatoms that live under sea ice are a principal source of food for krill, disappearing sea ice has led to declines in their abundance -- resulting in impacts on seabirds and mammals that feed on krill."
The disappearing sea ice also appeared to decrease the uptake of carbon dioxide from the atmosphere to the sea, the organization reported, and that changes in the amount of snow on the ground and the melting of permafrost in regions can make these places unsuitable for certain types of plant life, while also reducing the amount of CO2 taken out of the atmosphere by both plants and microbes.
"Shrinking glaciers add pollutants and increased quantities of nutrients to freshwater bodies, and melting river ice pushes more detritus downstream. Disappearing ice on land and the resulting sea-level rise will have far-reaching social, economic, and geopolitical impacts," Fountain and his colleagues discovered, according to the press release. "Many of these changes are now becoming evident in the ski industry, in infrastructure and coastal planning, and in tourism. Significant effects on water supplies, and consequently on agriculture, can be predicted."
The Fountain and Jones studies are two of six resulting from LTER Network research, all of which were published in Friday's special edition of the AIBS journal.
The LTER program was created by the NSF in 1980, with the goal of conducting research on ecological issues over massive geographical areas and extended periods of time, even decades. Annually, the project involves more than 2,000 scientists participating in over 200 large-scale field experiments, with the findings typically being made available to the public for no charge online.
"The LTER sites are providing transformative information about the causes and consequences of climate and environmental changes to ecosystems," David Garrison, the NSF program director for coastal and ocean LTER sites, said in a statement. "These sites are some of our best hopes for providing the sound scientific underpinnings needed to guide policy for the challenges of future environmental change."
Image Caption: Adelie penguins near the Palmer Station LTER site in Antarctica; their numbers have declined. Credit: Zena Cardman
|
According to Math Is Fun, "a cross section is the shape you get when cutting straight across an object." For example, if you were to "cut" through the middle of a cylinder, you would have a circle. To determine the volume between two cross sections, you calculate the end area volume. Although this may sound a bit confusing, the formula is actually quite simple. To find the end area volume, you first need to know the length between the two cross sections and their areas.
Write down the equation for end area volume: Volume = length x 1/2 (A1 + A2) cubic meter
Fill in the variables that are known. For this example, let's say that you need to find the volume (V) of two cross sections with length (L) 40 m and two areas (A1 and A2) of 110 m^2 and 135 m^2, respectively: V = 40 x 1/2 (110 + 135)
Add the two areas (A1 + A2) together: V = 40 x 1/2 (245)
Multiply 1/2 and 245 together: V = 40 x 122.5
Multiply 40 and 122.5 together: V = 4,900 m^3
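The whole procedure collapses into a one-line function. A minimal sketch reproducing the worked example above:

    def end_area_volume(length, area1, area2):
        """Average end area method: V = L * (A1 + A2) / 2."""
        return length * (area1 + area2) / 2.0

    volume = end_area_volume(length=40.0, area1=110.0, area2=135.0)
    print(f"Volume = {volume:,.0f} m^3")  # Volume = 4,900 m^3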
|
Thong et al. / Journal of Mammalogy / ASM / Allen Press
Griffin’s leaf-nosed bat, a newly identified species from Vietnam, has a bizarre set of leaflike protuberances arrayed around its nose.
A brand-new species of leaf-nosed bat has been identified in Vietnam, on the basis of its genetic differences as well as its sonar frequency. The findings, reported in the Journal of Mammalogy, suggest that different bat species living in the same habitat keep to their own in part due to the echolocating sounds they emit.
The new species — Griffin’s leaf-nosed bat, also known by the scientific name Hipposideros griffini — is slightly smaller than its close cousin, Hipposideros armiger, the great leaf-nosed bat. During a three-year bat survey, researchers found 11 specimens of the new species on Cat Ba Island in Ha Long Bay in northern Vietnam, and in Chu Mom Ray National Park on the mainland, more than 600 miles (1,000 kilometers) to the south.
Like its bigger cousin, Griffin’s leaf-nosed bat has a bizarre-looking array of leaflike facial protuberances that are thought to enhance the echolocation signals it sends out to avoid obstacles and scan for potential prey. But a computerized analysis of bat calls determined that the smaller bat emits its signals at a slightly higher frequency: 76.6 to 79.2 kHz, as opposed to the range of 64.7 to 71.4 kHz for several subspecies of the great leaf-nosed bat. The researchers said H. griffini’s call is distinguishable from all other known leaf-nosed species in its habitat, which means the frequency could be used to identify the bat in future field studies.
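As a hedged sketch of that field-identification idea, a recorded call could be assigned to a species by checking which published frequency range contains it. The ranges come from the study; the function and labels are illustrative only.

    # Classify an echolocation call by species-specific frequency range (kHz).
    SPECIES_RANGES_KHZ = {
        "Hipposideros griffini": (76.6, 79.2),
        "Hipposideros armiger (subspecies)": (64.7, 71.4),
    }

    def classify_call(freq_khz):
        """Return the species whose call range contains freq_khz, or None."""
        for species, (low, high) in SPECIES_RANGES_KHZ.items():
            if low <= freq_khz <= high:
                return species
        return None

    for f in [78.0, 68.5, 74.0]:
        print(f"{f} kHz -> {classify_call(f) or 'no match'}")

A 74 kHz call falls between the two ranges, so it matches neither - a reminder that real surveys would need calls from other sympatric species and some tolerance for measurement error.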
Lead researcher Vu Dinh Thong of the Vietnam Academy of Science and Technology said there were other differences as well.
“While captured, some similar body-sized bats, i.e. great leaf-nosed bat, reacts very angrily,” he told National Geographic in an email. “But Griffin’s leaf-nosed bat seems quite gentle.”
The research team confirmed their suspicions that the gentler, smaller, higher-pitched bat represented a different species by analyzing the bats’ mitochondrial DNA, according to the journal report. The species was named after the late Rockefeller University researcher Donald Redfield Griffin, who played a leading role in the echolocation research that helped in the identification. H. griffini joins more than 70 other species in the genus Hipposideros.
In addition to Vu Dinh Thong, authors of “A New Species of Hipposideros (Chiroptera: Hipposideridae) From Vietnam” in the February issue of the Journal of Mammalogy include Sebastien J. Puechmaille, Annette Denzinger, Christian Dietz, Gabor Csorba, Paul J.J. Bates, Emma C. Teeling and Hans-Ulrich Schnitzler.
Alan Boyle is msnbc.com’s science editor.
|
The transition between the Precambrian and the Cambrian period (about 550 million to 500 million years ago) records one of the most important patterns of fossils in all the geological record. Complex animals with a suite of shells, intricate body plans and associated movement traces appeared for the first time, suddenly and unambiguously, in sequences all over the world during this interval. This ‘Cambrian explosion’ remains one of the most controversial areas of research in all of the history of life, and one of the most exciting. Palaeontological data like this is definitive in its support for evolutionary theory: the relative sequence of first appearances in the fossil record over the past several billion years ties very closely with what we would expect from evolutionary theory. Bacteria appear first, followed by simple nucleated cells, then simple animals, plants and fungi, followed by more advanced organisms. Without palaeontological data, we have no way of looking at the biology of extinct organisms, which represent more than 99% of everything that has ever lived. We wouldn’t know about anomalocaridids, dinosaurs or giant sloths, and we would be much the poorer for that. Furthermore, without palaeontological data we wouldn’t know about the great events in the history of life, such as the mass extinction of the dinosaurs. In contrast to that event, the Cambrian explosion is a mass origination: a time when the pattern of fossils suggests that animals appeared during a relatively short interval of time. To understand what happened at that time, we must understand what a ‘pattern’ of fossils is. How do we recognize patterns of fossils? Is the observed pattern reliable, or are we in some way being misled by the fossil record? What process formed the pattern?
These questions are really all the same question, although at first they look very different. The problem all comes down to what is known as the pattern and process paradox. Once you understand the existence of this paradox, it changes how you look at the history of life. A fossil pattern is the information gained when you classify organisms that are observed in sequences of rocks. Some fossils appear before others, some occur later. Classifying them into different biological groupings produces a pattern of appearances of these groups in the fossil record. The natural processes that generate such a pattern of fossils can be of many different types, such as the evolutionary processes that cause creatures to change from generation to generation, or the process of fossilization itself. At first, this all seems fine. There are two distinct things that it is reasonable to be interested in: fossil patterns and evolutionary processes. That is how many biologists and palaeontologists treat them. If you learn something about the fossil pattern, then you can work out the evolutionary processes. If you discover some new process of evolution, then that would help to explain fossil patterns.
Unfortunately, it is much more difficult than that (see Fig. 1). Fossil patterns and evolutionary processes are not separate sets of data. The fossil pattern that you see is based on how you have classified your fossils, and that in turn is based on two things. First, what evolutionary process do you think best explains how the organisms should be classified? Should you classify fossils based on overall similarity to each other, or overall difference? Should some characters have priority over others (for example, is mode of reproduction more important than length of arm), or should all characters be treated equally? If characters are to be treated differently, then how should they be treated differently?
Second, how does preservation affect the fossil pattern that you see? Are the characters present in the fossil just the ones that preserve well, or are they a reliable witness to evolutionary history? Are characters that are absent from the fossil absent because they do not preserve well or because they hadn’t evolved yet in the group that the fossil represents? For example, soft tissue doesn’t preserve as readily as biominerals, such as shells and bone, so we expect to see more hard-bodied animals preserved in the fossil record and fewer soft-bodied animals. These sorts of bias in the preservation processes affect the pattern of fossils we see. So in merely recognizing a fossil pattern, you have already had to make a great many assumptions about the evolutionary and preservation processes that produced it.
The situation is similar for understanding evolutionary processes from the fossil record. There are many evolutionary processes that are well documented and well understood from studies of modern organisms: gradual evolution by natural selection, for example; symbiogenesis (when two organisms merge and start living and reproducing as one organism — found only in very simple creatures such as bacteria); or even major alterations owing to large or significant mutations. These processes are very useful for understanding vast amounts about biodiversity on Earth. But are there other processes that act over longer time periods, which we can’t observe in the modern world because they are too slow? What evolutionary processes operate during the evolution of major new groups of animals? Are they the same as or different from those that cause small changes in living animal groups? To answer such exciting questions about long-term evolutionary processes, we must look at the patterns that we see in the fossil record.
When examining what we think may be a rational idea about evolutionary processes, we must refer to the evidence from fossil patterns — but the evidence from fossil patterns is already full of assumptions about evolutionary processes. The only reason we could form a fossil pattern is because we have ideas about evolutionary process that we think are reasonable that allow that pattern to form. The fossil pattern and evolutionary process are a circular paradox from which there is seemingly no escape.
Attempts to understand the Cambrian explosion provide some of the most interesting examples of the pattern and process paradox. Before the Cambrian explosion, the fossil record contains no shells or bones — only soft-bodied fossils. During the Cambrian explosion, biomineralized animals appear for the first time, as does unambiguous evidence for almost all the major modern groups of animals. But what should we make of all the fossils that are older than this? Does the appearance of biomineralized animals mark roughly the origin of animals, or did animals evolve earlier and only later add biominerals to their bodies? Some scientists base their views concerning the origin of animals on evidence from evolutionary processes, whereas others base their views on patterns of fossils. There are five basic models concerning the origin of animals and how this relates to the pattern of fossils at the base of the Cambrian (see Fig. 2).
Model 1: The slow-burning fuse
The fossil pattern suggests that the major diversification of animals started at the beginning of the Cambrian period (about 543 million years ago). Evidence for this includes the gradual assembly in the fossil record of the major modern animal groups as they acquire the important characters that allow us to distinguish them in the modern world. The pattern of the fossils in this regard is generally accepted to be true, which is important evidence that something significant and interesting took place at this time.
However, many scientists have argued that some very primitive animals existed a long way back into the Precambrian. Evidence for this comes from controversial genetic data from ‘molecular clocks’. The molecular clock works more or less on the principle that speed = distance/time. If you can take genes from two different animals, and you know in general how quickly genes evolve (speed) and how different two particular genes are from each other (distance), then you can calculate how long it must have been since they were the same gene: that is, when the ancestor containing the genes must have lived. In the calculation, the speed of evolution is worked out using fossils that everyone can agree on, known as calibration points. These allow you to say that certain animal groups must have evolved by a particular time, because we have some very convincing fossils of them by then (usually from the Cambrian or younger sediments). When you do this calculation to work out when the first animals lived, you invariably get an origin time well into the Precambrian: the earliest molecular clocks (which were very methodologically flawed) all made predictions of very ancient divergences. In this way, calculations based on an understanding of processes can lead people to form very strong views about what the pattern of the fossil record should look like. People who believe this molecular-clock data expect to find animal fossils one billion years old, and they sometimes accept evidence to support this even when that evidence isn’t very good.
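The arithmetic behind this is simple enough to show in a toy sketch. All numbers below are invented for illustration; real molecular-clock analyses use many genes, many calibrations, and far more sophisticated statistical models.

    # Toy molecular clock: speed = distance / time, so time = distance / speed.
    def calibrate_rate(genetic_distance, calibration_age_myr):
        """Substitutions per site per million years, from a fossil-dated split."""
        return genetic_distance / calibration_age_myr

    def divergence_time(genetic_distance, rate):
        """time = distance / speed."""
        return genetic_distance / rate

    # Calibration: a pair of lineages with genetic distance 0.10 whose split
    # is fossil-dated to 520 million years ago.
    rate = calibrate_rate(0.10, 520.0)

    # Applying that rate to a more divergent pair (distance 0.16) pushes the
    # estimated split deep into the Precambrian.
    print(f"Estimated divergence: {divergence_time(0.16, rate):.0f} million years ago")

With these made-up numbers the estimate is about 832 million years ago - an illustration of how modest genetic distances, combined with an assumed constant rate, yield the deep Precambrian origins discussed here.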
The slow-burning-fuse model suffers from over-reliance on evidence from molecular clocks. Molecular clocks are like trying to draw a dot-to-dot picture, with fossil calibration points as dots and lines based on inferences from genetic data. Unfortunately, you don’t really know for certain where the dots are, and you can complete the picture only because you have an idea of how you should draw lines. Ultimately such analysis becomes an argument about the reliability of the dots (have the fossil calibration points been correctly interpreted?). In fact, it is even worse than this makes it seem. Imagine you had a dot-to-dot picture of a bear, and from what you could see it wasn’t wearing a hat. Now, imagine that you started to draw a hat on the bear not because you have any reason to put it there (there are no dots), but because you think it should be there. Using molecular clocks to learn about the origin of animals is like inventing new bits of the picture, having gone way past the parts of the picture that did have reliable dots.
Some fossils, such as the Stirling Biota from Western Australia — centimetre-scale fossils from around 2 billion years ago — have been controversially interpreted to support the slow-burning-fuse model, but these are not considered by most palaeontologists to be good evidence for early animals, because they show no characteristics that are unique to animals. Many other things could have made them, some of which may not even be biological.
In summary: animals originated perhaps a billion or more years ago but didn’t really get going until around the Cambrian when they diversified dramatically.
Model 2: The shallow fuse
Perhaps more mainstream than the first model, this view also supports the idea that animal groups diversified during the Cambrian, but it places the origin of animals later: at the end of the Precambrian (Ediacaran period, about 635 million to 542 million years ago). A number of molecular-clock models have made predictions that have supported animal origins in this interval; these models, including the most recent ones, seem to have overcome some of the methodological flaws of earlier efforts. Unfortunately, we have few ways to judge molecular-clock estimates apart from against the pattern of the fossil record. We have a tendency to think that molecular clocks are more reliable when the estimates they give are closer to the evidence from the fossil record. However, that doesn’t mean that anything is analytically better about the molecular clocks that give an estimate of 600 million years for the origin of animals as opposed to the clocks that give estimates of 700 or 800 million years; they still suffer from the problem of drawing lines in a dot-to-dot picture that lacks the necessary dots.
There are many Ediacaran fossils that have been interpreted as evidence for the earliest animals. Here, the fossil record is regarded mostly as a reliable witness to early evolution, with preservation processes having fossilized soft-bodied organisms. The Ediacaran biota are a group of highly enigmatic soft-bodied fossils (lacking shells, bones, and teeth) that are preserved globally from roughly 580 million to 545 million years ago. These and related fossils have received much attention for their basic morphological similarity to many different modern animal groups (see Figs 3 and 4).
But it is important to remember that there is no such thing as an unambiguously interpreted Precambrian animal fossil: if there were, there would be no debate. When you look at the Ediacaran fossils in detail, they are difficult to match directly with modern animal groups — they lack definitive characters and they seem to grow in completely different ways to modern animals groups (Fig. 5). Similarly, fossils once thought to be animal embryos from the Doushantuo Formation of China (about 555 million years old) no longer convince the community at large that they are animals or embryos, let alone both. We must also ask, why should good animal fossils not be preserved for so long? How did they evade the fossil record, when there are so many examples of amazing preservation of fossil algae and other soft-bodied forms in the late Precambrian? It doesn’t seem to be because the fossil record at this time was a hopeless record of what was living.
Overall, this remains a fairly mainstream view that has much to recommend it — not least the logic that if the first animal fossils appear at 543 million years ago then animals must have evolved before they left fossils. That does not mean that they had to evolve 90 or 100 million years earlier; perhaps it was just a few million years. The shallow fuse could be very shallow indeed. In that case, this model becomes similar to model 3.
In summary: Origin of animals somewhere in the Ediacaran, major diversification in the Cambrian.
Model 3: Evolutionary Big Bang
The pattern of the Cambrian record is profoundly impressive. In geological sections all over the world, dated independently using different methods that all give the same answer, a simple suite of animal burrows and shells appears in a regular pattern from about 543 million to 510 million years ago. It includes the origin of biominerals, the first complex animal burrows, the first animal reefs (see Fig. 6), the gradual assembly of recognizable animal body plans, the first zooplankton (microscopic animals) and the first animal predators. The third model differs little from the previous two, except that here, the fossil pattern of the Cambrian explosion is thought to mark not just the major diversification of animals, but also their actual origin. Many opponents of this view point out the controversially interpreted Ediacaran fossils and the data from molecular clocks that imply a much older origin of animals. Furthermore, the origin of biominerals at the base of the Cambrian greatly increased fossilization potential; many argue that this means that the Cambrian fossil pattern is an artefact of better preservation. But the major strength of this model in comparison to the first two is that it does not require some unknown and speculative mechanism to prevent the diversification of animals once they have originated. In this model, once animals evolved they could immediately diversify; there is no need for a long time to pass with a few animals but not all, as in the previous models. It also means that animals do not need to avoid leaving convincing fossils in the record for hundreds of millions of years before the Cambrian. So mechanistically, this model is probably the best understood and probably requires fewest assumptions regarding the nature of the Precambrian fossil record, which on the whole seems to preserve a lot of high-quality fossils that are simply not definitively animal.
In summary: Animals originated and diversified at or around the base of the Cambrian.
Models 4 and 5: The shallow missing record and the deep missing record
The only book written by a geologist that most biologists have read is On The Origin of Species (1859) by Charles Darwin. This great work had one unfortunate side effect: it convinced a great number of biologists that the fossil record was so incomplete that it was essentially not worth their attention. This view is still common among biologists, who, despite being exceptionally well versed in modern biological data, are often roughly 150 years out of date when it comes to palaeontological data. The Precambrian is no longer considered a wasteland barren of fossils, as it was during the nineteenth century. Still, many scientists propose that because molecular clocks predict that animals originated and diversified in the Precambrian, this is what must have taken place, and the fossil record is too incomplete to provide meaningful constraints. This is rather interesting because molecular clocks have to be calibrated against fossils (the dots in the dot-to-dot picture); otherwise there is little idea of the picture of early animal evolution.
Advocates of the shallow missing record model and the deep missing record model usually argue that the Cambrian explosion is simply an artefact of fossil preservation, and not at all related to evolutionary process and events. However, this doesn’t account for the great number of exceptionally preserved soft-bodied fossils now known from the Precambrian, or for the astounding absence of even a single uncontroversial animal among them. The only real difference between these models is how far into the Precambrian the molecular clock predicts diversification.
In summary: Animals originated and diversified somewhere in the Precambrian; the Cambrian explosion is an artefact of fossil preservation.
Discussion (and my opinion):
Exploring the origin of animals and the Cambrian explosion produces a pattern and process paradox, and a doubly difficult one because the evolutionary process is not the only type of process involved. We must also concern ourselves with the preservation processes that were in play between about 550 million and 500 million years ago. Much work over the past few decades has focused on the context and the time frame of Cambrian explosion fossils, which has allowed more recent work to speculate on the evolutionary processes that operated during this time. It is now crucial in this field to appreciate the subtle inter-relations of fossil pattern and evolutionary process, particularly when making inferences from one to the other. To make matters worse, at this interval events are confounded by the changes in preservation process that also interact with the fossil pattern, forming a second inference paradox. When these things are considered independently, the situation seems hopeless and beyond obvious resolution. However, when taken together, models of large-scale evolutionary processes and large-scale preservation processes shed light on each other and produce a coherent and predictive model for the origin of animals.
Ultimately, the debate would be settled by a well preserved animal fossil from the Precambrian. It is not really plausible that animals, if they existed then, could have evaded fossilization for hundreds of millions of years in the Precambrian. The Ediacaran period was a time of prodigious fossilization potential, shown by the global preservation of soft-bodied fossils such as the Ediacaran biota; by geological data showing that sediment turned to rock quickly and early in the sediment cycle, resulting in fossils forming before bacteria had time to decay tissue; by crystal deposits that show that the oceans were not dissolving cements, and so were helping rock to form quickly; and by the chemical profiles of the sediments that show that oxygen levels were low, so bacterial decay would have been hindered.
These conditions were destroyed towards the start of the Cambrian, when animals started to dig vertical burrows that let air diffuse into the sediment, allowing bacteria to use oxygen to help them to decay tissue. The burrows also help to break up buried bodies, because the predatory animals making the burrows would eat organisms in the sediment. The evolution of bones and teeth depleted the oceans of all the chemicals that were making early cements in the Precambrian, so decay processes had longer to act before sediment turned to rock and organism turned to fossil. Thus the preservation of soft tissue became substantially less likely at around the same time that animals acquired biominerals for the first time, and the ‘shelly’ fossil record began. This leads to the astounding but apparently real conclusion that the quality of the fossil record actually improves as you go back into deep time. Animals, with their shells and burrowing behaviour, didn’t improve the fossil record (except in terms of bulk abundance of shells and burrows), but actually went a long way to destroying the possible preservation mechanisms of soft tissue (see Fig. 7).
What caused the Cambrian events can be explained either by the origin of animals and their subsequent natural selection causing diversification, or by a much earlier origin followed by a long gap of uncertain cause before something else triggers the radiation of animals during the Cambrian. Such triggers always fall into the pattern and process paradox: there is no independent or cohesive evidence for the extraordinarily slow evolutionary processes that would be required for deep Precambrian origins and then later Cambrian diversification. By contrast, there is a great deal of evidence from the Cambrian fossil pattern that indicates that the Cambrian explosion saw both the origin of the animal kingdom and the diversification of animal body plans with the origin of biomineralization, the origin of animal burrowing, the origin of the predator–prey system, the evolution of the first metazoan reefs and the evolution of zooplankton.
The Precambrian is full of interesting and controversial fossils, and not enough people are working on them. You should come and join us. Now is the time for detailed palaeontology on these fossils to understand their anatomy, compare suites of similar fossils worldwide and study their preservation. I think that this field has the most exciting questions in palaeontology, and it doesn’t look as if the scientific community is going to settle on ‘the answers’ any time soon. There is still so much to do.
Suggestions for further reading:
Antcliffe, J. B. & Brasier, M. D. 2008. Charnia at 50: developmental analysis of Ediacaran fronds. Palaeontology 51, 11–26. (doi:10.1111/j.1475-4983.2007.00738.x)
Bengtson, S. 1985. Taxonomy of disarticulated fossils. Journal of Paleontology 59, 1350–1358. (doi:10.2307/1304949)
Brasier, M. D., Antcliffe, J. B. & Callow, R. 2010. Taphonomy in the Ediacaran interval. In Taphonomy: Bias and Process Through Time (eds Allison, P.A. & Bottjer, D.), pp. 519–567. New York: Springer. (ISBN:9789048186426)
Budd, G. E. & Jensen, S. 2000. A critical reappraisal of the fossil record of the bilaterian phyla. Biological Reviews 75, 253–295. (doi:10.1017/S000632310000548X)
Erwin, D. H., Laflamme, M., Tweedt, S. M., Sperling, E. A., Pisani, D. & Peterson, K. J. 2011. The Cambrian conundrum: early divergence and later ecological success in the early history of animals. Science 334, 1091–1097. (doi:10.1126/science.1206375)
Gehling, J. G. & Rigby, J. K. 1996. Long expected sponges from the Neoproterozoic Ediacara fauna of South Australia. Journal of Paleontology 70, 185-195 (http://www.jstor.org/discover/10.2307/1306383)
Kemp, T. S. 1999. ‘Some fundamental ideas’ in Fossils and Evolution. Chapter 2. Oxford: Oxford University Press.
Peterson, K. J., Lyons, J. B., Nowak, K. S., Takacs, C. M., Wargo, M. J. & McPeek, M. A. 2004. Estimating metazoan divergence times with a molecular clock. Proceedings of the National Academy of Sciences 101, 6536–6541. (doi:10.1073/pnas.0401670101)
1Department of Earth Sciences, University of Bristol, Queen’s Road, Bristol, BS8 1RJ, UK.
|
Photo credit: Gaius J. Augustus
The United States is in the midst of a head-and-neck cancer epidemic. Although survival rates are relatively high — after treatment with chemotherapy and radiation — survivors can suffer permanent loss of salivary function, potentially leading to decades of health problems and difficulties eating.
It is unknown why the salivary gland sometimes cannot heal after radiation damage, but Wen Yu “Amy” Wong, BS, a University of Arizona cancer biology graduate student, may have taken a step toward solving that riddle.
Radiation often comes with long-term or even permanent side effects. With a head-and-neck tumor in radiation’s crosshairs, the salivary gland might suffer collateral damage.
“When you get radiation therapy, you end up targeting your salivary glands as well,” Wong said. Losing the ability to salivate predisposes patients to oral complications and an overall decrease in their quality of life. “Salivary glands help you digest food, lubricate your mouth and fight against bacteria. After radiation, patients could choke on their food because they can’t swallow. They wake up in the middle of the night because their mouth is so dry. They often get cavities.”
Favorite foods may lose their flavor. “Saliva produces certain ions that help
|
Calderas are massive crater-like volcanic features that can cover many tens of square miles. A caldera is a volcano without a cone. Calderas form in one of two ways. One way is a catastrophically destructive eruption: the magma chamber below empties and the original volcano collapses into it, leaving a caldera. The other way calderas may form is over time, through a series of smaller volcanic eruptions.
Aerial view of Caldera Batur.
Calderas are different from craters. Craters form from blasts directed outward, blowing away debris; calderas form when the volcano sinks inward. In Spanish, caldera means “cauldron”.
Calderas typically hold felsic magma, which is high in silica and gas, so their eruptions are explosive, with pyroclastic flows.
One famous example of a caldera is North America’s Crater Lake. Around 7,000 years ago a massive volcanic peak reaching 12,000 feet in elevation stood in this location. A powerful eruption blasted the top 4,000 feet away, leaving a deep bowl-shaped caldera. Since that time, a new dome has begun to form in the center of the caldera.
The largest caldera in the United States is in Yellowstone National Park. The Yellowstone caldera is often referred to as the Yellowstone Supervolcano. It measures about 34 by 45 miles. The last time the Yellowstone caldera erupted was about 650,000 years ago. This eruption released about 1,000 cubic kilometers of debris, covering much of North America. At the same time, the lava cooled to form many of the landforms we know today.
A picture of a geyser erupting in Yellowstone Park. Geysers exist in areas of high volcanic activity.
The Yellowstone Caldera sits over the Yellowstone hotspot, which means the caldera is still active! That said, it's very unlikely that it will erupt anytime soon.
The largest volcanic structure on Earth is a caldera, now occupied by Lake Toba in Indonesia. About 75,000 years ago this caldera erupted, releasing about 2,800 cubic kilometers of debris. This is the largest known eruption in the last 25 million years, and its results are still clear to see today. Anthropologists suggest that the eruption caused a volcanic winter, which may have reduced the human population to 2,000–20,000 people.
|
Benign moles are extremely common, but their functions (if any) are unknown. There are theories, however; the most prominent is that the extra melanin produced in the moles' melanocytes helps to protect against UV radiation. But melanin-rich sites in human skin often occur in areas that little or no sunlight ever reaches. Further, many nocturnal mammals (bats, for example) have skin rich in melanocytes though they are in no danger from UV in sunlight.
Alternative theories suggest that moles may play a role in immunological defence.
“ […] melanocytes are not simply pigment-producing cells, but produce substances with a range of biological functions, including structural strengthening by cross-linking proteins, antimicrobial defense, photon shielding, and chemoprotection. […] to provide several physiologically significant functions, including the provision of communicatory links with several different systems, e.g. the skin, central nervous system, and immune/ inflammatory responses.”
See: 'The mole theory: primary function of melanocytes and melanin may be antimicrobial defense and immunomodulation (not solar protection)' International Journal of Dermatology, 44, 340–342.
|
Previously, scientists thought that the growing number of receptors triggered a strong T cell activation. But when Groves and his team blocked the migration of T cell receptors by binding them to locked-in proteins on the artificial membrane, which acts like an infected cell, they discovered it was the position of the receptor that actually controlled the response.
The technology eventually could be used to develop cell-based drug screens to determine how candidate compounds affect immune-cell signaling. For example, scientists could expose cells bound to an artificial membrane to different drugs, and observe how those drugs affect T cell clustering. "Understanding how [cell signaling] works is a big component of learning how to control it with drugs," says Groves.
The findings could also lead to new treatments for auto-immune diseases, in which the immune system attacks the body's own proteins. "Effective treatments for auto-immune diseases like Rheumatoid arthritis turn down immune response, but this leaves the patient more vulnerable to infection," says Michael Dustin, an immunologist at the Skirball Institute of Biomolecular Medicine at New York University, who collaborated on the Berkeley project. "You could use patterned particles to make more specific treatments, but first we need to learn the language."
Once researchers experimentally determine the signals associated with different patterns, it may be possible to build a particle with pre-patterned receptors that direct T cells to turn off the immune response, says Dustin. If the pattern was specific enough to turn off the immune response in particular organs, such as the brain in multiple sclerosis or the joints in rheumatoid arthritis, the rest of the immune system could still function effectively to fight viral invaders.
The technique also has blue-sky applications, going far beyond the immune system. "If you can make artificial surfaces that communicate with cells on a sophisticated level, you could make devices that tell cells what to do," says Groves. "You could get cells to generate energy or do a chemical conversion; it would be tremendous."
|
Dimensions of Technology Change
From its inception, education technology has caused divergent views about its effects on teaching and learning. For some educators, the advent of radio in the 1930s and television in the 1950s promised to democratize education; others believed these technologies might become tools of fascist leaders used to dominate people's thinking. In fact, radio and TV did neither. Eventually, they supplemented and extended traditional courses, yet most educators still had few options. Faculty members could lecture, engage in classroom discussion, and assign readings, papers, and projects—not much else. The academic schedule permitted few variations in the number and length of weekly meetings.
In the 1960s and 1970s, some educators thought computer assisted instruction (CAI) and computer-based training (CBT) would enable everyone to learn at their own pace, making lectures and class meetings obsolete. Often under new names, these technologies continued to supplement and extend traditional courses, but did not offer new options. However, the value of individualized learning was much discussed. The desirability of enabling learners both to take different paths through an organized collection of materials and to use different media was confirmed. Many came to acknowledge the variety and impact of learning disabilities and styles. But most faculty members could not even begin to reach new goals without making extraordinary personal effort or gaining unusual external support.
The advent of the personal computer in the 1980s renewed enthusiasm for a revolution in education. Although PCs continue in technology's role to supplement and extend traditional courses, they alone do not extend teaching and learning options beyond what they were in the pre-PC days.
The Web—and even more rapidly and widely, e-mail—began in the 1990s to give teachers and learners a few more viable options. Faculty could now find and provide timely, fresh, immediate information to their students via the Web. Students could do research, with access to unprecedented volumes and varieties of information. Online education became much more common, most often as supplement to classroom-based courses.
Meanwhile, the calls for a major shift toward "learner-centered education" increased, with emphasis on individualization, independent learning, active learning, authentic teaching, standardization of course content and outcomes, and "scalable" new programs. In the era of "seamless integration" of academic resources and "ubiquitous computing," many reports suggest that the more pervasive use of such technologies can support this "personalization" of education, in which participants build personal connections with each other.
The new challenge for students, faculty, staff, and administrators is to learn how to take advantage of too many options—instead of too few. How can each institution make good choices and effectively implement them?
This little history of technology in education suggests that we all watch these dimensions of change:
- Individualization: Responding to individual differences among learners and teachers, both in learning and teaching "styles" and in media preferences
- Standardization and Access: Providing equitable and convenient access to information and instructional resources
- Personalization: Enhancing communication and "connectedness" across all kinds of boundaries
- "Communitization": Supporting the development and maintenance of communities of learners, teachers, citizens.
More dimensions will continue to evolve, and as they do, we need to learn how to keep changing.
Steven Gilbert is President of the TLT Group and moderates the Internet listserv TLT-SWG.
|
Adapt and overcome. At least this is what scientists believe two species of shark have done to create a hybrid that can cope with climate change off the east coast of Australia. The Australian black-tip, whose range extends north from Brisbane, and the common black-tip from Australia's southeastern coastal waters have interbred, yielding a new shark able to tolerate both cooler and warmer waters. So far, 57 hybrid sharks have been found along a 1,243-mile stretch of coastline, from New South Wales to far north Queensland.
The astounding discovery, which scientists add has created a more 'robust' form of shark, is unprecedented. While hybridization is common in plants and other fish because of egg release, sharks must physically mate. "It's very surprising because no one's ever seen shark hybrids before; this is not a common occurrence by any stretch of the imagination," lead researcher Jess Morgan from the University of Queensland told AFP.
The interbreeding is believed to have occurred in response to rising sea temperatures caused by global warming. The new, potentially stronger hybrid is the world's first known hybrid shark which contains both common and Australian black tip DNA. By hybridizing, the new sharks are able to range further afield. The smaller Australian black-tip for example, is a tropical shark that needs warmer waters, yet the hybrid offspring has been discovered in much cooler waters.
Scientists are now wondering whether the hybrid offspring are limited to just these two species of shark. Fellow researcher Colin Simpfendorfer of James Cook University said, "we thought we understood how species of sharks have separated, but what this is telling us is that in reality we probably don't fully understand the mechanisms that keep species of shark separate."
Researchers also discovered that the hybrids accounted for 20% of black-tip populations in some areas, but did not displace the single-breed black-tips. Scientists describe the find as seeing "evolution in action."
|
Endosomes and Endocytosis
Endosomes are membrane-bound vesicles, formed via a complex family of processes collectively known as endocytosis, and found in the cytoplasm of virtually every animal cell. The basic mechanism of endocytosis is the reverse of what occurs during exocytosis or cellular secretion. It involves the invagination (folding inward) of a cell's plasma membrane to surround macromolecules or other matter diffusing through the extracellular fluid. The encircled foreign materials are then brought into the cell, and following a pinching-off of the membrane (termed budding), are released to the cytoplasm in a sac-like vesicle. The size of vesicles varies, and those larger than 100 nanometers in diameter are typically referred to as vacuoles.
Three primary mechanisms of endocytosis that are exhibited by a typical cell are illustrated in Figure 1. On the far left of the figure, receptor mediated endocytosis, which is the most specifically-targeted form of the endocytic process, is presented. Through receptor mediated endocytosis, active cells are able to take in significant amounts of particular molecules (ligands) that bind to receptor sites extending from the cytoplasmic membrane into the extracellular fluid surrounding the cell. These receptor sites are commonly grouped together along coated pits in the membrane, which are lined on their cytoplasmic surface with a bristle-like collection of coat proteins. The coat proteins are thought to play a role in enlarging the pit and forming a vesicle. Note, as shown in Figure 1, vesicles produced via receptor mediated endocytosis may internalize other molecules in addition to ligands, though the ligands are usually brought into the cell in higher concentration.
A less specific mechanism of endocytosis is pinocytosis, which is illustrated in the central section of Figure 1. By means of pinocytosis, a cell is able to ingest droplets of liquid from the extracellular fluid. All solutes found in the droplets outside of the cell may become encased in the vesicles formed via this process, with those present in the greatest concentration in the extracellular fluid also becoming the most concentrated in the membranous sacs. Pinocytic vesicles tend to be smaller than vesicles produced by other endocytic processes.
The final type of endocytosis, termed phagocytosis (see Figure 1), is probably the most well-known manner in which a cell may import outside materials. In many school science labs, children observe amoebas under the microscope and watch the single-celled organisms eat by stretching out pseudopodia and encircling any food particles they find in their paths. This engulfment and subsequent packaging of the particles into vesicles, which are usually large enough to be correctly referred to as vacuoles, is phagocytosis. Though commonly associated with amoebas, phagocytosis is practiced by many organisms. In most multicellular animals, phagocytic cells chiefly function in bodily defense rather than as a means to gain nourishment. For example, leukocytes in the human body often phagocytose protozoa, bacteria, dead cells, and similar materials in order to help stave off infections or other problems.
Once freed into the cytoplasm, several small vesicles produced via endocytosis may come together to form a single entity. This endosome generally functions in one of two ways. Most commonly, endosomes transport their contents in a series of steps to a lysosome, which subsequently digests the materials. In other instances, however, endosomes are used by the cell to transport various substances between different portions of the external cell membrane. This latter function is particularly important among epithelial cells, such as those that compose the outer layer of the skin, because they exhibit polarity (one side of the cell is different from the other side). Illustrated in Figure 2 is a fluorescence digital image of a single African green monkey kidney fibroblast cell (CV-1 line) transfected with a fluorescent protein fused to a targeting amino acid sequence for endosomes (green). The nucleus, plasma membrane, and endosome components are marked in the figure.
An endosome that is destined to transfer its contents to a lysosome generally goes through several changes along its way. In its initial form, when the structure is often referred to as an early endosome, the specialized vesicle contains a single compartment. Over time, however, chemical changes in the vesicle take place and the membrane surrounding the endosome folds in upon itself in a way that is similar to the invagination of the plasma membrane. In this case, however, the membrane is not pinched off. Consequently, a structure with multiple compartments, termed a multivesicular endosome, is formed. The multivesicular endosome is an intermediate structure in which further chemical changes, including a significant drop in pH level, take place as the vesicle develops into a late endosome.
Though late endosomes are capable of breaking down many proteins and fats, a lysosome is needed to fully digest all of the materials they contain. Frequently the contents of late endosomes are conveyed to a lysosome through fusion (joining together) of their membranes. Under some circumstances, late endosomes are able to further mature into lysosomes through additional chemical and structural modifications, in which case fusion is not necessary for digestion to be completed.
|
In 2003, the Hubble Space Telescope took the image of a millennium, an image that shows our place in the universe. Anyone who understands what this image represents is forever changed by it.
The Hubble Space Telescope (HST) is a telescope in orbit around the Earth, named after astronomer Edwin Hubble. Its position outside the Earth's atmosphere provides significant advantages over ground-based telescopes — images are not blurred by the atmosphere, there is no background from light scattered by the air, and the Hubble can observe ultra-violet light that is normally absorbed by the ozone layer in observations made from Earth. Since its launch in 1990, it has become one of the most important instruments in the history of astronomy. With it, astronomers have made many observations leading to breakthroughs in astrophysics. Hubble's Ultra Deep Field is the most sensitive astronomical optical image ever taken.
From its original conception in 1946 until its launch, the project to build a space telescope was beset by delays and budget problems. Immediately after its 1990 launch, it was found that the main mirror suffered from spherical aberration due to faulty quality control during its manufacturing, which severely compromised the telescope's capabilities. However, after a servicing mission in 1993, the telescope was restored to its intended quality and became a vital research tool as well as a public relations boon for astronomy. The HST is part of NASA's Great Observatories series, with the Compton Gamma Ray Observatory, the Chandra X-ray Observatory, and the Spitzer Space Telescope. Hubble is a collaboration between NASA and the European Space Agency.
The Hubble is the only telescope ever designed to be serviced from space by astronauts. To date, there have been four servicing missions, with a fifth and final mission planned for September 2008. Servicing Mission 1 took place in December 1993 when Hubble's imaging flaw was corrected. Servicing Mission 2 occurred in February 1997 when two new instruments were installed. Servicing Mission 3 was split into two distinct missions: SM3A occurred in December 1999 when urgently needed repairs were made to Hubble; and then SM3B followed in March 2002 when the Advanced Camera for Surveys (ACS) was installed.
Since SM3B, the Hubble has lost use of two major science instruments and is operating with viewing restrictions because of rate-sensing gyroscope failures. There are six gyroscopes onboard Hubble and three are normally used for observing. However, after further failures, and in order to conserve lifetime, a decision was taken in August 2005 to switch off one of the functioning gyroscopes and operate Hubble using only 2 gyros in combination with the Fine Guidance Sensors. This mode retains the excellent image quality of Hubble, and provides a redundancy should it be needed. Further redundancy is available now that an operational mode requiring only one gyro has been developed and tested. Six new gyroscopes are planned to be installed in SM4.
|
In physics and engineering, a phasor (a portmanteau of phase vector) is a complex number representing a sinusoidal function whose amplitude (A), frequency (ω), and phase (θ) are time-invariant. It is a special case of a more general concept called analytic representation. Phasors separate the dependencies on A, ω, and θ into three independent factors. This can be particularly useful because the frequency factor (which includes the time-dependence of the sinusoid) is often common to all the components of a linear combination of sinusoids. In those situations, phasors allow this common feature to be factored out, leaving just the A and θ features. A phasor may also be called a complex amplitude and, in older texts, a sinor or even complexor.
The origin of the term phasor rightly suggests that a (diagrammatic) calculus somewhat similar to that possible for vectors is possible for phasors as well. An important additional feature of the phasor transform is that differentiation and integration of sinusoidal signals (having constant amplitude, period and phase) corresponds to simple algebraic operations on the phasors; the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain. The originator of the phasor transform was Charles Proteus Steinmetz, working at General Electric in the late 19th century.
Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the Laplace transform, which additionally can be used to (simultaneously) derive the transient response of an RLC circuit. However, the Laplace transform is mathematically more difficult to apply and the effort may be unjustified if only steady state analysis is required.
A sinusoid $A\cos(\omega t + \theta)$ can be written as the real part of a complex-valued function:

$$A\cos(\omega t + \theta) = \operatorname{Re}\{A e^{i(\omega t + \theta)}\} = \operatorname{Re}\{A e^{i\theta} \cdot e^{i\omega t}\}$$

The term phasor can refer to either $A e^{i\theta} \cdot e^{i\omega t}$ or just the complex constant $A e^{i\theta}$. In the latter case, it is understood to be a shorthand notation, encoding the amplitude and phase of an underlying sinusoid.
Multiplication by a constant (scalar)
Multiplication of the phasor $A e^{i\theta} \cdot e^{i\omega t}$ by a complex constant, $B e^{i\phi}$, produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid:

$$\operatorname{Re}\{(A e^{i\theta} \cdot B e^{i\phi}) \cdot e^{i\omega t}\} = A B \cos(\omega t + \theta + \phi)$$

In electronics, $B e^{i\phi}$ would represent an impedance, which is independent of time. In particular it is not the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sinusoids, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid.
Differentiation and integration
The time derivative or integral of a phasor produces another phasor.[b] For example:

$$\operatorname{Re}\left\{\frac{d}{dt}\left(A e^{i\theta} \cdot e^{i\omega t}\right)\right\} = \operatorname{Re}\left\{i\omega \cdot A e^{i\theta} \cdot e^{i\omega t}\right\} = \omega A \cos\left(\omega t + \theta + \tfrac{\pi}{2}\right)$$

Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant $i\omega = \omega e^{i\pi/2}$. Similarly, integrating a phasor corresponds to multiplication by $\frac{1}{i\omega} = \frac{e^{-i\pi/2}}{\omega}$. The time-dependent factor, $e^{i\omega t}$, is unaffected.
When we solve a linear differential equation with phasor arithmetic, we are merely factoring $e^{i\omega t}$ out of all terms of the equation, and reinserting it into the answer. For example, consider the following differential equation for the voltage across the capacitor in an RC circuit:

$$\frac{d\, v_C(t)}{dt} + \frac{1}{RC} v_C(t) = \frac{1}{RC} v_S(t)$$

When the voltage source in this circuit is sinusoidal:

$$v_S(t) = V_P \cos(\omega t + \theta)$$

we may substitute $v_S(t) = \operatorname{Re}\{V_s \cdot e^{i\omega t}\}$ and $v_C(t) = \operatorname{Re}\{V_c \cdot e^{i\omega t}\}$, where phasor $V_s = V_P e^{i\theta}$ and phasor $V_c$ is the unknown quantity to be determined.

In the phasor shorthand notation, the differential equation reduces to[c]:

$$i\omega V_c + \frac{1}{RC} V_c = \frac{1}{RC} V_s$$

Solving for the phasor capacitor voltage gives:

$$V_c = \frac{1}{1 + i\omega RC} \cdot V_s = \frac{1}{1 + i\omega RC} \cdot V_P e^{i\theta}$$

As we have seen, the factor multiplying $V_s$ represents differences of the amplitude and phase of $v_C(t)$ relative to $V_P$ and $\theta$. In polar coordinate form, it is:

$$\frac{1}{\sqrt{1 + (\omega RC)^2}} \cdot e^{-i\phi(\omega)}, \quad \text{where } \phi(\omega) = \arctan(\omega RC)$$

Therefore:

$$v_C(t) = \frac{1}{\sqrt{1 + (\omega RC)^2}} \cdot V_P \cos(\omega t + \theta - \phi(\omega))$$
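As a sanity check, here is a minimal numerical sketch in Python with NumPy (the component values and source parameters are arbitrary illustrations, not taken from the text) confirming that the phasor solution reproduces the polar-form amplitude and phase derived above:

```python
import numpy as np

# Circuit and source parameters (arbitrary example values)
R, C = 1e3, 1e-6            # 1 kOhm, 1 uF
omega = 2 * np.pi * 1e3     # 1 kHz angular frequency
V_P, theta = 5.0, 0.3       # source amplitude (V) and phase (rad)

# Phasor solution: V_c = V_s / (1 + i*omega*R*C)
V_s = V_P * np.exp(1j * theta)
V_c = V_s / (1 + 1j * omega * R * C)

# Compare against the closed-form polar expressions
assert np.isclose(abs(V_c), V_P / np.sqrt(1 + (omega * R * C) ** 2))
assert np.isclose(np.angle(V_c), theta - np.arctan(omega * R * C))

# Reinsert the time dependence: v_C(t) = Re{V_c * e^(i*omega*t)}
t = np.linspace(0, 2e-3, 5)
print(np.real(V_c * np.exp(1j * omega * t)))
```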
The sum of multiple phasors produces another phasor. That is because the sum of sinusoids with the same frequency is also a sinusoid with that frequency:

$$A_1\cos(\omega t + \theta_1) + A_2\cos(\omega t + \theta_2) = A_3\cos(\omega t + \theta_3)$$

where

$$A_3^2 = (A_1\cos\theta_1 + A_2\cos\theta_2)^2 + (A_1\sin\theta_1 + A_2\sin\theta_2)^2, \qquad \theta_3 = \arctan\left(\frac{A_1\sin\theta_1 + A_2\sin\theta_2}{A_1\cos\theta_1 + A_2\cos\theta_2}\right)$$

A key point is that A3 and θ3 do not depend on ω or t, which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In angle notation, the operation shown above is written:

$$A_1\angle\theta_1 + A_2\angle\theta_2 = A_3\angle\theta_3$$
Another way to view addition is that two vectors with coordinates $[A_1\cos(\omega t + \theta_1), A_1\sin(\omega t + \theta_1)]$ and $[A_2\cos(\omega t + \theta_2), A_2\sin(\omega t + \theta_2)]$ are added vectorially to produce a resultant vector with coordinates $[A_3\cos(\omega t + \theta_3), A_3\sin(\omega t + \theta_3)]$.
In physics, this sort of addition occurs when sinusoids interfere with each other, constructively or destructively. The static vector concept provides useful insight into questions like this: "What phase difference would be required between three identical sinusoids for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape which satisfies these conditions is an equilateral triangle, so the angle between each phasor to the next is 120° (2π/3 radians), or one third of a wavelength λ/3. So the phase difference between each wave must also be 120°, as is the case in three-phase power.
In other words, what this shows is:

$$\cos(\omega t) + \cos\left(\omega t + \tfrac{2\pi}{3}\right) + \cos\left(\omega t - \tfrac{2\pi}{3}\right) = 0$$

In the example of three waves, the phase difference between the first and the last wave was 240 degrees, while for two waves destructive interference happens at 180 degrees. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength $\lambda$. This is why in single slit diffraction, the minima occur when light from the far edge travels a full wavelength further than the light from the near edge.
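The equilateral-triangle argument is easy to verify numerically. A minimal Python check (the 50 Hz frequency is an arbitrary choice) that three unit phasors spaced 120° apart sum to zero, and that the corresponding sinusoids cancel at every instant:

```python
import numpy as np

# Three unit phasors at 0, 120, and 240 degrees, placed head to tail
phasors = [np.exp(1j * k * 2 * np.pi / 3) for k in range(3)]
print(abs(sum(phasors)))  # ~0 (up to floating-point error)

# The corresponding sinusoids cancel for every t
omega = 2 * np.pi * 50
t = np.linspace(0, 0.04, 1000)
waves = sum(np.cos(omega * t + k * 2 * np.pi / 3) for k in range(3))
print(np.max(np.abs(waves)))  # ~0
```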
Electrical engineers, electronics engineers, electronic engineering technicians and aircraft engineers all use phasor diagrams to visualize complex constants and variables (phasors). Like vectors, arrows drawn on graph paper or computer displays represent phasors. Cartesian and polar representations each have advantages, with the Cartesian coordinates showing the real and imaginary parts of the phasor and the polar coordinates showing its magnitude and phase.
With phasors, the techniques for solving DC circuits can be applied to solve AC circuits. A list of the basic laws is given below.
- Ohm's law for resistors: a resistor has no time delays and therefore doesn't change the phase of a signal, so V = IR remains valid.
- Ohm's law for resistors, inductors, and capacitors: V = IZ where Z is the complex impedance.
- In an AC circuit we have real power (P), which is a representation of the average power into the circuit, and reactive power (Q), which indicates power flowing back and forth. We can also define the complex power S = P + jQ, and the apparent power, which is the magnitude of S. The power law for an AC circuit expressed in phasors is then S = VI* (where I* is the complex conjugate of I); a short numeric sketch follows this list.
- Kirchhoff's circuit laws work with phasors in complex form
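As an illustration of the power law above, here is a brief Python sketch (the voltage and current values are hypothetical, and RMS phasor magnitudes are assumed, so no factor of one-half appears):

```python
import numpy as np

# Hypothetical RMS phasors: 230 V at 0 deg, 10 A lagging by 30 deg
V = 230 * np.exp(1j * 0.0)
I = 10 * np.exp(-1j * np.deg2rad(30))

S = V * np.conj(I)     # complex power S = V I*
print(S.real)          # real power P, ~1991.9 W
print(S.imag)          # reactive power Q, 1150.0 VAR
print(abs(S))          # apparent power |S|, 2300.0 VA
```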
Given this we can apply the techniques of analysis of resistive circuits with phasors to analyze single frequency AC circuits containing resistors, capacitors, and inductors. Multiple frequency linear AC circuits and AC circuits with different waveforms can be analyzed to find voltages and currents by transforming all waveforms to sine wave components with magnitude and phase then analyzing each frequency separately, as allowed by the superposition theorem.
In analysis of three phase AC power systems, usually a set of phasors is defined as the three complex cube roots of unity, graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees. By treating polyphase AC circuit quantities as phasors, balanced circuits can be simplified and unbalanced circuits can be treated as an algebraic combination of symmetrical circuits. This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents. In the context of power systems analysis, the phase angle is often given in degrees, and the magnitude in rms value rather than the peak amplitude of the sinusoid.
The technique of synchrophasors uses digital instruments to measure the phasors representing transmission system voltages at widespread points in a transmission network. Small changes in the phasors are sensitive indicators of power flow and system stability.
- In-phase and quadrature components
- Analytic signal
- Phase factor, a phasor of unit magnitude
- This results from $\frac{d}{dt} e^{i\omega t} = i\omega e^{i\omega t}$, which means that the complex exponential is the eigenfunction of the derivative operation.
- The substitution is justified as follows. The original equation states that $\frac{d}{dt}\operatorname{Re}\{V_c e^{i\omega t}\} + \frac{1}{RC}\operatorname{Re}\{V_c e^{i\omega t}\} = \frac{1}{RC}\operatorname{Re}\{V_s e^{i\omega t}\}$. Since this must hold for all $t$, specifically $t - \frac{\pi}{2\omega}$, it follows that the same equation holds with $\operatorname{Re}$ replaced by $\operatorname{Im}$. It is also readily seen that differentiation commutes with taking real parts: $\frac{d}{dt}\operatorname{Re}\{V_c e^{i\omega t}\} = \operatorname{Re}\{\frac{d}{dt}(V_c e^{i\omega t})\}$. Multiplying the imaginary-part equation by $i$ and adding it to the real-part equation yields $\frac{d}{dt}(V_c e^{i\omega t}) + \frac{1}{RC} V_c e^{i\omega t} = \frac{1}{RC} V_s e^{i\omega t}$; dividing through by $e^{i\omega t}$ gives the phasor equation.
- Huw Fox; William Bolton (2002). Mathematics for Engineers and Technologists. Butterworth-Heinemann. p. 30. ISBN 978-0-08-051119-1.
- Clay Rawlins (2000). Basic AC Circuits (2nd ed.). Newnes. p. 124. ISBN 978-0-08-049398-5.
- Bracewell, Ron. The Fourier Transform and Its Applications. McGraw-Hill, 1965. p269
- K. S. Suresh Kumar (2008). Electric Circuits and Networks. Pearson Education India. p. 272. ISBN 978-81-317-1390-7.
- Kequian Zhang; Dejie Li (2007). Electromagnetic Theory for Microwaves and Optoelectronics (2nd ed.). Springer Science & Business Media. p. 13. ISBN 978-3-540-74296-8.
- J. Hindmarsh (1984). Electrical Machines & their Applications (4th ed.). Elsevier. p. 58. ISBN 978-1-4832-9492-6.
- William J. Eccles (2011). Pragmatic Electrical Engineering: Fundamentals. Morgan & Claypool Publishers. p. 51. ISBN 978-1-60845-668-0.
- Richard C. Dorf; James A. Svoboda (2010). Introduction to Electric Circuits (8th ed.). John Wiley & Sons. p. 661. ISBN 978-0-470-52157-1.
- Allan H. Robbins; Wilhelm Miller (2012). Circuit Analysis: Theory and Practice (5th ed.). Cengage Learning. p. 536. ISBN 1-285-40192-1.
- Won Y. Yang; Seung C. Lee (2008). Circuit Systems with MATLAB and PSpice. John Wiley & Sons. pp. 256–261. ISBN 978-0-470-82240-1.
- Douglas C. Giancoli (1989). Physics for Scientists and Engineers. Prentice Hall. ISBN 0-13-666322-2.
- Dorf, Richard C.; Tallarida, Ronald J. (1993-07-15). Pocket Book of Electrical Engineering Formulas (1 ed.). Boca Raton,FL: CRC Press. pp. 152–155. ISBN 0849344735.
|
New Map Shows Where Zika Mosquitoes Live in the US
As summer heats up, new maps from the Centers for Disease Control and Prevention (CDC) give us our best guess at where Zika-carrying mosquitoes could be hanging out this year in the US.
The Zika virus is a potentially serious virus carried by mosquitoes. As of June 2017, the total confirmed case count for all states and the District of Columbia was 1,883. Over the past two years, Zika has spread through South America and into the US, and so has our knowledge of the infection, its dangers, and how people can protect themselves, their partners, and unborn children.
Zika is spread through the bite of an infected mosquito. Although there are multiple species of mosquitoes in the US that can theoretically spread the disease, the primary mosquitoes of concern are Aedes aegypti and Aedes albopictus.
Identified in 1947 in the Zika Forest in Uganda, the Zika virus, transmitted mostly by A. aegypti, had been circulating in equatorial Asia and Africa before being identified as the cause of flu-like outbreaks in the Pacific Islands. Moving on to South America, the virus had a virulent emergence in Brazil, with more than one million Brazilians infected. The virus targets the brain of unborn children, causing microcephaly and other congenital disabilities.
A. aegypti is one nasty, aggressive character. The mosquito has long been a carrier of deadly Yellow Fever, Dengue fever, and Chikungunya. Like other mosquitoes, it is the females that are human bloodsuckers, and A. aegypti is particularly fond of human habitat. It isn't particular about where it lays its eggs, and it is more likely to hunt humans all day long, rather than merely at dawn and dusk.
In addition to the habits of this mosquito, the flavivirus that causes Zika replicates quickly in A. aegypti, making it more likely that an infected mosquito can transmit the pathogen.
Unfortunately for humans, A. aegypti are not only aggressive, and partial to human blood, but a female can lay about five batches of around 100 eggs each, if she is well fed during her two-week life span.
In addition to A. aegypti, the mosquito A. albopictus can also transmit Zika. Together, the two mosquitoes cover a range in the US considerably larger than many will be comfortable with. In a report from the CDC published in the Journal of Medical Entomology, officials confirm the presence of A. aegypti in 220 counties in 28 states, plus the District of Columbia. Researchers found the secondary vector, A. albopictus in 1,368 counties in 40 states and DC.
The survey, conducted at the end of 2016, updates surveillance from early in the same year. The more recent survey saw a 21% increase (38 new counties) in counties reporting the presence of A. aegypti and a 10% increase (127 new counties) for A. albopictus.
While this survey, which analyzed historical data between 1995 and late 2016, showed recent significant jumps in the presence of these species, study authors felt those numbers could be partially due to increased attention due to the Zika virus emergence in Florida and Brazil. It is possible either mosquito could be living in other areas of the US, but has not yet come to the attention of health authorities.
The study does not illustrate the current danger of Zika in a particular area but offers information on where the mosquitoes are already present, areas in which we should worry about Zika's potential. The research also highlights surveillance gaps where the mosquitoes are likely to be present but remain unreported.
In a press release, Rebecca Eisen, a research biologist with the CDC, remarked: "This information will help to target limited public health surveillance resources and help to improve our understanding of how widespread these mosquitoes are."
The study authors note the presence of the mosquito species creates the possibility that the dengue and chikungunya viruses could be transmitted in the US if those viruses emerged here. These maps closely track the current estimated US range maps of A. aegypti and A. albopictus, which graphically illustrate the presence of these mosquito species in the US.
Again, these maps do not indicate that Zika is spreading wherever these mosquitoes are present. Southern Florida and Brownsville, Texas are the only two locations where local transmission of Zika has occurred. Some biologists believe the range of A. aegypti will not advance quickly, but that A. albopictus is already endemic as far north as New Jersey, and possibly Connecticut.
A relatively large percentage of the population of the US lives in an area where one, or both, of these mosquito species reside. While it is frightening to consider Zika spreading further into the US, awareness of where these mosquitoes, or "disease vectors" are endemic helps officials and agencies understand where to deepen surveillance efforts.
For the rest of us, this information is a heads-up to be sure you use the right mosquito repellent, drain standing water on your property, and be mosquito-aware even during daylight hours in these regions of the US.
Yes, the mosquitoes may be coming — but we will be ready for them.
|
Concrete is one of the world's most versatile building materials. It's used to build everything from sidewalks to buildings to dams, and it holds up incredibly well under stress and pressure. But to build the world's strongest concrete scientists are going to have to take a lesson from ancient Rome.
Ancient Roman concrete is the strongest in the world, so durable it's survived 2000 years of environmental damage. Many Roman concrete structures are standing just as tall as when they were built, and it's because of a unique concrete formula that scientists are still trying to figure out.
But they have a clue, and a group of scientists are traveling to the island of Surtsey in Iceland to drill a bunch of holes and hopefully find the answer.
While the specific recipe for Roman concrete has been lost to history, we know the recipe involved volcanic ash and seawater. So to find out more, scientists are traveling to Surtsey, an entire island made out of volcanic ash and seawater. Surtsey was formed by an underwater volcanic eruption in 1963, and it could provide the missing ingredients for making Roman concrete.
The scientists will drill two holes, one parallel to a hole drilled back in 1979 and another at an angle. The first hole will enable scientists to study microbial life inside the volcanic rock, while the second hole will allow them to study how the rocks and minerals formed. It's this second hole that could provide clues to Roman concrete.
Surtsey is unique because when it formed the mixture of heat, volcanic material, and seawater created hydrothermal minerals that strengthened the rock. This makes Surtsey much more durable than other volcanic islands, and if we could understand this unique durability and strength we just might be able to reproduce it.
|
A Brief History of Brass Bands
The history of brass bands dates back as far as the early 19th Century, when a number of technological and social changes produced a new kind of ensemble which proved to be popular with performers and audiences alike.
Early Brass Instruments were Limited to a Few Notes
Before roughly 1830, brass instruments were not commonly played outside of orchestras, and the existing horns, trumpets, and cornets had a huge limitation: without a viable system of keys or valves they were limited to playing only the harmonic series of notes dictated by the length of their tubing. Trombones, of course, got around this problem by adjusting the length of the tubing with their slide, but woodwind instruments such as flutes and clarinets, which could play every note in their range through the use of keys along their length, were used far more commonly in the bands of the day.
Valves Removed this Limitation
The introduction in the 1830s of instruments with valves, which instantaneously diverted the airflow through different lengths of tubing to change the pitch, removed this limitation. In particular, the horns produced by Adolphe Sax (known collectively as saxhorns) proved to be popular with bands. They were made of a durable material (metal), and they could be industrially mass-produced from patterns at a time when string and woodwind instruments were still produced by traditional craft methods. With the use of only three valves they could produce every pitch in their range and be reasonably in tune, which made them relatively easy to learn. Every instrument, from the cornets on top to the bombardons or basses on the bottom, used the same valve fingerings, making it possible for players to switch instruments or parts if required. Finally, all of the saxhorns were "conical" in shape, gradually increasing the inside bore size along the length of the horn from the mouthpiece to the bell, giving them a warm sound that blended well into a larger ensemble sound.
Brass Instruments Became Preferred Choice
All of these reasons made brass instruments the preferred choice among the many new bands that were cropping up in the mid-19th Century in Europe (especially Britain) and the United States. This was a period marked by major population shifts to new cities and towns as factories and mills opened, and music grew as a major interest of the new working and middle classes. In this time before recordings, the main exposure most people had to the major musical works of the day was through performances of transcriptions by local or touring bands. With a new large audience, and an increasing number of interested players, bands became a major feature of the mid- to late-1800s. Every town or community worth its salt had a band, sometimes with only eight or a dozen players, but also sometimes much larger.
Bands Found Community Funding
Bands were also frequently founded or supported by private companies or wealthy capitalists. Part philanthropy (industrialists were often interested in improving the living conditions and leisure activities of their workers; not coincidentally, this same period also saw the development of parks, athletic fields, club sports, and public libraries) and part advertisement, some of these bands have even managed to outlive the companies that founded them. For example, the famous Black Dyke Mills Band was named for the local textile mill whose owner bought a set of horns for the band in 1855. The band remains, but the mill has long since disappeared.
Contests Influence Brass Band Development
Contests have been another major influence on the development of brass bands. From very early on in their history, brass bands have been going to competitions, often for cash prizes but sometimes just for bragging rights. Contests provided an incentive for bands to constantly stretch and expand their repertoire and technique in pursuit of excellence, either through their own arrangements or through required test pieces that were specially commissioned by the contest organizers. Also, within a few decades of the first bands and contest, the need to have competing groups all play with similar numbers and types of horns led to the standard instrumentation of roughly 25 to 30 musicians that persists to this day. Contests today come in many forms; often all bands are required to play a required test piece, or the contest may be more free-form and only require certain elements in each band's performance (such as a march or a slow hymn). "Entertainment" contests are the least structured in their formal elements but, with points given for entertainment value as well as musical technique, require each band to look at their performance through the eyes of an audience.
Brass Bands Lost Popularity to Concert Bands in the U.S.
Interestingly, while brass bands continued to be popular in much of the rest of the world, in the United States they gradually disappeared early in the 20th Century, replaced by the current familiar concert band which includes a large number of woodwinds. Perhaps due to the popularity it achieved in jazz, or maybe because conductors wanted a more "orchestral" sound, the trumpet replaced the cornet as the highest brass voice, and French horns took the place of the similarly pitched tenor horns.
Brass Bands find Renewed Popularity with over 100 in the U.S.
In the past few decades, however, brass bands have begun to rebound here in America. Popular with brass musicians because they get to play more challenging and extensive parts, they are also popular with audiences who appreciate the unique and dynamic music that brass bands make. There are over 100 active brass bands in the U.S. today, including many in the larger British tradition and others in the smaller style common during the Civil War era. That brass bands persist to this day is a testament to their continued vibrancy and vitality as skilled and entertaining ensembles.
|
Hydrogen can be found everywhere on Earth; along with oxygen it forms water. But getting the hydrogen out of water and distributing it for use as fuel has remained a challenge. Hydrogen combusts easily, and it is difficult to transport because you need a lot of it to get the energy equivalent of other fuels. Because you need a lot, we compress it; to turn it into a liquid like gasoline, we have had to cool it and then store it in pressurized tanks.
According to research by a team led by Jonathan Hull, a chemist at Brookhaven National Laboratory in New York, combining hydrogen with a water-soluble molecule containing iridium and baking soda converted the gas to a form storable in a liquid state at low pressure and ambient temperature. To reverse the process, the researchers added an acid to the solution, releasing the stored hydrogen.
The process makes it possible to transport hydrogen easily without the need for high pressure and low temperature storage. For hydrogen fuel cell manufacturers this represents an enormous breakthrough.
The research at Brookhaven was done in collaboration with Yuichiro Himeda of the National Institute of Advanced Industrial Science and Technology in Japan.
|
Messenger RNA (mRNA) is a single stranded molecule that is used as the template for protein translation. It is possible for RNA to form duplexes, similar to DNA, with a second sequence of RNA complementary to the first strand. This second sequence is called antisense RNA (Figure 1). The formation of double stranded RNA can inhibit gene expression in many different organisms including plants, flies, worms and fungi.
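Since "complementary" does the key work here, a tiny Python sketch may help make it concrete (the example sequence is made up): the antisense strand is the reverse complement of the sense strand, with A pairing to U and G pairing to C.

```python
# RNA base pairing: A-U and G-C
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(rna: str) -> str:
    """Return the antisense strand of a 5'->3' sense RNA sequence."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(rna))

print(antisense("AUGGCUUCA"))  # UGAAGCCAU
```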
The first discovery of this inhibition in plants was more than a decade ago and occurred in petunias. Researchers were trying to deepen the purple colour of the flowers by injecting the gene responsible into the petunias but were surprised at the result. Instead of a darker flower, the petunias were either variegated (Figure 2) or completely white!
This phenomenon was termed co-suppression, since both the expression of the existing gene (the initial purple colour), and the introduced gene (to deepen the purple) were suppressed. Co-suppression has since been found in many other plant species and also in fungi. It is now known that double stranded RNA is responsible for this effect.
aRNA and RNAi
When antisense RNA (aRNA) is introduced into a cell, it binds to the already present sense RNA to inhibit gene expression. So what would happen if sense RNA is prepared and introduced into the cell? Since two strands of sense RNA do not bind to each other, it is logical to think that nothing would happen with additional sense RNA, but in fact, the opposite happens! The new sense RNA suppresses gene expression, similar to aRNA. While this may seem like a contradiction, it can be easily resolved by further examination. The cause is rooted in the prepared sense RNA. It turns out that preparations of sense RNA actually contain contaminating strands of antisense RNA. The sense and antisense strands bind to each other, forming a helix. This double helix is the actual suppressor of its corresponding gene. The suppression of a gene by its corresponding double stranded RNA is called RNA interference (RNAi), or post-transcriptional gene silencing (PTGS). The gene suppression by aRNA is likely also due to the formation of an RNA double helix, in this case formed by the sense RNA of the cell and the introduced antisense RNA.
How Does it Work?
But how does the double stranded RNA cause gene suppression? Since the only RNA found in a cell should be single stranded, the presence of double stranded RNA signals is an abnormality. The cell has a specific enzyme (in Drosophila it is called Dicer) that recognizes the double stranded RNA and chops it up into small fragments between 21-25 base pairs in length. These short RNA fragments (called small interfering RNA, or siRNA) bind to the RNA-induced silencing complex (RISC). The RISC is activated when the siRNA unwinds and the activated complex binds to the corresponding mRNA using the antisense RNA. The RISC contains an enzyme to cleave the bound mRNA (called Slicer in Drosophila) and therefore cause gene suppression. Once the mRNA has been cleaved, it can no longer be translated into functional protein (Figure 3 and see a Flash animation of PTGS here).
The suppression of protein synthesis by introducing antisense RNA into a cell is very useful. A gene encoding the antisense RNA can be introduced fairly easily into organisms by using a plasmid vector or using a gene gun that shoots microscopic tungsten pellets coated with the gene into the plant cells. Once the antisense RNA is introduced, it will specifically inhibit the synthesis of the target protein by binding to mRNA. This is a quick way to create a knockout organism to study gene function. Using antisense RNA as a tool in this way is an exciting prospect for many molecular biologists.
Antisense RNA is also being investigated for use in cancer therapy. Injecting aRNA that is complementary to the proto-oncogene BCL-2 may be useful for treating some B-cell lymphomas and leukemias. Antisense oligodeoxynucleotides (ODNs) are also being studied for human therapy. ODNs are similar to antisense RNA, but they are made synthetically and are deoxynucleotides (like those in DNA) rather than nucleotides. ODNs are being tested for their effectiveness against HIV-1, cytomegalovirus (a member of the herpesvirus group), asthma and certain cancers.
Antisense RNA methods have also been used for commercial food production. You may have heard of the Flavr Savr tomato. This tomato was developed by Calgene Inc. of Davis, California in 1991 and was approved by the U.S. FDA in 1994. The tomato was the first whole food created by biotechnology that was evaluated by the FDA. One of the problems associated with tomato farming is that the fruit must be picked while still green in order to be shipped to market without being crushed. The enzyme that causes softening in tomatoes is polygalacturonase (PG). This enzyme breaks down pectin as the tomato ripens, leading to a softer fruit. Calgene suppressed the expression of the gene encoding PG by introducing a gene encoding the antisense strand of the mRNA. When the introduced gene was expressed, the antisense strand bound to the PG mRNA, suppressing the translation of the enzyme. The Flavr Savr tomatoes therefore had low PG levels and remained firmer when ripe. This meant the Flavr Savr tomatoes can ripen on the vine and then be shipped to market. Although the Flavr Savr tomatoes were approved for sale in the U.S., production problems and consumer wariness stopped the production of this fruit in 1997.
RNA interference is a field that was stumbled upon by accident while trying to improve the colour of petunias, however its implications may be far reaching in the near future.
1. Kimball’s Biology Pages — Antisense RNA
2. Ambion — The RNA interference resource.
(Art by Fan Sozzi)
|
More than just a tool for predicting health, modern genetics is upending long-held assumptions about who we are. A new study by Harvard researchers casts new light on the intermingling and migration of European, Middle Eastern and African populations since ancient times.
In a paper titled "The History of African Gene Flow into Southern Europeans, Levantines and Jews," published in PLoS Genetics, HMS Associate Professor of Genetics David Reich and his colleagues investigated the proportion of sub-Saharan African ancestry present in various populations in West Eurasia, defined as the geographic area spanning modern Europe and the Middle East. While previous studies have established that such shared ancestry exists, they have not indicated to what degree or how far back the mixing of populations can be traced.
Doctoral student Priya Moorjani and Alkes Price, an assistant professor in the Program in Molecular and Genetic Epidemiology within the Department of Epidemiology at the Harvard School of Public Health, analyzed publicly available genetic data from 40 populations comprising North Africans, Middle Easterners and Central Asians.
Moorjani traced genetic ancestry using a method called rolloff. This platform, developed in the Reich lab, compares the size and composition of stretches of DNA between two human populations as a means of estimating when they mixed. The smaller and more broken up the DNA segments, the older the date of mixture.
Moorjani used the technique to examine the genomes of modern West Eurasian populations to find signatures of Sub-Saharan African ancestry. She did this by looking for chromosomal segments in West Eurasian DNA that closely matched those of Sub-Saharan Africans. By plotting the distribution of these segments and estimating their rate of genetic decay, Reich's lab was able to determine the proportion of African genetic ancestry still present, and to infer approximately when the West Eurasian and Sub-Saharan African populations mixed.
"The genetic decay happens very slowly," Moorjani explained, "so today, thousands of years later, there is enough evidence for us to estimate the date of population mixture."
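Rolloff itself involves substantial statistical machinery, but the dating idea described here can be sketched: admixture-generated correlation between segments decays roughly exponentially with genetic distance, at a rate set by the number of generations since mixing. The Python sketch below is purely illustrative, fitting synthetic data rather than reproducing the actual rolloff pipeline; the 29-years-per-generation conversion is an assumption consistent with the dates quoted in the next paragraph.

```python
import numpy as np
from scipy.optimize import curve_fit

# Admixture signal decays roughly as exp(-n * d), where d is genetic
# distance in Morgans and n is generations since the mixture event.
def decay(d, amplitude, n):
    return amplitude * np.exp(-n * d)

# Synthetic "observed" correlations for a mixture ~55 generations ago
rng = np.random.default_rng(0)
d = np.linspace(0.001, 0.1, 100)          # 0.1 to 10 cM, in Morgans
observed = decay(d, 1.0, 55) + rng.normal(0, 0.01, d.size)

(amp, n_gen), _ = curve_fit(decay, d, observed, p0=(1.0, 30.0))
print(f"~{n_gen:.0f} generations, ~{n_gen * 29:.0f} years ago")
```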
While the researchers detected no African genetic signatures in Northern European populations, they found a distinct presence of African ancestry in Southern European, Middle Eastern and Jewish populations. Modern southern European groups can attribute about 1 to 3 percent of their genetic signature to African ancestry, with the intermingling of populations dating back 55 generations, on average--that is, to roughly 1,600 years ago. Middle Eastern groups have inherited about 4 to 15 percent, with the mixing of populations dating back roughly 32 generations. A diverse array of Jewish populations can date their Sub-Saharan African ancestry back roughly 72 generations, on average, accounting for 3 to 5 percent of their genetic makeup today.
According to Reich, these findings address a long-standing debate over African multicultural influences in Europe. The dates of population mixtures are consistent with documented historical events. For example, the mixing of African and southern European populations coincides with events during the Roman Empire and Arab migrations that followed. The older-mixture dates among African and Jewish populations are consistent with events in biblical times, such as the Jewish diaspora that occurred in 8th to 6th century BC.
"Our study doesn't prove that the African ancestry is associated with migrations associated with events in the Bible documented by archeologists," Reich says, "but it's interesting to speculate."
Reich was surprised to see any level of shared ancestry between the Ashkenazi and non-Ashkenazi Jewish groups. "I've never been convinced they were actually related to each other," Reich says, but he now concludes that his lab's findings have significant cultural and genetic implications. "Population boundaries that many people think are impermeable are, in fact, not that way."
|
In this post we’ll cover the second of the “basic four” methods of proof: the contrapositive implication. We will build off our material from last time and start by defining functions on sets.
Functions as Sets
So far we have become comfortable with the definition of a set, but the most common way to use sets is to construct functions between them. As programmers we readily understand the nature of a function, but how can we define one mathematically? It turns out we can do it in terms of sets, but let us recall the desired properties of a function:
- Every input must have an output.
- Every input can only correspond to one output (the functions must be deterministic).
One might try at first to define a function in terms of subsets of size two. That is, if $X, Y$ are sets then a function $f: X \to Y$ would be completely specified by

$$\{ \{ x, y \} : x \in X, y \in Y \}$$

where to enforce those two bullets, we must impose the condition that every $x \in X$ occurs in one and only one of those subsets. Notationally, we would say that $y = f(x)$ means $\{x, y\}$ is a member of the function. Unfortunately, this definition fails miserably when $X = Y$, because we have no way to distinguish the input from the output.
To compensate for this, we introduce a new type of object called a tuple. A tuple is just an ordered list of elements, which we write using round brackets, e.g. $(a, b, c)$.
As a quick aside, one can define ordered tuples in terms of sets. We will leave the reader to puzzle why this works, and generalize the example provided:

$$(a, b) = \{ \{ a \}, \{ a, b \} \}$$
And so a function $f: X \to Y$ is defined to be a list of ordered pairs where the first thing in the pair is an input and the second is an output:

$$f = \{ (x, y) : x \in X, y \in Y \}$$

Subject to the same conditions, that each value from $X$ must occur in one and only one pair. And again by way of notation we say $y = f(x)$ if the pair $(x, y)$ is a member of $f$ as a set. Note that the concept of a function having "input and output" is just an interpretation. A function can be viewed independent of any computational ideas as just a set of pairs. Often enough we might not even know how to compute a function (or it might be provably uncomputable!), but we can still work with it abstractly.
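To make the set-of-pairs definition concrete, here is a small Python sketch (finite sets only; the names and sets are invented for illustration) that checks the two bullet conditions:

```python
def is_function(pairs: set, X: set, Y: set) -> bool:
    """Check that a set of (input, output) pairs defines a function X -> Y."""
    inputs = [x for (x, y) in pairs]
    return (
        all(x in X and y in Y for (x, y) in pairs)
        and set(inputs) == X                  # every input has an output
        and len(inputs) == len(set(inputs))   # no input has two outputs
    )

X, Y = {1, 2, 3}, {"a", "b"}
f = {(1, "a"), (2, "a"), (3, "b")}
g = {(1, "a"), (1, "b"), (2, "a")}  # 1 has two outputs; 3 has none

print(is_function(f, X, Y))  # True
print(is_function(g, X, Y))  # False
```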
It is also common to call functions “maps,” and to define “map” to mean a special kind of function (that is, with extra conditions) depending on the mathematical field one is working in. Even in other places on this blog, “map” might stand for a continuous function, or a homomorphism. Don’t worry if you don’t know these terms off hand; they are just special cases of functions as we’ve defined them here. For the purposes of this series on methods of proof, “function” and “map” and “mapping” mean the same thing: regular old functions on sets.
One of the most important and natural properties of a function is that of injectivity.
Definition: A function $f: X \to Y$ is an injection if whenever $a \neq a'$ are distinct members of $X$, then $f(a) \neq f(a')$. The adjectival version of the word injection is injective.
As a quick side note, it is often the convention for mathematicians to use a capital letter to denote a set, and a lower-case letter to denote a generic element of that set. Moreover, the apostrophe on the is called a prime (so is spoken, “a prime”), and it’s meant to denote a variation on the non-prime’d variable in some way. In this case, the variation is that .
So even if we had not explicitly mentioned where the objects $x, x'$ came from, the knowledgeable mathematician (which the reader is obviously becoming) would be reasonably certain that they come from $X$. Similarly, if I were to lackadaisically present $y$ out of nowhere, the reader would infer it must come from $Y$.
One simple and commonly used example of an injection is the so-called inclusion function. If $X \subset Y$ are sets, then there is a canonical function representing this subset relationship, namely the function $\iota: X \to Y$ defined by $\iota(x) = x$. It should be clear that non-equal things get mapped to non-equal things, because the function doesn't actually do anything except change perspective on where the elements are sitting: two nonequal things sitting in $X$ are still nonequal in $Y$.
Another example is that of multiplication by two as a map on natural numbers. More rigorously, define $f: \mathbb{N} \to \mathbb{N}$ by $f(x) = 2x$. It is clear that whenever $x \neq y$ as natural numbers then $2x \neq 2y$. For one, $x, y$ must have differing prime factorizations, and so must $2x, 2y$ because we added the same prime factor of 2 to both numbers. Did you catch the quick proof by direct implication there? It was sneaky, but present.
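On a finite sample of inputs we can also test injectivity by brute force. This is evidence rather than a proof, since it only inspects the sampled inputs, but it mirrors the definition directly. A quick sketch (ours, not the post's):

```python
# Brute-force injectivity check: distinct inputs must give distinct outputs.
def is_injective_on(f, domain):
    outputs = [f(x) for x in domain]
    return len(set(outputs)) == len(outputs)

sample = range(100)
print(is_injective_on(lambda x: 2 * x, sample))         # True: doubling never collides
print(is_injective_on(lambda x: x * (x - 99), sample))  # False: 0 and 99 both map to 0
```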
Now the property of being an injection can be summed up by a very nice picture:
The arrows above represent the pairs $(x, f(x))$, and the fact that no two arrows end in the same place makes this function an injection. Indeed, drawing pictures like this can give us clues about the true nature of a proposed fact. If the fact is false, it's usually easy to draw a picture like this showing so. If it's true, then the pictures will support it and hopefully make the proof obvious. We will see this in action in a bit (and perhaps we should expand upon it later with a post titled, "Methods of Proof — Proof by Picture").
There is another, more subtle concept associated with injectivity, and this is where its name comes from. The word "inject" gives one the mental picture that we're literally placing one set $X$ inside another set $Y$ without changing the nature of $X$. We are simply realizing it as being inside of $Y$, perhaps with different names for its elements. This interpretation becomes much clearer when one investigates sets with additional structure, such as groups, rings, or topological spaces. Here the phrase "injective mapping" much more literally means placing one thing inside another without changing the former's structure in any way except for relabeling.
In any case, mathematicians have the bad (but time-saving) habit of implicitly identifying a set with its image under an injective mapping. That is, if $f: X \to Y$ is an injective function, then one can view $X$ as the same thing as $f(X) = \{ f(x) : x \in X \}$. That is, they have the same elements except that $f$ renames the elements of $X$ as elements of $Y$. The abuse comes in when they start saying $X \subset Y$ even when this is not strictly the case.
Here is an example of this abuse that many programmers commit without perhaps noticing it. Suppose $X$ is the set of all colors that can be displayed on a computer (as an abstract set; the elements are "this particular green," "that particular pinkish mauve"). Now let $Y$ be the set of all finite hexadecimal numbers. Then there is an obvious injective map from $X \to Y$ sending each color to its 6-digit hex representation. The lazy mathematician would say "Well, then, we might as well say $X \subset Y$, for this is the obvious way to view $X$ as a set of hexadecimal numbers." Of course there are other ways (try to think of one, and then try to find an infinite family of them!), but the point is that this is the only way that anyone really uses, and that the other ways are all just "natural relabelings" of this way.
The precise way to formulate this claim is as follows, and it holds for arbitrary sets and arbitrary injective functions. If $f, g: X \to Y$ are two such ways to inject $X$ inside of $Y$, then there is a function $h: Y \to Y$ such that the composition $hf$ is precisely the map $g$. If this is mysterious, we have some methods the reader can use to understand it more fully: give examples for simplified versions (what if there were only three colors?), draw pictures of "generic looking" set maps, and attempt a proof by direct implication.
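Here is a minimal sketch of that claim for the three-color case suggested in the parenthetical. All of the names and hex values below are made up for illustration; the construction simply defines $h$ on the image of $f$ and leaves the rest of $Y$ alone:

```python
# Given two injections f, g: X -> Y, build h: Y -> Y with h(f(x)) = g(x).
X = ["red", "green", "blue"]
Y = ["0xFF0000", "0x00FF00", "0x0000FF", "0x123456"]

f = {"red": "0xFF0000", "green": "0x00FF00", "blue": "0x0000FF"}
g = {"red": "0x00FF00", "green": "0x0000FF", "blue": "0xFF0000"}

f_inverse = {y: x for x, y in f.items()}  # well-defined precisely because f is injective
h = {y: g[f_inverse[y]] if y in f_inverse else y for y in Y}

assert all(h[f[x]] == g[x] for x in X)  # the composition h . f equals g
```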
Proof by Contrapositive
Oftentimes in mathematics we will come across a statement we want to prove that looks like this:
If X does not have property A, then Y does not have property B.
Indeed, we already have: to prove a function is injective we must prove:
If x is not equal to y, then f(x) is not equal to f(y).
A proof by direct implication can be quite difficult because the statement gives us very little to work with. If we assume that $X$ does not have property $A$, then we have nothing to grasp to jump-start our proof. The main (and in this author's opinion, the only) benefit of a proof by contrapositive is that one can turn such a statement into a constructive one. That is, we can write "p implies q" as "not q implies not p" to get the equivalent claim:
If Y has property B then X has property A.
This rewriting is called the “contrapositive form” of the original statement. It’s not only easier to parse, but also probably easier to prove because we have something to grasp at from the beginning.
To the beginning mathematician, it may not be obvious that “if p then q” is equivalent to “if not q then not p” as logical statements. To show that they are requires a small detour into the idea of a “truth table.”
In particular, we have to specify what it means for “if p then q” to be true or false as a whole. There are four possibilities: p can be true or false, and q can be true or false. We can write all of these possibilities in a table.
p    q
T    T
T    F
F    T
F    F
If we were to complete this table for “if p then q,” we’d have to specify exactly which of the four cases correspond to the statement being true. Of course, if the p part is true and the q part is true, then “p implies q” should also be true. We have seen this already in proof by direct implication. Next, if p is true and q is false, then it certainly cannot be the case that truth of p implies the truth of q. So this would be a false statement. Our truth table so far looks like
p    q    p -> q
T    T    T
T    F    F
F    T    ?
F    F    ?
The next question is what to do if the premise p of "if p then q" is false. Should the statement as a whole be true or false? Rather than enter a belated philosophical discussion, we will zealously define an implication to be true if its hypothesis is false. This is a well-accepted idea in mathematics called vacuous truth. And although it seems to make awkward statements true (like "if 2 is odd then 1 = 0"), it is rarely a confounding issue (and more often forms the punchline of a few good math jokes). So we can complete our truth table as follows
p    q    p -> q
T    T    T
T    F    F
F    T    T
F    F    T
Now here’s where contraposition comes into play. If we’re interested in determining when “not q implies not p” is true, we can add these to the truth table as extra columns:
p    q    p -> q    not q    not p    not q -> not p
T    T    T         F        F        T
T    F    F         T        F        F
F    T    T         F        T        T
F    F    T         T        T        T
As we can see, the two columns corresponding to “p implies q” and “not q implies not p” assume precisely the same truth values in all possible scenarios. In other words, the two statements are logically equivalent.
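The equivalence is small enough to verify mechanically. A short Python sketch (ours, not the post's) that enumerates all four rows:

```python
# Enumerate every truth assignment and confirm the two statements agree.
from itertools import product

def implies(a, b):
    return (not a) or b  # vacuously true when the hypothesis is false

for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)

print("p -> q and (not q) -> (not p) agree on all four rows")
```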
And so our proof technique for contrapositive becomes: rewrite the statement in its contrapositive form, and proceed to prove it by direct implication.
Examples and Exercises
Our first example will be completely straightforward and require nothing but algebra. Let's show that the function $f(x) = 7x - 4$ is injective. Contrapositively, we want to prove that if $f(x) = f(x')$ then $x = x'$. Assuming the hypothesis, we start by supposing $7x - 4 = 7x' - 4$. Applying algebra, we get $7x = 7x'$, and dividing by 7 shows that $x = x'$ as desired. So $f$ is injective.
This example is important because if we tried to prove it directly, we might make the mistake of assuming algebra works with $\neq$ the same way it does with equality. In fact, many of the things we take for granted about equality fail with inequality (for instance, if $x \neq y$ and $z \neq w$ it need not be the case that $x + z \neq y + w$). The contrapositive method allows us to use our algebraic skills in a straightforward way.
Next let's prove that the composition of two injective functions is injective. That is, if $f: X \to Y$ and $g: Y \to Z$ are injective functions, then the composition $gf$ defined by $gf(x) = g(f(x))$ is injective.
In particular, we want to prove that if $x \neq x'$ then $g(f(x)) \neq g(f(x'))$. Contrapositively, this is the same as proving that if $g(f(x)) = g(f(x'))$ then $x = x'$. Well by the fact that $g$ is injective, we know that (again contrapositively) whenever $g(y) = g(y')$ then $y = y'$, so it must be that $f(x) = f(x')$. But by the same reasoning $f$ is injective and hence $x = x'$. This proves the statement.
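For concreteness, here is the same fact checked by brute force on small finite sets (the sets and values are made up for illustration):

```python
# Composing two injections and confirming the result is injective.
f = {1: "a", 2: "b", 3: "c"}              # an injective f: X -> Y
g = {"a": 10, "b": 20, "c": 30, "d": 40}  # an injective g: Y -> Z

gf = {x: g[f[x]] for x in f}              # the composition g . f
assert len(set(gf.values())) == len(gf)   # no two inputs share an output

print(gf)  # {1: 10, 2: 20, 3: 30}
```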
That was a nice symbolic proof, but we can see the same fact in a picturesque form as well:
If we maintain that any two arrows in the diagram can't have the same head, then following two paths starting at different points in $X$ will never land us at the same place in $Z$. Since $f$ is injective we have to travel to different places in $Y$, and since $g$ is injective we have to travel to different places in $Z$. Unfortunately, this proof cannot replace the formal one above, but it can help us understand it from a different perspective (which can often make or break a mathematical idea).
Expanding upon this idea we give the reader a challenge: Let $X, Y, Z$ be finite sets of the same size. Prove or disprove that if $f: X \to Y$ and $g: Y \to Z$ are (arbitrary) functions, and if the composition $gf$ is injective, then both of $f, g$ must be injective.
Another exercise which has a nice contrapositive proof: prove that if $X, Y$ are finite sets and $f: X \to Y$ is an injection, then $X$ has at most as many elements as $Y$. This one is particularly susceptible to a "picture proof" like the one above. Although the formal name for the fact one uses to prove this is the pigeonhole principle, it's really just a simple observation.
Aside from inventing similar exercises with numbers (e.g., if $x + y$ is odd then $x$ is odd or $y$ is odd), this is all there is to the contrapositive method. It's just a direct proof disguised behind a fact about truth tables. Of course, as is usual in more advanced mathematical literature, authors will seldom announce the use of contraposition. The reader just has to be watchful enough to notice it.
Though we haven't talked about either the real numbers or proofs of existence and impossibility, we can still pose this interesting question: is there an injective function $f: \mathbb{R} \to \mathbb{N}$? In truth there is not, but as of yet we don't have the proof technique required to show it. This will be our next topic in the series: the proof by contradiction.
|
Paleobotany is the branch of botany dealing with fossil plants. It includes the study and classification of plants of the geological past, as well as the study of their relationship with each other and with extant plants. Paleobotanists seek to establish the distribution of various plants during different geological periods and to understand the principles governing changes in plant cover. Paleobotany and paleozoology are usually joined in the science of paleontology.
Principal approaches and methods. The taxonomy of fossil plants, or taxonomic paleobotany, is based on the same principles as the taxonomy of extant plants. It does, however, have a number of special features. Most plant fossils are disconnected remains of plants whose membership in the same taxon cannot always be proved. The remains, therefore, are often given different specific and generic names. Uniting dissociated remains of plants under the same name is possible only when there is proof of their organic connection.
The tasks of morphological paleobotany consist in elucidating the external and internal structures of plants of the geological past and reconstructing their appearance. Morphological paleobotany provides important data for taxonomic paleobotany, paleofloristics, and paleoecology. The data serve as the foundation for elucidating the history of certain taxons and the evolution of the plant kingdom as a whole. In addition, the information is valuable for constructing a phylogenetic system of the plant world. Paleofloristics, the comparative study of the floras of the geological past, yields valuable information for stratigraphy and paleogeographic reconstruction.
Plant paleoecology is concerned with the conditions under which plants and their communities existed in the geological past. Paleoecological research is based on morphological features of fossil plants, the aggregate of buried plant remains (taphocenosis), and the structure of the burials themselves.
The boundaries between these three principal approaches of paleobotany are to a large extent conventional. A paleobotanist studying the flora of a certain period usually investigates simultaneously the taxonomy, morphology, and ecology of fossil plants.
Paleobotany is divided into several disciplines. Each discipline deals with a specific type of remain (leaf, fruit, seed, wood, spore, pollen grain) or a specific form of preservation (compression fossils, petrifactions, impressions). Compression plant fossils are mummified remains that have been altered somewhat and are sometimes slightly mineralized. Petrifactions are fossils in which the original plant tissue is replaced by mineral matter. Impressions are the imprints of leaves or other plant organs. They are usually accompanied by remains of the plant itself, which are preserved as a compression fossil or a distinctive flat petrifaction.
The name "ichnophytology" has been proposed for the division of paleobotany that studies the impressions and, less frequently, the remains of leaves, stems, and other parts of plants. Paleostomatography is concerned with the epidermis, especially the stomatal apertures, of the leaves and other organs of fossil plants. Paleocarpology deals with fossil fruits, seeds, and megaspores. Paleopalynology is concerned with investigating fossil spores and pollens. Paleoxylology is the study of fossil wood.
Paleomycology and paleoalgology have been separated into independent disciplines. The study of planktonic forms of fossil algae is acquiring ever greater stratigraphic significance. The investigation of the most ancient bacteria, Cyanea, and algae is very important for an understanding of the earliest stages of development of the organic world.
Principal stages of development. Remains of fossil plants in the form of petrified wood chips, leaf impressions, and amber have been described since ancient times. In the sixth and fifth centuries B.C., Xenophanes wrote that “laurel leaves” had been found in rocks on the island of Paros. However, leaf impressions and other traces and remains of ancient plants were interpreted for a long time as a fluke of nature. Leonardo da Vinci and scientists of his day identified these formations as remains of plants that existed at one time, but they had no idea how old they were.
In the late 17th and 18th centuries, scientists primarily concentrated on identifying fossil plants by comparing them with living plants. They sought to classify the various types of plant remains. What seems to have been the first detailed classification of fossil plants was published in London in 1699 by E. Llywd. In the 18th century the German paleobotanists G. Volkman, G. Knorr, and J. Walch published several monographs with descriptions and illustrations of fossil plants. However, the first attempts at periodization had to await the introduction of the paleontological method in stratigraphy by the British botanist W. Smith at the turn of the 19th century. The taxonomy of fossil plants was begun by the German paleobotanist E. von Schlotheim (1804), who was the first to apply Linnaeus' binary system to fossil plants. The Czech paleobotanist C. Sternberg (1820–38) and the French biologist A.-T. Brongniart (1822–38), working from the assumption that the various classes of plants had different degrees of antiquity, proposed the division of the history of the plant world into three and four periods, respectively. They thereby laid the foundation for the future phytogeny of plants. Great contributions to the taxonomy of fossil plants were made by the English botanists J. Lindley and W. Hutton (1831–37), the German paleobotanist H. Goeppert (1836–45), the German mycologist-paleobotanist A. Corda (1845), and the Austrian botanist F. Unger (1845). In Russia some descriptions of fossil plants were published in the 18th century. Of special importance was the work of Ia. G. Zembnitskii (1825–33), in which methods of paleobotany were set forth and a survey of all fossil plants known at that time was given. The first finds of fossil plants were predominantly in Europe. In the middle of the 19th century, besides detailed descriptions of European flora of all periods, investigations of the floras of polar regions, North America, India, and Australia were begun. The descriptive works of that period, which are associated with Goeppert, Unger, the Swiss paleobotanist O. Heer, the Austrian botanist C. von Ettingshausen, the French botanist G. de Saporta, and the American geologist J. Newberry, retain their importance to this day. In Russia the most important paleobotanical research of the mid-19th century was conducted by E. I. Eikhval', K. E. Merklin, and I. F. Shmal'gauzen.
At this stage in the development of paleobotany, the history of the plant world was divided into seven periods (Unger, 1852). Subsequent elaboration of the geochronological scale and progress in the study of the floras of all geological periods led to the formulation of theoretical generalizations at the end of the 19th century, including those by A. Engler (1879–82) and R. Zeiller.
At the beginning of the 20th century there were new developments in paleobotany. A number of new disciplines arose, including paleopalynology and paleocarpology. Old research methods were improved, and new ones appeared. In 1903 the English paleobotanist D. Scott discovered a group of seed ferns. In 1911 the Russian paleobotanist M. D. Zalesskii discovered Callixylon wood from the Devonian; its structure was similar to that of wood from gymnosperms. In 1925 the British scientist H. Thomas discovered Caytoniales. Comprehensive studies were carried out on Rhynia (British scientists R. Kidston and W. Lang, 1917–21), the subclass Primofilices (Lang, 1926; the German botanists R. Kräusel and H. Weyland, 1933), and the strobiles of Cordaianthus and ancient conifers (Swedish botanist R. Florin, 1944–51). These discoveries, studies, and other works of morphological-taxonomic character made possible the formulation of an accurate picture of the evolution of morphological structures and led to important generalizations concerning plant morphology and phytogeny. Such generalizations were formulated by the English botanist C. Jeffrey (1917); the German botanists W. Zimmermann (1930, 1959), F. Bower (1935), and A. Eames (1936); and the Soviet botanist A. L. Takhtadzhian (1956).
The organic connection between the wood of Callixylon and the foliage of Archaeopteris, typical of ferns, was proved by the American paleobotanist C. Beck in 1960. Beck’s evidence and the detailed study of psilophytes by the Americans H. Banks and F. Hueber (1968) led to the reexamination of several traditional ideas concerning the evolutionary relationships among ancient groups of higher plants (Beck, 1960–62; T. Delevoryas, 1962; Banks, 1970; D. Bierhorst, 1971).
In the 1930's and 1940's the foundation was laid for broad paleophytogeographic generalizations by the German paleobotanist W. Gothan, the British botanist A. Seward, and such Soviet paleobotanists as A. N. Krishtofovich, V. D. Prinada, and M. F. Neiburg. Important research devoted to problems of paleophytogeography was conducted by the American paleobotanist D. Axelrod (1958); the Soviet botanists V. A. Vakhrameev (1964, 1970), Takhtadzhian (1966), and S. V. Meien (1970, 1973); and the British botanist W. Chaloner (1973). The scientific and organizational activity of Krishtofovich, who, in addition to studying numerous paleofloras from the Devonian to the Quaternary, elaborated a number of important theoretical problems, was of great significance in the development of paleobotany in the USSR.
At present, paleobotany is characterized by the progressive specialization and integration of its disciplines. This is due to the new multilateral theoretical approach to the solution of a number of general theoretical and practical problems of biology and geology. Paleobotanical data and data from related disciplines are correlated, and new methods of research (anatomical, cytological, biochemical, mathematical, experimental) are continually being introduced. In addition, the most up-to-date equipment, including transmission and scanning electron microscopes, is used.
A number of phytogeographic concepts and paleogeographic and paleoclimatic reconstructions have been reexamined owing to the comprehensive worldwide revision of certain taxons of fossil plants and of paleofloras of all periods and to the ecological approach to their interpretation. In addition, a more accurate periodization of the history of the plant world as a whole has been formulated.
Leading paleobotanical laboratories of the USSR. The first scientific collection of fossil plants was gathered in 1830 at the St. Petersburg Botanical Garden (now the V. L. Komarov Botanical Institute). Here in 1932, I. V. Palibin organized the first paleobotanical laboratory. Since the late 19th century, paleobotanical research has been conducted at the Geological Committee (now the All-Union Scientific Research Geological Institute). Such research was initiated in the late 1920’s at the Geological Institute of the Academy of Sciences of the USSR and in the 1930’s at the All-Union Geological Oil Exploration Institute in Leningrad and at the Institute of Geological Sciences of the Ukrainian SSR. Paleobotanical laboratories or small groups of paleobotanists are found at a number of botanical and geological institutions in Moldavia, Georgia, Armenia, Azerbaijan, Kazakhstan, Uzbekistan, and other Union republics. Such institutions are located in Kazan, Novosibirsk, Tomsk, Vladivostok, and other cities.
International organizations. The International Organization of Paleobotany is part of the International Union of Biological Sciences. At the Third International Palynological Conference in Novosibirsk in 1971, the International Palynological Committee was founded, some of whose members are concerned with paleopalynology. The All-Union Botanical Society and the All-Union Paleontological Society have paleobotanical sections.
Periodicals. Articles on paleobotany appear in Botanicheskii zhurnal SSSR (Botanical Journal of the USSR; published since 1916), Paleontologicheskii zhurnal (Paleontological Journal; since 1959), and various geological and general scientific periodicals. International journals dealing with paleobotany include Palaeontographica. Abt. B. Palaeophytologie (Stuttgart; since 1846), The Palaeobotanist (Lucknow; since 1952), and Review of Palaeobotany and Palynology (Amsterdam; since 1967).
REFERENCES

Seward, A. C. Veka i rasteniia: Obzor rastitel'nosti proshlykh geologicheskikh periodov. Leningrad-Moscow, 1936. (Translated from English.)
Takhtadzhian, A. L. Vysshie rasteniia, vol. 1. Moscow-Leningrad, 1956.
Krishtofovich, A. N. Istoriia paleobotaniki v SSSR. Leningrad, 1956.
Krishtofovich, A. N. Paleobotanika, 4th ed. Leningrad, 1957.
Osnovy paleontologii: Vodorosli, mokhoobraznye, psilofitovye, plaunovidnye, chlenisto-stebel’nye, paporotniki. Moscow, 1963.
Osnovy paleontologii: Golosemennye i pokrytosemennye. Moscow, 1963.
Paleopalinologiia, vols. 1–3. Leningrad, 1966.
Paleozoiskie i mezozoiskie flory Evrazii i fitogeografiia etogo vremeni. Moscow, 1970. (Trudy Geologicheskogo instituta AN SSSR, issue 208.)
Krasilov, V. A. Paleoekologiia nazemnykh rastenii (Osnovnye printsipy i melody). Vladivostok, 1972.
Iskopaemye tsvetkovye rasteniia SSSR, vol. 1. Leningrad. (In press.)
Andrews, H. N. Studies in Palaeobotany. New York-London, 1961.
Delevoryas, T. Morphology and Evolution of Fossil Plants. New York, 1962.
Gothan, W., and H. Weyland. Lehrbuch der Paläobotanik, 2nd ed. Berlin, 1964.
Mägdefrau, K. Paläobiologie der Pflanzen, 4th ed. Jena, 1968.
Němejc, F. Paleobotanika, vols. 1–3. Prague, 1959–68.
Traité de paléobotanique, vols. 2–4 (fasc. 1). Published under the direction of E. Boureau. Paris, 1967–70.
Bierhorst, D. W. Morphology of Vascular Plants. New York-London, 1971.
S. G. ZHILIN and N. S. SNIGIREVSKAIA; under the general editorship of A. L. TAKHTADZHIAN
|
The hoary marmot (Marmota caligata) is a species of marmot that inhabits the mountains of northwest North America. The largest populations are in Alaska. In the northern part of that state they may live near sea level. Hoary marmots live near the tree line on slopes with grasses and forbs to eat and rocky areas for cover. It is the largest North American ground squirrel and is often nicknamed "the whistler" for its high-pitched warning issued to alert other members of the colony to possible danger. The animals are sometimes called "whistle pigs." Whistler, British Columbia, originally London Mountain because of its heavy fogs and rain, was renamed for these animals to help make it more marketable as a resort.
The "hoary" in their name refers to the silver-grey fur on their shoulders and upper back; the remainder of the upper parts are mainly covered in reddish brown fur. The underparts are greyish. They have a white patch on the muzzle and black feet and lower legs.
These animals hibernate 7 to 8 months a year in burrows that they excavate in the soil, often among or under boulders. Mating occurs after hibernation and 2 to 4 young are born in the spring. Males establish "harems," but may also visit females in other territories. Predators include golden eagles; grizzly and black bears; and wolves.
Unlike most animals their size, hoary marmots are not shy around humans. Rather than running away at first sight, they will often go about their business while being watched.
|
Deciduous forests of the world
Starved rock state
The weather in the deciduous forest changes with the seasons, going from cold to dry as a new season arrives. In autumn, trees drop their leaves and remain bare throughout the winter. In spring, leaves once again grow from the trees. In the winter, birds migrate south and animals hibernate.
All deciduous forests are located between coniferous forests and tropical regions. The latitude range is anywhere from 23 degrees N to 38 degrees S. There are four seasons: spring, summer, autumn, and winter. Summers are mild, averaging about 70 degrees F. About 14 inches of rain falls in the winter months and 18 inches in the summer.
Plantae = Daisies, Ferns, mosses, Oak Tree, and the Elm Tree
Protista = amoeba, and Paramecium
Fungi = Mushroom, and lichens
Monera = Bacteria
Animalia = red foxes, porcupine, wolf spiders, black bears, and mosquitoes, squirrel, Japanese beetle, hawk
[Food web diagram: producers (daisies, ferns, oak trees, lichens, mosses) are eaten by herbivores (rabbits, red squirrels, Japanese beetles); these in turn feed predators (red foxes, wolf spiders, black bears), with the American bald eagle and red-tailed hawk at the top; mushrooms, bacteria, and paramecia act as decomposers.]
1. What is the red squirrel eaten by?
2. The bald eagle eats the wolf spider which eats the ( )?
3. The Rabbit eats the daisies and the ( )?
4. The bald eagle and the ( ) are at the top of the food chain.
5. The Daisies are eaten by three of the animals which are they?
1. The bald eagle has extremely good vision; it can see forward and to the side at the same time.
2. Its visual sharpness is at least four times that of a person with perfect vision.
3. Its beak is curved and sharp to help it rip its prey apart.
1. How many times sharper is the bald eagle's vision than a human's?
2. Why is the Bald eagles beak curved and sharp?
3. When do the trees drop their leaves?
4. When the temperature gets cold, what do the trees do?
5. The bald eagle has extremely good vision, but can it see forward and to the side at the same time?
Gray Wolf = the gray wolf lives in the northern parts of the deciduous forest and travels in packs. The gray wolf was put on the endangered species list in 1988. Gray wolves have been hunted or driven out of their homes by humans.
Deforestation = trees are cut down for paper, which destroys the habitat of many animals and puts them at risk of extinction.
Symbiosis = symbiosis is the close relationship between two or more different species.
Mutualism: both species benefit
Commensalism: one benefits while the other is unaffected
Parasitism: one benefits while the other is harmed
Competition: neither benefits
Neutralism: both are unaffected
An example of symbiosis in the deciduous forest is the relationship between plants and animals. The tall tree serves as a home for birds. The canopy serves as protection from predators and heavy winds. The tree also grows berries and seeds for the birds to eat. This is an example of commensalism because the animals benefit while the trees are unaffected.
1. What is the relationship between a shark and barnacles?
2. What is the relationship between woodpecker and an oak tree?
3. What is the relationship between tree and grass in the deciduous forest?
4. What is the relationship between frog and a spider?
5. What is the relationship between an American goldfinch and crested flycatcher?
Mrs. Jones 3rd Hour
|
By Dawn's Early Light, by Leonard L. Milberg, offers the insight that, living in an age when Jews are fully integrated into so much of America's public and popular culture, it is difficult to imagine a time before they shone on the stage and printed page. Such a future for Jews was scarcely imaginable in the crucible years after the birth of the United States. In the colonial period, there was little precedent for Jews speaking for themselves vocally and volubly in the public arena. At the dawn of the Republic, they were new to American public life. Yet as the United States started its grand experiment with liberty, and began to invent a culture of its own, Jews, too, began a grand experiment of living as equals. In a society that promised exceptional freedom, this was both liberating and confounding. As individuals, they were free to participate as full citizens in the hurly-burly of the new nation's political and social life. But as members of a group that sought to remain distinctive, freedom was daunting. In response to the challenges of liberty, Jews adopted and adapted American and Jewish artistic idioms to express themselves in new ways as Americans and as Jews. In the process, they invented American Jewish culture, and contributed to the flowering of American culture during the earliest days of the Republic. This is the exhibition catalogue for The First Jewish Americans, published by Princeton University Library.
|
The United Nations Educational, Scientific and Cultural Organization — known as UNESCO — released its global status report on School Violence and Bullying today. It found that about 246 million kids experience bullying in some form every year. It affects kids in all countries. This estimate is based on a poll of 100,000 young people in 18 countries.
Some things might not surprise you:
- Students who identified as lesbian, gay, bisexual or transgender were three to five times more likely to be bullied. In the U.S., 82 percent of LGBT students ages 13-20 reported being verbally harassed in the past year.
- Girls are more likely than boys to report bullying, but 30 percent of all kids tell no one.
- Cyber bullying is a growing problem. Between 5 percent and 21 percent of kids are affected and it happens to girls more than boys.
- Boys are more likely to engage in physical violence.
- Physical violence is less common than bullying in industrialized countries.
- Bullies target disability, gender, poverty or social status, ethnic or cultural differences, physical appearance, and sexual orientation or gender identity. Those categories were reported in roughly equal measure.
Why should we care? The report tells us the impact:
- Students who are bullied suffer physical effects: stomach pains, headaches, and difficulty eating and sleeping.
- They become depressed, lonely or anxious, have low self-esteem and suicidal thoughts.
- They miss classes or drop out.
- One study found that kids who are bullied have lower test scores (probably because they missed a lot of classes or were depressed or anxious.)
What should we be doing about it? The study recommends:
- Taking action to change the culture of schools by adopting a strong stance on violence.
- Creating schools that have strong leadership.
- Creating schools that are inclusive.
- Providing training and education about bullying for students and teachers.
- Providing effective, child-friendly ways to report bullying, along with follow-up support.
- Collecting data about bullying.
- Creating national laws and policies that are enforced.
- Offering national programs that raise awareness.
- Getting students involved in planning and implementing interventions.
In the U.S. we don’t have one single federal anti-bullying law, though we do have laws about hate crimes.
In Texas, the Texas Education Code has several provisions on what to do for a student who is being bullied, including moving them to another class, as well as a student code of conduct and a provision for staff training.
There are new anti-bullying bills being proposed this session:
House Bill 306 and Senate Bill 179, known as David's Law, would cover cyberbullying even if it happens off school property and outside the school day; establish a way to report bullying and ensure that parents are notified within one day of the offense; and allow for the suspension of a bully who has caused a child to consider or attempt suicide, has incited violence through group bullying, or has released intimate visuals of a person.
|
Enthrallment makes up a good chunk of our 'once upon a time'. Examining society today, the nature of slavery may have changed, but it remains an ongoing phenomenon in this contemporary, structured society. A slave is defined as a person who is the legal property of another and is forced to obey them; alternatively, the verb can mean to work excessively hard. Slavery, in turn, is the state of being a slave. So you may see where it makes space in present, everyday situations in architecture. But first, let's expound on what the history of slavery looks like in architecture.
There are clashing theories over whether it was the Mesopotamian region or the Indus Valley that incepted the world's first civilization. While the Indus Valley shows no indication of labor-intensive suppression, the earliest signs of slavery can be seen in the Sumerian region, southern Mesopotamia. Linked to present-day Iraq, it then comprised independent city-states, divided by canals and boundary stones. Each city was temple-centric, dedicated to a particular patron god. The temple was supervised by a priestly governor (ensi) or by a king (lugal) who was intimately tied to the city's religious rites. To be concise, what they were trying to achieve with the tools that existed was ambitious in scale. Hence, it may not seem surprising that a civilization of such grandeur, overseen with a hierarchical structure such as this, was built on the blood, sweat, and tears of a few oppressed unfortunates. We are all quite acclimatized to the idea of such a 'once upon a time', thanks to history-based cinema (The Prince of Egypt, 1998) or familiarity with literature, like the Book of Exodus. To be fair, our whole mental image of the early cultivation of Egypt as a historically revered architectural space rests on slaves, who raised multiple edifices from the ground on the orders of their pompous pharaohs, tight-roping on the beliefs of Ma'at, the order of the cosmos.
This was when slavery was a norm; as time passed, the term transfigured to 'legal'. Legality meant 'bondspeople' were seen as a symbol of wealth and stature, not too different from the architecture of yesterday and today. 'The more the merrier' was the equivalent of 'the bigger, the better'. While this went on, the dominating communities were the Dutch, Portuguese, Spanish, British and Arabs. The coming middle ages saw the Byzantine-Ottoman wars, the consequences of which fell on the majestic Hagia Sophia. Hagia Sophia rests in modern-day Istanbul, Turkey, famous for its breathtaking engineering and the vaulting of its nave. It was initially a Greek Orthodox Christian patriarchal cathedral, which after the victory of the Ottoman Empire was converted into a mosque. As authority shifted, a place of refuge became a horrific site as people, mainly women, children and the elderly, were enslaved, sexually violated and even slaughtered. Architecture is in this case seen as a sophisticated fishnet, a means to obtain; to objectify those who were once seen as beings.
Another case example could be that of the Taj Mahal, where Emperor Shah Jahan dismembered the arms of the slaves who built the marbled beauty. In this case, architecture took from the slaves not just their hard work but, in the most literal sense, parts of them. The slaves contributed to the revered 'wonder of the world' and gave an entire country (India) unparalleled glory and a lasting history; for those who are not aware, it was a token of the emperor's love for his wife. Hence, an eternal love story.
Akin are the tales of every nation that has seen colonies take over, a change of command, a sovereignty switch, or a lineage of some sort of authoritative oppression over a sect of its population. Throughout history we see that as power transcends, the act of slavery shifts from one 'people' to another, allowing architecture to be built, morphed and broken. Architecture is consequently scaled. Other instances encourage additions and subtractions in the built form, challenging its identity as architecture, while the power-play seduces the urban fabric. Structures are multiplied and divided to build a form that intends to make space to symbolize an unspoken dialect, acting as a mode of communication from the powerful to their audience, which may or may not involve the slaves themselves.
Today, we exist in a time where 'slavery' as we knew it is deemed illegal. Contemporary shackles may not be made of bulky iron bows; their pins may not hold our ankles while piercing our bones. Nevertheless, it has taken the form of what is termed 'neo-slavery' or institutional slavery. To explain, consider the following terms: prison labor, bonded labor, forced-migrant labor, sex slavery, forced marriage, child marriage, and child labor. Issues such as these have made architecture. Some of it is temporary, some semi-structured, while some is beyond the physicality of construction, seeped into cultural roots. Picture a red-light district; realize that it too has an identity comprising volume and mass, color, texture, injecting a peculiar sense of spatial quality. Color palettes or snippets of what you may add to a mood board might appear when you think about each term carefully. This is another association slavery has with architecture: one of memory and intuition.
Another way to associate contemporary slavery with architecture would be through 'dignity'. The question arises when you see, every day, a symbol of your woeful past which instills a sense of pseudo-inferiority; when your daily visual (or otherwise) language belongs to your oppressors more than it can ever belong to you. How can you have a sole sense of belonging?
Many statues were sculpted in a bygone era but are inculcated in the everyday architecture of people today. They are silent but prominent in present-day America, standing tall as a reminder of the oppressed who were once slaves in the country. Though slavery was long ago officially abolished, the concept still shows when visitors pass by public squares and parks, a reminder of the dark, cold history of a community that considers the nation home. In her paper Negro Building: Black Americans in the World of Fairs and Museums, Mabel O. Wilson suggests a term, the black 'counter-public' sphere, which is inherently "spaces in which African American leaders could represent black history and identity on their own terms."
|
In his Critique of Pure Reason, Immanuel Kant discusses what makes up identity. He suggests that identity comes from self-consciousness and that self-consciousness arises from a combination of ideas that a person calls their own. This combination of ideas arises as a result of understanding how things are related. This seems to mean that the more people relate different ideas in the same ways, the more similar their personal identities become.
So how does this factor into schooling? Schools are institutions in which students are taught to make connections in a specific way. For example, we learn relations between letters and words, colors and objects, reprimands or rewards and actions, etc. Everyone is taught to make these same connections. Furthermore, many schools are restrictive and do not allow for deviance. For example, in math class, a student may discover a new way to solve a problem, one that is different from the way the teacher explained. Although the student came up with the correct answer, the teacher reprimands the student, or takes points off a test because it was not the “right” way to solve the problem. This reinforces connections the teacher made earlier between the idea of correctness and her method of solving the problem.
I believe this shows the limiting effect of schooling. It creates fewer differences between the ways in which people combine their ideas and therefore fewer differences in identity.
In Friday’s class, there was a discussion on the different elements that make up happiness or “eudaimonia.” Some of these components included the “measures of health: courage, wisdom, piety, moderation and justice, along with moral character and external characteristics.” Discussion also led into Socrates’ insistence on knowledgeability in academic regards as a key to happiness and virtuosity. I agree with Socrates in the sense that knowledge is truly important and with more wisdom comes the ability to make better decisions and in some light live a happier life. However, I do not necessarily agree that academic prowess leads to a happier life for every individual.
In Plato's Republic Socrates' character states that people are better off restricting themselves to one craft than practicing many (Republic 370b). He makes several comments of this nature, going so far as to insist that a person should, "stick to [his trade] for life, and keep away from other crafts so as not to miss the opportunities to practise his own craft well" (Republic 374c). Interestingly enough, there is a slightly-more-modern-than-Plato figure of speech which basically sums up the idea that a person with many skills is not necessarily outstanding at any one skill: "Jack of all trades, master of none". Although the question of many trades versus one was not brought up as part of the education of the guardians (it was only made relevant to the formation of the city), I think it is important to our classroom discussion on education. Should education be broad or focused?
In an age in which freedom of expression is revered as an undeniable right, Socrates’ suggestions about education in The Republic seem to violate the basic principles of liberty and freedom. “Then we must first of all, it seems, supervise the storytellers. We’ll select their stories whenever they are fine or beautiful and reject them when they aren’t,”(377c) suggests Socrates. A modern-day person will most certainly brand this as unwarranted censorship. However, I believe that modern-day educators can learn something important from Socrates.
Childhood is a critical period in a person’s development. Values, principles, and habits developed in this period persist throughout a person’s lifetime. According to Socrates, “it’s at that time that it is most malleable and takes on any pattern one wishes to impress on it.”(377b) A child’s moral sense is not fully developed; he or she sometimes cannot distinguish the good from the bad. Consequently, it is completely logical and reasonable to expose children to the good and justice, and deny them access to the bad and evil. With the advent of the Internet, children can easily access billions of webpages, images, and videos. A child can easily pick up any bad habit or principle off the Internet; the opportunities are endless. As a result, carefully censoring the Internet for children is a necessity.
To rid the world of evil, you don’t work with adults who have already developed their values and principles, but with children who are developing theirs. The world would be a much better place if every single child was raised in an environment that promotes justice and goodness.
In Book II and Book III of the Republic, Glaucon and Socrates discuss the implications of justice and how education can help create a model citizen who can identify what is just.
Book II of Plato's Republic includes a conversation between Glaucon and Socrates, in an attempt to get to the heart of what justice/injustice is. To accomplish this, Socrates leads Glaucon through the concept of a city, and tangents off into explaining things that a city needs not only to be healthy, but luxurious as well (373b).
On explaining the role of guardians in a city, Socrates...
In schools, the relationship between student and teacher is a strange one. How far should the teacher be willing to educate a student, and how far should the student be willing to try?
Meno questioned whether knowledge (teaching) and experience (practice) are mutually exclusive at the beginning of the dialogue. This dichotomy has me ponder, “What is the way to obtain the best education?”
Before attempting to answer the question, I will differentiate knowledge (teaching) and experience (practice). First of all, knowledge might be superficial in one's mind because it is often proved by someone else's studies. Thus, teaching is the same as spreading one's experience to other individuals. However, acquiring knowledge from teaching does not secure the meaning behind it, because one's experience is something that cannot be transferred. On the other hand, trials create experiences that are realistic because of the consequences one receives. To sum up their differences, knowledge exists in a blurry vision while experience lives in vivid images.
In order to weigh teaching vs. practice, the issue of theoretical knowledge vs. practical knowledge must be considered. The former is obtained from reading formal writings and listening to lectures, or being "book-smart." The latter, by contrast, is grasped by performing experiments and trial and error, or being "street-smart." According to the definitions above, they are completely distinct from one another, but share a common goal: personal improvement. Similarly, education is about acquiring and applying existing knowledge to increase overall human intelligence. Thus, teaching alone cannot bring the best results, nor can practice. They have to work together in order to yield the best results. For instance, a surgeon should not be allowed to perform a surgery if she has no idea where the heart is. At the same time, she should not conduct surgery on a live patient if she has no prior experience.
In conclusion, knowledge and experience are two different perceptions. However, their differences are blessings because they are the final missing pieces of the puzzle called education.
|
California is experiencing its deadliest fires in state history. The recent landfalls of hurricanes are the worst in recent memory. These events should serve as the catalyst for conversations about climate change and natural disasters. Let’s take a look at how researchers see climate change affecting the future of these natural disasters.
Since the early 1970s, wildfire season has increased from 5 months to 7 months.
Global warming is increasing temperatures, so there is less soil moisture on average, especially in a year of well-below-average rainfall. Without moisture in the soil, things burn much more easily. Although fire is part of the natural ecosystem, and even necessary for some trees' reproduction, California has suffered more than in the past due to drought and an abundance of dead and dry trees. Since 1970, the average annual temperature in the western US has increased by 1.9 degrees Fahrenheit.
The rising temperature also affects the ecosystem. Consider the mountain pine beetle. Cold winters kept the insect confined to its historic natural environment for the majority of its existence. With rising temperatures, however, the beetle's territory can expand, and the beetles can weaken or kill more trees. Dry and dead trees are very susceptible to catching fire, thus providing more wildfire fuel than in the past.
It's still difficult for scientists to confidently pinpoint direct correlations between climate change and specific droughts, yet scientists can still draw a few connections.
The increase in the Earth’s temperature causes water to evaporate from the soil into the air. This evaporation takes moisture away from the plants depending on it. The lack of moisture can increase the potential for drought conditions.
Climate change can also cause subtropical high-pressure systems to get much stronger than usual. These systems prevent moist air from traveling into the atmosphere to condense as rain or snow. As these systems gain in strength and size, they have a higher chance of blocking precipitation.
Hurricane damage may be exacerbated for several reasons. First, ocean warming continues to cause our sea levels to rise. Warmer water occupies more volume. This alone increases the risk of storm surge, or the intensity of flooding caused by a storm pushing water onshore.
A study in 2013 led the authors to project that the threat of a Katrina-level storm will rise two to seven times for every 1.8 degrees Fahrenheit increase in temperature.
Finally, climate change will also affect rainfall. With higher temperatures, the atmosphere is capable of holding more moisture, allowing more intense rainfall. Climate change will also affect the types of storms we see. For example, it's predicted that we will see a decrease in weaker Category 1 and Category 2 storms and an increase in stronger Category 3, 4, and 5 storms.
Conversations surrounding climate change and natural disasters will continue to be a hot topic amongst scientists. Our one hope is that the increased damage and destruction caused by these “natural” disasters will cause policy makers to more seriously address the underlying issue of limiting global warming.
|
The importance of a good night's sleep for memory has been demonstrated by many studies. The quantity and quality of sleep have a profound impact on our learning and memory.
Sleep and Memory
Recent neuroscientific research shows that the human brain registers and records knowledge during sleep. If the brain doesn't get enough sleep, this information is lost and can't be recalled. Sleep-deprived people cannot maintain optimal focus and therefore cannot learn efficiently.
Sleep, Memory and Learning Cycle
Each day's experience is registered in the hippocampus, the part of the brain that stores short-term memory. The information is then moved from there to the prefrontal cortex during sleep. If the brain is not in sleep mode, that process gets interrupted. The higher parts of the brain get involved in linking the information with previous or future recordings, which is essential for correct judgement and problem solving. Sleep cleans up the hippocampus so we can record fresh information every day.
Sleep, Memory and Information Retention
The brain also shows short, three-second spikes of activity during sleep, called "sleep spindles." The streaming of activity between the hippocampus and higher brain centers creates these sleep spindles. People with more sleep spindles can retain learned information better and remember it more easily.
Sleep and Children’s Learning Development
Research shows that children have more sleep spindles than adults, which helps explain why kids learn more quickly as well. Preschool-age children who take daytime naps show better vocabulary growth, generalization of word meanings, and abstraction in language learning.
Clearly, for children as well as adults, prolonged sleep isn’t a sign of laziness. It is critical for our brain’s connections and our body’s rhythms. In fact, sleep continues to be important for memory and learning throughout your lifetime.
Recent national statistics show millions of Canadian school kids are sleep deprived. This not only causes long-term health consequences but also leads to poor academic performance and mental instability.
Sleep Deprivation and Depressive Symptoms
Without adequate sleep and rest, over-worked neurons can no longer function to coordinate information properly, and we lose our ability to access previously learned information.
Sleep deprivation can also cause depressive symptoms which can alter our ability to learn.
All these examples show that in order to have effective learning and memory skills, everyone needs a good night’s sleep.
Learn more about - SleepGift™ Weighted Blanket
SleepGift EMF Weighted Blanket with EMF protection up to 99% is a reinvented weighted blanket designed with 6 features that are medically and scientifically proven to improve your sleep and health.
Author: Dr. Tina Ureten
|
Chronic obstructive pulmonary disease (COPD) is an umbrella term that includes emphysema. Emphysema is a progressive lung condition that affects the tiny air sacs in the lungs, the alveoli, causing them to overfill with air. Over time these air sacs expand until they burst or become damaged, and the lungs form scar tissue. This scar tissue progressively impairs the patient's ability to breathe, leaving them increasingly short of breath, a symptom known as dyspnea. Eventually the alveoli turn into swollen air pockets referred to as bullae, leaving less and less surface area in the lungs and resulting in less oxygen entering the bloodstream. When the alveoli become damaged, so do the tiny fibers that hold open the airways leading to them, causing the airways to collapse and trap air every time the patient exhales. The number one cause of emphysema is long-term cigarette use.
Cause and Effects of Emphysema
Signs and Symptoms of Emphysema
What makes emphysema so tricky and difficult to diagnose at an early stage is the simple fact that symptoms may go unnoticed for many months or even years. As emphysema progresses and further damage is done to the lungs, patients begin to feel shortness of breath, referred to as dyspnea. In the beginning stages of emphysema, dyspnea occurs during physical activity; gradually the disease worsens, eventually causing dyspnea during rest or light activity. The dyspnea can eventually take such a toll on patients that eating becomes difficult, leading to reduced appetite and weight loss. Other common signs and symptoms of emphysema include:
- Tightness in the Chest
- Chronic Cough
- Expanded Chest or "Barrel Chest"
- Clubbing of the Fingers May be Noticed as Emphysema Progresses
- Collapsed Lung
What Leads to Emphysema
Long-term, regular cigarette use is the #1 CAUSE of emphysema! Burning a cigarette releases about 4,000 known chemicals into the surrounding air, and many of those chemicals are carcinogenic (cancer causing). Those carcinogenic chemicals are the reason why 5 out of every 6 lung disease patients are or once were long-time smokers. However, cigarette smoke is not the only cause of emphysema: excess exposure to air pollution, factory fumes, dust, or wood smoke are just a few other common irritants that can leave patients susceptible to developing emphysema. In very few cases emphysema can be attributed to genetics, a condition known as alpha-1 antitrypsin deficiency: a deficiency in the protein that protects the elastic structures in the lungs. Other causes of emphysema include:
- Respiratory Infections
The damage that has already been done to the lungs cannot be reversed, but with treatment, diet changes, and exercise, emphysema can be managed more easily and overall quality of life will improve. Here are some of the most common treatment options:
- Eliminate Smoking - After all, this is more than likely the reason you were diagnosed with emphysema; by stopping smoking, the progression of the disease will begin to slow.
- Supplemental Oxygen - This is for patients who suffer from extreme dyspnea during minimal activity or throughout the day. Treatment is delivered with a home oxygen concentrator, portable oxygen concentrator, oxygen tanks, or liquid oxygen, which supplies medical-grade oxygen to the user through a nasal cannula. Patients must have a prescription for medical-grade oxygen in order to purchase an oxygen concentrator.
- Antibiotics - Patients with emphysema are at higher risk for pneumonia and other infections; antibiotics are needed to treat these infections.
- Breathing Techniques - Patients can be taught breathing techniques, such as pursed-lip breathing, to help reduce the feeling of breathlessness; these techniques can also help patients regain their ability to exercise.
If you are experiencing any of the previous symptoms and have been a long-time smoker, speak with your doctor about getting tested for emphysema. - Caleb Umstead
|
The Common Seadragon, also known as the Weedy Seadragon, is a species of fish that belongs to the Syngnathidae family, along with seahorses and pipefish. It is known for its elaborate, leaf-like appendages and vibrant colors, which help it blend in with its surroundings.
In terms of size and biology, the Common Seadragon is generally smaller than its close relative, the Leafy Seadragon, reaching only about 30 cm in length. It has an elongated, slender body, and its leaf-like appendages provide it with excellent camouflage in its aquatic environment. It also has a long, snout-like mouth that it uses to suck up its small prey.
The Common Seadragon is found in the waters of southern and western Australia, where it inhabits shallow, coastal waters and seagrass beds. It can be found along the southern coast of Western Australia, as well as in areas such as Victoria, South Australia, and New South Wales.
Its diet consists primarily of small crustaceans such as mysids and amphipods. The seadragon uses its long, snout-like mouth to suck up its prey, and it has specialized gills that allow it to filter small particles from the water.
In terms of behavior, the Common Seadragon is a relatively slow-moving fish that relies on its camouflage for protection. It is a solitary creature and tends to move slowly through the water, using its leaf-like appendages to blend in with its surroundings.
The Common Seadragon is scientifically known as Phyllopteryx taeniolatus, and is the only species in the Phyllopteryx genus. It is a member of the Hippocampinae subfamily within the Syngnathidae family, and is closely related to seahorses and pipefish.
Despite being called "common", these unique fish are considered vulnerable to extinction due to habitat loss and pollution.
|
A mining facility in northern Myanmar became the crash site of a huge piece of space debris last Thursday. As the impact occurred, a smaller piece of debris with Chinese markings on it simultaneously destroyed the roof of a house in a nearby village. Fortunately, no one was injured in either incident.
The larger object is barrel-shaped and measures about 4.5 meters (15 ft) long, with a diameter barely over a meter. "The metal objects are assumed to be part of a satellite or the engine parts of a plane or missile," a local news report said. The Chinese government is neither confirming nor denying whether both pieces of space junk came from the same object.
While there may be no confirmation yet, it's worth noting that just last month the Chinese Tiangong-1 spacecraft re-entered Earth's atmosphere. It's possible that this debris may be part of it.
This incident points to the ever-growing problem of space junk and debris surrounding our planet. NASA estimates there are more than 500,000 pieces of debris currently orbiting the Earth at speeds of up to 28,162 km/h (17,500 mph).
Space junk accumulation is a product of humanity's space exploration projects. The debris comes mostly from old, decommissioned satellites or discarded parts of shuttles and rockets. Because the pieces move so quickly, they present dangers to the International Space Station and other working satellites, as well as to crewed spacecraft.
Debris can also crash into the Earth without warning, as happened in Myanmar.
Fortunately, efforts are in the works to deal with our space junk problem, such as using lasers or dust clouds. One group from Switzerland even proposed sending a satellite equipped with a special type of net to collect space debris. While these efforts are commendable, a more comprehensive solution to the issue is needed – one where each country that's contributed to space junk allots resources to cleaning it up.
If we want to send humans to Mars or to make (near-)space tourism possible, we have to make sure space rockets are safe from potentially destructive space junk in our orbit.
|
Section A, Part 6
When you think of school groups, you probably think about student council, honor society, clubs, teams, legislatures, or committees. Most groups, however, are not as rigidly structured as these. Any collection of people, from two to two million, constitutes a group when the people in it have:
- A common identity
- A common purpose
- Common goals
As a leader, you have both a tremendous influence and a responsibility to your group members. The types and extent of interaction among group members are often determined by your ability to perceive, understand, and influence their interactions. This can be a difficult task, because patterns of group interaction are not static. Groups are constantly changing and evolving.
(Based on the model developed by B. Tuckman, 1965)
While the precise dynamics of a particular group are determined by many factors, to a large extent all groups undergo five basic stages of development:
Stage 1: Forming
A group goes through this initial stage when its members first come together as a collection of individuals unfamiliar with the other group members. At this stage, you are instrumental in providing opportunities and a positive environment for initial group interactions.
Start by encouraging group members to introduce themselves. Never assume that people are acquainted, and when you are introducing people, try to think of one or two facts about them that others may find interesting.
Stage 2: Storming
Once the group has become acquainted, conflicts may arise over such issues as power, leadership, goals, and attention. These potential problems can be minimized by setting standards and modeling the desired behaviors. Group members often look to each other as guides for standards of behavior, particularly in terms of the acceptable levels of criticism and conflict and the ways in which disagreements are handled.
Make sure that the message you are sending is consistent. Your body language should not encourage behavior that you verbally discourage.
Stage 3: Norming
During the third stage, conflicts are resolved and the group begins to function smoothly as a unit, working out compromises, encouraging participation, maintaining a conducive environment, and handling individual problems.
Stage 4: Performing
In the fourth stage, the group experiences maximum productivity and involvement. The group members recognize each other as important components of the group.
Stage 5: Adjourning
In the final stage, members come to terms with the end of the task/exercise and must decide whether or not to apply their experience to other groups to which they belong and to future activities of the current group.
Functional Roles of Group Members
Individual behavior in a group can be examined from the point of view of its purpose or function. When a member says something, is he or she 1) primarily trying to get the group task accomplished (task roles), 2) trying to improve or patch up some relationships among members (maintenance roles), or 3) primarily meeting some personal need without regard to the group’s concerns (self-oriented roles)?
Several examples of these three types of behavior are:
- The initiator-contributor suggests or proposes to the group new ideas or a changed way of regarding the group’s problem or a goal.
- The information seeker asks for clarification of suggestions made in terms of their accuracy and for authoritative information pertinent to the problem being discussed.
- The opinion seeker asks not for the facts of the case, but for a clarification of the values pertinent to what the group is undertaking or for clarification of values involved in a suggestion or solution.
- The opinion giver states his or her belief or opinion pertinent to a suggestion. The emphasis is on his or her proposal of what should become the group’s view, not on relevant facts or information.
- The information giver offers facts or generalizations that are authoritative or relates his or her personal experience.
- The elaborator spells out suggestions in terms of examples, offers a rationale for suggestions previously made, and tries to deduce how an idea would work out if adopted by the group.
- The orienter defines the position of the group with respect to its goals by summarizing what has occurred or raising questions about the direction that the group discussion is taking.
- The energizer prods the group to action or decision and attempts to stimulate the group to greater or higher quality activity.
- The recorder writes down suggestions and makes a record of group decisions. The recorder is the group memory.
- The encourager praises, agrees with, and accepts the contributions of others. He or she indicates warmth and solidarity in his or her attitude toward the other group members and indicates understanding and acceptance of other points of view.
- The harmonizer mediates the difference between other members, attempts to reconcile disagreements, and reduces tension.
- The compromiser operates from within a conflict in which his or her idea or position is involved. He or she offers compromise by yielding status, admitting error, or disciplining him or herself to maintain group harmony or growth.
- The gatekeeper and expediter attempt to keep communication channels open by encouraging or facilitating the participation of others or by proposing regulation of the flow of communication. (“We haven’t heard from ___ yet.” “Why don’t we limit the length of our contributions so that everyone will have a chance to speak?”)
- The standard setter expresses ideals for the group to attempt to achieve or applies standards in evaluating the quality of group processes.
(Excerpt from A Handbook for the Student Activity Adviser by Ron Joekel)
Group members sometimes exhibit behaviors that do not contribute to group maintenance or task accomplishment and interfere with the effectiveness of the group. Examples of such nonfunctional roles are:
- Dominator: tries to assert authority or superiority or to manipulate the group through flattery, interruptions, or demanding right-to-attention; embarks on long monologues; is overpositive and overdogmatic; constantly tries to lead group even against group goals; is autocratic and monopolizing.
- Blocker: resistant, stubborn, negative, uncooperative, pessimistic, interferes with group progress by rejecting ideas and arguing unduly.
- Help-seeker: seeks sympathy; whines, expressing insecurity and personal confusions; depreciates self.
- Special interest-pleader: claims to speak for a special group, but usually is seeking attention for self; name-drops to impress the group.
- Aggressor: attacks the group or the stature of its problems; deflates the status of others; may joke, express disapproval of values/acts/feelings of others, or try to take credit for another member’s contributions.
- Fun-expert: is not involved in the group and doesn’t wish to be; may be cynical, aloof; often involved in horseplay; behaves childishly; distracts others, makes off-color remarks.
- Self-confessor: uses the group as audience for expressions of personal and emotional needs; is not oriented to the group.
- Avoider: withdraws from ideas, from group, from participation; is indifferent, aloof, and excessively formal; daydreams, doodles, whispers to others; wanders from the subject or talks about irrelevant personal experiences.
- Recognition seeker: exaggerated attempt to get attention by boasting or claiming long experience or great accomplishments; struggles against being placed in “inferior positions.”
How to Keep Groups Working Together
Successful group action in solving problems and addressing the group’s goals often depends on understanding some basic principles about the way people behave in groups and the kinds of behaviors you as a leader should encourage. Start early by communicating your expectations to the group and modeling and reinforcing them throughout the problem-solving process.
- Identification with Other Members
Try to find out how the other person feels. Don’t assume that what you want is what others want, too. Discovering common attitudes among group members is productive. Encourage input from all members when setting up ground rules or guidelines for the group.
- Participation
Encourage everyone in the group to take an active part. Consensus is much better than an unhappy minority. People participate in their own ways, so be tolerant and helpful in encouraging participation. Help members find roles that fit them.
- Democratic Climate
Democratic leadership involves more people than a dictatorship. Your job as a leader is to create an atmosphere of honesty and frankness. Keep things moving but allow the group to make the decisions when they are ready to do so.
- Individual Security
People under pressure may call names, get angry, show prejudice, or behave in other ways destructive to group cohesiveness. Security comes as trust develops within a group. Act swiftly to remind the group of the agreed upon guidelines for working together if you observe anyone whose actions or words are out of line with any one of the guidelines.
- Open Lines of Communication
Explain and listen. Make your messages honest and accurate. Encourage the flow of listening, talking, and responding.
- Better Listening
Attempt to interpret both the literal meaning and the intention of each speaker. You need to hear what other people say, what they intend to say, and what they would have said if they could have said what they wanted to say.
- Handling Hostility
Hostility in itself is not necessarily harmful to a group, or even to individual productiveness. People need freedom to express hostility within a group (through channels) because inhibition will decrease the efficiency of the group members. Call a timeout from the exercise if needed to give the group time to work through their frictions and to refocus their efforts on the challenge at hand.
|
When the word bully comes up, most readers visualize a bigger, stronger male child tormenting a smaller child, frequently one with glasses or some other physical difference. However, some basic facts about bullying let us know that this is profiling in every sense. Bullies can be both girls and boys. A child can be both the bully and the victim. Bullies target children who cry, get mad, or easily give in to them.
Bullying occurs when there is an imbalance of power. Usually, the children who are bullied are weaker or smaller, are shy or feel helpless. Other children at high risk of being bullied are those with disabilities or other special needs or those who are lesbian, gay, bisexual or transgender.
There are three types of bullying: physical, such as hitting, kicking, and choking; verbal, such as threatening, teasing, and hate speech; and social, such as excluding victims from activities or starting rumors about them. We tend to think of bullying as happening at school, but it can happen anywhere adults are not watching, such as the playground or on ever-present electronic devices.
Nobody wants their child to be bullied and there are a lot of suggestions about how to teach a child to cope with these difficult situations such as looking the bully in the eye, standing tall, staying calm, and walking away. Most important is not to respond to electronic messages and to cut off communications with those who are sending unwanted messages. Children also need reassurance that it is OK to ask an adult for help and to show them disturbing texts.
Many of these social skills don’t come naturally and need practice. Children can practice them in adult-supervised groups such as sports, music groups, and social clubs. School officials should be told where and when bullying happens so they can help teach what bullying is and plan how to prevent it from happening again.
Everyone wants to protect the bullied child but the bully also needs to have help. Bullying is a learned behavior. There is evidence that bullies continue to have problems which frequently get worse. As adults they tend to be less successful in work and with adult relationships. They have an increase in antisocial behaviors and may have trouble with the law.
Parents who find their child is a bully can help the child understand what bullying is, why it is a problem, how it hurts other children, and why it is never OK. All children can learn to treat others with respect. When disciplining, use nonphysical discipline such as loss of privileges. Parents can talk to school personnel to find positive ways to stop bullying. Parents should also supervise their child’s time online and monitor which sites they are visiting. It is strongly recommended that parents require their child to “friend” them on social media and to share their passwords. Always consider asking for help from school personnel, a counselor, or your doctor.
by Sally Robinson, MD Clinical Professor
Keeping Kids Healthy
Published August 2023
|
Practice troubleshooting hardware and program issues by designing and programming a new model.
Questions to investigate
• What debugging techniques can be used when designing a new model?
• Ensure SPIKE Prime hubs are charged, especially if connecting through Bluetooth.
(Group Discussion, 5 minutes)
Spark a discussion about what a debug-inator is through brainstorming.
Ask students to come up with at least 3 things that their debug-inator will need to do. During this brainstorming session, students should gather as many ideas as possible and record them in their journals.
Prompt students as needed with questions like:
• Will your model need to sense anything?
• How will your model need to move?
• Will you need to utilize the console in any way?
Allow students to share their ideas from their brainstorming. Students should then decide on the final three main criteria that need to be included in their model.
(Small Groups, 20 minutes)
Challenge students to design, build, program, test, and troubleshoot a new model that is a debug-inator meeting the criteria that they set.
Students should create their prototype, being careful to include the three main criteria determined in the Engage section. Students should practice their troubleshooting strategies while designing and building their model by testing the model’s ability to move as intended.
Students will need to program their model. When creating their program, students should do the following (a brief example sketch follows this list):
• write a pseudocode program first to show the intended outcome of their program
• document their program using code comments beginning with the # symbol
• test the program, watching the console for error messages
• test the program using expected and unexpected outcomes or data
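For reference, here is a minimal sketch of what such a documented program might look like in the SPIKE App's Python mode. The behavior shown (sweep an arm on port A when the hub's left button is pressed) and the port and timing values are illustrative assumptions, not part of the lesson:

```python
# Debug-inator sketch: an assumed design that sweeps an arm when the
# hub's left button is pressed. Port 'A' and timings are placeholders.
from spike import PrimeHub, Motor

# Pseudocode plan (written first, as the lesson asks):
#   1. Report that the program started.
#   2. Wait for the left button.
#   3. Run the arm motor for one second.
#   4. Print checkpoints so any error is easy to locate in the console.

hub = PrimeHub()
arm = Motor('A')  # change 'A' to the port used on your model

print('debug-inator: ready')           # checkpoint 1
hub.left_button.wait_until_pressed()
print('debug-inator: button pressed')  # checkpoint 2
arm.run_for_seconds(1)                 # expected outcome: one arm sweep
print('debug-inator: done')            # if this never prints, the bug is above
```

Running the program once as built (the expected case) and once with the motor unplugged (an unexpected case) is a quick way for students to see how error messages surface in the console.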
Allow students time to design, build, and program their models. Students should document any problems they encounter and how they fix or troubleshoot these issues.
(Whole Group, 5 minutes)
Allow students to share their work. Discuss students’ models and programs together.
Ask students questions like:
• What does your debug-inator do?
• What 3 expectations did you have for your model? How did you create something to meet these expectations?
• How did you program your model? Ask students to share the program using the code comments to explain it.
• What trouble did you have? Where did you find bugs? How did you fix them?
(Small Groups, 10 minutes)
Have students finish their models and programs.
Allow students additional time to finalize their model and programs. Encourage collaboration between teams and sharing of ideas.
(Group Exercise, 5 minutes)
Discuss the program with students. Ask students questions like:
• What problems did you run into while creating your debug-inator?
• How did you test your model and program for errors? How did you troubleshoot the errors found?
• How did you determine if problems encountered were from the model or the program?
Have students answer the following in their journals:
• Ask students what challenges they encountered in creating their debug-inator.
• Ask students to rate themselves on a scale of 1-3, on their time management today.
• Ask students to rate themselves on a scale of 1-3, on their materials (parts) management today.
• What characteristics of a good teammate did you display today?
Teacher Support
• Debug a software problem.
• Troubleshoot a hardware problem.
• SPIKE Prime Set
• Device with SPIKE App installed
• Student journal
2-AP-10 Use flowcharts and/or pseudocode to address complex problems as algorithms.
2-AP-13 Decompose problems and subproblems into parts to facilitate the design, implementation, and review of programs.
2-AP-17 Systematically test and refine programs using a range of test cases.
2-AP-19 Document programs in order to make them easier to follow, test, and debug.
|
“Others dream of things that were, and ask ‘Why?’ I dream of things that never were, and ask ‘Why not?'” – George Bernard Shaw
The INTENT of learning Design Technology at Clearwell
Design and technology is an inspiring and practical subject at Clearwell Primary School. Our curriculum allows opportunities for pupils to use their creativity and imagination to design and make products that solve real and relevant problems within a variety of contexts, considering their own and others’ needs, wants and values. Learners are encouraged to take risks and become resourceful, innovative and enterprising. We believe that design and technology gives young people the skills and abilities to engage positively with the designed and made world and to harness the benefits of technology. We want our children to learn how products and systems are designed and manufactured, how to be innovative, and how to make creative use of a variety of resources, including digital technologies, to improve the world around them.
How Design and Technology is taught at Clearwell (Implementation)
Pupils experience DT lessons planned to allow them to acquire a broad range of subject knowledge, drawing on disciplines such as mathematics, science, engineering, computing and art. Pupils begin all design projects by gathering research. They are then taught how to evaluate past and present designs and develop an understanding of their impact on the wider world. They also learn the importance of the target audience and the relevance of market research. Pupils learn how to write design specifications, developing their ability to plan for products that are fit for purpose.
Pupils make end products by selecting and using a wide range of tools and equipment to perform practical tasks. They also select from and use a wide range of materials and components, including construction materials, textiles and ingredients, according to their functional properties and aesthetic qualities. Throughout the design and making process, pupils learn to critique, evaluate and test their ideas and products and the work of others. They do this through regular formal and informal self and peer assessment opportunities.
The IMPACT of learning Design and Technology at Clearwell
- Children who passionately enjoy designing, constructing and evaluating
- Children who develop a desire to learn about up-to-date technological innovations, products and systems
- Learners who take information from other subjects, especially maths and science, and apply it using logic
- Originality – children who take safe yet creative risks
- Independent learners and learners who work constructively and productively with others
- Learners who analyse
- Learners who make mistakes and share and learn from them
|
The Miracle of Owls: The Bird That Should Not Exist
Owls, although birds, are unique compared to other avians. Called the greatest hunters, they are one of the rare bird species that regularly hunt at night.[i] They also have eyes that face forward rather than being located on the sides of the head, as in most other birds. Also, unlike most other birds, when not flying, owls sit straight up supported by their two legs. Many of the bones that are separate in mammals are fused together in owls, making them strong enough to support the birds’ weight on the ground. Owls also have large, broad heads with a collection of feathers around the eyes. Called a “facial disc”, this functions like a satellite dish to amplify sound.[ii] The facial disc is their distinctive trait, possessed by all owls but by no other bird. Also, in contrast to most birds, they do quite well in very diverse habitats, from deserts to forests and even in locations near the Arctic, where they are appropriately named snowy owls.[iii] They are also critically important in keeping the rodent population, especially rats, under control.[iv]
Their incredible vision
Owls have one of the sharpest visual acuities of any known animal. They possess large, forward-facing eyes set behind their hawk-like beaks. The perfect and precise distance between their eyes was specially designed to achieve the excellent depth perception required for their carnivorous diet. Their vision is over ten times better than that of humans, so sharp that they could potentially make out letters on a newspaper 100 yards away. Owls’ superior binocular vision was specially created to allow them to hunt at night. Even on nights lacking moonlight, an owl can easily spot a mouse 50 feet away.
Besides their regular eyelids they have an elastic, transparent nictitating membrane that functions like the wiper blades of an automobile’s windshield. It has brush-like cells that keep the eye surface moist, free of dirt, and protected against microorganisms.[v]
Many design innovations contribute to their superior vision. In most owl species, the eyes account for five percent of the bird’s total body weight. If this proportion were applied to human beings, we would have eyes the size of large grapefruits. The larger the eyes, the more light they can take in. Furthermore, owls also have very large pupils, which let even more light strike the retina. Owl eyes also have a higher proportion of rod cells than those of many animals. Their rods are very sensitive to light, allowing them to see superbly in darkness.
In contrast to most animals, their eyes are located at the front of their heads, allowing them to zero in on their prey. However, owls cannot move their eyeballs as most mammals can, so in order to see their side visual field, they rotate their heads up to 270 degrees, enough to see behind themselves. This design produces a very wide field of vision, wider than that of most life forms. They can also turn their necks almost completely upside down![vi] And when tired they can rest their large heads on their shoulders in order to sleep.
To achieve these feats owls have specially designed neck vertebrae that are strong and flexible. Their 14 neck vertebrae (compared to 7 in humans) allow them to twist and turn their necks in just about any direction.[vii] To turn their heads far enough to see behind them, owls have neck blood vessels designed to allow this rotation without causing damage: a special jugular vein arrangement with associated bypass connector blood vessels ensures that the blood supply (and return) is not impeded as the neck rotates.[viii] In contrast to humans, owls have only one occipital articulation with the cervical vertebrae. This design allows an owl to pivot its head on its vertebral column, comparable to a human pivoting on one foot. Their muscle structure is designed to allow this movement as well.
Their incredible hearing
Owls have “one of the most extraordinary capacities for hearing in the animal kingdom.”[ix]
The ear design includes the placement of the ear canals. The left ear is about one inch higher than the right, but points downward while the lower right ear points upward. The result of this design is that the left ear is more sensitive to sound.[x]
Animal life with two ears, one on each side of the head (as is true of owls), requires comparing the different information received from each of the two ears. This is achieved in the brain, which combines the two signals into one. Otherwise, the sound information would be like an echo chamber. Differences in sound arrival time, intensity, loudness, and force all must be fused by the brain into one harmonious whole.[xi]
Owls’ binaural fusion is so good that they can hear a mouse under two feet of snow! They can also determine the exact location of their rodent prey.[xii] Owls achieve this with a second processing system. One processing system fuses the two sounds from both ears (the information message described above). The other system uses differences in arrival time, intensity, loudness, and force to accurately determine the location of the sound source.[xiii] Owls do this so accurately that they can pinpoint their rodent prey’s location to within a few centimeters. This allows them to catch a mouse in complete darkness by relying solely on acoustical information.
Feathers designed for silent flight
Bird wing flapping is noisy, and would make catching mice very difficult. To deal with this problem, each feather is bordered by a fringe of tiny comb-like serrations at both the leading and trailing edges of the wings that break up the air currents which cause flight noise.[xiv] This is important because their prey, such as mice, also have excellent hearing that allows them to hear the slightest sound. This comb-like serration design has inspired engineers to design quieter fan blades in computers, drones and other devices.[xv]
Owls and all birds are believed by the majority of scientific workers to have “evolved from one group of dinosaurs (Theropoda) possibly during the Jurassic Period. Unfortunately, as so often happens, the fossil record is incomplete and one cannot trace all the steps between birds and their reptilian ancestors.”[xvi] Evolutionists cannot trace their evolution because it never occurred. Because owl skulls are very distinct due to their telescoping eye sockets, if they evolved, their change from their bird ancestor should be easy to document in the fossil record.
The fact is “owls are well represented in the fossil record. … New fossil discoveries are rare, and only over time will they either corroborate or refute the ever-changing proposed evolutionary relationships.”[xvii] One problem is, when an owl fossil is found, it often consists of only small fragments. One well-known owl fossil was later confirmed to be a small dinosaur.[xviii]
Actually, the claim that the bird fossil record is incomplete is an understatement. Indeed, evolutionists have no meaningful evidence for the evolution of owls from some pre-owl bird. As Burton admits “Bones of the earliest owls are good owl bones, but not halfway stages between owls and some other ancestral groups.”[xix] This is a problem because, as this review documents, the contrast between owls and all other birds is enormous.
The oldest known owl fossils are not links to theropods, nor even to non-owl birds, but simply extinct owl species. The best-known example, Primoptynx poliotaurus, is an extinct owl from Wyoming, believed by evolutionists to date to the Eocene epoch, around 55 million years ago, close to the time when they believe the dinosaurs lived. Although missing its head, this fossil was largely complete. Because it was discovered in North America, Darwinists conclude that owls must have first evolved in North America. So far, all owl fossils found show only variation within the Genesis kind.
We are indeed grateful that these exceptional predators exist. No wonder we are excited whenever we have the opportunity to observe an owl however large or small. The more we learn about these birds, the greater our appreciation of God, their creator, becomes.
[i] Breeding, Dan. “Night stalker.” Answers 4(4):20-22, Oct.-Dec. 2009, p. 22.
[ii] Devore, Sheryl. “The Greatest Hunter.” Birds & Blooms, October 2017, p. 11.
[iii] Duncan, James. Owls of the World. Boston, MA: Firefly Books, 2003, pp. 73-74.
[iv] Adler, Jerry. “Top Talon.” National Wildlife 10(2):51-60, 1992.
[v] Duncan, James. Owls of the World, 2003, p. 41.
[vi] Duncan, James. Owls of the World, 2003, p. 44.
[vii] Breeding, Dan. “Night stalker,” 2009.
[viii] de Kok-Mercado, Fabian, et al. Science, February 1, 2013. DOI: 10.1126/science.339.6119.514.
[ix] O’Quinn, Jonathan C. “Hear, ye – hear, ye.” Creation Matters 21(2):12, 2016.
[x] Konishi, Masakazu. “Listening with two ears.” Scientific American 268(4):66-73, 1993, p. 67.
[xi] Konishi, Masakazu. “Listening with two ears,” 1993, p. 66.
[xii] Konishi, Masakazu. “Listening with two ears,” 1993, p. 66.
[xiii] Konishi, Masakazu. “Listening with two ears,” 1993, p. 66.
[xiv] Catchpoole, David. “As Silent as a Flying Owl.” Creation 40(2):56, 2018.
[xv] Rao, Chen. “Owl-inspired leading-edge serrations play a crucial role in aerodynamic force production and sound suppression.” Bioinspiration & Biomimetics 12(4):6008, 4 July 2017.
[xvi] Burton, John (editor). Owls of the World: Their Evolution, Structure and Ecology. New York: E. P. Dutton, 1973, Chapter Two: “The Origin of Owls,” pp. 27-33, p. 27.
[xvii] Duncan, James. Owls of the World, 2003, p. 72.
[xviii] Duncan, James. Owls of the World, 2003, p. 72.
[xix] Burton, John. Owls of the World, 1973, p. 15.
Subscribe to Dialogue
|
Fat is a class of food considered a macronutrient (along with carbohydrates and protein) that provides nutritional energy to the body. Fats are made up of naturally occurring fatty acids: chains of hydrocarbons ending in an acid group, held together by single or double bonds.
There are two types of fat that are consumed as food: unsaturated (double-bonded strands missing some hydrogen atoms) and saturated (strands saturated with hydrogen atoms). Some examples of saturated fats are butter, cow's milk, coconut and palm oil, and meat. Persons eating diets high in saturated fat are said to have a higher incidence of heart disease. Unsaturated fats, such as monounsaturated fats, are said to be the best fats to consume as they have less of a negative effect on the cardiovascular system.
Fats are needed because they provide essential fatty acids to the body. These acids are absorbed from foods and are necessary for the normal health of a person and the growth of children. Essential fatty acids contribute to proper cell function and help with the regulation of mood. They fall into two categories: omega-3 and omega-6. Omega-6 essential fatty acids are obtained by eating green leafy vegetables and the oils of some plants such as sunflower, soy and corn. Fish such as tuna, trout, and salmon are the biggest contributors of omega-3 fatty acids.
Fat insulates the body's organs, helps to regulate and maintain body temperature, and is part of the structure of the brain and the body's cell membranes. Fat is also needed for healthy skin and hair, as well as for storing energy for the body. Chemicals resulting from the breakdown of fat in digestion are further converted by the liver to produce glucose and provide the body with energy. Fat also serves the body by aiding in the absorption and transportation of certain vitamins (A, D, E, and K).
It is said that fat consumption should be limited to 30% of the contributing calories in a person's diet. One gram of fat has 9 calories. A diet of 2,000 calories a day would therefore allow about 600 calories from fat (30% of 2,000), which works out to roughly 67 grams (600 divided by 9).
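As a quick illustration of that arithmetic, here is a minimal sketch in Python (the 30% limit and the 9 calories per gram come from the text above; the function name is our own):

```python
CALORIES_PER_GRAM_OF_FAT = 9   # one gram of fat has 9 calories
FAT_SHARE_OF_CALORIES = 0.30   # suggested limit: 30% of daily calories

def max_fat_grams(daily_calories):
    """Suggested daily fat limit, in grams, for a given calorie intake."""
    fat_calories = daily_calories * FAT_SHARE_OF_CALORIES
    return fat_calories / CALORIES_PER_GRAM_OF_FAT

print(round(max_fat_grams(2000)))  # prints 67
```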
|
We have recommended removal of the tonsils and/or adenoid. Here are the answers to some of the more commonly asked questions.
WHAT ARE THE TONSILS?
The tonsils are lumps of tissue that can be seen at each side of the throat. The tonsils are made from lymphoid tissue, which is part of our immune system. The tonsils are larger in childhood and usually begin to shrink down around age 6. In some people the tonsils do not shrink down and persist into adulthood.
WHAT IS THE ADENOID?
The adenoid is a single lump of tissue located at the very back of the nose. The adenoid is made from lymphoid tissue, just like the tonsils. The adenoid cannot be seen through the mouth. To see the adenoid a mirror, small telescope or x-ray is needed.
WHY ARE THE TONSILS REMOVED?
The tonsils are removed either because of recurrent infection or because they are causing obstruction (blockage). We consider surgery to be a last resort. It is used for failure of medical therapy or if medical treatment is inappropriate. Considerations include the number and severity of infections, complications of tonsillitis, such as an abscess or other illnesses that increase the seriousness of tonsillitis. Most cases of tonsillitis are caused by Streptococcal bacteria (strep), although throat cultures do not always come back positive for this. The tonsils may also be enlarged without necessarily being infected. This is called tonsil hypertrophy. If the tonsils (and/or adenoid) are large enough, they may interfere with breathing (especially when asleep) or eating.
WHAT IS SLEEP APNEA?
Enlarged tonsils and adenoid are one of the most common causes of sleep apnea, especially in children. Patients with sleep apnea intermittently block off the flow of air to their lungs while they sleep. This happens when the throat muscles relax during sleep and breathing forces create a weak suction that causes the tonsils and throat tissue to collapse in. Loud snoring is a clue to this problem that affects children and adults. There may be some pauses in breathing or choking and gasping sounds made during sleep. Sleep apnea is a potentially serious condition that can affect the heart, lungs and blood pressure. Patients with sleep apnea are often tired during the day and fall asleep easily. Some children have behavioral problems or difficulty concentrating in school. Removing the tonsils and adenoid often cures the sleep apnea in this situation.
WHY IS THE ADENOID REMOVED?
The adenoid is often involved in tonsil infections and contributes to sleep apnea. The adenoid may also block nasal breathing and occasionally cause sinus problems. In children 4 or older, removal of the adenoid may help clear up recurrent ear infections or fluid in the middle ear.
WHAT IS A TONSILLECTOMY?
A tonsillectomy is a surgical procedure where both tonsils are removed. This operation requires a brief general anesthetic. The patient’s mouth is held open using a special device and the tonsils are dissected out. Once the tonsils are removed from the muscles of the throat, bleeding is stopped using electric cautery or sutures. The procedure normally takes around 30 minutes. When a tonsillectomy and adenoidectomy are performed together it is often called a T&A procedure.
WHAT IS AN ADENOIDECTOMY?
An adenoidectomy is the removal of the adenoid. This operation, often done at the same time as a tonsillectomy, also requires a brief general anesthetic. A special device is used to hold the mouth open. An instrument is then passed through the mouth, behind the palate and into the area behind the nose. The adenoid is removed and any bleeding is stopped.
WILL REMOVING THE TONSILS AND ADENOID AFFECT THE IMMUNE SYSTEM?
The tonsils and adenoid are made from lymphoid tissue, which is part of the body’s immune system. There are around 300 other structures in the head and neck area, called lymph glands, that are also made from lymphoid tissue. Since the tonsils and adenoid represent only a small fraction of this tissue, there is no noticeable effect on the immune system. This has been proven with research, which clearly showed that children who had their tonsils and adenoid removed were no more likely to get infections than other children. In fact, those patients who have recurrent infections of the tonsils, adenoid or ears will often get far fewer infections after the tonsils and/or adenoid are removed.
AM I TOO OLD TO HAVE MY TONSILS REMOVED?
There is no age limit for this surgery. The tonsils often persist into adulthood, and can cause recurrent infections and sleep apnea. As long as the patient’s general health is adequate a tonsillectomy can be performed without difficulty.
Patients are evaluated by us and a decision is made that a tonsillectomy and/or adenoidectomy is necessary. Our office then arranges the surgery. It is important to inform your surgeon about any medical disorders or bleeding problems. Blood thinners such as aspirin, Plavix, ibuprofen or other similar drugs should be stopped prior to surgery. If you take any of these, please inform your surgeon. The patient should have nothing to eat or drink after midnight on the night prior to surgery unless otherwise specified. Please show up at the scheduled time arranged by the hospital/surgery center. There are forms to fill out and patients are seen by a nurse and anesthetist prior to the procedure.
Children are put to sleep using an anesthetic gas with a small mask. This is a painless, gentle and quick method. An intravenous line is then started to give medication and fluids to the patient. Adults are put to sleep using the intravenous line rather than a mask. The procedure is then performed by your surgeon. After the surgery is finished, the patient is woken up and taken to the recovery area. Parents are allowed to see their child in the recovery area once they are awake and stable. The intravenous is removed when the patient is stable and drinking fluids. Patients are usually discharged a few hours after surgery. You cannot drive for 24 hours after anesthesia so please arrange transportation.
On leaving the hospital/surgery center you will be given prescriptions for medications. This will usually include a strong pain medicine, and often an antibiotic. You should fill these prescriptions as soon as possible so they are available. No aspirin, ibuprofen or similar products should be used until at least two weeks after the procedure.
When patients return home they should drink fluids frequently. Keeping the throat moist helps healing and reduces pain. Common complaints after surgery include pain, nasal congestion and bad breath. After tonsillectomy patients should expect significant pain for around 7-10 days, felt in the throat and ears. You should use the prescribed pain medication as directed for this discomfort; anesthetic lozenges may also be helpful.
Patients are encouraged to drink fluids and eat soft foods, avoiding anything spicy or acidic. Popsicles and frozen ice treats also provide fluids and are soothing. Some patients will not eat solid food for a few days. This is acceptable as long as the patient drinks and does not become dehydrated. Soft foods are encouraged, and foods that can scratch the throat, such as potato chips, pretzels, etc., should be avoided. Milk/dairy products may thicken mucus and, if this causes discomfort, should be discontinued.
Patients are allowed only light activities for two weeks after surgery. Heavy straining or rough play may cause bleeding. Bleeding is the most important potential complication following surgery. It occurs in 3% of patients and may occur up to two weeks following the procedure. If a patient spits up blood or vomits blood they should be taken to the hospital ER for evaluation. Once the first two weeks have gone by, normal activities can usually be resumed. If you have any concerns or problems following surgery, please call our office for assistance.
In case of an emergency, we are available 24 hours a day through our answering service. You should remember that no surgery is 100% effective and that there are risks to all surgical procedures. Since some risks only actually occur very rarely, they cannot all be mentioned. We hope this pamphlet helps you better understand your condition, the recommended procedures and its risks. If you have any questions about your procedure or its risks please call us prior to surgery at 313-582-8853.
|
In the realm of science and activism, certain individuals stand out for their significant contributions to society. One such remarkable figure is Henrietta Borstein Douglas, a trailblazer who made groundbreaking strides in various fields, leaving an indelible mark on history. This article delves into the life, achievements, and lasting impact of this extraordinary woman.
Early Life and Education
Henrietta Borstein Douglas was born on March 15, 1945, in a small town in upstate New York. Raised by compassionate parents who encouraged her curiosity, Henrietta’s passion for learning was evident from a young age. Her early education laid the foundation for her future endeavors.
Venturing into Science
1. A Pioneering Mind
Henrietta’s inquisitive mind led her to pursue a degree in biochemistry, a relatively uncharted territory for women in the 1960s. Undeterred by societal norms, she entered a male-dominated field and quickly became recognized for her exceptional intellect and dedication.
2. The Discovery That Shook the Scientific World
During her graduate studies, Henrietta, along with her research team, made an astonishing discovery that would revolutionize modern medicine. They identified a unique protein responsible for regulating cell growth, opening the door to potential breakthroughs in cancer treatment.
A Voice for Activism
1. Championing Women’s Rights
As Henrietta’s scientific career flourished, so did her commitment to advocating for women’s rights. She fearlessly spoke out against gender discrimination in academia and the workplace, inspiring countless women to pursue their dreams, regardless of societal barriers.
2. Environmental Advocacy
Recognizing the importance of environmental preservation, Henrietta actively participated in various campaigns to raise awareness about climate change and pollution. Her efforts earned her a reputation as a formidable environmental advocate.
Recognition and Awards
Henrietta Borstein Douglas’s significant contributions did not go unnoticed. Over the course of her illustrious career, she received numerous accolades, including the prestigious Nobel Prize in Medicine for her groundbreaking discovery.
Legacy and Impact
1. Shaping Medical Advancements
Henrietta’s groundbreaking discovery paved the way for targeted therapies and personalized medicine. Her research remains foundational in the fight against cancer and continues to save countless lives.
2. Empowering Future Generations
Henrietta’s activism inspired a generation of women to break free from societal constraints and pursue their ambitions in science and other fields. Her legacy lives on in the empowered voices of the women she encouraged to speak up and create change.
In conclusion, Henrietta Borstein Douglas was a true visionary whose contributions to science and activism continue to resonate today. Her pioneering research in biochemistry and her unwavering dedication to advocating for women’s rights and environmental causes have left an enduring impact on society. We owe it to her legacy to carry forward her torch of progress, innovation, and compassion.
|
Writing is both an academic skill and a tool for creativity in almost any classroom. Children are taught to spell and write coherently as much as they are taught to think outside the box and entertain their most fanciful ideas. When these ideas are fashioned into stories through writing, children are able to make meaning of the world around them in a clever and often humorous way.
Not all students enjoy writing. Some would much rather build with Legos, play dress-up or draw pictures on paper. However, no matter what students are doing in the classroom, there’s one thing that’s always taking place: storytelling. They are telling stories in the imaginary worlds they create in the classroom. Telling stories is what young children are good at, and it’s a teacher’s job to make sure they are given the chance to express themselves as a way to build self-esteem and confidence. One such way is through writing.
Now what happens when your kids run short of ideas, and no words come out? What about the kids who are too excited by other things to sit still and put pen to paper? What if they are, plain and simple, bored by writing?
Here are 5 tried-and-tested tips to keep young students engaged in their writing...
Tip #1: Get them talking, get them writing
Sometimes the problem isn’t with a student’s willingness to write. It could be that they just don’t know what to write about. It’s possible that they can’t relate to the topic, or the topic isn’t interesting enough. Keep them engaged by asking them questions about what they like or enjoy doing. Involve them when planning or choosing topics. Some kids take much longer to warm-up but even the shyest student will have something to say. Once you get them talking, eventually you can get them writing about what they’re talking about. Be patient. Keep probing until the words come out!
Tip #2: Set up the environment to encourage writing
Create an environment that encourages young students to participate in writing. Set up word walls related to your topic and provide word mats to support students who are still learning or struggling to write. Display pictures with labels next to them to help younger students make connections between words and the things they represent. These learning aids make writing more accessible as it helps students feel less worried about their limitations in writing and instead focus on the writing itself.
Tip #3: Give them choices to express creativity
Make a variety of writing materials available for them to use such as pens, pencils, markers, crayons, magnetic writing boards and other tools which can be adapted to student needs and interests. Welcome the use of technology for students who may otherwise write more when typing on a keyboard or on story writing apps like PopSmartWrite. Young students who are given more freedom, choice and creativity in the writing process are more likely to stay motivated throughout the task.
Tip #4: Use writing prompts to stimulate imagination
Writing prompts can take many forms. Use fun and imaginative topics and story starters to inspire young students to craft their own whimsical stories and adventures. Have students take photos of whatever piques their interest and invite them to write stories about it. Book quotes can be used as prompts for reflective writing among older children, while poetry books like those of Dr. Seuss can urge young students to come up with their own inventive rhymes. Anything can be a prompt so long as it draws out words and ideas from your students!
Tip #5: Give them an audience to recognize their work
Young students feel more assured of their writing when someone is there to read or listen to their work. Invite students to share their writing in front of the class or their family. Generate curiosity by asking questions and encouraging the audience to do the same. By expressing genuine interest in their work, students are affirmed of the value of their thoughts and ideas. Recognition is a powerful tool in boosting a students’ sense of self. Once they experience the intrinsic rewards of writing, they will turn to the experience more often.
To sum it all up, make writing fun, student-centered and provide support where needed. Maximize the use of learning aids in the environment so that any obstacles to writing are removed, and students can fully engage in the writing process without feeling afraid of making mistakes or not having the writing skills needed to succeed. Make room for free choice, and allow them to write about things that interest them the most. Give them opportunities to showcase their writing to an audience and recognize their strengths and individuality.
|
Astronomers using NASA’s Hubble Space Telescope have conducted the first spectroscopic survey of the Earth-sized planets (d, e, f, and g) within the habitable zone around the nearby star TRAPPIST-1. This study is a follow-up to Hubble observations made in May 2016 of the atmospheres of the inner TRAPPIST-1 planets b and c.
Hubble reveals that at least three of the exoplanets (d, e, and f) do not seem to have puffy, hydrogen-rich atmospheres similar to those of gaseous planets such as Neptune.
Additional observations are needed to determine the hydrogen content of the fourth planet’s (g) atmosphere. Hydrogen is a greenhouse gas, which smothers a planet orbiting close to its star, making it hot and inhospitable to life. The results, instead, favor more compact atmospheres like those of Earth, Venus, and Mars.
By not detecting the presence of a large abundance of hydrogen in the planets’ atmospheres, Hubble is helping to pave the way for NASA’s James Webb Space Telescope, scheduled to launch in 2019. Webb will probe deeper into the planetary atmospheres, searching for heavier gases such as carbon dioxide, methane, water, and oxygen. The presence of such elements could offer hints of whether life could be present, or if the planet were habitable.
“Hubble is doing the preliminary reconnaissance work so that astronomers using Webb know where to start,” said Nikole Lewis of the Space Telescope Science Institute (STScI) in Baltimore, Maryland, co-leader of the Hubble study. “Eliminating one possible scenario for the makeup of these atmospheres allows the Webb telescope astronomers to plan their observation programs to look for other possible scenarios for the composition of these atmospheres.”
The planets orbit a red dwarf star that is much smaller and cooler than our Sun. The four alien worlds are members of a seven-planet system around TRAPPIST-1. All seven of the planetary orbits are closer to their host star than Mercury is to our Sun. Despite the planets’ close proximity to TRAPPIST-1, the star is so much cooler than our Sun that liquid water could exist on the planets’ surfaces.
Two of the planets were discovered in 2016 by TRAPPIST (the Transiting Planets and Planetesimals Small Telescope) in Chile. NASA’s Spitzer Space Telescope and several ground-based telescopes uncovered five additional ones, increasing the total number to seven. The TRAPPIST-1 system is located about 40 light-years from Earth.
“No one ever would have expected to find a system like this,” said team member Hannah Wakeford of STScI. “They’ve all experienced the same stellar history because they orbit the same star. It’s a goldmine for the characterization of Earth-sized worlds.”
The Hubble observations took advantage of the fact that the planets cross in front of their star every few days. Using the Wide Field Camera 3, astronomers made spectroscopic observations in infrared light, looking for the signature of hydrogen that would filter through a puffy, extended atmosphere, if it were present. “The planets are close enough to their host star, and they have very short orbital periods, which means there are lots of opportunities to make observations,” Lewis said.
Although Hubble did not find evidence of hydrogen, the researchers suspect the planetary atmospheres could have contained this lightweight gaseous element when they first formed. The planets may have formed farther away from their parent star in a colder region of the gaseous protostellar disk that once encircled the infant star.
“The system is dynamically stable now, but the planets could not have formed in this tight pack,” Lewis said. “They’re too close together now, so they must have migrated to where we see them. Their primordial atmospheres, largely composed of hydrogen, could have boiled away as they got closer to the star, and then the planets formed secondary atmospheres.”
In contrast, the rocky planets in our solar system likely formed in the hotter, dryer region closer to the Sun. “There are no analogs in our solar system for these planets,” Wakeford said. “One of the things researchers are finding is that many of the more common exoplanets don’t have analogs in our solar system. So the Hubble observations are a unique opportunity to probe an unusual system.”
The Hubble team plans to conduct follow-up observations in ultraviolet light to search for trace hydrogen escaping the planets’ atmospheres, produced from processes involving water or methane lower in their atmospheres.
Astronomers will then use the Webb telescope to help them better characterize those planetary atmospheres. The exoplanets may possess a range of atmospheres, just like the terrestrial planets in our solar system.
“One of these four could be a water world,” Wakeford said. “One could be an exo-Venus, and another could be an exo-Mars. It’s interesting because we have four planets that are at different distances from the star. So we can learn a little bit more about our own diverse solar system, because we’re learning about how the TRAPPIST star has impacted its array of planets.”
Publication: Julien de Wit, et al., “Atmospheric reconnaissance of the habitable-zone Earth-sized planets orbiting TRAPPIST-1,” Nature Astronomy (2018) doi:10.1038/s41550-017-0374-z
|
It is said that one man’s trash is another man’s treasure. This could not be more true for the garbage piles left by humans in times long past. Archaeologists call these deposits middens. While the word midden may sound like a euphemistic way to refer to what was, at the time, actual garbage, the word has equally trashy origins. It comes via Middle English from the Old Norse word mydyngja, which means "manure pile". While it still has this meaning in modern English, archaeologists use the word to refer specifically to the discarded remains of food preparation and consumption.
Middens can occur anywhere a settlement once existed but are most commonly found on or near the (ancient or modern) coast, where they are made up of shells and the bones of fish, birds and mammals, mixed with the ash and charcoal from fires that cooked them. Middens come in all shapes and sizes. Some represent single events, while others are built up over hundreds or thousands of years. Some middens, such as the Whaleback Midden in Maine, became so massive they were mined by Europeans in the 19th century to produce lime, chicken feed, and fertiliser!
The shell species the midden is composed of tell us about the past environment in the surrounding area; shell species like pipi live in sandy bays while mussels cling to rocky shores. If the shell species change over time, or differ from those found in the area today, we can ask questions about resource management, cultural change, or environmental change. Bivalves (shells with two halves) lay down layers of shell incrementally as they grow. Careful analysis of these growth rings allows archaeologists to work out the season the shells were harvested and to see if there are patterns of seasonal exploitation. In addition to all this useful data that can be gleaned from the shells themselves, the presence of large quantities of shells in dense middens changes the pH balance of the soil or sand they are buried in, which allows small, fragile bones that would normally be quickly degraded or dissolved by acidic substrate to survive.
Image: The midden in our Shag River Mouth diorama. Visit Tāngata Whenua for a closer look.
The bones of fish, birds, and mammals are common in middens but often make up a much smaller proportion of the total. By analysing the species of fish present we can learn about past fishing practices – were they targeting fish species that lived close to shore, or venturing further out to deeper water to target different species? Analysis of the ear bones of fish can also be used to determine the season the fish died.
The remains of birds can inform archaeologists about the wider terrestrial environment. Were they forest-dwelling species, ones that preferred more open scrub, or coastal waders? If the remains of extinct species are present they can tell us about their preferred environments. Light can also be shed on the former range of species. Analysis of middens tells us that seals were once found all over New Zealand.
However, reconstructing the past environment through the analysis of terrestrial species can only go so far. To build up a more definitive picture of the environment during the period a midden was produced, we turn to archaeobotanical analysis. The fuel for the fires used for cooking and warmth was sourced from the immediate environment. Discarded seeds as well as bones were thrown into the fire, but not all were fully consumed by the flames. Many seeds and burnt fragments of wood survive in a carbonised form. All plant species have a unique cell pattern when examined under the microscope that often allows them to be identified, even when fragmented into small pieces. By identifying the species represented in charcoal, we can build a more complete view of the environment in which people lived.
Finally we come to the smallest component of middens, the artefacts. These are usually broken items that ended their days on the trash pile. These could be things as simple as broken fish hooks or as conspicuous as adzes. The debris from producing items like fish hooks could also end up in middens, which in turn sheds light on the manufacturing process and tells us at what point an item would not have been repaired.
Now we can see that a carefully analysed midden gives us a window through which we can look back to the past. Unfortunately this window is in danger of closing. These numerous and important archaeological sites are the most vulnerable to climate change and rising sea levels given their commonly coastal locations. Higher than normal tides and storms can cause sites to erode faster or expose new sites.
All archaeological sites are protected under New Zealand law; it is an offence to interfere with them. If you find a midden eroding, please note the location and contact Heritage New Zealand Pouhere Taonga to report it. Together we can help preserve our past!
Fancy becoming an archaeologist for a day? Come along to the Museum on Saturday 4 May and learn to sort and identify shells, analyse adzes/toki and compare animal bones (including moa). 10am–2pm, Atrium Level 1.
Sanger, David, and Mary Jo (Elson) Sanger. "Boom and Bust on the River: The Story of the Damariscotta Oyster Shell Heaps." Archaeology of Eastern North America 14 (1986): 65–78. http://www.jstor.org/stable/40914267. p. 67.
Renfrew, Colin, and Paul Bahn. Archaeology: Theories, Methods and Practice (1991), 2008 ed. p. 305.
Waselkov, Gregory A. "Shellfish Gathering and Shell Midden Archaeology." Advances in Archaeological Method and Theory 10 (1987): 93–210. http://www.jstor.org/stable/20210088. p. 155.
Mellars, P. A., and M. R. Wilkinson. "Fish Otoliths as Indicators of Seasonality in Prehistoric Shell Middens: The Evidence from Oronsay (Inner Hebrides)." Proceedings of the Prehistoric Society 46 (1980): 19–44.
Smith, I. W. G. (1989). "Maori Impact on the Marine Megafauna: Pre-European Distributions of New Zealand Sea Mammals." In Sutton, D. G. (ed.), "Saying So Doesn't Make It So": Papers in Honour of B. Foss Leach, pp. 76–108. New Zealand Archaeological Association, Dunedin.
Renfrew, Colin, and Paul Bahn. Archaeology: Theories, Methods and Practice (1991), 2008 ed. p. 251.
Top image: Cross-section of the midden in Tāngata Whenua. Shellfish were a constant and reliable source of food for the people of the Shag River Mouth settlement. Otago Museum.
|
The ideas of social Darwinism attracted little support among the mass of American industrial laborers. American workers toiled in difficult jobs for long hours and little pay. Mechanization and mass production threw skilled laborers into unskilled positions. Industrial work ebbed and flowed with the economy. The typical industrial laborer could expect to be unemployed one month out of the year. Workers labored sixty hours a week and could still expect their annual income to fall below the poverty line. Among the working poor, wives and children were forced into the labor market to compensate. Crowded cities, meanwhile, failed to accommodate growing urban populations, and skyrocketing rents trapped families in crowded slums.
Strikes ruptured American industry throughout the late-nineteenth and early-twentieth centuries. Workers seeking higher wages, shorter hours, and safer working conditions had struck throughout the antebellum era, but organized unions were fleeting and transitory. The Civil War and Reconstruction seemed to briefly distract the nation from the plight of labor, but the end of the sectional crisis and the explosive growth of big business, unprecedented fortunes, and a vast industrial workforce in the last quarter of the nineteenth century sparked the rise of a vast American labor movement.
The failure of the Great Railroad Strike of 1877 convinced workers of the need to organize. Union memberships began to climb. The Knights of Labor enjoyed considerable success in the early 1880s, due in part to its efforts to unite skilled and unskilled workers. It welcomed all laborers, including women (the Knights only barred lawyers, bankers, and liquor dealers). By 1886, the Knights had over 700,000 members. The Knights envisioned a cooperative producer-centered society that rewarded labor, not capital, but, despite their sweeping vision, the Knights focused on practical gains that could be won through the organization of workers into local unions.
In Marshall, Texas, in the spring of 1886, one of Jay Gould’s rail companies fired a Knights of Labor member for attending a union meeting. His local union walked off the job, and soon others joined. From Texas and Arkansas into Missouri, Kansas, and Illinois, nearly 200,000 workers struck against Gould’s rail lines. Gould hired strikebreakers and the Pinkerton Detective Agency, a kind of private security contractor, to suppress the strikes and get the rails moving again. Political leaders helped him, and state militias were called out in support of Gould’s companies. The Texas governor called out the Texas Rangers. Workers countered by destroying property, which won them only negative headlines and, for many, justified the use of strikebreakers and militiamen. The strike broke, briefly undermining the Knights of Labor, but the organization regrouped and set its eyes on a national campaign for the eight-hour day.
In 1886, the campaign for an eight-hour day, long a rallying cry that united American laborers, culminated in a national strike on May 1. Somewhere between 300,000 and 500,000 workers struck across the country.
In Chicago, police forces killed several workers while breaking up protestors at the McCormick reaper works. Labor leaders and radicals called for a protest at Haymarket Square the following day, which police also proceeded to break up. But as they did, a bomb exploded and killed seven policemen. Police fired into the crowd, killing four. The deaths of the Chicago policemen sparked outrage across the nation, and the sensationalization of the “Haymarket Riot” helped many Americans to associate unionism with radicalism. Eight Chicago anarchists were arrested and, despite no direct evidence implicating them in the bombing, were charged and found guilty of conspiracy. Four were hanged (and one committed suicide before he could be). Membership in the Knights had peaked earlier that year but fell rapidly after Haymarket: the group became associated with violence and radicalism. The national movement for an eight-hour day collapsed.
The American Federation of Labor (AFL) emerged as a conservative alternative to the vision of the Knights of Labor. An alliance of craft unions (unions composed of skilled workers), the AFL rejected the Knights’ expansive vision of a “producerist” economy and advocated “pure and simple trade unionism,” a program that aimed for practical gains (higher wages, fewer hours, and safer conditions) through a conservative approach that tried to avoid strikes. But workers continued to strike.
In 1892, the Amalgamated Association of Iron and Steel Workers struck at one of Carnegie’s steel mills in Homestead, Pennsylvania. After repeated wage cuts, workers shut the plant down and occupied the mill. The plant’s operator, Henry Clay Frick, immediately called in hundreds of Pinkerton detectives but the steel workers fought back. The Pinkertons tried to land by river and were besieged by the striking steel workers. After several hours of pitched battle, the Pinkertons surrendered, ran a bloody gauntlet of workers, and were kicked out of the mill grounds. But the Pennsylvania governor called the state militia, broke the strike, and reopened the mill. The union was essentially destroyed in the aftermath.
Still, despite repeated failure, strikes continued to roll across the industrial landscape. In 1894, workers in George Pullman’s “Pullman Car” factories struck when he cut wages by a quarter but kept rents and utilities in his company town constant. The American Railway Union (ARU), led by Eugene Debs, launched a sympathy strike: the ARU would refuse to handle any Pullman cars on any rail line anywhere in the country. Thousands of workers struck and national railroad traffic ground to a halt. Unlike in nearly every other major strike, the governor of Illinois sympathized with the workers and refused to dispatch the state militia. It didn’t matter. In July, President Grover Cleveland dispatched thousands of American soldiers to break the strike, and a federal court issued a preemptive injunction against Debs and the union’s leadership. The strike violated the injunction, and Debs was arrested and imprisoned. The strike evaporated without its leadership. Jail radicalized Debs, proving to him that political and judicial leaders were merely tools for capital in its struggle against labor.
The degrading conditions of industrial labor sparked strikes across the country. The final two decades of the nineteenth century saw over 20,000 strikes and lockouts in the United States. Industrial laborers struggled to carve out for themselves a piece of the prosperity lifting investors and a rapidly expanding middle class into unprecedented standards of living. But workers were not the only ones struggling to stay afloat in industrial America. American farmers also lashed out against the inequalities of the Gilded Age and denounced political corruption for enabling economic theft.
|
Not all science teaching leads to scientifically literate students — students who can apply their knowledge to make sense of their own lives and make a difference in the world around them. But when teaching explicitly incorporates scientific literacy, it produces motivated students with a deep understanding of science concepts. This Literacy in Science series shows just how easy it can be to make scientific literacy activities a regular, engaging part of your science programme — and to reap the rewards. The accessible resource materials have been designed for customising: you can choose the set of activities on the concept relevant to your current focus and use it to either consolidate learning or really delve into the concept and its real-life implications. As well as offering a diversity of topics within each science discipline, activities cover the full range of learning contexts from individual independent work to whole-class tasks.
|
The parsec is a unit of length used to measure large distances to astronomical objects outside the Solar System. A parsec is defined as the distance at which one astronomical unit subtends an angle of one arcsecond, which corresponds to 648000/π astronomical units. One parsec is equal to about 31 trillion kilometres, or 19 trillion miles. The nearest star, Proxima Centauri, is about 1.3 parsecs from the Sun, and most of the stars visible to the unaided eye in the night sky are within 500 parsecs of the Sun. The parsec unit was first suggested in 1913 by the British astronomer Herbert Hall Turner. Named as a portmanteau of the parallax of one arcsecond, it was defined to make calculations of astronomical distances from only their raw observational data quick and easy for astronomers. For this reason, it is the unit preferred in astronomy and astrophysics, though the light-year remains prominent in popular science texts and common usage. Although parsecs are used for the shorter distances within the Milky Way, multiples of parsecs are required for the larger scales in the universe: kiloparsecs for the more distant objects within and around the Milky Way, megaparsecs for mid-distance galaxies, and gigaparsecs for many quasars and the most distant galaxies.
In August 2015, the IAU passed Resolution B2, which, as part of the definition of a standardized absolute and apparent bolometric magnitude scale, mentioned an existing explicit definition of the parsec as exactly 648000/π astronomical units, or 3.08567758149137×10^16 metres. This corresponds to the small-angle definition of the parsec found in many contemporary astronomical references. The parsec is defined as being equal to the length of the longer leg of an elongated imaginary right triangle in space. The two dimensions on which this triangle is based are its shorter leg, of length one astronomical unit, and the subtended angle at the vertex opposite that leg, measuring one arcsecond. Applying the rules of trigonometry to these two values, the unit length of the other leg of the triangle can be derived. One of the oldest methods used by astronomers to calculate the distance to a star is to record the difference in angle between two measurements of the position of the star in the sky. The first measurement is taken from the Earth on one side of the Sun, and the second is taken half a year later, when the Earth is on the opposite side of the Sun.
The distance between the two positions of the Earth when the two measurements were taken is twice the distance between the Earth and the Sun. The difference in angle between the two measurements is twice the parallax angle, which is formed by lines from the Sun and Earth to the star at the distant vertex. The distance to the star can then be calculated using trigonometry. The first successful published direct measurements of an object at interstellar distances were undertaken by German astronomer Friedrich Wilhelm Bessel in 1838, who used this approach to calculate the 3.5-parsec distance of 61 Cygni. The parallax of a star is defined as half of the angular distance that a star appears to move relative to the celestial sphere as Earth orbits the Sun. Equivalently, it is the subtended angle, from that star's perspective, of the semimajor axis of the Earth's orbit. The star, the Sun and the Earth form the corners of an imaginary right triangle in space: the right angle is the corner at the Sun, and the corner at the star is the parallax angle.
The length of the side opposite the parallax angle is the distance from the Earth to the Sun (defined as one astronomical unit), and the length of the adjacent side gives the distance from the Sun to the star. Therefore, given a measurement of the parallax angle, along with the rules of trigonometry, the distance from the Sun to the star can be found. A parsec is defined as the length of the side adjacent to the vertex occupied by a star whose parallax angle is one arcsecond. The use of the parsec as a unit of distance follows from Bessel's method, because the distance in parsecs can be computed simply as the reciprocal of the parallax angle in arcseconds. No trigonometric functions are required in this relationship because the small angles involved mean that the approximate solution of the skinny triangle can be applied. Though it may have been used before, the term parsec was first mentioned in an astronomical publication in 1913. Astronomer Royal Frank Watson Dyson expressed his concern for the need of a name for that unit of distance.
He proposed the name astron, but mentioned that Carl Charlier had suggested siriometer and Herbert Hall Turner had proposed parsec. It was Turner's proposal that was adopted. In this construction, S represents the Sun and E the Earth at one point in its orbit; thus the distance ES is one astronomical unit. The angle SDE is one arcsecond, so by definition D is a point in space at a distance of one parsec from the Sun. Through trigonometry, the distance SD is calculated as follows:

SD = ES / tan 1″ ≈ ES / 1″ = 1 au / ((1/(60 × 60)) × (π/180)) = (648000/π) au ≈ 206,264.81 au
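To make the arithmetic concrete, here is a short Python sketch (our own check, not part of the source text) that evaluates both the exact trigonometric form and the small-angle approximation:

```python
import math

AU_M = 1.495978707e11  # one astronomical unit in metres (IAU definition)

# One arcsecond in radians: (1/3600) of a degree, times pi/180.
one_arcsec = math.radians(1.0 / 3600.0)

exact = AU_M / math.tan(one_arcsec)     # SD = ES / tan(1")
approx = AU_M / one_arcsec              # small-angle shortcut: tan(x) ~ x

print(f"exact : {exact:.6e} m")         # ~3.085678e+16 m
print(f"approx: {approx:.6e} m")        # agrees to ~12 significant figures
print(f"in au : {approx / AU_M:,.2f}")  # 206,264.81 au = 648000/pi
```

The two results differ only far beyond the precision of any distance measurement, which is why the small-angle form is the one used in practice.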
ArXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of mathematics, physics, astronomy, electrical engineering, computer science, quantitative biology, statistics, and mathematical finance and economics, which can be accessed online. In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and had hit a million by the end of 2014. By October 2016 the submission rate had grown to more than 10,000 per month. ArXiv was made possible by the compact TeX file format, which allowed scientific papers to be transmitted over the Internet and rendered client-side. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Paul Ginsparg recognized the need for central storage, and in August 1991 he created a central repository mailbox stored at the Los Alamos National Laboratory which could be accessed from any computer.
Additional modes of access were soon added: FTP in 1991, Gopher in 1992, and the World Wide Web in 1993. The term e-print was adopted to describe the articles. ArXiv began as a physics archive, called the LANL preprint archive, but soon expanded to include astronomy, mathematics, computer science, quantitative biology and, most recently, statistics. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the expanding technology, in 2001 Ginsparg changed institutions to Cornell University and changed the name of the repository to arXiv.org. It is now hosted principally by Cornell, with eight mirrors around the world. Its existence was one of the precipitating factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists upload their papers to arXiv.org for worldwide access and sometimes for reviews before they are published in peer-reviewed journals. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is $826,000 for 2013 to 2017, funded jointly by Cornell University Library, the Simons Foundation and annual fee income from member institutions.
This model arose in 2010, when Cornell sought to broaden the financial funding of the project by asking institutions to make annual voluntary contributions based on the amount of download usage by each institution. Each member institution pledges a five-year funding commitment to support arXiv. Based on institutional usage ranking, the annual fees are set in four tiers from $1,000 to $4,400. Cornell's goal is to raise at least $504,000 per year through membership fees generated by 220 institutions. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence". However, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. Although arXiv is not peer reviewed, a collection of moderators for each area review the submissions; the lists of moderators for many sections of arXiv are publicly available, but moderators for most of the physics sections remain unlisted.
Additionally, an "endorsement" system was introduced in 2004 as part of an effort to ensure content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for restricting scientific inquiry. A majority of the e-prints are submitted to journals for publication, but some work, including some influential papers, remain purely as e-prints and are never published in a peer-reviewed journal. A well-known example of the latter is an outline of a proof of Thurston's geometrization conjecture, including the Poincaré conjecture as a particular case, uploaded by Grigori Perelman in November 2002.
Perelman appears content to forgo the traditional peer-reviewed journal process, stating: "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it". Despite this non-traditional method of publication, other mathematicians recognized this work by offering the Fields Medal and Clay Mathematics Millennium Prizes to Perelman, both of which he refused. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the arXiv software if generating the final PDF file fails, if any image file is too large, or if the total size of the submission is too large. ArXiv now allows one to store and modify an incomplete submission, and only finalize the submission when ready. The time stamp on the article is set when the submission is finalized. The standard access route is through the arXiv.org website or one of several mirrors.
Proper motion is the astronomical measure of the observed changes in the apparent places of stars or other celestial objects in the sky, as seen from the center of mass of the Solar System, compared to the abstract background of the more distant stars. The components for proper motion in the equatorial coordinate system are given in the direction of right ascension and of declination; their combined value is computed as the total proper motion. It has dimensions of angle per time, typically arcseconds per year or milliarcseconds per year. Knowledge of the proper motion and radial velocity allows calculations of true stellar motion, or velocity in space with respect to the Sun, and, by coordinate transformation, the motion with respect to the Milky Way. Proper motion is not entirely "proper", because it includes a component due to the motion of the Solar System itself.
Ursa Major or Crux, for example, looks nearly the same now as it did centuries ago. However, precise long-term observations show that the constellations change shape, albeit slowly, and that each star has an independent motion. This motion is caused by the movement of the stars relative to the Solar System. The Sun travels in a nearly circular orbit about the center of the Milky Way at a speed of about 220 km/s at a radius of 8 kpc from the center, which can be taken as the rate of rotation of the Milky Way itself at this radius. The proper motion is a two-dimensional vector and is thus defined by two quantities: its position angle and its magnitude. The first quantity indicates the direction of the proper motion on the celestial sphere, and the second quantity is the motion's magnitude, expressed in arcseconds per year or milliarcseconds per year. Proper motion may alternatively be defined by the angular changes per year in the star's right ascension and declination, using a constant epoch in defining these. The components of proper motion are, by convention, arrived at as follows.
Suppose an object moves from coordinates (α₁, δ₁) to coordinates (α₂, δ₂) in a time Δt. The proper motions are given by:

μα = (α₂ − α₁) / Δt,  μδ = (δ₂ − δ₁) / Δt.

The magnitude of the proper motion μ is given by the Pythagorean theorem:

μ² = μδ² + μα² · cos²δ,  or equivalently  μ² = μδ² + μα*²,

where δ is the declination. The factor cos²δ accounts for the fact that the radius from the axis of the sphere to its surface varies as cosδ, being, for example, zero at the pole. Thus, the component of velocity parallel to the equator corresponding to a given angular change in α is smaller the further north the object's location. The change μα, which must be multiplied by cosδ to become a component of the proper motion, is sometimes called the "proper motion in right ascension", and μδ the "proper motion in declination". If the proper motion in right ascension has been converted by cosδ, the result is designated μα*. For example, the proper motion results in right ascension in the Hipparcos Catalogue have been converted. Hence, the individual proper motions in right ascension and declination are made equivalent for straightforward calculations of various other stellar motions.
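The component formulas above translate directly into code. The sketch below is our own illustration (the coordinates are hypothetical, loosely modelled on a fast-moving star such as Barnard's Star), and it ignores right-ascension wrap-around at 0°/360°:

```python
import math

def proper_motion(ra1, dec1, ra2, dec2, dt_years):
    """Return (mu_alpha*, mu_delta, mu) in arcsec/yr from two epochs.

    ra/dec are in degrees. mu_alpha* is the right-ascension component
    already multiplied by cos(declination), so the two components
    combine Pythagorean-style, as in the formulas above.
    """
    mu_alpha = (ra2 - ra1) * 3600.0 / dt_years      # arcsec/yr
    mu_delta = (dec2 - dec1) * 3600.0 / dt_years    # arcsec/yr
    dec_mid = math.radians((dec1 + dec2) / 2.0)
    mu_alpha_star = mu_alpha * math.cos(dec_mid)    # apply the cos(dec) factor
    mu = math.hypot(mu_alpha_star, mu_delta)        # Pythagorean magnitude
    return mu_alpha_star, mu_delta, mu

# Hypothetical positions a decade apart:
print(proper_motion(269.4500, 4.6600, 269.4477, 4.6877, 10.0))
# -> roughly (-0.8, 10.0, 10.0) arcsec/yr
```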
The position angle θ is related to these components by:

μ sin θ = μα cos δ = μα*,  μ cos θ = μδ.

Motions in equatorial coordinates can be converted to motions in galactic coordinates. For the majority of stars seen in the sky, the observed proper motions are small and unremarkable. Such stars are either faint or distant, have changes of below 10 milliarcseconds per year, and do not appear to move appreciably over many millennia. A few do have significant motions, and are called high-proper-motion stars. Motions can be in seemingly random directions. Two or more stars, double stars or open star clusters, which are moving in similar directions, exhibit so-called shared or common proper motion, suggesting they may be gravitationally attached or share similar motion in space. Barnard's Star has the largest proper motion of all stars, moving at 10.3 seconds of arc per year.
Minute and second of arc
A minute of arc, arc minute, or minute arc is a unit of angular measurement equal to 1/60 of one degree. Since one degree is 1/360 of a turn, one minute of arc is 1/21600 of a turn; it is for this reason that the Earth's circumference is almost exactly 21,600 nautical miles. A minute of arc is π/10800 of a radian. A second of arc, arcsecond, or arc second is 1/60 of an arcminute, 1/3600 of a degree, 1/1296000 of a turn, and π/648000 of a radian. These units originated in Babylonian astronomy as sexagesimal subdivisions of the degree. To express smaller angles, standard SI prefixes can be employed. The number of square arcminutes in a complete sphere is 4π × (10800/π)² = 466,560,000/π ≈ 148,510,660 square arcminutes. The names "minute" and "second" have nothing to do with the identically named units of time "minute" or "second". The identical names reflect the ancient Babylonian number system, based on the number 60. The standard symbol for marking the arcminute is the prime (′), though a single quote is used where only ASCII characters are permitted.
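Returning to the square-arcminute figure quoted above, here is a two-line check (our own sketch, not from the source text):

```python
import math

arcmin_per_radian = 10800.0 / math.pi             # 60 arcmin * (180/pi) degrees
sphere_sq_arcmin = 4.0 * math.pi * arcmin_per_radian ** 2
print(f"{sphere_sq_arcmin:,.0f}")                 # 148,510,660 (= 466,560,000/pi)
```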
One arcminute is thus written 1′. It is abbreviated as arcmin or amin or, less commonly, the prime with a circumflex over it. The standard symbol for the arcsecond is the double prime (″), though a double quote is used where only ASCII characters are permitted. One arcsecond is thus written 1″. It is abbreviated as arcsec or asec. In celestial navigation, seconds of arc are rarely used in calculations, the preference being for degrees and decimals of a minute, for example written as 42° 25.32′ or 42° 25.322′. This notation has been carried over into marine GPS receivers, which display latitude and longitude in the latter format by default. The full moon's average apparent size is about 31 arcminutes. An arcminute is approximately the resolution of the human eye. An arcsecond is the angle subtended by a U.S. dime coin at a distance of 4 kilometres. An arcsecond is also the angle subtended by an object of diameter 725.27 km at a distance of one astronomical unit, an object of diameter 45,866,916 km at one light-year, and, by definition, an object of diameter one astronomical unit at a distance of one parsec.
A milliarcsecond is about the size of a dime atop the Eiffel Tower, as seen from New York City. A microarcsecond is about the size of a period at the end of a sentence in the Apollo mission manuals left on the Moon, as seen from Earth. A nanoarcsecond is about the size of a penny on Neptune's moon Triton as observed from Earth. Notable examples of size in arcseconds: the Hubble Space Telescope has a calculational resolution of 0.05 arcseconds and an actual resolution of 0.1 arcseconds, close to the diffraction limit, while crescent Venus measures between 60.2 and 66 seconds of arc. Since antiquity the arcminute and arcsecond have been used in astronomy, for example for latitude and longitude in the ecliptic coordinate system. The principal exception is right ascension in equatorial coordinates, which is measured in time units of hours, minutes, and seconds. The arcsecond is often used to describe small astronomical angles such as the angular diameters of planets, the proper motion of stars, the separation of components of binary star systems, and parallax, the small change of position of a star in the course of a year or of a solar system body as the Earth rotates.
These small angles may be written in milliarcseconds, or thousandths of an arcsecond. The unit of distance called the parsec, named from the parallax of one arcsecond, was developed for such parallax measurements. It is the distance at which the mean radius of the Earth's orbit would subtend an angle of one arcsecond. The ESA astrometric space probe Gaia, launched in 2013, can approximate star positions to 7 microarcseconds. Apart from the Sun, the star with the largest angular diameter from Earth is R Doradus, a red giant with an angular diameter of 0.05 arcsecond. Because of the effects of atmospheric seeing, ground-based telescopes will smear the image of a star to an angular diameter of about 0.5 arcsecond. The dwarf planet Pluto has proven difficult to resolve because its angular diameter is about 0.1 arcsecond. Space telescopes are not affected by seeing and are diffraction limited; for example, the Hubble Space Telescope can reach an angular size of stars down to about 0.1″. Techniques exist for improving seeing on the ground. Adaptive optics, for example, can produce images around 0.05 arcsecond on a 10 m class telescope.
Minutes and seconds of arc are also used in cartography and navigation. At sea level one minute of arc along the equator or a meridian equals approximately one nautical mile (1.852 km).
A constellation is a group of stars that forms an imaginary outline or pattern on the celestial sphere representing an animal, mythological person or creature, a god, or an inanimate object. The origins of the earliest constellations go back to prehistory. People used them to relate stories of their beliefs, creation, or mythology. Different cultures and countries adopted their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. Adoption of constellations has changed over time. Many have changed in shape; some became popular while others were limited to single nations. The 48 traditional Western constellations are Greek. They are given in Aratus' work Phenomena and Ptolemy's Almagest, though their origin predates these works by several centuries. Constellations in the far southern sky were added from the 15th century until the mid-18th century, when European explorers began traveling to the Southern Hemisphere. Twelve ancient constellations belong to the zodiac.
The origins of the zodiac remain uncertain. In 1928, the International Astronomical Union formally accepted 88 modern constellations, with contiguous boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation name. Other star patterns or groups, called asterisms, are not constellations per se but are used by observers to navigate the night sky. Examples of bright asterisms include the Pleiades and Hyades within the constellation Taurus, or Venus' Mirror in the constellation of Orion. Some asterisms, like the False Cross, are split between two constellations. The word "constellation" comes from the Late Latin term cōnstellātiō, which can be translated as "set of stars".
The Ancient Greek word for constellation is ἄστρον. A more modern astronomical sense of the term "constellation" is as a recognisable pattern of stars whose appearance is associated with mythological characters or creatures, earthbound animals, or objects. It can also specifically denote the recognized 88 named constellations used today. Colloquial usage does not draw a sharp distinction between "constellations" and smaller "asterisms", yet the modern accepted astronomical constellations employ such a distinction. For example, the Pleiades and the Hyades are both asterisms, and each lies within the boundaries of the constellation of Taurus. Another example is the northern asterism known as the Big Dipper or the Plough, composed of the seven brightest stars within the area of the IAU-defined constellation of Ursa Major. The southern False Cross asterism includes portions of the constellations Carina and Vela, and the Summer Triangle spans the constellations Aquila, Cygnus, and Lyra. A constellation, viewed from a particular latitude on Earth, that never sets below the horizon is termed circumpolar.
From the North Pole or South Pole, all constellations south or north of the celestial equator are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic or zodiac, ranging between 23½° north, the celestial equator, and 23½° south. Although stars in constellations appear near each other in the sky, they lie at a variety of distances away from the Earth. Since stars have their own independent motions, all constellations will change over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable. Astronomers can predict past or future constellation outlines by measuring individual stars' common proper motions (cpm) by accurate astrometry and their radial velocities by astronomical spectroscopy. The earliest evidence for humankind's identification of constellations comes from Mesopotamian inscribed stones and clay writing tablets that date back to 3000 BC.
It seems that the bulk of the Mesopotamian constellations were created within a short interval from around 1300 to 1000 BC. These Mesopotamian constellations later appeared in many of the classical Greek constellations. The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age. The classical zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names. Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius and the Eagle standing in for Scorpio.
The biblical Book of Job also makes reference to a number of constellations, including the Bear, Orion, and the Pleiades.
Stellar parallax is the apparent shift of position of any nearby star against the background of distant objects. Created by the different orbital positions of Earth, the small observed shift is largest at time intervals of about six months, when Earth arrives at opposite sides of the Sun in its orbit, giving a baseline distance of about two astronomical units between observations. The parallax itself is considered to be half of this maximum, about equivalent to the observational shift that would occur due to the different positions of Earth and the Sun, a baseline of one astronomical unit. Stellar parallax is so difficult to detect that its existence was the subject of much debate in astronomy for hundreds of years. It was first observed in 1806 by Giuseppe Calandrelli, who reported parallax in α-Lyrae in his work "Osservazione e riflessione sulla parallasse annua dall’alfa della Lira". In 1838 Friedrich Bessel made the first successful parallax measurement, for the star 61 Cygni, using a Fraunhofer heliometer at Königsberg Observatory.
Once a star's parallax is known, its distance from Earth can be computed trigonometrically. But the more distant an object is, the smaller its parallax. With 21st-century techniques in astrometry, the limits of accurate measurement make distances farther away than about 100 parsecs too approximate to be useful when obtained by this technique. This limits the applicability of parallax as a measurement of distance to objects that are close on a galactic scale. Other techniques, such as spectral red-shift, are required to measure the distance of more remote objects. Stellar parallax measures are given in the tiny units of arcseconds, or even in thousandths of arcseconds. The distance unit, the parsec, is defined as the length of the leg of a right triangle adjacent to the angle of one arcsecond at one vertex, where the other leg is 1 AU long. Because stellar parallaxes and distances all involve such skinny right triangles, a convenient trigonometric approximation can be used to convert parallaxes to distances.
The approximate distance is the reciprocal of the parallax: d ≃ 1/p. For example, Proxima Centauri, whose parallax is 0.7687 arcseconds, is 1/0.7687 ≈ 1.3009 parsecs distant. Stellar parallax is so small that its apparent absence was used as a scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons such gigantic distances seemed implausible: it was one of Tycho Brahe's principal objections to Copernican heliocentrism that, in order for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn and the eighth sphere. James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of Earth's axis, and catalogued 3,222 stars. Stellar parallax is most often measured using annual parallax, defined as the difference in position of a star as seen from Earth and Sun, i.e. the angle subtended at a star by the mean radius of Earth's orbit around the Sun.
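To see how little the reciprocal shortcut above costs in accuracy, compare it against the full trigonometric calculation. This Python sketch is our own illustration, not part of the source article:

```python
import math

AU_PER_PARSEC = 648000.0 / math.pi   # ~206,264.8 au in one parsec

def distance_pc_approx(p_arcsec):
    """Skinny-triangle rule: distance in parsecs = 1 / parallax."""
    return 1.0 / p_arcsec

def distance_pc_exact(p_arcsec):
    """Full trigonometry: d = 1 au / tan(p), converted to parsecs."""
    d_au = 1.0 / math.tan(math.radians(p_arcsec / 3600.0))
    return d_au / AU_PER_PARSEC

p = 0.7687  # Proxima Centauri's parallax in arcseconds
print(distance_pc_approx(p))  # 1.300897...
print(distance_pc_exact(p))   # agrees to roughly 11 decimal places
```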
The parsec is defined as the distance for which the annual parallax is one arcsecond. Annual parallax is measured by observing the position of a star at different times of the year as Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Being difficult to measure, only about 60 stellar parallaxes had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. In the 1980s, charge-coupled devices replaced photographic plates and reduced optical uncertainties to one milliarcsecond. Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from Earth to the Sun, now known to exquisite accuracy based on radar reflection off the surfaces of planets.
The angles involved in these calculations are small and thus difficult to measure. The nearest star to the Sun, Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec. This angle is that subtended by an object 2 centimeters in diameter located 5.3 kilometers away. In 1989 the satellite Hipparcos was launched for obtaining parallaxes and proper motions of nearby stars, increasing the number of stellar parallaxes measured to milliarcsecond accuracy a thousandfold. Even so, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy. The Hubble telescope's WFC3 now has a precision of 20 to 40 microarcseconds, enabling reliable distance measurements for a small number of stars out to several thousand light-years.
SIMBAD is an astronomical database of objects beyond the Solar System. It is maintained by the Centre de données astronomiques de France. SIMBAD was created by merging the Catalog of Stellar Identifications and the Bibliographic Star Index as they existed at the Meudon Computer Centre until 1979, and was then expanded with additional source data from other catalogues and the academic literature. The first on-line interactive version, known as Version 2, was made available in 1981. Version 3, developed in the C language and running on UNIX stations at the Strasbourg Observatory, was released in 1990. Fall of 2006 saw the release of Version 4 of the database, now stored in PostgreSQL, with the supporting software now written in Java. As of 10 February 2017, SIMBAD contains information for 9,099,070 objects under 24,529,080 different names, with 327,634 bibliographical references and 15,511,733 bibliographic citations. The minor planet 4692 SIMBAD was named in its honour. Related resources include the Planetary Data System (NASA's database of information on small Solar System bodies, maintained by JPL and Caltech), the NASA/IPAC Extragalactic Database (a database of information on objects outside the Milky Way, maintained by JPL), and the NASA Exoplanet Archive (an online astronomical exoplanet catalog and data service).
|
The most basic (though not necessarily easiest or most accurate) way to measure population is simply to count everyone. This is known as a census and is usually undertaken by government officials. In the past, religious organizations carried out censuses, but usually on a local or regional level. The Roman Empire conducted censuses in order to measure the pool of military-age men and for taxation purposes, but these were limited because Romans had to report to government officials in their hometown to be counted. People who were poor or otherwise unable to travel were seldom counted [source: Weinstein & Pillai]. The U.S. government conducted the first true census in 1790 and has conducted a full census every 10 years ever since. A full census is sometimes known as complete enumeration -- every single person is counted either through face-to-face interviews or through questionnaires. There are no estimates.
Even a full census has limits. In countries with very remote areas, it can be impossible for census takers to count everyone. The 1980 U.S. census suffered from a documented undercount in part because census takers were afraid to go into some inner-city neighborhoods [source: Weinstein & Pillai]. A census also has trouble collecting information on rare populations. A rare population is one that is small or not reflected in standard census data. The United States isn't allowed to collect religious information in the national census, for example, so American Muslims could be considered a rare population. People who participate in a particular hobby or own a certain model of car are other examples of rare populations.
One alternative to a complete enumeration census is sampling. You might be familiar with this as the method used by market research companies and political analysts to conduct their research. Statisticians use a mathematical formula to determine the minimum number of people who must be counted to constitute a representative sample of the total population. For example, if the total population is 1,000 people, researchers might only need to survey 150 of them directly. Then they can take the data from the sample and extrapolate it to the full population. If 10 percent of the people in the sample are left-handed, it can be assumed that 100 out of a population of 1,000 are left-handed.
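The extrapolation step is simple proportional scaling. Here is a minimal sketch in Python (our illustration, using the made-up numbers from the example above):

```python
def extrapolate(trait_count, sample_size, population):
    """Scale a count observed in a random sample up to the population."""
    return trait_count / sample_size * population

# 15 left-handed respondents in a 150-person sample of a 1,000-person town:
print(extrapolate(15, 150, 1000))  # 100.0 -> an estimated 100 left-handers
```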
Sampling can actually return more accurate results than full enumeration, but there are some caveats. All samples have a margin of error, because there's always a chance that the sample selected for the survey differs from the total population in some way. This is expressed as a percentage of possible variation, such as "plus or minus four percent." The larger the sample size, the lower the margin of error. In addition, samples must be chosen as randomly as possible. This can be harder than it sounds. Let's say you want to survey a representative sample of the population of France. One method used in the past was to select names at random from the phone book. However, this eliminates certain classes of people from the possibility of being selected for the sample: poor people with no phones; people who use cell phones and thus don't appear in the phone book; people with unlisted numbers; and most college students.
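The sample-size effect can be made concrete with the standard margin-of-error formula for a sampled proportion, z·sqrt(p(1−p)/n). The sketch below is ours, not from the source; it uses the conservative worst case p = 0.5 and the 95 percent confidence multiplier z = 1.96:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1.0 - p) / n)

for n in (150, 600, 2400):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")
# n =  150: +/- 8.0%
# n =  600: +/- 4.0%
# n = 2400: +/- 2.0%
```

Note that quadrupling the sample size only halves the margin of error, which is why ever-larger samples quickly stop paying for themselves.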
Gathering population data for places that don't conduct censuses, or from historical periods before censuses became common, is accomplished by piecing together whatever demographic information is available. There may be partial censuses, local population data or information gathered by church or civic groups. Examining birth and death records provides other clues.
|
Mars and Earth have quite a few things in common. Both are terrestrial planets, both are located within the Sun’s habitable zone, both have polar ice caps, similarly tilted axes, and similar variations in temperature. And according to some of the latest scientific data obtained by rovers and atmospheric probes, it is now known that Mars once had a dense atmosphere and was covered with warm, flowing water.
But when it comes to things like the length of a year and the length of seasons, Mars and Earth are quite different. Compared to Earth, a year on Mars lasts almost twice as long – 686.98 Earth days. This is because Mars is significantly farther from the Sun, so its orbital period (the time it takes to orbit the Sun) is significantly greater than Earth's.
Mars' average distance (semi-major axis) from the Sun is 227,939,200 km (141,634,852 mi), which is roughly one and a half times the distance between the Earth and the Sun (1.52 AU). Compared to Earth, its orbit is also rather eccentric (0.0934 vs. 0.0167), ranging from 206.7 million km (128,437,425 mi; 1.3814 AU) at perihelion to 249.2 million km (154,845,701 mi; 1.666 AU) at aphelion. At this distance, and with an orbital speed of 24.077 km/s, Mars takes 686.971 Earth days, the equivalent of 1.88 Earth years, to complete an orbit around the Sun.
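Those orbital figures hang together, as a quick check with Kepler's third law shows. The snippet below is our own back-of-the-envelope calculation, not from the original article, and uses the more precise semi-major axis of 1.52366 AU:

```python
# Kepler's third law for bodies orbiting the Sun: T^2 = a^3,
# with T in Earth years and a in astronomical units.
a = 1.52366                       # Mars' semi-major axis in AU
T_years = a ** 1.5                # T = a^(3/2)
T_days = T_years * 365.25
print(f"{T_years:.4f} yr = {T_days:.1f} days")  # ~1.8808 yr = ~686.9 days
```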
This eccentricity is one of the most pronounced in the Solar System, with only Mercury having a greater one (0.205). However, this wasn’t always the case. Roughly 1.35 million years ago, Mars had an eccentricity of just 0.002, making its orbit nearly circular. It reached a minimum eccentricity of 0.079 some 19,000 years ago, and will peak at about 0.105 in about 24,000 years from now.
But for the last 35,000 years, the orbit of Mars has been getting slightly more eccentric because of the gravitational effects of the other planets. The closest distance between Earth and Mars will continue to mildly decrease for the next 25,000 years. And in about 1,000,000 years from now, its eccentricity will once again be close to what it is now – with an estimated eccentricity of 0.01.
Earth Days vs. Martian “Sols”:
Whereas a year on Mars is significantly longer than a year on Earth, the difference between a day on Earth and a Martian day (aka. a "Sol") is not significant. For starters, Mars takes 24 hours, 37 minutes and 22 seconds to complete a single rotation on its axis (aka. a sidereal day), whereas Earth takes just slightly less (23 hours, 56 minutes and 4.1 seconds).
On the other hand, it takes 24 hours, 39 minutes, and 35 seconds for the Sun to appear in the same spot in the sky above Mars (aka. a solar day), compared to the 24 hour solar day we experience here on Earth. This means that, based on the length of a Martian day, a Martian year works out to 668.5991 Sols.
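The day-length figures above give the Earth-day-to-sol conversion directly. A small Python sketch (our own check) reproduces the roughly 668.6-sol year:

```python
SOL_S = 24 * 3600 + 39 * 60 + 35   # Martian solar day: 88,775 seconds
DAY_S = 24 * 3600                  # Earth solar day:   86,400 seconds

martian_year_days = 686.971        # Mars' orbital period in Earth days
martian_year_sols = martian_year_days * DAY_S / SOL_S
print(f"{martian_year_sols:.4f} sols")  # ~668.59 sols
```

(The small difference from the quoted 668.5991 figure comes from rounding in the quoted day lengths.)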
Mars also has a seasonal cycle that is similar to that of Earth's. This is due in part to the fact that Mars also has a tilted axis, which is inclined 25.19° to its orbital plane (compared to Earth's axial tilt of approx. 23.44°). It's also due to Mars' orbital eccentricity, which means the planet periodically receives less of the Sun's radiance at one time of the year than at another. This change in distance causes significant variations in temperature.
While the planet's average temperature is -46 °C (-51 °F), this ranges from a low of -143 °C (-225.4 °F) during the winter at the poles to a high of 35 °C (95 °F) during summer and midday at the equator. This works out to a variation in average surface temperature that is quite similar to Earth's – a difference of 178 °C (320.4 °F) versus 145.9 °C (262.5 °F). This high end of the range is also what allows for liquid water to still flow (albeit intermittently) on the surface of Mars.
In addition, Mars' eccentricity means that it travels more slowly in its orbit when it is further from the Sun, and more quickly when it is closer (as described by Kepler's second law of planetary motion). Mars' aphelion coincides with spring in its northern hemisphere, which makes it the longest season on the planet – lasting roughly 7 Earth months. Summer is second longest, lasting six months, while fall and winter last 5.3 and just over 4 months, respectively.
In the south, the length of the seasons is only slightly different. Mars is near perihelion when it is summer in the southern hemisphere and winter in the north, and near aphelion when it is winter in the southern hemisphere and summer in the north. As a result, the seasons in the southern hemisphere are more extreme and the seasons in the northern are milder. The summer temperatures in the south can be up to 30 K (30 °C; 54 °F) warmer than the equivalent summer temperatures in the north.
These seasonal variations allow Mars to experience some extremes in weather. Most notably, Mars has the largest dust storms in the Solar System. These can vary from a storm over a small area to gigantic storms (thousands of km in diameter) that cover the entire planet and obscure the surface from view. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature.
The first mission to notice this was the Mariner 9 orbiter, which in 1971 became the first spacecraft to orbit Mars. It sent pictures back to Earth of a world consumed in haze. The entire planet was covered by a dust storm so massive that only Olympus Mons, the giant Martian volcano that measures 24 km high, could be seen above the clouds. This storm lasted for a full month, and delayed Mariner 9's attempts to photograph the planet in detail.
And then on June 9th, 2001, the Hubble Space Telescope spotted a dust storm in the Hellas Basin on Mars. By July, the storm had died down, but then grew again to become the largest storm in 25 years. So big was the storm that amateur astronomers using small telescopes were able to see it from Earth. And the cloud raised the temperature of the frigid Martian atmosphere by a stunning 30° Celsius.
These storms tend to occur when Mars is closest to the Sun, and are the result of temperatures rising and triggering changes in the air and soil. As the soil dries, it becomes more easily picked up by air currents, which are caused by pressure changes due to increased heat. The dust storms cause temperatures to rise even further, leading to Mars’ experiencing its own greenhouse effect.
Given the differences in seasons and day length, one is left to wonder if a standard Martian calendar could ever be developed. In truth, it could, but it would be a bit of a challenge. For one, a Martian calendar would have to account for Mars' peculiar astronomical cycles, and for how our own non-astronomical cycles, like the 7-day week, would work with them.
Another consideration in designing a calendar is accounting for the fractional number of days in a year. Earth's year is 365.24219 days long, and so calendar years contain either 365 or 366 days accordingly. Such a formula would need to be developed to account for the 668.5991-sol Martian year. All of this will certainly become an issue as human beings become more and more committed to exploring (and perhaps colonizing) the Red Planet.
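To illustrate what such a formula might look like, here is one hypothetical intercalation rule in Python. It is our own sketch, similar in spirit to proposed Martian calendars such as the Darian calendar, and not an established standard:

```python
def sols_in_year(year):
    """Hypothetical leap rule: odd years and years divisible by 10
    get an intercalary sol (669 sols); all other years get 668."""
    return 669 if (year % 2 == 1 or year % 10 == 0) else 668

# Average calendar-year length over 1,000 Martian years:
avg = sum(sols_in_year(y) for y in range(1, 1001)) / 1000.0
print(avg)  # 668.6 -- within about 0.001 sols of the 668.5991-sol year
```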
We have written many interesting articles about Mars here at Universe Today. Here’s How Long is a Year on the Other Planets?, Which Planet has the Longest Day?, How Long is a Year on Mercury, How Long is a Year on Earth?, How Long is a Year on Venus?, How Long is a Year on Jupiter?, How Long is a Year on Saturn?, How Long is a Year on Uranus?, How Long is a Year on Neptune?, How Long is a Year on Pluto?
For more information, check out NASA’s Solar System Exploration page on Mars.
|
Enviroscience - Question 1
a. The heat island hypothesis is based on the idea of heat absorption by the dense concentration of heat-absorbing materials in a city, heat-exchanging machines (like automobiles and air conditioners), and few plants for cooling, which together raise the temperature around the city; this effect causes hot air to rise and then cool, condensing clouds out of the atmosphere.
b. The second hypothesis is based on convergence, the collision of air flows caused by the disruption of airflow across a city. This effect, which is caused by the tall buildings in a city, pushes surface air into the atmosphere, where it then condenses and forms clouds. The third hypothesis is that the city's buildings split the storms that form around it, either physically or because of the heat island effect described in part a.
c. The accuracy of the rainfall results from satellite data was checked by physically measuring the rain that fell around Atlanta, where Dr. Shepherd was doing the research. He used rain gauges to measure rainfall at various places around the city, and then compared these findings to those provided by the satellite in order to make sure they were correct.
d. Shepherd's experiments first compared historical data to present data, which showed that Houston rainfall had increased following urbanization, and that the timing of storms had also shifted. They then modeled the city based on their findings, and found that the city appeared to act as predicted by the convergence theory (the second theory), with the air splitting and then coming together on the other side of the city to form storms; this was consistent with the actual findings of the research. The experimental findings were convincing.
Enviroscience - Question 2
a) The animal chosen for this discussion is the dhole, or Asiatic wild dog. The dhole is a social pack mammal similar to the domestic dog (a canid species) (Durbin, n.d.). Its scientific name is Cuon alpinus. The dhole is around 90 cm in length and 50 cm in height at the shoulder, roughly the size of a small domestic dog. It has a red coat with black paws, although there are several subspecies with different colorations. Its vocalization range is wide, and includes a communication whistle. Dholes range across India and southern Asia, through multiple types of forest and alpine steppe, and their primary food sources are the small animals and ungulates (such as deer) that share their range.
b) The purpose of choosing the dhole is to bring attention to an organism that is not well known. Even though the dhole is very endangered, it does not receive much attention in the press or in conservation efforts (Durbin, n.d.). However, as a highly social canid species that may be the ancestor of our own domestic dog species, I think it is a highly important species and should be better known.
c) The dhole is one of the most endangered predators on the Asian subcontinent, with an estimated wild population of only 2,500 remaining. The main threats to the dhole include habitat destruction by encroaching human settlements as well as active hunting and destruction by humans (Durbin, n.d.). Habitat destruction has been brought about by firewood collection and dam building, as well as by deer hunting, which destroys the dhole's main food source (Durbin, n.d.). The dhole is also considered a pest, and is often destroyed by humans even though it is endangered (Durbin, n.d.).
Enviroscience - Question 3
a) The table below shows the different calculations for each of the three calculators that were used.
| Consumer | Water Footprint Network | Regional Water Providers Consortium (RWPC) | Zerofootprint |
| --- | --- | --- | --- |
| Male consumer | 3,086 cubic meters/year | 83.3 gallons/day (115.1 cubic meters/year) | 119,282 liters/person/year (119.3 cubic meters/year/person) |
| Female consumer | 2,935 cubic meters/year | | |
| Figures used | $50,000/year, average meat consumer | $50,000/year, 7 showers of 10 minutes, 5 toilet flushes, no low-flow fittings, no baths, 4 dishwasher loads, 2 washer loads, no Energy Star appliances, no outdoor work | No swimming pool, no lawn watering, 2 loads/week laundry, 3 loads/week dishes (dishwasher), 10-minute shower, regular fittings |
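As a quick cross-check of the converted figures in the table, the calculators' outputs can be put on a common footing. This sketch assumes 1 US gallon = 3.78541 litres and a 365-day year; the specific numbers come straight from the table:

```python
GALLONS_TO_LITRES = 3.78541

# RWPC result, reported in gallons/day
rwpc_m3_per_year = 83.3 * GALLONS_TO_LITRES * 365 / 1000
print(f"RWPC: {rwpc_m3_per_year:.1f} m^3/year")          # ~115.1

# Zerofootprint result, reported in liters/person/year
print(f"Zerofootprint: {119_282 / 1000:.1f} m^3/year")   # 119.3
```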
b) The Water Footprint Calculator found that food was the largest contributor to the water footprint for both male and female consumers, with meat accounting for the majority of the water consumption in the food category. The RWPC calculator did not break down water usage by specific categories, but only gave a single overall figure. The Zerofootprint calculator found bathroom usage to be the highest category.
c) The sites' likely accuracy can be judged by the amount and relevance of the information they asked for. The Zerofootprint and RWPC calculators asked for a similar amount of information, including the frequency of various water-using activities, but did not ask for income; in contrast, the Water Footprint Calculator asked for income level, food consumption, and gender, but did not ask about specific water-consuming activities. The relevance of the questions is likely to be indicative of the accuracy of the result: an individual's income is not as directly related to water usage as the amount of time spent showering or other water uses. Thus, the Zerofootprint and RWPC calculators are likely to be more useful than the Water Footprint Calculator.
d) Some invalid assumptions that may reduce the usefulness of this way of assessing water footprint include inaccurate assumptions on the part of the calculator, as well as inaccurate recollection of (or lack of knowledge about) the activities individuals undertake. Inaccurate assumptions could include assumptions regarding gender-related water usage (as seen in the Water Footprint Calculator) or assumptions about washer capacity, shower flow, or other standard inputs to the calculation. For example, a compact dishwasher might use only half the water of a regular one. Personal recollection also affects accuracy: if I don't know how many times I wash my car, or misestimate the amount of time I spend in the shower, the results could be wrong.
e) There are a number of lessons that could be learned from these activities. From the Water Footprint Calculator, you could learn the impact of your eating habits on the amount of water that is used on your behalf – that could convince you to eat less meat, which would increase sustainability through reducing the agricultural load. The most useful calculator for learning is the Zerofootprint calculator, which displays everything on one page, so you can directly see the effects of changing your habits on your water usage. That would let you draw solid conclusions about how much water you use and how you can best reduce it (even if the figures are only estimated).
f) I would recommend the Zerofootprint calculator to friends and family, because it is the easiest to use and because it provides the best insight into the ways that you can change your habits and conserve water. This is because it is all on one page and changes happen immediately, letting you make immediate changes and see what effect they have.
Enviroscience - Question 4
The source I chose for description of the environmental analyst position was the Bureau of Labor Statistics (BLS) Occupational Outlook Handbook (OOH). This included the environmental analyst along with a grouping of environmental scientist and specialist positions.
a) The job of an environmental analyst can be varied. The work can involve analysis of soil, water, air, food, or other environmental samples; analysis and testing of industrial processes; or working with governments to create or enforce regulations designed to protect the environment. Other tasks and activities can include writing reports and risk assessments, doing research into the environment, working within industries to monitor pollution levels, and otherwise working to reduce the impact of humans on the environment. Some environmental analysis work can be highly political, because it involves challenges to government and business regulation as well.
b) The minimum educational requirement is a bachelor’s degree in earth science or a related area of study. That only suffices for entry-level positions, however, and in order to perform advanced research or to provide more complex consulting services, a Master’s or PhD is preferred. Other requirements include teamwork and computer skills.
c) Some of the pros of the environmental analyst job include the ability to work with environmental factors and reduce environmental problems, the working environment, and the varied environments and jobs they can choose to work in. Environmental analysis is also growing faster than average, leading to good job prospects for entering workers. Median earnings of $59,700 are moderate but not a negative factor. One of the major negative factors in this job choice may be the requirement to work within industry to change it, which may not be an ideal working environment for some.
BLS. Environmental scientists and specialists. Retrieved from Occupational Outlook Handbook.
Durbin, L. S. (n.d.). Dhole home page.
IUCN. Dhole (Cuon alpinus). Retrieved from IUCN/SSC Canid Specialist Group.
Parkinson, A. Dhole - Cuon alpinus. Retrieved from ARKive.
Regional Water Providers Consortium. Water calculator. Retrieved from Regional Water Providers Consortium.
Remer, L. Urban rain. Retrieved from NASA Earth Observatory.
Water Footprint Network. Water footprint and virtual water. Retrieved from Water Footprint Network.
Zerofootprint. One minute calculator. Retrieved from Go Blue.
|
Becoming a fluent reader requires both the capacity to utilise sound-based decoding strategies (‘sounding out’) and the ability to accurately recognise familiar letter patterns either as whole words (e.g. ‘was’) or within words (e.g. the ‘igh’ in night). The ability to rely less heavily on sound-based decoding strategies is very much dependent on the development of orthographic processing.
Orthography refers to the conventional writing system of any given language and includes rules around letter order and combinations as well as capitalisation, hyphenation and punctuation. Orthographic processing is the ability to understand and recognise these writing conventions as well as recognising when words contain correct and incorrect spellings.
Children with weak orthographic processing rely very heavily on sounding out common words that should be in memory, leading to a choppy and laborious style of decoding. Delays in orthographic processing are also linked to ongoing difficulties in letter recognition and letter reversals. If the shape and orientation of a letter are not fully consolidated and stored in visual memory, then a child is more likely to make reversal errors and be unable to recognise when they have made a mistake.
As skilled readers need to recognise words and/or components of words automatically, there is a heavy reliance on orthographic processing in the development of reading fluency. Delays in this area are likely to inhibit a child’s applied reading skills and ultimately affect his/her reading comprehension skills.
In addition, poor orthographic processing will almost certainly result in both a high rate of spelling errors and poor written expression. Children find it difficult to remember the correct spelling pattern for a particular word and don’t seem to benefit from the editing tool, “Does it look right?”. Rather they demonstrate the tendency to over-rely on phonological information, writing words like ‘rough’ as ‘ruff’ and ‘night’ as ‘nite’.
|
Consider the rainbow as seven colours, represented by the strings
Red Orange Yellow Green Blue Indigo Violet.
Your task is to create a program that receives one of these colours as input and outputs the next rainbow colour in order. The order wraps around:
Violet -> Red
Input: A string containing one of the rainbow colours.
Output: The next colour of the rainbow, in order.
- Colour names are case sensitive. They must match the case included in this post.
- The input will always be valid. Any behavior is allowed for invalid input.
- This is code golf, so the shortest answer in bytes wins!
Example Input and Output
Input -> Output
Red -> Orange
Orange -> Yellow
Yellow -> Green
Green -> Blue
Blue -> Indigo
Indigo -> Violet
Violet -> Red
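For reference, here is a straightforward, deliberately ungolfed Python solution; an actual golfed entry would compress the same wrap-around lookup into far fewer bytes:

```python
COLOURS = ["Red", "Orange", "Yellow", "Green", "Blue", "Indigo", "Violet"]

def next_colour(colour: str) -> str:
    # Index of the current colour, plus one, wrapping Violet back to Red.
    return COLOURS[(COLOURS.index(colour) + 1) % len(COLOURS)]

print(next_colour(input()))
```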
|
October 21, 2015 by libraryheather
Target Age Range: Grades 1-3
Program Length: 60 minutes
Learn how computers read code, then try simple programming on your own using Code.org.
Laptops connected to the internet (1 per 1-2 children)
Laptop connected to a projector
Simple puzzle to build
Cost: $ 0-50
The only thing you need to do (other than setting up all the computers) is make sure you understand how coding works (also check out Code Conquest’s explanation), as well as how to play Harold the Robot!
1. Settling in, welcome, introductions: 5 minutes
2. Play Harold the Robot or another similar fun demonstration that introduces the concept of how computers are very literal in the way they understand directions.
3. Explain what happened during Harold the Robot. You may want to read Microsoft’s description of “How Programming Works” to help explain.
Special Instructions and Procedures:
Harold the Robot:
Using simple puzzles or blocks, have the children decide as a group how to program Harold to build the desired object.
For our purposes, we used a Build-a-Burger puzzle to build with Harold. To aid in coding, and to emphasize the step-by-step nature of program code, we also numbered each burger piece (e.g., the tomato was #3, the top bun #7). As a group, the children decided in which order the burger should be built, and worked together to figure out how to direct Harold to build the burger in language he could understand.
Afterwards, we discussed what instructions Harold did and did not understand, as well as why this was. Refer to Microsoft’s resource for further information on simple programming language.
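To show what the children are really doing, the burger build can be written as a tiny "program". In this sketch the piece numbers other than the tomato (#3) and top bun (#7) are made up for illustration, and the point is Harold's literal-mindedness:

```python
# Hypothetical numbering; only tomato (#3) and top bun (#7) come from our session.
PIECES = {1: "bottom bun", 2: "patty", 3: "tomato", 4: "lettuce",
          5: "cheese", 6: "onion", 7: "top bun"}

def harold_build(program):
    """Harold follows numbered instructions literally, one at a time."""
    for step in program:
        if step not in PIECES:
            print(f"Harold does not understand instruction {step!r} and stops.")
            return
        print(f"Harold adds the {PIECES[step]}.")

harold_build([1, 2, 5, 3, 7])   # a valid burger, built in the given order
harold_build([1, "cheese", 7])  # fails: Harold only understands the numbers
```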
You could modify this to be any sort of simple, direction-oriented activity for use with younger children. Our program was run by Andy Fischoff, a local software engineer. Andy did a different activity that used the same principles with the 1st-3rd graders.
Andy Fischoff – Software Engineer
What we would do differently:
Nothing; this program worked extremely well as-is. If you have high demand for this program but don't have enough computers, you can pair up children of similar skill levels and have them work on the Code.org puzzles together.
Adaption for older/younger audience:
Some aspects of this program were initially used in a tween program we did: Computer Programming Unplugged. That program works so well that we’ve already done it three times.
|
Tooth sensitivity is the pain you may feel when you eat or drink hot or cold foods or drinks. You may also feel pain when you breathe in cold air. Sensitivity can happen when gums pull away from the teeth or when gum tissue is lost. Gum loss can occur as a result of brushing too hard or not brushing and flossing regularly. When gum loss occurs, the part of the tooth below the gumline, called the tooth root, can be exposed. Tiny tunnels that contain fluid lead from the tooth root to the tooth's nerve center, called the pulp. When cold or heat reaches these tunnels, the fluid can excite the nerve in the pulp, causing pain in your teeth. Sensitivity can also happen if the tooth's hard surface layer, called enamel, gets worn away. Tooth sensitivity can come and go, but ignoring it can lead to other health problems in your mouth.
Here are some factors that can contribute to tooth sensitivity:
Brushing too hard or using a toothbrush with hard bristles…this can cause gum loss.
Sugary and acidic foods and drinks…soda, fruit juices and sugary snacks can contribute to cavities, which may cause sensitivity.
Teeth Grinding…this can wear down tooth surfaces.
Dental cleanings or treatments…Sensitivity can happen after dental cleanings or treatments like tooth whitening. It usually goes away shortly after treatment ends.
Here are some ways you can help prevent sensitivity:
Brush and floss your teeth twice a day to prevent gum loss…Be sure to clean all parts of your mouth, including between teeth and along the gumline.
Brush gently and use a toothbrush with soft bristles…This will help prevent gum loss and protect your enamel from being worn away.
Avoid acidic foods and drinks.
Use a toothpaste specially formulated to soothe the nerve endings in the tooth.
Use a high concentration fluoride toothpaste (given to you by your dental professional) to strengthen the tooth surface.
Fluoride Varnish…this can be done at your dental office, and it is applied to the exposed areas, strengthening the enamel and dentin.
Fluoride foam or gel…this can be placed in a mouth tray, which you then keep in your mouth for 3-5 minutes. This can be done at your dental office or at home.
Bonding…this material, used to attach tooth-colored restorations to teeth, can also seal the dentin surface and provide a barrier to the stimuli that cause sensitivity.
There are a number of treatments available, and your dental professional can help you find those that will work best, depending on your situation. Always seek a dental professional's help; do not try to diagnose this problem yourself. It may be the sign of something more serious, and only a dental professional can tell you what it really is.
|
As the European Union struggles under the reality and threat of countries or regions leaving it (as discussed in “Humpty Dumpty in slow motion”), it might be thought that the United States of America represents a haven of stability. But the Southern Confederacy was not the first or last word on secession, which history shows is as American as apple pie, writes Joseph E Fallon.
The United States was founded upon the concept of secession. Not once, but twice. First, in 1783, when colonies seceded from the British Empire. Second, in 1788, when states seceded from the United States. From 1788 to 1861, it was recognized by such leading political figures as Thomas Jefferson, Alexander Hamilton, James Madison, father of the U.S. Constitution, and John Quincy Adams that a state or a group of states, North, South, East, or West, had the legal right to secede from the Union, if their citizens so wished.
In 1783, by the Treaty of Paris, London recognized the independence of 13 colonies, which banded together as the United States of America. Their constitution was the Articles of Confederation and Perpetual Union. It required unanimous consent on political decisions. Article XIII declares – “the Articles of this confederation shall be inviolably observed by every state, and the union shall be perpetual; nor shall any alteration at any time hereafter be made in any of them; unless such alteration be agreed to in a congress of the united states, and be afterwards confirmed by the legislatures of every state.”
In 1787, a Constitutional Convention was called by the Congress of the United States “for the sole and expressed purpose of revising the Articles of Confederation and reporting to Congress and the several legislatures such alterations and provisions therein as shall when agreed to in Congress and confirmed by the States render the federal Constitution adequate to the exigencies of Government and the preservation of the Union.”
The Constitutional Convention was held in Philadelphia, Pennsylvania, from May 25 to September 17, 1787.
The convention drafted a new constitution in violation of the expressed legislation authorizing the convention only to propose amendments to the Articles of Confederation and Perpetual Union, making the drafted document illegal.
The proposed constitution was unconstitutional, since it sought to replace the Articles of Confederation and Perpetual Union, the existing government, without unanimous consent.
Rhode Island boycotted the convention. The lack of attendance by all the States in the Union raised questions about the legitimacy of the Convention from the very start.
Seventy delegates were appointed by the twelve States willing to attend the Convention, but fifteen (20 percent) of those appointed refused to do so.
Fifty-five delegates did attend, but of those sixteen (nearly 30 percent), including the majority of the delegates from New York and Virginia, and half the delegates from Georgia and Massachusetts, refused to sign the final draft of the proposed new Constitution.
The proposed constitution stated in Article VII – “The Ratification of the Conventions of nine States, shall be sufficient for the Establishment of this Constitution between the States so ratifying the Same.”
The nine states ratifying the new constitution seceded from the government of the United States under the Articles of Confederation and Perpetual Union to create a new government, and for two years, a new country with new borders. Soon 11 of the 13 States had ratified the new constitution. North Carolina and Rhode Island, at first, refused to join the new union, and remained sovereign, independent republics until 1789 and 1790 respectively.
This secession by 11 states and the independence of North Carolina and Rhode Island was justified by James Madison in The Federalist No. 43: “What relation is to subsist between the nine or more States ratifying the Constitution, and the remaining few who do not become parties to it? In general, it may be observed, that although no political relation can subsist between the assenting and dissenting States, yet the moral relations will remain uncancelled. The claims of justice, both on one side and on the other, will be in force, and must be fulfilled…”
Some delegates, however, sought to insert a clause in the proposed constitution prohibiting the future right of secession to States acceding to this new union. They advocated granting powers to the federal government to use force to suppress any future secessions.
Madison rejected the proposal declaring “a union of States containing such an ingredient seemed to provide for its own destruction.”
Alexander Hamilton concurred; writing in The Federalist No. 16: “The first war of this kind would probably terminate in a dissolution of the Union.”
In addressing the ratifying Convention of New York State, Hamilton declared: “…to coerce the states is one of the maddest projects that was ever devised. A failure of compliance will never be confined to a single state. This being the case, can we suppose it wise to hazard a civil war?…A complying state at war with a non-complying state; Congress marching the troops of one state into the bosom of another…Here is a nation at war with itself. Can any reasonable man be well disposed towards a government which makes war and carnage the only means of supporting itself — a government that can exist only by the sword?”
Within eight years of the ratification of the U.S. Constitution, the first secession movement arose. In 1796, fearing the election of Thomas Jefferson as the successor to Washington, New England states, led by Massachusetts, sought independence from the United States. When John Adams defeated Jefferson and became the second President of the United States, New England secessionism subsided. It flared up again in 1800, when Jefferson was elected the third President of the United States, and again in 1803, when President Jefferson purchased the Louisiana Territory from Napoleon.
A leader of New England secessionists was U.S. Senator from Massachusetts, Timothy Pickering, who previously served as George Washington's secretary of state and secretary of war. In 1803, Pickering wrote: “I will rather anticipate a new confederacy, exempt from the corrupt and corrupting influence of the aristocratic Democrats of the South.” U.S. Senator from Connecticut, James Hillhouse, agreed, declaring: “The Eastern States must and will dissolve the union and form a separate government.”
President Thomas Jefferson, a principal author of the Declaration of Independence, responded to the New England secessionists with support. In 1803, Jefferson wrote: “God bless them both, & keep them in the union if it be for their good, but separate them, if it be better.”
In 1804, writing to Dr. Joseph Priestley, President Jefferson declared: “Whether we remain in one confederacy, or form into Atlantic and Mississippi confederacies, I believe not very important to the happiness of either part. Those of the western confederacy will be as much our children and descendants as those of the eastern… If there was a separation in the future, I should feel the duty & the desire to promote the western interests as zealously as the eastern, doing all the good for both portions of our future family which should fall within my power.”
New England States would seek secession from the United States again in 1811 over the admission of the State of Louisiana into the Union, and again in 1814-1815 over “Mr. Madison’s War” (The War of 1812) at the Hartford Convention.
|
Published at Sunday, April 28th, 2019 - 08:14:16 AM. Worksheet. By Roxanne Giraud.
Problem solving involves an element of risk. If we want children to learn to solve problems we must create safe environments in which they feel confident taking risks, making mistakes, learning from them, and trying again (Fordham & Anderson, 1992). In a play-based curriculum, each day provides opportunities to learn about reading, writing, and math through real, meaningful situations. For instance, children set the table for snack so each child has one napkin, one straw, and one box of milk. Children string beads to match the pattern on a card or wait their turn because there is room for only four children at the art table. Through these meaningful experiences children begin to understand number, quantity, size, and other mathematical concepts. Early childhood education experts agree that the years from birth to age eight are a critical learning time for children (Bee, 1992; Kostelnik, Soderman, & Whiren, 1993; Willis, 1995). During these years, children have many cognitive, emotional, physical, and social tasks to accomplish (Katz, 1989). While children may have the ability to perform a task, that does not mean that the task is appropriate and should be performed. Educators agree that learning to read, write, and compute are undeniably important skills for children to acquire. The question is how and when they should be learned.
Worksheets are too abstract. Young children are still in Piaget's preoperational stage, which means they need symbols to represent objects; they cannot yet think abstractly. For example, they need a ball in their hands to understand what a ball is. Seeing the word ball on a worksheet, or sometimes even just a picture of a ball, means nothing to them. That's why hands-on learning is best: it gives the child a symbol for their thinking. Related: Cognitive Development. Writing on lines is not appropriate. A very popular type of worksheet for this age group is the handwriting sheet, where the child is expected to trace the letter. These are not developmentally appropriate for young children. Even though huge letters that take up the whole page may be annoying to most adults, it's normal for a child to write this way; their fine motor skills are not refined enough to focus on tracing small letters. I know worksheets are the easy way to give a child something to do and easy to plan, but sometimes the best things in life are not easy. Happy learning!
|
Economic well-being is a person's or family's standard of living, based primarily on how well they are doing financially. Governments measure economic well-being to determine how their citizens are faring, as it is integral to a person's overall well-being.
Primarily, the government uses income to judge economic well-being. However, well-being is also measured by other factors, such as the current cost of living or whether a person is disabled or not, according to the United States Census Bureau. Age is also a factor. The U.S. Census Bureau identifies five main categories related to well-being: the possession of electronics, including appliances, how basic needs are met, the state of the household, the condition the neighborhood is in and the support system a person has when problems occur.
|
You can also describe a recent news event dealing with the problem, or refer to a movie or other situation the reader already knows about. If people have already tried to solve the problem but failed, you could explain what has been done that hasn't worked. All of these things should lead up to the body of the paper, which is your solution idea.
The bottom line: start with a story or a detailed description of the problem, then end that introduction with your question about how to solve the problem. A "Solution" essay is just another name for this sort of paper assignment. Before you start to explain the solution, you will need to describe the problem in a paragraph or two, giving examples.
Then you need to explain how you would solve that problem, step-by-step. Finally, you will need to argue against any objections and explain why your idea is feasible, cost-effective and a better solution than other ideas. To find ideas for solutions, you can research other people's ideas, ask friends or family for their ideas, or just think about how it could be done better. I'm so glad you are helping your child as they learn to write. Teachers have different ways of helping children develop a topic.
Drawing a web and drawing a diagram are two different ways. These are also sometimes called "storyboards." You may have learned to outline or jot down notes, which are similar ways to do this. I'd always suggest that you read the teacher's instructions and ask your child what they remember about the directions first.
However, if you still aren't sure, here is how I would interpret that instruction: write the topic idea in the middle of a piece of paper. I usually tell my students to frame this as a question. (By the way, expository usually means an argument essay, and one kind of argument essay is a problem solution.) For example: How can we solve the problem of students being absent too often from school? Draw a circle around that question and then draw lines out from the circle, looking like you are starting a spider web.
Each of the lines should be an answer to the question. Then draw a circle around each of those answers and draw lines off again. This time, you will give examples, reasons or objections that relate to that answer. By doing this, your child will have some information that they can use to write their paper. The first thing to do is to do some thinking on your own. I call this brainstorming.
Take out a sheet of paper or use your computer and start by listing everything you can think of that might cause this problem. After you've made a list, take a look at it and circle or bold print the causes and divide them into some groups.
Here are some ideas of how you can categorize them: the most important causes are the ones which, if solved, would make the biggest dent in solving the problem. Then, starting with the causes that are easiest and most important to solve, think of some ways each one could be solved. Look at my list of how people can solve problems to get some ideas.
After you've really thought this out as much as you can yourself, it is time to do some research and see what other people have already done, as well as to get some ideas. Here is how to research: Google or use the library to see what other causes of the problem people have suggested. Look for what has already been done to try to solve the problem.
If it hasn't worked, you need to find out why. Sometimes, you can find a solution to the problem that has worked in another location. That can be a great starting place for your solution. Finding a solution is always the hardest part of this sort of essay.
I suggest that you follow a three-pronged approach: Ask as many people as you can who know about the problem what their ideas are for a solution. Research the problem and the solutions that others have tried. One trick my students taught me is that you can often find a solution that has been tried in a different location and adapt it to your situation. For example, when we had problems with people biking on campus and causing accidents, my students researched a nearby campus and found a solution that had been used there.
Think about each type of solution and how it could create a solution for your problem. For example, what could you add to the situation? What could you take away? Would changing leadership help? Could money solve the problem, and if so, how could you get the funds? Finally, when you have some solution ideas, check to see if they are feasible (can you do them?).
The best way to start a problem solution essay is to give a vivid description of the problem, explaining who it hurts and why. For a problem solution essay, should the problem be in one paragraph and the solution in a different paragraph? My students generally write essays that have at least five paragraphs, often more.
I would suggest that you do something like this: explain and describe the problem and why it should be solved, then end with a question asking how the problem can be solved, such as: How can we solve the problem of school shootings? Then in the next paragraph, you would give your solution idea. If your idea is easy to explain, then you would spend the rest of your paper refuting objections and explaining why your idea would work and be cost-effective, feasible, and effective.
On the other hand, if your idea is complicated to explain, you will need to spend a longer part of your paper making sure the reader understands it. In both cases, you will need to refute any objections and help the reader to see how important it is to do this solution. What if we had a topic that someone chose for us and we don't know anything about the topic, but we can't change it? If you need to find a solution to a problem someone else has chosen, you will need to research the problem and all of the solutions that other people have thought about or tried.
After you have looked up the ideas that other people have considered, you can choose the one that you think would work the best, or maybe you will come up with your own idea.
I have to write a "problem solution essay", and I am conflicted on what the topic should be. Do you have any suggestions? The hardest part of writing a problem solution essay is finding a solution.
Often, my students start with one solution idea. Then as they begin to write and collaborate on ideas with others, they will change their topics accordingly.
In reality, problem solution essays are a way of writing out what we are always doing in our lives and work: noticing problems and finding ways to solve them. Because these essays are harder to write, it helps if you really care about the topic. That is why I have my students start by listing things that really annoy them or problems they feel need a solution.
Generally, I suggest they stick to something they personally experience. I tell them to think about all of the groups they belong to at school, home, and in their communities and then write a list of all the problems they notice in those groups.
Generally, once they have written that list, they start to see something they are most interested in solving. The best topic to choose is one that has these characteristics: it is a problem that can be solved with resources or groups you know about and can identify.
Hi Ron--Whether or not you need to provide solutions depends on the type of argumentative essay you are writing. This article is about a problem solution essay, where the main point is to give a solution. A cause essay is probably what you are writing. In a cause essay, your main point is to explain the cause behind something and sometimes the effects. Of course, if what you are explaining is a problem and you pinpoint the cause, you might want in your conclusion to suggest a possible solution or a direction that leads towards finding a solution.
However, you wouldn't have to give a detailed plan. Likewise, in a problem solution essay, you would probably need to begin the essay talking about what different people think the cause of the problem is because you need to explain why you think a particular cause is the most important. Do you need to provide solutions for an argumentative essay?
I was trying to focus on the why mainly? Hi Arianna, I don't exactly know what a "field trip essay" is. If you could tell me more about the assignment, then maybe I could write an article. However, I think what you may be talking about is a Personal Experience paper and I do have an article about how to write one of those, so you might want to check that out. I saw someone underline their thesis statement in the introduction. Is this recommended in an MLA problem-solution essay?
I'm confused and still learning these writing styles; any help would be appreciated. I learned the whole article by heart. For an introduction, I found here possible solutions in my quest for publication on the web, and I have developed a little bit of confidence in my abilities as a writer. I think I can go ahead and find a problem and solutions that I can throw open. Joe--so glad I could help you. I hope that you will check out all of my other "how to" essay help.
Many English instructors in colleges are graduate students who are new to this job. I like to be able to provide them and their students the information I've gathered over many years of working as an English instructor.
Most of what I've written has come from my experiences with students, not a teacher's manual or textbook. If it is helpful, you are welcome to show these to your teacher and invite them to share my articles with students online (not printing out hard copies, which is a violation of copyright).
My english teacher is terrible and i have a problem solution paper due tomorrow. Thank you sooo much for making it so that i wont absolutely fail!
Wonderful detail on how to write a problem solving Essay. I found the charts and 1. The challenge is on to find the solutions. A great article to read. And the description given with the help of a table is simply superb. And suggestions for solving a problem, is quite interesting. I shared it with my Niece, who is in Honors writing classes in high school and still has problems with her essays.
She thanks you as well.
Finding a Good Topic
Make a list of groups you belong to, like:
- School
- Hometown community
- Clubs
- Sports teams
- Hobby groups
- People groups (teenagers, high school students, college students, family, males, females, race, culture, or language group)

Step Two: Deciding on the Best Solution
The best solutions are:
- Implemented easily
- Effective at solving the problem
- Cost effective

Use the list below to get ideas for what types of solutions might already have been tried and which ones might work better to solve your problem.

Ways to Solve Problems (each solution assumes a particular cause of the problem):
- Provide a way to enforce existing rules or laws, or else provide more resources (like more police, or money for regulators) to enforce them.
- Add more buildings or a new organization, where nothing currently existing will solve the problem.

How to Write an Excellent Essay
To write a persuasive solution essay, you need to organize carefully. Your main goals are:
- Interest your reader in the problem
- Convince your reader that the problem is important and needs to be solved
- Explain your solution clearly
- Convince the reader that your solution is cost-effective and feasible
- Convince your reader that your solution is better than other solutions

If it is an unknown problem, you will need to explain it in detail. If it is a familiar problem, then you need to paint a vivid picture. In both situations, you will need to convince the reader that it is an important problem.

Creative Introduction Ideas
- Tell a true-life story about the problem.
- Give a personal experience story.
- Use a scenario or imagined story illustrating why this needs to be solved.
- Give statistics and facts about the problem which make it vivid for the reader.
- Do a detailed explanation of the problem with facts that show why it needs to be dealt with.
- Give the history of the situation and explain how this problem developed.
- Use a frame story which gives an example of the problem in the introduction and then returns to the problem being solved in the conclusion.
- Use a vivid description with sensory details that makes the reader see the situation.
- Use a movie, book, or TV show to illustrate the problem.

Thesis
At the end of your introduction, you can ask your thesis question and then give your solution idea as the thesis statement. Here are some tips:
- State your solution clearly in one sentence. Usually, your thesis sentence will come after your description of the problem.
- Sometimes, you may not want to state this thesis until after you have shown that the present solutions aren't working, especially if your thesis is something simple.

The body of your paper will be three or more paragraphs and must:
- Explain your solution clearly
- Give details about how this solution will solve the problem
- Explain who will be in charge and how it will be funded
- Give evidence that your solution will work (expert opinion, examples of when it has worked before, statistics, studies, or logical argument)

The body of your paper will also seek to argue that your solution will solve the problem.

Problem/Solution Writing Prompts for Middle School
Writing a problem solution essay can be tough, especially if you are the one choosing the topic. Choose one prompt and then write a paragraph describing how you would solve it. Sometimes, problems are not solved on the first attempt; you may have two different attempts in your paragraph.
- You have noticed your school needs to be cleaned up. Think about what students could do to clean up your school and keep it clean. How would you solve this problem? Who would be involved? How would you get supplies?
- New students who enroll in your school often feel left out because they do not have any friends and it is not easy for them to make new friends. Think of some ways to solve this problem for new students to make them feel better about their new school. First, describe the problem of being a new student, and then explain what you would do to help solve the problem.
- Studies show that many young people spend more time watching TV than they do in any other leisure activity. This has concerned some citizens, and they have formed a group called Unplug and Tune In.
- Think about the community in which you live, and what you could do to make it a better place. Choose one problem that needs to be solved to make your community a better place to live. Write a letter to the editor describing how solving this problem would make your community a better place.

Writing Project 2: Education as Problem/Solution
The following writing prompts and guidelines focus our discussion through the article and into the work of Writing Project 2. In Writing Project 1, you considered stories and stereotypes.
|
Sight words are high-frequency words that make up more than 50 percent of the words young readers will come across. Some sight words are hard to pronounce because they don't sound the way they are spelled, so learning and recognizing the word by sight is easier. Sight words are often referred to as the Dolch Word List or Fry List. Teaching high-frequency words to students early is the standard approach in elementary schools. Teachers and parents can use sight word flash cards with children to help them pick up on the words quickly, by repeatedly showing them the words on the flash cards. Hellokids.com has created a list of high-frequency words for each grade to print and practice with your child. Discover the list of books your child should read at each grade level and help them enjoy the world of words and reading!
|
Communication & Literacy
Play experiences in natural environments offer children a unique opportunity to work and play together, removed from the distraction of toys, media and more formal “activities”. Children are encouraged to use their imaginations, communicate their ideas, negotiate, develop their thinking and oral language skills, and tell and listen to stories. Natural materials offer the children a variety of new ways to explore and express themselves, including: drawing with sticks and rocks; weaving, threading and constructing; and using mud for sculpting, cooking and creating.
Opportunities for Language Development afforded in nature:
· Negotiating, problem solving and sharing of ideas during play
· Story telling, singing, chants and rhymes shared throughout the day
· Development of skills for sharing ideas, feelings and understandings using language and representation in play
· Development of vocabulary through inquiry and exploration
Opportunities for Literacy Development afforded in nature:
· Natural materials can be used as the inspiration or props for a wide range of imaginative play and story telling: development of oral language and an awareness of elements of stories such as characters and beginning, middle and end.
· Drawing, mark making and writing in the dirt using sticks
· Pattern making using natural materials such as sticks, rocks and leaves
· Development of story writing, use of symbols and writing in reflective journals and learning experiences based on experiences in nature