The intertidal zone, also known as the foreshore and seashore and sometimes referred to as the littoral zone, is the area that is above water at low tide and under water at high tide (in other words, the area between tide marks). This area can include many different types of habitat and many kinds of animals, such as starfish, sea urchins, and some species of coral. It also includes well-known habitats such as steep rocky cliffs, sandy beaches, and wetlands (e.g., vast mudflats). The zone can be a narrow strip, as on Pacific islands that have only a narrow tidal range, or can cover many metres of shore where shallow beach slopes interact with a large tidal excursion.
Organisms in the intertidal zone are adapted to an environment of harsh extremes. Water is available regularly with the tides, but conditions vary from fresh after rain to highly saline, with salt left behind as the zone dries between tidal inundations. The action of waves can dislodge residents of the littoral zone. With the intertidal zone's high exposure to the sun, the temperature can range from very hot in full sun to near freezing in colder climates. Some microclimates in the littoral zone are moderated by local features and by larger plants such as mangroves. Adaptation to the littoral zone allows organisms to use the nutrients that the sea delivers to the zone in high volume on a regular basis, actively carried in by the tides. Edges of habitats, in this case between land and sea, are themselves often significant ecologies, and the littoral zone is a prime example.
A typical rocky shore can be divided into a spray zone or splash zone (also known as the supratidal zone), which is above the spring high-tide line and is covered by water only during storms, and an intertidal zone, which lies between the high and low tidal extremes. Along most shores, the intertidal zone can be clearly separated into the following subzones: high tide zone, middle tide zone, and low tide zone. The intertidal zone is one of a number of marine biomes or habitats, including estuaries, neritic, surface and deep zones.
Marine biologists divide the intertidal region into three zones (low, middle, and high), based on the overall average exposure of the zone. The low intertidal zone, which borders on the shallow subtidal zone, is only exposed to air at the lowest of low tides and is primarily marine in character. The mid intertidal zone is regularly exposed and submerged by average tides. The high intertidal zone is only covered by the highest of the high tides, and spends much of its time as terrestrial habitat. The high intertidal zone borders on the splash zone (the region above the highest still-tide level, but which receives wave splash). On shores exposed to heavy wave action, the intertidal zone will be influenced by waves, as the spray from breaking waves will extend the intertidal zone.
Depending on the substratum and topography of the shore, additional features may be noticed. On rocky shores, tide pools form in depressions that fill with water as the tide rises. Under certain conditions, such as those at Morecambe Bay, quicksand may form.
Low tide zone (lower littoral)
This subregion is mostly submerged; it is exposed only around low tide, and for longer periods during extremely low tides. This area is teeming with life; the most notable difference between this subregion and the other three is that there is much more marine vegetation, especially seaweeds, and much greater biodiversity. Organisms in this zone generally are not well adapted to periods of dryness and temperature extremes. Some of the organisms in this area are abalone, sea anemones, brown seaweed, chitons, crabs, green algae, hydroids, isopods, limpets, mussels, nudibranchs, sculpin, sea cucumbers, sea lettuce, sea palms, sea stars, sea urchins, shrimp, snails, sponges, surf grass, tube worms, and whelks. Creatures in this area can grow to larger sizes because there is more available energy in the localized ecosystem. Marine vegetation can also grow to much greater sizes than in the other three intertidal subregions because of the better water coverage: the water is shallow enough to allow plenty of light to reach the vegetation and support substantial photosynthetic activity, and the salinity is at almost normal levels. This area is also protected from large predators such as fish by the wave action and the relatively shallow water.
The intertidal region is an important model system for the study of ecology, especially on wave-swept rocky shores. The region contains a high diversity of species, and the zonation created by the tides causes species ranges to be compressed into very narrow bands. This makes it relatively simple to study species across their entire cross-shore range, something that can be extremely difficult in, for instance, terrestrial habitats that can stretch thousands of kilometres. Communities on wave-swept shores also have high turnover due to disturbance, so it is possible to watch ecological succession over years rather than decades.
Since the foreshore is alternately covered by the sea and exposed to the air, organisms living in this environment must have adaptations for both wet and dry conditions. Hazards include being smashed or carried away by rough waves, exposure to dangerously high temperatures, and desiccation. Typical inhabitants of the intertidal rocky shore include urchins, sea anemones, barnacles, chitons, crabs, isopods, mussels, sea stars, and many marine gastropod molluscs such as limpets and whelks.
As with the dry sand part of a beach, legal and political disputes can arise over the ownership and use of the foreshore. One recent example is the New Zealand foreshore and seabed controversy. In legal discussions the foreshore is often referred to as the wet-sand area.
For privately owned beaches in the United States, some states such as Massachusetts use the low water mark as the dividing line between the property of the State and that of the beach owner. Other states such as California use the high water mark.
In the UK, the foreshore is generally deemed to be owned by the Crown although there are notable exceptions, especially what are termed several fisheries which can be historic deeds to title, dating back to King John's time or earlier, and the Udal Law, which applies generally in Orkney and Shetland.
Mussels in the intertidal zone in Cornwall, England. |
These "polymer opals" have elastic qualities and get their color from the pattern of nanoparticles forming its lattice.
By Jillian Scharr
Opals have a unique property: Their color comes not from pigment, but from their internal molecular structure, which is organized in such a way that it reflects certain wavelengths of light.
This "structural color" doesn't run or fade, unlike color from pigment.
Using nanotechnology, researchers at University of Cambridge in the U.K. have synthesized a substance that has structural color much like opals.
But that's not all: these "polymer opals," so called because they are composed of polymers (molecular compounds consisting of many repeating structures), are also thin and flexible, and could be used in fabrics, security devices and even printed money.
The material is formed similarly to opals, which are created when tiny chips of silica, a glasslike material found in sand and quartz, are suspended in water. As the water evaporates, the silica chips settle onto each other in uneven layers, eventually bonding to form a hard stone that still contains somewhere between 6 percent and 10 percent water by weight.
The color of the opal depends on the structure the silica chips happen to take when they solidify into a crystal lattice. More space between the lattice's nodes shifts the color toward the red end of the spectrum, while less space shifts it toward the blue end.
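A first-order Bragg estimate makes the link between lattice spacing and reflected color concrete. The short Python sketch below is illustrative only: the sphere diameters and the effective refractive index are assumptions chosen for the example, not values reported for natural opal or for the Cambridge material.

```python
import math

# Rough first-order Bragg estimate of the wavelength reflected at normal
# incidence by a close-packed (FCC) lattice of spheres. Sphere sizes and
# the effective refractive index are illustrative assumptions.

def bragg_wavelength_nm(sphere_diameter_nm, n_eff=1.42):
    """lambda ~ 2 * d111 * n_eff, with d111 = sqrt(2/3) * sphere diameter."""
    d111 = math.sqrt(2.0 / 3.0) * sphere_diameter_nm
    return 2.0 * d111 * n_eff

for diameter in (200, 250, 300):  # nanometres
    print(f"{diameter} nm spheres -> ~{bragg_wavelength_nm(diameter):.0f} nm reflected")
# Larger spacing reflects longer (redder) wavelengths; smaller spacing reflects bluer ones.
```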
For the polymer opals, the researchers replaced the silica with spherical nanoparticles, and the water with a rubberlike substance that forms an outer shell. By controlling the pattern in which the nanoparticles bond to the shell, the researchers can control the finished material's color.
Once the structure is correct, the shell firms up into an elastic matrix that holds the nanoparticles together into a durable structure that reflects and refracts light, known as a photonic crystal.
If the outer shell in which the nanoparticles are encased is allowed to remain flexible, then moving or stretching this material will cause it to change color, because this action also changes the spacing of the nanoparticles' structure. So when stretched, the polymer opals' color shifts toward the blue range of the visible light spectrum, and when compressed, they shift toward red.
In addition to applications in jewelry, polymer opals also have the potential to make a splash in fashion, since the material could be made into a sheetlike shape and then bonded onto fabric.
Using polymer opals to create color instead of dye would lessen the textile industry's dependence on highly toxic chemicals to produce dye.
The researchers also emphasized polymer opals' potential to create bank notes and other security printing, as they can produce brighter colors than the holograms normally used on paper money and are much more difficult to counterfeit.
Understanding how the new standards will improve reading comprehension for students:
The Common Core State Standards for English Language Arts are designed to ensure students fully understand what they read, and can effectively talk and write about it. These are the fundamental reading comprehension skills needed to succeed throughout elementary, middle and high school, college and beyond – regardless of career path.
While the old standards focused on simply expecting students to recite facts learned through reading textbook passages, the new standards expect students to read books and textbook passages that are more challenging than what was previously read in each grade level – including reading more original writings whenever possible, such as President Abraham Lincoln’s “The Gettysburg Address” or Martin Luther King, Jr.’s “Letter from Birmingham Jail.” Students are then asked to show a deeper understanding of this material than has previously been required of them, demonstrating greater critical thinking and analytic skills.
Kindergarten and 1st Grade Example
The passage below is an example of the kind of text that was read in 2nd or 3rd grade under past standards. Under the English Language Arts Common Core Standards this text is expected to be read at the Kindergarten and 1st grade level.
|Frog and Toad Together, by: Arnold Lobel|
|Frog was in his garden. Toad came walking by. |
“What a fine garden you have, Frog,” he said. “Yes,” said Frog. “It is very nice, but it was hard work.”
“I wish I had a garden,” said Toad. “Here are some flower seeds. Plant them in the ground,” said Frog, “and soon you will have a garden.”
“How soon?” asked Toad. “Quite soon,” said Frog.
Toad ran home. He planted the flower seeds.
“Now seeds,” said Toad, “start growing.”
Toad walked up and down a few times. The seeds did not start to grow. Toad put his head close to the ground and said loudly, “Now seeds, start growing!” Toad looked at the ground again. The seeds did not start to grow.
Toad put his head very close to the ground and shouted, “NOW SEEDS, START GROWING!”
Frog came running up the path. “What is all this noise?” he asked. “My seeds will not grow,” said Toad. “You are shouting too much,” said Frog. “These poor seeds are afraid to grow.”
“My seeds are afraid to grow?” asked Toad. “Of course,” said Frog. “Leave them alone for a few days. Let the sun shine on them, let the rain fall on them. Soon your seeds will start to grow.”
That night, Toad looked out of his window. “Drat!” said Toad. “My seeds have not started to grow. They must be afraid of the dark.”
Toad went out to his garden with some candles. “I will read the seeds a story,” said Toad. “Then they will not be afraid.” Toad read a long story to his seeds.
All the next day Toad sang songs to his seeds. And all the next day Toad read poems to his seeds. And all the next day Toad played music for his seeds.
Toad looked at the ground. The seeds still did not start to grow. “What shall I do?” cried Toad. “These must be the most frightened seeds in the whole world!” Then Toad felt very tired and he fell asleep.
“Toad, Toad, wake up,” said Frog. “Look at your garden!” Toad looked at his garden. Little green plants were coming up out of the ground.
“At last,” shouted Toad, “my seeds have stopped being afraid to grow!”
“And now you will have a nice garden too,” said Frog. “Yes,” said Toad, “but you were right, Frog. It was very hard work.”
|Student task before Common Core|Student task with Common Core|
|---|---|
|Students retell the main events (e.g., beginning, middle, end) of Frog and Toad Together, and identify the characters and the setting of the story.|Students compare and contrast the adventures and experiences of Frog and Toad in Frog and Toad Together, and participate in collaborative conversations about their comparisons.|
2nd and 3rd Grade Example using American Literature – Charlotte’s Web
One of the ways that teachers teach young children how to read is by reading aloud to them. It helps them learn information that they may not be able to read and understand by themselves. Children can focus on the words and the pictures, which will help them when they try to tackle rich written content on their own. An example of a book that would be read aloud to second or third graders is Charlotte’s Web. It’s the story of a little girl named Fern, who loves a piglet named Wilbur, and his friend Charlotte, who is a spider who lives in the barn with Wilbur.
Each read-aloud session includes a teacher-led discussion among the class, where the teacher asks questions and students ask each other questions. They may talk about what a narrator is, and how conversations between characters are not part of the narrator’s role. They will discuss Wilbur and Fern and the setting of the barnyard, all to get students actively thinking about the story.
Reading aloud is not a new way to teach reading. What is new is that the discussion that follows is more in depth, and more rigorous. Below are some questions that the teacher may ask after she finishes a chapter or the book:
|Charlotte’s Web, by E. B. White|
|Old Expectation|New Expectation|
|---|---|
|Who is telling the story in Charlotte’s Web? How does Wilbur feel towards Charlotte at the end of the story? How do you know?|What is your point of view about Wilbur? How is it different from Fern’s point of view about Wilbur? How is it different from the narrator’s point of view?|
The old expectation requires students to state that the narrator is telling the story, and give one example to show that Wilbur loves Charlotte. In order to be successful in this task, the student only has to give an example that shows that Wilbur loves Charlotte. He doesn’t have to understand why Wilbur loves Charlotte. This example could be as simple as Wilbur telling Charlotte that he loves her.
The new expectation requires students to understand and explain that characters see the “world” differently. The reader learns about Wilbur and Charlotte and Fern from the narrator, but also learns about each character by what they say to each other, and how they act in certain situations. In order to be successful in this task, the student has to have listened to the teacher and engaged in the rich classroom discussion that occurs after each chapter has been read. He has to think about why Fern sees Wilbur in a different way than the narrator does, and explain that.
2nd and 3rd Grade Non-Fiction Example
The two passages below are an example of the kind of text that was used in Grades 2-3 to teach old standards, compared to the kind of text that will now be used under the new Common Core State Standards for English.
Two Texts: Apollo 11

Grades 2-3 Text Sample before Common Core:

July 16, 1969. Cape Kennedy, Florida.
A huge white rocket towers against the blue sky. It is thirty-six stories high. It weighs over six million pounds. It is called the Saturn V. It is the biggest, most powerful rocket ever built.
Today it is going to make the dream of centuries come true. It will send three men where no human being has ever been before. To the moon!
A few miles away almost a million people crowd the highways and beaches. Small boats full of excited people dot the ocean. They have all come to see the launch. People are not allowed any closer. The danger of an explosion is too great.
Around the planet millions more people are watching their television screens. Everyone wants to share in the longest, most incredible voyage in history.
As launch time approaches, three astronauts in gleaming white spacesuits walk toward the huge rocket. Their names are Neil Armstrong, Edwin “Buzz” Aldrin, and Michael Collins.
The men get into an elevator. They ride up to the top of the launch tower. There, at the very tip of the rocket, a spacecraft is waiting. It is called Apollo 11.
Grades 2-3 Common Core Text Expectations:

High above there is the Moon, cold and quiet, no air, no life, but glowing in the sky.
Here below there are three men who close themselves in special clothes, who—click—lock hands in heavy gloves, who—click—lock heads in large round helmets.
It is summer here in Florida, hot, and near the sea. But now these men are dressed for colder, stranger places. They walk with stiff and awkward steps in suits not made for Earth.
They have studied and practiced and trained, and said good-bye to family and friends. If all goes well, they will be gone for one week, gone where no one has been.
Their two small spaceships are Columbia and Eagle. They sit atop the rocket that will raise them into space, a monster of a machine: It stands thirty stories, it weighs six million pounds, a tower full of fuel and fire and valves and pipes and engines, too big to believe, but built to fly—the mighty, massive Saturn V.
The astronauts squeeze into Columbia’s sideways seats, lying on their backs, facing toward the sky—Neil Armstrong on the left, Michael Collins on the right, Buzz Aldrin in the middle.
Click and they fasten straps. Click and the hatch is sealed. There they wait, while the Saturn hums beneath them.
Near the rocket, in Launch Control, and far away in Houston, in Mission Control, there are numbers, screens, and charts, ways of watching and checking every piece of the rocket and ships, the fuel, the valves, the pipes, the engines, the beats of the astronauts’ hearts.
As the countdown closes, each man watching is asked the question: GO/NO GO? And each man answers back: “GO.” “GO.” “GO.” Apollo 11 is GO for launch.
In the first passage, the text is organized in the order the events happened. It is straightforward and easy to understand. The sentences are short and simple, and state the facts in a concrete fashion. The purpose of the text is clear: to describe the setting of the Apollo 11 launch.

In the second passage, the organization of the text is generally in the order the events happened, but connects some events in a less obvious manner. The sentences are longer and include some challenging constructions. The purpose of the text is implied, but not too difficult to identify based upon the context: to describe the setting of the Apollo 11 launch.
Not only is the text more challenging, the task presented to students after reading such a passage is more difficult too.
|Grades 2-3 Student Task before Common Core|Grades 2-3 Common Core Student Task|
|---|---|
|What was the spacecraft called?|What is the author trying to convey when he says, “these men are dressed for colder, stranger places. They walk with stiff and awkward steps…”? Use information from the text to explain your answer.|
|Why were people across the planet watching their television screens?|What makes this voyage an important event in history? Use information from the text to explain your answer.|
|Who were the three astronauts on the spacecraft?| |
FRAME NARRATIVE: A story within a story, within sometimes yet another story, as in, for example, Mary Shelley's Frankenstein. As in Mary Shelley's work, the form echoes in structure the thematic search in the story for something deep, dark, and secret at the heart of the narrative. The form thus also resembles the psychoanalytic process of uncovering the unconscious behind various levels of repressive, obfuscating narratives put in place by the conscious mind. As is often the case (and Shelley's work is no exception), a different individual narrates the events of the story in each frame. This structure of course also leads us to question the reasons behind each of the narrations since, unlike an omniscient narrative perspective, the teller of the story becomes an actual character with concomitant shortcomings, limitations, prejudices, and motives. The process of transmission is also highlighted since we often have a sequence of embedded readers or audiences. A famous example in film of such a structure is Orson Welles' Citizen Kane. See also the definition for narration.
About Shreya ECHO & Ultrasound Center
What is an echocardiogram?
An echocardiogram (echo) is a test that uses high frequency sound waves (ultrasound) to make pictures of your heart. The test is also called echocardiography or diagnostic cardiac ultrasound.
Why do people need an echo test?
Your doctor may use an echo test to look at your heart’s structure and check how well your heart functions.
The test helps your doctor find out:
- The size and shape of your heart, and the size, thickness and movement of your heart’s walls.
- How your heart moves.
- The heart’s pumping strength.
- If the heart valves are working correctly.
- If blood is leaking backwards through your heart valves (regurgitation).
- If the heart valves are too narrow (stenosis).
- If there is a tumor or infectious growth around your heart valves.
What is an ultrasound?
An ultrasound scan is a medical test that uses high-frequency sound waves to capture live images from the inside of your body. It’s also known as sonography.
The technology is similar to that used by sonar and radar, which help the military detect planes and ships. An ultrasound allows your doctor to see problems with organs, vessels, and tissues without needing to make an incision.
Unlike other imaging techniques, ultrasound uses no radiation. For this reason, it’s the preferred method for viewing a developing fetus during pregnancy.
Why an ultrasound is performed
Most people associate ultrasound scans with pregnancy. These scans can provide an expectant mother with the first view of her unborn child. However, the test has many other uses.
Your doctor may order an ultrasound if you’re having pain, swelling, or other symptoms that require an internal view of your organs. An ultrasound can provide a view of the:
- brain (in infants)
- blood vessels |
Similar exercises are performed in water and on land, such as jumping jacks, high knees, running and walking. Plus, you exercise against the resistance of the water with every movement, which increases the feeling of workout intensity. You may think these things lead to a similar amount of calories burned during aerobics in water or on land. However, the different environments do lead to a different number of calories used.
Calories are a representation of the amount of energy your body needs for activities. Your body weight affects how many calories you burn. The more you weigh, the more energy it takes to perform exercise. According to the Harvard Health Publication, if you perform water aerobics for 30 minutes, you burn 120, 149 or 178 calories if you weigh 125, 155 or 185 pounds respectively. Individuals of the same weight who perform low-impact land aerobics burn approximately 165, 205 or 244 calories respectively.
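Because the quoted figures scale almost linearly with body weight, you can estimate values for weights in between. The Python sketch below interpolates from the Harvard numbers above; the interpolation itself and the 140-pound example are my own simplification, not part of the published data.

```python
# Rough linear fit to the Harvard Health figures quoted above
# (calories burned in 30 minutes of exercise).

HARVARD_30MIN = {
    "water aerobics":      {125: 120, 155: 149, 185: 178},
    "low-impact aerobics": {125: 165, 155: 205, 185: 244},
}

def estimate_calories(activity, weight_lb):
    """Linearly interpolate/extrapolate 30-minute calories from the quoted points."""
    points = sorted(HARVARD_30MIN[activity].items())
    (w1, c1), (w2, c2) = points[0], points[-1]
    slope = (c2 - c1) / (w2 - w1)
    return c1 + slope * (weight_lb - w1)

print(round(estimate_calories("water aerobics", 140)))       # about 134 kcal
print(round(estimate_calories("low-impact aerobics", 140)))  # about 185 kcal
```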
An Affair of the Heart
One of the reasons for the difference in calorie burning between land and water is the heart rate. During aqua-aerobics, the water's temperature, compression and buoyancy cause your heart to beat at a slower rate. Experts at the Aquatic Exercise Association recommend using a target heart rate that is approximately 13 percent lower than the one you use during land exercise. Typically, the slower your heart beats, the fewer calories you burn.
When you exercise on land, you feel the full weight of gravity pressing down on you. It connects you to the ground, and with every exercise, you support 100 percent of your body weight. In water, gravity's effects on you are reduced. When you are in water up to your neck, you are supporting only 10 percent of your weight. In chest-deep water, you support approximately 25 to 35 percent of your body. Water up to your waist presents the greatest challenge, as you carry 50 percent of your body weight. These reductions also reduce the number of calories required to move your body through the liquid environment.
Bang for Your Buck
Land and water aerobic exercise both improve your cardiovascular system and burn calories. The water environment reduces the impact on your joints, as you do not support all your body weight. This does not mean you have to sacrifice calorie-burning benefits. As with land exercise, your intensity can increase the amount of calories used. When you combine arm movements with your leg exercises, you increase the calorie-burning of water exercise. You can also perform high-intensity jumping movements in the water to increase the number of calories used.
(DigitalJournal) The heart of Yellowstone National Park is its caldera, a vast basin left behind after the last of three volcanic eruptions spanning a period of 2.1 million years. But what has scientists so concerned is what lies beneath the park’s natural wonders.
In what has been described as “astounding,” a new study on the size of the magma chamber beneath Yellowstone National Park was presented at the fall meeting of the American Geophysical Union (AGU), in San Francisco, Calif.
Using measurements of seismic waves from earthquakes, scientists were able to map the magma lake underneath Yellowstone’s caldera as being 55 miles long and 18 miles wide. The magma lake runs from 6 to 9 miles underneath the caldera. |
Reading: This week we will be working on recounting the main idea of a text and providing evidence to support our answers.
- RI3.2 Determine the main idea of a text; recount the key details and explain how they support the main idea.
Writing: This week we will be working on informational/explanatory text. Students will be writing about how they can keep the earth clean.
- W3.2 Write informative/explanatory texts to examine a topic and convey ideas and information clearly.
Math: We will be starting Unit 9 in Math. This chapter revisits multiplication and division story problems.
- 3.OA.A.3: Use multiplication and division within 100 to solve word problems in situations involving equal groups, arrays, and measurement quantities, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.
- 3.NBT.A.3: Multiply one-digit whole numbers by multiples of 10 in the range 10-90 (e.g., 9 x 80, 5 x 60)
Science: Hello Engineers! This week we will be focusing on how animals adapt, behave, and communicate in their environment.
- 3-LS4-2 Use evidence to construct an explanation for how the variations in characteristics among individuals of the same species may provide advantages in surviving, finding mates, and reproducing.
- 3-LS4-3 Construct an argument with evidence that in a particular habitat some organisms can survive well, some survive less well, and some cannot survive at all.
Social Studies: This week we will be finishing up our unit on Michigan Government. We will be discussing the functions of courts and the rights and responsibilities of citizens.
- C5.0.1 Identify rights (e.g., freedom of speech, freedom of religion, right to own property) and responsibilities of citizenship (e.g., respecting the rights of others, voting, obeying laws).
- C3.0.3 Identify the three branches of state government in Michigan and the powers of each.
- C3.0.4 Explain how state courts function to resolve conflict. |
Keywords: Right triangle, isosceles right triangle, special right triangle
Common Core Standard G.SRT.6
A triangle with two equal sides and a ninety-degree angle will be a 45-45-90 triangle. Notice that the triangle drawn inside a circle is a 45-45-90 because the two radii are equal and there is a 90-degree angle.
As the name implies a 45-45-90 triangle has two angle measures of 45 degrees, and one of ninety degrees.
A forty-five, forty-five, ninety triangle has two equal sides.
The hypotenuse is always the longest side, and is opposite the right angle.
The length of the hypotenuse equals the leg length × √2.
The length of one leg of a 45-45-90 triangle equals the hypotenuse ÷ √2.
A helpful angle to side ratio to remember when working with 45 45 90 triangles is as follows:
45 : 45 : 90
x : x : x√2
The legs opposite the forty-five degree angles are equal, and the hypotenuse, which is opposite the 90-degree angle, is equal to x√2.
You can also write this ratio as 1 : 1 : √2.
The formula for finding the area is ½(leg)².
A 45 45 90 triangle is also called an isosceles right triangle
The diagonal of a square creates two 45 45 90 triangles.
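As a worked example (my own, not from the video notes), take a leg length of 3 units, and separately a hypotenuse of 8 units:

```latex
% 45-45-90 triangle with legs of length 3
\[
\text{hypotenuse} = 3\sqrt{2} \approx 4.24,
\qquad \text{check: } 3^2 + 3^2 = 18 = \left(3\sqrt{2}\right)^2
\]
\[
\text{area} = \tfrac{1}{2}(\text{leg})^2 = \tfrac{1}{2}(3)^2 = 4.5 \text{ square units}
\]
% Going the other way, given the hypotenuse h = 8:
\[
\text{leg} = \frac{8}{\sqrt{2}} = 4\sqrt{2} \approx 5.66
\]
```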
Questions answered with this video
How do you find the hypotenuse of a 45 45 90 triangle?
What is the Pythagorean Theorem?
If a triangle has angle measures of 45 45 and 90 and a leg length of 3 units, what is the length of the hypotenuse?
If given the hypotenuse length, how do you find the leg length?
How do you use the Pythagorean Theorem to find the missing height of the 30-60-90 special right triangle?
Special Right Triangles in Geometry: 45-45-90 and 30-60-90
Questions answered in this video
If given the hypotenuse how do you find the leg length of a 45 45 90 triangle?
If given the leg length how do you find the hypotenuse of a 45 45 90 triangle?
What shortcuts can I use to find the leg lengths of a special right triangle easily? |
Center and Apothem of Regular Polygons - Concept
An apothem is a perpendicular segment from the center of a regular polygon to one of the sides. When radii are drawn from the center to the vertices of the polygon, congruent isosceles triangles are formed with the polygon apothem as the height. These triangles are used in calculating the area of regular polygons. Related topics include properties of isosceles triangles and area of triangles.
In order to find the area of any regular polygon, first we need to inscribe it inside a circle. By doing that, we've created this apothem, where the definition of the apothem is a perpendicular segment from the center to the sides of the polygon. But back up. What's the center? Well, the center of this polygon is the center of the circle that circumscribes the polygon. So notice that I can draw in a radius from that center to a vertex and that is also the radius of the circle. So if we zoom in on what's going on with this apothem, the perpendicular segment from the center to one of the sides of the polygon. And if I draw in 2 radii, since the radius of a circle is constant, we've created an isosceles triangle.
So whenever you draw in an apothem and segments to the vertices, you are going to create an isosceles triangle. One property of an isosceles triangle is that if you have an altitude, you are bisecting that vertex angle. So remember that this apothem will bisect it and you'll have two congruent base angles down here.
Now specifically for a hexagon. Since I can draw in 6 radii, we're going to divide 360 degrees 6 ways, which means each of these angles up here is going to be 60 degrees, and since these two base angles are congruent, we're going to have equilateral triangles. So this only works for a hexagon. But remember when you're problem solving that by drawing in your radii, you're creating 6 equilateral triangles. But this doesn't give us an area formula. So what we're going to have to do is we're going to have to think in terms of this triangle.
Let's say I knew the length of this base, which is also going to be our side length. So I'm going to call that s. I can calculate the area of this triangle by taking my apothem, which is the height of this triangle, times the base, which is s, and then dividing it in half. That's just the definition for the area of a triangle: the base times the height divided by 2.
The question is how many of these triangles are we going to have? Well, in the case of a hexagon, we're going to have 6 triangles so I would have to multiply this by 6. And what I'm going to do is I'm going to generalise this formula and say that instead of writing 6 just for a hexagon, I'm going to write n for an n-sided polygon. So this formula will only work for a regular polygon. Which means equilateral and equiangular.
So the area formula is going to be the apothem times the side length times the number of sides, all divided by 2, and what you're doing here is calculating the area of one of these triangles and then multiplying it by however many triangles you have. |
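To make the formula concrete, here is a short worked example of my own (not part of the transcript) for a regular hexagon with side length 6, using the equilateral-triangle fact mentioned above:

```latex
% Regular hexagon with side s = 6: the radii split it into equilateral
% triangles, so the apothem is the triangle height, a = (s*sqrt(3))/2 = 3*sqrt(3).
\[
A = \frac{a \cdot s \cdot n}{2}
  = \frac{3\sqrt{3} \cdot 6 \cdot 6}{2}
  = 54\sqrt{3} \approx 93.5 \text{ square units}
\]
```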
Extra Digits: Mole's Thumblike Wristbone Helps Tackle Tunneling
European mole (Talpa europaea)
Most animals with paws have a similar hand shape, with five fingers, or claws, on each. One big exception to this rule is the mole, which has an extra thumb on its front paws. New research shows that this extra thumb isn't a thumb at all, but an extended wristbone.
Having extra fingers or toes, a condition called polydactyly, isn't all that uncommon in humans and other animals. The condition occurs in some form about once in every 500 human births, and the rate is even higher among males and African-Americans. It can occur on both hands or feet, or just one hand or foot.
Cats and dogs, especially some specific breeds, often have additional toes on their back paws, which traditionally have four digits. Giant and red pandas also have an extra thumb, which helps them grasp bamboo.
By studying the growth of Iberian mole (Talpa occidentalis) paws in the womb, and comparing it with the development of the paws of shrew — a closely related species that has five-fingered paws — the researchers were able to tease apart how these special moles grow their extra thumbs. Led by Marcelo Sanchez at the University of Zurich in Switzerland, the team specifically looked at gene expression during the paw's development.
They found that the mole's extra thumb sprouts from a bone in its wrist, with the thumb-bone growing parallel to the "normal" inner thumb; but that's where the similarities stop. The outer thumb doesn't have any moving joints, consisting of a single, sickle-shaped bone that develops later than the inner thumb and the rest of the mole's fingers.
The bone develops out of a wristbone called the sesamoid bone, the same way the pandas' extra thumb develops. The researchers hypothesize that high levels of testosterone in the moles could play a role in this extraordinary development, as the hormone is important for bone and finger growth.
Like the panda's, the mole's extra thumb gives the animal a special advantage. The researchers believe that the extra palm area (the bone makes the palm wider) allows more efficient digging, and the solid piece of bone on the outside edge makes the palm more rigid, since it can wiggle, but not bend. Improved digging abilities are important to the mole, which digs underground lairs.
This makes sense, because other species of mole that don't tunnel as much have smaller, stubbier outer thumbs, the researchers said. Either these moles never developed the need to tunnel underground to the same extent, so never fully developed the outer thumb, or environmental changes no longer required them to develop it, so they stopped investing extra energy into growing them, the researchers say.
The study was published in the July 12 issue of the Journal of the Royal Society Biology Letters.
- slide 1 of 6
When learning the English language, understanding the differences between semantic and pragmatic meaning can be a valuable tool to maximize your linguistic ability. Although both are terms used in relation to the meanings of words, their usage is drastically different.
- slide 2 of 6
What Is Semantics?
Semantics refers to the meaning of words in a language and the meaning within the sentence.
Semantics considers the meaning of the sentence without the context. The field of semantics focuses on three basic things: “the relations of words to the objects denoted by them, the relations of words to the interpreters of them, and, in symbolic logic, the formal relations of signs to one another (syntax)”. Semantics is just the meaning that the grammar and vocabulary impart; it does not account for any implied meaning.
In this sense, there's a focus on the general 'rules' of language usage.
- slide 3 of 6
Pragmatic Word Usage
Pragmatic meaning looks at the same words and grammar used semantically, except within context. In each situation, the various listeners in the conversation define the ultimate meaning of the words, based on other clues that lend subtext to the meaning.
For example, if you were told to, “Crack the window," and the room was a little stuffy, and the speaker had just said prior to this that they were feeling a little warm, then you would know, pragmatically, that the speaker would like you to open the window a 'crack' or just a little.
If you were with a friend who was locked out of his home, and you were standing at a back door trying to get inside, your friend might say 'crack that window' and literally mean to put a 'crack' in the window, or break the window.
Confused? Let's dig deeper.
- slide 4 of 6
Differences in Meaning
As the example above shows, considering both the pragmatic and semantic meaning of your sentence is important when communicating with other people. Although semantics is concerned only with the exact, literal meaning of the words and their interrelations, pragmatic usage focuses on the inferred meaning that the speakers and listeners perceive.
The following examples demonstrate the difference between the two:
She hasn’t taken a shower.
He was so tired he could sleep for days.
In both of these examples, the context and pragmatic meaning really define the sentence.
In the first, did the speaker really mean to say that the woman has not ever taken a shower, not even once? Although the sentence says just that, the listener in the conversation may understand, based on other factors, that the speaker means that the woman they are referring to has not taken a shower ... today.
In the second example, we have a guy who is so tired he can sleep for days. Is he really going to sleep for days? Semantically, we would need to take that sentence to mean exactly that. But, in casual conversation, the listeners and speaker might tell you that the guy was just saying he was really, really tired, and using those words to convey that meaning, instead of saying, 'he was really tired'.
- slide 5 of 6
Idioms and Miscommunications
New English language learners need to learn how to understand the pragmatic meaning of the sentence in order to avoid miscommunications.
One way to make the transition easier is to learn phrases and idioms that are commonly said but whose intended meanings differ from their semantic meaning.
In the example used above, “Crack the window" is a common phrase or idiom meant to open the window so that only a crack is showing.
Although full comprehension of pragmatic meaning in a new language can take time, students can speed up the process by practicing the most common exceptions to the semantic meaning.
Ready to learn more? Here are some common idioms in the English language using food terms, such as 'not my cup of tea'.
You may also find this lesson plan on teaching semantic meaning helpful.
- slide 6 of 6
"Semantics." 2009. The Columbia Encyclopedia, 6th ed. Columbia University Press: New York.
Griffiths, Patrick. An Introduction to English Semantics and Pragmatics. Edinburgh, Scotland: Edinburgh University Press, 2006. |
The immediate motivation for safe disposal is the radioactive waste stored currently at the Hanford Site, a facility in Washington State that produced plutonium for nuclear weapons during the Cold War. The volume of this waste originally totaled 54 million gallons and was stored in 177 underground tanks.
In 2000, Hanford engineers began building machinery that would encase the radioactive waste in glass. The method, known as “vitrification,” had been used at another Cold War-era nuclear production facility since 1996. A multibillion-dollar vitrification plant is currently under construction at the Hanford site.
To reduce the cost of high-level waste vitrification and disposal, it may be advantageous to reduce the number of high-level glass canisters by packing more waste into each glass canister. To reduce the volume to be vitrified, it would be advantageous to separate the nonradioactive waste, like aluminum and iron, out of the waste, leaving less waste to be vitrified. However, in its 2014 report, the DOE Task Force on Technology Development for Environmental Management argued that, “without the development of new technology, it is not clear that the cleanup can be completed satisfactorily or at any reasonable cost.”
The high-throughput, plasma-based, mass separation techniques advanced at PPPL offer the possibility of reducing the volume of waste that needs to be immobilized in glass. “The interesting thing about our ideas on mass separation is that it is a form of magnetic confinement, so it fits well within the Laboratory’s culture,” said physicist Nat Fisch, co-author of the paper and director of the Princeton University Program in Plasma Physics. “To be more precise, it is ‘differential magnetic confinement’ in that some species are confined while others are lost quickly, which is what makes it a high-throughput mass filter.”
How would a plasma-based mass filter system work? The method begins by atomizing and ionizing the hazardous waste and injecting it into the rotating filter so the individual elements can be influenced by electric and magnetic fields. The filter then separates the lighter elements from the heavier ones by using centrifugal and magnetic forces. The lighter elements are typically less radioactive than the heavier ones and often do not need to be vitrified. Processing of the high-level waste therefore would need fewer high-level glass canisters overall, while the less radioactive material could be immobilized in less costly wasteform (e.g., concrete, bitumen).
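To make the volume-reduction idea concrete, here is a minimal Python sketch. It is not PPPL's model: the cut-off mass, the element list, and the mass fractions are all illustrative assumptions. It only shows the bookkeeping of sending species below an assumed cut-off mass to a cheaper wasteform and species above it to glass.

```python
# Toy illustration (not PPPL's model): partition a waste stream by atomic mass
# at an assumed cut-off, mirroring the idea that a rotating plasma filter
# confines one mass group while the other is lost quickly.

CUTOFF_AMU = 60.0  # assumed cut-off; a real filter's value depends on field, rotation, geometry

# Hypothetical inventory: element -> (atomic mass in amu, mass fraction of the waste)
waste_inventory = {
    "Al": (26.98, 0.30),      # nonradioactive bulk chemicals (illustrative fractions)
    "Fe": (55.85, 0.25),
    "Na": (22.99, 0.20),
    "Cs-137": (136.9, 0.10),  # heavier, more radioactive species
    "Sr-90": (89.9, 0.10),
    "U-238": (238.05, 0.05),
}

def partition(inventory, cutoff):
    """Split the inventory into light (cheaper wasteform) and heavy (vitrify) groups."""
    light = {el: frac for el, (m, frac) in inventory.items() if m < cutoff}
    heavy = {el: frac for el, (m, frac) in inventory.items() if m >= cutoff}
    return light, heavy

light, heavy = partition(waste_inventory, CUTOFF_AMU)
print("To cheaper wasteform:", light, "-> fraction", round(sum(light.values()), 2))
print("To glass (vitrify):  ", heavy, "-> fraction", round(sum(heavy.values()), 2))
```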
For the full story: https://blogs.princeton.edu/research/2015/12/04/pppl-physicists-propose-new-plasma-based-method-to-treat-radioactive-waste-journal-of-hazardous-materials/ |
The Leach's petrel is a starling-sized seabird. These birds are all black underneath and mostly black above, apart from a white rump with a black line down it, and they have forked tails.
Leach's petrels breed on remote islands off the UK and feed out beyond the continental shelf. The species is specially protected by law, and it is important that its breeding colonies are protected from introduced predators such as cats and rats. It is also listed as a Schedule 1 species under the Wildlife and Countryside Act, which offers it additional protection.
They spend most of their time at sea, only approaching land to breed at night. Most British and Irish birds migrate in the winter to the tropics, although a few remain in the northern Atlantic.
What they eat:
Crustaceans, molluscs and small fish.
- UK breeding:
- 48,047 pairs |
Psychology is primarily concerned with understanding individual human behavior. In contrast, sociology is the scientific study of larger groups. These larger groups include families, organizations, societies, and cultures. Both sociologists and psychologists study the influence of these groups on individual behavior. Social and cultural forces can cause entire groups of people to be more vulnerable to addiction. If you are a member of a vulnerable group, then you have a greater risk for developing an addiction.
For our purposes, the term culture describes a group's learned and shared pattern of values and beliefs. These values and beliefs guide group members' behavior and their social interactions. Unlike skin color, hair color, or one's physical stature, we cannot readily observe culture. Some cultures have observable physical characteristics that become associated with that culture: people of Swedish descent often have blond hair, people of African descent typically have dark skin, and people of Asian descent often have almond-shaped eyes. Regardless of whether or not there are observable physical characteristics associated with a particular culture, everyone has a culture. There are specific cultures associated with families, gender, race, ethnicity, workplaces, etc.
Researchers have learned that certain groups of people are more vulnerable to addiction than other groups of people. However, this observation does not explain why these differences exist. Gambling addiction research is still in its infancy. We have far more data about the effect of culture on alcoholism. Let's use alcoholism to illustrate the impact of culture with respect to addiction.
By studying the differences between groups of people with higher rates of alcoholism, as compared to groups with lower rates of alcoholism, we can begin to uncover the cultural forces that make some groups more vulnerable to addiction. For example, the Native American people have higher rates of alcoholism as compared to Italian people. What are some of the differences between these two groups of people? The Native American people's land was invaded and stolen by conquerors. Their spiritual practices were completely disrespected. These devastating experiences radically altered the stabilizing forces of community, family, and spirituality. These events threatened their very survival. People of Irish, African, and Afro-Caribbean descent can trace similar destructive forces in their cultural history. In contrast, Italian Americans who immigrated to the United States retained a strong sense of family and religious faith. By and large, Italian immigrants formed stable, supportive communities.
Sometimes people have difficulty understanding how devastating historical events can still affect a group of people today. The answer lies in the way we transmit culture from one generation to the next: families. Now imagine a family history that includes the systematic oppression of the group to which that family belongs. Oppression can lead to feelings of hopelessness, fear, distrust, and despair. Parents who directly experienced this oppression communicate this sense of despair to their children. Someday those children will become parents and communicate these same things to their children and so on. Many generations later, we can observe the transmission of hopelessness and despair. Therefore, it will continue to affect family members today.
In such families, each generation of children learns the world is an unsafe place. This occurs even though in present times, this may no longer be true. They may have learned that opportunities for a good life belong to other people with the "right" skin, eyes, or hair. That child eventually grows up to be an adult believing these same things. In the section called, "What causes gambling addiction?" we mentioned that stress, poor coping skills, and negative expectations make people more vulnerable to addiction. Therefore, it is not difficult to imagine that entire groups of people that experience more stress, negative expectations, and coping challenges are more vulnerable to addictions. Certainly, this is true of oppressed groups.
With respect to gambling addiction, the advent of the Internet, social networking, and mobile devices make gambling opportunities available in an unprecedented way. These societal changes reflect another socio-cultural component of gambling addiction. In the United States, government-sanctioned lotteries are an example of socio-cultural forces that influence gambling addiction. You will recall that culture refers to a shared pattern of values and beliefs. Government-sanctioned lotteries are an indication that this culture does not view gambling as harmful.
An understanding of social and cultural forces helps to answer, "How do people get addicted?" However, individuals affected by cultural influences cannot readily change these influences. Nevertheless, we can interpret these cultural forces in helpful or unhelpful ways. Sometimes re-interpretation is our only recourse. As Shakespeare's Hamlet notes, "There is nothing either good or bad, but thinking makes it so." Or as Marcus Aurelius, the 2nd century CE Roman emperor stated, "The universe is change, life is what our thoughts make it." Although individuals can do very little to directly change cultural influences, knowledge and awareness of these forces strengthens recovery efforts. We have developed a guide for becoming more aware of these cultural influences.
Two other socio-cultural influences are key factors in recovery. These are families and social support. You can learn more about these and other socio-cultural causes of addiction in our topic center on Addiction. |
Last weekend, my daughter asked me how bees made honey, and I realized that I didn’t know the answer. How do bees make honey? I did some homework, and can now explain it to her – and to you.
Different honey bees have different jobs. Some of these bees are “forager” bees, which collect nectar from flowering plants. The foragers drink the nectar, and store it in their crop, which is also called the honey stomach. The crop is used solely for storage, and the bee does not digest the nectar at all.
The forager bee then takes the nectar back to the hive, regurgitating the nectar directly into the crop of a “processor” bee at or near the entrance to the hive.
While the forager heads back to the flowers for more nectar, the processor bee takes the nectar to the honeycomb, which tends to be near the top of the hive, and regurgitates it into a hexagonal wax cell. But now the nectar needs to ripen.
The processor bees add an enzyme called invertase every time they regurgitate their nectar (and it takes many loads of nectar to fill a cell). The nectar consists largely of sucrose (table sugar) and water. The invertase breaks the sucrose down into two simpler sugars: glucose (blood sugar) and fructose (fruit sugar).
By definition, honey contains less than 18.6 percent water, but water usually makes up approximately 70 percent of nectar. During the ripening process, the bees “dry out” the nectar. One of the ways they do this is by fanning their wings, which creates airflow around the honeycomb and helps water evaporate from the nectar.
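Those two figures give a rough sense of how much drying that is. The back-of-the-envelope mass balance below is my own estimate, not from the original article; it ignores the small mass change from sucrose hydrolysis and any nectar the bees consume themselves.

```latex
% Non-water (sugar) mass is roughly conserved while water evaporates:
\[
m_{\text{nectar}}\,(1 - 0.70) = m_{\text{honey}}\,(1 - 0.186)
\quad\Longrightarrow\quad
\frac{m_{\text{nectar}}}{m_{\text{honey}}} = \frac{0.814}{0.30} \approx 2.7
\]
```

So, under these assumptions, the hive needs to bring in roughly 2.7 units of nectar for every unit of finished honey.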
Read more at: Phys.org |
No. 119, English, Grade 9, L.M. Lapitskaya, N.V. Demchenko, A.V. Volkov
authors: L.M. Lapitskaya, N.V. Demchenko, A.V. Volkov
English, Grade 9, Lapitskaya, page 119
B. The simplest explanation of weather is that it is the condition of the atmosphere at a definite time and place on the Earth. It can be hot or cold, wet or dry, calm or stormy, clear or cloudy. This condition is influenced by a number of atmospheric factors, such as air pressure, temperature, humidity, and precipitation.
C. Temperature is how hot or cold something is, for example the atmosphere or the sea. Temperature is measured [ˈmeʒəd] in degrees Celsius (centigrade) or Fahrenheit.
F. Atmospheric pressure is the weight1 of air on the Earth's surface. Pressure is shown on a weather map with lines called isobars. Warm air is lighter than cold air, cold air is heavier than warm air. Low pressure near the Earth's surface occurs when air is warm and rises. High pressure occurs when air becomes colder and falls.
G. When the sun shines, the Earth's surface2 is heated. The sun heats the Earth unevenly and influences the atmosphere. Warm and cool air move and change air pressure. The movement of air around the Earth from high pressure to low pressure, of the cold and warm air brings about winds.
H. The sun also heats the water which is on the Earth in rivers, lakes, seas, oceans, and in the upper layer of the ground.
That means that the air contains water in the form of vapour. Humidity is the amount of water vapour in the atmosphere. Humidity is measured as a percentage. 100% humidity is the point where the air can hold no more water vapour.
D. Precipitation is the term given to moisture that falls from the air to the ground. The most common forms of precipitation are rain, snow, hail, sleet, drizzle, fog and mist.
E. The air around us is never completely dry. It is usually moist. |
Indigenous peoples, also referred to as first peoples, First Nations, Aboriginal peoples, Native peoples, Indigenous natives, or Autochthonous peoples, are culturally distinct ethnic groups who are directly descended from the earliest known inhabitants of a particular geographic region and who, to some extent, maintain the language and culture of those original peoples.
The term Indigenous was first used in its modern context by Europeans, who used it to differentiate the Indigenous peoples of the Americas from European settlers and from the Africans who were brought to the Americas through enslavement or who immigrated as free people. Indigenous societies range from those that have been significantly exposed to the colonizing or expansionary activities of other societies to those that as yet remain in comparative isolation from any external influence.
The tribes of Assam are divided into two groups: Scheduled Tribes (Hills) and Scheduled Tribes (Plains). Since hill tribes living in the plains and plains tribes living in the hills in large numbers are not recognised as scheduled tribes in the respective places, the census data may not reflect the correct figures.
The Assam Tribune has claimed that if these categories of tribes were counted, the actual tribal population would be considerably higher. The Assamese language is used as the lingua franca by almost all the tribes. Various other indigenous communities of Assam, such as the Ahoms, Morans, Motak, Keot (Kaibarta), Sutiya, Nath, and Koch Rajbongshi, were originally tribes but later attained non-tribal status through proselytization. Several tribal groups have arrived in Assam from diverse directions, as the territory was linked to several states and many different countries. Austro-Asiatics, Tibeto-Burmans, and Indo-Aryans were the most important groups that arrived and settled in ancient Assam.
They were considered the first inhabitants of Assam, and today they are essential elements of the "Assamese Diaspora".
Assamese Diaspora: A diaspora is a scattered population whose origin lies in a separate geographic locale. Historically, the word diaspora was used to refer to the mass dispersion of a population from its indigenous territories. In the context of Assam, it is acknowledged as the settling land for many cultures. Several tribal groups have arrived in Assam from diverse directions, as the territory was linked to many states and many different countries. Austro-Asiatics, Tibeto-Burmans, and Indo-Aryans were the most important groups that arrived and settled in ancient Assam. They were considered the first inhabitants of Assam.
The greater Bodo-Kachari group, encompassing 19 major tribes of both the plains and the hills, forms a major part of Assam and was historically its dominant group. Later the Tai Ahoms rose to dominance, and it was with this ethnic group, along with Upper Assam Bodo-Kachari groups like the Chutias, Morans and Borahis, that the term "Assamese" came to be associated. Along with the Tai Ahoms, other prominent groups ruled the Assam valley during the medieval period: those belonging to the Chutia, Koch, and Dimasa communities.
The first group ruled from 1187 to 1673 in the eastern part of the state, the second group ruled Lower Assam from 1515 to 1949, while the third group ruled the southern part of Assam from the 13th century to 1854. The Bodo tribe, also known as Boro, are the dominant group in Bodoland. They speak the Bodo language, which is one of the 22 constitutional languages of India.
Most of the indigenous Assamese communities today have been historically tribal, and even the population of Assam now considered non-tribal consists of tribes that were slowly converted into castes through Sanskritisation. Assam has always been a historically tribal state. The Ahoms, along with the Chutia, Moran, Motok, and Koch, are still regarded as semi-tribal groups who have nominally converted to Ekasarana Dharma while keeping alive their tribal traditions and customs.
As per the latest developments, indigenous ethnicities such as the Moran, Chutia, Motok, Tai Ahom and Koch, along with non-indigenous ethnic groups such as the Tea tribes, have recognised the points mentioned above and have applied for Scheduled Tribe (ST) status. This would make Assam a predominantly tribal state, with wider geopolitical ramifications.
We are sharing our rich heritage through this platform, starting with the cultural preservation efforts by voices of indigenous communities themselves. |
What is phage transduction?
Bacteriophage transduction is the process by which a bacteriophage shuttles or transfers bacterial genes from one bacterial cell to another. The cell whose genome segment is being shuttled is known as the donor cell, and the cell receiving it is known as the recipient cell. The transducing phage is the bacteriophage that transfers the bacterial segment. The transduction can be generalized or specialized. Among the genes that bacteria can acquire from phages via transduction are virulence and antibiotic resistance genes.
How was transduction discovered?
In 1951, Joshua Lederberg and Norton Zinder were testing for recombination in the bacterium Salmonella typhimurium by using the techniques that had been successful with E. coli. The researchers used two different strains: one was phe− trp− tyr−, and the other was met− his−. No wild-type cells were observed when either strain was plated on a minimal medium. However, after the two strains were mixed, wild-type cells appeared at a frequency of about 1 in 10⁵. Thus far, the situation seems similar to recombination in E. coli.
However, in this case, the researchers also recovered recombinants from a U-tube experiment, in which cell contact (conjugation) was prevented by a filter separating the two arms. By varying the size of the pores in the filter, they found that the agent responsible for recombination was about the size of the virus P22, a known temperate phage of Salmonella. Further studies supported the suggestion that the vector of recombination is indeed P22. The filterable agent and P22 are identical in size properties, sensitivity to antiserum, and immunity to hydrolytic enzymes. Thus, Lederberg and Zinder, instead of confirming conjugation in Salmonella, discovered a new type of gene transfer mediated by a virus. They called this process transduction. In the lytic cycle, some virus particles pick up bacterial genes that are transferred to another host, where the virus inserts its contents. Transduction has subsequently been shown to be quite common among both temperate and virulent phages.
Types of phage transduction
There are two kinds of transduction: generalized and specialized.
Generalized transducing phages can carry any part of the chromosome, whereas specialized transducing phages carry only restricted parts of the bacterial chromosome.
In generalized transduction, the bacterial chromosome is broken into small pieces when a donor cell is lysed by bacteriophage virions. Occasionally, the forming phage particles mistakenly incorporate a portion of the bacterial DNA into a phage head in place of phage DNA. This event is the origin of the transducing phage.
Transducing phages can bind to a bacterial cell and inject their contents, which now happen to be bacterial donor genes. When a transducing phage injects its contents into a recipient cell, a merodiploid situation is created in which the transduced bacterial genes can be incorporated by recombination.
Specialized transduction occurs during the lysogenic cycle. The recombination between phage regions and the bacterial chromosome is catalyzed by a specific enzyme system, which typically ensures that the phage integrates at the same point in the chromosome. When the lytic cycle is induced (for instance, by ultraviolet light), this system normally ensures that the prophage excises at precisely the correct point, producing a normal circular phage chromosome. Very rarely, excision is abnormal and can result in phage particles that carry a nearby bacterial gene and leave behind some phage genes.
Lambda is an excellent example of a specialized transducing phage. As a prophage, λ always inserts between the gal region and the bio region of the host chromosome. In transduction experiments, λ can transduce only the gal and bio genes. In λ, the nearby genes are gal on one side and bio on the other. The resulting particles are defective due to the genes left behind and are referred to as λdgal (λ-defective gal) or λdbio. These defective particles carrying nearby genes can be packaged into phage heads and infect other bacteria. In the presence of a second, normal phage particle in a double infection, the λdgal can integrate into the chromosome at the λ-attachment site. In this manner, the gal genes are transduced into the second host. Because this transduction mechanism is limited to genes near the original integrated prophage, it is called specialized transduction.
How is generalized different from specialized?
Generalized transduction occurs during the lytic cycle of a virulent bacteriophage, when the bacteriophage has the opportunity to acquire a host bacterial gene without becoming a prophage, whereas specialized transduction occurs during the lysogenic cycle of a temperate bacteriophage, when the bacteriophage has been integrated into the host genome as a prophage. In all types of transduction, the transducing phage contains fragments of the host chromosome. During the next infection, DNA from the previous host recombines with the new host chromosome.
Transduction occurs when newly forming phages acquire host genes and transfer them to other bacterial cells. Generalized transduction can transfer any host gene. It appears when phage packaging accidentally incorporates bacterial DNA instead of phage DNA. Specialized transduction is due to faulty separation of the prophage from the bacterial chromosome, so the new phage includes both phage and bacterial genes. The transducing phage can transfer only specific host genes.
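To make the contrast concrete, here is a minimal Monte Carlo sketch in Python. It is purely illustrative: the genome size, fragment length, attachment-site position and marker positions are invented assumptions rather than measured values. It shows that under generalized transduction any marker can be packaged, though only at a low frequency, whereas under specialized transduction only markers flanking the attachment site can ever be carried.

```python
import random

# Purely illustrative Monte Carlo sketch (not a biological model): it contrasts
# generalized transduction, where a random fragment of the donor chromosome is
# packaged, with specialized transduction, where only DNA flanking the prophage
# attachment site can be picked up by faulty excision. Genome size, fragment
# length, attachment-site position and marker positions are invented assumptions.

GENOME = 4000          # donor chromosome modelled as 4000 gene-sized units
FRAGMENT = 40          # ~1% of the chromosome fits into a phage head
ATT_SITE = 2000        # hypothetical prophage integration point
TRIALS = 100_000

def generalized_hit(gene):
    """A random contiguous fragment is packaged; did it include `gene`?"""
    start = random.randrange(GENOME)
    return start <= gene < start + FRAGMENT   # wrap-around ignored for simplicity

def specialized_hit(gene):
    """Faulty excision captures only DNA immediately flanking the att site."""
    return ATT_SITE - FRAGMENT <= gene < ATT_SITE + FRAGMENT

for gene in (100, 1990, 3500):                # three arbitrary marker positions
    freq = sum(generalized_hit(gene) for _ in range(TRIALS)) / TRIALS
    print(f"marker at {gene}: generalized ~{freq:.2%} of particles, "
          f"specialized possible: {specialized_hit(gene)}")
```

Running it prints a packaging frequency of roughly 1% for every marker under the generalized model, while the specialized model can only ever transfer the marker placed next to the attachment site.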
what is unique about transduction compared to normal bacteriophage infection?
During transduction, the bacteriophage does not simply lyse the infected cell; it also transfers bacterial DNA from one cell’s chromosome to another. The bacteriophage transports fragments of the bacterial genome during transduction, unlike a normal phage infection, which involves cell lysis but no transfer of genetic material from one bacterium to another. |
Genetic engineering is a rapidly advancing field that has the potential to revolutionize various aspects of our lives, from agriculture to medicine. However, with this power comes great responsibility, as genetic engineering raises a host of ethical concerns. The ability to manipulate the genetic makeup of living organisms raises questions about the boundaries of nature, the potential consequences of altering the genetic code, and the impact on society as a whole. In this article, we will explore the ethical implications of genetic engineering and delve into the various considerations that must be taken into account when engaging in this field.
- Genetic engineering is a complex and controversial field that raises ethical concerns.
- Understanding the science behind genetic engineering is crucial to making informed decisions about its benefits and risks.
- While genetic engineering has the potential to bring about significant benefits, it also poses risks to society and the environment.
- Ethical considerations, such as informed consent and social justice, must be taken into account when engaging in genetic engineering.
- Government regulation is necessary to ensure that genetic engineering is practiced ethically and responsibly.
Understanding the Science of Genetic Engineering
Genetic engineering is the process of manipulating an organism’s DNA to achieve desired traits or outcomes. This is done through various techniques such as gene splicing, gene editing, and gene transfer. Gene splicing involves cutting and recombining DNA from different sources to create a new genetic sequence. Gene editing, on the other hand, involves making precise changes to an organism’s DNA using tools like CRISPR-Cas9. Gene transfer refers to the process of introducing foreign genes into an organism’s genome.
Examples of genetic engineering can be found in agriculture, medicine, and biotechnology. In agriculture, genetically modified crops have been developed to enhance traits such as pest resistance, drought tolerance, and increased yield. These genetically modified organisms (GMOs) have sparked debates about their safety and potential impact on the environment. In medicine, genetic engineering has led to breakthroughs in gene therapy, where faulty genes are replaced or repaired to treat genetic disorders. Biotechnology also benefits from genetic engineering by producing enzymes, proteins, and other valuable products through genetically modified organisms.
The Benefits and Risks of Genetic Engineering
The potential benefits of genetic engineering are vast and varied. In agriculture, genetically modified crops have the potential to increase food production and reduce reliance on pesticides and herbicides. This can lead to improved food security and reduced environmental impact. In medicine, genetic engineering offers the promise of personalized medicine, where treatments can be tailored to an individual’s genetic makeup. This can lead to more effective and targeted therapies for various diseases.
However, genetic engineering also carries risks that must be carefully considered. One concern is the unintended consequences of altering an organism’s genetic code. There is a possibility that genetic modifications could have unforeseen effects on the organism’s health or the environment. Another concern is the potential for genetic engineering to exacerbate existing social inequalities. If genetic enhancements become available, there is a risk that only the wealthy will have access to them, further widening the gap between the haves and have-nots.
The Impact of Genetic Engineering on Society
Impact of Genetic Engineering on Society
- Increased Agricultural Productivity: genetically modified crops have increased yields by 22% on average (source: ISAAA)
- Genetic engineering has led to the development of insulin for diabetes, growth hormone for children with growth disorders, and clotting factors for hemophilia (source: NIH)
- 58% of Americans believe that genetically modified foods are unsafe to eat (source: Pew Research Center)
- Genetically modified crops have reduced pesticide use by 37% and herbicide use by 22% (source: ISAAA)
- Intellectual Property Rights: as of 2019, there were over 12,000 patents related to genetic engineering (source: USPTO)
The social and cultural impact of genetic engineering cannot be underestimated. Genetic engineering has the potential to reshape our understanding of what it means to be human, as well as our relationship with nature. It raises questions about the boundaries between species and the moral status of genetically modified organisms. Additionally, genetic engineering has the potential to challenge traditional notions of family and kinship, as it allows for the manipulation of inherited traits.
Furthermore, genetic engineering has the potential to disrupt social norms and values. For example, if genetic enhancements become widely available, it could lead to a society where certain traits are valued more than others, potentially leading to discrimination and inequality. Additionally, the ability to select for certain traits in offspring raises ethical questions about eugenics and the potential for a slippery slope towards designer babies.
Ethical Considerations in Genetic Engineering
Ethical considerations play a crucial role in guiding the practice of genetic engineering. One key consideration is the principle of beneficence, which requires that any actions taken in genetic engineering should aim to maximize benefits while minimizing harm. This means that any potential risks or unintended consequences should be carefully evaluated before proceeding with genetic modifications.
Another important ethical consideration is the principle of autonomy, which emphasizes the importance of individual choice and consent. In the context of genetic engineering, this means that individuals should have the right to make informed decisions about whether to undergo genetic testing or genetic modifications. Informed consent is crucial to ensure that individuals are fully aware of the potential risks and benefits of genetic engineering and can make decisions that align with their values and beliefs.
The Role of Government in Regulating Genetic Engineering
The role of government in regulating genetic engineering is a complex and contentious issue. On one hand, government regulation is necessary to ensure the safety and ethical practice of genetic engineering. Regulations can help prevent unethical practices, protect public health and safety, and ensure that genetic engineering is conducted in a responsible manner.
On the other hand, government regulation can also stifle innovation and impede scientific progress. Excessive regulation can create barriers to entry for small startups and limit the potential benefits that genetic engineering can bring. Striking the right balance between regulation and innovation is crucial to ensure that genetic engineering can be conducted in a way that maximizes benefits while minimizing risks.
The Importance of Informed Consent in Genetic Engineering
Informed consent is a fundamental ethical principle in any medical or scientific practice, and genetic engineering is no exception. Informed consent requires that individuals be fully informed about the potential risks and benefits of any procedures or treatments they undergo, and that they have the capacity to make autonomous decisions based on this information.
In the context of genetic engineering, informed consent is particularly important due to the potential long-term consequences of genetic modifications. Individuals should have the right to know what changes are being made to their genetic code, as well as the potential risks and benefits associated with these modifications. Additionally, informed consent should also extend to future generations, as genetic modifications can be heritable.
The Ethics of Gene Editing and Human Enhancement
Gene editing, particularly using the CRISPR-Cas9 system, has opened up new possibilities for genetic engineering. While gene editing holds great promise for treating genetic disorders and improving human health, it also raises ethical concerns. One of the main ethical concerns is the potential for gene editing to be used for non-therapeutic purposes, such as enhancing certain traits or creating “designer babies.”
The concept of human enhancement raises questions about what it means to be human and what traits are desirable. There is a risk that genetic enhancements could lead to a society where certain traits are valued more than others, potentially leading to discrimination and inequality. Additionally, there are concerns about the potential unintended consequences of gene editing, as well as the potential for irreversible changes to the human genome.
The Implications of Genetic Engineering on Social Justice
The implications of genetic engineering on social justice are complex and multifaceted. On one hand, genetic engineering has the potential to exacerbate existing social inequalities. If genetic enhancements become available, there is a risk that only the wealthy will have access to them, further widening the gap between the haves and have-nots. This could lead to a society where certain individuals are genetically privileged, while others are left behind.
On the other hand, genetic engineering also has the potential to address social injustices. For example, genetic engineering could be used to treat genetic disorders that disproportionately affect marginalized communities. Additionally, genetic engineering could also be used to improve crop yields and address food insecurity in developing countries. The key is ensuring that genetic engineering is conducted in an equitable and responsible manner, with a focus on addressing social inequalities rather than exacerbating them.
Practicing Ethical Mindfulness in the Age of Genetic Engineering
In conclusion, genetic engineering holds great promise for improving various aspects of our lives, from agriculture to medicine. However, with this power comes great responsibility. It is crucial that we approach genetic engineering with ethical mindfulness, carefully considering the potential benefits and risks, as well as the social and cultural implications.
Practicing ethical mindfulness in the age of genetic engineering requires us to engage in open and transparent discussions about the ethical implications of genetic engineering. It also requires us to prioritize the principles of beneficence, autonomy, and informed consent in our decision-making processes. By doing so, we can ensure that genetic engineering is conducted in a responsible and ethical manner, maximizing benefits while minimizing risks.
If you’re interested in exploring the intersection of mindfulness and ethical decision-making, you might also enjoy reading about the power of presence in building deeper connections in your relationships. This article delves into the importance of being fully present with your loved ones and offers practical tips for cultivating a more mindful approach to your interactions. Check it out here: The Power of Presence: Building Deeper Connections in Your Relationship.
What is genetic engineering?
Genetic engineering is the process of manipulating the DNA of an organism to change its characteristics or traits.
What are the potential benefits of genetic engineering?
Genetic engineering has the potential to improve crop yields, create new medical treatments, and eradicate genetic diseases.
What are the potential risks of genetic engineering?
The potential risks of genetic engineering include unintended consequences, such as the creation of new diseases or the spread of genetically modified organisms into the environment.
What is ethical mindfulness?
Ethical mindfulness is the practice of being aware of the ethical implications of one’s actions and decisions.
Why is ethical mindfulness important in the context of genetic engineering?
Ethical mindfulness is important in the context of genetic engineering because it helps ensure that the potential benefits of genetic engineering are balanced against the potential risks and ethical considerations.
What are some ethical considerations related to genetic engineering?
Some ethical considerations related to genetic engineering include the potential for unintended consequences, the impact on biodiversity, and the potential for discrimination based on genetic traits.
How can ethical mindfulness be practiced in the context of genetic engineering?
Ethical mindfulness can be practiced in the context of genetic engineering by considering the potential risks and benefits of genetic engineering, consulting with experts and stakeholders, and engaging in open and transparent communication about the ethical implications of genetic engineering. |
Clinical trials are research studies on human participants. They are conducted to collect data regarding the safety and the efficacy of a new drug or a medical device. Drug and device testing begins with extensive laboratory research, which can involve years of experiments in animals and human cells. Only after enough data have been collected about the success of the initial laboratory tests can the investigators continue with the enrollment of volunteers and/or patients into small pilot studies, and subsequently conduct progressively larger-scale comparative studies.
Each and every clinical trial must receive regulatory-body and ethics-committee approval based on its risk/benefit ratio. Once approved, the trial is typically conducted in four phases, each considered a separate trial. Clinical trials may take place in one or in multiple centers (even in different countries). Clinical study design aims to ensure the scientific validity and reproducibility of the results.
Phase I studies assess the safety of a drug or device and usually include a small number of healthy volunteers, who are generally paid for participating in the study. The aim of the study is to determine the effects of the drug or device on humans, including how it is absorbed, metabolized, and excreted. It also investigates the side effects that may occur. About 70% of experimental drugs pass this phase of testing.
Phase II studies test the efficacy of a drug or device in a larger group of people, which makes it possible to discover less common side effects. Most Phase II studies are randomized trials in which one group of patients receives the experimental drug while a second “control” group receives a standard treatment or placebo. Often these studies are “blinded”: neither the patients nor the researchers know who has received the experimental drug. About 30% of experimental drugs successfully complete both Phase I and Phase II studies.
Phase III studies involve randomized and blinded testing in an even larger group of people than Phase II and can last several years. The aim is a deeper understanding of the effectiveness of the drug or device, its benefits, and the range of possible adverse reactions. 70% to 90% of drugs that enter Phase III studies successfully complete this phase of testing. Once Phase III is complete, a pharmaceutical company can request approval to market the drug.
Phase IV studies, or Post-Marketing Surveillance Trials, provide additional information, including the treatment’s risks, benefits, and optimal use. This phase is ongoing during the drug’s lifetime of active medical use.
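Taken together, the approximate pass rates quoted above imply a rough cumulative success rate, sketched below in Python. The numbers are the rounded figures from this page, not precise industry statistics, so treat the result as an order-of-magnitude estimate only.

```python
# Rough cumulative estimate using the approximate pass rates quoted above
# (illustrative only; real attrition rates vary widely by therapeutic area).
completes_phase_1_and_2 = 0.30            # ~30% complete both Phase I and II
completes_phase_3 = (0.70, 0.90)          # ~70-90% of Phase III entrants succeed

low = completes_phase_1_and_2 * completes_phase_3[0]
high = completes_phase_1_and_2 * completes_phase_3[1]
print(f"Of drugs entering Phase I, roughly {low:.0%}-{high:.0%} "
      "complete Phase III and can be submitted for marketing approval.")
```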
As pointed out above, in some cases clinical trials may involve healthy volunteers with no pre-existing medical conditions, whereas in others they involve patients with specific health conditions who are willing to try an experimental treatment. The sponsor of a clinical trial may be a governmental organization or a pharmaceutical, biotechnology, or medical device company.
Certain functions, such as monitoring, management etc., may be managed by an outsourced partner, such as a contract research organization like IMR. |
The Pacific Ocean is the largest of the Earth's oceanic divisions. It extends from the Arctic in the north to the Southern Ocean (or, depending on definition, to Antarctica) in the south, bounded by Asia and Australia in the west, and the Americas in the east.
At 165.2 million square kilometres (63.8 million square miles) in area, this largest division of the World Ocean – and, in turn, the hydrosphere – covers about 46% of the Earth's water surface and about one-third of its total surface area, making it larger than all of the Earth's land area combined. The equator subdivides it into the North Pacific Ocean and South Pacific Ocean, with two exceptions: the Galápagos and Gilbert Islands, while straddling the equator, are deemed wholly within the South Pacific. The Mariana Trench in the western North Pacific is the deepest point in the world, reaching a depth of 10,911 metres (35,797 ft).
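As a quick sanity check on those proportions, the short Python sketch below compares the quoted area with commonly cited round figures for the Earth's water and total surface areas; both reference values are assumptions taken from standard approximations rather than from this text.

```python
# Quick arithmetic check of the proportions quoted above, using commonly cited
# round figures for the Earth's surface areas (assumed reference values).
pacific_km2 = 165.2e6        # area quoted in the text
earth_water_km2 = 361.1e6    # approximate total ocean surface area
earth_total_km2 = 510.1e6    # approximate total surface area of the Earth

print(f"Share of the water surface: {pacific_km2 / earth_water_km2:.1%}")  # ~45.7%
print(f"Share of the total surface: {pacific_km2 / earth_total_km2:.1%}")  # ~32.4%
```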
Pacific Ocean in The Case of the Toxic Spell Dump
Pacific Ocean in Chicxulub Asteroid Missed
The southern part of the Peaceful Ocean was home to pale greenskins.
Pacific Ocean in Days of Infamy
At the time of the outbreak of World War II, the Pacific Ocean was effectively divided up among the United Kingdom, France, the Soviet Union, the Netherlands, Japan and the United States. The ocean was very quiet as World War II raged across the globe, but storm clouds were brewing as Japan started becoming more aggressive within China. When the US imposed an oil embargo upon Japan in response to its actions in China, Japan opted to go to war with the U.S.. The Japanese understood that the US base at Hawaii would allow them to control nearly half the ocean. A bold and daring plan was hatched to not only destroy the US Pacific Fleet, but to conquer Hawaii, in order to extend their control across the entire Pacific.
On December 7, 1941, Japan initiated the war with a strike at the islands that not only crippled the US Pacific Fleet, but allowed them to take control of the islands in three months. With the US Pacific Fleet in no position to fight, they were forced back to the American west coast, while Japan was able to extend its conquests across the entire Pacific. Within six months, Japan was able to spread south into the Pacific and South East Asia, defeating all of the European powers and taking their colonies and extending their empire from Hawaii, to Australia, and India, effectively bringing the entire ocean under their control.
In late June of that year, the US attempted to regain control of the islands but was defeated, forcing it back to the west coast once more. With no enemies left in the Pacific, the Japanese High Command attempted to hammer out a strategy, but were unable to agree on what to do. It was eventually decided to go on the defensive in the Pacific, wearing down the United States' resolve until it quit the war. Unbeknownst to the Japanese, the sneak attack on Pearl Harbor, along with the dual defeats at the hands of the Japanese, had hardened the Americans' resolve to fight on. Japan soon discovered that holding its conquest proved harder than taking it, as the US Navy began to slowly eat away at Japan's shipping lanes with its submarine fleet, before finally retaking Hawaii in mid-1943.
Pacific Ocean in In the Presence of Mine Enemies
Pacific Ocean in Southern Victory
The Pacific Ocean was the largest ocean in the world. It was divided up among many powers: the United Kingdom, France, Russia, Germany, the Netherlands, Spain and the United States. Although the US had interests in the ocean, the loss of the War of Secession and the following Economic Crash of 1863 put a hold on any expansion into the Pacific. Having Pacific ports was the one major advantage the US had over the Confederate States, and they were more than aware of this. In 1881, the CSA attempted to rectify this problem by purchasing the provinces of Sonora and Chihuahua from the bankrupt Empire of Mexico. Realizing that this would give the Confederates access to the Pacific, the US threatened war should this sale go ahead. When the Second Mexican War was declared, the US was too weak in the Pacific to be much of a threat to anyone, let alone the Confederates. This allowed the British Pacific Squadron, based at Pearl Harbor in the Sandwich Islands, not only to defeat the US Pacific Squadron but also to blockade the US West coast. After the war's conclusion, both the US and CS began building up their strength in the Pacific, but the CS Navy put little effort into this as it relied more on the British.
By the 20th Century, a new power had emerged in the Pacific. Japan had defeated Spain in the Hispano-Japanese War for its Pacific colonies, throwing them out of the ocean and creating their own empire. When the Great War began in 1914, the US Navy truly entered the Pacific by capturing the Sandwich Islands during the opening weeks of the war. With this success, the US Navy was able to take the offensive in the ocean against both the UK and the Japanese. However, for the rest of the war, the Pacific was only a series of cat-and-mouse actions with one major engagement, the Battle of the Three Navies, ending inconclusively in 1916. Despite holding Hawaii and keeping it, the US was never able to cut the Pacific life line to Canada, as both the English, Canadians, and Japanese were able to bottle up the Seattle Squadron of the US Navy. Later that same year, the South Pacific saw action as both the Chileans and the Argentinians clashed. When the fighting started turning in favor of Argentina, the US detached a small squadron and sent it south to aid their Chilean allies, and as 1917 began, the combined fleet then took the offensive into the South Atlantic.
When the war came to an end, all Entente powers in the Pacific were devastated with the exception of the UK and Japan. In the economic crisis that followed the peace, Japan was able to secure control of Indochina from the French, and the Dutch East Indies from the Netherlands. However, the British Empire was still strong enough to contest Japan from taking their possessions, as the empire's position in Australia was still strong, while Russia was ignored. However, the major contesters in the Pacific were just Japan and the United States. Relations between both powers were strained until in 1932, war broke out between the two nations. In spite of the Japanese Navy attacking Los Angeles and a successful counter attack at Pearl Harbor, the war degenerated into a stalemate until 1934, when both sides agreed to end the war, resulting in the two powers returning to the status quo. As the world began to march towards war, it was feared that the Japanese could return in force, while the British Navy could operate submersibles out of Australia, cutting supply lines between the Sandwich Islands and the mainland.
In 1941, the Second Great War began and the Pacific yet again became a battleground as Japan declared war on the US, decisively defeating the US Pacific Navy and taking Wake Island and Midway, putting the main Hawaiian Islands under air attack while Japanese submarines prowled the US West Coast. However, the Japanese did not press home their advantage, as they were heavily engaged in a war with China. This allowed the US Navy to rebuild its strength and, in late 1942, strike at the Japanese guarding Midway, defeating them. With this loss, Japan abandoned the war against the US to concentrate on the British Empire in Asia. Fighting a losing war in Europe, the British soon lost Malaya. The US Navy then concentrated on Mexico and the Confederate States, swinging south and occupying Baja California, cutting off the CSA from the Pacific. In 1944, Japan turned her sights on Russia, threatening war over Siberia.
As the dust from the war settled, the US began to reorganize their priorities back to the Pacific as Japan emerged as the newest threat, having defeated all their old rivals.
Pacific Ocean in The War That Came Early
The Pacific was quiet when the Second World War erupted in Europe in 1938, but was still a powder keg ready to explode due to tensions between the Japanese, the Europeans and the US. In January of 1941, the Japanese opened the war in the Pacific after the US cut off oil supplies. Forced to make a grab for the Dutch East Indies, the Japanese proved virtually unstoppable, conquering a large swath of the Pacific, from Wake Island in the east, to Burma in the west and the Lesser Sunda Islands in the south.
Pacific Ocean in Worldwar
The Pacific Ocean was the largest ocean in the world. It was divided up among many powers: the United Kingdom, France, Russia, the Netherlands, Japan and the United States. On December 7, 1941, Japan brought war to the ocean when it attacked the United States at Pearl Harbor, before striking south, pushing all the way to Australia and India and forcing all European powers out. As both the Japanese and the US prepared to strike at Midway, the Race landed, ending the war in the Pacific, although the Japanese were able to take Midway during the confusion. During the fight against the Race, the ocean was largely ignored by the Race until they realized that the human powers were using the oceans to ship vital supplies to each other. In retaliation for the atomic bomb that destroyed the Race-held city of Miami, Fleetlord Atvar decided to bomb Pearl Harbor, as he recognized its importance as a vital supply lane in the Pacific.
During the peace that followed, the Pacific was divided up between Japan, the United States, and the Race. The Soviet Union had access to the ocean through the major port city of Vladivostok, but were largely ignored in this region. Free France, which had lost nearly all of its empire to Germany, the Race and Japan, found itself reduced to the South Pacific islands of French Polynesia, with their headquarters in Tahiti. Although the weakest power in the Pacific, Free France was left alone as all sides saw the region as Neutral Ground. The Japanese meanwhile had leaned more towards the United States, while US did not pursue the return of any of its lost possessions. The Race didn't even bother with the ocean, content to rule Asia, Australia and South America. In 1965, Japan at last detonated its own atomic bomb at the Bikini Atoll. With this achievement, Japan demanded and was granted full diplomatic relations with the Race, the same as were afforded the other major nuclear powers. As a result of this and retaining a large empire, Japan was able to re-emerge as a great power in the late 20th century within the Pacific.
- In the Presence of Mine Enemies, pg. 26. |
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. |
1. Discussing and imagining the past
Earlier sections in this module considered how living things are adapted to survive in their environment. The great human adaptive advantage, developed here in Africa, is the ability to think of and make tools to cope with changing environments and to learn new things. For example, the earliest evidence of learning how to make and use fire is found in South Africa. (See Resource 1: The ‘Out of Africa’ theory of human origins for further information about early humans.)
In Case Study 1, a teacher uses artefacts of human life from thousands of years ago, found on a local sand dune, to develop attitudes of respect for what early humans could do. This is one way of starting this topic with your pupils; you could also use some of the background materials in Resource 1. Make sure you give a purpose to this activity; ask pupils to find one idea that is new to them or to summarise the main ideas in a way suitable for younger pupils, perhaps with some pictures.
In Activity 1, you lead your pupils through thoughtful discussion that will encourage them to seek more evidence from a range of different sources.
Case Study 1: Inspiring pupils with shards, stones and bones
Alan is a teacher who grew up spending holidays on the coast of South Africa. Here, when the wind blows away sand in the dunes, it uncovers places long hidden. You can find broken parts of very ancient pottery and marvel at how it was made and decorated. You can find parts of stone that have been chipped and shaped to make tools for cutting, hammering and even grinding. There are also bits of bone that show evidence of having been shaped into awls (pointed tools) for piercing leather, or cut into tubes as beads.
Sometimes Alan takes his pupils there. When pupils hold these things and imagine people thousands of years ago, and the time and trouble they took to make these tools, he can see the sense of wonder in their faces.
For more details on looking at artefacts see Resource 2: Interrogating artefacts.
Activity 1: Imagining the deep human past
First read Resource 3: History of technology to yourself to give you some ideas about early technologies.
Now, sit your pupils around you. Ask them to close their eyes and imagine themselves back in the very distant past. They are a family of hunter-gatherers, living off the land, making their own tools and seeing to their own needs for survival. Tell them to keep their eyes shut and to hold the answers in their heads to the questions you ask (later you will talk about the answers).
Ask them to imagine themselves waking. Where did they wake up? What kept them warm and safe in the night? What are they wearing? Who made it for them and how? What do they eat and drink? How is it prepared and stored? Take them briefly through the probable activities of the day. Focus on the tools, implements and other objects used.
Record your subsequent discussion in the form of a mind map titled ‘The earliest technologies for a good life’. (See Key Resource: Using mind maps and brainstorming to explore ideas.) |
There are two ways to measure temperature and humidity in your garden: using individual instruments or using a weather station.
Temperature can be measured with a simple glass thermometer (below left, £3-5) filled with alcohol, which expands up a thin tube when the temperature increases. (Older thermometers may use mercury). Use the Celsius (C) scale, and estimate temperature to the nearest degree, or half degree if it is in between.
Digital thermometers (above right, £5) use electronics to measure temperature and are easier to read. The probe at the end of the cable that senses the temperature could be put outside in the shade with the readout in a garage, for example. They usually show temperature to a tenth of a degree, for example 28.9ºC, so enter this number in your report. (NB: When using digital thermometers (or indeed digital instruments of any type) remember that, although the display may have a precision of 0.1°C, its accuracy is likely to be much poorer than that – maybe a degree or two).
Instruments for measuring humidity are known as hygrometers. We will be measuring relative humidity in percent (%). A dial hygrometer (above left, around £10) uses hair, which expands when the atmosphere is moist. An electronic hygrometer uses electronics and has a clear display – devices which read both temperature and humidity are popular. Some of them (above right, £16-20) transmit data from outside sensors to a display indoors.
Humidity can also be measured with a wet and dry bulb hygrometer. This gives a more accurate reading, but involves the use of tables, so is a lot more complicated. Instructions will come with the hygrometer.
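For readers who would rather use a formula than the printed tables, the Python sketch below shows one common way to convert wet- and dry-bulb readings into relative humidity, using the Magnus approximation for saturation vapour pressure and the standard psychrometer equation. The psychrometer coefficient and air pressure used here are assumed values for an aspirated instrument near sea level, so the tables supplied with your hygrometer remain the authoritative reference.

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Magnus approximation; result in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def relative_humidity(dry_bulb, wet_bulb, pressure_hpa=1013.25, coeff=6.62e-4):
    """Psychrometer equation: actual vapour pressure from the wet-bulb
    depression, then RH as a percentage of saturation at the dry-bulb
    temperature. `coeff` is an assumed aspirated-psychrometer constant."""
    e_actual = (saturation_vapour_pressure(wet_bulb)
                - coeff * pressure_hpa * (dry_bulb - wet_bulb))
    return 100.0 * e_actual / saturation_vapour_pressure(dry_bulb)

# Example reading: 22.0 C dry bulb, 18.0 C wet bulb -> roughly 68% RH
print(f"{relative_humidity(22.0, 18.0):.0f}%")
```

With a dry bulb of 22.0ºC and a wet bulb of 18.0ºC this reports roughly 68% relative humidity, which should agree with the instrument's tables to within a few percent.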
All types of thermometer and hygrometer must be kept out of direct sunlight at the time of reading (15:00-16:00) and for half an hour beforehand, either using some form of white louvered screen (below, £90), or by placing it in a position where sunlight doesn’t reach when you are observing, for example sheltered north facing location. As a last resort the thermometer can be hung on a north-facing wall or fence, but stood off as far away from the wall as possible to allow air to circulate all around it.
An Automatic Weather Station (AWS) measures temperature and humidity (and other quantities) with outdoor instruments which radio the data to an indoor display console. They can be bought for as little as £100 (below left), but for better accuracy you will have to pay £300 or more (below, right). They are generally mounted on top of fences or garages, to put them out in the open as much as possible. AWS can be mounted in direct sunlight, but in light winds and strong sunlight the budget versions can be up to 4 degrees in error. Temperature and humidity can be read direct from the indoor display and entered in your report. |
On 7 December 1941, a Japanese naval attack force launched a surprise air attack on U.S. military installations on the island of Oahu, in the U.S. Territory of Hawaii. Two waves of aircraft, 353 in all, attacked the naval base at Pearl Harbor, the home of the U.S. Pacific Fleet, Hickam, Wheeler and Bellows Army airfields, Schofield Barracks, Kaneohe Naval Air Station, and Ewa Marine Corps Air Station. The attack was the greatest military defeat in U.S. history, and when it was over, 2,388 U.S. sailors, soldiers, and civilians were dead, while another 1,178 were wounded. The Japanese had sunk or damaged 21 ships of the U.S. Pacific Fleet, including eight front-line battleships. The attack thrust the United States into World War II against Japan and her Axis allies, Germany and Italy.
While the Japanese achieved a temporary victory against the United States, the attack set in motion the chain of events that would ultimately lead to the defeat of Japan and the Axis nations in 1945. The seeds of the attack were planted in 1931, when Japan invaded the Chinese province of Manchuria. The invasion of Manchuria was the first step in Japanese imperial expansion, and in 1937 Japan launched a full scale war against China.
In response to the Japanese invasion of China, the United States increased military and financial aid to the Chinese and cut off exports of oil and other raw materials to Japan. The Japanese viewed this embargo as a direct threat to their national security and decided to seize and conquer other Asian and Pacific territories that were rich in oil and the natural resources that Japan did not possess.
Japan knew that the United States did not condone its war with China and would not agree to its seizure of additional territory in Asia. Both the American and Japanese governments had taken strong diplomatic positions in regards to each other that would not allow "backing down" without some sort of national humiliation and embarrassment. While the two governments continued negotiations to find a peaceful solution to the diplomatic impasse, the Japanese government believed that war with the United States was inevitable and began to prepare accordingly.
Japan decided that the only way to defeat the United States was to preemptively destroy the U.S. Pacific Fleet at Pearl Harbor with a strong and decisive blow. The Japanese believed that American industrial might would tip the scales against Japan in a prolonged war, and felt that their military success depended on destroying the U.S. Pacific Fleet early in the war. While the United States was recovering from such an attack, the Japanese felt they would be able to pursue their military campaign throughout Asia and the Pacific, unimpeded by the United States.
The Japanese also believed that a decisive victory would demoralize and eliminate the will of the American people to engage in war with Japan. While history has shown us that the Japanese were greatly mistaken about this, it should be remembered that the American people in 1941 were deeply divided over the issue of war, with a large share of the populace holding isolationist views. While many Americans tended to sympathize with the Allied nations, the memory of World War I still lingered in the national psyche, and the American people as a whole had no desire to fight another war.
It can be argued that the Japanese attack was, in a sense, a desperate act by a desperate nation. Japan's quest for imperial expansion put it on a collision course with the United States. With neither side willing to retreat from its position, the Japanese believed there was no other course of action but war with the U.S. Once this was decided, Japan concluded that the only route to victory was to destroy the U.S. Pacific Fleet in a quick and decisive attack. Through a long, winding and difficult road, Japan finally made the fateful decision that would forever link Japan with Pearl Harbor and 7 December 1941.
In middle school, 6th graders are encouraged to push themselves further in their writing and write with increased complexity in terms of length, subject matter, vocabulary, and general writing techniques. At the same time, 6th graders practice and refine many of the skills previously taught to them while enhancing them with the new skills and techniques they learn.
In order to build writing skills, your 6th grader:
- Writes using more complex vocabulary and about more complex content.
- Writes over extended periods of time, such as when writing long-term research or expressive pieces that may take a week.
- Writes for short amounts of time, such as in one sitting.
- Writes a variety of genres for a variety of audiences.
Writes structured and well-organized opinion, research, and informative pieces that:
- Use supporting claims and evidence based on credible texts and resources.
- Include an introduction, a conclusion, and transitions.
- Integrate other forms of media and formats, such as graphs, charts, headings, audio, or video when appropriate.
Writes well-structured narratives (both true and fiction) that include:
- Descriptive detail of characters, settings, and experiences.
- A clear structure, with a logical order and flow, thought-out word choice, and a conclusion.
- Plans, revises, and edits writing, with guidance from teachers and peers.
- Writes pieces that display the reading skills achieved, including analysis of text, making comparisons and claims, and developing arguments using specific evidence.
- Uses technology and the Internet to produce and publish writing, work with others, and type a minimum of three pages in one sitting. |
What is a workbook and why is it important?
A student workbook is an educational material which helps students secure the knowledge and skills stated in the acquisitions of the education programs (MEB, 2004). Students should benefit from various additional materials in order to achieve lasting learning, and these materials should meet the demands of good learning. Students have different learning styles and needs; a variety of materials will increase the chance of creating learning that meets the needs of different learning styles (Yalın, 2004).
DOWNLOAD: WORKBOOKS FOR GRADE 3
- GRADE 3: WORKBOOK IN CURSIVE: DOWNLOAD
- GRADE 3: WORKBOOK IN ENGLISH: DOWNLOAD
- GRADE 3: WORKBOOK IN GRAMMAR: DOWNLOAD
- GRADE 3: WORKBOOK IN HANDWRITING: DOWNLOAD
- (MAIN POST) WORKBOOKS FOR GRADE 1-6: DOWNLOAD
- GRADE 1-6 MODULES IN MATH: DOWNLOAD
- MELC CG CODES & GUIDELINES: DOWNLOAD
- BUDGET OF WORK BASED ON MELC: DOWNLOAD
- READING MATERIALS: DOWNLOAD |
A pollen tube is a tubular structure produced by the male gametophyte (pollen grain) of seed plants when it germinates on top of the stigma. Its elongation is an integral stage in the plant life cycle, as it carries the sperm nuclei towards the ovule and enables fertilization to take place.
- In maize, the pollen tube can grow longer than 12 inches (30 cm) to traverse the length of the pistil.
- Pollen tubes were first discovered by Giovanni Battista Amici (1824) in the 19th century.
Figure: Ancient historical evidence showed that pollen had to be brushed on the stigma surface as a means to assure seed production. As long ago as 5000 B.C., both Assyrian priests and Egyptian gods were depicted performing ceremonial fertilization of date palms. Source: Artificial Pollination in Tree Crop Production.
History of discovery
Giovanni Battista Amici (1824) was an Italian astronomer, a good microscope maker, and a botanist. In 1822, he accidentally found that the stigma of Portulaca oleracea was covered with hairs which contained some granules or particles inside them. Curiosity prompted him to ascertain whether they moved in the same way as the granules he had seen in the cells of Chara. It pleased him to find that they did.
While repeating the observation, he accidentally saw a pollen grain attached to the hair he had under observation. Suddenly the pollen grain split open and sent out a kind of tube or “gut” which grew along the side of the hair and entered the tissues of the stigma. For three hours he kept it under observation and watched the cytoplasmic granules circulate inside it, but eventually, he lost sight of them and could not say whether they returned to the grain, entered the stigma, or dissolved away in some manner.
Amici described the event as
”I happened to observe a hair with a grain of pollen attached to its tip which after some time suddenly exploded and sent out a type of transparent gut. Studying this new organ with attention, I realized that it was a simple tube composed of a subtle membrane, so I was quite surprised to see it filled with small bodies, part of which came out of the grain of pollen and the others which entered after having traveled along the tube or gut.”
Amici’s discovery stimulated the young French botanist Brongniart (1827) to examine a large number of pollinated pistils with a view to understanding the interaction between the pollen and the stigma and the introduction of the fertilizing substance into the ovule. He found the formation of the pollen tubes (he called them spermatic tubules) to be a very frequent occurrence but persuaded himself to believe that, after penetrating the stigma, the tubes burst and discharged their granular contents, which he likened to the spermatozoids of animals and considered to be the active part of the pollen. He thought he saw these “spermatic granules” vibrating down the whole length of the style and entering the placenta and ovule, and he drew a series of figures to illustrate the whole process.
In appreciation of this work, Brongniart was awarded a prize by the Paris Academy of Sciences and recommended for admission to the Academy.
Amici (1830) applied himself once again to the problem, studying Portulaca oleracea, Hibiscus syriacus, and other plants, and wrote a letter to Mirbel in which he put the following question:
“Is the prolific humor (means liquid containing tube) passed out into the interstices (means inside gap) of the transmitting tissue of the style, as Brongniart has seen and drawn it, to be transported afterward to the ovule, or is it that the pollen tubes elongate bit by bit and finally come in contact with the ovules, one tube for each ovule?“
His observations completely ruled out the first alternative, and he definitely concluded in favor of the second.
At about the same time, Robert Brown (1831, 1833) saw pollen grains on the stigmas and pollen tubes in the ovaries of certain orchids and asclepiads (members of Asclepiadaceae) but was uncertain as to whether the tubes were always connected with the pollen grains. He thought instead that, at least in some cases, the tubes arose within the style itself, although possibly they were stimulated to develop in consequence of the pollination of the stigma.
Scientific Papers on This Topic
- Pollen Germination in vitro by Jayaprakash P.
- Brink, Royal Alexander. “The physiology of pollen. I. The requirements for growth.” American Journal of Botany 11.4 (1924): 218-228.
- Pinillos, Virginia & Cuevas, Julián. (2008). Artificial Pollination in Tree Crop Production. 10.1002/9780470380147.ch4.
- Hepler, Peter K., et al. “Ions and pollen tube growth.” The pollen tube. Springer, Berlin, Heidelberg, 2006. 47-69.
- Kessler, Sharon A., and Ueli Grossniklaus. “She’s the boss: signaling in pollen tube reception.” Current opinion in plant biology 14.5 (2011): 622-627.
- Introduction to Embryology of Angiosperm by P. Maheshwari (Find this book in Plantlet’s Book section).
- Early Pollen Research and Pollen Physiology.
- Revised and edited by Abulais Shomrat on 25 October 2020. |
For the Student Climate Survey, inspiration was gained from this Student Climate Survey. Student surveys are a cornerstone of my instructional practice. Our school improvement teams analyze the survey data, and then make any necessary adjustments in order to improve the school climate. Classroom climate refers to the prevailing mood, attitudes, standards, and tone that you and your students feel when they are in your classroom. The purpose of the Classroom Environment Scale (CES) is to assess learning environments and compare teacher/student responses in order to determine areas of growth and foster positive change within the classroom. My students take a survey once or twice each month to reflect on their learning and classroom experiences as well as to provide me with valuable feedback. In one district, a simple change was made to reduce the intimidation factor of transitioning from elementary to middle school (an anxiety that many students reported on a school climate survey): instead of having to get up and introduce themselves to the entire class, teachers had students meet each other in small groups on the first day of class. On the Panorama Student Survey, School Climate and Classroom Climate are each assessed with 5 questions written for older students (grades 6-12) and 4 questions written for younger students (3-5). The School Climate Survey for Students provides a solid series of questions based on key dimensions known to impact student perception of their schools. The test itself is part of a larger assessment tool in measuring UPDATE: Read our latest article “Three Ways to Foster a Positive Classroom Climate” written by Kim Gulbrandson, Ph.D. I’ve been hearing a lot about “positive classroom climate.” What does this mean? With these tools, educators can start building a positive classroom climate on day one. After some revisions, the Student Climate Survey below demonstrates a survey that is adequate for kindergarten to first grade, includes data on student learning, classroom climate, and teacher characteristics, and will provide helpful student feedback to improve the overall classroom experience for students. Students check off rarely, sometimes, usually, or always to 5 brief statements about instructor responsiveness. Independence Elementary Climate Surveys: Twice a year, Independence Elementary teachers, students, and parents are surveyed to find out their overall feelings about the school climate. There is also room for additional student comments. Teachers can use this as an exit-ticket-type survey. After years of surveys, I have tried many questions and question types and have found some that lead to […] Questionnaires also can provide a sense of students as individuals. This NCES effort extends activities to measure and support school climate by ED’s Office of Safe and Healthy Students (OSHS). From giving an overall grade to their school to providing answers regarding specific programs and situations, students are the best suited to identify key issues that impact their experiences. The second component (Diversity Values) consisted of The first component (Classroom Positive) included 12 items that concerned the favorableness of the classroom experience, including perceptions of how the instructor treated students and an evaluation of the physical environment in terms of accessibility.
Student surveys get teachers up to speed quickly regarding young people’s learning preferences, strengths and needs. The ED School Climate Surveys (EDSCLS) are a suite of survey instruments that were developed for schools, districts, and states by NCES.
|
Anthropology is the science that deals with the biological aspects of humans and their behaviour as members of a society. Its area of activity covers three dimensions of the human being: the biological, the cultural and the philosophical:
a) Biological anthropology studies the anatomical and physical transformations experienced by humans throughout their biological evolution, as well as their origin and differentiation as a species in the animal world.
b) Sociocultural anthropology studies human beings through their relationships with other living beings. Its research deals with the comparative study of different social systems and different types of group behaviour.
c) Philosophical anthropology tries to establish, in line with science, the place of human beings in the world, their origin and nature, but focusing on understanding human beings as persons in terms of values, rights, freedoms and equality.
- THE BIOLOGICAL DIMENSION OF THE HUMAN BEING
Creationism. This theory asserts that the world and all the living things were created by God from nothingness. This theory is the basis of many religious doctrines, not only of Christianity.
1) All living species were created separately by God from nothingness at the beginning of time, so they are neither related to one another nor derived from one another.
2) God created mankind in his own image. So the human being plays a special role in the divine creation.
Fixism. A pseudo-scientific theory formulated by Carl Linnaeus (1707-1778) which states that species, both plant and animal, do not evolve but remain unchanged over time. To explain the fact, evidenced by the fossil record, that certain species have disappeared and new ones have emerged, fixism was complemented by the catastrophist explanation (catastrophism) developed by Georges Cuvier.
- Lamarck and the inheritance of acquired characteristics
The law of the use or disuse of the organs. In every animal which has not passed the limit of its development, a more frequent and continuous use of any organ gradually strengthens, develops and enlarges that organ, and gives it a power proportional to the length of time it has been so used;
The law of the inheritance of acquired characteristics. All the acquisitions or losses provoked by nature on individuals, through the influence of the environment, and hence through the influence of the predominant use or permanent disuse of any organ are preserved by reproduction to the new individuals which arise. |
Legionnaires’ disease, also known as legionellosis, is a rare form of pneumonia. It takes its name from the first known outbreak, which occurred in 1976 at a hotel hosting a convention of the Pennsylvania Department of the American Legion.
It is a type of pneumonia caused by bacteria. You usually get it by breathing in mist from water that contains the bacteria. The Legionella bacteria are found naturally in the environment, usually in water. The bacteria grow best in warm water, like the kind found in hot tubs, cooling towers, hot water tanks, large plumbing systems, or parts of the air-conditioning systems of large buildings. They do not seem to grow in car or window air-conditioners. The mist may come from hot tubs, showers or air-conditioning units for large buildings. The bacteria do not spread from person to person. The disease is fatal in approximately 5% to 15% of cases.
Symptoms of Legionnaires’ disease include fever, chills, a cough and sometimes muscle aches and headaches. Other types of pneumonia have similar symptoms. You will probably need a chest x-ray to diagnose the pneumonia. Lab tests can detect the specific bacteria that cause Legionnaires’ disease.
The bacteria are more likely to make you sick if you:
* Are older than 65
* Have a lung disease
* Have a weak immune system
Legionnaires’ disease is serious and can be life-threatening. However, most people recover with antibiotic treatment. Legionnaires’ has an incubation period of between two and 10 days.
Initial symptoms are similar to those of flu – headache, muscle pain, and a general feeling of being unwell. These symptoms are followed by high fever and shaking chills. Nausea, vomiting, and diarrhoea may occur. On the second or third day, dry coughing begins and chest pain might occur. There may also be difficulty breathing. Mental changes, such as confusion, disorientation, hallucination and loss of memory, can occur to an extent that seems out of proportion to the seriousness of the fever. Some patients may develop pneumonia; this could affect both lungs and lead to hospitalisation if severe.
Legionnaires’ disease is underreported and underdiagnosed, primarily because special tests are needed to distinguish Legionnaires’ disease from other types of pneumonia. To help identify the presence of legionella bacteria quickly, your doctor may use a test that checks your urine for legionella antigens — foreign substances that trigger an immune system response. You may also have one or more of the following:
* Blood tests
* A chest X-ray, which doesn’t confirm Legionnaires’ disease but does show the extent of infection in the lungs
* Tests on a sample of your sputum or lung tissue
* A CT scan of your brain or a spinal tap (lumbar puncture) if you have neurological symptoms such as confusion or trouble concentrating
Legionnaires’ disease usually strikes middle-aged people. Those at risk include smokers and those with an existing health problem. Many others may contract the bug and yet show no signs of infection, and it is likely that many cases of Legionnaires’ disease go undiagnosed. People suffering from cancer or chronic kidney disease are among those less able to fight infections. Chronic diseases, such as diabetes and alcoholism, also seem to increase vulnerability to Legionnaires’ disease. Cigarette smokers are more likely to contract Legionnaires’ disease, perhaps because smokers are generally more likely than non-smokers to develop respiratory tract infections.
Legionnaires’ disease is most often treated with the antibiotics erythromycin and rifampin. Recovery often takes several weeks.
The likelihood of Legionella infection can best be reduced by good engineering practices in the operation and maintenance of air and water handling systems. Cooling towers and evaporative condensers should be inspected and thoroughly cleaned at least once a year. Corroded parts, such as drift eliminators, should be replaced, and algae and accumulated scale should be removed. Cooling water should be treated constantly; ideally, an automatic water treatment system should be used that continuously controls the quality of the circulating water. Fresh air intakes should not be built close to cooling towers, since contaminated water particles may enter the ventilation system. This page contains basic information; if you are concerned about your health, consult a doctor.
Disclaimer: This information is not meant to be a substitute for professional medical advice or help. It is always best to consult a physician about serious health concerns. This information is in no way intended to diagnose or prescribe remedies; it is purely for educational purposes.
BBC NEWS:8 Feb, 2003 |
In vascular plants, xylem is one of the two types of transport tissue; phloem is the other vascular tissue. Xylem is the primary water-conducting tissue and phloem circulates a nutrient-rich sap throughout the plant.
The term “xylem” is derived from classical Greek xúlon, "wood," and indeed the best known xylem tissue is wood. Xylem conducts water and dissolved minerals from the root up the plant into the shoots.
The vascular system of xylem and phloem tissue reflects a unity and harmony of creation. The xylem moves water and minerals from the soil, through the roots, to other parts of the plant, including the leaves. The phloem transports sugars, produced in the leaves, to the diverse parts of the plant, including the roots. An analogy is often drawn between this network (xylem and phloem) and the harmony of the blood vessels (veins and arteries) of the human body, with both systems transporting essential fluids to and from parts of the organism.
Xylem can be found:
- in vascular bundles, present in non-woody plants and non-woody plant parts.
- in secondary xylem, laid down by a meristem called the vascular cambium. A meristem is a tissue in plants consisting of undifferentiated cells (meristematic cells) and found in zones of the plant where growth can take place—roots and shoots.
- as part of a stelar arrangement not divided into bundles, as in many ferns.
The most distinctive cells found in xylem are those that conduct water, the tracheary elements: tracheids and vessel elements. Both are elongated cells that are dead; the living material in the interior disintegrates, leaving behind thickened cell walls through which xylem sap flows. (Sap usually refers to a watery fluid with dissolved substances that travels through vascular tissues, whether involving the xylem or the phloem.)
In most plants, pitted tracheids function as the primary transport cells. Vessel elements transport water in angiosperms. Xylem also contains other kinds of cells in addition to those that serve to transport water.
A tracheid conducts water and gives support to the xylem. Tracheids are long, narrow cells with tapered ends whose walls are hardened with lignin, a chemical compound that fills in spaces in the cell wall. The lignin thickens the wall, making it strong and able to provide support as well as function in water transport. There are spaces along the cell wall where the secondary walls, hardened with lignin, are absent. Here, there are only the thin primary walls. These regions where only primary walls are present are called pits. Water passes from cell to cell through the pits.
Vessel elements are the building blocks of vessels, which constitute the major part of the water-transport system in the plants where they occur. They are elongated, but generally shorter and wider than tracheids. As with tracheids, the cell wall of vessel elements is strongly lignified. At both ends, there are openings that connect the individual vessel elements. These are called perforations or perforation plates, and they allow water to flow easily through the xylem vessel. These perforations have a variety of shapes: the most common are the simple perforation (a simple opening) and the scalariform perforation (several elongated openings on top of each other in a ladder-like design). Other types include the foraminate perforation plate (several round openings) and reticulate perforation plate (net-like pattern, with many openings). The side walls will have pits, and may have spiral thickenings.
Two forces cause xylem sap to flow:
- The soil solution (see soil) is more dilute than the cytosol of the root cells. Thus, water moves osmotically into the cells, creating root pressure. Root pressure is highly variable between different plants. For example, in Vitis riparia pressure is 145 kPa, but it is near zero in Celastrus orbiculatus (Tibbetts and Ewers 2000).
- The main phenomenon driving the flow of xylem sap is transpirational pull. The reverse of root pressure, this is caused by transpiration, the loss of water by evaporation. In larger plants such as trees, the root pressure and transpirational pull work together as a pump that pulls xylem sap from the soil up to where it is transpired.
Xylem is present in vascular bundles, secondary xylem, and stelar arrangements.
A vascular bundle is a strand of vascular tissue that runs the length of a stem. Both xylem and phloem tissues are present in a vascular bundle, which also has supporting and protective tissues.
The xylem typically lies adaxial (towards the axis or central line) with phloem positioned abaxial (away from the axis or central line). In a stem or root, where vascular bundles are cylindrical, this means that the xylem is closer to the center of the stem or root while the phloem is closer to the exterior, in the bark area. In a leaf, the adaxial surface of the leaf will usually be the upper side, with the abaxial surface the lower side. Aphids are typically found on the underside of a leaf rather than on the top, since the sugars manufactured by the plant are transported by the phloem, which is closer to the lower surface.
Usually a vascular bundle will contain primary xylem only.
The position of vascular bundles relative to each other may vary considerably.
The girth, or diameter, of stems and roots increases by secondary growth, which occurs in all gymnosperms, and most dicot species among angiosperms. Secondary xylem is laid down by the vascular cambium, a continuous cylinder of meristematic cells that forms the secondary vascular tissue.
The vascular cambium forms in a layer between the primary xylem and primary phloem, giving rise to secondary xylem on the inside and secondary phloem on the outside. Every time a cambium cell divides, one daughter cell remains a cambium cell while the other differentiates into either a phloem or a xylem cell. Cambium cells give rise to secondary xylem to the outside of the established layer(s) of xylem during secondary growth.
A cross section of a stem after secondary growth would show concentric circles of pith (the center), primary xylem, secondary xylem, vascular cambium, secondary phloem, primary phloem, cork cambium, cork, and periderm (the outermost layer). Bark consists of tissues exterior to the vascular cambium.
The tree's diameter increases as layers of xylem are added, producing wood. The secondary phloem eventually dies, protecting the stem until it is sloughed off as part of the bark during later growth seasons.
The two main groups in which secondary xylem can be found are:
- conifers (Coniferae): there are some six hundred species of conifers. All species have secondary xylem, which is relatively uniform in structure throughout this group. Many conifers become tall trees: the secondary xylem of such trees is marketed as softwood.
- angiosperms (Angiospermae): there are some 250,000 to 400,000 species of angiosperms. Within this group, secondary xylem has not been found in the monocots. In the remainder of the angiosperms, secondary xylem may or may not be present; this may vary even within a species, depending on growing circumstances. Many non-monocot angiosperms become trees, and the secondary xylem of these is marketed as hardwood.
Secondary xylem is also found in members of the "gymnosperm" groups Gnetophyta and Ginkgophyta and to a lesser extent in members of the Cycadophyta.
Xylem can also be found in stelar arrangements. In a vascular plant, the stele is the central part of the root or stem containing the vascular tissue and occasionally a pith.
The earliest vascular plants are considered to have had both root and shoot with a central core of vascular tissue. They consisted of xylem in the center, surrounded by a region of phloem tissue. Around these tissues there might be an endodermis that regulated the flow of water into and out of the vascular core. Such an arrangement is termed a protostele.
There are three basic types of protostele:
- haplostele—the most basic of protosteles, with a cylindrical core of vascular tissue. This type of stele is the most common in roots.
- actinostele—a variation of the protostele in which the core is lobed. This type of stele is rare among living plants, but is found in stems of the whisk fern, Psilotum.
- plectostele—a protostele in which interconnected plate-like regions of xylem are surrounded and immersed in phloem tissue. Many modern club mosses (Lycopodiopsida) have this type of stele within their stems.
Plants that produce complex leaves also produce more complex stelar arrangements. The hormones produced by the young leaf and its associated axillary bud affect the development of tissues within the stele. These plants have a pith in the center of their stems, surrounded by a cylinder containing the vascular tissue. This stelar arrangement is termed a siphonostele.
There are three basic types of siphonostele:
- solenostele—the most basic of siphonosteles, with a central core of pith enclosed in a cylinder of vascular tissue. This type of stele is found only in fern stems today.
- dictyostele—a variation of the solenostele caused by dense leaf production. The closely arranged leaves create multiple gaps in the stelar core. Among living plants, this type of stele is found only in the stems of ferns.
- eustele—the most common stelar arrangement in stems of living plants. Here, the vascular tissue is arranged in vascular bundles, usually in one or two rings around the central pith. In addition to being found in stems, the eustele appears in the roots of monocot flowering plants.
Siphonosteles may be ectophloic, with the phloem tissue positioned on one side of the xylem and closer to the epidermis. They may also be amphiphloic, with the phloem tissue on both sides of the xylem. Among living plants, many ferns and some Asterid flowering plants have an amphiphloic stele.
There is also a variant on the eustele found in monocots like maize and rye. The variation has numerous scattered bundles in the stem and is called an atactostele. However, it is really just a variant of the eustele.
Xylem appeared early in the history of terrestrial plant life. Fossil plants with anatomically preserved xylem are known from the Silurian (more than four hundred million years ago), and trace fossils resembling individual xylem cells may be found in earlier Ordovician rocks. The earliest true and recognizable xylem consists of tracheids with a helical-annular reinforcing layer added to the cell wall. This is the only type of xylem found in the earliest vascular plants, and this type of cell continues to be found in the protoxylem (first-formed xylem) of all living groups of plants. Several groups of plants later developed pitted tracheid cells, apparently through convergent evolution. In living plants, pitted tracheids do not appear in development until the maturation of the metaxylem (following the protoxylem).
The presence of vessels in xylem has been considered to be one of the key innovations that led to the success of the angiosperms. However, the occurrence of vessel elements is not restricted to angiosperms, and they are absent in some archaic or "basal" lineages of the angiosperms: (e.g., Amborellaceae, Tetracentraceae, Trochodendraceae, and Winteraceae), and their secondary xylem is described as "primitively vesselless" (Cronquist 1988). Whether the absence of vessels in basal angiosperms is a primitive condition is contested, the alternative hypothesis being that vessel elements originated in a precursor to the angiosperms and were subsequently lost (Muhammad 1982; Carlquist 2002).
Although in vascular plants the xylem is the principal water-transporting medium and the phloem the main pathway of sugar transport, at times sugars do move in the xylem. An example of this is maple sap, which is used to produce maple syrup. In late winter/early spring, producers of maple syrup tap trees and collect a sugary solution from the xylem, derived from carbohydrates stored in the stem. This collection can be done from several species of trees, but the most popular is Acer saccharum, the "sugar maple" or "hard maple." During cold nights, the hydrolysis of starch reserves in the xylem parenchyma cells produces sugars that are transported in the xylem during warm days, forced up the trunk by expanding carbon dioxide (CO2).
- Campbell, N. A., and J. B. Reece. 2002. Biology (6th ed.). San Francisco, CA: Benjamin Cummings. ISBN 0805366245
- Carlquist, S., and E. L. Schneider. 2002. “The tracheid–vessel element transition in angiosperms involves multiple independent features: cladistic consequences.” American Journal of Botany 89: 185-195.
- Cronquist, A. 1988. The Evolution and Classification of Flowering Plant. New York, New York: The New York Botanical Garden. ISBN 0893273325
- Gifford, E. M., and A. S. Foster. 1988. Morphology and Evolution of Vascular Plants. (3rd ed.). New York: W. H. Freeman and Company. ISBN 0716719460
- Kenrick, P., and P. R. Crane. 1997. The Origin and Early Diversification of Land Plants: A Cladistic Study. Washington, D. C.: Smithsonian Institution Press. ISBN 1560987308
- Niklas, K. J. 1997. The Evolutionary Biology of Plants. Chicago and London: The University of Chicago Press. ISBN 0226580822
- Tibbetts, T. J., and F. W. Ewers. 2000. “Root pressure and specific conductivity in temperate lianas: exotic Celastrus orbiculatus (Celastraceae) vs. native Vitis riparia (Vitaceae).” American Journal of Botany 87: 1272-78.
- Timonen, T. 2002. Introduction to Microscopic Wood Identification. Finnish Museum of Natural History, University of Helsinki.
- Wilson, K., and D. J. B. White. 1986. The Anatomy of Wood: its Diversity and variability. London: Stobart & Son Ltd. ISBN 0854420347
- Muhammad, A. F. and R. Sattler. 1982. Vessel Structure of Gnetum and the Origin of Angiosperms. American Journal of Botany 69: 1004-1021.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution; credit is due under the terms of this license to both New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation. Some restrictions may apply to the use of individual images, which are separately licensed. |
The PYP is a rigorous study program based on research that transforms teaching practices.
It is inquiry-based learning and research that helps to develop the student’s understanding of the world.
The PYP seeks above all to create a balance between the quest for meaning and understanding and the acquisition of essential skills and knowledge.
- Who are we: An inquiry into the nature of the self; beliefs and values; personal, physical, mental, social and spiritual health; human relationships including families, friends, communities and cultures; rights and responsibilities; what it means to be human
- Where are we in place and time: An inquiry into orientation in place and time; personal histories; homes and journeys; the discoveries, explorations and migrations of humankind; the relationships between, and the interconnectedness of, individuals and civilizations from local and global perspectives
- How we express ourselves: An inquiry into the ways in which we discover and express ideas, feelings, nature, culture, beliefs and values; the ways in which we reflect on, extend and enjoy our creativity; our appreciation of the aesthetic.
- How the world works: An inquiry into the natural world and its laws; the interaction between the natural world (physical and biological) and human societies; how humans use their understanding of scientific principles; the impact of scientific and technological advances on society and on the environment.
- How we organize ourselves: An inquiry into the interconnectedness of human-made systems and communities; the structure and function of organizations; societal decision-making; economic activities and their impact on humankind and the environment.
- Sharing the planet: An inquiry into rights and responsibilities in the struggle to share finite resources with other people and with other living things; communities and the relationships within and between them; access to equal opportunities; peace and conflict resolution.
Although the PYP adopts a transdisciplinary learning model, it is important to understand that knowledge disciplines are not the enemy to flee but rather an effective and necessary ally (Beane, 1995).
The question, therefore, is not whether to make room for subject-specific knowledge, but how to integrate knowledge into the transdisciplinary module in a convincing and authentic way.
To do this, transdisciplinary themes will be studied through six subjects:
– Languages: English and French
– Individuals and Societies
– Personal, Social and Physical Education
In the final year of PYP, students participate in a graduation project: the exhibition.
This project requires each student to invest in the 5 essential elements of the programme: knowledge, concepts, know-how, skills and action.
It is both a transdisciplinary research project conducted in the spirit of a personal and shared responsibility, and a summative evaluation activity to celebrate the transition of students from the PYP to the MYP.
The exhibition is an important event that brings together the essential elements of the PYP and shares them with the wider school community.
There are different assessment strategies:
- Task completion
- Procedure assessment
- Answers to set questions
- Open-ended questions and tasks |
Set kids up for life, with these tips for building gentle resilience
They may not pay bills, have full-time jobs, or deal with the pressures of adulthood, but children’s lives can still be extremely stressful as they’re constantly navigating multiple stages of emotional and physical development. Additionally, children experience daily stressors such as conflicts with peers, educational and social expectations, and the pressures that accompany their familial roles and relationships.
As a child psychotherapist, I’ve assisted many parents and caregivers as they help their children learn to manage both daily and transitional stress. I encourage adults to teach children the coping skills that they use to manage their own stress, and to discover new skills that they can practise together.
The process of teaching a child coping skills is not easy. Children may struggle with understanding, implementation, and motivation. How do you not only teach, but also encourage, children to consistently incorporate such skills into their lives? Here are my best tips for teaching a child to practise and internalise coping skills:
1. Keep it simple
Children don’t need overly detailed or complicated instructions. In fact, children are more likely to become distracted and/or forget the instructions if they’re too complex. Instead, choose a coping skill that is simple to do, and that requires few instructions. For example, progressive muscle relaxation (the practice of tensing one muscle group at a time, followed by a relaxation phase) can be used on all the muscles in the body. However, you may want to start with one muscle group at a time when teaching this to children. Also, try to avoid over-explaining how the coping skill works. So, instead of explaining how emotions become locked in the body and why releasing them is essential, it might be more effective simply to say: “This could make you feel better in your body and your mind.”
2. Incorporate all three learning styles
There are three main styles of learning: visual, auditory, and kinesthetic. Adults and children tend to have one or two primary styles. For example, one child may be primarily auditory, whereas another may be more visual, and another child may need a combination of auditory and visual learning techniques. When teaching children, you should try using all three styles so that you can meet their unique learning needs. In order to instruct children in a manner that incorporates each of the three main styles of learning, you would verbally explain the directions (auditory), show them how to do it (visual), and then do it with them (kinesthetic).
3. Make it fun
Children are less likely to try to use coping skills that are boring. Make sure that the skills that you teach are engaging. For example, one type of progressive muscle relaxation consists of sitting still and being aware of each of your muscle groups. Children can struggle to sit still and focus. Therefore, I tend to teach children the more active version of this skill, which consists of them squeezing and then releasing their muscles. You can also make ‘boring’ coping skills more engaging by adding fun elements to them, such as asking the child to pretend that they are squeezing slime out of their hands, or you might even give them slime or a ball to squeeze.
“If you want to teach a child a coping skill, then you need to use it, too”
4. Model the skills yourself
If you want to teach a child a coping skill, and you want them to use it, then you need to use it too. Observing an authority figure using a skill can instil a sense of the importance of the practice, which can motivate a child to implement the skill. This can be a better tactic than simply telling the child that they need to use the skill.
5. Utilise your support network
It’s not enough simply to teach a child a new skill. You also need to provide future support in order to help them use this skill when needed. A good way to provide this is to communicate with supportive adults in the child’s life, such as teachers, parents, family members, and members of the community. Inform these adults about the new skill, and make sure they know how to use it and when the child needs to use it. These adults can remind and encourage the child to utilise the skill, provide praise when the child uses the skill, and they can try the actions themselves in the child’s presence in order to model and reinforce it.
If your children are struggling with their mental wellbeing, visit counselling-directory.org.uk |
Infection occurs when skin comes in contact with contaminated freshwater in which certain types of snails that carry the parasite are living. Freshwater becomes contaminated by schistosome eggs when infected people urinate or defecate in the water. The eggs hatch, and if the appropriate species of snails are present in the water, the parasites infect, develop and multiply inside the snails. The parasite leaves the snail and enters the water where it can survive for about 48 hours. Larval schistosomes (cercariae) can penetrate the skin of persons who come in contact with contaminated freshwater, typically when wading, swimming, bathing, or washing. Over several weeks, the parasites migrate through host tissue and develop into adult worms inside the blood vessels of the body. Once mature, the worms mate and females produce eggs. Some of these eggs travel to the bladder or intestine and are passed into the urine or stool.
Symptoms of schistosomiasis are caused not by the worms themselves but by the body’s reaction to the eggs. Eggs shed by the adult worms that do not pass out of the body can become lodged in the intestine or bladder, causing inflammation or scarring. Children who are repeatedly infected can develop anemia, malnutrition, and learning difficulties. After years of infection, the parasite can also damage the liver, intestine, spleen, lungs, and bladder.
Most people have no symptoms when they are first infected. However, within days after becoming infected, they may develop a rash or itchy skin. Within 1-2 months of infection, symptoms may develop including fever, chills, cough, and muscle aches.
Without treatment, schistosomiasis can persist for years. Signs and symptoms of chronic schistosomiasis include: abdominal pain, enlarged liver, blood in the stool or blood in the urine, and problems passing urine. Chronic infection can also lead to increased risk of liver fibrosis or bladder cancer.
Rarely, eggs are found in the brain or spinal cord and can cause seizures, paralysis, or spinal cord inflammation. |
In the planetary nursery
Astronomers determine the mass of the disk of gas and dust surrounding the star TW Hydrae
The disk surrounding the young star TW Hydrae is regarded as a prototypical example of planetary nurseries. Due to its comparatively close proximity of 176 light-years, the object plays a key role in models of how stars and planets are born. Using the Herschel Space Telescope, researchers including Thomas Henning from the Max Planck Institute for Astronomy in Heidelberg have, for the first time, determined the mass of the disk very precisely. The new value is larger than previous estimates and shows that planets similar to those of our solar system can form in this system. In addition, the observations are an example of how, in the world of science, not everything can be planned for.
Where Egyptologists have their Rosetta Stone and geneticists their Drosophila fruit flies, astronomers studying planet formation have TW Hydrae: A readily accessible sample object with the potential to provide foundations for an entire area of study. TW Hydrae is a young star with about the same mass as the Sun. It is surrounded by a protoplanetary disk: a disk of dense gas and dust in which small grains of ice and dust clump to form larger objects and, eventually, into planets. This is how our Solar System came into being more than 4 billion years ago.
What is special about the TW Hydrae disk is its proximity to Earth: at a distance of 176 light-years from Earth, this disk is two-and-a-half times closer to us than the next nearest specimens, giving astronomers an unparalleled view of this highly interesting specimen – if only figuratively, because the disk is too small to show up on an image; its presence and properties can only be deduced by comparing light received from the system at different wavelengths (that is, the object's spectrum) with the prediction of models.
In consequence, TW Hydrae has one of the most frequently observed protoplanetary disks of all, and its observations are a key to testing current models of planet formation. That's why it was especially vexing that one of the fundamental parameters of the disk remained fairly uncertain: The total mass of the molecular hydrogen gas contained within the disk. This mass value is crucial in determining how many and what kinds of planets can be expected to form.
Previous mass determinations were heavily dependent on model assumptions; the results had significant error bars, spanning a mass range between 0.5 and 63 Jupiter masses. The new measurements exploit the fact that not all hydrogen molecules are created equal: Some very few of them contain a deuterium atom – where the atomic nucleus of hydrogen consists of a single proton, deuterium has an additional neutron. This slight change means that these "hydrogen deuteride" molecules consisting of one deuterium and one ordinary hydrogen atom emit significant infrared radiation related to the molecule's rotation.
The Herschel Space Telescope provides the unique combination of sensitivity at the required wavelengths and spectrum-taking ability ("spectral resolution") needed for detecting these unusual molecules. The observation sets a lower limit for the disk mass at 52 Jupiter masses, with an uncertainty ten times smaller than the previous result. While TW Hydrae is estimated to be relatively old for a stellar system with a disk (between 3 and 10 million years), this shows that there is still ample matter in the disk to form a planetary system larger than our own (which arose from a much lighter disk).
On this basis, additional observations, notably with the millimetre/submillimetre array ALMA in Chile, promise much more detailed future disk models for TW Hydrae – and, consequently, much more rigorous tests of theories of planet formation.
The observations also throw an interesting light on how science is done – and how it shouldn't be done. Thomas Henning explains: "This project started in casual conversation between Ted Bergin, Ewine van Dishoeck and me. We realized that Herschel was our only chance to observe hydrogen deuteride in this disk – way too good an opportunity to pass up. But we also realized we would be taking a risk. At least one model predicted that we shouldn't have seen anything! Instead, the results were much better than we had dared to hope."
TW Hydrae holds a clear lesson for the committees that allocate funding for scientific projects or, in the case of astronomy, observing time on major telescopes – and which sometimes take a rather conservative stance, practically requiring the applicant to guarantee their project will work. In Henning's words: "If there's no chance your project can fail, you're probably not doing very interesting science. TW Hydrae is a good example of how a calculated scientific gamble can pay off."
HOR / MP |
Rebecca Johnson--To design or refine a course in any mode, begin by looking at your course's learning objectives and how they are assessed. Do your objectives clearly and accurately describe what students will learn in your course? Do assignments, materials, and student experiences align in such a way that students are likely to be successful in your course if they earnestly apply themselves? In this post, I will discuss some fundamental ideas in instructional design and how a design mindset can help you identify opportunities to refine your courses. Taking another look at your learning objectives can also help you refine with flexible learning in mind.
Start with Learning Objectives to Refine Instructional Choices
Instructional design (ID) is a systematic and iterative approach to creating learning experiences. The particular focus of instructional design is on learners and the skills, knowledge, and abilities they should gain as the result of participating in instruction. Learning is change. This change is also largely invisible. Instructors must devise methods for collecting evidence to determine how well students have learned, or the extent to which students have been changed by instruction. These desired changes are described as learning objectives. A learning objective should be clear, observable, and measurable.
Learning taxonomies are classification systems that describe different types of learning. The taxonomies employ observable and measurable verbs that instructors can use to write or refine learning objectives. For more information about learning taxonomies and writing good objectives, see
- Northeastern University’s Center for Advancing Teaching and Learning Through Research: Creating Outcomes That Guide Learning
- Harvard University’s Derek Bok Center for Teaching and Learning: Taxonomies of Learning
- Carnegie Mellon University’s Eberly Center for Teaching Excellence and Educational Innovation: Articulate Your Learning Objectives
Once the learning objectives for a course have been written, instructors select assessments. An assessment provides instructors (and students) with evidence of how well students are learning. An assessment can be formative or summative. A formative assessment provides feedback to learners on how well they are progressing, giving them opportunities to improve. A summative assessment provides information to instructors and students about how well students have achieved the learning objective. Summative assessments are often given at the end of a unit or the end of a class. See Classroom Assessment Techniques for more information on informal formative assessments; see also the Eberly Center's resource on Classroom Assessment Techniques.
Once objectives have been articulated and assessments designed, instructors can create experiences, select and sequence course materials, and provide opportunities for student interaction that they believe will help students learn. Designing instruction is a complex task. Happily, instructional design is iterative. Every course is an instructor’s best attempt at a design that supports student achievement of course objectives. Every course is also an opportunity to gather data that will help instructors refine their approach to the course. The ILI has experts on hand to assist you as you design and refine your courses.
Designing Flexibility into Your Course
Once the learning objectives are articulated, assessments identified, and course materials and topics selected and sequenced, you can begin to design a flexible course. Consider your objectives and how opportunities for gathering evidence about student learning may change depending on course mode. What physical and digital options do you have for assessing learning in a physical classroom? In Zoom breakout rooms? In a myCourses discussion board? Consider the suggestions in the blog post Using Digital Tools to Support Class Activities in All Modes.
The complexity of life in a pandemic has underscored the importance of maintaining communication with students and, when appropriate, developing flexible approaches to course policies and assignments. The following are ways that some faculty have responded to complexity with flexibility. Consult with colleagues and with your department chair if you have questions about implementing any of the following:
- Provide students with some flexibility on assignment due dates.
- Consider a revise and resubmit policy for some assignments.
- In a course with frequent, low-stakes quizzes or assignments, allow students to drop one or some number of them from the final grade tally.
- If a student experiences an unexpected disruption, consider allowing that student to complete group projects as an individual.
- Consider how you can provide students with options for the format of an assignment. What variety of evidence can you use to determine whether a student has met the desired learning objective? A poster instead of a paper? A video instead of a presentation?
These resources may be helpful to you as you design or refine with flexibility in mind.
- What Is Universal Design for Learning? Universal Design for Learning (UDL) combines what we know about how learning works with a commitment to providing course materials that are engaging and accessible to produce a learning experience that benefits all students. Instructional flexibility is a core value in Universal Design for Learning. This post has many links to UDL strategies and resources.
- How to Conduct Your Class Online This ILI blog post describes common tools used in the online mode.
- The Derek Bok Center for Teaching and Learning at Harvard has developed a set of resources on learner-centered design.
- The Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University has developed their web-based resources in line with learning science. Their Solve a Teaching Problem site is particularly useful. |
Extensive burning of vegetation on Brazilian territory and neighbouring countries has been monitored from space during the last months. One of the latest satellite derived fire products is shown below.
Fig.1. INPE-DSA Fire product of 31 March 1999 (Source: INPE-DSA)
The Brazilian Institute for Environment and Natural Resources (IBAMA) conducted an extensive study of burning activities in the Amazon region generally referred to as the “Arc of Deforestation” (arco do desflorestamento), in which forest conversion by fire and wildfire activities predominantly occur.
Fig.2. Arc of Deforestation in Brazil (Source: IBAMA 1998)
IBAMA differentiates between two kinds of vegetation fires: agricultural burnings (queimadas), in which fire is traditionally used as a tool for preparing pasture or crop land, and “wildfire” or “forest fire”, defined as “uncontrolled fire in any kind of vegetation, caused either by man or natural causes”.
Most of the fires in Brazil must be seen in the context of intensive land development. Fire is used as a tool in forest conversion, by small farmers as well as by large agro-industrial companies. The careless use of fire often allows the “prescribed” burnings to escape and become forest fires in the adjacent forests. These wildfires are of global importance because they threaten global biodiversity as well as the livelihood and cultural identity of the indigenous people in Amazonia.
Almost all fires in the Amazon Region are human-caused; natural fires play a minor role in the tropical rain forest of Brazil and neighbouring countries. In the seasonally dry forests and bush formations (cerrado), lightning fires are observed occasionally.
Under normal weather conditions the primary forest in the humid tropics does not catch fire. The hydrological cycle in closed forests produces a very humid microclimate in which conditions are unfavourable for forest fire. But in forests where selective logging has already taken place, the formerly closed canopy is disturbed. This allows more light to penetrate the canopy, thereby changing the energy balance within the forest – the forest becomes more susceptible to drought. With trees shedding their leaves under the extreme drought stress caused by the El Nino event in 1998, the fuel load for forest fires increased dramatically and the risk of high-intensity wildfires rose.
The following town regions in the State of Para and Apiacas, Mato Grosso and Rondonia were identified by IBAMA as high-risk areas for destructive forest fires. This classification is based on the fact that logging, mining, illegal prospecting and free-range cattle rearing are concentrated in these regions of Brazil:
Tab.1. Town regions with high risk of forest fires
State of Para and Apiacas: Paragominas, Conceicao do Araguaia, Eldorado dos Carajas, Maraba, Parauapebas, Redencao
State of Mato Grosso: Alta Floresta, Nova Canaa do Norte, Colider, Sinop, Peixoto Azevedo, Sao Felix do Xingu, Porto Alegre do Norte, Luciara, Santa Terezinha
State of Rondonia: Ji-Parana, Ariquemes, Alto Paraiso, Nova Mamore
The Woods Hole Research Center (WHRC) and IPAM (Instituto de Pesquisa Ambiental da Amazonia) have prepared an Information Bulletin for the Buenos Aires Climate Conference. The bulletin describes the early warning of the upcoming fire season in Brazil in early 1998 and the extent of the damage. The bulletin also provides links to the WHRC and IPAM websites.
A detailed study on the spatio-temporal dynamics of the Boa Vista- Roraima fire events by the Space Applications Institute of the Joint Research Centre European Commission (Ispra,Varese) for the CLAIRE/LBA study can be seen at: http://www.mtv.sai.jrc.it/parbo/Roraima_Web.html.
The Brazilian Environmental Monitoring Centre (NMA) EcoForca is a Brazilian NGO that provides extensive information about issues such as deforestation and forest fires in Brazil. The NMA website provides background information on the current situation in Brazil.
References: IBAMA 1998.Programa de Prevencao e Controle as Queimadas e aos Incendios Florestais no Arco de Desflorestamento “PROARCO”. IBAMA, Brasilia, 49 p. (The full text of the documentation is available in Portuguese under http://www.ibama.gov.br/). |
Linear regression is a statistical method for examining the relationship between a dependent variable, denoted as y, and one or more independent variables, denoted as x. The dependent variable must be continuous, in that it can take on any value, or at least close to continuous. The independent variables can be of any type. Although linear regression by itself cannot establish causation, the model is usually interpreted as describing how the independent variables affect the dependent variable.
Linear Regression Is Limited to Linear Relationships
By its nature, linear regression only looks at linear relationships between dependent and independent variables. That is, it assumes there is a straight-line relationship between them. Sometimes this is incorrect. For example, the relationship between income and age is curved, i.e., income tends to rise in the early parts of adulthood, flatten out in later adulthood and decline after people retire. You can tell if this is a problem by looking at graphical representations of the relationships.
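As a minimal illustration of that graphical check (using made-up data and hypothetical column names), the snippet below plots the raw age-income relationship alongside the residuals of a straight-line fit; a curved pattern in either plot is a warning sign that the linear assumption is wrong.

```python
# Minimal sketch of the graphical check described above; the DataFrame `df`
# and its columns ("age", "income") are hypothetical, made-up data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical data: income rises, flattens, then declines with age.
rng = np.random.default_rng(0)
age = rng.uniform(18, 80, 500)
income = 60000 - 30 * (age - 50) ** 2 + rng.normal(0, 5000, 500)
df = pd.DataFrame({"age": age, "income": income})

# Fit a straight line and compute its residuals.
slope, intercept = np.polyfit(df["age"], df["income"], deg=1)
residuals = df["income"] - (intercept + slope * df["age"])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(df["age"], df["income"], s=10)
ax1.set(xlabel="age", ylabel="income", title="Raw relationship")
ax2.scatter(df["age"], residuals, s=10)
ax2.axhline(0, color="grey")
ax2.set(xlabel="age", ylabel="residual", title="Residuals of linear fit")
plt.tight_layout()
plt.show()
# A curved pattern in either panel suggests the straight-line assumption fails.
```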
Linear Regression Only Looks at the Mean of the Dependent Variable
Linear regression looks at a relationship between the mean of the dependent variable and the independent variables. For example, if you look at the relationship between the birth weight of infants and maternal characteristics such as age, linear regression will look at the average weight of babies born to mothers of different ages. However, sometimes you need to look at the extremes of the dependent variable, e.g., babies are at risk when their weights are low, so you would want to look at the extremes in this example.
Just as the mean is not a complete description of a single variable, linear regression is not a complete description of relationships among variables. You can deal with this problem by using quantile regression.
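Here is a small sketch of that idea using the quantile regression routine in the Python statsmodels package, with made-up birth-weight data and hypothetical column names; it fits the mean with ordinary least squares and the 10th percentile of the distribution with quantile regression.

```python
# Sketch of quantile regression with statsmodels, following the birth-weight
# example above. The DataFrame `births` and its columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
births = pd.DataFrame({"mother_age": rng.uniform(16, 45, 1000)})
births["birth_weight"] = (2800 + 20 * births["mother_age"]
                          + rng.normal(0, 400, 1000))

# Ordinary least squares describes the mean birth weight at each maternal age...
mean_fit = smf.ols("birth_weight ~ mother_age", data=births).fit()

# ...while quantile regression can target the low-weight tail directly,
# here the 10th percentile of birth weight.
low_tail_fit = smf.quantreg("birth_weight ~ mother_age", data=births).fit(q=0.10)

print(mean_fit.params)
print(low_tail_fit.params)
```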
Linear Regression Is Sensitive to Outliers
Outliers are data that are surprising. Outliers can be univariate (based on one variable) or multivariate. If you are looking at age and income, univariate outliers would be things like a person who is 118 years old, or one who made $12 million last year. A multivariate outlier would be an 18-year-old who made $200,000. In this case, neither the age nor the income is very extreme, but very few 18-year-old people make that much money.
Outliers can have huge effects on the regression. You can deal with this problem by requesting influence statistics from your statistical software.
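The snippet below is one hedged example of requesting such influence statistics, using statsmodels in Python with made-up data; Cook's distance flags the observations that pull hardest on the fitted line, including a deliberately planted 18-year-old earning $200,000.

```python
# Sketch of influence statistics with statsmodels; data and column names
# are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({"age": rng.uniform(18, 65, 200)})
df["income"] = 15000 + 900 * df["age"] + rng.normal(0, 8000, 200)
# Add a multivariate outlier like the one described above
# (columns are [age, income]): an 18-year-old earning $200,000.
df.loc[len(df)] = [18, 200000]

fit = smf.ols("income ~ age", data=df).fit()
influence = fit.get_influence()

# Cook's distance measures how much each observation pulls the fitted line.
cooks_d, _ = influence.cooks_distance
suspects = np.argsort(cooks_d)[-5:]  # the five most influential rows
print(df.iloc[suspects].assign(cooks_distance=cooks_d[suspects]))
```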
Data Must Be Independent
Linear regression assumes that the data are independent. That means that the scores of one subject (such as a person) have nothing to do with those of another. This is often, but not always, sensible. Two common cases where it does not make sense are clustering in space and time.
A classic example of clustering in space is student test scores, when you have students from various classes, grades, schools and school districts. Students in the same class tend to be similar in many ways, i.e., they often come from the same neighborhoods, they have the same teachers, etc. Thus, they are not independent.
Examples of clustering in time are any studies where you measure the same subjects multiple times. For example, in a study of diet and weight, you might measure each person multiple times. These data are not independent because what a person weighs on one occasion is related to what he or she weighs on other occasions. One way to deal with this is with multilevel models. |
For those working in the field of advanced artificial intelligence, getting a computer to simulate brain activity is a gargantuan task, but it may be easier to manage if the hardware is designed more like brain hardware to start with.
This emerging field is called neuromorphic computing. And now engineers at MIT may have overcome a significant hurdle - the design of a chip with artificial synapses.
For now, human brains are much more powerful than any computer - they contain around 80 billion neurons, and over 100 trillion synapses connecting them and controlling the passage of signals.
How computer chips currently work is by transmitting signals in a language called binary. Every piece of information is encoded in 1s and 0s, or on/off signals.
To get an idea of how this compares to a brain, consider this: in 2013, one of the world's most powerful supercomputers ran a simulation of brain activity, achieving only a minuscule result.
Riken's K Computer used 82,944 processors and a petabyte of main memory - the equivalent of around 250,000 desktop computers at the time.
It took 40 minutes to simulate one second of the activity of 1.73 billion neurons connected by 10.4 trillion synapses. That may sound like a lot, but it's really equivalent to just one percent of the human brain.
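A quick back-of-the-envelope calculation puts those numbers in perspective. The sketch below assumes naive linear scaling, which a real simulation would not follow, so treat it only as a rough illustration.

```python
# Back-of-the-envelope scaling of the K Computer figures quoted above.
# Assumes naive linear scaling, which real simulations would not obey.
simulated_seconds = 1          # biological time simulated
wall_clock_seconds = 40 * 60   # 40 minutes of compute
fraction_of_brain = 0.01       # roughly 1% of the human brain

slowdown = wall_clock_seconds / simulated_seconds    # 2,400x slower than real time
full_brain_slowdown = slowdown / fraction_of_brain   # ~240,000x, if scaling were linear

print(f"Slowdown for 1% of the brain: {slowdown:,.0f}x")
print(f"Implied slowdown for a whole brain: {full_brain_slowdown:,.0f}x")
```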
But if a chip used synapse-like connections, the signals used by a computer could be much more varied, enabling synapse-like learning. Synapses mediate the signals transmitted through the brain, and neurons activate depending on the number and type of ions flowing across the synapse. This helps the brain recognise patterns, remember facts, and carry out tasks.
Replicating this has proven difficult to date - but researchers at MIT have now designed a chip with artificial synapses made of silicon germanium that allow the precise control of the strength of electrical current flowing along them, just like the ion flow between neurons.
In a simulation, it was used to recognise handwriting samples with 95 percent accuracy.
Previous designs for neuromorphic chips used two conductive layers separated by an amorphous "switching medium" to act like the synapses. When switched on, ions flow through the medium to create conductive filaments to mimic synaptic weight, or the strength or weakness of a signal between two neurons.
The problem with this approach is that, without defined structures to travel along, the signals have an infinite number of paths - and this can make the chips' performance inconsistent and unpredictable.
"Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way," said lead researcher Jeehwan Kim.
"But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it's hard to control. That's the biggest problem - nonuniformity of the artificial synapse."
With this in mind, the team created lattices of silicon germanium, with one-dimensional channels through which ions can flow. This ensures the exact same path is used every time.
These lattices were then used to build a neuromorphic chip; when voltage was applied, all synapses on the chip showed the same current, with a variation of just 4 percent.
A single synapse was also tested with voltage applied 700 times. Its current varied just 1 percent - the most uniform device possible.
The team tested the chip on an actual task by simulating its characteristics and using those with the MNIST database of handwriting samples, usually used for training image processing software.
Their simulated artificial neural network, consisting of three neural sheets separated by two layers of artificial synapses, was able to recognise tens of thousands of handwritten numerals with 95 percent accuracy, compared to the 97 percent accuracy of existing software.
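For readers who want to see what a network of that general shape looks like in code, here is a rough sketch, not the researchers' simulation and not a model of their chip: a conventional software network with three sheets of neurons joined by two layers of trainable weights, trained on the same MNIST handwriting data using scikit-learn (the dataset is downloaded on first use, and the exact accuracy will vary).

```python
# Generic sketch (not the researchers' simulation) of a network with three
# sheets of neurons joined by two layers of trainable weights, trained on
# the MNIST handwriting data mentioned above.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10000, random_state=0)

# Input sheet (784 pixels) -> hidden sheet (100 units) -> output sheet (10 digits):
# two weight layers, loosely analogous to two layers of artificial synapses.
net = MLPClassifier(hidden_layer_sizes=(100,), max_iter=30, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```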
The next step is to actually build a chip that is capable of carrying out the handwriting recognition task, with the end goal of creating portable neural network devices.
"Ultimately we want a chip as big as a fingernail to replace one big supercomputer," Kim said. "This [research] opens a stepping stone to produce real artificial [intelligence] hardware."
The research has been published in the journal Nature Materials. |
METEOROLOGIST JEFF HABY
Fog is a cloud on the ground. It is a situation in which the air is humid enough that cloud particles of moisture condense out of the air. The most likely time for fog to develop is in the overnight hours. This is because at this time the air is generally cooling off and the temperature is thus dropping closer to the dewpoint. Since the coolest temperatures of the day generally occur around sunrise, it is not too surprising that this time also typically experiences the most fog. Once fog develops, it will persist as long as moisture can continue to condense out of the air. Once the condensation process is slowed by rising temperature or other factors, the fog will begin to dissipate.
It is in the morning hours that temperatures generally warm at the fastest rate. When comparing the 6 am to 11 am time frame to the 11 am to 4 pm time frame, generally much greater warming will occur in the 6 am to 11 am time frame. This is because on many nights a shallow layer of cool air will pool at the surface. Long wave ground emission contributes to cooling the air. This effect is most noticeable when the winds are fairly light, since less mixing will occur with air higher aloft. Once the sun starts shining on the earth’s surface, the shallow layer of cool air will quickly mix out with warmer and often drier air aloft. Also, warming takes place from the sun warming the ground surface, which in turn warms the surface layer of air.
It is in the early morning sunlight hours that fog tends to dissipate. Under certain meteorological circumstances, fog can persist all day long and can develop at times other than the overnight hours. In general, though, fog develops overnight and dissipates (mixes out) in the early morning sunlight hours. When the air warms, the temperature will increase above the dewpoint. Generally, when conditions at the earth’s surface are saturated, the temperature will equal the dewpoint (relative humidity 100%). Once the air warms, the temperature will typically rise above the dewpoint value, which causes the tiny cloud (fog) droplets to evaporate. Also, as the sun warms the surface, convective thermals will start mixing out the air (think of it like water mixing as it boils, warmed by the burner below). These two processes (mixing of surface air with drier air aloft and the warming temperature) cause fog to decrease in density and then eventually dissipate altogether. On a foggy morning, this process will often play out, and it is a meteorological treat to watch! A rough numerical sketch of the relationship between temperature and dewpoint follows.
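The snippet below is a simple illustration of this idea, not something from the original article: it uses the standard Magnus approximation for saturation vapour pressure (an assumed formula with commonly used coefficients) to show how relative humidity falls once the temperature climbs above a fixed dewpoint.

```python
# Rough numerical illustration of fog burning off as temperature rises above
# the dewpoint. The Magnus coefficients are a standard approximation.
import math

def relative_humidity(temp_c, dewpoint_c):
    """Approximate relative humidity (%) from temperature and dewpoint (deg C)."""
    def sat_vapour_pressure(t_c):
        return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))
    return 100.0 * sat_vapour_pressure(dewpoint_c) / sat_vapour_pressure(temp_c)

dewpoint = 10.0  # dewpoint stays roughly constant through the morning
for temp in [10.0, 12.0, 15.0, 20.0]:
    rh = relative_humidity(temp, dewpoint)
    print(f"T = {temp:4.1f} C, Td = {dewpoint} C -> RH ~ {rh:5.1f}%")
# At T = Td the air is saturated (RH ~ 100%) and fog can persist; as the morning
# sun warms the air above the dewpoint, RH falls and the fog droplets evaporate.
```
|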
Knowing what you don’t know can help your grades improve
Confidence is a great thing, when you know what you’re doing. But if a student has no idea just how little they know, that undeserved confidence might be followed by a failing grade or two. The solution? Show students exactly what they don’t know, and how to tackle it, a new study says. Bursting that confidence bubble might be painful, but it improved students’ rock-bottom chemistry grades by 10 percent — turning a potentially failing grade into a passing one.
Charles Atwood was on a mission to bring up failing grades. He’s a chemist at the University of Utah in Salt Lake City. When he first arrived at the university, he says, the failure rate for the school’s introductory chemistry class was 37 percent! He immediately began looking for ways to help the students.
Knowledge alone isn’t what matters when it comes to passing a test. What students don’t know also matters. And when someone doesn’t know very much about a topic, they may not recognize just how little they know. As a result, they can end up overconfident. They might overestimate what they can do. The person might think they’ll get a C on the first exam, for instance, when they might not know enough to pass the test at all. This overconfidence has a name — the Dunning-Kruger effect.
Atwood’s graduate student, Brock Casselman, was looking for possible reasons why some of the chemistry students were failing and found out about this effect. He and Atwood realized that success for the students relied not just on what they knew but also on how much they didn’t know. If Atwood and Casselman wanted the students to improve, though, they would have to teach those students to recognize the gaps in their knowledge.
Just seeing a failing grade doesn’t help students figure out those gaps. “You’d think failing would boot them into gear [and get them to study what they don’t know], but it doesn’t,” says Casselman. For students with the worst grades, every time they faced a new quiz or exam, “they thought they would do better,” he says. “They’re highly resistant to being aware of how terrible they are.”
Thinking about thinking
Knowing just how bad they are in a subject may not help a student get better. Casselman and Atwood thought that promoting metacognition — or thinking about thinking — might help the students to identify where they needed help. Metacognition, says Atwood, is “basically assessing as you work your way through a problem.” In this case, it could help students analyze their own study skills. That could aid them in realizing just how much chemistry they didn’t know so they could study accordingly.
To see if metacognition could help their chemistry students in class, the scientists studied two introductory chemistry classes, each with 300 students. The classes were taught in the same way, with one key difference. Students in one class were asked before each exam or weekly quiz how they thought they would perform. After each test or quiz, they got their scores via a computer program. The computer also showed them if they had predicted they would perform better or worse than they actually did.
The computer program gave the students a list of potential study topics — topics that they’d been especially bad at on the last test. Then, the students were guided into making a study plan that would prepare them for the next exam.
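As a loose sketch of the kind of predict-then-compare feedback loop described above (this is illustrative Python only, not the actual software used in the Utah study; the topic names, scores and cutoff are invented):

# Compare a student's predicted exam score with the actual result, then
# list the weakest topics as candidates for the next study plan.
predicted_score = 78   # what the student expected to score (%)
actual_score = 64      # what they actually scored (%)

topic_scores = {       # per-topic performance on the exam (%)
    "stoichiometry": 45,
    "gas laws": 80,
    "acid-base chemistry": 55,
    "thermochemistry": 72,
}

difference = predicted_score - actual_score
print(f"Predicted {predicted_score}%, scored {actual_score}% "
      f"(off by {difference:+d} points).")

CUTOFF = 60   # topics scoring below this go into the study plan
weak_topics = [topic for topic, score in sorted(topic_scores.items(), key=lambda kv: kv[1])
               if score < CUTOFF]
print("Suggested study-plan topics:", ", ".join(weak_topics))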
At first, the students in the experimental class tended to fall victim to the Dunning-Kruger effect. They overestimated how well they would do on the tests by about 11 percent. But after a semester of guided study — and probably some blows to their pride — the students underestimated how well they would do on each test.
The class that got the guided study and the feedback on their quizzes also did better in the chemistry course. Overall, there was only about a 4 percent improvement in scores. But the poorest performing students — those who had been the most overconfident to begin with — showed the most improvement. Their grades improved by 11 percent. That’s enough to raise a D to a C. Casselman and Atwood published their findings October 20 in the Journal of Chemical Education.
“I think [the study is] thoughtful, it’s rigorous, it’s detail-oriented, and it was a pleasure to see it,” says David Dunning. Yes. That Dunning. The namesake of the Dunning-Kruger effect is a scientist who studies psychology at the University of Michigan in Ann Arbor. The best way to get rid of dangerous overconfidence is to learn more, he says. And the method proposed in this study is one way to get there. “It’s the combination of having people guess and [having them] come up with a plan,” he says. “That combination seems to be key.”
The training in metacognition was so successful, Atwood says, he’s put it to use in every class since. “I show them the data,” he explains. Then he asks the students if they want to learn using the old methods or the new methods that promote metacognition. The vote is no contest, he says. Students want to do better in the course, after all.
It’s no fun to poke a hole in a student’s confidence, but in the long run, the students are better for it. “Confidence does have its benefits,” Dunning notes. “But you want to be both confident and competent.”
chemical A substance formed from two or more atoms that unite (bond) in a fixed proportion and structure. For example, water is a chemical made when two hydrogen atoms bond to one oxygen atom. Its chemical formula is H2O. Chemical also can be an adjective to describe properties of materials that are the result of various reactions between different compounds.
chemistry The field of science that deals with the composition, structure and properties of substances and how they interact. Scientists use this knowledge to study unfamiliar substances, to reproduce large quantities of useful substances or to design and create new and useful substances. (about compounds) Chemistry also is used as a term to refer to the recipe of a compound, the way it’s produced or some of its properties. People who work in this field are known as chemists.
cognition The mental processes of thought, remembering, learning information and interpreting those data that the senses send to the brain.
computer program A set of instructions that a computer uses to perform some analysis or computation. The writing of these instructions is known as computer programming.
data Facts and/or statistics collected together for analysis but not necessarily organized in a way that gives them meaning. For digital information (the type stored by computers), those data typically are numbers stored in a binary code, portrayed as strings of zeros and ones.
Dunning-Kruger effect This effect occurs when someone doesn’t know how bad they are at something. They then overestimate how well they will perform doing it. For example, a student might think they know enough chemistry to get a C on an exam, when in fact, they know almost nothing — so little they end up with an F.
graduate student Someone working toward an advanced degree by taking classes and performing research. This work is done after the student has already graduated from college (usually with a four-year degree).
journal (in science) A publication in which scientists share their research findings with experts (and sometimes even the public). Some journals publish papers from all fields of science, technology, engineering and math, while others are specific to a single subject. The best journals are peer-reviewed: They send all submitted articles to outside experts to be read and critiqued. The goal, here, is to prevent the publication of mistakes, fraud or sloppy work.
metacognition Thinking about the mental processes of thinking. It’s a process that people can use to plan, keep track of and understand why they behave the way they do. |
Saliva is the watery fluid that moistens our mouths, helping us eat, speak, and maintain good oral health. Saliva consists of a clear, protein-rich fluid secreted by the salivary glands and trace amounts of various biochemicals present in blood serum that filter into the mouth. As certain health conditions arise, such as HIV infection and cancer, proteins and substances linked to these diseases can pass from the serum into the saliva. Increased concentrations of these compounds over time make saliva a potentially promising diagnostic fluid with several advantages over blood. Saliva is easy to collect, requires no painful needle sticks, and can be tested in many non-traditional settings because of the portability and lower cost of salivary test kits.
NIH supports research in technologies that use saliva to look for indicators of health conditions or diseases. Development of small, portable, and rapid processing technologies for saliva samples holds promise for faster identification of health issues and earlier access to treatment.
The faster turnaround allows more rapid communication and decision making, earlier initiation of therapy, better adherence to treatment, and greater patient satisfaction. It also has economic advantages. These include lower costs to perform tests, fewer doctor visits, fewer hospital admissions to run tests, and improved quality of life.
Technologies that will enable saliva to be used as a window into the body are being explored for their ability to detect disease and monitor our health. Efforts are underway to develop miniaturized lab-on-a-chip technology, where diagnostic tests and tools are made to be rapid, automated, and portable. Combined with saliva sample collection or cell collection (by gentle brushing of the skin surface), this technology could eliminate the need for blood sampling or mouth tissue biopsy, in many cases.
Building on this research, saliva will become a more commonly used diagnostic fluid. Ongoing studies indicate that saliva may be useful for detecting various cancers, heart disease, diabetes, periodontal diseases, and other conditions.
- Getting a diagnosis used to mean making a trip to the doctor’s office or to a hospital. The examination often required providing a blood or tissue sample. Collection of these samples involved insertion of needles into blood vessels or cutting away a small area of the tissue (a biopsy).
- The blood or tissue samples were labeled and sent to a laboratory for testing. Typically, patients waited several days for the results. In many cases, they were asked to schedule follow up visits for additional, often expensive, tests that further narrowed down the possible diagnosis.
- Most tests detected full blown disease. Few were sensitive enough to detect subtle biochemical changes that might indicate a developing health condition. No test analyzed saliva or was available for easy use in the home.
- Currently available salivary diagnostic tests include various hormonal, HIV, and alcohol tests. Each test requires a small amount of saliva and produces rapid and highly accurate results.
- In 2010, NIH funded two exciting new studies entitled “Salivary biomarkers for early oral cancer detection” and “Salivary proteomic and genomic biomarkers for primary Sjogren’s Syndrome.”
- Scientists have identified the genes and proteins that are expressed in the salivary glands. With these vast catalogues as their guide, they will define the patterns and certain conditions under which these genes and proteins are expressed in the salivary glands and how these parts function as a fully integrated biological system.
For additional information contact: NIDCR Office of Communications and Health Education at (301) 496-4261.
- Salivary diagnostic tests will provide immediate results to patients. The portable tests will initially approximate the size of a Personal Digital Assistant (PDA). The fully integrated diagnostic systems will have the potential to measure from one to possibly hundreds of compounds in saliva within a matter of minutes.
- An emergency medical technician will, with a patient’s consent, collect a small saliva sample, load it into the fully automated test, and have an extensive saliva panel readout ready by the time the ambulance brings the patient to the emergency room. The readout will contain a profile of various proteins in the patient’s mouth that are associated with various systemic diseases or conditions.
- As miniaturization of the technology advances, it may become possible to attach a tiny device to a patient’s tooth, allowing personalized monitoring of medication levels and the detection of biomarkers for specific disease states.
National Institute of Dental and Craniofacial Research (NIDCR) |
Unlike the simple heated filament of incandescent lights, both LEDs and fluorescents require a buffer between the lamps and the raw power supply. Fluorescents use ballasts, while LEDs use drivers. (LED drivers can be considered ballasts as well, but most documentation prefers ‘drivers’ or ‘power supply’ to avoid confusion with fluorescent ballasts.) Both ballasts and drivers do more than simply power up their respective lights. They are an essential element in triggering and controlling your light fixtures, from the first flip of a switch through tens of thousands of hours of performance.
What They Do
Fluorescent ballasts provide an initial spike of high voltage, generating an arc that travels from cathode to anode within the discharge tube. Once the light is on, the ballast then acts as a current regulator. LED drivers convert high-voltage AC current into the low-voltage direct current that LEDs are designed to run on. Both fluorescent ballasts and LED drivers protect the light from fluctuations in the electrical supply. Irregular drops or spikes in voltage can affect light quality and shorten the life of the lamp.
While LED lighting fixtures are a relatively recent creation, fluorescent lights were introduced in 1939. With almost 80 years of development, there have been many variations in ballast design and complexity. The older magnetic ballasts consisted of not much more than inductors, capacitors and a series resistor. This was enough to help fluorescent tube fixtures become the lighting of choice for commercial applications, but they developed a reputation for the buzzing noise they produced and the flickering that signaled the end of their productive life. Modern solid-state electronic ballasts have largely eliminated these issues, while cutting energy consumption.
Internal vs. External Drivers and Ballasts
Household bulbs usually include an internal driver (LEDs) or solid-state ballast (CFLs) so that they can be screwed into a standard E26 socket. Without that bit of backward compatibility, neither form factor would likely have caught on with consumers.
Similarly, LED tube lamps designed to replace fluorescents often have an integrated driver, allowing tubes to simply slip into existing fixtures. This is a huge selling point to commercial property owners and facility managers, as it slashes the capital investment required to switch over from an existing lighting system.
On the opposite end of the spectrum from integrated drivers, remote ballasts and drivers can be located quite far from the actual lamps, and are often used to power more than one light at a time.
LED Temperature Concerns
LEDs operate at a very cool physical temperature, in that the lamps don’t heat up the same way that incandescents do. However, the fixtures themselves generate a fair amount of heat. If a fixture is installed incorrectly, improper ventilation can cause the internal temperature to skyrocket. Although this is almost never a safety issue, the hike in temperature can cause the electrolyte gel inside the driver’s capacitors to dry out, drastically shortening the life of the lamp.
Dimming and Color
Both LED drivers and electronic fluorescent ballasts are available in dimmable models. In order to be dimmed, fluorescent ballasts must maintain the internal temperature of the discharge tube to allow for gas excitation while the voltage is varied. The process is much easier for LEDs; as the voltage is lowered, the brightness of the lamps simply drops accordingly, without any loss of efficiency. This simpler technology means that dimmable LED drivers carry a lower price tag than dimmable fluorescent ballasts.
Similar to dimmable units, color-changing LEDs make the most of LED drivers. Whether the color change is accomplished by dimming a percentage of colored LEDs organized in an array, or by using red, blue, and green to create a spectrum of colors, LEDs are versatile tools for a range of hues. And since LEDs can be integrated with circuit boards relatively easily, they can be set to follow pre-programmed sequences or adjusted on the fly.
As mentioned earlier, fluorescent ballasts ramp up standard voltages for the initial firing of the arc, while LED drivers ramp them down to low-voltage levels (UL Class 2). This means that at the lamp level, an LED fixture poses effectively no risk of fire or electric shock.
Both LEDs and fluorescents can be used in exterior applications, as long as the fixtures are rated for exterior use. Fluorescents need a bit more of an initial boost to light in cold temperatures, and both types of lamp should be installed with exterior-appropriate drivers or ballasts.
Finally, some LED tube inserts for existing fluorescent fixtures come in “shatterproof” options. While these lamps can still be broken, their casings of plastic and steel mean that there won’t be a pile of glass shards to sweep up if a maintenance person drops one during an installation.
With a solid understanding of how fluorescent ballasts and LED drivers fit into your overall lighting system, you’ll be better able to make decisions regarding upgrades, replacements, and budget allocations for lighting maintenance.
With over a decade of construction experience, Dan Stout writes articles that help demystify the industry for both contractors and customers. Visit him at www.DanStout.com. |
Eclecticism is a conceptual approach that does not hold rigidly to a single paradigm or set of assumptions, but instead draws upon multiple theories, styles, or ideas to gain complementary insights into a subject, or applies different theories in particular cases.
It can sometimes seem inelegant or lacking in simplicity, and eclectics are sometimes criticized for lack of consistency in their thinking. It is, however, common in many fields of study. For example, most psychologists accept certain aspects of behaviorism, but do not attempt to use the theory to explain all aspects of human behavior.
Eclecticism was first recorded to have been practiced by a group of ancient Greek and Roman philosophers who attached themselves to no real system, but selected from existing philosophical beliefs those doctrines that seemed most reasonable to them. Out of this collected material they constructed their new system of philosophy. The term comes from the Greek ἐκλεκτικός (eklektikos), literally "choosing the best", and that from ἐκλεκτός (eklektos), "picked out, select". Well known eclectics in Greek philosophy were the Stoics Panaetius and Posidonius, and the New Academics Carneades and Philo of Larissa. Among the Romans, Cicero was thoroughly eclectic, as he united the Peripatetic, Stoic, and New Academic doctrines. Other eclectics included Varro and Seneca.
Architecture and art
The term eclecticism is used to describe the combination, in a single work, of elements from different historical styles, chiefly in architecture and, by implication, in the fine and decorative arts. The term is sometimes also loosely applied to the general stylistic variety of 19th-century architecture after Neo-classicism (c. 1820), although the revivals of styles in that period have, since the 1970s, generally been referred to as aspects of historicism.
Eclecticism plays an important role in critical discussions and evaluations but is somehow distant from the actual forms of the artifacts to which it is applied, and its meaning is thus rather indistinct. The simplest definition of the term—that every work of art represents the combination of a variety of influences—is so basic as to be of little use. In some ways Eclecticism is reminiscent of Mannerism in that the term was used pejoratively for much of the period of its currency, although, unlike Mannerism, Eclecticism never amounted to a movement or constituted a specific style: it is characterized precisely by the fact that it was not a particular style.
In psychology, eclecticism is recognized in approaches that see many factors influencing behavior and the psyche, and among practitioners who consider all perspectives when identifying, changing, explaining, and determining behavior.
Some martial arts can be described as eclectic in the sense that they borrow techniques from a wide variety of other martial arts. For example, Jeet Kune Do, the approach developed by Bruce Lee, is classified as an eclectic system. It favors borrowing freely from other systems within a free-floating framework, holds rigidly to no single paradigm or set of assumptions and conclusions, and encourages students to learn what is useful for themselves.
In textual criticism, eclecticism is the practice of examining a wide number of text witnesses and selecting the variant that seems best. The result of the process is a text with readings drawn from many witnesses. In a purely eclectic approach, no single witness is theoretically favored. Instead, the critic forms opinions about individual witnesses, relying on both external and internal evidence.
Since the mid-19th century, eclecticism, in which there is no a priori bias to a single manuscript, has been the dominant method of editing the Greek text of the New Testament (currently, the United Bible Society, 4th ed. and Nestle-Aland, 27th ed.). Even so, the oldest manuscripts, being of the Alexandrian text-type, are the most favored, and the critical text has an Alexandrian disposition.
In philosophy, Eclectics use elements from multiple philosophies, texts, life experiences and their own philosophical ideas. These ideas include life as connected with existence, knowledge, values, reason, mind, and language. Antiochus of Ascalon (c. 125 – c. 68 BC) was the pupil of Philo of Larissa, and the teacher of Cicero. Through his influence, Platonism made the transition from New Academy Scepticism to Eclecticism. Whereas Philo had still adhered to the doctrine that there is nothing absolutely certain, Antiochus returned to a pronounced dogmatism. Among his other objections to Scepticism was the consideration that without firm convictions no rational content of life is possible. He pointed out that it is a contradiction to assert that nothing can be asserted or to prove that nothing can be proved; that we cannot speak of false ideas and at the same time deny the distinction between false and true. He expounded the Academic, Peripatetic, and Stoic systems in such a way as to show that these three schools deviate from one another only in minor points. He himself was chiefly interested in ethics, in which he tried to find a middle way between Zeno, Aristotle and Plato. For instance, he said that virtue suffices for happiness, but for the highest grade of happiness bodily and external goods are necessary as well.
This eclectic tendency was favoured by the lack of dogmatic works by Plato. Middle Platonism was promoted by the necessity of considering the main theories of the post-Platonic schools of philosophy, such as the Aristotelian logic and the Stoic psychology and ethics (theory of goods and emotions). On the one hand the Middle Platonists were engaged like the later Peripatetics in scholarly activities such as the exposition of Plato's doctrines and the explanation of his dialogues; on the other hand they attempted to develop the Platonic theories systematically. In so far as it was subject in this to the influence of Neopythagoreanism, it was of considerable importance in preparing the way for Neoplatonism.
In religion, Eclectics use elements from multiple religions, applied philosophies, personal experiences or other texts and dogmas to form their own beliefs and ideas, noting the similarities between existing systems and practices, and recognizing them as valid. These ideas include life, karma, the afterlife, God and Goddess, the Earth, and other spiritual ideas. This spiritual approach is promoted by Unitarian Universalism. Some use a mix of Abrahamic, Dharmic, Neopagan, Shamanism, Daoic doctrines, New Age, religious pluralism, and Syncretism. Eclectics are most interested in what really works, personally and communally.
- Eclecticism in architecture
- Eclecticism in art
- Eclecticism in music
- Eclecticism in textual criticism
- Eclectic medicine |
What is it?
The core is a functional system of stabilizing muscles that encompasses your entire trunk. These muscles are involved in almost every movement of the body, with a main function of stopping movement at the spine rather than creating it. As we move our limbs into various positions, the role of the “core” musculature, also known as our deep or local stabilizing system, is to stabilize and protect the spine. Meanwhile, our ‘global stabilizing system’ muscles transfer loads between the extremities while providing support, stability, and eccentric control of the core during functional movements. Together, these two systems of muscles must function as a unit to truly protect your spine. The local system muscles of the core lie deeper than your abs or obliques. They include the transverse abdominals, diaphragm, pelvic floor, multifidus and rotatores (deep spine muscles), among other deeper tissues.
What does the core do?
Together, these muscles that lie deeper in your trunk create a cylinder of stability for the spine, so that as you move your arms and legs, the spine is protected from any shearing, friction or excessive strain. When the core is strong and healthy, it has anticipatory properties: these muscles fire to protect the spine before body movement occurs. When the core is not strong and healthy, that anticipatory action is lost, leaving the spine more vulnerable to jarring/twisting movements.
Let’s break this down:
- Our bodies consist of global moving muscles, and local/segmental stabilizers.
- Each has its respective role: as our limbs move, the global muscles create movement while the segmental stabilizers protect the joints and control that movement.
- For example, as we pick up an object, our hip and knee muscles contract to generate force, while our “core” engages to protect the joints of the spine and prevent shearing/friction/torque from transferring to those areas.
Why is it important to have a strong core?
The core is our main line of defense against excessive strains on the joints in our back as we move during daily activities. When the core is weakened, or not functioning properly, that line of defense is no longer in place. This means that a simple trunk movement, like picking a sock up off the ground or stepping down off of a step, can disrupt the joint/capsule/muscles/ligament or nerves of the spine. Without the core to protect the joints of the spine, injuries from a shearing/friction/loading force may ensue.
The good news is that the core is easily trained and can be strengthened and coordinated movement regained with the appropriate exercises!
How do I train my core?
Whether you are suffering from low back pain or simply being proactive, core training should be individualized and should help improve your motor control, stabilization, and strength. Core training should be systematic, progressive, and functional, emphasizing training of the deep local stabilizing system first and then integrating the coordination and use of the global muscle systems for optimal functional movement. A core training program should progress to involve variations in body position, range of motion, amount of control, and speed of execution. The use of stabilizing or balance devices has been shown to increase recruitment and the benefits of core training exercises. As the exercises progress to fundamental movement skills, they should also be progressed to include sport- or activity-specific movements.
Need Help? Want professional advice & a personalized core training program? Contact us to book your appointment and get stable and strong today!
Stay tuned in the new year for our core exercise sampler blog. |
What is Math Explorer?
Math Explorer is a collection of creative, hands-on mathematics games and activities, designed to engage young people in standards-based mathematics learning and to support academic enrichment. The program invites middle school youth to develop, review, and practice important math skills while they engage in compelling projects, games, puzzles, art activities, and experiments. The program is designed to be easy to use by facilitators with any level of mathematical knowledge. It was created by the San Francisco Exploratorium, a nationally recognized leader in inquiry-based, hands-on education.
Math Explorer for grades 6–8 invites children to fly planes and launch rockets, play games, learn card tricks, and make cool stuff to take home. The program reinforces a variety of math skills, from problem-solving and graphing to working with fractions and ratios. Without the students realizing it, their math attitudes and aptitudes begin to change. |
More than one billion people are undernourished worldwide. FAO estimates show a significant deterioration of an already disappointing trend witnessed over the past ten years. The large increase in the number of undernourished people in 2009 underlines the urgency of tackling the root causes of hunger swiftly and effectively.
The global economic crisis at the core
The current global economic slowdown—following soaring food prices in 2006-2008—lies at the core of the sharp increase in world hunger. It has reduced incomes and employment opportunities of the poor and significantly lowered their access to food.
With lower incomes, the poor are less able to acquire food, especially as prices are still high by historical standards. While international food prices have retreated from their mid-2008 peak levels, prices on local markets have not fallen to the same extent in many developing countries. In June 2009, domestic staple foods cost on average 22 percent more in real terms than two years earlier; a finding that was true across a range of important foodstuffs.
The latest hunger figures are particularly unsettling as undernourishment is not a result of limited international food supplies. Recent figures of the FAO Food Outlook indicate a strong world cereal production in 2009, which will only modestly fall short of last year’s record output level. Clearly, the world can produce enough food to eliminate hunger.
However, food supplies are very unevenly distributed across the globe. While wealthy countries produce large surpluses, many developing countries do not have enough food to guarantee their citizens a level of consumption required for a healthy life.
Another issue concerns the use of food. In fact, only half of the world’s cereal production is currently used directly for human consumption. Agricultural production increasingly goes into animal feed in order to satisfy growing meat consumption, especially in emerging economies. Or it serves non-food uses, such as the production of biofuels to help quench the world’s steadily growing energy needs.
Impacts of hunger
Undernourishment affects large segments of the population in developing countries. It particularly hurts the rural landless, who cannot rely on subsistence farming, and the urban poor. Female-headed households constitute another vulnerable group, as women are frequently prevented from engaging in paid employment and thus do not have the means to access adequate food.
Apart from humanitarian concerns, hunger threatens development more generally. Faced with food insecurity, households try to maintain income by migrating, selling assets such as livestock, borrowing money, or participating in new types of economic activity, including child labour. Furthermore, people tend to shift expenditures towards cheaper, calorie-rich, energy-dense foods such as grains, and away from more expensive protein- and nutrient-rich foods such as meat, dairy products, or fruits and vegetables; a situation that is particularly harmful for children and pregnant or lactating women.
Coping mechanisms thus involve undesirable but often unavoidable compromises: withdrawing children from schools destroys long-term human capital; the sale of assets reduces the stock of physical or financial resources and is not easily reversible; and shifting from more nutritious foods towards less nutritious items or simply eating less deteriorates people’s health, lowers labour productivity and reduces children’s cognitive potential.
What policy options are available?
The drastic increase in the number of hungry—and even more the fact that this number has remained above 800 million for the past 40 years—reveals the fragility of the present food system. In order to fight hunger a twin track approach remains key, involving both measures for immediate relief and more fundamental structural changes.
In the short term, safety nets and social protection programmes must be improved to reach those most in need. Simultaneously, small-scale farmers must be given access to indispensable tools and technologies that will allow them to boost production. These include high-quality seeds, fertilizers, and adequate farming equipment. Higher local production will be instrumental in lowering food prices for poor consumers, both rural and urban.
In the medium and long term, the structural solution to hunger lies in increasing agricultural output in countries prone to food shortages. Stable and effective policies, regulatory and institutional mechanisms, and functional market infrastructures that promote investment in the agricultural sector are paramount.
More fundamental changes needed
However, a “business-as-usual” approach will not reduce hunger to the extent necessary. In order to achieve sustainable results, governance needs to improve at national and international levels.
In food insecure countries, institutions are needed based on the principles of the Right to Adequate Food. These should promote transparency and accountability, the empowerment of the poor and their participation in the decisions that affect them. The Voluntary Guidelines for the Implementation of the Right to Food at national level are an important step in this direction.
At the international level, improved governance includes a reformed Committee on World Food Security, which needs to become the cornerstone of international cooperation in the area of food security. The Committee should act as the leading political body to fight hunger, ensuring that all relevant stakeholders are heard in the policy debates, and that decisions are based on hard scientific evidence.
Keeping agriculture on the agenda
Soaring food prices propelled food security and agriculture back on the policy agenda. With food commodity prices in world markets gradually falling, and in the face of the global financial and economic crisis, the focus risks shifting away from the plight of poorer countries struggling to feed their populations. While dealing with the global recession, the international community must not forget its commitments to the one billion people suffering from hunger.
Economic crises have typically led to declines in public investment in agriculture, with devastating impacts on poverty and hunger. Past experiences and empirical studies tell us that particularly at this time, support to agriculture should not be reduced; indeed, it must be increased. Only a healthy agricultural and agro-industrial sector, combined with an overall growing economy and effective safety nets, will effectively reduce and eventually eliminate hunger. |
Winter Friends–A fun little play that explores how and which animals hibernate in winter months. Play includes rhyme, repetition, and predictable language to help young learners build reading confidence. Great for grades K-1 and now only 99 Cents at ereaderstheater.com.
While you are thinking about hibernating animals, check out this great idea for describing hibernating groundhogs from Jennifer Solis, a great teacher in Beaumont, California.
After reading a story about groundhogs our class made a sentence patterning chart to create super sentences based on the story. This tool is extremely helpful for the students to use while working on sentence structure. Before sending the students off to work independently, we created a few silly sentences together. We passed out 4 Post-It notes to 4 different children. Each student placed his or her Post-it on a different colored word on the chart. When the four words had been chosen, we chanted the sentence to the tune of “The Farmer in the Dell.”
The furry groundhog, The furry groundhog, The furry groundhog peered outside his burrow!
The students used words from the chart to write their own silly sentences. |
Kidney disease is often called “the silent disease,” because most people have no symptoms before they are diagnosed. In fact, you might feel just fine until your kidneys have almost stopped working.
We consider the kidneys the ‘pressure regulators’ of the body.
The kidneys are bean‐shaped organs that act as sophisticated filters to remove organic waste products from the blood, along with excess salt and water, through the urine. Nephrons are the working units of the kidney that are responsible for waste removal. Kidney function diminishes as the numbers of functional nephrons are reduced, which is a part of the normal aging process.
The kidney participates in whole-body homeostasis: it helps regulate electrolyte concentrations, extracellular fluid volume, hormones and blood pressure, and it plays a role in other pressure-regulating systems of the body, including those involved in eye problems such as glaucoma. The kidneys are responsible for filtering toxins from our blood as it is processed into urine, and for maintaining acid–base balance. The kidneys excrete wastes such as urea and ammonium, and they are responsible for the reabsorption of water, glucose, and amino acids. The kidneys also produce hormones including calcitriol, erythropoietin, and the enzyme renin.
The kidneys play a critical role in: controlling the acid‐base balance in the body along with electrolyte balance, cleaning waste material from the blood, retaining or excreting salt and water, regulating blood pressure, stimulating the bone marrow to make red blood cells and controlling the amount of calcium and phosphorous absorbed or excreted for bone health.
Kidney health is important to help prevent the following health disorders: analgesic nephropathy, chronic nephritis, diabetes, ESRD (End-Stage Renal Disease), hypertension, eye diseases, infection, injury, stones, lupus erythematosus, ADPKD (Autosomal Dominant Polycystic Kidney Disease), ringing of the ears (tinnitus), hearing loss and, finally, kidney failure.
Symptoms of renal disease can include frequent headaches, frequent urination, itching, poor appetite, fatigue, burning bladder, anemia, baggy eyes, nausea and vomiting, swollen or numb hands or feet, poor concentration, darkened skin, and muscle cramps.
Subclinical symptoms of kidney weakness include decreased urination, itching, fatigue, loss of appetite, numbness and tingling in the hands and feet, pale skin, facial swelling, leg swelling (bilateral), foot swelling (bilateral), ankle swelling (bilateral), hand swelling (bilateral), nausea, vomiting, unintentional weight loss, memory loss, malaise and tremors.
Subclinical symptoms of severe kidney weakness include, chest pain, chest pain when taking a breath, chest pain when coughing, pain often described as sharp, difficulty breathing, rapid breathing, confusion, loss of consciousness, seizures and coma.
Modern medical testing does not always give us a complete look at the subclinical function of our kidneys. Alternative forms of testing are available.
Reviewing your family history of kidney related diseases can be a good place to start.
We have well over 200 years of collective knowledge in modern and alternative medicine. We help our clients restore their health. |
This GCSE English Literature quiz will challenge you on context in George Orwell's Animal Farm. A text’s context refers to the environment in which it was written. Context includes the political and social environment, as well as the time and geographical location in which the author was writing. If this combination sounds familiar, it is because these are the same specific elements which we discuss when we talk about setting, too. Where setting refers to these aspects of a text’s created, fictional world, context refers to these aspects of the author’s own world. Remember too that issues and events from the author’s past might have as much effect on a text as those occurring contemporaneously. The personal beliefs of an author also contribute to the context since they are likely to have had an effect on the text.
Learning about the context of a fictional work will enable you to develop an insight into many of the important influences which helped to shape the text. As you will know, the relationship between context and meaning is not simple and straightforward. Sometimes context provides useful information to bear in mind as you read and think about a piece of writing. At other times, context is much more important, and Animal Farm is a text which really benefits from a good knowledge of its context. George Orwell was a highly political author, as well as one who wished to educate and warn others about the dangers he perceived.
This text works on multiple levels. You can read it as a pure depiction of tyranny and the betrayal of hope. If you wish to understand the satire, however, you will need to know some history, so you will have paid close attention in lessons. How much do you know about historic dictatorships? What do you know about George Orwell’s life and times? The more you know about the events upon which the novella is based, the more you will understand this text. Some of the shocking things which happen to the animals, and which can seem unrealistic, had actually happened in more than one country and on more than one occasion.
Research the context of George Orwell’s Animal Farm, remembering everything you have learned in English and (perhaps) history lessons, then try these questions to see how much you know. |
Amplifier vs Receiver
Amplifier and receiver are two types of necessary circuits used in communication. Usually, communication happens between two points, called the transmitter and the receiver, through a wired or wireless medium. The transmitter sends a signal containing some information, and the receiver grabs that signal in order to reproduce that information. After travelling some distance, a signal usually gets weakened (attenuated) due to energy loss in the medium. Therefore, once this weak signal is received at the receiver, it should be improved (or amplified). An amplifier is the circuit which magnifies the weak signal into a signal with more power.
An amplifier (also shortened to ‘amp’) is an electronic circuit that increases the power of an input signal. There are many types of amplifiers, ranging from voice amplifiers to optical amplifiers at different frequencies. A transistor can be configured as a simple amplifier. The ratio of output signal power to input signal power is called the ‘gain’ of the amplifier. Gain may take any value depending on the application. Usually gain is expressed in decibels (a logarithmic scale) for convenience.
Bandwidth is another important parameter for amplifiers. It is the frequency range over which the signal is amplified in the expected way. The 3 dB bandwidth is a standard measure for amplifiers. Efficiency, linearity and slew rate are some of the other parameters to be considered when designing an amplifier circuit.
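As a rough, worked illustration of the decibel conversion mentioned above (the power values below are invented for the example, not taken from any particular amplifier), here is a short Python sketch:

import math

def gain_db(p_out, p_in):
    # Power gain in decibels: G(dB) = 10 * log10(Pout / Pin)
    return 10 * math.log10(p_out / p_in)

# An amplifier that boosts a 1 mW input to a 2 W output:
print(round(gain_db(2.0, 0.001), 1))   # 33.0 dB

# At the edges of the 3 dB bandwidth the output power has dropped to half
# of its peak value, i.e. a change of about -3 dB:
print(round(gain_db(0.5, 1.0), 1))     # -3.0 dB

Voltage gain uses 20·log10 instead of 10·log10, because power is proportional to the square of the voltage.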
A receiver is the electronic circuit which receives and regenerates a signal transmitted from a transmitter through some medium. If the medium is wireless radio, the receiver may consist of an antenna to convert the electromagnetic wave into an electrical signal, along with filters to remove unwanted noise. Sometimes the receiver unit may also include amplifiers to boost the weak signal, and decoding and demodulation stages to reproduce the original information. If the medium is wired, there won’t be an antenna; in optical signaling, its place is taken by a photo detector.
Difference between amplifier and receiver
1. In many cases, the amplifier is a part of the receiver.
2. An amplifier is used to magnify a signal, whereas a receiver is used to reproduce a signal sent by a transmitter.
3. Amplifiers sometimes introduce noise into the signal, whereas receivers are designed to eliminate noise. |
Anaemia is a deficiency in the number or quality of red blood cells in your body. Red blood cells carry oxygen around your body using a particular protein called haemoglobin. Anaemia means that either the level of red blood cells or the level of haemoglobin is lower than normal. When a person has anaemia, their heart has to work harder to pump the quantity of blood needed to get enough oxygen around their body. During heavy exercise, the cells may not be able to carry enough oxygen to meet the body’s needs and the person can become exhausted and feel unwell. Anaemia isn’t a disease in itself, but a result of a malfunction somewhere in the body. This blood condition is common, particularly in females. Some estimates suggest that around one in five menstruating women and half of all pregnant women are anaemic.
Red blood cells explained
Red blood cells are produced in the bone marrow and have a life span of about 120 days. The bone marrow is always making new red blood cells to replace old ones. Millions of new red blood cells enter the blood stream each day in a healthy person.
You need certain nutrients in your diet to make and maintain red blood cells. Each red blood cell contains a protein called haemoglobin. This protein gives red blood cells their colour.
Oxygen molecules absorbed in the lungs attach themselves to haemoglobin, which is then delivered to all parts of the body. All of the body’s cells need oxygen to live and perform their various duties.
The bone marrow needs enough dietary iron and some vitamins to make haemoglobin. If you don’t have enough iron in your diet, your body will draw on the small reserves of iron stored in your liver. Once this reservoir is depleted, the red blood cells will not be able to carry oxygen around the body effectively.
Causes of anaemia
Anaemia can have many causes, including:
- dietary deficiency – lack of iron, vitamin B12 or folic acid in the diet
- malabsorption – where the body is not able to properly absorb or use the nutrients in the diet, caused by conditions such as coeliac disease
- inherited disorders – such as thalassaemia or sickle cell disease
- autoimmune disorders – such as autoimmune haemolytic anaemia, where the immune cells attack the red blood cells and decrease their life span
- chronic diseases – such as diabetes, rheumatoid arthritis and tuberculosis
- hormone disorders – such as hypothyroidism
- bone marrow disorders – such as cancer
- blood loss – due to trauma, surgery, peptic ulcer, heavy menstruation, cancer (in particular bowel cancer), or frequent blood donations
- drugs and medications – including alcohol, antibiotics, anti-inflammatory drugs or anti-coagulant medications
- mechanical destruction – mechanical heart valves can damage red blood cells, reducing their lifespan
- infection – such as malaria and septicaemia, which reduce the life span of red blood cells
- periods of rapid growth or high energy requirements – such as puberty or pregnancy.
Symptoms of anaemia
Depending on the severity, the symptoms of anaemia may include:
- pale skin
- tiring easily
- drop in blood pressure when standing from a sitting or lying position (orthostatic hypotension) – this may happen after acute blood loss, like a heavy period
- frequent headaches
- racing heart or palpitations
- becoming irritated easily
- concentration difficulties
- cracked or reddened tongue
- loss of appetite
- strange food cravings.
Groups at high risk of anaemia
Certain people are at increased risk of anaemia, including:
- menstruating women
- pregnant and breastfeeding women
- babies, especially if premature
- children going through puberty
- people following a vegetarian or vegan diet
- people with cancer, stomach ulcers and some chronic diseases
- people on fad diets
Diagnosis of anaemia
Depending on the cause, anaemia is diagnosed using a number of tests including:
- medical history – including any chronic illnesses and regular medications
- physical examination – looking for signs of anaemia and a cause for anaemia
- blood tests – including complete blood count and blood iron levels, vitamin B12, folate and kidney function tests
- urine tests – for detecting blood in the urine
- gastroscopy or colonoscopy – looking for signs of bleeding
- bone marrow biopsy
- faecal occult blood test – examining a stool (poo) sample for the presence of blood.
Treatment for anaemia
Treatment depends on the cause and severity, but may include:
- vitamin and mineral supplements – if you have a deficiency
- iron injections – if you are very low on iron
- vitamin B12 (by injection) – for pernicious anaemia
- antibiotics – if infection is the cause of your anaemia
- altering the dose or regimen of regular medications – such as anti-inflammatory drugs, if necessary
- blood transfusions – if required
- oxygen therapy – if required
- surgery to prevent abnormal bleeding – such as heavy menstruation
- surgery to remove the spleen (splenectomy) – in cases of severe haemolytic anaemia.
Please note: Take iron supplements only when advised by your doctor. The human body isn’t very good at excreting iron and you could poison yourself if you take more than the recommended dose.
Long-term outlook for people with anaemia
The person’s outlook (prognosis) depends on the cause of their anaemia. For example, if the anaemia is caused by dietary deficiencies, correcting the cause and the use of appropriate supplements for some weeks or months will resolve the condition. Relapses may occur, so changes to diet and, perhaps, regular supplements may be necessary.
In other cases, the anaemia may be permanent and lifelong treatment is needed. No matter what the cause, it is important to have a doctor regularly monitor your blood to make sure your red blood cell and haemoglobin levels are adequate and to adjust treatment if required.
Prevention of anaemia
Some forms of anaemia can’t be prevented because they are caused by a breakdown in the cell-making process. Anaemia caused by dietary deficiency can be prevented by making sure that you eat food from certain food groups on a regular basis, including dairy foods, lean meats, nuts and legumes, fresh fruits and vegetables.
If you follow a vegan diet (one that does not include any animal products) talk to your health professional about recommended vitamin and mineral supplements.
This page has been produced in consultation with and approved by:
Australian Centre for Blood Diseases |
Vocabulary is a set of words within a language that are familiar to a particular person. Every person’s particular vocabulary is unique and often not given much thought or attention as it tends to develop with age and grow and evolve over time. It is ordinarily defined as ‘all of the words known and used by a particular person’ although ‘knowing’ a word is not as simple as you may think.
Degrees and Depths
It can be easy to say ‘Yes, I know this word’, but in order to determine exactly how well you know it, stages have been defined:
- Never encountered the word.
- Heard the word, but cannot define it.
- Recognize the word due to context or tone of voice.
- Able to use the word and understand the general and/or intended meaning, but cannot clearly explain it.
- Fluent with the word (its use and definition).
Further to the degrees to which you are familiar with a certain word, there are also different levels of depth. There are many components that go into knowing a word.
- Orthography – Written form.
- Phonology – Spoken form.
- Reference – Meaning.
- Semantics – Concept and reference.
- Register – Appropriate usage.
- Collocation – Lexical neighbours.
- Word Associations.
- Syntax – Grammatical function.
- Morphology – Word parts.
Of course, with these many facets of word knowledge come many different types of vocabulary. We have all, at some point in our lives, made ourselves look slightly foolish by having a stab at a new word that we have only ever read and getting the pronunciation completely wrong. These occasions, although quite unbearable at the time, are perfect examples of a gap between your reading and speaking vocabulary.
Why So Important?
Vocabulary is ultimately expression: an extensive vocabulary will help you express yourself clearly and communicate with clarity. A linguistic vocabulary is also identical to a thinking vocabulary, meaning that you will be able to frame concise thoughts with precision. Although much of your vocabulary is built up throughout childhood, it will tend to plateau once you leave education. In order to keep your vocabulary in shape and expand it after this time, it is advisable to read, play word games or even set yourself the goal of learning a new word each day.
Words For The Week
- Accoutrement n. Additional items of dress or equipment, carried or worn by a person or used for a particular activity. “The General dressed for battle in shining accoutrements.”
- Demure adj. 1. Modest and reserved in manner or behavior. “Despite her demure appearance, she is an accomplished mountain climber.”
- Hubris n. 1. Overbearing pride or presumption; arrogance. 2. A strong belief in a person’s own importance. “He was disciplined for his hubris.”
- Quixotic adj. 1. Idealistic without regard to practicality; impractical. 2. Impulsive: tending to act on whims or impulses. “It was clearly a quixotic case against the defendant.” |
On November 24, 1859, Charles Darwin’s book, The Origin of Species, was published. As a result, the concept of organic evolution was popularized. The science of genetics, of course, was completely unknown at that time, and would not come into its own until approximately forty-one years later. Since around 1900, evolutionists have advocated “neo-Darwinism,” as opposed to “classical Darwinism.” In classical Darwinian thought, natural selection alone served as the mechanism of evolution. In neo-Darwinian thought, natural selection and genetic mutations work together as evolution’s mechanism.
Genetics has played an increasingly important role in evolution, especially in regard to mutations that alter the genetic code within each organism. That code is expressed biochemically in deoxyribonucleic acid (DNA). Mutations are “errors” in DNA replication (Ayala, 1978, pp. 56-69). It is those errors that cause the genetic change necessary for evolution to occur. In 1957, George Gaylord Simpson wrote: “Mutations are the ultimate raw materials for evolution” (1957, p. 430). Twenty-six years later, nothing had changed when Douglas J. Futuyma remarked:
By far the most important way in which chance influences evolution is the process of mutation. Mutation is, ultimately, the source of new genetic variations, and without genetic variation there cannot be genetic change. Mutation is therefore necessary for evolution (1983, p. 136).
Mutations can occur in several different ways, and can affect individual genes or entire chromosomes (see Futuyma, 1983, p. 136). Further, mutations can be placed, theoretically, into at least three categories: (a) bad; (b) neutral; and (c) good.
Some mutations, therefore, can have profound effects. They can alter the structure of a critical protein so much that the organism becomes severely distorted and may not survive. Other mutations may cause changes in the protein that do not affect its function at all. Such mutations are adaptively neutral—they are neither better nor worse than the original form of the gene. Still other mutations are decidedly advantageous (Futuyma, 1983, p. 136).
Neither bad nor neutral mutations aid evolution, since the bad ones produce effects that are deleterious (and often lethal), and the neutral ones neither help nor hurt an organism. Neo-Darwinian evolution relies entirely on good mutations, since they not only alter the genetic material, but are, to use Futuyma’s words, “decidedly advantageous.” Evolutionary progress, then, is dependent upon nature “selecting” the good mutations, resulting in genetic change that ultimately produces new organisms.
BACTERIA AND RESISTANCE TO ANTIBIOTICS
What does all of this have to do with the resistance of bacteria to antibiotics? Over the past several years, the medical community has become increasingly concerned over the ability of certain bacteria to develop resistance to antibiotics. Undoubtedly this concern is justified. Antibiotics, which usually are substances naturally produced by certain microorganisms, inhibit the growth of other microorganisms. One of the first antibiotics to be discovered (in 1928) was penicillin, originally isolated from the mold Penicillium notatum. Since then, more than a thousand similar substances have been isolated. Most people recognize the tremendous impact antibiotics have had in the battle with pathogenic (disease-causing) organisms. Without antibiotics, the death toll from infections and diseases would be much higher than it is.
Today, however, there is compelling evidence that we are in danger of losing our battle against certain pathogens. Bacteria sometimes develop resistance to even powerful antibiotics. As a result, the number of antibiotics that can be used against certain diseases is dwindling rapidly. Both scientific and popular publications have addressed the seriousness of this issue. The cover story of the March 28, 1994 issue of Newsweek was titled, “Antibiotics: The End of Miracle Drugs?” (Begley, 1994). Articles in Scientific American (Beardsley, 1994), Science (Travis, 1994; Davies, 1994), Discover (Caldwell, 1994), and Natural History (Smith, 1994), have all called attention to the impact on our lives that bacterial resistance to antibiotics is causing.
The phenomenon of bacterial drug resistance was first documented around 1952 (see Lederberg and Lederberg, 1952). Interest in the phenomenon has increased as fewer antibiotics are effective against pathogens, and as deaths from bacterial infections increase. Scientific interest in this problem is both pragmatic and academic. In the pragmatic sense, those working in medical fields (doctors, nurses, pharmacists, researchers, etc.) are interested because lives are at stake. In an academic sense, this issue is of importance to evolutionists because they believe the mutations in bacteria responsible for drug resistance are, from the standpoint of the bacterial population, “good,” and thus offer significant proof of evolution. Their point is that the bacteria have adapted so as to “live to fight another day”—an example of “decidedly advantageous” mutations. Evolutionist Colin Patterson of Great Britain has commented: “The development of antibiotic-resistant strains of bacteria, and also of insects resistant to DDT and a host of other recently discovered insecticides, are genuine evolutionary changes” (1978, p. 85, emp. added). But are these mutations sufficient to explain long-term, large-scale evolution (macroevolution)?
AN ALTERNATIVE EXPLANATION
Bacteria do not become resistant to antibiotics merely by experiencing genetic mutations. In fact, there are at least three genetic mechanisms by which resistance may be conferred. First, there are instances where mutations produce antibiotic-resistant strains of microorganisms. Second, there is the process of conjugation, during which two bacterial cells join and an exchange of genetic material occurs. Inside many bacteria there is a somewhat circular piece of self-replicating DNA known as a plasmid, which codes for enzymes necessary for the bacteria’s survival. Certain of these enzymes, coincidentally, assist in the breakdown of antibiotics, thus making the bacteria resistant to antibiotics. During conjugation, plasmids in one organism that are responsible for resistance to antibiotics may be transferred to an organism that previously did not possess such resistance.
GERM WARFARE: During conjugation, one bacterial cell (A) can transfer any tiny DNA circle (plasmid) to another cell (B). This act can occur even between cells of different species. The transfer gives bacterium B a resistance to a drug that formerly was not present in its own DNA. In this example, the plasmid contains a gene (shown in red) to manufacture an enzyme that destroys the drug’s ability to interfere with bacterial cell division (as in the case of penicillin).
Third, bacteria can incorporate into their own genetic machinery foreign pieces of DNA by either of two types of DNA transposition. In transformation, DNA from the environment (perhaps from the death of another bacterium) is absorbed into the bacterial cell. In transduction, a piece of DNA is transported into the cell by a virus. As a result of incorporating new genetic material, an organism can become resistant to antibiotics. Commenting on these processes, Walter J. ReMine wrote:
Transformation and transduction occur extremely infrequently, but this rarity can be offset somewhat by the enormous population sizes that bacteria can achieve, especially under laboratory conditions. By those three methods bacteria can acquire DNA that alters their survival.... For example, DNA transposition can result in reduced permeability of the cell wall to certain substances, sometimes providing an increased resistance to antibiotics (1993, p. 404).
The issue is not whether bacteria develop resistance to antibiotics through alterations in their genetic material. They do. The issue is whether or not such resistance helps the evolutionists’ case. We suggest that it does not, for the following reasons.
First, the mutations responsible for antibiotic resistance in bacteria do not arise as a result of the “need” of the organisms. Futuyma has noted: “...the adaptive ‘needs’ of the species do not increase the likelihood that an adaptive mutation will occur; mutations are not directed toward the adaptive needs of the moment.... Mutations have causes, but the species’ need to adapt isn’t one of them” (1983, pp. 137,138). What does this mean? Simply put, bacteria did not “mutate” after being exposed to antibiotics; the mutations conferring the resistance were present in the bacterial population even prior to the discovery or use of the antibiotics. The Lederbergs’ experiments in 1952 on streptomycin-resistant bacteria showed that bacteria which had never been exposed to the antibiotic already possessed the mutations responsible for the resistance. Malcolm Bowden has observed: “What is interesting is that bacterial cultures from bodies frozen 140 years ago were found to be resistant to antibiotics that were developed 100 years later. Thus the specific chemical needed for resistance was inherent in the bacteria” (1991, p. 56). These bacteria did not mutate to become resistant to antibiotics. Furthermore, the non-resistant varieties did not become resistant due to mutations.
Second, while pre-existing mutations may confer antibiotic resistance, such mutations may also decrease an organism’s viability. For example, “the surviving strains are usually less virulent, and have a reduced metabolism and so grow more slowly. This is hardly a recommendation for ‘improving the species by competition’ (i.e., survival of the fittest)” (Bowden, 1991, p. 56). Just because a mutation provides an organism with a certain trait does not mean that the organism as a whole has been helped. For example, in the disease known as sickle-cell anemia (caused by a mutation), people who are “carriers” of the disease do not die from it and are resistant to malaria, which at first would seem to be an excellent example of a good mutation. However, that is not the entire story. While resistant to malaria, these people do not possess the stamina of, and do not live as long as, their non-carrier counterparts. Bacteria may be resistant to a certain antibiotic, but that resistance comes at a price. Thus, in the grand scheme of things, acquiring resistance does not lead necessarily to new species or types of organisms.
Third, regardless of how bacteria acquired their antibiotic resistance (i.e., by mutation, conjugation, or by transposition), they are still exactly the same bacteria after receiving that trait as they were before receiving it. The “evolution” is not vertical macroevolution but horizontal microevolution (i.e., adaptation). In other words, these bacteria “...are still the same bacteria and of the same type, being only a variety that differs from the normal in its resistance to the antibiotic. No new ‘species’ have been produced” (Bowden, 1991, p. 56). In commenting on the changing, or sharing, of genetic material, ReMine has suggested: “It has not allowed bacteria to arbitrarily swap major innovations such as the use of chlorophyll or flagella. The major features of microorganisms fall into well-defined groups that seem to have a nested pattern like the rest of life” (1993, p. 404).
Microbiologists have studied extensively two genera of bacteria in their attempts to understand antibiotic resistance: Escherichia and Salmonella. In speaking about Escherichia in an evolutionary context, France’s renowned zoologist, Pierre-Paul Grassé, observed:
...bacteria, despite their great production of intraspecific varieties, exhibit a great fidelity to their species. The bacillus Escherichia coli, whose mutants have been studied very carefully, is the best example. The reader will agree that it is surprising, to say the least, to want to prove evolution and to discover its mechanisms and then to choose as a material for this study a being which practically stabilized a billion years ago (1977, p. 87).
Although E. coli allegedly has undergone a billion years’ worth of mutations, it still has remained “stabilized” in its “nested pattern.” While mutations and DNA transposition have caused change within the bacterial population, those changes have occurred within narrow limits. No long-term, large-scale evolution has occurred.
The suggestion that the development in bacteria of resistance to antibiotics as a result of genetic mutations or DNA transposition somehow “proves” organic evolution is flawed. Macroevolution requires change across phylogenetic boundaries. In the case of antibiotic-resistant bacteria, that has not occurred.
Ayala, Francisco (1978), “The Mechanisms of Evolution,” Scientific American, 239:56-69, September.
Beardsley, Tim (1994), “La Ronde,” Scientific American, 270:26,29, June.
Begley, Sharon (1994), “The End of Antibiotics,” Newsweek, 123:47-51, March 28.
Bowden, M. (1991), Science vs. Evolution (Bromley, Kent, England: Sovereign Publications).
Caldwell, Mark (1994), “Prokaryotes at the Gate,” Discover, 15:45-50, August.
Davies, Julian (1994), “Inactivation of Antibiotics and the Dissemination of Resistance Genes,” Science, 264:375-382, April 15.
Futuyma, Douglas J. (1983), Science on Trial (New York: Pantheon Books).
Grassé, Pierre-Paul (1977), The Evolution of Living Organisms (New York: Academic Press).
Lederberg, J. and E.M. Lederberg (1952), Journal of Bacteriology, 63:399.
Patterson, Colin (1978), Evolution (Ithaca, NY: Cornell University Press).
ReMine, Walter J. (1993), The Biotic Message (St. Paul, MN: St. Paul Science).
Simpson, George Gaylord, C.S. Pittendrigh, and L.H. Tiffany (1957), Life: An Introduction to Biology (New York: Harcourt, Brace and World).
Smith, John Maynard (1994), “Breaking the Antibiotic Bank,” Natural History, 103:39-40, June.
Travis, John (1994), “Reviving the Antibiotic Miracle?,” Science, 264:360-362, April 15. |
Giant reptiles that ruled dinosaur-era seas might have been warm-blooded, a new study says.
Researchers found that ancient ocean predators possibly regulated their body temperatures, which allowed for aggressive hunting, deep diving, and fast swimming over long distances.
"These marine reptiles were able to maintain a high body temperature independently of the water temperature where they lived, from tropical to cold-temperate oceanic domains," said study co-author Christophe Lécuyer, a paleontologist at Université Claude Bernand Lyon 1 in France.
The prehistoric reptiles may have had body temperatures as high as 95 to 102 degrees Fahrenheit (35 to 39 degrees Celsius)—comparable to those of modern dolphins and whales, Lécuyer noted. (See whale pictures.)
Most modern reptiles and fish are cold-blooded, which means their internal temperatures vary along with those of the surrounding water.
Since the modern oceans' top predators—such as tuna and swordfish—are to some degree warm-blooded, the team wondered whether ancient marine reptiles might have been, too, Lécuyer said.
Tuna and swordfish are homeothermic, or capable of keeping their body temperatures relatively constant, despite changing environmental temperatures. The predators are also partially endothermic, which means they can generate and retain enough heat to raise their body temperatures to high but stable levels.
Fossil Teeth Provide Sea-Reptile Clues
While dinosaurs dominated land during the Mesozoic era (251 million to 65 million years ago), three kinds of large swimming reptiles reigned in the seas—the dolphin-like ichthyosaurs, the serpentine mosasaurs, and the Loch Ness monster-like plesiosaurs. (See a prehistoric time line.)
By studying fossil teeth of fish that would have lived alongside these creatures, Lécuyer and colleagues were able to determine the teeth's oxygen-isotope compositions—the ratios of heavier to lighter forms of oxygen atoms preserved in the tooth mineral.
The levels of oxygen isotopes in teeth reflect those of the blood, which in turn reflect animals' body temperatures.
The team compared these results with oxygen-isotope compositions in modern-day fish that live in a variety of hot and cold environments.
Since most modern fish are cold-blooded, this data helped the team figure out the ocean temperatures of the ancient species' habitat.
Then the researchers compared oxygen-isotope data from the fossilized fish teeth with those seen in fossil-reptile teeth from the same areas.
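To make the comparison concrete, here is a rough, purely illustrative sketch of the arithmetic. It assumes a simple linear calibration between a tooth's oxygen-isotope value and the temperature of the water or body fluid in which the tooth mineral formed; the constants, isotope values, and function below are placeholder assumptions for illustration, not the calibration or data used in the study.

```python
# Illustrative only: a toy version of tooth-isotope thermometry.
# The calibration constants A and B, the seawater value, and the
# tooth values below are assumed numbers, not the study's data.

A = 110.0   # assumed calibration intercept (degrees C)
B = 4.3     # assumed calibration slope (degrees C per permil)

def temperature_c(delta18o_tooth, delta18o_water):
    """Assumed linear calibration: higher tooth values imply colder formation temperatures."""
    return A - B * (delta18o_tooth - delta18o_water)

delta_water = -1.0    # assumed local seawater oxygen-isotope value (permil)
fish_tooth = 21.8     # cold-blooded fish track ambient sea temperature
reptile_tooth = 16.2  # a lower value implies the mineral formed in warmer body fluid

sea_temp = temperature_c(fish_tooth, delta_water)      # ~12 C: the ocean the animals shared
body_temp = temperature_c(reptile_tooth, delta_water)  # ~36 C: the reptile's internal temperature

print(f"Sea temperature from fish teeth:     {sea_temp:.1f} C")
print(f"Reptile body temperature from teeth: {body_temp:.1f} C")
print(f"Reptile runs about {body_temp - sea_temp:.1f} C warmer than its environment")
```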
"Enthralling" Sea-Reptile Findings
Homeothermy and endothermy in ichthyosaurs and plesiosaurs would make sense, as past studies of their body plans suggested the creatures were pursuit predators that needed to keep active, according to the study, published tomorrow in the journal Science.
The new data for mosasaurs, which scientists suspect hunted by ambush, were more ambiguous, but are consistent with the idea that these reptiles could control their body temperatures to some degree, the authors write.
"The studies presented are enthralling," said paleontologist Zulma Gasparini at Argentina's Universidad Nacional de La Plata, who did not take part in the study.
"From what is known in living vertebrates, they are trying to interpret what could happen in vertebrates of the past."
The ancient reptiles' higher body temperatures also suggest the animals may have possessed heat-conservation systems, such as blubber layers and specialized blood circulation, said Ryosuke Motani, a vertebrate paleontologist at the University of California, Davis.
"From here we can really begin to investigate how this might have evolved," said Motani, who was not involved in the new research.
"These [sea reptiles] all came from land reptiles, who we're pretty sure were so-called cold-blooded, and it was probably the same when they started swimming. But over time it looks like homeothermy evolved, and so we need to figure out when that happened and why," he said.
"Maybe it evolved as they became better at cruising, or [because] there were changes in average temperature or in sea level." |
Species: North Atlantic right whale (Eubalaena glacialis)
Status: Endangered (EN)
Interesting Fact: The North Atlantic right whale can grow to a massive 16 metres in length and 70 tonnes in weight.
The North Atlantic right whale is one of the rarest large whales in the world. Like other members of the Balaenidae family, this species uses plates of fibrous ‘baleen’ on each side of its upper jaw to strain tiny prey items from the water. The North Atlantic right whale feeds at northern latitudes in summer, migrating south to calving grounds in the winter. Female North Atlantic right whales give birth to a single calf around once every three years.
The common name of this species comes from it being considered the ‘right’ whale to hunt, as it is easy to catch and floats when dead, yielding large quantities of oil and baleen. Centuries of hunting reduced the North Atlantic right whale population to fewer than 350 individuals by 2008, and the species is now virtually extinct in the eastern North Atlantic. Although hunting is now banned, the North Atlantic right whale still faces threats from entanglement in fishing gear, collisions with boats, pollution and climate change. Measures are in place to protect it, such as fishing gear regulations and changes to shipping lanes, but it is not yet clear whether these have been effective.
Find out more about the North Atlantic right whale and its conservation at WWF – North Atlantic right whale.
Liz Shaw, ARKive Text Author |
After the conquest, Spanish settlers introduced numerous Old World species into the New World. The most pernicious introductions were human-borne diseases, which led to the rapid and tragic decimation of the indigenous population. However, most of the introductions were deliberate, made with the intention of increasing the diversity of available food and resources. Among the non-native (exotic) plants and animals introduced were sheep, pigs, chickens, goats, cattle, wheat, barley, figs, grapevines, olives, peaches, quinces, pomegranates, cabbages, lettuces and radishes, as well as many flowers.
The environmental impact of all these introductions was enormous. The introduction of sheep to Mexico is a case in point.
In the Old World, wool had been a major item of trade in Spain for several centuries before the New World was settled. The first conquistadors were quick to recognize the potential that the new territories held for large-scale sheep farming.
The development of sheep farming and its consequences in one area of central Mexico (the Valle de Mezquital in Hidalgo) was analyzed by Elinor Melville in A Plague of Sheep. Environmental Consequences of the Conquest of Mexico.
Melville divides the development of sheep farming in the Valle de Mezquital into several distinct phases. Sheep farming took off during Phase I (Expansion; 1530-1565). During this phase, the growth in numbers of sheep in the region was so rapid that it caused the enlightened Viceroy, Luis de Velasco, to become concerned that sheep might threaten Indian land rights and food production. Among the regulations introduced to control sheep farming was a ban on grazing animals within close proximity of any Indian village. At first the Indians did not own any grazing animals, and consequently did not fence their fields, which inadvertently encouraged the Spaniards to treat the landscape as common land.
During Phase II (Consolidation of Pastoralism; 1565-1580), the area used for sheep grazing remained fairly stable, but the numbers of sheep (and therefore grazing density) continued to increase. By the mid-1570s, sheep dominated the regional landscape and the Indians also had flocks. One consequence was environmental deterioration, to the point where, by the late 1570s, some farmers no longer had adequate year-round access to pastures. These farmers introduced the practice of seasonal grazing, moving their flocks (often numbering tens of thousands of sheep) from their home farms in central Mexico to seasonal pastures near Lake Chapala.
This practice of grazing on harvested fields or temporary pastures was known as agostadero. This term originally applied to summer (agosto=August) grazing in Spain but was adopted in New Spain for “dry season” grazing, between December and March. So important was this annual movement of sheep that provision was made in 1574 for the opening of special sheep lanes or cañadas along the route, notwithstanding the considerable environmental damage done by the large migrating flocks. As flock sizes peaked, more than 200,000 sheep made the annual migration by 1579.
In the words of historian Francois Chevalier:
By 1579, and doubtless before, more than 200,000 sheep from the Querétaro region covered every September the 300 or 400 kilometers to the green meadows of Lake Chapala and the western part of Michoacán; the following May, they would return to their estancias.
The prime dry season pastures were those bordering the flat, marshy swampland at the eastern end of Lake Chapala. The Jiquilpan district alone supported more than 80,000 sheep each year, as the Geographic Account of Xiquilpan and District (1579) makes clear:
More than eighty thousand sheep come from other parts to pasture seasonally on the edge of this village each year; it is very good land for them and they put on weight very well, since there are some saltpeter deposits around the marsh.
By the end of Phase III (The Final Takeover; 1580-1600), most land had been incorporated into the Spanish land tenure system, the Indian population had declined (mainly due to disease) and the sheep population had also dropped dramatically. Contemporary Spanish accounts attributed this collapse to a combination of Spaniards killing too many animals for just their hides, excessive consumption of lamb and mutton by Indians, and the depletion of sheep flocks by thieves and wild dogs. Melville’s research, however, suggests that the main reason for the decline was actually environmental degradation, brought on by the excessive numbers of sheep at an earlier time.
The entire process is, in Melville’s view, an excellent example of an “ungulate irruption, compounded by human activity.” The introduction of sheep had placed great pressure on the land. Their numbers had risen rapidly, but then crashed as the carrying capacity of the land was exceeded. The carrying capacity had been reduced as (over)grazing permanently changed the local environmental conditions.
By the 1620s, the serious collapse in sheep numbers in the Valle de Mezquital was over; sheep farming never fully recovered. The landscape had been changed for ever.
Sources / Further reading:
- Acuña, R. (ed) Relaciones geográficas del siglo XVI: Michoacán. Edición de René Acuña. Volume 9 of Relaciones geográficas del siglo XVI. Mexico City: Universidad Nacional Autónoma de México. 1987
- Chevalier, F. Land and Society in Colonial Mexico. University of California Press. 1963.
- Melville, Elinor G. K. A plague of Sheep. Environmental consequences of the conquest of Mexico. Cambridge University Press. 1994.
Click here for the original article on MexConnect.
Mexico’s ecosystems and biodiversity are discussed in chapter 5 of Geo-Mexico: the geography and dynamics of modern Mexico. The concept of carrying capacity is analyzed in chapter 19. Buy your copy today, as a useful reference book! |
What is Climate Change Impact?
Climate Change Impact is a way of looking at the effects climate change has on the people and environment of Nunavut, over time.
Almost every part of life in the region will be touched by climate change. It’s important to be aware of these changes in order to deal with impacts that have already happened and prepare for those that will most likely take place.
For example, decreasing ice could allow increased shipping through Arctic waterways, including the Northwest Passage. While this may mean economic benefits for Nunavut, it can also raise the risk of oil and chemical spills. Increased land-use activities and natural resource development, together with population growth and an expanding economy, mean we must plan to ensure environmental sustainability in the future.
Impacts on Culture, Health and Well-being
For centuries, Inuit have maintained a close relationship with ice (siku), land (nuna), sky (qilak), and wildlife (uumajut). Inuit rely on innovative survival skills adapted to the unique climate and weather of the Arctic. Rapid environmental changes will continue to affect Inuit culture and the well-being of all Nunavummiut.
Nunavummiut are part of a complex social and environmental system. Climate change in Nunavut cannot be addressed without considering other factors. Communities’ ability to cope and adapt to climate change will be limited by factors such as housing, poverty, food security, language, modernization, and the erosion of traditional land-based skills. All of these factors have direct impacts on the maintenance of Inuit cultural identity, and the well-being of Nunavummiut.
Impacts on Traditional Activities
Many Nunavummiut depend on hunting, fishing and gathering to support themselves and the local economy in their communities. Local hunting practices have already changed and new technologies are increasingly relied upon.
Inuit elders, who traditionally used their skills to predict the weather, have observed changing cloud and wind patterns (see Voices From the Land for direct quotes from elders on the changes they have witnessed). Their weather and climate-related knowledge does not fit with today’s weather conditions and patterns. Unpredictable weather and climate has increased the risk of travelling on the land. This has made it very difficult for elders to pass along their weather prediction skills to younger generations.
Some traditional travel routes are now unreachable, preventing the use of traditional camp sites. According to many elders and community members, decreasing water levels make travelling by boat more difficult. The early melt of lakes, rivers and sea ice make travel routes unsafe in the spring, and thawing permafrost makes travel by ATV in the summertime more difficult.
Impacts on Food Security
Climate change’s projected impacts include less access to wildlife and more safety risks from changes in sea ice thickness and distribution, permafrost conditions and extreme weather events. This means traditional food security may be significantly affected.
The shift from country food to expensive, store-bought, and often unhealthy food items has had negative effects on Inuit health and cultural identity. Climate change can make this problem even worse.
Food storage is also affected by warmer temperatures and thawing permafrost. Interviews with elders suggest that outdoor meat caches, which used to remain fresh and preserved in the cold, now spoil.
Country food is still the healthiest food choice for Nunavummiut. However, climate change may increase human exposure to contaminants. A shifting climate can change air and water currents that bring contaminants into the Arctic. Also, changes in ice cover and thawing permafrost appear to have contributed to increased mercury levels in some northern lakes. This results in more contaminants making their way into plants, animals, and ultimately humans.
Impacts on Health and Diseases
Diseases that can be transmitted from animals to humans (scientists call them “zoonotic diseases”) are expected to rise as temperatures warm. Previously isolated animal species may come in contact with each other when natural barriers like ice or snow decrease from climate change. This can increase the spread of diseases.
Extreme weather and natural hazards are both direct impacts on human health from a changing climate. Unpredictable weather patterns may cause more accidents and emergency situations. Search and rescue missions are affected, as searches are often held back by these unpredictable weather patterns.
Impacts on Heritage and Special Places
Heritage and special places in Nunavut are being affected by permafrost degradation and increased coastal erosion caused by the late freezing of sea ice. The cold Arctic climate helps preserve organic material frozen in permafrost. If the permafrost changes, it will ruin cultural remains and archaeological artifacts that were previously preserved. Ongoing freeze-thaw cycles promote the decay of artifacts such as sod houses (many of which hold their form because of permafrost) and other historical resources, such as sites relating to European exploration of the Arctic. Naturally occurring coastal erosion is expected to get worse as sea levels rise. This will threaten historic sites on southern Baffin Island, northern Victoria Island and the western high Arctic islands, where little archaeological surveying has been done.
Nunavut has seen more tourists who want to experience our unique Arctic environment and visit heritage sites, parks and special places. Nunavut’s historic and archaeological resources are key attractions for cruise ships and other visitors. Their deterioration can negatively impact tourism.
Impacts on Infrastructure
Over the past several decades, we have used the unique properties of frozen ground, or permafrost, to our advantage. We tailored the engineering of buildings around the characteristics of frozen ground. Permafrost presents challenges to the construction, operation and maintenance of buildings, airports, roads and other northern infrastructure.
As a result, changes in permafrost, ice conditions, precipitation, drainage patterns, temperatures, and extreme weather events can have negative results for infrastructure designed for permafrost conditions.
Permafrost thaw can cause building foundations to shift and become weak. Frozen ground provides a secure foundation. If it does not stay frozen, its strength and integrity – or ability to support a building, pipeline, road or airstrip – may be affected. Older facilities may be more vulnerable because climate change was not considered when these structures were built.
The impacts of climate change are expected to become a major burden on government resources. Municipal infrastructure impacted by degrading permafrost (for example, sinking/cracking buildings) may divert resources from building new infrastructure. Engineering and construction practices for building on changing permafrost are being developed. However, these changing practices will affect the cost of both construction and maintenance of current and future infrastructure.
Pipelines, roads and airstrips, which also rely on permafrost for structural integrity, are experiencing stresses from shifting and thawing grounds. Eventually, these will need to be repaired due to changing freeze and thaw conditions.
Although new infrastructure is being designed to suit a changing environment, existing water and waste containment facilities may not have been designed for current and future warming trends. These facilities and other naturally-occurring containment structures may fail, with possible impacts on the environment and human health.
Land-use activities contribute to changes in the structure of the ground and permafrost by altering the amount of sunlight absorption, and changing the flow of water. This can cause collapsed roadways, and shifting building foundations. Avoiding this will involve a great deal of planning to make sure that infrastructural integrity is maintained. Environmental changes and effects on permafrost are presently considered in community land-use planning and climate change adaptation plans. Current data and tools being developed will continue to provide information to design appropriate, sustainable infrastructure that works in a changing climate.
Impacts on Transportation
Decreasing sea ice thickness and cover will open areas of land and water that have been inaccessible. This will lead to more shipping and industrial activities. While a longer summer shipping season will generate more economic opportunities for Nunavut, it will also increase risks to the environment, most notably through spills and other pollution incidents.
Other transportation-related challenges have been identified. For example, sea ice changes present challenges to traditional snowmobile or dog team transportation routes. New or alternate routes will be needed to continue safe traditional hunting and recreational activities.
Degrading permafrost and changing freeze-thaw cycles have visibly shifted and cracked the surface of airport runways throughout Nunavut. This is a significant transportation challenge because air travel is a main resource for Nunavummiut to receive food and supplies.
In response to these challenges, Nunavut will need improved research, monitoring and response capabilities. This includes new and better infrastructure, mapping, and navigational systems. Improved infrastructure will likely include roads, asphalt paved runways, and fixed marine structures in coastal areas.
Impacts on Resource Development
An increase in exploration and industrial activities will likely result from current climate change projections, which include reduced sea ice cover and warmer temperatures. The Canadian Arctic Archipelago has the potential for vast hydrocarbon deposits and other mineral deposits. Oil and mineral resource development are expected to increase.
Renewable resource development, such as fisheries, will also be impacted by climate change. Fishing in Nunavut is an important part of the economy and subsistence living. It is likely that the number of fish species present in the waters off Nunavut will increase as sub-arctic species move further north with the warming climate. Although this can result in new opportunities for fisheries, it can also bring parasites and new predators. Current and planned fisheries activities and management will need to be continuously monitored and adjusted to address the impacts of climate change.
Impacts on Tourism
Longer summers can result in an extended ‘high’ tourism season and increased tourism activity. Decreasing ice cover is likely to result in more shipping traffic, particularly cruise ship activity, into areas that were formerly inaccessible and/or had limited access. While beneficial, more marine tourism brings challenges in the form of impacts on communities, historic resources, and the environment in general. Addressing these challenges will require additional resources.
Impacts on Arts and Crafts
An increase in tourism should lead to more sales of arts and crafts, and milder weather will make access to carving stone possible for longer periods during the year. However, sudden and unexpected weather patterns and thawing permafrost can pose a risk to the safety of artists and businesses accessing quarry sites at great distances from the communities.
Impacts on Energy
The changing climate will potentially have great impacts on our energy sector. Warmer temperatures will affect our heating requirements, making it less expensive to heat buildings.
Existing power plants will be affected by changes in permafrost conditions, which will influence the stability of infrastructure. Settling of foundations in existing power plants has already been noticed. Degrading permafrost is also expected to impact fuel tank farms and transmission lines. For example, permafrost degradation has created conditions where hydro poles are easier to install. At the same time, degradation is also responsible for destabilizing poles, causing them to lean precariously because of weaker soil.
Changes in water and precipitation patterns along with permafrost degradation may impact hydroelectricity development. Previous studies that estimated hydroelectric potential will no longer be reliable as the flow patterns in our lakes and rivers may change as a consequence of climate change. Some studies have suggested that precipitation will increase, which can have a positive effect on the amount of water available for hydroelectric power production. Possible changes in wind patterns may affect the feasibility of wind generation. |
Download the free Lego WeDo 2.0 software to access the guided and open ended Science and Computational thinking projects together with the detailed tutorials with building instructions and detailed programming examples for each model.
Examples of the WeDo 2.0 software Project Library, Design Library and Programming Library are shown below.
WeDo 2.0 is specifically designed for younger children, so teachers should have no trouble learning to program the robots with the visual, block-based, drag-and-drop coding environment shown below. Introduce children to fundamental coding concepts such as sequences, loops, conditional statements and events to help develop their computational thinking processes in an engaging learning environment. There is a help system which explains each programming block in detail. Colour-coded programming function blocks make it easy to find what you need. Simply drag the blocks onto the canvas and press the start block to see the robot in action. Learn from what happens and develop the code further.
The code below first writes the number 10 on the screen. The code within the loop (the yellow icon with white arrows) then sets the motor power to the number shown on the screen. The math function (with the minus sign) subtracts 1 from the number on the screen each time it runs through the loop. So the WeDo robot starts off at power level 10 and slows down progressively until it reaches power level 0, at which point both the robot and the program stop.
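For teachers who want to see the same logic in text form, here is a minimal sketch in Python of what that block program does. It is only an illustrative translation: the WeDo 2.0 software itself uses drag-and-drop blocks rather than a text API, so the helper functions below (display_number, set_motor_power) are hypothetical stand-ins, not real WeDo calls.

```python
import time

# Hypothetical stand-ins for the WeDo 2.0 display and motor blocks;
# the real software uses visual blocks, not these Python functions.
def display_number(n):
    print(f"Display shows: {n}")

def set_motor_power(level):
    print(f"Motor power set to: {level}")

def countdown_drive(start_level=10):
    """Mimic the block program: show 10, then loop, driving the motor
    at the displayed number and subtracting 1 each pass until 0."""
    number_on_screen = start_level
    display_number(number_on_screen)
    while number_on_screen > 0:
        set_motor_power(number_on_screen)  # motor speed follows the screen value
        number_on_screen -= 1              # the math block: subtract 1
        display_number(number_on_screen)
        time.sleep(1)                      # let the motor run briefly at each level
    set_motor_power(0)                     # robot and program stop together

countdown_drive()
```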
Some of the projects have built in teacher assistant functionality to help plan and deliver the lesson by providing discussion topics, setting the scene, building and programming help together with skills development support for the areas of computational thinking, investigation, programming, design, modelling and communication.
The Lego Education website also has a great section with free lesson plans searchable by Subject type, Grade, Duration, Difficulty, Product and Course.
Lesson examples include projects such as designing and programming a glowing snail, a cooling fan, a moving satellite, or a spy robot in which students design and program a robot to make a sound when it detects something moving in front of it. Each project explains the requirements and includes video examples, building instructions (if required) and possible programming solutions for the task. There are around 30 WeDo 2.0 projects to choose from, and they can be extended or simplified to suit your requirements. |
At the Tank Museum, mentioning the Battle of Arras is likely to bring to mind the tank attack on the 21st May 1940 rather than the much larger and far bloodier battle fought 23 years earlier, but which today is largely unremembered.
The April – May 1917 Battle of Arras was the British Empire’s part of a larger offensive planned by the French. Arras would both divert German attention from the French attack, to be launched further south along the Aisne, and allow the British to test newly developed offensive tactics.
Two training pamphlets introduced these tactics to the British Army. Pamphlet SS135 concentrated on the use of the Division of around 18,000 men, whereas SS143 focused on the Platoon of around 40. Both emphasised the importance of coordination between units and the use of combined-arms tactics when attacking German positions.
In the lead up to the battle the artillery used new techniques to target German positions. Flash spotting and sound ranging were highly advanced techniques based on cutting edge scientific research. Gunners could now accurately locate and fire on German artillery, forcing the crews to seek shelter and preventing them from shelling the advancing British infantry. Thanks largely to this development, over 80% of the German guns around Arras were neutralised during the five-day preliminary bombardment.
The development of the creeping barrage (shellfire that moved forwards with the advancing infantry) meant Germans sheltering in dugouts had no time to man their machine guns between the end of the barrage and the arrival of the British infantry.
In addition, before Arras most of the fuses fitted to these shells included a short delay, which caused them to explode underneath the German barbed wire, leaving it largely intact. The advanced new 106 fuse eliminated this delay, meaning shells detonated on contact with the wire, cutting it and allowing the infantry to advance quickly through it.
The first day of the offensive, the 9th April, was a great success. Most famously the Canadian Corps captured Vimy Ridge in a highly successful attack that also had a great impact on the development of Canada’s national identity. Further south in the Scarpe valley British Divisions along with small numbers of tanks advanced unprecedented distances through the snow against German forces exhausted and demoralised by the bombardment.
On the 11th British and Australian forces attacked around Bullecourt. Although the Australians were supported by tanks, they either broke down or were quickly destroyed. The attack was further hindered by a lack of artillery support and strong German counterattacks. The Australians were forced to retreat and suffered over 3,000 casualties.
By this point German reserves had begun to arrive and their defences were reorganised. This led to slower progress and higher casualties for the British. Further attacks and counter-attacks continued until the 16th May.
The two largest attacks were on the 23rd April and the 3rd May, with tanks being used on both days. In the face of strong German resistance, neither was successful. On the 3rd May over 7,000 British soldiers were killed. This was one of the highest totals on one day during the entire war. The battle continued as long as it did largely to support the French. Their attack was going badly and they needed the British to keep pressure on the Germans.
As successful as the first day had been, the later stages of the Battle of Arras showed that the British were still unable to convert a break-in, where they captured front line German positions over a limited area, into a break-through, where the Germans were forced into a large scale retreat.
For British soldiers the average daily loss rate at Arras was the highest of the war at 4,076. Total casualties amounted to 158,000, with the Germans losing around the same number.
Despite the mixed results, the experience of Arras continued the learning process the British Army went through throughout the war. It also contributed to the wearing down of the German Army. The effects of both these factors would become clear in 1918.
Find out more about The Tank Museum’s Mark II here. This vehicle is the oldest surviving combat tank in the world and was used at the Battle of Arras. |
Hay-Pauncefote Treaty, 1901–02, an agreement by which Great Britain recognized the right of the United States to build a canal across Nicaragua or Panama. Secretary of State John Hay and British Ambassador Lord Pauncefote negotiated the treaty. The pact abolished the Clayton-Bulwer Treaty of 1850, under which both countries were to control and protect the proposed canal.
The first draft treaty, drawn up in 1900, provided that the canal could not be fortified and should be open to all ships of all nations in both peace and war. The Senate adopted amendments, which Great Britain would not accept. The second treaty, ratified by both sides, did not forbid the United States to fortify the canal, but stated that ships of all nations should use the canal on equal terms. In 1912 Congress provided that United States ships engaged in domestic commerce should be exempted from canal tolls. In 1914 Congress yielded to British protests and repealed this provision. |
The Cell Walk (Part 1/3)
Lesson 3 of 5
Objective: Students will construct a gym-size model of a skin cell. They will explain how the endomembranous system works and how the organelles work together to keep the cell alive.
Creating models that can be compared is very useful for student comprehension. Over the next three lessons, students will research an assigned organelle, write a script, and create a life-size model of the organelle. Then they will build a gym-size cell and give tours to their peers. Here is an overview of what students will learn in this lesson.
Present the following scenario to students:
Imagine you could examine the following objects with a very powerful microscope that would allow you to see evidence of cell structure. In your lab notebooks, make a list of the items that are made of cells or were once made of cells.
- pork roast
- cell membrane
After completing your list, explain your thinking. Describe the "rule" or reason you used to decide whether something is or was once made of cells.
based on Keeley, Page. 2007. Things Made of Cells. Uncovering Student Ideas in Life Science. NSTA Press: Arlington, VA.
This is an example of student work. (Note: This student does not have a clear understanding of how all of these items are related. While she can explain what eukaryotic cells share, she does not realize that bacteria lack organelles.)
Next, introduce essential vocabulary (organelle).
The purpose for this step in the lesson is to ensure that students have a proper review of the eukaryotic cell. Begin this lesson with a review of what students know about eukaryotic cells by using A Tour of the Eukaryotic Cell powerpoint. Focus first on the electron micrograph of the organelle. Then explain how artists use the information that scientists have about the organelle's anatomy and physiology to make their drawings. Students should fill out the graphic organizer while the lecture is being given. Here is an example of a completed graphic organizer. At the end of the lecture, focus on the role of the endomembrane system.
(Note: I have noticed that students are so used to seeing perfectly drawn artist's representations of the organelles that they do not understand the wide variation between the same type of organelles in the cell. Also, it is important for students to start looking at the complex 3-D structure of these organelles so they can better understand their function.)
Student Organelle Research
Students will explore their assigned organelle in more detail in order to develop a design and script for their Cell Walk presentation. Hand students the cell walk organelle research sheet and refer them to several websites to help them get started. Typically, our starting point is Rader's Biology4Kids.com. Students will focus on the webpage for cell function and cell structure. Specifically, student teams need to focus on their specific organelle. Students can navigate wherever they would like as long as they completely answer the worksheet and evaluate the sources so they are choosing the best sources possible.
While students are researching their organelle, the teacher should move about the room answering specific student questions and aiding them in determining the relevance of individual sources. Stress to students the importance of citing all sources using APA format.
(Note: Encourage students to search a minimum of five sources to determine the most recent research on their assigned organelle. When students get stuck in the research process, I typically refer them to the source page of Wikipedia. Many times there are vetted peer reviewed studies that can help them.)
The Next Step
As a class, students should calculate the relative size of their organelle in light of the information given in the lecture. They should use the images from the powerpoint (Endoplasmic Reticulum, Golgi Apparatus, Lysosomes, Mitochondria, Nucleus, and Peroxisomes) and the measuring feature in Vernier's LoggerPro to determine the relative size of the organelles in comparison to the cell. Each student group should measure their assigned organelle.
To use the measuring feature, first open LoggerPro. Next select Insert, then Picture, and finally Picture with Photo Analysis. Students will need to select the image of their organelle. Then, using the photo analysis tools, they should determine the scale of the photo and measure the organelle's dimensions. Next, they should share their findings with the class. As a class, students should determine what materials they have readily available that would be best for the construction of their organelle. (Note: In the past six years that we have completed this project, most classes have chosen to use tents as their organelle's basic structure. Tents are decorated inside and out depending on the class. However, for the amount of money we have available to invest (approximately $200.00) and the amount of time we have to construct the cell (approximately 1 day), tents are easy to assemble and generally fall within the size range of the organelle specifications.)
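As a companion to the LoggerPro measurements, the short sketch below shows the scaling arithmetic a class might use to translate measured organelle sizes into gym-model sizes. The cell diameter, gym width, and example measurements are placeholder assumptions for illustration only, not values from the lesson materials.

```python
# Illustrative scale-up from real cell dimensions to the gym-floor model.
# All numbers here are assumptions for demonstration: a skin cell roughly
# 30 micrometres across and a gym floor roughly 25 metres wide. Organelle
# sizes would come from the students' own LoggerPro photo-analysis work.

CELL_DIAMETER_UM = 30.0   # assumed real cell diameter, in micrometres
GYM_WIDTH_M = 25.0        # assumed gym width, in metres

def model_size_m(organelle_size_um):
    """Convert a measured organelle dimension (micrometres) to its size in the gym model (metres)."""
    scale = GYM_WIDTH_M / CELL_DIAMETER_UM  # metres of gym per micrometre of cell
    return organelle_size_um * scale

# Example measurements a team might report (micrometres)
measurements_um = {"nucleus": 6.0, "mitochondrion": 1.0, "lysosome": 0.5}

for organelle, size_um in measurements_um.items():
    print(f"{organelle}: {size_um} um in the cell -> {model_size_m(size_um):.1f} m in the gym model")
```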
Students will write a preliminary script for a three-minute presentation. Student teams will present a three-minute speech to the class the next day. If not completed in class, then the speech will be homework. (Note: The presentation rubric I use comes from the website Read, Write, Think. It best meets my needs. Since I teach so many preps, once I find a resource that works for me, I simply use it and give credit where credit is due. There is no sense reinventing the wheel. After each group's presentation, student teams are given a list of additional questions from their peers and their teacher. Students are required to answer these questions and add them to the revised script. After their presentation on Day 2, student groups are required to make an appointment with the teacher outside of class. Scripts are critiqued a second time by the teacher. During this meeting, student teams will submit their powerpoint, which will be used for the final presentation. They have a week and a half to complete the teacher-team meetings, revised script, and powerpoint.) |
'Marshall' ryegrass, a top variety in nine southern states, was developed by mass selection. For 29 years, seed produced each spring on a small field of annual ryegrass at Holly Springs, Mississippi was used to plant the field the following fall. Gradually, the better plants replaced the weaker ones and the enhanced germplasm named Marshall was released as an improved cultivar.
To be effective, mass selection requires a variable, sexual, seed producing population of cross-pollinated plants. Genetic male-sterile genes can be used in self-pollinated species such as the soybean to create a cross-pollinated population (Brim and Stuber 1973). Enhancing such populations requires effective screens for the characters sought, effective methods for intermating the selected plants, and recurring cycles of these procedures to build up the frequency of the genes controlling the traits being selected.
'Grimm' alfalfa, the first winter-hardy alfalfa cultivar bred in the USA, is a good example of the use of mass selection to enhance germplasm (Tysdal and Westover 1937). In 1858, Wendelin Grimm planted on his newly acquired land in Chaska, Minnesota, the 15 pounds of alfalfa seed he had brought from south Germany where the winters were mild. Only a few plants survived the first severe winter but his bees intermated the survivors. Grimm harvested and planted the seed produced and completed the first cycle of mass selection. Repeated cycles of winter screening and bee intermating produced 'Grimm' alfalfa, the first cultivar adapted to the northern USA.
Using mass selection to enhance traits desired in a good population avoids introducing undesirable traits that often come with exotic germplasm. Wayne Hanna, ARS geneticist at Tifton, Georgia, used Pennisetum glaucum var. monodii to supply dominant genes for immunity to rust, Puccinia substriata Ell. and Barth. var. indica Ramacha and Cumm. and leaf spot, Pyricularia grisea (Cke) Sarc (Hanna et al. 1985). But crosses required to transfer the disease resistant genes also brought with them undesirable monodii genes for seed shattering, short-day sensitivity, sterile cytoplasm and susceptibility to Helminthosporium leaf spot. Five backcrosses and 9 selfed generations were required to produce the resistant 'Tift 85A' and 'Tift 85B' used to produce 'Tifleaf 2' pearl millet.
Mass selection can retain in a population desirable characters not selected for or against. Increasing yield in Pensacola bahiagrass by mass selection did not alter its digestibility.
Forages are better suited to mass selection enhancement than most crops. Many forage species are highly variable and uniformity is usually not required. Many are perennials and most important characters such as whole plant yield are multigenic in their inheritance. Seed yield is usually less important, characters can be selected visually, and cycles can often be shorter.
Mass selection has usually not been recommended as a breeding method to increase yield because heritability for spaced plant yields has usually been low. But heritability can be improved by keeping the environment uniform for all plants in a population from the time the seeds are planted until the spaced plants are selected. This has been a major objective as we have tried to improve mass selection as a plant breeding method to increase forage yields.
In 1960, we decided to try to use mass selection to improve the yield of Pensacola bahiagrass, a seed-propagated, perennial, pasture grass widely grown in the deep South. At that time, few plant breeders would have advised its use as a plant breeding method.
From 1960 to 1988, we made a number of modifications and restrictions in conventional mass selection that have made it possible to produce one cycle per year with the same progress that we made in the beginning when we used three years per cycle. The plant breeding method resulting from these modifications of mass selection we call 'recurrent restricted phenotypic selection (RRPS)' (Burton 1982).
Applied each year to a closed population of diploid, cross-pollinated Pensacola bahiagrass, RRPS has effectively intermated the 200 largest visually grid-selected plants out of 1,000 spaced plants (themselves the largest seedlings among 20,000 RRPS-intermated seedlings). It has increased forage yield 16% per cycle through cycle 14 in spaced-plant tests and 5% per cycle through cycle 9 in 3-year seeded-plot tests.
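To illustrate the recurrent-selection arithmetic described above, here is a minimal simulation sketch of repeated truncation selection: score a population of 1,000 plants, intermate the top 200, and let the selected group's advantage shift the next cycle's mean. The normal-distribution model, the heritability value, and the response calculation (the breeder's equation R = h²S) are illustrative assumptions, not parameters taken from the bahiagrass program.

```python
import random
import statistics

def rrps_cycles(n_plants=1000, n_selected=200, n_cycles=9,
                start_mean=100.0, sd=15.0, heritability=0.3, seed=1):
    """Toy simulation of recurrent phenotypic (truncation) selection.

    Each cycle: draw phenotypes around the current population mean,
    keep the top n_selected, and shift the next cycle's mean by the
    selection differential times an assumed heritability (R = h^2 * S).
    """
    rng = random.Random(seed)
    mean = start_mean
    for cycle in range(1, n_cycles + 1):
        phenotypes = [rng.gauss(mean, sd) for _ in range(n_plants)]
        selected = sorted(phenotypes, reverse=True)[:n_selected]
        selection_differential = statistics.mean(selected) - statistics.mean(phenotypes)
        mean += heritability * selection_differential   # response to selection
        print(f"cycle {cycle}: population mean {mean:.1f}")
    return mean

rrps_cycles()
```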
The enhanced germplasm developed by RRPS can provide breeders seed for a new cultivar equal to that cycle in performance. 'Tifton 9' Pensacola bahiagrass, officially released as an improved cultivar in 1988, is an increase of cycle 9 of RRPS applied to broad-based population A of Pensacola bahiagrass, a blend of seed from 39 Georgia farms. In a 3-year replicated small plot test clipped to simulate grazing, 'Tifton 9' produced 47% more forage than the Pensacola bahia control. It was also leafier and much more vigorous in the seedling stage. In 1980, a selfed progeny of the two-clone Pensacola bahia F1 hybrid 15-21 x 17-29 that had topped a 3-year 9 x 9 lattice square 2-clone hybrid test was used to start the narrow-based population B. RRPS was applied to population B as to population A. In the 1987 spaced-plant-population-progress test, the closed-population B cycles 1 and 7 yielded as well as closed-population A cycles 6 and 12. Thus, the selected narrow-based population B responded to RRPS as well as the broad-based population A, and started at a much higher yield level.
Experience with mass selection and its improved RRPS indicates that germplasm is malleable and can be enhanced with recurrent cycles that intermate superior plants selected with an effective screen. |
We all do it. Most probably want more of it. And only a few of us are any good at it. Despite the fact that we spend on average over 20 years of our lives doing it, the benefits of sleep are often not spoken about clearly and explicitly to students. The National Sleep Foundation recommends that GCSE and Sixth Form students get up to 10 hours of sleep a night. However, many teenagers are not getting anywhere near this, with many reporting that they sleep for less than 7 hours a night.
What is the purpose of sleep? Writing in The Psychologist, Professor Gareth Gaskell notes that ‘the expectation of a single purpose of sleep is unrealistic. What is the purpose of a mouth? One could point to the central role of the mouth in eating, drinking, breathing and speaking, as well as other more subtle roles (e.g. smiling, kissing, fighting, holding). The truth is that sleep – like the mouth – has many purposes’.
So what are the benefits of a good night’s sleep and are these the same skills that students need during revision?
Better Concentration - One of the most comprehensive reviews of the effect of sleep deprivation on cognitive performance found that “first and foremost, total sleep deprivation impairs attention and working memory, but it also affects other functions, such as long-term memory and decision-making. Partial sleep deprivation is found to influence attention, especially vigilance”. This review confirms what most people know from anecdotal experience - it is far easier to be distracted or absent-minded when we are tired.
Better Memory - One of the primary benefits of getting a good regular night’s sleep is that it aids and improves memory and recall. Recent research suggests that when we sleep new connections are formed between our brain cells. Indeed, it appears that sleep actually ‘prioritises memories that we care about’ (something that is particularly handy during revision). An undeniable part of success in exams is the ability to recall knowledge – something that sleep can clearly help with.
Reduced Focus on Negative Things - Getting a good night’s sleep has been found to help boost mood and general wellbeing. In a fascinating study, researchers elaborated on previous findings that people who are sleep deprived remember less, and investigated further what they did still remember. Confirming earlier findings, sleepy participants remembered about 40% less than their more alert peers. What made this really interesting was that the sleepy participants remembered far fewer positive and neutral things, but almost the same amount of negative things.
Perhaps this is one reason why students feel more stressed (and more likely to be tense) during revision time? If they are not getting enough sleep, they are more likely to forget the good stuff and dwell on the negative.
Aids Creativity – Getting a good night’s sleep improves your creativity and insightfulness. One study investigating this had participants perform a puzzle that required them to discover a hidden rule to unlock the task. After having a go at it, some participants went to sleep and others stayed awake. At the subsequent re-testing, more than twice as many participants who had slept worked out the hidden rule compared to those who had stayed awake. The authors of this study ‘conclude that sleep, by restructuring new memory representations, facilitates extraction of explicit knowledge and insightful behaviour’.
Stronger Immune System - Research into teenagers and sleep patterns has also found that those who don’t get enough sleep are more likely to fall ill, with a trend of reduced sleep over several nights often being followed by feeling poorly. If you have headaches, sniffles and coughs, chances are it may be because you are not getting enough sleep. The effect of sleep on how we feel is not just a physical one: sleep has also been found to wash away toxins that have built up during the day.
Sleep helps improve concentration, memory, positivity, creativity and health. These are the exact skills we would want students to have over the coming weeks and months. Probably as a result of these factors combining, students who sleep better have been found to get significantly higher grades than their sleepy peers, with the amount you sleep making up to half a grade’s difference.
It is important to note that when it comes to sleep, more is not always better. Too much and it can get in the way of students actually doing their work. The benefits to memory and recall only come into play once students have done the work in the first place. However, if we can help educate them about the powerful effects and benefits of sleep, it can make a significant impact on how they think, feel and behave.
For even more info take a look at our page Best Ways to Revise, where you'll find links to blogs and research.
For all our other articles on sleep, check out our Hidden Benefits of Sleep guide. |
It's a central tenet of evolution: Life must adapt to its surroundings or die. And the genome knows it. A new study published in the journal Nature shows that in more stressful surroundings, a yeast cell's genome actually gains or loses chromosomes, improving the cell's ability to mutate - and thus its adaptability. This mechanism could explain how some cancerous cells manage to survive the poisonous onslaught of chemotherapy.
Scientists from the Stowers Institute for Medical Research exposed yeast to stress-inducing chemicals, and then examined their chromosomes. When a yeast cell reproduces under normal conditions, cellular mechanisms kick in to make sure that chromosomes are transmitted carefully to the daughter cells. Under stressful conditions, however, these mechanisms broke down, with daughter cells sometimes losing a chromosome or gaining a superfluous one, and passing this abnormal genome down to their own descendants.
But how can a cell thrive when it's missing whole chromosomes? After all, chromosomal instability, also known as aneuploidy, is most frequently encountered in cases of cancer, developmental defects, and poor cellular health. But yeast may be the exception to the rule, with abnormal numbers of chromosomes found in both the yeast in your kitchen and the wild strains outside.
An earlier study had found that the yeast with an odd number of chromosomes could actually survive stressful conditions better than their normal counterparts. Creative mutations occur more readily in the abnormal cells, allowing them to evolve and adapt to the dangers of their environment more quickly.
Based on these earlier results, the researchers decided to see if the stress itself was inducing the chromosomal instability that allowed yeast to resist the stressors. Exposure to a variety of yeast-harming chemicals provided stressful environments for the yeast, which responded by losing and gaining chromosomes at ten to twenty times the usual rate. And specific chromosomes led to protection from specific drugs. For example, yeast exposed to fluconazole, a drug used to eradicate yeast infections and meningitis, usually evolved into resistant colonies with an extra chromosome VIII, while those colonies that evolved to resist the fungicide benomyl tended to have no chromosome XII.
Interestingly, it didn't take much stress to cause aneuploidy. The yeast were exposed to protein-inhibiting radicicol at a very low concentration, so low that it barely slowed cellular growth - but it was still enough to create chromosome instability, which allowed radicicol-resistant yeast colonies to evolve. This resistance came in handy in more than one environment: When exposed to other anti-yeast drugs, the radicicol-resistant yeast could still survive more successfully than yeast cells that started out with a normal complement of chromosomes.
Aneuploidy may help the yeast survive stress, but it's not an ideal condition. Researchers took yeast that had lost chromosome XVI when exposed to tunicamycin, an enzyme inhibitor, and grew it in a drug-free environment. Without the stress of the chemical, the yeast developed into two separate colonies: one with cells that had lost a chromosome XVI and one whose cells had regained the chromosome to revert to a normal yeast genome. The colony that had regained the chromosome grew faster than its chromosome-missing counterpart. It had, however, lost its resistance to tunicamycin.
For yeast cells in a healthy environment, cells with a normal, stable number of chromosomes are more fit than those with aneuploidy. But under stressful conditions, the aneuploidy helps yeast outlive its normal counterpart. Plus, the fact that losing chromosome XVI led to tunicamycin resistance, while reverting to the normal number of chromosomes came at the cost of the drug resistance, indicates that the aneuploidy was directly linked to the drug resistance.
As the researchers write, "These findings demonstrate that aneuploidy is a form of stress-inducible mutation in eukaryotes, capable of fuelling rapid phenotypic evolution and drug resistance."
If stress itself causes the chromosomal instability that lets cells adapt to the stressful chemical, this could explain how cancer resists chemotherapy treatment. The toxins in chemotherapy can kill cancer cells, the way that a fungicide can kill yeast cells, but if the chemo drugs also trigger the cancer cells to lose or gain chromosomes, the tumor may be able to adapt to the stressful environment and survive the chemotherapy. To avoid this situation and make cancer treatments more effective, further research should explore the link between stressful environments and faster adaptation. |
A vegetative state occurs when the cerebrum (the part of the brain that controls thought and behavior) no longer functions, but the hypothalamus and brain stem (the parts of the brain that control vital functions, such as sleep cycles, body temperature, breathing, blood pressure, heart rate, and consciousness) continue to function. Thus, people open their eyes and appear awake but otherwise do not respond to stimulation in any meaningful way. They cannot speak and have no awareness of themselves or their environment.
The vegetative state is rare. Traditionally, a vegetative state has been considered a long-lasting (chronic) disorder. That is, if a person appears to be in a vegetative state but recovers some mental (cognitive) function in a few weeks, that person was never in a vegetative state.
A vegetative state that lasts for more than 1 month is considered a persistent vegetative state. People with a persistent vegetative state rarely recover any mental function or ability to interact with the environment in a meaningful way. When any recovery occurs, the cause was usually brain damage due to a head injury (traumatic brain injury), not a disorder that resulted in the brain being deprived of oxygen. Also, recovery is often very limited. For example, people may reach for any and all objects or may utter the same word over and over. Even fewer people with a persistent vegetative state continue to slowly improve over months to years.
How many people are in a vegetative state is unknown, but about 25,000 people in the United States are thought to have this disorder.
A vegetative state occurs when the cerebrum (the largest part of the brain) is severely damaged (making mental function impossible), but the reticular activating system is still functional (making wakefulness possible). The reticular activating system controls whether a person is awake (wakefulness). It is a system of nerve cells and fibers located deep within the upper part of the brain stem (the part of the brain that connects the cerebrum with the spinal cord).
Most commonly, a vegetative state is caused by severe brain damage due to
A head injury
A disorder that deprives the brain of oxygen, such as cardiac arrest or respiratory arrest
People in a vegetative state can do some things because some parts of the brain are functioning. For example, they can open their eyes, have sleep-wake cycles, breathe on their own, and make reflexive movements.
Because of these responses, they may appear to be aware of their surroundings. However, these apparent responses to their surroundings result from involuntary basic reflexes and not from conscious action. For example, they may instinctively grasp an object when it touches their hand, as a baby does.
People in a vegetative state cannot do things that require thought or conscious intention. They cannot speak, follow commands, move their limbs purposefully, or move to avoid a painful stimulus.
Most people in a vegetative state have lost all capacity for awareness, thought, and conscious behavior. However, in a few people, functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) have detected evidence of some awareness. In these people, the cause was usually a head injury, not a disorder that resulted in the brain being deprived of oxygen. When the people were asked to imagine moving a part of their body, these tests showed appropriate brain activity for such an action (although the people did not do the action). However, these tests cannot determine how much awareness these people have.
People in a vegetative state have no control over urination and bowel movements (are incontinent).
Doctors suspect a vegetative state based on symptoms. However, before a vegetative state can be diagnosed, people should be observed for a period of time and on more than one occasion. If people are not observed long enough, evidence of awareness may be missed. People who have some awareness may be in a minimally conscious state rather than a vegetative state.
An imaging test, such as magnetic resonance imaging (MRI) or computed tomography (CT), is done to check for disorders that may be causing the problem, especially those that can be treated. If the diagnosis is in doubt, doctors may do other imaging tests—positron emission tomography (PET) or single-photon emission computed tomography (SPECT).
Electroencephalography (EEG) may be done to check for abnormalities in the brain's electrical activity that suggest seizures, which may impair consciousness.
Some people spontaneously recover from a vegetative state. The chances of recovery depend on the cause and extent of the brain damage and the person's age, as for the following:
Some recovery is more likely if the cause is a head injury, a reversible metabolic abnormality (such as low blood sugar), or a drug overdose rather than a stroke or cardiac arrest.
Younger people may recover more use of their muscles than older people, but differences in recovery of mental function, behavior, and speech are not significant.
If a vegetative state lasts for more than a few months, people are unlikely to recover consciousness. If people do recover, they are likely to be severely disabled.
The longer a vegetative state lasts, the more severe the disabilities are likely to be.
Recovery from a vegetative state is unlikely after 1 month if the cause was anything other than a head injury. If the cause was a head injury, recovery is unlikely after 12 months. However, a few people improve over a period of months or years. Rarely, improvement occurs late. After 5 years, about 3% of people recover the ability to communicate and understand, but few can live independently, and none can function normally.
Most people who are in a vegetative state die within 6 months. Most of the others live about 2 to 5 years. The cause of death is often a respiratory or urinary tract infection or severe malfunction (failure) of several organs. But death may occur suddenly, and the cause may be unknown.
Like people in a coma, people in a vegetative state require comprehensive care.
Providing good nutrition (nutritional support) is important. People are fed through a tube inserted through the nose and into the stomach. Sometimes they are fed through a tube (called a percutaneous endoscopic gastrostomy tube, or PEG tube) inserted directly into the stomach through an incision in the abdomen. Drugs may also be given through this tube.
Many problems result from being unable to move, and measures to prevent them are essential (see Problems Due to Bed Rest). For example, the following can happen:
Pressure sores: Lying in one position can cut off the blood supply to some areas of the body, causing skin to break down and pressure sores to form. Caregivers must turn people very frequently.
Contractures: Lack of movement can also lead to permanent stiffening of muscles (contractures), causing joints to become permanently bent.
Blood clots: Lack of movement makes blood clots more likely to form in leg veins.
To prevent these problems, physical therapists gently move the person’s joints in all directions (passive range-of-motion exercises). Therapists may splint joints in certain positions to help prevent contractures. People are also given drugs to prevent blood clots from developing.
Because people are incontinent, care should be taken to keep the skin clean and dry. If the bladder is not functioning and urine is being retained, a tube (catheter) may be placed in the bladder to drain urine.
If recovery is unlikely, doctors, family members, and sometimes the hospital ethics committee should discuss how aggressively future medical problems should be pursued and when and if life-sustaining treatment should be withdrawn. A person's wishes about such treatments should be considered if they are known—for example, if wishes have been stated in an advance directive (living will). |
South Carolina sent four delegates to the Constitutional Convention (Pierce Butler, Charles Pinckney, Charles Cotesworth Pinckney, and John Rutledge) with one concern in mind: slavery and the security of its practice. South Carolina was the richest state by the time the Constitution was drawn up in 1787 and wished to remain as such. To continue their economic prosperity, however, they needed assurance that the slave trade would be allowed to continue. Though most people believed slavery would simply vanish in time, in the Constitution, the slave trade was guaranteed to continue for twenty years, until 1808, a compromise South Carolina begrudgingly accepted (they were also generously granted fugitive slave laws).
Additionally, the dispute over proportional representation versus a “one state, one vote” policy proved to be an important issue for South Carolinians. With their economic fortune continuing, South Carolina’s delegates knew that a proportional method of representation would benefit them the most, since they hoped their state’s population would one day rival those of larger states (like Pennsylvania). The matter of how slaves should be counted in representation arose, with northern states arguing that because slaves were considered property, said property could not be granted a vote. It was in the South’s best interest to fight for slave representation, since it would “increase” their population and therefore their allotted representation in Congress. After much debate, the Convention decided that each slave would count as three-fifths of a person, another compromise South Carolina was all but forced to accept.
Following these resolutions, South Carolina ratified the United States Constitution on May 23, 1788, making it the eighth state to do so. It suggested many minor adjustments to the framers’ work, including a change in the phrasing of Article Six from “no religious test” to “no other religious test,” suggesting that the document itself was a religious oath of some sort.
South Carolina was an important factor in creating much of the Constitution, especially with nationalist Charles Pinckney on their side. Though they lost some of the battles they fought, they were also guaranteed the continuation of the ever-important slave trade for two decades and fractional slave representation in Congress, all of which were in their best interest. |
Bernoulli's bio
Bernoulli’s principle applied to avionics
Venturi airfoil analogy
Credits
Bernoulli's Principle
Slide 2: Daniel Bernoulli (Groningen, 8 February 1700 – Basel, 8 March 1782) was a Dutch-Swiss mathematician and was one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics, and for his pioneering work in probability and statistics.
Bernoulli’s work is still studied at length by many schools of science throughout the world. (Daniel Bernoulli, 1700-1782)
Slide 3: Bernoulli’s principle: the pressure of a fluid (liquid or gas) decreases at points where the speed of the fluid increases. In other words, Bernoulli found that within the same fluid, in this case air, high-speed flow is associated with low pressure, and low-speed flow with high pressure.
Slide 4: A venturi tube is used to demonstrate Bernoulli’s principle.
Slide 5: An important application of this principle is found in aeronautics, where it gives lift to the wing of an airplane. An aircraft wing is similar in shape to half of a Venturi tube. With this configuration, the air molecules moving over the curved upper surface have a longer distance to travel.
Therefore, they have to move faster to keep pace with the molecules moving along the bottom of the wing. The acceleration of the air above the airfoil, according to Bernoulli’s principle, causes a lower pressure. Simultaneously, the impact of the slower air on the lower surface of the airfoil increases the pressure below. This combination of pressure decrease above and pressure increase below produces lift (a numerical sketch of this pressure-lift relation follows the credits below).
Slide 6: Venturi-airfoil analogy: high speed = low pressure; low speed = high pressure.
Slide 7: Project Supervisor
Prof. Ian Lahey
Mr Giacomo Calligaris
Mr Simone Covassi
Mr Luigi Maronese
School: Technical Institute for Aeronautics ERSAS A. Volta (UD) Italy
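As promised above, here is a minimal numerical sketch, not part of the original slides, of the pressure difference that Bernoulli's principle predicts for assumed airflow speeds above and below a wing; the speeds, air density, and wing area are illustrative assumptions.

```python
# Minimal sketch (not from the slides): Bernoulli's relation between speed and
# pressure, applied to assumed airflow speeds above and below a wing section.

RHO_AIR = 1.225          # kg/m^3, air density near sea level (standard value)

def pressure_difference(v_top, v_bottom, rho=RHO_AIR):
    """Return p_bottom - p_top from Bernoulli's principle: p + 0.5*rho*v^2 is
    constant along the flow (same height, incompressible, frictionless flow)."""
    return 0.5 * rho * (v_top**2 - v_bottom**2)

v_top = 70.0             # m/s over the curved upper surface (assumed)
v_bottom = 60.0          # m/s along the flatter lower surface (assumed)
wing_area = 16.0         # m^2, assumed total wing area

dp = pressure_difference(v_top, v_bottom)   # Pa, net upward pressure on the wing
lift = dp * wing_area                       # N, idealized lift estimate

print(f"Pressure difference: {dp:.0f} Pa")  # ~796 Pa
print(f"Idealized lift:      {lift:.0f} N") # ~12,740 N
```

With these assumed speeds, the faster flow over the top corresponds to a lower pressure there, and the resulting pressure difference acting over the wing area gives an idealized estimate of the lift. |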
From Learning to Read to Reading to Learn
The Stages of Reading Development
by Pamela Little
Afterschool practitioners who care about literacy development know the importance of putting the right books in the right children's hands. But which books are the right books for which children? How can you determine what books should be in your program library for your kids?
Understanding the stages of reading development can help. According to Dr. Jeanne Chall (1996) of Harvard University, the stages of reading development help us understand how children progress from the rudimentary stages of learning to read into the complex process of reading to learn.
Age or grade guidelines don't tell the whole story. It's more important to understand where the individual child is on her journey to lifelong reading. Some first graders have moved past picture books and early readers into more complex Transitional books (usually designated as grades 2-5), while some fourth graders need instruction in Early or Emergent stage activities.
The Learning-to-Read Stages
Stage 0: Pre-Reading (Reading Readiness, Role-Play Reading), Ages 0-6
This stage covers more changes than any of the others. Children begin to build a fund of knowledge about letters, words, and books. Their ability to use language grows, and they learn to play with language by, for example, rhyming. While reading growth in this stage takes place mostly at home, the skills children acquire during it help them succeed in first grade. Go to Parent A-Z and click on Literacy to find tips for encouraging reading readiness in young children.
Stage 1: Emergent. Grades K-1, Ages 5-6
During this stage, children learn to associate letters with their corresponding sounds. They internalize such aspects of reading as what letters are for, how to discern a new word when a letter is changed (can from cat), and how to recognize when they've made an error. They learn a few high-frequency words that serve as a foothold as they learn to use both picture and text cues.
Since Emergent readers are just learning about print and letter sounds, their books must help them focus on words, integrate the cueing systems, and use what they know about words to learn new ones.
Books appropriate for Emergent readers have:
- large font size and ample spacing
- few words and lines per page, usually consisting of captions and phrases or short sentences
- a consistent pattern of placing print on the page
- rhyme, rhythm, and repetition
- language that reflects children's natural oral language
- simple story beginnings, middles, and endings
- clear, uncluttered illustrations that directly support the text
- a cover and title page that are an integral part of the book as a whole
Stage 2: Early. Grades 1-2, Ages 6-7
Early readers know they need to read from left to right and can match letters to their sounds. Their eyes are beginning to control their reading, so they don't always have to point to each word. Early readers have a store of sight words and don't have to sound everything out, so they can read familiar text with fluency. They are also working on strategies for identifying unfamiliar words.
Books appropriate for Early readers have:
- large font size and spacing
- an increasing number of words per page, including shorter and longer sentences
- a varied pattern of placing print on the page
- two or more sentence patterns
- dialogue mixed with narration
- a few unfamiliar vocabulary words
- simple concepts and story lines related to children's interests and experience
- illustrations that support the text but do not convey the whole story
The Reading-to-Learn Stages
Stage 3: Transitional. Grades 2-5, Ages 7-10
Transitional readers can read texts with many lines of print. They do not rely heavily on illustrations for meaning, though their books still have pictures. Transitional readers can read aloud fluently with some expression. They have a large repertoire of high-frequency words. During this stage, readers are beginning to adapt their reading to different types of texts.
Books appropriate for Transitional readers have:
- an increased amount of print with several paragraphs per page
- greater variety of words including more challenging and specialized vocabulary
- a balance of narration and dialogue
- more complex, fully developed story lines and characters
- story lines involving different times and settings
- stories on familiar topics and experiences
- age-appropriate concepts and humor
- illustrations that are less supportive of text, intended to add detail and create story atmosphere
- reading helps at the beginning of the book and of chapters that clearly set the stage and introduce the characters
- book features that help children access information: tables of contents, indices, glossaries, charts
See Ask the Librarian for tips on choosing Transitional books. Transitional readers love series; check out the Bookshelf for recommended series for this stage.
Stage 4: Independent. Grades 6+, Ages 11+
In this stage, reading is purposeful and automatic. Independent readers apply their reading strategies to long, complex texts; they become aware of their strategies only when they encounter difficult text or are reading for a specific purpose. Independent readers have a large store of high-frequency words that helps them read quickly and automatically. They are building background knowledge and learning to apply what they know. Independent readers will be asked to interpret the author's message, compare books on similar themes, and research topics of interest.
Books appropriate for Independent readers have similar features to Transitional books as well as:
- long text intended to be read over several sittings
- increased number of characters, more dialogue
- multidimensional characters whose personalities and desires create the tension that drives the plot
- settings that may change from chapter to chapter or even within a chapter; changes are usually signaled with a typographical device such as extra white space or a line of asterisks
- plot resolution that may rest on characters adjusting their own attitudes
- subtle humor and an increased need for inferential reasoning
- less familiar topics and experiences
Readers do not fit into neat categories that can be described in a single word. "Continuums and development charts provide a useful overview, common patterns, and typical behaviors and language for thinking about groups of readers. However, the best of these frameworks can only serve as a guide." (Routman, 1994, pp. 108-9.) We must always focus on the child in front of us and respond to how we see that child reading. See our Bibliography for a few recommended titles for each reading stage.
Chall, J. S. (1996). Stages of reading development. Fort Worth: Harcourt Brace.
Routman, R. (1994). Invitations: Changing as teachers and learners, K-12. Portsmouth, NH: Heinemann. |
NGC 6888 measures about 25 x 18 light-years and lies roughly 4,700 light-years away, so looking at it means looking 4,700 years into the past. The nebula is supplied with "fuel" and set aglow by the blue star located at its center. And not just any blue star: it is a supergiant with a large mass, which consumes its fuel at full speed. Moreover, it is a hot star belonging to the Wolf-Rayet class (HD 192163). Now, after only a couple of million years, the star's gas is almost used up and the star has entered a period of significant change: it has joined the ranks of supernova candidates. Here is a star that spews its outer layers into space at frightening speed. 'The snapshots are used to constrain models of the ionization structure of the shell with nebular features,' says Brian D.
Moore and colleagues from the University of Arizona's Department of Physics and Astronomy. "Based on these models, we worked out what physical conditions must be maintained within these features, and which elements could conceivably be present inside the nebula. The results of our analysis, given the small level of heterogeneity found in the photographs, cast doubt on the assumptions underlying the traditional methods of interpreting nebular spectroscopy. The thermal pressure is higher than the estimated internal pressure of the shocked stellar wind, which means that the current conditions have changed significantly in less than a few thousand years." While the central star undergoes significant mass loss, gas rich in oxygen and hydrogen accumulates just ahead of the final big 'explosion' of the WR-class star, creating a 'hot bubble' whose structure cannot yet be fully explained. 'A detailed analysis of the distribution of HI at low positive velocities allowed us to identify two distinct structures, most likely associated with the star and the ring nebula. They are (in the direction from the inner regions to the outer): (1) an elliptical shell measuring 11.8 x 6.3 parsecs, which encloses the ring nebula (the inner shell), and (2) a distorted HI ring 28 parsecs in diameter, also detected in infrared emission (the outer shell).
The boundaries of the inner shell coincide in a striking manner with the brightest areas of NGC 6888, marking the regions where it interacts with the surrounding nebular gas. A third, external feature is a broken arc that we find at slightly higher velocities than the above-mentioned shells,' says Cristina Cappa and colleagues. 'We propose a scenario in which the strong stellar wind of HD 192163, propagating in an inhomogeneous interstellar medium, blew the outer shell during the main-sequence phase of the star's life. Subsequently, the nebula NGC 6888 formed from material ejected by the star during its LBV (luminous blue variable) or RSG (red supergiant) phase and its WR (Wolf-Rayet) phase. This material collided with the inner wall of the outer shell, giving rise to the inner shell. The relationship of the external feature to the star and the nebula remains unclear.' |
In this post we propose the use of a detector based on a SiPM coupled to a plastic scintillator for the detection of cosmic rays. The detector we are going to use has been described in the previous post SiPM & Plastic Scintillator. To increase our confidence that the logged events are actually caused by cosmic rays, we have chosen to use two identical detectors placed one on top of the other and connected to a coincidence circuit.
In previous posts we have already described cosmic ray detectors and measurements:
- Cosmic rays and Coincidence Detector
- Scintillation detector of cosmic muons
- Cosmic Muons and the Muon Life Time
So we will only describe the new experimental setup we have used.
As seen in the figure above, the two plastic scintillator crystals were placed one above the other, separated by a lead screen to stop environmental radiation.
In the images below you can see the detectors in overlapping position for measurements in coincidence and in the side-by-side position, used to evaluate the random coincidence due to spurious events or background radiation.
The pulses from the SiPMs are sent to an electronic device that performs the shaping of the pulses and produces the coincidence pulse with an AND logic gate:
Positioning the scintillator crystals side by side, the result of the counting is practically zero; this is a sign of the good time resolution of the coincidence circuit. In fact, the pulses generated by the fast comparator and used in the coincidence logic last only about 200 ns, so the probability of false coincidences is very low.
By positioning the scintillator crystals one above the other, a count value close to the theoretical value is obtained; this corresponds to the expected value for the cosmic ray flux on the detector surface.
The flux of particles that reach the detector (at sea level) should have the following value:
Scintillator crystal area = 32.5 cm²
Cosmic particle flux = 0.011 particles / cm²·s·sr
Cosmic particle flux crossing a horizontal area = 0.024 particles / cm²·s
32.5 cm² x 0.024 muons / cm²·s = 0.78 muons/s = 0.78 cps
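As a quick cross-check, the arithmetic above can be reproduced in a few lines of Python; the area and flux figures are taken directly from this post, and the measured coincidence rate reported below is included for comparison.

```python
# Cross-check of the expected muon rate using the figures quoted in this post.

DETECTOR_AREA_CM2 = 32.5   # scintillator crystal area
FLUX_HORIZONTAL = 0.024    # muons per cm^2 per second through a horizontal surface

expected_cps = DETECTOR_AREA_CM2 * FLUX_HORIZONTAL   # expected counts per second
measured_cps = 0.22                                  # coincidence rate reported below

print(f"Expected rate: {expected_cps:.2f} cps")              # ~0.78 cps
print(f"Measured rate: {measured_cps:.2f} cps")
print(f"Measured / expected: {measured_cps / expected_cps:.2f}")
```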
The value obtained is 0.22 cps, which is lower than the theoretical value of 0.78 cps. |
Multiple sclerosis (MS), also known as disseminated sclerosis or encephalomyelitis disseminata, is a demyelinating disease in which the insulating covers of nerve cells in the brain and spinal cord are damaged. This damage disrupts the ability of parts of the nervous system to communicate, resulting in a wide range of signs and symptoms, including physical, mental, and sometimes psychiatric problems. MS takes several forms, with new symptoms either occurring in isolated attacks (relapsing forms) or building up over time (progressive forms). Between attacks, symptoms may disappear completely; however, permanent neurological problems often occur, especially as the disease advances.
While the cause is not clear, the underlying mechanism is thought to be either destruction by the immune system or failure of the myelin-producing cells. Proposed causes for this include genetics and environmental factors such as infections. MS is usually diagnosed based on the presenting signs and symptoms and the results of supporting medical tests.
There is no known cure for multiple sclerosis. Treatments attempt to improve function after an attack and prevent new attacks. Medications used to treat MS, while modestly effective, can have adverse effects and be poorly tolerated. Many people pursue alternative treatments, despite a lack of evidence. The long-term outcome is difficult to predict, with good outcomes more often seen in women, those who develop the disease early in life, those with a relapsing course, and those who initially experienced few attacks. Life expectancy is on average 5 to 10 years lower than that of an unaffected population.
Multiple sclerosis is the most common autoimmune disorder affecting the central nervous system. As of 2008, between 2 and 2.5 million people are affected globally with rates varying widely in different regions of the world and among different populations. In 2013, 20,000 people died worldwide from MS, up from 12,000 in 1990. The disease usually begins between the ages of 20 and 50 and is twice as common in women as in men. The name multiple sclerosis refers to scars (sclerae—better known as plaques or lesions) in particular in the white matter of the brain and spinal cord. MS was first described in 1868 by Jean-Martin Charcot. A number of new treatments and diagnostic methods are under development.
A person with MS can have almost any neurological symptom or sign, with autonomic, visual, motor, and sensory problems being the most common. The specific symptoms are determined by the locations of the lesions within the nervous system, and may include loss of sensitivity or changes in sensation such as tingling, pins and needles or numbness, muscle weakness, very pronounced reflexes, muscle spasms, or difficulty in moving; difficulties with coordination and balance (ataxia); problems with speech or swallowing, visual problems (nystagmus, optic neuritis or double vision), feeling tired, acute or chronic pain, and bladder and bowel difficulties, among others. Difficulties thinking and emotional problems such as depression or unstable mood are also common. Uhthoff’s phenomenon, a worsening of symptoms due to exposure to higher than usual temperatures, and Lhermitte’s sign, an electrical sensation that runs down the back when bending the neck, are particularly characteristic of MS. The main measure of disability and severity is the expanded disability status scale (EDSS), with other measures such as the multiple sclerosis functional composite being increasingly used in research.
The condition begins in 85% of cases as a clinically isolated syndrome over a number of days with 45% having motor or sensory problems, 20% having optic neuritis, and 10% having symptoms related to brainstem dysfunction, while the remaining 25% have more than one of the previous difficulties. The course of symptoms occurs in two main patterns initially: either as episodes of sudden worsening that last a few days to months (called relapses, exacerbations, bouts, attacks, or flare-ups) followed by improvement (85% of cases) or as a gradual worsening over time without periods of recovery (10-15% of cases). A combination of these two patterns may also occur or people may start in a relapsing and remitting course that then becomes progressive later on. Relapses are usually not predictable, occurring without warning. Exacerbations rarely occur more frequently than twice per year. Some relapses, however, are preceded by common triggers and they occur more frequently during spring and summer. Similarly, viral infections such as the common cold, influenza, or gastroenteritis increase their risk. Stress may also trigger an attack. Women with MS who become pregnant experience fewer relapses; however, during the first months after delivery the risk increases. Overall, pregnancy does not seem to influence long-term disability. Many events have not been found to affect relapse rates including vaccination, breast feeding, physical trauma, and Uhthoff’s phenomenon.
The cause of MS is unknown; however, it is believed to occur as a result of some combination of genetic and environmental factors such as infectious agents. Theories try to combine the data into likely explanations, but none has proved definitive. While there are a number of environmental risk factors and although some are partly modifiable, further research is needed to determine whether their elimination can prevent MS.
MS is more common in people who live farther from the equator, although exceptions exist. These exceptions include ethnic groups that are at low risk far from the equator such as the Samis, Amerindians, Canadian Hutterites, New Zealand Māori, and Canada’s Inuit, as well as groups that have a relatively high risk close to the equator such as Sardinians, inland Sicilians, Palestinians and Parsis. The cause of this geographical pattern is not clear. While the north-south gradient of incidence is decreasing, as of 2010 it is still present.
MS is more common in regions with northern European populations and the geographic variation may simply reflect the global distribution of these high-risk populations. Decreased sunlight exposure resulting in decreased vitamin D production has also been put forward as an explanation. A relationship between season of birth and MS lends support to this idea, with fewer people born in the northern hemisphere in November as compared to May being affected later in life. Environmental factors may play a role during childhood, with several studies finding that people who move to a different region of the world before the age of 15 acquire the new region’s risk to MS. If migration takes place after age 15, however, the person retains the risk of his home country. There is some evidence that the effect of moving may still apply to people older than 15.
(Figure caption: HLA region of chromosome 6. Changes in this area increase the probability of getting MS.)
MS is not considered a hereditary disease; however, a number of genetic variations have been shown to increase the risk. The probability is higher in relatives of an affected person, with a greater risk among those more closely related. In identical twins both are affected about 30% of the time, while around 5% for non-identical twins and 2.5% of siblings are affected with a lower percentage of half-siblings. If both parents are affected the risk in their children is 10 times that of the general population. MS is also more common in some ethnic groups than others.
Specific genes that have been linked with MS include differences in the human leukocyte antigen (HLA) system—a group of genes on chromosome 6 that serves as the major histocompatibility complex (MHC). That changes in the HLA region are related to susceptibility has been known since the 1980s, and additionally this same region has been implicated in the development of other autoimmune diseases such as diabetes type I and systemic lupus erythematosus. The most consistent finding is the association between multiple sclerosis and alleles of the MHC defined as DR15 and DQ6. Other loci have shown a protective effect, such as HLA-C554 and HLA-DRB1*11. Overall, it has been estimated that HLA changes account for between 20 and 60% of the genetic predisposition. Modern genetic methods (genome-wide association studies) have discovered at least twelve other genes outside the HLA locus that modestly increase the probability of MS.
Many microbes have been proposed as triggers of MS, but none have been confirmed. Moving at an early age from one location in the world to another alters a person’s subsequent risk of MS. An explanation for this could be that some kind of infection, produced by a widespread microbe rather than a rare one, is related to the disease. Proposed mechanisms include the hygiene hypothesis and the prevalence hypothesis. The hygiene hypothesis proposes that exposure to certain infectious agents early in life is protective, the disease being a response to a late encounter with such agents. The prevalence hypothesis proposes that the disease is due to an infectious agent more common in regions where MS is common and where in most individuals it causes an ongoing infection without symptoms. Only in a few cases and after many years does it cause demyelination. The hygiene hypothesis has received more support than the prevalence hypothesis.
Evidence for a virus as a cause includes: the presence of oligoclonal bands in the brain and cerebrospinal fluid of most people with MS, the association of several viruses with human demyelinating encephalomyelitis, and the occurrence of demyelination in animals caused by some viral infections. Human herpes viruses are a candidate group of viruses. Individuals who have never been infected by the Epstein–Barr virus are at a reduced risk of getting MS, whereas those infected as young adults are at a greater risk than those who had it at a younger age. Although some consider that this goes against the hygiene hypothesis, since the non-infected have probably experienced a more hygienic upbringing, others believe that there is no contradiction, since it is a first encounter with the causative virus relatively late in life that is the trigger for the disease. Other diseases that may be related include measles, mumps and rubella.
Smoking has been shown to be an independent risk factor for MS. Stress may be a risk factor although the evidence to support this is weak. Association with occupational exposures and toxins—mainly solvents—has been evaluated, but no clear conclusions have been reached. Vaccinations were studied as causal factors; however, most studies show no association. Several other possible risk factors, such as diet and hormone intake, have been looked at; however, evidence on their relation with the disease is “sparse and unpersuasive”. Gout occurs less than would be expected and lower levels of uric acid have been found in people with MS. This has led to the theory that uric acid is protective, although its exact importance remains unknown.
Although there is no known cure for multiple sclerosis, several therapies have proven helpful. The primary aims of therapy are returning function after an attack, preventing new attacks, and preventing disability. As with any medical treatment, medications used in the management of MS have several adverse effects. Alternative treatments are pursued by some people, despite the shortage of supporting evidence.
During symptomatic attacks, administration of high doses of intravenous corticosteroids, such as methylprednisolone, is the usual therapy, with oral corticosteroids seeming to have a similar efficacy and safety profile. Although corticosteroid treatments are, in general, effective in the short term for relieving symptoms, they do not appear to have a significant impact on long-term recovery. The consequences of severe attacks that do not respond to corticosteroids might be treatable by plasmapheresis.
Over 50% of people with MS may use complementary and alternative medicine, although percentages vary depending on how alternative medicine is defined. The evidence for the effectiveness for such treatments in most cases is weak or absent. Treatments of unproven benefit used by people with MS include dietary supplementation and regimens, vitamin D, relaxation techniques such as yoga, herbal medicine (including medical cannabis), hyperbaric oxygen therapy, self-infection with hookworms, reflexology, and acupuncture. Regarding the characteristics of users, they are more frequently women, have had MS for a longer time, tend to be more disabled and have lower levels of satisfaction with conventional healthcare. |
The Seventeenth Letter — Q
The Phoenicians called this letter “qoph” or “goph,” which meant “monkey.” Perhaps the symbol was the rear of a round little monkey-butt with a descending tail. There doesn’t appear to be an earlier letterform in the Semitic or Egyptian alphabets. It appears that the Phoenicians created this one on their own. The symbol stood for a guttural emphatic sound that isn’t used in Indo-European languages, so the Greeks borrowed it but changed the sound and changed the name to “koppa.” “Kappa” also had the same sound, so one of the letters had to go, and “koppa” left the Greek alphabet. The Etruscans had no difficulty having two symbols with the same sound and did one better by having a third symbol with the same sound. The Etruscans used “koppa” before the vowel “u,” “c” before “e” and “i,” and “k” before an “a.” The Romans adopted all the combinations.
The “Q” is not merely an “O” with a tail. It is an “O,” but the placement and style of the tail vary tremendously across hundreds of fonts. The tail’s placement and style are often the best clue for remembering the names of similar typestyles.
With credit to Allen Haley and Upper & Lower Case magazine, a typography-centered publication published from 1970 to 1999. |
It’s not just your imagination. Providing the first-ever definitive proof, a team of scientists has shown that emerging infectious diseases such as HIV, Severe Acute Respiratory Syndrome (SARS), West Nile virus and Ebola are indeed on the rise. The team – including University of Georgia professor John Gittleman and scientists from the Consortium for Conservation Medicine, the Institute of Zoology (London) and Columbia University – recently published their findings in leading scientific journal Nature.
By analyzing 335 incidents of previous disease emergence beginning in 1940, the study has determined that zoonoses – diseases that originate in animals – are the current and most important threat in causing new diseases to emerge. And most of these, including SARS and the Ebola virus, originated in wildlife. Antibiotic drug resistance has been cited as another culprit, leading to diseases such as extremely drug-resistant tuberculosis (XDR TB).
The scientists also found that more new diseases emerged in the 1980s than in any other decade, “likely due to the HIV/AIDS pandemic, which led to a range of other new diseases in people,” said Mark Levy, deputy director of the Center for International Earth Science Information Network (CIESIN) at Columbia University.
But this team did not stop with determining the causes and origins of emerging infectious diseases; they took it a step further. To help predict and prevent future attacks, sophisticated computer models were used to help design a global map of emerging disease hotspots.
“This is a seminal moment in how we study emerging diseases,” said Gittleman, dean of the Odum School of Ecology, who developed the approach used in analyzing the global database. “Our study has shown that bringing ecological sciences and public health together can advance the field in a dramatic way.”
Over the last three decades, billions of research dollars were spent unsuccessfully trying to explain the seemingly random patterns of infectious disease emergence and spread. Finally, this research gives the first insight into where future outbreaks may occur – and next up is likely the Tropics, a region rich in wildlife species and under increasing human pressure.
“Emerging disease hotspots are more common in areas rich in wildlife, so protecting these regions from development may have added value in preventing future disease emergence,” said Kate Jones, Senior Research Fellow of the Institute of Zoology.
Emerging diseases have caused devastating effects internationally, with millions infected and billions spent. Some diseases have become pandemic, spreading from one continent to another causing massive mortality rates and affecting global economies and livelihoods.
“This work by John and his collaborators is absolutely first rate, as evidenced by its publication in one of the world’s foremost scientific journals,” said UGA Vice President for Research David Lee. “It brings novel insights and perspective to the fight against global diseases and illustrates the tremendous potential of this new field of disease ecology. It is vital that we better understand how environmental factors, including man’s activities, affect the spread of infectious diseases.”
But knowing where the next outbreak is and understanding the reason for its occurrence does not alleviate the entire issue.
“The problem is, most of our resources are focused on the richer countries in the North that can afford surveillance – this is basically a misallocation of global health funding and our priority should be to set up ‘smart surveillance’ measures in these hotspots, most of which are in developing countries,” said Peter Daszak, executive director of the Consortium for Conservation Medicine. “If we continue to ignore this important preventative measure then human populations will continue to be at risk from pandemic diseases.”
This study was funded by the U.S. National Science Foundation, an NSF/NIH Ecology of Infectious Diseases award from the John E. Fogarty International Center of the National Institutes of Health and by three private foundations including The New York Community Trust, The Eppley Foundation and the V. Kann Rasmussen Foundation.
With roots that date back to the 1950s, the University of Georgia Odum School of Ecology offers undergraduate and graduate degrees, as well as a certification program. Founder Eugene P. Odum is recognized internationally as a pioneer of ecosystem ecology. The school is ranked eighth by U.S. News and World Report for its graduate program. The Odum School is the first standalone school of ecology in the world. For more information, see www.ecology.uga.edu.
|
Although the terms are often used interchangeably by some patients, an MRI and a CT scan are two entirely different diagnostic tests. While they are both medical imaging tests that doctors may use to diagnose a problem inside the body, an MRI and a CT scan use different methods to form images. Other differences between an MRI and a CT scan include the quality of certain types of images, length of the tests, contrasting agents used during a procedure, and safety.
An MRI, or magnetic resonance imaging, test uses magnets to create images. During an MRI, a patient lies down on a table and is inserted into a long cylinder that surrounds the body with a strong magnetic field. Radio waves applied inside the tube are then used to produce the internal images.
A CT, or computed tomography, test is an imaging technique that uses radiation to produce internal images. The CT scanner rotates around the patient's body. While this happens, x-rays are passed through the scanner to create the images.
Another major difference between an MRI and a CT scan is the quality of certain images. While an MRI scan will typically produce much clearer images, it is usually used to get images of tumors and soft tissues, as well as brain and spinal cord injuries. Unlike a CT scan, it usually does not do well when trying to get images of body cavities, such as the chest or abdomen. A CT scan is also considered to be the best way to get accurate images of bones.
Historically, both an MRI and a CT scan took up to an hour or more to complete. Today, however, both of these procedures are often completed much quicker. An MRI scan does take a little longer to complete, though.
During both an MRI and a CT scan, contrast agents may be used. This is a type of dye that can be used to enhance visibility in certain areas of the body, such as the blood vessels or gastrointestinal tract. The contrast agent used during an MRI, gadolinium, often causes less adverse reactions than the barium or iodine typically used during a CT scan.
Since it uses radiation as a means to produce an image, there is some concern about the safety of CT scans. Some research suggests that getting these types of scans can possibly increase a patient's risk of getting cancer. Since an MRI uses no radiation, it is considered much safer. Those with an artificial pacemaker, though, should avoid getting an MRI, since it could possibly cause the pacemaker to malfunction.
|
Calculate the pH, to the correct number of significant figures, of:
a. A solution prepared by dissolving 2.55 g of (NH4)2SO4 (FW = 132.141; Ka of NH4+ = 5.7x10^-10) in 100.0 ml of water, adding 100 ml of 0.201 M NaOH, and diluting to 500.0 ml with water.
b. H2C is a weak diprotic acid with Ka1 = 1.2x10^-3 and Ka2 = 8.9x10^-9. A 30.00 ml sample of a 0.1055 M solution of this acid is titrated with standard 0.1575 M NaOH. What is the pH after adding 33.25 ml of titrant to the acid solution?
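Below is a minimal sketch of one way to set up both calculations (it is not the step-by-step solution referred to afterwards); it assumes standard stoichiometric bookkeeping and the Henderson-Hasselbalch buffer approximation, and the two-decimal rounding follows the usual significant-figure convention for pH.

```python
import math

# --- Part (a): (NH4)2SO4 partially neutralized by NaOH gives an NH4+/NH3 buffer ---
mol_salt = 2.55 / 132.141            # mol of (NH4)2SO4 dissolved
mol_nh4 = 2 * mol_salt               # each formula unit supplies two NH4+
mol_oh = 0.1000 * 0.201              # mol NaOH added (100 mL of 0.201 M)

mol_nh3 = mol_oh                     # OH- (limiting) converts NH4+ to NH3
mol_nh4_left = mol_nh4 - mol_oh      # NH4+ remaining after neutralization

pKa_nh4 = -math.log10(5.7e-10)
# Henderson-Hasselbalch; the common final volume (500.0 mL) cancels in the ratio
pH_a = pKa_nh4 + math.log10(mol_nh3 / mol_nh4_left)
print(f"(a) pH = {pH_a:.2f}")        # ~9.28

# --- Part (b): 33.25 mL of NaOH is past the first equivalence point (20.10 mL) ---
mmol_acid = 30.00 * 0.1055           # mmol of the diprotic acid initially present
mmol_oh = 33.25 * 0.1575             # mmol NaOH added
mmol_past_first = mmol_oh - mmol_acid     # base left after converting all H2A to HA-
mmol_HA = mmol_acid - mmol_past_first     # HA- remaining
mmol_A2 = mmol_past_first                 # A2- formed

pKa2 = -math.log10(8.9e-9)
pH_b = pKa2 + math.log10(mmol_A2 / mmol_HA)   # HA-/A2- buffer region
print(f"(b) pH = {pH_b:.2f}")        # ~8.33
```

In part (a) the dilution to 500.0 ml does not change the result, because both buffer species share the same final volume and the ratio cancels.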
The solution explains, with step-by-step calculation, how to find the pH from titrations. |
British Poor Laws were a body of laws designed during the Elizabethan era to provide relief for the poor population living throughout the United Kingdom. Such laws began in sixteenth century England and prevailed until after World War II and the establishment of the welfare state.
Poor Laws provided relief in various forms, including care for the elderly, sick, and infant poor, and the establishment of supportive work programs for all able-bodied poor. Such programs were often run through local parishes until 1830, whereupon the state of poverty came to be regarded as a state of immorality. The characterization as a "pauper" thus became an additional burden, implying not only incapacity but also depravity. At that time, Poor Laws were amended to offer workhouse employment for all able-bodied poor, and mandated deliberately unpleasant living conditions for the poor residents of those workhouses. Such conditions were maintained to prevent people from abusing acts of charity. Unfortunately, because of these efforts to dissuade those capable of working and supporting themselves, the conditions in workhouses were appalling, and many of those legitimately in need of help suffered excessively. Even so, there were often insufficient places in the workhouses to satisfy the needs of the poor population.
In the twentieth century, public housing and other social services began to develop outside the scope of the Poor Law; means tests were developed, and relief that was free of the stigma of pauperism became available. Following the end of World War II, Poor Laws were replaced by systems of public welfare and social security. Yet the problem of poverty remains. Its solution involves more than state-run programs; it requires a change in the hearts and minds of people to care for each other as one family.
The classification of the poor
For much of the period of the Poor Laws, poor members of the community were classified in terms of three groups.
- The "impotent poor" were a group who could not look after themselves or go to work. They included the ill, the infirm, the elderly, and children with no-one to properly care for them. It was generally held that they should be looked after.
- The "able-bodied poor" normally referred to those who were unable to find work, either due to cyclical or long term unemployment, or a lack of skills. Attempts to assist these people, and move them from this state, varied over the centuries, but usually consisted of relief, either in the form of work or money.
- "vagrants" or "beggars," sometimes termed "sturdy rogues," were deemed those who could work but refused to find employment. In the sixteenth and seventeenth centuries such people were seen as potential criminals, and apt to do mischief. They were normally seen as people needing punishment, and as such were often whipped in the market place as an example to others, or sometimes sent to so-called "houses of correction."
Before the English Reformation of the sixteenth century, it was considered a Christian duty to care for the sick and needy. With the Church of England's break from the Roman Catholic Church, some of this attitude was lost, and it became necessary to create legislation to care for the "deserving poor." Tudor Poor Laws, first introduced in 1495, aimed to deal with vagrancy, peasant begging, and charity, and were prompted by a desire for social stability. Such laws were harsh towards the able-bodied poor, as whippings and beatings were acceptable punishments. In the early sixteenth century, parishes began to register those of their communities considered "poor." By 1563, it became legally acceptable for Justices of the Peace to collect money from their communities on behalf of poor relief efforts. Under this legislation, all poor community members were to be classified as one of the three defined groups of poor.
Elizabethan Poor Law
In 1572, the first local poor tax was approved to fund poor relief, followed by the implementation of social workhouses and the 1601 passage of the Poor Law Act, also known as the Elizabethan Poor Law. This act allowed for the boarding of young orphaned children with families willing to accept them in return for a monthly payment from the local parish. The act also provided materials to "set the poor on work," offered relief to people who were unable to work, and established various apprenticeships for able-bodied children.
Relief for those too ill or old to work, the so called impotent poor, often came in the form of monthly payments, donations of food, or donations of clothing. Some aged poor might also have been accommodated in parish alms houses, or private charitable institutions. Meanwhile, able-bodied beggars who had refused work were often placed in houses of correction. Provision for the able-bodied poor in the workhouse, which provided accommodation at the same time as work, was relatively unusual. Assistance given to the deserving poor that did not involve an institution like the workhouse was known as outdoor relief.
Poor Relief Act
There was much variation in the application of the poor laws, and the destitute tended to migrate toward the more generous parishes, often situated in towns. This led to the Settlement Act of 1662, also known as the Poor Relief Act of 1662, which made relief available only to established residents of a parish. Such affiliations could be traced mainly through birth, marriage, or apprenticeship, and all pauper applicants had to prove their membership in a certain "settlement." If they could not, they were removed to the next parish nearest to the place of their birth, or to a parish where they could prove some connection. Some paupers were moved hundreds of miles. Although the parishes a pauper passed through on the way were not responsible for him, they were expected to supply food, drink, and shelter for at least one night.
The Poor Relief Act was criticized in later years for distorting the labor market through the power it gave parishes to remove the "undeserving" poor. Other legislation proved punitive, such as an act passed in 1697 requiring the poor to wear a "badge" of red or blue cloth on the right shoulder, embroidered with the letter "P" and the initial of their parish.
Eighteenth century Poor Law reforms
The eighteenth century workhouse movement began with the establishment of the Bristol Corporation of the Poor, an organization founded by an Act of Parliament in 1696. The corporation established a workhouse which combined housing and care of the poor with an affiliated house of correction for petty offenders. Following the example of Bristol, more than twelve further towns and cities established similar corporations over the next two decades.
From the late 1710s, the newly established Society for the Promotion of Christian Knowledge began to promote the idea of parochial workhouses. The Society published several pamphlets on the subject, and supported Sir Edward Knatchbull in his successful efforts to steer the Workhouse Test Act through Parliament in 1723. The act gave legislative authority for the establishment of parochial workhouses, by both single parishes and as joint ventures between two or more parishes. More importantly, the Act helped to publicize the idea of establishing workhouses to a national audience.
By 1776, more than one thousand parish and corporation workhouses had been established throughout England and Wales, housing almost 100,000 paupers. Although many parishes and pamphlet writers expected to earn money from the labor of the poor in workhouses, the vast majority of people obliged to take up residence in workhouses were the ill, elderly, or young children, whose labor proved largely unprofitable. The demands, needs, and expectations of the poor also ensured that workhouses came to take on the character of general social policy institutions, and often housed night shelters, geriatric wards, and orphanages.
In 1782, poor law reformer Thomas Gilbert finally succeeded in passing an act that established poor houses solely for the aged and the infirm, and introduced a system of outdoor relief for the able-bodied. This laid the ground for the development of the Speenhamland system, which made significant financial provision for low-paid workers.
Nineteenth century Poor Law reforms
Widespread dissatisfaction with the poor law system grew at the beginning of the nineteenth century. The 1601 system was felt to be too costly and was widely perceived as pushing more people toward poverty even as it helped those already in poverty. Social reformer Jeremy Bentham argued for a disciplinary, punitive approach to social problems, while the writings of political economist Thomas Malthus focused attention on the problem of overpopulation and the growth of illegitimacy. Economist David Ricardo argued that there was an "iron law of wages." In the view of such reformers, poor relief served to undermine the position of the "independent laborer."
In the period following the Napoleonic Wars, several reformers altered the function of the "poorhouse" into the model for a deterrent workhouse. The first of the deterrent workhouses in this period was at Bingham, Nottinghamshire. The second, Becher's workhouse at Southwell, is now maintained by the National Trust. George Nicholls, the overseer at Southwell, was to become a Poor Law Commissioner in the reformed system.
The Royal Commission on the Poor Law
In 1832, the Royal Commission into the Operation of the Poor Laws was established, with eight members including English economist Nassau William Senior and social reformer Edwin Chadwick. The Commission's primary concerns were with illegitimacy, reflecting the influence of the Malthusians, and with the fear that the practices of the Old Poor Law were undermining the position of the independent laborer. Two practices were of particular concern to the commissioners: the "roundsman" system, in which overseers hired out paupers as cheap labor, and the Speenhamland system, which subsidized low wages with poor relief.
Upon its publication, the 13-volume report pointed to the conclusion that the poor law itself was the cause of poverty. The report differentiated between poverty, which was seen as necessary because fear of poverty made people work, and indigence, the inability to earn enough to live on.
The report also defined the term "less eligibility": the condition of the pauper was to be less eligible, or less to be chosen, than that of the independent laborer. Under this principle, the reformed workhouses were to be uninviting, so that anyone capable of coping outside them would choose not to enter one. The report also recommended separate workhouses for the aged, the infirm, children, able-bodied women, and able-bodied men, and mandated that parishes be grouped into unions in order to spread the cost of workhouses, with a central authority established to enforce these measures.
The Royal Commission took two years to produce its report; the recommendations passed easily through Parliament, supported by both the Whigs and the Tories. The bill gained Royal Assent in 1834. The few who opposed it were concerned more about the centralization the bill would bring than about its underpinning philosophy of utilitarianism.
The 1834 Poor Law Amendment Act
In 1834, the Poor Law Amendment Act was passed. It still allowed for various forms of outdoor relief; not until the 1840s would entering a workhouse become the only method of relief for the poor. Such workhouses were made little more than prisons, and families were normally separated upon entering. The abuses and shortcomings of the system are documented in the novels of Charles Dickens and Frances Trollope.
However, despite the aspirations of the reformers, the Poor Law was unable to make life in the workhouse worse than life outside. The primary problem was that in order to make the diet of workhouse inmates "less eligible" than what they could expect outside, it would have been necessary to starve the inmates beyond an acceptable level. For this reason, other ways were found to deter entrance to the workhouses, ranging from the introduction of prison-style uniforms to the segregation of "inmates" into yards.
Fierce hostility and organized opposition from workers, politicians, and religious leaders eventually led to further amendments of the Amendment Act, removing the harshest measures of the workhouses. The Andover workhouse scandal, in which conditions in the Andover Union Workhouse were found to be inhumane and dangerous, prompted a government review and the abolition of the Poor Law Commission, which was replaced with a Poor Law Board; a Committee of Parliament was now to administer the Poor Law, with a cabinet minister as its head.
In 1838, the Poor Laws were extended to Ireland, although a few poorhouses had been established before that time. The workhouses were supervised by a Poor Law Commissioner in Dublin. The Irish Poor Laws were even harsher on the poor than the English Poor Laws; furthermore, the Irish unions were underfunded, and there were too few workhouses in Ireland. As a result, the Irish Potato Famine became a humanitarian catastrophe.
Poor Law policy 1865-1900
In 1865, the Union Chargeability Act was passed in order to place the financial burden of pauperism on whole unions rather than on individual parishes. Most Boards of Guardians were middle class and committed to keeping Poor Rates as low as possible.
After the 1867 Reform Act, there was increasing welfare legislation. As this legislation required local authorities' support, the Poor Law Board was replaced with a Local Government Board in 1871. County Councils were formed in 1888, District Councils in 1894. This meant that public housing, unlike health and income maintenance, developed outside the scope of the Poor Law. The infirmaries and the workhouses remained the responsibility of the Guardians until 1930. This change was in part due to altering attitudes on the nature and causes of poverty; there was for the first time an attitude that society had a responsibility to protect its most vulnerable members.
The reforms of the Liberal Government from 1906 to 1914 made several provisions for social services without the stigma of the Poor Law, including old-age pensions and National Insurance. From that period, fewer people were covered by the system. Means tests were developed during the inter-war period, not as part of the Poor Law but as part of the attempt to offer relief that was free of the stigma of pauperism.
One aspect of the Poor Law that continued to cause resentment was that the burden of poor relief was not shared equally by rich and poor areas but, rather, fell most heavily on those areas in which poverty was at its worst. This was a central issue in the Poplar Rates Rebellion led by George Lansbury and others in 1921.
Workhouses were officially abolished by the Local Government Act of 1929, which from April 1930, abolished the Unions and transferred their responsibilities to the county councils and county boroughs. Some workhouses, however, persisted into the 1940s. The remaining responsibility for the Poor Law was given to local authorities before final abolition in 1948.
Free educational games for kids can do much more than just entertain; they can also be incredibly powerful tools for improving brain function and learning. Over the years, scientists have researched the influence of educational games on brain development and cognitive function. All in all, this research points toward educational games contributing to a child’s cognitive, academic, and social development.
Defining Educational Games for Kids
First off, it is important to know that educational games for kids are defined as any games or activities intended to educate children. Such games ultimately aim to help the player gain knowledge and understanding and engage in a process of learning. Educational games can be digital or non-digital, and can be played in the same way as a regular game, with the focus on fun, but at the same time the game itself has an educational purpose derived from its content or structure.
How Educational Games for Kids Can Improve Brain Function and Learning
Numerous studies have found that educational games for kids are highly effective in boosting brain activity. From increasing memory, to stimulating imagination and creativity, to cultivating problem-solving skills, educational games can provide dynamic learning experiences that have various levels of brain boosting benefits.
Educational games have been found to have an important effect on memory. When children are playing educational games, they are constantly challenged to recall information from both short-term and long-term memory. This repeated use of their memory helps to build better cognitive functioning skills for the future. Educational games can thus be key in improving children’s memory, as they allow them to practice different tasks that continually challenge their memory.
Creativity and Imagination
Providing children with educational games can also help to develop, encourage, and tap into their creativity and imagination. Many games involve various levels of problem solving, which engages the brain’s imagination and leads to innovative ways of solving the given task. As the game level increases, the levels of creativity and problem-solving have to increase as well, which gives children a greater sense of freedom while still staying within the limits of the educational game.
In many educational games, children have to rely on problem-solving skills in order to progress and complete tasks within the game itself. By playing educational games and growing in knowledge, children get better and better at problem-solving skills, which are essential for cognitive growth in young children. In essence, playing these games helps to develop important skills and prepare them for real-world challenges.
Through educational games for kids, children can also learn about teamwork and collaboration. Most educational games require the players to work together and learn to coordinate their efforts in order to complete the game. Such games help teach children important social skills that will serve them in their lives.
In summary, educational games for kids can have an incredibly powerful effect on brain development and school success. From boosting memory to increasing problem-solving skills, educational games provide an optimal learning environment that can help to improve brain functions and learning. In addition to these highlights, educational games can offer children plenty of fun, making learning an enjoyable activity.
The word “accused” is the past tense of the verb “accuse,” which means to say that someone has done something wrong or broken a rule. Being accused of something is a serious matter, but it is important to remember that everyone is innocent until proven guilty.
Accusations can happen in many different situations. For example, if someone takes something that doesn’t belong to them, they may be accused of stealing. If someone hurts another person, they may be accused of assault. Accusations can also happen in school, such as if a student is accused of cheating on a test or breaking a school rule.
It is important to remember that just because someone is accused of something, it doesn’t mean they are automatically guilty. In fact, it is important to investigate and gather evidence to make sure that the right person is held responsible.
If someone is accused of something, they have the right to defend themselves and explain their side of the story. It is important to listen carefully to both sides of the story and to make a fair and just decision.
In conclusion, the word “accused” means to say someone has done something wrong or broken a rule. Accusations can happen in many different situations, but it is important to remember that everyone is innocent until proven guilty. If someone is accused of something, it is important to investigate and gather evidence to make sure that the right person is held responsible.
The word “accused” is the past tense of the verb “accuse,” which means to charge someone with a crime or wrongdoing.
Example: The police accused the suspect of stealing the valuable painting.
Here are ten common uses of the word “accused”:
- The lawyer accused the defendant of committing the crime.
- The teacher accused the student of cheating on the test.
- The company accused the employee of stealing company property.
- The coach accused the player of not following the team’s rules.
- The neighbor accused the homeowner of violating noise ordinances.
- The supervisor accused the worker of not following safety protocols.
- The customer accused the store of selling defective products.
- The government accused the corporation of violating environmental regulations.
- The parent accused the child of not telling the truth.
- The journalist accused the politician of unethical behavior.
Top 10 Vocabulary words for accused with definition and examples for primary school
- Innocent – not guilty of a crime or wrongdoing. Example: The suspect is innocent until proven guilty in a court of law.
- Evidence – facts or information that support or prove something. Example: The prosecutor presented evidence that linked the accused to the crime scene.
- Trial – a legal process in which a case is heard by a judge or jury to determine whether someone is guilty or innocent of a crime. Example: The trial of the accused lasted for several weeks and involved many witnesses.
- Testimony – a statement made under oath in a court of law. Example: The witness gave powerful testimony that helped to convict the accused.
- Alibi – evidence that shows a person was somewhere else at the time a crime was committed. Example: The accused had an alibi that proved he was out of town on the day of the robbery.
- Suspect – a person who is believed to have committed a crime. Example: The police had a suspect in custody and were questioning him about the robbery.
- Witness – a person who sees or hears something and can provide evidence about it. Example: The witness testified that he saw the accused leaving the scene of the crime.
- Jury – a group of people who listen to the evidence in a trial and decide whether the accused is guilty or innocent. Example: The jury deliberated for several hours before reaching a verdict.
- Guilty – having committed a crime or wrongdoing. Example: The judge declared the accused guilty of theft and sentenced him to prison.
- Lawyer – a person who is trained and licensed to practice law, and who represents clients in legal matters. Example: The accused hired a lawyer to defend him in court.
Learning vocabulary words such as “accuse” helps students in a number of ways. Firstly, it helps them to expand their vocabulary and improve their language skills. This is important because having a strong vocabulary allows students to communicate more effectively and accurately, both in speaking and in writing.
Secondly, learning the word “accuse” can help students better understand the legal system and the concept of justice. By learning about the various aspects of the legal system, including the roles of the judge, jury, and lawyers, students can gain a greater appreciation for the rule of law and how it protects individuals and society as a whole.
Thirdly, understanding the meaning and use of the word “accuse” can help students develop critical thinking skills. By analyzing and evaluating the evidence in a case, students can learn to make logical and reasoned arguments, and to distinguish between fact and opinion.
Finally, learning the word “accuse” can also help students develop empathy and compassion for others. By understanding the consequences of being accused of a crime or wrongdoing, students can appreciate the importance of fairness, justice, and due process, and learn to treat others with respect and dignity.
Overall, learning vocabulary words such as “accuse” helps students in many different ways, from expanding their language skills to developing critical thinking and empathy.
Once upon a time, in a dense bamboo forest, there lived a lazy and gluttonous panda named Po. Po loved nothing more than eating bamboo all day, every day. He would munch on bamboo shoots, leaves, and stems all day long, completely oblivious to the destruction he was causing to the forest.
One day, the forest animals had had enough. They banded together and decided to put Po on trial for destroying their home. They accused him of being nature's worst enemy and of causing deforestation. Po was shocked and couldn't believe what he was hearing.
At the trial, the animals presented their evidence, and it was overwhelming. Po had eaten so much bamboo that he had single-handedly destroyed a large part of the forest. He was guilty as charged.
But just when it seemed like there was no hope for Po, a wily fox named Nick stepped forward to defend him. Nick was known for being the best defense attorney in the forest, and he had a trick up his sleeve.
During the trial, Nick argued that Po wasn’t the only one responsible for the deforestation. He pointed out that other animals, like the goats and deer, also ate vegetation from the forest. They too were contributing to the destruction of the forest.
Nick also argued that Po was a vital part of the forest ecosystem. He reminded the animals that pandas were an endangered species, and if they didn't protect Po, they could face the extinction of pandas altogether. It was a good point.
The animals were swayed by Nick’s argument, and they agreed to give Po another chance. They agreed to plant more bamboo in the forest, and they also agreed to put in measures to protect the remaining bamboo forest.
Po was overjoyed, and he promised to be more mindful of his eating habits. He knew that he couldn't continue to be nature's worst enemy and cause deforestation. From that day on, he made sure to only eat what he needed and not to harm the forest any further.
In the end, the forest animals forgave Po, and he was allowed to roam free once again. He lived the rest of his days in peace, with a new appreciation for the importance of conservation and protection of the forest.
Change affirmative to interrogative sentences
This worksheet tests your ability to change affirmative sentences into interrogative sentences.
Change affirmative sentences to interrogative sentences.
1. James teaches at a school.
2. Susie has a beautiful voice.
3. Peter knows the answer.
4. Maria lives with her aunt.
5. She took a lot of time to finish the work.
6. He told me a story.
7. The boy stole the money.
8. She went to school.
9. Alice made a cake yesterday.
10. Peter wants to become an engineer.
11. Lara works very hard.
12. Rohan played basketball at university.
13. Peter earns a handsome salary.
14. Julie visited Ibrahim yesterday.
Answers

1. Does James teach at a school?
2. Does Susie have a beautiful voice?
3. Does Peter know the answer?
4. Does Maria live with her aunt?
5. Did she take a lot of time to finish the work?
6. Did he tell me a story?
7. Did the boy steal the money?
8. Did she go to school?
9. Did Alice make a cake yesterday?
10. Does Peter want to become an engineer?
11. Does Lara work very hard?
12. Did Rohan play basketball at university?
13. Does Peter earn a handsome salary?
14. Did Julie visit Ibrahim yesterday?
GEOFFREY M. GLUCKMAN, MSc.
For Functional Neurological Development (FND), he originally trained for three years under the guidance of Barry Heggsted. He has delivered neurological programs to people of all ages and walks of life for over twenty years. This hands-on education is supported by a Master of Science degree in Exercise Science and Biomechanics. He has also authored fiction and non-fiction, including the highly acclaimed Muscle Balance & Function Development® education system.
Scientists discovered many years ago that the brain develops through function, especially interaction with the surrounding environment. This foundation-building of the brain occurs through the sensory-motor pathways, which develop differently from other parts of the body, such as bones and muscles. Specifically, the central nervous system, which extends anatomically into practically all other systems and controls them functionally, waits for specific movements to occur repeatedly before the neurological pathways are developed. Thus, it is through the use (movement) of various parts of the body that the brain and the body create these pathways, which allow us to know where these parts are and how to control them. This process of movement and use is called function and stimulation. This is why neurological development is a process of function, not time.
BRAIN or CENTRAL NERVOUS SYSTEM (CNS)
The CNS is far more than the 3.5 pounds of gray matter that we usually refer to as the brain. The brain (CNS) consists of two main parts, each of which has its own unique function. The first part receives, stores, and processes information and then sends signals as needed for the body to respond to that information. This part is also known as the cortex.
The second part of the brain is the network of sensory-motor pathways throughout the body. This part of the CNS sends information to the cortex and carries instructions from there. The sensory-motor pathways provide the link or tie-in from all parts of the body (your limbs, back, skin, and all the internal parts) to the brain.
WHAT HAPPENS with LACK of DEVELOPMENT?
As stated, the CNS waits for specific stimulation, especially through motion, to occur with frequency, intensity, and duration in order to develop the sensory-motor pathways. If for some reason these activities, usually fulfilled in infancy and early childhood, do not occur, then the development can be completed at a later age, if the specific stimulation occurs.
Lack of neurological development may reveal itself in numerous ways: poor reading and learning skills, short attention span, hypertension, excess nervousness, poor memory, imbalanced walking and awkward coordination.
In our society, we also have slow learners, individuals with speech problems, and many with neurologically-based vision problems, all as a result of lack of development.
Once a person’s CNS has developed in a certain fashion, it will remain so throughout life unless retraining is applied to correct the situation.
LEARNING IS A PHYSICAL ACT
The ability to listen in the classroom, watch television, or read a book is truly a physical skill. These skills rely on signals, which originate physically in the sense organs and are then physically transmitted through the appropriate sensory-motor pathways to the CNS.
If the pathways are not properly developed, then these signals cannot be properly transmitted through the system. This results in little, faulty, or no input to the system. Therefore, proper learning and perception are impaired or, in some cases, non-functional.
TYPICAL SIGNS/SYMPTOMS that a child or adult might display indicating neurological brain development impairment (not an inclusive list): inability to focus (ADHD), inability to follow instructions, memory issues (short or long term), learning issues (math/reading/writing/comprehension), inability to maintain body temperature (too warm or cold), clumsiness, hyperactivity, hand-eye coordination issues, balance and depth perception issues (including diminished sports performance), and many others.
The exercises used for Functional Neurological Development require minimal equipment and can be performed almost anywhere, though some require ample space for movement.
The three-step consulting and education process of FND requires:
A) Functional Neurological Development Evaluation (2-3 hours), which involves evaluations of 42 areas of brain function, including: 1) Visual Development; 2) Auditory Perception; 3) Mobility and Manual Development; 4) Tactile and Kinesthetic Development
B) Home Activity Program, based on the information gathered, is designed for overcoming the functional neurological challenge presented.
The individual and parental guardians are taught how to perform the program in detail (1 hour to 90 minutes).
C) Follow-up Visits: are scheduled every three months after the initial visit. The purpose of the follow-up is to observe the prescribed program, evaluate progress, and make changes, as needed.
With these learning disability solutions, clients discover that they are primarily responsible for their own well-being, and they are provided the means to restore their bodies to a higher level of function and health.
Parenting Basics Series (#3)
Geoffrey M. Gluckman, MSc
Never has humanity had access to so much information. Digital devices deliver it to each and every one of you on a second-by-second basis. For the most part, this is a positive aspect of these devices (for possible dangers, see the article posted 6 December 2017).
However, too much information presents other challenges, especially for young and developing minds.
According to neurosurgeon Dr. Mark McLaughlin, too much information may affect your perception of self-control and thus become a negative stressor. Other researchers question whether the human mind has an unlimited capacity for keeping the information that it gathers.
The American Psychological Association states that too much multitasking may cause a drop in productivity of as much as forty percent.
What are the symptoms of “information fatigue syndrome” (a term coined by British psychologist Dr. David Lewis)?
>excess mental fatigue
>low attention span
>distracted and/or poor concentration
The loss of the ability to concentrate often leads to a fragmented daily existence.
Information overload becomes much worse in an individual who has poor neurological organization.
What are the solutions for information overload?
1) limit “information gathering” to pre-set time blocks, similar to limits you place on funds used for gambling fun
2) select priorities on which information to gather, i.e., make it relevant to you and your goals
3) give yourself time to absorb what you have gathered
4) discipline yourself to disconnect from digital devices (see #1)
5) balance digital information gathering with talking with live resources (persons) knowledgeable on subjects that interest you
The mind, if able to remain focused, can become one of the most powerful forces known to humanity.
And remember: information is a wonderful tool-use and manage it wisely.
Best wishes to all families,
Parenting Basics Series (#2)
Geoffrey M. Gluckman, MSc
Digital Devices and Screen Time
The digital age is certainly upon us. It brings unprecedented access to information and learning, which is positive. However, it brings less positive aspects, such as “screen time” concerns. The dangers of many hours spent in front of broadcasting screens of varying sizes are not fully known at present, especially for young developing brains.
Parents would be wise to limit the use of digital devices and screen time for their children (especially 12 years and younger). First, because the effects are unknown. Second, because recent research shows spikes in the brain, similar to drugs and addiction, when emails, texts, and other alerts are received (1). Third, and most important for Functional Neurological Development within young brains, the visual images seen on digital devices represent three-dimensional images that are not real. The developing brains of our children need to learn in as many ways as possible from real world stimuli. This allows all the regions of the brain to properly develop and grow. This means having a young child exposed to auditory, visual, and kinesthetic (by touch) stimuli in the real world, preferably all at the same time. Our visual sense is the weakest of human sensory abilities (2). Therefore, excess screen time could create imbalances in your child’s sensory functioning. Also, the developed visual abilities would be based on things that are not real, nor touched.
Similar caveats (cautions) apply to hearing. Many kids are spending countless hours with earbuds (earpieces) in their ears. The long term effect on this delicate part of human anatomy is not yet known.
However, the importance for hearing development to occur naturally is critical for survival in the real world. The placement of ears on each side of the head allows you to hear a sound and often determine from which direction it comes. Your ears are also critical for the development of your sense of balance (ability to stand on one leg). Both of these functions may be hindered by listening to sounds (music) through earpieces.
Parents would do well to limit the screen time for their children and instead encourage play and learning through real world pursuits. Experiencing and interacting with the real world in natural environments provides opportunities for normal brain growth and development in all of the bodily senses, as well as real world social skills development. In fact, "forest kindergartens" are becoming more popular because of the positive effects on young children and their brains (3). Recent research shows that being in nature allows the prefrontal cortex (the brain's command center) to relax and rest (1).
Best wishes to all families,
1) Strayer, David. Cognitive psychologist, University of Utah.
2) Montagu, Ashley. Touching.
3) Bratman, Gregory. Stanford University. Kaplan, Stephen and Rachel. University of Michigan.
Parenting Basics Series (#1)
Geoffrey M. Gluckman, MSc
The Power of Choice
Does your young child resist following your daily directions?
Perhaps he or she does not listen to you at all?
Is it a daily struggle to get cooperation from your child in the morning preparing for school or for other simple requests?
Some parents will tell you that these are normal child behaviors and partly that is true. However, it is important to remember the power of choice.
For instance, when it is time to select clothes for the day for your child, lay out two (only two) outfits and ask your child to choose which one he/she wants to wear.
Why is this important? Because it empowers your child with a sense of control. Often that is the reason he or she fights with you during the day. This type of choice is not important to you, but the experience of getting to choose is powerful for a child.
The result: fewer battles throughout the day over decisions that are important.
When offering choices to a young child, they should be limited to two: option A or option B. That applies to choice on clothes to wear or what to eat for a snack, or other choices.
The reason: most young children have not yet fully developed logic or reasoning parts of the brain, so an excess of options causes confusion (overwhelm).
Any sense of overwhelm may trigger an alarm response (crying), or a shut down response (often refusal), which creates an uncooperative situation.
The most dangerous question to ask a child: why?
This is an open-ended question, and one that is oriented to higher level brain (cortex) functioning, which most young children have not yet fully developed. Again, the potential for the child to feel overwhelmed is possible.
The power of choice technique is a powerful tool for the parent to gain a child’s cooperation, as well as a means to produce improved behavior with your young child.
Give it a try and see if it produces the results you want.
Please realize that this technique will not resolve all issues with lack of cooperation, poor behavior, or not following directions. Often there are underlying neurological development deficiencies that may impede a child’s normal growth, behavior, development, and learning.
Best wishes to all families,
APPOINTMENTS: (a pre-consultation and Pre-Questionnaire are required prior to):
1) U.S./CANADA: please email: [email protected]
2) EUROPE: (conducted in English or in your native language through an interpreter that you provide or one may be provided at additional cost), please email: [email protected]
Please contact for fee structure, which includes evaluations and programs
Beginning September 2018: FND-Europe meetings will be conducted in Bratislava, Slovakia.
(Note: these meetings require pre-approval and scheduling with Geoffrey M. Gluckman, MSc.)
Currently accepting new clients in the Bay Area, California, Seattle, WA, Vancouver, British Columbia, and Europe.
Other locations may be arranged.
Please use contact form and send an email to Geoffrey M. Gluckman, MSc
Q: How long does it take for the average person/child to complete the Functional Neurological Development education process?
A: Typically 18-24 months
Q: How frequent are re-evaluations scheduled?
A: In North America, every 3 months. In Europe, every 16 weeks.
(Note: this may vary for special cases.)
Decoding the Language of Programmers: Understanding the Jargon and Terminology
The language of programmers is a unique and specialized form of language that is used within the programming community. It includes not only the programming languages themselves but also the jargon, terminology, and concepts used by programmers to communicate with each other and to write software.
Programming languages are the foundation of the language of programmers. These languages are used to write software, automate tasks, and control computer systems. There are many different programming languages, each with its own syntax, structure, and capabilities.
Some of the most popular programming languages include:
- Java – Java is a high-level programming language that is used to create software for desktop computers, servers, and mobile devices. It is known for its cross-platform compatibility and security features.
- Python – Python is a general-purpose programming language that is known for its simplicity and ease of use. It is used in a variety of applications, including web development, data analysis, and artificial intelligence (a tiny example follows this list).
- C++ – C++ is a powerful programming language that is used to create operating systems, video games, and other high-performance applications. It is known for its speed and efficiency.
- Ruby – Ruby is a programming language that is used to create web applications and dynamic websites. It is known for its simplicity and ease of use.
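To give a concrete sense of what a short program looks like, here is a tiny, purely illustrative Python example (Python being one of the languages listed above); the task and the names in it are made up for this article.

```python
# A tiny Python program: compute the average of a list of test scores.
# Python's readable syntax is one reason it is popular with beginners.

def average(scores):
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(scores) / len(scores)

if __name__ == "__main__":
    test_scores = [78, 92, 85, 65]                        # example data
    print(f"Average score: {average(test_scores):.1f}")   # prints: Average score: 80.0
```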
Programming jargon refers to the specialized vocabulary used by programmers to describe programming concepts, tools, and techniques. This jargon can be confusing to those outside the programming community, but it is essential for effective communication within the community.
Some examples of programming jargon include:
- Algorithm – An algorithm is a set of instructions for solving a problem or performing a task (the short sketch after this list shows one in code).
- API – An API (Application Programming Interface) is a set of rules and protocols that lets one piece of software request services from another, such as a web service, library, or operating system.
- Debugging – Debugging is the process of finding and fixing errors in software.
- Object-Oriented Programming – Object-Oriented Programming (OOP) is a programming paradigm that uses objects to represent data and methods to manipulate that data.
- IDE – An IDE (Integrated Development Environment) is a software application that provides comprehensive tools for writing and testing software.
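To make a few of these terms concrete, here is a small, hypothetical Python sketch that combines them: a class (object-oriented programming) whose method implements a classic algorithm (binary search), plus an assertion of the kind often added while debugging. It is not taken from any real project; the names are invented for illustration.

```python
class SortedCatalog:
    """Object-oriented wrapper around a sorted list of product codes."""

    def __init__(self, codes):
        self.codes = sorted(codes)      # keep the data sorted so binary search works

    def find(self, target):
        """Binary search algorithm: repeatedly halve the range being searched.
        Returns the index of `target` in the sorted list, or -1 if absent."""
        low, high = 0, len(self.codes) - 1
        while low <= high:
            mid = (low + high) // 2
            if self.codes[mid] == target:
                return mid
            elif self.codes[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1

catalog = SortedCatalog([104, 7, 56, 23, 89])
# A simple sanity check of the kind used while debugging:
assert catalog.find(56) == catalog.codes.index(56)
print(catalog.find(56), catalog.find(999))   # index of 56 in the sorted list, then -1
```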
The language of programmers is a unique and specialized form of language that is used within the programming community. It includes programming languages, jargon, terminology, and concepts used by programmers to write software and communicate with each other. Understanding this language is essential for effective communication within the programming community and for learning new programming concepts and techniques.
How Ukraine separated from Russia! The fall of the USSR
Gorbachev's decision to conduct fair elections with a multi-party system and to create a presidency for the Soviet Union began a slow process of democratization that eventually destabilized Communist control and contributed to the collapse of the Soviet Union. In this article we will cover the main reasons behind the fall of the USSR and explain why the largest country in the world separated into 15 republics. #TWN
The recent conflict and war between Russia and Ukraine have shocked the world, but are you aware that Ukraine was once part of the same state as Russia? Russia and Ukraine were not always neighboring countries; they were once parts of the same union, the USSR (Union of Soviet Socialist Republics), which eventually broke apart into several countries. Russia may be the largest country in the world by land area, but the union it belonged to was even larger. So what happened, and why did this union fall, eventually resulting in the separation of Russia and Ukraine? With this article, we will look at the situation that led to this separation and ask whether anything could have prevented it.
Reasons for the fall of the USSR
In the late 1980s, the USSR appeared to be something of a powerhouse, seemingly recovering from the invasion of Afghanistan and with an economy that appeared to be performing well. From the surface, the union looked as powerful as it had been in the 1950s, but looks can be deceiving. Beneath this illusion, the union was falling apart, and it had been happening for decades. Some of the factors that led to this fall are:
Mikhail Gorbachev rose to power in 1985 with plans to reform the system into a hybrid of communism and capitalism, similar to modern-day China. He also planned to ease the restrictions on freedom of speech and religion. Before this, millions of Soviets had been arrested for speaking against the state. However, his plan backfired: loosening control over the people and relaxing political restrictions meant that people used their newfound freedoms to criticize the government until they eventually succeeded in pushing for reform.
This brand of communism had historically operated on tight central control, and loosening that control led to the abandonment of the entire construct. Back in the days of Vladimir Lenin, Leon Trotsky, and Joseph Stalin, the Soviets were led by a strong ideological belief tied to Marxism. By the 1960s, the radical policies of the past leaders had been passed over in favor of a more constructive approach. By the 1970s, the Soviet people saw the rise of a political elite who lived in posh homes, ate in fancy restaurants, and spent their vacations in luxury ski resorts, while millions of average Joes died from starvation.
Younger generations were less willing to toe the line as their parents had, and were ready to step forward and protest for change. These newer generations were more in tune with world events and wanted to create a democracy within the USSR, and slowly but surely they began to pull at the strands of the political regime.
Cold War tensions with the United States rose in the 1970s and 1980s, and with Ronald Reagan's leadership and the resulting increase in military spending, it seemed that the U.S. had won the nuclear standoff. The Strategic Defense Initiative, or SDI, was claimed to be able to destroy Soviet missiles in flight, meaning that, in theory, the U.S. could win the long-running battle of wits. Reagan also managed to isolate the Soviets from the world economy, and without oil exports and sales, the Soviet Union was severely weakened and limited.
The Soviets were unable to turn a corner, and in the 1980s, bread lines were commonplace as poverty soared. Many people didn’t have basic clothing and food, and under these conditions, it was only a matter of time before the people called out for regime change.
And then there was the national structure itself. There were 15 radically different republics under one flag, with different ethnicities, cultures, and languages, so inherent tensions were bound to arise.
The Beginning of the Fall
The 1989 nationalist movements brought about regime changes in Poland and Czechoslovakia, as the Soviet satellite states began to split away. As these nations pulled away, the central apparatus was weakened until it finally collapsed. Due to all these factors, by 1991 the Soviet Union was unable to maintain a normally functioning economy and run a huge military at the same time. Gorbachev, unwilling to go to war as his predecessors might have, instead pulled the plug on the military, and the 15 republics went their separate ways.
Although a devoted Marxist, Gorbachev was an independent thinker who recognized the need for reform and planned the restructuring of the economy. This, along with his vision of lessening the control held by the central government and moving toward uncensored media, laid the path to total reform.
The Soviet Union was no more, and on Christmas Day 1991, Soviet President Gorbachev announced we were living in the new world. With these seven words, the Soviet Union was dissolved, and Gorbachev stepped down from his post. After 40 years of the Cold War and the threat of nuclear holocaust, the world's largest communist state broke up into 15 different republics, and the U.S.A. was now given the accolade of the world's sole superpower. At its strongest, the USSR had almost 5 million soldiers stationed around the world, and they all stepped down without a shot being fired. The Chernobyl disaster also played a huge role in the separation of these 15 republics, but that event was so huge that it deserves a dedicated blog post.
Because of all the reasons discussed, the fall was bound to happen, and it was almost impossible to prevent; the only realistic possibility was delaying it. The fall of the USSR has resulted in the modern-day war between Russia and Ukraine, and even the threat of World War 3 has arisen. To understand the present situation, please refer to the following blog.
After the Revolutionary War, the ideology that “all men are created equal” failed to match up with reality, as the revolutionary generation could not solve the contradictions of freedom and slavery in the new United States. Trumbull’s 1780 painting of George Washington (Figure 7.1) hints at some of these contradictions. What attitude do you think Trumbull was trying to convey? Why did Trumbull include Washington’s slave Billy Lee, and what does Lee represent in this painting?
During the 1770s and 1780s, Americans took bold steps to define American equality. Each state held constitutional conventions and crafted state constitutions that defined how government would operate and who could participate in political life. Many elite revolutionaries recoiled in horror from the idea of majority rule—the basic principle of democracy—fearing that it would effectively create a “mob rule” that would bring about the ruin of the hard-fought struggle for independence. Statesmen everywhere believed that a republic should replace the British monarchy: a government where the important affairs would be entrusted only to representative men of learning and refinement.
Access for free at https://openstax.org/books/us-history/pages/1-introduction
What is GIST? Understanding the Problem
GIST stands for gastrointestinal stromal tumor, a type of tumor that starts in the wall of the digestive tract. To understand what GIST is, it helps to understand the structure and functioning of the GI tract. It is in the gastrointestinal tract that the main processing of food takes place, giving the body energy to perform its various functions, and it is through this same system that toxic wastes are eliminated. After food is chewed and swallowed, it passes into the esophagus and travels through the neck and chest to the stomach. The esophagus joins the stomach just below the diaphragm. Digestion begins in the stomach, where the food mixes with gastric juices. The digested food, along with these acids, then passes into the small intestine. The small intestine, which is almost 20 feet long, continues to break down the food and absorbs nutrients into the blood. The small intestine then joins the large intestine. The first part of the large intestine is the colon, where water and mineral nutrients are absorbed from the food matter. The remaining waste then moves into the rectum, where it is stored until it leaves the body through the anus.
GIST cancer symptoms can appear in any part of the GI tract. Though these are unusual tumors, they usually start in the stomach or the small intestine, two of the most complex parts of the whole GI tract. Most of these tumors do not spread quickly, but some do grow aggressively or spread to other parts of the body. Doctors can examine the tumors and carry out certain investigations to understand the GIST risk factors and to judge whether a tumor is likely to grow and spread quickly. These investigations also help in finding the exact location of the tumor in the GI tract, its size, and the rate of cell division. There are other kinds of cancers of the GI tract, but GIST is different from them, so it is important to first confirm that the tumor is in fact a GIST and then carry out the appropriate treatment.
GIST Cancer Symptoms
One of the biggest problems with GIST cancer is that the symptoms are vague and not pronounced. In fact, in many cases, patients are treated for other GI tract infections and ailments because doctors fail to recognize that it might be a case of GIST. Some common GIST cancer symptoms include:
Feeling of a swelling or mass in the abdomen
Abdominal pain or discomfort
Feeling of fullness after eating a small portion of food
The tendency of nausea or vomiting
Loss of appetite
If the tumor is in the esophagus, difficulty in swallowing food
Blood in the stool or vomit
GIST Cancer Symptoms Related to Loss of Blood
GIST tumors are extremely fragile, and they tend to bleed easily; this bleeding can occur anywhere along the GI tract. Profuse bleeding in the stomach or small intestine can cause black, tarry stools. When there is brisk bleeding from a tumor in the stomach or esophagus, the person can vomit blood. The vomited blood might look like dark brown coffee grounds rather than having the red color of fresh blood.
If the bleeding takes place in the large intestine, there will be bloody stools with visible blood.
In cases of slow bleeding, none of the above-mentioned symptoms may be seen. However, a blood test might show a low red blood cell count, meaning the person is anemic. Tiredness and weakness are commonly felt in such cases.
1. Gas is a type of matter that has no definite shape or volume, made up of atoms or molecules that are free to move around.
2. Gas is a state of matter that can easily be compressed or expanded depending on the pressure and temperature (a short numerical note follows this list).
3. Gas molecules have a low density and high kinetic energy, meaning they can move quickly and rapidly mix with other molecules.
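The list above does not give the formula behind fact 2, but the standard relationship is the ideal gas law, PV = nRT. The short Python check below (with made-up numbers) shows how squeezing a fixed amount of gas into half the volume at constant temperature doubles its pressure.

```python
R = 8.314  # ideal gas constant in J/(mol*K)

def pressure(n_moles, temp_kelvin, volume_m3):
    """Ideal gas law rearranged for pressure: P = nRT / V (in pascals)."""
    return n_moles * R * temp_kelvin / volume_m3

p1 = pressure(1.0, 298.0, 0.0248)   # one mole at room temperature in ~24.8 L: about 1 atm
p2 = pressure(1.0, 298.0, 0.0124)   # the same gas squeezed into half the volume
print(round(p1), round(p2), round(p2 / p1, 2))   # pressure doubles: ratio 2.0
```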
Did you know facts about gas?
Yes! Here are some interesting facts about gas:
1. Gasoline is one of the most commonly used fuels for transportation. It is a mixture composed primarily of hydrocarbons, plus additives to help keep engines running smoothly.
2. Gasoline doesn’t last forever. After about a year, it begins to break down and eventually becomes unusable.
3. When burned, gasoline produces carbon dioxide and water vapor, along with a number of other compounds such as nitrogen oxides, sulfur dioxide and particulate matter. These emissions can contribute to air pollution, so it is important to maintain a properly tuned and maintained vehicle to help reduce emissions.
4. Gasoline prices are affected by various factors including crude oil prices, refining costs, taxes, transportation costs, and distribution costs.
5. Some cars run on alternative fuels such as electricity, hydrogen, propane, and biodiesel. These “cleaner” fuels are designed to reduce emissions, though they may cost more upfront and require access to specific filling stations.
What is gas made of?
Gas is composed of particles of matter, as it is a form of matter just like solids and liquids. Gas particles have a lower density and can therefore spread out over a much larger area than solids or liquids.
The particles of gas are so small that a gas becomes visible only when enough particles are found together in large groupings.
The composition of gases can be classified according to their chemical makeup. Air, a combination of mostly nitrogen and oxygen, is the most common gas we encounter. Depending on their chemical makeup, some gases can be toxic, flammable, and/or combustible.
Examples of gases which occur naturally include oxygen, nitrogen, argon and carbon dioxide. Other gases, such as chlorine and hydrogen, are produced by chemical means.
Who found gas first?
The discovery of gas as a useful energy source dates back to the ancient Greeks (as early as 500 BCE), though it was initially just used for lighting. The use of natural gas (methane) as an energy source can be traced back to China in 600 BCE, when the Chinese captured and stored it in bamboo containers to be used as a fuel to boil water.
However, it wasn’t until 1626 that gas was used for the first time to heat a building when Sir Johnerdney, in England, decided to attach pipes to his home that were connected to a coal mine. This system allowed coal-generated heat to be transferred to his house.
In 1792, William Murdock discovered a new method to produce gas from coal. By heating coal, he was able to release large volumes of coal gas which could then be burned to provide energy to run engines.
This method allowed for more efficient use of coal as an energy source.
In 1816, the newfound production capabilities led to the creation of the Gas Light and Coke Company in London, which provided gas for street lighting throughout the city. This company is considered to be the first gas provider in the world.
Around this time, gas-powered lighting also became popular in homes and businesses across Europe and the United States.
Thus, the discovery of gas can be traced back to the ancient Greeks, while modern uses and production methods were pioneered by individuals such as Sir Johnerdney and William Murdock in the late 18th and early 19th centuries.
How many years of gas are left?
The answer to this question is not an easy one, as the amount of natural gas left in the world is dependent on several factors. It’s estimated that there are roughly 6,500 trillion cubic feet of natural gas left in the world, but it’s impossible to know exactly how much is left since this number is constantly changing due to new discoveries and extraction activities.
Additionally, the amount of natural gas left also depends on future demand and efficiency improvements. That being said, energy analysts generally believe that there is enough natural gas left in the world to last around 60 years at current levels of consumption, or around 200 years if the world is able to reduce its consumption of natural gas.
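As a rough illustration of where such estimates come from, here is a minimal sketch in Python that simply divides reserves by annual consumption. The figures are the approximate numbers quoted above (about 6,500 trillion cubic feet of reserves and roughly 120 trillion cubic feet consumed per year), used only for illustration and not as authoritative data; the variable and function names are ours.

# Rough "years of gas left" estimate: remaining reserves divided by annual use.
# Figures are the approximate values quoted in the text, for illustration only.
RESERVES_TCF = 6_500            # estimated remaining natural gas, trillion cubic feet
CONSUMPTION_TCF_PER_YEAR = 120  # rough current global consumption, trillion cubic feet per year

def years_remaining(reserves, annual_consumption):
    """Years the reserves would last at a constant rate of consumption."""
    return reserves / annual_consumption

print(round(years_remaining(RESERVES_TCF, CONSUMPTION_TCF_PER_YEAR)))      # about 54 years
print(round(years_remaining(RESERVES_TCF, CONSUMPTION_TCF_PER_YEAR / 2)))  # about 108 years if use were halved

Halving consumption roughly doubles the estimate, which is the same logic behind the longer figures quoted above for a world that reduces its natural gas use.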
Who invented gas?
Although it is not known for certain who invented gas, it has been speculated that Chinese alchemists were the first to discover gas as long ago as 1000 BCE. It has also been speculated that British scientists may have discovered natural gas in the 1600s.
It seems likely that humans began experimenting with producing and collecting combustible gases as early as the 1500s.
Stephen Hales, an English scientist, used plant fluids to study the production of combustible gases in the late 1700s. Later, in 1790, scientist and physician John Clayton discovered natural gas while drilling into a coalmine in Virginia.
Following Clayton’s discovery, “coal gas” was used as a fuel in England and by the mid-1800s, it had spread to other parts of Europe.
Due to the industrial revolution in the mid-1800s, gas became more widespread as its use allowed for heating, lighting, and powering various machinery. In the 19th century, the burning of coal was finally replaced by natural gas as a main source of heat for factories and homes.
The 20th century brought about the discovery of other combustible gases such as propane and butane, and new inventions such as gas stoves, gas turbines, gas engines, and gas lighting. While the exact inventor of gas remains a mystery, it is clear that humans have been using gas as a source of energy and heating for centuries.
What are the 2 most common gases?
The two most common gases in the Earth’s atmosphere are nitrogen (N2) and oxygen (O2). Nitrogen makes up 78% of the atmosphere, while oxygen comes in at 21%. Other gases present in the atmosphere include argon (Ar), carbon dioxide (CO2), and water vapor (H2O).
These other gases undergo various cycles between the atmosphere and the oceans, plants, and animals. Because nitrogen and oxygen are so abundant, they are considered the two most common gases in the Earth’s atmosphere.
How much natural gas is left?
Because there is no one-size-fits-all answer to this question, the amount of natural gas left can be difficult to measure accurately. However, the consensus among scientists and energy experts is that the world has around 6,622 trillion cubic feet of natural gas, with around 60% of this being untapped reserves.
Global consumption of natural gas has been increasing over time, and the world’s consumption rate is roughly in line with its production rate, between 119 and 123 trillion cubic feet per year.
Almost 80 percent of the world’s natural gas reserves are located in nine countries, including the United States, Canada, Qatar, Mexico, Saudi Arabia, Russia, Iran, Turkmenistan, and the United Arab Emirates.
In total, these nations hold around 85 percent of the world’s proven reserves, including unconventional sources such as shale gas and methane clathrates.
The sources of natural gas are thought to be able to provide global energy needs for the foreseeable future, though many countries are attempting to move away from natural gas and focus on renewable energies such as solar, wind, and hydro power.
That being said, it is difficult to give an exact answer to the question of how much natural gas is left. Nonetheless, since the world has quite a substantial reserve of natural gas, it is enough to meet the current energy needs of the world while still being able to provide enough energy for a number of years.
How long will natural gas last?
It is difficult to determine how long natural gas will last as it is a non-renewable resource. Natural gas is produced by the natural processes of the Earth and is finite, meaning that it cannot be replenished once it has been used.
Many factors must be taken into consideration when determining how long natural gas will last, such as the amount of natural gas currently in reserves, the rate of extraction, advancements in technology, and disruptions in supply.
Based on current estimates, the International Energy Agency (IEA) estimates that natural gas reserves will last around another 60 to 70 years. While this seems like a long period of time, it is worth noting that the majority of these reserves are located in the Middle East, Russia, and the former Soviet Union.
This means that any disruptions in supply in those regions could significantly alter the current estimates.
Given the factors outlined above, it is difficult to provide an exact answer to the question of how long natural gas will last. It is likely to last several more decades, however, further advancements in renewable and sustainable energy technologies could help to conserve the currently known reserves and potentially reduce the overall dependence on natural gas in the future.
Can natural gas run out?
Yes, natural gas can run out. Natural gas reserves are finite, meaning they will eventually be depleted. However, there is currently enough natural gas in the world to last centuries. Estimates suggest that the global natural gas resources could last approximately 250 years at current rates of production.
Natural gas can also be generated from other sources, such as coalbed methane and biogas, which helps increase reserves. Natural gas production technologies and exploration strategies have also advanced significantly over time to help make better use of existing reserves and increase supply.
There are also various methods for conserving natural gas, including improved energy efficiency, renewable energy initiatives, and capturing and reusing naturally occurring methane. By utilizing all these strategies, natural gas could play a key role in the global energy mix for many years to come.
Where is natural gas found?
Natural gas is a fossil fuel that is found in underground reservoirs of porous rock, typically formed millions of years ago from the remains of dead plants and animals. Natural gas is found all over the world, but it is usually pulled from large reserves located in underground beds of sedimentary rock such as sandstone and limestone.
These gas reserves, called “traps,” are created when natural gas is trapped under a ‘cap’ of non-porous rock such as shale or clay. The most common and accessible natural gas reserves exist in areas where ancient oceans existed and the sedimentary rock beds containing the trapped gas still remain.
Common locations where natural gas is extracted include the U.S., Canada, Russia, China and the Middle East. Natural gas is also sometimes found in coal beds and methane hydrates, which are ice-like structures found in ocean floors and permafrost regions.
How much gas is left in the earth?
It is estimated that there is approximately 4 trillion cubic meters (or 4,000,000,000,000 cubic meters) of natural gas left in the earth, according to estimates from the US Energy Information Administration.
This estimate is based on current consumption rates and estimates of current global reserves. However, this amount of natural gas will not be enough to meet current or projected demand in the coming years.
In fact, experts estimate that the global demand for natural gas will double by 2050. As a result, finding new sources of natural gas, such as in shale, or investing in renewable energy sources, such as solar and wind, will become essential in order to meet the needs of a growing global population.
Is the Earth still making oil?
Yes, the Earth is still making oil. This process, known as abiogenic petroleum formation, occurs over millions of years. Abiogenic petroleum formation works by transforming carbon, water, and hydrogen found in the mantle and other regions of the Earth into oil and gas.
This occurs due to extreme temperature and pressure conditions found in certain parts of the Earth. Research suggests that abiogenic petroleum formation may be responsible for a significant portion of global reserves of oil and gas, including up to one third of the world’s petroleum reserves.
However, these reserves are often found at greater depths than those currently being tapped through drilling, meaning that they remain largely unexplored. As a result, the amount of oil and gas being generated remains largely unknown.
How poisonous is gas?
It depends on what type of gas you’re referring to and its concentration. Some types of gas can be extremely toxic and even fatal in high enough concentrations, whereas others may cause only mild symptoms if accidentally inhaled.
Commonly toxic gases include chlorine, carbon monoxide, and nitrogen dioxide. Chlorine is especially dangerous because it is highly toxic even at relatively low concentrations. Inhaling too much of it can cause severe irritation of the throat, chest pain, and coughing.
In the worst cases, it can lead to respiratory failure and even death. Carbon monoxide is also poisonous and can be lethal if inhaled in high concentrations, due to its ability to deprive the body of oxygen and suffocate someone.
Nitrogen dioxide is a byproduct of burning fossil fuels, and can cause a wide range of health issues if exposed to it long enough, from shortness of breath and irritation of the eyes to lung damage. Some other less commonly known toxic gases include phosgene (used as a chemical weapon in WWI), ammonia, sulfur dioxide, and methyl isocyanate.
However, it should be noted that there are also plenty of non-toxic gases as well, such as nitrogen, oxygen, and methane. So the overall level of toxicity of gas depends on the particular gas and its concentration.
Is gas a yes or no?
No, gas is not a yes or no. Gas is a type of energy that can be created from a variety of sources, such as coal, oil, natural gas, etc. Gas is used in a wide variety of applications, from fuel to heating, cooling, and electricity generation.
The types of gases and the way they are used vary, depending on the application. For example, natural gas can be used to heat homes and businesses and generate electricity, while gasoline is used to power cars.
Additionally, gas can be converted into different forms of energy, such as electricity and heat. The way that gas is used and the forms of energy it can create are important factors when determining its value and use. |
What are the Characteristics Of a Wolf? – (Characteristics & Interesting Facts)
Finding out the characteristics of a wolf?
Wolves (Canis lupus) have interesting characteristics and are highly intelligent animals. These wild animals live with their entire packs and are highly social animals. Adult wolves use a variety of ways to communicate with their wolf pack.
Wolves and dogs belong to the same family, Canidae, which is why they share many similar traits. But wolves are purely wild animals, while dogs are domesticated breeds that descend from wolves.
The gray wolf is the most common wolf species and has many subspecies, but they all share broadly similar characteristics and behavior. Let’s look closely at the interesting characteristics and some interesting facts about wolves.
Wolf Characteristics & Behaviour
Distribution and Habitat
Wolves inhabit North America, Europe, and Asia, with some subspecies living in Africa. They tend to inhabit remote areas such as mountainous regions and forests. Wolf packs can make their homes in a range of terrain from deserts to tundra.
Wolves favor places with plenty of prey and low human disturbance, but some have adapted to living near people in suburban areas.
The wolf population has decreased in recent years due to human-related causes such as habitat destruction and human-wolf conflicts. Wolves have been reintroduced into many parts for restoration purposes.
Wolves are predatory animals that primarily feed on large ungulates, such as deer, elk, moose, and bison.
Wolves feed on smaller mammals, including rabbits, beavers, mice, and voles. In addition to this mammalian prey, they eat birds, fish, reptiles, and insects.
Wolves hunt larger animals in the winter and smaller ones in summer when food supplies are scarce. Wolves have also been known to scavenge carcasses left behind by other predators.
A wolf can eat 20 to 25 pounds of food in a single sitting and can survive without food for two weeks or more when prey is scarce.
This opportunistic feeding behavior allows them to adapt to changing environmental conditions; in certain circumstances they will even eat plant matter.
Wolves eat grass when they have stomach problems, as it helps them vomit up food that is making them ill.
Reproduction and Mating Process
Wolves mate for life and typically live with one partner for their entire life. The breeding season usually takes place between late January and March.
During this period, a male wolf becomes vocal and active in attracting a mate. Courtship behaviors include howling, touch, play fighting, and scent marking.
Wolves can reproduce once annually and give birth to an average of six pups. Their gestation period remains 60 to 63 days, after which their pups are born.
Although the female wolf is solely responsible for nurturing and caring for their litter, the male plays an important role in protecting, hunting food, and helping to build a den.
Once the wolf pups are born, both parents are responsible for teaching them life skills such as hunting and socialization.
Wolves can live up to 13 years in the wild, although their lifespan is often shorter due to hunting and habitat destruction.
Predators and Threats
Wolves have few natural predators; bears, Siberian tigers, and other large carnivores occasionally kill them, but wolves themselves are apex predators near the top of the food chain.
As a result, attacks on wolves by other wild animals are rare.
Humans are also a threat to wolves, as they hunt them for meat, sport, and many other reasons.
Wolves that live close to human populations are more likely to come in contact with people, which can lead to conflict. Humans kill wolves for their fur or to protect livestock.
Additionally, wolves are losing their natural habitats due to human activities such as land development and logging. Wolves require large areas and open spaces to survive.
If these essential elements of their habitat are disturbed or destroyed, wolves become more vulnerable to predators and other threats.
For the wolf population to remain healthy, humans must take steps to ensure their habitats are protected.
The conservation of wolf populations is important for healthy ecosystems, as wolves play a vital role in maintaining balance within nature. By preserving their habitat and coexisting with humans, wolves can continue to thrive.
Wolves are highly social animals and live with their entire pack. They communicate through vocalizations, body language, and scent markings.
A wolf Pack’s members participate in hunting, care for their young ones together, and even play with one another.
Wolves also have a hierarchical structure within the pack that is maintained through displays of dominance and submission.
Wolves show strong family bonds and can be very loyal to their pack mates, often interacting with them in playful ways. These are all aspects of a wolf’s personality.
Despite myths about them being dangerous or over-aggressive, wolves are generally shy animals who spend most of their time avoiding human interactions.
3 Interesting Facts about Wolves
Wolves are monogamous
Wolves are highly social wild animals and communicate using body language, vocalizations, and scent marking with their own and other pack members. Mating season for wolves typically occurs from January to March.
An alpha male and female wolf pair bond for life. However, if a mate dies, the surviving wolf may take another mate after recovering from the loss.
During mating season, they become increasingly active while searching for a mate at night.
A wolf can usually find a mate within its pack. However, if none are available, they will travel to another pack in search of one.
When two wolves decide to become mates, they stay together and help raise their pups until the next mating season, when they part ways.
Wolves are incredibly loyal and strong creatures who put family first and can sacrifice their lives to keep their mates safe.
Their Vocal and Nonvocal Ways of Communication
Wolves are incredible creatures that communicate both vocally and non-vocally. Vocal communication includes growls, howls, barks, whines, and yips. Wolves use vocalizations to mark territory, attract mates, and teach young pups.
Nonvocal communication between wolves is just as crucial for their social interaction and includes body language, physical contact, and scent marking.
Wolves communicate their emotions through their posture, facial expressions, tail movements, and the positioning of their ears.
Through physical contact, such as nuzzling or touching noses, wolves can show affection. Scent marking is another way wolves identify pack members and mark their territories.
Wolves are social animals, and vocal and nonvocal communication help them maintain strong pack bonds.
Alpha Male and female wolves are breeders, not leaders.
Have you ever thought about how the concept of an alpha wolf came?
In 1947, researcher Rudolf Schenkel published a study on the social structure and body language of wolves.
He observed and studied ten captive wolves at the Basel Zoo in Switzerland. He stated that the highest-ranked alpha female and alpha male were the ones to mate, and that the hierarchy could change over time.
He wrote that “by constantly controlling and suppressing all types of competition within the same sex, both ‘alpha animals’ maintains their social position,”
Later, according to the International Wolf Center, researcher David Mech noted that it was Schenkel’s work that introduced the concept of the alpha wolf in a pack.
Gray wolves are highly intelligent animals with interesting characteristics. There are around 200,000 to 250,000 wolves in the world.
Wolves live in packs of four to nine wolves, in which only the alpha male and female mate with each other, producing 4 to 6 pups per litter. They use a variety of vocal and nonvocal signals to communicate with other wolves.
They are primarily carnivorous animals that feed on large ungulates such as bison, deer, elk, and moose.
Moreover, attacks on wolves are rare, as they are top predators.
Additionally, they are monogamous, which means they mate with one partner for life. Lastly, the concept of the alpha wolf came from early observational research that later researchers studied further.
Frequently Asked Questions
What does a wolf symbolize?
A wolf symbolizes guardianship, loyalty, communication, and intelligence.
What is the personality of a wolf?
Wolves are highly intelligent animals with caring, playful, and devoted personalities.
What are 8 interesting facts about wolves?
Eight interesting facts about wolves: they mate for life; they have 4 to 6 pups per litter; pups are born deaf and blind; a pack can include 2 to 30 wolves; a wolf can run at up to 36 to 38 mph; they can smell prey from miles away; and they can jump high into the air.
How loyal are wolves?
Wolves are the most loyal animals.
Ticks are parasitic arthropods that live off the blood of vertebrates and can cause several diseases in humans, including Lyme disease. The life cycle of ticks includes four stages: eggs, larvae, nymphs, and adults. All these stages have their peculiarities.
The eggs are tiny and round, the larvae are small and have six legs, the nymphs look like miniature adults, and the adults are about the size of a sesame seed. Interestingly, ticks can crawl on walls and ceilings. Another important fact is that ticks do not have wings at any stage of their life cycle.
Ticks don’t need wings to get around; they crawl and attach themselves to a host with their mouthparts. Ticks can be found all over the world, but they are especially common in warm and humid climates. So if you’re planning on traveling to a tropical destination this summer, be sure to protect yourself from ticks!
Do Ticks Have Wings?
Do ticks have wings? The answer to this question is no, ticks do not have wings. Ticks are parasites that rely on blood from other animals to survive and they cannot fly. There are four stages in the life cycle of a tick: eggs, larvae, nymphs, and adults. All these stages have their unique features.
What ticks do have is called the scutum, a hard plate on their backs that is sometimes mistaken for wings but is not used for flying or gliding. It protects the tick and helps it stay stable during blood feeding and while moving around to find hosts.
What Do Ticks Look Like?
Adult ticks are easy to identify because they have eight legs and a hard outer shell. However, nymphs and larvae are much smaller and can be difficult to spot. Larvae have six legs, while nymphs have eight. Most ticks are brown, reddish-brown, or black, so they can easily blend in with their surroundings.
What Do Ticks Eat?
Ticks feed on the blood of mammals, birds, and reptiles. They attach themselves to their hosts by embedding their mouthparts into the skin. Some ticks can survive up to a year without feeding.
How Do Ticks Spread Diseases?
Ticks can spread diseases when they bite their hosts. They can also spread disease indirectly by carrying bacteria on their bodies. Some of the most common diseases that ticks transmit include Lyme disease, Rocky Mountain spotted fever and ehrlichiosis.
How Big Are Ticks?
Ticks are small arthropods that range in size from as tiny as a sesame seed to about the size of a grape when fully engorged. Ticks do not have wings at any stage. What they do have is a hard plate on their backs called the scutum, which can resemble wings but is not used for flying or gliding; it protects the tick and helps it stay stable during blood feeding and while moving around to find hosts.
Despite their small size, ticks can be a big problem because they can carry several diseases that can make you sick. It’s important to be aware of these risks and take the necessary precautions to protect yourself.
How Do Ticks Get On You?
Ticks can get on your skin by attaching themselves to you or they can land in your hair and wait until their surroundings are suitable. They travel from the ground up, so if you’re in tall grasses, ticks will climb onto any exposed body parts such as legs and arms.
After a tick attaches to its host, it feeds for several days or even weeks if no one removes them. When it’s ready to detach from the host, it releases saliva that contains chemicals that keep the host from feeling it and also increase blood flow to its mouth. This is a very important process that allows ticks to live off their hosts for several days or weeks until they are ready to drop off on their own.
Where Do Ticks Live?
Ticks live all over the world in warm and temperate climates. They can be found in forests, fields, gardens, and even on animals. Ticks that spread Lyme disease are mainly found in the northeastern and north-central United States, but they are also present in other parts of the country.
Ticks become active once temperatures reach 55 degrees Fahrenheit (13 Celsius), but their activity level depends on where they live because some regions get colder than that during winter months while others don’t. Ticks can be active during winter in mild climates and regions with hot summers, but they don’t survive the cold winters elsewhere.
Do Ticks Jump?
Ticks are unable to jump because they have flat bodies and legs that aren’t designed for leaping; unlike fleas, they can only crawl across a surface. Despite what you may hear from people unfamiliar with these parasites, ticks do not leap from one animal to another; they wait on vegetation or the ground and climb onto hosts that brush past them.
How Many Legs Do Ticks Have?
A tick has eight legs. Six are used mainly for walking, while the front pair is used to sense and grab onto a host, such as a human or a dog. Ticks have rounded bodies with hard shells that protect them from being crushed by their hosts while they feed on blood. They also have eyes and sharp mouthparts that help them attach themselves to their hosts.
Do Ticks Have Antennae?
Ticks do not have antennae. They have a hard plate on their backs called the scutum that looks like wings, but it is not used for flying or gliding. The scutum helps them maintain balance during blood feeding and moving around to find hosts.
Do Ticks Fly?
Ticks are unable to fly. They have a hard plate on their backs called the scutum that looks like wings, but it is not used for flying or gliding. The scutum helps them maintain balance during blood feeding and moving around to find hosts.
How Do Ticks Find A Host?
Ticks actively look for hosts to attach themselves to. They use several methods to find a host, including sensing the carbon dioxide and other chemicals that humans release when they exhale or sweat, detecting heat from warm-blooded mammals with special receptors located in their legs, hearing sounds made by potential hosts moving nearby, detecting vibrations from potential hosts walking on the ground, and following scent trails.
How Long Do Ticks Live Without A Host?
Ticks usually feed for several days or weeks, depending on how long it takes to get their fill of blood. If they are unable to attach themselves to a host when they become adults, they may die while waiting for the next suitable opportunity. The life cycle of some species is completed in one year, while others may take two years or longer. |
Revisiting Addition and Subtraction
A Lesson for Third Graders
by Suzy Ronfeldt
Students benefit from repeated practice with addition and subtraction throughout the year. In her book, Third-Grade Math: A Month-to-Month Guide (Math Solutions Publications, 2003), Suzy Ronfeldt provides a midyear perspective on providing practice, suggesting fresh approaches to computing with larger numbers that are suitable for older students as well. The problems are useful not only for your students’ learning but also for assessing their progress.
How Old Is the Coin?
Children can find many opportunities to figure out the age of something—whether it’s a coin in their money bags or a character in a book. For example, have students examine quarters and determine the years in which they were minted. For a quarter minted in 1974, for example, have them first estimate the coin’s age. Then give them quiet time to figure out the solution step-by-step on paper. Next, either in pairs or in a class discussion, ask students to share their strategies. (See Figure 1.)
Figure 1. Marcus figured out the age of a dime that was minted in 1966.
- Children find three different coins and record the name and value of each coin and its mint date. Then they figure out the age of the coin today and explain their reasoning.
- Students look for the oldest U.S. coin they can find. They then figure out the age of the coin, showing their work and explaining their reasoning.
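For teachers who want a quick way to generate or check answers to these age problems, here is a minimal sketch in Python of the same subtraction the students do by hand. The years come from the examples above, and the function name is simply one we chose for illustration.

# "How old is it?" check: age = current year minus the year it was made.
from datetime import date

def age_in_years(year_made, current_year=None):
    """How many years ago something was minted, published, or made."""
    if current_year is None:
        current_year = date.today().year
    return current_year - year_made

print(age_in_years(1974))            # age today of a quarter minted in 1974
print(age_in_years(1940, 2003))      # Caps for Sale, written in 1940, as of 2003 -> 63
print(5 + age_in_years(1968, 2001))  # Ramona, about 5 when her book was written in 1968 -> about 38 in 2001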
How Old Is the Character?
When you and the children read books together, you’ll find many opportunities to figure out how old a character might be. For example, in the book Ramona the Pest, by Beverly Cleary (New York: William Morrow), Ramona is in kindergarten, which makes her about five years old. According to the copyright date, the book was written in 1968. How old would Ramona be today? (See Figure 2.)
Figure 2. Nan calculated Ramona the Pest’s age in 2001.
How Long Ago?
Checking the copyright dates of books you read in the classroom provides plenty of opportunities to ask “How long ago?” questions. For instance, Caps for Sale, by Esphyr Slobodkina (New York: W.R. Scott), was written in 1940. Ask your students to figure out how long ago 1940 was. (See Figure 3.)
Figure 3. Kihyun calculated that in 2003, the year 1940 was 63 years ago.
Also, in my class, I meet individually with four children each day to discuss the books they are reading as part of their weekly homework. When you have one-on-one book conferences with students, along with discussing their reading, ask how long ago their books were published.
School Days Problems
Here are three school days problems for your students to solve:
- How many school days are there in a school year? (Make calendars available for students to solve this problem.)
- How many days has school been in session this year so far?
- How many school days are left in the school year?
Calendar Days So Far
At the bottom of CNN’s television picture, the scrolling words often include information on the number of days in the calendar year so far. You can use this information from time to time to pose problems for your students, such as the two below (a quick check for the answers follows the list):
- What day of the year is today?
- December 31 is the 365th day (or 366th day in leap years). How many days are left until the end of the year?
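Here is a minimal sketch in Python, using only the standard datetime module, that checks both answers for whatever day you run it; the variable names are ours.

# Day-of-year check: which day of the year is today, and how many days remain?
from datetime import date

today = date.today()
day_of_year = today.timetuple().tm_yday
days_in_year = date(today.year, 12, 31).timetuple().tm_yday  # 365, or 366 in leap years

print(f"Today is day {day_of_year} of {today.year}.")
print(f"There are {days_in_year - day_of_year} days left until the end of the year.")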
Pages Left to Read in a Book
After you and your students have read some pages in a chapter of a book, ask them to figure out how many pages are left to read. This is also a good question to ask children during one-on-one book conferences.
From Online Newsletter Issue Number 13, Spring 2004
A Month-to-Month Guide: Third-Grade Math
by Suzy Ronfeldt |
First Language Lessons is a teacher’s manual for introducing language arts to 1st through 4th grade students. Based on the classical principles of (1) Memory Work; (2) Copying and Dictation; (3) Narration; and (4) Grammar, you’ll find 100 short lessons per grade that you can use as a grammar primer. The second edition of this book presents the text in a much easier-to-read layout and design, without altering the actual content of the book.
The author, Jessie Wise (co-author of The Well-Trained Mind), presents a rule-based, narrative approach for helping children develop an ear for the correct use of language. What that means for your child is that they spend a lot of time listening to you read a script from the manual; reciting grammar rules 3 times in a row – 3 times a day – multiple days in a row; and answering a few oral questions each day.
Levels 1 and 2
Each lesson can be completed in 5 – 15 minutes. Lessons alternate between reviewing basic grammar rules, memorizing poems, narration, and occasional copywork or dictation. The Level 1 book focuses on oral usage, while Level 2 requires more writing.
Levels 1 and 2 include optional Enrichment Activities, such as drawing a picture of a cat and then looking up the word in an encyclopedia so you can brainstorm a list of adjectives and adverbs to describe different types of cats.
First Language Lessons does not follow a generally accepted list of grammar topics usually covered in 1st or 2nd grades. For example, while the Level 1 (1st grade) book exhaustively covers identifying different types of nouns, pronouns, and verbs in its first 90 lessons, it never explicitly touches the topics of how to make plural nouns, what is an adjective, or even how to properly use punctuation.
Aside from the Copywork and Dictation, you will not find daily written work to reinforce grammar skills. While there is ample opportunity to memorize poems and practice single sentence copywork, children are not expected to put skills into practice through the use of worksheets or other types of written exercises. While the manual provides periodic oral review lessons, there are no actual tests or formal written assessments to use.
The pace of First Language Lessons Level 1 may be a good fit for children who need constant review of essential concepts. However, some children will be frustrated with reviewing the fact that nouns refer to people for 16 lessons in a row – before moving on to learning how nouns can also refer to places. The pace of Level 2 picks up.
Levels 3 and 4
First Language Lessons for third and fourth graders still uses scripted lessons and memorization, but the scope of student work focuses on sentence diagramming. Students also get a number of lessons on letter writing skills and dictionary work. While still a year-long course, the authors recommend that you schedule lessons 2 – 3 times a week, with 30-minutes per lesson. Due to the amount of topic review in Level 3, you can actually start your grammar instruction with this level without worry of having missed foundational skills or knowledge if you didn’t complete Level 1 or 2. You will need to buy both the Instructor’s Guide and the Student Workbook for Levels 3 and 4.
Regardless of the level you use, weak auditory learners or children with poor short term memories may have difficulty finding success with First Language Lessons. Memorizing rules and poems can be tough, especially if the only way you’re learning is by listening. Parents can easily adapt to the needs of their child by creating mini-posters that provide visual cues and hang them on walls – but they’ll have to come up with this on their own, as First Language Lessons focuses almost exclusively on a child’s listening ability.
Finally, First Language Lessons provides moral lessons, sans a religious bent. Most of the narration lessons are classic fables or author-written stories that extol the virtues of hard work, cleanliness, and listening to authority. The teacher Q/A scripts used to reinforce lessons presume families are traditional, in that there is a mother and father and extended family members are active in the child’s life. Pronouns use the traditional male singular as an inclusive form for all students.
Date of Review: 11/06/2015 |
“Body on a chip” could improve drug evaluation
MIT engineers have developed new technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans. Using a microfluidic platform that connects engineered tissues from up to 10 organs, the researchers can accurately replicate human organ interactions for weeks at a time, allowing them to measure the effects of drugs on different parts of the body.
Such a system could reveal, for example, whether a drug that is intended to treat one organ will have adverse effects on another.
“Some of these effects are really hard to predict from animal models because the situations that lead to them are idiosyncratic,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation, a professor of biological engineering and mechanical engineering, and one of the senior authors of the study. “With our chip, you can distribute a drug and then look for the effects on other tissues, and measure the exposure and how it is metabolized.”
These chips could also be used to evaluate antibody drugs and other immunotherapies, which are difficult to test thoroughly in animals because they are designed to interact with the human immune system.
David Trumper, an MIT professor of mechanical engineering, and Murat Cirit, a research scientist in the Department of Biological Engineering, are also senior authors of the paper, which appears in the journal Scientific Reports. The paper’s lead authors are former MIT postdocs Collin Edington and Wen Li Kelly Chen.
When developing a new drug, researchers identify drug targets based on what they know about the biology of the disease, and then create compounds that affect those targets. Preclinical testing in animals can offer information about a drug’s safety and effectiveness before human testing begins, but those tests may not reveal potential side effects, Griffith says. Furthermore, drugs that work in animals often fail in human trials.
“Animals do not represent people in all the facets that you need to develop drugs and understand disease,” Griffith says. “That is becoming more and more apparent as we look across all kinds of drugs.”
Complications can also arise due to variability among individual patients, including their genetic background, environmental influences, lifestyles, and other drugs they may be taking. “A lot of the time you don’t see problems with a drug, particularly something that might be widely prescribed, until it goes on the market,” Griffith says.
As part of a project spearheaded by the Defense Advanced Research Projects Agency (DARPA), Griffith and her colleagues decided to pursue a technology that they call a “physiome on a chip,” which they believe could offer a way to model potential drug effects more accurately and rapidly. To achieve this, the researchers needed new equipment — a platform that would allow tissues to grow and interact with each other — as well as engineered tissue that would accurately mimic the functions of human organs.
Before this project was launched, no one had succeeded in connecting more than a few different tissue types on a platform. Furthermore, most researchers working on this kind of chip were working with closed microfluidic systems, which allow fluid to flow in and out but do not offer an easy way to manipulate what is happening inside the chip. These systems also require external pumps.
The MIT team decided to create an open system, which essentially removes the lid and makes it easier to manipulate the system and remove samples for analysis. Their system, adapted from technology they previously developed and commercialized through U.K.-based CN BioInnovations, also incorporates several on-board pumps that can control the flow of liquid between the “organs,” replicating the circulation of blood, immune cells, and proteins through the human body. The pumps also allow larger engineered tissues, for example tumors within an organ, to be evaluated.
The researchers created several versions of their chip, linking up to 10 organ types: liver, lung, gut, endometrium, brain, heart, pancreas, kidney, skin, and skeletal muscle. Each “organ” consists of clusters of 1 million to 2 million cells. These tissues don’t replicate the entire organ, but they do perform many of its important functions. Significantly, most of the tissues come directly from patient samples rather than from cell lines that have been developed for lab use. These so-called “primary cells” are more difficult to work with but offer a more representative model of organ function, Griffith says.
Using this system, the researchers showed that they could deliver a drug to the gastrointestinal tissue, mimicking oral ingestion of a drug, and then observe as the drug was transported to other tissues and metabolized. They could measure where the drugs went, the effects of the drugs on different tissues, and how the drugs were broken down. In a related publication, the researchers modeled how drugs can cause unexpected stress on the liver by making the gastrointestinal tract “leaky,” allowing bacteria to enter the bloodstream and produce inflammation in the liver.
Kevin Healy, a professor of bioengineering and materials science and engineering at the University of California at Berkeley, says that this kind of system holds great potential for accurate prediction of complex adverse drug reactions.
“While microphysiological systems (MPS) featuring single organs can be of great use for both pharmaceutical testing and basic organ-level studies, the huge potential of MPS technology is revealed by connecting multiple organ chips in an integrated system for in vitro pharmacology. This study beautifully illustrates that multi-MPS “physiome-on-a-chip” approaches, which combine the genetic background of human cells with physiologically relevant tissue-to-media volumes, allow accurate prediction of drug pharmacokinetics and drug absorption, distribution, metabolism, and excretion,” says Healy, who was not involved in the research.
Griffith believes that the most immediate applications for this technology involve modeling two to four organs. Her lab is now developing a model system for Parkinson’s disease that includes brain, liver, and gastrointestinal tissue, which she plans to use to investigate the hypothesis that bacteria found in the gut can influence the development of Parkinson’s disease.
Other applications include modeling tumors that metastasize to other parts of the body, she says.
“An advantage of our platform is that we can scale it up or down and accommodate a lot of different configurations,” Griffith says. “I think the field is going to go through a transition where we start to get more information out of a three-organ or four-organ system, and it will start to become cost-competitive because the information you’re getting is so much more valuable.”
The research was funded by the U.S. Army Research Office and DARPA. |
Home or community gardening is gaining popularity worldwide. According to a study by the National Gardening Association (NGA), over 30% of American households are actively engaged in growing their own food. The same is happening in the UK where more people are gardening for food and to combat pollution. Unfortunately, poor drainage can deter many from realizing the full potential of gardening.
What Is Drainage?
Drainage is the process by which water moves or passes through the soil. Gardens with poor drainage can create an environment that is nearly impossible for plants to grow. As such, it is important to always check drainage in your garden if you want to maximize your garden.
Why Drainage Matters
Plant roots need to breathe. Poorly drained soils often become waterlogged, filling the air pockets in the soil with water and suffocating the root systems. This reduces the amount of oxygen available to the microbial population and adversely affects root development.
As oxygen levels reduce, carbon dioxide levels increase, which creates an unbalanced microenvironment and promotes the growth of bad bacteria. With time, the plant roots start to rot or develop fungal or bacterial diseases.
Poorly drained soils also reduce nutrient bioavailability by stopping or reducing the rate of decomposition of organic matter. As a result, root development can be stunted and the entire health and development of the plant restricted.
Factors Affecting Soil Drainage
Soils are comprised of different amounts of clay, sand, gravel, and silt. Soils with high amounts of gravel drain faster than soils high in clay and are prone to soil erosion. Most soils are usually a blend of all the constituents mentioned above. Therefore, understanding your soil type can help you choose the right corrective measures.
Healthy soils are rich in organic matter or humus. These nutrients and life are usually derived from decaying animal and plant matter.
The amount and source of organic material in your soil can affect the drainage. For instance, if your soil contains densely packed roots, it will not drain well.
Apart from having organic matter, healthy soils should also be teeming with worms, healthy bacteria, beneficial fungi, and other beneficial organisms. If all these components are present in healthy ratios, they will create natural aeration around your plant roots and assist in proper drainage.
Millions of people use containers to garden. Plastic containers without drainage holes have poor drainage and this also applies to wooden and clay containers with extra sealing.
How To Improve Drainage
Minor lawn and garden drainage problems are typically caused by clay soil. That explains why you may have standing water for a few hours after heavy rainfall. Minor issues can be corrected by improving clay soil.
For more serious garden and lawn drainage issues, where water can stand for several days, you may have to find other ways to improve soil drainage. These problems are usually caused by low grading, extremely compacted soil, layers of hard material under the soil, or a high water table. Here are a few solutions.
Install Land Drains
If drainage in your lawn or garden is very poor, you may need to install land drains. This involves digging trenches in your lawn and installing perforated land drain from trusted dealers like easymerchant.co.uk.
Water will drain through your lawn or garden into the perforated land drainage pipe and be channelled away from your garden to another area.
It is worth noting that some bylaws may prohibit you from channeling the water into public sewer systems, so you have to find a suitable area. For instance, you can channel the water to an area without waterlogging issues.
Grow More Plants
Growing more plants can be an inexpensive and sure way to improve drainage in your garden. That means finding plants that can survive in wet conditions. Sadly, most plants can’t tolerate waterlogged conditions.
Some plants you can consider include maples, ferns, mint, willows, bee balm, Cornus alba, filipendula, and more. These plants can thrive in boggy or wet conditions and can play a huge role in removing excess moisture from the ground.
Improve Soil Drainage
Improving your soil permeability can solve drainage issues in your garden. You can do that by digging in lots of organic matter. Several studies show that soil with high amounts of organic matter tends to allow water to drain through more easily. You can get a constant supply of organic matter by making your own compost.
Manage Surface Water Run-Off
Incorporating sloping surfaces within your garden can improve drainage. It ensures that surface water is channeled to an area where it can be easily disposed of. However, this can be expensive because you may have to hire a mini excavator.
Other solutions to consider include:
- Using bark chippings
- Building raised beds
- Spiking your lawn
- Installing artificial grass
Drainage plays a significant role in gardening. Without proper management, you will not maximize your outdoor space. |
More than 600 million years of evolution has taken two unlikely distant cousins – turkeys and scallops – down very different physical paths from a common ancestor. But University of Leeds researchers have found that a motor protein, myosin 2, remains structurally identical in both creatures.
The discovery suggests that the tiny motor protein is much more important than previously thought – and for humans it may even hold a key to understanding potentially fatal conditions such as aneurisms.
Says Professor Knight of the University’s Faculty of Biological Sciences: “This is an astonishing discovery. Myosin 2’s function is to make the smooth muscle in internal organs tense and relax involuntarily. These creatures have completely different regulatory mechanisms: the myosin in a turkey’s gizzards allows it to ‘chew’ food in the absence of teeth, while that in a scallop enables it to swim. Yet they have exactly the same structure.”
Myosin molecules generate tension in smooth muscle by adhering to one another to form filaments; a filament grabs hold of a neighbouring filament to generate tension and relaxes by letting go. When the muscle is in a relaxed state, the myosin molecule folds itself up into a compact structure.
This folded structure allows the smooth muscles to adjust to being different lengths so they can operate over a large distance, such as the bladder or the uterus expanding and contracting. In contrast, skeletal muscles operate over a narrow range, defined by how much joints can move.
Professor Knight says: “We were puzzled to find that the scallop’s myosin 2 had retained its ability to fold and unfold, as they don’t need to accommodate a large range of movement. After all, the scallop only moves its shell a little when it swims.
“In evolution, if something is not essential to the survival of an organism, it is not conserved. The fact that the scallop has retained all the functions of its myosin 2 over hundreds of millions of years tells us that this folding is of fundamental functional importance in muscle and that we don’t know as much about it as we need to know.”
In humans, any changes in the composition of myosin within the muscles can be fatal. For example, a swelling in the walls of an artery can cause a brain aneurism, while an enlarged heart can lead to cardiac arrest in a young, fit person.
Says Professor Knight: “Because these malfunctions occur in our internal organs, we are often unaware of what is going wrong until it’s too late. Learning how to control myosin, how to move it around without disturbing the delicate balance between filaments and individual molecules, is an emerging area and one we are only just beginning to tackle.”
The research, funded by BBSRC, is published in the US journal Proceedings of the National Academy of Sciences (PNAS).
Which of the following is/are advantages of virtual memory?
a) Faster access to memory on an average.
b) Processes can be given protected address spaces.
c) Linker can assign addresses independent of where the program will be loaded in physical memory.
d) Programs larger than the physical memory size can be run.
(A) a and b
(B) b and c
(C) b and d
(D) All of the above
Explanation: Virtual memory provides an interface through which processes access the physical memory. So,
A. With virtual memory, page faults and swapping add overhead, so average memory access is slower than direct access to physical memory, not faster. So, it is false.
B. Without virtual memory, it is difficult to give protected address spaces to processes, as they would access physical memory directly; virtual memory gives each process its own address space. So, it is true.
C. The linker can assign addresses independently of where the program will be loaded even without virtual memory, for example by using relocation at load time, so this is not an advantage of virtual memory. So, it is false.
D. Virtual memory allows a process to run using a virtual address space that is larger than physical memory; pages are swapped between physical memory and backing store when physical memory gets full. So, it is true.
So, statement (b) and (d) are Correct.
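To make the reasoning behind (b) and (d) concrete, here is a minimal sketch in Python (not part of the original question) of how a per-process page table maps virtual pages to physical frames. The class, function, and variable names are ours, and a real MMU performs this translation in hardware with a TLB.

# Minimal sketch of virtual-to-physical address translation with a per-process
# page table. Illustrative only.
PAGE_SIZE = 4096  # bytes per page

class PageFault(Exception):
    pass

class Process:
    def __init__(self, page_table):
        # page_table maps virtual page number -> physical frame number, or None
        # if the page is currently swapped out. Each process gets its own table,
        # which is what provides a protected address space (statement b), and the
        # table can describe more virtual pages than there are physical frames
        # (statement d).
        self.page_table = page_table

    def translate(self, virtual_address):
        vpn, offset = divmod(virtual_address, PAGE_SIZE)
        frame = self.page_table.get(vpn)
        if frame is None:
            # Page not resident: the OS must swap it in first, which is why
            # average access time is not faster (statement a is false).
            raise PageFault(f"virtual page {vpn} is not in physical memory")
        return frame * PAGE_SIZE + offset

p = Process({0: 7, 1: 3, 2: None})   # three virtual pages, only two resident
print(hex(p.translate(0x0010)))      # virtual page 0 -> frame 7 -> 0x7010
print(hex(p.translate(0x1010)))      # virtual page 1 -> frame 3 -> 0x3010
try:
    p.translate(0x2010)              # virtual page 2 -> page fault
except PageFault as exc:
    print("Page fault:", exc)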
I begin by having my class watch a video clip of If You Give a Mouse a Cookie. I chose this video because it is an example of a story told mostly in the second person point of view. We discuss that the second person point of view is written with "you" being drawn into the story so that you become part of the story. This POV pulls the reader right into the story. You often find this POV in choose-your-own-adventure books or self-help books. Although the pronouns "you" and "yours" are often indicators that a story is told in the second person point of view, the most important indicator is the perspective or voice of the story, and that you become part of the story. I want to emphasize this so that my students don't get hung up on signal pronouns that might lead them astray.
I show and discuss the Second Person POV Flip Chart with my class to review the major points of our discussion. We discuss and make the connection that the narrator's point of view may be different from that of the characters. The narrator's point of view has great influence on the way stories or events are told. We compare and contrast first person point of view with second person point of view. Making those distinctions in perspective, and using examples found in literature, further distinguishes the different viewpoints on the same story. I build on students' prior knowledge that the same story can be told from different characters' views, such as the different versions of The Three Little Pigs. This lesson pushes toward even deeper analysis in determining whose perspective a story is told from in less obvious texts.
Because my students are high-level readers, I decided that it is important they know how to distinguish POVs, including first, second, and third person. My students will encounter complex POVs in the higher-level texts that they read, so it makes sense to amp up teaching this standard for when they encounter those complexities. In fact, the Common Core Standards usually address this concept in fourth grade, and many of my high-achieving and gifted students are reading at fourth-grade or higher levels.
I show another video to my class, based on the book If You Take a Mouse to School by Laura Numeroff and Felicia Bond. I distribute a Second Person POV Graphic Organizer for students to write their thoughts and reasons why they know it is written from the second person point of view.
Like a second reading, we watch the video a second time for students to take notes and cite examples from the video. I ask students to pair with another student and discuss their findings. They share their evidence proving that the sample story is from the second person perspective. I ask students to create a Trifold Manual that summarizes the second person point of view by giving a definition, an example, and tips for how to identify the second person point of view. The second person POV manual students create serves as a guide for finding more examples of second person POV in literature.
Using the trifold manual as a guide, students select a book from the library that exemplifies the second person point of view. I distribute a Second Person POV Graphic Organizer for students to write their thoughts and reasons why their selection is written from the second person point of view. Most second person texts are procedural texts, and second person point of view is the least common in literature. Therefore, I recommend researching examples from both digital and print sources that exemplify this perspective before you teach this lesson.
Students share their second person POV citations and explanations with the class. Using a rubric for self-assessment, students rate their performance on Citing Text Evidence that effectively shows that the story is told from the second person point of view. We talk about procedural texts such as recipes and assembly directions since these are the most common real world examples of second person point of view. Citing evidence from text and real world applications are the important elements for teaching common core. |
Depressive disorder, frequently referred to simply as depression, is more than just feeling sad or going through a rough patch. It’s a serious mental health condition that requires understanding and medical care. Left untreated, depression can be devastating for those who have it and their families. Fortunately, with early detection, diagnosis and a treatment plan consisting of medication, psychotherapy and healthy lifestyle choices, many people can and do get better.
Some will only experience one depressive episode in a lifetime, but for most, depressive disorder recurs. Without treatment, episodes may last a few months to several years.
An estimated 16 million American adults—almost 7% of the population—had at least one major depressive episode in the past year. People of all ages and all racial, ethnic and socioeconomic backgrounds experience depression, but it does affect some groups more than others.
Depression can present different symptoms, depending on the person. But for most people, depressive disorder changes how they function day-to-day, and typically for more than two weeks. Common symptoms include:
- Changes in sleep
- Changes in appetite
- Lack of concentration
- Loss of energy
- Lack of interest in activities
- Hopelessness or guilty thoughts
- Changes in movement (less activity or agitation)
- Physical aches and pains
- Suicidal thoughts
Depression does not have a single cause. It can be triggered by a life crisis, physical illness or something else—but it can also occur spontaneously. Scientists believe several factors can contribute to depression:
- Trauma. When people experience trauma at an early age, it can cause long-term changes in how their brains respond to fear and stress. These changes may lead to depression.
- Genetics. Mood disorders, such as depression, tend to run in families.
- Life circumstances. Marital status, relationship changes, financial standing and where a person lives can influence whether a person develops depression.
- Brain changes. Imaging studies have shown that the frontal lobe of the brain becomes less active when a person is depressed. Depression is also associated with changes in how the pituitary gland and hypothalamus respond to hormone stimulation.
- Other medical conditions. People who have a history of sleep disturbances, medical illness, chronic pain, anxiety and attention-deficit hyperactivity disorder (ADHD) are more likely to develop depression. Some medical syndromes (like hypothyroidism) can mimic depressive disorder. Some medications can also cause symptoms of depression.
- Drug and alcohol abuse. Approximately 30% of people with substance abuse problems also have depression. This requires coordinated treatment for both conditions, as alcohol can worsen symptoms.
To be diagnosed with depressive disorder, a person must have experienced a depressive episode lasting at least two weeks. The symptoms of a depressive episode include:
- Loss of interest or loss of pleasure in all activities
- Change in appetite or weight
- Sleep disturbances
- Feeling agitated or feeling slowed down
- Feelings of low self-worth, guilt or inadequacy
- Difficulty concentrating or making decisions
- Suicidal thoughts or intentions
Many treatment options are available for depression, but how well treatment works depends on the type of depression and its severity.
At Franche Community Primary School we recognise that a rigorous approach to the teaching of reading will develop pupils’ confidence and enjoyment in this subject. As part of an exciting and engaging creative curriculum, we encourage all pupils to read a wide range of stories, poems, rhymes and non-fiction to develop a wider vocabulary, language comprehension skills and love of reading.
Reading is often described as ‘a gateway into another world’, and for many pupils their world view is indeed shaped by the books that they read. In view of this, we wish to give each child the opportunity to explore social and moral issues and different faiths and cultures, and to promote British Values, by providing access to relevant high-quality texts for both teaching and reading for pleasure.
At Franche, we follow Letters and Sounds. Our aim is to deliver daily, high-quality phonics sessions to enable our children to blend and segment words with confidence, as well as ensuring phonics is part of a broad and rich language curriculum. In Early Years and Key Stage One, phonics is taught every morning and is embedded throughout our creative curriculum. During each session, we revisit and review previous sounds through fun and creative investigative activities. We then teach, practise and apply new skills to ensure the children are confident, independent learners. The children benefit from actions and rhymes to help them remember new sounds. We involve our parents throughout the year through a variety of activities, including parent workshops, family learning, stay and share sessions and fun phonics mornings. Click on one of the buttons below to find out more about how we teach phonics at Franche.
Guided reading sessions focus on explaining vocabulary, retrieving key information and interpreting the meaning of texts through the use of high-quality, topic-related texts and VIPERS skills. VIPERS is a range of reading prompts designed to improve comprehension skills. Pupils explore a wide range of genres, exposing them to a broad and balanced range of texts. Each pupil has a Reading Journal in which they are encouraged to annotate that week’s text, jot down thoughts and reactions, and complete any text-related activities to demonstrate the depth of their understanding. In addition to guided reading, each pupil reads with an adult on a 1:1 basis at least once a week, reading their home reading books.
We encourage all pupils to share a book at home with their grown-ups as we believe that this not only helps inferential skills but also supports a lifelong love of reading. It is essential that pupils choose their own books to promote reading for pleasure. Pupils will work through the book colour bands and then become free readers. Each class has a home library where pupils can borrow high interest books in addition to their home reading book and Key Stage Two pupils can also access our fabulous school library during devoted class time, breaks and lunchtimes. Pupils are encouraged to read every day at home in our ‘R.E.D.’ (Reading Every Day) initiative.
Reading every day is key to reading fluently and with confidence, and also makes pupils eligible for weekly and half-termly rewards.
Reading for pleasure
At Franche Community Primary School we encourage pupils to develop the habit of reading widely and often, for both pleasure and information. Our amazing school library has a wide range of fiction and non-fiction books by well-known authors, and we are developing a bank of new, award-winning texts by less familiar writers. Displays in the library provide information to help pupils make choices based on what they have previously enjoyed, or encourage them to take a chance on something new!
Each Key Stage Two class has weekly library time where pupils can return books and browse the shelves for a new choice. Class teachers attend these sessions and discuss choices with pupils, using this knowledge to make their own recommendations.
Each class also takes part in four 20-minute Book Babble sessions every week, where pupils independently read books of their own choosing. At the end of each session, pupils engage in a five-minute discussion with their teacher or classmates about the book they have been reading.
We hold many reading events throughout the year to promote reading for pleasure, and our teachers always love to share their love of books with the children! Take a look at our 'Franchanory' page to listen to our teachers reading some of their favourite stories, or watch one of our World Book Day videos to see how many different places our teachers love to read!
Preschool Social Studies Social Skills Worksheets
Below is a list of all worksheets available under this concept. Worksheets are organized by concept within the subject.
Click on a concept to see a list of all available worksheets.
- Senses and Feelings
Quiz your kindergartener on feelings and the five senses with this cute picture test.
- Manners Worksheet
This manners worksheet shows even the youngest child the basics of being polite and helpful. Try this manners worksheet to teach your child proper etiquette.
- First, Next, Last 2
In preschool, your little one should be learning what we call "common knowledge." In this worksheet, she'll label each picture as first, next or last.
- All About Me, and You!
What's the preschooler's favorite subject? Himself! Kids love questions about themselves, but this worksheet also has your kid finding out about someone else.
- All About My Hair
Don't do this worksheet on a bad hair day! It's all about you and your hair.
- First, Next, Last 3
Have your child look at these pictures and decide which steps in the sequence go first, next and last.
- Emotions Flashcards
Tap into your emotions with these adorable flashcards. Kids can practice recognizing and naming the different emotions that we feel.
- First, Next, Last 5
Help your preschooler with the basics with this fun worksheet. Can she label each step in the sequence as first, next or last?
- Map Fun 1
Give your preschooler an introduction to maps with basic scenarios like walking to school or mailing a letter.
- Firefighting Equipment
Help your child cut out the items that a fireman would need to do his job, and paste them over the items that don't belong.
- House Rules
Every family has house rules, even if they aren't posted anywhere. Let your child make up her own house rule and make a poster for it.
- First, Next, Last 1
Test your preschooler's knowledge of common activities going on in the world, labeling the steps that come first, next and last.
- Follow the Exit Route
Schools and other buildings have exit routes for emergencies. Your preschooler will read the instructions and draw the exit route on the diagrams.
- Map Fun 2
Has your kid ever tried working with a map? Use this worksheet to give your kid a fun start with maps.
- Haunted House Map
Follow directions to find a key and escape a spooky haunted house.
- Paper Doll Boy: Party Fashion
How would you dress for a party? Help your little learner to identify different articles of clothing with a fun paper doll activity.
- Paper Doll Girl: Party Fashion
Dress to impress with a paper doll activity! Your child can practice identifying different articles of clothing as she dresses this doll for a party.
- Map Fun 5
Here's a worksheet for a rainy day. Can't go to the park? Settle for a park map. Follow the directions and help your preschooler develop some early math skills.
- First, Next, Last 4
Does your kid know how to mail a letter? Make sure he can get it right with this fun worksheet where he labels which steps go first, next and last.
- Map Fun 4
If you were a house cat, this would be a map of the world, but it's really just a map of a house. Read the instructions to your child and watch him map a route.
- Map Fun 3
Does your preschooler love maps? No, not apps...maps! They're kind of important, so start your little one off with some fun worksheets to introduce her to them. |