1. Start your class with a discussion of the question "Who discovered America?" Allow students to express their opinions. Depending on the discussion:
* Agree with students who suggest Christopher Columbus; he is credited in most history books.
* Agree with students who suggest the ancestors of Native People in North America may have come from Asia. Explain the belief or theory that, in the distant past, a land connection existed between the continents.
* If any students are already aware of the early Viking settlements, use their input to present the concept: "the Vikings discovered America, 500 years before Christopher Columbus."
Emphasize that Viking history was not written, but passed on orally in Viking Sagas. Announce to your students that they will all have the opportunity to create their own personal Viking Sagas.
* If none of your students suggest the history, theory, or concept, you can present them yourself. Describe them in your own words (recommended). When you get to the Viking concept, read the two-page handout to/with them.
Be prepared to pronounce the names of the Vikings involved, e.g.:
* Erik the Red, and Herjolf (partner of Erik the Red).
* Bjarni Herjolfsson (son of Herjolf), who discovered America by accident.
* Leif Eriksson (son of Erik), known as 'Leif the Lucky,' who explored the east coast of America.
* Thorvald Eiriksson (second son of Erik), the first Viking to die fighting with the Native People.
* Thorstein Eiriksson (third son of Erik), whose widow, Gudrid, married Thorfinn Karlsefni. Karlsefni and Gudrid built the first European settlement at L'Anse aux Meadows (Straumfjord).
* The Beothuks, a warrior tribe of Native People with white skin, blue eyes, and blonde hair.
* Freydis (daughter of Erik), a female Viking warrior who disgraced her family name.
* You don't require any additional materials to use this free lesson. After you've tested it, you could ask your librarian to obtain any of the following products from Little Brick Schoolhouse. (www.littlebrick.com) They could be used to enhance your lesson in the future.
1. The story of what happened to Karlsefni and Gudrid is told in an illustrated comic/magazine, The Saga of Karlsefni, ($2.00/copy)
2. Stories about the four children of Erik the Red, and many others, are told in the Discovering Canada book, The Vikings (90 pages, $14.95). The book contains many student projects based on Viking culture:
* Back-Yard Archaeology
* Build a model Viking knarr (ocean-going ship)
* Writing With Runes (using symbols of the Vikings' runic alphabet)
* Play a Viking Board Game
* Find Your Latitude
* Make a Husnotra
* Find the North Star
* Create a Game of Viking Adventure (Westward to Vinland)
* Make cardboard Viking weapons
* Make a Viking Helmet
* Build a model Viking House
* Solve a Vinland Crossword Puzzle
3. The story of Thorvald Eiriksson is one of four short stories told on a DVD, A Really Short History of Canada, made by Canada's History Television. The other three stories are about early fur traders and settlers in Canada. (See our website for more information on these 3-5 minute videos.) The School Kit includes public performance rights, a 45-page Teachers' Guide, and student assignments. ($49.90)
4. Give your students a library or internet 'map assignment' to discover the locations of Newfoundland in Canada, Greenland, Iceland, & Norway. They will discover the route of the first Europeans to arrive and settle in North America.
5. Give your students a library or internet challenge to discover more information about the restored Viking settlement at L'Anse aux Meadows in Newfoundland today.
|
LOUISVILLE, KY (WAVE) - Bees are key to the sustainability of our planet. They are an important pollinator not just for wild plants but also for crops. Without the 30,000 bee species around the world, we wouldn't have our morning coffee or that eggplant parmesan for dinner. Ninety commercially grown crops in North America rely on honeybees, and more than $15 billion worth of crops are pollinated by honey bees in the United States each year. Fewer bees, in short, mean a less productive harvest.
Most of us have heard about pesticides and disease affecting bees, but weather and an evolving climate affect them as well, with a significant impact on their populations and habitat. A 2015 study found that as temperatures around the globe rise, the southern boundaries of North American bumblebee ranges are being pushed north, while the northern edges of the ranges remain the same. Researchers found that bumblebees have lost nearly 200 miles of range across North America and Europe.
Honey bees don't hibernate during the winter; they survive by creating their own heat source. They cluster together around the queen, eggs, and larvae in the hive as temperatures fall. The honey bees on the outside of the cluster insulate those on the inside of the sphere, which feed on honey stockpiled for winter, according to Terminix. When temperatures drop below 57 degrees Fahrenheit, the bees are basically stationary, using their combined body heat to stay warm. They create this heat by flexing their thorax muscles, producing a vibration that raises their body temperature and can bring the temperature at the center of the cluster into the low 90s!
This is why honey bees toil during warmer weather. Without a stockpile of honey, they will have no energy to keep warm, and without warmth in the winter they will die.
German scientists found that warming temperatures may cause temporal mismatches between bees and the food they depend on. If a bee hatches too early in the spring, it may be without food for the first days or even weeks of its life, especially if a warm period is followed by a cold snap. The queen lays eggs when the weather becomes milder, increasing the worker population. A wave of cold air during this time would leave the bees unable to forage and replenish the quickly depleting honey stores of the ever-growing colony. The Collaborative Research Center found that "a minor temporal mismatch of three or six days is enough to harm the bees."
Temperature is not the only thing that affects bees' foraging habits; precipitation does too.
Researchers say that bees do not leave their hives during rain, and that they gather water to keep the colony cool on very hot days. Weather that impacts their food sources puts bees at an even greater disadvantage. Weather that's too dry can affect flowers, an important food supply, reducing the amount of nectar available for harvest. Honey bees also avoid certain flowers after rain, because the nectar becomes diluted.
As the climate evolves, temperature, precipitation, and humidity fluctuations will most likely impact plant life, in turn, impacting the honey bees which we so greatly depend on.
|
Harassment and Bullying Policy
Our aim is to create a healthy environment where you feel valued and respected, where you can make full use of your abilities, skills and experiences and participate with others to:
LEARN, LAUGH AND LIVE
As part of this, we want to do everything we can to prevent and eradicate harassment and bullying, and all forms of inappropriate behaviour in our groups.
WHAT WE MEAN BY HARASSMENT
We define harassment as any behaviour which someone finds unwanted, offensive, demeaning, humiliating or unreasonable, whether it is intentional or unintentional. This could be related to (but not limited to) individual differences, such as race, religious beliefs, creed, colour, nationality, ethnic or national origins, sexual orientation, marital or parental status, sex, age or disability.
Whether someone considers certain behaviour to be harassment is also down to the impact it has on them – and whether that impact is negative. It is the impact of the behaviour that’s important, not the intent.
The main characteristic of harassment is that it is unwanted. Harassment can take the form of a single act or a series of acts over a period of time, and it can include abuse of power or position.
Examples of harassment
- Derogatory remarks which are offensive, such as jokes or banter relating to race, creed, colour, nationality, ethnic or national origin, sexual orientation, marital or parental status, gender, age or disability.
- Expression of racist, sexist or similarly offensive views.
- Suggestive remarks, gestures, innuendo, leering, unwanted advances, compromising invitations or requests for sexual favours.
- Physical threats and abuse or unwanted physical contact.
- Physical assault.
- Offensive language or gestures.
- Offensive, threatening or demeaning electronic communication.
- Offensive or objectionable literature, graffiti or pictures.
WHAT WE MEAN BY BULLYING
Bullying is a form of harassment. It is unwanted, offensive, humiliating, malicious behaviour that undermines someone's self-esteem and confidence. It can include persistently negative or malicious attacks on an individual or group of people.
Examples of bullying
- Abuse of power that results in any form of unfair discrimination.
- Persistent, unjustifiable criticism.
- Humiliating and overly hostile behaviour.
- Physical assault
- Non-co-operation, isolation or exclusion by other group members.
- Any other conduct which creates an intimidating, hostile or humiliating social environment.
IF YOU HAVE ANY CONCERNS ABOUT HARASSMENT OR BULLYING:
- It is important that you raise them as soon as you can, to ensure we can take appropriate and effective action as soon as possible. If you feel able:
- You should approach the individual you believe is acting inappropriately, and ask for their behaviour to stop. In some instances, an individual may not be aware their behaviour is upsetting you, and will willingly change once they know it’s causing offence.
- If the harassment or bullying continues, or you feel unable, for whatever reason, to approach the person who is causing you offence: you can discuss the situation confidentially with an appropriate friend or group member, or you may wish to go straight to a Committee Member for help. After your discussion that person may speak confidentially with the individual you’ve complained about, on your behalf, to ask them to stop the inappropriate behaviour.
LEVELS OF DISCIPLINARY ACTION
- A verbal warning about future conduct, given by two elected officers and confirmed in writing.
- A written warning which clearly states what will happen if the situation is repeated.
- A final written warning.
- Exclusion from an interest group.
- Termination of membership of Becconsall U3A.
- If any act of extreme conduct occurs, the individual concerned will immediately have their membership terminated and the staged disciplinary process above will not apply.
RIGHT OF APPEAL
If exclusion or termination of membership happens, there is a right of appeal; the appeal must be lodged in writing within 7 days and will then be considered by the Committee. You have the right, accompanied by a friend, to make a personal representation to the Committee when it is discussed. A final decision will be communicated to you by letter.
Remember everyone is here to:
LEARN, LAUGH AND LIVE
and we want to create an environment for you to be able to do just that.
|
Functions of Thyroid Hormones
The thyroid hormones generally work to increase the body’s metabolic rate by increasing the rate at which chemical reactions occur inside cells. They affect the following metabolisms:
- Protein metabolism: Thyroid hormones stimulate protein synthesis by speeding up transcription in the nucleus and translation on the ribosome. This primarily results in an increased number of cellular enzymes.
- Carbohydrate metabolism: They stimulate glucose absorption from the gut and increase the secretion of insulin as well as glucose uptake by body cells; they also speed up glycolysis and gluconeogenesis.
- Fat metabolism: They increase fatty acid concentration in the plasma by mobilizing fatty acids from adipose tissue, a process known as lipolysis.
- Vitamin metabolism: By increasing the synthesis of cellular enzymes, the thyroid hormones subsequently increase the body’s need for vitamins. Vitamins are crucial components of enzymes and coenzymes in metabolic reactions.
- Cardiovascular system: Thyroid hormones directly increase the heart rate. They also increase cardiac output: increased metabolism in the body amplifies blood flow, which in turn raises cardiac output.
- Respiratory system: Thyroid hormones increase the body’s metabolism which, in turn, raises the demand and utilization of oxygen. Increased oxygen demand leads to higher respiratory rate and depth.
- Central Nervous System: Maternal levels of thyroid hormones are most important for CNS development during the perinatal period; proper maturation of the central nervous system is highly dependent on adequate thyroid hormone levels during this time. Low thyroid levels in the pregnant mother can lead to permanent mental retardation.
Thyroid stimulating hormone
This glycoprotein hormone is produced by the thyrotrope cells in the anterior pituitary. It works through the adenylate cyclase-cAMP mechanism to increase the synthesis and secretion of thyroid hormones from the follicular cells of the thyroid gland. Thyroid releasing hormone, produced by the hypothalamus, causes the release of thyroid-stimulating hormone from the pituitary gland.
Thyroid binding globulin
This binding globulin has the highest affinity for thyroid hormones in the plasma. Its levels in the blood can be used to test for thyroid diseases, especially in the case of elevated endogenous thyroid hormones.
Maternal Thyroid Changes During Pregnancy
Early in pregnancy, the mother’s thyroid hormone production increases by 50 %.
Estrogen is a primary female sex hormone which also plays a major role in pregnancy. Estrogen contributes to the development of many fetal parts, mainly stimulating the fetus’s adrenal glands to produce hormones. It also maintains the uterus to accommodate the pregnancy, in addition to responding to oxytocin.
One of the functions of estrogen is to cause increased levels of thyroid binding globulin synthesis and release from the liver. This significantly increases its blood concentration during pregnancy.
TBG has a higher affinity for T4 than for T3; hence, an increase in the blood concentration of TBG leads to lowered levels of free T4 in the blood. This triggers a negative feedback reaction that increases the production of TSH from the anterior pituitary and, as a result, thyroid hormone production. The final effect of increased TBG is therefore amplified production of thyroid hormones, which meets the demands of the pregnant woman's body.
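To make this loop concrete, here is a minimal toy simulation of the negative feedback (a sketch only: the set point, gains, and binding rule are invented for illustration and are not physiological values).

```python
# Toy model of the TBG / free-T4 / TSH negative-feedback loop described
# above. All constants are illustrative assumptions, not physiology.

def simulate_feedback(tbg_level, steps=30):
    """More TBG binds more T4, lowering free T4; low free T4 raises TSH,
    which raises T4 production until free T4 returns toward its set point."""
    set_point = 1.0    # target free-T4 level (arbitrary units)
    production = 1.0   # total T4 output per step
    free_t4 = tsh = 0.0
    for _ in range(steps):
        free_t4 = production / (1.0 + tbg_level)  # TBG sequesters T4
        error = set_point - free_t4               # deviation from set point
        tsh = max(0.0, 1.0 + 2.0 * error)         # low free T4 -> high TSH
        production += 0.5 * (tsh - 1.0)           # TSH drives production
    return free_t4, tsh, production

# With elevated TBG (as in pregnancy), total production rises until free T4
# is restored: more TBG, more total T4, near-normal free T4 and TSH.
print(simulate_feedback(tbg_level=0.5))
print(simulate_feedback(tbg_level=1.5))
```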
Furthermore, a hormone called human chorionic gonadotropin (hCG), which is produced by the syncytiotrophoblast in the placenta of pregnant women, also stimulates the thyroid gland due to its similarity to TSH. hCG levels are highest during the last days of the first trimester; therefore, TSH levels fall during this time.
Increased demand for iodine and pregnancy-induced goiter
Iodine plays a pivotal role in thyroid hormone synthesis. There’s an increased demand for iodine in pregnant women, and it is not only due to increased synthesis of thyroid hormones. Another reason is the increased glomerular filtration rate in the kidneys, which results in loss of iodine to the urine, in addition to the fetus taking a lot of maternal iodine for its own development. If the mother does not take an adequate supply of iodine supplements, a goiter can form.
Goiter is a swelling in the lower part of the neck caused by an enlarged thyroid gland. Because of iodine deficiency, thyroid hormones cannot be synthesized in an adequate amount, which results in increased TSH concentration from the anterior pituitary as a feedback mechanism. TSH continuously stimulates the follicular cells in the thyroid gland to make more thyroid hormones; this persistent stimulation results in follicular cells growing in size and proliferating. This change in the follicular cells causes diffuse hyperplasia of the thyroid gland, which leads to goiter.
Thyroid stimulation by hCG
hCG can weakly turn on the thyroid because it can bind to and transduce signaling from the TSH receptor (hCG and TSH are structurally very similar). High circulating hCG levels in the first trimester may result in a slightly low TSH. When this occurs, TSH decreases slightly in the first trimester and then returns to normal for the remainder of the pregnancy.
Thyroid hormones and fetal brain development
Research has proved that thyroid hormones play the most vital part in the last stages of brain differentiation. This includes formation of axons and dendrites, myelination, formation of synapses and neuronal migration.
Thyroid Function Tests During Pregnancy
| System | Parameter | Change | Mechanism |
|---|---|---|---|
| Thyroid testing | TSH | ↓ 1st trimester | Elevated β-hCG |
| | Total T3 & T4 | ↑ | Estrogenic elevation of TBG; normal free hormone levels |
| | Cortisol | ↑ | Estrogenic elevation of CBG; normal free cortisol levels |
| Cardiovascular | PVR | ↓ 50 % 1st trimester | Progesterone, NO, calcitonin |
| | CO | ↑ 50 % 1st trimester | ↑ plasma volume, ↓ PVR |
| | Heart rate | ↑ beats/min | |
| | Respiratory rate | ↑ to 20/min | |
| | Plasma volume | ↑ 50 % | |
| | RBC mass | ↑ 20–30 % | Hemodilution |
The human fetus has two main sources of thyroid hormones: the mother’s and its own. The fetus does not start synthesizing its own thyroid hormones until 12 weeks of gestation.
Causes of congenital hypothyroidism (cretinism) include:
- Maternal hypothyroidism: Overt maternal hypothyroidism alone cannot cause cretinism, because it would usually result in infertility. However, the subclinical type of maternal hypothyroidism has been found to cause significant developmental abnormalities in the baby.
- Fetal hypothyroidism: These babies do well as long as the mother's supply of thyroid hormones is still circulating in their bodies. However, sometime after birth, symptoms of hypothyroidism start manifesting.
Causes can include an anatomical defect in the thyroid gland, a genetic defect, or iodine deficiency in the mother. Symptoms include mental retardation, jaundice, hypotonia, decreased activity, small size and decreased weight gain. These babies also present with a large anterior fontanelle, coarse facial features, macroglossia (large tongue), pale and dry skin, goiter and umbilical hernia.
Congenital hypothyroidism is diagnosed by low levels of T4 and elevated levels of TSH. Thyroid scanning can further help identify the cause of the disease. Early diagnosis is crucial for managing this condition. Levothyroxine is used as an optimal treatment.
|
Description & Behavior
Krill, whose name means "whale food" in Norwegian, are eucarid crustaceans divided into two families. The family Bentheuphausiidae consists solely of Bentheuphausia amblyops (G. O. Sars, 1883), a deep-water krill species that differs from members of the family Euphausiidae in that it is not bioluminescent. The family Euphausiidae includes the other 89 known krill species, among them one of the most common, Euphausia superba (Dana, 1852), the species most frequently associated with the name krill.
Krill in the Euphausiidae family are shrimp-like crustaceans that swarm in dense shoals, particularly in Antarctic waters. Krill swarms may be as dense as 10,000 krill per cubic meter of water and can stretch for kilometers. Individuals typically range in length from 8-70 mm, with the largest species reaching up to 14 cm. The bioluminescence of krill species in the Euphausiidae family is a strong blue-green light that may be used for communication, helping them congregate and spawn.
Like other crustaceans, krill have a hard calcified exoskeleton divided into three tagmata, or segments: the cephalon (head), thorax, and abdomen. The head and thorax are fused into a cephalothorax and are sometimes difficult to distinguish. Generally, the head has five segments, the thorax eight, and the abdomen six. In most krill, each segment has a pair of appendages, though variations occur. The anterior-most five abdominal appendages are biramous, meaning they have two branches. The last abdominal appendages are flattened and, together with the telson, form the tail fan.
World Range & Habitat
The krill Bentheuphausia amblyops is a bathypelagic species found in the southern part of the North Atlantic Ocean near 40°N, in deep waters of at least 1,000 m. Euphausia superba is found in Antarctic waters between the continent and the polar front, generally at depths of 100 m or less. Euphausia crystallorophias is also common in Antarctic waters, but tends to inhabit pack and floating ice as well as pelagic waters.
Other krill species are found worldwide in open seas such as: the Pacific Ocean between 55°N and 55°S, in the Indian Ocean between 10°N and 10°S, the Gulf of Oman and east of Sri Lanka. Most are found near the surface, but some have been found as deep as 2,000 m.
Feeding Behavior (Ecology)
Most krill species are filter feeders and consume diatoms (phytoplankton), small one-celled or colonial algae in the class Bacillariophyceae, and occasionally zooplankton. They filter food with 1-3 thoracic appendages that have been modified into feather-like maxillipeds. Their mouth leads to a two-chambered stomach that contains a gastric mill to aid digestion.
Krill tend to rise to the surface at night to feed, and retreat to deeper waters during the day.
Krill reproduce during the spring by spawning eggs in several “broods” that contain as many as 8,000 eggs. Krill can release eggs multiple times per season, and the spawning season can last as long as 5 months.
Conservation Status & Comments
Krill are heavily fished commercially. The krill fishery has been the largest fishery in the Southern Ocean for the last 25 years, particularly by Ukraine, Poland and Japan. Because krill are at the center of the Antarctic food web, these countries have signed an agreement limiting the size of krill catches to hopefully keep a large enough population for the larger animals that feed on them such as baleen whales, penguins, and seals.
Visit Krill Facts – centre of information on Krill and Antarctica – KrillFacts.org for more information about krill in the Antarctic Region.
|
The Wanariset Orangutan Reintroduction Project was established in East Kalimantan, Borneo, in 1993 to help rehabilitate orphaned or injured orang-utans and then reintroduce them to the wild. In its first seven years the Project released 169 animals into two sites, representing some 1.7% of the minimum number of animals estimated to remain in Borneo. In many forest fragments, the existence of the orang-utans is the only justification for withholding logging concessions.
Success or otherwise of such programmes has historically been assessed in terms of the animals' survival for six months and subsequent reproduction. However, such criteria are not suitable for slow-breeding animals such as orang-utans, which might not be ready to breed for several years following their release. Success must therefore be evaluated in terms of the animals' adaptation to their new environment, which can be measured by comparing behaviour before and after release. Rondang Siregar will be assessing a 'half-way house' concept, in which young animals spend some time prior to release in a habitat more similar to the wild environment, to measure whether this contributes significantly to their successful reintroduction.
|
[0:08] Hi, I'm Oliver Caviglioli, once a Headteacher and now an Information Designer. Let's start with two classroom situations in which some students struggle. One is following a teacher's spoken explanation. Her schema (her understanding of the topic at hand) is in her head and, therefore, invisible to the students. Instead they receive a succession of words, and from these have to guess at the content of her schema. Similarly, when a student is reading a piece of text, written by the teacher or someone else, he has to reconstruct the author's schema in his own head by use of the text alone. Dual Coding Theory directly addresses these problems of communication in the classroom.
[0:53] Humans receive new information from the environment in either visual or verbal formats. There are others, but these two are the most fundamental. Incoming visual information is held in working memory in what is called a visuospatial sketchpad, and incoming verbal information is held and processed in an auditory loop. Both are limited in storage capacity, and both are separate. These two channels are independent of each other but do form, at moments, links or associations. When images are linked in this way to words, they enrich the encoding process, otherwise known as learning. A double memory trace is formed, which correspondingly greatly strengthens the potential for retrieval. The legendary psychologist Paul Kirschner calls this double-barrelled learning.
[1:49] Of equal interest to teachers is the fact that the verbal channel is organised sequentially. That's to say, words are ordered in a line and can only be addressed one at a time. Visual information, by contrast, is what psychologists call synchronously organised. This means that the eye can take in, and understand, many elements at the same time when looking at a simple diagram. Remember that it takes a great many words to accurately describe the simplest of visual images. That should help you realise the power of visuals to communicate complex ideas in the most efficient way to the greatest number of students.
[2:29] Because visuals offer this degree of direct access to knowledge, there are a number of benefits when it comes to teaching and learning. As learning is dependent on our attention, visuals' role in directing students' attention is significant. It's far easier to explain something to students when they have a visual focus to channel their attention. Such graphic displays also help trigger students' prior knowledge, or as we are now describing it, their existing schema. Schemas are not organised in the sequential way that speech or text is, but are closer in structure to the spatial arrangements of diagrams.
[3:06] Instead, then, of wrestling with the fleeting nature of the spoken word and the complex grammar of text, visuals offer students a far more effective way of accessing knowledge. By helping students get a rapid gist of the meaning, they are left with more cognitive resources free to engage in higher-order thinking. A direct result of such deeper thinking is the development of students' own schemas. The visual and explicit links found in diagrams stimulate connections between concepts that lead to more meaningful learning. The degree to which such schemas are organised and meaningful is the degree to which they are easily transferred back into working memory when needed.
[3:49] Such automatic access to their own prior knowledge leaves students' working memory unburdened and fresh to process new information. Building success in learning in this way forms the steadiest of platforms for motivation, as nothing motivates like success. When creating your own resources with dual coding in mind, there are some basic guidelines that can transform the quality and effectiveness of our endeavours. The first of these is to cut the amount of content you had intended to include on your page or presentation slide. It is the simplest and most effective advice you can follow. With your selected content, chunk it. Instead of long sentences across the page, think of how the material falls into different sections.
[4:40] Give each of these sections a heading that stands out. This is the signalling that greatly helps the reading and understanding process. It needn't be jazzy; just a bold version of the font, or perhaps capitals, is all it takes. The following piece of advice may sound rather low-key, petty even: make sure everything is neatly lined up. Every professionally produced page of a newspaper or magazine is designed around a grid where images, titles, and text all align neatly. It gives the page an immediate impression of order and gives the reader confidence. Lastly, and this may be frustrating to some, curb your artistic urges: use fonts and colours with restraint.
An introduction to Dual Coding Theory
In this video, Oliver Caviglioli speaks about how Dual Coding Theory can support the building of knowledge and understanding.
Oliver shares how our presentation of new learning using slides or resources can be enhanced by what research evidence suggests to us. He reflects on how new knowledge is processed by the brain and the role of images in supporting this process.
All of the words and images included in this video were designed by Oliver Caviglioli.
- Visuals are powerful for communicating complex ideas in an efficient way; it takes a great many words to describe the simplest of images
- Images, if chosen correctly for their clarity, enable pupils to get a rapid gist of meaning; leaving them with more cognitive resources to engage in higher order thinking
- Cut the amount of content we intend to include on a slide or resource; chunk the information into headings that stand out; line up information neatly to give the reader confidence in its order; use fonts and colour with restraint
|
Chapter: Our Environment
Q1: Define Ecology
Answer: Ecology is the scientific study of the interactions between organisms and their environment. Ecology integrates all areas of biological research and informs environmental decision-making.
Q2: What is the scope of Ecology Research?
Answer: Ecology study can be broadly classified as:
a. Organismal Ecology
b. Population Ecology
c. Community Ecology
d. Ecosystem Ecology
e. Landscape Ecology
f. Global Ecology
In a nutshell, the study of ecology advocates the protection of nature and the environment.
Q3: Define Ecosystem.
Answer: An ecosystem comprises all the living (biotic) and non-living (abiotic) things that occur naturally on the Earth or in any of its regions. These living organisms interact with each other, and their growth, reproduction and other activities are affected by the abiotic components of the ecosystem.
Q4: Is garden an example of Ecosystem?
Answer: Yes. In a garden, all biotic components (e.g. plants, trees, and animals such as rats, frogs, birds and insects) interact with each other and with abiotic components (garden soil, water etc.) for their growth, reproduction and other activities. Thus a garden forms an ecosystem.
Q5: Give examples of natural ecosystems.
Answer: Forests, ponds, lakes, sea, oceans, coral reefs, rivers etc.
Q6: Give examples of human-made (artificial) ecosystems.
Answer: Gardens, Crop-fields etc.
Q7: Name different abiotic factors that affect the ecosystem.
Answer: Temperature, water, sunlight, salinity, rocks, soil, precipitation and wind.
Q8: What do you mean by a biogeochemical cycle? Name examples of biogeochemical cycles that exist in an ecosystem.
Answer: A biogeochemical cycle is a pathway by which a chemical element or molecule moves through both biotic and abiotic compartments of Earth. These are critical to life and hence for the ecosystem sustenance. A few examples of the biogeochemical cycles are:
- Nitrogen Cycle
- Water Cycle
- Carbon Cycle
- Oxygen Cycle
- Phosphorus Cycle
Q9: What is the biosphere?
Answer: The biosphere is the region of the Earth where life exists. It extends about 20 kilometers up into the atmosphere and about 11 kilometers downwards. Different plants and animals are present in the biosphere, and this diversity of life is an important characteristic of the Earth.
Q10: Define food chain.
Answer: The pathway of transfer of food from one trophic level to another is known as food chain.
Q11: What are trophic levels?
Answer: Trophic levels are the feeding levels in an ecosystem. The trophic levels of living beings represent their placement in a food chain. It also tells the order of consumption and energy transfer throughout the ecosystem (or environment).
In general 4 or 5 trophic levels exist in a food chain. These are:
Producers or Autotrophs: Producers make up the first trophic level, which supports the other trophic levels. It consists mainly of green plants, certain types of green algae, and some types of bacteria. Through photosynthesis, they convert solar energy into food consumable by other organisms.
Primary Consumers: These are the consumers that feed upon producers. In general these are herbivores. Examples are horses, cows, deer, insects, zooplankton (shrimp, protists etc.) and birds.
Secondary Consumers: Secondary or second-level consumers eat primary consumers. In general these are omnivores and carnivores. On land, secondary consumers include many small mammals and reptiles that eat insects, as well as large carnivores that eat rodents and grazing mammals. In aquatic ecosystems, secondary consumers are mainly small fish that eat plankton.
Tertiary Consumers: These are third-level consumers, which feed on secondary consumers, e.g. snakes eating rodents, lions, bears etc.
Quaternary Consumers: Fourth-level consumers are defined in a few food chains, e.g. hawks eating owls or snakes.
All food chains end with top predators: animals with little or no natural enemies.
Q12: Define Detritivores
Answer: Detritivores, or decomposers, are consumers that get energy from dead organic matter (detritus). Important groups of detritivores are prokaryotes and fungi. These organisms secrete enzymes to digest organic matter. They link the consumers and primary producers of an ecosystem.
Q13: Explain the energy relationship between trophic levels.
Answer: The energy relationship between trophic levels can be represented in the form of a pyramid. The following conclusions can be drawn:
- Each food chain can be considered an energy chain.
- The energy captured by the autotrophs does not revert to the solar input, and the energy that passes to the herbivores does not come back to the autotrophs; as energy moves progressively through the trophic levels, it is no longer available to the previous level.
- Plants utilize about 50% of the total available energy for their life processes, and each of the higher trophic levels uses about 90% of its available energy for metabolic activities; only the remaining 10% is transferred to the next trophic level. This is the reason long food chains are not commonly seen in nature.
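A quick numeric sketch of the 10% rule just described (the starting energy value is an arbitrary assumption):

```python
# Illustrative sketch of the 10% energy-transfer rule. The energy captured
# by the producers is a made-up starting value.
energy_joules = 10_000.0
levels = ["producers", "primary consumers", "secondary consumers",
          "tertiary consumers", "quaternary consumers"]

for level in levels:
    print(f"{level:>20}: {energy_joules:10.1f} J available")
    energy_joules *= 0.10  # only ~10% passes to the next trophic level

# After four transfers only 1 J of the original 10,000 J remains, which is
# why long food chains are rarely seen in nature.
```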
Q14: Explicate the principle of a food web.
Answer: Each organism is generally eaten by two or more other kinds of organisms, which in turn are eaten by several other organisms. So instead of a straight-line food chain, the relationship can be shown as a series of branching lines called a food web.
Q15: Define biological magnification.
Answer: Biological magnification means the accumulation of poisonous materials in successive trophic levels of a food chain. This happens when a toxin is ingested and moved up the food chain from one organism to another. As it moves up the food chain, the toxin becomes magnified, or more concentrated. DDT is one example of a harmful substance that has contaminated food chains.
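The same arithmetic run upward illustrates magnification; the starting concentration and the tenfold factor per level below are hypothetical, chosen only to show the trend:

```python
# Toy illustration of biological magnification: a persistent toxin such as
# DDT becomes more concentrated at each step up the food chain.
# The starting concentration and the 10x factor are assumptions.
concentration_ppm = 0.003  # hypothetical toxin level in plankton
for organism in ["plankton", "small fish", "large fish", "fish-eating bird"]:
    print(f"{organism:>16}: {concentration_ppm:8.3f} ppm")
    concentration_ppm *= 10  # toxin accumulates instead of being excreted
```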
Q16: Explain ozone depletion and its impact on our environment.
(Figure: Ozone depletion over Antarctica)
Answer: Life on the Earth is protected from damaging ultraviolet (UV) radiation by a layer of ozone molecules $(O_3)$. The amount of ozone began to drop sharply in 1980. This decrease has been linked to synthetic chemicals called chlorofluorocarbons (CFCs), which are used in refrigerants and fire extinguishers.
1. UV radiation causes a chlorine atom to break away from a CFC molecule.
2. The free chlorine atom hits an ozone molecule and pulls away one oxygen atom from it, forming chlorine monoxide (ClO).
3. A free oxygen atom hits the chlorine monoxide molecule, releasing another free chlorine atom.
4. In this way, free chlorine continues to destroy ozone in the stratosphere. One chlorine atom can destroy more than 100,000 ozone molecules before it is removed from the stratosphere.
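These steps can be summarized as the standard catalytic cycle of stratospheric chlorine:

$$Cl + O_3 \rightarrow ClO + O_2$$
$$ClO + O \rightarrow Cl + O_2$$
$$\text{Net: } O_3 + O \rightarrow 2O_2$$

Because the chlorine atom is regenerated at the end of the cycle, a single atom can keep destroying ozone molecules until it is removed from the stratosphere.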
Decreased ozone levels in the stratosphere increase the intensity of UV radiation reaching the surface. The consequences can be very harmful and may include an increase in skin cancers and cataracts among humans. UV radiation is also harmful to crops and other primary producers and may lead to unpredictable results.
|
Cell Cycle, Mitosis, and Meiosis
The term meiosis (Greek meiosis, "diminution") refers to a special kind of cell division that leads to four genetically different daughter cells, each of which contains only half of the chromosomes of the mother cell. While mitosis produces the cells needed for growth and division of tissues in the body, meiosis forms gametes: either eggs or sperm. Meiosis involves two sequential cell divisions. Like mitosis, it is subdivided into six phases: prophase, prometaphase, metaphase, anaphase, telophase, and cytokinesis.
Beyond this analogy, the first meiotic division (meiosis I) differs from normal mitosis in two important details:
- In metaphase, two rows of chromosomes are formed at the equatorial plate of the cell. This alignment of the homologous (corresponding) chromosomes from the father and mother is called synapsis.
- In the subsequent anaphase I, each pair of homologous chromosomes is separated, while the two chromatids of each chromosome stay together.
After meiosis I, each of the daughter cells contains only a single set of chromosomes (1n), with two chromatids per chromosome (2c). Because the second meiotic division follows directly without an intervening interphase (and therefore without another round of DNA replication), each of these cells divides again, yielding daughter cells with only one chromatid per haploid chromosome: the typical gamete genome of 1n/1c.
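To keep the notation straight, here is the n/c bookkeeping through meiosis as a short script (a plain restatement of the states described above, using the text's own notation):

```python
# Chromosome sets (n) and chromatids per chromosome (c) through meiosis,
# starting from a diploid mother cell, in the notation used in the text.
stages = [
    ("G1, before DNA replication", "2n/1c"),
    ("After S phase / prophase I", "2n/2c"),
    ("After meiosis I",            "1n/2c"),
    ("After meiosis II (gamete)",  "1n/1c"),
]
for stage, state in stages:
    print(f"{stage:<28} {state}")
```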
|
In small villages and hamlets in the mountains of New Mexico live communities of individuals claiming descent from Jewish ancestors from Spain and Portugal. These people, often called secret Jews or crypto-Jews, live within a complex set of identities. Often, externally, they are part of churches of different denominations; the majority are Catholic, but some belong to Protestant churches. Internally, however, they maintain a hidden Jewish identity, with unique customs, practices and beliefs.
While crypto-Jewish communities are found in the mountainous region around Taos, crypto-Jews live in other parts of New Mexico and the wider Southwest. Indeed, they live in all areas settled by Spanish and Portuguese colonists, even along the New England coastline, where many individuals of Portuguese descent settled. All crypto-Jews share one thing in common: they trace their descent back to Jews in Spain and Portugal.
Crypto-Judaism within the Christian world first emerged between 1390 and 1492. (A similar phenomenon existed with Jewish communities in the Islamic world during periods of religious persecution.) Starting in 1390, significant numbers of Jews living in Spain converted to Catholicism. While these conversions were often forced, in many cases individuals chose to convert for economic or social reasons. These communities of new Christians, often called conversos, included a minority of individuals who chose to secretly maintain their Jewish identity, beliefs and practices. These individuals can be considered the first crypto-Jews.
The religion of the crypto-Jews diverged significantly from that of their Jewish compatriots. While traditional Judaism includes a wide range of public practices, often led by men in synagogues, increasingly the religion of the crypto-Jews became a religion of the home, with women often taking a significant role in the practices and their transmission to the next generation. Over time, the practices became increasingly narrow, as memory and knowledge of traditional Judaism began to fade. This trend accelerated with the expulsion of the Jewish community from Spain in 1492. Up till that point, crypto-Jews could draw on the Jewish community for knowledge and even ritual items; this largely ceased after 1492.
While 1492 is a particularly sad point in Jewish history, with the destruction of one of the world’s largest and most successful Jewish communities, it also heralded the opening up of the Americas to colonial expansion and exploitation. A wide range of documentary evidence suggests that crypto-Jews played a part in the expansion, initially into Mexico and later into other parts of the Americas, including the territory that became New Mexico. For unknown reasons, crypto-Judaism in America seems to have persisted in stronger forms than in Spain, although evidence suggests that it may have persisted equally strongly in the mountainous regions of Portugal, particularly around the town of Belmonte.
New Mexico manifestations
In New Mexico, crypto-Judaism was expressed in diverse ways. In the larger cities, particularly those that had a strong religious and military presence in colonial times, crypto-Jews did not maintain communal structures. Only a small set of families, who intermarried, shared the tradition. Religious practices rarely moved outside the private space of the home, although a butcher might slaughter animals in a traditional way. My interviews suggest that one butcher in Albuquerque, Don Siverio Gomez, kept kosher meat away from pork and removed the blood.
The most common practices mentioned by crypto-Jews from all parts of New Mexico, particularly the cities, relate to observance of the Sabbath on Friday night and Saturday. Home practices included variations on lighting candles on Friday evening at sundown, drinking of special wine — occasionally with a blessing in Spanish that was similar to that recited by traditional Jews — and customs related to cleaning, clothing and abstaining from work. Other individuals spoke of family practices associated with different festivals, particularly Passover. These practices included eating unleavened bread similar to matzo and the telling of stories related to the Jews’ exodus from Egypt, often conflated with the story of exile from Spain.
Crypto-Jews in mountainous areas seem to have developed a more communal set of structures and practices. Many villages in Northern New Mexico were illegal settlements, not sanctioned by authorities; they were remote from both religious and military colonial powers. This remoteness allowed these communities to develop unique practices and traditions, a strong communal identity and a degree of freedom to practice openly. Many interviews of individuals from this region suggest that crypto-Judaism was an open secret, well-known to the wider community. This view was never expressed in interviews with individuals from the larger centers of colonial power.
Many individuals from the mountains speak of ritual practices that brought the crypto-Jewish community together. These practices centered on life-cycle events: birth, marriage and death. They also included some celebration of festivals. As in the cities, Passover is the holiday most often mentioned. Rituals surrounding birth are some of the most indicative of crypto-Jewish culture. Since all crypto-Jews were publicly Christian (specifically Catholic in Northern New Mexico), babies needed to be baptized soon after birth. To forgo this would be a public repudiation of the Catholic faith. But soon after the church baptism, crypto-Jewish children would be taken to another location, where they were ritually washed with water or perfume. This practice was seen as washing off the baptism and emphasizing the Jewish origins of the baby.
Stories are both an important mechanism of cultural transmission and a way of illustrating the complexity of crypto-Jewish identity. One family tradition, related by a friend in Albuquerque, serves as illustration.
He told me, “When my great-grandmother Isabelle was born, her family lived in the mountains. There was no church nearby, and they needed to come down to Santa Fe to get the baby baptized. On the way down, the wagon hit a bump and the baby flew from the wagon to the side of the road, but nobody noticed. They got to the church and could not find the baby. They had come so far, so the priest put her in the book anyway. They started back, and there on the side of the road was my great-grandmother, as happy as could be. So they went home.”
This memory highlights the conflict between the Jewish and Christian aspects of the family’s identity. On the one hand, to be good Catholics, a family had to have a baby baptized, and indeed Isabelle’s name is in the baptismal register. On the other hand, to be a good Jew, one should not be baptized. In this story, the conflict is resolved by a trick.
A whispered tradition
It might be assumed from such stories and from the popular depiction of crypto-Judaism that most crypto-Jews are aware of their identity and have practices and rituals that are well understood. This is far from the case. Crypto-Jewish identity is very complex. Most crypto-Jews are only vaguely aware of their Jewish heritage. For some it is merely a whispered tradition ("Somos Judíos," "We are Jews") with little additional content or meaning. For many others it emerges from an attempt to understand strange practices that make them different from their neighbors. Only a small minority have a strong familial tradition with a wide range of practices and beliefs.
The practices and rituals also diverge. Not only did the religion change substantially in Spain, moving from a public to a private tradition, but it also became simplified and narrower due to the progressive loss of traditional knowledge. Many of the practices found today are shaped by these trends. But like all cultural traditions, crypto-Judaism continually changes and takes on new interpretations and practices. So some practices that have no historical Jewish connection have been given new meaning to fit into a hidden Jewish identity. Other practices, learned from neighbors or the Internet, stem from worldwide Jewish rituals and beliefs. And increasingly, some crypto-Jews have an affinity for Zionism, which has also impacted their self-understanding.
Despite the persistence of practices and identity, crypto-Judaism is largely a culture of memory — a culture of stories and narratives passed down between generations. Like all cultures of memory, it is increasingly impacted by internal and external cultural forces, which tend to pose challenges to its persistence. The impact of Hispanic identity, and even more the pervasiveness of American cultural tropes, prevents the whispered messages from being clearly heard and remembered. While some crypto-Jews struggle to maintain their culture, it is possible that in the next generation, crypto-Judaism will become a distant memory, lost in the mountains of New Mexico.
Seth Kunin is a professor and deputy vice chancellor at Curtin University in Australia. He previously worked for 30 years in Scotland and England. He spent more than 10 years doing research in New Mexico and has written widely about crypto-Judaism in New Mexico and the American Southwest.
|
Breastfeeding: Feeding a child human breast milk. According to the American Academy of Pediatrics, human breast milk is preferred for all infants. This includes even premature and sick babies, with rare exceptions. It is the food least likely to cause allergic reactions; it is inexpensive; it is readily available at any hour of the day or night; babies accept the taste readily; and the antibodies in breast milk can help a baby resist infections.
In breast milk, the amino acids (the building blocks of proteins) are well balanced for the human baby, as are the sugars (primarily lactose) and fats. The baby's intestinal tract is aided in its digestion by the vitamins, enzymes, and minerals found in breast milk. Breastfed babies eat more often than formula-fed babies, since breast milk is digested more quickly and leaves the stomach empty more frequently.
Exclusive breastfeeding is ideal nutrition and it is sufficient to support optimal growth and development for the first 6 months after birth, according to the American Academy of Pediatrics. Furthermore, it is recommended that breastfeeding continue for at least 12 months, and thereafter for as long as mutually desired. Infants weaned before 12 months of age should not receive cow's milk feedings, but should receive iron-fortified infant formula. See also: Breastfeeding practices; and Breast milk.
|
One example is teaching your child to say "juice":
1. Identify the targeted word or phrase; this must be done before you start teaching. The item should preferably be a strong reinforcer that the child frequently wants. Usually you want to choose a word the child might use frequently, that is fairly easy for him to pronounce, or that he already approximates in his repertoire.
2. Wait for a situation in which the child wants the item, such as the juice.
3. Delay giving the item until the child says some approximation of the word, or the actual word.
4. Give the child the item immediately, or as soon as possible after the word or approximation is completed.
5. Continue to delay giving the item in every situation in which the child wants it during the day.
6. Delay giving the juice until the child states the complete word clearly.
7. Add other words to the criteria for reinforcement; for example, wait for the child to say "juice, please."
8. Continue this process, adding more words to the original word and additional words for other reinforcers.
In a short time, usually even in the most difficult cases, the child will be talking!
|
Overview of an O'Neill Cylinder
The Inside of an O'Neill Cylinder
Gerard K. O'Neill proposed the O'Neill Cylinder in his book "The High Frontier". An O'Neill Cylinder consists of two counter-rotating cylinders, each with a two-mile (about 3-kilometer) radius and a 20-mile (about 30-kilometer) length. The rotation creates simulated gravity through centripetal force: everything is pushed toward the outer wall. Several design choices stem from this rotation, some to combat its negative effects and others to take advantage of it. Because of the rotation, many people might experience nausea and dizziness; to combat this, the speed of rotation would need to be kept to about two revolutions per minute. To take advantage of the artificial gravity, different parts of the O'Neill Cylinder can rotate at different speeds. In the middle of the cylinder, the artificial gravity is weaker than everywhere else, and manufacturing facilities would be placed there to take advantage of that fact.
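As a rough sanity check on these figures (a sketch only: the 1 g target and the metric radius are assumptions based on the dimensions above), the spin rate needed for Earth-like rim gravity follows from a = ω²r:

```python
import math

# Spin rate needed for ~1 g at the rim of a cylinder with the stated
# ~2-mile radius. a = omega^2 * r.
radius_m = 3_200.0   # ~2 miles, per the text (assumed exact here)
g = 9.81             # target rim acceleration, m/s^2 (1 g, an assumption)

omega = math.sqrt(g / radius_m)      # required angular speed, rad/s
rpm = omega * 60.0 / (2 * math.pi)   # revolutions per minute

print(f"{omega:.4f} rad/s = {rpm:.2f} rpm")
# ~0.53 rpm, comfortably below the ~2 rpm comfort limit cited above.
```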
The cylinders themselves would have six sections along their length: half of them windows, the other half ground. Behind each window would be a mirror to direct sunlight into the cylinder, while night could be simulated by simply moving the mirrors to reveal the blackness of space. Day would be simulated as the sun moved across the mirrors, which would redirect the light. A side effect of the light being shone through the windows and reflected by the mirrors is that the light would be polarized, which could affect certain types of animals, such as bees.
|
The secrets of a new alloy's amazing toughness are seen in this transmission electron microscopy movie, which shows the formation of nano-sized bridges across a growing crack. These bridges inhibit the crack's growth and are one of several mechanisms identified by the scientists that give the alloy incredible toughness and strength. (Credit: Berkeley Lab)
Just in time for the icy grip of winter: A team of researchers led by scientists from the U.S. Department of Energy Lawrence Berkeley National Laboratory (Berkeley Lab) has identified several mechanisms that make a new, cold-loving material one of the toughest metallic alloys ever.
The alloy is made of chromium, manganese, iron, cobalt and nickel, so scientists call it CrMnFeCoNi. It’s exceptionally tough and strong at room temperature, which translates into excellent ductility, tensile strength, and resistance to fracture. And unlike most materials, the alloy becomes tougher and stronger the colder it gets, making it an intriguing possibility for use in cryogenic applications such as storage tanks for liquefied natural gas.
To learn its secrets, the Berkeley Lab-led team studied the alloy with transmission electron microscopy as it was subjected to strain. The images revealed several nanoscale mechanisms that activate in the alloy, one after another, which together resist the spread of damage. Among the mechanisms are bridges that form across cracks to inhibit their propagation. Such crack bridging is a common toughening mechanism in composites and ceramics but not often seen in unreinforced metals.
Their findings could guide future research aimed at designing metallic materials with unmatched damage tolerance. The research appears in the December 9, 2015, issue of the journal Nature Communications.
“We analyzed the alloy in earlier work and found spectacular properties: high toughness and strength, which are usually mutually exclusive in a material,” says Robert Ritchie, a scientist with Berkeley Lab’s Materials Sciences Division who led the research with Qian Yu of China’s Zhejiang University and several other scientists.
“So in this research, we used TEM to study the alloy at the nanoscale to see what’s going on,” says Ritchie.
In materials science, toughness is a material’s resistance to fracture, while strength is a material’s resistance to deformation. It’s very rare for a material to be both highly tough and strong, but CrMnFeCoNi isn’t a run-of-the-mill alloy. It’s a star member of a new class of alloys developed about a decade ago that contains five or more elements in roughly equal amounts. In contrast, most conventional alloys have one dominant element. These new multi-component alloys are called high-entropy alloys because they consist primarily of a simple solid solution phase, and therefore have a high entropy of mixing.
They're a hot topic in materials research, and have only recently been available in a quality suitable for study. In 2014, Ritchie and colleagues found that at very cold temperatures, when CrMnFeCoNi deforms, a phenomenon called "twinning" occurs, in which adjacent crystalline regions form mirror arrangements of one another. Twinning likely plays a part in the alloy's incredible toughness and strength. But twinning isn't extensively found in the alloy at room temperature (except in the crack bridges), yet the alloy's toughness and strength are still almost off the charts.
“If we don’t see twinning at room temperature, then what other mechanisms give the alloy these amazing properties?” asks Ritchie.
To find out, the scientists subjected the alloy to several straining experiments at room temperature, and used transmission electron microscopy to observe what happens.
Their time-lapse images revealed two phenomena related to shear stress: slow-moving perfect dislocations that give the material strength, and fast-moving partial dislocations that enhance ductility. They also saw a phenomenon involving partial dislocations called “three-dimensional stacking fault defects,” in which the 3-D arrangement of atoms in a region changes. These faults are big barriers to dislocation, like placing a stack of bricks in front of a growing fissure, and serve to harden the alloy.
The images also captured the nanoscale version of chewing a mouthful of toffee and having your teeth stick together: In some cases, tiny bridges deformed by twinning are generated across a crack, which help prevent the crack from growing wider.
“These bridges are common in reinforced ceramics and composites,” says Ritchie. “Our research found that all of these nanoscale mechanisms work together to give the alloy its toughness and strength.”
The research was funded in part by the Department of Energy’s Office of Science.
Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit www.lbl.gov.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website at science.energy.gov/.
|
3.1.B use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the problem-solving process and the reasonableness of the solution;
3.1.C select tools, including real objects, manipulatives, paper and pencil, and technology as appropriate, and techniques, including mental math, estimation, and number sense as appropriate, to solve problems;
3.2 The student applies mathematical process standards to represent and compare whole numbers and understand relationships related to place value.
3.2.A compose and decompose numbers up to 100,000 as a sum of so many ten thousands, so many thousands, so many hundreds, so many tens, and so many ones using objects, pictorial models, and numbers, including expanded notation as appropriate;
3.2.C represent a number on a number line as being between two consecutive multiples of 10; 100; 1,000; or 10,000 and use words to describe relative size of numbers in order to round whole numbers; and
3.4 The student applies mathematical process standards to develop and use strategies and methods for whole number computations in order to solve problems with efficiency and accuracy.
3.4.A solve with fluency one-step and two-step problems involving addition and subtraction within 1,000 using strategies based on place value, properties of operations, and the relationship between addition and subtraction;
3.4.G use strategies and algorithms, including the standard algorithm, to multiply a two-digit number by a one-digit number. Strategies may include mental math, partial products, and the commutative, associative, and distributive properties;
3.4.K solve one-step and two-step problems involving multiplication and division within 100 using strategies based on objects; pictorial models, including arrays, area models, and equal groups; properties of operations; or recall of facts.
3.6.B use attributes to recognize rhombuses, parallelograms, trapezoids, rectangles, and squares as examples of quadrilaterals and draw examples of quadrilaterals that do not belong to any of these subcategories;
3.6.E decompose two congruent two-dimensional figures into parts with equal areas and express the area of each part as a unit fraction of the whole and recognize that equal shares of identical wholes need not have the same shape.
3.7.C determine the solutions to problems involving addition and subtraction of time intervals in minutes using pictorial models or tools such as a 15-minute event plus a 30-minute event equals 45 minutes;
|
Why do leaves change color in the fall? Why is fall color better in some years than others?
Green leaves actually contain colorful pigments all season, but during the growing season those colors are masked by an abundance of green chlorophyll. Chlorophyll is used in photosynthesis, the process in which the tree uses sunlight to produce food. The shorter fall days signal to the trees that winter is coming and it will soon be time to shed their leaves. At this time the trees stop producing chlorophyll, and the colorful pigments that have been there all along are finally revealed.
The brilliance of fall color is affected mostly by sunlight; however, temperature fluctuation and soil moisture also play a role. A series of warm, sunny days followed by cold, but not freezing, nighttime temperatures will produce the best fall color. If the trees experience a fall with a lot of cloud cover and moderate temperatures, the color will be dull. In addition, a wet spring followed by a moderate to dry summer and fall will produce the best fall color. During late summer and early fall, if a tree is under a slight amount of stress from dry soil conditions, it will have more brilliant color.
If you are interested in finding out more about the science behind tree color in the fall, visit the following link from the UNL extension office.
|
History of Printing
Students use the library's printed and online sources to create timelines of the history of printing, then write a two-page paper comparing the historical impact of the printing press and the Internet.
- Students learn about the history of printing and the impact of the printing press and the Internet.
- Students sequence in chronological order historical events about printing and the printing press.
- Students use critical thinking to compare the printing press and the Internet.
Rag paper, scribal hand-copying, Johannes Gutenberg, Gutenberg Bible, printing press, Internet, timeline
- computers with Internet access and word processing, printouts from the online sources listed in the Lesson Plan, library reference books, and online databases.
- pens or pencils
- Ask students to define the term timeline and give examples of information that appears in timelines.
- Introduce China's rag paper and Western Europe's Gutenberg press, and tell students that they will each create a timeline about the history of printing.
- Each student will also write a two-page paper comparing the historical impact of the printing press to the impact of the Internet.
- The following are useful Web sites:
The quality of students' work can be assessed by the following rubric:
- Three points: The student develops a timeline that includes a majority of the major historical events in the history of printing. Student develops a well-written two-page paper comparing the printing press to the Internet (using electronic and reference sources).
- Two points: The student creates a timeline with little attention to detail. The student develops a written paper with sweeping, generalized responses that fail to reference historical facts.
- One point: The student makes an attempt to create a timeline and two-page paper, but fails to create a well-developed project.
If students fail to complete any of the assignment, they should receive no points.
- Evaluation Strategies
- Evaluating Data
- Developing Research Skills
- World History
- The emergence of the first global age, 1450-1770
- Technology communication tools
- Technology research tools
|
The concept of a juvenile court system was put into practice in the United States in the late 1800s and early 1900s, based on the premise that youthful offenders should be handled differently from adults. As a rule, proceedings in juvenile courts are more informal than in adult courts. Juvenile courts also take a less adversarial stance toward defendants. However, in practice the juvenile court system presents some disadvantages.
Early intervention with a young offender helps prevent future criminal acts. According to a report from the National Center for Juvenile Justice, the juvenile court system has the best chance of stopping a delinquent from committing crimes again if it intervenes as early as possible. For the court to intervene in a timely fashion, it must expedite case processing. The delays associated with the juvenile court system are one disadvantage of the system. It is not clear whether the delays are because the courts are overloaded or because they are inefficient.
Although juvenile courts were set up with the idea that informal proceedings would be beneficial for young offenders, in reality informality has become a disadvantage. This informality can result in overlooking juvenile defendants' due process rights, according to Preston Elrod and R. Scott Ryder in "Juvenile Justice: A Social, Historical and Legal Perspective." Juveniles and their parents should be aware of their rights and feel comfortable exercising them. However, many are not familiar with their rights.
No Orientation to Juveniles
Juvenile courts' ability to address the problems of juvenile offenders is limited. Although part of their mandate is to control and reform these offenders, they often lack the understanding or the resources to fulfill this mandate, according to Elrod and Ryder. Often these courts opt for quick fixes such as coercion and imprisonment, rather than long-term solutions.
|
A new computer simulation called Illustris takes into account everything from the large-scale filamentary structure of the universe all the way down to the level of star-forming gas clouds in individual galaxies. Dark matter, dark energy and normal matter are all simulated in a cube 350 million light-years across, containing 41,416 realistic model galaxies.
To simulate the formation of galaxies, one must model the universe at three scales simultaneously: first, the large-scale structure of the universe; second, the galaxies themselves; and finally, the nebulas from which stars are born.
Galaxies are classified as elliptical, disk or irregular. Previous simulations of the universe had trouble producing disk galaxies like the Milky Way. Unlike previous attempts, the Illustris simulation naturally produces disk galaxies. One weakness of the simulation is that it still has trouble producing accurate low-mass galaxies.
The largest structures in the known universe are the galaxy filaments, or “great walls” of galaxy superclusters. The filaments form the boundaries between great voids in space. It is thought that the galaxy filaments form along a web-like distribution of dark matter, the dominant form of matter in the universe.
|
Scientists have uncovered the fossilized remains of an ancient sea creature that lived 520 million years ago. The South China discovery of the fuxianhuiid specimen represents one of the earliest animal fossils ever found.
Perhaps even more unique is the state of preservation the creature was found in, allowing scientists an unprecedented glimpse at the oldest nervous system to reach beyond the head in fossil record. The discovery is also one of the earliest examples of feeding limbs in evolutionary history.
According to Live Science, fuxianhuiid belongs to the arthropod family, known to be the first animals to have jointed limbs that enabled them to crawl. Fuxianhuiid was soft-bodied, with a hard shell called a carapace covering its head.
The South China discovery is the first fuxianhuiid to be found preserved in a position which allowed scientists to see the exact nature of what lay beneath its protective covering. The new find provides confirmation that the creature had a series of limbs beneath its head that were used for feeding purposes.
The fuxianhuiid used a process scientists refer to as “detritus sweep-feeding,” in which the creature used its limbs to shovel sediment from the seafloor into its mouth.
Javier Ortega-Hernández, from Cambridge’s Department of Earth Sciences, led the excavation in a fossil-rich region of southwest China known as Xiaoshiba. Ortega-Hernández spoke about the discovery in a statement:
“Since biologists rely heavily on organization of head appendages to classify arthropod groups, such as insects and spiders, our study provides a crucial reference point for reconstructing the evolutionary history and relationships of the most diverse and abundant animals on Earth. This is as early as we can currently see into arthropod limb development.”
“These fossils are our best window to see the most primitive state of animals as we know them – including us. Before that there is no clear indication in the fossil record of whether something was an animal or a plant — but we are still filling in the details, of which this is an important one.”
[Image by Yie Jang Yie and Javier Ortega-Hernández]
|
The Angular Size of the Moon and Other Planetary Satellites: An Argument For Design
CRSQ Volume 35(1) June 1998
Danny R. Faulkner
Abstract
It previously has been argued that the circumstances of total solar eclipses for the earth-moon system are unique in the solar system and that this suggests design. This is reexamined using the latest data on the many satellites now known to exist in the solar system. This argument is shown to be stronger than ever. Some comments about the design argument in astronomy are made. It is suggested that discussion of the definition and application of the design argument be pursued.
While the sun is about 400 times larger than the moon, the moon is also approximately 400 times closer to the earth, so that both objects subtend nearly identical angular sizes of about 1/2 degree. This makes a total solar eclipse a very remarkable event, one of the most beautiful and awe-inspiring experiences in nature, as anyone who has seen one can attest. If the moon were slightly farther away or smaller (or the sun closer or larger in size), total solar eclipses would not be possible. If the situation were reversed, many of the startling features of a total solar eclipse, such as the diamond ring effect, Baily's beads, and prominences near the sun's limb, would not be as readily visible. Total solar eclipses also would be more common, making them less thrilling phenomena than they are now.
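As a quick numerical check of this 400-to-1 coincidence, here is a minimal Python sketch; the round-number diameters and mean distances are assumed values for illustration:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Angular diameter, in degrees, of a body with the given physical
    diameter seen from the given distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Assumed round figures: mean diameters and mean distances, in km.
sun_deg = angular_diameter_deg(1_391_400, 149_600_000)   # about 0.533 deg
moon_deg = angular_diameter_deg(3_475, 384_400)          # about 0.518 deg
print(f"sun: {sun_deg:.3f} deg, moon: {moon_deg:.3f} deg, "
      f"ratio: {moon_deg / sun_deg:.4f}")
```

With these mean values the moon/sun ratio comes out near 0.97, the figure discussed later in the paper.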
As beautiful as total solar eclipses are, perhaps more importantly they offer an opportunity for scientific study of certain solar phenomena that would be difficult or impossible to study otherwise. For instance, the sun's chromosphere is briefly visible at the instants when totality begins and ends. Almost all of the energy that we receive from the sun comes from the portion of the sun's atmosphere called the photosphere. The chromosphere is a thin, cooler, more rarefied region of the sun's atmosphere lying just above the photosphere. Its feeble light is usually overpowered by the photosphere, except when the photosphere is blocked during a total eclipse. Historically the chromosphere's emission spectrum has been studied when it is revealed as a flash spectrum that briefly appears around the onset and end of totality.
Lying above the chromosphere is the solar corona, which extends a few solar diameters into space. Only visible during totality, the pearly white corona is very rarefied but is at a high temperature (between one and two million K). How this high temperature is maintained has remained a mystery for some time, and some recent creationists have used it as evidence of the corona's recent formation. Magnetic field lines are clearly visible in the corona, and the size and shape of the corona change from sunspot minimum to sunspot maximum. So observations of the corona during total solar eclipses provide clues to the complex magnetic interactions taking place in the sun.
One of the first confirmations of general relativity was the bending of starlight by the sun's mass, which could be observed only during a total solar eclipse, when the images of stars were visible near the sun's edge. Total (or near total) solar eclipses also give us a unique opportunity to gauge the relative sizes of the sun and moon. This provides data for deciding the question of whether the sun is shrinking, another argument that is used for the sun's recent origin. Historical data on the locations of eclipses have allowed us to determine the rate at which the earth's rotation is slowing because of tidal braking. This too places an upper limit on the age of the earth-moon system.
For generations astronomers have traveled to exotic locations to observe total solar eclipses because they are such rare events. On average a total solar eclipse is visible from any given location only once every few centuries. Therefore, without planning, it is unlikely that a typical person will ever view a total solar eclipse, let alone more than one. Whitcomb and DeYoung (1978, pp. 132-136) and Mendillo and Hart (1974) have previously called attention to the interesting circumstances necessary for total solar eclipses as an argument for design in the earth-moon-sun system. More recently, Englin and Howe (1985) concluded that the unique geometry of the earth-moon system that gives us total eclipses is no accident. No other moon in the solar system has such a close balance between the rarity and stark beauty of eclipses. Many have no eclipses at all. In the two decades since the work of Whitcomb and DeYoung the number of known satellites in the solar system has nearly doubled. At the same time the orbital parameters and measured sizes of most of them have been greatly improved. Let us examine the latest values to determine how unique our moon is in this respect.
Calculation of Ratios
Table I lists the 61 satellites known at the time of the writing of this article. It is possible that additional ones may be discovered or confirmed by the time this goes to press, but, as will be argued later, any of those would be unlikely to alter the conclusion given here. All data were taken from the 1997 Astronomical Almanac. The first two columns give the names of the satellites. The third column lists the angular size, in degrees, of the sun at the distance of the planet from the sun. The fourth column gives the angular size, in degrees, that each satellite has as seen from the planet about which it orbits. The angular sizes were calculated using the average distance (semi-major axis) of each orbit (epoch February 1, 1997). For ease of comparison, it was decided to express each number as a simple decimal rather than in scientific notation. The precision of each number reflects the precision of the satellite parameters, with the uncertainty usually dominated by uncertainties in satellite diameters. Some of the satellites are known to be oblong rather than spherical in shape. In those cases the largest diameters were used.
Because the orbits of the planets and the major satellites are nearly circular, these calculated average angular diameters are a good starting approximation. If any satellites were discovered to have nearly the same angular diameter as the sun, then they could be further investigated as to the conditions of eclipse. The orbits of some of the smaller satellites are appreciably elliptical, and so these could be further investigated as well if it appears that eclipses could be possible near the extremes of the orbits.
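The quantities in the third and fourth columns can be generated with a short helper. A minimal Python sketch, assuming circular orbits; the helper and the example figures are illustrative, not taken from the Almanac:

```python
import math

SUN_DIAMETER_KM = 1_391_400  # assumed round figure

def angular_diameter_deg(diameter_km, distance_km):
    """Angular diameter, in degrees, at the given distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

def table_one_columns(planet_to_sun_km, sat_diameter_km, sat_semimajor_km):
    """Columns three and four of Table I: the sun's and the satellite's
    angular sizes, in degrees, as seen from the planet."""
    solar = angular_diameter_deg(SUN_DIAMETER_KM, planet_to_sun_km)
    satellite = angular_diameter_deg(sat_diameter_km, sat_semimajor_km)
    return solar, satellite

# Illustrative figures for Jupiter IV (Callisto), in km.
solar, sat = table_one_columns(778_500_000, 4_821, 1_883_000)
print(f"solar: {solar:.4f} deg, satellite: {sat:.4f} deg, "
      f"ratio: {sat / solar:.3f}")
```

With these figures the ratio comes out near the 1.425 listed for Jupiter IV in Table II below.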
Table I. Planets and their satellites with their relationships to the sun and to each other.
The best way to evaluate the possibility, rarity, and beauty of a particular satellite's eclipses is to compare the apparent solar and satellite diameters. For instance, the ratio of the moon's apparent diameter to that of the sun is 0.9719. This means that a typical centerline eclipse tends to be annular rather than total. An annular eclipse is one in which the moon is too small to completely cover the sun, so that a thin ring, or annulus, of the sun's photosphere remains visible at mid eclipse. This is particularly true when an eclipse occurs near the moon's apogee or the earth's perihelion. This also affects the duration of an eclipse. The longest totalities, about seven minutes, occur at noon in the tropics, with the earth at aphelion and the moon at perigee.
We can conclude that if the ratio of the angular diameter of a satellite to that of the sun is much less than one, then no total eclipse would be possible. On the other hand, a ratio much larger than one would cause eclipses to be very total and very frequent. As described above, both of these effects would tend to detract from the wonder of a total eclipse, though gross over totality would have the greater effect. Much of the beauty of a total solar eclipse derives from the appearance of the inner corona and the very colorful prominences, both of which are visible near the limb (edge) of the sun. Because of the near match in angular diameters of the moon and sun, these are visible all around the sun's limb. For an overly total eclipse, these would only be briefly visible near the points of second and third contact (defined below), the points where totality begins and ends.
Table II. Satellites with satellite/solar ratios exceeding 0.9
| Satellite | Ratio | Satellite | Ratio |
|---|---|---|---|
| Jupiter I | 4.81 | Jupiter II | 2.61 |
| Jupiter III | 2.750 | Jupiter IV | 1.425 |
| Saturn I | 2.17 | Saturn II | 2.16 |
| Saturn III | 3.70 | Saturn IV | 3.05 |
| Saturn V | 2.98 | Saturn VI | 4.333 |
| Saturn XI | 0.95 | Saturn XVI | 1.03 |
| Uranus I | 12.6 | Uranus II | 9.13 |
| Uranus III | 7.52 | Uranus IV | 5.42 |
| Uranus V | 7.70 | Uranus VI | 1.08 |
| Uranus VII | 1.16 | Uranus VIII | 1.47 |
| Uranus IX | 2.1 | Uranus X | 1.8 |
| Uranus XI | 2.7 | Uranus XII | 3.4 |
| Uranus XIII | 1.6 | Uranus XIV | 1.8 |
| Uranus XV | 3.7 | Neptune I | 24.82 |
| Neptune III | 3.9 | Neptune IV | 5.2 |
| Neptune V | 9.2 | Neptune VI | 8.3 |
| Neptune VII | 9.20 | Neptune VIII | 12.1 |
Table II displays the ratios of the angular diameters (satellite/solar) for the 34 satellites for which the ratio exceeds 0.9. It can be assumed that the remaining satellites fail to produce any total solar eclipses. As can be seen from the second table, the ratios show that most satellites that produce total eclipses produce ones that are overly total. The most extreme is Pluto's moon, which has a ratio of 258. The best candidates for total eclipses are Saturn XI (0.95), Saturn XVI (1.03), and Uranus VI (1.08).
Saturn XI and Saturn XVI are not spherical, but are elongated, and as stated above, the longest diameter was used to find the angular size. Most of the satellites of the solar system are believed to follow synchronous orbits, that is, they orbit their planets with one face toward the parent body at all times. This is caused by a tidal interaction, and is expected to be especially true of the small, elongated satellites. For a particular satellite this would result in the longest diameter pointing toward the planet, so a smaller diameter is the relevant one for calculating the satellite's angular size. Therefore it is unlikely that total eclipses would occur for these two small moons.

Using the largest satellite diameter, the angular diameter of the sun, and the satellite's orbital period, the duration of an eclipse can be calculated. The duration is best expressed in terms of the times of first, second, third, and fourth contacts. First contact is defined as the instant when the eclipsing body first begins to block the sun's disk, and is generally considered the beginning of the eclipse. Second contact is the instant when the sun's disk is completely blocked, and thus marks the onset of totality. Third contact is the end of totality, while fourth contact is the end of the eclipse. The time from second to third contact is the duration of totality, and the length of the entire eclipse is the time difference between first and fourth contacts.
For Saturn XI the duration of eclipse is 19 seconds, while Saturn XVI has a duration of 17 seconds. These durations are for the entire eclipses from first to fourth contacts, including the partial phases before and after any totality (or annularity). The length of totality is impossible to calculate with the current knowledge of the diameters of these two satellites, but it would likely be less than one second. Such an eclipse would be almost unnoticeable, let alone enjoyable or useful for scientific study. An even worse situation prevails for Uranus VI, with a ratio of 1.08. It is not known if it is elongated, but given its small size, it probably is. Eclipse duration from first to fourth contact would be less than five seconds, making any totality far less than a second.
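These duration figures can be approximated by dividing the angular path swept during the eclipse (the sum of the solar and satellite angular diameters) by the satellite's angular speed across the planet's sky. A minimal Python sketch of that approximation, assuming a circular orbit and a stationary sun; the Saturn XI inputs are rough illustrative figures rather than Almanac data:

```python
def eclipse_durations_s(theta_sun_deg, theta_sat_deg, orbital_period_s):
    """Approximate first-to-fourth-contact and totality durations, in
    seconds, for a satellite on a circular orbit (the sun is treated as
    stationary, so the satellite sweeps 360 degrees per orbital period)."""
    omega = 360.0 / orbital_period_s                 # degrees per second
    whole_eclipse = (theta_sun_deg + theta_sat_deg) / omega
    totality = max(theta_sat_deg - theta_sun_deg, 0.0) / omega
    return whole_eclipse, totality

# Rough illustrative figures for Saturn XI: period about 0.69 days, solar
# angular diameter at Saturn about 0.056 degrees, satellite/solar ratio 0.95.
theta_sun = 0.056
theta_sat = 0.95 * theta_sun
whole, total = eclipse_durations_s(theta_sun, theta_sat, 0.69 * 86_400)
print(f"whole eclipse: {whole:.0f} s, totality: {total:.1f} s")
# The whole-eclipse figure lands near the 19 seconds quoted above; totality
# is zero here because a ratio below one gives an annular eclipse.
```

Using Callisto's roughly 16.7-day period and the smaller solar angular size at Jupiter, the same formula reproduces figures close to the 16.6 minutes and 2.9 minutes quoted for Callisto below.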
It is obvious that the smaller satellites of the solar system do not provide a good opportunity for total solar eclipses, because their small sizes and rapid motion combine to produce eclipses of very short duration. It then becomes obvious that the only hope of producing awe-inspiring eclipses is to look to the larger satellites. Most of the larger moons produce very overly total eclipses, but the most promising one is Jupiter IV (Callisto), with a ratio of 1.425. Calculation shows that Callisto produces eclipses having a first to fourth contact duration of 16.6 minutes, with totality lasting 2.9 minutes. At first glance this appears to fulfill the requirements established above for rare, beautiful events. But the over totality means that the inner corona and the prominences can only be glimpsed within narrow windows near the points of second and third contact. The author personally noted that while watching the February 26, 1979 total solar eclipse in Arborg, Manitoba, with two minutes, 50 seconds of totality, prominences were best visible on the east limb of the sun early in totality and on the west limb late in totality. This was caused by the moon's proximity to perigee at the time, which gave it a slightly larger apparent size, covering those features first on the west limb and then on the east limb. The rapid motion of Callisto, combined with the more over total nature of its eclipses, would greatly shorten the length of time that these features would be visible.
This leads to a very subtle effect hiding in these calculations. Note that for the planets closer to the sun, total eclipses are quite rare, while for the more distant ones they are quite common. For instance, while only the four larger (Galilean) satellites of Jupiter's 16 produce total eclipses, all of the satellites of Uranus and all but one of Neptune's do. This is because the angular diameter of the sun grows progressively smaller as one gets farther from the sun. This has three effects. First, it lowers the requirement for totality. Second, it causes the eclipses to be very over total. Third, the decreasing angular diameter diminishes the visual effect of the eclipses. For instance, Jupiter being more than five times more distant from the sun causes the features of the corona and prominences to appear more than five times smaller than they do from the earth. From Saturn and beyond it is doubtful that the appearance of the sun with its photosphere eclipsed would be very impressive or that the eclipses would be very noticeable.
The doubling of the number of planetary satellites in the past two decades has not undermined the prior conclusion of Whitcomb and DeYoung and of Mendillo and Hart that the earth-moon system produces uniquely beautiful total eclipses. To the contrary, the calculations presented here demonstrate that their conclusion is more sound than ever. Additional consideration shows that overly total eclipses are not expected to be as spectacular as the ones produced by our moon. Furthermore, the greatly diminished apparent size of the sun at the distances of the larger planets means that any total solar eclipse there would lack the visual effect seen from the earth. The earth-moon system combines three aspects that enhance the beauty and wonder of total solar eclipses:
- A large angular size of the sun, which produces high visual resolution of features only seen during total solar eclipses
- Optimal duration of totality of up to seven minutes that allows for maximum enjoyment
- Frequency that makes total solar eclipses uncommonly rare, yet occur often enough to be enjoyed by many
For some time this author has been concerned with the design argument in astronomy. In discussing biological systems, the design argument can be very powerful. For instance, if gross properties of the earth, such as atmospheric composition or gravity, were altered, life would be impossible. If the sun's size and temperature or the earth's orbit were different, life would again be endangered. The same can be said for atomic properties of matter, such as the many bonds that carbon can form, the status of water as the universal solvent, or the unique property of water expanding upon freezing. In short, the design argument is a demonstration that nature must be as it is, or else life as we know it could not exist. Even evolutionary scientists have recognized this fact and have coined the term the "anthropic principle" to describe it (Barrow and Tipler, 1986).
Creationists often attempt to extend this very powerful design argument to astronomical topics such as the one discussed here. But the design argument for the earth-moon system presented here is a much weaker one than is usually presented for biological systems. If the earth-moon system were not unique, or if total solar eclipses did not occur, life would not be imperiled. In other words, while the earth-moon system may demonstrate the Creator's imagination and concern for our enjoyment, it is not necessary for our existence.
Just as Barrow and Tipler define weak and strong anthropic principles, perhaps creationists should adopt the terms weak and strong in discussing design arguments. Many of the astronomical design arguments, including the one discussed here, would be of the weak variety. Even more basic would be a definition of design and a methodology for consistently applying the design argument. At this time it appears that this definition and methodology do not exist, because most people assume that design is readily recognized. If this is the case, then two criticisms readily come to mind. First, many may see design where none actually exists. Second, a sort of circular reasoning may develop in which people see design because they know that it must exist, while others of a different persuasion fail to see the evidence.
It is hoped that other creationists join in the discussion to define and refine the design argument.
The Astronomical Almanac for 1997. U. S. Government Printing Office. Washington, D.C.
Barrow, J. D. and F. I. Tipler. 1986. The anthropic principle. Oxford University Press. New York.
Englin, D. and G. F. Howe. 1985. An annular solar eclipse. Creation Research Society Quarterly 22: 7.
Mendillo, M. and R. Hart. 1974. Resonances. Physics Today 27(2): 73.
Whitcomb, J. C. and D. B. DeYoung. 1978. The moon, its creation, form, and significance. BMH Books. Winona Lake, IN. pp. 132-136.
|
Did you know that you are only 10% human? There are actually about 10 times as many microbial cells in your digestive tract as there are human cells in your entire body. So, roughly 90% of you is bacteria!
Sometimes referred to as a “newly discovered organ”, the bacteria in your intestines weighs about 1.5-2kg, which if collected together would be approximately the same weight as your liver. These bacteria take up residence from the day we are born (or earlier), and remain with us all our lives. They are essential to overall health.
So, now that I have your attention, let’s go over some key terms and definitions related to bacteria and human health.
1) Microbiota (formerly known as flora): the collection of microorganisms that reside in a previously established environment. For us, this means we have microbiota in and on our skin, lungs, digestive tract, and urinary and vaginal tracts. The term "flora" has been used for microbiota, but it refers to plant life rather than live microorganisms, so it is technically incorrect. Each of us has an individual collection of species of bacteria (there are at least 1,000 species known), so your microbiota is like an internal fingerprint (since the majority live in your gut). Similarly, the term microbiome is often used interchangeably with microbiota, but it refers to the combined genetic material of the microorganisms in a particular environment (this "second genome" actually makes up 99% of our genetic information).
2) Probiotics: Live microorganisms which, when administered in adequate amounts, give health benefit to the host. Probiotics are simply live food or supplements you take to support your microbiota.
So, what do these little guys do for us? Bacteria are our friends. They’re actually more than that, they are friends with benefits! Our microbiota performs many physiological functions and directly impacts our health in the following ways:
- It helps us digest our foods properly so that we can comfortably absorb our nutrients. It ensures proper digestive function and even assists in the production of some vitamins (B and K).
- It acts as a barrier to infectious microorganisms and also combats pathogenic toxins (like those from Clostridium difficile).
- It balances and drives correct development of the immune system, influencing the formation of white blood cells & cytokines in the gut to prevent allergies and autoimmunity.
The development of gut microbiota starts at birth. As the newborn baby enters the world, it is quickly colonized by the microorganisms from the mother and external environment. Vaginal vs. C-section birth will influence a baby’s microbiota development (more on this in another post). From the third day of life, the composition of intestinal microbiota is directly related to how the infant is fed. Breastfed babies colonize different bacteria than those who are formula fed.
As we age, our microbiota is constantly in flux and is reduced or becomes imbalanced (a term called dysbiosis) by a variety of factors, such as:
- Antibiotic or medication use
- Lifestyle and poor diet (specifically a diet high in sugar and simple carbohydrates)
- Stress (which hormonally can influence the type of bacteria in your gut)
- Digestive disorders
- Infection or illness
Since these factors relate to all of us daily, it is imperative to introduce probiotics to maintain health. This can be done in the form of fermented foods or supplements. Probiotics are naturally occurring in fermented foods such as yogurt, miso, kimchi, sauerkraut, and kefir. If you’re looking just to maintain your health, including fermented foods in your diet daily may be all that you need. However, if you’re looking to improve or therapeutically treat a condition, I would suggest consulting a Naturopathic Doctor and supplementing with a good quality probiotic. A good quality probiotic will:
- Be potent. Look for colony forming units on the bottle, or CFUs. For therapeutic probiotics, I generally recommend a minimum of 10 billion CFUs and increase the dose depending on the condition being treated.
- Be scientifically proven to work. There are many probiotics on the market so it’s important to use strains that have been studied. Strains used in the probiotics are important for therapeutic use and are studied continually. Supplements have unique strains for certain conditions or uses. Brand does matter and affects quality of the product. Brands I recommend to my patients include: Genestra, Metagenics & NFH.
- Be human. Human strains of probiotics will naturally adhere to your digestive tract more readily than animal strains. They also tend to survive stomach acid better.
- Be free of allergens. Many people are sensitive to dairy and, therefore, yogurt wouldn’t work for them. High quality supplements are a great way to take your probiotics without dairy.
So, in my opinion, daily probiotic supplementation can be a component of a healthy diet throughout your lifetime. If you’re not quite convinced, here is a list of clinical conditions that probiotics are most indicated for:
- Digestive complaints: diarrhea, gas, bloating, constipation
- Food sensitivities
- Antibiotic use
- Atopy: eczema, allergies, asthma
- Any infection: colds & flus (treatment and prevention), urinary tract infections, digestive infections, etc.
- Dysbiosis and candida (yeast infections)
- IBS, Crohn’s and Ulcerative colitis
- There is also growing evidence that probiotics support healthy mood and weight management
This was an overview blog on the wonders of bugs. Don’t be afraid – we need them to live! Please let me know if you have any questions. Also, please consult a Naturopathic Doctor for treatment as we are experts in probiotics and optimizing your microbiota.
Yours in health,
Sarah Oulahen HBHSC, ND
Naturopathic Doctor at Sow Health
|
Chandra's Find of Lonely Halo Raises Questions About Dark Matter
The Chandra image of NGC 4555 revealed that this large, isolated, elliptical galaxy is embedded in a cloud of 10-million-degree Celsius gas (left). The hot gas cloud has a diameter of about 400,000 light years, roughly twice that of the visible galaxy (right).
Astronomers have concluded that the combined gravity of the stars in the galaxy is far too low to hold the hot gas cloud to the galaxy - an enormous envelope, or halo, of dark matter is needed. The total mass of the required dark matter halo is about ten times the combined mass of the stars in the galaxy, and 300 times the mass of the hot gas cloud.
A growing body of evidence indicates that dark matter - which interacts with itself and normal matter only through gravity - is the dominant form of matter in the universe. According to the popular "cold dark matter" theory, dark matter consists of mysterious particles left over from the dense early universe that were moving slowly when galaxies and galaxy clusters began to form.
Most large, elliptical galaxies are found in groups and clusters of galaxies where they can gain or lose dark matter through collisions with other galaxies, so it is difficult to determine how much dark matter they originally possessed. The Chandra observation of NGC 4555 confirms that an isolated, elliptical galaxy can possess a dark matter halo of its own.
|
Dehydration and heat stroke are two very common heat-related diseases that can be life threatening if left untreated.
Dehydration can be a serious heat-related disease, as well as being a dangerous side effect of diarrhea, vomiting, and fever. Children and persons over the age of 60 are particularly susceptible to dehydration.
Under normal conditions, we all lose body water daily through sweat, tears, urine, and stool. In a healthy person, this water is replaced by drinking fluids and eating foods that contain water. When a person becomes sick with fever, diarrhea, or vomiting, or is overexposed to the sun, dehydration can occur. It happens when the body loses water content and essential body salts such as sodium, potassium, calcium bicarbonate, and phosphate.
Occasionally, dehydration can be caused by drugs, such as diuretics, which deplete body fluids and electrolytes. Whatever the cause, dehydration should be treated as soon as possible.
The following are the most common symptoms of dehydration. However, each individual may experience symptoms differently. Symptoms may include:
In children, additional symptoms may include:
The symptoms of dehydration may resemble other medical conditions or problems. Always consult your physician for a diagnosis.
If caught early, dehydration can often be treated at home under a physician's guidance. In children, directions for giving food and fluids will differ according to the cause of the dehydration, so it is important to consult your child's physician.
In cases of mild dehydration, simple rehydration is recommended by drinking fluids. Many sports drinks on the market effectively restore body fluids, electrolytes, and salt balance.
For moderate dehydration, intravenous (IV) fluids may be required, although, if caught early enough, simple rehydration may be effective. Cases of serious dehydration should be treated as a medical emergency, and hospitalization, along with intravenous fluids, is necessary. Immediate action should be taken.
Take precautionary measures to avoid the harmful effects of dehydration, including the following:
Heat stroke is the most severe form of heat illness and is a life-threatening emergency. It is the result of long, extreme exposure to the sun, in which a person does not sweat enough to lower body temperature. The elderly, infants, persons who work outdoors, and those on certain types of medications are most susceptible to heat stroke. It is a condition that develops rapidly and requires immediate medical treatment.
Our bodies produce a tremendous amount of internal heat and we normally cool ourselves by sweating and radiating heat through the skin. However, in certain circumstances, such as extreme heat, high humidity, or vigorous activity in the hot sun, this cooling system may begin to fail, allowing heat to build up to dangerous levels.
If a person becomes dehydrated and cannot sweat enough to cool their body, their internal temperature may rise to dangerously high levels, causing heat stroke.
The following are the most common symptoms of heat stroke. However, each individual may experience symptoms differently. Symptoms may include:
The symptoms of a heat stroke may resemble other medical conditions or problems. Always consult your physician for a diagnosis.
It is important for the person to be treated immediately as heat stroke can cause permanent damage or death. There are some immediate first-aid measures you can take while waiting for help to arrive, including the following:
Intravenous (IV) fluids are often necessary to compensate for fluid or electrolyte loss. Bed rest is generally advised and body temperature may fluctuate abnormally for weeks after heat stroke.
There are precautions that can help protect you against the adverse effects of heat stroke. These include the following:
If you live in a hot climate and have a chronic condition, talk to your physician about extra precautions you can take to protect yourself against heat stroke.
|
What is Bi-Polar Disorder?
Bi-polar disorder, also known as manic-depression, is a brain disorder that causes people to experience unusual shifts in mood. People with bi-polar disorder may experience dramatic and recurrent highs and lows. These high and low periods are known as mania and depression, and often occur at levels and in cycles that are unique to each person.
Some people may experience periods of relief between the shifting cycles and may be able to return to their normal activities, while others may have little to no time between cycles.
Nearly 6 million Americans have this mental health issue. Typically, the first symptoms manifest in young adulthood, but initial symptoms have been seen in young children and older people as well.
The causes are not yet completely understood, but like other mental illnesses, bi-polar disorder is believed to arise from a combination of genetic predisposition and the person's environment. Substance abuse may also play a role in the onset of symptoms. People with bi-polar disorder, especially young men, may be at greater risk of suicide.
Signs and Symptoms*
There are two phases of this illness: mania and depression. In each phase the symptoms may range from mild to severe.
This is the period of highly elevated mood. To be diagnosed with bi-polar disorder a person must experience at least 3 of the following symptoms most of the day, every day, for at least one week.
- Increased energy, activity, and restlessness
- Excessively “high,” overly good, euphoric mood
- Extreme irritability
- Racing thoughts and talking very fast, jumping from one idea to another
- Distractibility, can’t concentrate well
- Little sleep needed
- Unrealistic beliefs in one’s abilities and powers
- Poor judgment
- Spending sprees
- Increased sexual drive
- Abuse of drugs, particularly cocaine, alcohol, and sleeping medications
This is the period of low mood. To be diagnosed with bi-polar disorder the person must experience at least 5 of the following symptoms most of the day, every day, for at least two weeks.
- Lasting sad, anxious, or empty mood
- Feelings of hopelessness or pessimism
- Feelings of guilt, worthlessness, or helplessness
- Loss of interest or pleasure in activities once enjoyed, including sex
- Decreased energy, a feeling of fatigue or of being “slowed down”
- Difficulty concentrating, remembering, making decisions
- Restlessness or irritability
- Sleeping too much, or can’t sleep
- Change in appetite and/or unintended weight loss or gain
- Chronic pain or other persistent bodily symptoms that are not caused by physical illness or injury
- Thoughts of death or suicide, or suicide attempts
* Information courtesy of The National Institute of Mental Health
Bi-polar disorder and Suicide
People with bi-polar disorder may become suicidal in either phase of their mood cycling. When people with bi-polar disorder become depressed, they might feel hopeless about ever feeling better and consider suicide. This may be especially true in the earlier stages of the illness, when a person may not yet understand their illness.
In the manic phase, people are apt to think that they have powers they do not really have. They may accidentally hurt or kill themselves by doing things such as trying to fly, jumping out of or in front of cars, or other dangerous behaviors. Because young people naturally tend to be more impulsive than older people, they are at greater risk of suicide.
Not everyone with bi-polar disorder becomes suicidal, but in the event that someone is displaying some of the warning signs of suicide (talking about death or dying, making plans for their belongings, saying goodbye to friends and family), they should be seen by a mental health professional immediately. If someone is actively trying to hurt themselves, 911 should be called immediately.
Types of Treatments
Because bi-polar disorder is a cyclic disorder that is likely to have several recurrences over the course of a person’s life, preventive treatments are recommended. A combination of medication therapy and psycho-social support has been found to provide the greatest amount of success.
There are several medications known as mood stabilizers that can be used to help control the symptoms of bi-polar disorder. The most common, and the one that has been used the longest is Lithium. There are other newer medications as well. Often mood stabilizers are used in conjunction with anti-depressant medications, but this will depend on an individual’s specific symptoms. Medications for bi-polar disorder are best prescribed by a psychiatrist.
Family Systems Therapy
This refers to a number of different therapy models that recognize that families influence who we are. The messages we learn from our families can help us to cope with stress later in life, or they can create unhealthy styles of coping. Family Systems Therapies help us to explore and use the role of family relationships, legacies, spoken and unspoken messages, and styles of communication to heal from issues that cause us distress in the present.
Cognitive Therapy
Cognitive therapy is a form of talk therapy that helps people to examine how their thoughts influence their choices and behavior. Cognitive therapy challenges irrational thoughts and works to replace them with more rational ones.
Psychoeducation
This form of treatment can be helpful to both the person with bi-polar disorder and his/her family. It helps to increase understanding about the illness, it suggests healthy coping techniques, and it helps identify the early signs of relapse so that intervention can take place quickly, before the full illness returns. This form of treatment can be used in conjunction with all other forms of treatment.
Electroconvulsive Therapy (ECT)
In situations where medication, psychosocial treatment, and the combination of these interventions prove ineffective, or work too slowly to relieve severe symptoms such as psychosis or suicidality, electroconvulsive therapy (ECT) may be considered. ECT may also be considered to treat acute episodes when medical conditions, including pregnancy, make the use of medications too risky. ECT is a highly effective treatment for severe depressive, manic, and/or mixed episodes. The possibility of long-lasting memory problems, although a concern in the past, has been significantly reduced with modern ECT techniques. However, the potential benefits and risks of ECT, and of available alternative interventions, should be carefully reviewed and discussed with individuals considering this treatment and, where appropriate, with family or friends.*
If you would like more information or an assessment by a mental health professional you can contact Catholic Charities Behavioral Health Services at 1-866-682-2166 or e-mail at [email protected].
*Information courtesy of The National Institute of Mental Health
|
From our studies, we know that the International Organization for Standardization (ISO) created the Open Systems Interconnection (OSI) networking model to standardize data networking protocols and to enable communication between all computers and devices across any network anywhere in the world. The OSI model is now mainly used as a point of reference for discussing the specifications of protocols used in network design and operation. The upper layers of the OSI reference model (application, presentation, and session = Layers 7, 6, and 5) define functions focused on the application. The lower four layers (transport, network, data link, and physical = Layers 4, 3, 2, and 1) define functions focused on end-to-end delivery of the data.
When we consider the seven layers of the OSI reference model, there are two that deal with addressing: the data link layer and the network layer. The physical layer is not strictly concerned with addressing at all, only with sending data at the bit level. The layers above the network layer all work with network layer addresses.
When we discuss end-to-end delivery of data, we must necessarily talk about how datagrams are addressed. Addressing is done at two different layers of the OSI model, using two very different types of addresses for different purposes. Layer 2 addresses, such as IEEE 802 MAC addresses, are used for local transmissions between hardware devices that can communicate directly; they are used to implement basic LAN, WLAN, and WAN technologies. In contrast, layer 3 addresses, which are most commonly 32-bit Internet Protocol addresses, are used in internetworking to create a virtual network at the network layer.
The most important difference between these types of addresses is the distinction between layers 2 and 3 themselves. Layer 2 MAC addresses enable communication between directly-connected devices residing on the same physical network. Layer 3 IP addresses allow communications between both directly and indirectly-connected devices.
For example, say you want to connect to the Web server at http://www.cisco.com. This is a Cisco Web site that resides on a server that has an Ethernet card used for connecting to its Internet service provider site. However, even if you know its Layer 2 MAC address, you cannot use it to talk directly to this server using the Ethernet card in your home PC. This is because these two devices are on different networks. In fact, they may even be on different continents!
Instead, these devices communicate at layer 3, using the Internet Protocol and higher layer protocols such as TCP and HTTP. Your request is routed from your home machine through a sequence of routers to the Cisco server. The response is then routed back to you. The communication is, logically, at layers 3 and above. You send the request, not to the MAC address of the server’s network card, but rather to the server’s IP address.
While we can virtually connect devices at Layer 3 through routers, these connections are really conceptual only. When you send a datagram that has been created using the OSI 7-Layer-Model, it is sent one hop at a time, from one router to another, from one physical network to the next. At each of these hops, an actual transmission occurs at the physical and data link layers.
When your request is sent to your local router at layer 3, which is usually referred to as your default gateway, the actual request is encapsulated in an Ethernet frame using whatever method you use to physically connect to the router. It is addressed and sent to the default gateway router using the router’s data link layer MAC address. The same happens for each subsequent step until, finally, the router nearest the Cisco Web server, sends the datagram to the destination using the data link (MAC) address of the NIC card of the Cisco Web server.
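A toy Python model of that journey may help make the two address types concrete. All of the addresses below are invented for illustration, and nothing is actually transmitted:

```python
# Layer 3 (IP) addresses name the endpoints and never change in transit,
# while layer 2 (MAC) addresses are rewritten at every physical hop.
SRC_IP, DST_IP = "203.0.113.25", "198.51.100.80"   # invented addresses

# (source MAC, destination MAC) for each hop along the path.
HOPS = [
    ("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"),    # PC -> default gateway
    ("bb:bb:bb:bb:bb:bb", "cc:cc:cc:cc:cc:cc"),    # gateway -> next router
    ("cc:cc:cc:cc:cc:cc", "dd:dd:dd:dd:dd:dd"),    # last router -> server
]

for hop, (src_mac, dst_mac) in enumerate(HOPS, start=1):
    frame = {
        "ethernet": {"src": src_mac, "dst": dst_mac},  # changes each hop
        "ip": {"src": SRC_IP, "dst": DST_IP},          # constant end to end
        "payload": "GET / HTTP/1.1",
    }
    print(f"hop {hop}: eth {frame['ethernet']} carrying ip {frame['ip']}")
```

Running it prints one line per hop, showing the Ethernet header being rewritten while the IP header stays fixed from source to destination.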
In my next blog, I will discuss the Address Resolution Protocol (ARP), the method used for finding a device's link layer MAC hardware address when only its Internet layer IP address is known.
Author: David Stahl
|
Grasshopper Glacier North of Yellowstone Park
Photo courtesy of Alexandre Lussier
Dept. of Physics, Montana State University
One of the most unusual "Earth Science places" in Montana is Grasshopper Glacier located 70 miles southwest of Billings (10 miles north of Cooke City). The Glacier, which sits at 11,000 feet in the heart of the Beartooth Mountains, takes its name from the millions of grasshoppers embedded within it. A study done by scientists in 1914 estimated the grasshoppers had been extinct for 200 years. More recently, entomologists identified the hopper as a species of migratory locusts (Melanoplus spretus) commonly called "Rocky Mountain Locusts".
Migration gone bad . . .
Centuries ago this species of locust was found in large numbers throughout the West. Scientists believe they became embedded in the ice when migrating swarms, passing over the high mountains, became chilled or were caught in a severe storm and were deposited on the glacier. As snow built up over decades, the grasshoppers were buried deeper and deeper. Then as the climate in the area has warmed over recent centuries, melting of the snow exposed the embedded grasshoppers, and they were discovered. Until recent years, visitors could dig perfectly preserved specimens from the ice. However, years of light snow during the winter and thawing during the summer months have exposed many of the grasshoppers to decomposition.
University of Wyoming study . . .
Fortunately there are other glaciers in the Rockies that also contain swarms of grasshoppers, and in the 1990s a team led by Jeffrey Lockwood, Professor of Entomology at the University of Wyoming, found one in Wyoming that contained intact grasshoppers. The team used radiocarbon dating to determine that the swarm was blown into the mountains in the early 1600s.
Glacier, or Snowfield? . . .
In order for a glacier to form there needs to be a build-up of snow over many years. At lower elevations where most of us live, all of the snow that falls in the winter melts in the spring. At high altitudes that isn't always the case. For example, an average of 8 feet of snow might fall every winter, but only 3 feet melt away every summer, leaving a build-up of 5 feet of snow. When this goes on for decades, these annual 5-foot layers form one on top of the other and begin to compress the layers beneath. As a result the snow nearer the bottom is changed into ice, and a glacier is born. Apparently not everyone is convinced that this has happened with Grasshopper Glacier. Not enough snow built up to transform the bottom layers into ice, so technically Grasshopper Glacier is simply a "snowfield" that has been around for a very long time. But don't expect a name change anytime soon . . . "Grasshopper Glacier" has a much nicer ring to it than "Grasshopper Snowfield" does.
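A tiny Python sketch of that accumulation arithmetic, using the example figures above:

```python
# The paragraph's example figures: 8 ft of snowfall each winter and 3 ft of
# summer melt leave a net gain of 5 ft of layered snow per year.
SNOWFALL_FT, MELT_FT = 8, 3

depth_ft = 0
for year in range(1, 31):            # three decades of accumulation
    depth_ft += SNOWFALL_FT - MELT_FT
    if year % 10 == 0:
        print(f"after {year} years: {depth_ft} ft of layered snow")

# A true glacier also needs those buried layers to compact into ice, the
# step that Grasshopper "Glacier" apparently never completed.
```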
NOTE: In 2000, Grasshopper Glacier was approximately 1 mile long and 1/2 mile wide. Scientists believe it was over 4 miles long at its peak during colder times.
Below: Robert Grebe took this photo of Grasshopper Glacier in September of 2010.
Photo courtesy of Robert Grebe
Terms: entomology, locust
|
Promoting British Values
The Department for Education (DfE) states that there is a need “to create and enforce a clear and rigorous expectation on all schools to promote the fundamental British values of democracy, the rule of law, individual liberty, and mutual respect and tolerance of those with different faiths and beliefs.”
The government set out its definition of British values in the 2011 Prevent Strategy, and these values were reiterated by the Prime Minister in 2014. In addition, guidance was published by the DfE in November 2014, stating that, through their provision of SMSC, schools should:
- enable students to develop their self-knowledge, self-esteem and self-confidence;
- enable students to distinguish right from wrong, and to respect the civil and criminal law of England;
- encourage students to accept responsibility for their behaviour, show initiative, and to understand how they can contribute positively to the lives of those living and working in the locality of the school, and to society more widely;
- enable students to acquire a broad general knowledge of and respect for public institutions and services in England;
- further tolerance and harmony between different cultural traditions by enabling students to acquire an appreciation of and respect for their own and other cultures;
- encourage respect for other people; and
- encourage respect for democracy and support for participation in the democratic processes, including respect for the basis on which the law is made and applied in England.
The guidance also gives specific examples of the understanding and knowledge that is expected of students:
- an understanding of how citizens can influence decision-making through the democratic process;
- an appreciation that living under the rule of law protects individual citizens and is essential for their wellbeing and safety;
- an understanding that there is a separation of power between the executive and the judiciary, and that while some public bodies, such as the police and the army, can be held to account through Parliament, others, such as the courts, maintain independence;
- an understanding that the freedom to choose and hold other faiths and beliefs is protected in law;
- an acceptance that other people having different faiths or beliefs to oneself (or having none) should be accepted and tolerated, and should not be the cause of prejudicial or discriminatory behaviour; and
- an understanding of the importance of identifying and combatting discrimination.
At Hungerhill School, these values are taught explicitly through Personal, Social, Health, and Emotional education (PSHE) and Religious Education (RE). We also teach British values through planning and delivering a broad and balanced curriculum.
Hungerhill School takes opportunities to actively promote British values through form time, our assemblies, and whole-school systems and structures, such as electing and running a successful School Council and electing Year 11 Student Leaders. We also actively promote British values by ensuring our curriculum planning and delivery includes real opportunities for exploring these values. Actively promoting British values also means challenging students, staff, or parents who express opinions contrary to fundamental British values, including ‘extremist’ views.
At Hungerhill School we uphold and teach students about the British Values which are defined as:
- Democracy
- Rule of Law
- Individual Liberty
- Mutual Respect
- Tolerance of those with different faiths and beliefs
Democracy
Democracy is an important value at our school. Student leadership opportunities exist throughout Hungerhill School. The positions of Head Boy, Head Girl, and Student Leaders are established through an election process, which involves the nomination and selection of candidates, running a ‘campaign,’ and a democratic vote. ‘Student Choice’ presentations are a regular feature of our Awards Mornings.
The Rule of Law
The importance of laws and rules, whether they govern the class, the school, or the country, is consistently reinforced throughout regular school days. Hungerhill School has established a clear set of ‘Core Values’ which aim to support individual progress, respect for others, and the recognition that the school is a shared community with common values. We also work closely with local agencies such as the police, PCSOs, and the Youth Offending Service.
Individual Liberty
Within school, students are actively encouraged to make choices, knowing that they are in a safe and supportive environment. As a school, we educate and provide boundaries for our students to make choices safely, through the provision of a safe environment, a planned curriculum, and an empowering education. Students are encouraged to know, understand, and exercise their rights and personal freedoms, and are advised how to exercise these safely, for example through our e-safety teaching and delivery of sessions on alcohol, drugs, and SRE.
All staff are informed of the key elements of the PREVENT agenda as part of our on-going work on safeguarding.
Mutual Respect
Respect is one of the core values of our school. This can be seen and felt in our pervading ethos in school. The students know and understand that it is expected and imperative that respect is shown to everyone, whatever differences we may have, and to everything, however big or small. Children and adults alike, including visitors, are challenged if they are disrespectful in any way.
Tolerance of Those with Different Faiths and Beliefs
This is achieved through enhancing students’ understanding of their place in a culturally diverse society. Assemblies and discussions involving prejudices and prejudice-based bullying have covered areas such as homophobia, disability, and racism. The school monitors incidents that involve those of ‘protected characteristics’ and notifies the local authority of any concerns. The school also seeks to do its own work on reconciliation/restorative practice.
From 1st July 2015, all schools are subject to a duty under Section 26 of the Counter-Terrorism and Security Act 2015 (CTSA 2015) to have due regard to the need to prevent students from being drawn into terrorism.
If a parent, member of the public, or member of staff has any reason to believe a student may be at risk, we ask that you inform the Child Protection Officer (Mrs W Sumner), the Deputy Headteacher (Mrs J Rivers), or the Headteacher (Mrs H Redford-Hernandez) immediately.
If appropriate, a referral will be made to the Channel programme.
|
Graphene is a one-atom thick wonder material that is stronger than steel, and since its discovery it has seemingly led to one breakthrough after another. Here's the latest: the world's thinnest light bulb, reports Phys.org.
A team of scientists from Columbia, Seoul National University, and Korea Research Institute of Standards and Science are responsible for the invention, which could soon herald the further development of atomically thin, flexible, transparent displays. It could also make on-chip optical communications possible.
To create the super-slim light bulb, researchers used graphene as a filament. They attached small strips of graphene to metal electrodes and suspended the strips above the substrate, then passed a current through the filaments to cause them to heat up. The technique heated the graphene to temperatures in excess of 2,500 degrees Celsius, which was enough to cause it to glow brightly with visible light.
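For a sense of why 2,500 degrees Celsius is hot enough to glow visibly, here is a small sketch using Wien's displacement law; this is a standard black-body approximation, and graphene is not a perfect black body, so treat the numbers as illustrative only:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_nm(temp_celsius):
    """Peak emission wavelength of an ideal black body at the given temperature."""
    temp_kelvin = temp_celsius + 273.15
    return WIEN_B / temp_kelvin * 1e9

print(peak_wavelength_nm(2500))  # ~1045 nm: the peak is near-infrared, but the
                                 # short-wavelength tail extends well into the
                                 # visible range, so the filament glows brightly
```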
"The visible light from atomically thin graphene is so intense that it is visible even to the naked eye, without any additional magnification," explained Young Duck Kim, co-lead author on the study.
Scientists have tried to make such small light sources before, but micro-scale metal wires made from other materials cannot withstand the extremely hot temperatures required to make them glow in the visible range. Graphene, however, has a remarkable property: as it heats up, it becomes a much poorer conductor of heat. Essentially, this means that the highest temperatures are conveniently confined to a small spot in the center of the material. This also means that less energy is needed to attain the high temperatures.
Interestingly, the invention represents a return to history of sorts. When Thomas Edison originally invented the light bulb, he first used carbon — which is the same stuff graphene is made of — as a filament too. But Edison never had the luxury of using carbon in so pure a form as graphene, at only one atom thick.
"We are just starting to dream about other uses for these structures — for example, as micro-hotplates that can be heated to thousands of degrees in a fraction of a second to study high-temperature chemical reactions or catalysis," suggested James Hone, one of the researchers on the study.
|
DNA is the blueprint of all life, giving instruction and function to organisms ranging from simple one-celled bacteria to complex human beings. Now Northwestern University researchers report they have used DNA as the blueprint, contractor and construction worker to build a three-dimensional structure out of gold, a lifeless material.
Using just one kind of nanoparticle (gold) the researchers built two common but very different crystalline structures by merely changing one thing -- the strands of synthesized DNA attached to the tiny gold spheres. A different DNA sequence in the strand resulted in the formation of a different crystal.
The technique, to be published Jan. 31 as the cover story in the journal Nature and reflecting more than a decade of work, is a major and fundamental step toward building functional “designer” materials using programmable self-assembly. This “bottom-up” approach will allow scientists to take inorganic materials and build structures with specific properties for a given application, such as therapeutics, biodiagnostics, optics, electronics or catalysis.
Most gems, such as diamonds, rubies and sapphires, are crystalline inorganic materials. Within each crystal structure, the atoms have precise locations, which give each material its unique properties. Diamond’s renowned hardness and refractive properties are due to its structure -- the precise location of its carbon atoms.
In the Northwestern study, gold nanoparticles take the place of atoms. The novel part of the work is that the researchers use DNA to drive the assembly of the crystal. Changing the DNA strand’s sequence of As, Ts, Gs and Cs changes the blueprint, and thus the shape, of the crystalline structure. The two crystals reported in Nature, both made of gold, have different properties because the particles are arranged differently.
“We are now closer to the dream of learning, as nanoscientists, how to break everything down into fundamental building blocks, which for us are nanoparticles, and reassembling them into whatever structure we want that gives us the properties needed for certain applications,” said Chad A. Mirkin, one of the paper’s senior authors and George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences, professor of medicine and professor of materials science and engineering. In addition to Mirkin, George C. Schatz, Morrison Professor of Chemistry, directed the work.
By changing the type of DNA on the surface of the particles, the Northwestern team can get the particles to arrange differently in space. The structures that finally form are the ones that maximize DNA hybridization. DNA is the stabilizing force, the glue that holds the structure together. “These structures are a new form of matter,” said Mirkin, “that would be difficult, if not impossible, to make any other way.”
He likens the process to building a house. Starting with basic materials such as bricks, wood, siding, stone and shingles, a construction team can build many different types of houses out of the same building blocks. In the Northwestern work, the DNA controls where the building blocks (the gold nanoparticles) are positioned in the final crystal structure, arranging the particles in a functional way. The DNA does all the heavy lifting so the researchers don’t have to.
Mirkin, Schatz and their team just used one building block, gold spheres, but as the method is further developed, a multitude of building blocks of different sizes can be used -- with different composition (gold, silver and fluorescent particles, for example) and different shapes (spheres, rods, cubes and triangles). Controlling the distance between the nanoparticles is also key to the structure’s function.
“Once you get good at this you can build anything you want,” said Mirkin, director of Northwestern’s International Institute for Nanotechnology.
“The rules that govern self-assembly are not known, however,” said Schatz, “and determining how to combine nanoparticles into interesting structures is one of the big challenges of the field.”
The Northwestern researchers started with gold nanoparticles (15 nanometers in diameter) and attached double-stranded DNA to each particle with one of the strands significantly longer than the other. The single-stranded portion of this DNA serves as the “linker DNA,” which seeks out a complementary single strand of DNA attached to another gold nanoparticle. The binding of the two single strands of linker DNA to each other completes the double helix, tightly binding the particles to each other.
Each gold nanoparticle has multiple strands of DNA attached to its surface so the nanoparticle is binding in many directions, resulting in a three-dimensional structure -- a crystal. One sequence of linker DNA, programmed by the researchers, results in one type of crystal structure while a different sequence of linker DNA results in a different structure.
“We even found a case where the same linker could give different structures, depending on the temperatures at which the particles were mixed,” said Schatz.
Using the extremely brilliant X-rays produced by the Advanced Photon Source synchrotron at Argonne National Laboratory in combination with computational simulations, the research team imaged the crystals to determine the exact location of the particles throughout the structure. The final crystals have approximately 1 million nanoparticles.
“It took scientists decades of work to learn how to synthesize DNA,” said Mirkin. “Now we’ve learned how to use the synthesized form outside the body to arrange lifeless matter into things that are useful, which is really quite spectacular.”
Source: Northwestern University
|
It was Halloween yesterday and, unusually for the UK, it fell in school term time. As it turned out, I was teaching chemistry to a group of 12-13 year olds on that day which was too good an opportunity to miss.
Time for the puking pumpkin!
A side note: there’s loads of great chemistry here, and the pumpkin isn’t essential – you could easily do this same experiment during a less pumpkin-prolific month with something else. Puking watermelon, anyone?
First things first, prepare your pumpkin! Choose a large one – you need room to put a conical flask inside and put the pumpkin’s “lid” securely back in place.
Carve the mouth in any shape you like, but make it generous. Draw the eyes and nose (and any other decoration) in waterproof marker – unless you want your pumpkin to “puke” out of its nose and eyes as well!
Rest the pumpkin on something wipe-clean (it might leak from the bottom) and put a deep tray in front of it.
To make the “puke” you will need:
- 35% hydrogen peroxide (corrosive)
- a stock solution of KI, potassium iodide (low hazard)
- washing up liquid
You can also add food colouring or dye, but be aware that the reaction can completely change or even destroy the colours you started with. If colour matters to you, test it first.
- Place about 50 ml (use more if it’s not so fresh) of the hydrogen peroxide into the conical flask, add a few drops of washing up liquid (and dye, if you’re using it).
- Add some KI solution and quickly put the pumpkin’s lid back in place.
- Enjoy the show!
Check out some video of all this here.
What’s happening? Hydrogen peroxide readily decomposes into oxygen and water, but at room temperature this reaction is slow. KI catalyses the reaction, i.e. speeds it up. (There are other catalysts you could also try if you want to experiment; potassium permanganate for example.) The washing up liquid traps the oxygen gas in foam to produce the “puke”.
The word and symbol equations are:
hydrogen peroxide → water + oxygen
2H₂O₂ → 2H₂O + O₂
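To get a feel for why the foam is so dramatic, here is a rough back-of-the-envelope estimate; the 1.13 g/ml density of a 35% peroxide solution and the ~24 l/mol molar gas volume at room temperature are assumed round figures, not values from the original post:

```python
# Rough estimate of the oxygen released by 50 ml of 35% hydrogen peroxide.
H2O2_MOLAR_MASS = 34.01   # g/mol
SOLUTION_DENSITY = 1.13   # g/ml, assumed for a 35% w/w solution
MOLAR_GAS_VOLUME = 24.0   # l/mol, approximate at room temperature

volume_ml = 50
mass_h2o2_g = volume_ml * SOLUTION_DENSITY * 0.35  # ~19.8 g of pure H2O2
moles_h2o2 = mass_h2o2_g / H2O2_MOLAR_MASS         # ~0.58 mol
moles_o2 = moles_h2o2 / 2                          # 2 H2O2 -> 2 H2O + O2
print(moles_o2 * MOLAR_GAS_VOLUME)                 # ~7 litres of oxygen gas
```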
There are several teaching points here:
- Evidence for chemical change.
- Compounds vs. elements.
- Breaking the chemical bonds in a compound to form an element and another compound.
- Balanced equations / conservation of mass.
- The idea that when it comes to chemical processes, it’s not just whether a reaction happens that matters, but also how fast it happens…
- … which of course leads to catalysis. A-level students can look at the relevant equations (see below).
Some health and safety points: the hydrogen peroxide is corrosive so avoid skin contact. Safety goggles are essential, gloves are a Good Idea(™). The reaction is exothermic and steam is produced. A heavy pumpkin lid will almost certainly stay in place but still, stand well back.
But we’re not done, oh no! What you have at the end of this reaction is essentially a pumpkin full of oxygen gas. Time to crack out the splints and demonstrate/remind your students of the test for oxygen. It’s endlessly fun to put a glowing splint into the pumpkin’s mouth and watch it catch fire, and you’ll be able to do it several times.
And we’re still not done! Once the pumpkin has completely finished “puking”, open it up (carefully) and look inside. Check out that colour! Why is it bluish-black in there?
It turns out that you also get some iodine produced, and there’s starch in pumpkins. It’s the classic, blue-black starch complex.
Finally, give the outside of the pumpkin a good wipe, take it home, carve out the eyes and nose and pop it outside for the trick or treaters – it’s completely safe to use.
Brace yourselves, more equations coming…
The KI catalyses the reaction because the iodide ions provide an alternative, lower-energy pathway for the decomposition reaction. The iodide reacts with the hydrogen peroxide to form hypoiodite ions (OI–). These react with more hydrogen peroxide to form water, oxygen and more iodide ions – so the iodide is regenerated, and hence is acting as a catalyst.
H₂O₂ + I⁻ → H₂O + OI⁻
H₂O₂ + OI⁻ → H₂O + O₂ + I⁻
The iodine I mentioned comes about because some of the iodide is oxidised to iodine by the oxygen. At this point we have both iodine and iodide ions – these combine to form triiodide, and this forms the familiar blue-black complex.
Phew. That’s enough tricky chemistry for one year. Enjoy your chocolate!
Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. All content is © Kat Day 2017. You may share or link to anything here, but you must reference this site if you do.
All comments are moderated. Abusive comments will be deleted, as will any comments referring to posts on this site which have had comments disabled.
|
Wikipedia content is licensed under the GNU Free Documentation License.
In science and statistics, values are sometimes rounded and given as
approximations, typically because complete precision is not attainable or
not required. The number of digits to the left of (and including) the
rounding place is the number of significant figures or significant digits.
Take for example the value 4,215.02474. Rounded to two significant figures,
we have 4,200; to three significant figures, it is 4,220; to 5 significant
figures we have 4,215.0 and to 7 significant digits we get 4,215.025. Values
such as these are often expressed in scientific notation: 4.2 × 10³,
4.22 × 10³, 4.2150 × 10³ and 4.215025 × 10³. In this
notation, the number of significant digits is directly apparent.
Different conventions are used when rounding a number whose last digit is a
five. In one convention, such numbers are always rounded up; in another
convention, the rounding is performed so that the new last digit becomes
even (the "round half to even" rule).
Note that because of the rounding, a number to n significant figures is not
necessarily the same as the first n digits of that number (as in 4,220 above).
For numbers written with decimals, the number of decimals can be used to
indicate the number of significant figures: for example, 4,215.02 is
represented to six significant figures. However, from a notation like 4,220
we cannot see whether the 0 is a significant digit or not; scientific notation
would be more informative here.
It is useful to know how the number of significant figures changes when
performing various calculations with rounded numbers.
When multiplying a number having n significant figures by a number having
m significant figures, where m ≤ n, the result will have m-1
significant figures. For example, a rectangular table has been measured to
be 23.2 inches wide (3 significant figures) and 146.5 inches long (4
significant figures). In order to compute the table's area, we use a
calculator and find 23.2 × 146.5 = 3398.8 square inches. This result
should be properly stated as 3400 square inches with two significant digits.
When squaring or taking the square root of a value, the number of
significant figures can decrease by one.
When adding, it is not the number but the position of the significant
figures that determines the significant figures of the result: if the first
summand has significant digits which are to the right of the significant
digits of the second summand, then these digits are insignificant in the
result. For example, adding 2103.45 (6 significant digits) to 3.453245 (7
significant digits) on a calculator results in 2106.903245, but this should
be stated with 6 significant digits as 2106.90.
When subtracting two numbers that are approximately equal, the number of
significant digits drops. For example, 1.75 - 1.72 = 0.03.
When using a calculator, one should keep track of the significant digits of
all numbers, but only the final results should be rounded for presentation,
not the intermediate values.
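As a programming aside (a minimal sketch, not part of the original text), here is one way to round a value to n significant figures; it reproduces the 4,215.02474 examples above. Note that Python's built-in round() uses the round-half-to-even convention mentioned earlier.

```python
import math

def round_sig(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))  # position of the leading digit
    factor = 10 ** (exponent - n + 1)          # place value of the last kept digit
    return round(x / factor) * factor

print(round_sig(4215.02474, 2))  # 4200.0
print(round_sig(4215.02474, 3))  # 4220.0
print(round_sig(4215.02474, 5))  # 4215.0
print(round_sig(4215.02474, 7))  # 4215.025
```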
In programming languages that provide a floor function, rounding a number x to the nearest integer can be achieved by calculating floor(x + 0.5).
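For instance, in Python (a sketch; note how this convention differs from the language's own round()):

```python
import math

def round_half_up(x):
    """Round to the nearest integer, with halves always rounded up."""
    return math.floor(x + 0.5)

print(round_half_up(2.5))   # 3
print(round_half_up(-2.5))  # -2 (halves go up, i.e. toward +infinity)
print(round(2.5))           # 2 -- Python 3's round() uses round half to even
```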
|
Vocabulary on Continental Drift
and Plate Tectonics
Plate Tectonics: The pieces of Earth's lithosphere are in slow, constant motion and float on convection currents in the mantle.
Continental Drift: A theory developed by Alfred Wegener stating that Earth's continents were once one huge landmass that, over millions of years, broke apart and drifted to their present locations.
Pangaea: The name Alfred Wegener gave to Earth's single huge landmass, which existed 300 million years ago. It means "all Earth."
Alfred Wegener: The German scientist who developed the theory of continental drift. He had no direct proof, but he based his theory on the observation that the continents look like puzzle pieces that fit together.
Fossil: A trace of an ancient organism (animal or plant) that has been preserved in rock.
Give 3 pieces of evidence for the Continental Drift theory on separate index cards:
- Puzzle evidence: the edges of the continents fit together like puzzle pieces (Africa and South America).
- Land features: mountain ranges and coal fields line up across continents (for example, European coal fields line up with similar coal fields in North America).
- Fossil evidence: Glossopteris, Lystrosaurus, Mesosaurus (see details below).
- Climate evidence: fossils of tropical plants were found on Spitsbergen, an island in the Arctic Ocean. When those plants grew there, the island must have been closer to the equator.
- Ocean floor evidence: in the 1960s, scientists were able to study the ocean floor with technology such as sonar. This enabled them to see evidence of sea-floor spreading, which showed how the plates were being pushed apart. The evidence came from molten rock, magnetic stripes on the ocean floor, and drilled rock samples.
Glossopteris: A fossil seed from a fern plant, found on the continents of Africa, South America, Australia, India, and Antarctica. The seed was too heavy to be carried by wind or water, so the continents must have been connected at one time.
Lystrosaurus and Mesosaurus: Fossils of a hippo-like creature and a reptile. These fossils were found on continents separated by great oceans, and neither animal could have swum those distances; therefore, the continents must have been connected at one time.
Plate Boundaries: The edges of Earth's plates, where two or more plates meet.
Convergent Boundaries: Where two plates move toward each other. There are two types:
- Plates of the same density: two continental plates collide, or two oceanic plates collide. A mountain forms, and the older, denser rock subducts (is pushed back into the mantle to melt).
- Plates of different density: a continental plate collides with an oceanic plate. A trench forms on the oceanic side, and a mountain or volcano may form on the continental plate. The oceanic plate subducts because it is denser.
Divergent Boundaries: Where two plates move apart. Ocean-floor spreading occurs: new ocean floor is formed, a mid-ocean ridge builds up, and a rift valley forms.
Strike-Slip Boundary/Transform Fault: Where two plates move past each other or up and down. Earthquakes occur here. This can also be called a fault or fracture zone.
Ocean Floor Spreading: Occurs at divergent boundaries, where new ocean floor is formed.
Convection Currents: The movement of material caused by changes in temperature. This occurs in the mantle due to heat from the core below and the cooler crust above. Magma near the core heats up, becomes less dense, and rises; as it nears the crust it cools and sinks. This movement of magma breaks the lithosphere up and moves the plates.
Rift Valley: The flat space between the mid-ocean ridges at a divergent boundary; it consists of new ocean floor.
Trench: A V-shaped valley formed at a subduction zone (convergent boundary).
Mid-ocean Ridge: Underwater mountain chains formed at a divergent boundary by the piling up of magma.
Subduction: When denser rock is pushed down into the mantle and melts back into magma. This occurs at convergent boundaries.
|
Map of predicted changes in soil C stocks by 2030-50 due to a 1°C rise in global average temperature. The darker red the color, the greater the carbon loss.
Image courtesy of the researchers
A new global analysis finds that warming temperatures will trigger the release of trillions of kilograms of carbon from the planet’s soils into its atmosphere, driven largely by the losses of soil carbon in the world’s colder places. The increase in the atmospheric CO2 concentration will accelerate the pace of climate change.
For decades, scientists have speculated that rising global temperatures might reduce the ability of soils to store carbon, potentially releasing huge amounts of carbon into the atmosphere and triggering runaway climate change. Yet dozens of studies at single locations in different places around the world have produced mixed signals on whether this storage capacity will actually decrease—or even increase—as the planet warms.
A new study published in the journal Nature, based on 49 climate change experiments worldwide, including six in Minnesota, suggests that scientists might have been looking in the wrong places.
The study, led by College of Food, Agricultural and Natural Resource Sciences (CFANS) Department of Forest Resources Adjunct Professor T.W. Crowther, found that warming will drive the loss of at least 55 trillion kilograms of carbon from the soil by mid-century, adding an additional 17 percent on top of the projected emissions from human-related activities during that period. That would be roughly the equivalent of adding another industrialized country the size of the United States to the planet, and it would significantly accelerate climate change.
Critically, the researchers, including CFANS Dept. of Forest Resources Regents Professor and Institute on the Environment Fellow Peter Reich, found that carbon losses will be greatest in the world’s colder places, at high latitudes, which had largely been missing from most previous research. In those regions, massive stocks of carbon have built up over thousands of years and slow microbial activity has kept them relatively secure.
Most of the previous research had been conducted in the world’s temperate regions, where there were smaller carbon stocks to begin with. Studies that focused only on these regions would have missed the vast proportion of potential soil carbon losses, said Crowther, who conducted his research while a postdoctoral fellow at the Yale School of Forestry & Environmental Studies and at the Netherlands Institute of Ecology.
“Carbon stores are greatest in places like the Arctic and the sub-Arctic, where the soil is cold and often frozen. In those conditions microbes are less active and so carbon has been allowed to build up over many centuries,” said Crowther.
“But as you start to warm, the activities of those microbes increases, and that’s when the losses start to happen,” he said. “The scary thing is, these cold regions are the places that are expected to warm the most under climate change.”
The results are based on an analysis of data on stored soil carbon from dozens of climate warming experiments conducted over the past 20 years by more than 30 co-authors in different regions of the world.
The study predicts that for one degree of warming, about 30 petagrams of soil carbon will be released into the atmosphere.
“This is a big deal,” Reich said, “because the Earth is likely to have warmed by 2 degrees Celsius by mid-century, releasing as much carbon over that time period as will be emitted from fossil fuel burning in the United States.” A petagram is equal to 1,000,000,000,000 kilograms.
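As a rough units cross-check (a sketch; the 2 degrees Celsius figure is the mid-century warming quoted above, and the comparison is order-of-magnitude only):

```python
KG_PER_PETAGRAM = 1e12  # 1 petagram = 10**15 grams = 10**12 kilograms

study_total_kg = 55e12  # "55 trillion kilograms" of soil carbon from the study
study_total_pg = study_total_kg / KG_PER_PETAGRAM  # = 55 Pg

loss_per_degree_pg = 30        # predicted loss per 1 degree C of warming
warming_by_midcentury_c = 2.0  # warming Reich cites for mid-century
print(loss_per_degree_pg * warming_by_midcentury_c)  # ~60 Pg: same scale as 55 Pg
```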
The study considered only soil carbon losses in response to warming. There are several other biological processes—such as faster plant growth as a result of carbon dioxide increases or slower plant growth due to climate warming and drought—that could dampen or enhance the effect of this soil carbon feedback. Several long-term experiments in Minnesota forests and grasslands led by Reich are addressing these questions.
“Getting a handle on these kinds of feedbacks globally is essential if we’re going to make meaningful projections about future climate conditions,” said Crowther. “Only then can we generate realistic greenhouse gas emission targets that are effective at limiting climate change.”
For more, see the University of Minnesota, Twin Cities news release.
|
A Solar Observing Refresher Course (continued)
When it comes to eyepieces for projecting the Sun's image, the much-maligned Huygenian design is a good choice because it does not contain cemented elements that can be damaged by the Sun's intense heat. Most solar projection is done onto white paper or card stock. But no matter how white the screen, it must be adequately shaded from direct sunlight and other extraneous light in order for the viewer to see the finest details in the solar image. This powerful technique enables a 4-inch telescope to produce a usable image of the Sun 30 inches across. The size and brightness of the Sun's image depend mainly on the distance between the eyepiece and the viewing surface: the farther away it is, the larger and dimmer the image.
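That size-versus-distance relationship can be put into rough numbers. A common approximation (a sketch under the usual assumptions, not from the original article) is that the prime-focus solar image is about f_objective/109 across, and eyepiece projection magnifies it by (d/f_eyepiece - 1), where d is the eyepiece-to-screen distance:

```python
def projected_sun_diameter_mm(f_objective_mm, f_eyepiece_mm, screen_distance_mm):
    """Approximate diameter of a projected solar image.

    Standard approximations: prime-focus solar image ~ f_objective / 109;
    projection magnification ~ (screen distance / eyepiece focal length) - 1.
    """
    prime_focus_image = f_objective_mm / 109.0
    magnification = screen_distance_mm / f_eyepiece_mm - 1
    return prime_focus_image * magnification

# e.g. a 4-inch f/10 telescope (1016 mm focal length), a 25 mm eyepiece,
# and a screen about 2 m behind the eyepiece:
print(projected_sun_diameter_mm(1016, 25, 2000))  # ~736 mm, roughly 29 inches
```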
The Spotted Sun
Sunspots are cooler regions of the solar surface caused by intense localized magnetic fields that bring the upward convection of internal material to a virtual standstill. Although they appear almost black, this is merely a contrast effect. If it were possible to place a modest-size sunspot into the night sky, it would shine 10 times brighter than the full Moon!
Even the casual observer will soon learn that sunspots come in a wide variety of shapes and sizes. While the simplest sunspots are isolated dark areas, larger spots are quite dramatic. Complex spots feature a dark central region called the umbra surrounded by a gray penumbra. The penumbra normally appears as a smooth fringe, but under steady seeing conditions it may exhibit radial patterns or knots of light and dark. During those fleeting moments of good seeing you may also see tiny circular sunspots 2 arcseconds in diameter or smaller. These are called pores. Sometimes they erupt into full-fledged spots, but usually they simply disappear, sometimes after a lifetime of only a few minutes.
Sketching sunspots with a pencil and paper can be a rewarding way to follow their evolution. In the same way that drawing the planets sharpens your observing skills, so will regularly recording the Sun's appearance. You can follow the complex ways sunspot groups change with time, and you might even come to regard some active regions as old friends as you watch them disappear beyond the Sun's western limb and reappear on the eastern limb two weeks later. Spots near the Sun's limb sometimes appear like shallow depressions on the solar surface. This is the so-called Wilson effect, named for the 18th-century Scottish astronomer Alexander Wilson, who first called attention to the phenomenon.
More Solar Sights
The solar viewing I've described above is known as white-light observing. If you find Sun-gazing to your liking, you may choose to investigate more advanced forms of observation that use special filters to isolate portions of the spectrum for spectacular views of a wide range of phenomena. Coronagraphs, hydrogen-alpha filters, and other observing gear are available but at a significantly greater cost than the simple filters needed for white-light observing.
Riding the Solar Cycle
Solar activity varies with an 11-year cycle. As the cycle progresses, activity rises and falls, and with it the amount of detail visible on the Sun. At solar minimum, the Sun often appears nearly featureless, completely free of sunspots. At maximum, however, there can be hundreds of sunspots arranged in a half dozen or more groups and plenty of faculae. Obviously, the most exciting time to observe the Sun is in the years surrounding solar maximum. The last solar maximum was in 2000, and NOAA's Space Weather Prediction Center forecasts the next maximum for May 2013. So there's no better time than now to become a daylight astronomer!
|
Summertime Math Stories
Students will learn to locate and interpret important keywords when solving for a missing variable in a given word problem. They will learn to use the WIKED map as a solving strategy for word problems containing a missing variable.
Introduction (5 minutes)
- Begin the lesson by asking students an essential question, to gauge their prior knowledge of word problem strategies. One example is: What word problem solving strategies do you know?
- Invite students to answer as a class, or have them turn to a class partner for think-pair-share.
- Discuss answers as a whole group.
Explicit Instruction/Teacher Modeling (15 minutes)
- Continue the ongoing discussion by informing students that today they will learn a new strategy for solving math stories with missing variables.
- As you explain this, write the following equation on the board: 23 – ___ = 18
- Place an index card inside the blank spot within your equation and write the word “variable” inside of it.
- Explain to the students that a variable is an alphabet letter used in math to represent a missing number. Take the index card off, and replace it with the variable y.
- Explain to the students that their job today will be to solve math stories that have a missing number, or a missing variable (worked examples appear after this list).
- Remind students that great problem solvers handle word problems like a detective who looks for clues to help them solve a mystery. One way to look for clues is to circle important keywords.
- Review the Key Words Anchor Chart and use the following procedure to monitor for understanding:
- Ask students to make a plus sign or a minus sign with their fingers for each of the keywords you say:
- Total (+)
- Altogether (+)
- Fewer (-)
- Continue this exercise until students feel confident with their understanding of the key terms and what they entail.
- Draw a representation of the WIKED map on the board and write the following math story above it: Cara and Susan have a hat collection totaling 52 hats. If there are 12 hats in Cara’s house, how many are there in Susan’s house?
- Use the WIKED map on the board to model how a student would use it to solve for the missing addend.
- Repeat this exercise 2 times.
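For reference, here are worked versions of the two examples above, using h as a hypothetical variable for the missing number of hats:
23 - y = 18, so y = 23 - 18 = 5 (check: 23 - 5 = 18)
12 + h = 52, so h = 52 - 12 = 40; Susan has 40 hats (check: 12 + 40 = 52)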
Guided Practice/Interactive Modeling (10 minutes)
- Draw names from your name jar, or call students at random to participate in coming up to the board and solving for a missing variable using the WIKED map.
- Repeat the exercise with two more students or as time permits.
Independent Working Time (20 minutes)
- Provide students with a sheet-protected copy of the WIKED map, the Summer Time Math Stories handout, and dry erase markers.
- Have students use dry-erase markers to use and re-use the WIKED map as a solving strategy.
- Once they feel comfortable with the map, ask students to generate their own word stories and exchange them with a partner for problem solving.
- Enrichment: Challenge above-level students by having them generate their own word problem stories with missing variables and exchange these with peers at the same level for problem solving.
- At Level: Help at-level students by asking them to explain or justify the reasoning behind their answers, writing a one- to two-sentence explanation inside the “Summer Time” handouts.
- Support: Provide below-level students with additional support by giving them highlighters to help them identify the keywords and a 120 counting chart to serve as a visual aid.
- Teacher may project the WIKED Map or Summer Time Math Stories PDF on an interactive white board or projector.
Assessment (5 minutes)
- Ask students to exchange their Summer Time Math Stories handouts with each other for grading. Discuss answers as a whole group.
Review and Closing (5 minutes)
- Prompt students to turn to their nearest classmate and discuss two things they learned in this lesson. For example: I learned that keywords help me understand whether I should add, subtract, multiply, or divide. I learned what the word “variable” means.
|
Mastering letters and sounds is a very important step in early literacy—it is one of the building blocks on which the rest of learning to read and write is built. Having the right activities and tools is crucial to success in teaching the necessary skills: letter-shape knowledge, letter-name knowledge, letter-sound knowledge, and letter-writing ability. With these skills in their repertoire, the exciting world of reading begins to open to students!
In order to read a word, a learner must be able to recognize the letters in the word and associate each letter with its sound. The goal is for the learner to be able to make the sound of each letter when shown the letters, think of more than one word that begins with each letter sound, name the letter that makes each sound when given the sound, and name the letter that a word begins with when given the word.
The more tools you have in your toolbox to expose students to letters and their sounds, the easier your students will find success. This post will cover some available tools and offer some ideas from the Letter Buddies teacher's guide on how to use them. The lessons can translate to a wide variety of similar products, but we use our own as an example here.
To begin with, Letter Buddies Letter Books provide the opportunity for exploration of the four elements of letter knowledge. Each letter book introduces six experienced-based vocabulary words that begin with the same initial sound. This series is an excellent resource that will help your students begin to make connections between letters, sounds, and words.
Several sets of the letter books will allow you to provide powerful letter instruction in your small groups. One benefit of the Letter Buddies Letter Books is that they can be “read” with minimal guidance and are often some of the first reading experiences for emergent readers.
- Use the illustrated scene at the top for great oral language warm-up and storytelling activities.
- Begin by introducing the Letter Buddy (e.g., explain what ‘active’ means in the A book) and tell the children that the book will have pictures that begin with that letter.
- Have the children name the pictures that they see at the top of the cover and get them to predict why they might be there.
- Direct the children’s attention to the textured uppercase and lowercase letter on the cover. Use verbal directions for letter formation. Have the children practice tracing over the textured letters several times while saying the verbal directions.
Inside the Book
- Inside Cover – read the speech bubble above the Letter Buddy. Ask the children to point to each word while you read.
- Title Page – ask the children to trace over the letters again, saying the verbal directions to reinforce proper formation.
- At this point, you should select what your focus will be for the reading of the letter book.
- If your focus is to draw children’s attention to the visual form of the letter (letter recognition), you would use language like this as you introduce each page:
- “This is a _______.” (name the picture)
- “This word says __________." (point to the word as you say it)
- “Can you find the letter _____ at the beginning of ______?” (name the letter and the word)
- “Point to it.” (child points to the first letter of the word)
- “Good. _______ starts with the letter ________.” (name the word and the letter)
- Continue this procedure for each of the six pictures in the letter book.
- If your focus is to draw children’s attention to the sound the letter makes in the initial position of the word (letter-sound knowledge), you would use language like this as you introduce each page:
- “This is a ____________ .” (name the picture)
- "This word says __________.” (point to the word as you say it)
- “Can you hear the _________ at the beginning of _________?” (say the sound of the letter and then the word, putting emphasis on the initial sound)
- “Say __________ .” (child says the word slowly)
- "Show me the __________ .” (say the sound of the letter)
- "Good. _________ begins with __________.” (name the word and the sound)
- Continue this procedure for each of the six pictures in the letter book.
- Activity Page – This activity is the same through all the books. Once you have demonstrated what to do, children can practice and review independently or in pairs.
- All the Letter Buddies are presented in alphabetical order providing more opportunities for conversation and practice.
- Have the students locate the Letter Buddy for the particular book you are working with.
- Have the students locate other Letter Buddies they know.
- Have the students name the letters in alphabetical order.
Below is a flipbook of the book used in this example, so you can further explore the words that it offers.
To cover the topic more in-depth with your students, repetition and thorough exposure are key. The Letter Buddies Starters series was developed to accompany the Letter Buddies Letter Books. In this series, the same six vocabulary words introduced in the Letter Books are used in a simple story, placed within the context of a sentence. PreK–K sight words (based on the Dolch list) and repetitive sentence stems are used throughout each story to provide young readers the opportunity to practice and develop early reading behaviors.
Following is an example of how you can lead a guided reading lesson using the Starters. In your small group, give each child a copy of Look at Me! (Starters Book – L). You will also need a copy of the Letter Book L.
- Engage children in a short conversation by naming things on the cover: “Remember when we looked at Loud L’s letter book and talked about the things that begin with ‘L’? Today we have a new book. It’s called Look at Me! In this story we’ll see the same things we saw in Loud L’s letter book. Let’s look at the front cover and find them!”
- Introduce the sentence stem in the story: “In this book, Loud L is going to ask us to ‘look at’ these things.”
- “Let’s turn to the title page and read the title together. Make sure to point to each word while you read.” (Develop early reading behavior: one-to-one matching)
- “What do we see in the picture?” (Use the picture to support meaning)
- “Right, it’s Loud L and a lion. Loud L wants us to: Look at the lion.” (Reinforce the introduced sentence stem)
- “Let’s point and read this page.” (Practice one-to-one matching)
- “Good, I like how you pointed to each word as you said it.” (Reinforce one-to-one matching)
- “There’s a word you know on this page...‘the’. Can you point to the word ‘the’?” (Prompt for an early behavior: locating a known word)
- “Good! It helps when we look for words we know when we read.” (Explain why certain reading behaviors are helpful: recognizing sight words)
- “Here’s Loud L with a lamb. Let’s read about it.”
- Children read.
- “How did you know that word was ‘lamb’?”
- Child responds that the picture shows a lamb and points to the word, saying it starts with ‘l’.
- “Good noticing. It helps to look at the picture and check the first letter of a word we’re not sure of.” (Reinforce crosschecking behavior)
- Continue a similar conversation across pages 4-6, encouraging children to check the picture, then identify the first letter of the word and its sound.
- “What does Loud L want us to look at on this page?” Child answers ‘lips’.
- “How do you know it’s lips?” Child talks about how Loud L is eating the lemon and lemons are sour.
- “Good prediction.” (Comprehension strategy)
- “Let’s run our finger under that word. Does it look like it says ‘lips’?” (Say word slowly to reinforce visual checking of whole word)
- “Right, it is ‘lips’ and I like how you checked the picture and the word.”
- “Let’s look at the end of the sentence. Here’s the exclamation mark we’ve seen in our other books. How do you think we’d read this sentence?” (Read punctuation to enhance fluency)
- “Now turn to the beginning of the story. I want you to read Look at Me! by yourself. Remember to point to the words while you read and check the pictures and the first letter of each word.”
- Children read the story.
- At the end of the lesson, ask the children to turn to the last page of the book (the activity page). Demonstrate how to create the next page of the story. As the children begin this task, you can move on to work with another group.
As with Letter Buddies, you can look through the flipbook below to get an idea of what else it contains.
This is only the first installment in a three-piece series on how to teach letters and their corresponding sounds to emerging readers. Check back tomorrow for more lesson ideas! If you're interested in learning about other early literacy products, you can click below to request our catalog or download a one-page series overview on the Letter Buddies Letter Books.
- Tara Rodriquez
|
In this video excerpt from NOVA: "Hunting the Elements," New York Times technology columnist David Pogue explores how isotopes of carbon can be used to determine the age of once-living matter. Learn how variations in atomic structure form isotopes of an element and how the three natural isotopes of carbon differ from each other. Meet paleoclimatologist Scott Stine, who uses radiocarbon dating to study changes in climate. Find out what it means for an isotope to be radioactive and how the half-life of carbon-14 allows scientists to date organic materials.
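As a sketch of the arithmetic behind radiocarbon dating (using the standard 5,730-year half-life of carbon-14; the function is illustrative, not taken from the video):

```python
import math

HALF_LIFE_C14_YEARS = 5730  # standard half-life of carbon-14

def radiocarbon_age_years(fraction_remaining):
    """Age implied by the fraction of the original carbon-14 still present:
    t = (half-life / ln 2) * ln(1 / fraction_remaining)."""
    return HALF_LIFE_C14_YEARS / math.log(2) * math.log(1 / fraction_remaining)

print(radiocarbon_age_years(0.5))   # one half-life:  5730 years
print(radiocarbon_age_years(0.25))  # two half-lives: 11460 years
```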
This video is available in both English and Spanish audio, along with corresponding closed captions.
|
In an effort to cut the weight of today's vehicles, manufacturers are using a lot of magnesium. This is a very strong but lightweight metal, weighing only one-third as much as aluminum. Unlike most metals, it burns and is very hard to extinguish.
Magnesium fires react very violently to water, putting firefighters in great danger.
Video -- Reaction to Water
This reaction was from a small piece of magnesium on the steering column.
We have always known about magnesium in the fire service, but most firefighters only relate magnesium to the old VW engine. Today we never know where to expect it; we find it in:
- Large truck frames
- Steering columns
- Wheels
- The whole radiator support of the Ford F-150
Example of Mag. Fire
Magnesium is somewhat like wood, in that the smaller the particles, the easier it is to ignite. But unlike wood, magnesium must have its complete surface area heated in order to ignite. For instance, a magnesium shaving will heat all the way around its surface and ignite very rapidly, while a large piece of plate can actually have a hole burnt in the middle of it with a torch; as the torch is removed, the plate cools and never ignites, because the whole surface area was not heated.
Many scientists disagree on what causes this violent reaction to water, but the most recognized answer, in simple terms, is this: when heated, magnesium ignites and burns with an intense white light, releasing extreme amounts of heat. Most magnesium fires cannot be extinguished by water, since water reacts with hot magnesium and releases hydrogen.
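In equation form (standard combustion and steam chemistry, added here as a summary rather than taken from the original page):
2Mg + O₂ → 2MgO (magnesium burning in air)
Mg + H₂O → MgO + H₂ (hot magnesium reacting with water, releasing hydrogen)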
As we know, water expands to about 1,700 times its volume as it is converted to steam, dispersing into ever-smaller droplets, and the intense heat from the magnesium breaks these down before they can effectively cool the fire. When broken down, the water molecules release large amounts of hydrogen gas, which is very explosive. These hydrogen explosions are then fed by the liberated oxygen, making them even more intense, and they cause the molten metal to splatter, throwing small bits of molten metal in all directions.
Warning! Do not look directly into a magnesium fire for long periods of time. The intense light from these fires is much the same as that from an arc welder; the fire is many times brighter than the sun appears at a normal distance. Please take this seriously: flash blindness is very unpleasant, and it feels like you have sand under your eyelids. A quick one- or two-second glance every several minutes seems to be okay, and this can be extended slightly if you have welding goggles or an arc welding mask. If you do get flash blindness, see a doctor immediately; they can give you the same drugs designed for welders who get flash blindness, which will minimize the amount of damage.
The IC should immediately evacuate any bystanders. These fires are a beautiful light show that is hard to resist watching, but eye damage can occur from a very long distance if a person continues to watch the fire.
Warning! Structural firefighting gear (PPE) will not protect a person from a magnesium fire. The sparks seen in the pictures are approximately 5,400 degrees and, like welder sparks, they will collect in the wrinkles of your gear and burn through to your skin before cooling.
Warning! Burning magnesium also produces very toxic fumes.
A Class D Fire is one that involves combustible metals or combustible metal alloys. There are basically two types of Class D fire extinguishers.
Type 1: The extinguishing agent for Type 1 Class D is sodium chloride. The Type 1 Class D extinguisher is effective at controlling magnesium, sodium, potassium, sodium-potassium alloy, uranium, and powdered aluminum metal fires.
Type 2: The extinguishing agent for Type 2 Class D is a copper-based dry powder. The copper compounds smother the fire and provide an excellent heat sink for dissipating the heat of the fire.
Video -- Magnesium Identification
Video -- Magnesium Vinegar Test
See -- Full MSDS Sheet
Magnesium plant fire
|
History of Luxembourg
The history of Luxembourg is inherently entwined with that of surrounding countries, peoples, and ruling dynasties. Over time, the territory of Luxembourg has been eroded, whilst its ownership has changed repeatedly, and its political independence has grown gradually.
Although the recorded history of Luxembourg can be traced back to Roman times, the history of Luxembourg proper is considered to begin in 963. Over the following five centuries, the powerful House of Luxembourg emerged, but its extinction put an end to Luxembourg's independence. After a brief period of Burgundian rule, Luxembourg passed to the Habsburgs in 1477.
After the Eighty Years' War, Luxembourg became a part of the Southern Netherlands, which passed to the Austrian line of the Habsburg dynasty in 1713. After occupation by Revolutionary France, the 1815 Treaty of Paris transformed Luxembourg into a Grand Duchy in personal union with the Netherlands. The treaty also resulted in the second partition of Luxembourg, the first being in 1658 and the third in 1839. Although these treaties greatly reduced Luxembourg's territory, they increased its independence, which was confirmed after the Luxembourg Crisis in 1867.
In the following decades, Luxembourg fell further into Germany's sphere of influence, particularly after the creation of a separate ruling house in 1890. Luxembourg was occupied by Germany from 1914 until 1918 and again from 1940 until 1944. Since the Second World War, Luxembourg has become one of the world's richest countries, buoyed by a booming financial services sector, political stability, and European integration.
In the territory now covered by the Grand Duchy of Luxembourg, there is evidence of primitive inhabitants dating back to the Paleolithic, or Old Stone Age, over 35,000 years ago. The oldest artifacts from this period are decorated bones found at Oetrange.
However, the first real evidence of civilization is from the Neolithic period, in the 5th millennium BC, from which evidence of houses has been found. Traces have been found in the south of Luxembourg at Grevenmacher, Diekirch, Aspelt and Weiler-la-Tour. The dwellings were made of a combination of tree trunks for the basic structure, mud-clad wickerwork walls, and roofs of thatched reeds or straw. Pottery from this period has been found near Remerschen.
While there is not much evidence of communities in Luxembourg at the beginning of the Bronze Age, a number of sites dating back to the period between the 13th and the 8th century BC provide evidence of dwellings and reveal artifacts such as pottery, knives and jewelry. The sites include Nospelt, Dalheim, Mompach and Remerschen.
What is present-day Luxembourg was inhabited by Celts during the Iron Age (from roughly 600 BC until 100 AD). The Gaulish tribe in present-day Luxembourg during and after the La Tène period was known as the Treveri, who reached the height of their prosperity in the 1st century BC. They constructed a number of fortified settlements, or oppida, near the Moselle valley in what is now southern Luxembourg, western Germany and eastern France. Most of the archaeological evidence from this period has been discovered in tombs, many closely associated with Titelberg, a 50 ha site which reveals much about the dwellings and handicrafts of the period.
The Romans under Julius Caesar completed their conquest and occupation in 53 BC. The first known reference to the territory of present-day Luxembourg was by Julius Caesar in his Commentaries on the Gallic War. By and large, the Treveri were more co-operative with the Romans than most Gallic tribes, and the Treveri adapted readily to Roman civilization. Two revolts in the 1st century AD did not permanently damage their cordial relations with Rome. The land of the Treveri was at first part of Gallia Celtica, but with the reform of Domitian in c. 90 was re-assigned to Gallia Belgica.
Gallia Belgica was infiltrated by the Germanic Franks from the 4th century, and was abandoned by Rome in AD 406. The territory of what would become Luxembourg by the 480s became part of Merovingian Austrasia and eventually part of the core territory of the Carolingian Empire. With the Treaty of Verdun (843), it fell to Middle Francia, in 855 to Lotharingia and with the latter's division in 959 to the Duchy of Upper Lorraine within the Holy Roman Empire.
Medieval Luxembourg (963–1477)
The history of Luxembourg properly began with the construction of Luxembourg Castle in the High Middle Ages. It was Siegfried I, count of Ardennes, who in 963 traded some of his ancestral lands with the monks of the Abbey of St. Maximin in Trier for an ancient, supposedly Roman, fort named Lucilinburhuc. Modern historians explain the etymology of the word with Letze, meaning fortification, which might have referred either to the remains of a Roman watchtower or to a primitive refuge of the early Middle Ages.
Around this fort a town gradually developed, which became the centre of a small but important state of great strategic value to France, Germany and the Netherlands. Luxembourg's fortress, located on a rocky outcrop known as the Bock, was steadily enlarged and strengthened over the years by successive owners, among others the Bourbons, Habsburgs and Hohenzollerns, which made it one of the strongest fortresses on the European continent, the Fortress of Luxembourg. Its formidable defences and strategic location caused it to become known as the ‘Gibraltar of the North’.
The Luxembourgish dynasty provided several Holy Roman Emperors, Kings of Bohemia, and Archbishops of Trier and Mainz. From the Early Middle Ages to the Renaissance, Luxembourg bore multiple names, depending on the author. These include Lucilinburhuc, Lutzburg, Lützelburg, Luccelemburc, Lichtburg, among others.
Luxembourg remained an independent fief of the Holy Roman Empire until 1354, when the emperor Charles IV elevated it to the status of a duchy. At that time the Luxembourg family held the Crown of Bohemia, but the duchy was usually held as an appanage by a separate branch of the family. In 1437 the imperial Luxembourg family became extinct in the male line. At that time, the duchy and castle were held by the Bohemian princess Elisabeth of Görlitz, Duchess of Luxembourg, a granddaughter of Emperor Charles IV through a cadet line.
Elisabeth was childless, and in 1440 she made a treaty with her powerful neighbour Philip III, Duke of Burgundy, under which Philip would administer the duchy and inherit it upon her death. Although Elisabeth did not die until 1451, Philip hastened matters by expelling her in 1443. The heirs of the main Luxembourg dynasty were not happy with the arrangement the Burgundians had made, and managed at times to wrest the possession from Burgundy. The Habsburg prince Ladislas the Posthumous, king of Bohemia and Hungary (d. 1457), held the title in the 1450s. After his death, his brother-in-law William of Thuringia (1425 to 1482) held (or at least claimed) it from 1457 to 1469.
In 1467, Elisabeth, Queen of Poland, the last surviving sister of Ladislas, renounced her right in favour of Burgundy by treaty and some concessions, since the possession was next to impossible to hold against Burgundian actions. Seized by Philip of Burgundy in 1443 and held definitively from 1467 to 1469, the duchy became one of the Seventeen Provinces of the Netherlands. With the marriage of Mary of Burgundy in 1477, all the Netherlands provinces, including Luxembourg, came under Habsburg rule in the person of her husband Maximilian, and later their son Philip the Handsome.
Habsburg (1477–1795) and French (1795–1815) rule
In these centuries the electors of Brandenburg, later kings of Prussia (Borussia), advanced their claim to the Luxembourg patrimony as heirs-general to William of Thuringia and his wife Anna of Bohemia, the disputed dukes of Luxembourg of the 1460s – Anna was the eldest daughter of the last Luxembourg heiress. From 1609 onwards, they had a territorial base in the vicinity, the Duchy of Cleves, the starting-point of the future Prussian Rhineland. This Brandenburger claim ultimately produced some results when some districts of Luxembourg were united with Prussia in 1813.
The first Hohenzollern claimant to descend from both Anna and her younger sister Elisabeth, was John George, Elector of Brandenburg (1525–98), his maternal grandmother having been Barbara Jagiellon. In the late 18th century, the younger line of Orange-Nassau (the princes who held sway in the neighbouring Dutch oligarchy) also became related to the Brandenburgers.
In 1598, the then possessor, Philip II of Spain, bequeathed Luxembourg and the other Low Countries to his daughter the Infanta Isabella Clara Eugenia and her husband Albert VII, Archduke of Austria. Albert was an heir and descendant of Elisabeth of Austria (d. 1505), queen of Poland, the youngest granddaughter of Sigismund of Luxembourg, the Holy Roman Emperor. Thus, Luxembourg returned to the heirs of the old Luxembourg dynasty – at least those of the line of Elisabeth. The Low Countries were a separate political entity during the couple's reign. After Albert's childless death in 1621, Luxembourg passed to his great-nephew and heir Philip IV of Spain, who through his paternal grandmother Anna of Austria, queen of Spain, Albert's sister, was the primogenitural heir to the Queen Elisabeth of Poland.
Luxembourg was invaded by Louis XIV of France (husband of Maria Theresa, daughter of Philip IV) in 1684, an action that caused alarm among France's neighbours and resulted in the formation of the League of Augsburg in 1686. In the ensuing War of the Grand Alliance, France was forced to give up the duchy, which was returned to the Habsburgs by the Treaty of Ryswick in 1697.
During this period of French rule, the defences of the fortress were strengthened by the famous siege engineer Vauban. The French king's great-grandson Louis (1710–74) was, from 1712, the first heir-general of Albert VII. Albert VII was a descendant of Anna of Bohemia and William of Thuringia, having that blood through his mother's Danish great-great-grandmother, but was not the heir-general of that line. Louis was the first real claimant of Luxembourg to descend from both sisters, the daughters of Elisabeth of Bohemia, the last Luxembourg empress.
Habsburg rule was confirmed in 1713 by the Treaty of Utrecht, and Luxembourg was integrated into the Austrian Netherlands. Emperor Joseph and his successor Emperor Charles VI were descendants of Spanish kings who were heirs of Albert VII. Joseph and Charles VI were also descendants of Anna of Bohemia and William of Thuringia, having that blood through their mother, although they were heirs-general of neither line. Charles was the first ruler of Luxembourg to descend from both sisters, daughters of Elisabeth of Bohemia, the last Luxembourg empress.
Austrian rulers were more or less ready to exchange Luxembourg and other territories in the Low Countries. Their purpose was to round out and enlarge their power base, which in geographical terms was centered on Vienna. Thus, Bavarian candidates emerged to take over the Duchy of Luxembourg, but this plan led to nothing permanent. Emperor Joseph II, however, made a preliminary pact to install Charles Theodore, Elector Palatine, a neighbour of Luxembourg, as Duke of Luxembourg and king in the Low Countries, in exchange for his possessions in Bavaria and Franconia. This scheme was aborted due to Prussia's opposition. Charles Theodore, who would thus have become Duke of Luxembourg, was genealogically a junior descendant of both Anna and Elisabeth, but main heir of neither.
During the War of the First Coalition, Luxembourg was conquered and annexed by Revolutionary France, becoming part of the département of the Forêts in 1795. The annexation was formalised at Campo Formio in 1797. In 1798 Luxembourgish peasants rebelled against the French, but the rebellion was rapidly suppressed. This short rebellion is called the Peasants' War.
Developing independence (1815–1890)
Luxembourg remained more or less under French rule until the defeat of Napoleon in 1815, when the Congress of Vienna gave formal autonomy to Luxembourg. The Prussians had already managed in 1813 to wrest lands from Luxembourg to strengthen the Prussian-possessed Duchy of Jülich. The Bourbons of France held a strong claim to Luxembourg; the Emperor of Austria, on the other hand, had controlled the duchy until the revolutionary forces joined it to the French republic (he reportedly was not enthusiastic about regaining Luxembourg and the Low Countries, being more interested in the Balkans).
The King of Prussia held the claim of the senior heiress, Anna. An additional claimant emerged, William I of the Netherlands who now ruled the Netherlands, and whose mother and wife were descendants of the Prussian royal family and thus also descendants of both daughters of the last Luxembourg heiress. Prussia and Orange-Nassau made the following exchange deal: Prussia received the Principality of Orange-Nassau, which included the ancestral lands of Nassau in Central Germany; the Prince of Orange in turn received Luxembourg.
Luxembourg, somewhat diminished in size (the medieval lands having been slightly reduced by the French and Prussian heirs), was compensated in another way through elevation to the status of grand duchy and was placed under the rule of William I of the Netherlands. This was the first time that the duchy had a monarch with no claim to inheritance of the medieval patrimony (as the lineages through his mother and wife had a better-entitled claimant, the Prussian king himself). However, Luxembourg's military value to Prussia prevented it from becoming a part of the Dutch kingdom. The fortress, ancestral seat of the medieval Luxembourgers, was taken over by Prussian forces following Napoleon's defeat, and Luxembourg became a member of the German Confederation with Prussia responsible for its defense.
In July 1819 a contemporary from Britain visited Luxembourg; his journal offers some insights. Norwich Duff writes that "Luxembourg is considered one of the strongest fortifications in Europe, and … it appears so. It is situated in Holland (then, as now, used by English speakers as shorthand for the Netherlands) but by treaty is garrisoned by Prussians and 5,000 of their troops occupy it under a Prince of Hesse. The civil government is under the Dutch and the duties collected by them. The town is not very large but the streets are broader than [in] the French towns and clean and the houses are good.... [I] got the cheapest of hot baths here at the principal house I ever had in my life: one franc."
Much of the Luxembourgish population joined the Belgian revolution against Dutch rule. Except for the fortress and its immediate vicinity Luxembourg was considered a province of the new Belgian state from 1830 to 1839. By the Treaty of London in 1839 the status of the grand duchy was confirmed as sovereign and in personal union to the king of the Netherlands. In turn, the predominantly French speaking part of the duchy was ceded to Belgium as the province de Luxembourg.
This loss left the Grand Duchy of Luxembourg a predominantly German state, although French cultural influence remained strong. The loss of Belgian markets also caused painful economic problems for the state. Recognizing this, the grand duke integrated it into the German Zollverein in 1842. Nevertheless, Luxembourg remained an underdeveloped agrarian country for most of the century. As a result, about one in five of the inhabitants emigrated to the United States between 1841 and 1891.
Crisis of 1867
It was not until 1867 that Luxembourg's independence was formally ratified, after a turbulent period which even included a brief time of civil unrest against plans to annex Luxembourg to Belgium, Germany or France. The crisis of 1867 almost resulted in war between France and Prussia over the status of Luxembourg. It involved competition between France and Prussia over control of Luxembourg, which had become free of German control when the German Confederation was abolished at the end of the Seven Weeks' War in 1866.
William III, king of the Netherlands, which still had sovereignty over Luxembourg, was willing to sell the grand duchy to France's Emperor Napoleon III in order to retain Limbourg but backed out when Prussian chancellor Otto von Bismarck expressed opposition. The growing tension brought about a conference in London from March to May 1867 in which the British served as mediators between the two rivals. Bismarck manipulated public opinion, resulting in the denial of sale to France and the continued suzerainty of Holland, a member of the customs union with close ties to Prussia. The issue was resolved by the second Treaty of London which guaranteed the perpetual independence and neutrality of the state. The fortress walls were pulled down and the Prussian garrison was withdrawn.
Famous visitors to Luxembourg in the 18th and 19th centuries included the German poet Johann Wolfgang von Goethe, the French writers Émile Zola and Victor Hugo, the composer Franz Liszt, and the English painter Joseph Mallord William Turner.
Separation and the World Wars (1890–1945)
Luxembourg remained a possession of the kings of the Netherlands until the death of William III in 1890, when the grand duchy passed to the House of Nassau-Weilburg due to a Nassau inheritance pact of 1783.
First World War
World War I affected Luxembourg at a time when the nation-building process was far from complete. The small grand duchy (about 260,000 inhabitants in 1914) opted for an ambiguous policy between 1914 and 1918. With the country occupied by German troops, the government, led by Paul Eyschen, chose to remain neutral. This strategy had been elaborated with the approval of Marie-Adélaïde, Grand Duchess of Luxembourg. Although continuity prevailed on the political level, the war caused social upheaval, which laid the foundation for the first trade unions in Luxembourg.
The end of the occupation in November 1918 coincided with a time of uncertainty on the international and national levels. The victorious Allies disapproved of the choices made by the local élites, and some Belgian politicians even demanded the (re)integration of the country into a greater Belgium. Within Luxembourg a strong minority called for the creation of a republic. In the end, the grand duchy remained a monarchy but was led by a new head of state, Charlotte. In 1921 it entered into an economic and monetary union with Belgium, the Union Économique Belgo-Luxembourgeoise (UEBL). During most of the 20th century, however, Germany remained its most important economic partner.
The introduction of universal suffrage for men and women favored the Rechtspartei (party of the Right), which played the dominant role in the government throughout the 20th century, with the exception of 1925–26 and 1974–79, when the two other important parties, the Liberal and the Social-Democratic parties, formed a coalition. The party's success was due partly to the support of the church — the population was more than 90 percent Catholic — and of its newspaper, the Luxemburger Wort.
On the international level, the interwar period was characterized by an attempt to put Luxembourg on the map. Especially under Joseph Bech, head of the Department of Foreign Affairs, the country participated more actively in several international organizations in order to ensure its autonomy. On December 16, 1920, Luxembourg became a member of the League of Nations. On the economic level, in the 1920s and 1930s the agricultural sector declined in favor of industry and, even more, of the service sector. The proportion of the active population in this last sector rose from 18 percent in 1907 to 31 percent in 1935.
In the 1930s the internal situation deteriorated, as Luxembourgish politics were influenced by European left- and right-wing politics. The government tried to counter communist-led unrest in the industrial areas and continued friendly policies towards Nazi Germany, which led to much criticism. The attempts to quell unrest peaked with the Maulkuerfgesetz, the "muzzle" Law, which was an attempt to outlaw the Communist Party. The law was turned down in a 1937 referendum.
Second World War
Upon the outbreak of the Second World War in September 1939, the government of Luxembourg observed its neutrality and issued an official proclamation to that effect on September 6, 1939. On May 10, 1940, an invasion by German armed forces swept away the Luxembourgish government and drove the monarchy into exile. The German troops, made up of the 1st, 2nd, and 10th Panzer Divisions, invaded at 04:35. They did not encounter any significant resistance save for some bridges destroyed and some land mines, since the majority of the Luxembourgish Volunteer Corps stayed in their barracks. Luxembourgish police resisted the German troops, but to little avail; the capital city was occupied before noon. Total Luxembourgish casualties amounted to 75 police and soldiers captured, six police wounded, and one soldier wounded.
The Luxembourg royal family and their entourage received visas from Aristides de Sousa Mendes in Bordeaux. They crossed into Portugal and subsequently travelled to the United States in two groups: on the USS Trenton from Lisbon to Baltimore in July 1940, and on the Pan American airliner Yankee Clipper in October 1940. Throughout the war, Grand Duchess Charlotte broadcast via the BBC to Luxembourg to give hope to the people.
Luxembourg remained under German military occupation until August 1942, when the Third Reich formally annexed it as part of the Gau Moselland. The German authorities declared Luxembourgers to be German citizens and called up 13,000 for military service. 2,848 Luxembourgers eventually died fighting in the German army.
Luxembourgish opposition to this annexation took the form of passive resistance at first, as in the Spéngelskrich (lit. "War of the Pins"), and in refusal to speak German. As French was forbidden, many Luxembourgers resorted to resuscitating old Luxembourgish words, which led to a renaissance of the language. The Germans met opposition with deportation, forced labour, forced conscription and, more drastically, with internment, deportation to concentration camps and execution.
Executions took place after the so-called general strike from 1 September to 3 September 1942, which paralyzed the administration, agriculture, industry and education in response to the declaration of forced conscription by the German administration on 30 August 1942. The Germans suppressed the strike violently, executing 21 strikers and deporting hundreds more to concentration camps. The then civilian administrator of Luxembourg, Gauleiter Gustav Simon, had declared conscription necessary to support the German war effort. The general strike in Luxembourg was one of the few mass strikes against the German war machine in Western Europe.
U.S. forces liberated most of the country in September 1944: they entered the capital city on 10 September 1944. During the Ardennes Offensive (Battle of the Bulge) German troops took back most of northern Luxembourg for a few weeks. Allied forces finally expelled the Germans in January 1945.
Between December 1944 and February 1945, the recently liberated city of Luxembourg was designated by the OB West (German Army Command in the West) as the target for V-3 superguns, which were originally intended to bombard London. Two V-3 guns based at Lampaden fired a total of 183 rounds at Luxembourg. Fortunately for the Luxembourgers, the V-3 was not very accurate. 142 rounds landed in Luxembourg, with 44 confirmed hits in the urban area, and the total casualties were 10 dead and 35 wounded. The bombardments ended when the American Army neared Lampaden on 22 February 1945.
Altogether, of a pre-war population of 293,000, 5,259 Luxembourgers lost their lives during the hostilities.
Modern history (since 1945)
After World War II, Luxembourg abandoned its policy of neutrality when it became a founding member of the North Atlantic Treaty Organization (1949) and the United Nations. It is a signatory of the Treaty of Rome, maintained a monetary union with Belgium, and in 1948 entered a customs union with Belgium and the Netherlands, the so-called Benelux.
Between 1945 and 2005, the economic structure of Luxembourg changed significantly. The crisis of the metallurgy sector, which began in the mid-1970s and lasted till the late 1980s, nearly pushed the country into economic recession, given the monolithic dominance of that sector. The Tripartite Coordination Committee, consisting of members of the government, management representatives, and trade union leaders, succeeded in preventing major social unrest during those years, thus creating the myth of a “Luxembourg model” characterized by social peace. Although in the early years of the 21st century Luxembourg enjoyed one of the highest GNP per capita in the world, this was mainly due to the strength of its financial sector, which gained importance at the end of the 1960s. Thirty-five years later, one-third of the tax proceeds originated from that sector. The harmonization of the tax system across Europe could, however, seriously undermine the financial situation of the grand duchy.
Luxembourg has been one of the strongest advocates of the European Union in the tradition of Robert Schuman. In 1957, Luxembourg became one of the six founding countries of the European Economic Community (later the European Union) and in 1999 it joined the euro currency area.
Encouraged by the contacts established with the Dutch and Belgian governments in exile, Luxembourg pursued a policy of presence in international organizations. It was one of the six founding members of the European Coal and Steel Community (ECSC) in 1952 and of the European Economic Community (EEC) in 1957. In the context of the Cold War, Luxembourg clearly opted for the West by joining the North Atlantic Treaty Organization (NATO) in 1949, thus renouncing its traditional neutrality, which had determined its international policy since the founding of the state. Engagement in European construction was rarely questioned subsequently, either by politicians or by the greater population.
Despite its small proportions, Luxembourg often played an intermediary role between larger countries. This role of mediator, especially between the two large and often bellicose nations of Germany and France, was considered one of the main characteristics of national identity, allowing the Luxembourger not to have to choose between one of these two neighbours. The country also hosted a large number of European institutions such as the European Court of Justice.
Luxembourg’s small size no longer seemed to be a challenge to the existence of the country, and the creation of the Banque Centrale du Luxembourg (1998) and of the University of Luxembourg (2003) was evidence of the continuing desire to become a “real” nation. The decision in 1984 to declare Lëtzebuergesch (Luxembourgish) the national language was also a step in the affirmation of the country’s independence. In fact, the linguistic situation in Luxembourg was characterized by trilingualism: Lëtzebuergesch was the spoken vernacular language, German the written language in which Luxembourgers were most fluent, and French the language of official letters and law.
In 1985, the country fell victim to a mysterious bombing spree, targeted mostly at electrical masts and other installations.
The current Prime Minister, Jean-Claude Juncker, follows this European tradition. On 10 September 2004, Mr Juncker became the semi-permanent President of the group of finance ministers from the 12 countries that share the euro, a role dubbed "Mr Euro".
The present sovereign is Grand Duke Henri. Henri's father, Jean, succeeded his mother, Charlotte, on 12 November 1964. Jean's eldest son, Prince Henri, was appointed "Lieutenant Représentant" (Hereditary Grand Duke) on 4 March 1998. On 24 December 1999, Prime Minister Juncker announced Grand Duke Jean's decision to abdicate the throne on 7 October 2000, in favour of Prince Henri who assumed the title and constitutional duties of Grand Duke.
On 10 July 2005, after threats of resignation by Prime Minister Juncker, the proposed European Constitution was approved by 56.52% of voters.
- County of Luxembourg
- List of monarchs of Luxembourg
- List of Prime Ministers of Luxembourg
- Politics of Luxembourg
- "Luxembourg". Catholic Encyclopaedia. 1913. Retrieved 2006-07-30.
- Jacobs, Frank (17 April 2012). "Who's Afraid of Greater Luxembourg?". The New York Times.
- Literally 'woods', in reference to the Ardennes.
- Frédéric Laux, "Bismarck et l'affaire du Luxembourg de 1867 a la Lumiere des Archives Britanniques," [Bismarck and the Luxembourg Affair of 1867 in Light of British Archives] Revue d'histoire diplomatique 2001 115(3): 183-202
- Herbert Maks, "Zur Interdependenz Innen- Und Aussenpolitischer Faktoren in Bismarcks Politik in Der Luxemburgischen Frage 1866/67," ["The Interdependence of Domestic and Foreign Factors in Bismarck's Policies on the Luxembourg Question, 1866-67] Francia Part 3 19./20. 1997 24(3): 91-115.
- Government of the Grand Duchy of Luxembourg, Luxembourg and the German Invasion: Before and After (London and New York, 1942) p. 32
- Horne, Alistair, To Lose a Battle, p.258-264
- Arblaster, Paul. A History of the Low Countries (Palgrave Essential Histories) (2005)
- Blom, J.C.H. History of the Low Countries (2006)
- de Vries, Johan. "Benelux, 1920-1970," in C. M. Cipolla, ed. The Fontana Economic History of Europe: Contemporary Economics Part One (1976) pp 1–71
- Kossmann, E. H. The Low Countries 1780–1940 (1978)
- Luxembourg emigration in the 19th century - Offers reasons why people left Luxembourg in the 19th century.
- History of Luxembourg: Primary Documents
- History of Luxembourg – History of Luxembourg from 53 BC to the present.
- Historical Map of Luxembourg 1789
- National Museum of Military History
These problems are often not well defined. Students have to conduct web searches for relevant information or, more commonly, make estimates of quantities. This means they must also decide what information is needed. Many of these problems can be solved in more than one way, so students have to determine the approach that works best for them.
Some examples of activities from the classroom and exams:
How many two-step paces is it from LA to NYC?
What are the dimensions of the standard kilogram?
How far does a bowling ball get before it stops skidding and is only rolling?
How many candy bars worth of energy does it take to push a shopping cart past the snack aisle?
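The first problem above already shows the pattern: choose rough values, state them, and do the arithmetic. A minimal Python sketch follows; the driving distance and pace length are assumed round-number estimates, not values given in the course:

    # Fermi estimate: two-step paces from LA to NYC.
    distance_m = 3.9e6   # assumed LA-to-NYC distance, roughly 3,900 km
    pace_m = 1.5         # assumed length of one two-step pace, in meters

    paces = distance_m / pace_m
    print(f"roughly {paces:.1e} two-step paces")   # about 2.6 million

Any answer within a factor of a few of this counts as a good estimate; the reasoning matters more than the precision.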
Cervical spondylotic myelopathy (CSM) is a neck condition that arises when the spinal cord becomes compressed—or squeezed—due to the wear-and-tear changes that occur in the spine as we age. The condition commonly occurs in patients over the age of 50.
Because the spinal cord carries nerve impulses to many regions in the body, patients with CSM can experience a wide variety of symptoms. Weakness and numbness in the hands and arms, loss of balance and coordination, and neck pain can all result when the normal flow of nerve impulses through the spinal cord is interrupted.
Your spine is made up of 24 bones, called vertebrae, that are stacked on top of one another.
The seven small vertebrae that begin at the base of the skull and form the neck comprise the cervical spine.
Other parts of your spine include:
Spinal cord and nerves. The spinal cord extends from the skull to your lower back and travels through the middle part of each stacked vertebra, called the central canal. Nerves branch out from the spinal cord through openings in the vertebrae (foramen) and carry messages between the brain and muscles.
Intervertebral disks. In between your vertebrae are flexible intervertebral disks. They act as shock absorbers when you walk or run.
Intervertebral disks are flat and round and about a half inch thick. They are made up of two components:
- Annulus fibrosus. This is the tough, flexible outer ring of the disk.
- Nucleus pulposus. This is the soft, jelly-like center of the disk.
Cervical spondylotic myelopathy (CSM) arises from degenerative changes that occur in the spine as we age. These degenerative changes in the disks are often called arthritis or spondylosis. These changes are normal and they occur in everyone. In fact, nearly half of all people middle-aged and older have worn disks that do not cause painful symptoms. It is not known why some patients develop symptoms and others do not.
Cervical Disk Degeneration
Bone spurs. As the disks in the spine age, they lose height and begin to bulge. They also lose water content, begin to dry out, and become stiffer. This problem causes settling, or collapse, of the disk spaces and loss of disk space height.
As the disks lose height, the vertebrae move closer together. The body responds to the collapsed disk by forming more bone—called bone spurs—around the disk to strengthen it. These bone spurs contribute to the stiffening of the spine. They may also make the spinal canal narrow—compressing or squeezing the spinal cord.
Herniated disk. A disk herniates when its jelly-like center (nucleus pulposus) pushes against its outer ring (annulus fibrosus). If the disk is very worn or injured, the nucleus may squeeze all the way through. When a herniated disk bulges out toward the spinal canal, it can put pressure on the spinal cord or nerve roots.
As disks deteriorate with age, they become more prone to herniation. A herniated disk often occurs with lifting, pulling, bending, or twisting movements.
Other Causes of Myelopathy
Myelopathy can arise from other conditions that cause spinal cord compression, as well. Although these conditions are not related to disk degeneration, they may result in the same symptoms as CSM.
Rheumatoid arthritis. Rheumatoid arthritis is an autoimmune disease. This means that the immune system attacks its own tissues. In rheumatoid arthritis, immune cells attack the synovium, the thin membrane that lines the joints.
As the synovium swells, it may lead to pain and stiffness and, in severe cases, destruction of the facet joints in the cervical spine. When this occurs, the upper vertebra may slide forward on top of the lower vertebra, reducing the amount of space available for the spinal cord.
Injury. An injury to the neck—such as from a car accident, sports, or a fall—may also lead to myelopathy.
For example, a "rear end" car collision may result in hyperextension, a backward motion of the neck beyond its normal limits, or hyperflexion, a forward motion of the neck beyond its normal limits. Because these types of injuries often affect the muscles and ligaments that support the vertebrae, they may lead to spinal cord compression.
Typically, the symptoms of CSM develop slowly and progress steadily over several years. In some patients, however, the condition may worsen more rapidly.
Patients with CSM may experience a combination of the following symptoms:
- Tingling or numbness in the arms, fingers, or hands
- Weakness in the muscles of the arms, shoulders, or hands. You may have trouble grasping and holding on to items.
- Imbalance and other coordination problems. You may have trouble walking or you may fall down. With myelopathy, there is no sensation of spinning, or "vertigo." Rather, your head and eyes feel steady, but your body feels unable to follow through with what you are trying to do.
- Loss of fine motor skills. You may have difficulty with handwriting, buttoning your clothes, picking up coins, or feeding yourself.
- Pain or stiffness in the neck
After discussing your medical history and general health, your doctor will ask you about your symptoms. He or she will conduct a thorough examination of your neck, shoulders, arms, hands, and legs, looking for:
- Changes in reflexes—including the presence of hyper-reflexia, a condition in which reflexes are exaggerated or overactive
- Numbness and weakness in the arms, hands, and fingers
- Trouble walking, loss of balance, or weakness in the legs
- Atrophy—a condition in which muscles deteriorate and shrink in size
X-rays. These provide images of dense structures, such as bone. An x-ray will show the alignment of the vertebrae in your neck.
Magnetic resonance imaging (MRI) scans. These studies create better images of the body's soft tissues. An MRI can show spinal cord compression and help determine whether your symptoms are caused by damage to soft tissues—such as a bulging or herniated disk.
Computed tomography (CT) scans. More detailed than a plain x-ray, a CT scan can show narrowing of the spinal canal and can help your doctor determine whether you have developed bone spurs in your cervical spine.
Myelogram. This is a special type of CT scan. In this procedure, a contrast dye is injected into the spinal column to make the spinal cord and nerve roots show up more clearly.
In milder cases, initial treatment for CSM may be nonsurgical. The goal of nonsurgical treatment is to decrease pain and improve the patient's ability to perform daily activities. Nonsurgical treatment options include:
Soft cervical collar. This is a padded ring that wraps around the neck and is held in place with velcro. Your doctor may advise you to wear a soft cervical collar to allow the muscles of the neck to rest and limit neck motion. A soft collar should only be worn for a short period of time since long-term wear may decrease the strength of the muscles in your neck.
Physical therapy. Specific exercises can help relieve pain, strengthen neck muscles, and increase flexibility. Physical therapy can also help you maintain strength and endurance so that you are better able to perform your daily activities. In some cases, traction can be used to gently stretch the joints and muscles of the neck.
Medications. In some cases, medications can help improve your symptoms.
- Nonsteroidal anti-inflammatory medications (NSAIDs). Drugs like aspirin, ibuprofen, and naproxen can help relieve pain and reduce inflammation.
- Oral corticosteroids. A short course of oral corticosteroids may help relieve pain by reducing inflammation.
- Epidural steroid injection. Although not often used to treat CSM, in this procedure, steroids are injected into the space next to the covering of the spinal cord (the "epidural" space) to help reduce local inflammation. Although a steroid injection may temporarily help relieve pain and swelling, it will not relieve pressure on the spinal cord.
- Narcotics. These medications are reserved for patients with severe pain that is not relieved by other options. Narcotics are usually prescribed for a limited time only.
Although people sometimes turn to chiropractic manipulation for neck and back pain, manipulation should never be used for spinal cord compression.
If nonsurgical treatment does not relieve your symptoms, your doctor will talk with you about whether you would benefit from surgery. The majority of patients with symptoms and tests consistent with CSM are recommended to have surgery.
There are several procedures that can be performed to help relieve pressure on the spinal cord. The procedure your doctor recommends will depend on many factors, including what symptoms you are experiencing and the levels of the spinal cord that are involved.
Learn more about surgery for CSM.
The American Academy of Orthopaedic Surgeons
9400 West Higgins Road
Rosemont, IL 60018
How many of us pipers have a firm grasp of the physics of sound that causes the unique and rich sound of our bagpipes? We are told that we should maintain a pressure in the pipe bag that is at the chanter reed’s “sweet spot”, that pressure that causes the reed to maximally vibrate and bring out the most “harmonics” and richness of sound of the reed. But what, really, are harmonics?
In Part 1, we explained how sound waves are described, and what they look like when analyzed visually. We also learned that musical instruments produce complex patterns of sound frequencies (pitch). These patterns of a single note include its “fundamental” frequency along with many harmonics, or overtones. Taken together the fundamental and its harmonics make up the timbre (pronounced “tamber”) of the musical instrument. In this post, we will explore the relationships between harmonics and a major scale, as well as how harmonics can be useful in tuning our bagpipes.
Much of modern music is made up of “major scales” for melodies and harmonies, including the music for bagpipes. A major scale contains eight tones, or sounds. Since we discussed in Part 1 different instruments playing the note “A”, we will continue using the A Major scale for our examples. It should be noted, however, that the following points can be used for any major scale. Figure 1 shows the A Major scale on the piano.
Each note of the scale can be given a corresponding number, so that we have the following:
The As at positions 1 and 8 are called the “tonic” note, the most important note because it names the A Major scale. The second most important note is the 5th note of the scale, also known as the “dominant”. Lastly, the next important note is the 3rd note, called the “mediant”. The 3rd note of a scale is what makes it a major scale. The tonic, 3rd, and 5th notes taken together make a major triad, or chord. Play Low A, C#, E, and High A on your practice chanter, and you will appreciate how each of those notes fit with the others in this major scale. These notes also constitute an A Major arpeggio. Understanding how different notes of a scale fit together is important for appreciating and writing harmonies.
Now, what does knowing this information about a major scale have to do with harmonics? From Part 1, we saw that for a cello playing the note "A", 220 Hz is the fundamental frequency, while additional frequencies are present that are higher in pitch but lower in amplitude. Some harmonics can be heard, while others are of such high pitch and low amplitude that they are beyond our hearing. See Figure 2.
If one examines the fundamental frequency and those of the harmonics, it can be seen that there is a clear numerical relationship between the fundamental and its harmonics. That relationship shows that each harmonic is a multiple of the fundamental's frequency. The following table shows the "A" fundamental and its many harmonics:
Notice that the first note, the fundamental note A, has a pitch of 220Hz. The next note is also an A, but with a pitch of 440 Hz. The second harmonic is 660 Hz, which happens to be the note E. The third harmonic is yet another A, at 880 Hz, followed by a C# at 1100 Hz. The fifth harmonic is a frequency of 1320 Hz, and is another E. The 6th harmonic is G, with a pitch of 1540 Hz. The 7th harmonic is another A, but with a pitch of 1760 Hz.
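The multiplication is easy to verify with a few lines of Python; note that the loop index below counts the multiple of the fundamental, whereas the paragraph above numbers the harmonics as overtones, so "the second harmonic, 660 Hz" corresponds to the multiple n = 3:

    # Partials built on A 220 Hz: each is an integer multiple
    # of the fundamental frequency.
    fundamental = 220.0  # Hz

    names = ["A", "A", "E", "A", "C#", "E", "G", "A"]  # as listed above
    for n, name in enumerate(names, start=1):
        print(f"{n} x 220 Hz = {n * fundamental:4.0f} Hz ({name})")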
Finally, let’s put our knowledge to work as it relates to the bagpipe. For the sake of this discussion, let’s assume that Low A on the chanter closely approximates concert A (440 Hz). Thus, the chanter has two As, Low A and High A (880 Hz). Further, each of those notes has its own set of rich harmonics. Now, when the tenors are tuned one octave lower than the chanter's Low A, the tenors’ fundamental note is A, but at a pitch of 220 Hz. Obviously, the drones produce their own harmonics. Finally, the bass drone, tuned to an octave below the tenors, has a fundamental frequency of 110 Hz. This low, but loud, pitch can be heard as vibrations of the bass drone reed. With four reeds in a bagpipe, each with its own harmonics, it is no wonder that a well-tuned bagpipe, played at the chanter reed’s sweet spot, sounds so incredibly beautiful.
At this point, we should not only better understand the science behind harmonics, we should also be able to appreciate the role harmonics play in tuning. For example, when tuning one tenor drone to another, we know that as the drones come closer together in pitch, a "warble" or "beat" is produced. The closer the tuning gets, the farther apart the "beats" become, until they ultimately disappear. If one keeps moving one of the tenors in the same direction, however, the "beats" will reappear. What is happening is that the fundamental frequency and harmonics of one drone are slightly out of sync with those of the other, so there is discord between the As. As the frequencies of one drone get closer to those of the other drones, the frequencies begin to line up on top of each other, which produces unison between the drones. If we listen carefully, as one tenor gets into tune with the other, the sound actually gets a bit louder, because well-aligned frequencies reinforce each other, giving a slightly higher amplitude.
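The beats themselves are simple arithmetic: two tones of nearly equal frequency sum to a signal whose loudness swells and fades at the difference between the two frequencies. Here is a minimal Python sketch, using an arbitrary 1 Hz detuning rather than any measured drone values:

    import numpy as np

    rate = 44100                          # samples per second
    t = np.linspace(0.0, 2.0, 2 * rate, endpoint=False)

    f1, f2 = 220.0, 221.0                 # two slightly detuned tenor drones
    combined = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

    # The summed signal swells and fades at the beat frequency |f1 - f2|;
    # as f2 is tuned toward f1, the beats slow down and finally vanish.
    print(f"beat frequency: {abs(f1 - f2):.1f} Hz")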
This writer again thanks Scott Laird, music instructor at the North Carolina School of Science and Math, for producing some outstanding educational videos that discuss the intersection of music and science.
Entropy and the 2nd & 3rd Laws of Thermodynamics
|Spontaneous Chemical Reactions|Entropy as a Measure of Disorder|Entropy and the Second Law of Thermodynamics|
|The Third Law of Thermodynamics|Standard-State Entropies of Reaction|Enthalpy of Reaction vs. Entropy of Reaction Calculations|
The first law of thermodynamics suggests that we can't get something for nothing. It allows us to build an apparatus that does work, but it places important restrictions on that apparatus. It says that we have to be willing to pay a price in terms of a loss of either heat or internal energy for any work we ask the system to do. It also puts a limit on the amount of work we can get for a given investment of either heat or internal energy.
The first law allows us to convert heat into work, or work into heat. It also allows us to change the internal energy of a system by transferring either heat or work between the system and its surroundings. But it doesn't tell us whether one of these changes is more easy to achieve than another. Our experiences, however, tell us that there is a preferred direction to many natural processes. We aren't surprised when a cup of coffee gradually loses heat to its surroundings as it cools, for example, or when the ice in a glass of lemonade absorbs heat as it melts. But we would be surprised if a cup of coffee suddenly grew hotter until it boiled or the water in a glass of lemonade froze on a hot summer day, even though neither process violates the first law of thermodynamics.
Similarly, we aren't surprised to see a piece of zinc metal dissolve in a strong acid to give bubbles of hydrogen gas.
Zn(s) + 2 H+(aq) → Zn2+(aq) + H2(g)
But if we saw a film in which H2 bubbles formed on the surface of a solution and then sank through the solution until they disappeared, while a strip of zinc metal formed in the middle of the solution, we would conclude that the film was being run backward.
Many chemical and physical processes are reversible and yet tend to proceed in a direction in which they are said to be spontaneous. This raises an obvious question: What makes a reaction spontaneous? What drives the reaction in one direction and not the other?
So many spontaneous reactions are exothermic that it is tempting to assume that one of the driving forces that determines whether a reaction is spontaneous is a tendency to give off energy. The following are all examples of spontaneous chemical reactions that are exothermic.
|2 Al(s) + 3 Br2(l) → 2 AlBr3(s)|ΔH° = -511 kJ/mol AlBr3|
|2 H2(g) + O2(g) → 2 H2O(g)|ΔH° = -241.82 kJ/mol H2O|
|P4(s) + 5 O2(g) → P4O10(s)|ΔH° = -2984 kJ/mol P4O10|
There are also spontaneous reactions, however, that absorb energy from their surroundings. At 100°C, water boils spontaneously even though the process is endothermic.
|H2O(l) → H2O(g)|ΔH° = 40.88 kJ/mol|
Ammonium nitrate dissolves spontaneously in water, even though energy is absorbed when this reaction takes place.
|NH4NO3(s) → NH4+(aq) + NO3-(aq)|ΔH° = 28.05 kJ/mol|
Thus, the tendency of a spontaneous reaction to give off energy can't be the only driving force behind a chemical reaction. There must be another factor that helps determine whether a reaction is spontaneous. This factor, known as entropy, is a measure of the disorder of the system.
Perhaps the best way to understand entropy as a driving force in nature is to conduct a simple experiment with a new deck of cards. Open the deck, remove the jokers, and then turn the deck so that you can read the cards. The top card will be the ace of spades, followed by the two, three, and four of spades, and so on. Now divide the cards in half, shuffle the deck, and note that the deck becomes more disordered. The more often the deck is shuffled, the more disordered it becomes. What makes a deck of cards become more disordered when shuffled?
In 1877 Ludwig Boltzmann provided a basis for answering this question when he introduced the concept of the entropy of a system as a measure of the amount of disorder in the system. A deck of cards fresh from the manufacturer is perfectly ordered and the entropy of this system is zero. When the deck is shuffled, the entropy of the system increases as the deck becomes more disordered.
There are 8.066 x 10^67 different ways of organizing a deck of cards. The probability of obtaining any particular sequence of cards when the deck is shuffled is therefore 1 part in 8.066 x 10^67. In theory, it is possible to shuffle a deck of cards until the cards fall into perfect order. But it isn't very likely!
Boltzmann proposed the following equation to describe the relationship between entropy and the amount of disorder in a system.
S = k ln W
In this equation, S is the entropy of the system, k is a proportionality constant equal to the ideal gas constant divided by Avogadro's constant, ln represents a logarithm to the base e, and W is the number of equivalent ways of describing the state of the system. According to this equation, the entropy of a system increases as the number of equivalent ways of describing the state of the system increases.
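A few lines of Python evaluate Boltzmann's formula for a fully shuffled deck and reproduce the figure quoted above; this is only an illustration of the equation, with k taken as the standard value of the Boltzmann constant:

    import math

    k = 1.380649e-23         # J/K: the ideal gas constant divided by Avogadro's constant
    W = math.factorial(52)   # number of orderings of a 52-card deck

    print(f"W = {W:.3e}")                            # about 8.066 x 10^67
    print(f"S = k ln W = {k * math.log(W):.3e} J/K")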
The relationship between the number of equivalent ways of describing a system and the amount of disorder in the system can be demonstrated with another analogy based on a deck of cards. There are 2,598,960 different hands that could be dealt in a game of five-card poker. More than half of these hands are essentially worthless. Winning hands are much rarer. Only 3,744 combinations correspond to a "full house," for example. The table below gives the number of equivalent combinations of cards for each category of poker hand, which is the value of W for this category. As the hand becomes more disordered, the value of W increases, and the hand becomes intrinsically less valuable.
Number of Equivalent Combinations for Various Types of Poker Hands
|Hand|W|ln W|
|Royal flush (AKQJ10 in one suit)|4|1.39|
|Straight flush (five cards in sequence in one suit)|36|3.58|
|Four of a kind|624|6.44|
|Full house (three of a kind plus a pair)|3,744|8.23|
|Flush (five cards in the same suit)|5,108|8.54|
|Straight (five cards in sequence)|10,200|9.23|
|Three of a kind|54,912|10.91|
The second law of thermodynamics describes the relationship between entropy and the spontaneity of natural processes.
Second Law: In an isolated system, natural processes are spontaneous when they lead to an increase in disorder, or entropy.
This statement is restricted to isolated systems to avoid having to worry about whether the reaction is exothermic or endothermic. By definition, neither heat nor work can be transferred between an isolated system and its surroundings.
We can apply the second law of thermodynamics to chemical reactions by noting that the entropy of a system is a state function that is directly proportional to the disorder of the system.
ΔSsys > 0 implies that the system becomes more disordered during the reaction. ΔSsys < 0 implies that the system becomes less disordered during the reaction.
For an isolated system, any process that leads to an increase in the disorder of the system will be spontaneous. The following generalizations can help us decide when a chemical reaction leads to an increase in the disorder of the system.
Solids have a much more regular structure than liquids. Liquids are therefore more disordered than solids.
The particles in a gas are in a state of constant, random motion. Gases are therefore more disordered than the corresponding liquids.
Any process that increases the number of particles in the system increases the amount of disorder.
|Practice Problem 2:
Which of the following processes will lead to an increase in the entropy of the system?
(a) N2(g) + 3 H2(g) → 2 NH3(g)
(b) H2O(l) → H2O(g)
(c) CaCO3(s) → CaO(s) + CO2(g)
(d) NH4NO3(s) + H2O(l) → NH4+(aq) + NO3-(aq)
The sign of ΔH for a chemical reaction affects the direction in which the reaction occurs.
Spontaneous reactions often, but not always, give off energy.
The sign of ΔS for a reaction can also determine the direction of the reaction.
In an isolated system, chemical reactions occur in the direction that leads to an increase in the disorder of the system.
In order to decide whether a reaction is spontaneous, it is therefore important to consider the effect of changes in both enthalpy and entropy that occur during the reaction.
|Practice Problem 3:
Use the Lewis structures of NO2 and N2O4 and the stoichiometry of the following reaction to decide whether ΔH and ΔS favor the reactants or products of this reaction:
2 NO2(g) → N2O4(g)
The third law of thermodynamics defines absolute zero on the entropy scale.
Third law: The entropy of a perfect crystal is zero when the temperature of the crystal is equal to absolute zero (0 K).
The crystal must be perfect, or else there will be some inherent disorder. It also must be at 0 K; otherwise there will be thermal motion within the crystal, which leads to disorder.
As the crystal warms to temperatures above 0 K, the particles in the crystal start to move, generating some disorder. The entropy of the crystal gradually increases with temperature as the average kinetic energy of the particles increases. At the melting point, the entropy of the system increases abruptly as the compound is transformed into a liquid, which is not as well ordered as the solid. The entropy of the liquid gradually increases as the liquid becomes warmer because of the increase in the vibrational, rotational, and translational motion of the particles. At the boiling point, there is another abrupt increase in the entropy of the substance as it is transformed into a random, chaotic gas.
The table below provides an example of the difference between the entropy of a substance in the solid, liquid, and gaseous phases.
The Entropy of Solid, Liquid, and Gaseous Forms of Sulfur Trioxide
Note that the units of entropy are joules per mole kelvin (J/mol-K). A plot of the entropy of this system versus temperature is shown in the figure below.
Because entropy is a state function, the change in the entropy of the system that accompanies any process can be calculated by subtracting the initial value of the entropy of the system from the final value.
ΔS = Sf - Si
ΔS for a chemical reaction is therefore equal to the difference between the sum of the entropies of the products and the sum of the entropies of the reactants.
ΔS = S(products) - S(reactants)
When this difference is measured under standard-state conditions, the result is the standard-state entropy of reaction, ΔS°.
ΔS° = S°(products) - S°(reactants)
By convention, the standard state for thermodynamic measurements is characterized by the following conditions.
|All solutions have concentrations of 1 M.|
|All gases have partial pressures of 0.1 MPa (0.9869 atm).|
Although standard-state entropies can be measured at any temperature, they are often measured at 25°C.
|Practice Problem 4:
Calculate the standard-state entropy of reaction for the following reactions and explain the sign of ΔS for each reaction.
(a) Hg(l) → Hg(g)
(b) 2 NO2(g) → N2O4(g)
(c) N2(g) + O2(g) → 2 NO(g)
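Part (c) can also be checked numerically. The sketch below uses approximate standard-state entropies taken as assumed literature values; they are not data given in this text:

    # Approximate standard-state entropies, J/mol-K (assumed literature values)
    S = {"N2": 191.6, "O2": 205.0, "NO": 210.8}

    # N2(g) + O2(g) -> 2 NO(g)
    dS = 2 * S["NO"] - (S["N2"] + S["O2"])
    print(f"dS = {dS:+.1f} J/mol-K")   # small and positive

The near-zero, slightly positive result reflects the fact that two moles of gas are consumed and two moles of gas are produced.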
At first glance, tables of thermodynamic data seem inconsistent. Consider the data in the table below, for example.
Thermodynamic Data for Aluminum and Its Compounds
|Substance|ΔHf° (kJ/mol)|S° (J/mol-K)|
|Al(s)|0|28.33|
|Al(g)|326.4|164.54|
|Al2O3(s)|-1675.7|50.92|
|AlCl3(s)|-704.2|110.67|
The enthalpy data in this table are given in terms of the standard-state enthalpy of formation of each substance, ΔHf°. This quantity is the heat given off or absorbed when the substance is made from its elements in their most thermodynamically stable state at 0.1 MPa. The enthalpy of formation of AlCl3, for example, is the heat given off in the following reaction.
2 Al(s) + 3 Cl2(g) → 2 AlCl3(s)    ΔHf° = -704.2 kJ/mol
The enthalpy data in this table are therefore relative numbers, which compare each compound with its elements.
Enthalpy data are listed as relative measurements because there is no absolute zero on the enthalpy scale. All we can measure is the heat given off or absorbed by a reaction. Thus, all we can determine is the difference between the enthalpies of the reactants and the products of a reaction. We therefore define the enthalpy of formation of the elements in their most thermodynamically stable states as zero and report all compounds as either more or less stable than their elements.
Entropy data are different. The third law defines absolute zero on the entropy scale. As a result, the absolute entropy of any element or compound can be measured by comparing it with a perfect crystal at absolute zero. The entropy data are therefore given as absolute numbers, S°, not entropies of formation, ΔSf°.
|AlCl3(s)|S° = 110.67 J/mol-K|
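The same bookkeeping turns these absolute entropies into an entropy of reaction for the formation of AlCl3 shown earlier; the S° value for Cl2 is an assumed literature value (about 223.1 J/mol-K), since it does not appear in the table:

    # S° values in J/mol-K; Al(s) and AlCl3(s) come from the table above,
    # Cl2(g) is an assumed literature value.
    S = {"Al": 28.33, "Cl2": 223.1, "AlCl3": 110.67}

    # 2 Al(s) + 3 Cl2(g) -> 2 AlCl3(s)
    dS = 2 * S["AlCl3"] - (2 * S["Al"] + 3 * S["Cl2"])
    print(f"dS = {dS:.1f} J/mol-K")   # large and negative: 3 moles of gas are consumed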
2000 Canadian Computing Competition, Stage 1
Problem S2: Babbling Brooks
A series of streams run down the side of a mountain. The mountainside is very rocky so the streams split and rejoin many times. At the foot of the mountain, several streams emerge as rivers. Your job is to compute how much water flows in each river.
At any given elevation there are m streams, labelled 1 to m from left-to-right. As we proceed down the mountainside, one of the streams may split into a left fork and a right fork, increasing the total number of streams by 1, or two streams may rejoin, reducing the total number of streams by 1. After a split or a rejoining occurs, the streams are renumbered consecutively from left-to-right. There is always at least one stream and there are never more than 100 streams.
The first line of input contains n, the initial number of streams at some high altitude. The next n lines give the flow in each of the streams from left-to-right. Proceeding down the mountainside, several split or rejoin locations are encountered. For each split location, there will be three lines of input.
- a line containing 99 (to indicate a split)
- a line containing the number of the stream that is split
- a line containing a number between 0 and 100, the percentage of flow from the split stream that flows to the left fork. (The rest flows to the right fork)
For each join location, there will be two lines of input:
- a line containing 88 (to indicate a join)
- a line containing the number of the stream that is rejoined with the stream to its right
A line containing 77 indicates the end of input. Your job is to determine how many streams emerge at the foot of the mountain and what the flow is in each. Your output is a sequence of real numbers, rounded to the nearest integer, giving the flow in rivers 1 through m.
3
10
20
30
99
1
50
88
3
88
2
77
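A minimal Python sketch of one straightforward simulation is shown below. It assumes the input can be read as whitespace-separated tokens, that a line containing 77 terminates the input (as in the sample), and that the river flows are printed space-separated on one line:

    import sys

    def main():
        data = sys.stdin.read().split()
        pos = 0
        n = int(data[pos]); pos += 1
        flows = [float(tok) for tok in data[pos:pos + n]]
        pos += n
        while pos < len(data):
            code = int(data[pos]); pos += 1
            if code == 77:                 # end of input
                break
            if code == 99:                 # split: stream s, left-fork percentage p
                s = int(data[pos]); p = float(data[pos + 1]); pos += 2
                left = flows[s - 1] * p / 100.0
                flows[s - 1:s] = [left, flows[s - 1] - left]
            elif code == 88:               # join: stream s merges with stream s + 1
                s = int(data[pos]); pos += 1
                flows[s - 1:s + 1] = [flows[s - 1] + flows[s]]
        # round half up; flows are never negative
        print(" ".join(str(int(f + 0.5)) for f in flows))

    main()

On the sample input this prints 5 55: stream 1 splits into two flows of 5, the 20 and 30 merge into 50, and the second 5 then merges into it.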
Point Value: 7
Time Limit: 2.00s
Memory Limit: 16M
Added: Sep 29, 2008
C++03, PAS, C, HASK, ASM, RUBY, PYTH2, JAVA, PHP, SCM, CAML, PERL, C#, C++11, PYTH3
Why and how are modern counties different?
Successive local government reforms from the late 19th century onwards have led to modern administrative areas which are no longer exactly the same as the historic counties. The most significant change took place in 1974.
Modern areas of local government include counties, unitary authorities and metropolitan boroughs. To a greater or lesser degree these areas are based on the historic counties, but with some changes, such as:
- A large amount of divergence in urban areas, e.g. London, Birmingham, etc.
- Counties have been split into smaller areas, e.g. East and West Sussex
- Counties have been combined, e.g. Cumberland, Westmorland and part of Lancashire merged to form Cumbria
- Borders have changed, e.g. a large amount of Berkshire became part of Oxfordshire
To further complicate matters, these areas of local government are distinct from the "ceremonial counties", which are closer to the historic counties but not identical.
Why does the Mills Archive use historic counties?
- Modern counties are likely to undergo further change, so keeping up to date would require constant revision of our catalogues and databases; avoiding this is the main advantage of using historic counties.
- Much of the material in the Archive predates the major change in 1974, and therefore refers to the older counties, making it easier to index this material by historic county.
- Mills are historic buildings, most dating to the 19th century or earlier, making the use of historic boundaries appropriate.
What are the borders of the historic counties?
The historic counties were of course themselves subject to change, but to a much smaller degree. The Mills Archive uses the historic counties as defined in the Historic Counties Standard.
|
Getting kids in touch with their environment sometimes has to be literally getting them in touch with their environment. For very young children, learning is a tactile experience, as any parent of a two year-old knows. Encourage this need to touch and feel in positive ways that will help them grow and build in them a need to explore with all their senses, not just the ones tuned into the TV.
This article explains four activities for ages one to five that will encourage kids to explore and learn.
For your little ones who are still putting things in their mouths, have a “large” sensory table, which can be tubs or bowls filled with non-toxic but fun-to-touch things. Have a couple of large stones (too big to fit in little mouths) that are smooth to the touch, hard and heavy. Give them the words to describe them. Fill another bowl with pieces of soft fabric of various types, scraps that can be scratchy and soft, smooth and bumpy, and again give them words to describe them. Have some crinkly paper that makes some noise, or some pieces of wool. For the very brave, fill a bowl full of water or pudding and watch them splash and squish (you might want to do that one outside). Large sea shells are also great tactile experiences full of different textures. Be creative and look around your house for different experiences; just remember to be safe.
For slightly older preschool-age kids, have a texture bag full of interesting things to hold and touch, and have the children guess what they are. Pick out small things like rocks, pinecones, acorns, pieces of wool or string; just have a variety. This game is a blast one-on-one or for a group to play. They can be so creative with their answers!
For children of preschool age or kindergarten, make some texture rubbings. This is so easy and so much fun. Using a piece of white typing paper and the side of a crayon with the paper removed, find ‘bumpy’ stuff, like the bottom of a shoe, and make a rubbing of it. Have the children look around the house or classroom and try to make rubbings of the things they find: the walls, the floor, toys, their friends. Encourage them to explore, and quiz them on successes as well as mistakes. Why did that one show up so well and not that one? What new things did you discover about your environment? Put many different textures and colors on the same piece of paper; you will all be amazed at how beautiful it turns out!
Leaf rubbings. Okay, if your child or group had fun with number three, they are going to love number four. This is fun in the fall particularly, but can be done at any time of the year as long as there are leaves to be had. Go out with a basket or bag and collect some leaves of various types and sizes. Have the children examine them with their fingers while closing their eyes, and give an impression of what they feel. Ask them what they expect to feel beforehand and what they actually felt afterward, and discuss briefly; then break out the crayons to immortalize the leaf. Carefully hold the leaf under the paper (some younger or less coordinated kids might need some help with this), and using the side of an unwrapped crayon, make a rubbing of the leaf. Encourage the children to use many different colors and leaves and to make the rubbings all over the paper. If you have a pretty good group who seem to be engaged, you can add some watercolor washes and watch true magic as the waxy crayon pushes the water away. These will be works of art that you will want to save for years to come.
Whatever you choose to do, let your children explore with all their senses, and watch their world expand.
Other stuff by me:
- How to Make Walnut Ink
- Green Activities for Kids to do Outside
- Oh God: Conversations with Children About Religion in a Public School Setting
- Weaving for the Non-Weavers: Large and Small
|
With a pull so strong not even light escapes, a black hole is defined by its gravity. But now a model that ignores gravity is proving surprisingly useful for pinning down how these cosmic giants work.
Black holes are where big ideas in cosmology, such as gravity and quantum mechanics, collide. That makes them great for testing new theories. "A black hole is a bit like the hydrogen atom of quantum gravity," says Samuel Braunstein of the University of York, UK. "It's a place to test ideas and test theories, and see what may or may not happen."
His team modelled a minimal black hole, defined only by having an inside and an outside, using quantum theory. To their surprise, they found that this object reproduces a lot of the features of real black holes that are thought to rely on gravity, including Hawking radiation, which could occur via a process called quantum tunnelling.
This chimes with suggestions that gravity is not a fundamental component of the universe but an emergent property of quantum mechanics, just as waves are an emergent property of water molecules.
Erik Verlinde, a theoretical physicist at the University of Amsterdam, the Netherlands, who came up with this idea, agrees. "This indeed sheds some light on my ideas on the emergence of gravity from entropy," he says. "In particular, he makes the point that quantum information is a key concept that is relevant."
Braunstein thinks the toy black hole also dodges the so-called black hole firewall paradox, a grisly thought experiment involving someone falling into a black hole that reveals an inconsistency between quantum mechanics and general relativity.
Last year, Joe Polchinski of the University of California, Santa Barbara, and colleagues showed how the quantum entanglement between photons emitted by a black hole through Hawking radiation and those still on the inside should at some point cause a wall of fire to form immediately outside the event horizon. As a result, someone falling into the black hole would be burned to a crisp. But this contradicts general relativity, which says that someone falling into a black hole shouldn't notice a difference when they cross the event horizon.
Using their gravity-free black hole model, Braunstein and colleagues showed that it was possible to create a black hole in which the firewall doesn't appear until the last instants of the black hole's life, when it is too small for someone to fall into anyway. "Instead of being a cutesy picture, [the toy black hole] is a fantastic contender for the real physics," says Braunstein.
Polchinski isn't so sure. "Braunstein's work is based on a somewhat non-standard model of the Hawking process," he says. "It is interesting that this actually makes it possible to delay or evade the problem, but I do not think that this is how things will turn out in detail."
Journal reference: Physical Review Letters, doi.org/ktd
|
The second half of the twentieth century, with the advent and improvement of communication technology, saw the beginning of a new era of media and communications. New technologies such as cable and satellite TV and the Internet, which provide truly global coverage, have brought rapid and profound changes to the sphere of communications. Progress has led to the era of new media, which reduce distances and time and provide access to vast areas of information. The speed with which the new technologies are gaining a mass audience is unprecedented in the history of communication and information: for example, it took American radio 38 years to reach 50 million people; television covered the same ground in 14 years, and the Internet in only 4.
Newspapers, radio, and television have traditionally been seen as the objects of mass communication research for social, economic and political reasons. But although cable and satellite TV, video and computer networks have already been recognized as new, independent means of mass communication, they have not yet been adequately reflected in the writings of theorists; that is, they are insufficiently studied in the theory of communication. As technology changes, scientific categories must become more flexible, because new communications technologies give researchers a good opportunity to revise their hypotheses and perhaps even require a new look at the concept of communication.
The study of new media
The means of mass communication which are now called “new” began to emerge around 1970. Initially they were perceived as improved versions of the traditional types of media, but they went on to challenge the whole understanding of the production and distribution of information in its traditional forms, and to require new theoretical interpretation. The main features that distinguish the new media from the old are:
– decentralization: supply and choice are no longer determined solely by information providers;
– high bandwidth: transmission via cable and satellite overcomes the severe limitations of broadcasting;
– interactivity: the recipient can choose information, respond to it, and exchange information with other recipients;
– flexibility of its forms, content and use. (Flew 2002)
One of the main hallmarks of the end of the twentieth century is the accelerating transformation of technologies into new communication systems. For example, the newspaper became an important means of communication only some three centuries after the invention of the printing press; then, from Hertz's discovery to regular radio broadcasting in the U.S., only thirty-three years passed (from 1888 to 1921). Similarly, the microchip, the most important component of modern small but powerful computers, was made in 1971, and the mass marketing of personal computers began four years later. (Flew 2002)
|
About Randomcase tool
Convert text to random case letters. For example, “HELLO” can become “h-e-L-L-O” with no pattern. With random case, the rules of grammar that determine upper and lower case do not apply: upper-case (capital) letters appear in a random sequence, and so do lower-case letters. “Random” refers to a series that has no pattern. This can be helpful for creating security codes, since the random qualities can be unique. Randomness was traditionally generated through devices such as dice, but computed randomness today can easily serve statistical sampling, simulations, and cryptography.
The generation of randomness is a typical task in computer programming, and randomization is an important principle in statistical theory, with survey sampling as one application. Some writing systems make no distinction between uppercase and lowercase letters, but the random case utility can still be effective there for certain applications. Randomness is also used in search algorithms and, as noted above, for security purposes.
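As an illustration only (not the tool's actual source code), a random-case conversion takes just a few lines of Python. Note that the standard random module is fine for playful text effects, but for actual security codes a cryptographically secure generator such as the secrets module should be used instead:

```python
import random

def random_case(text: str) -> str:
    """Give each letter a randomly chosen case; non-letters pass through unchanged."""
    return ''.join(
        ch.upper() if random.random() < 0.5 else ch.lower()
        for ch in text
    )

print(random_case("HELLO"))  # e.g. 'heLlO' -- a different pattern on every run
```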
|
Emotions can be understood as temporary affective and physiological reactions to events. In a learning context, positive emotions like enjoyment and pride and negative emotions like anxiety impact how students learn, whether or not they choose to re-engage in a learning activity, and how they perform. In chapter 10 you can learn more about how emotions influence learning and motivation in science, and how you as a science teacher can ensure that your classroom practices have a positive impact on your students' emotions.
Resources to support teachers in facilitating positive emotion and minimizing negative emotions toward science can be found by clicking on the links shown at right. Those resources include handouts and activities, further reading, links to recommended websites, and short (~ 3 minute) video clips illustrating the importance of this concept to practicing scientists and showing exemplary high school teachers who practice the strategies recommended in the book.
|
Acclimatization
the adaptation of organisms to new conditions of existence.
Although acclimatization literally means adaptation to the climate, the term has long been used to designate adaptation of the organism not only to new climatic conditions but also to soil conditions and new biocenoses. Acclimatization can occur in either of two ways: (1) By changing the metabolism of organisms. This type of change (modification) is not inherited and is governed by the response rate of the organism. In that case, naturalization occurs—for example, noxious and quarantine weeds and pests of a genotype with a broad response rate are capable of propagating freely over the planet—and the genetic structure of the population or species undergoes no change. (2) By changing the genetic structure of the species. This constitutes true acclimatization. A factor determining the genetic structure of the species and responsible for acclimatization is natural selection. In ontogenesis, acclimatization is determined by the richness of the gene pool of the population. Spontaneous mutations are of some importance in acclimatization, but their frequency is low. Acclimatization occurs when organisms resettle in regions or sites which are new to them or where they had previously been wiped out (reacclimatization). Acclimatization is observed when a habitat’s conditions are altered—for example, when forests are cut down or new forest stands are planted, when deserts are irrigated or swamps are drained. In those cases, some organisms migrate to other sites or perish (like plants), while others adapt to the new environmental conditions—that is, they become acclimatized. Cultivated species of animals and plants also become acclimatized through introduction (artificial acclimatization), while wild species become acclimatized under natural conditions (natural acclimatization) as they resettle in new regions—spread by migration, wandering animals, or the random transfer of plants by humans, animals, or the wind.
In antiquity, nomadic tribes moved seeds of useful wild plants with them and resettled animals, which became acclimatized under their new conditions. The resettlement of plants and animals contributed later on to the development of world trade and means of transportation. In the 18th century A. Humboldt first expressed the idea of the possibility of gradual acclimatization, termed stepwise acclimatization. A. P. de Candolle and his son A. de Candolle stated that a certain set of conditions was required for a particular species to move into new areas. The works of C. Darwin were of great significance for the development of the theory and practice of acclimatization. Much attention was devoted to acclimatization in tsarist Russia in the mid-19th century. K. F. Rul’e and his disciple A. P. Bogdanov set up a committee on acclimatization in 1857. A periodical Akklimatizatsiia began to be published in 1860 on their initiative. Writings on acclimatization by the Russian scientists E. L. Regel’ and A. N. Beketov are well known. Theoretical research in acclimatization underwent further development in the USSR. I. V. Michurin and M. F. Ivanov developed effective methods of acclimatization. The Russian zoologists B. M. Zhitkov and T. A. Manteifel’ did much work on the acclimatization of animals. N. I. Vavilov made a major contribution to the acclimatization of plants.
Plants. Acclimatization of plants always results in an expansion of the range occupied by the species. For example, the Serbian spruce, whose range was restricted to the Drina River (Yugoslavia), acclimatized readily in Northern Europe and even flourishes at the latitude of Leningrad in the USSR. The horse chestnut, whose homeland is Africa, propagated to the phytocenoses of Europe, as did the black locust, or false acacia, from North America; the Atlas cedar from Africa, the giant sequoia from North America, and the eucalyptus tree from Australia all flourish on the Black Sea coast. As a result of polymorphism and the richness of its gene pool (abundance of mutations), the lilac, which is native to Southern Europe and Asia Minor, covers a wide area. An example of natural acclimatization as a result of hybridization and polyploidy is the appearance of soft wheat (Triticum aestivum) in one of the primary centers of origin of crop plants (Southwest Asia) and the expansion of its range far to the north. Acclimatization of plants is seriously affected by such climatic factors as air temperature and humidity, quantity and distribution of precipitation, nature of snow and ice cover, and air movements; by the light status; by the type of soil and the composition of the microflora inhabiting the soil; by the nature of the biogeocenosis; and by the biological features of the plants themselves. It is known, for example, that xerophytes withstand dangerous temperature drops with greater ease than moisture-loving plants.
Acclimatization of plants is one of the major problems facing the national economy, and success in handling that problem depends to a large extent on the totality of the methods available for use. Michurin utilized hybridization of geographically and systematically remote forms in his work on the acclimatization of fruit crops and also resorted to crossing wild-growing species. The Lombardy poplar was acclimatized in Moscow through hybridization. Various agricultural techniques are used, including grafting onto stable seedling stocks, picking, pinching, irrigating and fertilizing, utilizing growth stimulants or other preparations to inhibit the growth of ovaries and protect them from late frosts, cultivating plants on irrigated soil, growing plants in the initial period of acclimatization under greenhouse conditions, and artificially heating groves.
Botanical gardens, introduction nurseries, and other scientific research institutions whose functions include building collections of local and foreign plants and introducing them into cultivation in new regions are doing much work on the acclimatization of plants in the USSR. Their activities have resulted in the acclimatization of the tea tree, citrus plants, the oil tung, the eucalyptus, the bamboo, camphor trees, the oriental persimmon, and several species of palms on the Black Sea shores of the Caucasus. The cultivation of grape vines, sweet cherries, apricots, and other fruit-bearing plants, as well as such decorative plants as the horse chestnut and various poplar species, has been moved further north. Acclimatization of trees in the Far North, where farming had been considered impossible, is particularly important. The Pamir Botanical Garden and experimental stations in mountainous regions have contributed to the acclimatization of crop plants and to the development of agriculture in high-mountain regions. Intensive work is being done on acclimatization of plants under desert conditions. New medicinal and aromatic plants are being acclimatized and introduced into cultivation. Acclimatization of tree species and bush species has enriched the available assortment of decorative plants. Important work on the transformation of plant zones is being done in the USSR through acclimatization of plants.
N. A. BAZILEVSKAIA AND V. M. SHCHERBINA
Animals. It is generally known that thousands of both harmful and useful species of animals have propagated beyond the confines of their natural ranges over the earth. For example, over 180 species of pests (the Hessian fly, the appleworm, the corn borer, and many timber pests) have penetrated into the USA from other countries. The Colorado potato beetle, which has become one of the most dangerous and harmful pests to hit potato crops, came to Europe from America. The widespread gray and black rats are relatively recent newcomers to a large part of their present range; their resettlement was an indirect result of human activity. Species which find no serious competitors in their new habitats flourish with special vigor—for example, the European starling spread over the USA, Canada, South Africa, Australia, and New Zealand within a space of 60 years. Acclimatization on islands takes place with relative ease for the same reason (islands are characterized by depleted biocenoses and absence of competitors). In New Zealand, for example, imported species of animals now completely dominate the wild: over 50 percent of the mammals and birds are from Europe, Asia, and North America, including deer—the red, axis, sambar, Virginia, and fallow deer and the moose—wild boars, two species of rats, larks, thrushes, finches, and goldfinches.
In economic practice, acclimatization is caused by the artificial resettlement of useful wild or agricultural animals. Experiments on acclimatization of wild mammals in various countries have been carried out on 160 species. There are now 32 species of mammals resettled in the USSR, and acclimatized mammals account for over 10 percent of fur production. Muskrat acclimatization has been particularly profitable, with the range of muskrat in the USSR now exceeding the area it occupied in its native USA and in Canada. The American mink has taken well to acclimatization in several areas, as have the raccoon dog, the coypu, and the deer. Several species of birds have made successful resettlement—for instance, the gray partridge, Daurian and willow ptarmigan, and pheasant. Unique work on acclimatization is being done at the Askaniia-Nova Preserve, where over 80 types of mammals and 350 types of birds are being investigated. Acclimatization of fish is also of great importance. The resettlement, restocking, and acclimatization of carp, bream, whitefish, bonefish, and other species in internal reservoirs of the USSR yield an annual crop of over 10,000 tons of a most valuable product.
Acclimatization of food organisms—various invertebrates, predominantly worms, shellfish, mollusks, and crabs—increases the productivity of water reservoirs. For example, acclimatization of invertebrates stocked at Tsimlianskoe Reservoir has resulted in 1,500 tons of fish yearly. The nereis annelid, acclimatized in the waters of the Caspian Sea, has become the basic nutrient for sturgeon and Sevruga sturgeon. Acclimatization is accompanied not only by changes in the way of life of animals but also by changes in their morphological and physiological features: through improvements in the stability of animals under changes in temperature, light status, environmental humidity, atmospheric pressure, the air’s gas composition, and the potential available food supply. Adaptive reactions caused by relatively slight changes in their conditions of existence enhance the resistance of animals to various environmental changes—for example, maintaining some mammals at a temperature of 10°C enhances their stability to temperatures below - 15°C. Success in acclimatization depends on the selection of subjects and the length of time over which acclimatization is carried out. Natural selection of those individuals most completely adapted to unusual conditions of existence takes place in the new environment. The phenology of multiplication and the animal’s development conforms to the new seasonal and diurnal rhythms. Acclimatization can be considered completed when the species acquires the ability to maintain its population level under the new environmental conditions and to restore it after periods of depression. Changes in morphological and physiological features in response to acclimatization are expressed with particular clarity in fish but are also manifested in mammals. Acclimatization may also mean changes in the marketable qualities of animal furs—for example, changes in the quality of fur of the Teleut squirrel, coypu, and marmot during the acclimatization process. Stocking a species range with particularly valuable forms to improve the quality of the local animal population does not necessarily bring about the desired results, since the newcomers change in the direction of the aboriginal strains; for example, stocking the Urals with the most valuable eastern sables failed to produce the expected results, since the quality of fur in the acclimatized sables deteriorated. Acclimatization also plays a major role in restoring the original range of animals, once it has been curtailed as a result of human activities (reacclimatization). For example, in the 1920’s the population of riverine beavers in the USSR was no greater than 1,000. As a result of expensive resettlement of the animals from Byelorussia and from the Voronezh’ Preserve, riverine beaver are now found in 48 regions; the total number of beavers has increased to 40,000. Sable, squirrels, two species of rabbits, desman, and other commercially valuable species of animals have also become reacclimatized. Auroch herds are being successfully restored.
S. S. SHVARTS
Agricultural animals. Humans themselves play a major role in the acclimatization of agricultural animals that have undergone a prolonged and complicated process of domestication, since the concept of environment applicable to such animals includes, in addition to natural factors, such agricultural and economic factors as the chemical composition of the fodder, feeding levels, maintenance of the animals, prevention of sicknesses, and breeding. If these and the new environmental conditions contrast sharply, the acclimatization process, particularly in the case of commercially valuable breeding stocks, takes place under great tension and frequently ends in failure.
Successful acclimatization of agricultural animals requires not only attention to climatic conditions in a new habitat but also efforts to provide the imported animals with the proper types of fodder and food, food allowances with adequate nutrients, suitable barn and stall accommodations, and improved maintenance conditions. If the animals imported into the new area acclimatize poorly, they are crossed with local stocks, with animals carefully selected for health, constitution, and productivity, using breeding studs certified to produce easily acclimatizing progeny. Acclimatizing animals are prevented from degeneration by crossing them with already acclimatized animals of the same breeds, using different forms of selection, and also by resorting to interspecific crossing. One crucial method for overcoming the difficulties encountered in acclimatization is hybridization. The progeny of the imported animals possess broader adaptive capacities, since their process of adaptation begins at early stages of development, when the organism is most flexible.
REFERENCES
Maleev, V. P. Teoreticheskie osnovy akklimatizatsii. Leningrad, 1933.
Zhitkov, B. M. Akklimatizatsiia zhivotnykh i ee khoziaistvennoe znachenie. Moscow-Leningrad, 1934.
Lavrov, N. P. Akklimatizatsiia i reakklimatizatsiia pushnykh zverei v SSSR. Moscow, 1946.
Lavrov, N. P. Akklimatizatsiia ondatry v SSSR. Moscow, 1957.
Avrorin, N. A. Pereselenie rastenii na poliarnyi sever: Ekologo-geograficheskii analiz. Moscow-Leningrad, 1956.
Gurskii, A. V. Osnovnye itogi introduktsii drevesnykh rastenii v SSSR. Moscow-Leningrad, 1957.
Shvarts, S. S. “Nekotorye voprosy teorii akklimatizatsii nazemnykh pozvonochnykh zhivotnykh.” Tr. Inst. Biologii AN SSSR Ural’skogo filiala, 1959, issue 18.
Nasimovich, A. A. “Nekotorye obshchie voprosy i itogi akklimatizatsii nazemnykh pozvonochnykh.” Zoologieheskii zhurnal, 1961, vol. 40, issue 7.
Akklimatizatsiia zhivotnykh v SSSR. Alma-Ata, 1963.
Bazilevskaia, N. A. Teorii i metody introduktsii rastenii. Moscow, 1964.
Ivanov, M. F. Akklimatizatsiia i vyrozhdenie sel’skokhoziaistvennykh zhivotnykh: Polnoesobranie sochinenii, vol. 1. Moscow, 1963.
|
The single most famous person from colonial Georgia is James Oglethorpe: the founder of the colony, the first trustee, and the first unofficial governor. There were other trustees and governors, but none nearly as well known and revered as Oglethorpe.
James Oglethorpe first envisioned the idea of the Georgia colony while he lived in England and worked to reform the prison system there. Many of the inmates were imprisoned for debts, and Oglethorpe thought an American colony that embraced the poor and taught them useful trades sounded like a good idea. He also decided to repress class distinction by allowing colonists equal tracts of land and outlawing slavery. He managed to convince King George II to grant a charter for the new colony in 1732, naming Oglethorpe and 20 other people trustees.
Despite the good intentions, populating the new colony with the English poor never came to fruition. However, his other ideals lasted longer. He worked tirelessly for the colony while managing simultaneously to protect the local Native Americans. Once Oglethorpe returned to England in 1743, he lost control of the fate of Georgia. The trustees allowed ownership of large tracts of land, inheritance and slavery, all of which Oglethorpe was against. He cut ties to the trustees and Georgia by 1750.
|
4. Produce clear and coherent writing in which the development, organization, and style are appropriate to task, purpose, and audience. (Grade-specific expectations for writing types are defined in standards 1–3 above.)
5. With some guidance and support from peers and adults, develop and strengthen writing as needed by planning, revising, editing, rewriting, or trying a new approach, focusing on how well purpose and audience have been addressed. (Editing for conventions should demonstrate command of Language standards 1–3 up to and including grade 7 on page 52.)
6. Use technology, including the Internet, to produce and publish writing and link to and cite sources as well as to interact and collaborate with others, including linking to and citing sources.
|
York of the Lewis and Clark Expedition. By Laurence Pringle, illustrations by Cornelius Van Wright and Ying-Hwa Hu. Calkins Creek Books, 2006.
Our fourth grades do a study of the Lewis and Clark expedition every year. The students choose five members of the Lewis and Clark expedition to research, write about, and present for their project. In our school, York is always one of the most popular and compelling figures. There isn't as much information available about him. Because of his status as a slave, and because he was not allowed to learn to read and write, his journals never existed. It was against Virginia law, where he was born and raised, to teach a slave to read and write. For that reason his point of view cannot accurately be told, and his life story must be pieced together from the records kept by others.
Laurence Pringle begins his book American Slave, American Hero by pointing that out in the introduction. He says, “In the 1770s, two boys were born on a Virginia plantation. One became a famous explorer and leader whose name is still celebrated to this day. Today the other is also considered a national hero but few know his name: York. Little is known about some times in his life, so you will find the word “Probably” used occasionally in this, the true story of York.”
Pringle tells about his early life with the William Clark family, as he became William’s personal slave at the age of 12 and worked alongside him in setting up a new family homestead in the Ohio River valley. York married his sweetheart, a slave woman from a neighboring farm, in his late 20s. His wife’s name is not known, nor whether they had any children.
When Clark received the call to join Meriwether Lewis on the great expedition in 1803, York was chosen to go with him. Pringle points out that “As a slave, York could not volunteer, or refuse, to go on the expedition. Whether he went was up to his master.” Clark wanted him along so there he was. Clark writes in his journal of all the work York contributes to, including gathering food for the party and attending to the sick or injured. He was known as a good hunter and a reliable help in difficulty.
On several occasions when the expedition met with Native Americans York was considered “big Medicine” and greatly admired. York again and again shines as capable, industrious and adaptable. The Shoshone (Sacagawea’s tribe) in particular admired York because of his dark skin, which they considered to be the mark of a great warrior. York is also mentioned as being instrumental in rowing and navigating the river boats and trading with the Nez Perce for food and supplies.
In September of 1806 the expedition finally returned to St. Louis, Missouri. York was praised as a hero right along with the rest of the party, but he was not rewarded with land and money as the free white men were. He still belonged to Clark as a slave for another ten years before he was given his freedom. His wife lived far from him, and although he asked permission to return to live with her, he was refused. When her owner moved to Mississippi he lost contact with her. York died of cholera in 1832. His place of death and burial is unknown. Pringle says, “Like the other explorers, York endured extreme heat and cold, suffered injuries and illness, risked his life many times, and contributed to the success of an expedition that is still considered the greatest in United States history. He was both a slave and an American hero. In 2001, long after his death, York was promoted to the rank of honorary sergeant, Regular Army, by President William Jefferson Clinton.”
In the author’s note at the back of the book Pringle points out that in researching for this book he found more than a dozen books about Sacagawea and at least six about Seaman the dog that went on the journey but few about York. It is clear that this volume is much needed and makes a valuable contribution to our American history. This book is well written, beautifully illustrated and highly recommended. Bartography reviewed this book here.
|
A Graphic Design Primer, Part 1: The Elements of a Design
There are many elements that make up any visual design, whether it’s good or not. Becoming familiar with the parts of a design is necessary before you can start to apply the principles of good design to your own work, in the same way that a doctor needs to have an understanding of anatomy before he can learn to heal a patient.
There are seven basic elements of any design. Some are easier to grasp than others, but all are important. Once you can identify the elements of a design, whether it’s your own or someone else’s, you can learn how the principles of good design are best applied.
Lines are generally present throughout a design. They can be thick or thin, straight or curved, solid or dashed or dotted. Lines can be any color and any style. Straight lines are often used as delineations between sections of a design, or they may be used to direct a viewer’s vision in one direction or another.
The width of a line has a direct effect on its visual impact. Thick lines are bold and strong; they draw attention to themselves. Thin lines tend to do the opposite. Color also affects the impact of a line, with brighter and darker colors drawing more attention than lighter and paler colors. The style of a line also has an effect: dotted or dashed lines are less imposing than solid lines.
Curved lines often give a more dynamic or fluid look to a design. They indicate movement and energy. They’re also more common in designs with an organic nature, as they’re more likely to be seen in nature. Straight lines are more formal and structured, and indicative of “civilized” culture.
Justdot is an example of a site that uses a lot of curved and dashed lines to indicate movement and energy.
RePrint uses a number of curved lines to direct the eye of the visitor.
VideoDSLR uses straight lines of varying widths to delineate content sections.
Forms are three-dimensional objects within a design, like a sphere or cube. You can have forms that are actually three-dimensional in your designs (like with product packaging), or forms that are actually two-dimensional but are displayed in a way as to imply that they’re three-dimensional (like a line-drawing of a cube).
Forms are common in actual three-dimensional graphic design, of course, but are also seen in web and print design. Website designs that use 3D techniques are making use of forms. Another common place to see forms is in logo designs where a sphere or cube is present.
Print Mor NYC uses a 3D effect behind their main content.
Another example of a 3D effect in website design.
Shapes are two-dimensional. Circles, squares, rectangles, triangles, and any other kind of polygon or abstract shape are included. Most designs include a variety of shapes, though deliberate use of specific shapes can give a design a certain mood or feeling.
For example, circles are often associated with movement, and also with organic and natural things. Squares are more often seen with orderly, structured designs. The color, style, and texture of a shape can make a huge difference in how it is perceived.
Method Design Lab uses ovals and other rounded shapes throughout their design.
Passion About Design uses circles throughout their design.
The Cappen site uses triangles throughout their site.
Textures are an important part of just about any design. Even designs that, on the surface, don’t seem to use textures actually do (“smooth” and “flat” are textures, too). Textures can add to the feeling and mood of a design, or they can detract from it.
The most commonly seen textures, apart from flat or smooth, are things like paper, stone, concrete, brick, fabric, and natural elements. Textures can be subtle or pronounced, used liberally or sparingly, depending on the individual design. But texture is an important aspect of design, that can have a surprising effect on how a design comes across.
The Heads of State site uses a few subtle textures.
Doublenaut uses a more pronounced texture in their background.
The Cuban Council website uses textures on virtually every element of their design.
Color is often the most obvious thing about a design. We’re taught colors from an early age, and even go so far as to identify some things with color descriptors (“my green jacket” or “my red shoes”). Color is also capable of creating strong reactions among people, who consciously and subconsciously apply certain meanings or emotions to different colors (this is also influenced by culture, as many colors mean different things in different cultures).
Color theory is an important aspect of design, and something designers should at least have casual knowledge of. You should know the difference between a shade (when black is added to a pure color), tint (when white is added to a pure color) and tone (when gray is added to a pure color). You should also know terms like chroma, value, and hue. But more importantly, you should know how all these things work together to create a mood or feel in a design.
For a more complete overview of color theory, check out our archived series, Color Theory for Designers.
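To make the shade/tint/tone distinction concrete, here is a small illustrative Python sketch. It uses a plain linear RGB blend, which is only a rough digital stand-in for mixing a pure color with black, white, or gray; the color values are arbitrary examples:

```python
def mix(color, toward, amount):
    """Blend an RGB color toward another color; amount 0 = unchanged, 1 = fully 'toward'."""
    return tuple(round(c + (t - c) * amount) for c, t in zip(color, toward))

pure = (200, 30, 60)                              # an arbitrary pure color
print("tint: ", mix(pure, (255, 255, 255), 0.4))  # adding white -> tint
print("shade:", mix(pure, (0, 0, 0), 0.4))        # adding black -> shade
print("tone: ", mix(pure, (128, 128, 128), 0.4))  # adding gray  -> tone
```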
The very bright colors used on the Go Live Button website have a definite impact on the perception of the visitor.
The more muted colors here give a completely different feeling than the site above.
The Old Putney Row to the Pole site uses darker but still muted colors, which gives yet another impression.
Value is closely related to color, but it’s more general. It’s how light or dark a specific design is. Again, this relates directly to the mood a piece gives. Darker designs convey a different feeling than lighter designs, even with all other design elements being equal. This is one reason you’ll often see designers releasing both light and dark versions of their themes.
Not every piece has a clear-cut value. With very colorful pieces, you might not really be able to tell how high or low the value is. One trick is to convert the design to grayscale, to get a better sense of how light or dark it is. You can also look at the histogram of an image to get an idea of where the value is more heavily concentrated.
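Both tricks are easy to script. For instance, with the Pillow imaging library in Python (the filename here is just a placeholder), you can convert a screenshot to grayscale and read its average value off the histogram:

```python
from PIL import Image  # pip install Pillow

img = Image.open("design.png").convert("L")   # "L" = 8-bit grayscale
hist = img.histogram()                        # 256 counts, from dark (0) to light (255)
mean = sum(level * count for level, count in enumerate(hist)) / (img.width * img.height)
print(f"average value: {mean:.0f}/255 ->", "light" if mean > 127 else "dark")
```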
This After That is an example of a site with a relatively light value.
The Lounge has a relatively dark value.
There are two kinds of space in design: positive space and negative space. Positive space is that which is occupied by design elements. Negative space (also called “white space”) is the area that’s left over. The relationship between positive and negative space has a strong influence on how the design is perceived. Lots of negative space can give a piece a light, open feeling. A lack of negative space can leave a design feeling cluttered and too busy, especially if the designer is careless.
Negative space can create its own shapes and forms, which impact the design. Understanding the effect of negative space and how to use it to your advantage in a design is one of the most important techniques a designer can learn, and can make the difference between a good design and a great design.
80/20 Studio has a lot of negative space in their design.
Dazed Digital, on the other hand, has very little white space in their design.
Another example of a site without a whole lot of negative space.
In the next installment, I’ll be covering the principles that make up a good design, and how to apply them to the elements we covered here.
- Shape – Basic Elements of Design
This article from About.com offers a brief rundown of how shapes are used in design, as well as links to more specific resources.
- Textures in Modern Web Design
An article from our archives on using textures effectively in website designs.
- The Elements of Design
An overview from Digital Web Magazine.
- Learn the Basic Elements of Visual Design, Go For the Right Composition
Another overview of the basic elements, this time from DesignModo.
Cameron Chapman is a professional Web and graphic designer with many years of experience. She writes for a number of blogs, including her own, Cameron Chapman On Writing. She’s also the author of Internet Famous: A Practical Guide to Becoming an Online Celebrity.
|
Phonics and Reading in the Foundation Stage and Key Stage 1
At Carrington Primary School we use a synthetic phonics programme based on ‘Letters and Sounds’.
In Foundation 2 and Key Stage 1, classes have a 20 minute daily phonics session. In this session, children initially learn how to blend and segment sounds alongside learning the letter sounds. As their Phonics knowledge progresses, children are taught how to represent longer vowel sounds using groups of letters e.g. ‘ai’ in ‘rain’, ‘igh’ in ‘light’. In years 1 and 2 children are taught alternative ways of representing the same sound and begin to be taught spelling rules.
Children in Foundation 2 and Key Stage 1 are given a Phonics-based reading book and a home/school reading diary. All books are banded into a progressive and cohesive scheme for the children to progress through. When children have finished the scheme, they become ‘free readers’ and are able to choose their own reading book from a range of longer books.
|
How did monks and missionaries impact the spread of Christianity?
How did missionaries and monks help spread Christianity into new areas? Monasteries were built in remote areas. The most powerful force that helped spread Christianity was the missionaries. Together they helped Christianity spread throughout Europe.
How did the missionaries help spread Christianity throughout Europe?
How did missionaries help spread Christianity throughout Europe? In Eastern Europe, monks worked to convert Slavic people. Monks and nuns devoted their lives to spiritual goals. They made vows to live and worship within their communities for the rest of their lives.
How did monks spread Christianity in Europe?
During the early Middle Ages, many missionaries ( monks ) were sent by popes to travel across Europe to spread Christianity. By the 700’s and 800’s, Catholic missionaries were working in many parts of Europe. Over time, the Catholic faith became part of everyday life in most parts of Europe.
How did the missionaries help in spreading Christianity?
Perhaps the most lasting cultural impact of the missionaries has come through their contributions to Bible translation and education. By translating the Bible into the language of a non-European people, missionaries had to become pupils, learning the finer points of a local language from indigenous teachers.
How did Christianity spread in Germany?
It was introduced to the area of modern Germany by 300 AD, while parts of that area belonged to the Roman Empire, and later, when Franks and other Germanic tribes converted to Christianity from the 5th century onwards. The area became fully Christianized by the time of Charlemagne in the 8th and 9th centuries.
What were the long lasting effects of the Crusades?
In fact, religious intolerance increased during and after the Crusades. During the 200 years of the Crusades, Christians killed thousands of Muslims and Muslims killed thousands of Christians. In fact, some Western European Christians killed Eastern European Christians because they dressed like Muslims!
Who is responsible for spreading Christianity throughout Europe?
After Jesus, the two most significant figures in Christianity are the apostles Peter and Paul/Saul. Paul, in particular, takes a leading role in spreading the teachings of Jesus to Gentiles (non-Jews) in the Roman Empire.
When did Christianity spread throughout Europe?
The Roman Empire officially adopted Christianity in AD 380. During the Early Middle Ages, most of Europe underwent Christianization, a process essentially complete with the Baltic Christianization in the 15th century.
Why was the fourth century CE so crucial to the development of Christianity?
Christianity in the 4th century was dominated in its early stage by Constantine the Great and the First Council of Nicaea of 325, which was the beginning of the period of the First seven Ecumenical Councils (325–787), and in its late stage by the Edict of Thessalonica of 380, which made Nicene Christianity the state church of the Roman Empire.
Why did Christianity spread in Europe?
The Catholic Church started a major effort to spread Christianity around the world. Spiritual motivations also justified European conquests of foreign lands. The Catholic Church set up Christian missions to convert indigenous people to the Catholic faith.
What was the last pagan country in Europe?
In fact, Lithuania was the last pagan state in Europe. Almost 1,000 years after the official conversion of the Roman Empire facilitated the gradual spread of Christianity, the Lithuanians continued to perform their ancient animist rituals and worship their gods in sacred groves.
Why did Christianity decline in Europe?
Starting in 1880 and accelerating after the Second World War, the major religions began to decline among the Dutch, while Islam began to increase. During the 1960s and 1970s, pillarization began to weaken and the population became less religious.
What was the impact of missionaries?
The effects of missionaries on West Africa included a loss of cultural identity, a change in the unity of West Africa, an increase of nationalism, and a spread of Christianity due to trained black missionaries.
What are the impact of the missionaries on education?
Two general conclusions from the literature are that Protestant missionaries significantly improved the levels of education in their surrounding community, mostly by encouraging female education, and that British colonisers created more supportive policies for missionary activity in the colonies.
Which religion has no deity?
Atheism. Atheism describes a state of having no theistic beliefs; that is, no beliefs in gods or supernatural beings.
|
Good cartoonists are excellent observers of people, and that’s why cartoons can be a tremendous resource for teaching psychology. Use these cartoons by visiting their websites, as copying/pasting them into your slide deck or your learning management system is most likely a violation of copyright.
Use these to freshen your operant conditioning examples, or use these as a basis for discussion or as a stand-alone assignment.
Edge City, May 3, 2021. We see both Colin’s behavior and his father’s behavior. What behavior has likely been positively reinforced? And what behavior has likely been negatively reinforced? Explain.
Deflocked, March 26, 2021. We know both the sheep’s behavior and the sheep’s mother’s behavior. Which behavior has been positively reinforced? And which behavior has been negatively reinforced? Explain.
Bleeker: The Rechargeable Dog, March 11, 2021. The real dog has learned to turn off the robot vacuum. Has the “turning off” behavior been positively or negatively reinforced? Explain.
Stone Soup Classics, February 9, 2021. Max has learned a new word. When he yells this word, he gets a reaction that will likely increase the chances of him saying it again. Has his saying the word been positively or negatively reinforced? Explain.
Nest Heads, December 2020. We know both Taylor’s behavior and her grandfather’s behavior. What behavior has likely been positively reinforced? And what behavior has likely been negatively reinforced? Explain.
And now for the hard one.
Drabble, February 23, 2021. In operant conditioning, a discriminative stimulus is a signal that a specific behavior is likely to be reinforced. What is the discriminative stimulus in this strip? What behavior has this discriminative stimulus signaled will be reinforced?
Source: macmillan psych community
|
Telescopes are generally understood to be optical instruments for viewing distant objects, as expressed by the Greek words tele (far) and skopein (to view). In this article, we focus on optical telescopes, i.e., not other types of telescopes e.g. for radio waves or X-rays.
We first consider the basic optical function of a telescope without looking at concrete realizations. According to the original use in conjunction with the human eye, the telescope receives light with approximately plane wavefronts and also outputs light with plane wavefronts, only with a reduced diameter (see Figure 1), typically below the diameter of the eye's pupil. In a description based on geometrical optics, parallel input rays result in parallel output rays.
The contribution of the human eye to the optical function is obviously of vital importance. In the eye, the light output of the telescope is focused to the retina, and each input beam direction is (at least within geometrical optics and without image aberrations) associated with one image point on the retina. Effectively, one obtains imaging of a very distant object onto the retina, when the eye is accommodated to infinite distance (as is normally the case for a relaxed eye). Compared with direct viewing, a certain image magnification is achieved.
If a telescope is used with an image sensor, for example, it also needs to be equipped with an additional focusing lens or a more sophisticated kind of objective for forming an image. Alternatively, the optical design of the telescope itself can already have the imaging function.
At first glance, one may think that the telescope should lead to a demagnification of images, since a collimated beam is transformed into another collimated beam with smaller beam radius. However, the telescope is actually magnifying. This is because the location of the image point on the retina does not depend on the beam diameter, but rather on the beam direction, and further analysis shows that any change of the input beam direction is transformed into a larger change of the output beam direction – just as the divergence of a beam is increased when the beam diameter is reduced. Essentially, what is relevant is the angular magnification of the telescope.
Although a telescope can be slightly modified or adjusted to focus on objects at a finite distance, its basic function is as explained above: producing parallel output rays for parallel input rays, assuming that the imaging is completed e.g. by the optical system of the eye. When describing that basic configuration with the ABCD matrix algorithm, the obtained matrix has a C value equal to zero: a beam offset at the input does not cause a change of propagation angle. Such a system is called afocal. It has no focal length, no focal points, no principal points and no nodal points. Because of the general rule A · D − B · C = 1, we then have A · D = 1. We can further identify D with the magnification and recognize the inverse relation to A, which determines the reduction of beam diameter.
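As a standard textbook illustration (not tied to any particular instrument), consider two thin lenses with focal lengths $f_1$ and $f_2$ separated by $f_1 + f_2$, i.e., a Keplerian telescope. Multiplying the ABCD matrices of the first lens, the free-space propagation, and the second lens gives

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -1/f_2 & 1 \end{pmatrix} \begin{pmatrix} 1 & f_1+f_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1/f_1 & 1 \end{pmatrix} = \begin{pmatrix} -f_2/f_1 & f_1+f_2 \\ 0 & -f_1/f_2 \end{pmatrix}$$

Indeed $C = 0$, the angular magnification is $D = -f_1/f_2$, and $A = -f_2/f_1$ describes the reduction of the beam diameter, with $A \cdot D = 1$ as stated.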
In a more general sense, the term telescope is often understood to be an afocal system as explained above, which is not necessarily used for viewing purposes. For example, beam expanders as used in laser optics (e.g. for transmitters in free space optical communications) are often called telescopes. On the other hand, not all telescopes for viewing purposes are designed as afocal systems; some of them focus light to some image plane, where an image sensor is placed, for example. For a measurement telescope, it is possible to insert a reticle in that image plane, which will then also appear in the generated image.
Figure 2 shows the basic setups of two common types of refractive telescopes (refractors), each one being based on two lenses. The Keplerian telescope uses two focusing lenses, where the distance between them is the sum of the focal lengths (both taken as positive). It produces inverted images. Between the lenses, there is a real image plane.
The Galilean telescope, shown in Figure 2 for the same magnification, contains a focusing and a defocusing lens and produces non-inverted images. It does not have a real image plane.
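For both two-lens designs, the angular magnification follows from the focal lengths; this is a standard result, consistent with the ABCD analysis above:

$$M = -\frac{f_{\mathrm{objective}}}{f_{\mathrm{ocular}}}$$

For the Keplerian telescope both focal lengths are positive, so $M$ is negative (inverted image); for the Galilean telescope the ocular focal length is negative, making $M$ positive (upright image).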
Lenses cause chromatic aberrations, related to the dependence of the focal length of a lens on the optical wavelength. That problem is often much reduced by using an achromatic lens doublet or even a triple-lens apochromat, at least for the objective, sometimes also for the ocular. Otherwise, the design of objectives for telescopes is often quite simple (e.g. compared with photographic objectives), at least if a relatively small field of view is sufficient for the application. More sophisticated designs are required for relatively wide-field objectives, as required for astro cameras, for example. See below for more information on optical aberrations.
For inverting telescope types (i.e., those producing inverted images), one may either accept that inversion or undo it with additional optics. For telescopes with light delivery to photographic films or image sensors, the inversion is of course not relevant.
Refractive telescopes are often used in the form of binoculars (see below).
Telescopes can also be realized based on purely reflecting optics, i.e., with mirrors. While focusing and defocusing functions are easily achieved with curved mirror surfaces, one requires design adaptations in order to cope with the inevitable change of beam direction upon reflection. Two common solutions – the Cassegrain telescope and the Newton telescope – are shown in Figure 3. Both have a secondary mirror which is suspended with some spider and causes a circular central obscuration of the primary mirror. That leads to some loss of resolution, which is avoided by some other telescope designs, where however the inherent asymmetries cause other types of problems.
Both are not afocal systems as explained above, but rather produce a focal point in an accessible area, where one could place an image sensor, for example. However, it is of course no problem to convert such a telescope into an afocal system for direct viewing with the human eye.
Reflective telescopes typically work with aspheric mirrors. For example, Cassegrain reflectors are based on a parabolic primary mirror and a hyperbolic secondary mirror, which reflects light through a hole in the primary mirror.
The main advantages of reflective telescopes, compared with refractive telescopes, are the following:
- Any chromatic dispersion is avoided. That advantage was already recognized by Isaac Newton, who therefore in 1668 developed the first reflecting telescope, called the Newtonian telescope.
- One can produce relatively large telescope mirrors which still have a reasonable weight, while large lenses would become very heavy and expensive.
For those reasons, reflective telescopes have become the usual solution for astronomy.
Early reflective telescopes, which used speculum metal mirrors, suffered from the problem of rapid tarnishing of the reflecting surfaces. This problem was largely solved by using metal-coated first surface mirrors based on glass or ceramic mirror substrates. Those are also harder, i.e., they preserve their shape more accurately, and some of them exhibit very small thermal expansion coefficients.
Extremely precise large telescope mirrors are nowadays usually made with glass ceramic substrate materials, optimized for a very low coefficient of thermal expansion. Note that deviations from the ideal shape should ideally be far below one optical wavelength. The wavefront accuracy can be further improved with adaptive optics, which usually correct distortions from the primary mirror not at their source, but at a more convenient location, where the beam path is more compact.
Telescopes which combine refractive and reflective optics are called catadioptric. That combination provides additional options for correcting image aberrations (even for a wide field of view) and for developing compact and lightweight designs. The simplest type is the catadioptric dialyte, consisting of a single-element focusing lens as the objective and a silver-coated concave mirror.
- Lenses and prisms cause chromatic aberrations, related to the dependence of the focal length of a lens on the optical wavelength. One may use achromatic lenses for minimizing such problems, or avoid lenses altogether.
- Further, there are other aberrations in the form of astigmatism, coma and geometrical image distortion; one usually restricts the field of view such that excessive aberrations of those types are avoided. Particularly if the magnification is large, a smaller field of view may be required. For achieving a larger field of view with good quality, one can replace the objective and ocular with combinations of several lenses, designed for compensating aberrations as much as possible.
- Another problem is the curvature of field. For some telescope designs, the image “plane” is significantly curved, so that sharp images could not be obtained over the whole image area when using a flat image sensor. Therefore, curved image sensors are used with some telescopes, where the field curvature cannot be reduced.
Performance Parameters of Optical Telescopes
The magnification is the factor by which the apparent angular size of objects is increased for an observer, compared with direct viewing (without the telescope). This parameter is of course relevant only for telescopes which are used together with the human eye instead of an image sensor, for example.
A design with high magnification does not inevitably have a high image resolution. However, the magnification should be large enough to make full use of the image resolution – as in a microscope.
One may realize different values of the magnification of a telescope by using different oculars, depending on the observation conditions. The total angular magnification is the ratio of the focal length of the objective to that of the ocular.
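As a small illustration of that relation – with focal lengths that are assumed example values, not specifications from the text – the magnification obtained with different oculars can be tabulated:

```python
f_objective = 900.0  # assumed objective focal length in millimeters

# Assumed ocular (eyepiece) focal lengths; each yields a different magnification.
for f_ocular in (25.0, 10.0, 6.0):
    magnification = f_objective / f_ocular
    print(f"{f_ocular:4.0f} mm ocular -> {magnification:.0f}x")
```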
Field of View
The field of view is the range of angular directions which can be viewed with a fixed orientation of the telescope. As explained above, it is often limited by image aberrations, which become more severe for extreme viewing angles.
A large field of view is particularly relevant, for example, for astronomical survey telescopes, which are used to image large areas of the sky.
Image Resolution and Light Gathering Power of Telescopes
The achievable image resolution of a telescope, quantified as an angular resolution on the object side, is ultimately limited by diffraction if the optical quality is excellent; the essential design parameter is the diameter of the entrance aperture. Although the output aperture is much smaller, diffraction is less relevant there due to the image magnification. The angular resolution can be estimated as 1.22 times the optical wavelength divided by the aperture diameter. For green light in a telescope with a 1-m mirror, this leads to a resolution of ≈0.67 μrad = 0.14 arcseconds.
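That estimate is easy to reproduce; the sketch below assumes 550 nm as a representative wavelength for green light, which is consistent with the quoted numbers:

```python
import math

wavelength = 550e-9  # assumed wavelength for green light, in meters
aperture = 1.0       # mirror diameter in meters

theta_rad = 1.22 * wavelength / aperture       # diffraction-limited angular resolution
theta_arcsec = math.degrees(theta_rad) * 3600  # convert radians to arcseconds

print(f"{theta_rad * 1e6:.2f} urad = {theta_arcsec:.2f} arcsec")
# prints roughly: 0.67 urad = 0.14 arcsec, as stated above
```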
A large aperture usually also leads to a high light gathering power, which is important for observing faint objects like distant stars.
Typical Format of Specification
The most essential design parameters of a telescope are the opening aperture and the magnification. These are often specified in a compact form, for example as 8 × 30 for a binocular telescope if the magnification is 8 and the entrance diameter is 30 mm. For an astronomical telescope, such a specification may not be given, because the instrument can be used with different eyepieces, leading to different values of the magnification.
Telescopes for Specific Applications
Telescopes for Terrestrial Observations
Small terrestrial telescopes are often made in the form of hand-held binoculars, essentially consisting of two independent telescopes – one for each eye. The spatial separation of the two objectives can be increased beyond the spacing of the human eyes in order to achieve better 3D vision. The required modification of the beam paths can be made with prisms which may at the same time undo the image inversion, as far as that is caused by the other optics. Binoculars are typically used for purposes like ornithology, hunting, sports watching and military reconnaissance.
There are also compact monoculars for viewing with one eye, which can be made with lower cost and weight.
Larger telescopes, e.g. for applications in geodesy, are often made as monoculars and are mounted on a flexible system, which may be motorized for accurately looking in certain directions.
Small telescopes are mounted on rifles for precise targeting and are then called riflescopes. Similar telescopes are also used for other types of weapons.
Rather large telescopes – most often with a Cassegrain architecture – have been developed for astronomical observations. The largest realized ones have open apertures with diameters around 10 m. The diffraction limit for the angular resolution is then normally no longer reachable due to image distortions in the atmosphere – even when telescopes are placed on high mountains. Therefore, adaptive optics are increasingly used for correcting such distortions. The measurement of the distortions to be corrected can be made on the same telescope, either using light from stars or from artificial laser guide stars.
Several even larger telescopes are currently planned, with apertures above 20 m and partly even well above 30 meters.
Astronomical observations often require substantial time in order to acquire enough light energy for a proper exposure of a photographic film or an image sensor. It is then necessary to accurately move the telescope such that the effects of the rotation of Earth are compensated.
High performance is required also from the used image sensors, which are mostly of CCD type. Rather large sensor designs, possibly including multiple CCD chips, are used in various telescopes. For highest sensitivity, they are often operated at low temperatures. Further, one may take additional measurements under dark conditions and apply noise subtraction algorithms.
Some large observatories (e.g. the European Southern Observatory in Chile with its Very Large Telescope) work with a combination of several telescopes, combining signals from them with interferometry for substantial further increases of angular resolution.
Another option for avoiding the problem of atmospheric distortions is to place a telescope outside the atmosphere of Earth – typically in an orbit around Earth. The most famous example is the Hubble Space Telescope (HST), which was launched in 1990 and has delivered astronomical images of enormous scientific value during several decades. Although its entrance aperture of 2.4 meters is small compared with that of terrestrial telescopes, the freedom from atmospheric distortions allows for very high image quality. Light in the visible, ultraviolet and near infrared region can be utilized. Other space telescopes have later been deployed, for example the Herschel telescope in 2009, which however operates in the far infrared. The planned James Webb Space Telescope is expected to cover the wavelength range from 0.6 μm to 28.5 μm with a primary mirror of 6.5 m diameter.
Instead of imaging, one may analyze the light coming from a star or a galaxy, for example, with a highly sensitive spectrometer. In other cases, the polarization properties of light are carefully studied, using polarimeters.
There are also solar telescopes, which are made specifically for imaging details of the Sun. Here, one is definitely not short of brightness; to the contrary, the system must be able to handle substantial optical powers. Because of the comparatively small observation distance, the angular resolution usually does not need to be as high as for the observation of distant stars. The telescope designs are accordingly quite different from those of other astronomical telescopes.
|
The type of agriculture practiced in a given region depends heavily on the climate and weather that region receives. So naturally, with climate change, agriculture will be forced to change. Certain crops will have to be discarded for alternative crops which may grow better in the new climate. In other cases, agriculture will simply be no longer sustainable. Farms may have to close down or move to different latitudes or elevations. The unpredictable nature of climate change will make this quite a conundrum for farmers and the world at large.
One man has attempted to explain it all through a book which can help guide us through a potentially rocky transition. Ariel Dinar, director of the Water Science and Policy Center at the University of California (UC), Riverside, has co-edited the book, “Handbook on Climate Change and Agriculture.” The book has contributions from scholars around the world. It explores direct effects on agriculture, economic impacts, and farmer adaptation.
The writers of the book make the argument that climate change will likely have a significant impact on agriculture around the world. The changes will be in the form of temperature, precipitation, CO2 concentrations, and available water flows.
“Developing countries already face food problems,” said Dinar. “The effects of climate change on agriculture in these and other countries will depend on how well the agricultural sector can adapt through technology, institutions, and better management practices. Developing countries are better able to engage in adaptation since mitigation is much harder for these countries to do.”
Dinar first began his studies on climate change in 1994 and realized the effects on agriculture were largely ignored. The effects could be particularly dire for developing countries, where farming is still relatively low-tech and there is a strong reliance on a steady climate. Livestock, as well, would be particularly hard hit.
“It soon became clear to me that people did not know much about adaptation to the effects of climate change,” he said. “The net effect of climate change on agricultural production is still not well understood. It’s not just the production of food from crops that is involved, but also livestock. Agriculture suffers from climate change, but it also contributes to it through land use and abuse, as well as the adoption of practices that are unsustainable where climate change is concerned such as unsuitable cropping patterns and irrigation technologies.”
Article by David A. Gabel, appearing courtesy Environmental News Network.
|
A vessel's freeboard is the distance from the waterline to the upper deck level, measured at the lowest point of sheer where water can enter the boat. In commercial vessels, the latter criterion, measured relative to the ship's load line regardless of deck arrangements, is the mandated and regulated meaning.

In yachts, a low freeboard is often found on racing boats, for increased speed (by reducing weight and therefore drag). A higher freeboard will give more room in the cabin, but will increase weight and drag, compromising speed. A higher freeboard, such as used on ocean liners, also helps weather waves and so reduces the likelihood of being washed over by full water waves on the weather deck. A low-freeboard boat is susceptible to taking in water in rough seas. Freighter ships and warships use high freeboard designs to increase internal volume, which also allows them to satisfy International Maritime Organization (IMO) damage stability regulations, due to increased reserve buoyancy.

For the term as used in measuring sea ice, see sea ice thickness.
|
A bone fracture is a medical condition in which a bone is cracked or broken. It is a break in the continuity of the bone. While many fractures are the result of high force impact or stress, bone fracture can also occur as a result of certain medical conditions that weaken the bones, such as osteoporosis.
Fractures: Types and Treatment
The word “fracture” refers to a broken bone. A bone may be fractured completely or partially, most commonly as a result of trauma from a fall, a motor vehicle accident, or sports. Thinning of the bone due to osteoporosis in the elderly can cause the bone to break easily. Overuse injuries are a common cause of stress fractures in athletes.
Types of fractures include:
- Simple fractures in which the fractured pieces of bone are well aligned and stable.
- Unstable fractures are those in which fragments of the broken bone are misaligned and displaced.
- Open (compound) fractures are severe fractures in which the broken bones cut through the skin. This type of fracture is more prone to infection and requires immediate medical attention.
- Greenstick fractures: This is a unique fracture in children that involves bending of one side of the bone without any break in the bone.
Our body reacts to a fracture by protecting the injured area with a blood clot and callus or fibrous tissue. Bone cells begin forming on either side of the fracture line. These cells grow towards each other and thus close the fracture.
The objective of early fracture management is to control bleeding, prevent ischemic injury (bone death) and to remove sources of infection such as foreign bodies and dead tissues. The next step in fracture management is the reduction of the fracture and its maintenance. It is important to ensure that the involved part of the body returns to its function after fracture heals. To achieve this, maintenance of fracture reduction with immobilization technique is done by either non-operative or surgical method.
Non-operative (closed) therapy comprises casting and traction (skin and skeletal traction).
Closed reduction is done for any fracture that is displaced, shortened, or angulated. Splints and casts made of fiberglass or plaster of Paris are used to immobilize the limb.
Traction method is used for the management of fractures and dislocations that cannot be treated by casting. There are two methods of traction namely, skin traction and skeletal traction.
Skin traction involves attachment of traction tapes to the skin of the limb segment below the fracture. In skeletal traction, a pin is inserted through the bone distal to the fracture. Weights will be applied to this pin, and the patient is placed in an apparatus that facilitates traction. This method is most commonly used for fractures of the thighbone.
- Open Reduction and Internal Fixation (ORIF)
This is a surgical procedure in which the fracture site is adequately exposed and reduction of fracture is done. Internal fixation is done with devices such as Kirschner wires, plates and screws, and intramedullary nails.
- External fixation
External fixation is a procedure in which the fracture stabilization is done at a distance from the site of fracture. It helps to maintain bone length and alignment without casting.
External fixation is performed in the following conditions:
- Open fractures with soft-tissue involvement
- Burns and soft tissue injuries
- Pelvic fractures
- Comminuted and unstable fractures
- Fractures having bony deficits
- Limb-lengthening procedures
- Fractures with infection or non-union
Fractures may take several weeks to months to heal completely. You should limit your activities even after the removal of the cast or brace so that the bone becomes solid enough to bear the stress. The rehabilitation program involves exercises and a gradual increase in activity levels until the process of healing is complete.
Growth Plate Fractures
Growth plates, also called the epiphyseal plate or physis, are the areas of growing cartilaginous tissue found at the ends of the long bones in children. These growth plates determine the length and shape of the mature bone. The growth plates are more susceptible to damage from trauma because they are not as hard as bones.
Growth plate injuries commonly occur in growing children and teenagers. In children, severe injury to the joint may result in a growth plate fracture rather than a ligament injury. Any injury that can cause a sprain in an adult can cause a growth plate fracture in a child.
Growth plate fractures are more common in boys than girls because the plates develop into mature bone faster in girls. Growth plate fractures commonly occur at the wrist, the long bones of the forearm (radius) and fingers (phalanges), the legs (tibia and fibula), the foot, the ankle or the hip during sports activities such as football, basketball and gymnastics.
Types of growth plate fractures
Growth plate fractures can be classified into five categories based on the type of damage caused.
- Type I – Fracture through the growth plate
The epiphysis is separated from the metaphysis with the growth plate remaining attached to the epiphysis. The epiphysis is the rounded end of the long bones below the growth plate and the metaphysis is the wider part at the end of the long bones above the growth plate.
- Type II – Fracture through the growth plate and metaphysis
This type is the most common type of growth plate fracture. The growth plate and metaphysis are fractured without involving the epiphysis.
- Type III – Fracture through the growth plate and epiphysis
In this type of injury, the fracture runs through the epiphysis and separates the epiphysis and growth plate from the metaphysis. It usually occurs in the tibia, one of the long bones of the lower leg.
- Type IV – Fracture through the growth plate, metaphysis, and epiphysis
This type of fracture goes through the epiphysis and growth plate, and into the metaphysis. It often occurs in the upper arm near the elbow joint.
- Type V – Compression fracture through the growth plate
This is a rare type of fracture in which the end of the bone is crushed and the growth plate is compressed. It can occur at the knee or ankle joint.
Growth plate injuries are caused by accidental falls or blows to the limbs during sports activities such as gymnastics, baseball, or running. They may also result from overuse of tendons and certain bone disorders such as infection that can affect the normal growth and development of the bone. The other possible causes which can lead to growth plate injuries are:
- Child abuse or neglect – Growth plate fractures are one of the most common fractures that occur in abused or neglected children.
- Exposure to intense cold (frostbite) – Extremely cold climatic conditions can cause damage to the growth plates resulting in short fingers and destruction of the joint cartilage.
- Chemotherapy and medications – Chemotherapy to treat cancer in children and continuous use of steroids for arthritis may affect bone growth.
- Nervous system disorders – Children with disorders of the nerves may have sensory deficits and muscular imbalances that can cause them to lose their balance and fall.
- Genetic disorders – Gene mutations may result in poorly formed or malfunctioning growth plates which are vulnerable to fracture.
- Metabolic diseases – Diseases such as kidney failure and hormonal disturbances affect the proper functioning of the growth plates and increase susceptibility to fractures.
Signs and symptoms
Signs and symptoms of a growth plate injury include:
- Inability to move or put pressure on the injured extremity
- Severe pain or discomfort that prevents the use of an arm or leg
- Inability to continue playing after a sudden injury because of pain
- Persistent pain from a previous injury
- Malformation of the legs or arms as the joint area near the end of the fractured bone may swell
In children, fractures heal faster. If a growth plate fracture is left untreated it may heal improperly causing the bone to become shorter and abnormally shaped.
Your doctor will evaluate the condition by asking you about the injury and performing a physical examination of the child.
X-rays may be taken to determine the type of fracture. Since the growth plates have not hardened and may not be visible, X-rays of the injured as well as the normal limb are often taken to look for differences in order to help determine the place of injury.
Other diagnostic tests your doctor may recommend include computed tomography (CT) scan or magnetic resonance imaging (MRI). These tests are helpful in detecting the type and extent of injury as it allows the doctor to see the growth plate and soft tissues.
The treatment for growth plate injuries depends upon the type of fracture involved. In all cases, the treatment should begin as early as possible and include the following:
- Immobilization: The injured limb is placed in a cast or given a splint to wear. The child will be advised to limit activities and avoid putting pressure on the injured limb.
- Manipulation or surgery: If the fracture is displaced and the ends of the broken bones do not meet in the proper position, your doctor will bring the bone ends into the correct position either manually (manipulation) or surgically. Sometimes, a screw or wire may be used to hold the growth plate in place. The bone is then immobilized with a cast to promote healing. The cast is removed once healing is complete.
- Physical therapy: Exercises such as strengthening and range-of-motion exercises should be started only after the fracture has healed. These are done to strengthen the muscles of the injured area and improve the movement of the joint. A physical therapist will design an appropriate exercise schedule for your child.
- Long-term follow up: Periodic evaluations are needed to monitor the child’s growth. Evaluation includes X-rays of matching limbs at intervals of 3 to 6 months for at least 2 years.
Most growth plate fractures heal without any long term problems. Rarely, the bone may stop growing and become shorter than the other limb.
A fracture is a break in the bone that occurs when extreme force is applied. Treatment of fractures involves the joining of the broken bones either by immobilizing the area and allowing the bone to heal on its own, or surgically aligning the broken bones and stabilizing it with metal pins, rods or plates. Sometimes, the broken bone fails to re-join and heal even after treatment. This is called non-union. Non-union occurs when the broken bones do not get sufficient nutrition, blood supply or adequate stability (not immobilized enough) to heal. Non-union can be identified by pain after the initial fracture pain is relieved, swelling, tenderness, deformity and difficulty bearing weight.
When you present with these symptoms, your doctor may order imaging tests like X-rays, CT scans and MRI to confirm a diagnosis of non-union. The treatment of non-union fractures can be achieved by non-surgical or surgical procedures.
Non-surgical treatment: This method involves the use of a bone stimulator, a small device that produces ultrasonic or pulsed electromagnetic waves, which stimulates the healing process. You will be instructed to place the stimulator over the region of non-union for 20 minutes to a few hours every day.
Surgical treatment: The surgical method of treatment for non-union is aimed at:
- Establishing stability: Metal rods, plates or screws are implanted to hold the broken bones above and below the fracture site. Support may be provided internally or externally.
- Providing a healthy blood supply and soft tissue at the fracture site: Your doctor removes dead bone along with any poorly vascularized or scarred tissue from the site of fracture to encourage healing. Sometimes, healthy soft tissue along with its underlying blood vessels may be removed from another part of your body and transplanted at the fracture site to promote healing.
- Stimulating a new healing response: Bone grafts may be used to provide fresh bone-forming cells and supportive cells to stimulate bone healing.
A stress fracture is described as a small crack in the bone which occurs from an overuse injury of a bone. It commonly develops in the weight bearing bones of the lower leg and foot. When the muscles of the foot are overworked or stressed, they are unable to absorb the stress and when this happens the muscles transfer the stress to the bone which results in stress fracture.
Stress fractures are caused by a rapid increase in the intensity of exercise. They can also be caused by impact on a hard surface, improper footwear, and increased physical activity. Athletes participating in certain sports such as basketball, tennis or gymnastics are at a greater risk of developing stress fractures. During these sports, the repetitive stress of the foot striking a hard surface causes trauma and muscle fatigue. An athlete with inadequate rest between workouts can also develop a stress fracture.
Females are at a greater risk of developing stress fractures than males; this may be related to a condition referred to as the “female athlete triad,” a combination of eating disorders, amenorrhea (irregular menstrual cycle), and osteoporosis (thinning of the bones). The risk of developing a stress fracture increases in females as bone density decreases.
The most common symptom is pain in the foot which usually gets worse during exercises and decreases upon resting. Swelling, bruising, and tenderness may also occur at a specific point.
Your doctor will diagnose the condition after discussing your symptoms and risk factors and examining the foot and ankle. Diagnostic tests such as an X-ray, MRI scan or bone scan may be required to confirm the fracture.
Stress fractures can be treated by a non-surgical approach, which includes rest and limiting the physical activities that involve the foot and ankle. If children return too quickly to the activity that caused the stress fracture, it may lead to chronic problems such as harder-to-heal stress fractures.
Protective footwear may be recommended to help reduce stress on the foot. Your doctor may apply a cast to the foot to immobilize the leg, which also helps to remove the stress. Crutches may be used to keep weight off the foot until the stress fracture has healed completely.
Surgery may be required if the fracture does not heal completely with non-surgical treatment. Your doctor makes an incision on the foot and uses internal fixators such as wires, pins, or plates to hold the broken bones of the foot together until healing occurs, after which these fixators may be removed or left permanently inside the body.
Some of the following measures may help to prevent stress fractures:
- Ensure to start any new sport activity slowly and progress gradually
- Cross-training: You may use more than one exercise with the same intention to prevent injury. For example you may run on even days and ride a bike on odd days, instead of running every day to reduce the risk of injury from overuse. This limits the stress occurring on specific muscles as different activities use muscles in different ways
- Ensure to maintain a healthy diet and include calcium and vitamin D-rich foods in your diet
- Ensure that your child uses proper footwear or shoes for any sports activity and avoid using old or worn out shoes
- If your child complains of pain and swelling, immediately stop the activities and make sure that your child rests for a few days
|
Posted on Jun 09, 2017, 6 a.m.
Certain types of bacteria in the gut can leverage the immune system to decrease the severity of stroke.
Stroke is currently the second leading cause of death worldwide. The most common type is ischemic stroke, during which a blocked blood vessel prevents blood from reaching the brain. Researchers at Memorial Sloan Kettering Cancer Center induced ischemic stroke in mice two weeks after administering a combination of antibiotics. The mice treated with antibiotics had a stroke that was approximately 60 percent smaller than the mice that did not receive antibiotics. The microbial environment in the gut instructed the immune cells present there to protect the brain, shielding it from the stroke’s full force.

"Our experiment shows a new relationship between the brain and the intestine," stated Dr. Josef Anrather, the Finbar and Marianne Kenny Research Scholar in Neurology and an associate professor of neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine. "The intestinal microbiota shape stroke outcome, which will impact how the medical community views stroke and defines stroke risk."

These findings open up the possibility that altering the microbial makeup of the gut could become a new method of preventing stroke. For high-risk patients, such as those who are having cardiac surgery or those who have multiple obstructed blood vessels in the brain, this could be particularly beneficial. Further exploration is required to figure out exactly which bacterial components generated their protective message. The researchers do know, however, that the bacteria did not interact with the brain chemically, but instead influenced neural survival by changing the behavior of the immune cells. The gut’s immune cells traveled up into the outer coverings of the brain, which are called the meninges. Here they organized and directed a response to the stroke.

"One of the most surprising findings was that the immune system made strokes smaller by orchestrating the response from outside the brain, like a conductor who doesn't play an instrument himself but instructs the others, which ultimately creates music," said Dr. Costantino Iadecola, director of the Feil Family Brain and Mind Research Institute and the Anne Parrish Titzell Professor of Neurology at Weill Cornell Medicine.

This new gut-brain connection holds promise for preventing stroke in the future, which the researchers say may be achieved by changing at-risk patients’ nutrition.
Nature Medicine, DOI: 10.1038/nm.4068 Commensal microbiota affects ischemic stroke outcome by regulating intestinal γδ T cells
|
The earth as a whole is neither heating up nor cooling down, as there is a balance between incoming insolation and outgoing terrestrial radiation. Cold and warm regions do not become increasingly cold or warm respectively. But there are significant variations within the atmosphere. Although energy is lost through radiation throughout the atmosphere, no net gain in radiation is experienced in the Polar Regions; instead there is a net deficit there between incoming and outgoing radiation. At latitudes between about 40°N and 35°S there is a net surplus of radiation (a positive heat balance). This imbalance is addressed by horizontal and vertical transfers of heat.
Sensible heat transfer
Parcels of air acquire heat at their source region and carry this heat to the areas they travel through, including the Poles, or vice versa; examples include tropical storms and depressions.
Ocean currents
Warm water is carried from the equator to the Poles; conversely, cold water is carried by the ocean to the tropics.
Vertical heat transfers supply energy to the atmosphere by means of radiation, conduction and convection.
Radiation
Earth's surface absorbs solar radiation, which in turn heats the air above it.
Convection
Air parcels are heated by the earth's surface, become lighter, rise and cool.
Conduction
Conduction is the transfer of heat through matter.
Latent heat transfer
When condensation occurs, heat trapped in the air parcel is released into the atmosphere.
|
Fort Sumter, located at the entrance to the Charleston harbor, was built after the War of 1812 as one of a series of forts protecting the southern coast of the United States. On April 12, 1861, South Carolina militia troops at Fort Johnson on James Island fired the first shots of the Civil War at the Union forces occupying Fort Sumter. In December of 1860, after Abraham Lincoln won the presidential election, South Carolina led Southern states in adopting ordinances of secession. A few days later, US Army Major Robert Anderson moved his troops from Fort Moultrie to Fort Sumter, hoping to delay an attack by South Carolina militia.
By the time Lincoln was inaugurated in March of 1861, six other states had joined South Carolina in seceding from the Union. The seven wrote a new constitution and formed the Confederate States of America, establishing their temporary capital at Montgomery, Alabama. At the same time, governors in Massachusetts, New York, and Pennsylvania quietly began buying weapons and training militia units. Tensions were clearly building toward confrontation.
In April, multiple requests by South Carolina’s Governor for Major Anderson to abandon Fort Sumter had been repeatedly ignored. South Carolina militia troops, under the command of Brigadier General Beauregard, gathered in readiness across the harbor at Fort Johnson.
The US Army sent heavily defended war ships to restock supplies at Fort Sumter. When the first US ship arrived on April 11, Beauregard’s aides Colonel James Chesnut, Captain Stephen Lee, and Lieutenant A. R. Chisholm visited Major Anderson at the fort to once again, demand a surrender. Though Anderson deliberated, ultimately he refused and Chesnut, Lee, and Chisolm returned to Fort Johnson and reported his answer to Beauregard.
At 4:30 AM on April 12, Confederate soldiers began to fire their cannons at Fort Sumter, and they kept up the bombardment for 34 straight hours. Federal troops returned fire, but were ineffective. On April 13, the fort was surrendered and evacuated. The people of Charleston could see the battle taking place across the harbor. Naive to the ruin that would come, they believed the conflict would be short-lived. A famous diary of Mary Chesnut described a festive atmosphere along what is now known as The Battery, with residents watching from their balconies and raising toasts to the beginning of the hostilities.
The Confederacy held Fort Sumter until the very last months of the war, finally surrendering it to the Federal Army in February of 1865. Repairs were made to the structure after the war ended, but between 1876 and 1897 it was used only as an unmanned lighthouse station.
Each time America went to war in the ensuing years, reconstruction was done to make Fort Sumter battle-ready. At the beginning of the Spanish American War, a new concrete blockhouse-style installation was built inside the original walls. Rifles were mounted during WWI and antiaircraft guns were installed during WWII, but the fort never saw combat again.
Fort Sumter became a United States National Monument in 1948. A short ferry ride from Charleston, the historic fort is open daily to the public. For more on visiting, go to the Fort Sumter Visitor Education Center.
Special thanks to photographer Barry Gooch of Charleston for sharing his photos, including part of his collection of historical photos, with the South Carolina Picture Project.
Also, thank you to Larry Gleason for contributing his stunning aerial photos.
Fort Sumter National Monument is listed in the National Register:
Perhaps no area in America embraces the evolution of harbor fortifications as well as Fort Sumter National Monument, which includes both Fort Sumter and Fort Moultrie. Strategically located at the mouth of Charleston Harbor, the first Fort Moultrie was the scene of a victory on June 28, 1776 that prevented the British from quenching the American Revolution in its early stages. The second Fort Moultrie occupied almost the same site from 1794-1804 as war clouds in Europe posed numerous threats to America. The third Fort Moultrie, completed in 1811, played its most significant role during the Civil War. On December 26, 1860, Union Major Robert Anderson evacuated the fort to occupy the new Fort Sumter one mile southwest in Charleston Harbor. Fort Sumter was built as a defensive counterpart to Fort Moultrie. The guns at Fort Moultrie helped drive Major Anderson out of the fort during the opening of the Civil War, April 12-13, 1861.
As the symbol of secession and Southern resistance, Fort Sumter was heavily damaged by Union rifled guns in 1863-1865, which signaled the end of obsolete masonry forts with many guns. During rehabilitation of these forts in the 1870s, larger guns were spaced further apart, powder magazines built underground and closer to the guns. Batteries Jasper and Huger were built in the Spanish-American War era. These huge concrete structures could withstand the more powerful naval armament. To protect minefields, smaller batteries such as Bingham, McCorkle, and Lord were developed. In World War II, the logical culmination in the evolution of harbor fortifications was the employment of electronic detection equipment of the Harbor Entrance Control Post with nearby defensive guns. The structures of Fort Sumter National Monument, whether large or small, have played a substantial role in safeguarding the Charleston area through nearly 200 years of history and seven wars.
More Pictures of Fort Sumter
Reflections on Fort Sumter
Contributor Kathie Lee shares: “Fort Sumter is a very special place to visit. I’ve visited many civil war monuments but to stand at the place of the war’s genesis is a very solemn experience for me. This shot took a bit of patience due to the number of people milling around the front of the fort. I wanted to capture the beauty of the harbor and contrast it against the warmth and texture of the bricks and very large gate.”
Contributor Kevin Cunningham describes his moonlight photo: “This is the Strawberry Moon rising over Fort Sumter. The Strawberry Moon occurs when the June full moon coincides with the summer solstice. It last occurred in 1967 and won’t happen again until 2062. Various sources say that the term ‘strawberry moon’ was given because this full moon coincides with the start of strawberry season in the eastern United States.”
|
Our system of numerals has a long history of development, with the symbols derived from the Indian Brahmi numerals, then adopted and popularised by the Arab empire. Initially the place value system used nine numerals and a blank space called sunya in India and sifr in the Arab world, both words meaning empty. The tenth symbol for zero appeared later. There are two uses of zero that are both important but are somewhat different. One use is as an empty place indicator in our place-value system and the second use of zero is as a number in its own right. There are also different aspects of zero (and other numbers), namely the concept or quantity, the notation and the name. In assessing numeral identification we seek to establish the link between the verbal and the symbolic.
In using number, children must integrate many layers of verbal, procedural, symbolic and conceptual meaning. To illustrate how these layers can coexist, consider the following video showing some of the different meanings attributed to “five”.
Get me 5 counters…
This child has learnt what five fingers are, and in that limited context, could be said to know five. He has knowledge of the forward sequence of number words to five and recognition that the last word in a count has special meaning. His counting procedure starts with the first item in a row and ends on the final item in the row, but he does not match his counting words one-to-one with each and every item.
As well as the spoken word (e.g. “five”), number can be represented symbolically as a written word (e.g. five) or as a written numeral (e.g. 5). Although numerals are the written and read symbols for numbers, they can also play a similar role to letters in forming part of a name, as in licence plates and telephone numbers. At its most basic level, numeral identification is a form of shape recognition, which can result in a simple association of the word “two” with the symbol ‘2’ without a cardinal meaning (Mix, Sandhofer, & Baroody, 2005). This means that numeral identification can develop at a different rate to number knowledge.
Learning to identify, recognise and write numerals is an important part of early arithmetical development. When a young child learns the name of a numeral it sows the idea that a symbol can stand for a whole word (Mix, Huttenlocher, & Levine, 2002).
Numeral identification refers to being able to state the name of a displayed numeral.
Level 0: Emergent
At the emergent numeral identification level the student may identify some, but not all numerals in the range 1–10.
Level 1: 1-10
At the 1–10 numeral identification level the student can identify all numerals in the range 1–10.
Level 2: 1-20
At the 1–20 numeral identification level the student can identify all numerals in the range 1–20.
Level 3: 1-100
At the 1–100 numeral identification level the student can identify all numerals in the range 1–100.
Level 4: 1-1000
At the 1–1000 numeral identification level the student can identify one–, two– and three–digit numbers.
Level 5: 1-10 000
At the 1–10 000 numeral identification level the student can identify one–, two–, three– and four–digit numbers.
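The level descriptions above can be summarized programmatically. The following sketch is purely illustrative — the helper function and its inputs are hypothetical, not part of the assessment framework itself:

```python
# Level labels follow the framework above; the helper itself is hypothetical.
LEVELS = [
    (10, "Level 1: 1-10"),
    (20, "Level 2: 1-20"),
    (100, "Level 3: 1-100"),
    (1000, "Level 4: 1-1000"),
    (10000, "Level 5: 1-10 000"),
]

def numeral_id_level(max_range_identified: int) -> str:
    """Label for the largest numeral range a student can identify in full."""
    label = "Level 0: Emergent"
    for bound, name in LEVELS:
        if max_range_identified >= bound:
            label = name
    return label

print(numeral_id_level(20))    # Level 2: 1-20
print(numeral_id_level(1000))  # Level 4: 1-1000
```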
|
What is Hazard Classification?
Hazard classification is the process of evaluating the full range of available scientific evidence to determine if a chemical is hazardous, as well as to identify the level of severity of the hazardous effect. When complete, the evaluation identifies the hazard class(es) and associated hazard category of the chemical. The HCS defines hazard class as the nature of a physical or health hazard, e.g., flammable solid, carcinogen, and acute toxicity. Hazard category means the division of criteria within each hazard class, e.g., acute toxicity and flammable liquids each include four hazard categories numbered from category 1 through category 4. These categories compare hazard severity within a hazard class and should not be taken as a comparison of hazard categories more generally. That is, a chemical identified as a category 2 in the acute toxicity hazard class is not necessarily less toxic than a chemical assigned a category 1 of another hazard class. The hierarchy of the categories is only specific to the hazard class. The hazard classification process provides the basis for the hazard information that is provided in SDSs, labels, and worker training.
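To make the distinction between hazard class and hazard category concrete, here is a minimal sketch; the field names and example records are illustrative assumptions, not an OSHA data format:

```python
from dataclasses import dataclass

@dataclass
class HazardClassification:
    hazard_class: str  # nature of the hazard, e.g. "flammable liquid"
    category: int      # severity ranking, meaningful only within one class

chemical_a = HazardClassification(hazard_class="acute toxicity", category=2)
chemical_b = HazardClassification(hazard_class="flammable liquid", category=1)

def categories_comparable(x: HazardClassification, y: HazardClassification) -> bool:
    # Category numbers rank severity only within the same hazard class.
    return x.hazard_class == y.hazard_class

print(categories_comparable(chemical_a, chemical_b))  # False
```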
The hazard classification process, as provided in the Hazard Communication Standard, involves several steps.
The HCS provides specific criteria for hazard classification to ensure that chemical manufacturers, importers, and other classification experts come to similar conclusions regarding the hazards of chemicals. The resulting classification is then used to determine appropriate hazard warnings. This method not only provides employers and workers with more consistent classification of hazards, but the hazard information on SDSs and labels is in a form that is more consistent and presented in a way that facilitates the understanding of the hazards of chemicals.
This hazard information can then be used when evaluating the workplace conditions to determine the hazards in the workplace, as well as to respond to exposure incidents. The information and criteria provided in Appendix A to 29 CFR 1910.1200 are used to classify the health hazards posed by hazardous chemicals. Similarly, the information and criteria provided in Appendix B to 29 CFR 1910.1200 are used to classify the physical hazards posed by hazardous chemicals.
Hazard classification does not involve an estimation of risk. The difference between the terms hazard and risk is often poorly understood. Hazard refers to an inherent property of a substance that is capable of causing an adverse effect. Risk, on the other hand, refers to the probability that an adverse effect will occur with specific exposure conditions. Thus, a chemical will present the same hazard in all situations due to its innate chemical or physical properties and its actions on cells and tissues. However, considerable differences may exist in the risk posed by a chemical, depending on how the chemical is contained or handled, personal protective measures used, and other conditions that result in or limit exposure. This document addresses only the hazard classification process, and will not discuss risk assessment, which is not performed under the HCS.
|
Ivesia lycopodioides facts for kids
Ivesia lycopodioides is a species of flowering plant in the rose family known by the common name clubmoss mousetail, or clubmoss ivesia. It is native to the Sierra Nevada and to regions east of the range in California. It may also be found beyond the state line into Nevada. This is a perennial herb which grows in the crevices of rock ledges in the mountains and in wet high-elevation meadows. It produces a rosette of flat to cylindrical leaves up to 15 centimeters long, each of which is made up of many tiny, lobed leaflets. The stems may grow erect or drooping to 30 centimeters long and each holds an inflorescence of clustered flowers. Each flower has hairy, greenish triangular sepals and much larger oval-shaped petals of bright yellow. In the center of the flower are usually five stamens and several pistils. There are three subspecies.
|
Asphyxiating thoracic dystrophy, also known as Jeune syndrome, is an inherited disorder of bone growth characterized by a narrow chest, short ribs, shortened bones in the arms and legs, short stature, and extra fingers and toes (polydactyly). Additional skeletal abnormalities can include unusually shaped collarbones (clavicles) and pelvic bones, and cone-shaped ends of the long bones in the arms and legs. Many infants with this condition are born with an extremely narrow, bell-shaped chest that can restrict the growth and expansion of the lungs. Life-threatening problems with breathing result, and people with asphyxiating thoracic dystrophy may live only into infancy or early childhood. However, in people who survive beyond the first few years, the narrow chest and related breathing problems can improve with age.
Some people with asphyxiating thoracic dystrophy are born with less severe skeletal abnormalities and have only mild breathing difficulties, such as rapid breathing or shortness of breath. These individuals may live into adolescence or adulthood. After infancy, people with this condition may develop life-threatening kidney (renal) abnormalities that cause the kidneys to malfunction or fail. Heart defects and a narrowing of the airway (subglottic stenosis) are also possible. Other, less common features of asphyxiating thoracic dystrophy include liver disease, fluid-filled sacs (cysts) in the pancreas, dental abnormalities, and an eye disease called retinal dystrophy that can lead to vision loss.
Asphyxiating thoracic dystrophy affects an estimated 1 in 100,000 to 130,000 people.
Mutations in at least 11 genes have been found to cause asphyxiating thoracic dystrophy. Genetic changes in the IFT80 gene were the first to be associated with this condition. Later, researchers discovered that mutations in another gene, DYNC2H1, account for up to half of all cases. Mutations in other genes each cause a small percentage of cases. In total, about 70 percent of people with asphyxiating thoracic dystrophy have mutations in one of the known genes.
The genes associated with asphyxiating thoracic dystrophy provide instructions for making proteins that are found in cell structures called cilia. Cilia are microscopic, finger-like projections that stick out from the surface of cells. The proteins are involved in a process called intraflagellar transport (IFT), by which materials are carried to and from the tips of cilia. IFT is essential for the assembly and maintenance of these cell structures. Cilia play central roles in many different chemical signaling pathways, including a series of reactions called the Sonic Hedgehog pathway. These pathways are important for the growth and division (proliferation) and maturation (differentiation) of cells. In particular, Sonic Hedgehog appears to be essential for the proliferation and differentiation of cells that ultimately give rise to cartilage and bone.
Mutations in the genes associated with asphyxiating thoracic dystrophy impair IFT, which disrupts the normal assembly or function of cilia. As a result, cilia are missing or abnormal in many different kinds of cells. Researchers speculate that these changes alter signaling through certain signaling pathways, including the Sonic Hedgehog pathway, which may underlie the abnormalities of bone growth characteristic of asphyxiating thoracic dystrophy. Abnormal cilia in other tissues, such as the kidneys, liver, and retinas, cause the other signs and symptoms of the condition.
Asphyxiating thoracic dystrophy is part of a group of disorders known as skeletal ciliopathies or ciliary chondrodysplasias, all of which are caused by problems with cilia and involve bone abnormalities. Several of these disorders, including asphyxiating thoracic dystrophy, are sometimes classified more specifically as short rib-polydactyly syndromes (SRPSs) based on their signs and symptoms. Some researchers believe that SRPSs would be more accurately described as a spectrum with a range of features rather than as separate disorders.
This condition is inherited in an autosomal recessive pattern, which means both copies of the gene in each cell have mutations. The parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition.
Other Names for This Condition
- Asphyxiating thoracic chondrodystrophy
- Asphyxiating thoracic dysplasia
- Chondroectodermal dysplasia-like syndrome
- Infantile thoracic dystrophy
- Jeune syndrome
- Jeune thoracic dysplasia
- Jeune thoracic dystrophy
- Thoracic asphyxiant dystrophy
- Thoracic-pelvic-phalangeal dystrophy
Additional Information & Resources
Genetic Testing Information
- Genetic Testing Registry: Asphyxiating thoracic dystrophy 2
- Genetic Testing Registry: Asphyxiating thoracic dystrophy 4
- Genetic Testing Registry: Asphyxiating thoracic dystrophy 5
- Genetic Testing Registry: Jeune thoracic dystrophy
- Genetic Testing Registry: Short-rib thoracic dysplasia 1 with or without polydactyly
Genetic and Rare Diseases Information Center
Research Studies from ClinicalTrials.gov
Catalog of Genes and Diseases from OMIM
- SHORT-RIB THORACIC DYSPLASIA 1 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 10 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 11 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 2 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 3 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 4 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 5 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 6 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 7 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 8 WITH OR WITHOUT POLYDACTYLY
- SHORT-RIB THORACIC DYSPLASIA 9 WITH OR WITHOUT POLYDACTYLY
Scientific Articles on PubMed
- Baujat G, Huber C, El Hokayem J, Caumes R, Do Ngoc Thanh C, David A, Delezoide AL, Dieux-Coeslier A, Estournet B, Francannet C, Kayirangwa H, Lacaille F, Le Bourgeois M, Martinovic J, Salomon R, Sigaudy S, Malan V, Munnich A, Le Merrer M, Le Quan Sang KH, Cormier-Daire V. Asphyxiating thoracic dysplasia: clinical and molecular review of 39 families. J Med Genet. 2013 Feb;50(2):91-8. doi: 10.1136/jmedgenet-2012-101282. Citation on PubMed
- Beales PL, Bland E, Tobin JL, Bacchelli C, Tuysuz B, Hill J, Rix S, Pearson CG, Kai M, Hartley J, Johnson C, Irving M, Elcioglu N, Winey M, Tada M, Scambler PJ. IFT80, which encodes a conserved intraflagellar transport protein, is mutated in Jeune asphyxiating thoracic dystrophy. Nat Genet. 2007 Jun;39(6):727-9. Epub 2007 Apr 29. Citation on PubMed
- Huber C, Cormier-Daire V. Ciliary disorder of the skeleton. Am J Med Genet C Semin Med Genet. 2012 Aug 15;160C(3):165-74. doi: 10.1002/ajmg.c.31336. Epub 2012 Jul 12. Review. Citation on PubMed
- Keppler-Noreuil KM, Adam MP, Welch J, Muilenburg A, Willing MC. Clinical insights gained from eight new cases and review of reported cases with Jeune syndrome (asphyxiating thoracic dystrophy). Am J Med Genet A. 2011 May;155A(5):1021-32. doi: 10.1002/ajmg.a.33892. Epub 2011 Apr 4. Review. Citation on PubMed
- Schmidts M, Arts HH, Bongers EM, Yap Z, Oud MM, Antony D, Duijkers L, Emes RD, Stalker J, Yntema JB, Plagnol V, Hoischen A, Gilissen C, Forsythe E, Lausch E, Veltman JA, Roeleveld N, Superti-Furga A, Kutkowska-Kazmierczak A, Kamsteeg EJ, Elçioğlu N, van Maarle MC, Graul-Neumann LM, Devriendt K, Smithson SF, Wellesley D, Verbeek NE, Hennekam RC, Kayserili H, Scambler PJ, Beales PL; UK10K, Knoers NV, Roepman R, Mitchison HM. Exome sequencing identifies DYNC2H1 mutations as a common cause of asphyxiating thoracic dystrophy (Jeune syndrome) without major polydactyly, renal or retinal involvement. J Med Genet. 2013 May;50(5):309-23. doi: 10.1136/jmedgenet-2012-101284. Epub 2013 Mar 1. Citation on PubMed or Free article on PubMed Central
- Schmidts M. Clinical genetics and pathobiology of ciliary chondrodysplasias. J Pediatr Genet. 2014 Nov;3(2):46-94. Citation on PubMed or Free article on PubMed Central
|
Nicolaus Copernicus (February 19, 1473 – May 24, 1543) was one of the great polymaths of his age. He was a mathematician, astronomer, jurist, physician, classical scholar, governor, administrator, diplomat, economist, and soldier. Amid his extensive accomplishments, he treated astronomy as an avocation. However, it is for his work in astronomy and cosmology that he has been remembered and accorded a place as one of the most important scientific figures in human history. He provided the first modern formulation of a heliocentric (Sun-centered) theory of the solar system in his epochal book, De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres).
That change, often known as the Copernican revolution, had important and far-reaching implications for not only science and cosmology but also theology, philosophy, and culture, and for the relationship between religion and science. Copernicus' concept marked a scientific revolution. It has been equated with the initiation of "the scientific revolution."
Copernicus was born in 1473 in Toruń (Thorn). On account of geographical and historical uncertainties, it remains a matter of dispute whether Copernicus was German or Polish. A modern view is that he was an ethnically German Pole.
When Copernicus was ten years old, his father, a wealthy businessman and copper trader, died. Little is known of his mother, Barbara Watzenrode, who appears to have predeceased her husband. Copernicus' maternal uncle, Lucas Watzenrode, a church canon (an administrative position below that of bishop) and later prince-bishop governor of Warmia, reared him and his three siblings after their father's death. His uncle's position helped Copernicus in the pursuit of a career within the church, enabling him to devote time to his astronomy studies. Copernicus had a brother and two sisters:
- Andreas, who became a canon at Frombork (Frauenburg)
- Barbara, who became a Benedictine nun
- Katharina, who married businessman and city councilor Barthel Gertner
In 1491, Copernicus enrolled at the Jagiellonian University in Kraków, where he probably encountered astronomy for the first time, taught by his teacher Albert Brudzewski. This science soon fascinated him, as shown by his books, which were later carried off as war booty by the Swedes during "The Deluge" and taken to the Uppsala University Library. After four years at Kraków, followed by a brief stay back home at Toruń, he went to Italy, where he studied law and medicine at the universities of Bologna and Padua. His bishop-uncle financed his education and wished for him to become a bishop as well. However, while studying canon and civil law at Ferrara, Copernicus met the famous astronomer Domenico Maria Novara da Ferrara. Copernicus attended his lectures and became his disciple and assistant. The first observations that Copernicus made in 1497, together with Novara, are recorded in his epochal book, De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres).
In 1497, Copernicus' uncle was ordained Bishop of Warmia, and Copernicus was named a canon at Frombork (Frauenburg) Cathedral, but he waited in Italy for the great Jubilee of 1500. Copernicus went to Rome, where he observed a lunar eclipse and gave some lectures in astronomy or mathematics.
It is uncertain whether Copernicus was ordained a priest; he may only have taken minor orders, which sufficed for assuming a chapter canonry. It appears that he visited Frombork in 1501. As soon as he arrived, he requested and obtained permission to return to Italy to complete his studies at Padua (with Guarico and Fracastoro) and at Ferrara (with Giovanni Bianchini), where in 1503 he received his doctorate in canon law. It has been supposed that it was in Padua that he encountered passages from Cicero and Plato about opinions of the ancients on the movement of the Earth, and formed the first intuition of his own future theory. His collection of observations and ideas pertinent to his theory began in 1504.
Having left Italy at the end of his studies, he came to live and work at Frombork. Some time before his return to Warmia, he received a position at the Collegiate Church of the Holy Cross in Wrocław (Breslau), Silesia, which he resigned from a few years before his death. He made astronomical observations and calculations through the rest of his life, but always in his spare time and never as a profession.
Copernicus worked for years with the Prussian Diet on monetary reform and published some studies about the value of money. As governor of Warmia, he administered taxes and dealt out justice. It was at this time (beginning in 1519, the year of Thomas Gresham's birth) that Copernicus came up with one of the earliest iterations of the theory now known as Gresham's Law. During these years, he also traveled extensively on government business and as a diplomat on behalf of the prince-bishop of Warmia.
In 1514, he made his Commentariolus—a short, handwritten text describing his ideas about the heliocentric hypothesis—available to friends. Thereafter, he continued gathering evidence for a more detailed work. During the war between the Teutonic Order and the Kingdom of Poland (1519–1524), Copernicus, at the head of royal troops, successfully defended Allenstein (Olsztyn) when it was besieged by the forces of Albert of Brandenburg.
In 1533, Albert Widmanstadt delivered a series of lectures in Rome outlining Copernicus' theory. These lectures were followed with interest by several Catholic cardinals and by Pope Clement VII. By 1536, Copernicus' work was already in definitive form, and some rumors about his theory had reached educated people all over Europe. From many parts of the continent, Copernicus received invitations to publish. In a letter dated Rome, November 1, 1536, Cardinal Nicola Schönberg of Capua wrote, asking Copernicus to communicate his ideas more widely and requesting a copy for himself: "Therefore, learned man, without wishing to be inopportune, I beg you most emphatically to communicate your discovery to the learned world, and to send me as soon as possible your theories about the Universe, together with the tables and whatever else you have pertaining to the subject." Some have suggested that this note may have made Copernicus leery of publication, while others have suggested that the letter indicates that the Church wanted to ensure that his ideas were published.
Despite the insistence of many, Copernicus kept delaying the final publication of his book, probably out of fear of criticism for his revolutionary work by the establishment. He was still completing his masterpiece (even if he was not convinced that he wanted to publish it) when in 1539, Georg Joachim Rheticus, a great mathematician from Wittenberg, arrived in Frombork. Philipp Melanchthon had arranged for Rheticus to visit several astronomers and study with them. Rheticus became a disciple of Copernicus' and stayed with him for two years, during which he wrote a book, Narratio prima, outlining the essence of the theory.
In 1542, in Copernicus' name, Rheticus published a treatise on trigonometry (later included in the second book of De revolutionibus). Under strong pressure from Rheticus, and having seen that the first general reception of his work had been favorable, Copernicus finally agreed to give the book to his close friend Tiedemann Giese, bishop of Chełmno (Kulm), to be delivered to Rheticus for printing in Nuremberg (Nürnberg).
Legend says that the first printed copy of De revolutionibus was placed in Copernicus' hands on the day he died, so that he could take farewell of his opus vitae. He supposedly woke from a stroke-induced coma, looked at his book, and died peacefully.
Copernicus was buried in Frombork Cathedral. In August 2005, a team of archaeologists led by Jerzy Gąssowski, head of an institute of archeology and anthropology in Pułtusk, discovered what they believe to be Copernicus' grave and remains, after scanning beneath the cathedral floor. The find came after a year of searching, and the discovery was announced only after further research, on November 3, 2005. Gąssowski said he was "almost 100 percent sure it is Copernicus." Forensics experts used the skull to reconstruct a face that closely resembled the features—including a broken nose and a scar above the left eye—on a self-portrait. The experts also determined that the skull had belonged to a man who had died at about age 70—Copernicus' age at the time of his death. The grave was in poor condition, and not all the remains were found. The archaeologists hoped to find relatives of Copernicus in order to attempt DNA identification.
The Copernican heliocentric system
Much has been written about earlier heliocentric theories. Philolaus (fourth century B.C.E.) was one of the first to hypothesize movement of the Earth, probably inspired by Pythagoras' theories about a spherical globe.
In the third century B.C.E., Aristarchus of Samos developed some theories of Heraclides Ponticus to propose what was, so far as is known, the first serious model of a heliocentric solar system. His work on a heliocentric system has not survived, so one may only speculate about what led him to his conclusions. It is notable that, according to Plutarch, a contemporary of Aristarchus accused him of impiety for "putting the Earth in motion."
Aryabhata of India noted that the Earth is round, writing "Bhumukha sarvato golah" (Earth is round). It has also been claimed that Bhaskara I anticipated some of Copernicus' discoveries by about one thousand years. The work of the fourteenth-century Arab astronomer Ibn al-Shatir contains findings similar to those of Copernicus, and it has been suggested that Copernicus might have been influenced by them.
Copernicus cited Aristarchus and Philolaus in an early manuscript of his book that survives, stating: "Philolaus believed in the mobility of the Earth, and some even say that Aristarchus of Samos was of that opinion." For reasons unknown, he struck this passage before publication of his book.
Inspiration came to Copernicus not from observation of the planets but from reading two authors. In Cicero, he found an account of the theory of Hicetas. Plutarch provided an account of the Pythagoreans Heraclides Ponticus, Philolaus, and Ecphantes. These authors had proposed a moving Earth that revolved around a central Sun. In addition, it has been claimed that in developing the mathematics of heliocentrism, Copernicus drew on not just the Greek but also the Arabic tradition of mathematics, especially the work of Nasir al-Din al-Tusi and Mu’ayyad al-Din al-‘Urdi.
The Ptolemaic system
As Copernicus was developing his heliocentric model, the prevailing theory in Europe was that created by Ptolemy in his Almagest, dating from about 150 C.E. The Ptolemaic system drew on many previous theories that viewed Earth as a stationary center of the universe. Stars were embedded in a large outer sphere, which rotated relatively rapidly, while the planets dwelt in smaller spheres between—a separate one for each planet. To account for certain anomalies, such as the apparent retrograde motion of many planets, a system of epicycles was used, in which a planet was thought to revolve in a small circle while also revolving around the Earth. Some planets were assigned "major" epicycles (for which retrograde motion could be observed) and "minor" epicycles (that simply warped the overall rotation).
Ptolemy's unique contribution was the idea of an equant. This complicated addition specified that, when measuring the Sun's rotation, one sometimes used the central axis of the universe, but sometimes an axis set at a different location. This had the overall effect of making certain orbits "wobble," a fact that greatly bothered Copernicus (because such wobbling rendered implausible the idea of material "spheres" in which the planets rotated). In the end, astronomers still could not get observation and theory to match up exactly. In Copernicus' day, the most up-to-date version of the Ptolemaic system was that of Peurbach (1423-1461) and Regiomontanus (1436-1476).
Copernicus' major theory was published in De revolutionibus orbium coelestium in 1543, the year of his death. The book marks the beginning of the shift away from a geocentric view of the universe.
Copernicus held that the Earth is another planet revolving around the fixed Sun once a year, and turning on its axis once a day. He arrived at the correct order of the known planets and explained the precession of the equinoxes correctly by a slow change in the position of the Earth's rotational axis. He also gave a clear account of the cause of the seasons: that the Earth's axis is not perpendicular to the plane of its orbit. He attributed another motion to the Earth, by which the axis is kept pointed throughout the year at the same place in the heavens; since Galileo Galilei, it has been recognized that no such motion is needed, because for the Earth's axis not to keep pointing to the same place would itself have been a motion.
Copernicus also replaced Ptolemy's equant circles with more epicycles. This is the main source of the statement that Copernicus' system had even more epicycles than Ptolemy's. With this change, Copernicus' system showed only uniform circular motions, correcting what he saw as the chief inelegance in Ptolemy's system. Although Copernicus put the Sun at the center of the celestial spheres, he placed it near but not at the exact center of the universe.
The Copernican system did not have any greater experimental support than Ptolemy's model. Copernicus was aware of this and could not present any observational "proof" in his manuscript, relying instead on arguments about what would be a more complete and elegant system. From publication until about 1700, few astronomers were fully convinced of the Copernican system, though the book was relatively widely circulated (around five hundred copies are known to still exist, which is a large number by the scientific standards of the time). Many astronomers, however, accepted some aspects of the theory at the expense of others, and his model did have a large influence on later scientists such as Galileo and Johannes Kepler, who adopted, championed, and (especially in Kepler's case) sought to improve it. Galileo's viewing of the phases of Venus produced the first observational evidence for Copernicus' theory.
The Copernican system can be summarized in seven propositions, as Copernicus himself collected them in a Compendium of De revolutionibus that was found and published in 1878. These propositions are:
- There is no one center in the universe.
- The Earth's center is not the center of the universe.
- The center of the universe is near the Sun.
- The distance from the Earth to the Sun is imperceptible compared with the distance to the stars.
- The rotation of the Earth accounts for the apparent daily rotation of the stars.
- The apparent annual cycle of movements of the Sun is caused by the Earth revolving around the Sun.
- The apparent retrograde motion of the planets is caused by the motion of the Earth, from which one observes (see the sketch following this list).
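To make the last proposition concrete, here is a minimal numerical sketch showing how the observer's own moving planet produces apparent retrograde motion of an outer planet. The circular, coplanar orbits and the rounded radii and periods for Earth and Mars are simplifying assumptions of this illustration, not a reconstruction of Copernicus' actual geometry.

```python
# Minimal sketch: apparent retrograde motion of an outer planet as seen
# from the moving Earth, assuming simple circular, coplanar orbits.
import math

def heliocentric_position(radius_au, period_years, t_years):
    """(x, y) of a body on a circular orbit at time t."""
    angle = 2.0 * math.pi * t_years / period_years
    return radius_au * math.cos(angle), radius_au * math.sin(angle)

previous = None
for step in range(40):
    t = step * 0.02  # sample about weekly over most of a year
    ex, ey = heliocentric_position(1.00, 1.00, t)  # Earth (rounded values)
    mx, my = heliocentric_position(1.52, 1.88, t)  # Mars (rounded values)
    # Apparent geocentric longitude of Mars, in degrees.
    longitude = math.degrees(math.atan2(my - ey, mx - ex)) % 360.0
    if previous is not None:
        # Wraparound-safe change in apparent longitude.
        delta = (longitude - previous + 180.0) % 360.0 - 180.0
        if delta < 0:
            print(f"t = {t:.2f} yr: Mars appears to move retrograde")
    previous = longitude
```

Near opposition, where this sketch starts, the faster-moving Earth overtakes Mars and the apparent longitude runs backward; away from opposition the motion is prograde again, with no epicycles required.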
Whether these propositions were "revolutionary" or "conservative" was a topic of debate in the late twentieth century. Thomas Kuhn argued that Copernicus merely transferred "to the Sun many astronomical functions previously attributed to the Earth." Other historians have since argued that Kuhn underestimated what was "revolutionary" about Copernicus' work, and emphasized the difficulty Copernicus would have had in putting forward a new astronomical theory relying on geometrical simplicity alone, given that he had no experimental evidence.
De revolutionibus orbium coelestium
Copernicus' major work, De revolutionibus, was the result of decades of labor. When published, it contained a preface by Copernicus' friend, Andreas Osiander, a Lutheran theologian. Osiander stated that Copernicus wrote his heliocentric account of the Earth's movement as a mere mathematical hypothesis, not as an account that contained truth or even probability. This was apparently written to soften any religious backlash against the book.
De revolutionibus began with a letter from Copernicus' (by then deceased) friend Nicola Schönberg, the Archbishop of Capua, urging him to publish his theory. Then, in a lengthy introduction, Copernicus dedicated the book to Pope Paul III, explaining his ostensible motive in writing the book as relating to the inability of earlier astronomers to agree on an adequate theory of the planets, and noting that if his system increased the accuracy of astronomical predictions, it would allow the Church to develop a more accurate calendar. At that time, a reform of the Julian Calendar was considered necessary and was one of the major reasons for Church funding of astronomy.
The work itself was then divided into six books:
- General vision of the heliocentric theory, and a summarized exposition of his idea of the World
- Mainly theoretical, presents the principles of spherical astronomy and a list of stars (as a basis for the arguments developed in subsequent books)
- Mainly dedicated to the apparent motions of the Sun and to related phenomena
- Description of the Moon and its orbital motions
- Concrete exposition of the new system, covering the motions of the five planets in longitude
- Concrete exposition of the new system, concluded with the motions of the planets in latitude
Impact of the Copernican Revolution
Copernicus' formulation of heliocentric cosmology, the view that the Sun is at the center of the universe, stands in contrast to Ptolemy's geocentric cosmology, in which the Earth was placed at the center. The heliocentric model is almost universally considered to be one of the most important scientific hypotheses in history, as well as being of extraordinary importance in the history of human knowledge altogether. It came to mark the starting point of modern astronomy and modern science; it is often known as the Copernican revolution and is considered the start of "the scientific revolution."
It is hard to [over]estimate the importance of this work: it challenged the age-long views of the way the universe worked and the preponderance of the Earth and, by extension, of human beings. ... All the reassurances of the cosmology of the Middle Ages were gone, and a new view of the world, less secure and comfortable, came into being. Despite these 'problems' and the many critics the model attracted, the system was soon accepted by the best minds of the time such as Galileo.
The construction and/or acceptance of Ptolemy's geocentric cosmology had been based on a number of assumptions and arguments that were philosophical and theological in nature. First was Aristotle's notion that things are naturally fixed and unmoving unless something moves them. A second assumption was that human beings, as children of God—an assertion made by both Jewish and Christian doctrine—are the highest or most important beings in the cosmos (except for those who held angels to be higher than humans), and that the Earth, as the dwelling place of humans, must therefore be at the center of the universe. A third assumption was that philosophy, logic, and theology are paramount in importance, superior to natural science and its methods. A fourth assumption had to do with falling bodies: the Ptolemaic view had held that if the Earth were not the center of the cosmos, then things would not fall to Earth when thrown into the sky and that the Earth itself would fall toward whatever was the center. A fifth was that, if the Earth moved, then things thrown into the air above the Earth would be "left behind" and not fall to Earth as the Earth moved. A sixth was that, if the Earth moved, this would contradict scripture, which says that Joshua commanded the Sun and Moon (not the Earth) to be still and cease moving across the sky (Josh 10:12-13).
Today we know that each of those assumptions was incorrect. We now know that the principle of inertia means that moving things will continue to move unless some force stops them. Second, we have come to realize that the Earth's position needs to be determined by scientific methods, not by religious doctrine or philosophical arguments. At the same time, it needs to be understood that the place of humans in the universe as the children of God does not depend on the physical location of the Earth, or the size or prominence of the Sun, or the prominence of the Milky Way—the galaxy in which Earth is situated—in the cosmos. Falling bodies move toward whatever attracts them gravitationally; moreover, things thrown up into the air from Earth are already part of Earth's inertial system, so they move as the Earth moves and fall back to Earth, having moved with it during their flight. The claim in Joshua may be interpreted as a figure of speech rather than as a literal event.
The notion of a "Copernican Revolution" became important in philosophy as well as science. For one thing, philosophy of science had to recognize and account for the fact that science does not grow in a smooth and continuous pattern. Instead, there are occasional revolutions in which one scientific pattern or paradigm is overthrown by another. Later, in the twentieth century, American historian and philosopher of science Thomas Kuhn made scientific revolutions and the notion of a "paradigm" and "paradigm shift" central points in his monumental and highly influential work, The Structure of Scientific Revolutions. German philosopher Immanuel Kant captured the transcendent rationalism of the Copernican revolution, postulating that it was human rationality that was the true interpreter of observed phenomena. In addition, he referred to his own work as being a "Copernican revolution" in philosophy. More recent philosophers, too, have found continuing validity and philosophical meaning in Copernicanism.
The Copernican heliocentric system was rejected for theological and philosophical reasons by the Catholic and Lutheran churches of his day. This may not have been the first time in human history when a clash between religion and science occurred, but it was the most significant one up to that time. That clash—often referred to as a warfare between science and religion—continues in some form, with sometimes waxing and sometimes waning intensity, to this day. An important result of the Copernican revolution was to encourage scientists and scholars to take a more skeptical attitude toward established dogma.
Based on the work of Copernicus and others, some have argued that "science could explain everything attributed to God," and that there was no need to believe in an entity (God) who grants a soul, power, and life to human beings. Others, including religious scientists, have taken the view that the laws and principles of nature, which scientists strive to discover, originated from the Creator, who works through those principles. Copernicus himself continued to believe in the existence of God.
Copernicanism was also used to support the concept of immanence—the view that a divine force or divine being pervades all things that exist. This view has since been developed further in modern philosophy. Immanentism can also lead to subjectivism, to the theory that perception creates reality, that underlying reality is not independent of perception. Thus some argue that Copernicanism demolished the foundations of medieval science and metaphysics.
A corollary of Copernicanism is that scientific law need not be directly congruent with appearance or perception. This contrasts with Aristotle's system, which placed much more importance on the derivation of knowledge through the senses.
- "Of all discoveries and opinions, none may have exerted a greater effect on the human spirit than the doctrine of Copernicus. The world had scarcely become known as round and complete in itself when it was asked to waive the tremendous privilege of being the center of the universe. Never, perhaps, was a greater demand made on mankind—for, by this admission, so many things vanished in mist and smoke! What became of our Eden, our world of innocence, piety and poetry; the testimony of the senses; the conviction of a poetic—religious faith? No wonder his contemporaries did not wish to let all this go and offered every possible resistance to a doctrine which in its converts authorized and demanded a freedom of view and greatness of thought so far unknown, indeed not even dreamed of."
- "For I am not so enamored of my own opinions that I disregard what others may think of them. I am aware that a philosopher's ideas are not subject to the judgment of ordinary persons, because it is his endeavor to seek the truth in all things, to the extent permitted to human reason by God. Yet I hold that completely erroneous views should be shunned. Those who know that the consensus of many centuries has sanctioned the conception that the Earth remains at rest in the middle of the heaven as its center would, I reflected, regard it as an insane pronouncement if I made the opposite assertion that the Earth moves.
- "For when a ship is floating calmly along, the sailors see its motion mirrored in everything outside, while on the other hand they suppose that they are stationary, together with everything on board. In the same way, the motion of the Earth can unquestionably produce the impression that the entire universe is rotating.
- "Therefore alongside the ancient hypotheses, which are no more probable, let us permit these new hypotheses also to become known, especially since they are admirable as well as simple and bring with them a huge treasure of very skillful observations. So far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. Farewell."
Declaration of the Polish Senate issued on June 12, 2003:
- "At the time of five hundred thirty anniversary of birth and four hundred sixty date of death of Mikołaj Kopernik, the Senate of Republic of Poland expresses its highest respect and praise for this exceptional Pole, one of the greatest scientists in the history of the world. Mikołaj Kopernik, world famous astronomer, author of the breakthrough work "O obrotach sfer niebieskich," is the the one who "Held the Sun and moved Earth." He distinguished himself for the country as exceptional mathematician, economist, lawyer, doctor, and priest, as well as defender of the Olsztyn Castle during Polish-Teutonic war. May memory about his achievements last and be a source of inspiration for future generations."
- Works of Copernicus
- The complete works of Copernicus are collected in On the Revolutions, ed. and trans. by Edward Rosen (1978, reissued 1992), and Minor Works, ed. and trans. by Edward Rosen and Erna Hilfstein (1985, reissued 1992). Three Copernican Treatises, trans. by Edward Rosen (1971) contains, in addition, a biography and a bibliography of works on Copernicus from 1939-70.
- Biographies of Copernicus
- Adamczewski, Jan, and Edward J. Piszek. Nicolaus Copernicus and His Epoch. Scribner, 1974. ISBN 978-0684138398
- Rosen, Edward. Copernicus and the Scientific Revolution. Malabar, FL: Krieger, 1984. ISBN 978-0898745733
- Works About Copernicus and His Work
- Armitage, Angus. The World of Copernicus. New York, NY: Mentor Books, 1951. ISBN 0846409798
- Blumenberg, Hans. The Genesis of the Copernican World. The MIT Press, 1989. ISBN 978-0262521444
- Dreyer, J. L. E. A History of Astronomy from Thales to Kepler. Dover Publications, 2011. ISBN 978-0486600796
- Gingerich, Owen. The Eye of Heaven: Ptolemy, Copernicus, Kepler. Springer, 1997. ISBN 978-0883188637
- Gingerich, Owen. The Book Nobody Read. Penguin Books, 2004. ISBN 0143034766
- Goodman, David C., and Colin A. Russell (eds.). The Rise of Scientific Europe, 1500-1800. Dunton Green, Sevenoaks, Kent: Hodder & Stoughton: The Open University, 1991. ISBN 034055861X
- Hoyle, Fred. Nicolaus Copernicus: An Essay on His Life and Work. Harper & Row, 1973. ISBN 978-0060119713
- Koyré, Alexandre. The Astronomical Revolution. Dover Publications, 1992. ISBN 0486270955
- Kuhn, Thomas. The Copernican Revolution: Planetary Astronomy in the Development of Western Thought. Cambridge, MA: Harvard University Press, 1985. ISBN 0674171004
- Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago, IL: The University of Chicago Press, 1996. ISBN 978-0226458083
- Lindberg, David C. (ed.). Science in the Middle Ages. Chicago, IL: University of Chicago Press, 1978. ISBN 978-0226482330
- Nebelsick, Harold P. Circles of God: Theology and Science from the Greeks to Copernicus. Scottish Academic Press, 1985. ISBN 978-0707304489
- Westman, Robert S. (ed.). The Copernican Achievement. Univ of California Press, 1976. ISBN 978-0520028777
|
Inequality Algebra
To solve an inequality with an inequality calculator, type in the inequality as written, for example x+7>9.
Equations and inequalities are both mathematical sentences formed by relating two expressions to each other. An inequality is most often used to compare two numbers on the number line by their size.
These lessons cover the properties of algebraic inequalities, the meaning of a continued inequality, linear inequalities in one variable, and absolute value inequality expressions.
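As a minimal sketch of the calculator example above, the inequality x+7>9 can also be solved symbolically. The use of the SymPy library is an assumption of this illustration; it is not named in the original lesson.

```python
# Minimal sketch: solving the example inequality x + 7 > 9 symbolically.
# SymPy is an assumption of this illustration, not a tool named in the text.
from sympy import symbols, solve

x = symbols('x', real=True)
solution = solve(x + 7 > 9, x)
print(solution)  # a relation equivalent to x > 2
```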
|
Moles are small burrowing mammals.
There are about 42 species of moles.
Moles are found on every continent except Antarctica and South America.
The term mole is especially and most properly used for “true moles” of the Talpidae family in the order Eulipotyphla, which are found in most parts of North America, Asia, and Europe, although it may also refer to unrelated mammals of Australia and southern Africa that have convergently evolved the “mole” body plan.
Moles typically live underground, burrowing holes, but some species are semi-aquatic.
Although all moles dig tunnels, their habitat preferences vary. Some moles, such as the star-nosed mole, like moist soil and live in bogs and marshes, while others, including the eastern mole, live in the drier soil found in wooded areas, meadows and fields.
They are found at elevations extending from sea level to 4,500 meters (14,800 feet).
The lifespan of a mole is 3 to 6 years in the wild.
Moles have cylindrical bodies, velvety fur, very small, inconspicuous ears and eyes, reduced hindlimbs, and short, powerful forelimbs with large paws adapted for digging.
Most moles species grow from 11.5 to 16 centimeters (4.5 to 6.25 inches) long from snout to rump. Their tails add 2.5 to 4 centimeters (1 to 1.6 inches) of length.
The smallest mole is the American shrew mole (Neurotrichus gibbsii), which weighs only 7 to 11 grams (0.25 to 0.39 ounce) and has a body 3 to 4 cm (less than 2 inches) long and a slightly shorter tail.
The largest mole is the Russian desman (Desmana moschata) of central Eurasia, which weighs 100 to 220 grams (3.53 to 7.76 ounces) and has a body 18 to 22 cm (7 to 9 inches) long and a tail nearly as long.
The powerful forelimbs of most species are rotated outward from the body, like oars protruding from a boat. The large circular hands are fringed with sensory hairs and have broad spadelike claws for digging; they also function as paddles for swimming.
Moles spend their lives underground, digging tunnels to reach their prey, which includes earthworms, snails, slugs, grubs and any other insects they can find. In such a dark, dirt-filled environment, moles don't need powerful eyesight the way some other animals do; instead, they depend on other adaptations for their health and survival.
The muzzle is tipped with thousands of microscopic tactile structures (Eimer’s organs). Using these structures and sensory hairs along the muzzle and elsewhere on the body, moles detect and differentiate details of their environment and their prey.
Few mammals could survive extended periods in underground tunnels without a regular source of oxygen. Moles, however, show no adverse effects when exposed to high levels of carbon dioxide, or conversely, low levels of oxygen, for long periods of time. Specialized blood cells affect the way hemoglobin binds to carbon dioxide, allowing them to breathe the same air they just breathed out without any ill effects.
Moles are generally active all year and by day or night in cycles of activity and rest. Typical moles will only infrequently go to the surface to gather nest materials and seek water during drought.
They are solitary creatures, coming together only to mate. Territories may overlap, but moles avoid each other and males may fight fiercely if they meet.
There is one litter per year, usually of three to five young, born in a nest of dry vegetation. The young leave the nest 30–45 days after birth to find territories of their own.
Male moles are called “boars”, females are called “sows”. A group of moles is called a “labour”.
One mole can easily eat 70 to 100 percent of its weight in food each day.
The star-nosed mole can detect, catch and eat food faster than the human eye can follow.
The expression “don’t make a mountain out of a mole hill” – exaggerating problems – was first recorded in Tudor times.
|
"The Talented Tenth" was an article written in 1903 by W.E.B. Du Bois. It was about the efforts of the American Baptist Home Mission Society to start black colleges that would train African American teachers. W.E.B. Du Bois fought for civil rights for black people in the United States. During the 1920s and 1930s, he was the person most responsible for the changes in conditions for black people in American society.
He was a man who demanded respect for African Americans during the Civil Rights movement, and for all working people throughout his career as a labor organizer. Randolph demanded freedom and human rights for all oppressed people. This paper will focus solely on his labor movement and how it paved the way for African Americans today. Asa Philip Randolph, son of a Southern minister, was born on April 15, 1889, in Crescent City, Florida. At a very young age Randolph enjoyed reading; he sensed that education was of vital importance to him.
The Niagara Movement was founded by W.E.B. Du Bois. His main goal in creating the group was to educate African Americans and to achieve total equality between blacks and whites. Du Bois gathered people from all corners of the country, with the exception of the West. The members, men and women included, would rally together and come up with strategies for the equality of blacks. They would lay out plans to have equal rights with whites.
In 1895 he became the first African American to earn a Ph.D. from Harvard University. He placed his stress on culture and liberty, urging higher education and full political and civil rights for all. Du Bois wanted Black Africa independent from colonial rule and united within. He demanded for all black citizens: (1) the right to vote, (2) civic equality, and (3) the education of Negro youth according to ability.
Knowing the background information of the speaker(s) and audience(s) will help us to understand how the speaker tailors a message in order to effectively reach their audience(s). One influential leader among African Americans was Booker T. Washington. In his autobiography, Up From Slavery, he describes his life as a slave; his education after freedom from slavery; and discusses people who helped him succeed in life. Mr. Washington was a race leader who saw himself “lifting as he rose.” As he succeeded, he wanted to help others succeed. The highlight of Mr. Washington’s autobiography was in his speech entitled “The Atlanta Exposition Address.” In this speech he was not only representing himself, but he represented the
Dr. Martin Luther King Jr. was born on January 15, 1929, in Atlanta, GA, during a period when racism was extremely prominent. One can only imagine the experiences Black Americans endured during those times unless one lived through them personally. Dr. Martin Luther King Jr. used his strength and knowledge to help society overcome these tumultuous times. Dr. King fought for equal justice for people of all races and genders, urging them to love one another and to renounce violence. He served the community as a clergyman, activist, and leader of the Civil Rights Movement (Biography, 2012).
Assess how effective Malcolm X's strategies were in the Civil Rights Movement in America in the 1950s and 1960s. For a person to be effective, they must repeatedly achieve what they set out to do, and by doing this they gain respect and power. Malcolm X, an activist, an outspoken public voice for African American civil rights, and a prominent leader of the Nation of Islam, challenged the mainstream Civil Rights Movement and the nonviolent pursuit of integration championed by Martin Luther King Jr., and promoted a Black Nationalism that encompassed the belief in black separatism. Malcolm X urged his followers to defend themselves against white aggression, thus departing from the nonviolent ways of other leaders. Malcolm X was one of America's
Using sources A-E and your own knowledge: how did civil rights for African Americans develop in the 1950s? The 1950s were a time of change for black people in America. I am going to write about these themes: black Americans working together, the emergence of MLK and nonviolence, the use of the media, and black Americans standing up for what they believed was right. The first theme is black Americans working together. From my own knowledge, I know the civil rights movement was one that was tackled with unity by the people who were fighting for all people to have civil rights. Source A shows us that black people would work together, as the man is not getting on the bus and would rather walk, and from my own knowledge I know
Why did the visions of Martin Luther King Jr. feature in Barack Obama's 2008 election campaign and 2009 inauguration speech? The role and significance of Martin Luther King Jr. in America's history: Martin Luther King Jr. was a leader; he gave a voice to the African American citizens who could not express their own needs and opinions. His role was to lead the civil rights movement and to speak for justice, peace, and equality in the lives of every American man, woman, and child. King struggled with the laws and politics of his time and worked to eradicate segregation and discrimination from the American way of life. Martin Luther King Jr.'s writings, teachings, and speeches are timeless; they left people rethinking their attitudes towards African Americans and racism.
The practices of the late Malcolm X were deeply rooted in the theoretical foundations of the Black Panther Party. Malcolm represented both the militant revolutionary, with the dignity and self-respect to stand up and fight to win equality for all oppressed minorities, and the outstanding role model, someone who sought to bring about positive social services; something the Black Panthers would take to new heights. The Panthers followed Malcolm's belief in international working-class unity across the spectrum of color and gender, and thus united with various minority and white revolutionary groups. From the tenets of Maoism they took the role of their Party as the vanguard of the revolution and worked to establish a united front, while from Marxism they addressed the capitalist economic system, embraced the theory of dialectical materialism, and represented the need for all workers to forcefully take over the means of
|
In psychology, Trait theory is a major approach to the study of human personality. Trait theorists are primarily interested in the measurement of traits, which can be defined as habitual patterns of behavior, thought, and emotion. According to this perspective, traits are relatively stable over time, differ among individuals (e.g. some people are outgoing whereas others are shy), and influence behavior.
Gordon Allport was an early pioneer in the study of traits, which he sometimes referred to as dispositions. In his approach, central traits are basic to an individual's personality, whereas secondary traits are more peripheral. Common traits are those recognized within a culture and may vary between cultures. Cardinal traits are those by which an individual may be strongly recognized. Since Allport's time, trait theorists have focused more on group statistics than on single individuals. Allport called these two emphases "nomothetic" and "idiographic," respectively.
There is a nearly unlimited number of potential traits that could be used to describe personality. The statistical technique of factor analysis, however, has demonstrated that particular clusters of traits reliably correlate together. Hans Eysenck has suggested that personality is reducible to three major traits. Other researchers argue that more factors are needed to adequately describe human personality. Many psychologists currently believe that five factors are sufficient.
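To make the factor-analysis point concrete, here is a minimal sketch in which six correlated questionnaire-style scores are reduced to two latent factors. The scikit-learn library, the synthetic data, and the two-factor structure are assumptions of this illustration, not results from the trait literature.

```python
# Minimal sketch: factor analysis recovering a small number of latent
# factors from correlated trait-item scores (synthetic data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people = 500
latent = rng.normal(size=(n_people, 2))  # two latent traits
# Six observed items, each loading mainly on one latent trait, plus noise.
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
observed = latent @ loadings.T + 0.3 * rng.normal(size=(n_people, 6))

fa = FactorAnalysis(n_components=2).fit(observed)
print(np.round(fa.components_, 2))  # the six items cluster onto two factors
```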
Virtually all trait models, and even ancient Greek philosophy, include extraversion vs. introversion as a central dimension of human personality. Another prominent trait that is found in nearly all models is Neuroticism, or emotional instability.
|
We’ll finish up our discussion of movement with the second type of movement: Migration. Migration is a periodic movement involving a round-trip!
There are two types of migration:
- Altitudinal Migration: animals migrate relatively short distances up and down mountains, e.g., big game out west (elk, mule deer, moose) moving between their "summer" and "winter" ranges
- Latitudinal Migration: animals migrate a long distance; north-south movements covering large amounts of land area e.g., waterfowl, sea turtles, butterflies, fish
Among mammals, only four groups – bats, cetaceans, pinnipeds, and large hooved herbivores – undergo regular latitudinal migrations.
There are both benefits and problems with migrations. Two benefits of long-distance migration are that a migrating animal may:
- Exploit food resources on a seasonal basis – thereby allowing the food resources of one area to recover while the animal migrates to another area, and
- Go to an area where reproduction and survival of young may be enhanced – taking advantage of surplus food or a lack of predators.
Problems associated with long-distance migrations are:
- It requires a lot of energy,
- It exposes migrants to predators, and
- It creates a crucial dependence on special habitat areas, e.g., migrating waterfowl and wetlands
Now that you know all the types of movement, what type do your animals participate in? Remember, migration is not restricted to large animals like elk and deer; small animals migrate too – just on a smaller scale. Do the wildlife you're managing have what they need to make the journey?
|
A common plant structure is the driving force behind a new EU research project, thanks to its potential for inspiring novel materials – from insect repellents to colours.
Compared to beautiful flowers or delicious fruits, a plant’s cuticle – the layered structure that controls the movement of waxes to the surface of leaves and petals – sounds boring.
But in fact, the cuticle is crucial to a plant’s survival. It regulates everything from gas exchange to water permeability, and even controls which insects are able to land on the plant – to pollinate it, for example – and which are repelled.
All these functions depend on the cuticle’s unique layered structure, which has intrigued scientists for years.
Nico Bruns, a professor of macromolecular chemistry at the Adolphe Merkle Institute (AMI) in Fribourg, is coordinating a new four-year research initiative funded by the European Commission, known as PlaMatSu (Plant-Inspired Materials and Surfaces).
The project brings together both students and experienced researchers from Switzerland, Germany and the UK in fields ranging from biology and chemistry to physics. They're all interested in understanding how plant cuticles grow and develop, in the hope of creating useful new materials and technologies.
"I think the most famous examples of plant-cuticle-inspired materials are those based on the 'lotus effect,'" Bruns tells swissinfo.ch.
“Lotus plant leaves are very water repellent – if you put a drop of water on one, it simply rolls off. Researchers all over the world have developed materials, like wall paints or glass, that mimic this self-cleaning mechanism. So, we decided to see what other interesting properties in the plant cuticle could be useful.”
Current PlaMatSu projects include developing self-lubricating synthetic materials that mimic the way waxes move through plant cuticles in nature.
“The technical world is full of moving parts that need lubricants – even a Swiss watch,” Bruns laughs.
Cuticles can also be useful for lending colour to things – but not in the way you might think.
“Most plant flowers are colourful because they have coloured substances called pigments in them. But others are coloured because they have a structured surface, similar to some butterflies or beetles,” Bruns explains. He says that this type of “structural colour” is useful in the field of optics, or for developing colourful synthetic materials.
Yet another application of cuticle-inspired research is species-specific insect repellents, which can prevent certain pests from getting a good grip on crop plants.
“Think of a bee landing on a flower: if the petals were super slippery the bee would slip off, and this would not be very useful for the plant,” Bruns says. “So you can see that plants have evolved surfaces that can interact with insects, while insects have evolved to match the surfaces they want to stick to.”
Based on this concept, physicists in the PlaMatSu group at AMI are developing structures that form rough areas in certain patterns, with which they hope to be able to control insect adhesion – or stickiness. A potential application of this research could be a spray which, when applied to a tree trunk, would physically prevent certain insects from crawling up it.
Bruns says that in addition to novel materials and technologies, he hopes the PlaMatSu project will motivate its nine PhD students to further pursue innovations inspired by nature and biology…as well as something he calls “retro-bio-inspired research”.
“As a scientist, when you have an idea, you often need to look for analogies in nature – systems that people are familiar with – to be able to explain that idea to a general audience outside of your own scientific community,” he explains.
“Then all of the sudden, you realise that if you look more deeply into these natural systems, that they can provide you with more ideas for your own research, even though it didn’t start out as a genuinely bio-inspired project.”
What are your favourite examples of innovations inspired by plants? Share them in the comments!
|
Synthetic Aperture Radar (SAR) is an imaging radar that operates at microwave frequencies and can “see” through clouds, smoke and foliage to reveal detailed images of the surface below in all weather conditions. Below is a SAR image superimposed on an optical image with clouds, showing how a SAR image can reveal surface details that cannot be seen in the optical image.
SAR systems usually are carried on airborne or space-based platforms, including manned aircraft, drones, and military and civilian satellites. Doppler shifts from the motion of the radar relative to the ground are used to electronically synthesize a longer antenna, where the synthetic length L of the aperture is L = v × t, with v the relative velocity of the platform and t the time period of observation. Depending on the altitude of the platform, L can be quite long. The time-multiplexed return signals from the radar antenna are electronically recombined to produce the desired images in real-time or post-processed later.
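As a minimal numeric sketch of the relation above: the platform speed, dwell time, frequency, and slant range below are illustrative assumptions, as is the standard far-field cross-range resolution approximation (wavelength times range divided by twice the aperture length), which the article itself does not state.

```python
# Minimal sketch: synthetic aperture length L = v * t, plus a standard
# cross-range resolution estimate. All numeric values are illustrative.
C = 3.0e8  # speed of light, m/s

def synthetic_aperture_length(v_mps, t_s):
    return v_mps * t_s

def cross_range_resolution(freq_hz, slant_range_m, aperture_m):
    wavelength = C / freq_hz
    return wavelength * slant_range_m / (2.0 * aperture_m)

L = synthetic_aperture_length(v_mps=200.0, t_s=10.0)  # aircraft example
print(f"Synthetic aperture length: {L:.0f} m")        # 2000 m
print(f"Cross-range resolution at 50 km, 9.6 GHz: "
      f"{cross_range_resolution(9.6e9, 50e3, L):.2f} m")  # ~0.39 m
```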
Source: Christian Wolff, http://www.radartutorial.eu/20.airborne/pic/sar_principle.print.png
This principle of SAR operation was first identified in 1951 by Carl Wiley and patented in 1954 as “Simultaneous Buildup Doppler.”
There are many SAR applications, so I’ll just highlight a few.
Boeing E-8 JSTARS: The Joint Surveillance Target Attack Radar System is an airborne battle management, command and control, intelligence, surveillance and reconnaissance platform, the prototypes of which were first deployed by the U.S. Air Force during the 1991 Gulf War (Operation Desert Storm). The E-8 platform is a modified Boeing 707 with a 27 foot (8 meter) long, canoe-shaped radome under the forward fuselage that houses a 24 foot (7.3 meters) long, side-looking, multi-mode, phased array antenna that includes a SAR mode of operation. The USAF reports that this radar has a field of view of up to 120-degrees, covering nearly 19,305 square miles (50,000 square kilometers).
Lockheed SR-71: This Mach 3 high-altitude reconnaissance jet carried the Advanced Synthetic Aperture Radar System (ASARS-1) in its nose. ASARS-1 had a claimed 1 inch resolution in spot mode at a range of 25 to 85 nautical miles either side of the flight path. This SAR also could map 20 to 100 nautical mile swaths on either side of the aircraft with lesser resolution.
Northrop RQ-4 Global Hawk: This is a large, multi-purpose, unmanned aerial vehicle (UAV) that can simultaneously carry out electro-optical, infrared, and synthetic aperture radar surveillance as well as high and low band signal intelligence gathering.
Below is a representative RQ-4 2-D SAR image that has been highlighted to show passable and impassable roads after severe hurricane damage in Haiti. This is an example of how SAR data can be used to support emergency management.
NASA Space Shuttle: The Shuttle Radar Topography Mission (SRTM) used the Space-borne Imaging Radar (SIR-C) and X-Band Synthetic Aperture Radar (X-SAR) to map 140 mile (225 kilometer) wide swaths, imaging most of Earth’s land surface between 60 degrees north and 56 degrees south latitude. Radar antennae were mounted in the Space Shuttle’s cargo bay, and at the end of a deployable 60 meter mast that formed a long-baseline interferometer. The interferometric SAR data was used to generate very accurate 3-D surface profile maps of the terrain.
An example of SRTM image quality is shown in the following X-SAR false-color digital elevation map of Mt. Cotopaxi in Ecuador.
You can find more information on SRTM at the following link:
ESA’s Sentinel satellites: Refer to my 4 May 2015 post, “What Satellite Data Tell Us About the Earthquake in Nepal,” for information on how the European Space Agency (ESA) assisted earthquake response by rapidly generating a post-earthquake 3-D ground displacement map of Nepal using SAR data from multiple orbits (i.e., pre- and post-earthquake) of the Sentinel-1A satellite. You can find more information on the ESA Sentinel SAR platform at the following link:
You will find more general information on space-based SAR remote sensing applications, including many high-resolution images, in a 2013 European Space Agency (ESA) presentation, “Synthetic Aperture Radar (SAR): Principles and Applications”, by Alberto Moreira, at the following link:
ISAR technology uses the relative movement of the target rather than the emitter to create the synthetic aperture. The ISAR antenna can be mounted on an airborne platform. Alternatively, ISAR also can be used by one or more ground-based antennae to generate a 2-D or 3-D radar image of an object moving within the field of view.
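Doppler processing underlies both SAR and ISAR. As a minimal sketch, the standard two-way radar Doppler relation f_d = 2v/λ (assumed here; the article does not spell it out) gives the frequency shift produced by a target element moving radially at speed v.

```python
# Minimal sketch: two-way radar Doppler shift f_d = 2 * v_r / wavelength.
# The relation and the numeric values are assumptions of this illustration.
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(radial_velocity_mps, freq_hz):
    wavelength = C / freq_hz
    return 2.0 * radial_velocity_mps / wavelength

# A ship element moving radially at 2 m/s, observed at X-band (9.6 GHz):
print(f"{doppler_shift_hz(2.0, 9.6e9):.0f} Hz")  # about 128 Hz
```

Because different parts of a pitching, rolling hull have different radial velocities, each produces a slightly different Doppler shift, which is what lets ISAR form an image from target motion alone.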
Maritime surveillance: Maritime surveillance aircraft commonly use ISAR systems to detect, image and classify surface ships and other objects in all weather conditions. Because of different radar reflection characteristics of the sea, the hull, superstructure, and masts as the vessel moves on the surface of the sea, vessels usually stand out in ISAR images. There can be enough radar information derived from ship motion, including pitching and rolling, to allow the ISAR operator to manually or automatically determine the type of vessel being observed. The U.S. Navy’s new P-8 Poseidon patrol aircraft carry the AN/APY-10 multi-mode radar system that includes both SAR and ISAR modes of operation.
The principles behind ship classification are described in detail in the 1993 MIT paper, “An Automatic Ship Classification System for ISAR Imagery,” by M. Menon, E. Boudreau and P. Kolodzy, which you can download at the following link:
You can see in the following example ISAR image of a vessel at sea that vessel classification may not be obvious to the casual observer. I can see that an automated vessel classification system is very useful.
Imaging Objects in Space: Another ISAR (also called “delayed Doppler”) application is the use of one or more large radio telescopes to generate radar images of objects in space at very long ranges. The process for accomplishing this was described in a 1960 MIT Lincoln Laboratory paper, “Signal Processing for Radar Astronomy,” by R. Price and P.E. Green.
Currently, there are two powerful ground-based radars in the world capable of investigating solar system objects: the National Aeronautics and Space Administration (NASA) Goldstone Solar System Radar (GSSR) in California and the National Science Foundation (NSF) Arecibo Observatory in Puerto Rico. News releases on China’s new FAST radio telescope have not revealed if it also will be able to operate as a planetary radar (see my 18 February 2016 post).
The 230 foot (70 meter) GSSR has an 8.6 GHz (X-band) radar transmitter powered by two 250 kW klystrons. You can find details on GSSR and the techniques used for imaging space objects in the article, “Goldstone Solar System Radar Observatory: Earth-Based Planetary Mission Support and Unique Science Results,” which you can download at the following link:
The 1,000 foot (305 meter) Arecibo Observatory has a 2.38 GHz (S-band) radar transmitter, originally rated at 420 kW when it was installed in 1974, and upgraded in 1997 to 1 MW along with other significant upgrades to improve radio telescope and planetary radar performance. You will find details on the design and upgrades of Arecibo at the following link:
The following examples demonstrate the capabilities of Arecibo Observatory to image small bodies in the solar system.
- In 1999, this radar imaged the Near-Earth Asteroid 1999 JM8 at a distance of about 5.6 million miles (9 million km) from Earth. The ISAR images of this 1.9-mile (3-km) sized object had a resolution of about 49 feet (15 meters).
- In November 1999, Arecibo Observatory imaged the tumbling Main-Belt Asteroid 216 Kleopatra. The resulting ISAR images, which made the cover of Science magazine, showed a dumbbell-shaped object with an approximate length of 134.8 miles (217 kilometers) and varying diameters up to 58.4 miles (94 kilometers).
More details on the use of Arecibo Observatory to image planets and other bodies in the solar system can be found at the following link:
The NASA / Jet Propulsion Laboratory Asteroid Radar Research website also contains information on the use of radar to map asteroids and includes many examples of asteroid radar images. Access this website at the following link:
In recent years, SAR units have become smaller and more capable as hardware is miniaturized and better integrated. For example, Utah-based Barnard Microsystems offers a miniature SAR for use in lightweight UAVs such as the Boeing ScanEagle. The firm claimed that their two-pound “NanoSAR” radar weighed one-tenth as much as the smallest standard SAR (typically 30 – 200 pounds; 13.6 – 90.7 kg) at the time it was announced in March 2008. Because of power limits dictated by the radar circuit boards and power supply limitations on small UAVs, the NanoSAR has a relatively short range and is intended for tactical use on UAVs flying at a typical ScanEagle operational altitude of about 16,000 feet.
Nanyang Technological University, Singapore (NTU Singapore) recently announced that its scientists had developed a miniaturized SAR on a chip, which will allow SAR systems to be made a hundred times smaller than current ones.
“The single-chip SAR transmitter/receiver is less than 10 sq. mm (0.015 sq. in.) in size, uses less than 200 milliwatts of electrical power and has a resolution of 20 cm (8 in.) or better. When packaged into a 3 X 4 X 5-cm (0.9 X 1.2 X 1.5 in.) module, the system weighs less than 100 grams (3.5 oz.), making it suitable for use in micro-UAVs and small satellites.”
NTU estimates that it will be 3 to 6 years before the chip is ready for commercial use. You can read the 29 February 2016 press release from NTU at the following link:
With such small and hopefully low-cost SAR units that can be integrated with low-cost UAVs, I’m sure we’ll soon see many new and useful radar imaging applications.
|
Reading & Vision
Good vision is important when it comes to being able to read well. There are about 8 vision skills needed for reading, most of which are overlooked even in typical eye chart tests. If you have trouble reading, these are the vision skills an optometrist should cover during your examination.
Visual Acuity
This is the ability to clearly see objects that are located at a distance. Normal visual acuity is rated as 20/20 vision (6/6 in the metric system), a measure that means you can comfortably view objects at a distance of 20 feet, which is about 6 meters.
This is measured in most Canadian schools during a vision screening. If there are any problems with your visual acuity, optometrists recommend that you get a full examination to determine the cause of the problem and possible remedies.
Visual Fixation
With visual fixation, you should have the ability to aim your eyes accurately at certain objects. There are two types of visual fixation: direct and pursuit.
Direct fixation establishes your eyes’ ability to focus on stationary objects or read a fine line of print. Pursuit fixation, on the other hand, focuses on your ability to follow a moving object with your eyes.
Accommodation
This is a vision skill that determines your ability to adjust the focus of your eyes as the distance between you and the object in question changes.
It’s a skill mostly tested in children in class, through their ability to comfortably shift focus between the blackboard and their books.
Binocular Fusion
Your eyes have the ability to be precisely aligned. Binocular fusion tests whether your brain has the capacity to combine the information received separately from each eye and form one uniform, upright image.
Without this vision skill, you can easily experience double vision, known medically as diplopia. To avoid confusion, your brain may subconsciously suppress or limit vision in one eye. The affected eye then develops poor visual acuity (lazy eye) as a result.
Depth Perception
This is a function of binocular fusion, which establishes how your eyes perceive depth, or the relative distances of the objects you are trying to see.
Convergence
Convergence is a vision skill that determines your ability to turn your eyes towards each other when observing a close object.
This is a skill commonly tested out in most Canadian schools, since any close work, such as students reading from their desks, requires convergence. Without it, you may experience difficulties when reading, and double vision may occur in the long run.
Field Of Vision
This covers the whole area over which vision is possible. Eye experts point out that it’s crucial to be aware of objects in your periphery (to your right and left, above and below) as well as those located at the centre of your field of vision.
Perception
As a reader, you should be able to remember the shape of words, a vision skill that develops along with your reading skills. Perception is the process by which your eyes receive and recognize visual stimuli.
If vision problems are diagnosed during the eye chart tests, an optometrist may prescribe contact lenses or glasses, or recommend vision therapy. In most cases, they recommend a combination of remedies.
With vision therapy, you get a customized training program designed to enhance your vision skills and develop the eye muscles responsible for focusing on objects.
The Canadian system of education allows educators, optometrists and psychologists to work together to meet each student’s vision needs and support proper reading.
|
It’s so important that we teach our children about being kind to the Earth and environment. Earth Day isn’t just about a one day celebration. It’s about lifestyle changes that will make a positive impact every day of the year.
Do you want to celebrate Earth Day with your kids this year? You’re in luck! Here are some ideas to help you plan an Earth Day celebration that will leave a lasting impression on your whole family.
What is Earth Day?
Earth Day (April 22) is the day to educate children — and adults — about the importance of keeping our planet clean.
Get kids excited and involved
Children are very impressionable and learn new things every day. They also tend to have hearts of gold and instinctively want to help and please others. If we can educate our children about the importance of caring for our planet through family fun and play, we can hope they will continue the cycle with future generations.
Make lifestyle changes
The best activities you can perform on Earth Day are to put active plans in place for the entire year. Use Earth Day as your starting point to educate your family and create awareness for doing good deeds. This is also a great way to conserve, cut down on your household bills and implement change and responsibility for your kids.
Teach your children about conserving electricity, water and reducing fuel emissions by turning off house lights and appliances when not needed, turning off the faucet while brushing teeth and taking more walks or riding bikes instead of driving.
Look for ways to reuse things around the house or use them for craft projects. Buy or accept secondhand clothing and household items when you can, and donate unused items to an organization or school in need.
If you don’t already have one, designate a waste bin for recyclable materials. For younger children, designate three smaller waste bins or stackable boxes and draw pictures of recyclable items, or attach pictures cut from magazines, to the side of each bin. It’s easier for younger children to grasp the concept of recycling if it’s visual, and if the items are separated into categories like plastic, paper and aluminum.
Host an Earth Day event
Organize and host a beach, park or community clean-up event. Pick up litter in public areas and/or lay grass or plant flowers and trees where vegetation has died. Cleaning up debris helps our natural wildlife, and the new plantings help the environment by reducing wasted water.
Throw an Earth Day party
Throw a party for your kids to cap off your Earth Day efforts. Start your day with a clean-up outdoors, watch Dr. Seuss’ The Lorax, make an Earth Day jar (see below), cook a meal from organic, locally grown fruits and vegetables — then finish it off with a fun Earth Day dessert.
How to make an Earth Day jar
Create a jar full of ways your children and your family can help the Earth beyond Earth Day this year.
What you’ll need:
- scrap paper and/or paper bags
- large mason jar
What you’ll do:
- Cut small strips from paper bags or scrap paper.
- Write down simple things you can do on a weekly or monthly basis to help the Earth.
- Draw one piece of paper each weekend or once a month — depending on what you decide.
Examples of what to put in your jar:
- Plant a tree.
- Sew your own towels to replace paper towels.
- Volunteer at a recycling center or recycling event in your community.
- Make a home movie about what you do to help the Earth, and share it with friends and family.
- Get involved with conservation groups or programs in your area (aquariums, the zoo, etc.)
- Host a movie night showing the kid-friendly Disney movie, Earth, and invite friends.
- Plant a garden in your yard.
- Join your local horticulture society or club.
How are you celebrating Earth Day with your kids this year? Leave a comment below and let us know!
|
Helicobacter Pylori: Pathogenesis, Symptoms and Incubation
Helicobacter pylori (H. pylori) are spiral-shaped, gram-negative intestinal bacteria that cause the majority of ulcers in the stomach and duodenum. They thrive in very acidic environments and have a unique way of adapting to the harsh environment of the stomach. H. pylori are classified as low-potential carcinogens (cancer-causing substances) by the World Health Organization.
The Life Cycle (Pathogenesis) of Helicobacter pylori
H. pylori are able to survive in stomach acid because they produce enzymes (special proteins) that neutralize the acid. This mechanism allows H. pylori bacteria to enter the stomach and make their way to the “safe” area – the protective mucous lining of the stomach wall. Once the bacterium is in the mucous lining of the stomach, the body’s natural defenses cannot reach it. The immune system will respond to an H. pylori infection but will be unable to kill the bacteria since they are hidden in the stomach lining. The immune system will keep sending infection fighters to the infection site, and H. pylori will feed on the nutrients provided by the body, allowing the bacteria to survive in the stomach environment.
H. pylori weaken the protective mucous coating of the stomach and duodenum, allowing the stomach acid to get through to the sensitive lining beneath. Both the stomach acid and the bacteria irritate the lining, causing gastritis (stomach inflammation) and perhaps the formation of an ulcer within a few days of the initial infection. Ironically, it may be not the H. pylori bacteria themselves, but the inflammatory response to the bacteria, that causes the ulcer to form.
The series of steps – the pathogenic mechanisms – that H. pylori go through when establishing themselves in the stomach are as follows:
- Attachment – The H. pylori bacteria must enter the stomach and attach themselves to the lining of the stomach to establish an environment in which to grow.
- Toxin production – H. pylori produce toxic substances to increase the secretion of water and electrolytes in the stomach and cause cell death in the cells of the stomach lining. This helps the bacteria take over the stomach environment and reduces the competition for required nutrients.
- Cell invasion – The bacteria enter the stomach-lining cells for protection and then kill the cells they are in (their host cells) so they can move on to invade more stomach-lining cells. This process continues, producing tissue damage. This tissue damage becomes the site of ulcer formation in the stomach.
- Loss of microvilli/villi – The compounds released into the host cell during the cell-invasion step cause a change in the stomach-lining cells. This change results in fewer nutrients being absorbed by the stomach. The end result? The body receives fewer nutrients from the food eaten at each meal.
Ulcers develop when there is a breakdown in the mucous layer lining the stomach, permitting the gastric (stomach) acid and digestive enzymes to attack and aggravate the stomach muscle. Helicobacter pylori contribute to this breakdown by living in this layer and increasing the likelihood of it breaking down. Stress and diet may irritate an ulcer, but do not cause it.
Symptoms and incubation time of an H. pylori infection
Getting an H. pylori infection is nothing like catching a common cold, in that immediate consequences of the infection are rarely seen. In fact, it is possible to go many years without noticeable symptoms. When symptoms do occur, abdominal discomfort is the most common. This discomfort is usually a dull, gnawing ache that comes and goes for several days or weeks. It generally occurs two to three hours after a meal or in the middle of the night (when the stomach is empty) and is relieved by eating, drinking milk or taking antacid medications. Other symptoms include heartburn, increased burping, weight loss, and bloating; less common symptoms include poor appetite, nausea and vomiting. If you suspect that you have an ulcer and experience any of the following symptoms, a doctor should be called immediately.
- Sharp, sudden, persistent stomach pain
- Bloody or black stools
- Bloody vomit, or vomit that looks like coffee grounds
The above symptoms can be signs of a serious problem, such as:
- Perforation – when the ulcer burrows through the stomach or duodenal wall.
- Bleeding – when acid or the ulcer breaks a blood vessel.
- Obstruction – when the ulcer blocks the path of food trying to leave the stomach.
Infection with H. pylori occurs worldwide, but the prevalence varies greatly among countries and among population groups within the same country. The overall prevalence of H. pylori infection is strongly correlated with socioeconomic conditions. The prevalence among middle-aged adults is over 80 percent in many developing countries, as compared with 20 to 50 percent in industrialized countries.
Prevalence of infection is higher in developing countries than in developed countries. In developed countries, although the overall prevalence of infection in young children is < 10%, close to 50 percent of children living in poor socioeconomic conditions are infected. Up to 80 percent of children below ten years of age are infected in developing countries. Prevalence of infection in India is 22%, 56% and 87% in the 0-4, 5-9 and 10-19 year age groups respectively. The important point is that, throughout the developed world, the infection is rare among children, whereas in developing countries it is common in children. It has been observed that there is no statistical difference in H. pylori infection between male and female children. Studies in developing regions suggest that, until the last century, nearly all persons carried H. pylori or closely related bacteria in their stomachs, but with socioeconomic development fewer children are acquiring H. pylori. The annual incidence of H. pylori infection is 0.3%-0.7% in developed countries and 6%-14% in developing countries.
Helicobacter pylori infection is common in the Indian subcontinent. Exposure occurs in childhood, and approximately 81% of adults have been infected at some time. Sero-surveys indicate a seroprevalence of 22%-57% in children below the age of 5, increasing to 80%-90% by the age of 30, and remaining constant thereafter.
There is now evidence from epidemiological studies that H. pylori carriers have a significantly greater risk for the development of gastric cancer. Results from three prospective epidemiological studies [10-12] estimate that H. pylori carriers have a 2.8- to 6.0-fold increased risk of gastric cancer developing over mean follow-up periods of 6 to 16 years when compared with their H. pylori-negative counterparts. The overall mean risk was calculated to be 3.8 [13]. This odds ratio increased to 8.7 in people who were diagnosed 15 years or more after testing positive for H. pylori.
H. pylori infection – treatment
Ulcers caused by H. pylori can usually be cured with a one- to two-week course of antibiotics. Treatment usually involves a combination of antibiotics, acid suppressors, and stomach protectors. Acid suppression by an H2 blocker or proton pump inhibitor in conjunction with the antibiotics helps alleviate ulcer-related symptoms, helps heal gastric mucosal inflammation and may enhance the efficacy of the antibiotics against H. pylori at the gastric mucosal surface. The use of only one medication to treat H. pylori is not recommended. At this time, the most effective treatment is a two-week course called triple therapy. It involves taking two antibiotics to kill the bacteria and either an acid suppressor or stomach-lining protector to protect the stomach lining. Two-week triple therapy reduces ulcer symptoms, kills the bacteria, and prevents ulcer recurrence in more than 90 percent of patients, but, unfortunately, patients may find triple therapy complicated because it involves taking as many as 20 pills a day. The antibiotics used in triple therapy may cause mild side effects such as nausea, vomiting, diarrhea, dark stools, a metallic taste in the mouth, dizziness, headache, and yeast infections in women.
|
Problem Set 5
1. What impact will an unanticipated increase in the money supply have on the real interest rate, real output, and employment in the short run? How will expansionary monetary policy affect these factors in the long run? Explain.
The money supply in an economy is the benchmark by which interest rates are determined. The supply of money is directly tied to the amount of money that can be loaned and borrowed in various capacities. The more money there is to loan, the less “expensive” it is to borrow that money. This is because when there is an increase in the money supply, the demand for that money adjusts as well. This causes an increase in the overall amount of money being exchanged, and in turn, …
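The short-run mechanism described here, in which a larger supply of loanable money lowers the price of borrowing, can be illustrated with a toy liquidity-preference model where the interest rate adjusts until money demand equals money supply. This is only a minimal sketch: the linear demand function and all parameter values are invented for illustration and are not part of the original answer.

```python
# Toy money-market model: money demand L(i) = a - b*i must equal the
# money supply M, so the market-clearing interest rate is i = (a - M) / b.
# All parameter values are made up purely for illustration.

def equilibrium_rate(money_supply: float, a: float = 1000.0, b: float = 50.0) -> float:
    """Solve a - b*i = M for the market-clearing interest rate i (in percent)."""
    return (a - money_supply) / b

for m in (600, 700, 800):  # progressively larger money supply
    print(f"M = {m}: i = {equilibrium_rate(m):.1f}%")
# M = 600: i = 8.0%
# M = 700: i = 6.0%
# M = 800: i = 4.0%
```

As the supply of loanable money grows, the rate that clears the market falls, which is the short-run effect described above.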
This obviously is lost economic activity that can cause ripple effects across the market. When the general price level is stable, however, the economy becomes appealing to investors and encourages them to spend their money in the market. The confidence that investors gain is a huge asset to economic growth and development. When people and businesses are confident that their money is going to be put to good use, they are much more likely to spend it. Domestically, price stability is important for the government’s fiscal policies and for the Fed’s monetary policies. The Central Bank is also affected by the stability of prices when it makes monetary adjustments and investments. Therefore, it is vital for the Fed to monitor and attempt to stabilize prices as much as possible.
4. Compare and contrast the impact of an unexpected shift to a more expansionary monetary policy under rational and adaptive expectations. Are the implications of the two theories different in the short run? Are the long-run implications different? Explain.
When monetary policy is created, there are 2 popular theories that guide the actions of decision makers. One of these theories is Rational Expectations. The theory of Rational Expectations is based on the presumption that the economic future of a market can be systematically
|
Asbestos is a carcinogenic material, and exposure to it may result in the later development of diseases such as benign pleural effusion, pleural plaques, diffuse pleural thickening, rounded atelectasis, asbestosis, mesothelioma, and lung cancer. Most exposure to asbestos has occurred occupationally. However, people have also been exposed to asbestos through common household products, old buildings, and by indirect contact from loved ones who have worked with asbestos directly and have carried home asbestos dust on their clothing.
Although the manufacturing of asbestos products has been greatly reduced in the United States due to increasing governmental regulations since the late 1970s, asbestos still remains present today in old structures, buildings, and even warships that were built before this time. For this reason, and due to the long latency period between initial exposure and the first symptoms and diagnosis of disease, asbestos-related disease still remains a serious public health hazard.
Asbestosis is one of many diseases categorized as an “environmental lung disease” or “occupational lung disease”. It is a lung condition referred to as diffuse pulmonary fibrosis. Asbestosis results from coming in contact with asbestos and inhaling its deadly fibers into your lungs. These asbestos fibers, once inhaled, accumulate in the lung tissue, thus distinguishing it from other fibrotic diseases. Additionally, asbestos fibers have been found in small numbers beyond the lungs; such as the tonsils, thoracic and abdominal lymph nodes, pleura, peritoneum, pancreas, spleen, kidneys, liver, stomach, esophagus, small and large intestines. This disease is progressive and irreversible in nature and typically leads to subsequent respiratory disability. In most severe cases, asbestosis may even lead to death from pulmonary hypertension and cardiac failure.
Any accumulation of dust in the lungs, whether it is asbestos or not, is referred to as “pneumoconiosis”. Pneumoconiosis also refers to the pathologic response of the human body to the presence of the accumulated dust in the lungs, which in this case results in asbestosis. Some of the symptoms of asbestosis are: shortness of breath, dry cough, X-ray changes, and pulmonary function deficiencies. The latency period for asbestosis is generally several decades, and it can occur in individuals exposed to large amounts of any of the three commercial forms of asbestos (chrysotile, amosite, or crocidolite) for extended periods of time. The disease may also develop even if the exposure was as brief as three years or less, if the level of exposure was heavy. There are also two other, non-commercial types of asbestos: amphibole and anthophyllite.
The term mesothelioma is used to describe a cancerous tumor that involves the “mesothelial” cells of an organ, usually the lungs, heart or abdominal organs. Mesothelioma is classified into two types: pleural and peritoneal. Pleural mesothelioma is the most common type, and it is a very rare and aggressive form of lung cancer. The “pleura” is a thin membrane found between the lungs and the chest cavity, which serves as a lubricant to prevent the lungs from chafing against the chest walls. Peritoneal mesothelioma, although less common, is more invasive and therefore results in a shorter life expectancy for the patient. Mesotheliomas have also been found in other abdominal organs.
As with other types of cancer, there are benign and malignant mesotheliomas. The most common of the two is by far the diffuse malignant pleural mesothelioma. This particular type of tumor is very aggressive and invasive, spreading quickly over the surface of the lungs, abdominal organs or heart. Depending on the stage at which the disease is detected and the general health and strength of the patient, life expectancy for victims is between four and twenty-four months. The average person diagnosed with this aggressive type of tumor survives between four and twelve months from the onset of symptoms. These symptoms include cough, shortness of breath, difficulty breathing and sleeping, pain in the chest and abdominal regions, progressive loss of weight and appetite, and pleural effusions (fluid in the chest cavity). However, some victims have survived for several years with proper treatment.
Smoking is by far the leading cause of lung cancer. Tobacco smoke causes more than 8 out of 10 cases of lung cancer, according to the American Cancer Society. Tobacco products contain harmful carcinogens (cancer causing agents) that can damage the cells in the lungs. The longer a person has been smoking and the more cigarettes a day smoked, the greater the chances are of contracting lung cancer. Exposure to secondhand tobacco smoke, which is called involuntary or passive smoking, can also increase the risk of developing lung cancer. If a person stops smoking before lung cancer develops, the lung tissue slowly returns to normal.
Even though smoking is mostly responsible for causing lung cancer, it is not the sole cause of it. Asbestos exposure can also cause lung cancer. People who work with asbestos are already in danger of getting lung cancer, and by smoking, the risk is greatly increased. Although the manufacturing of asbestos has slowed down significantly due to government regulations, it is still present in some products and old buildings. However, if left undisturbed, asbestos poses no danger. Asbestos is only dangerous when it has been disturbed and its raw form (fibers) is released into the air and breathed in.
|
The easiest way for your child to learn about directionality of print is by reading together with you. Here are some tips to help you make the most of your reading time:
- Sit close with your child as you read together. When you snuggle close with a book, it is easier for your child to observe how you read a book. This physical closeness will help your child associate reading with spending fun time with their favorite person: you!
- Talk about and interact with the book. Point out features of a book: front and back covers. With your finger, trace the title, author and illustrator’s names, and any other interesting information about the book. Show your child the direction we read words in a book – in English that’s from left to right, and from top to bottom.
- Demonstrate how to handle a book. Talk about how to hold a book. When you turn the pages from beginning to end, children will begin to understand that each page has a part of the story. As you read more, they will get a sense of how books share stories with a beginning and an end. As your child gets older, ask them to help turn pages.
- Make sure to read “extra” parts. If the book includes a dedication, read it. If the book includes biographies about the author/illustrator, read it. Reading these parts of a book teaches children that all parts of a book are important.
*Tips adapted from: https://study.com/academy/lesson/how-students-learn-directionality-of-print.html
|
While many other colonial and early republican towns played important roles in shaping the early American world, few places reveal such complete integration of so many of the period’s institutions as Litchfield. The town’s influence lasted well into the 19th century as reform movements fostered by the town gained followers and as young people who were educated or reared in the community matured into social, religious, and political leaders. The convergence of the forces of politics, reform, education, and religion in this early American community permits insight into the complexities of both regional and national stories.
Despite its inland location in the hills of northwest Connecticut, several days distant from the nation’s major commercial and political centers, during its period of greatest prominence Litchfield was a regional heart of culture, education, and Federalist politics. Founded in the early 1720s by migrants from Hartford and Windsor, Litchfield was a frontier town. The first European residents struggled to seek their livelihood in an inhospitable climate and on hilly, stony fields. The town began its ascent in 1751, when it became the seat of the newly organized Litchfield County. The county court, in particular, turned the surrounding area into a thriving market and made Litchfield village the hub of local turnpikes.
As the imperial crisis unfolded, a large majority of the town’s residents joined the ranks of protest and rebellion. Patriot activity in the county, from outcries against the Stamp Act in 1765 to the 1774 resolutions of solidarity with beleaguered Boston, originated in Litchfield. Although they were located away from the fighting, the townspeople found ways to support the American cause. Its distance from the coast offered the town some protection from the British, yet Litchfield’s network of roads connected it with both New England and New York. Townspeople stored military supplies for the American army, housed senior prisoners of war, and melted and molded the New York City statue of King George III into bullets for the American army. Local revolutionaries included Oliver Wolcott, signer of the Declaration of Independence and Major Moses Seymour, in whose home prisoner David Matthews was confined.
In the aftermath of independence, the community embraced the strife of party politics and the bustle of the market economy. By the 1780s the town emerged as the unofficial capital of western New England. Litchfield supported one of the earliest newspapers in the state, as well as an active lending library and a club that sponsored speeches and debates on political, philosophical, and literary subjects. A tax-roster from the 1780s showed a population of growing occupational diversity, including attorneys, physicians, merchants, tailors, a goldsmith, joiners, blacksmiths, and no fewer than eighteen tavern keepers.
In American education, Litchfield was in the vanguard. Out of his home on South Street, Tapping Reeve developed the first curriculum for teaching common law and opened the first law school in the United States. Legal historian John Langbein has stated that, “the origins of American common law can be better traced through [the students’] law school notebooks than any other source.” The Litchfield Law School launched the town into regional and national eminence, and its closing in 1833 signaled the community’s transition to lesser political and cultural prominence.
In the years after they finished their studies and well after the law school’s closing, Litchfield students formed a network of leadership and influence that encompassed public service, business, and many other areas of American life. Ultimately, the small law school would boast of having educated two vice-presidents of the United States, Aaron Burr and John C. Calhoun, as well as fourteen governors, fourteen members of the federal cabinet, twenty-eight U.S. Senators, 100 members of the House of Representatives, three members of the U.S. Supreme Court, and many other state and local public officials. Some of the young men who attended the school, such as educator Horace Mann and artist George Catlin, went on to distinguished careers in fields other than the law. Reeve’s students played influential roles in every major political and social battle of the antebellum years and often found themselves on opposing sides. For example, when Roger Sherman Baldwin represented the Africans in the “Amistad” case, he argued against fellow former Litchfield students William Holabird and William Elsworth.
Tapping Reeve’s success with the law school prompted many others to open proprietary law schools and, eventually, universities to offer professional legal training. Reeve’s students often became his competitors. More than twenty alumni of the school started or were early professors of new law schools. For example, Edward King, fourth son of notable politician and diplomat Rufus King, took his legal education and family connections west to found the Cincinnati Law School. Women, as well as men, discovered unique educational opportunities in the town. In 1792 Sarah Pierce founded one of the pioneer institutions of female education in America. Through her innovative curriculum that combined academic, practical, and ornamental courses, Pierce and her fellow teachers expanded the world of the more than 3,000 girls who attended the school over its forty-one year history. By the time of its closing, the Litchfield Female Academy had attracted students from fifteen states and territories, as well as Canada, and the West Indies. Sarah Pierce encouraged her students to become involved in benevolent and charitable societies. Academy students raised money for the training of ministers and organized to support local missionary, bible, and tract societies. Many of the academy alumnae carried on these activities in later life, becoming leaders and ardent members of maternal societies, moral reform movements, and temperance organizations. While most of the students went on to private lives devoted to their families, many others, such as Catharine Beecher, chose to teach or establish their own schools.
Many townspeople played leading roles in another experiment in education when they helped found and sponsor one of the most progressive social experiments of the antebellum era. This venture, a Congregational school in neighboring Cornwall, trained Asians, Hawaiians, Africans, and Native Americans to become missionaries to the pupils’ home communities. Two American Indian members of the school community unwittingly tested racial boundaries in Jacksonian America when they married daughters of the town’s elite and took the young ladies to live in Cherokee country.
Politically, the town is fascinating. Robust partisanship characterized Litchfield during the years of the early republic. Questions that defined the nation during the period–such as whether there would be a party system and whether the nation would take the shape of a democracy or a republic–were argued not only in Philadelphia, New York and eventually Washington, D.C., but also in Litchfield. Residents of Litchfield served in national and state politics at a greater rate than their counterparts in similar towns, gaining an extraordinary number of high offices. In the 1780s, townspeople identified with the new nation and figured prominently in both the leadership and rank-and-file of the Federalist Party, reaching the pinnacle of their political and intellectual influence in the late 1790s. In fact, Litchfield of the early republic was the home of one U.S. Senator, four U.S. Congressmen, two state governors, and two chief justices of the Connecticut Supreme Court. In the era of Jeffersonian Republicanism, the town leaders directed the region’s Federalist resistance. Democrats indicted Litchfield Law School founder Tapping Reeve for libeling President Jefferson, and in 1806 local Federalists imprisoned an editor who dared to publish a Jeffersonian newspaper in town. Letters from the period discuss concerns over state and national politics as well as record local disagreements. “There is more virulence and party animosity here than you can imagine,” wrote one of the students at the Litchfield Law School in 1808. But when some in the region toyed with secession in late 1814, many of Litchfield’s Federalists united behind Frederick Wolcott, commander of the Connecticut volunteer regiment, to oppose New England sectionalism. After the War of 1812, as the contest between religious toleration and Congregationalist hegemony dominated the political landscape of Connecticut, the leaders of both movements, Oliver Wolcott, Jr. and Reverend Lyman Beecher, resided in the town.
Many of the religious and reform movements of the era first developed in Litchfield and its vicinity. In the second half of the eighteenth century the Reverend Joseph Bellamy of neighboring Bethlehem was one of the most influential religious forces in the region. Bellamy, who earned the pejorative title “Pope of Litchfield County,” was a central figure in the battles of the Great Awakening and its aftermath. In 1787 leading residents of Litchfield organized one of the country’s first temperance groups. The town’s religious revival in the first decade of the 19th century was an early manifestation of the Second Great Awakening. And the message spread through the students from the law school and female academy who returned to their homes following the completion of their education in Litchfield. Students, particularly from the Litchfield Female Academy, also circulated reform messages as they wrote home about their work in Litchfield voluntary organizations. The town gained yet another force for reform when Lyman Beecher came to the Congregational church in 1810. During his tenure in the town, he initiated his influential campaigns against drinking and slavery.
Litchfield residents played a disproportionately large role in bringing the industrial and financial revolutions to the region. The town’s elite sponsored the construction of small iron works, paper mills, textile plants, cheese factories, and a broadcloth woolen mill in neighboring towns. Residents and graduates of the town’s law school figured prominently in the directories of the ten banks chartered by Connecticut from 1798 to 1818. Account books, as well as furniture, musical instruments, and other manufactured products testify to the centrality of Litchfield to the proto-industrialization and the modernizing finances that ushered in capitalism in western New England.
Already in decline, Litchfield’s fortunes worsened in 1833, the year both the Litchfield Law School and the Litchfield Female Academy closed their doors. As the schools’ founders retired and other schools competed for students, the law school and female academy faced shrinking enrollments and loss of status. As fewer prominent families from around the country sent their children to study in the community, Litchfield experienced both a decline in social activity and a loss of prestige. Businesses also suffered setbacks as the industrial revolution gained speed in riverside communities, rendering the small-scale manufacturing of Litchfield obsolete. The political scene had also quieted. After the Federalists, the driving political force in early national Litchfield, fell into disarray and the community’s most activist politicians aged, the town retreated from the heart of the political fray, and its squabbles mirrored those of other small, rural New England towns.
Yet Litchfield continued to play a role in the nation as its influence lingered in its native children and those who had been educated in the community. For example three of Lyman Beecher’s children, novelist Harriet Beecher Stowe, educator and home economist Catharine Beecher, and minister Henry Ward Beecher immortalized the town of their youth in their writings and made clear that the small rural community was a defining influence on their lives. Georgian Eugenius Nisbet left Litchfield with more than a legal education. In later life he used his Litchfield experience in shaping his arguments for the South to secede from the Union when he served as Chairman of the committee to draft the Georgia ordinance of secession.
In spite of its unusual importance, no modern scholar has comprehensively investigated the history of Litchfield. Early research on the town and its inhabitants dates from the colonial revival, when the community’s residents collectively created and embraced a mythical vision of Litchfield’s past. Documents not held by the Society are scattered throughout the nation, and even abroad, because of the educational institutions and diverse activities of Litchfield and its residents. As early as 1951, in a letter to the Society, noted historian Carl Bridenbaugh bemoaned the absence of accessible information about the history of this pivotal town. Although recent books such as Joseph Wood’s The New England Village (1997), Cornelia Dayton’s Women before the Bar (1997), and Laurel Thatcher Ulrich’s Age of Homespun (2001) have all used Litchfield material, modern scholars have left many areas unexplored and many of the rich collections of this remarkable community and its immediate surroundings remain untapped. It is the Society’s hope that the Ledger will inspire and enable new scholarship focusing on Litchfield’s significance.
|
Emperor Taizu (960-976 CE), formerly known as Zhao Kuangyin, was the founder of the Song (aka Sung) dynasty which ruled China from 960 to 1279 CE. Taizu settled for a territorially smaller but more unified and prosperous China than was seen in previous dynasties, and he made particular efforts to curb the powers of the military and bolster those of the scholar-officials within the state bureaucracy. Taizu's careful governance would ensure that his successors had the foundation upon which they could build one of the most successful dynasties in China’s history.
Rise to Power
The Tang dynasty had ruled China from 618 CE with great success, but their collapse in 907 CE resulted in a sustained period of political upheaval. The once unified Chinese state was broken up into many competing political entities and so the era from 907 to 960 CE is often referred to as the Five Dynasties and Ten Kingdoms (Wudai shiguo) period. One man would rise above all other military rulers in these turbulent times, the Later Zhou dynasty general-warlord Zhao Kuangyin (also spelt Guangyin).
Born in 927 CE in Luoyang, Henan province, Zhao Kuangyin was the second son of an important military commander called Zhao Hongyin. The younger Zhao turned out to be a fine archer and horseman. At age 20, Zhao Kuangyin was already a commander in his own right, fighting for the Later Zhou dynasty (951-960 CE). Extending the dynasty’s control over much of southern China, Zhao became the foremost commander in the Zhou army. Around the same time, the Zhou ruler died and his young son inherited the title. Consequently, in 960 CE the Zhou army endorsed Zhao as their new leader, dressed him in yellow imperial robes and confidently proclaimed him the emperor of all China.
The Emperor’s Domestic Policies
Zhao Kuangyin took the reign title Taizu, meaning ‘Grand Progenitor’. The emperor’s first priority was to ensure he kept his own position as the most powerful man in China. To that end, Taizu introduced a rotation system for his top generals, gave many former commanders only minor positions in the new regime, and reduced the powers of those commanders in the country’s 15 newly created administrative regions or circuits. Consequently, Taizu ensured that no single military leader ever became powerful enough to usurp him. As a further check on the army’s power, some generals were encouraged to retire on a handsome pension, others were given gifts to gain their loyalty, and still others were simply replaced by civilian officials whenever they retired or died. The government was centralised around the court at Kaifeng and the powers of the civil service were increased as was its status compared to the military professions. Further, the civil service was tasked with overseeing the army, acting as its supervisory body. The emperor was creating a much less militaristic regime and focusing instead on a more efficient administration than had been seen in China throughout the 10th century CE.
Taizu attempted to reduce corruption and the power of the eunuchs in the imperial court and was known for his tight hold on the state purse strings. To ensure the scholar-officials did not abuse their new-found power, Taizu revived the tried and tested civil service examination system. These entry tests to the civil service ensured that at least a healthy majority of officials were selected on merit rather than their family connections or outright bribery. Finally, the emperor introduced a new law code in 962 CE with harsh penalties and punishments, especially for government malpractice. After 963 CE, to further strengthen his meritocratic system, senior officials were prohibited from making appointments based only on recommendations. All of these measures would allow the Song to get off to a good start and form the solid foundation of state management that Taizu’s successors would build upon with great success.
Kaifeng, located on the Wei river in northern-central China at a strategically useful meeting point of various waterways, had already been a capital in earlier dynasties and it was selected by Taizu as his capital, too. The old Tang capital at Changan had, in any case, been utterly destroyed during the fall of that dynasty. The imperial heart of Kaifeng was laid out on a precise grid pattern, an intentional design meant to reflect harmony and good governance, as noted by the emperor himself when addressing his officials:
My heart is as straightforward as all this, and as little twisted. Be ye likewise. (quoted in Dawson, 62)
Kaifeng prospered and became one of the great metropolises of the world under the Song. With a population of around one million, the city would benefit from industrialisation and was well supplied by nearby mines producing coal and iron. A major trade centre, Kaifeng was especially famous for its printing, paper, textile, and porcelain industries, the products of which were exported far and wide along the Silk Roads.
In terms of foreign policy, Taizu had his hands full defending his northern borders against the Khitan Liao dynasty (907-1125 CE) who, significantly, remained in control of the Great Wall of China. The Khitan were great horsemen and they launched so many raids into Song China that Taizu and his successors were compelled to pay their neighbours annual tribute in the form of silver and silk. Tribute was cheaper than warfare, though, and much of the silver came back again as the two cultures remained committed trading partners. In any case, Taizu was content to consolidate his grip over central and southern China, a big enough task considering the recent fragmented history of the country. In addition, officials considered that the fall of the Tang had been mostly due to their over-ambitious foreign policy. The Song would settle for a smaller but more unified and prosperous state.
Taizu and his successors had to deal with less tangible problems than court rivalries and foreign threats. The period saw a new political and intellectual climate which questioned imperial authority and sought to explain where it had gone wrong in the final years of the Tang dynasty. A symptom of this new thinking was the revival of the ideals of Confucianism, Neo-Confucianism as it came to be called, which emphasised the improvement of the self within a more rational metaphysical framework. This new approach to Confucianism, with its metaphysical add-on, now allowed for a reversal of the prominence the Tang had given to Buddhism seen by many intellectuals as a non-Chinese religion. Taizu himself was always keen to present himself as the classic Chinese Confucian ruler, that is a wise, benevolent, and indisputable sovereign who presided over a fixed and efficient hierarchy of power.
Taizu, perhaps surprisingly considering his military background, was a keen patron of the arts once he had established himself as emperor. It may have been a strategy of his to help reunify China not just militarily, politically, and economically but also culturally. The emperor promoted the idea of ‘this culture of ours’ (si wen). The printing of books was promoted on all three of the major religions: Confucianism, Taoism, and Buddhism. An imperial library was established at Kaifeng which collected together in one place thousands of volumes of literature and histories; Taizu even ordered the collection of important silk scroll paintings and calligraphy specimens, always a highly-valued art form in Chinese culture. These efforts were amongst the first to not only produce great art but also preserve that which had been made by previous generations.
Successors & Legacy
Taizu died in 976 CE, and his successor was his younger brother Taizong (r. 976-997 CE). Together, the stability of their four-decade-long rule ensured that the Song got off to the best possible start. The Song dynasty would, in fact, rule China until 1279 CE and see great developments in agriculture, trade, arts and science, although the reign was split into two periods: the Northern Song (960-1125 CE) and Southern Song (1125-1279 CE) following the invasion by the Jin state in the first quarter of the 12th century CE.
|
Slavery is a well-known part of American history; however, very little is ever mentioned about who these slaves were. It often surprises many that a significant number of slaves who were brought to the “New World” self-identified as Muslims. Interestingly, many accounts show that African slaves were strategically taken from different parts of Africa and different tribes so that they could not speak to one another and plan revolts or escapes. However, while many white colonists believed these Africans to be entirely different from one another, it turns out many shared the common religion of Islam, and therefore many were able to communicate or use Arabic as common ground. Muslim slaves were able to preserve their liturgical language both through writing (where applicable) and through oral recitation and passing on of the religion to descendants. Many slave owners fervently opposed this and required their slaves to convert to Christianity, take on Christian names or attend Christian religious services. A slave’s ability to read and write in another language (and foreign script) was seen as a threat to the power of a slave owner.
While it was dangerous for Muslims to openly practice Islam, many kept it within the home and had secret group worship. Many were forced to convert to Christianity and others falsely “converted” to adhere to rules and laws established in the colonies. Others chose to defy those orders and publicly practiced Islam such as in the islands of Georgia and in the Caribbean. Although some Muslim slaves were forced to convert through physical force and abuse they did not easily renounce their religion and fought hard to continue following their faith including the fundamentals of Islam. Their strong religious beliefs led them to refuse conversion to Catholicism or Protestantism. In cases of absolute necessity, they would outwardly convert.
Looking at this history of the treatment of Islam and the Arabic language in colonial America, the derogatory view of these central aspects of Muslim and African slave identity remains in line with much of today’s view of Arabic and Muslims in America. There is still an irrational fear of Arabic, both read and written: several recent cases have involved the interruption of entire airline flights because of phone calls in Arabic, or even because students were holding Arabic flashcards, to name a few.
According to the traditions and teachings of Islam, each Muslim is obligated to pray five times a day, fast the month of Ramadan, perform pilgrimage, and give charity. How could Muslim slaves stay true to their beliefs while being coerced to change their names and to eat pork? The many stories of slaves remaining steadfast in the preservation of their faith in the most excruciating of circumstances are remarkable. Some slaves, such as the famous Kunta Kinte, never allowed their masters to gain power over them: as he was being lashed, his master would ask him repeatedly what his name was, and every time he answered, “Kunta Kinte.” Other slaves, such as Ayuba Suleiman Diallo, were forced to change their names; Diallo was renamed Job ben Soloman. Diallo put his faith in Allah, and when he found himself in a dangerous situation he would recite the Islamic testimony of faith (shahada). Diallo would also leave his post with the cattle to go pray in the woods. It was his education and his faith in Islam that saved him and freed him from his bondage. Although most of the praying was done in private, there were instances of slaves praying the five prayers in public. Some Muslims were even able to hold prayer groups, specifically on Fridays, which is quite remarkable.
A former slave by the name of Muhammad Yarrow has a remarkable story that remains unknown to many. Because he was able to read and write in Arabic, historians believe he came from a wealthy Muslim family in West Africa. He was enslaved and brought to Maryland, where he served as a slave for 44 years before winning his freedom. He purchased 3324 Dent Pl. NW and became a financier who lent funds to merchants. He also owned stock in the Columbia Bank of Georgetown. Many slaves had the Quran already memorized, which helped them keep the Arabic language alive by reciting and/or writing it. This helped them cope with the unimaginable pains of slavery. They used their knowledge of Arabic to communicate with each other and write things down for their masters; some were even able to write to their families back home. For example, Omar Ibn Said was an Islamic scholar who was taken to America as a slave. Omar Said wrote out a chapter of the Quran about victory, while his master believed he was writing the Lord’s Prayer. Omar was able to keep his faith despite the difficult circumstances he was in.
To say that the slaves simply practiced Islamic rituals is an understatement; it is more accurate to say that they breathed Islam. Islam was their source of inspiration, hope, and consolation in the most trying of moments. These are the unknown and unsung heroes of American and Islamic history. Many slaves will never be known, remembered, or celebrated. But in this month when we reflect on black history, we send a prayer on their souls. May they be in a place of everlasting freedom, happiness, and bliss. May their pain and suffering in this world be replaced with joy and elation in the hereafter.
|
Biological wastewater treatment is a process that seems simple on the surface, since it uses natural processes to help with the decomposition of organic substances. In fact, it is a complex process at the intersection of biology and biochemistry, one that uses organisms such as bacteria, nematodes, or other small organisms to break down organic substances in wastewater.
Bacteria are categorized by the way that they obtain oxygen. In wastewater treatment, there are three types of bacteria used to treat the waste that comes into the treatment plant: aerobic, anaerobic and facultative.
These three types of bacteria are grouped only by their method of respiration. There are many species of bacteria in a wastewater treatment plant.
We can supply different bacterial strains, each selected for its efficiency at degrading certain waste materials. With a bacterial product, the content of the waste stream determines how many enzymes are produced, in what sequence, at what concentration, and for what duration, so that the product functions correctly and effectively.
|
Air quality is measured using a metric called the Air Quality Index (AQI). The AQI tracks changes in air pollution in the atmosphere. Clean air is extremely important for maintaining good health and the environment. Our atmosphere is predominantly made up of 2 gases that are vital for life on earth: oxygen and nitrogen. The AQI keeps a tab on 8 major air pollutants in the atmosphere, namely,
- Particulate Matter (PM10)
- Particulate Matter (PM2.5)
- Nitrogen Dioxide (NO2)
- Sulphur Dioxide (SO2)
- Carbon Monoxide (CO)
- Ozone (O3)
- Ammonia (NH3)
- Lead (Pb)
Aspirants would find this topic very helpful in the IAS Exam.
How is PM 2.5 Measured?
The most common measurements used to gauge air quality are PM 2.5 and PM 10, expressed in micrograms per cubic metre. PM 2.5 refers to the concentration of microscopic particles less than 2.5 microns in diameter, and PM 10 refers to the concentration of particles less than 10 microns in diameter. Across the globe, all countries use the same metrics for measuring the health of atmospheric air. India measures 2 additional pollutants, namely lead and ammonia. An AQI value of less than 50 is considered safe.
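As a rough illustration of how a concentration reading is converted into an AQI value, the sketch below linearly interpolates a PM 2.5 sub-index between breakpoints; a city’s overall AQI is then the maximum of the sub-indices of all measured pollutants. The breakpoint values here are assumed from CPCB’s published PM 2.5 scale and should be verified against current standards.

```python
# PM2.5 sub-index by linear interpolation between breakpoints.
# Breakpoints assumed from CPCB's PM2.5 scale, in micrograms per cubic metre:
# (conc_low, conc_high, index_low, index_high)
PM25_BREAKPOINTS = [
    (0, 30, 0, 50),        # Good
    (31, 60, 51, 100),     # Satisfactory
    (61, 90, 101, 200),    # Moderately Polluted
    (91, 120, 201, 300),   # Poor
    (121, 250, 301, 400),  # Very Poor
    (251, 500, 401, 500),  # Severe
]

def pm25_sub_index(conc: float) -> float:
    """Interpolate the AQI sub-index for a PM2.5 reading."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if conc <= c_hi:
            return i_lo + (conc - c_lo) * (i_hi - i_lo) / (c_hi - c_lo)
    return 500.0  # readings beyond the scale are capped at 500

print(round(pm25_sub_index(45)))   # 75  -> Satisfactory
print(round(pm25_sub_index(100)))  # 232 -> Poor
```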
What Instrument is used to Measure Air Quality?
Some of the instruments used are given below.
- PCE-RCM 05
- PCE-HFX 100
- PCE-RCM 8
How Does PM get into the air?
PM stands for Particulate Matter. It is a term used to define a mixture of solid particles and liquid droplets found in the air. Some particles, such as dust, smoke and soot, are visible to the naked eye, but other particulate matter is so small that it is visible only under an electron microscope. Some of the sources of PM are construction sites, fires, fields, unpaved roads etc. Many of the particles are formed through complex reactions of chemicals such as sulfur dioxide and nitrogen oxides, pollutants emitted by automobiles, industries, power plants etc.
Air Quality Index
- The National Air Quality Index was launched in 2014 to measure air quality in terms of six categories: Good, Satisfactory, Moderately Polluted, Poor, Very Poor and Severe.
- Central Pollution Control Board (CPCB) has developed this Air Quality Index in consultation with IIT-Kanpur and air quality-professionals and experts.
- The states/cities are categorised in the range of 0-500 to measure its air quality:
|
Reflection helps students organize and communicate their thoughts, and recognize whether they truly understand a topic.
According to a study conducted by Harvard University, giving students time to reflect on their knowledge really matters and is an essential practice for their formation. By allowing them to reflect, instead of jumping from lesson to lesson, students can recognize their strengths and weaknesses.
Jamie Back, a math teacher at Cincinnati Country Day School, says she asks students to reflect at the beginning of the year on what they are looking forward to in the class, their fears, how they overcome mistakes in math, and how they feel when solving a math problem.
Moreover, she makes students reflect during a lesson because by “asking students to reflect on one class or one idea forces them to make time to determine if they understand before it is too late to seek help.” Additionally, it helps them organize and communicate their thoughts and realize if they truly understand a topic.
The teacher also recommends different tools to help students reflect on their knowledge:
GrokSpot: a collaborative discussion tool that blends reflection with mindset messages. It allows teachers and students to communicate and reflect on their thoughts through emojis.
Microsoft OneNote: it can be used as a digital notebook to keep notes and reflections. It also has Class Notebook, a place where teachers can respond to questions by inserting a text, picture, video, or writing/drawing using a Smartpen.
FlipGrid: a tool that allows video discussions using stickers and emojis.
Besides these tools, there are many tips teachers can follow to make students reflect on what they are learning:
Blogging: it’s a simple way to make students communicate their thoughts and get them to write.
Make videos: make students use their creativity by giving a different perspective on a topic. It can be a great way to look back at what they learn, giving them an insight into their progress.
Quote highlights: ask students to find a quote, song, brand or piece of art that represents a concept from the class so they can relate what they see in the lesson with things in “real life.” It also allows them to show more about their passion and interest.
Take breaks: teachers can’t force students to reflect; they can only make it a habit so it comes naturally. Giving “reflection breaks” during lessons encourages students to express their thoughts about what they’ve learned so far. After a while, they’ll learn to reflect on their own.
Students’ reflections can help teachers modify and plan future lessons, see which strategies are helping and which aren’t, detect when students need extra attention, and understand what connections students make between the teaching and what they see outside the classroom.
|
Will It Fly?
Students learn about kites and gliders and how these models can help in understanding the concept of flight. They learn about the long history of human experimentation with kites, the eventual achievement of flight with the invention of airplanes, and the pervasive impact of the airplane on the modern world (pros and cons). Then students move on to conduct the associated activity, during which teams design and build their own balsa wood glider models and experiment with different control surfaces, competing for distance and time. They apply their accumulated existing knowledge (from previous lessons and activities in this unit) about the four forces affecting flight and modifiable airplane components, and apply an engineering design methodology to develop sound gliders. To conclude, they reflect on and communicate the reasoning and results of their design modifications.
|
596-598 - Rise of Napoleon
598-599 - Napoleon's domestic policies
598; 600-602 - Growth of Empire
602-603 - Collapse of Empire
603 - Napoleon defeated
624-625 - Congress of Vienna
626-627 - New Ideas - Liberalism & Nationalism
627-629 - 1848 Revolutions
630-631 - Unification I : Crimean War
631-632 - Unification II : Italy
632-633 - Unification III : Germany
It was in the 19th century that nationalism became a widespread and powerful force. During this time nationalism expressed itself in many areas as a drive for national unification or independence. The spirit of nationalism took an especially strong hold in Germany, where the nationalism that inspired the German people to rise against the empire of Napoleon was conservative, tradition-bound, and narrow rather than liberal, progressive, and universal. When the fragmented Germany was finally unified as the German Empire in 1871, it was a highly authoritarian and militarist state.
After many years of fighting, Italy also achieved national unification and freedom from foreign domination, but certain areas inhabited by Italians (e.g., Trieste) were not included in the new state. In the latter half of the 19th century, there were strong nationalist movements among the Balkan peoples subject to the supranational Austrian and Ottoman empires, as there were in Ireland under British rule, and in Poland under Russian rule.
At the same time, however, with the emergence in Europe of strong, integrated nation-states, nationalism became increasingly conservative. It was turned against such international movements as socialism, and it found outlet in pursuit of glory and empire - the age of New Imperialism dawned. Nationalist conflicts likewise had much to do with bringing on World War I.
Napoleon and European nationalism
Napoleon Bonaparte was born on August 15, 1769 in the city of Ajaccio on the island of Corsica. His father was Carlo Buonaparte, an important attorney who represented Corsica at the court of the French King. He had four brothers and three sisters including an older brother named Joseph.
Coming from a fairly wealthy family, Napoleon was able to attend school and get a good education. He went to a military academy in France and trained to become an officer in the army. When his father died in 1785, Napoleon returned to Corsica to help handle the family's affairs.
While in Corsica, Napoleon became involved with a local revolutionary named Pasquale Paoli. For a while he helped Paoli in fighting against the French occupation of Corsica. However, he later changed sides and returned to France after the French Revolution occurred in Paris. The people revolted against the King of France Louis XVI and took control of the country. The royal family and many aristocrats were eventually killed.
Upon Napoleon's return, he allied himself with a radical group of the revolutionaries called the Jacobins, who were responsible for these executions. He received a position as the artillery commander at the Siege of Toulon in 1793. The city of Toulon was occupied by British troops and the British navy had control over the port. Napoleon came up with a strategy that helped to defeat the British and forced them out of the port. His military leadership in the battle was recognized by the leaders of France and, at the young age of 24, he was promoted to the position of brigadier general.
In 1796, Napoleon was given command of the French army in Italy. When he arrived in Italy, he found the army to be poorly organized and losing to the Austrians. Napoleon, however, was an ambitious man and a brilliant general. He used superior organization in order to move troops rapidly around the battlefield so they would always outnumber the enemy. He soon drove the Austrians out of Italy and became a national hero.
After next leading a largely unsuccessful but still well-publicised military expedition in Egypt, Napoleon returned to Paris in 1799. The political climate in France was changing. The current government, called the Directory, was losing power. Together with his allies, including his brother Lucien, Napoleon formed a new government called the Consulate. Initially, there were to be three consuls at the head of the government, but Napoleon soon gave himself the title of First Consul. His powers as First Consul essentially made him dictator of France.
As the dictator of France, Napoleon was able to institute a number of government reforms. One of these reforms was the famous Napoleonic Code. This code said that government positions would not be appointed based on a person's birth or religion, but on their qualifications and ability. This was a big change in the French government. Before the Napoleonic Code, high positions were given to aristocrats by the king in return for favors. This had often led to incompetent people in important positions.
Napoleon also helped to improve the French economy by building new roads and encouraging business. He reestablished the Catholic Church as the official state religion, but at the same time allowed for freedom of religion to those who weren't Catholic. Napoleon also set up non-religious schools, so anyone could get an education.
Napoleon's power and control continued to grow with his reforms. In 1804, he was crowned the first Emperor of France. At the coronation, he did not allow the Pope to place the crown on his head, but instead crowned himself.
How did Napoleon change the world?
Using your CCWH answers and your extra details from historychamps, complete the googledoc table on the right by clicking the question mark icon...
2-3 bullet points about how Napoleon changed France and then Europe…
Then think about what he introduced to change the world - what kinds of thing(s) did he introduce that changed how we think + act?
Once you’ve chosen something(s), then explain why it was so important, what did it cause or change..?
With the expansion of his French Empire from the Iberian peninsula to the gates of Moscow, Napoleon Bonaparte had a great say in how European ideas evolved over the 19th century. His Code Civil was used across the continent as a basis for restructuring the political systems in each conquered territory. Whilst formally unifying the Empire under one system, it also exposed the continent's population to Enlightenment ideas such as freedom, brotherhood and equality. These ideas prompted individual peoples across this vast area to start calling for unity amongst their own ethnic, cultural and linguistic groups. In turn, this growth of nationalism across Europe not only spelled disaster for Napoleon's Empire, but also created the nationalistic and increasingly imperialistic Europe that eventually saw no other option but to start a war in 1914 which would unleash hell...
Click on the painting and read the article above to discover why and how German people in particular unified as a direct result of Napoleon and his policies, then answer the questions on the document accessed on its right... This is an advanced document, with some academic English. Answer as many qs as possible, but don't worry if you can't answer them all - we will go through them as a class....
Congress of Vienna
Otto von Bismarck + German unification
The most obvious place to start when looking at the effects of Napoleon and his impact on Europe via nationalism is by examining what happened in the lands which were to become Germany. Napoleon had smashed the 1000-year First Reich when he dismantled the Holy Roman Empire, replacing it with his Confederation of the Rhine. Once Napoleon was defeated at Waterloo in 1815, the Confederation dissolved into hundreds of tiny statelets and princedoms, alongside some major established powers such as Prussia. These splintered fragments of the old HRE proved fertile ground for the ideas within Napoleon's Code Civil, with a shared language, history and cultural base allowing for a gradual realignment. However, it still needed the input of another great historical figure to completely remake the pieces into one whole again - enter Otto von Bismarck...
Who was Otto von Bismarck?
Otto von Bismarck was one of the individuals we study who genuinely changed the world in which he lived, and, like Napoleon, continues to shape the modern world through the actions he took in the 19th century. Born into a wealthy Prussian family, he was a member of the ruling classes from birth, Prussia being the largest of the states that emerged from the Holy Roman Empire, and one with a proud military tradition and heritage. It would be along Prussian lines that the new Germany would be based when it emerged in 1871. Von Bismarck ensured this would happen when his desire to unite the German-speaking peoples of Northern Europe started to become reality in the 1850s and 1860s. With his Blood and Iron policies using a Realpolitik philosophy as their basis, OVB launched a series of quick, easy wars against weaker opponents like Austria and Denmark to unify the people behind him and his ideas. Once he then defeated France in the Franco-Prussian War of 1870-71, Germany came into existence. Once united, Germany quickly established itself as the up-and-coming power in Europe, challenging the UK and another fast-growing power, the United States of America, in the race to become the world's leading superpower...
Cavour, Garibaldi + Italian unification
The second example of European Nationalism in action we will look at is the unification of Italy, a complicated process that is funnily enough completed in 1870, just before Otto von Bismarck unifies Germany in 1871.
It was a complicated process, as Italy had experienced nearly two millennia of separation since the collapse of the Roman Empire, with multiple independent states operating under different rules with different cultures and identities. However, they were all bonded by their Roman Imperial past and the Roman Catholic faith that had emerged from it. By the mid-1800s, there was increasing support for the idea that Italy should become united and so compete on the world stage alongside the other Great Powers of the day. For this to happen, however, huge differences had to be overcome, with determined and ruthless leadership central to any success. The epicentre of this unification movement, or Risorgimento, was surprisingly far removed from the historical centres of Rome, Florence, Milan, Venice or Naples - it was on the island of Sardinia and its lands in North West Italy, called Piedmont, that Italian unification was plotted and launched...
Watch the 3 minute history summary on the left hand side of the screen.
Who were the following individuals and how did they affect the Risorgimento, or unification of Italy?
- Giuseppe Mazzini
- Count Camillo Cavour
- Giuseppe Garibaldi
- Victor Emmanuel II
- Napoleon III
- Otto von Bismarck
Choose who were most important and least important and then put into a diagram called a causal pyramid seen below:
            ______A______
      ______B______ ______C______
______D______ ______E______ ______F______
Put the most important where A is and explain why; next important where B and C are (and explain); repeat for the least important individuals by putting them in D, E and F...and again explain why....
Now research the events in more detail using first the textbook copy which you can access by clicking on the map below
Print out wherever possible, read and highlight, and then make notes under the following headings:
(629) - Italy - 1848 Revolts
(631) - Piedmont vs Austria
(632) - Garibaldi in the South
(632) - Italian unification with Rome + Venice
Make sure you have completed a colour coded map of the unification using either the map in the text or on the website using the blank copy of the map available by clicking on the small Word icon bottom right of the map below...
Finally, we are going to examine the events and individuals in more detail (remember how the examiners love this subject!)
Access the big Word documents under the title to the map - on the left are the events and on the right are the individuals
- For the events make a revision poster timeline using the most important events and themes
- For the individuals, construct some revision summary flashcards OR crib sheets depending on your preference
Now, get out your original causal pyramid - do you still agree with your original judgements or have you changed any?
Reorganise your pyramid where necessary, and explain either why you have kept your original choices in detail OR made the changes you have on the googledoc accessible below by clicking on the Italian flag icon...
German unification in 3 minutes
OVB summary in 3 minutes
Franco-Prussian War in 3 minutes
Italian unification in 3 minutes
Italian unification lecture
Googledoc - Who was most responsible for the Risorgimento?
Ideologies - 1830 & 1848 Revolutions in Europe
- `Little Dictator` clip - how does this show Napoleon?
Who composed it? Why?
BBC Rise of Napoleon : Hero or Villain
PBS Empires series -Napoleon - 3/4 Summit of Greatness
PBS Empires series - Napoleon - 4/4 The End
|
New Zealand has four species of southern beech (Nothofagus), which is different to beech found in the northern hemisphere. Similar species live in southern South America, Australia, and at higher altitudes in New Caledonia and New Guinea (and fossils have been found in Antarctica). Beech forest grows on poor soils and in mountainous and cooler areas. It was long thought that beech trees had evolved from ancestral trees that had been in New Zealand when it broke away from Gondwana 85 million years ago. But recent DNA research has shown that beech species living in New Zealand today are more recent arrivals. They probably established in New Zealand from seeds carried across the sea from Antarctica or Australia.
|
Linear programming is a powerful tool that is widely used in business. It is essentially shading inequalities. In your algebra class, you might encounter both one-dimensional and two-dimensional problems. Fortunately, the principles are the same.
Number Line -- One Inequality
Inequalities have two forms, one that includes the condition of being equal and one that does not. The inequality x<5 excludes 5, while x≤5 includes 5. To graph x<5, draw an open circle at 5. This divides the number line into two regions, one below 5 and one above 5. Test the region that includes 0: is 0 less than 5? Yes. So shade, or draw a thick line, from the circle at 5 to the left, through 0 and beyond.
Number Line -- Two Inequalities
Now include the condition x≥-3. Because the inequality includes -3, draw a solid circle at -3 and test again. Zero is greater than -3, so shade the region containing 0, to the right of -3. Be sure not to shade past the open circle at 5, as you must still meet the condition x<5.
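If it helps to sanity-check the shading, the following short Python snippet (a hypothetical helper, not part of the original article) tests sample points against both number-line conditions at once:

```python
def in_region(x):
    # Both conditions must hold: x >= -3 (solid circle) and x < 5 (open circle)
    return -3 <= x < 5

for x in [-4, -3, 0, 5]:
    print(x, in_region(x))  # -4 False, -3 True, 0 True, 5 False
```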
In the x-y plane, use dashed and solid lines instead of open or solid circles. Draw a dashed vertical line at x=5 and a solid vertical line at x=-3, and then shade the entire region in between. To shade the two-variable inequality y<-2x + 3, first graph the line y=-2x + 3. Use a dashed line because the inequality is <, not ≤. Then test an x-y point on one side of the line. If the result is true, shade that side of the line. If not, shade the other. For instance, (-3, 4) gives 4<9, which checks out.
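For readers who prefer to see the shading done programmatically, here is a minimal matplotlib sketch, an illustration rather than part of the original lesson, that shades the region satisfying all three example conditions (x ≥ -3, x < 5, and y < -2x + 3):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-6, 8, 400)
y = np.linspace(-10, 16, 400)
X, Y = np.meshgrid(x, y)

# Shade where all three conditions hold: x >= -3, x < 5, y < -2x + 3
region = (X >= -3) & (X < 5) & (Y < -2 * X + 3)

plt.contourf(X, Y, region.astype(int), levels=[0.5, 1], colors=["lightblue"])
plt.axvline(-3, color="black")          # solid line: x = -3 is included
plt.axvline(5, color="black", ls="--")  # dashed line: x = 5 is excluded
plt.plot(x, -2 * x + 3, "k--")          # dashed boundary of y < -2x + 3
plt.xlim(-6, 8); plt.ylim(-10, 16)
plt.xlabel("x"); plt.ylabel("y")
plt.show()
```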
|
Pupils learn about infectious disease treatments and what happens to medicines and drugs when they are swallowed, injected or inhaled.
An interactive activity about the different bones and organs in the body, where they are and what they do.
A short article based on an extract from Topics in Safety, Topic 17 (Electricity), which is freely available to Association for Science Education (ASE) members.
ASE Health and Safety Group
The Gardening Club Grant Scheme has provided schools with the means to develop natural habitats in their school grounds.
Research-based publications and web-based activities to support active learning from the Biotechnology and Biological Sciences Research Council.
Apply for one of our outreach grants to put a pharmacology outreach or public engagement activity into action!
British Pharmacological Society
The popular science shows that strip science down to its bare essentials.
Student Books and Teacher Guides provide a clear route through this new specification.
Everything your students need to perform their required practical activities in one place.
Level-specific practice to help students prepare for their exams.
In this activity children use the exciting space mission to understand the concept of orbits and to link this to their understanding of gravity.
|
Measurement 1: Reflective Displacement Sensor
When light is directed at a transparent target, light is reflected from the top and bottom surfaces. Thickness is measured by calculating the difference in position of the light reflected from the top and bottom surfaces.
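As a rough illustration of that calculation, the sketch below converts the separation between the two reflection peaks into a thickness estimate. The refractive-index correction and its default value are assumptions (a paraxial approximation with a glass-like target), not figures taken from any sensor's documentation:

```python
def thickness_mm(top_pos_mm, bottom_pos_mm, refractive_index=1.5):
    """Estimate target thickness from the two reflection positions.

    The bottom-surface reflection appears closer than it really is when
    viewed through the material, so the apparent gap is scaled by the
    refractive index (paraxial approximation; n = 1.5 assumes glass).
    """
    apparent_gap_mm = bottom_pos_mm - top_pos_mm
    return apparent_gap_mm * refractive_index

print(round(thickness_mm(10.00, 10.70), 2))  # -> 1.05 (mm)
```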
Displacement sensor selection is important
- Does the displacement sensor have enough range to see both the top and bottom surfaces?
- Check to see if stable measurement is possible even if the reflectance of the top and bottom surfaces is different.
- Triangulation Method.
- World's fastest sampling rate: 392 kHz.
- 12 sensor heads can be connected.
|
Explore chemistry as you cook up candies and chocolates in the kitchen! Perform a number of sweet experiments and learn important physical science principles related to candy and cooking. Discover why sugar crystallizes to make rock candy. Learn about specific heat and the phases of matter by molding chocolates. Use a thermometer and learn about heat and temperature while boiling sugar. Gain experience with measurements and conversions, volumes, and weights. Investigate the chemistry of gummy bears. Learn about the mysterious phenomenon of triboluminescence with wintergreen candies.
Make gummy candies, chocolate shapes, and hard sugar candy using the special tools included: plastic and metal molds, candy thermometer, spatula, dipping fork, and more. Finish them with foils, paper cups, sticks, and wrappers. The 48-page, full-color manual provides instructions and explanations. Does not contain hazardous chemicals or food items; you supply the safe ingredients from your kitchen.
|
You likely hear the term hay fever a lot, but do you know what it means?
“Hay fever” is the common term for a medical condition called allergic rhinitis. Ironically, hay fever is not a fever and rarely involves hay.
However, most people are familiar with this term; few people say “my allergic rhinitis is terrible in the spring!”
If you want to beat your allergies this season, you need to start by understanding the causes and symptoms of hay fever. You also need to know what makes it worse, and what can make it better.
With this knowledge, you’ll have a better chance of getting through the season without hay fever!
What is Hay Fever?
Hay fever is marked by sneezing, runny and stuffy noses, and itchy eyes, mouth, or skin. Sufferers can also experience fatigue, largely because nasal issues and sneezing disrupt their sleep. According to the American College of Allergy, Asthma, and Immunology, the condition affects between 40 and 60 million Americans.
So, what causes hay fever?
The most common culprits are outdoor allergens, especially pollen from grasses, trees, weeds, wild flowers, and other plants.
However, indoor allergens can be a trigger for hay fever. Pet dander can cause allergic rhinitis, as can dust mites, mold, cigarette smoke, and perfume. There is also scientific evidence that diesel fuel is more likely to cause allergy problems than other types of fuels, although the connection remains unclear.
Allergy sufferers are essentially victims of their own immune system. When an allergen (the allergy-causing substance) is inhaled, the immune system reacts, sending disease-fighting chemicals to the nasal passages, sinuses, eyelids, and other areas. While the reaction is meant to protect your body, the immune system causes the runny nose, sneezing, and itchy eyes, all in an effort to fight a substance that is essentially harmless.
When Does Hay Fever Occur?
Many people assume that hay fever occurs exclusively in the spring. While symptoms are often strong in the spring, the condition can plague people all throughout the year, even during winter in cold climates.
Spring is often hay fever season for many allergy sufferers. This is mostly due to pollen released from trees and other plants. To reproduce, many plants rely on the breeze to spread their pollen. To blow in the wind and spread as far as possible, pollen needs to be extremely small and light. As the pollen blows in the wind, people will often inhale it, triggering their allergies.
Flowers are often thought of as causes for hay fever, but they rely on bees and other insects to spread their pollen, rather than the wind. These plants have pollen that is quite dense, clumpy, sticky, and heavy.
So, it’s not the fields of flowers that you should avoid, it’s actually the woods!
Pollen can travel an incredible distance, especially when the wind is high. While most pollen will only travel a few hundred meters from its source, it’s not unheard of for pollen to travel as far as a thousand miles.
The lesson? You don’t have to be near the woods or prairies to suffer from hay fever, as the pollen can easily make its way to you.
Understanding “Super Pollen”
There’s a term that you’ll often see when researching hay fever. Although it might be overly sensational, “super pollen” is a term used to describe the combination of hay fever and outdoor air pollution, especially diesel fuel.
When pollen mixes with diesel fuel exhaust particles, it’s believed that the material becomes stickier and does not disperse into the atmosphere as quickly. This means the allergens are not only more prevalent, they stay around for a longer period. Super pollen is a strong concern in the UK, and could become an issue for Americans as well.
Air Pollution and Hay Fever: Is there a Connection?
What about other forms of air pollution?
Are there factors other than diesel exhaust that contribute to someone’s hay fever?
Is it possible that regular gasoline exhaust, which is the standard in the U.S., also adds to hay fever problems?
What about smoke from tobacco, industry, or even wildfires?
To find the answers, we need to look at scientific studies...
One study found that hay fever is connected to poor air quality. This cross-sectional study, from researchers at Brigham and Women's Hospital in Boston, Massachusetts, analyzed past surveys on respiratory issues alongside historical air quality data, which included information on pollutants such as carbon monoxide, nitrogen dioxide, sulfur dioxide, and particulate matter.
Lining up the two data sets, researchers discovered that “improvements in air quality are associated with decreased prevalence of both hay fever and sinusitis.” (Sinusitis is an inflammation of sinus tissue; it essentially means blocked sinuses.)
A study from South Africa seems to support the “super pollen” theory as well as the theory that air pollution contributes to allergies.
The study analyzed the presence of truck traffic. (By “trucks,” it’s fair to assume they mean diesel-engine semi trucks, although the study mentions both petrol and diesel.)
The researchers asked school children questions about “truck traffic” near their homes, as well as the presence of allergic rhinitis, and found that the "results support the hypothesis that traffic related pollution plays a role in the prevalence of allergic rhinitis symptoms in children residing in the area."
A New Side Effect of Pollen: Bad Grades?
The common side effects of hay fever include sneezing, runny noses, and watery eyes.
Could we add poor academic performance to the list?
According to research out of Norway, pollen allergies may have a negative effect on exam results.
Simon Bensnes of the Norwegian University of Science and Technology studied the connection between pollen levels and exam scores and found that when pollen counts are higher, scores drop among some students. The author of the study speculates that people with hay fever may also have reduced performance at work, which could, in theory, lead to lower average pay and fewer promotional opportunities.
Factors That Make Hay Fever Worse
Is it possible that certain habits or other factors are making your hay fever worse than it needs to be?
According to Prevention, a magazine dedicated to health and well-being topics, there are a few common habits that could be making your hay fever even worse.
Stress, for example, could create more allergy symptoms. It’s possible that stress stimulates blood hormones that cause allergic reactions, contributing to the frequency and severity of allergies.
Over-consumption of alcohol could also be a factor. It’s believed that bacteria and yeast in alcohol could produce histamines that lead to allergy symptoms. When your symptoms are acting up, it might be wise to reduce or eliminate alcohol consumption.
The presence of some houseplants could also make your allergies worse by adding more pollen to your indoor air.
It’s also possible that swimming in chlorine pools contributes to enhanced allergies, and tobacco smoke, including first and secondhand, could be responsible for more allergy symptoms as well.
Alleviating Hay Fever: How to Have Fewer, Lighter Symptoms
So, what can you do to relieve allergic reactions in yourself, a loved one, or your children?
Drugs and other treatments are common, but by taking strategic measures to reduce exposure to pollen and other allergens, you’ll make life easier and more comfortable for everyone.
First, it must be said that discussing the issue with a doctor or medical professional should always be the first step. Your doctor can help you determine if you are allergic to certain substances, and whether treatments and other advanced measures are needed, including both prescription and non-prescription measures. Whether you are taking medication or not, you can use these steps to reduce your symptoms...
Wash Clothes and Sheets in Hot Water
An article published by WebMD.com discusses the benefits of washing your clothes in hot water to remove allergens. It says that water should be 140 degrees F to kill 100% of the dust mites found on clothing and sheets. Hot water also removes pet dander and pollen more thoroughly than warm or cool water. Steam-cleaning can also be as effective as hot water.
Take a Shower Before Bed
At the end of the day, your body and hair can be covered in pollen, so if you go straight to bed without rinsing off, that pollen, as well as other allergens, will stay with you. Taking a shower before bed will help remove many of the allergens that are giving you trouble and will help reduce your overall exposure. Be sure to rinse thoroughly and wash your hair or beard, especially if either are particularly long.
Wear an Air Filter Mask When Mowing, Landscaping, or Gardening
The mere act of mowing the lawn tosses a lot of dust, debris, and pollen into the air, making the chore almost unbearable for some allergy sufferers.
However, you can reduce your exposure by wearing a filtering mask while you do outside chores. If pollen and allergies cause eye irritation, consider a pair of goggles to keep particles out of your eyes. You might look silly to the neighbors, but you’ll feel much better throughout the day!
Avoid the Outdoors During Peak Pollen Times
Certain seasons, as well as particular times of the day, can be better or worse on your allergies. According to Pollen.com, the highest pollen counts usually occur between 5:00 am and 10:00 am, so if you can avoid outdoor activities during these times, you’ll likely feel better.
A lot will depend on your specific region, but April and May are often the worst months for spring tree pollen allergies, while June and July bring seasonal grass pollen (summer allergies). Ragweed, however, starts in the late summer and usually lasts into October. This weed is often found in cities because it grows fast and can thrive in small cracks in concrete.
Drive with Windows Up
Driving with the windows down gives you fresh air, but it can also contribute to your allergy symptoms. Even if you are traveling a short distance, keep the windows up to avoid exposing your lungs to pollen and other allergens. Keeping the windows up also keeps pollen from landing in your car, preventing future issues.
Clean the HVAC Vents, Replace Filters
Roughly once a year, have your HVAC vents cleaned. This has less to do with pollen and more with dust and indoor pet dander, but it can still remove many of the allergens that cause or contribute to hay fever. Replacing your HVAC air filter on a regular basis is primarily intended to maintain the quality of your home furnace, but can also ensure less dust is distributed throughout your home.
Keep Pets out of the Bedroom
Like a few of our suggestions, this one isn’t related to pollen, but to other factors that can make allergies worse. If you have pets in the home, you may enjoy having them in the bedroom. This provides a sense of companionship and, if it’s a dog, may even make you feel safer while you sleep. However, the pet’s dander is likely contributing to your allergies, making it harder to enjoy a good night’s rest. It might seem like a drastic step to some pet owners, but if you suffer from hay fever, you should seriously consider removing the pets from the bedroom to reduce your allergy symptoms.
Internal Causes of Allergic Rhinitis
We’ve talked a lot about various factors that cause allergic symptoms and hay fever, but what about personal factors?
Why is it that one person develops allergic rhinitis while another does not?
Let’s look at some of the personalized factors that may be causing the development of allergic rhinitis.
The first factor is genetics: a person with allergies is more likely to have children who also have allergies. There’s a theory that a single genetic glitch may be the cause of most people’s allergies, including allergies to pollen.
There is scientific data supporting the thought that food allergies are passed through genetics, so it’s not unfounded to think that airborne allergies could function in this way as well.
According to an article published in Therapeutics and Clinical Risk Management, “there is clear evidence to support the concept that allergic diseases are influenced by genetic predisposition and environmental exposure.”
Determining the environmental causes of allergy development can be tricky, because you need to separate allergy development from allergy symptoms.
For example, you might find that allergies are higher among people in urban areas, but it’s possible the same rate of people, all over the country, have the same allergies.
Urban dwellers are simply exposed to the allergens more often, skewing the data.
Allergies are, in fact, more common in urban areas than rural areas, so where you live could be a factor.
Oransi Air Purifiers for Removing Hay Fever
Max HEPA Air Purifier
The Max HEPA Air Purifier is one of the most effective and reliable air purifiers for allergies. It cleans a wide range of allergens, including hay-fever-inducing pollen. It uses a HEPA filter, which brings reliable performance and traps many of the smallest particles, further increasing its effectiveness. By removing over 99% of airborne particles, this air cleaner creates a more comfortable space for anyone suffering from hay fever.
OV200 Air Purifier
The OV200 Air Purifier is perfectly suited to remove pollen in bedrooms, offices, small basements, and small living rooms. It uses both a HEPA and a carbon filtration system, removing 98% of all airborne allergens. With a thin, unobtrusive profile, this air purifier can fit into many different spaces, allowing you to combat hay fever without noise or clutter.
EJ Air Purifier
If you want serious pollen removal for a large area, the EJ120 Air Purifier, which uses a medical-grade HEPA filter, is perfect for your needs. It removes 99.99% of allergens and particulates and can be used for large and small areas. From the office to the home, this is an effective air purifier for many different needs.
With a large selection of leading air purifiers, Oransi has the right products to help reduce and alleviate your hay fever symptoms.
Beating hay fever takes a comprehensive approach, so make sure your purifier meets a high standard by contacting Oransi today!
|
Holes in flower leaves usually indicate insect pests rather than disease, which tends to cause spots on the leaves or dropping leaves. Holes are caused by insects with chewing mouthparts, such as caterpillars and beetles. Before you rush for a can of insecticide, though, consider that each insect is part of a larger ecosystem. Those bugs that eat your plants probably feed birds and frogs. In most cases, insects feed for a few weeks on plants and then move on. If you take a wait and see approach, the problem might resolve itself. In more serious cases, try a few natural or cultural strategies before resorting to chemicals.
Identify the problem. In most cases, holes in the leaves of your flowers mean insect pests, such as caterpillars or slugs. Look on the undersides of leaves for insects or inspect the ground for other telltale signs. Caterpillars, for example, leave green fecal pellets, while slugs and snails leave a shiny trail.
Select a treatment based on your findings. For example, handpick caterpillars and drop them in soapy water, or treat them with Bacillus thuringiensis (Bt), a soilborne bacterium that stops caterpillars from feeding and eventually destroys them. Milky spore disease controls Japanese beetles. Treat slugs and snails with a commercial product containing iron phosphate, or handpick them at night. Spreading sand on the soil around the plants might also deter slugs and snails, according to "Sunset" magazine.
Remove weeds, dead plant debris and webs from around your flowers. By removing shelter for bothersome insects, they'll often leave on their own.
Grow a variety of flowers in your garden. Most insects prefer certain flowers to others. By increasing the diversity in your garden, you reduce the amount of damage one type of insect can cause. Growing a diversity of plants also encourages beneficial insects and predatory animals, such as ladybugs, praying mantises, birds, frogs and snakes. These animals find shelter in your flower garden and eat the insects.
Things You Will Need
- Soapy water
- Bacillus thuringiensis
- Milky spore disease
- Iron phosphate
- You probably can't prevent all insect damage, but healthy plants are better able to survive attacks. Keep flowers healthy by planting them in the right conditions. Most flowers prefer moderate moisture and rich, well-draining soil.
- If insects constantly attack a particular flower in your garden, consider replacing that plant with one that is less desirable to pests.
- Select flowers that are known to attract beneficial insects and thrive in warm, coastal climates. A few to try include coreopsis (Coreopsis spp.), yarrow (Achillea) or cosmos (Cosmos bipinnatus cvs.), all hardy in U.S. Department of Agriculture plant hardiness zones 4 through 9. Cosmos is hardy to zone 11.