Following is an explanation of the difference between Asperger's Syndrome and Autism.
Asperger's Syndrome exists as part of the Autism spectrum but differs from classic Autism in the early development of language.
Here's a brief explanation of the two conditions and what the main differences are.
Autism is a spectrum disorder – which basically means that the signs and the severity of symptoms can vary significantly in each person. It usually begins at an early age (before 3) and causes delays to the normal development of social and emotional skills.
The main areas in which autism symptoms can be seen are:
- Communication – both verbal and non-verbal, such as eye contact, facial expressions and body language.
- Social Behaviors – people with autism struggle with expressing emotions, relating to other people's emotions, and holding conversations. They have a tendency to withdraw from social interaction (but not always) and can over-react to what we would consider a normal situation.
- General Behaviors – repetition of actions, phrases, and routines is common, as is following strict organizational patterns. The routine tends to make them feel safe.
People with autism can also display abnormal sensory perception. For example, a normal volume noise may seem extremely loud and even painful to them.
Physical interaction can also cause problems for children with autism. They may dislike the feeling of being touched or will only allow themselves to be hugged in a certain way. These children also tend to favor rigid objects and toys such as metal cars rather than soft toys like teddy bears, and some even show pain from touching a stuffed animal.
Smells may also cause problems. For example, scents that are pleasant to you and me may cause those with autism to gag.
It is absolutely not true that all individuals who develop autism show retardation. Many are quite brilliant, for example, Autistic Savants.
People with Asperger’s Syndrome display autistic characteristics like obsessive behaviors or lack of social and communication skills. And like autism, the level and severity of these signs will vary from person to person.
They do not show delayed skills. In fact, one of the symptoms of Asperger’s Syndrome is having a normal IQ. As a result, those with Asperger’s are sometimes called “higher-functioning” autistics.
Asperger's is also usually noticed at a later age, with social and communication problems less severe than with autism. Verbal IQ tends to be higher than performance IQ, and clumsiness is more common.
People with Asperger’s Syndrome usually have good language skills – However, their use of language can be awkward and speech patterns can be unusual, without inflection or changes in pitch or tone.
The subtleties of language, such as irony and humor, can be lost on someone with Asperger's, and they may struggle to understand how a conversation should flow. They do not understand metaphors in language and are known to take things literally; for example, "Hop up on the bed" will cause them to literally get up on the bed and start hopping.
It is hard to generalize about ASDs, but two main differences between Autism and Asperger's seem to be:
1. People with Asperger's tend to have a normal or sometimes a high IQ.
2. There is no speech delay in people with Asperger's. Yet there is something about their speech which is "different" and has been described as "wooden" or "monotone".
Understanding and knowing how to manage Asperger's and autism is one of the most important steps in rebuilding relationships.
When you’re trying to fix your relationship but your emotions are out of control, you will always end up fighting. It’s time to get some professional help.
You can change this today.
I can help you to:
- Grow your emotional skills – emotional skills are far more important than any functional skill in achieving a high level of peace and calm within oneself.
- Know your emotional style – your emotional style affects how you react in emotional situations.
- Understand your emotional brain – learn how your brain affects your personal emotions.
So what is a thyristor?
A thyristor is a high-power semiconductor device, also referred to as a silicon-controlled rectifier. Its structure contains four layers of semiconductor material, forming three PN junctions, with terminals corresponding to the anode, cathode, and control electrode (gate). These three terminals are the critical parts of the thyristor, allowing it to control current and perform high-frequency switching operations. Thyristors can operate under high voltage and high current conditions, and an external signal can maintain their conducting state. Therefore, thyristors are commonly used in various electronic circuits, such as controllable rectification, AC voltage regulation, contactless electronic switches, inverters, and frequency conversion.
In circuit diagrams, the thyristor is usually represented by the text symbol "V" or "VT" (in older standards, the letters "SCR"). Derivatives of the thyristor include fast thyristors, bidirectional thyristors, reverse-conducting thyristors, and light-controlled thyristors. The conduction condition for a thyristor is that, when a forward voltage is applied, the gate must also receive a trigger current.
Characteristics of thyristor
- Forward blocking
As shown in Figure a above, a forward voltage is applied between the anode and cathode (the anode is connected to the positive pole of the power supply and the cathode to the negative pole), but no forward voltage is applied to the control electrode (i.e., switch K is open), and the indicator light does not light up. This shows that the thyristor is not conducting and has forward blocking capability.
- Controllable conduction
As shown in Figure b above, when K is closed and a forward voltage is applied to the control electrode (referred to as triggering; the applied voltage is known as the trigger voltage), the indicator light turns on. This shows that the thyristor's conduction can be controlled.
- Continuous conduction
As shown in Figure c above, once the thyristor has been triggered, even if the voltage on the control electrode is removed (that is, K is opened again), the indicator light still glows. This shows that the thyristor continues to conduct. At this point, to turn off the conducting thyristor, the power supply Ea must be switched off or reversed.
- Reverse blocking
As shown in Figure d above, although a forward voltage is applied to the control electrode, a reverse voltage is applied between the anode and cathode, and the indicator light does not light up. This shows that the thyristor is not conducting and has reverse blocking capability.
- In conclusion
1) When the thyristor is subjected to a reverse anode voltage, it is in a reverse blocking state regardless of the gate voltage.
2) When the thyristor is subjected to a forward anode voltage, it conducts only when the gate also receives a forward voltage. This is the thyristor's defining, controllable characteristic.
3) Once the thyristor is conducting, it remains on as long as a sufficient forward anode voltage is present, regardless of the gate voltage. That is, after the thyristor turns on, the gate loses its control function; it serves only as a trigger.
4) Once the thyristor is on, it turns off when the main circuit voltage (or current) decreases to close to zero.
5) The condition for the thyristor to conduct is that a forward voltage is applied between the anode and the cathode, and an appropriate forward voltage is also applied between the gate and the cathode. To turn off a conducting thyristor, the forward voltage between the anode and cathode must be removed, or the voltage must be reversed.
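Taken together, these five rules describe a simple latching behavior. The sketch below is a minimal, purely illustrative Python model of that behavior; the class name, its interface, and the holding-current figure are assumptions for demonstration, not values from the text.

```python
# Minimal behavioral sketch (not a physics simulation) of the five rules above.
# The holding-current value and the interface are illustrative assumptions.

class Thyristor:
    def __init__(self, holding_current=0.05):
        self.holding_current = holding_current  # minimum anode current that keeps it on
        self.conducting = False

    def step(self, forward_anode_voltage, gate_triggered, anode_current):
        if not forward_anode_voltage:
            # Rule 1: with a reverse (or no) anode voltage the device blocks, whatever the gate does
            self.conducting = False
        elif not self.conducting:
            # Rule 2: a forward anode voltage alone is not enough; the gate must trigger
            self.conducting = gate_triggered
        elif anode_current < self.holding_current:
            # Rule 4: the on-state is lost when the anode current falls toward zero (below holding)
            self.conducting = False
        # Rule 3: otherwise it stays on, regardless of what the gate does now
        return self.conducting

scr = Thyristor()
print(scr.step(True, gate_triggered=True,  anode_current=1.0))   # True  - triggered on
print(scr.step(True, gate_triggered=False, anode_current=1.0))   # True  - latched on without gate
print(scr.step(True, gate_triggered=False, anode_current=0.01))  # False - current below holding value
```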
Working principle of thyristor
A thyristor is a special three-junction device composed of three PN junctions. It can be regarded as the equivalent of a PNP transistor (BG2) combined with an NPN transistor (BG1).
- When a forward voltage is applied between the anode and cathode of the thyristor without a forward voltage on the control electrode, the thyristor remains off even though both BG1 and BG2 have forward voltage applied, because BG1 has no base current. When a forward voltage is then applied to the control electrode, BG1 receives a base current Ig. BG1 amplifies this current, producing a current ß1Ig at its collector. This current is precisely the base current of BG2. After amplification by BG2, a current ß1ß2Ig appears at the collector of BG2. This current is fed back to BG1 for amplification and then to BG2 for amplification again. Such repeated amplification forms strong positive feedback, quickly driving both BG1 and BG2 into saturated conduction. A large current appears at the emitters of the two transistors, that is, at the anode and cathode of the thyristor (the magnitude of the current is actually determined by the size of the load and of Ea), so the thyristor is fully turned on. This conduction process is completed in a very short time.
- After the thyristor is triggered on, its conducting state is maintained by the positive feedback within the device itself. Even if the forward voltage on the control electrode disappears, it remains in the conducting state. Therefore, the role of the control electrode is only to trigger the thyristor to turn on; once the thyristor is conducting, the control electrode loses its function.
- The only way to turn off a conducting thyristor is to reduce the anode current so that it is insufficient to sustain the positive feedback process. This can be done by switching off the forward power supply Ea or reversing its connection. The minimum anode current required to keep the thyristor in the conducting state is known as the holding current. Thus, whenever the anode current falls below the holding current, the thyristor turns off.
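The regenerative action described above can also be illustrated numerically: each trip around the BG1-BG2 loop multiplies the current by ß1ß2, so any loop gain greater than one makes the current grow until the transistors saturate. A rough sketch follows, with made-up gain and current values chosen only for illustration.

```python
# Illustrative only: iterate the BG1 -> BG2 -> BG1 current loop described above.
# The gains (beta1, beta2) and the starting gate current are assumed values.

def regenerative_loop(gate_current, beta1, beta2, passes):
    current = gate_current
    history = [current]
    for _ in range(passes):
        current *= beta1 * beta2   # one trip around the two-transistor loop
        history.append(current)
    return history

# With a loop gain beta1*beta2 > 1 the current grows on every pass, which is why
# the thyristor latches fully on once triggered (limited in practice by the load and Ea).
print(regenerative_loop(0.001, 5.0, 5.0, 4))  # e.g. [0.001, 0.025, 0.625, 15.625, 390.625]
```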
What exactly is the distinction between a transistor and a thyristor?
Transistors usually consist of a PNP or NPN structure composed of three layers of semiconductor material.
The thyristor is composed of a four-layer PNPN structure of semiconductor material, with an anode, cathode, and control electrode (gate).
A transistor relies on electrical signals to control its opening and closing, allowing fast switching operations.
A thyristor requires a forward voltage and a trigger current at the gate to turn on; once on, it stays on until the anode current falls below the holding current.
Transistors are commonly used in amplification, switches, oscillators, and other facets of electronic circuits.
Thyristors are mainly utilized in electronic circuits like controlled rectification, AC voltage regulation, contactless electronic switches, inverters, and frequency conversions.
Method of working
The transistor controls the collector current by varying the base current, achieving current amplification.
The thyristor is turned on by controlling the trigger voltage at the control electrode, realizing a switching function.
Thyristor circuit parameters emphasize stability and reliability; thyristors typically have a higher blocking voltage and a larger on-state current.
To summarize, although transistors and thyristors can sometimes be used in similar applications, their different structures and working principles give them noticeable differences in performance and in the occasions where they are used.
Application scope of thyristor
- In power electronic equipment, thyristors can be used in frequency converters, motor controllers, welding machines, power supplies, etc.
- In the lighting field, thyristors can be used in dimmers and light control devices.
- In induction cookers and electric water heaters, thyristors can be used to control the current flow to the heating element.
- In electric vehicles, thyristors can be used in motor controllers.
PDDN Photoelectron Technology Co., Ltd is a reputable thyristor supplier. It is one of the leading enterprises in the home accessory and solar power system sector, fully engaged in the development of the power industry, intelligent operation and maintenance of power plants, and the manufacture of solar panels and related solar products.
It accepts payment via credit card, T/T, Western Union, and PayPal. PDDN will ship the goods to customers overseas through FedEx, DHL, by air, or by sea. If you are looking for high-quality thyristors, please feel free to contact us and send an inquiry.
Lesson Topics: the history of Halloween, inclusivity, cultural appropriation
Skill Focus: Speaking, Listening, Vocabulary
Approximate Class Time: 1.5 hours
Lesson Plan Download: Sample Lesson (PDF), Member Download (DOCX)
- After warm-up questions, students do a pre-listening vocabulary activity.
- Next, students watch a short 3:11-minute video by National Geographic on the History of Halloween. The video should be suitable for B2/C1/C2 level learners.
- After a listen-and-recall activity, students answer comprehension questions, do a vocabulary-matching activity, and then create questions using the new vocabulary.
- The first speaking activity is a debate about an elementary school in Wisconsin that prohibited students from wearing costumes to school due to reasons related to inclusivity.
- The lesson has two roleplays. The first is between family members who are debating whether they should decorate and participate in the Halloween festivities or not. The second roleplay relates to potentially inappropriate costumes.
- After a final vocabulary review, the lesson closes with final discussion questions and a review of collocations.
UPPER-INTERMEDIATE (B2/C1) Lesson Plan on the History of Halloween
- What are your favorite holidays throughout the year?
- Why do we celebrate Halloween? What’s it all about?
- Do you know of any festivals or celebrations in other cultures that are similar to Halloween?
Pre-Listening Vocabulary Matching
Match the words with their meaning as used in the article.
1. prank (n)
2. patchwork (adj)
3. veil (n)
4. frown on (sth)
5. ritual (n)
6. morph into sth (v)
7. bonfire (n)
8. vandalism (n)
9. trash sth (v)

a. a large outdoor fire, often used in a celebration
b. a religious ceremony involving a series of actions done in a fixed order
c. a practical joke / trick played on someone
d. to damage or wreck something
e. action intended to purposefully damage someone’s property
f. to gradually change into something else
g. a thing composed of many different parts
h. a thing used to cover or conceal something
i. to disapprove of something
Pronunciation: Repeat the above phrases with your teacher, stressing the underlined syllable.
Video: Halloween History from National Geographic (3:11)
Note: The festival “Samhain” is pronounced as “Sow-win.”
- … from communion with the dead, to pumpkins and pranks, Halloween is a patchwork (0:10)
- It was the time when the veil between death and life was supposed to be at its thinnest. (0:48)
- … the villagers gathered and lit huge bonfires to drive the dead back to the spirit world. (0:59)
- But as the Catholic Church’s influence grew in Europe, it frowned on the pagan rituals like Samhain (1:05)
- … the night was All Hallows Eve, which gradually morphed into ‘Halloween’ (1:50)
- But over the years, the tradition of harmless tricks grew into outright vandalism. (2:20)
- It was originally an extortion deal: give us candy or we’ll trash your house. (2:40)
Comprehension: Watch, Recall, Retell: Retell the video’s main ideas to a partner in your own words.
1) What were the Celts celebrating on a festival called Samhain?
2) What did they believe happened on Samhain? What did they do on the holiday?
3) What was November 1st known as?
4) Idiom: “It was a calculated move, on part of the church, to bring more people into the fold.” What does the idiom bring into the fold mean?
5) Who brought Halloween to America?
6) What was different about the holiday in the 1930s?
Replace the underlined phrase with a word from the pre-listening vocabulary list on page 1. Then ask the questions to a partner.
- you / ever / do / a practical joke / on someone?
- Why / people / disapprove of / Halloween?
- your city / a thing composed of different parts / of different cultures?
- someone / ever / damage / your property?
Speaking Activities: Debate!
Background: In 2017, Hillcrest Elementary School in Wisconsin decided not to let children wear costumes at school. The school explained, “We want to be inclusive of all families including those families who don’t celebrate Halloween or find purchasing a costume a hardship.” (Video 1:59)
One parent, Crystal Landry, commented: “I just think it’s sad…. It just kind of seems the way society is going... It just kind of seems silly to take it all away.”
Task: Choose a role and spend a few minutes preparing. The School Board member will go first.
Crystal Landry, Parent: You don’t believe that canceling Halloween is the right choice. Express your disappointment. Also, share reasons and/or personal experiences highlighting the positives of celebrating Halloween in school. Finally, offer possible solutions to solve the school’s problems.
Key expression: While I understand the concern, ...

School Board member: You are happy that students cannot celebrate Halloween at school. There were a few reasons why the holiday was a problem: 1) some students from other cultures did not celebrate it, 2) it is expensive for some families to get a costume, and 3) it takes a lot of time for teachers to get their students in and out of costumes. Explain to the other parent, Crystal Landry, why this decision is the right one.
Key expression: As a school board, we frown on activities that could exclude students from different communities.
Speaking Practice: Role-plays (Each person only reads his/her role.)
Partner A: You and your partner have just moved into your first house together. Next week is Halloween. You want to decorate your house and give out candy. Think of reasons why participating in the Halloween tradition is important.
Key expression: If we don’t give out candy, the kids might trash our house.

Partner B: You and your partner have just moved into your first house together. Next week is Halloween. You do not want to decorate your house or give out candy. Think of reasons why you should not participate in the Halloween tradition.
Key expression: Halloween has morphed into a purely commercial holiday…
Situation: You and your friend are out shopping for Halloween costumes.
Friend A: You have found two costumes that interest you: ninja or Cleopatra. Your friend doesn’t think you should choose one of these costumes because they represent other cultures, and therefore, are ‘cultural appropriation’. You don’t agree, however.
Key expression: It's all in good fun. No one takes these costumes seriously.

Friend B: Your friend wants to dress up as either a ninja or Cleopatra. You don’t think these are appropriate costumes, however, because they represent other cultures and are therefore ‘cultural appropriation’. Talk with your friend. Suggest a different costume.
Key expression: Even in festivities, it’s important to respect and honor other cultures.
Vocabulary Review: Insert a word from today’s lesson into the appropriate blank.
vandalism / trash / ritual / veil / patchwork / pranks / frown on / morph / bonfire
1. ...from communion with the dead, to pumpkins and _________ , Halloween is a _________ holiday.
2. It was the time when the _________ between death and life was supposed to be at its thinnest.
3. … the villagers gathered and lit huge _________ to drive the dead back to the spirit world.
4. But as the Catholic Church’s influence grew in Europe, it _________ the pagan _________ like Samhain.
5. the night was All Hallows Eve, which gradually _________ into ‘Halloween.’
6. But over the years, the tradition of harmless tricks grew into outright _________ .
7. It was originally an extortion deal: give us candy or we’ll _________ your house.
Collocations: Match each item with its partner.
1. frown
2. morph
3. bring people into
4. It’s all

a. into
b. in good fun
c. the fold
Final Discussion Questions
1. Is it possible to communicate with the dead?
2. Is the way people celebrate Halloween today different from its original intention or history?
3. What are your favorite scary movies to watch at Halloween time?
4. Is there any way to make Halloween healthier in North America?
-- Lesson plan on the history of Halloween written by Matthew Barton of EnglishCurrent.com (copyright). Site members may photocopy and edit the file for their classes. Permission is not given to rebrand the lesson, redistribute it on another platform, or sell it as part of commercial course curriculum. For questions, contact the author.
Answers to Comprehension Questions:
- They were celebrating the end of the harvest season.
- They believed that the ghosts of the dead walked the earth because it was a time between years. On this holiday, they had large fires to scare the ghosts of the dead away.
- All Saints Day
- To include more people
- The wave of Irish immigrants brought it to North America.
- It was more dangerous because of the level of vandalism.
Vocabulary Answers: 1-c, 2-g, 3-h, 4-i, 5-b, 6-f, 7-a, 8-e, 9-d
Vocabulary Review Answers: see page 1
Collocation Answers: 1-e, 2-a, 3-c, 4-b, 5-d
Animals in Action
Life Science Activities for Grades
Written by Katharine Barrett
The classroom corral used in this unit serves to focus attention
on the animals gently placed inside, allowing students to observe
the behavior of a gerbil, the movements of a cricket, and other
compelling characteristics of animals in action.
Observing the contained animals closely, the class adds foods,
shelters, and other elements to the "corral environment,"
exploring the concepts of stimulus and response. As other questions
are introduced (How do animals move? What do they prefer
to eat? How do they respond to light and sound?), students
generate hypotheses and test their validity through behavior
experiments with rats, crickets, guinea pigs, cardboard boxes,
and common classroom objects. The class concludes the unit with
a scientific convention to discuss findings.
This unit makes an excellent connection to the GEMS guide Aquatic
Habitats, another unit in the GEMS series that
explores animal behavior, habitat, and conservation.
Time: Five 45-minute sessions, plus follow-up sessions.
Comment on this GEMS unit.
Spanish Language Student Materials
What materials are needed to present this unit? See the full list.
South Australia was one of the first colonies to adopt parliamentary government. Its bicameral parliament operates under responsible cabinet government.
In 1894, South Australia became the first state in the world to give women the vote. Since then, the state has been at the forefront of social reform. It has also been an industrial powerhouse.
The House of Assembly
The House of Assembly is the lower house of South Australia’s parliament. It consists of 47 members representing individual electoral districts, and is voted on by citizens at state elections every 4 years. The political party or coalition of parties that wins most seats forms the government and the leader of that government becomes the Premier of South Australia.
The legislative process starts in the House of Assembly, where legislation is introduced and debated. Once it is passed by the House of Assembly, it is sent to the Legislative Council for consideration. Legislation must be approved by both houses to become law.
South Australia has a long history of social experimentation, including its early commitment to religious toleration that attracted a disproportionate number (though never a majority) of Nonconformists. It also pioneered a secret ballot for elections in 1855, and granted women the vote in 1894—long before most other nations did so. It also made voting compulsory in 1942.
While South Australia remained relatively small, its politicians saw many advantages for the state in joining a federation of states, and it played a prominent role in discussions and conventions leading up to Federation. Some of its delegates to the Federation Conventions were considered among the most capable and senior politicians in the country.
In the early 1900s, South Australia had a vibrant economy, with strong agricultural and mining sectors, a healthy manufacturing sector, and a sophisticated steel industry that produced naval submarines. During the 1980s and ’90s, however, the state experienced slower economic growth than other parts of the country and higher unemployment. This was despite the implementation of debt-reduction policies and efforts to diversify the state’s economy into information technology, grain exports, and tourism.
The South Australian government is responsible for many state functions, including primary and secondary education, hospitals, public housing, prisons and police, roads, water supply, and land resources. It has limited control over the state’s finances, and its Commonwealth (federal) government has substantial de facto influence through specific-purpose financial grants to the state. The federal government’s core responsibilities include defense, foreign policy, trade and economic policy, immigration, welfare payments, customs and excise, postal services, and shipping.
The Legislative Council
The Legislative Council is South Australia’s upper house of parliament. Its members are elected for an eight-year term, with half of the 22 seats declared vacant at each state election. Unlike the lower House of Assembly, there is no formal party system in the Legislative Council, although MPs tend to have historical liberal or conservative beliefs. It is not unusual for non-Labor parties to join with a government in order to pass legislation, and the Legislative Council’s power to block legislation is considerable.
Until the mid-1890s progress toward popular representation in the House of Assembly was fitful, with the Women’s Suffrage League led by Mary Lee and Mary Colton, and well-known social reformer Catherine Helen Spence leading the charge for female suffrage. It was only in 1894 that full adult suffrage was achieved, with property qualifications abolished and payment of MPs introduced.
By the 1930s, the emergence of South Australia’s natural gas fields and the discovery of the extraordinary ore bodies in Olympic Dam made it an industrial giant, and the state’s economic outlook improved. The new prosperity brought with it a sense of civic duty to protect the environment, and a resurgence in support for the parliamentary system of government. The Labor Party, much strengthened by the long-term impact of electoral reform, redefined its relationship with the trade unions while appealing to the middle strata of society for electoral support. It was emulated by a reunited Liberal Party and by the small coalition called the Australian Democrats that held the balance of parliamentary power several times in the 1980s.
The judiciary is based on the English Westminster system, with the Supreme Court hearing the most serious civil and criminal cases and the District and Magistrates Courts handling lesser matters. There are also Youth and Children’s Courts, and courts of summary jurisdiction.
The Cabinet is the key decision-making body of the Government. It is led by the Premier and consists of ministers chosen to oversee specific policy areas such as Health, Transport or Education. Cabinet members meet twice a week – on Monday and Thursday mornings. The Monday Cabinet considers the day to day business of government through formal Cabinet papers, while the Thursday Cabinet allows ministers to discuss strategic issues from a whole-of-government perspective.
South Australia has a bicameral parliament with the House of Assembly (Lower House) and the Legislative Council (Upper House). Both houses are elected at state elections. The political party or coalition of parties that wins the most seats forms government and its leader becomes the Premier. Members of the House of Assembly represent single-member electoral districts and are elected for a four year term. The House of Assembly is where most legislation starts, with bills passed after being discussed and voted on in the House. The Legislative Council is an upper house that reviews legislation that has been passed by the House of Assembly.
Parliamentary systems in Australia are based on Westminster models and operate under the principles of responsible Cabinet government. This means that the Governor acts on the advice of the Ministers, headed by the Premier. In South Australia, the Cabinet is known as the Executive Government and its members are referred to as Ministers.
The Cabinet is supported by a number of departments that provide services such as the police, fire service and hospitals. In addition, the state has a tertiary education system with universities such as Flinders University and the University of Adelaide offering courses ranging from TAFE and community college level to research and professional degrees.
South Australians have a long history of supporting Federation and the aims of the Commonwealth of Australia. The state’s representatives in the 1897-98 Constitution Convention, led by the Premier Charles Kingston, argued passionately for women to be granted voting rights at the federal level and helped convince the other colonies that they needed to limit the power of the Senate.
The state’s bicameral Parliament consists of the 47-member House of Assembly (lower house) and 22-member Legislative Council (upper house). The electoral system is based on a single-member district, full preferential voting, and compulsory enrolment. Members of both houses are elected at a state election every four years. Casual vacancies are filled by by-elections. The political party or coalition of parties that wins the most seats forms government, with its leader becoming the Premier of South Australia.
The decisive intervention of the British government during the financial crisis of 1841-42 curbed many of the experimental pretensions that accompanied the arrival of European colonists, but after that there was a renewed zeal for reform. Partial self-government was introduced in 1851, and the Legislative Council was reformed to include two-thirds elected membership.
South Australia remained a peculiarly image-conscious colony, especially in its commitment to culture. This image largely depended on the affluence linked to industrialization, and it was nurtured by the high level of effective protection given local motor vehicle, household appliance, shipping, and electrical goods industries.
In terms of progressive social policies, however, progress toward equal enfranchisement was fitful. Although it took a while to extend the vote to Indigenous men, it did so in the 1850s; and women gained the right on the same terms as other British subjects in 1894, though property qualifications and bureaucratic interpretation still occasionally conspired against them.
The state was one of the first to abolish slavery, and its constitution provided a model for the other Australian colonies. In addition, the state was one of the first to establish public hospitals and to give workers the legal right to form trade unions.
The state’s political fortunes have fluctuated over the years, but it remains a wild card at Federal elections. The 2022 state election is expected to be close, and Labor has a good chance of winning the marginal Federal seat of Boothby. The success of the Nick Xenophon Team at the double dissolution election of 2016 means that two of the state’s six-year Senate terms will be up for grabs.
The Moon slides by the tail of the lion tonight. Denebola, the star that represents the tail, stands to the left of the Moon as night falls, and to the upper right of the Moon as they set, about an hour before dawn.
Denebola is about 36 light-years away, which makes it a close neighbor. It’s almost twice as big and heavy as the Sun, and about 15 times brighter. And it’s pretty young as stars go — about one-tenth as old as the Sun.
Like a few other stars in its age and size range, Denebola is encircled by a wide disk of dust. Some of the tiny particles that make up the disk probably are left over from the cloud that gave birth to the star. Others may be debris from collisions between larger chunks of material — the size of asteroids or bigger.
No one has detected planets amidst this debris. But astronomers have found gaps in the disk that could have been cleared out by the gravity of orbiting planets. If planets do exist, they probably formed as the dust grains stuck together to form bigger and bigger bodies — the same way our own Earth took shape.
New planet-hunting instruments could someday snap pictures that are sharp enough to actually see planets around Denebola — worlds orbiting the lion’s tail.
Denebola is at the lower left of the triangle of stars that forms the lion’s hindquarters. It’s among the few dozen brightest stars in the night sky, so it’s easy to find even through the glare of the nearby Moon.
Script by Damond Benningfield
Food sustainability and climate change are closely intertwined. As the global population continues to grow, there is an increasing demand for food production, leading to more energy and resources being used to produce food. As a result, the carbon emissions created by this increased production are contributing to the effects of climate change.
At the same time, climate change can also have a direct impact on food production. Higher temperatures, extreme weather events, and rising ocean levels can all have a damaging effect on crops and livestock, leading to a decrease in the amount of food available. This decrease in food availability can lead to food insecurity in many communities, as well as a decrease in the overall quality of available food.
In order to combat both the effects of climate change and food insecurity, it is essential that we take steps towards creating a more sustainable food system. This includes implementing sustainable farming practices such as crop rotation, reducing food waste, and investing in renewable energy resources. These steps will help to ensure that we continue to have access to healthy, nutritious food while also reducing our impact on the environment.
The Impact of Climate Change on Health and Well-being
Climate change is one of the greatest threats to human health and wellbeing in the 21st century. It is caused by an increase in greenhouse gases in the atmosphere, which traps the sun’s heat and causes temperatures to rise. This leads to changes in weather patterns, sea levels, and ocean temperatures, all of which can have a direct impact on human health and wellbeing.
The most direct impact of climate change on health is through extreme weather events, such as floods and heat waves. These can cause injury and death through drowning, heat exhaustion, or other diseases caused by exposure to contaminated water or air. Heat waves can also lead to an increase in air pollution, which can result in respiratory and cardiovascular diseases.
In addition to extreme weather events, climate change can also lead to an increase in vector-borne diseases, such as malaria, dengue fever, and Lyme disease. Warmer temperatures can create ideal conditions for these diseases to spread, as mosquitoes and other insects thrive in warmer climates.
The effects of climate change are not limited to physical health, but can also have an impact on mental health. Stress, anxiety, and depression can all be triggered by environmental changes, such as extreme weather events or displacement due to flooding or other disasters.
Climate change is also having an effect on food production, leading to shortages of food and water in some areas. This can result in malnutrition, which can lead to long-term health problems such as stunted growth and weakened immune systems.
The effects of climate change are far-reaching and will continue to have a serious impact on human health and wellbeing in the future. It is essential that we take action now to reduce greenhouse gas emissions and limit the effects of climate change on human health.
Exploring the Role of Agriculture in Climate Change
Climate change is an issue that affects the entire world, and agriculture is one of the sectors with a major role to play. As temperatures rise and weather patterns become increasingly unpredictable, it is essential for the agricultural industry to adjust and adapt to the changing climate.
The agriculture industry is made up of a wide variety of sectors, including crop production, livestock farming, and fisheries. These sectors are essential for providing food, fiber, and fuel to society and are highly sensitive to changes in climate. As temperatures rise, crop yields can be affected by extreme weather events such as droughts, floods, and heat waves. These events can also disrupt livestock and fisheries, leading to reduced productivity and increased costs for farmers.
It is not only the production side of agriculture that is affected by climate change. Transportation, storage, and distribution of food can be impacted by extreme weather events. For example, floods and storms can damage roads and other infrastructure, making it difficult to move products to market. In addition, higher temperatures can increase the risk of food spoilage, which can lead to food waste and lost profits for farmers and businesses.
In order to mitigate the effects of climate change, the agricultural industry must take steps to reduce its carbon footprint. This includes reducing the use of fossil fuels, increasing the use of renewable energy sources, and adopting more efficient farming practices. Farmers can also take steps to improve soil health, which can help reduce the amount of carbon dioxide released into the atmosphere and improve crop yields.
It is clear that the agricultural industry has an important role to play in addressing climate change. In order to ensure a sustainable future, it is essential for the industry to work together to reduce its carbon footprint and adapt to the changing climate.
How Science Can Help Us Adapt to Climate Change
As the world continues to face the effects of climate change, it is important to understand how science can help us adapt to the changing climate. Scientists are working to understand the dynamics of climate change and how it will affect us in the future. By studying the global climate, they can develop models that predict the effects of climate change on different regions of the world.
These models can help us identify which areas, regions, and countries are most vulnerable to the effects of climate change. This information can help us prepare for the potential impacts of climate change in those areas and make informed decisions about how to best manage the resources in those areas.
In addition to understanding how climate change will affect different regions, scientists are also working to develop technologies that can help us adapt to climate change. One example of this is the development of renewable energy sources, such as solar, wind, and hydropower. These sources of energy can help reduce the amount of fossil fuels we use, which will help reduce the amount of greenhouse gases that are released into the atmosphere.
Scientists are also looking into ways to reduce the amount of water that is lost through evaporation and other processes. By developing more efficient irrigation systems and using other methods to reduce water loss, we can help conserve water and reduce the strain on our water resources.
Finally, scientists are also looking into ways to reduce the amount of carbon dioxide that is released into the atmosphere. One example of this is the development of carbon capture and storage technologies, which can help capture and store carbon dioxide from burning fossil fuels. This can help reduce the amount of carbon dioxide that is released into the atmosphere, helping to reduce the effects of climate change.
By understanding the dynamics of climate change and developing technologies that can help us adapt to it, science can help us to better prepare for the effects of climate change and make sure that our resources are used in the most efficient manner possible.
What is a friction disc? Friction discs, sometimes also called clutch discs or brake discs, are elements of the common disc brake. Their purpose is to slow or completely stop the motion of drive shafts, so that they may in turn slow or stop the rotation of the wheels.
What does a friction disc do on a snowblower? The friction disc helps propel the snowblower forward. When the snowblower friction disc is in contact with the spinning drive wheel, the friction disc rotates and turns the axle. If the friction disc is worn or damaged, the snowblower moves slowly or erratically.
What is a friction disc used for? The friction discs are used to slow or stop the motion of drive shafts which will in turn stop the wheels from rotating. As pressure is applied to the brake pedal, the calipers cause the discs to close around the rotors.
What is a friction disc transmission? A friction drive or friction engine is a type of transmission that uses two wheels to transfer power from the engine to the driving wheels. The system is naturally a continuously variable transmission: by moving the positions of the two discs, the output ratio changes continuously.
What is a friction disc? – Related Questions
How long does a friction disc last?
A friction disk will last 30+ years.
What is a friction drive list examples?
Examples of friction drive: Belt drive. Rope drive.
How does a friction disc work?
In a friction-disc drive, a driven (vertical) disc is pressed against a spinning horizontal drive plate, and its speed depends on how far from the plate's center it makes contact. As the vertical disc passes the center position and moves to the right of center, its rotational direction reverses and it rotates the opposite way. Its speed also increases as the vertical disc is moved toward the right edge.
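A rough way to see this numerically: the output speed is proportional to how far from the drive plate's center the disc makes contact, and the direction flips on the other side of center. The sketch below uses assumed dimensions and plate speed purely for illustration.

```python
# Illustrative only: friction-disc drive output speed vs. contact position.
# All dimensions and the drive-plate speed are assumed values.

def output_rpm(plate_rpm, offset_from_center_cm, driven_disc_radius_cm):
    # The plate's surface speed at the contact point drives the rim of the driven disc.
    return plate_rpm * offset_from_center_cm / driven_disc_radius_cm

for offset in (-6, -3, 0, 3, 6):  # cm from the plate's center; negative = other side of center
    print(f"offset {offset:+} cm -> {output_rpm(1200, offset, 5.0):+.0f} rpm")
# The output reverses direction past center and speeds up toward the edge.
```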
Where is friction clutch used?
A friction clutch plate is used in vehicles to allow the transmission input shaft and engine to run at the same speed when rotating. The friction that is created between the engine and the transmission is what provides the force required to move the vehicle.
What is a friction clutch disc?
A clutch plate or disc is the main component in friction clutches. It is a metallic plate with friction surfaces on both sides. These friction surfaces should be made of a material with a high coefficient of friction, which ensures that torque is transmitted without any slipping.
What is friction clutch in purifier?
Purifier Drive: FRICTION CLUTCH ARRANGEMENT. It consists of a friction drum mounted on the horizontal shaft. Three friction pads are mounted on the motor shaft and contained within the friction drum. The friction pads have a curved surface with Ferodo lining.
What is a friction drive motor?
A Friction-Drive (FD) is an electric bike drive system that spins a roller that’s pressed against the bicycle’s tire. FDs have been around for over 100 years. Small gasoline engines (the size of a chainsaw) were used to drive a roller on a bicycle tire almost as soon as small gasoline engines were invented.
What are the types friction?
There are two main types of friction, static friction and kinetic friction. Static friction operates between two surfaces that aren’t moving relative to each other, while kinetic friction acts between objects in motion.
How do you adjust friction discs?
If it is a friction disc drive, there should be a way to adjust the disc's position relative to the center of the drive plate: closer to the center is slower, moving past the center to the other side gives you reverse, and moving farther out from the center gives you more speed.
What is dry friction?
Dry friction is the force that opposes one solid surface sliding across another solid surface. Dry friction always opposes the surfaces sliding relative to one another, and it can have the effect of either opposing motion or causing motion in bodies.
What is screw friction?
The concept of an applied force in the direction of impending motion works for either (1) a force applied in the impending motion direction of a screw, or (2) a force applied to the impending motion direction of a nut.
How does a friction clutch work?
Most cars use a friction clutch operated either by fluid (hydraulic) or, more commonly, by a cable. When a car is moving under power, the clutch is engaged. When the clutch is disengaged (pedal depressed), an arm pushes a release bearing against the centre of the diaphragm spring which releases the clamping pressure.
What is the principle of friction clutch?
PRINCIPLE OF CLUTCH: It operates on the principle of friction. When two surfaces are brought into contact and held against each other, the friction between them can be used to transmit power. If one is rotated, then the other also rotates.
How many types of clutches are there?
Clutches can be categorized into two main classifications: friction clutches and fluid flywheel. Friction clutches rely on the principle of friction.
What do clutches transmit?
Most automotive clutches are dry single-plate clutches with two friction surfaces. No matter the application, the function and purpose of a clutch is to transmit torque from a rotating driving motor to a transmission. Clutches require a mode of actuation in order to break the transmission of torque.
At what point is the clutch actually wearing?
The clutch only wears while the clutch disc and the flywheel are spinning at different speeds. When they are locked together, the friction material is held tightly against the flywheel, and they spin in sync. It’s only when the clutch disc is slipping against the flywheel that wearing occurs.
What are the components of a friction clutch?
The numerical simulation of the friction clutch system (pressure plate, clutch disc, and flywheel) during the full engagement period (assuming no slipping between contact surfaces) is carried out using finite element method.
Why is the starting current high in a purifier?
6. Note the current (amps) during starting. It goes high during starting; then, as the purifier bowl picks up speed and reaches the rated speed, the current drawn drops to the normal value.
Why is the purifier Bowl not closing?
Check the Level of Operating Water – If there is a separate operating water tank provided, check the water level in the same. If the operating water is not sufficient, the purifier bowl will not lift, resulting in the sludge ports remaining in the open position.
What is the friction between a drive wheel and the road surface?
“traction is the friction between a drive wheel and the road surface. If you lose traction, you lose road grip.”
What are the 3 types of friction?
The reason we are able to control cars at all is because of friction between the car’s tires and the road: more accurately, because there are three kinds of friction: rolling friction, starting friction, and sliding friction.
Water is essential for all life on Earth, and our children must understand the importance of water conservation. Teaching kids about the value of water can help them develop a sense of responsibility towards nature, while also giving them an appreciation for how much they have access to it. Not only will this knowledge benefit them in their day-to-day lives, but it could also shape their future decisions when it comes to protecting our planet’s resources. We’ll discuss six methods you can use to teach your kids about the importance of water and how they can use that knowledge in their daily lives.
Table of Contents
Why Teaching Them about Water Is Crucial
Not only does water make up 70% of the human body, but it’s also essential for all physical and biological processes. By teaching them about the importance of water we can help our children understand why it is so vital that they use resources responsibly. Water plays an important role in keeping us healthy – from providing us with clean drinking water to helping us grow food and sustaining whole ecosystems – so kids must learn how to use this resource sustainably. The video lessons from Generation Genius about water and ecosystems can be a great starting point. Ecosystems and food webs rely heavily on water, so teaching kids about this concept is key to developing an understanding of how they can protect the environment.
How Kids Can Use this Knowledge in Their Lives
There are several ways that kids can make use of their knowledge about water when it comes to their day-to-day lives. First and foremost, they should be encouraged to conserve wherever possible by turning off the tap when brushing their teeth or taking shorter showers than normal. Additionally, kids can take what they’ve learned and put it into action by getting involved in local cleanup projects or lobbying for changes that will help protect our planet’s waters. By engaging in these activities, they’ll not only feel empowered but also understand the importance of using resources responsibly.
1. Interactive Lessons
An interactive lesson about water can help your kids make the connection between what they’re learning and how it affects their own lives. Try out a few online lessons or have them create their own ‘water wheel’ to demonstrate how water cycles through the environment. You can also use games to teach them about the importance of water conservation and its impact on our environment. Additionally, make sure to emphasize the role of water in keeping us healthy.
2. Field Trips and Community Service
Nothing gets kids more invested than being out in the field or doing something hands-on. Take them on a field trip to a local park, lake, or river and explain how different organisms rely on water for survival – you can even organize something in your own garden. You can also look into ways they can give back to their community by volunteering in organizations that protect our watersheds, such as beach clean-ups or planting trees near rivers. Some of these activities can be done while social distancing too. Keep in mind that sometimes, even a small act can make a big impact on our environment.
3. Art Projects
Any type of project is an excellent way to get your kids to think more deeply about the importance of water and how it impacts our lives. Have them create collages, drawings, or paintings that represent different aspects of the water cycle and its importance in nature. This can also include writing stories or poems about the world’s most precious resource. In some schools, they may even have the opportunity to create installations or sculptures that portray their understanding of water conservation. Additionally, art projects can be a great way to explore the history of humanity’s relationship with water. By looking at artwork from different periods and cultures, your kids can gain a better appreciation for how water has been used in past societies. Not only will this increase their understanding of the importance of water, but it will also help them appreciate its value as a resource.
4. Reading and Discussion
Encourage your kids to read books and articles about water conservation and its impact on our environment. Ask them questions afterward to get them thinking more critically, such as what other areas require water protection. Or, why is fresh drinking water so important? You can also use educational videos, documentaries, and podcasts to help deepen the kids’ understanding of the topic – this will help foster their interest in the issue. For example, the National Geographic Kids channel has an entire section devoted to water conservation.
5. Make a Water Conservation Plan
Help your kids come up with a plan for conserving water in their daily lives. Have them consider what activities use the most water, like showering or washing dishes, and how they can reduce their consumption of this resource. Discuss simple ways to cut back on water usage such as using shorter showers, turning off the tap while brushing teeth, collecting rainwater, and reusing bath water to flush the toilet. This will help turn knowledge into actionable steps that kids can take to protect our environment.
6. Teach Them about Marine Life
Explain to your kids the importance of water in preserving marine life. Have them learn about different species that depend on clean and healthy water ecosystems, such as coral reefs. They might even have the chance to virtually explore a reef or watch videos about these habitats online. This will teach them about how humans can also play an active role in protecting our oceans by reducing plastic waste and conserving energy usage. You might even get your kids interested in joining an ocean conservation club to get more involved and make a real difference.
Teaching your kids about the importance of water is a great way to instill in them an appreciation for our planet and its environment. With this knowledge, they can learn to be responsible citizens so that future generations may enjoy healthy oceans and clean drinking water as much as we do now. So don’t forget to include lessons about the importance of water in your child’s education! It just might make all the difference.
The article that I have found this week talks about how this winter will be the lowest expansion of winter polar ice since records began 40 years ago. Records show that two million square kilometers of midwinter sea ice has disappeared. This could be due to global warming caused by carbon emissions from cars and factories and could have profound implications for the planet. According to the article, researchers believe that a loss of sea ice would mean a loss in reflectivity of solar rays which could raise global temperatures. Researchers now believe that the rise in sea ice loss is now posing threats to Arctic animal species. Many of the animals who live there are forced to travel north because of retreating ice caps. Researchers say that there is a limit to the distance in which these animals can tolerate. The melting of ice is disturbing the whole ecosystem of the Arctic in which these animals live and the food chain is being impacted. With animals being forced to travel northward it is disrupting the breeding process and the genetic wellbeing of populations.
Climate change is a prevalent global issue that urges both immediate attention and action. The Earth’s climate is experiencing massive changes due to human activities like burning fossil fuels, pollution, and deforestation, which lead to increased greenhouse gas emissions. The repercussions are immense, negatively impacting ecosystems, weather patterns,
Everyone has suffered from allergies in some form or other. Allergy is the body's response to an external stimulus. Our immune system is built to respond to foreign particles. These foreign substances need not always be harmful, but often our immune system's response is hyperactive. We call this hyperactivity allergy. The substance stimulating this response is an allergen. The body responds to the allergen with the antibody protein immunoglobulin E, or IgE. IgE binds to allergens and triggers the release of an organic nitrogenous substance, histamine, from protective mast cells and basophils. Histamines are known to be involved in 23 different physiological functions in our body. Histamines are also found in some natural foods, which may cause hyperacidity or allergy in humans. Most fermented foods are histamine-containing foods.
These allergens can come from food, air, contact, insects, or animal bites. Histamines cause inflammation and irritation in the body and result in various signs and symptoms. Some symptoms of allergies are listed below:
- Nose – swelling of the nasal mucus, running and stuffy nose, congestion
- Sinuses – inflammation, mucus, internal itching, pain
- Eyes – redness and itching
- Lungs / Chest – Sneezing, coughing, heaviness of breath, wheezing, sometimes outright attacks of asthma, choking
- Throat – swollen throat
- Ears – feeling of fullness, possibly pain, and impaired hearing
- Skin – rashes, such as eczema and hives
- Gastrointestinal tract – abdominal pain, bloating, vomiting, diarrhea,
- Food pipe – coughing, problem swallowing, heartburn
Common sources of allergies – What causes an allergy in you need not cause an allergy in someone else. However, most allergens are phytochemicals from our natural foods – herbs, nuts, vegetables, and seafood. Pollen, dust, pollutants, and mites are some airborne allergens. Latex, pet hair, feathers, pesticides, and certain pathogens are common contact allergens.
There is no definitive cure for all types of allergies, but some natural remedies may be helpful with either preventing or easing the symptoms of allergies. Airborne allergens cause allergies of eyes, nose and respiratory tracts. Contact or food allergy often results skin allergy. Gastrointestinal allergies are mainly because of food.
Nutrients to prevent allergies
Most medications for treating allergies are antihistamines. These medications, however, can have various side effects on the body. Nature, though, has provided a remedy: our food supplies the nutrients needed to develop a strong immune system and fight off most allergies.
Vitamin C – Vitamin C is a proven antihistamine and works wonders in keeping allergies under control. Its other health benefits include its ability to boost immunity, heal wounds and ease the common cold. Eating vitamin C-rich foods is a good way to prevent hypersensitivity in the body.
Vitamin A – Vitamin A helps control allergic reactions in the body. Foods rich in vitamin A help relieve allergic symptoms.
Omega-3 fatty acids – These are very important for our immune system. Research shows that people whose diets are rich in omega-3 fatty acids suffer from allergies less. Flax seeds and chia seeds are good sources of omega-3 fatty acids.
Magnesium – This mineral plays a vital role at the cellular level, including in cell growth, development and protection. Magnesium is required for the protection of white blood cells.
Selenium – Selenium is a strong antioxidant and helps boost the immune system. Selenium deficiency often results in weak immunity, which can contribute to allergies.
Zinc – Zinc is another mineral required for the production of immune cells.
Quercetin – Quercetin is a flavonoid present in fruits and vegetables and is a natural antihistamine. It helps control the release of histamine in allergic reactions and is useful in the inflammatory response of asthma, eczema and some viral infections.
Anthocyanins – These dark purple phytochemicals are present in foods such as beets, cherries and grapes. Anthocyanins are strong antioxidants and have anti-allergic properties.
Hesperetin/Hesperidin – These are flavanones found in citrus fruits that have antihistamine properties. They also offer anti-inflammatory and potentially sedative properties, and are beneficial in the treatment of allergies.
Home remedies for allergies
The antihistamine pharmaceutical business is a multi-billion-dollar industry globally, but these drugs can cause side effects such as drowsiness and headache. Hence it is always good to treat allergies the natural way. Our food offers many home remedies for allergies, so start eating your antihistamines – natural foods. Some natural foods that act as remedies for allergies are listed below:
1. Apple cider vinegar – It has traditionally been used to treat indigestion and allergies. It protects the body from allergens, clears mucus and relieves skin rashes. In case of allergy, drinking two spoons of apple cider vinegar in warm water can be beneficial.
2. Saline solution – Salt water helps soothe respiratory allergies that cause nasal congestion and inflammation. Simply sniffing salt water, made by adding a teaspoon of salt and a pinch of baking soda to distilled water, helps in cases of nasal allergy.
3. Honey – Eating honey helps train the immune system against the allergenic spores and pollen it contains; it works as a form of immunotherapy for pollen allergy.
4. Ginger – Ginger is a decongestant and antihistamine. Its anti-inflammatory properties help treat allergies of the lungs and stomach. Dry ginger powder with sugar can be taken in cases of food allergy, and ginger works especially well combined with honey.
5. Butterbur – Certain phytochemicals in butterbur are believed to relieve asthma and other allergies. Butterbur tea is the most common way of consuming it.
6. Green Tea – Green tea is a very effective antihistamine. Japanese researchers have identified a compound in it that inhibits the allergic response. Green tea helps treat allergic symptoms such as sneezing and coughing.
7. Peppermint Tea – It helps relieve congestion, clears the respiratory tract and flushes out irritating allergens.
8. Chamomile Tea – It has a long history of use as an anti-allergic tea and has strong antihistamine properties.
9. Thyme – Thyme also helps relieve congestion and treat nasal and sinus infections and allergies.
10. Wasabi – Hay fever is an allergic response of the body, and wasabi has been used in its treatment. The anti-allergic property of wasabi can be attributed to the plant compound allyl isothiocyanate.
11. Lemon – Rich in vitamin C, lemon is one of the best home remedies for allergies. Regularly drinking a glass of warm water with a teaspoon of honey and lemon juice can help keep allergies at bay.
12. Oatmeal – Oatmeal is helpful in treating hives that appear as an allergic response to insect bites and other allergens. Apply a paste of oatmeal made with boiling water to the hives for relief.
13. Turmeric – Turmeric contains the compound curcumin, which is credited with various health benefits, including use in cancer treatment. Turmeric acts as an anti-allergic substance, and drinking turmeric milk is considered a good way to keep colds and coughs away and boost immunity.
14. Nettle Leaf – Nettle leaf is a natural antihistamine, commonly taken as a tincture or tea. It has long been used as a natural remedy for allergy relief.
15. Ginkgo Biloba – This herb contains compounds called ginkgolides, which act against allergies and asthma. It has long been used in traditional Chinese medicine.
16. Probiotics – The health benefits of probiotics include their ability to regulate the immune system and control allergies.
17. Onion – The benefits of onion for allergies can be attributed to its quercetin content.
18. Garlic – Quercetin is also present in garlic, where it acts as an antihistamine.
19. Kokum – People in western India have long used kokum juice for allergies caused by bee stings, insect bites, hyperacidity and sun sensitivity. Replacing tamarind with kokum in cooking can help reduce allergies.
20. Mangosteen – Mangosteen, which belongs to the same family as kokum, has been used in the treatment of allergies.
21. Kale – Kale is rich in vitamin A and other phytochemicals and helps reduce the incidence of allergic reactions.
22. Collard Greens – Collard greens, with their rich nutrient profile, help ease allergic symptoms.
23. Elderberries – They are believed to strengthen immunity, reduce inflammation and ease allergic symptoms.
24. Parsley – Parsley works as an antihistamine, inhibiting the secretion of histamine.
25. Red Wine – Apart from being a source of antioxidants, red wine, depending on its origin, can contain a good amount of quercetin, which helps control allergies.
26. Chia Seeds – Loaded with fiber and omega-3 fatty acids, this superfood helps relieve inflammation caused by allergies.
27. Sunflower Seeds – Sunflower seeds contain vitamin E, which works as an anti-inflammatory compound and helps reduce allergic reactions. They also contain selenium, which is equally useful against allergies.
28. Bee Pollen – Bee pollen consists of pollen packed by bees into granules. Along with various healthy nutrients, it contains quercetin, which makes it useful against allergies.
29. Holy Basil – This herb has been used in Ayurveda to treat various disorders, including allergies. A herbal tea containing basil, ginger and turmeric can help keep a wide range of illnesses away. In some parts of India, the soil clinging to basil roots is applied to insect bites to reduce the pain.
30. Papaya – Papaya is a source of papain, an enzyme with antihistamine properties that helps treat allergies. Papaya paste is used as a home remedy for insect bites and insect allergies, and papain is also believed to break down antibodies generated during an allergic response.
31. Licorice root – Licorice root is used to reduce the inflammation caused by hives. It helps the body build tolerance to allergens.
32. Goldenseal – Goldenseal is effective against food allergens that cause discomfort in the digestive tract.
33. Devil's claw – Devil's claw also helps reduce skin inflammation and treat skin lesions and hives during allergic reactions.
34. Echinacea – This widely used herb is a natural antihistamine. It helps reduce inflammation in the lungs and respiratory tract.
35. Plantain – Plantain has traditionally been used to relieve respiratory problems and allergies.
36. Reishi – Reishi contains lanostan, a compound that reduces the histamine the body produces in response to allergens. It is also known as the mushroom of immortality.
37. Wild Oregano – Wild oregano contains various biochemicals that act against allergies and hyperacidity.
38. Saffron – Saffron is useful in treating allergic symptoms of the stomach.
39. Pineapple – Bromelain is an enzyme present in pineapple that has anti-inflammatory and anti-allergic properties. It is found in especially large quantities in the pineapple stem.
40. Water – This gift from nature should perhaps have come first in the treatment of allergies. Keeping the body hydrated keeps histamines and allergens in check, a regular bath washes allergens off the body, and steam inhalation works wonders in relieving nasal congestion and reducing inflammation.
We have shared 40 natural foods that are among the best remedies for allergies. Include these foods in your diet to keep allergies under control. Some other lifestyle changes and practices are also useful and can serve as home remedies for allergy:
- Reduce stress – Stress increases the production of histamine in the body. Try reducing stress with yoga, meditation and other exercises.
- Nasal irrigation or neti pot – This practice should be done under guidance and can help prevent nasal allergies.
- Pet hygiene – Pets are often carriers of allergens that cause allergies in many people.
- Personal hygiene – Without it you are exposed to many allergens; bathing twice a day is advisable for people with allergies.
- Dust-mite-free house – Make sure your rooms get sufficient sunlight, use minimal carpeting, and ensure there are no damp zones; if required, use a dehumidifier or an exhaust fan.
- Acupuncture – As an alternative therapy, acupuncture may help reduce histamine in the body.
On November 26, NASA's InSight lander will complete its six-and-a-half month journey to Mars, touching down at Elysium Planitia, a broad plain near the Martian equator that is home to the second largest volcanic region on the planet.
There, NASA scientists hope to "give the Red Planet its first thorough checkup since it formed 4.5 billion years ago," according to the InSight mission website. Previous missions have examined features on the surface, but many signatures of the planet's formation—which can provide clues about how all the terrestrial planets formed—can only be found by sensing and studying its "vital signs" far below the surface.
To check on those vital signs, InSight will come equipped with two main instrument packages: a seismometer for studying how seismic waves (for example, from marsquakes and meteorite impacts) travel through the planet and a "mole" that will burrow into the ground, dragging a tether with temperature sensors behind it to measure how temperatures change with depth on the planet. These instruments will tell scientists about Mars's interior structure (similar to the way an ultrasound lets doctors "see" inside a human body) and also about the heat flow from the planet's interior.
Engineers hope that the mole will reach a depth of between three and five meters—far enough down that it will be isolated from the temperature fluctuations of day and night and Mars's annual cycle on the surface above. Meters may not sound like much, but to dig that far using only equipment that can be launched on a spacecraft and controlled from 55 million miles away is a technical challenge that has never been attempted before.
Using a sliding weight inside its narrow body, the mole, which is 15.75 inches (400 millimeters) long and weighs just 1.9 pounds (860 grams), hammers itself into the ground, 1 mm at a time, while dragging a tether that is studded with 14 temperature sensors along its length. A traditional drill attempting to perform the same task would need to be as long as the hole it was attempting to drill—and would need a massive supporting structure. Were it to hammer continuously, the mole would take anywhere from a few hours to a few days to reach its final depth, depending on the characteristics of the soil. However, the mole will stop every 50 centimeters to measure the soil thermal conductivity, a process which requires periods of cooling and heating lasting several days. With the additional time needed to assess progress and send new commands, the mole could take six weeks or more to reach its final depth.
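To get a feel for these numbers, here is a minimal back-of-envelope sketch (not from the InSight team) that converts the figures above, roughly 1 mm of progress per hammer stroke and a 3 to 5 meter target depth, into stroke counts and continuous-hammering times. The stroke interval is purely an assumed, illustrative value, since the article only says that continuous hammering would take a few hours to a few days.

```python
# Back-of-envelope estimate of the mole's descent.
# Figures from the article: ~1 mm of progress per stroke, 3-5 m target depth.
# The stroke interval is an assumed value for illustration only.

def strokes_needed(depth_m, advance_per_stroke_mm=1.0):
    """Number of hammer strokes needed to reach a given depth."""
    return depth_m * 1000.0 / advance_per_stroke_mm

for depth in (3.0, 5.0):
    strokes = strokes_needed(depth)
    for seconds_per_stroke in (3, 10):   # assumed pacing, not a mission parameter
        hours = strokes * seconds_per_stroke / 3600.0
        print(f"{depth:.0f} m: {strokes:,.0f} strokes, "
              f"~{hours:.1f} h of continuous hammering at {seconds_per_stroke} s/stroke")
```

Under these toy assumptions the answer lands in the "few hours" range; the real schedule is dominated by the multi-day conductivity measurements and command cycles described above.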
When designing the probe, engineers at JPL, which Caltech manages for NASA, wanted to be certain that the mole would be capable of reaching the necessary depth, and so they called on Caltech's José Andrade, George W. Housner Professor of Civil and Mechanical Engineering in the Division of Engineering and Applied Science and an expert on the physics of granular materials.
"About five years ago, when the mole kept getting stuck during testing, the InSight team pulled together what's called a 'tiger team'—a bunch of specialists from different areas who are brought in to help resolve an issue," Andrade says. "I was called to serve on this tiger team as an expert in soil mechanics."
Because soil is a granular material—a conglomeration of solid particles that are each larger than a micrometer—it exhibits somewhat unusual properties. For example, soil composed of round particles will flow easily as the particles slide past one another, like sand in an hourglass. But soil composed of the same sizes of particles but with more jagged and angular shapes will lock together like puzzle pieces and cannot flow without significant outside force.
Granular materials can be described as singular objects that will deform based on their critical state plasticity—an idealized model for how groups of grains will force their way past one another as stress is applied to them. That plasticity is governed by air pressure and the force of gravity. As such, it is difficult to simulate in a laboratory the critical state plasticity of a granular material on Mars, which has one-third the gravity and 0.6 percent of the air pressure of Earth at sea level.
"We kept trying to extrapolate how critical state plasticity would translate to Mars," Andrade says. "Without knowing that, we could not effectively model how much resistance InSight's mole would face when attempting to drill through Mars's soil, and whether it could reach the desired depth. So, this sparked a clear need for more understanding."
To help investigate the mole's penetration in a granular material, Andrade and the InSight team hired postdoctoral researcher Ivan Vlahinic, who had recently completed a PhD at Northwestern University. Vlahinic set up tests in which early mock-ups of the mole were monitored and mathematically analyzed as they worked their way through a glass column filled with sand.
Andrade, Vlahinic, and their colleagues found that Mars's lower overburden pressure, compared to Earth, will actually make it harder for the mole to penetrate Mars's soil. Overburden pressure is the pressure on a layer of rock or sand exerted by the material stacked above it. At any given depth, the overburden pressure on Mars is one-third of Earth's, corresponding with the Red Planet's lower gravity. For the same packing fraction—the amount of space filled by material—the low pressure allows granular materials to exist in a looser state that actually increases the number of individual contacts that each grain has with its neighbors, and this increases the overall resistance of the material to penetration.
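A minimal sketch of the overburden calculation may help make the one-third ratio concrete. It uses the textbook lithostatic formula p = ρ·g·h with an assumed round regolith density; neither the density nor the depths are InSight measurements, so the absolute numbers are illustrative only.

```python
# Overburden (lithostatic) pressure p = rho * g * h on Earth vs. Mars.
# RHO_REGOLITH is an assumed round value, not an InSight measurement.

RHO_REGOLITH = 1500.0   # kg/m^3, assumed loose-soil density
G_EARTH = 9.81          # m/s^2
G_MARS = 3.71           # m/s^2, roughly one-third of Earth's

def overburden_pressure(depth_m, g):
    """Pressure (Pa) exerted by a soil column of the given depth."""
    return RHO_REGOLITH * g * depth_m

for depth in (1.0, 3.0, 5.0):
    p_earth = overburden_pressure(depth, G_EARTH)
    p_mars = overburden_pressure(depth, G_MARS)
    print(f"{depth:.0f} m: Earth {p_earth/1000:.1f} kPa, Mars {p_mars/1000:.1f} kPa "
          f"(ratio {p_mars/p_earth:.2f})")
```

Whatever density is plugged in, the ratio stays at about 0.38, which is the point of the paragraph above: at equal depth the Martian soil column presses down with roughly a third of the weight, leaving the grains in a looser and, counterintuitively, more penetration-resistant state.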
Vlahinic's research was eventually taken over by Jason Marshall, who earned a PhD from Carnegie Mellon University in 2014 and worked as a postdoctoral researcher at Caltech from 2015 to 2018.
"We not only studied penetration, but also how heat moves through the soil," Marshall says. "One of the things that InSight seeks to understand is how the temperature of the planet changes with depth. What we found is that as we're deforming the sand, the particles are obviously being rearranged, and that's going to affect the thermal conductivity measurements." As granular materials deform, the amount of space between the individual grains changes, adjusting the amount of space through which heat will either radiate or conduct via the planet's thin atmosphere. It also increases the number of grain-to-grain contacts as the soil is packed more tightly.
With this knowledge, Andrade was able to develop new computer models that helped the JPL team predict the mole's effectiveness in Martian soil. Unless the mole encounters an obstacle, he is confident that it will be successful.
"The tests show that this thing can go much deeper than two meters. A dealbreaker could be a large formation of rock that blocks the path of the mole, but the InSight landing site selection team have chosen a location on Mars that is as rock-free as possible," he says. In addition, armed with Marshall's information on the effect of particle rearrangement on thermal conductivity, InSight should be in a good position to not only reach its desired depth, but also send back accurate information on the temperature at that depth, Andrade says.
For now, Andrade and his former postdocs can only watch—and wait. "We've done everything we could here on Earth. Now it's up to InSight," he says.
Sleep research suggests that a teenager needs between nine and 10 hours of sleep every night. This is more than the amount a child or an adult needs. Yet most adolescents only get about seven or eight hours. Some get less.
Regularly not getting enough sleep leads to chronic sleep deprivation. This can have dramatic effects on a teenager’s life, including reduced academic performance at school. One recent US study found that lack of sleep was a common factor in teenagers who receive poor to average school marks.
Causes of sleep deprivation
Some of the reasons why many teenagers regularly do not get enough sleep include:
- Hormonal time shift – puberty hormones shift the teenager’s body clock forward by about one or two hours, making them sleepier one to two hours later. Yet, while the teenager falls asleep later, early school starts don’t allow them to sleep in. This nightly ‘sleep debt’ leads to chronic sleep deprivation.
- Hectic after-school schedule – homework, sport, part-time work and social commitments can cut into a teenager’s sleeping time.
- Leisure activities – the lure of stimulating entertainment such as television, the Internet and computer gaming can keep a teenager out of bed.
- Light exposure – light cues the brain to stay awake. In the evening, lights from televisions, mobile phones and computers can prevent adequate production of melatonin, the hormone responsible for sleep.
- Vicious circle – insufficient sleep causes a teenager’s brain to become more active. An over-aroused brain is less able to fall asleep.
- Social attitudes – in Western culture, keeping active is valued more than sleep.
- Sleep disorder – sleep disorders, such as restless legs syndrome or sleep apnoea, can affect how much sleep a teenager gets.
Effects of sleep deprivation
The developing brain of a teenager needs between nine and 10 hours of sleep every night. The effects of chronic (ongoing) sleep deprivation may include:
- Concentration difficulties
- Mentally ‘drifting off’ in class
- Shortened attention span
- Memory impairment
- Poor decision making
- Lack of enthusiasm
- Moodiness and aggression
- Risk-taking behaviour
- Slower physical reflexes
- Clumsiness, which may result in physical injuries
- Reduced sporting performance
- Reduced academic performance
- Increased number of ‘sick days’ from school because of tiredness
Preventing sleep deprivation – tips for parents
Try not to argue with your teenager about bedtime. Instead, discuss the issue with them. Together, brainstorm ways to increase their nightly quota of sleep. Suggestions include:
- Allow your child to sleep in on the weekends.
- Encourage an early night every Sunday. A late night on Sunday followed by an early Monday morning will make your child drowsy for the start of the school week.
- Decide together on appropriate time limits for any stimulating activity such as homework, television or computer games. Encourage restful activities during the evening, such as reading.
- Avoid early morning appointments, classes or training sessions for your child if possible.
- Help your child to better schedule their after-school commitments to free up time for rest and sleep.
- Assess your child’s weekly schedule together and see if they are overcommitted. Help them to trim activities.
- Encourage your child to take an afternoon nap after school to help recharge their battery, if they have time.
- Work together to adjust your teenager’s body clock. You may like to consult with your doctor first.
Preventing sleep deprivation – tips for teenagers
The typical teenage brain wants to go to bed late and sleep late the following morning, which is usually hard to manage. You may be able to adjust your body clock but it takes time. Suggestions include:
- Choose a relaxing bedtime routine; for example, have a bath and a hot milky drink before bed.
- Avoid loud music, homework, computer games or any other activity that gets your mind racing for about an hour before bedtime.
- Keep your room dark at night. The brain’s sleep–wake cycle is largely set by light received through the eyes. Try to avoid watching television right before bed. In the morning, expose your eyes to lots of light to help wake up your brain.
- Do the same bedtime routine every night for at least four weeks to make your brain associate this routine with going to sleep.
- Start your bedtime routine a little earlier than usual (for example, 10 minutes) after four weeks. Do this for one week.
- Add an extra 10 minutes every week until you have reached your desired bedtime.
- Avoid staying up late on the weekends. Late nights will undo your hard work.
- Remember that even 30 minutes of extra sleep each night on a regular basis makes a big difference. However, it may take about six weeks of getting extra sleep before you feel the benefits.
Other issues to consider
If lack of sleep is still a problem despite your best efforts, suggestions include:
- Assess your sleep hygiene. For example, factors that may be interfering with your quality of sleep include a noisy bedroom, a lumpy mattress or the habit of lying awake and worrying.
- Consider learning a relaxation technique to help you wind down in readiness for sleep.
- Avoid having any food or drink that contains caffeine after dinnertime. This includes coffee, tea, cola drinks and chocolate.
- Avoid recreational drugs (including alcohol, tobacco and cannabis) as they can cause you to have broken and poor quality sleep.
- See your doctor if self-help techniques don’t increase your nightly sleep quota.
Where to get help
- Your doctor
- Sleep disorder clinic
Things to remember
- Sleep research suggests that a teenager needs between nine and 10 hours of sleep every night.
- Chronic sleep deprivation can have dramatic effects on a teenager’s life, including reduced academic performance at school.
- Even 30 minutes of extra sleep each night makes a difference.
- All recreational drugs (including alcohol, caffeinated drinks and cannabis) and chocolate can cause broken sleep.
This page has been produced in consultation with and approved by:
Newcastle Sleep Disorders Service
This week in Maths we are going to be learning about ‘Fractions!’ This is not as scary as it sounds!
We are going to be focusing on halves and quarters. We have done some of this at school together so hopefully it will not be too tricky. Start by watching this little funky fraction video about halves.
Today we are going to be focusing on halving objects and shapes. It is important that you know that a half means ‘one of two equal parts’, and the key language you need is half and whole. Halving shapes and objects means that if you cut them down the middle, they end up in two parts that are exactly the same size.
Open up the ‘What is halving PowerPoint’ below to remind yourself. Then complete the activity attached and check with the answers. As an extension I would like you to use some scrap paper to draw out some shapes and then draw a line showing half. Use the template below if needed. Then if you can, cut them out and fold the shape using the line you drew to check it splits exactly in half.
As a bonus – Who can remember what that line is called?
ARISTOTLE THE MATHEMATICIAN
Aristotle, having spent twenty years of his life in or near the Academy, was necessarily a mathematician. He was not a professional mathematician like Eudoxos, Menaichmos, or Theudios, but he was less of an amateur than Plato. This is proved positively by the mass of his mathematical disquisitions¹³²⁶ and negatively by his lack of interest in the mathematical occultism and nonsense that disgraced Platonic thought. He was well trained but not quite up–to–date, and inclined to avoid technical difficulties. He was probably well acquainted with Eudoxos’ ideas, but not so well with those of other contemporaries like Menaichmos. His references to incommensurable quantities are frequent, but the only example quoted by him is the simplest of all, the irrationality of the diagonal of a square in relation to its side. He was primarily a philosopher, and his mathematical knowledge was sufficient for his purpose. All considered, he is one of the greatest mathematicians among philosophers, being surpassed in this respect only by Descartes and Leibniz. Most of his examples of scientific method were taken from his mathematical experience.
In his classification of sciences he considered most exact those that are most concerned with first principles. On that basis, mathematics came first, arithmetic being ahead of geometry.¹³²⁷ Like Plato he was interested in knowledge for its own sake, for the contemplation of truth, rather than for its applications. Moreover, he was more interested in generalities than in particularities, and more interested in the determination of general causes than in the multiplicity of consequences.
He made a distinction between axioms (common to all sciences) and postulates (relative to each science). Examples of axioms or common notions (coinai ennoiai) are the “law of excluded middle” (everything must be either affirmed or denied), the “law of contradiction” (a thing cannot at the same time both be and not be), and “if equals are subtracted from equals the remainders are equal.” As to definitions, they must be understood; they do not necessarily assert the existence or nonexistence of the object defined. We must assume in arithmetic the existence of the unit or monad, and in geometry of points and lines. More complex things, like triangles or tangents, must be proved to exist, and the best proof is the actual construction.
Aristotle’s greatest service to mathematics lies in his cautious discussion of continuity and infinity. The latter, he remarked, exists only potentially, not in actuality. His views on those fundamental questions, as developed and illustrated by Archimedes and Apollonios, were the basis of the calculus invented in the seventeenth century by Fermat, John Wallis, Leibniz, and the two Isaacs, Barrow and Newton (as opposed to the lax handling of pseudo infinitesimals by Kepler and Cavalieri).¹³²⁸ This statement, which cannot be amplified in a book meant for nonmathematical readers, is very high praise indeed, but justice obliged us to make it, the more so because Plato is more famous as a mathematician than Aristotle, and that is exceedingly unfair. Aristotle was sound but dull; Plato was more attractive but as unsound as could be. Aristotle and his contemporaries built the best foundation for the magnificent achievements of Euclid, Archimedes, and Apollonios, while Plato’s seductive example encouraged all the follies of arithmology and gematria and induced other superstitions. Aristotle was the honest teacher, Plato the magician, the Pied Piper; it is not surprising that the followers of the latter were far more numerous than those of the former. But we should always remember with gratitude that many great mathematicians owed their vocation to Plato; they obtained from him the love of mathematics, but they did not otherwise follow him and their own genius was their salvation.
SPEUSIPPOS OF ATHENS
Let us now leave Aristotle and the Lyceum and return to the Academy. We should always bear in mind that mathematical studies were then fashionable in Athens and were conducted in both schools, probably in friendly emulation. Most of the mathematical work was probably done in the Academy; Speusippos and Xenocrates were Plato’s successors at the head of it; the brothers Menaichmos and Deinostratos were both mentioned by Proclos ¹³²⁹ as friends of Plato and pupils of Eudoxos; Theudios of Magnesia wrote the textbook of the Academy; on the other hand, Eudemos of Rhodes, quoted as a pupil of Aristotle and Theophrastos, must be assigned to the Lyceum. These matters cannot be settled with any certainty, for we know the headmasters of both schools (some of them at least), but there never were any lists of students, and it is possible that attendance was informal. So–and–so are named disciples of Plato or of Aristotle, not members of the Academy or the Lyceum.
Speusippos, nephew of Plato, succeeded him in 348/47 as master of the Academy. Judging from the fragments, his lost work “On the Pythagorean numbers” was derived from Philolaos and dealt with polygonal numbers, primes versus composite numbers, and the five regular solids.
XENOCRATES OF CHALCEDON¹³³⁰
At the time of Speusippos’s death there was an election for a new master and the votes were almost equally divided between Heracleides of Pontos and Xenocrates of Chalcedon, but the latter won and was the head of the Academy for twenty–five years (339–315). Note that Aristotle, Heracleides, Xenocrates were all “northerners,” and that the new master was an old friend of Aristotle (who referred many times to him in his writings). Hence, we must assume that Xenocrates was as familiar with Aristotle’s mathematical views as with Plato’s. He continued Plato’s policy of excluding from the Academy the applicants who lacked geometric knowledge and said to one of them, “Go thy way for thou hast not the means of getting a grip of philosophy.”¹³³¹ The story is plausible.
Xenocrates wrote a great many treatises, all of which are lost, but judging from the titles¹³³² some of them dealt with numbers and with geometry. The perennial controversy on geometric continuity which had been dramatized by Zeno’s paradoxes led him to the conception of indivisible lines. He calculated the number of syllables that could be formed with the letters of the alphabet (according to Plutarch that number was 1,002,000,000,000); this is the earliest problem of combinatorial analysis on record.¹³³³ Unfortunately, we know nothing about his activities but the meager information just given.
Menaichmos and Deinostratos were two brothers, about whose circumstances we know only what Proclos told us in a short paragraph of his commentary on Book I of the Elements of Euclid: “Amyclas of Heraclea, one of Plato’s friends, Menaichmos, a pupil of Eudoxos who had also studied with Plato, and Deinostratos his brother made the whole of geometry more nearly perfect.” ¹³³⁴
We do not know when and where these brothers were born, but they lived in Athens, attended the Academy, and sat at the feet of Plato and later of Eudoxos. We may conclude that they flourished about the middle of the century.
Both brothers were concerned with the building up of a geometric synthesis. Menaichmos was especially interested in the old problem of the duplication of the cube. That problem had been reduced by Hippocrates of Chios (V B.C.) to the finding of two mean proportionals between one straight line and another twice as long. In modern language we would say that Hippocrates had reduced the solution of a cubic equation to that of two quadratic equations. How would these be solved? Menaichmos found two ways of solving them by determining the intersection of two conics — two parabolas in the first case, a parabola and a rectangular hyperbola in the second.
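In modern notation, which Menaichmos of course did not have, the reduction and its solution by conics can be written out in a few lines; this is the standard reconstruction rather than his own wording. Two mean proportionals x and y are sought between a and 2a:

```latex
\[
\frac{a}{x}=\frac{x}{y}=\frac{y}{2a}
\quad\Longrightarrow\quad
x^{2}=ay, \qquad y^{2}=2ax, \qquad xy=2a^{2},
\]
\[
\text{and eliminating } y \text{ between any two of these curves gives } x^{3}=2a^{3}.
\]
```

The side x of the doubled cube is thus found where two parabolas intersect (the first pair of equations) or where a parabola meets the rectangular hyperbola (the first and third).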
This marks the appearance of conics in world literature, and the discovery of those curves is ascribed to Menaichmos. His construction of them seems very peculiar to us; he imagined that a plane cuts a right circular cone, the plane being always perpendicular to the generating line of that cone. The three different conics (which he seems to have differentiated) were obtained by increasing the cone’s angle;¹³³⁵ as long as the angle is acute, the section is an ellipse; when the angle is right the section is a parabola; when the angle is obtuse one obtains the two branches of a hyperbola. Neugebauer has surmised that Menaichmos may have been led to his discovery by the use of sundials.¹³³⁶ If he is right (and his argument is very plausible to me), it is strange to think that those curves, of astronomic origin, were not introduced into astronomic theory until almost two millennia later. Menaichmos discovered them (c. 350 B.C.) because of his solar observations, but not until Kepler (1609) were they used for the explanation of the solar system.
Alexander the Great asked Menaichmos whether there was not a short cut to geometric knowledge and Menaichmos answered, “O King, for traveling over the country there are royal roads and roads for common citizens, but in geometry there is one road for all.” ¹³³⁷ The story has become a commonplace, and it has been ascribed to Euclid and Ptolemy, as well as to Menaichmos. It fits the last best because he is the most ancient and because Alexander, whose intellectual ambitions had been fanned by Aristotle, might well have asked such a question. The great king was naturally impatient, but he had to find out that it might take longer to acquire sound knowledge than to conquer the world.
We have explained above (p. 278) that geometric thinking was activated in the fifth century by the emergence of three problems: (1) the squaring of the circle, (2) the trisection of the angle, (3) the duplication of the cube. Hippocrates of Chios and Menaichmos were especially interested in the third of those problems; Hippias of Elis found an ingenious solution of the second by means of the curve invented by him, the quadratrix. That name was given to it because Deinostratos, Menaichmos’ brother, applied it to the solution of the first problem. We thus see that the three famous problems were still exercising the minds of the geometers of the Academy in the fourth century and helping them to extend the frontiers of their knowledge.
THEUDIOS OF MAGNESIA
Said Proclos: “Theudios of Magnesia distinguished himself in mathematics and in other branches of philosophy; he arranged beautifully the Elements (ta stoicheia) and made many partial theorems more general.”¹³³⁸
This statement is very significant in spite of its concision. It reveals the existence of a book which might be called “The geometric textbook” (or the “Elements”) of the Academy. The mathematicians of that time were interested, some in discovery, others in synthesis and logical consistency; the former were like adventurers or conquerors, the latter like colonizers. The two tendencies have always coexisted in times of healthy mathematical development, and they are equally necessary. There must be continual pressure on the frontiers and better organization within. As far as we can guess from Proclos’ laconic account, Theudios’ task was to put the geometric knowledge already obtained by the pioneers into as strong and beautiful a logical order as possible. Theudios was the forerunner of Euclid, and made the latter’s achievement easier.
EUDEMOS OF RHODES
Eudemos was a pupil of Aristotle and a friend of Theophrastos. We may thus conclude that he flourished in the third quarter of the century and that he was a member of the Lyceum. In fact, Proclos, who quotes him four times in his commentary on Euclid I, calls him Eudemos the Peripatetic.¹³³⁹ Among the writings ascribed to him, but lost, were histories of arithmetic, geometry, and astronomy. He is the first historian of science on record,¹³⁴⁰ and, though only fragments have come to us, we have good reason to assume that his work was the main source out of which whatever knowledge we possess of pre–Euclidean mathematics has trickled down. One of the most important fragments is the one concerning the quadrature of the lunes by Hippocrates of Chios, of which we have already spoken.
The appearance at this time of a historian of mathematics and astronomy is very significant, for it proves that so much work had already been accomplished in these two fields that a historical survey had become necessary. Let us remember with gratitude the name of the first historian of mathematics and consider his presence in Athens around the year 325 as a new illustration of the glory of Hellenism.¹³⁴¹
ARISTAIOS THE ELDER¹³⁴²
The last mathematician of this century marks the transition between the age of Aristotle and the age of Euclid. Two treatises of great originality are ascribed to him. One of them was devoted to solid loci connected with conics, that is, it was a treatise on conics regarded as loci, and was prior to Euclid’s book on the same subject.¹³⁴³ He defined the different kinds of conics in the same way as Menaichmos, as sections of cones with acute, right, and obtuse angles. The other book was entitled Comparison of the five figures, meaning the five regular solids, and among other things it proved the remarkable proposition that “the same circle circumscribes both the pentagon of the dodecahedron and the triangle of the icosahedron when both solids are inscribed in the same sphere.”¹³⁴⁴
How beautiful a result this was, and how unexpected! For who could have foreseen that the faces of two different regular solids are equally distant from the center of the sphere enveloping them? These two solids, the icosahedron and the dodecahedron, had thus a special relation which the three other solids did not have. How much more beautiful indeed in its truth and honesty than the Platonic illusions on the same “figures.”
MATHEMATICS IN THE SECOND HALF OF THE FOURTH CENTURY
The second half of the century did not witness the renewal of revolutionary efforts comparable in their pregnancy to those of Eudoxos of Cnidos, yet the total amount of new mathematics was splendid. The members of the Lyceum headed by Aristotle improved the definitions and axioms and more generally the philosophic substructure; Eudemos facilitated the needed synthesis by his historical surveys. Under the guidance of Speusippos and Xenocrates the Academy continued geometric investigations of various kinds which led to the composition of the “Elements” by Theudios. The brothers Menaichmos and Deinostratos, and Aristaios were creative geometers of the first order. We owe to Menaichmos and to Aristaios the first study of conics.
HERACLEIDES OF PONTOS
Pride of place in our astronomic section must be given to Heracleides not only because of his age but also because of his singular greatness. He was born in Heracleia Pontica ¹³⁴⁵ c. 388, before Aristotle, and he lived until the ninth decade of the century (c. 315–310). His singularity was such that he has been called “the Paracelsus of antiquity,” a silly nickname, yet meaningful, whether it is taken as praise or blame. To compare him with a man who appeared nineteen centuries later is to invite unnecessary trouble; it is more helpful to compare him with his predecessor, Empedocles, a man whom he greatly admired and tried to emulate.
We know little of his life except that he was wealthy, emigrated to Athens, and was a pupil of Plato and Speusippos, perhaps also of Aristotle. When Speusippos died in 339 and was replaced by Xenocrates (Aristotle’s friend), Heracleides returned to his country. He wrote many books on philosophy and mythology which obtained some popularity not only among the Greeks but also among the Romans of the last century B.C. For example, Cicero admired him and one can detect traces of Heracleides’ influence in “Scipio’s dream.”¹³⁴⁶ Even as Plato had written a revelation of other-world mysteries in his myth of Er, Heracleides wrote a similar revelation in his myth of Empedotimos: ¹³⁴⁷ his Hades where the disincarnated souls found their last refuge was located in the Milky Way; the souls were illuminated!
Such poetic fancies explain his popularity but would not justify our own praise in this volume. Yet to be a spiritual descendant of Empedocles was a remarkable thing and we must pause a moment to consider it: there was an irrational trend in Greek thought cutting through the centuries via the Pythagoreans, Empedocles, Plato, Heracleides, and their epigoni. Heracleides, however, combined his apocalyptic with scientific tendencies, and we must speak of him at greater length because of his astronomic theories, which make him one of the forerunners of modern science.
One more word, however, concerning his relation with Empedocles. The latter’s view of the universe included the four elements and the two antagonistic forces (love and strife). Heracleides conceived the world as made up of jointless particles (anarmoi oncoi), as opposed probably to the Democritean atoms, which had various shapes and could cling to one another. The Heracleidean particles might hold together by some kind of Empedoclean attraction.¹³⁴⁸
Heracleides’ astronomy was more rational, as we would expect, than his cosmology. He had probably heard of the views expressed by Hicetas and Ecphantos and agreed with them. On the basis of those views and of other Pythagorean–Platonic ideas he explained his own theory, which can be summarized as follows. The universe is infinite. The Earth is in the center of the solar system; the Sun, Moon, and superior planets revolve around the Earth; Venus and Mercury (the inferior planets) revolve around the Sun; the Earth rotates daily on its own axis (this rotation replaces the daily rotation of all the stars around the Earth).¹³⁴⁹ This geoheliocentric system had an astounding fortune. It was not sufficiently bolstered up with observations to deserve the acceptance of the practical astronomers of Heracleides’ time; yet the hypotheses that it included were never forgotten. They reappeared in Chalcidius (IV–1), Macrobius (V–1), Martianus Capella (V–2), John Scotus Erigena (IX–2), William of Conches (XII–1).¹³⁵⁰
Looked at from the modern point of view, Heracleides’ system is a compromise between the Ptolemaic (centered upon the Earth) and the Copernican (centered upon the Sun), but this should not be exaggerated as is done by the historians who call Heracleides the Greek Tycho!¹³⁵¹ The compromise suggested by Tycho Brahe (1588; regular publication, 1603) and by Nicholas Reymers (1588) was deeper: all the planets, not two only, were supposed to revolve around the Sun. Strangely enough, the Jesuit, Giovanni Battista Riccioli, in his Almagestum novum published half a century later (Bologna, 1651), came back somewhat closer to Heracleides, for he accepted the rotation of three planets around the Sun, the two most remote ones (Jupiter and Saturn) moving around the Earth.¹³⁵²
Heracleides was not a Copernicus, nor even a Brahe, yet his conception of the solar system, imperfect as it was, was astoundingly good for its time.
CALLIPPOS OF CYZICOS
In the meanwhile, the work of Eudoxos was being continued by Aristotle and Callippos. They worked together at the Lyceum; though Callippos was somewhat younger than his chief, he seems to have been the originator in astronomic research. That would be natural enough, for Aristotle was obliged to busy himself with the whole institution and with the logical and philosophic teaching. If he had been tempted to make special investigations on his own account, he would probably have made them in the field of zoölogy, or he would have devoted more time to zoology than he was able to do.
After his return from Egypt, Eudoxos had spent some time in Cyzicos (Sea of Marmara), where he started a school of his own. Now, Callippos was born in that very place c. 370 and he may have known Eudoxos in his youth. In any case, he must have heard of Eudoxos’ mathematical and astronomic teaching, either directly or from a disciple such as his countryman, Polemarchos of Cyzicos, who is quoted as one of the first critics of the theory of homocentric spheres.¹³⁵³ Indeed, he was Polemarchos’ pupil and followed him to Athens, where “he stayed with Aristotle helping the latter to correct and complete the discoveries of Eudoxos.”¹³⁵⁴ The date of Callippos’ arrival in Athens was probably after the beginning of Alexander’s rule (336), and before the beginning of Callippos’ cycle (330). According to Aristotle,¹³⁵⁵ Callippos realized the imperfections of Eudoxos’ system and tried to remove them by adding seven more spheres, that is, two each for the Sun and the Moon, and one more for each of the other planets, except Jupiter and Saturn. The theory as improved by Callippos thus required a total of 33 concentric spheres rotating simultaneously each on its own axis and with its own speed.
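The figure of 33 is easy to check. Eudoxos, as usually reconstructed, had used three spheres each for the Sun and the Moon and four for each of the five planets, 26 in all; Callippos' seven additions then give the tally below.

```latex
\[
\underbrace{5+5}_{\text{Sun, Moon}}
\;+\;
\underbrace{5+5+5}_{\text{Mercury, Venus, Mars}}
\;+\;
\underbrace{4+4}_{\text{Jupiter, Saturn}}
\;=\;33 .
\]
```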
Callippos concerned himself also with the reform of the calendar, the last establishment of which had been made in Athens in 432 by Meton and Euctemon. Better solstitial and equinoctial observations enabled him to determine more exactly the lengths of the seasons (beginning with the spring, 94, 92, 89, 90 days, the errors ranging from 0.08 to 0.44 day). He improved the Metonic cycle of 19 years by dropping 1 day out of each period of [19 × 4 = ] 76 years. The epoch of the new era was possibly 29 June 330.¹³⁵⁶ The comparison of Callippos’ calendar with Meton’s gives us a measure of the progress in astronomic observation that had been achieved in a century.
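The effect of dropping that single day can be made concrete with a little arithmetic; the figure of 6,940 days for the 19-year Metonic cycle is the traditionally reported one and is not stated in the text above.

```latex
\[
94+92+89+90=365 \ \text{days (Callippos' four seasons)},
\]
\[
\text{Meton: } \frac{6940}{19}\approx 365.26 \ \text{days per year},
\qquad
\text{Callippos: } \frac{4\times 6940-1}{76}=\frac{27\,759}{76}=365\tfrac{1}{4} \ \text{days per year},
\]
```

a mean year noticeably closer to the true tropical year of about 365.242 days.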
ARISTOTLE THE ASTRONOMER
Aristotle’s views on astronomy are explained in Metaphysics lambda, in Physics, in De caelo,¹³⁵⁷ and in Simplicios’ Commentary. He was not satisfied with the theory of homocentric spheres, even as perfected by Callippos. As Heath puts it,
In his matter–of–fact way, he thought it necessary to transform the system into a mechanical one, with material spherical shells one inside the other and mechanically acting on one another. The object was to substitute one system of spheres for the Sun, Moon, and planets together, instead of a separate system for each heavenly body. For this purpose he assumed sets of reacting spheres between successive sets of the original spheres. Saturn being, for instance, moved by a set of four spheres, he had three reacting spheres to neutralize the last three, in order to restore the outermost sphere to act as the first of the four spheres producing the motion of the next lower planet, Jupiter, and so on. In Callippos’ system there were thirty–three spheres in all; Aristotle added twenty–two reacting spheres making fifty–five. The change was not an improvement.¹³⁵⁸
This is typical of Aristotle’s mind; in his anxiety to give a mechanical and tangible explanation of planetary movements, he introduced unnecessary complications. Did Aristotle believe in the physical reality of the homocentric spheres? We cannot be sure; yet his transformation of the geometric concept into a mechanical one suggests such a belief. It is a good example of the eternal conflict between the explanation that satisfies the mathematician and the one that the practical man requires. The practical man is often defeated by his very practicality, and so was Aristotle in this case.
We cannot dissociate his astronomic views from the physical ones. Let us describe them rapidly together. There are three kinds of motion in space: (1) rectilinear, (2) circular, (3) mixed. The bodies of the sublunar world are made out of the four elements. These elements tend to move along straight lines, earth downward, fire upward; water and air, being relatively heavy and relatively light, fall in between. Hence, the natural order of the elements, starting from the Earth, is: earth, water, air, fire. Celestial bodies are made out of another substance, not earthly, but divine or transcendent, the fifth element or aether, whose natural motion is circular, changeless, and eternal.
The universe is spherical and finite; it is spherical, because the sphere is the most perfect shape; it is finite, because it has a center, the center of the earth, and an infinite body cannot have a center.¹³⁵⁹ There is but one universe and that universe is complete; there can be nothing (not even space) outside of it.
Is there a transcendent mover of the spheres (that is, a superior and unmoved mover of the spheres and of everything else)? Aristotle could not reach a certain answer on that fundamental question.¹³⁶⁰ His final conclusion in De caelo was that the sphere of the fixed stars was the prime mover (though itself moving) and hence the foremost and highest god; ¹³⁶¹ but in the Metaphysics lambda, his conclusion is that there is behind the fixed stars an unmoved mover influencing all the celestial motions as the Beloved influences the Lover. This implies that the celestial bodies are not only divine but alive, sensitive, and makes us realize once more, and more deeply, that ancient physics and ancient astronomy were very close to metaphysics, so close that one could not know any more where one was. Is this astronomy or metaphysics or theology?
We come closer to reality in Aristotle’s discussion of the shape of the Earth and estimate of its size. The Earth must be spherical for reasons of symmetry and equilibrium; the elements that fall upon it fall from every direction and the final result of all the deposits can only be a sphere. Moreover, during lunar eclipses the edge of the shadow is always circular, and when one travels northward (or southward) the general layout of the starry heavens changes; one sees new stars or ceases to see familiar ones. The fact that a small change in our position (along a meridian) makes so much difference is a proof that the Earth is relatively small. Here is the relevant text:
There is much change, I mean, in the stars which are overhead, and the stars seen are different, as one moves northward or southward. Indeed there are some stars seen in Egypt and in the neighborhood of Cyprus which are not seen in the northerly regions; and stars, which in the north are never beyond the range of observation, in those regions rise and set. All of which goes to show not only that the earth is circular in shape, but also that it is a sphere of no great size: for otherwise the effect of so slight a change of place would not be so quickly apparent. Hence one should not be too sure of the incredibility of the view of those who conceive that there is continuity between the parts about the Pillars of Hercules and the parts about India, and that in this way the ocean is one. As further evidence in favor of this they quote the case of elephants, a species occurring in each of these extreme regions, suggesting that the common characteristic of these extremes is explained by their continuity. Also, those mathematicians who try to calculate the size of the earth’s circumference arrive at the figure 400,000 stades. This indicates not only that the earth’s mass is spherical in shape, but also that as compared with the stars it is not of great size.¹³⁶²
The mathematicians referred to are probably Eudoxos and Callippos. Their estimate of the size of the Earth as quoted by Aristotle is the earliest of its kind; it was too large yet very remarkable.¹³⁶³ This fragment of Aristotle was the first seed out of which grew eventually in 1492 the heroic experiments of Christopher Columbus.
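How large the estimate was can be seen by converting it, with the caveat that the length of the stade is uncertain; the range below covers the values most commonly quoted (roughly 157 to 185 meters).

```latex
\[
400{,}000 \ \text{stades}\times(0.157\ \text{to}\ 0.185)\ \tfrac{\text{km}}{\text{stade}}
\;\approx\; 63{,}000\ \text{to}\ 74{,}000\ \text{km},
\qquad
\text{modern circumference}\approx 40{,}000\ \text{km},
\]
```

that is, an overestimate of roughly 60 to 85 percent.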
The main achievement of the astronomers of this period, if not of Aristotle himself, was the completion of the theory of homocentric spheres. This achievement implied the availability of a fairly large number of solar, lunar, and planetary observations. Where did Eudoxos, Callippos, and Aristotle obtain them? In Egypt and Babylonia.
According to Simplicios’ commentary on the De caelo, the Egyptians possessed a treasure of observations extending over 630,000 years, and the Babylonians had accumulated observations for 1,440,000 years.¹³⁶⁴ A more modest estimate was quoted by Simplicios from Porphyry, according to which the observations sent from Babylon by Callisthenes, at Aristotle’s request, covered a period of 31,000 years. All that is fantastic, but Oriental observations covering many centuries were actually available to the Greek theorists and were sufficient for their purpose. The Greeks obtained them in Egypt and Babylonia; they could not have obtained them in Greece, where men of science had preferred to philosophize each in his own way and where no institution had ever been ready to continue astronomic observations throughout the centuries. Simplicios’ exaggerations are simply a tribute to the antiquity and the admirable continuity of Oriental astronomy.
To return to Aristotle, though he was acquainted in a general way with Egyptian and Babylonian astronomy, he did not need their observations as keenly as did professionals like Eudoxos and Callippos. Being primarily a philosopher, he was more interested in questions of such generality that observations were of little help. For example, in the De caelo we find discussions concerning the general shape of the heavens, the shape of the stars, the substance of the stars and planets (which he assumed to be “aether”), the musical harmony caused by their motions. This may seem very foolish, but in justice to Aristotle and his contemporaries we should remember that many irrelevant and futile questions had to be asked and discussed before the pertinent ones were disentangled from the rest. In science immense progress is made whenever the right question is asked, the asking in proper form is almost half of the solution, but we can hardly expect these right questions to be discovered at the beginning.
The fortune of Aristotelian astronomy was singular. The theory of homocentric spheres was eventually displaced by the theories of eccentrics and epicycles, which were eventually crystallized in the Almagest of Ptolemy (II–1). Later, as the weaknesses of the Almagest appeared more clearly, some astronomers went back to Aristotle. The history of medieval astronomy is largely a history of the conflict between Ptolemaic and Aristotelian ideas; the latter were relatively backward and hence the growth of Aristotelianism retarded the progress of astronomy.¹³⁶⁵
AUTOLYCOS OF PITANE
In order to complete our survey of mathematics and astronomy in this golden age we must still speak of one great person, whose appearance ends it beautifully. Autolycos was born in Pitane ¹³⁶⁶ in the second half of the century, and he flourished probably in the last decade. He was an older contemporary of Euclid.¹³⁶⁷ Hence, he represents the transition between the great Hellenic school of mathematics and the Alexandrian age.
We know almost nothing about him, not even the place where he flourished. Did he go to Athens? That would have been natural enough. Yet Pitane was a civilized and sophisticated place, a well–located harbor facing Lesbos, not very far from Assos where Aristotle had taught. We know that Autolycos was the teacher of a fellow citizen of his, Arcesilaos of Pitane (315–240), founder of the Middle Academy. This suggests that he resided in Pitane and fixes the date approximately, the turn of the century.
Our ignorance concerning his personality is in paradoxical contrast with the fact that he wrote two important mathematical treatises, which are the earliest Greek books of their kind transmitted to us in their integrity. We know his works exceedingly well, but nothing of himself, except that he was the author of them.
Before speaking of these two books we must refer briefly to a third one which is lost and wherein he criticized the theory of homocentric spheres. He wondered how that theory could be reconciled with the changes of relative size of Sun and Moon and with the variations in the brightness of the planets, especially Mars and Venus. Judging from his controversy with Aristotheros, he could not solve that difficulty.¹³⁶⁸
The two books that have come down to us deal with the geometry of the sphere.¹³⁶⁹ As all the stars were supposed to be on a single sphere (and in any case one might always consider their central projections on that sphere), mathematical problems concerning their relations were problems of spherical geometry. For example, any three stars are the vertexes of a spherical triangle, the sides of which are great circles. When we try to measure the distance between two stars on that sphere (one side of the triangle), what we measure really is the angle which that side subtends at the center of the earth or as seen by a terrestrial observer. All such problems are solved now by means of spherical trigonometry, but trigonometry had not yet been invented in Autolycos’s time and he tried to obtain geometric solutions.
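A problem of exactly this kind, the angular distance between two stars, is handled today with a few lines of spherical trigonometry. The sketch below is only an illustration (the coordinates are made-up values, not data from the text); it computes the angle which the arc joining two points on the celestial sphere subtends at the observer.

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Angle, in degrees, between two points on the celestial sphere.

    ra/dec are right ascension and declination in degrees; the result is
    the great-circle arc joining the two stars, i.e. the angle that arc
    subtends at the observer (or at the center of the sphere).
    """
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    # Spherical law of cosines for the side of a spherical triangle.
    cos_d = (math.sin(dec1) * math.sin(dec2)
             + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

# Hypothetical coordinates for two stars, chosen only for illustration.
print(round(angular_separation(10.0, 41.2, 83.8, -5.4), 1))
```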
Irrespective of their practical value, which was considerable, these books are of great interest to us because of their Euclidean form, before Euclid. That is, the propositions follow one another in logical order; each proposition is clearly enunciated with reference to lettered figures, then proved. Some propositions, however, are not proved; that is, they are taken for granted, and this suggests that Autolycos’ books were not the first treatises on spherical geometry, but had been preceded by at least one other now lost. The substance of the lost treatise is somewhat preserved in the Sphaerics of Theodosios of Bithynia (I–1 B.C.), which gives the proofs of theorems unproved by Autolycos.
The first of Autolycos’ treatises, entitled On the moving sphere, deals with spherical geometry proper; the second, On risings and settings [of stars], is more astronomic, that is, it implies observations. Both treatises are too technical to be analyzed here.
How did it happen that such books were preserved? Their practical value was immediately realized by mathematical astronomers, who transmitted them from generation to generation with special care. Their preservation was facilitated and insured by the fact that they were eventually included in a collection called “Little astronomy” (in opposition to the “Great collection,” Ptolemy’s Almagest). The “Little astronomy” was transmitted in its integrity to the Arabic astronomers, and became in Arabic translation a substantial part of what they called the “Intermediate books.”¹³⁷⁰ The maxim “l’union fait la force” (part of the heraldic achievement of Belgium) applies to books as well as to men: when books become parts of homogeneous collections, each helps the other to survive.
ASTRONOMY IN ARISTOTLE’S TIME
The main achievement is the completion of the theory of homocentric spheres by Callippos; this may be put to the credit of the Lyceum. The Greeks were theorists rather than observers, but they were fortunate in that a treasure of Egyptian and Babylonian observations was available to them. It is almost impossible to determine their use of it except in a very general way. We can see only the fruits of that use, the main one being the theory of homocentric spheres. Heracleides was the first to propose a kind of geoheliocentric system, that is, to postulate the rotation of some planets around the Sun. He may be called the first Greek forerunner of the Copernican astronomy. At the end of the century Autolycos was building the geometric foundation of astronomy. Aristotle helped to state astronomic problems and to explain their relation to the rest of knowledge.
Note that none of these men was a Greek of Greece proper; their birthplaces were in Macedonia (Stageira) or in Asia Minor (Heracleia Pontica, Cyzicos, and Pitane).
PHYSICS IN THE EARLY LYCEUM
Aristotle, his colleagues, and his younger disciples must have devoted much time to the discussion of physical questions; it was the old Ionian tradition of research de natura rerum, though already much better focused. A part of that was astronomic, but astronomy was always mixed with physics. The great advantage of astronomy proper, and the main cause of its early progress, was that some problems at least were very definite, and could be isolated with relative ease — such problems as how to account for the regular irregularities of planetary motions, or what are the shapes of the Earth and the planets, their mutual distances, their sizes. Not only was it possible to state these problems, but solutions were offered, some of which were sufficient at least as first approximations.
The universe was divided into two parts, essentially different — the sublunar world and the rest. Physical questions applied mainly to the sublunar world, astronomical ones to the Moon and beyond.
Fig. 93. Beginning of the Aristotelian physics in Latin translation, Physica sive De physico auditu (Padua, 1472–1475; Klebs, 93.1). First edition of the Physics in any language. It contains the double text in Latin with commentary by Ibn Rushd (XII–2). The anonymous printer was Laurenzius Canozius, in Padua. [Courtesy of the Bibliothèque Nationale, Paris.]
Aristotelian physics, or more correctly Peripatetic physics, is found in many books, such as Physica (Fig. 93), Meteorologica, Mechanica, De caelo, De generatione et corruptione, and even in Metaphysica, and the dating of some of these works is very uncertain. For example, the Mechanica has been ascribed not only to Aristotle, but also to Straton of Lampsacos (III–1 B.C.), who was Euclid’s contemporary. The fourth book of the Meteorology is also ascribed to Straton. Let us forget for a moment these differences and try to describe the physical ideas that were explained in the Lyceum in the fourth and third centuries.
In order to avoid confusion we must try to forget another thing, our present conception of physics, which is relatively recent. In ancient and medieval times, and even down to the seventeenth century, physics concerned the study of nature in general, inorganic and organic.
The center of Aristotelian¹³⁷¹ physics is the theory of motion or of change. Aristotle distinguished four kinds of motion:
(1) Local motion, that is, our kind, translation of an object from one place to another. Such local motion, Aristotle recognized, is fundamental; it may and does occur in the other kinds.
(2) Creation and destruction; metamorphoses. As such changes are eternal, they imply compensations, or some kind of cyclic return. If they proceeded only in one direction they could not continue eternally. Creation is the passage from a lesser to a higher perfection (say the birth of a living being); destruction is the passage from a higher form to a lower (say the passage from life to death). There is neither absolute creation nor absolute destruction.
(3) Alterations, which do not affect the substance. Objects may receive another shape yet remain substantially alike. A man’s body may be altered by injury or by disease.
(4) Increase and decrease.
Everything that happens, happens because of some kind of motion as defined above. The physicist studies these “motions” for their own sake but also better to understand the substance undergoing them.
It is impossible, however, to explain nature only in terms of “material motions” or mechanism. One has to take into account some general ideas, such as that of universal economy: God (or nature) does nothing in vain. Every motion has a direction and a purpose. The direction is toward something better or more beautiful. The purpose of a being is revealed by the study of its genesis and evolution. We are falling back upon the theory of finalism (or teleology) which has been discussed in the previous chapter.
Everything in nature has a double aspect: material and formal. The form expresses the aim, which cannot be accomplished, however, except through some kind of matter. The weaknesses, imperfections, monstrosities that occur in nature are caused by the blind inertia of matter, defeating the purpose.
Aristotle had inherited and accepted the theory of four elements, at least to account for the changes that occur in the sublunar world. (For the changeless world above the Moon it was necessary to postulate a fifth, incorruptible, element, the aether.) He had also accepted the four qualities; at least, he considered them (wet and dry, hot and cold) the fundamental ones, to which others (for example, soft and hard) could be reduced. Only the necessities are formal; individual objects are contingent. It is the forms that the scientist must try to understand, but he cannot understand them except through individual (accidental) examples. We are thinking of Plato, and in some way, Aristotle is as idealistic as his predecessor, yet with a difference: Plato passes from the Form (the Idea) to the object, Aristotle does the reverse. That difference is simple but immense.
Aristotle made an exception, however, for some fundamental beings, such as the Prime Mover or the Elements, beings whose essence implies existence, and which cannot be known except a priori. All the rest can be known only empirically, by gradual induction, from individual cases to more general ones, and from inferior forms to superior ones. Mechanism alone can never explain the universe, yet analyses, descriptions, and inductions must precede every synthesis. That procedure is essentially the procedure of modern science.
Though he often quoted Democritos and praised him repeatedly, Aristotle rejected the atomic theory and what might be called Democritian materialism. He rejected the concept of vacuum,¹³⁷² because he could not conceive motion except in a definite medium, and was not everything that happened due to a kind of motion? It is possible that Aristotle rejected the atomic theory only because of the wrong use that Democritos (or his disciples) had made of it. It was claimed that Democritos tried to explain everything in mechanical terms, while the Aristotelian explanations were partly material and partly formal.
Celestial bodies move eternally, with constant speed, along circles. Sublunar bodies do not move if they are in their natural places; if they are removed from those places they tend to return to them along a straight line. There are two possible motions along a straight line, upward and downward.¹³⁷³ Heavy bodies like earth move downward; light ones like fire, upward. Between these two elements, which are absolutely heavy and absolutely light, occur the two others, water and air, which are respectively less heavy than earth and less light than fire.
Aristotelian mechanics includes adumbrations of the principle of the lever, of the principle of virtual velocities, of the parallelogram of forces, of the concept of center of gravity, and of the concept of density. Some of these ideas were to be given explicit and quantitative formulation by Archimedes of Syracuse (III–1 B.C.), others would be developed later, but the germs were already in the Aristotelian corpus.
Most discussions of Aristotelian mechanics center upon his dynamics. The genesis of Aristotle’s ideas on this subject is extremely instructive. We have seen that he did not accept the concept of vacuum.¹³⁷⁴ Motion is inconceivable in emptiness; hence, when he considered the movement of bodies it was always in a resisting medium. On the basis of gross observations he concluded that the speed of a body is proportional to the force pushing (or pulling) it and inversely proportional to the resistance of the medium. Any object moving in a resisting medium is bound to come to a standstill unless a force continues to push it. (In a vacuum, the resistance would be zero and the speed infinite. ) He also remarked that the speed of a falling body would be proportional to its weight, and that it would increase as the body was further removed from its point of release and came closer to its natural place. Hence the velocity would be proportional to the distance fallen.
The discovery of the true laws of motion became possible only when the Aristotelian prejudice against a vacuum was removed. Instead of rejecting motion in a vacuum as absurd, one assumed its possibility and considered what would happen if resistances were eliminated. Thanks to that happy abstraction, Galileo found that the speed was independent of the weight or mass of the falling body. He first thought that the speed would be proportional to the distance fallen but then realized that it was proportional to the time elapsed. The final laws of motion were discovered by Newton, chiefly the one that motive forces are proportional not to the speed of the body moved but to its acceleration. In fairness to Aristotle, however, one must remember that his conclusions were not unreasonable within the frame of his experimental knowledge. Mach was unjust to him and Duhem perhaps too generous. It is just as unfair to condemn Aristotle for not accepting what the invention of the air pump would prove, as for not seeing what could be seen only after the invention of the telescope.
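To make the contrast concrete, here is a small numerical sketch of our own (the unit weights, the nominal resistance, and the modern value of g are all illustrative assumptions, not anything found in the text). It contrasts the Aristotelian rule, in which the speed of a falling body is proportional to its weight and inversely proportional to the resistance of the medium, with the Galilean rule, in which speed grows with elapsed time and is independent of weight.

```python
G = 9.8  # m/s^2, modern value, used only for illustration

def aristotelian_speed(weight, resistance, k=1.0):
    # Speed proportional to the motive force (here, the weight) and
    # inversely proportional to the resistance of the medium.
    return k * weight / resistance

def galilean_speed(elapsed_time):
    # Speed proportional to the time elapsed, independent of weight.
    return G * elapsed_time

for weight in (1.0, 10.0):  # a light body and one ten times heavier
    print(weight,
          aristotelian_speed(weight, resistance=2.0),  # tenfold weight -> tenfold speed
          galilean_speed(elapsed_time=1.0))            # same speed for both bodies
```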
The great difficulty of terrestrial (as compared with celestial) mechanics consisted in the extreme complexity of natural events. These could not become understandable without abstractions of great boldness. Aristotle’s imagination was not equal to that, not because it was inferior to Galileo’s or Newton’s, but because it could not depend upon the same mass of experience and could not soar off from the same altitude.
The Meteorologica ascribed to Aristotle contains meteorology in our sense, plus much else that we would classify under physics, astronomy, geology, even chemistry. ¹³⁷⁵ The astronomic part came in because Aristotle considered such phenomena as comets and the Milky Way as originating below the Moon; these phenomena were thus for him meteorologic rather than astronomic. Such errors were natural and pardonable in his time, and indeed until the end of the sixteenth and the seventeenth centuries. The unpredictable behavior of comets seemed absolutely different from the complex and solemn regularities of planetary motions. The planets suggest eternity and divinity; on the contrary, what better examples of capriciousness and evanescence could one adduce than the comets, which appear in the sky and after a relatively short time dissolve and disappear? Moreover, comets were generally seen outside of the zodiac. That Aristotelian prejudice was not shaken until the publication by Tycho Brahe, in 1588, of his observations of the comet of 1577. Brahe proved that its parallax was so small that the comet could not be sublunar; its orbit exceeded that of Venus.¹³⁷⁶
As to the Milky Way, which divides the heavens as a great circle along the solstitial colure, it also was supposed to be a meteorologic phenomenon, formed by dry and hot exhalations, similar to those that cause the meteors. A better understanding of the Milky Way was hardly possible without a telescope. Aristotle’s views were finally disproved by Kepler, according to whom the Milky Way was concentric with the Sun, on the inner surface of the starry sphere.
A great many other phenomena are described and discussed in the Meteorology, such as meteors, rain, dew, hail, snow, winds, rivers and springs, the saltness of the sea, thunder and lightning, earthquakes. The consideration of each of them would require at least a page, and space is lacking, the patience of our readers limited. Let us restrict ourselves to a few remarks concerning Aristotle’s optical theories. He rejected the view that light is material, being due to corpuscles emitted by the luminous object or emanating from the eye; on the contrary, he suggested that it was a kind of aetherial phenomenon. (Please do not call this an anticipation of the wave theory of light.) He was aware of the repercussions of sound (echo) and of light, and offered a theory of the rainbow, based upon the reflection of light in water drops and thus incomplete, yet very remarkable. His theory of colors has been compared to Goethe’s, a comparison that is not very complimentary to the latter but is very much to Aristotle’s credit.¹³⁷⁷
It is right to marvel at the endless number of physical questions in the Aristotelian corpus, but one should resist the temptation of reading into them too many ideas that are comparable to modern ideas yet could not possibly have had in their author’s mind the meaning and pregnancy that they have in ours. One should never forget that the authority of a statement is a direct function of the knowledge and experience upon which it is based; many Aristotelian statements are brilliant, yet as irresponsible as the queries of an intelligent child.
The fourth book of the Meteorology is probably the work of Straton.¹³⁷⁸ As it has come to us, it might be called the first textbook of chemistry. It discusses the constitution of bodies, the elements and qualities, generation and putrefaction, concoction and inconcoction (indigestion), solidification and solution, properties of composite bodies, what can and what cannot be solidified and melted, homoiomerous bodies.¹³⁷⁹ The final conclusion is that end and function are more evident in nonhomoiomerous bodies than in the homoiomerous bodies that compose them, and in these than in the elements. Aristotle (or Straton) had been thinking hard on the differences that may or may not occur when two different bodies are mixed together; they may remain separate or separable, or they may be combined into something essentially new; their two forms may disappear or exist only in potentia, while a new form is created.¹³⁸⁰
All of which is again very impressive, especially when we bear in mind the impenetrability of the chemical jungle until the end of the eighteenth century. Aristotle and Straton went as far as it was possible to go in their time, or more exactly, their thinking far exceeded their experimental reach, and more than two thousand years would be needed to bring it to maturity and to fruitage.
We have given a few examples of the long acceptance of Aristotelian ideas and prejudices. One might say in a general way that Aristotelian physics dominated European thought until the sixteenth century. Then the revolt that had been gathering strength for centuries became more articulate, more intense, and better organized. In the middle of that century Ramus¹³⁸¹ went to the extreme of proclaiming that everything that Aristotle had said was false. The foundations of Aristotelian physics were undermined in the following century by Gassendi, who revived atomism, and by Descartes,¹³⁸² who accepted some of Aristotle’s prejudices yet built up an entirely new structure. Yet even then the general conception of physics remained as broad as ever. Knowledge was hardly strong and sharp enough in any part of the immense field to separate that part from the rest, or to create physics as we understand it now.¹³⁸³
Aristotle’s views were rejected, but they were not forgotten or overlooked, and there remained an active Scholastic and Peripatetic opposition. Aristotle was still very much alive, though on the defensive, as late as the eighteenth century.
GREEK MUSIC. ARISTOXENOS OF TARENTUM
One disciple of Aristotle must still be introduced before we close this chapter, not the least of them, the musician, or rather the theorist of music, Aristoxenos. Aristotle himself was much interested in music, not only in the ethical value of it, somewhat in the Platonic manner,¹³⁸⁴ but also in the more technical sense. He was familiar with the Pythagorean discovery, the numerical aspect of musical harmony. Pythagoras or one of his early disciples had observed that when the vibrating string of a musical instrument was divided in simple ratios (1 : ¾ : ⅔ : ½) one obtained very pleasant accords. Aristotle¹³⁸⁵ extended the same operation to reed pipes.¹³⁸⁶ He realized the importance of frequency of vibration, yet confused it with speed of transmission, and wrongly believed with Archytas that the speed of sound increased with the pitch. He asked the question, Why is the voice higher when it echoes back?¹³⁸⁷ The question was curious and pertinent, but it was not answered until 1873 by Lord Rayleigh’s theory of harmonic echoes.¹³⁸⁸
It is probable that other members of the Lyceum discussed questions concerning acoustics and music, because the books of Aristoxenos, which we shall examine presently, contain a body of knowledge on that subject that is remarkable alike because of its relative depth, extent, and complexity.
Most of what we know concerning Aristoxenos is derived from Suidas (X–2), but Suidas used ancient books that are lost to us, and whatever he tells us is sufficiently confirmed from various other sources to be reliable. Aristoxenos was born in Tarentum, close to the country where Pythagorean fancies had matured; he was educated by his father, Spintharos, who was a musician, by Lampros of Erythrai and Xenophilos the Pythagorean,¹³⁸⁹ finally by Aristotle. After the master’s death the election of Theophrastos instead of himself as head of the Lyceum infuriated him. Suidas says that he flourished in the 111th Olympiad (336–333)¹³⁹⁰ and that he was a contemporary of Dicaiarchos of Messina; he adds that Aristoxenos’ writings dealt with music, philosophy, history, and all the problems of education, and that he wrote altogether 453 books!
The only work of his that has come down to us is his Elements of harmony (Harmonica stoicheia), which is the most significant treatise of its kind in ancient literature. As we have it, it seems to be an artificial recombination of two separate works. It covers (in Macran’s edition) 70 pages or some 1610 lines.¹³⁹¹ It is a tedious book wherein Aristoxenos applied the logical methods of the Lyceum to the exposition of the knowledge that had been transmitted to him by Spintharos, Lampros, and Xenophilos or that he had obtained by his own experiments. It is divided into three parts, treating (1) generalities, pitch, notes, intervals, scales; (2) idem, plus keys, modulation, melody (the polemical tone of this suggests the existence of other writings now lost); (3) some twenty–six theorems on the combination of intervals and tetrachords in scales.
The most original part of Aristoxenos’ work is the theoretical determination of the intervals. Starting from the three Pythagorean intervals (2:1, 3:2, 4:3; octave, fifth, and fourth) he takes as unit the difference between the fifth and the fourth (the tone). That unit is too large, however; in order to obtain subunits he divides the interval arithmetically (not by extraction of roots). For example, in the descending fourth la—mi he inserts two tones, which gives the notes sol, fa. The new interval between fa and mi is the semitone. If this new interval is really a semitone, there are 5 semitones in the fourth, 7 in the fifth, and 12 in the octave. Aristoxenos went even further and considered not only semitones but also thirds, fourths, and even eighths of the tone; these smaller divisions fell into abeyance. The empirical confusion between a leimma¹³⁹² and a semitone led Aristoxenos to a calculus comparable to the calculus by logarithms: the intervals (which are ratios) are calculated by means of additive units. This is extremely interesting, yet it would be foolish to conclude that Aristoxenos was a forerunner of Napier! There’s many a slip ’twixt the cup and the lip, and there are many more between an idea and the theory eventually built upon it.¹³⁹³
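The analogy with logarithms can be made explicit in a short sketch of our own (not, of course, anything Aristoxenos could have written): stacking intervals multiplies frequency ratios, while counting those intervals in semitones adds, just as logarithms turn products into sums. The last line also shows why the additive scheme is only approximate for the Pythagorean ratios: twelve pure fifths overshoot seven octaves by the small amount known as the Pythagorean comma.

```python
import math

def semitones(ratio):
    # Size of an interval, measured in equal-tempered semitones: 12 * log2(ratio).
    return 12 * math.log2(ratio)

octave, fifth, fourth = 2 / 1, 3 / 2, 4 / 3

# Ratios multiply, semitone counts add: a fifth plus a fourth makes an octave.
print(semitones(fifth), semitones(fourth))    # ~7.02 and ~4.98
print(semitones(fifth) + semitones(fourth))   # exactly 12.0, since (3/2)*(4/3) = 2

# Twelve pure fifths do not quite equal seven octaves: the Pythagorean comma.
print(fifth ** 12 / octave ** 7)              # ~1.0136
```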
The treatise of Aristoxenos is nevertheless highly significant, one of the masterpieces of Hellenic thought. Its influence was considerable, either directly or through the intermediary of the Harmonics of Ptolemy (II–1). The higher learning of late antiquity and of the medieval period included four main subjects (hence the name quadrivium),¹³⁹⁴ and those four subjects were arithmetic, music, geometry, astronomy. Music, not physics! Thanks to Pythagoras and Aristoxenos, music was a mathematical science, while physics remained in a qualitative stage, close to philosophy.
Aristoxenos was less influential in the West, because the first great teacher of music in the Latin language was Boetius (VI–1), whose handbook was based chiefly upon the Pythagorean tradition rather than on the Aristoxenian one. The Byzantine musicologists, on the contrary, followed Aristoxenos. For Manuel Bryennios (XIV–1), who composed the latest Byzantine Harmonics, the history of music was divided into three periods — pre–Pythagorean, Pythagorean, and post–Pythagorean. The third of these periods was the one initiated by Aristoxenos and continued by the other musicologists of classical and Byzantine times; Manuel himself was still in that third and last age, the age of Aristoxenos. Indeed, Greek musical theory never surpassed Aristoxenos’ exposition; nor did the practice of music (composition, playing, singing, teaching) change materially after him.¹³⁹⁵
Ancient music included not only music as we understand it but also metrics and poetry, for Greek poetry was composed to be chanted. Moreover, it had an ethical and cosmologic aspect; the theory of harmony in music was a part of the theory of harmony in the whole cosmos or in the soul of man. Thus music was a branch of philosophy as well as a branch of mathematics. It brought the humanities into the quadrivium.
Id, Ego, and Superego in Dr. Seuss’s The Cat in the Hat
Grades: 9 – 12
Lesson Plan Type: Unit
Estimated Time: Eight 50-minute sessions
Kennewick Man: Science and Sacred Rights
This lesson plan explores the controversy surrounding "Kennewick Man," the name given to a skeleton discovered near Kennewick, Washington, in July 1996. Identified by scientists as approximately 9,000 years old, Kennewick Man was claimed by five Northwestern tribes, who invoked their right under NAGPRA, the Native American Graves Protection and Repatriation Act, to rebury him in accordance with their religious traditions. When archeologists filed suit to prevent this, arguing that the skeleton is not a tribal ancestor and can shed new light on the earliest inhabitants of North America, Kennewick Man became the center of a debate between science and religion in which both sought the protection of government and the law. The lesson plan introduces students to this complex, sharply contested controversy in a case study format, gathering documents from both sides to enrich their understanding of ancient and present day Native American cultures, and to encourage reflection on the relationship between science and religion, which have been cast as antagonists over similar issues from Galileo's time to our own.
No guiding questions provided.
To learn about the discovery of Kennewick Man and what this ancient skeleton suggests about the earliest inhabitants of North America.
To examine the controversy surrounding Native American efforts to rebury Kennewick Man in accordance with their traditions and federal law.
To explore the relationship between science and religion as reflected in their shared concern about human origins.
To gain experience in the close analysis of argument.
ODD / Oppositional and Defiant Behavior
Oppositional defiant disorder (ODD) is a diagnosis that applies to some people who are excessively aggressive, angry, or defiant. Though it’s most commonly diagnosed in children, adults can have ODD. Oppositional defiant behavior goes beyond what is developmentally normal or a clear reaction to challenging circumstances. For example, neither a toddler throwing tantrums nor a teenager reacting with anger to abuse warrants a diagnosis of ODD.
ODD can feel overwhelming. Kids and teens may feel out of control and angry. This can affect their relationships with peers and family, and undermine their ability to succeed at school. ODD often affects an entire family. Parents may feel frustrated, angry, and anxious. Siblings may be afraid of the child with ODD. It’s common for families to disagree about how to manage ODD. Spouses may find themselves frequently fighting about the child.
Therapy can help children with ODD and their families manage the many challenges they face. Individual counseling can help kids better control their emotions, while family counseling can help families support a child struggling with ODD and find better strategies for communicating with one another. If you suspect the condition may be at play, find a therapist who specializes in ODD.
Read more at GoodTherapy.org
Many of the actions you take on a daily basis can be measured by their carbon footprint. As your power usage increases, so does your environmental impact. Cars, homes and possessions all contribute to this impact by using energy, most of which is produced by burning fossil fuels. However, renewable and sustainable resources can help lighten your ecological footprint, even more so when combined with reducing overall energy usage when possible.
What Is a Carbon Footprint?
Your carbon footprint is a measurement of the amount of greenhouse gases produced by the activities in your daily life. One main source of greenhouse gas is burning fossil fuels. That includes the gas in your car and the coal burned at your power plant. Scientists have concluded that humans are producing more greenhouse gases than ever before. These gases trap heat in our atmosphere, causing our planet to warm up and changing our climate. (See References 1)
Your carbon footprint, therefore, measures the amount of potential impact your daily life has on the environment. By reducing the amount of greenhouse gases produced by your lifestyle, you can reduce your footprint and help slow climate change on Earth.
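As a rough illustration of what such a measurement involves, the sketch below adds up annual CO2 from household electricity and driving. The emission factors and the usage figures are assumed, illustrative values only; real footprint calculators use region-specific factors.

```python
# Illustrative emission factors (assumed values, not official figures).
KG_CO2_PER_KWH = 0.4    # electricity from a fossil-heavy grid
KG_CO2_PER_MILE = 0.35  # a typical gasoline car

def annual_footprint_kg(kwh_per_month, miles_per_year):
    """Very rough annual CO2 estimate from electricity use and driving."""
    return kwh_per_month * 12 * KG_CO2_PER_KWH + miles_per_year * KG_CO2_PER_MILE

# Example household: 900 kWh per month and 12,000 miles driven per year.
print(round(annual_footprint_kg(900, 12000)), "kg of CO2 per year")
```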
What Are Sustainable Resources?
Sustainable resources are resources that can be used indefinitely without ever running out. For example, wood can be used to build structures or burned for fuel, and trees can be replanted to replenish the supply of wood. Sunlight does not need to be replaced --- as long as the sun shines, we can make use of solar energy. Solar, wind, water and geothermal energy do not emit any greenhouse gases and provide steady sources of energy, which makes them integral to reducing greenhouse emissions. (See References 2) Materials such as wood or ethanol are sustainable resources as well; however, they emit greenhouse gases when burned.
How to Use Renewables
Most of your daily activities probably depend on electricity, and most electricity is produced by burning fossil fuels. Your power plant likely creates electricity by burning coal or natural gas. Switching to renewable energy can be a challenge, but it is possible. First, contact your utility company to ask about buying green power. Many utilities can sell you energy produced using sustainable methods, including solar and wind energy, through your existing power lines. (See References 3)
If you are planning to buy or build a new home, consider one built with sustainable materials including recycled wood or other "green" technologies. Decking, floors and even insulation can be made from recycled wood and plastics. Some new houses include solar panels for lighting, heat and cooling.
While using sustainable resources can reduce your carbon footprint, you can also help by changing some of your lifestyle choices. For example, you might sell your gasoline-powered car and buy an electric hybrid --- or you could save gas by carpooling to work and telecommuting when possible. You could install solar panels on your roof, or weatherproof your house and change the light bulbs and appliances to energy efficient versions to reduce the amount of energy your household requires. Ultimately, the best way to reduce your carbon footprint is with a combination of energy consciousness and sustainable resources.
- U.S. Environmental Protection Agency: Frequently Asked Questions About Global Warming and Climate Change
- U.S. Energy Information Administration: Renewable Energy Explained
- U.S. Environmental Protection Agency: Climate Change --- What You Can Do
- U.S. Environmental Protection Agency: Climate Change --- Greenhouse Gas Emissions: Individual Emissions
A body was projected vertically upwards at 9.1 meters per second. Determine the time taken to reach the maximum height. Take 𝑔 equal to 9.8 metres per second squared.
In order to solve this question, we will use one of the equations of motion or SUVAT equations: 𝑣 equals 𝑢 plus 𝑎𝑡, where 𝑢 is the initial velocity, 𝑣 is the final velocity, 𝑎 is the acceleration, and 𝑡 is equal to the time. The body is projected vertically upwards at 9.1 meters per second. This means that 𝑢 is equal to 9.1. At the maximum height, the velocity of the body is zero metres per second. Therefore, 𝑣 is equal to zero. As gravity is working against the body, 𝑎 is equal to negative 9.8 metres per second squared. And finally, 𝑡 is the value we’re trying to calculate.
Substituting in these values into the equation 𝑣 equals 𝑢 plus 𝑎𝑡 gives us zero is equal to 9.1 minus 9.8𝑡. Rearranging this equation gives us 9.8𝑡 is equal to 9.1. Dividing both sides of the equation by 9.8 gives us a value for 𝑡 of 9.1 divided by 9.8. This is equal to 13 14ths of a second or 0.93 seconds to two decimal places.
This means that the time taken for a body to reach its maximum height if it is projected vertically upwards at 9.1 meters per second is 13 14ths of a second.
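The same calculation can be checked in a couple of lines; this is simply a restatement of the worked solution above, with the sign convention (upward positive, gravity acting downward) made explicit.

```python
u = 9.1  # initial upward velocity, m/s
g = 9.8  # magnitude of the acceleration due to gravity, m/s^2

# At the maximum height v = 0, so 0 = u - g*t, which gives t = u / g.
t = u / g
print(t)            # 0.9285714..., i.e. 13/14 of a second
print(round(t, 2))  # 0.93
```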
Roughly 1,000 years ago, Europe enjoyed several centuries of balmier average temperatures. Dubbed the "Medieval Warm Period," it was the last time before the present that agriculture could flourish in Greenland. This era also shows yet again that changes to natural systems can drive local climate change—and provided fodder for countless misunderstandings about the nature of present day global warming.
But new research shows that the MWP, as it is affectionately known in acronym-happy science circles, as well as the "Little Ice Age" that almost immediately followed it (and spelled doom for the Greenland Norse) were likely the result of fluctuations in the sun's strength and the frequency of volcanic eruptions, among other natural causes.
Cores drilled from ancient ice sheets, as well as cores of coral reef and lake sediment, show that at the same time as Europe enjoyed average temperatures as warm as today, the tropical Pacific was unusually cold. This suggests that natural cycles—such as the succession of El Nino and La Nina conditions in the Pacific Ocean—forced these climate anomalies.
Unfortunately, neither the sun nor other natural cycles can entirely explain the recent warming trend that has brought potatoes back to coastal Greenland. To date, the only explanation that matches those observations is a concurrent rise in greenhouse gas concentrations in the atmosphere.
CO2 and its peers now are responsible for trapping an extra three watts per square meter of planet. And if that continues, the MWP will end up looking like an ice age.
An informational narrative is used when the writer is trying to inform others about a particular subject with which they may not be familiar. Informative essays can also be called expository or explanatory essays. This kind of essay is aimed at presenting information in a clear and concise manner so the reader can learn about the subject matter.
Basics of an Informational Narrative
An informational narrative is written using a variety of resources to present information. It consists of an introduction with a thesis, the body of the text with cited information supporting the thesis, and the conclusion which sums up the points presented in the narrative.
When writing an informational narrative, avoid using the personal pronoun 'I,' and do not use contractions. Because this is a narrative which is supposed to present an idea, you will need a method to cite the sources of the information and ideas that inspired your own thoughts. This is important so the reader can go back to the sources cited and confirm that what was written is in fact what the source states.
Elements of the Introduction
The introduction is where the writer presents the thesis which the narrative will be based around. The thesis is the central argument which the body of the narrative will support. When writing an introduction, attempt to get to the thesis in the first or second sentence of the narrative. The narrative is a tool to inform the reader about the thesis of the paper. Realize that the reader is more concerned about the thesis and less about beautiful prose. Present your argument in the thesis as soon as possible.
Body of the Narrative
The body of the narrative is where you present the information which you have obtained in your research and supports the thesis which was presented in the introduction. The body of the text can be as long or as short as required to complete the narrative, but in general they are at least three paragraphs long. When writing you will want to present a single idea which relates to the thesis in each paragraph along with the required citations. Do not present more than a single idea in each paragraph. Otherwise the paragraph can seem cluttered.
Conclusion of the Narrative
Concluding the narrative is not about simply restating the thesis from the introduction. Summarize the information which was presented in the body of the narrative in relation to the thesis, creating a synthesis from which the reader can conclude that what was presented supports the thesis. At the very most a conclusion should only be several paragraphs unless the narrative is extremely long or complex. Never introduce new information in the conclusion. If the information is important enough to include in the conclusion it should have been included in the body of the narrative.
Tips for an informational narrative
The informational narrative is about informing the reader in a clear and concise manner about the thesis of the paper. Again, concise and precise wording is more important in this sort of narrative than lyrical language. Think about the audience of the narrative as well. This will affect how you write the narrative. Writing for an audience who does not understand the thesis presented will require more information and explanation than writing for a group of experts whose careers are in the subject's field. Finally, the flow of the narrative should include transitions between each paragraph. Transitions are sentences or phrases that help the reader move smoothly between your ideas.
This artist’s impression shows exocomets orbiting Beta Pictoris. L. Calçada / ESO
Scientists have spotted three exocomets, or comets outside of our Solar System, in orbit around a bright young star called Beta Pictoris in the constellation of Pictor. They used NASA’s Transiting Exoplanet Survey Satellite (TESS) to capture detailed data about the amount of light being generated, which they used to identify the tails of the comets.
Exocomets are rarely spotted. They were first observed in 1987 in the same Beta Pictoris system, and since then only 11 stars have been found which have exocomets in orbit around them. All of the stars found so far to support exocomets are young A-type stars, which are white or bluish in color and have very high surface temperatures of up to 7500 Kelvin.
Beta Pictoris is a particularly good location to hunt for cosmic bodies because it is relatively nearby, at 63 light-years away from Earth. It also has a debris disk of dust and gas around it which is warped, and which could potentially create icy bodies like those found in the Kuiper Belt in our Solar System. And on a practical level, astronomers are able to get a good view of the disk because of its angle in relation to Earth. Plus there is a lot of dust in the disk which scatters starlight, making it shine brightly.
“Because of its close proximity and circumstellar disk, the Beta Pictoris system can be considered an ideal test bed to study the formation and evolution of planetary systems, including minor bodies such as exocomets and exomoons,” the researchers say in the paper.
The researchers used data collected from TESS between October 19, 2018 and February 1, 2019 to search for exocomets. They observed three events during this 105-day period in which the light from the star dipped for up to two days at a time. These dips suggest that a comet with a long tail was passing between the disk and Earth, which is how they were able to identify the exocomets.
The existence of exocomets in the system was predicted 20 years ago by French astronomer Alain Lecavelier des Etangs, and now the prediction has finally been shown to be correct. The next step for the team is to perform more comprehensive modeling using the TESS data to search for more exocomets and other bodies.
The findings are to be published in the journal Astronomy & Astrophysics and are available to view on pre-publication archive arXiv.org.
The following points highlight the eight major biomes of the world. The biomes are: 1. Tundra 2. Northern Conifer Forest 3. Temperate Deciduous Forests 4. Tropical Rain Forest 5. Chapparal 6. Tropical Savannah 7. Grassland 8. Desert.
Biome # 1. Tundra:
The literal meaning of the word tundra is “north of the timberline.” The tundra extends above 60°N latitude. It is an almost treeless plain in the far northern parts of Asia, Europe and North America. A tundra consists of plains characterised by snow, ice and frozen soil most of the year. The permanent frozen soil of tundra is called permafrost.
Winters are very long on the tundra with little daylight. In contrast summers are short but there are many daylight hours. Precipitation is low, amounting to only 25 cm or less per year, because cold air can hold relatively little moisture.
The ground is soggy in the summer because moisture cannot soak into the permanently frozen ground. Ponds, small lakes and marshes are abundant due to the nearly flat terrain.
There are no upright trees on the tundra. Only trees such as dwarf willows and birches, which grow low to the ground, can escape the drying effect of the wind which upright trees would experience. This biome consists mainly of mosses, grasses, sedges, lichens and some shrubs. Seasonal thawing of the frozen soil occurs only up to a depth of a few centimetres, which permits the growth of shallow rooted plants.
Caribou, arctic hare and musk ox are important herbivores of the tundra biome. Some important carnivores that prey on the herbivores are the arctic fox, arctic wolf, bobcat and snowy owl. Polar bears live along coastal areas, and prey on seals.
Because of the severe winters, many of the animals are migratory and move from one region to another with the change in seasons. Many shorebirds and water fowls, such as ducks and geese, nest on the tundra during the summer but migrate south for the winter. The tundra make a very delicate ecosystem, and may be recovered from any disturbance very slowly.
Biome # 2. Northern Conifer Forest:
The northern coniferous forest or taiga is a 1300-1450 km wide band south of the tundra. This extends as an east-west band across North America, Europe and Asia. This area also has long, cold winters, but summer temperatures may reach 10-12°C, and the summer and the growing season are longer than in the tundra. Precipitation is higher than in the tundra, ranging from 10 to 35 cm annually.
The moisture is the combined result of summer rains and winter snows. Lakes, ponds and bogs are abundant. The duration of the growing period of plants is only about 150 days. Since the physical conditions are variable, the organisms are resistant to fluctuations of temperature.
The taiga makes really a northern forest of coniferous trees such as spruce, fir, pine, cedar and hemlock. In disturbed areas, deciduous trees such as birch, willow and poplar are abundant. In certain areas the trees are so dense that little light may reach the floor of the forest. Vines, maple and spring wild flowers are common. Mosses and ferns also grow in moist areas.
The common smaller mammals are herbivores, such as squirrels and snowshoe hares, and predatory martens. Important migratory herbivores include moose, elk, deer and caribou. Moose and caribou migrate to the taiga for winters and to the tundra for summers.
Important predators are the timber wolf, grizzly bear, black bear, bobcat and wolverine. Many insects are found during the warmer months. Migratory shore birds and waterfowls are abundant during summer months.
Biome # 3. Temperate Deciduous Forests:
The deciduous forests are found in the temperate regions of north central Europe, east Asia and the eastern United States, that is, south of the taiga in the Northern Hemisphere. Such forests occur in regions having hot summers, cold winter, rich soil and abundant rain. Annual rainfall is typically around 100 cm per year.
Common deciduous trees are the hardwoods such as beech, maple, oak, hickory and walnut. They are broad-leaved trees. The trees shed their leaves in the late fall so the biome has an entirely different appearance in the winter than in the summer.
The fallen leaves provide food for a large variety of consumer and decomposer populations, such as millipedes, snails and fungi living in or on the soil. The temperate deciduous forest produces flowers, fruits and seeds of many types which provide a variety of food for animals.
The common herbivores of this biome are deer, chipmunks, squirrels, rabbits and beavers. Tree-dwelling birds are abundant in number and diversity. Important predators are—black bears, bobcats, and foxes. Predatory birds are also found, such as hawks, owls and eagles. The coldblooded or ectothermic animals, such as snakes, lizards, frogs, and salamanders are also common.
The temperate deciduous forest makes a very complex biome. Many changes take place during the year, and a large variety of species inhabit the soil, trees and air.
Biome # 4. Tropical Rain Forest:
This biome is situated in the equatorial regions having an annual rainfall of more than 140 cm. However, the tropical rain forest makes an important biome across the earth as a whole. This biome is found in Central America, the Amazon Basin and Orinoco Basin of South America, Central Africa, India and Southeast Asia.
Tropical rain forests have high rainfall, high temperature all year, and a great variety of vegetation. Plant life is highly diverse, reaching up to 200 species of trees per hectare. The warm, humid climate supports broad-leaved evergreen plants showing peculiar stratification into an upper storey and two or three understoreys.
The tallest trees make an open canopy, but the understoreyed plants block most of the light from the jungle floor. The climbers and lianas reach the highest level of the trees in search of light.
An enormous variety of animals lives in the rain forest, such as insects, lizards, snakes, monkeys and colorful birds. The ant eaters, bats, large carnivorous animals, and a variety of fish in the rivers are quite common. About 70-80 per cent of the known insects are found in tropical rain forests. Such rich animal diversity is linked to plant-animal interaction for pollination and dispersal of fruits and seeds.
Biome # 5. Chapparal:
This biome is also known as Mediterranean scrub forest. It is marked by limited winter rain followed by drought in the rest of the year. The temperature is moderate under the influence of cool, moist air of the oceans. The biome extends along the Mediterranean, the Pacific coast of North America, Chile, South Africa and South Australia. This biome has broad-leaved evergreen vegetation. The vegetation is generally made up of fire-resistant resinous plants and drought-adapted animals. Bush fires are very common in this biome.
Biome # 6. Tropical Savannah:
The savannahs are warm-climate plains characterized by coarse grass and scattered trees on the margins of the tropics having seasonal rainfall. Primarily they are situated in South America, Africa and Australia. However, there is no savannah vegetation in India. The average total rainfall in such regions is 100 to 150 cm. There is alternation of wet and dry seasons.
Plants and animals are drought tolerant and do not have much diversity. The animal life of tropical savannah biome consists of hoofed herbivorous species, such as giraffe, zebra, elephant, rhinoceros and several kinds of antelope. Kangaroos are found in the savannahs of Australia.
Biome # 7. Grassland:
Some grasslands occur in temperate areas of the earth and some occur in tropical regions. Temperate grasslands usually possess deep, rich soil. They have hot summers cold winters and irregular rainfall. Often they are characterized by high winds. The main grasslands include the prairies of Canada and U.S.A., the pampas of South America, the steppes of Europe and Asia, and the veldts of Africa.
The dominant plant species comprise short and tall grasses. In tall-grass prairies in the United States, important grasses are tall bluestem, Indian grass and slough grass. Short-grass prairies generally have blue grama grass, mesquite grass and bluegrass. Many grasses have long, well-developed root systems which enable them to survive limited rainfall and the effects of fire.
The main animals of this biome are the prong-horned antelope, bison, wild horse, jack rabbit, ground squirrel and prairie dogs. Larks, the burrowing owl and badgers are also found. Important grassland predators include coyotes, foxes, hawks and snakes.
Biome # 8. Desert:
The desert biome is characterised by its very low rainfall, which is usually 25 cm per year or less. Most of this limited moisture comes as short, hard showers. Primarily the deserts of the world are located in the south-west U.S.A., Mexico, Chile, Peru, North Africa (Sahara desert), Asia (Tibet, Gobi, Thar) and central Western Australia. Deserts generally have hot days and cold nights, and they often have high winds.
The difference of temperature between day and night is due to the lack of water vapour in the air. Deserts are characterised by scanty flora and fauna. Desert organisms must meet some initial requirements if they are to survive. The plants must be able to obtain and conserve water.
In order to meet these requirements, many adaptations have been made by desert plants. Such adaptations are—reduced leaf surface area, which reduces evaporation from the plants; loss of leaves during long dry spells; small hairs on the leaf surfaces; and the ability to store large amounts of water.
The examples of important desert plants are—yuccas, acacias, euphorbias, cacti, many other succulents and hardy grasses. Many of the small plants are annuals.
Animals also must meet the requirements of heat, cold and limited water. Many desert animals are nocturnal in habit, and are active mainly at night. Many reptiles and small mammals burrow to get away from the intense heat of midday. The other common desert animals are the herbivorous kangaroo rat, ground squirrel, and jack rabbit.
The important predators are—coyotes, badgers, kit fox, eagles, hawks, falcons and owls. Ants, locusts, wasps, scorpions, spiders, insect-eating birds, such as swifts and swallows, seed-eating quails, doves and various cats are other common desert animals.
This resource features activities to help students better understand the importance of geography and the world in which they live.
The World Geography Quick Starts workbook features a review of general geography terms and map skills, as well as units focusing on the seven continents: Africa, Antarctica, Asia, Australia & Oceania, Europe, North America, and South America. Activities include matching, short answer, true/false, word games, and map activities. Each page features two to four quick starts that can be cut apart and used separately. The entire page may also be used as a whole-class or individual assignment.
The Quick Starts Series provides students in grades 4 through 8+ with quick review activities in science, math, language arts, and social studies. The activities provide students with a quick start for the day’s lesson and help students build and maintain a powerful domain-specific vocabulary. Each book is correlated to current state, national, and provincial standards.
Mark Twain Media Publishing Company specializes in providing engaging supplemental books and decorative resources to complement middle- and upper-grade classrooms. Designed by leading educators, the product line covers a range of subjects including mathematics, sciences, language arts, social studies, history, government, fine arts, and character.
We know that the avian flu is transmitted to humans by sick birds.
However, did you know that animals often carry microbes that will not harm them, but are dangerous for our health?
How do animals transmit microbes to humans?
- Through scratches and bites: even clean and healthy cats may carry the Bartonella henselae or Pasteurella multocida bacteria and transmit them to us. Moreover, the Capnocytophaga canimorsus bacterium often lives in the mouths of cats and dogs, and is dangerous for people with a weaker immune system1, such as small children, pregnant women, transplant patients, or patients undergoing chemotherapy.
- Through the air or the wind which can transport dust from animals, containing bacteria such as Coxiella burnetii or Chlamydia psittaci.
- Through the consumption of food stemming from contaminated animals, such as raw milk or eggs.
Some behaviors to avoid
Different activities will expose us to microbes responsible for zoonosis2. For example:
- Teasing or annoying a cat. It could bite or scratch to defend itself.
- Getting too close to sheep or goats, especially if the wind blows in your direction.
- Sleeping in a sheepfold or shearing sheep or goats without a protective mask.
- Surrounding yourself with many birds by feeding them for example.
- Standing really close to a bird or a reptile and kissing them.
In all our contacts with animals, whether they are in good or bad health, pets or exotic, we need to be reasonable and aware that there is always a slight risk that microbes dangerous for humans may be transmitted.
Arthropods3 such as fleas and ticks may also transport diseases from the animal to humans. The deadly plague4 in the Middle Ages was caused by a specific bacterium (Yersinia pestis), which is passed from rats to humans via fleas. The plague still exists nowadays. It is endemic6 in certain regions of Africa and outbreaks can sometimes occur. The island of Madagascar experienced an outbreak of plague in 2017 with over 200 casualties.
Immune system1 = Set of mechanisms (antibodies, white blood cells…) protecting us from infections.
Zoonosis2 = Infectious disease transmitted directly or indirectly to humans by animals.
Arthropod3 = Small animal with articulated legs and a rigid external skeleton, which forces it to grow through successive molts. Over one and a half million different species of arthropods exist; it is the most abundant animal group on Earth. It includes insects, arachnids (spiders, scorpions and mites), crustaceans, and Myriapoda (centipedes).
Plague4 = Disease caused by Yersinia pestis. Several rodent species may carry this bacterium, such as rats or squirrels. The fleas living on these rodents are infected by the bacterium and can transmit it to humans by biting them. Since the discovery of antibiotics5, the plague no longer presents as serious a threat as the deadly epidemics of the past.
Antibiotic5 = A drug that kills bacteria or at least stops their growth. Antibiotics act against bacteria, but do not help treat diseases caused by viruses or parasites.
Endemic6 = Describes a disease that is constantly present in a given geographic area. |
Missouri State Archives
Before Dred Scott:
Freedom Suits in Antebellum Missouri
Missouri became the twenty-fourth state on August 10, 1821. The petition for statehood asked that Missouri enter the Union as a slave state. This request created explosive debates in the United States Congress. Some people did not want to allow slavery in the American territory west of the Mississippi River. Finally, Congress passed an Act that allowed Missouri to enter as a slave state, and Maine as a free state, therefore keeping the balance of free and slave states equal in Congress. This act also required that slavery not be allowed in any additional lands north of the geographic line known as 36°30′ (Missouri's southern border). This was known as the Missouri Compromise.
To maintain control of the state's slave population, the Missouri legislature passed a "black code." The laws in this code governed the movement and activity of slaves in the state. The code also controlled interaction between slaves and white citizens or free blacks.
Under the laws of territorial Missouri, a person of any race illegally held as a slave could sue for freedom. This statute was first passed in 1807. The Missouri legislature continued to include it in the laws created after statehood. |
With Relative Pitch, you'll also COMMUNICATE your ideas to another musician who has Relative Pitch.
Example: Whenever you mention a Dominant Seventh Sharp Five chord, someone with Relative Pitch will know exactly what sound you are talking about.
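A minimal sketch of what a chord name like that pins down: the interval spelling of a dominant seventh sharp-five chord (root, major third, augmented fifth, minor seventh) and its equal-temperament pitches. The middle-C root and the interval list are standard music-theory values assumed for the example; none of this comes from the Burge course itself.

```python
# Illustrative only: map a chord name to concrete pitches (assumed C4 root).
C4 = 261.63  # Hz, middle C, assumed root for this example

# Dominant seventh sharp five: root, major 3rd, augmented 5th, minor 7th
dom7_sharp5_semitones = [0, 4, 8, 10]

for s in dom7_sharp5_semitones:
    freq = C4 * 2 ** (s / 12)  # twelve-tone equal temperament
    print(f"{s:2d} semitones above the root -> {freq:7.2f} Hz")
```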
This is your ability to SPEAK the language of music, similar to learning words in any language like English, Spanish, French or German.
And with Burge's 41 Relative Pitch Power Lessons, you'll soon speak like a pro.
Learn the Language of Music BY EAR
The language of music is PITCH.
Just as a child first begins to learn his or her native language BY EAR, every musician must learn the language of music pitch BY EAR.
Your EAR is the key to all your talents.
Why? Because music is a HEARING art.
The more FLUENTLY you master the language of music BY EAR the more your talents will unfold from within you.
Relative Pitch is your PERSONAL COMMAND of the musical language: your ability to understand what is happening INSIDE the music, including all the various chords, progressions, and pitch relationships that create the musical flow.
Relative Pitch gives you a skillful knowledge about music which naturally enlivens the artistic intelligence within you.
The better you can hear, the more easily you are able to:
- Play by ear and improvise
- Write what you hear
- Sing with perfect intonation
- Compose artfully
- Transpose freely
- Perform with confidence
- Tune with precision
- Memorize easily
- Deepen your sense of music appreciation
It's a fact: A simple Relative Pitch tune-up empowers all your talents to rise to their higher potential. |
Illustration caption: a light sail powered by a radio beam (red) generated on the surface of a planet. The leakage from such beams as they sweep across the sky would appear as superbright light flashes known as fast radio bursts, according to a new study.
Bizarre flashes of cosmic light may actually be generated by advanced alien civilizations, as a way to accelerate interstellar spacecraft to tremendous speeds, a new study suggests.
Astronomers have catalogued just 20 or so of these brief, superbright flashes, which are known as fast radio bursts (FRBs), since the first one was detected in 2007.
FRBs seem to be coming from galaxies billions of light-years away, but what's causing them remains a mystery.
One potential artificial origin, according to the new study, might be a gigantic radio transmitter built by intelligent aliens.
So study co-author Abraham (Avi) Loeb, of the Harvard-Smithsonian Center for Astrophysics, and lead author Manasvi Lingam, of Harvard University, investigated the feasibility of this possible explanation.
The duo calculated that a solar-powered transmitter could indeed beam FRB-like signals across the cosmos - but it would require a sunlight-collecting area twice the size of Earth to generate the necessary power.
And the huge amounts of energy involved wouldn't necessarily melt the structure, as long as it was water-cooled. So, Lingam and Loeb determined, such a gigantic transmitter is technologically feasible (though beyond humanity's current capabilities).
Why would aliens build such a structure...?
The most plausible explanation, according to the study team, is to blast interstellar spacecraft to incredible speeds.
These craft would be equipped with light sails, which harness the momentum imparted by photons, much as regular ships' sails harness the wind.
(Humanity has demonstrated light sails in space, and the technology is the backbone of Breakthrough Starshot, a project that aims to send tiny robotic probes to nearby star systems.)
Indeed, a transmitter capable of generating FRB-like signals could drive an interstellar spacecraft weighing 1 million tons or so, Lingam and Loeb calculated.
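A rough back-of-envelope sketch of the kind of physics involved: radiation pressure on a perfectly reflective sail gives a thrust of F = 2P/c. The solar-constant illumination, the reading of "twice the size of Earth" as twice Earth's cross-sectional area, and the assumption that all collected power goes into the beam are my own simplifications for illustration, not numbers taken from Lingam and Loeb's paper.

```python
# Illustrative back-of-envelope estimate (assumed values, not the authors' figures).
SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU, assumed collector illumination
C = 2.998e8               # speed of light, m/s
R_EARTH = 6.371e6         # m

# "Twice the size of Earth" read here as twice Earth's cross-sectional area (assumption)
collector_area = 2 * 3.14159 * R_EARTH ** 2
beam_power = SOLAR_CONSTANT * collector_area   # assume all collected power is beamed

sail_mass = 1.0e9         # kg, roughly the 1 million tons quoted in the article

force = 2 * beam_power / C          # N, perfectly reflective sail
acceleration = force / sail_mass    # m/s^2

print(f"beam power   ~ {beam_power:.2e} W")
print(f"thrust       ~ {force:.2e} N")
print(f"acceleration ~ {acceleration:.2f} m/s^2 (~{acceleration / 9.81:.2f} g)")
```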
Humanity would catch only fleeting glimpses of the "leakage" from these powerful beams (which would be trained on the spacecraft's sail at all times), because the light source would be moving constantly with respect to Earth, the researchers pointed out.
The duo took things a bit further.
Assuming that ET is responsible for most FRBs, and taking into account the estimated number of potentially habitable planets in the Milky Way (about 10 billion), Lingam and Loeb calculated an upper limit for the number of advanced alien civilizations in a galaxy like our own:
Lingam and Loeb acknowledge the speculative nature of the study. They aren't claiming that FRBs are indeed caused by aliens.
Rather, they're saying that this hypothesis is worthy of consideration.
The new study (Fast Radio Bursts from Extragalactic Light Sails) has been accepted for publication in The Astrophysical Journal Letters. |
Indo-Aryan migration
The separation of Indo-Aryans proper from Proto-Indo-Iranians has been dated to roughly 2000 BC–1800 BC. The Nuristani languages probably split in such early times, and are either classified as remote Indo-Aryan dialects, or as an independent branch of Indo-Iranian. It is believed Indo-Aryans reached Assyria in the west and the Punjab in the east before 1500 BC: the Indo-Aryan Mitanni rulers appear from 1500, and the Gandhara grave culture emerges from 1600. This suggests that Indo-Aryan tribes would have had to be present in the area of the BMAC (southern Turkmenistan / northern Afghanistan) from 1700 BC at the latest (incidentally corresponding with the decline of that culture). The spread of Indo-Aryan languages has been connected with the spread of the chariot in the first half of the second millennium BC.
Some scholars trace the Indo-Iranians (both Indo-Aryans and Iranians) back to the Andronovo-Sintashta-Petrovka culture (ca. 2200 BC–1600 BC). Other scholars like Brentjes (1981), Klejn (1974), Francfort (1989), Lyonnet (1993), Hiebert (1998), Bosch-Gimpera (1973) and Sarianidi (1993) have argued that the Andronovo culture cannot be associated with the Indo-Aryans of South Asia or with the Mitannis because the Andronovo culture took shape too late and because no actual traces of their culture (e.g. warrior burials or timber-frame materials of the Andronovo culture) have been found in South Asia or Mesopotamia (see Edwin Bryant 2001). The archaeologist J. P. Mallory (1998) found it "extraordinarily difficult to make a case for expansions from this northern region to northern South Asia" and remarked that the proposed migration routes "only gets the Indo-Iranian to Central Asia, but not as far as the seats of the Medes, Persians or Indo-Aryans" (Mallory 1998; Edwin Bryant 2001: 216). The best evidence, however, is linguistic, not archaeological (see e.g. Hans Hock in Bronkhorst & Deshpande 1999).
Other scholars see some relationship between the BMAC and the Indo-Aryans. But although horses were known to the Indo-Aryans, evidence for the presence of the horse in the form of horse bones is missing in the BMAC (e.g. Bryant 2001). Asko Parpola (1988) has argued that the Dasas were the "carriers of the Bronze Age culture of Greater Iran" living in the BMAC and that the forts with circular walls destroyed by the Vedic Aryans of the Rigveda were actually located in the BMAC. Parpola's hypothesis has been criticized by K.D. Sethna (1992) and others. Moreover, cultural links between the BMAC and the Indus Valley can also be explained by reciprocal cultural influences uniting the two cultures.
The Indo-Aryan migration is often compared and associated with the Indo-European migrations, the Indo-Iranian migrations and with other Eurasian nomads. Many scholars also believe that the Dravidian speakers migrated to South Asia from the north-west. Other migrations that are connected with South Asia include the migrations of Gandhari/Niya Prakrit, Parya and Dumaki speakers, the Indo-Scythians, the Indo-Greeks and the Islamic conquest of South Asia.
The Vedic Corpus provides no evidence for the so-called "Aryan Invasion" of India
Koenraad Elst
The dominant paradigm concerning the presence of the Indo-Aryan branch of the Indo-European language family is the so-called Aryan invasion theory, which claims that Indo-Aryan was brought into India by "Aryan" invaders from Central Asia at the end of the Harappan period (early 2nd millennium BC).
Though the question of Aryan origins was much disputed in the 19th century, the Aryan invasion theory has been so solidly dominant in the past century that attempts to prove it have been extremely rare in recent decades, until the debate flared up again in India after 1990. The main attempt to prove the Aryan invasion (presented in Bernard Sergent: Genèse de l'Inde, Paris 1997) uses the archaeological record, which, paradoxically, is invoked with equal confidence by the non-invasionist school (e.g. B.B. Lal: New Light on the Indus Civilization, Delhi 1997). Here we will consider the sparse attempts to discover references to the Aryan invasion in Vedic literature, and argue that these have not yielded any such finding.
A first category consists of old but still commonly repeated cases of circular reasoning, e.g. the assumption that the enemies encountered by the tribe with which the Vedic poet identifies are "aboriginals" (e.g. in Ralph Griffith's translation The Hymns of the Ŗgveda, 1889, still commonly used). In fact, there is not one passage where the Vedic authors describe such encounters in terms of "us invaders" vs. "them natives", even implicitly.
Among more recent attempts, motivated explicitly by the desire to counter the increasing skepticism regarding the Aryan invasion theory, the most precise endeavour to show up an explicit mention of the invasion turns out to be based on mistranslation. Michael Witzel ("Ŗgvedic History", in G. Erdosy, ed.: The Indo-Aryans of Ancient South Asia, Berlin 1995, p.321) tries to read a line from the "admittedly much later" Baudhayana Shrauta Sutra as attesting the Aryan invasion: "Prān ayuh pravavrāja, tasyaite kuru-panchalah kāshīvidehā ity, etad āyavam, pratyan amāvasus tasyaite gāndhārayas parshavo'rattā ity, etad āmāvasyam" (BSS 18.44:397.9). This is rendered by Witzel as: "Ayu went eastwards. His (people) are the Kuru-Panchāla and the Kāshī-Videha. This is the Ayava (migration). (His other people) stayed at home in the West. His people are the Gāndhārī, Parshu and Aratta. This is the Amāvasava (group)."
This passage consists of two halves in parallel, and it is unlikely that in such a construction, the subject of the second half would remain unexpressed, and that terms containing contrastive information (like "migration" as opposed to the alleged non-migration of the other group) would remain unexpressed, all left for future scholars to fill in. It is more likely that a non-contrastive term representing a subject indicated in both statements is left unexpressed in the second: that exactly is the case with the verb pravavrāja "he went", meaning "Ayu went" and "Amavasu went". Amavasu is the subject of the second statement, but Witzel spirits the subject away, leaving the statement subjectless, and turns it into a verb, "amā vasu", "stayed at home". In fact, the meaning of the sentence is really quite straightforward, and doesn't require supposing a lot of unexpressed subjects: "Ayu went east, his is the Yamuna-Ganga region", while "Amavasu went west, his is Afghanistan, Parshu and West Panjab". Though the then location of "Parshu" (Persia?) is hard to decide, it is definitely a western country, along with the two others named, western from the viewpoint of a people settled near the Saraswati river in what is now Haryana.
Far from attesting an eastward movement into India, this text actually speaks of a westward movement towards Central Asia, coupled with a symmetrical eastward movement from India's demographic centre around the Saraswati basin towards the Ganga basin. |
250 years ago the city consisted only of today's "Altstadt" and "Neustadt". From 1821 to 1848 the population of Bremen increased from 54,000 to 78,000, and more and more inhabitants settled on the outskirts. With the building of the train station (1847), the expansion of the harbours in the northwest of the city (1888) and the industrialisation that followed the fall of the customs barriers, the city area and the population increased enormously. By 1905, 261,000 people lived in Bremen. Unlike other cities, Bremen did not build tenement houses but developed the typical "Bremer Haus". By 1939 the city had 450,000 inhabitants. It was not until bombs destroyed the old houses in World War II that architects started to build apartment houses and skyscrapers (Osterholz-Tenever).
Additional information concerning Bremen: |
Metamorphic rock is the result of the transformation of a pre-existing rock type, the protolith, in a process called metamorphism, which means "change in form". The protolith is subjected to heat and pressure (temperatures greater than 150 to 200 °C and pressures of 1500 bars) causing profound physical and/or chemical change. The protolith may be sedimentary rock, igneous rock or another older metamorphic rock. Metamorphic rocks make up a large part of the Earth's crust and are classified by texture and by chemical and mineral assemblage (metamorphic facies). They may be formed simply by being deep beneath the Earth's surface, subjected to high temperatures and the great pressure of the rock layers above. They can be formed by tectonic processes such as continental collisions which cause horizontal pressure, friction and distortion. They are also formed when rock is heated up by the intrusion of hot molten rock called magma from the Earth's interior.
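As a rough illustration of why deep burial alone can produce the quoted conditions, the sketch below estimates the depth at which 150-200 °C is reached and the lithostatic pressure there. The geothermal gradient, surface temperature and crustal density are typical textbook values I am assuming, not figures from this article.

```python
# Illustrative estimate only (assumed average values, not from the article).
SURFACE_T = 15.0    # C, assumed mean surface temperature
GRADIENT = 25.0     # C per km, assumed average continental geothermal gradient
RHO_ROCK = 2700.0   # kg/m^3, assumed average crustal density
G = 9.81            # m/s^2

for target_t in (150.0, 200.0):
    depth_km = (target_t - SURFACE_T) / GRADIENT
    # lithostatic pressure of the overlying rock column, converted from Pa to bar
    pressure_bar = RHO_ROCK * G * depth_km * 1000 / 1e5
    print(f"{target_t:.0f} C reached near {depth_km:.1f} km depth, "
          f"overburden pressure ~{pressure_bar:.0f} bar")
```

With these assumptions the ~1500 bar figure in the text corresponds to burial of roughly five to six kilometres, which is consistent with the idea that metamorphism can occur simply through depth.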
The study of metamorphic rocks (now exposed at the Earth's surface following erosion and uplift) provides us with very valuable information about the temperatures and pressures that occur at great depths within the Earth's crust.
Some examples of metamorphic rocks are gneiss, slate, marble and schist.
Metamorphic minerals are those that form only at the high temperatures and pressures associated with the process of metamorphism. These minerals, known as index minerals, include sillimanite, kyanite, staurolite, andalusite, and some garnet.
Other minerals, such as olivines, pyroxenes, amphiboles, micas, feldspars, and quartz, may be found in metamorphic rocks, but are not necessarily the result of the process of metamorphism. These minerals formed during the crystallization of igneous rocks. They are stable at high temperatures and pressures and may remain chemically unchanged during the metamorphic process. However, all minerals are stable only within certain limits, and the presence of some minerals in metamorphic rocks indicates the approximate temperatures and pressures at which they were formed.
The change in the particle size of the rock during the process of metamorphism is called recrystallization. For instance, the small calcite crystals in the sedimentary rock limestone change into larger crystals in the metamorphic rock marble, or in metamorphosed sandstone, recrystallisation of the original quartz sand grains results in very compact quartzite, in which the often larger quartz crystals are interlocked. Both high temperatures and pressures contribute to recrystallization. High temperatures allow the atoms and ions in solid crystals to migrate, thus reorganizing the crystals, while high pressures cause solution of the crystals within the rock at their point of contact.
The layering within metamorphic rocks is called foliation (derived from the Latin word folia, meaning "leaves"), and it occurs when a strong compressive force is applied from one direction to a recrystallizing rock. This causes the platy or elongated crystals of minerals, such as mica and chlorite, to grow with their long axes perpendicular to the direction of the force. This results in a banded, or foliated, rock, with the bands showing the colors of the minerals that formed them.
Textures are separated into foliated and non-foliated categories. Foliated rock is a product of differential stress that deforms the rock in one plane, sometimes creating a plane of cleavage: for example, slate is a foliated metamorphic rock, originating from shale. Non-foliated rock does not have planar patterns of stress.
Rocks that were subjected to uniform pressure from all sides, or those which lack minerals with distinctive growth habits, will not be foliated. Slate is an example of a very fine-grained, foliated metamorphic rock, while phyllite is coarser, schist coarser still, and gneiss very coarse-grained. Marble is generally not foliated, which allows its use as a material for sculpture and architecture.
Another important mechanism of metamorphism is that of chemical reactions that occur between minerals without them melting. In the process atoms are exchanged between the minerals, and thus new minerals are formed. Many complex high-temperature reactions may take place, and each mineral assemblage produced provides us with a clue as to the temperatures and pressures at the time of metamorphism.
Metasomatism is the drastic change in the bulk chemical composition of a rock that often occurs during the processes of metamorphism. It is due to the introduction of chemicals from other surrounding rocks. Water may transport these chemicals rapidly over great distances. Because of the role played by water, metamorphic rocks generally contain many elements that were absent from the original rock, and lack some which were originally present. Still, the introduction of new chemicals is not necessary for recrystallization to occur.
Types of metamorphism
Contact metamorphism is the name given to the changes that take place when magma is injected into the surrounding solid rock (country rock). The changes that occur are greatest wherever the magma comes into contact with the rock because the temperatures are highest at this boundary and decrease with distance from it. Around the igneous rock that forms from the cooling magma is a metamorphosed zone called a contact metamorphism aureole. Aureoles may show all degrees of metamorphism from the contact area to unmetamorphosed (unchanged) country rock some distance away. The formation of important ore minerals may occur by the process of metasomatism at or near the contact zone.
When a rock is contact altered by an igneous intrusion it very frequently becomes more indurated, and more coarsely crystalline. Many altered rocks of this type were formerly called hornstones, and the term hornfels is often used by geologists to signify those fine grained, compact, non-foliated products of contact metamorphism. A shale may become a dark argillaceous hornfels, full of tiny plates of brownish biotite; a marl or impure limestone may change to a grey, yellow or greenish lime-silicate-hornfels or siliceous marble, tough and splintery, with abundant augite, garnet, wollastonite and other minerals in which calcite is an important component. A diabase or andesite may become a diabase hornfels or andesite hornfels with development of new hornblende and biotite and a partial recrystallization of the original feldspar. Chert or flint may become a finely crystalline quartz rock; sandstones lose their clastic structure and are converted into a mosaic of small close-fitting grains of quartz in a metamorphic rock called quartzite.
If the rock was originally banded or foliated (as, for example, a laminated sandstone or a foliated calc-schist) this character may not be obliterated, and a banded hornfels is the product; fossils even may have their shapes preserved, though entirely recrystallized, and in many contact-altered lavas the vesicles are still visible, though their contents have usually entered into new combinations to form minerals which were not originally present. The minute structures, however, disappear, often completely, if the thermal alteration is very profound; thus small grains of quartz in a shale are lost or blend with the surrounding particles of clay, and the fine ground-mass of lavas is entirely reconstructed.
By recrystallization in this manner peculiar rocks of very distinct types are often produced. Thus shales may pass into cordierite rocks, or may show large crystals of andalusite (and chiastolite), staurolite, garnet, kyanite and sillimanite, all derived from the aluminous content of the original shale. A considerable amount of mica (both muscovite and biotite) is often simultaneously formed, and the resulting product has a close resemblance to many kinds of schist. Limestones, if pure, are often turned into coarsely crystalline marbles; but if there was an admixture of clay or sand in the original rock such minerals as garnet, epidote, idocrase, wollastonite, will be present. Sandstones when greatly heated may change into coarse quartzites composed of large clear grains of quartz. These more intense stages of alteration are not so commonly seen in igneous rocks, because their minerals, being formed at high temperatures, are not so easily transformed or recrystallized.
In a few cases rocks are fused and in the dark glassy product minute crystals of spinel, sillimanite and cordierite may separate out. Shales are occasionally thus altered by basalt dikes, and feldspathic sandstones may be completely vitrified. Similar changes may be induced in shales by the burning of coal seams or even by an ordinary furnace.
There is also a tendency for metasomatism between the igneous magma and sedimentary country rock, whereby the chemicals in each are exchanged or introduced into the other. Granites may absorb fragments of shale or pieces of basalt. In that case hybrid rocks called skarn arise which have not the characters of normal igneous or sedimentary rocks. Sometimes an invading granite magma permeates the rocks around, filling their joints and planes of bedding, etc., with threads of quartz and feldspar. This is very exceptional but instances of it are known and it may take place on a large scale.
Regional metamorphism is the name given to changes in great masses of rock over a wide area. Rocks can be metamorphosed simply by being at great depths below the Earth's surface, subjected to high temperatures and the great pressure caused by the immense weight of the rock layers above. Much of the lower continental crust is metamorphic, except for recent igneous intrusions. Horizontal tectonic movements such as the collision of continents create orogenic belts, and cause high temperatures, pressures and deformation in the rocks along these belts. If the metamorphosed rocks are later uplifted and exposed by erosion, they may occur in long belts or other large areas at the surface. The process of metamorphism may have destroyed the original features that could have revealed the rock's previous history. Recrystallization of the rock will destroy the textures and fossils present in sedimentary rocks. Metasomatism will change the original composition.
Regional metamorphism tends to make the rock more indurated and at the same time to give it a foliated, schistose or gneissic texture, consisting of a planar arrangement of the minerals, so that platy or prismatic minerals like mica and hornblende have their longest axes arranged parallel to one another. For that reason many of these rocks split readily in one direction along mica-bearing zones (schists). In gneisses, minerals also tend to be segregated into bands; thus there are seams of quartz and of mica in a mica schist, very thin, but consisting essentially of one mineral. Along the mineral layers composed of soft or fissile minerals the rocks will split most readily, and the freshly split specimens will appear to be faced or coated with this mineral; for example, a piece of mica schist looked at facewise might be supposed to consist entirely of shining scales of mica. On the edge of the specimens, however, the white folia of granular quartz will be visible. In gneisses these alternating folia are sometimes thicker and less regular than in schists, but most importantly less micaceous; they may be lenticular, dying out rapidly. Gneisses also, as a rule, contain more feldspar than schists do, and they are tougher and less fissile. Contortion or crumbling of the foliation is by no means uncommon, and then the splitting faces are undulose or puckered. Schistosity and gneissic banding (the two main types of foliation) are formed by directed pressure at elevated temperature, and by interstitial movement, or internal flow arranging the mineral particles while they are crystallizing in that directed pressure field.
Rocks which were originally sedimentary and rocks which were undoubtedly igneous are converted into schists and gneisses, and if originally of similar composition they may be very difficult to distinguish from one another if the metamorphism has been great. A quartz-porphyry, for example, and a fine feldspathic sandstone, may both be converted into a grey or pink mica-schist.
Metamorphic rock textures
The five basic metamorphic textures with typical rock types are:
- Slaty: slate and phyllite; the foliation is called 'slaty cleavage'
- Schistose: schist; the foliation is called 'schistosity'
- Gneissose: gneiss; the foliation is called 'gneissosity'
- Granoblastic: granulite, some marbles and quartzite
- Hornfelsic: hornfels and skarn |
It is a cone-shaped structure built of boulders, roughly 230 feet in diameter, 30 feet high and weighing an estimated 60,000 tons, lying 40 feet underwater in the Sea of Galilee. And archaeologists have no idea what it is. Based on the build-up of sediment, it is between 2,000 and 12,000 years old, which is too wide a range to help identify it. It's not even clear if the structure was built on land when the sea levels were lower, or if it was constructed underwater. The structure was located in 2003 by sonar scan. Now, ten years later, researchers from the Israel Antiquities Authority have mounted an expedition to attempt to learn more about the unexpected mound of boulders, which they speculate could have been a burial site, a place of worship or even a fish nursery.
Archaeologists said the only way they can properly assess the structure is through an underwater excavation, a painstakingly slow process that can cost hundreds of thousands of dollars. And if an excavation were to take place, archaeologists said they believed it would be the first in the Sea of Galilee, an ancient lake that boasts historical remnants spanning thousands of years and is the setting of many Bible scenes….
Much of the researchers’ limited knowledge about this structure comes from the sonar scan a decade ago. Initial dives shortly after that revealed a few details. In an article in the International Journal of Nautical Archaeology published earlier this year, Nadel and fellow researchers disclosed it was asymmetrical, made of basalt boulders and that “fish teem around the structure and between its blocks.” |
Astronomers with NASA’s Kepler Mission find ‘puzzling pair of planets’
Two planets with very different densities and compositions are locked in surprisingly close orbits around their host star, according to astronomers working with NASA’s Kepler Mission.
One planet is a rocky super-Earth about 1.5 times the size of our planet and 4.5 times the mass. The other is a Neptune-like gaseous planet 3.7 times the size of Earth and eight times the mass. The planets approach each other 30 times closer than any pair of planets in our solar system.
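As a quick illustration of how different those bulk compositions are, the sketch below converts the quoted size and mass ratios into approximate mean densities (density scales as mass over radius cubed). The Earth-density reference value is a standard figure I am assuming; the results are an illustrative consistency check, not the values published by the discovery team.

```python
# Illustrative density estimate from the radius and mass ratios quoted above.
EARTH_DENSITY = 5.51  # g/cm^3, Earth's mean density (assumed reference value)

planets = {
    "Kepler-36b (rocky super-Earth)": (1.5, 4.5),  # (radius, mass) in Earth units
    "Kepler-36c (Neptune-like)":      (3.7, 8.0),
}

for name, (radius, mass) in planets.items():
    density = EARTH_DENSITY * mass / radius ** 3  # bulk density ~ M / R^3
    print(f"{name}: ~{density:.1f} g/cm^3")
```

Under these assumptions the rocky planet works out to roughly 7 g/cm^3 and the gaseous one to below 1 g/cm^3, which is why the pair is described as having very different densities despite their close orbits.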
The discovery of the Kepler-36 planetary system about 1,200 light years from Earth is an example of planets breaking with the planetary pattern of our solar system: rocky planets orbiting close to the sun and gas giants orbiting farther away.
The discovery is reported June 21 in the online Science Express. Lead authors of the study are Joshua Carter, a Hubble Fellow at the Harvard-Smithsonian Center for Astrophysics, and Eric Agol, an associate professor of astronomy at the University of Washington.
“The planetary system reported in this paper is another example of an ‘extreme’ planetary system that will serve as a stimulus to theories of planet migration and orbital rearrangement,” researchers wrote in the paper.
Steve Kawaler, an Iowa State University professor of physics and astronomy, was part of the research team that provided information about the properties of the planets’ host star. He and other researchers measured changes in the star’s brightness to precisely identify the size, mass and age of the host star.
Kawaler explained the importance of the discovery:
“Small, rocky planets should form in the hot part of the solar system, close to their host star – like Mercury, Venus and Earth in our Solar System. Bigger, less dense planets – Jupiter, Uranus – can only form farther away from their host, where it is cool enough for volatile material like water ice, and methane ice to collect. In some cases, these large planets can migrate close in after they form, during the last stages of planet formation, but in so doing they should eject or destroy the low-mass inner planets.
“Here, we have a pair of planets in nearby orbits but with very different densities. How they both got there and survived is a mystery.”
The discovery was made possible by NASA’s Kepler Mission, a spacecraft launched in 2009 that’s carrying a photometer to measure changes in star brightness. Its primary job is to use tiny variations in the brightness of the stars within its view to find earth-like planets that might be able to support life.
The Kepler Asteroseismic Investigation is also using data from that photometer to study star oscillations, or changes in brightness, that offer clues to a star’s interior structure. The investigation is led by a four-member steering committee: Kawaler, Chair Ron Gilliland of the Space Telescope Science Institute based in Baltimore, Jorgen Christensen-Dalsgaard and Hans Kjeldsen, both of Aarhus University in Aarhus, Denmark.
Kawaler said the Kepler spacecraft was essential to discovering what the researchers called in their paper “this puzzling pair of planets.”
“The seismic signal is very small, and only Kepler has the sensitivity and persistence to reveal it,” Kawaler said. “Also, the transit signal from the planets crossing in front of the star is very small, and only visible with Kepler’s level of sensitivity.”
(Photo credit: Image courtesy of Harvard-Smithsonian Center for Astrophysics/David Aguilar.) |
The Galapagos is home to over 9,000 species. These are all recorded in the datazone produced by the Charles Darwin Foundation. The list is always growing: despite scientists studying the wildlife on and around Galapagos for three centuries, new species are still being discovered every year.
Introduction to the wildlife of Galapagos
The animal and plant species of Galapagos fit into three categories: native, introduced (often by humans) or endemic, meaning that they cannot be found anywhere else in the world. Galapagos is famous for its high number of endemic species such as the Galapagos giant tortoise, marine iguana, daisy trees and the Galapagos penguin. Often, introduced species can present a major threat to native and endemic species.
In 2009, the Galapagos pink iguana (Conolophus rosada) was officially described as a separate species of Galapagos land iguana. There are fewer than 100 individuals left, meaning that it is a critically endangered species.
In 2012, a new species of deep-water catshark was discovered and a new species of gecko was also found on Rabida island, having only previously been known through 5,000 year old fossils.
The great variety of animal and plant life in Galapagos can be attributed to the wide range of habitats on and around the Islands. Differing habitats across the Archipelago mean that many species have adapted to suit the unique environmental conditions of each island. These continue to change over the life of an island through the process of succession. Discover more about the different habitat zones of Galapagos.
Threat of extinction
On Galapagos, 23 species face extinction or have disappeared already. Currently, the main cause of extinction is human activity which has changed the many ecosystems and environments of Galapagos. For example, land has been cleared for farming so that human food demands can be met.
Threat of extinction is measured by the Red List. Curated by the International Union for Conservation of Nature (IUCN), the Red List gives each species a rating from ‘Least concern’ to ‘Extinct’, according to the level of risk.
Previous: Wildlife of Galapagos – Colonisation
Next: Wildlife of Galapagos – Classification and Keys |
Black Holes and Star Formation: A Herschel Perspective
News Release • February 13, 2013
The effects of supermassive black holes on their host galaxies pose a tricky puzzle: are black holes able to influence, and possibly even suppress, star-formation activity on galactic scales? Astronomers have been searching for a signature of such a feedback effect, and have been spurred on recently by the large surveys of distant galaxies compiled with the Herschel Space Observatory, a European Space Agency mission with important NASA contributions. A variety of results has emerged from the first joint analyses of these data and other observations, performed either in X-rays or in radio waves, but the only certainty so far is that it remains a vexed question.
Surveys of the distant cosmos reveal that in the early days of the Universe a substantial fraction of galaxies were quite different from those that we observe at present. In particular, their star formation activity was extremely fierce, as these galaxies were producing hundreds or even thousands of stars every year - a very intense pace, compared to the one or two stars produced each year in typical galaxies in the local Universe.
These prolific and distant stellar factories contain large amounts of interstellar gas and dust from which stars are produced. Heated by starlight, interstellar dust shines brightly at the far-infrared and sub-millimetre wavelengths probed by Herschel, making it an extraordinary tool to study the star formation activity of galaxies in the early Universe.
"Herschel is revealing more and more about the history of star formation in the life of the Universe," comments Göran Pilbratt, Herschel Project Scientist at ESA. "These data are helping astronomers to investigate the dramatic drop in star formation rate between early and present-day galaxies."
Numerical simulations suggest that star formation in early galaxies might be suppressed by the activity of the supermassive black holes at their centres. In some galaxies, the central black hole accretes matter at extraordinarily high rates, giving rise to very bright emission across the electromagnetic spectrum; these galaxies are said to host an active galactic nucleus (AGN). The accretion process is accompanied by the outflow of material and, if it is powerful enough, this may eventually drain the galaxy's reservoir of gas, halting its star-forming activity.
Since active galactic nuclei are only observed in a small fraction of galaxies, astronomers believe that they correspond to a relatively short-lived phase in a galaxy's life. However, surveys of the distant cosmos reveal that galaxies hosting an AGN were more common at early times than at present, and that galaxies hosting the most powerful AGN are usually found in the early Universe. For this reason, astronomers have long been searching for a signature of AGN-induced feedback on star formation by observing distant galaxies at several different wavelengths.
The advent of Herschel has sparked a number of new studies in this field. Several teams of astronomers are combining Herschel's large surveys of star-forming galaxies with observations performed at other wavelengths to trace the AGN activity. The key question for all of them: is the activity of supermassive black holes correlated to star formation in their host galaxies and, if so, to what degree?
One of the earliest results to emerge showed evidence for powerful AGN having halted their host galaxy's star formation. In this study, led by Mathew Page from the Mullard Space Science Laboratory, UK, the astronomers combined data from Herschel and NASA's Chandra X-ray Observatory to study the relation between star formation rate and AGN power - the latter estimated by means of the X-ray data.
"Our study shows a double trend: for moderately powerful AGN, the star formation rate of their host galaxies increases with the AGN power. However, this seems to no longer hold for the most powerful AGN in our sample, which appear to be hosted in galaxies with a somehow reduced star-forming activity," says Page.
This surprising result was based on observations of several dozens of galaxies in one field of the sky. Would this result hold for larger samples of galaxies and for different regions of the sky? A recent study, led by Christopher Harrison, a PhD student at the University of Durham, UK, and based on observations of three different fields in the sky from Herschel and Chandra, shows that it doesn't.
"We looked at several hundreds of AGN-hosting galaxies which are bright in X-rays, and stacked them to measure how the average star formation rate varies as a function of the AGN power," explains Harrison, "but we found different results while analysing different patches of the sky."
When they looked at the same field studied by Page and collaborators, Harrison and his colleagues also found that the most powerful AGN have lower star formation rates; however, the same result did not emerge from the analysis of the other fields they observed, which include a much larger field containing over ten times more AGN.
"Our study shows that the average star formation rate of galaxies that host very powerful AGN is not different from that of galaxies hosting less powerful AGN. In fact, the star formation rates that we measure are even consistent with those of comparable galaxies that do not display any ongoing AGN activity," says Harrison. "This indicates that no clear signature of AGN suppressing star formation in their host galaxies can be extracted from the data with this type of analysis," he adds.
Whilst both studies of X-ray selected AGN are based on average estimates of the star formation rate and AGN power - computed by stacking several galaxies together - Page and his colleagues also looked at the sources individually.
"By looking at the individual sources, we verified that the most powerful AGN in our sample are indeed not detected in the Herschel data," says Page. "This suggests that the star formation activity of their host galaxies is weak."
Why such an effect would appear only on certain patches of the sky and not others is currently being debated. One of the possible explanations is that AGN activity did contribute to suppress star formation in galaxies but that it does not show up in most of the data because it was a short-lived phenomenon.
"If the suppression of star formation took place via a very brief but effective AGN phase, it would be extremely difficult to find a signature of such an effect in a large survey of galaxies, because it may be washed out when looking at average properties," says Harrison.
"To find out more, we need to better understand the properties of individual objects in the surveys," says David Alexander, one of Harrison's supervisors at the University of Durham, UK, "So we plan to study samples of AGN and their host galaxies and to analyse their emission at several different wavelengths."
Another recent study, led by Peter Barthel of the University of Groningen, The Netherlands, is following a similar approach, as the astronomers have been using Herschel to investigate star formation in individual galaxies that host very powerful AGN. They chose radio data rather than X-rays for their sample, to concentrate on the most powerful AGN.
"By choosing the AGN on the basis of their radio emission, we can single out some truly extreme sources that are more powerful than those detected in X-ray surveys. The radio emission is generated outside of the AGN's host galaxy - in the relativistic jets and in the gigantic radio lobes. As such, it is a reliable indicator of the AGN power as opposed to X-ray emission, which is prone to absorption by intergalactic material," explains Barthel.
"In this study, we looked at three of the most exceptional AGN known, which lie in very massive galaxies that host supermassive black holes accreting mass at dramatically high rates. The Herschel data revealed highly prolific star formation activity in the host galaxies, as the galaxies appear to produce five hundred to a thousand stars per year," he adds.
The positive correlation between star formation rate and AGN power found by Barthel and his colleagues suggests that these mighty black holes have not quenched the star formation in their host galaxies.
"We are now extending this study to several dozen powerful, radio-bright AGN," says Barthel, "and a preliminary analysis shows that the majority of them are indeed hosted in galaxies that are forming stars very vigorously."
The next step for the various teams of astronomers involved in these investigations is to observe large numbers of individual sources in greater detail, possibly trying to bridge the gap between the exceptionally massive galaxies hosting very powerful AGN selected using radio data and the less extreme ones selected in X-rays.
"Herschel has triggered a great number of studies that are addressing the intricacies of AGN feedback on star formation from several different angles. We are very curious to see what the data eventually will tell us," concludes Pilbratt.
Herschel is a European Space Agency cornerstone mission, with science instruments provided by consortia of European institutes and with important participation by NASA. NASA's Herschel Project Office is based at NASA's Jet Propulsion Laboratory, Pasadena, Calif. JPL contributed mission-enabling technology for two of Herschel's three science instruments. The NASA Herschel Science Center, part of the Infrared Processing and Analysis Center at the California Institute of Technology in Pasadena, supports the United States astronomical community. Caltech manages JPL for NASA. |
What Is Colour Gamut?
A colour gamut is a subset of colours, such as those in a specific colour space.
The range of colours printers, cameras, scanners, monitors etc. can reproduce varies, so a colour gamut is used to make these differences clear and it also shows what colours these devices have in common.
sRGB, Adobe RGB and NTSC all have their own colour gamut which is often shown on what's called an xy chromaticity diagram. This diagram was established by the International Commission on Illumination (CIE) and the colours of the visible range are represented using numerical figures and graphed as color coordinates.
In the diagram here, the large shape surrounded by a dotted line represents the range of colours visible to the human eye. The colour gamuts defined by each standard (sRGB, Adobe RGB and NTSC) are shown as triangles on the diagram. These triangles show the peak RGB coordinates connected by straight lines.
The larger the area of the triangle, the more colours can be displayed. For LCD monitors, this means that one which is compatible with a colour gamut that has a larger triangle will be able to produce a wider range of colours on screen.
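A small sketch of the "bigger triangle means wider gamut" point: compute each standard's triangle area on the xy chromaticity diagram with the shoelace formula. The primary coordinates below are the commonly published CIE 1931 xy values for each standard; treat them as assumed reference numbers rather than figures taken from this article.

```python
# Illustrative gamut-area comparison (assumed published xy primaries).
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Shoelace formula for the area of a triangle from its vertices
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

gamuts = {
    "sRGB":      [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "Adobe RGB": [(0.640, 0.330), (0.210, 0.710), (0.150, 0.060)],
    "NTSC":      [(0.670, 0.330), (0.210, 0.710), (0.140, 0.080)],
}

srgb_area = triangle_area(*gamuts["sRGB"])
for name, primaries in gamuts.items():
    area = triangle_area(*primaries)
    print(f"{name:9s}: area = {area:.4f}  ({area / srgb_area:.2f} x sRGB)")
```

With these coordinates, the Adobe RGB and NTSC triangles come out roughly 1.3 to 1.4 times the area of the sRGB triangle, which matches the usual statement that sRGB covers only about 70 percent of the NTSC gamut.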
The colour gamut of an LCD monitor's hardware can be indicated using similar triangles. An LCD monitor can't reproduce colours outside its colour gamut.
Calibration And Colour Gamut
To make full use of an LCD monitor with a wide colour gamut and to display colours as the user intended, it's important to maintain a colour calibration system. For more tips on colour calibration, take a look at EIZO ColorZone's technique section. |
Contrary to popular belief, printed circuit boards can actually be assembled by anyone, even those with low engineering and technical skills. PCBs can be customized and assembled depending on the purpose it is intended for.
Printed circuit boards are generally used as main components of appliances, gadgets and other technological innovations that function using electrical circuits. Printed circuit boards are made from non-conductive materials such as fiberglass and plastic to create the thin and flexible base, while the electrical components and links are usually made from copper. The PCB connects all the parts of an appliance or gadget to make it function as a whole. It provides a path for the electricity to flow from one part to another.
PCBs can be made and customized as long as a circuit design is ready and the materials are available. To start assembling printed circuit boards, the first step is to choose what method to use. This is usually based on the available materials and one’s technical skills and knowledge about PCBs, engineering and electricity.
Some of the popular methods of making PCBs are the UV etching method, acid etching method, routing method and laser etching method. The acid etching method requires extreme safety measures and advanced technical skill in handling equipment and materials. Nonetheless, it is preferred for making simple circuit boards.
The UV etching method requires expensive equipment for assembling the PCB. But it uses very simple steps and requires fewer safety measures and less technical skill. Aside from that, the output is of good quality.
The routing method uses certain machines in making the circuit boards. But it is preferred in producing large numbers of PCBs. Like the routing method, the laser etching method is also used by PCB manufacturing companies. Instead of a machine, laser is used to etch the circuit boards.
After choosing the assembly method, the next step is designing the circuit layout of the PCB. This step is made easier by using special software and programs that can transform circuit diagrams into PCB layouts.
Then, gather all the needed materials and equipment for assembling the PCB. Next, transfer the PCB layout onto the copper-coated board, which will serve as the circuit board's base. This step, however, is only applicable to the UV etching method and acid etching method.
After printing the PCB layout, start etching the details on the board. Etching removes unnecessary copper parts and forms the holes and parts that are needed in linking wires together.
Using a drilling machine, drill the mount points of the PCB. In mass production, a special drilling machine is used. However, in making simple PCBs, a drilling tool found at home can be used.
Once the drills are made, the next step is to start mounting and soldering the components and other parts of the PCB. The last step is to check if the PCB works.
When assembling PCBs at home, it is important to use good quality materials and equipment. As much as possible, take the time to practice to improve at assembling PCBs. Learn different assembly techniques and use good quality equipment in order to produce good quality circuit boards.
However, if one wants to save time and effort, circuit boards can be bought or ordered in bulk from PCB making companies. Several companies specialize in manufacturing PCBs according to the specifications of the customers. Company-ordered printed circuit boards are of higher quality compared to homemade ones since they are made with high quality materials and state of the art equipment. Aside from that, commercially sold circuit boards are also affordable and cost effective. |
They share many of the same qualities as old growth forests, but the only way to really appreciate these magnificent places is through the lens of a diving mask. Sometimes encompassing several miles of area with towering vegetation, thick canopies, and abundant wildlife, these underwater forests are unfamiliar destinations for the average weekend hiker. In the May issue of Ecological Monographs, researchers from the Scripps Institution of Oceanography at the University of California San Diego (UCSD) investigate the demography of one of earth's largest underwater kelp forests.
Within the Point Loma kelp forest community off the coast of San Diego, researchers have been conducting long term ecological kelp studies over three decades. The goal of their research is to evaluate the roles of large-scale, low frequency oceanographic processes on the demography patterns of the area's most conspicuous species of kelp. These processes range from seasonal climate variability to episodic nutrient-rich La Ninas and warm water, nutrient-poor El Ninos.
"As expected, we found considerable differences in the habitat adaptations of the specific kelps over large temporal and spatial scales," says marine ecologist Paul Dayton from UCSD. "Standard experiments of the type that ecologists often do at small scales give different results under different oceanographic climate conditions."
By repeating experiments over an extended period of time and in different areas, researchers were able to observe certain changes within the community that occur from episodic shifts in nutrients and temperature. For instance, during the nine-year study, Macrocystis species were not affected by competitive effects from other species of kelp. On the other hand, Pterygophora californica, an important understory species, exhibited reduced growth and reproduction by the light-limited conditions and competition with Macrocystis during La Nina periods when Macrocystis thrived. When El Nino conditions led to poor Macrocystis growth, the understory kelps did much better.
"By doing small scale experiments over large scales, researchers can gain a much more realistic understanding of oceanic ecosystems," says Dayton.
Although their research found that small-scale events in coastal zones driven by local processes (e.g. competition, disturbance and dispersal) were important, Dayton et al have concluded that the most lasting effects on the kelp communities were the result of very large-scale, low-frequency events, such as the El Nino and especially the La Nina phenomena.
"Statistics analyzing small-scale experiments can give the illusion of power, but our study shows that they might lack generality because very different patterns appear over larger scales," concludes Dayton.
Research was funded by the National Science Foundation, Pew Charitable Trust, California Sea Grant College Program, UCSD, and the City of San Diego.
Ecological Monographs is a journal published four times a year by the Ecological Society of America (ESA). Copies of the above articles are available free of charge to the press through the Society's Public Affairs Office. Members of the press may also obtain copies of ESA's entire family of publications, which includes Ecology, Ecological Applications, Ecological Monographs, and Conservation Ecology. Others interested in copies of articles should contact the Reprint Department at the address in the masthead.
Founded in 1915, the Ecological Society of America (ESA) is a scientific, non-profit, organization with over 7000 members. Through ESA reports, journals, membership research, and expert testimony to Congress, ESA seeks to promote the responsible application of ecological data and principles to the solution of environmental problems. For more information about the Society and its activities, access ESA's web site at: http://esa.sdsc.edu.
Materials provided by Ecological Society Of America. |
Silicosis is a disease that is caused by small particles of silica (glass) getting trapped in the lungs. When people have silicosis, the changes in their body often include cyanosis (when the skin turns a bluish color), a fever (when the body gets hotter), or not being able to breathe properly. Sometimes doctors do not realize that someone has silicosis, and think that they have other illnesses like pneumonia, tuberculosis or fluid in the lungs.
It was first noticed in 1705 by Bernardino Ramazzini (an Italian doctor). He saw something that looked like sand in the lungs of stonecutters. The name silicosis is from Visconti in 1870. The name comes from the Latin silex which means flint.
Other websites
- "Preventing Silicosis" (not simple English) |
cohesion, in physics, the intermolecular attractive force acting between two adjacent portions of a substance, particularly of a solid or liquid. It is this force that holds a piece of matter together. Intermolecular forces act also between two dissimilar substances in contact, a phenomenon called adhesion. These forces originate principally because of coulomb (electrical) forces. When two molecules are close together, they are repelled; when farther apart, they are attracted; and when they are at an intermediate distance, their potential energy is at a minimum, requiring the expenditure of work to either approximate or separate them. Thus, work is required to pull apart two objects in intimate contact, whether they be of the same or different material.
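The behaviour described here, strong repulsion at very short range, weak attraction at longer range, and a minimum of potential energy at an intermediate separation, is commonly modelled with the Lennard-Jones potential. That model is my choice of illustration; the text above does not name it, and the sigma and epsilon values below are arbitrary units.

```python
# Minimal sketch of the Lennard-Jones intermolecular potential (illustrative units).
SIGMA = 1.0    # separation at which the potential crosses zero (arbitrary units)
EPSILON = 1.0  # depth of the potential well (arbitrary units)

def lennard_jones(r):
    # Repulsive (1/r)^12 term dominates at short range, attractive (1/r)^6 at long range
    return 4 * EPSILON * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)

r_min = 2 ** (1 / 6) * SIGMA  # separation of minimum energy, about 1.122 sigma
for r in (0.95, 1.0, r_min, 1.5, 2.5):
    print(f"r = {r:5.3f} sigma -> V = {lennard_jones(r):+8.3f} epsilon")
```

The printout shows positive (repulsive) energy below about one sigma, a minimum of minus epsilon near 1.12 sigma, and a weak attraction that fades at larger separations, which is the pattern the paragraph above describes in words.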
The attractive forces of cohesion and adhesion act over a short range and vary in magnitude, depending on the substances concerned. If a piece of glass is submerged in water and then withdrawn, it will be wet—i.e., water will cling to it, showing that the force of adhesion between water and glass molecules is greater than the force of cohesion between water molecules. |
Earth's mean orbital speed is the average speed at which the Earth revolves around the sun. This is defined in two different ways, based on the sidereal year and the tropical year.
The sidereal year is the time it takes for the Earth to revolve once around the sun with respect to the distant stars. This is approximately 365.2564 mean solar days, or 3.155815 x 10^7 seconds (s). The tropical year is the time it takes for the Earth to revolve once around the sun, as measured between two consecutive March equinoxes. This is about 365.2422 mean solar days, or 3.155693 x 10^7 s.
The circumference of the Earth's orbit is approximately 2 pi (6.283185) times the mean radius of its orbit. This radius is also known as the astronomical unit (AU), and is about 1.4959787 x 10^11 meters (m). Therefore, the circumference is about 9.399511 x 10^11 m. The Earth's mean orbital speed, in meters per second (m/s), is obtained by dividing this number by the length of the year in seconds. This can result in either of two figures.
Let vs be the Earth's mean orbital speed as defined based on the sidereal year. This speed is:
vs = (9.399511 x 10^11) / (3.155815 x 10^7)
= 2.978473 x 10^4 m/s
Let vt be the Earth's mean orbital speed as defined based on the tropical year. This is:
vt = (9.399511 x 10^11) / (3.155693 x 10^7)
= 2.978589 x 10^4 m/s
A rough, general figure for the Earth's mean orbital speed is 30 kilometers per second (km/s), or 18½ miles per second (mi/s).
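As a quick sanity check, the same arithmetic can be reproduced in a few lines of code. The sketch below (in Ruby, the language used for code examples in this collection) simply repeats the divisions above; the constant names are ours and the values are the ones quoted in this article.

```ruby
AU_METERS     = 1.4959787e11   # mean orbital radius (astronomical unit), in meters
SIDEREAL_YEAR = 3.155815e7     # sidereal year, in seconds
TROPICAL_YEAR = 3.155693e7     # tropical year, in seconds

circumference = 2 * Math::PI * AU_METERS       # ~9.399511e11 m

v_sidereal = circumference / SIDEREAL_YEAR     # ~2.978473e4 m/s
v_tropical = circumference / TROPICAL_YEAR     # ~2.978589e4 m/s

puts format("v_s = %.6e m/s", v_sidereal)
puts format("v_t = %.6e m/s", v_tropical)
puts format("rough figure: %.1f km/s", v_sidereal / 1000.0)   # ~29.8 km/s
```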
THEMATIC VOLCANO PHOTO GALLERY:
- (volcanic) dikes -
Dikes can be thought of as the veins of a volcano: the pathways of rising magma. A dike is a flat, sheet-like magma body, usually more or less vertical, that cuts discordantly through older rocks or sediments.
Most dikes can be described as fractures into which magma intrudes or from which magma might erupt. The fracture can be caused by the intrusion of pressurized magma, or, vice versa, the rise of magma can be caused by and exploit existing or tectonically forming fractures. The point where a dike reaches the surface and erupts lava can be called a vent.
The interior of a typical large volcanic edifice is crossed by hundreds of dikes. Very often, dikes occur as swarms concentrated within zones of structural weakness within a volcanic edifice. This is nicely illustrated at the caldera cliffs of NE Santorini, where dikes cutting through the interior of the ancient stratovolcanoes (ca. 500-300 ka) of northern Santorini (Thera) are exposed (photos below). These dikes follow a structural trend of volcanism on Santorini (read more about Santorini's geology on the Santorini pages).
Right: dikes of NE Thera (Santorini).
Samuel Taylor Coleridge wrote long ago, “Water, water, everywhere, nor any drop to drink.” Although he was living in England at the time, Coleridge could have been describing the lives of people currently living in the South Pacific. According to recent reports, some islands have less than one week’s worth of bottled water left. Considering that small islands like Tuvalu, Tokelau and Samoa sit in the world’s largest ocean, the irony of Coleridge’s words rings a bit desperately. What caused this crisis — and what’s the solution?
As for the cause, the short answer is that the global weather system is experiencing an extreme year. Right now, the Earth is in what climatologists call the “La Niña” stage of the El Niño Southern Oscillation, or ENSO. The ENSO has three different stages: La Niña, El Niño and neutral. The ENSO is a natural global climate cycle that has been operating for thousands of years — perhaps much longer. Each stage of the cycle alters weather patterns around the world.
A La Niña stage creates dry conditions in the South Pacific and eastern Africa, where a massive famine is occurring in the countries of Ethiopia, Somalia, Djibouti and Eritrea. This year’s La Niña is very powerful. It’s caused a severe drought in the southern U.S. — coal-fired power plants in Texas are close to shutting down because there hasn’t been enough water in the local rivers and lakes to cool their turbines. In South America, the Brazilian Amazon has seen severe forest fires because of the drought.
But a La Niña stage isn't just about dry conditions. I live in Oregon, in the northwestern corner of the U.S. We've had a record wet year here, and the snowfall last winter was so high that a number of major roads were shut down for weeks because they couldn't be plowed. Lake Mead near Las Vegas, Nevada, has more water in it than it's had for more than a decade because of all of the rain to the north. El Niño stages of the ENSO cycle tend to reverse many of these patterns — more rain in Africa, Texas and the South Pacific, less rain in places like Oregon. Most years, though, are "neutral," without any influence from the ENSO cycle.
But citing ENSO as the reason why people in Tuvalu don’t have water isn’t really sufficient. If this drought is just part of a natural cycle, then why is this year so extreme?
In fact, it’s probably a result of climate change. Most people think of warmer air temperatures when they hear about climate change (you can thank the term “global warming” for that) but in reality, every part of the global climate system is shifting, including the ENSO cycle.
The science is still evolving about how climate change will alter ENSO, but right now it appears that ENSO is becoming more intense and extreme. The African drought is the worst in more than 60 years, while the Amazon drought is the worst in more than a century. These are extreme extreme events. Climate scientists are usually reluctant to point to any particular event and say it’s a direct result of climate change. However, this year continues a trend we’ve seen in recent decades.
For the people who live in the South Pacific, their water resources have always been limited; there’s never been very much fresh water on these little islands, since all of it comes from precipitation and there’s no good way to capture it. Over the last few decades, the population and per capita rate of water consumption have both grown — more demand but no more supply. This year, the small margin of error they had is gone.
Life without fresh water is very difficult, and the solutions for the South Pacific are equally complicated. Tuvalu has already started considering a slow, long-term evacuation of the entire country’s population to one or more other countries richer in natural resources and not in danger of sea-level rise, which is another climate change issue for Pacific islands.
Another option is to both reduce demand by making water use as efficient as possible (which most of these countries have done already) and to increase supply, probably by building desalinization plants, which convert salt water into fresh water. This is the route that places like the Middle East and California are pursuing. But these islands are very poor, and these plants are expensive to build and to operate.
If the ENSO continues to become more intense, these countries may need to relocate more of their people to other places. The people on these islands will get past this crisis, but they have some hard choices ahead as they think about what might be best for their children’s future.
These decisions may have to be made sooner than we’d like; recent projections suggest that 2012 may continue the current La Niña stage for another year.
John Matthews is the director of CI’s freshwater climate change program. |
“Negro slavery is contrary to the sentiments of humanity
and the principles of justice.”
The Abbé Henri-Baptiste Grégoire (1750–1831), a Catholic priest and bishop, was a leading French abolitionist at the turn of the eighteenth century, a participant in the Revolution of 1789 and member of its governing assembly, and a supporter of the rights of Jews and free blacks in France and its colonies.
His first published essays, written in the 1770s shortly after he was ordained, concerned equal treatment under law for the Jewish population in France. Grégoire supported the French Revolution of 1789 and was elected as one of the few clergymen to the Estates-General, the revived French Assembly. He served in several public offices in the following decades and was elected Bishop of Blois. Grégoire was an early supporter of abolition, a stance that led to later clashes with Napoleon and the Bonapartist regime. He met Julien Raimond, the Haitian advocate for racial reform, in 1789 and supported Raimond’s work to convince the Assembly to strike racially discriminatory laws in the French colony of Saint-Domingue (Haiti). Grégoire supported the Haitian Revolution of 1791. The Constituent Assembly's 1791 law granting the same rights to some free men of color in the French colonies was passed on his proposal.
Grégoire joined the Société des amis des Noirs (Society of the Friends of Blacks) in 1787 and began writing abolitionist pamphlets. Thomas Jefferson, then living in Paris as American minister, was invited to join the society at the same time, but declined. Grégoire’s full-length work An Enquiry Concerning the Intellectual and Moral Faculties, and Literature of Negroes was first published in 1808. The first edition in English, the complete text of which is included here, was brought out in 1810 by Brooklyn printer Thomas Kirk. It was translated by David Bailie Warden, the acting American consul in France at the time.
The book was an immediate rallying point for the nascent abolitionist cause in America. As the long listing of dedicatees (many of whom were still living when the book went to press) shows, the English abolitionist movement was considerably larger and more established than its counterpart in America at this time. Britain had abolished the slave trade the previous year, and America’s ban on the importation of slaves began in 1808.
In his book, Grégoire systematically refutes all the major arguments for the inferiority of blacks, countering them with examples showing how blacks and black societies possess the same elements of intellect and civilization found in white societies. Its examples of African-American achievement, especially the biographical listings in Chapter VII, remained a standard source for abolitionist writings throughout the nineteenth century.
In his arguments supporting black intellect, leadership and initiative, Grégoire’s examples of the Haitian Revolutionary leaders Toussaint L’Ouverture and Ogé won him no favors in Bonapartist France, which had quickly moved to repress the Revolution in Haiti and reinforce the rights of slaveholders. Grégoire’s relationship with both the Church and the French government remained strained for the rest of his life due to his progressive views.
The works and achievements of most of the writers cited—Equiano, Ignatius Sancho, and Phillis Wheatley, for example—would have been known in transatlantic intellectual circles of the time, though their accomplishments had not been systematically documented in this manner. In one sense, Grégoire’s book is the first volume of African-American literary criticism.
Thanks to Rare Books and Special Collections Librarian Jeffrey Makala for suggesting and helping to make this book available in electronic form. The project could also not have been completed without the work of Tony Branch from the Systems Department, and Kate Boyd, Deborah Green (2007, MLIS Library Science), and Laura Coleman (2007, MLIS Library Science) from the Digital Activities Department.
The monograph was scanned on a flatbed Epson Expression 10000 XL scanner using SilverFast scanning software. Deborah scanned the images as color TIFFs at 24-bit and 300 ppi. From the TIFFs she created high-quality JPEGs and added the preservation metadata to the TIFF and JPEG images. Laura Coleman OCR’d the text with OmniPage Pro, creating text files, to make the pages full-text searchable. The JPEGs and text files were then uploaded to CONTENTdm. The TIFFs will be maintained as the archival masters on a SAN server, backed up to DVD and tape.
Laura began creating a home page for the collection, which Stewart finished, adding a table of contents page for easy accessibility to the different chapters. (This page has been replaced by the table of contents in the book’s CONTENTdm viewer.) Jeffrey wrote an introduction to the book. Kate Boyd created metadata in an Excel file for bookmarking the individual chapters in CONTENTdm. The metadata records follow the Western States Best Practices Dublin Core format and were uploaded as a tab-delimited file at the same time as the images. Kate reviewed the collection and uploaded the images to the CONTENTdm database. |
Formally, a string is a finite sequence of symbols such as letters or digits. The empty string is the extreme case where the sequence has length zero, so there are no symbols in the string. There is only one empty string, because two strings are only different if they have different lengths or a different sequence of symbols. In formal treatments, the empty string is denoted with ε or sometimes Λ or λ.
The empty string should not be confused with the empty language ∅, which is a formal language (i.e. a set of strings) that contains no strings, not even the empty string.
The empty string has several properties:
- |ε| = 0. Its string length is zero.
- ε ⋅ s = s ⋅ ε = s. The empty string is the identity element of the concatenation operation. The set of all strings forms a free monoid with respect to ⋅ and ε.
- εR = ε. Reversal of the empty string produces the empty string.
- The empty string precedes any other string under lexicographical order, because it is the shortest of all strings.
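These properties are easy to verify in an interactive session. Below is a small Ruby sketch (Ruby is used for the code examples in this collection); the example string s is arbitrary.

```ruby
s = "abc"                        # any example string

"".length == 0                   # => true  (|ε| = 0)
("" + s) == s && (s + "") == s   # => true  (ε is the identity for concatenation)
"".reverse == ""                 # => true  (the reversal of ε is ε)
["b", "", "a"].sort              # => ["", "a", "b"]  (ε precedes every other string)
```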
Use in programming languages
In most programming languages, strings are a data type. Individual strings are typically stored in consecutive memory locations. This means that the same string (for example the empty string) could be stored in two different places in memory. (Note that even a string of length zero can require memory to store it, depending on the format being used.) In this way there could be multiple empty strings in memory, in contrast with the formal theory definition, for which there is only one possible empty string. However, a string comparison function would indicate that all of these empty strings are equal to each other.
In most programming languages, the empty string is distinct from a null reference (or null pointer) because a null reference does not point to any string at all, not even the empty string. The empty string is a legitimate string, upon which most string operations should work. Some languages treat some or all of the following in similar ways, which can lessen the danger: empty strings, null references, the integer 0, the floating point number 0, the boolean value false, the ASCII character NUL, or other such values.
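The distinction can be seen directly in Ruby, where nil plays the role of the null reference. This sketch only illustrates the general point; other languages draw the line in their own ways.

```ruby
empty   = ""
missing = nil

empty == missing     # => false : an empty string is not a null reference
empty.nil?           # => false
empty.empty?         # => true
missing.to_s         # => ""    : Ruby maps nil to the empty string on demand
empty.upcase         # => ""    : string operations work on the empty string
# missing.upcase     # would raise NoMethodError; nil supports no string operations

a = ""
b = ""
a == b               # => true  : all empty strings compare equal...
a.equal?(b)          # => false : ...even when they are distinct objects in memory
                     #            (default behavior; frozen string literals change this)
```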
The empty string is usually represented similarly to other strings. In implementations with string terminating character (null-terminated strings or plain text lines), the empty string is indicated by the immediate use of this terminating character.
|λ representation||Programming languages|
|""||C, C++, Objective-C (as a C string)|
|@""||Objective-C (as a constant NSString object)|
|[NSString string]||Objective-C (as a new NSString object)|
|"" or String.Empty||C#, Visual Basic .NET|
Examples of empty strings
The empty string is a syntactically valid representation of zero in positional notation (in any base), which does not contain leading zeros. Since the empty string does not have a standard visual representation outside of formal language theory, the number zero is traditionally represented by a single decimal digit 0 instead.
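Ruby's two string-to-integer conversions happen to illustrate this tension between the empty string and the digit 0: the lenient converter reads an empty string as zero, while the strict parser rejects it. This is offered only as an illustration in one language, not as part of the formal point above.

```ruby
"".to_i       # => 0 : the lenient conversion treats the empty string as zero
"007".to_i    # => 7 : leading zeros are tolerated here as well

begin
  Integer("")                    # the strict parser rejects an empty numeral
rescue ArgumentError => e
  puts e.message                 # => invalid value for Integer(): ""
end
```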
Zero-filled memory area, interpreted as a null-terminated string, is an empty string.
Empty lines of text show the empty string. This can occur from two consecutive EOLs, as often occur in text files, and this is sometimes used in text processing to separate paragraphs, e.g. in MediaWiki.
Solar energy is arguably the most reliable and cleanest form of renewable energy available to the common person. It is also a never-ending source of energy: while fossil fuels will sooner or later run dry, solar energy will be available in abundance for as long as the earth exists. Given this scenario, and taking affordability into account as well, it is no wonder that countries and individuals are switching to solar energy over other forms of power.
What then are the components that make up a total solar energy unit?
Solar panels on the roof
The first component of a solar power system is the solar panels, which are placed on a part of the roof where there will not be any shade or shadow, especially during the peak sunlight hours of 9am to 3pm. Generally a south-facing orientation is preferred, but that depends on the inclination at which the panels are installed. Any shading will disrupt production of power. Its importance can be gauged from the fact that even if one of the 36 cells in a solar panel falls under shade, power production can be halved. The latest solar panels have tracking systems that move and follow the progress of the sun across the sky throughout the day.
Solar panels are also called modules. They contain photovoltaic cells made of silicon that transform sunlight not into heat but into electricity. These cells have a positive and a negative film of silicon placed under a thin slice of glass. The sunlight that falls on the cells carries photons, which displace electrons from the silicon. These free, negatively charged electrons move to one side of the silicon cell, thereby creating an electric voltage. The current can be collected in an electric box known as a fused array combiner, which contains fuses and connections that transfer the electricity to an inverter. The current at this stage is DC (Direct Current) and must be converted to AC (Alternating Current) if it is to be used in your home or office. A typical unit has a number of solar panels connected to each other by wiring, forming a solar photovoltaic array.
The inverter converts the DC power generated by the solar panels to 120/240-volt AC power that can be used in the home or office. The inverter is connected directly to a dedicated circuit breaker in the main electric panel. The inverter is usually installed as close to the solar panels as possible and also in close proximity to the electrical main or sub panels of the home. Since the inverter makes a humming noise, it is usually fixed to an external wall. If your solar energy system produces more power than you immediately consume, your electric utility meter will turn backwards!
In a solar electricity system that is linked to the utility grid, the DC power is converted to usable 120/240 AC power that goes directly into the electricity distribution network of the building. This is “net-metered”, that is the demand for power from the general utility grid is reduced when the solar panels are generating power. This leads to a lowering of energy bills.
While it is true that solar energy generating systems do not come cheap, the initial investment is quickly recouped through lower electricity bills. This is apart from ensuring a clean and green environment and reducing pollution levels. |
June 25 (UPI) -- Scientists have discovered a volcanic heat source underneath Antarctica's Pine Island Glacier.
Already threatened by rising atmospheric temperatures and warming ocean currents, the West Antarctic Ice Sheet's enemy list continues to grow.
Scientists discovered the heat source while analyzing trace gases from water samples collected near the glacier's coastal shelf.
"I was sampling the water for five different noble gases, including helium and xenon," Brice Loose, a chemical oceanographer at the University of Rhode Island's Graduate School of Oceanography, said in a news release. "I use these noble gases to trace ice melt as well as heat transport. Helium-3, the gas that indicates volcanism, is one of the suite of gases that we obtain from this tracing method."
Loose and his colleagues weren't looking for volcanism. When they measured the elevated levels of helium-3, they assumed it was an anomaly or a mistake.
But followup measurements confirmed the helium isotope spike wasn't an aberration.
"When you find helium-3, it's like a fingerprint for volcanism. We found that it is relatively abundant in the seawater at the Pine Island shelf," Loose said.
Pine Island Glacier is losing ice mass faster than any other glacier in Western Antarctica, but researchers don't believe the volcanic heat source is the main driver of the glacier's melting.
"There are several decades of research documenting the heat from ocean currents is destabilizing Pine Island Glacier, which in turn appears to be related to a change in the climatological winds around Antarctica," Loose said.
The volcanic heat source is, however, one more factor to account for when modeling the stability of the West Antarctic Ice Sheet. The ice sheet's stability has serious implications for the future of sea level rise.
The analysis of trace gases -- published in the journal Nature Communications -- suggests the volcanic heat source is giving off as much as 25 times more thermal energy than a dormant volcano. And while climate change explains the bulk of the glacier's melting, the new heat source is most likely accelerating the glacier's ice loss.
"The discovery of volcanoes beneath the Antarctic ice sheet means that there is an additional source of heat to melt the ice, lubricate its passage toward the sea, and add to the melting from warm ocean waters," said Karen Heywood, a professor at the University of East Anglia. "It will be important to include this in our efforts to estimate whether the Antarctic ice sheet might become unstable and further increase sea level rise." |
Priming the brain to sprout new blood vessels before a stroke occurs could reduce the severity and improve the patients’ chances of recovering afterward, according to new research.
“They [might still] get the stroke, but it’s only half as bad and they may in fact recover,” said Jeff Dunn, Director of the Experimental Imaging Center at the University of Calgary. “I think that’s pretty exciting.”
Fifteen million people suffer a stroke globally each year, according to the World Health Organization, leaving many permanently disabled, or worse. Stroke occurs when fats or blood clots clog a mid-sized blood vessel, restricting blood flow, oxygen and nutrients to our sensitive gray and white matter. If the blockage lasts long enough, brain cells can start to die.
Dunn has studied the protective effect of new blood vessel growth on the brain for years. Several years ago he discovered that when an animal lives at altitude, the oxygen partial pressure in its brain tissue — a measure of healthy blood supply to a tissue — increases. Presumably, he thought, the boosted oxygen pressure, and therefore blood supply, was due to new vessels forming in the brain.
In a study published in PLoS ONE in September, Dunn’s team found evidence to support their suspicions by raising two groups of rats in different oxygen levels. One of the groups of rats lived at the natural atmospheric pressure of Calgary. They raised another group in a cage with half the normal atmospheric pressure and a lower oxygen percentage, equivalent to a rat cage lifted 3 miles higher.
After three weeks, the high-altitude rats had, on average, 30 percent more small blood vessels in their brains compared to their counterparts. The scientists then induced strokes in the rats by restricting blood flow and oxygen delivery to the brain and found that the high-altitude rats were more resistant to the negative effects of stroke, showing around half as much brain cell death and significantly reduced inflammation. They maintained motor functions, such as being able to peel a piece of sticky tape off their feet after the stroke, that were more or less lost by the rats raised at lower altitude.
Dunn believes that the brain, while strapped for blood supply in a low-oxygen, high-altitude environment, ratcheted up its production of a protein that helps cause new blood vessels to form. Dunn’s theory is that a kind of interconnected web of blood vessels forms within the brain. So when one mid-sized vessel gets clogged, it can rely on its partner vessels to provide an alternate path for the blood and oxygen.
While Dunn’s results are promising in the short-term, stroke researcher Donna Ferriero, chief physician at the University of California Benioff Children’s Hospital, says the researchers may have jumped the gun in determining how the animal was affected by the stroke; ideally they should check how the rats are doing a few weeks later, rather than immediately after the stroke.
Dunn hopes that in time, his findings in animals could benefit patients who come into the emergency room suffering from transient ischemic attacks (TIA), a condition where blood flow is only temporarily shut off from the brain, causing stroke-like symptoms. “These people with TIAs, many of them come back with a major stroke within the next week or two,” Dunn said. “If we could … treat them in a way that protects them if they have a major stroke, well that would be huge.”
Even for patients with a high risk of experiencing a stroke in the near future, preparing them by reducing the amount of oxygen they breathe isn’t the best approach, Dunn acknowledged. But it may be possible to use drugs to get the same effects as reduced oxygen.
Ferriero agreed that a number of animal studies have shown that drugs can produce some of the same protective effects by increasing blood vessel formation. And a recent Phase 1 clinical trial in newborn humans supports Dunn’s hypothesis as well. However, current data for the same treatment in adults is not as encouraging.
With further research, it’s at least possible that doctors could one day use such a treatment on high-risk patients so that their brains are primed with new vessels in case something worse happens down the line.
Citation: Dunn JF, Wu Y, Zhao Z, Srinivasan S, Natah SS (2012) Training the Brain to Survive Stroke. PLoS ONE 7(9): e45108. doi:10.1371/journal.pone.0045108
The study of weather patterns is intrinsically tied to that of the movement of wind and water. Shifts in the currents can have lasting yearly climatic consequences for many regions throughout the globe. But exactly how does the movement of water within the depths of the oceans affect the weather expressed on the surface?
The weather itself is the product of the water cycle, the movement of water from one phase to another. Earth’s water is constantly recycled, evaporating with the heat of the sun and condensing back to the ground and the oceans.
Water, in turn, is moved by its own currents, caused by differences in salinity and temperature: cold, salty water sinks, displacing warmer, less saline water. Currents can also be affected by underwater features such as trenches, basins, and seamounts, creating differences in their flow. In addition, the tides, winds, and the very rotation of the earth are known to affect the currents at the surface.
The constant flow of both surface and deep-water currents plays a key role in evenly distributing heat throughout the planet, with the currents bringing warm water toward the poles and cooler water toward the equator.
Bringing it full circle to the water cycle (and thus, the weather), the actions of the currents ultimately bring warmer water to the surface, creating more evaporation. The presence of warm water currents in warmer times of the year thus leads to more precipitation on land, and vice versa. This is amplified in the tropics: as more evaporation happens in tropical oceans, the surrounding regions are often very rainy.
Outside the equator, the cycle of warm and cold currents is responsible for the frequency of precipitation at various times of the year. |
Jack Mostow, Alexander G. Hauptmann, Lin Lawrence Chase, Steven Roth
What skill is more important to teach than reading? Unfortunately, millions of Americans cannot read. Although a large body of educational software exists to help teach reading, its inability to hear the student limits what it can do. This paper reports a significant step toward using automatic speech recognition to help children learn to read: an implemented system that displays a text, follows as a student reads it aloud, and automatically identifies which words he or she missed. We describe how the system works, and evaluate its performance on a corpus of second graders’ oral reading that we have recorded and transcribed. |
Out in the clear waters near the Great Barrier Reef, a common blanket octopus male swims toward a female. This male need not worry about showing his brightest colors or engaging in a showy battle of strength in hopes of winning the female’s permission to approach. In fact it’s unclear if the female even notices his approach at all.
You see, the male blanket octopus is less than an inch long. His object of affection? She often tops six feet. How can a male this small fertilize the eggs of a female that large? Therein lies one of the trickier sex acts in the natural world.
Most animals require close contact to reproduce, using either internal fertilization (as humans do) or fertilization nearby (think spawning salmon). So having a similarly matched body size is helpful for reasons of physical logistics—as well as for personal safety (as we’ll soon see). But for causes still obscure to most—except Evolution herself—in a very few, very distantly related animals, males and females long ago diverged to two radically different scales.
The blanket octopus (Tremoctopus violaceus) is one of a handful of animals that go to great lengths to overcome their partner’s drastic difference in size. Males of the species have to mount females that are some 72 times their size. So instead of searching for a lasting union, the male removes his specialized mating arm (known as the hectocotylus), replete with his genetic contribution, and deposits it with the female for her to use at a later date—while he swims away with his seven arms to safety (and rapid senescence). (In fact, the octopus’s mating arm was named the hectocotylus because, initially, it was mistaken for a “100-suckered” parasitic worm.)
Male spiders are another example. They must be as cautious as wee octopuses because their mates are not only huge, but also frequently cannibalistic. In cases like these, small size can actually be to a male’s advantage. For the tiger spider (Nephila plumipes), a type of orb spider, the male’s extreme bittiness might diminish his odds of becoming a meal. “The female may fail to detect a small male,” researchers suggest—and “the female may ignore a small male because his size provides little in the way of a nutritional meal.” So, over time, natural selection seems to have favored smaller males, who are most successful—whether through stealth or just unappetizing scrawniness—at avoiding cannibalization prior to mounting a female (all bets are off afterward, according to scientists who watched this in action). But what stops the male tiger spider from shrinking into obscurity? This species competes with other males for access to females, so it can pay to be small—but not the smallest.
In some cases, one sex is not just a smaller version of the other, but actually fails to truly fully develop much at all. If you thought the she-octopus was a mate to be reckoned with, you have not met the female sea devil (family Ceratiidae), a type of anglerfish. This deep-sea-dwelling female has a massive toothed jaw, and a dangling bioluminescent bulb in front of her face to attract prey into her waiting maw. The male, by contrast, hunts only for the female, for she is his only chance of survival. At approximately 1/64th her size, the males lack a fully formed mouth or digestive tract. These underdeveloped fish ply the dark waters hoping to catch scent or sight of their female dirigible-savior. They use their rudimentary jaws to latch on to her underside, a bite that also releases chemicals that help to fuse the male’s mouth to the female’s body, eventually integrating him into her circulatory system. He will stay there—sometimes in the company of several other “parasitic” males—for the rest of the female’s life, absorbing nutrients from her and providing her with sperm for spawning once he and she are both mature.
These cases of extremely mismatched mates can provide challenges not only to the animals involved, but also to the humans who study them. For decades, no one had ever seen a male blanket octopus because we (silly humans) were searching for one that was similar in size to the females. It wasn’t until 2002 that one was first identified in the wild—a reminder to us to continue to look outside of our own human-sized ideas about pairing. |
Each winter, one of the most troublesome respiratory viruses of childhood makes the rounds. Most older children tolerate the respiratory syncytial virus (RSV) well and just suffer the symptoms of a cough, sore throat, runny nose and fever.
For some children, however, the virus can be very serious. These are usually children who are between the ages of two and five months. They are old enough that they have lost the immunity they received from their mothers and too young to tolerate such a harsh virus. Children who have lung, heart or immune problems are also at greater risk of complications from RSV. One of the other problems with this virus is that it may take two or three infections before a child develops good immunity to it.
The virus is spread by coughing and close contact, such as touching hands. The virus is so common that most children have had the infection by three years of age. After this age, RSV infections are seldom a problem.
The virus causes a great deal of irritation to the lining of the airways and lungs. Increased mucus and sloughed cells cause the tiny airways to plug up. This plugging slows the air that is leaving the lungs, so the lungs become over-expanded in a way similar to emphysema. In the more severe cases, the airways become completely plugged up and the airways collapse.
Treatment is usually aimed at reducing the symptoms and making sure a child is getting adequate oxygen. Infants often need to be hospitalized for RSV infections. A steroid syrup or shot is commonly used and can decrease the symptoms. In severe cases, an antiviral drug can be used, but its usefulness is still being studied.
Temperature, pressure, level, and flow instruments all sense a process parameter and produce a signal for indication or controller input.
If we want to control a process parameter, the controller output must be converted to a signal that can be transmitted to and subsequently drive a control valve. The control valve is a final control element. A final control element is any device or element that changes the value of a manipulated variable. Valves and heaters are common examples. Let's look at control valves and the devices that process the signal supplied to the control valve.
Achieve the programmed setpoint
In this illustration you can see the controller output sends an electronic signal to the current-to-pressure transducer (I/P), which sends a pneumatic signal to the control valve.
The control valve position changes in response to the signal to adjust flow to the setpoint. As the flow changes, it is sensed by the flow transmitter. When the flow sensed is equal to setpoint, the valve position remains the same. Any time there is a disturbance to the system or a change in setpoint, the flow control loop automatically responds to achieve the programmed setpoint. A block diagram of this concept is here.
The final control element can be proportional control or ON-OFF control. For ON-OFF control, a controller output relay changes the state of the relay contact, which completes the circuit for a solenoid valve to energize. The solenoid valve opens to allow air to open (or close) a control valve.
The first component in the final control subsystem is the signal conditioner. The signal conditioner amplifies and, if necessary, converts the signal for compatibility with the actuator.
Typical devices used as signal conditioners include current-to-pneumatic transducers, current-to-voltage (I/E) transducers, amplifiers (electronic or pneumatic), relays, digital-to-analog converters, or analog-to-digital converters. The most common signal conditioner in a proportional control loop is an I/P transducer.
A typical I/P transducer is a force balance device in which a coil is suspended in the field of a magnet. Current flowing through the coil generates axial movement of the coil, which causes movement of the beam. The beam controls the backpressure against the nozzle by controlling the restriction of airflow through the nozzle. This backpressure acts as a pilot pressure to control the outlet pressure.
The zero adjustment causes the beam to move relative to the nozzle. The span adjustment is a potentiometer that limits the current through the coil. The I/P transducer must be supplied with instrument air within the range specified by the manufacturer, usually at least 20 psig.
The typical I/P transducer is calibrated for a 4-20 mA input = 3-15 psig output. Most I/P transducers can be configured for direct action (output pressure increases as input signal increases) or reverse action (output pressure decreases as input signal increases).
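Because this relationship is a straight linear scaling, the expected output for any input current is easy to compute, and the direct and reverse configurations simply mirror one another. Below is a minimal sketch; the function name and keyword argument are illustrative, not part of any vendor's tooling.

```ruby
# Expected I/P output for a 4-20 mA input and a 3-15 psig output range.
def ip_output_psig(ma, direct: true)
  fraction = (ma - 4.0) / 16.0              # 0.0 at 4 mA, 1.0 at 20 mA
  fraction = 1.0 - fraction unless direct   # reverse-acting: pressure falls as current rises
  3.0 + 12.0 * fraction                     # 3 psig zero, 12 psig span
end

ip_output_psig(12.0)                  # => 9.0  (direct-acting, mid-scale)
ip_output_psig(20.0)                  # => 15.0
ip_output_psig(20.0, direct: false)   # => 3.0  (full input drives a reverse-acting unit to minimum)
```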
Mechanically to the valve
The next component in the final control subsystem, if applicable, is the actuator. The actuator receives the conditioned signal and changes it to some form of mechanical energy or motion.
Typical devices used as actuators include solenoids, pneumatic valve positioners, AC and DC motors, stepper motors, hydraulic motors, and hydraulic pistons. Many control valves include a pneumatic valve positioner.
A valve positioner is a device used to increase or decrease the air pressure (from the I/P) operating the control valve actuator. Positioners usually mount to the control valve actuator and connect mechanically to the valve stem for position indication.
A positioner is a type of air relay, which acts to overcome hysteresis, packing box friction, and effects of pressure drop across the valve. It assures exact positioning of the valve stem and provides finer control. There are many types of positioners. The basic principles of operation are similar for all types.
The instrument pressure (from an I/P, for example) acts on the input module, which controls the flapper-nozzle system of the relay. Supply pressure applies to the relay and the output pressure of the relay goes to the control valve actuator.
Most positioners can set up and function for direct or reverse action. For a direct-acting positioner, increasing the instrument pressure causes the input module to pivot the beam. The beam pivots the flapper and restricts the nozzle. The nozzle pressure increases and causes the relay assembly to increase output pressure to the actuator.
With a direct-acting actuator, the increased pressure moves the actuator stem downward. The positioner connects mechanically to the stem of the valve. Stem movement feeds back to the beam by means of a feedback lever and range spring, which causes the flapper to pivot slightly away from the nozzle to prevent further increase in relay output pressure.
Note that some positioners accept a milliamp input and include an integral I/P transducer.
The last component in the final control subsystem is the final control element. Let's look at control valves. (Other final control elements include servo valves, heaters, conveyors, auger feeds, and hopper gates.)
There are many different types, sizes, and applications for control valves. Selecting the correct control valve for a specific application is crucial to proper system performance. Undersizing and oversizing are common problems.
There are many valuable resources available to assist with proper selection, not the least of which is a control valve sales engineer. Here's a typical control valve.
The pneumatic signal from the positioner (or I/P if a positioner is not used) applies directly to the actuator. For this control valve, the air enters above the diaphragm and pushes against spring pressure to close the valve. The valve fully closes when the plug seats tightly against the seat ring.
As air pressure decreases, the spring pressure causes the diaphragm, stem, and plug to move upward, opening the valve. This means a loss of pressure would cause the valve to open. This is a fail-open valve.
Different configurations of air inlet, spring location, and valve seat arrangement result in different fail positions and determine whether the valve is direct- or reverse-acting. For example, this same valve, with the plug below the seat ring (reverse-seated), would open with increased air pressure and would fail closed on loss of air pressure.
So, all components in the final control subsystem must be configured correctly for the system to work properly. The fail-safe positions must be correct for the application, and the action must produce the desired results. These configurations must be properly documented and utilized during calibration, loop checks, or troubleshooting.
Attune I/P transducer
The figure below shows the setup for a bench calibration of an I/P transducer. The air supply connected to the input must be in accordance with manufacturer's specification (typically between 20-100 psig).
The pressure standard connects to the air outlet, and a mA simulator connects to the current input. It is important for the I/P transducer to be oriented the same way as the installed position in the field. A change in orientation will introduce error in most I/P transducers.
If the calibration takes place in the field, one uses the existing supply air. It is convenient to tee into the air outlet so one can check the control valve position at the same time. Of course, you need to ensure the system is in a safe condition before you open and close the valve.
Once the setup is established, apply the mA inputs for each desired test point, such as 4.0, 8.0, 12.0, 16.0, and 20.0 mA. Record the corresponding outlet pressure at each test point. For a 4-20 mA input = 3-15 psig output I/P, the corresponding outputs would be 3.0, 6.0, 9.0, 12.0, and 15.0 psig.
Some facilities adjust the 0% test point so a slightly higher mA input results in the 0% output. For example, 4.10 mA may result in a 3.0 psig output. This ensures the valve is in the closed state with a controller output of 4.0 mA.
Upon ascertaining the as-found readings, evaluate the results against the required specification. If required, perform zero and span adjustments until no further adjustment is required. Then, repeat all test points to record as-left readings.
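A sketch of that as-found evaluation is shown below: compute the expected output at each test point, compare it with the recorded reading, and flag anything outside a tolerance. The tolerance figure and the sample readings are invented for illustration and are not taken from any standard.

```ruby
TOLERANCE_PSIG = 0.15   # illustrative acceptance limit only
test_points_ma = [4.0, 8.0, 12.0, 16.0, 20.0]
as_found_psig  = [3.05, 6.02, 9.21, 11.98, 14.96]   # hypothetical recorded readings

test_points_ma.zip(as_found_psig).each do |ma, found|
  expected = 3.0 + 12.0 * (ma - 4.0) / 16.0          # 4-20 mA = 3-15 psig, direct-acting
  error    = found - expected
  status   = error.abs <= TOLERANCE_PSIG ? "OK" : "ADJUST"
  puts format("%5.1f mA: expected %5.2f psig, found %5.2f psig, error %+.2f  %s",
              ma, expected, found, error, status)
end
```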
Many organizations do not require periodic calibration of I/P transducers, positioners, or control valves. The justification is the control signal will adjust the output until the required setpoint is achieved based on the process measurement. This is true, but you want to make sure the output loop is performing correctly. The best way to do so is to check the calibration periodically.
Calibrate valve positioner
Calibration of the valve positioner can be performed at the same time as the I/P in a loop calibration. Simply tee in the pressure module at the I/P outlet in the I/P calibration. Record the valve position at each test point.
If calibrating the valve positioner separately, connect an input test pressure regulator or hand pump, and monitor the input pressure applied with a pressure standard. If there is no supply air, connect the required supply air to the positioner. Apply the pressure for the desired test points and record valve position.
For example, assume our valve positioner is 3-15 psig input = 0-100% valve position. In this case, apply 3.0, 6.0, 9.0, 12.0, and 15.0 psig. The expected valve positions should be 0, 25, 50, 75, and 100%, respectively.
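The expected positions follow from the same kind of linear scaling; a minimal sketch (names are illustrative):

```ruby
# Expected valve position for a 3-15 psig input = 0-100% positioner.
def expected_position_pct(psig)
  (psig - 3.0) / 12.0 * 100.0
end

[3.0, 6.0, 9.0, 12.0, 15.0].map { |p| expected_position_pct(p) }
# => [0.0, 25.0, 50.0, 75.0, 100.0]
```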
The valve position indicator on the stem usually marks off in 5% or 10% increments. Therefore, a best estimate of the valve position may be all you can obtain. In other cases, a valve position detector provides a remote indication to a DCS. In such cases, ensure both indicators are working properly.
Many organizations do not require calibration of valve positioners for similar reasons. However, there is much documentation showing that poor control valve positioner performance is responsible for significant loss in system efficiency and, therefore, increased costs.
To provide guidance on methods for testing positioners and control valve performance, ISA has developed a standard, ANSI/ISA-75.25.01-2000, Test Procedure for Control Valve Response Measurement for Step Inputs.
As to control valve calibration, the process is similar to positioner calibration in that one applies a pressure signal to the actuator and then tallies the resulting valve position. This step can take place with the positioner calibration, if applicable, and it can happen in conjunction with I/P calibration.
Remember to ensure the system is in a safe condition if performing the calibration in the field. In addition, know the correct action, direct or reverse, and fail position before starting.
Nicholas Sheble ([email protected]) edits the Certification department for InTech magazine. This article is from Michael Cable's book Calibration: A Technician's Guide, ISA Press 2005. Cable is a Level 3 Certified Control System Technician and is the validation manager at Argos Therapeutics. |
Data Structures and Algorithms
with Object-Oriented Design Patterns in Ruby|
The Ruby class hierarchy used to represent the basic repertoire of abstract data types is shown in the figure below. Two kinds of classes appear in the figure: abstract Ruby classes and concrete Ruby classes. Arrows in the figure indicate the specializes relation between classes; an arrow points from a derived class to the base class from which it is derived.
Figure: Object class hierarchy.
The distinction between an abstract class and a concrete class is purely one of convention. These concepts are not built into the Ruby language. Nevertheless, it is possible to write Ruby programs in a way that makes it clear that a class is an abstract class.
An abstract class is a class which defines only part of an implementation. Consequently, it does not make sense to create object instances of abstract classes. By convention, an abstract class may contain zero or more abstract methods. As with classes, the distinction between abstract methods and concrete methods is purely one of convention. An abstract method or property is one for which no implementation is given.
An abstract class is intended to be used as the base class from which other classes are derived. The derived classes are expected to override the abstract methods of the base classes. By defining abstract methods in the base class, it is possible to understand how an object of a derived class can be used. We don't need to know how a particular object instance is implemented, nor do we need to know of which derived class it is an instance.
This design pattern uses the idea of polymorphism. Polymorphism literally means "having many forms." The essential idea is that a base class is used to define the set of values and the set of operations--the abstract data type. Then, various different implementations (many forms) of the abstract data type can be made. We do this by defining abstract classes that contain shared implementation features and then by deriving concrete classes from the abstract base classes.
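A minimal Ruby sketch of this convention follows. The names used here (AbstractContainer, purge, count, StackAsArray) are illustrative only and are not claimed to match the book's actual hierarchy.

```ruby
# Abstract base class (by convention): it declares the interface and any shared
# behaviour, leaving the abstract methods to raise until a derived class overrides them.
class AbstractContainer
  def purge
    raise NotImplementedError, "#{self.class} must implement purge"
  end

  def count
    raise NotImplementedError, "#{self.class} must implement count"
  end

  # Concrete method written entirely against the abstract interface.
  def empty?
    count == 0
  end
end

# Concrete derived class: supplies real implementations of the abstract methods.
class StackAsArray < AbstractContainer
  def initialize
    @items = []
  end

  def push(obj)
    @items.push(obj)
  end

  def purge
    @items.clear
  end

  def count
    @items.length
  end
end

s = StackAsArray.new
s.push(42)
s.empty?   # => false; the inherited empty? works through the overridden count
```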
The remainder of this section presents the top levels of the class hierarchy shown in Figure . The top levels define those attributes of objects which are common to all of the classes in the hierarchy. The lower levels of the hierarchy are presented in subsequent chapters where the abstractions are defined and various implementations of those abstractions are elaborated. |
November/December 2005 | Volume 56, Issue 6
Two hundred and fifty years ago this winter, European courts and diplomats were moving ever closer to war. It would prove larger, more brutal, and costlier than anyone anticipated, and it would have an outcome more decisive than any war in the previous three centuries.
Historians usually call it the Seven Years’ War. Modern Americans, recalling a few disconnected episodes—Braddock’s defeat, the Fort William Henry “massacre,” the Battle of Quebec—know it as the French and Indian War. Neither name communicates the conflict’s immensity and importance. Winston Churchill came closer in A History of the English-Speaking Peoples when he called it “a world war—the first in history,” noting that unlike the previous Anglo-French wars, this time “the prize would be something more than a rearrangement of frontiers and a redistribution of fortresses and sugar islands.”
That prize was the eastern half of North America, and the war in which Britain won it raised, with seismic force, a mountain range at the midpoint of the last half-millennium in American history. On the far side of that range lay a world where native peoples controlled the continent. On the other side we find a different world, in which Indian power waned as the United States grew into the largest republic and the most powerful empire on earth. In that sense it may not be too much to give the conflict yet another name: the War That Made America.
Seeing what North America looked like on the far side of the Seven Years’ War illuminates the changes the war wrought and its lingering influences. The traditional narrative of American history treats the “colonial period” as a tale of maturation that begins with the founding of Virginia and Massachusetts and culminates in the Revolution. It implies that the demographic momentum of the British colonies and the emergence of a new “American character” made independence and the expansion of Anglo-American settlement across the continent inevitable. Events like the destruction of New France, while interesting, were hardly central to a history driven by population expansion, economic growth, and the flowering of democracy. Indians, regrettably, were fated to vanish beneath the Anglo-American tide.
But if we regard the Seven Years’ War as an event central to American history, a very different understanding emerges—one that turns the familiar story upside down. Seen this way, the “colonial period” had two phases. During the first, which lasted the whole of the sixteenth century, Indian nations controlled everything from the Atlantic to the Pacific, north of the Rio Grande, setting the terms of interaction between Europeans and Indians and determining every significant outcome. The second phase began when the Spanish, French, Dutch, and English established settlements in North America around the beginning of the seventeenth century, inaugurating a 150-year period of colonization and conflict by changing the conditions of American life in two critical ways. First, permanent colonies spread disease in their immediate vicinities; second, they radically increased the volume of trade goods that flowed into Indian communities. The results of this transformation were many, powerful, and enduring.
Epidemic diseases—smallpox, diphtheria, measles, plague—dealt a series of deadly blows to native populations. Ironically, the Indians nearest the European settlements, and who sustained the earliest and worst losses, also had the closest access to trade goods and weapons that gave them unprecedented advantages over more distant groups. As warriors raided for captives to prop up their dwindling populations and pelts to exchange for European weapons, wars among native peoples became ever more deadly. The Five Nations of the Iroquois, in what is now upstate New York, grew powerful in the mid-seventeenth century by trading with the Dutch at Fort Orange (Albany) and seizing captives from Canada to the Ohio Valley to the Carolinas. Iroquois power, of course, had its limits. Tribes driven west and north by their attacks forged alliances with the French, who supplied them with arms, and encouraged them to strike back.
The Iroquois were already under pressure when England seized New Netherland from the Dutch in 1664. This deprived the tribes of an essential ally when they could least afford it. Iroquois fortunes spiraled downward until the beginning of the eighteenth century, when the battered Five Nations finally adopted a position of neutrality toward the French and British empires.
The Iroquois soon found that this neutrality gave them a new form of power. They could play Britain and France off against each other in the wars that the contending empires fought during the first half of the eighteenth century. By the 1730s a half-dozen Indian groups—Cherokees, Creeks, Choctaws, Abenakis, and various Algonquians, as well as the Iroquois—were engaging in balance-of-power politics that made any maneuverings of the French, the British—and the Spanish too—indecisive. While it lasted, this balance permitted Indian and European groups to develop along parallel paths. When it ended, however, the whole edifice of native power came crashing down.
The Seven Years’ War brought about that shift and, in doing so, opened a third American epoch, which lasted from the mid-eighteenth century to the beginning of the twentieth. The shift was not immediately perceptible, for from beginning to end the war reflected the importance of Indian power. The fortunes of war in North America ebbed and flowed according to when the Indian allies of the Europeans decided to engage or withdraw. When, in 1758, the French-allied Indians on the Ohio chose to make a separate peace, Anglo-American forces could at last seize the Forks of the Ohio, the site of modern Pittsburgh and the strategic key to the transappalachian West, bringing peace to the Virginia-Pennsylvania frontier. The following year the Iroquois League shifted from neutrality to alliance with the British, permitting the Anglo-Americans to take Fort Niagara and with it crucial control of the Great Lakes. In 1760 Iroquois diplomats preceding Gen. Jeffery Amherst’s invading army persuaded the last Indian allies of New France to make peace, facilitating the bloodless surrender of French forces at Montreal.
Recognizing the central role of Indians in the war certainly should not deny the importance of French and British operations in America or diminish the critical part played by the large-scale mobilization of the colonists. Those too were decisive and were part of the worldwide extension of the fighting. Britain’s war leader, William Pitt, knew that the British army was too small to confront the forces of Europe on their home ground. He therefore used the navy and army together to attack France’s most vulnerable colonies, while subsidizing Prussia and smaller German states to do most of the fighting in Europe. Similarly, from late 1757 Pitt promised to reimburse North America’s colonial governments for raising troops to help attack Canada and the French West Indies, treating the colonies not as subordinates but as allies. This policy precipitated a surge of patriotism among the colonists. Between 1758 and 1760 the number of Anglo-Americans voluntarily participating in the war effort grew to equal the population of all New France.
Britain’s colonists continued to enlist in numbers that suggest they had come to believe they were full partners in the creation of a new British empire that would be the greatest since Rome. Their extraordinary exertions made for a decisive victory, but one that came at a fearful cost. And that in turn had an impact that extended far beyond the Peace of Paris, which put an end to the hostilities in 1763.
Paradoxically, the war had seemed to damage the vanquished less than it did the victor. Despite the loss of its North American possessions and the destruction of its navy, France recovered with remarkable speed. Because the British chose to return the profitable West Indian sugar islands to France and to retain Canada, always a sinkhole for public funds, French economic growth resumed at pre-war rates. Because France funded its re-armament program by borrowing, there was no taxpayers’ revolt. The navy rebuilt its ravaged fleet using stateof-the-art designs. The army, re-equipped with the most advanced artillery of the day, underwent reforms in recruitment, training, discipline, and administration. These measures were intended to turn the tables on Britain in the next war, which was precisely what happened when France intervened in the American struggle for independence. (The expense of that revenge tempered its sweetness somewhat, but it was only in 1789 that King Louis and his ministers, facing a revolution of their own, learned how severe the reckoning would be.)
For Britain and its American colonies the war had complex, equivocal legacies. Pitt’s prodigal expenditures and the expansion of the empire to take in half of North America created immense problems of public finance and territorial control. The virtual doubling of the national debt between 1756 and 1763 produced demands for retrenchment even as administrators tried to impose economy, coherence, and efficiency on a haphazard imperial administration. Their goal was both to control the 300,000 or so Canadians and Indians whom the war had ushered into the empire and to make the North American colonies cooperate with one another, take direction from London, and pay the costs of imperial defense.
The war’s most pernicious effect, however, was to persuade the Crown that Britain was unbeatable. The extraordinary battlefield triumphs of the previous years made this inference seem reasonable, and the perilous conviction that Britannia had grown too mighty to fail contributed to the highhanded tone imperial officials now used to address the colonists and thus helped sow the seeds of revolution.
Britain’s American colonists had come to believe they were members of a transatlantic community bound together by common allegiance, interests, laws, and rights. Imperial administrators found this absurd. Even before the war they had been proposing reforms that would have made it clear the colonists were anything but legal and constitutional equals of subjects who lived in Britain. The outbreak of the fighting had suspended those reforms, and then Pitt’s policies had encouraged the colonists to see the empire as a voluntary union of British patriots on both sides of the ocean.
So when the empire’s administrators moved to reassert the pre-war hierarchy, the colonists reacted first with shock, then with fury. What happened, they wanted to know, to the patriotic partnership that had won the war? Why are we suddenly being treated as if we were the conquered, instead of fellow conquerors?
During the 12 years between the Peace of Paris in 1763 and the battles of Lexington and Concord the colonists clarified their beliefs, using language echoing the broad, inclusive spirit of equality that had rallied them during the late war. In time those ideas became the basis of all our politics, but between 1763 and 1775 they were not yet founding principles. Rather, what took place in the postwar years was a long, increasingly acrimonious debate about the character of the empire, a wrangle over who belonged to it and on what terms and about how it should function. The dispute became so bitter precisely because the colonists believed they were British patriots who had proved their loyalty by taking part in a vast struggle for an empire they loved.
The irony here is intense and bears examining. The most complete victory in a European conflict since the Hundred Years War quickly became a terrible thing for the victor, whereas the defeated powers soon recovered purpose and momentum. Even a decisive victory can carry great dangers for the winner. Britain emerged from the war as the most powerful nation of its day, only to find that the rest of Europe feared it enough to join ranks against it; it confidently undertook to reassert itself in America only to unite its colonists in opposition to imperial authority. Finally, when Britain used its military might to compel the fractious colonists to submit, it turned resistance into insurrection—and revolution.
And what of the Indians? For them, the war’s effects were transforming, and tragic. By eliminating the French Empire from North America and dividing the continent down its center between Britain and Spain, the Peace of Paris made it impossible for the Iroquois and other native groups to preserve their autonomy by playing empires off against one another. The former Indian allies of New France came to understand the tenuousness of their position soon after the war, when the British high command began to treat them as if they, not the French, had been conquered. They reacted with violence to Britain’s abrupt changes in the terms of trade and suspension of diplomatic gift giving, launching an insurrection to teach the British a lesson in the proper relationship of ally to ally. By driving British troops from their interior forts and sending raids that once again embroiled the frontier in a huge refugee crisis, the Indians forced the British to rescind the offending policies. Yet by 1764, when various groups began to make peace, native leaders understood that their ability to carry on a war had become limited indeed. Without a competing empire to arm and supply them, they simply could not keep fighting once they ran out of gunpowder.
Meanwhile, the bloodshed and captive-taking of the war and the postwar insurrection deranged relations between Indians and Anglo-American colonists. Even in Pennsylvania, a colony that had never known an Indian war before 1755, indiscriminate hatred of Indians became something like a majority sentiment by 1764. When most native groups sided with the British in the Revolution, the animosity only grew. By 1783 Americans were willing to allow neither Indians nor the ex-Loyalists with whom they had cooperated any place in the new Republic, except on terms dictated by the victor.
In the traditional narrative mentioned earlier, the fate of native peoples is a melancholy historical inevitability; Indians are acted upon far more than they are actors. To include the Seven Years’ War in the story of the founding of the United States, however, makes it easier to understand Indians as neither a doomed remnant nor as noble savages, but as human beings who behaved with a canniness and a fallibility equal to those of Europeans and acted with just as much courage, brutality, and calculated self-interest as the colonists. In seeking security and hoping to profit from the competition between empires, they did things that led to a world-altering war, which in turn produced the revolutionary changes that moved them from the center of the American story to its margins. No irony could be more complete, no outcome more tragic.
Finally, treating the Revolution as an unintended consequence of the Anglo-American quest for empire offers a way to understand the persistence of imperialism in American history. We like to read the rhetoric of the Revolution in such a way as to convince ourselves that the United States has always been a fundamentally anti -imperial nation. What the story of the Seven Years’ War encourages us to do is to imagine that empire has been as central to our national self-definition and behavior over time as liberty itself has been—that empire and liberty indeed can be seen as complementary elements, related in as intimate and necessary a way as the two faces of a single coin.
Changing our thinking about the founding period of the United States by including the Seven Years’ War can enable us to see the significance not only of America’s great wars of liberation—the Revolutionary War, the Civil War, and World War II—but of the War of 1812, the Mexican War, the Spanish-American War, and all of the country’s other wars for empire as well. Those conflicts are not exceptions to some imagined antimilitarist rule of American historical development; they too have made us who we are. To understand this may help us avoid the dangerous fantasy that the United States differs so substantially from other historical empires that it is somehow immune to the fate they have all, ultimately, shared. |
Degrees of Freedom
'Degrees of freedom' is a term that can be rather confusing. In fact it is, but there are several ways of explaining it that help to make sense of it.
A simple (though not completely accurate) way of thinking about degrees of freedom is to imagine you are picking people to play in a team. You have eleven positions to fill and eleven people to put into those positions. How many decisions do you have? In fact you have ten, because when you come to the eleventh person, there is only one person and one position, so you have no choice. You thus have ten 'degrees of freedom' as it is called.
Likewise, when you have a sample, the degrees of freedom to allocate people in the sample to tests is one less than the sample size. So if there are N people in a sample, the degrees of freedom is N-1.
When you are calculating an average of a sample, you want the sample to have the same average as the population.
For example, if the average score for an entire population on a test is 3, and four people in a group of five score 1, 2, 3 and 5, then for the sample average to be the same as the population average the last person must be scored as 4.
There may be N observations in an experiment, but one parameter (the mean) has to be estimated from them. That leaves N-1 degrees of freedom for estimating variability.
Where there are multiple samples, then the degrees of freedom for each are N1-1, N2-1, etc. When the samples are combined, the total degrees of freedom is (N1 + N2 + ...) - Y, where Y is the number of samples. Thus combining two groups gives DF = N1 + N2 - 2.
As an example, if you have a table with a set of rows and columns where the row and column totals are known, then once you have filled in all but the last row and last column, the remaining cells can be calculated from the totals. You thus only have (R - 1)*(C - 1) choices (or degrees of freedom) in allocating numbers to cells.
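To make the arithmetic above concrete, here is a minimal Python sketch (an illustration added to this article, not part of the original) that applies the three formulas just described; the sample sizes are invented for the example.

```python
def df_single_sample(n):
    """One sample of size n: n - 1 degrees of freedom."""
    return n - 1

def df_combined(sample_sizes):
    """Several samples combined: (N1 + N2 + ...) minus the number of samples."""
    return sum(sample_sizes) - len(sample_sizes)

def df_table(rows, cols):
    """Table with known row and column totals: (R - 1) * (C - 1)."""
    return (rows - 1) * (cols - 1)

print(df_single_sample(11))   # picking an 11-person team -> 10 free choices
print(df_combined([20, 25]))  # two samples of 20 and 25 -> 20 + 25 - 2 = 43
print(df_table(3, 4))         # a 3 x 4 table -> (3 - 1) * (4 - 1) = 6
```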
Before touching on equipment or technique, it is important to understand exactly what sound is and how it works. While there are far more scientific aspects involved with the subject of sound than what I am going to cover here, my goal is to break things down in a way that musicians will relate to.
A great definition of sound comes from mediacollege.com, which says: “Sound waves exist as variations of pressure in a medium such as air. They are created by the vibration of an object, which causes the air surrounding it to vibrate. The vibrating air then causes the human eardrum to vibrate, which the brain interprets as sound.”
Sound waves are created much in the same way as waves in water. If you place your finger in a tub of water and move it back and forth, you immediately begin to see ripples forming and moving outward from the point of contact. These ripples exist because: a) you, as the source, caused a vibration; and b) the vibration you caused had a medium to travel through – the water. With sound, you have many possible sources — e.g. a hammer striking a piano string, someone’s voice resounding, a mallet striking a drum – which produce vibrations that travel through the air and finally to your eardrum. Your eardrum is an incredibly sensitive membrane that will vibrate at even the slightest pressure created by sound waves. The waves then travel through the inner ear, which is lined in rows with tiny hairs that act as receptors. The vibrations pass along these hairs, reacting to the frequencies of the waves that pass through, and produce the sensation of hearing. Hearing loss can occur when these tiny hairs are bombarded with too much pressure all at once, or too frequently, thus causing the hairs to essentially lie down flat and no longer be able to experience the sensation of sound waves traveling through.
Each sound wave has its own unique set of characteristics that are primarily defined by three things: 1. Wavelength 2. Frequency 3. Amplitude
Wavelength – Measurement from the crest of one wave to the crest of the next. High-pitch sounds have short wavelengths, while low-pitch sounds have long wavelengths.
Frequency – The number of sound wave cycles completed per second. A cycle is one complete peak and fall of a sound wave or one vibration of the vibrating source. Frequency is measured in Hertz (Hz). The average human ear picks up on frequencies (or pitches) that range from 20-20,000 Hz. The low sound of a tuba vibrates around 25 Hz, whereas the frequency of a C piccolo lands right around 587 Hz.
Amplitude – Essentially the height of the wave from its equilibrium point. Amplitude refers to the intensity or loudness of the wave and is measured in decibels (dB). The average conversation is spoken at about 70 dB, whereas levels around 130 dB would be akin to standing next to the engine of a jet. The sound pressure of 130 dB is such that it will cause pain and potential damage to your ear drum.
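Wavelength and frequency are tied together by the speed of sound (wavelength = speed / frequency), which is why low pitches have long wavelengths and high pitches short ones. The short Python sketch below is an added illustration, not from the original article; the 343 m/s figure is the approximate speed of sound in room-temperature air.

```python
SPEED_OF_SOUND = 343.0  # meters per second in air at roughly 20 C (assumed value)

def wavelength_m(frequency_hz):
    """Return the wavelength in meters for a frequency given in Hertz."""
    return SPEED_OF_SOUND / frequency_hz

# The ends of the average hearing range plus the tuba and piccolo examples above
for freq in (20, 25, 587, 20000):
    print(f"{freq:>6} Hz -> wavelength of about {wavelength_m(freq):.3f} m")
```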
If you would like to delve more into the subject of sound waves and their characteristics, then www.mediacollege.com offers supplemental information as well as audio examples of some of the things we have covered here. There’s also a great (and inexpensive) book by Jerry Slone called The Basics of Live Sound that offers a great crash-course without all of the engineer “jargon.”
Now that you have an understanding of what sound is and how it works, we will next discuss what the job of a sound engineer entails. Stay tuned! |
World War 1 History: The Kettering Bug—World's First Drone
The American World War I Flying Bomb
After the Allies landed in Normandy on June 6, 1944, the Germans unleashed their V-1 flying bombs against London. By the end of World War II, nearly 10,000 of the terror weapons had been launched against British targets. They were the first pilotless bombs ever used in war, but the very first such weapon (“unmanned aerial vehicle” in modern military-speak or, more commonly, “drone”) was actually developed more than 25 years earlier during World War I by the Americans. It was called the Kettering Bug.
Charles F. Kettering
Development of the Kettering Bug, formally called the Kettering Aerial Torpedo, started in April 1917 in Dayton, Ohio after the U.S. Army asked inventor-engineer Charles F. Kettering to design an unmanned flying bomb with a range of 40 miles. Kettering assembled his team, including Orville Wright, one of the famous Wright brothers, and got to work.
Papier Mache and Cardboard
What emerged was an ungainly-looking contraption. Its fuselage was constructed of papier mache reinforced with wood laminates; its smooth 12-foot wings were made of cardboard. Kettering’s invention looked like a propeller-driven torpedo with wings. It took off from a small four-wheeled carriage, which rolled down a portable “aiming” track. It was, however, a technical marvel for its time.
It had a small gyroscope which kept its heading true. Its elevation was controlled by a small aneroid barometer that was so sensitive it could be triggered simply by moving it from a desktop to the floor. An ingenious arrangement of cranks and bellows (taken from player pianos) controlled its flight.
To set flight duration to target, three factors were needed: wind direction, wind speed and actual distance to target. Using these figures, the number of engine revolutions necessary to carry the Bug to its destination were calculated and a cam was set. When the engine had made that number of revolutions, the cam dropped, shutting off the engine and releasing the wings. The Bug's torpedo-shaped fuselage, carrying high explosive, would then plunge to earth.
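As a rough illustration of the arithmetic involved in setting the cam (the engine speed and wind figures below are hypothetical, chosen only to show the idea; they are not historical data):

```python
# Hypothetical numbers for illustration only -- the Bug's actual engine RPM and
# the wind corrections used by its crews are not given in this article.
airspeed_mph = 50.0      # the production model's flight speed
headwind_mph = 10.0      # assumed wind blowing straight down the line of flight
distance_miles = 40.0    # the Army's original range requirement
engine_rpm = 1500.0      # assumed engine speed

ground_speed_mph = airspeed_mph - headwind_mph           # speed over the ground
flight_minutes = distance_miles / ground_speed_mph * 60  # time needed to reach the target
cam_revolutions = engine_rpm * flight_minutes            # revolutions before engine cut-off

print(f"Set the cam for about {cam_revolutions:,.0f} revolutions "
      f"({flight_minutes:.0f} minutes of flight).")
```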
Ready to Launch
The Bug Had Bugs
After initial tests were highly successful, it was decided to demonstrate the Bug's progress to the military. One of the witnesses, General Arnold, said:
“After a balky start before the distinguished assemblage, it took off abruptly, but instead of maintaining horizontal flight, it started to climb. At about 600 to 800 feet, as if possessed by the devil, it turned over, made Immelmann turns, and, seeming to spot the group of brass hats below, dived on them, scattering them in all directions. This was repeated several times before the ‘Bug’ finally crashed without casualties.”— General Arnold
Still Needed Tweaking
Adjustments were made and a second demonstration arranged. The Bug was set to fly at 50 mph and the dignitaries piled into cars to give chase so they could witness it crashing into the ground. Unfortunately, instead of flying straight, it went off course and circled the city of Dayton, cars in pursuit. The main concern wasn't what might happen if it crashed in the city, but whether the enemy might get wind of the Kettering Bug. The entourage searched the vicinity where they thought it had come down and came upon some excited farmers who reported a plane crash, but they couldn't find the pilot. One of the passengers in the pursuit team was a flying officer in a leather coat and goggles, and a quick-thinking colonel explained that he was the pilot, who had jumped out of the plane with his parachute. General Arnold again: “Our secret was secure. The awed farmers didn’t know that the U. S. Air Corps had no parachutes yet.”
$400 Flying Bomb
Despite these setbacks, the Kettering Bug was approved after adjustments were made. The production model flew at 50 mph and had a maximum range of 75 miles, exceeding the original requirement by 35 miles. The power to fly and operate the controls was provided by a 40-horsepower Ford engine, which cost $50, putting the total price per Bug at only $400. Including 300 lbs of explosive, its total weight was just 600 lbs.
Kettering Bug's Successor
The War Ends
The government was impressed and ordered 20,000 Kettering Bugs, but only fifty were produced before World War I ended on November 11, 1918, and none were used in combat. When World War II started, serious consideration was given to reactivating and improving the Kettering Bug, but it was decided that even an improved Bug couldn't hit key targets in Germany from England. Lessons from the Kettering Bug, however, were used in the development of the first guided missiles and radio-controlled drones. It is also interesting to note that the German V-1 flying bomb, while so much more advanced, was likewise launched from a ramp and also had a small propeller whose sole purpose was to determine when to shut off the V-1's jet engine.
Kettering's Aerial Torpedo
© 2012 David Hunt |
“Play is our brain’s favorite way of learning.” By Diane Ackerman
Play is a critical component of developing minds. Research shows play is as important as sleep and food, so why don't we promote more play?
Kevin Carroll is one of the gurus of why we need play in our lives, both as adults and as children. He is the reason I started learning more about play a few years ago, when I heard him as a keynote speaker at ISTE14. He got me thinking about how we could incorporate play more into the classroom. Kevin is the author of three highly successful books published by ESPN, Disney Press and McGraw-Hill, a speaker, and a change agent (a "Katalyst," as he puts it); you can read more about him here. I highly recommend watching some of his Ted Talks and the videos on his site.
Three ways to integrate play into education:
- Create Makerspaces in your classroom or school to allow creative play
- Utilize missions or design thinking projects
- Purposeful play at recess
Resources about Play: |
Hands-on Teaching: Area and Volume
Geometry makes more sense when kids can hold math in their hands.
- Grades: 3–5
Easy, Fun Math Manipulatives
Tools to teach area and volume are all around you.
- Cheez-It square crackers
- Graham crackers
- Colorful tiles from a tile store
- Square, laminated photos of students
- Styrofoam peanuts
- Dried beans
- Colored water
- Sugar cubes
Review the concept of volume by asking students to bring in clean, empty food containers from home, such as cans, jugs, and cartons. Explain that these containers are labeled with different ways we measure volume (e.g., gallons, ounces, liters, cups). Start a word wall that displays both the terms and the items, using tacks and glue to hold up containers. Kids can peel labels off cans and cut out catalog items and magazine ads. Soon they’ll be fluent in the language of volume.
Distribute rulers, and have students use them to help draw squares and rectangles of different sizes from graph paper, construction paper, or magazines. After they’ve cut out the shapes, challenge kids to create pictures or designs by gluing the shapes onto cardboard, making sure they don’t overlap. Ask them to find the area of each shape and then the total area of the design. Have students display their works by hanging them on a bulletin board or shelf in order from least to greatest area. Make a class mural by asking students to arrange their designs on one sheet of butcher paper and then calculate the total area of all the shapes.
Ask students if they can find the volume of a shoe box using Unifix cubes. Do they need to fill the box to find the answer? Line the bottom of the box with cubes. How many cover the bottom? What is the area of the base of the box? Now ask how many layers they will need to fill the box. Let students experiment in small groups. Some will want to fill the box entirely with cubes to find the volume, while others may realize they only need to find the height of the box in cubes and multiply it by the area of the base. When reflecting on the strategies they used to find the volume, students should discover that the “shortcut” to finding volume is to multiply length and width and height.
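To see the shortcut numerically, here is a small Python sketch (added for illustration; the box dimensions are made up) that counts cubes layer by layer and then multiplies length, width, and height:

```python
# A hypothetical shoe box measured in Unifix cubes: 8 cubes long, 5 wide, 4 tall.
length, width, height = 8, 5, 4

base_area = length * width           # cubes needed to cover the bottom of the box
layer_by_layer = base_area * height  # one 40-cube layer, stacked 4 layers high
shortcut = length * width * height   # the shortcut: length x width x height

print(base_area)       # 40 cubes in the bottom layer
print(layer_by_layer)  # 160 cubes fill the box
print(shortcut)        # 160 -- the same answer without counting every cube
```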
Tall and Short Containers
Use two identical pieces of construction paper to make cylinders—one tall and skinny, the other short and stout. Tape each cylinder together by lining up the seams so they do not overlap. Ask students which cylinder holds more. Place the tall, skinny cylinder inside the short, stout cylinder. Fill the tall cylinder to the top with dry ingredients such as rice, popcorn or Styrofoam peanuts. Lift the tall cylinder, letting the dry ingredients fill the short cylinder. Children will see that the short cylinder still has room for more, and thus has a larger capacity.
Give each student 20 square units and ask them to make any shape or design they’d like to with their squares. Lead students on a tour to see the shapes made by others in the class. Once they’ve returned to their seats, ask what all of the designs have in common (they’re all made of squares, and they all have an area of 20 square units). Next, have the students find the perimeter of their shapes by counting the units along the outside edges. Whose shape has the smallest perimeter? The largest? They’ll discover that perimeter does not depend on area.
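For rectangles alone it is easy to list how widely the perimeter can vary while the area stays fixed; the short Python sketch below is an added illustration (not part of the original activity) using the same 20 square units:

```python
# List every rectangle made of 20 whole square tiles and its perimeter.
area = 20
for length in range(1, area + 1):
    if area % length == 0:
        width = area // length
        if length <= width:  # skip mirror-image duplicates like 20 x 1
            perimeter = 2 * (length + width)
            print(f"{length} x {width}: area {area}, perimeter {perimeter}")
# Output: 1 x 20 has perimeter 42, 2 x 10 has 24, and 4 x 5 has only 18.
```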
Have students work in pairs with a partition between them. Give them each a pile of square tiles and challenge them to make a figure with a specified area of square units. Once they’ve constructed their figures, let them lift the partition and compare. Do two figures with the same area have the same perimeter or shape?
Give students sets of tangram puzzle pieces and have them find the area of each piece. (For the rhombus, they will need to divide it into two triangles and one square, find the area of each, and then find the total.) Have students write the area on each puzzle piece. Now ask them to put the pieces together to form one large square and find the area of the square. If they add the areas of the seven smaller pieces, their answer should be equal to (or close to) the total area of the large square.
Line up five or six containers of different sizes and shapes and have students figure out how they compare in volume. This can be done with water on a warm day, or with dry ingredients indoors. Have groups of students share the strategies they used.
Solid containers aren’t the only ones with capacity. Containers that change shape, such as balloons, sponges, and even our lungs, also have capacity. Bring in a selection of sponges that have different sizes, shapes, and density. Have students predict which sponges will hold the most water, arranging the sponges in order from least to greatest capacity according to their predictions. Place the sponges in a tub of water for 10 minutes, then let students squeeze the water into individual graduated cylinders to get an estimate of how much water each sponge held. Older students can measure the volume of each sponge (length times width times height for rectangular and square prism shapes) before and after it sits in the water, as if it were an empty container. How does the calculated volume relate to the amount of water a sponge can hold?
Finally, for a digital twist, introduce Setting the Stage with Geometry, a new math program designed to help students build basic skills for measuring 2D and 3D shapes. It includes extension activities and worksheets. |
Living or Non-living?
Grade level(s): Grade 3, Grade 4, Grade 5
Topic: Common Functions of Living Things
All living things share common characteristics. Living things are made up of one or more cells, use energy (which includes nutrition, excretion, respiration), grow and develop, reproduce, and respond to their surroundings (which includes movement and sensation).
reproduce, feed, excrete, grow, sense, respond, respire, cell
What you need:
- 1 basin of 3 specimens from the list below for each pair of students. Pairs will have different combinations of items: one alive and the other either confusing or not alive. It makes for a very interesting discussion if some objects are given to multiple pairs, but paired with different items.
- Extra specimens for students to choose from who have finished early.
The following are possible specimens to use. Feel free to come up with your own ideas and vary which specimens are grouped together.
Heat Pack, Compass, Weasel Ball, Battery Operated Toys, Baking Soda & Vinegar, Drinking Bird, Rock
Bacteria/Mold Plate, Worms, Plant, Isopods, Yeast Bottle, Fungus, Goldfish, Elodea/Aquarium Plant, Potato or Onion with roots
Seeds, Dried Yeast, Moss, Lichen, Pieces of Wood, Corn Cob, Carrot, Pinecone, Cut Flower, Soil, Shell, Plant Bulb
Non-living materials are available at the SEP Daly Ralston Resource Center (K 118, K137, K138)
One class period 40-60 minutes
Students will investigate different objects and discuss whether they are alive or not alive. Students are challenged to provide evidence for their decision and defend their opinion.
This is the second lesson of a unit (What are Living Things and How does a Living thing Respond to Its Environment?) that was designed to precede teaching the adopted FOSS unit on life sciences. In this unit students are given time to think about and discuss the fundamental question, "What is a Living Thing?" They are also introduced to a method for doing their own science investigations on the topic of how different living things interact with their environment. The unit ends with students deciding on a testable question, designing an investigation, doing the investigation, collecting data and drawing conclusions. Students then create poster presentations of their investigation and findings for a grade level science fair.
Students will be able to list some characteristics that all living things have in common. See Lesson "What Do Living Things Have in Common?"
Students will use a class generated list of characteristics of living things to identify objects as living or non-living.
This lesson also allows students to practice:
1) Building a community of scientists
2) Thinking critically and being skeptical
3) Constructing an argument and defending a position
Living things are made up of one or more cells, use energy (which includes nutrition, excretion, respiration), grow and develop, reproduce, and respond to their surroundings (which includes movement and sensation).
Gather objects and place two objects in each basin as described above. Have enough basins for each pair of students.
Have extra objects for students who finish early. This may include a candle to light if there is a volunteer to light the candle and have a discussion with students who are finished.
Lesson Implementation / Outline
- Share with students that their task today is to try to identify objects that they are given as either living or non-living things.
- Review the class generated lists of characteristics of all living things created during the prior lesson. Review any vocabulary on list that are new to students. Remind students that this is just our initial list. They can use the characteristics that they agree are common to all living things to help them identify objects that are living or non-living.
- As students look at the objects, ask them to think if there are any new characteristics to add or if they question ones that are already listed.
Explain activity and distribute basin of objects to each pair of students. Hand out record sheet (attached).
Investigation in pairs:
1) Examine your objects.
2) For each of your objects, discuss the following questions:
a) Is this a living thing? If I think it is, why do I think so? What is my evidence? (focus on what you can directly observe). What does it have in common with all living things?
b) Is this a living thing? If I think it isn't, why do I think so? What is my evidence? (focus on what you can directly observe). Does it have anything in common with all living things? How does it differ?
3) Write down your ideas.
4) Choose the object about which you have had the most interesting discussion, and be prepared to present your conclusions about whether it is alive or not. Your conclusions should be supported by your observations.
Post directions on board
1) Reporter 1 holds up object and describes it to the rest of the class.
2) Reporter 2 answers the following:
a) Did you and your partner decide that the object was alive, not alive, or are you not sure?
b) What is your evidence for that decision?
Encourage students to question and challenge each other during their discussions with partners as well as during their report-out. Depending on the age of your students, you might want to model how to ask clarifying and/or skeptical questions. Sentence starters can be displayed:
Why do you think that?
What made you decide that?
But what about ______?
How does that fit with your decision that ____________________?
Questioning each other does not only challenge students to think deeper but also models that it is the nature of science to encourage researchers to be skeptical of one another’s findings. Questioning also allows students to become accustomed to defending their positions with evidence.
Revisit the class generated lists of characteristics of all living things created during the prior lesson. "After today's lesson, decide if you still agree with all the characteristics we have listed as common to all living things. Spend a few minutes discussing this with your partner. Is there anything you would add? Is there anything that you would remove from the list?" Have students who want to add or take away explain why. See if there is class agreement, and if so adjust the list.
Extensions and Reflections
This lesson is part of the unit, "What Are Living Things and How does a Living Thing Respond to Its Environment?" It follows the lesson, "What Do All Living Things Have in Common?"
I have found that for students to have a good understanding of the structures and functions of living things and how they adapt to environmental changes, they must first have an opportunity to reflect upon and discuss their basic understanding of what a living thing is. That is the primary goal of this lesson and the lesson that precedes it.
Attachments: Living or Non-living lesson p. 1.doc (25 KB); Living or Non-living lesson p. 2.doc (25 KB)
Astrobiology in a Box is a resource created by the UK Centre for Astrobiology to teach concepts in science using astrobiology. The box contains resources for four classroom activities for 20 students and an instruction kit. It is appropriate for both primary and secondary schools. The activities are ‘Detection of Life Experiment’, ‘Extremophiles and the Limits to Life’, ‘UV Radiation and Damage to Life’ and ‘Pressure and the Limits to Life’.
Although the Astrobiology Academy distributes these boxes to participants in its CPD events, you can make your own Astrobiology in a Box by downloading the instructions below, which contain details on each activity as well as the resources you’ll need to put together the kit. |
Charles Dickens, required to write Hard Times in twenty sections to be published over a period of five months, filled the novel with his own philosophy and symbolism. Dickens expounds his philosophy in two ways: through straight third-person exposition and through the voices of his characters. His approach to reality is allegorical in nature; his plot traces the effect of rational education on Gradgrind's two children. He presents two problems in the text of his novel; the most important one is that of the educational system and the division between the school of Facts and the circus school of Fancy. The conflicts of the two worlds of the schoolroom and the circus represent the adult attitudes toward life. While the schoolroom dehumanizes the little scholars, the circus, all fancy and love, restores humanity. The second problem deals with the economic relationships of labor and management. Here one sees that Dickens lets the educational system be dominated by, rather than serve, the economic system. His philosophy, expounded through his characters, is best summarized by Sleary, who says that people should make the best of life, not the worst of it.
Dickens' symbolism takes such forms as Coketown's being a brick jungle, strangled in sameness and smoke, the belching factories as elephants in this jungle, the smoke as treacherous snakes, and the children as little "vessels" which must be filled. His symbolism also becomes allegorical as he utilizes biblical connotation in presenting the moral structure of the town and the people.
In addition to dialogue, straight narration, and description, Dickens employs understatement to convey through satire the social, economic, and educational problems and to propose solutions for these problems. His often tongue-in-cheek statements balance the horror of the scenery by the absurdity of humor, based on both character and theme. |
Phosphorus is a vital mineral that helps cells and tissues function throughout the body. While phosphorus naturally occurs in the body, several food sources are also high in phosphorus. People often receive adequate amounts of phosphorus by consuming proteins, dairy, and grains. Sometimes, however, certain health conditions require an increase or decrease in phosphorus consumption.
Phosphorus-rich foods often come from several protein sources. Meats such as liver, pork, and beef are usually high in phosphorus, and it can also be found in turkey and chicken. Fish such as salmon, halibut, and trout, as well as shrimp and clams, all contain rich sources of phosphorus. According to research by the Linus Pauling Institute, a person can consume more than 150 mg (approximately 0.15 g) of phosphorus through a 3 oz. (approximately 85 g) serving of poultry, meat, or seafood.
Most dairy products contain adequate sources of phosphorus. An average 8 oz. (226 g) serving of plain yogurt or milk delivers more than 200 mg (0.2 g) of phosphorus. Those who consume cheeses such as cheddar, Swiss, and mozzarella also benefit from the recommended doses of phosphorus. Even treats such as milkshakes, chocolate pudding, and eggnog are high in the mineral.
Starches, nuts, and grains also consist of sources high in phosphorus. The mineral can be found in white and wheat breads, as well as certain types of cereal such as wheat bran. Sources of phosphorus can also be gained from eating lentils, almonds, and peanuts, to name a few.
The allowance of food high in phosphorus usually depends on a person’s overall health. An otherwise healthy person may consume more than 100 mg (0.1 g) of phosphorus-rich dairy, meat, or grain products daily. In rare instances, people with diabetes, celiac, or Crohn’s disease may suffer from a phosphorus deficiency because the body has difficulty absorbing nutrients. The doctor’s remedies for increasing phosphorus consumption may include a change in diet or prescription supplements.
Sometimes, the body can absorb too much phosphorus, specifically in people with kidney disease. In this case, phosphorus levels can increase because the kidneys do not filter out waste, leaving dangerous toxins in the body. According to the National Kidney Foundation, high phosphorus levels also affect calcium intake, which can lead to weakened bones. Limiting phosphorus-rich foods and taking a medication known as a phosphate binder helps to keep levels under control.
In addition to maintaining tissue and cell production, phosphorus works with calcium to build strong bones and teeth. It also assists the body in using and storing energy, as well as helping the kidneys remove waste. Phosphorus also supports muscle function by reducing pain from physical activities. The mineral helps to produce DNA and RNA in the body, as well as balance the use of other nutrients such as iodine, magnesium, and vitamins B and D.
The largest known rodent is the capybara. These rodents can weigh up to 66 kilograms as adults and may span more than 140 cm in length. Capybaras are gentle in nature and can be seen in many zoos.
The capybara is a semiaquatic animal endemic to temperate and tropical zones of South America. They are equipped with webbed feet and are excellent swimmers. Their lives are spent primarily in the water, where they mate and hide from predators such as anacondas and jaguars. Capybaras submerge themselves underwater, as well, leaving their noses exposed to the air. These giant rodents are herbivores that feed primarily on river plants and tree bark. They also consume their own feces in order to assist with digestion of grass. |
If you’re like me, you think it’s a problem when students immediately reach for a calculator. Especially when they see a problem like 302-298. But to solve this problem, I’ve also worked with many students who immediately reach for a pencil instead. Most of them actually can’t even solve it without a pencil because their only mental math strategy is using the algorithm on their mental whiteboard.
So we see a couple of things here. First, students aren’t even taking a moment to look at the numbers before jumping to a solution strategy. That means they’re not thinking critically. Second, students might not know any other strategies for solving subtraction problems. For 302-298, you could use 300 as a landmark and recognize each number is 2 away from 300. Or, you could shift the whole problem to be 304-300. Either way, you certainly don’t need a calculator or a pencil. But too often, students only know the algorithm. And as the adage goes, when all you have is a hammer, everything looks like a nail. Students know when a subtraction problem will require a lot of borrowing. That’s one reason why they immediately reach for a calculator.
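The "shift the whole problem" idea is often called compensation: add the same amount to both numbers and the difference does not change. A minimal Python sketch of that strategy follows (an added illustration; DreamBox's own lessons are interactive and are not written this way):

```python
def subtract_by_shifting(a, b):
    """Shift both numbers up so the second becomes a round hundred; the difference is unchanged."""
    shift = (100 - b % 100) % 100  # how far b sits below the next multiple of 100
    return (a + shift) - (b + shift)

print(subtract_by_shifting(302, 298))  # shifts to 304 - 300, which is simply 4
print(subtract_by_shifting(614, 597))  # shifts to 617 - 600 = 17
```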
At DreamBox, we knew we wanted to make a game to help students choose good strategies based on the numbers in the problem, but we weren’t sure how the game should be designed. So we began by creating a short concept video called the “Addition Action Avengers.” In this video, students are introduced to heroes who use different strategies to solve addition problems. When teachers and students saw the video, their very first question was, “Can we see the game that goes with it?”
At DreamBox, we have over a thousand interactive and adaptive lessons for Pre-K through grade six content. In our lessons, students learn the algorithms and use number line strategies and other mental strategies to calculate answers and develop number sense. And with this ground-breaking game, we’re continuing to help students think critically about which strategies are appropriate for certain problems. As always, our goal at DreamBox is to help students be great thinkers in math. Because in the end, we know it’s the thought that counts.
Thanks for your time. |
Many of you may have seen a recent survey finding: Americans know more about "The Simpsons" than they do about the First Amendment. Newspapers in this country – and around the globe – had a field day. In fact the findings, based on a survey conducted by the McCormick Tribune Freedom Museum in Chicago, aren't new – or news. Other studies have come to identical conclusions. Too few Americans – particularly young people – understand our Constitution and the civil rights and liberties that Americans take for granted. Several years back a National Constitution Center survey put it similarly: more youth could name the Three Stooges than the three branches of government.
These surveys say a lot about what is being taught (or not taught) in our nation's schools. They also say a lot about how too many of us have failed in one of our prime responsibilities – teaching Americans about the most important document in U.S. history: the Constitution. Unfortunately, today fewer and fewer college students grapple with the history of the Constitution, its language or meaning. At a time when constitutional issues – from privacy to presidential power, gun control to Guantanamo Bay – confront this nation daily, understanding the Constitution should be the hallmark of citizenship, not its forgotten tradition.
Constitution Day is an opportunity for students to broaden their knowledge of the Constitution, with its short elegant language and its far reaching ramifications that have both shaped our history and our lives. I urge you all to take a close look at the invaluable resources assembled here by the New York Times Knowledge Network; at Annenberg Classroom and at The National Constitution Center's Constitution Day site. This array of audio, visual and electronic materials will bring the Constitution to life: students can watch Supreme Court Justices answer students' questions, click through interactive timelines and listen to contemporary debates. Teachers can find free lesson plans and classroom handouts – it is Constitution Day made easy. Most importantly, I hope it will be a way to jump-start conversations on the Constitution that will last throughout the year. |
One of the best ways to teach a child to read is to use both phonics based and sight word based reading programs. DVDs such as Meet the Sight Words are effective in helping children memorize the most frequently used sight words in the primary grades. When used in conjunction with a phonics reading program the level of reading skills developed in young children is tremendous. Here are some tips on teaching your child to read.
- Purchase a DVD that teaches the letters and sounds, such as LeapFrog Letter Factory, and have your child watch the program once a day until he or she can recognize each letter and imitate its sound.
- Do mini-lessons with your child to discuss the vowels -- reinforcing the short vowel sounds and long vowel sounds.
- Use phonics based readers to show your child how to phonetically sound out words. The sentences in the books should be very short. For example, Pam ran; Sam ran; Pam sat; Dan sat; etc.
- Introduce your child to common sight words. Use the Meet the Sight Words DVD program levels 1 through 3. View the DVDs with your child consistently until your child can recognize all the sight words.
- Use Meet the Sight Words books or other books based on the series for reading practice with your child. If your child has learned all the sight words in the Meet the Sight Words DVD set, he or she will be able to successfully read the books that accompany the series. If your child has difficulty reading the books review the sight word DVDs.
If you follow the above tips consistently, you will succeed at teaching your child to read. He or she will have a solid reading foundation to build upon. There are many programs that help teach children to read. LeapFrog is one of the best for learning the letters and sounds. Meet the Sight Words is excellent for learning the most common sight words. You’ll be amazed by the results.
Note: These tips are recommended for teaching reading to children ages 3 to 5, but may be helpful to children of other ages, those who speak English as a second language (ESL – ELL), or for children diagnosed with learning disabilities. |
The gaseous area surrounding the planet is divided into several concentric strata or layers. About 99% of the total atmospheric mass is concentrated in the first 20 miles (32 km) above Earth's surface.
Atmospheric layers are characterized by variations in temperature resulting primarily from the absorption of solar radiation: visible light at the surface, near ultraviolet radiation in the middle atmosphere, and far ultraviolet radiation in the upper atmosphere.
The troposphere is the atmospheric layer closest to the planet and contains the largest percentage (around 80%) of the mass of the total atmosphere.

Temperature and water vapor content in the troposphere decrease rapidly with altitude. Water vapor plays a major role in regulating air temperature because it absorbs solar energy and thermal radiation from the planet's surface. The troposphere contains 99% of the water vapor in the atmosphere. Water vapor concentrations vary with latitude. They are greatest above the tropics, where they may be as high as 3%, and decrease toward the polar regions.

All weather phenomena occur within the troposphere, although turbulence may extend into the lower portion of the stratosphere. Troposphere means "region of mixing" and is so named because of vigorous convective air currents within the layer.

The upper boundary of the layer, known as the tropopause, ranges in height from 5 miles (8 km) near the poles up to 11 miles (18 km) above the equator. Its height also varies with the seasons; it is highest in the summer and lowest in the winter.
The stratosphere is the second major stratum of air in the atmosphere. It extends above the tropopause to an altitude of about 30 miles (50 km) above the planet's surface. The air temperature in the stratosphere remains relatively constant up to an altitude of 15 miles (25 km), then increases gradually up to the stratopause. Because the air temperature in the stratosphere increases with altitude, it does not cause convection and has a stabilizing effect on atmospheric conditions in the region. Ozone plays the major role in regulating the thermal regime of the stratosphere, as water vapor content within the layer is very low. Temperature increases with ozone concentration. Solar energy is converted to kinetic energy when ozone molecules absorb ultraviolet radiation, resulting in heating of the stratosphere.

The ozone layer is centered at an altitude between 10-15 miles (15-25 km). Approximately 90% of the ozone in the atmosphere resides in the stratosphere. Ozone concentration in this region is about 10 parts per million by volume (ppmv), as compared to approximately 0.04 ppmv in the troposphere. Ozone absorbs the bulk of solar ultraviolet radiation in wavelengths from 290 nm - 320 nm (UV-B radiation). These wavelengths are harmful to life because they can be absorbed by the nucleic acid in cells. Increased penetration of ultraviolet radiation to the planet's surface would damage plant life and have harmful environmental consequences. Appreciably large amounts of solar ultraviolet radiation would result in a host of biological effects, such as a dramatic increase in cancers.
A popular pastime of late seems to be sending things up on balloons into the stratosphere and posting the video on YouTube:
- A beer can goes to 90,000 feet altitude (17 miles, 27 km) and lands in the "drink".
- A toy robot, a Lego Space Shuttle, a Thomas the Train toy, and even a human have gone aloft; the human goes to 128,000 feet (24+ miles, 39 km) and jumps, breaking the sound barrier as he falls.
- A balloon is fine for tourists, but try a rocket to get there in a hurry!
The mesosphere, a layer extending from approximately 30 to 50 miles (50 to 85 km) above the surface, is characterized by decreasing temperatures. The coldest temperatures in Earth's atmosphere occur at the top of this layer, the mesopause, especially in the summer near the pole. The mesosphere has sometimes jocularly been referred to as the "ignorosphere" because it has probably been the least studied of the atmospheric layers.

The stratosphere and mesosphere together are sometimes referred to as the middle atmosphere.
The thermosphere is located above the mesosphere. The temperature in the thermosphere generally increases with altitude, reaching 600 to 3000 F (600-2000 K) depending on solar activity. This increase in temperature is due to the absorption of intense solar radiation by the limited amount of remaining molecular oxygen. At this extreme altitude gas molecules are widely separated. Above 60 miles (100 km) from Earth's surface the chemical composition of air becomes strongly dependent on altitude and the atmosphere becomes enriched with lighter gases (atomic oxygen, helium and hydrogen). Also at 60 miles (100 km) altitude, Earth's atmosphere becomes too thin to support aircraft, and vehicles need to travel at orbital velocities to stay aloft. This demarcation between aeronautics and astronautics is known as the Karman Line. Above about 100 miles (160 km) altitude the major atmospheric component becomes atomic oxygen. At very high altitudes, the residual gases begin to stratify according to molecular mass, because of gravitational separation.
The exosphere is the most distant atmospheric region from Earth's surface. In the exosphere, an upward travelling molecule can escape to space (if it is moving fast enough) or be pulled back to Earth by gravity (if it isn't) with little probability of colliding with another molecule. The altitude of its lower boundary, known as the thermopause or exobase, ranges from about 150 to 300 miles (250-500 km) depending on solar activity. The upper boundary can be defined theoretically by the altitude (about 120,000 miles, half the distance to the Moon) at which the influence of solar radiation pressure on atomic hydrogen velocities exceeds that of the Earth's gravitational pull. The exosphere, observable from space as the geocorona, is seen to extend to at least 60,000 miles from the surface of the Earth. The exosphere is a transitional zone between Earth's atmosphere and interplanetary space.
The upper atmosphere is also divided into regions based on the behavior and number of free electrons and other charged particles.

The ionosphere is defined by atmospheric effects on radiowave propagation as a result of the presence and variation in concentration of free electrons in the atmosphere.
- The D-region is about 35 to 55 miles (60 - 90 km) in altitude but disappears at night.
- The E-region is about 55 to 90 miles (90 - 140 km) in altitude.
- The F-region is above 90 miles (140 km) in altitude. During the day it has two regions, known as the F1-region, from about 90 to 115 miles (140 to 180 km) altitude, and the F2-region, in which the concentration of electrons peaks in the altitude range of 150 to 300 miles (around 250 to 500 km). The altitude of this peak is referred to as the Height of Maximum (hmF2). The ionosphere above the peak electron concentration is usually referred to as the Topside Ionosphere.
The plasmasphere is not really spherical but a doughnut-shaped region (a torus) with the hole aligned with Earth's magnetic axis. [In this case the use of the suffix -sphere is more in the figurative sense of a "sphere of influence".] The Earth's plasmasphere is made of just that, a plasma, the fourth state of matter. This plasma is composed mostly of hydrogen ions (protons) and electrons. It has a very sharp edge called the plasmapause. The outer edge of this doughnut over the equator is usually some 4 to 6 Earth radii from the center of the Earth, or 12,000-20,000 miles (19,000-32,000 km) above the surface. The plasmasphere is essentially an extension of the ionosphere. Inside of the plasmapause, geomagnetic field lines rotate with the Earth. The inner edge of the plasmasphere is taken as the altitude at which protons replace oxygen as the dominant species in the ionospheric plasma, which usually occurs at about 600 miles (1000 km) altitude. The plasmasphere can also be considered to be a structure within the magnetosphere.
Outside the plasmapause, magnetic field lines are unable to corotate because they are influenced strongly by electric fields of solar wind origin. The magnetosphere is a cavity (also not spherical) in which the Earth's magnetic field is constrained by the solar wind and interplanetary magnetic field (IMF). The outer boundary of the magnetosphere is called the magnetopause.

The magnetosphere is shaped like an elongated teardrop (like a Christmas Tree ornament) with the tail pointing away from the Sun. The magnetopause is typically located at about 10 Earth radii, or some 35,000 miles (about 56,000 km), above the Earth's surface on the day side and stretches into a long tail, the magnetotail, a few million miles long (about 1000 Earth radii), well past the orbit of the Moon (at around 60 Earth radii), on the night side of the Earth. However, the Moon itself is usually not within the magnetosphere except for a couple of days around the Full Moon.

Beyond the magnetopause are the magnetosheath and bow shock, which are regions in the solar wind disturbed by the presence of Earth and its magnetic field.
The question of Palestine was brought before the United Nations shortly after the end of the Second World War.
The origins of the Palestine problem as an international issue, however, lie in events occurring towards the end of the First World War. These events led to a League of Nations decision to place Palestine under the administration of Great Britain as the Mandatory Power under the Mandates System adopted by the League. In principle, the Mandate was meant to be in the nature of a transitory phase until Palestine attained the status of a fully independent nation, a status provisionally recognized in the League's Covenant, but in fact the Mandate's historical evolution did not result in the emergence of Palestine as an independent nation.
The decision on the Mandate did not take into account the wishes of the people of Palestine, despite the Covenant's requirements that "the wishes of these communities must be a principal consideration in the selection of the Mandatory". This assumed special significance because, almost five years before receiving the mandate from the League of Nations, the British Government had given commitments to the Zionist Organization regarding the establishment of a Jewish national home in Palestine, for which Zionist leaders had pressed a claim of "historical connection" since their ancestors had lived in Palestine two thousand years earlier before dispersing in the "Diaspora".
During the period of the Mandate, the Zionist Organization worked to secure the establishment of a Jewish national home in Palestine. The indigenous people of Palestine, whose forefathers had inhabited the land for virtually the two preceding millennia felt this design to be a violation of their natural and inalienable rights. They also viewed it as an infringement of assurances of independence given by the Allied Powers to Arab leaders in return for their support during the war. The result was mounting resistance to the Mandate by Palestinian Arabs, followed by resort to violence by the Jewish community as the Second World War drew to a close.
After a quarter of a century of the Mandate, Great Britain submitted what had become "the Palestine problem" to the United Nations on the ground that the Mandatory Power was faced with conflicting obligations that had proved irreconcilable. At this point, when the United Nations itself was hardly two years old, violence ravaged Palestine. After investigating various alternatives the United Nations proposed the partitioning of Palestine into two independent States, one Palestinian Arab and the other Jewish, with Jerusalem internationalized. The partition plan did not bring peace to Palestine, and the prevailing violence spread into a Middle East war halted only by United Nations action. One of the two States envisaged in the partition plan proclaimed its independence as Israel and, in a series of successive wars, its territorial control expanded to occupy all of Palestine. The Palestinian Arab State envisaged in the partition plan never appeared on the world's map and, over the following 30 years, the Palestinian people have struggled for their lost rights.
The Palestine problem quickly widened into the Middle East dispute between the Arab States and Israel. From 1948 there have been wars and destruction, forcing millions of Palestinians into exile, and engaging the United Nations in a continuing search for a solution to a problem which came to possess the potential of a major source of danger for world peace.
In the course of this search, a large majority of States Members of the United Nations have recognized that the Palestine issue continues to lie at the heart of the Middle East problem, the most serious threat to peace with which the United Nations must contend. Recognition is spreading in world opinion that the Palestinian people must be assured its inherent inalienable right of national self-determination for peace to be restored.
In 1947 the United Nations accepted the responsibility of finding a just solution for the Palestine issue, and still grapples with this task today. Decades of strife and politico-legal arguments have clouded the basic issues and have obscured the origins and evolution of the Palestine problem, which this study attempts to clarify.
The official United Nations version of this booklet is available online. |
Comment: 08:59 - 10:00 (01:01)
Source: Annenberg/CPB Resources - Earth Revealed - 11. Evolution Through Time
Keywords: "Dee Trent", "hard part", Cambrian, Precambrian, mutation, oxygen, evolution, jellyfish, earthworm, atmosphere
Our transcription: The significance of the Cambrian Explosion is that this marks the first appearance, the widespread appearance, of hard shelled organisms.
Undoubtedly, some of the earlier Precambrian organisms mutated and evolved hard parts, but for some reason those hard parts were a disadvantage, probably having to do with the availability of oxygen.
Also, those earlier creatures just absorbed oxygen through the tissues of their skin.
If you can imagine a jellyfish or an earthworm, it has a large surface area, through which it can absorb oxygen.
A hard part would inhibit that.
In the early part of the Earth's history, we had much less oxygen in the atmosphere than we do now.
By the Cambrian, it may then have been necessary for animals to have hard parts in order to survive because of other animals eating them.
So if you had a hard shell, you were less attractive and harder to eat, so you might survive a little better than something like an earthworm or a jellyfish.
Geology School Keywords |
Who hasn’t walked through a groomed park, a yard, or a city street, or gazed across acres of crops or rows of buildings and wondered, “What did this look like before?” What would nature produce if humanity had not intervened? Climate scientists take the question one step further: when you change what’s growing over a large area like the eastern half of the United States, what does that do to the weather?
To give scientists some fresh tools to answer that question, Louis Steyaert, a climate and atmospheric scientist visiting NASA Goddard Space Flight Center from the U.S. Geological Survey, and Robert Knox, an ecologist at NASA Goddard Space Flight Center, recently completed a series of maps that describe land cover in the East in 1650, 1850, 1920, and 1992. These images show land use in 1850 and 1920. The top images show how much of the land still supported old growth vegetation (forest, grassland, wetland, etc.); the center images show the intensity of human disturbance; and the bottom images show where agriculture was concentrated.
The transformation from dark green to white between 1850 and 1920 in the top pair of images documents the dramatic development of the United States after the Civil War. In the span of a single lifetime, seventy years, the eastern United States went from being largely covered in old growth vegetation to having almost no old growth. The center images tell the same story in reverse. Areas where the land was not being used in any way are white, while intensive land use, such as cities, agriculture, and logging, is shown in red. In 1850, land east of the Mississippi River had been disturbed, with the greatest land use in the northeast, while land west of the Mississippi remained largely untouched. By 1920, nearly all of the land cover had been disturbed to some degree. The lower images show one of the most significant land uses, agriculture. Agriculture, like the U.S. population, was concentrated on the eastern seaboard in 1850. By 1920, farms had moved into the Midwest, and some Eastern farms were abandoned, particularly in New England and the Mid-Atlantic.
The scientists mapped changes over time by starting with maps of potential vegetation—the type of vegetation that would naturally grow in a location based on characteristics like climate and soil type. Using potential vegetation maps, census records, historical surveys, and satellite data, they reconstructed how the landscape has changed since settlement. The land use maps were an intermediate step toward describing changes in climate-relevant characteristics of the landscape—such as canopy height, soil moisture, and the amount of sunlight reflected by the surface. To read more about Steyaert and Knox’s work, see Ancient Forest to Modern City on the Earth Observatory. |
Discrete emotion theory assumes that there are seven to ten core emotions and thousands of emotion-related words which are all synonyms of these core emotions (Beck 2004). Depending on the theory, the most well-known core emotions are happiness, surprise, sadness, anger, disgust, contempt, and fear (Izard & Malatesta 1987). This theory states that these specific core emotions are biologically determined emotional responses whose expression and recognition are fundamentally the same for all individuals regardless of ethnic or cultural differences. The theory also states that certain repetitive emotional experiences during childhood can develop traits and biases that will govern interpersonal relationships during adulthood. Some scholars believe that these emotions have evolved in us as a way for people, regardless of communication differences, to predict what other people are thinking and feeling (Beck 2004). It was a way for our ancestors to tell the difference between friend and foe, and it continues to serve the same function today.
Darwin (1872) described several facial, physiological, and behavioral processes that are associated with different emotions in humans as well as animals. Although Darwin was important in the creation of the discrete emotion theory, William McDougall was the first to believe that emotions were caused by many biological instincts or urges.
William James believed in discrete emotion theory but often argued against it. He held that emotions are built from mental events broken down into smaller elements, none of which is itself a specific emotion; he thought of emotions as products of these elements rather than as separate, individual things.
James (1884) and Dewey (1894) suggested that emotions are associated with different neural and physiological processes and also with different functions and experiences.
Tomkins' (1962, 1963) idea was influenced by Darwin's concept. He proposed that there is a limited number of pancultural basic emotions or "affect programs." His conclusion was that there are eight pancultural affect programs namely, surprise, interest, joy, rage, fear, disgust, shame, and anguish.
John Watson believed emotions could be described in terms of physical states.
Edwin Newman and colleagues believed emotions were a combination of one’s experiences, physiology, and behaviour.
Floyd Allport came up with the facial feedback hypothesis.
After performing a series of cross-cultural studies, Ekman and Izard reported that there are various similarities in the way people across the world produce and recognize the facial expressions of at least six emotions.
Evidence for discrete emotion theory
A study was conducted in New Guinea, where people had never seen Caucasians nor been exposed to photographs or television, to see whether they could identify specific facial expressions. Researchers showed the people of New Guinea pictures of people portraying the seven emotions known as core emotions: happiness, anger, sadness, disgust, surprise, fear, and contempt (Ekman & Friesen 1971). Researchers found that the people of New Guinea could in fact point out the different emotions and distinguish between them. Various parts of the brain can trigger different emotions. For example, the amygdala is the locus of fear: it senses fear and orchestrates physical actions and emotions. From this experiment researchers concluded that these specific emotions are innate. They also looked at pictures of people ranging in age from infants to elders and saw that the core emotions look the same, further supporting the discrete emotion hypothesis. Also, deaf and blind children show typical facial expressions for these same core emotions.
- Affect theory
- Affect (psychology)
- Developmental Psychology
- Emotions and culture
- Emotion classification
- Silvan Tomkins
- Flack, W. F., & Laird, J. D. (Eds.). Emotions in Psychopathology: Theory and Research.
- Colombetti, G. (2009). From affect programs to dynamical discrete emotions. Philosophical Psychology, 22(4), 407–425.
- Barrett, L. F., Gendron, M., & Huang, Y. M. (2009). Do discrete emotions exist? Philosophical Psychology, 22(4), 427–437.
|This page uses Creative Commons Licensed content from Wikipedia (view authors).| |
At Cupernham Junior School, we believe that the teaching of History is integral for supporting children to develop vital tools for lifelong learning. Through inspiring, first-hand experiences with educational trips and visitors, as well as stimulating topics - such as ancient civilisations and local studies - children are encouraged to analyse, evaluate and think critically in a progressive way, as they make their way up the school.
By igniting the children’s curiosity with engaging topics, they gain a chronological story of Britain’s past and that of the wider world. They are encouraged to constantly enquire and make judgements and ask questions about change, cause and consequence among other vital historical concepts and skills suitable to their own individual needs and knowledge. Significantly, British values are at the heart of the history curriculum at Cupernham. We promote tolerance, diversity and democracy through the teaching of History. This, importantly, also enables children to explore their own identities in relation to the eras taught.
History should be planned using the six-step approach to Historical Enquiry and Hampshire’s Medium Term Planning format. The latter ensures that skills and progression are brought to the forefront of planning, as does using the progression skills document while planning for History. It will also provide teachers with opportunities to consider assessment and the evidence of it. The six-step approach provides ideas for the types of activities teachers can plan at each step of the enquiry. Prior knowledge must be assessed (through use of KWL grids) and seen in books, and planning should be differentiated accordingly. Chris Quigley’s documents will support differentiation through ‘BAD’ learning. Evidence of skills and objectives attained must be seen in books, even if this is just an annotated picture of active or whole-class learning with the LO displayed. Other ways to evidence learning are through cross-curricular opportunities (with English especially) or explicit tasks in a History lesson. |
Língua do Pê, or P Language, is a language game spoken in Brazil and Portugal with Portuguese. It is also known in other languages, such as Dutch and Afrikaans. (Wikipedia)
There are some dialects in this language game. The different languages the game is played with even have their own unique dialects. Some people are fluent in speaking P Language and the best can even translate any text to their preferred dialect on the spot!
In this challenge, we will use the Double Talk dialect.
To translate text into P Language, any sequence of vowels in the text is appended with a single p character followed by a copy of the sequence of vowels.
Write a function or program that accepts a string as input and outputs its translation in P Language.
- The input consists only of printable ASCII characters.
- The output consists only of the translated input and optionally a trailing newline.
- Vowels are any of the following characters: aeiouyAEIOUY (note that y and Y count as vowels, as the examples show).
- A sequence of vowels is delimited by any other character. The string "Aa aa-aa" has three vowel sequences.
- Leading and trailing whitespace may optionally be omitted from the translated output string.
"" => "" "Lingua do Pe" => "Lipinguapua dopo Pepe" "Hello world!" => "Hepellopo woporld!" "Aa aa-aa" => "AapAa aapaa-aapaa" "This should be easy, right?" => "Thipis shoupould bepe eapeasypy, ripight?" "WHAT ABOUT CAPS?" => "WHApAT ApABOUpOUT CApAPS?" " Hi " => " Hipi " or "Hipi"
The double quote character " is used to delimit the input and output strings in the examples, but obviously this character may also appear in any valid input string. |
The discussion on climate change has turned the spotlight to the subject of sustainability. Hydrogen delivers a powerful answer to many of the questions raised by this debate. For this reason, it looks set to become a key molecule for the transition to a sustainable energy economy and the transition from fossil-based to renewable resources.
There is an evident need for hydrogen technology. In the power sector, for example, we require flexible ways of storing surplus electricity, so that this can then be fed back into the grid at times when solar or wind energy are unavailable. If climate targets are to be met, we must continue to expand the renewable generation of electricity. Yet this expansion only makes sense in conjunction with the development of hydrogen-based technologies. Ten-megawatt electrolyzers can rapidly correct an imbalance between supply and demand in the grid. In the future, they will perform an important function in ensuring grid stability. However, a shift to renewable energy alone will not be enough to achieve a 95 percent reduction in CO2 emissions. In addition, industrial processes will have to be defossilized, combined with a shift towards renewable resources in the raw-materials base. For this reason, hydrogen solutions will rapidly become the sensible option both ecologically and economically in other areas as well. From 2021, for example, the steel industry will be using hydrogen to reduce its CO2 footprint. By 2050, it should be possible to produce steel on a CO2-neutral basis. Moreover, if CO2 is removed from highly concentrated waste gases and converted into basic chemicals such as methanol by means of hydrogen, it will not only improve the climate impact of industrial processes but also mark the beginning of a new form of production that is no longer dependent on fossil-based resources. In the long term, it should also be possible to remove CO2 from the atmosphere, combine it with hydrogen and thereby create a source of raw materials that fills the gap in the global carbon cycle. Hydrogen will also help achieve climate neutrality in the transport sector, especially in areas where directly electrified propulsion is not an option.
How do experts assess the future development of hydrogen technology? This is the purpose of the Hydrogen Council, which was created in 2017 and involves 53 companies from around the world, including Linde, Daimler, Audi, Bosch and BMW. This body forecasts that by 2050 as much as 18 percent of the world’s energy needs could be covered by hydrogen, which would mean an annual reduction in CO2 emissions of six billion metric tons. A 2019 study conducted by the Fraunhofer Institute for Solar Energy Systems ISE concluded that Germany could need as much as 800 terawatt-hours of hydrogen by 2050, if this technology is fully exploited by then and, for example, shipping and aviation are based on hydrogen and hydrogen-based synthetic fuels. It would appear feasible for Germany to create an electrolysis capacity of 80 gigawatts, but even that would only cover part of this demand.
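A rough, back-of-envelope calculation illustrates why even 80 gigawatts of electrolysis would cover only part of an 800-terawatt-hour demand; the full-load hours and conversion efficiency below are illustrative assumptions, not figures from the Fraunhofer ISE study.

```python
# Illustrative arithmetic only; full-load hours and efficiency are assumptions.
capacity_gw = 80          # assumed installed electrolysis capacity
full_load_hours = 4000    # assumed hours per year running on renewable power
efficiency = 0.7          # assumed electricity-to-hydrogen conversion efficiency

electricity_twh = capacity_gw * full_load_hours / 1000   # 320 TWh of electricity
hydrogen_twh = electricity_twh * efficiency               # ~224 TWh of hydrogen

print(f"~{hydrogen_twh:.0f} TWh of hydrogen vs. a possible demand of 800 TWh")
```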
It is therefore clear from the very outset that the hydrogen economy will have an international dimension. Many regions around the world are now preparing for future trading in sustainably produced energy carriers and basic chemicals. This will enable Germany to forge new trading relationships beyond its former partners for the import of fossil fuels. Saudi Arabia, for example, is now beginning to plan and build large photovoltaic parks to produce hydrogen for export. And countries such as Norway, Australia, Chile, the United Arab Emirates and Morocco are also turning to hydrogen. Japan, meanwhile, launched a national hydrogen strategy back in 2017, with an annual budget of 300 million euros, and is now playing a leading role in the establishment of a hydrogen economy. For German companies, this means that attractive markets for hydrogen technology are now beginning to emerge worldwide.
It is vital that we start preparing for this at once. Although demand for hydrogen will only increase gradually over the coming years, it is now time to begin enhancing this technology, establishing standards and building the requisite infrastructure. By the end of the 2020s, Germany needs to be increasing its capacity for water electrolysis by around one gigawatt a year. This is the only way to halt climate change and, at the same time, maintain Germany’s economic performance and secure new opportunities for the export of technology. Fraunhofer Institutes are on hand to provide expert support here, both to industry and government. We not only develop the technology required to meet such challenges but also produce studies on market development and on sustainability. With numerous countries now poised to ramp up the hydrogen economy, it is time for Germany to start bringing this technology to market. |
“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness.” These stirring words from the Declaration of Independence are at the very foundation of the American tradition of civil liberties.
In this course, we explore this tradition from its beginning with the Declaration of Independence, the Constitution, and the Federalist Papers, through a number of notable historical and contemporary cases in which claims to rights and liberties have been at stake. We will examine issues of slavery, segregation, abortion, campaign finance, free speech, religion, affirmative action, and marriage. Our discussion will be guided by thinkers like John Locke, John Stuart Mill, Friedrich Hayek, and Martin Luther King Jr., as well as important Supreme Court opinions, such as the majority and dissenting opinions in Dred Scott v. Sandford (on slavery), Brown v. Board of Education (on segregation), Roe v. Wade (on abortion), Citizens United v. FEC (on campaign finance and free speech), and Obergefell v. Hodges (on marriage).
We do not seek unanimity of opinion, but rather a deepening of understanding. Whatever your views happen to be—liberal, conservative, whatever—they will be sympathetically explored but also challenged. The goal of the course is not to persuade you to think as anyone else does; rather, it is to encourage and empower you to think about disputed questions of civil rights and liberties more deeply, more critically, and for yourself. |
Global Albedo Map of Mars
The albedo of any planetary surface is defined as the fraction of incident solar radiation reflected by the surface. The magnitude and spatial distribution of Martian surface albedo are important inputs for characterisation of the Martian surface and atmospheric circulation. The global Short Wave Infra Red (SWIR) albedo map in the wavelength band 1.64-1.66 µm has been derived for the surface of Mars using data from the Methane Sensor for Mars (MSM) onboard the Mars Orbiter Mission (MOM). Five months (October 2014 - February 2015) of radiance data from the reference channel of MSM were converted to top-of-atmosphere reflectance, normalised to the sun-sensor viewing geometry and the incoming solar flux. The global view of the MSM-derived Martian SWIR albedo has been averaged at ~50 km spatial resolution.
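The radiance-to-reflectance step described above follows the standard top-of-atmosphere formula, reflectance = pi * L * d^2 / (E_sun * cos(theta_s)). The sketch below is illustrative only; the irradiance, Sun-Mars distance and geometry values are placeholders, not MSM calibration constants.

```python
import numpy as np

def toa_reflectance(radiance, solar_irradiance, sun_distance_au, solar_zenith_deg):
    """Top-of-atmosphere reflectance: rho = pi * L * d^2 / (E_sun * cos(theta_s)).

    radiance: W m^-2 sr^-1 um^-1 measured by the sensor
    solar_irradiance: band-averaged solar flux at 1 AU, W m^-2 um^-1
    """
    return (np.pi * radiance * sun_distance_au ** 2) / (
        solar_irradiance * np.cos(np.radians(solar_zenith_deg))
    )

# Hypothetical pixel near 1.65 um (all inputs are placeholder values)
rho = toa_reflectance(radiance=5.0, solar_irradiance=80.0,
                      sun_distance_au=1.52, solar_zenith_deg=30.0)
print(round(float(rho), 3))
```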
The bright regions (albedo > 0.4) are mainly localised over the Tharsis plateau, Arabia Terra and Elysium Planitia regions of Mars. The low albedo regions (< 0.15) are mainly localised in Syrtis Major, the southern highlands and parts of the northern hemisphere. In general, low albedo values are associated with darker surfaces on Mars where basaltic volcanic rock is exposed, while higher albedo values represent surfaces covered by dust. The areas shown in blue indicate basaltic composition, while red indicates the dust-covered regions of Mars.
Global albedo of Mars using MSM data |
Mangroves, trees that form forests in the transition between land and sea, provide a habitat for a great diversity of plants and animals worldwide. These coastal ecosystems are invaluable to humans, supplying a number of services essential for our survival. We still do not know how much these ecosystems are worth from an economic perspective – but they are essential from an ecological perspective. Scripps Oceanography’s Octavio Aburto examines mangrove ecosystems and explains why it is vital to put enormous efforts into understanding their value.
Rosina Bierbaum, formerly of President Obama’s Council of Advisors on Science and Technology (PCAST) and an Adaptation Fellow at the World Bank, shows how climate change will affect all regions and sectors of the economy, and disproportionately affect the poorest people on the planet. Therefore, improving the resilience, adaptation, and preparedness of communities must be a high priority, equal to that of achieving deep greenhouse gas reductions and rapid development and deployment of innovative technologies, as well as altered planning and management strategies in the coming decades to achieve a sustainable world.
To see more programs from this series, click here.
This year, California’s winter weather has been wet and wild. Join Scripps scientist Marty Ralph, Director of the Center for Western Weather and Water Extremes (CW3E) as he describes the phenomena of atmospheric rivers, their impact on our weather, and the essential role modeling and prediction play in managing California’s precious water resources.
To see more programs in the Jeffrey B. Graham Perspectives on Ocean Science Lecture Series, click here.
On the surface, it might seem like an ocean without sharks would be a more enjoyable place. But, these predators play a very important role in the ocean ecosystem and they need our protection just like many other ocean dwelling creatures.
Sharks have been at the top of the food chain for hundreds of millions of years, but today their populations are in danger because of human activities, such as overfishing and finning (this is when people catch sharks, remove the fins, and dump the carcass overboard).
Andrew P. Nosal, Ph.D., Birch Aquarium’s new DeLaCour Postdoctoral Fellow for Ecology and Conservation, shares his shark expertise with the Perspectives on Ocean Science series in order to explain that all sharks are not the evil villains seen in movies, but are essential in maintaining a balanced ocean.
Watch “Shark Conservation: Safeguarding the Future of Our Ocean” to hear about all of the benefits sharks provide and why they deserve our protection.
We’ve been to the moon and we’ve explored remote corners of our universe. What is next in our quest to unlock the secrets of our solar system?
Hear from Charles Kennel, chair of the National Academy’s Space Science Board and former Scripps Institution of Oceanography director, as he reviews NASA’s past accomplishments, present projects, and anticipated goals in “The Future of Human Space Exploration.”
To see more programs on Astrophysics and Space Science, visit our archive. |
This purchase contains 4 questions for each standard in the Operations and Algebraic Thinking domain for 2nd grade. Use these printables for short, but effective homework assessments.
EACH SHEET HAS FOUR QUESTIONS THAT ARE MEANINGFUL AND RIGOROUS. NO MORE HAVING YOUR STUDENTS COMPLETE REPETITIVE PROCEDURAL QUESTIONS THAT ONLY SKIM THE SURFACE OF THINKING!
What’s included in this product?
•Conceptual based math questions
•Quality prompts and word problems that promote rigorous thinking
•4 questions per standard
•Each standard is formatted to one page
Standards and Topics Covered
➥ 2.OA.1 – Represent and solve addition and subtraction word problems, within 100
➥ 2.OA.2 – Demonstrate fluency with addition and subtraction, within 20, using mental strategies
➥ 2.OA.3 – Determine whether a group of objects, within 20, has an odd or even number of members
➥ 2.OA.4 – Use addition to find the total number of objects arranged in rectangular arrays with up to 5 rows and up to 5 columns; write an equation to express the total as a sum of equal addends.
WHAT ARE POWER PROBLEMS?™
PURPOSEFUL – These problems are meant to keep students focused, while strengthening initiative and perseverance.
OPPORTUNITIES – These prompts can be used in a variety of ways. POWER Problems™ can be used to introduce a lesson, spiral review, or as formative assessments.
ENGAGEMENT – Problems are real world applicable and designed to hook students with interest and presentation. Complexity of problems promotes problem solving skills.
RIGOR – Tasks are specifically designed to challenge students and assess conceptual understanding of curriculum versus procedural understanding. Students will need to apply more than just a “formula.”
WHY USE POWER PROBLEMS?™
BUILD STAMINA WITHIN YOUR STUDENTS!
POWER Problems™ are designed to challenge your students with their open-ended presentation. The majority of problems that come from textbooks and workbooks assess procedural understanding of curriculum. Some textbooks even provide step-by-step instructions, where the textbook is doing the thinking for the students and taking away that “productive struggle” for children. When we rob students of that struggle, we rob them of their ability to reason, problem solve, and see beyond a standard algorithm. POWER Problems™ are meant to show students that there are different ways to answer one question in math. With these tasks students take ownership and are part of the problem-solving process versus filling in blanks in a textbook.
HOW TO USE POWER PROBLEMS™:
YOUR KIDS. YOUR CHOICE. FLEXIBILITY.
TO INTRODUCE A LESSON – POWER Problems™ can be used to introduce a new skill. In this case your students will experience a “productive struggle.” Their problem-solving skills and prior knowledge will kick in. Oftentimes most of my students will have the incorrect answer or no answer at all. I then have someone explain their method/reasoning and allow my students to critique their peer’s answer. This makes for great accountable talk discussions. If I see that most students do not have an answer, I will assist the class in getting to a specific point and then allow them to finish independently.
SPIRAL REVIEW – Avoid your students forgetting standards, by using POWER Problems™ to spiral review previously taught lessons.
FORMATIVE ASSESSMENTS – You can use these problems to assess mastery and levels of understanding. |
For many students, math class can feel overwhelming, unwelcoming, and stressful. While there are many ways math teachers can work to shift this mindset in our students, one easy way is to infuse joy into math lessons through games. The following three math games can be done in as little as five minutes once they have been introduced to students and require little to no prep. Additionally, these games can easily be scaled up or down in difficulty to work for any classroom.
1. Buzz (No Prep)
Buzz is a quick and easy way to help students recognize multiples. To play, first have all students stand up. This game works well when students are arranged in rows or a circle but can be done with any arrangement as long as students know the order in which they will participate.
Once all students are standing, select a student to start counting. Before that student says 1, tell the students which multiple they must “buzz” on. For example, you may say that students will buzz on multiples of 3. That means that as the students count, any student whose number is a multiple of 3 will say “Buzz” instead of the number. Any student who says the wrong number or forgets to say “Buzz” is out and sits down.
The game can continue until you have a few students left as the winners. If you have a few students who are particularly nervous about being put on the spot, encourage them to keep track of the numbers called on a piece of paper to better prepare themselves for their turn. Remind those students that the game moves quickly and very little attention will be given to any single mistake.
The game will sound like this if students are going to buzz on multiples of 3:
Student A begins counting at “1.” The next student in the given order (make sure to tell students the order in which they will go) continues with “2.” The third student says, “Buzz.” The next student then picks up and says “4.”
To scale up the difficulty, you can have students buzz on a more difficult multiple, such as 7 or 12. You could even require students to buzz on common multiples of a given two numbers such as 3 and 4.
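If it helps to have an answer key, here is a small sketch that prints the expected Buzz calls; the multiples used are just examples.

```python
# Generate the expected Buzz calls so the teacher can check answers quickly.
def buzz_sequence(n, multiples=(3,)):
    calls = []
    for i in range(1, n + 1):
        calls.append("Buzz" if any(i % m == 0 for m in multiples) else str(i))
    return calls

print(buzz_sequence(12))          # Buzz on 3, 6, 9, 12
print(buzz_sequence(24, (3, 4)))  # harder variant: Buzz only on 12 and 24
```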
2. What Number Am I? (No Prep)
This game is a great way to practice not only fact fluency but math vocabulary, too. To play, select one student to be the first player. That student will come to the front of the class with their back to the board. On the board behind them, you will write a number so that the student cannot see what it is.
All other students will then give the player clues to help him or her guess the number. Students must raise their hands and, when called on by the player, can give one math fact as a clue. When the player accurately guesses the number, they select the next player to come to the board.
The game will sound like this:
Student A comes to the board and faces the class. The number 18 is written on the board. Student A calls on student B for a clue, and student B says, “You are the product of 3 and 6.” If student A knows this product, they can say, “I’m 18!” but if they are not sure, they can call on another student for a new clue.
To scale down the difficulty, you might tell students to only use addition and subtraction facts as clues and to emphasize words like sum and difference. You may want to focus on smaller numbers to write on the board.
To scale up the difficulty, you may give students larger numbers to work with, encourage the use of multiplication and division facts, or have students use square roots and exponents in their clues.
3. Fact Fluency Challenge (Minimal Prep)
This game allows students to engage in a competition as they work on given fluency practice. To play, split the class into two teams and select a representative from each team to start. I like to bring two chairs to the front of the room so the participants are right in front of the board when they play. On the board, post a math fact; the first student to answer wins a point for their team. The participants rotate so that each team member gets an opportunity to compete.
I use an online math fact generator so that I can quickly present math facts for a given operation and number range. If you want math facts that address a specific topic not easily found in an online flashcard version, you can make your own slide show to use with your students.
To scale down the difficulty, focus on single-digit numbers dealing with addition and subtraction, and to scale up the difficulty, you could focus on larger numbers dealing with multiplication or division, use decimals or fractions, or require students to simplify a multi-operation expression. |
Geography, Culture 3-6, 6-12
As we are re-visiting countries and flags of Europe from primary level, we are starting the study of the biomes of Europe. Today, I’m sharing a fully detailed presentation of Waseca Biomes incredible geography and culture Montessori-aligned materials.
A direct link to Waseca Biomes gives you $15 off on your purchase when you provide an email address.
I am presenting all the materials seen on the shelf. We will first explore the Europe map, which can be used with children aged 5 and up. Then, I want to share a comparison of the Biomes Readers and the Biomes Cards Primary, and of the Biomes Cards Primary and the Biomes Cards Elementary.
Europe Biomes Map
My 7 and 9 year olds are always happy to work with Waseca materials. The reading components can be a challenge for a new reader, but easy for an experienced reader.
Regardless of age, children approach the Biomes Maps knowing they will be successful, effortlessly. It’s just that the material offers the perfect tactile and visual method to memorize geographical features. We sort the cards first, by river, by mountains, by desert (Africa!), or by seas/lakes. Then we place the features while pronouncing the names (my job, or Google’s job, if uncertain). When we feel confident, we test our knowledge using the arrows (see pictures below).
Later, when the children have received a presentation or two, they are excited to use the Command cards (see below picture). The cards come in 3 color-coded groups. Each group of cards offers a different level of challenge. Therefore, level 1 is easy and accessible for all, while level 3 could be less obvious, and more geared towards older learners.
What is amazing with Waseca Biomes is that they use the same symbols and colors throughout all their readings, writing, cultural, and geographical materials. For example, Europe has a red theme because children in primary Montessori classrooms learn each continent using different colors. Waseca Biomes applied this method for their biomes as well. They use specific colors for each type of biome. Therefore, when you look at a map, or a Waseca globe, you can have a global understanding of how the biomes are laid out without even “reading.” How cool is that?!
Biomes Reader of Europe, Primary
Along with the Biomes Map, children of primary age can enjoy learning mostly about different animals of the biomes using the Biomes Readers set. It is adequate for emerging readers.
There are 9 portfolios; red being the easiest to read, gold being the most challenging one. Each portfolio contains 6 pictures, 6 descriptions (stories), and 1 booklet.
Ideally, the children read the small booklet first, which has no pictures to avoid distraction. Then they are to lay the 6 pictures from left to right, which are labeled 1-6 on the back. Finally, they read the description-story cards and match them. They can control their work by flipping the cards and compare the numbers on the back. It’s worth mentioning this work is currently on sale. 💫
Europe Biomes Map cards, Primary
The Biomes Map cards are a lovely complementary work that can go along with studying the biomes. Waseca Biomes offers other works, not shown, such as a wooden puzzle of the biomes for each continent, as well as a fun stencil map-making work.
The Biomes Map cards for primary level need to be understood. They are different than the elementary ones, because the reading level is more accessible to the primary level. There’s also an emphasis on vertebrate animals, which appeal to younger learners on the first plane of development.
Each biome set comes in 3 parts: one large card serving as the control of error, and smaller cards containing a description and a label (see picture below).
There are 5 biomes described for Europe: Temperate Forests, Mountains, Wetlands, Grasslands, and Polar Region. You will see 3 icons on the back of each card, which represent: 1. the continent, 2. the biome, 3. the component represented (reptile, people, insect, plant…).
A child can study each biome and a few components of its unique fauna and flora. They may choose to focus on one particular animal, plant, or person, and write about it in their Companion Journal. Of course, the children love it, because they get to pick an illustration from the Master copy, color it, and glue it in the journal. Then, they can use the 3 part cards to write about a component of their choice.
I encourage my younger learner to do copy work to remove writing frustration. As for my older learner, external research and paraphrasing is encouraged.
Children can also color a map showing the biomes. The map legend is a nice map making tool that the children will experience multiple times. They become expert at it!
Europe Biomes Map cards, Elementary
The Biomes Map cards for elementary are designed in the same fashion as the primary level, except that they contain much more specific information and cultural components. The text is on the front of the card, which means the children have to match the text to the picture, then the label to the text (see picture below).
The control of error is different from the primary level. The cards contain icons on the back that should match. A child can study either every component of one biome or one component from every biome. For example, a child might choose to focus on studying all the fishes from the different biomes.
Each biome offers 13 components on animals, plants, or culture (people). I used a compartmentalized tray to sort our cards, but Waseca Biomes does offer drawer furniture to store the cards. You might find your own creative way to make it appealing for your learners.
In conclusion, Waseca Biomes materials for studying continents provide opportunities for reading, writing, and learning about geography, biology and culture. The organization of the material supports children’s inner need for order as well, and helps them classify knowledge.
I hope you too are excited about this material. Feel free to drop a question in the comments if I missed something! Thank you for reading.
Ready for a lesson? |
WARNING STOP. INCOMING TSUNAMI STOP. Giant waves might one day send scientists such an underwater telegram via telecommunication cables on the ocean floor.
Ocean water interacting with Earth’s magnetic field could create strong enough signals in the underwater cables to alert scientists that a tsunami is on the way, a paper in the February Earth, Planets and Space argues.
“This is a very good supplementary, augmentative system to the existing tsunami warning,” says Manoj Nair, a geophysicist at NOAA’s National Geophysical Data Center in Boulder, Colo., and a coauthor of the study. “It can be information in places where we don’t have any information.”
Though tsunamis move tremendous amounts of water at hundreds of kilometers per hour, their passage barely makes a ripple on the surface of the open ocean. But moving huge volumes of saltwater can make another kind of wave below: electromagnetic waves.
Because it is so salty — and therefore full of freely moving sodium and chloride ions — ocean water is a good conductor of electricity. As water passes through Earth’s magnetic field, the movement of positive and negative ions generates a weak electric field.
That electric field can induce a voltage, the force that drives electric current, in the kilometers-long telecommunication cables that crisscross the ocean floor. So a tsunami moving large amounts of water quickly can generate a pulse in the voltage. If the voltage is large enough to be detected above background noise, it could serve as a warning from an imminent wave.
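An order-of-magnitude sketch of that motional induction is below; the water velocity, field strength and cable footprint are assumed values chosen only to show that the signal can plausibly reach the few-hundred-millivolt range discussed later, not numbers from the study.

```python
# Back-of-envelope estimate of the voltage a tsunami could induce in a cable.
# All three inputs are illustrative assumptions, not values from the paper.
v = 0.02            # m/s, depth-averaged horizontal water velocity in open ocean
b_z = 4.0e-5        # T, vertical component of Earth's magnetic field
footprint = 5.0e5   # m, length of cable lying beneath the wave

electric_field = v * b_z                       # V/m, from E = v x B
voltage_mv = electric_field * footprint * 1000 # millivolts along the cable

print(f"~{voltage_mv:.0f} mV induced along the cable")  # ~400 mV
```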
This effect had been hinted at as early as 1971 and is now routinely used to monitor long-term ocean water flow across the Florida Strait as a measure of climate change. In 1995, Bell Labs researchers suggested that water movement triggered by the 1992 Cape Mendocino earthquake could be detected through submarine cables, as well. But detailed studies of how much voltage a tsunami would produce were lacking.
“Only recently we had the capability of doing such sophisticated numeric and computer simulations,” Nair says.
Nair and his colleagues built a 3-D computer model of the 2004 Indian Ocean tsunami, which was triggered by a magnitude 9.3 earthquake off the coast of Indonesia and killed more than 180,000 people. The researchers calculated that three cables at the bottom of the Indian Ocean could have seen voltages of about 500 millivolts, well above the estimated 100 millivolts of background noise from the ionosphere and other sources. Future research will investigate ways to further subtract out background noise, Nair says.
The system wouldn’t replace current tsunami early warning systems, which use direct measurements from pressure sensors on the ocean floor. Those systems are still superior — they are more reliable and give more information about the origin and direction of a tsunami, Nair says. But using the preexisting cables could cheaply supplement existing observations.
“We argue that we have cables already, and setting up a voltage difference measurement system is pretty easy and inexpensive,” he says. “This can really augment the tsunami measuring system we have already in existence.”
The paper doesn’t address the logistics of monitoring such a system, however.
The idea sounds feasible, says Mark Everett of Texas A&M University in College Station. “I think the results are sufficiently encouraging that a move could be made to develop an experimental system for the Indian Ocean.”
Smaller but still-devastating waves could slip under the noise threshold unnoticed, cautions Alan Chave of Woods Hole Oceanographic Institution in Massachusetts. “Being able to detect the tsunami of the century is not really what’s needed,” he says. “You’d need to be able to detect things that are more common that would place people at risk.”
Chave adds: Regardless of the detection method, the usefulness of any tsunami early warning system is limited by authorities’ ability to quickly warn the public. |
Creating a Barrier Between People and Microbes
PPE is a critical element of standard precautions for infection control.
Personal protective equipment (PPE) is an essential element of infection control in oral healthcare delivery. Potentially infectious microorganisms may be present in the oral fluids of patients, in the body fluids or on the hands of dental healthcare personnel (DHCP), and on environmental surfaces. Disease transmission has the potential to occur from direct contact with infectious materials or indirect contact with contaminated surfaces and equipment.1 A combination of infection control procedures called standard precautions, of which PPE is one element, is needed to ensure a safe oral healthcare environment for the patient and a safe working environment for DHCP.
Prior to 1996, the Centers for Disease Control and Prevention (CDC) recommended that all blood and certain body fluids likely to contain blood be considered potentially infectious for human immunodeficiency virus (HIV), hepatitis B virus (HBV), and other bloodborne diseases.
These universal precautions were updated and expanded in 1996 by the CDC, replacing them with standard precautions.2 According to the CDC, "Standard precautions integrate and expand the elements of universal precautions into a standard of care designed to protect HCP and patients from pathogens that can be spread by blood or any other body fluid, excretion, or secretion. Standard precautions apply to contact with 1) blood; 2) all body fluids, secretions, and excretions (except sweat), regardless of whether they contain blood; 3) nonintact skin; and 4) mucous membranes. Saliva has always been considered a potentially infectious material in dental infection control; thus, no operational difference exists in clinical dental practice between universal precautions and standard precautions."1
Simply put, standard precautions dictate that infection control practices should be the same for all patients regardless of their known infectious disease status. Differences in protocols may be indicated based on procedural differences (eg, simple oral examination vs. oral surgery), but should be consistently applied for a given procedure each time any patient is treated. By using standard precautions, DHCP are ensuring that patients and personnel are provided the suitable level of safety during every dental procedure.
Regulations and Recommendations
The Occupational Safety and Health Administration (OSHA) is the government agency charged with protecting the health and safety of all workers in the United States. One of the many regulations this agency enforces is the Bloodborne Pathogens Rule, dealing with infection control in healthcare settings. The Bloodborne Pathogens Rule requires the use of appropriate PPE, and also requires that the employer provide, maintain, and replace PPE as needed.3 PPE includes gloves, masks, eye protection, and protective garments. OSHA also includes items such as resuscitation bags, face masks, or other ventilation devices in their requirements for PPE in the event direct contact with the patient may be necessary during a medical emergency.
The nature of OSHA regulations is such that it is the employer's responsibility to ensure that employees use PPE where indicated. This means that not only does the employer have to provide gloves, masks, eye protection, and protective clothing, they must ensure that employees with occupational exposure to body fluids wear these items when there is the potential for exposure.
The CDC is a branch of the US Department of Health and Human Services and, as such, makes recommendations intended to promote public health. The CDC has guidelines that are specific to infection control in dentistry, as well as many general infection control topics, such as disinfection and sterilization, prevention of bloodborne disease transmission in healthcare settings and others that apply to all healthcare settings. These guidelines do not carry the weight of law, but are frequently the basis for infection control regulations by agencies such as OSHA or state boards of dentistry.
Although not widely used in oral healthcare until the mid-1980s, the use of medical gloves for all dental procedures is now routine for oral healthcare delivery in the developed world. Gloves are made of a variety of materials including latex, polyvinyl, nitrile, chloropene, and many other materials and combinations of synthetic and natural products (Figure 1). There are two grades of gloves: medical examination gloves and sterile surgical gloves. The selection of which grade of glove to use depends on the type of procedure to be performed.1
Medical Examination Gloves
Routine dental procedures require only the use of nonsterile examination gloves. Medical examination gloves are nonsterile, packed in boxes of multiples, and are usually not hand-specific (ie, packaged in pairs intended for the right and left hand). Gloves are intended for the protection of the patient as well as the DHCP. Gloves prevent the DHCP from having bare-handed contact with a patient's oral fluids, but also serve as a barrier between contaminants on the DHCP's hands and the patient's oral mucosa. Gloves should be the last item of PPE the DHCP dons before treating patients, to avoid inadvertently contaminating the gloves before initiating treatment. After performing hand hygiene and then placing gloves, nothing should be touched with gloved hands except the patients' oral tissues and patient care equipment. OSHA prohibits washing or decontaminating gloves for re-use.3 Hand hygiene should be performed again after removal of gloves because inadvertent contamination of hands can occur due to small breaks in the glove material or while removing gloves.
Sterile Surgical Gloves
Sterile gloves are indicated for use when performing surgical procedures. Oral surgical procedures expose the vascular system and other normally sterile tissues to the numerous organisms that normally colonize in the oral cavity.1 Sterile surgical gloves should be worn for procedures such as surgical extractions, periodontal surgery, tissue grafting, and other procedures that involve incision, ablation, or excision of hard and soft tissues.4
Frequent hand washing, exposure to chemicals, and glove use can lead to hand irritation and contact dermatitis. The most common type of dermatitis related to these is irritant contact dermatitis, which develops as dry, itchy, irritated areas on the skin of the fingers or other surfaces of the hand. Individuals may also experience allergic contact dermatitis, which is related to exposure to irritating chemicals such as those used in the manufacturing of latex and in certain disinfectants. Rarely, but more seriously, individuals may develop a latex allergy, which can result in anything from hives, itchy eyes, runny nose, and other common allergy symptoms to more serious symptoms including asthma, wheezing, and even anaphylaxis.5 Contact dermatitis and latex allergy are medical conditions that should be evaluated by a qualified healthcare professional.
In order to keep skin healthy and intact, hand lotions may be used throughout the day to prevent dry skin, which can lead to irritation and breaks in the skin's surface. However, lotions that contain petroleum or other oil-based products should be avoided, except at the end of the day.6
Masks and Eye Protection
Surgical masks were originally adopted to prevent infections of surgical wounds by attending medical providers.7 Masks are now widely used as a means of protecting healthcare workers' oral and nasal mucosa from contamination with aerosols or spatter generated during patient care. A surgical mask and protective eyewear with solid side shield or a face shield should be worn by DHCP during procedures likely to generate splashes or droplets of blood or body fluids. The protection should provide coverage to the mucous membranes of the eyes, nose, and mouth. A surgical mask protects against smaller aerosolized particles and also larger particle droplets that may occur during the use of dental equipment such as handpieces, ultrasonic scalers, and air/water syringes.
Because a mask's outer surface may become contaminated from contact with droplets generated during treatment or from touching by the DHCP, the mask should be changed between patients to prevent cross-contamination. Additionally, when a mask becomes wet from moisture in the DHCP's exhaled air, its ability to perform filtration is compromised. Therefore, if the mask becomes wet, it should be changed during patient treatment, when possible.1 A surgical mask is not a respirator and is not considered adequate protection against certain airborne disease such as tuberculosis, chicken pox, and measles, among others.2
Gowns and Lab Coats
Street clothes, work clothes, and skin may all be vulnerable to contamination during dental procedures. In addition to gloves and face protection, it is usually necessary to wear a gown or lab coat that will protect clothing and intact skin from contact with blood or other oral fluids during dental procedures. Some OSHA-specific requirements related to protective clothing are that the employer must provide all PPE and ensure its use by employees with occupational exposure to blood and other potentially infectious materials (OPIM). The employer must also arrange to have attire laundered or disposed of, preventing employees from taking potentially contaminated garments home, where contamination may spread to the home environment.3 Because dental procedures often involve spray or aerosol from devices held in the DHCP's gloved hands, long sleeves on protective garments are necessary to prevent contamination of the skin or clothing on the DHCP's forearms (Figure 2).
Personal protective equipment is a critical element of standard precautions and must be used in combination with other infection control procedures. Improper use or selection of PPE can result in potential cross-contamination between patients and DHCP and the oral healthcare environment. PPE should be selected based on the anticipated exposure, which may vary depending on the type of dental procedure.
1. Belkin NL. A century after their introduction, are surgical masks necessary? AORN J. 1996;64(4):602-607.
2. Kohn WG, Collins AS, Cleveland JL, et al. Guidelines for infection control in dental health-care settings, 2003. MMWR. 2003;52(RR-17):1-61.
3. Occupational Safety and Health Administration, US Department of Labor. Bloodborne pathogens. Occupational Safety and Health Standards—Toxic and Hazardous Substances. 29 CFR Part 1910.1030. Available at: www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=standards&p_id=10051.
4. Occupational exposure to bloodborne pathogens, needlesticks and other sharps injuries; final rule. Federal Register. 2001;66:5317-5325. [As amended from and includes 29 CFR Part 1910:1030. Occupational exposure to bloodborne pathogens; final rule. Federal Register. 1991;56:64174-63182.]
5. Garner JS. Hospital Infection Control Practices Advisory Committee. Guideline for isolation precautions in hospitals. Infect Control Hosp Epidemiol. 1996;17(1):53-80.
6. Mangram AJ, Horan TC, Pearson ML, et al. Hospital Infection Control Practices Advisory Committee. Guideline for prevention of surgical site infection, 1999. Infect Control Hosp Epidemiol. 1999;20(4):250-278.
7. Hunt LW, Fransway AF, Reed CE, et al. An epidemic of occupational allergy to latex involving health care workers. J Occup Environ Med. 1995;37(10):1204-1209.
8. Larson EL. APIC guideline for hand washing and hand antisepsis in health-care settings. Am J Infect Control. 1995;23(4):251-269.
About the Author
Eve J. Cuny, RDA, MS
Assistant Professor, Dental Practice
Director, Environmental Health and Safety
University of the Pacific
Arthur A. Dugoni School of Dentistry
San Francisco, California |
Coffee grounds compost is a compost composed, in part, of used coffee grounds. After coffee has been brewed, the grounds may be used to create the compost. The compost itself may be used to help grow everything from houseplants to vegetable gardens.
Grounds may be mixed with compost to facilitate nitrogen balance. Since coffee grounds are composed of about 2% nitrogen, the grounds can substitute for animal excrement that would typically be used in compost piles. Mixing extra nitrogen fertilizer into the compost may be beneficial to plants, as can adding another carbon source like leaves. Generally, one part coffee grounds should be added to four parts compost.
Some gardeners prefer to use coffee grounds compost to create their own liquid fertilizer. To do this, gardeners first gather .5 lbs (0.23 kg) of coffee grounds and 5 gal (18.92 l) of water. Then, they combine the two, set the mixture outside for a day and allow it to achieve a favorable temperature before applying it to their plants.
Coffee grounds may also be sprinkled over or around plants before imminent rain or a scheduled watering. This will result in the slow release of nitrogen. Coffee grounds should always be kept damp to activate the properties that are beneficial to plants.
The benefits of using coffee grounds compost are numerous. Applying coffee grounds compost to plants may give them an extra boost of calcium and magnesium. Coffee grounds also keep soil temperature optimal which, in turn, helps combat pathogens, deter weed formation and structure soil. Though many gardeners express concern over the compost's acidic content, such concerns are usually unfounded since the coffee grounds' acidity is neutralized during brewing. Typically, used coffee grounds reach a neutral pH level of somewhere between 6.5 and 6.8, with most of its acidity transferring to the beverage.
When used in conjunction with eggshells, coffee grounds compost can even serve as a useful pest repellent. Ants, slugs and stray cats may find a garden undesirable if it contains coffee grounds. The compost can also be used to feed welcome earthworms.
Using coffee grounds compost reduces environmental impact by reducing greenhouse emissions. Those interested in using coffee grounds in their gardening projects can simply reach for old grounds in their own kitchens. However, coffee grounds can also be collected from restaurants or places of employment. In addition, coffee filters are broken down naturally and easily through the composting process. Coffee grounds should be utilized as compost no more than three weeks after their collection. |
About the Nitrogen Cycle
Keeping fish requires an understanding of the environment they live in, and the most important part of this is the Nitrogen Cycle.
Following the introduction of fish, plants, and food into your aquarium a process is started which is known as biological filtration or the nitrogen cycle.
This process naturally converts waste materials into less toxic compounds by the use of bacteria.
The first stage of the nitrogen cycle is the formation of ammonia (toxic) or ammonium (non-toxic). In an aquarium, more than 50% of the waste produced by fish is in the form of ammonia, the majority of which is secreted through the gills. The remainder of the waste, excreted as fecal matter, undergoes a process called mineralization. Mineralization occurs when heterotrophic bacteria consume fish waste, decaying plant matter, and uneaten food, converting all three to ammonia and other compounds. Ammonia is lethal to fish above certain concentrations and must be removed or broken down.
Efficient filtration is the most effective method of removing ammonia. With ammonia now present in the water the nitrogen cycle is underway. As the ammonia levels rise in the aquarium, a group of bacteria called “Nitrosomonas” feed on the ammonia and convert it to nitrite.
Nitrite is then consumed by another group of bacteria, Nitrobacter, which converts it to nitrate (a relatively non-toxic compound), the end product of the biological filtration process.
Nitrate levels will slowly rise in the aquarium, and over a period of time will become toxic to the fish when high levels have accumulated. The filter system will not remove nitrates in the same manner as it removes ammonia and nitrite, and fortnightly partial water changes are required for nitrate control.
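To make the chain of conversions easier to picture, here is a minimal toy model of the cycle in Python. The daily ammonia input, conversion rates and water-change fraction are illustrative assumptions only (real rates change as the bacterial colonies grow), so the numbers should not be read as guidance for any particular tank.

```python
# Toy model: ammonia -> nitrite -> nitrate, with fortnightly partial water changes.
# All constants below are illustrative assumptions, not measured values.
AMMONIA_INPUT = 0.5   # mg/L of ammonia added per day by fish waste and uneaten food
K_AMMONIA = 0.20      # fraction of ammonia converted to nitrite per day (Nitrosomonas)
K_NITRITE = 0.15      # fraction of nitrite converted to nitrate per day (Nitrobacter)
WATER_CHANGE = 0.25   # 25% partial water change every 14 days

ammonia = nitrite = nitrate = 0.0
for day in range(1, 43):                      # roughly six weeks
    ammonia += AMMONIA_INPUT
    to_nitrite = K_AMMONIA * ammonia
    to_nitrate = K_NITRITE * nitrite
    ammonia -= to_nitrite
    nitrite += to_nitrite - to_nitrate
    nitrate += to_nitrate
    if day % 14 == 0:                         # water change dilutes everything
        ammonia *= 1 - WATER_CHANGE
        nitrite *= 1 - WATER_CHANGE
        nitrate *= 1 - WATER_CHANGE
    if day % 7 == 0:
        print(f"week {day // 7}: ammonia={ammonia:.2f}  "
              f"nitrite={nitrite:.2f}  nitrate={nitrate:.2f} mg/L")
```

Even in this crude model, ammonia and nitrite level off while nitrate keeps climbing between water changes, which is why the partial water changes described above are the main tool for nitrate control.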
The nitrogen cycle can take up to six weeks to complete, and we advise that you follow our stocking levels to keep toxic levels under control. |
Coronavirus variants: The basics
May 12, 2022—You may have read about new coronavirus variants spreading around the country. But what is a variant, and where does it come from? Here are the basics, based on information from the American Medical Association, the Centers for Disease Control and Prevention (CDC), and the World Health Organization (WHO).
What are variants and subvariants?
Viruses mutate, or change, over time. The result is new versions, or variants, of that virus. Different variants can work differently. For example, the Omicron variant of the coronavirus spreads more easily than some others. Subvariants happen when an existing variant mutates again, but the change is not big enough to classify as a new variant.
Are new variants always more dangerous?
Not always. Some variants can cause more severe illness, like Delta. Others are less dangerous.
Will there always be new coronavirus variants?
Scientists expect new variants to emerge from time to time. Some may go away quickly, while others may stick around.
How can we help slow down new variants?
Viruses mutate as they spread, so the best way to slow down variants is to slow the spread:
- Stay up-to-date with your COVID-19 vaccines and boosters.
- If you think you may be sick, take a COVID-19 test.
- Wear a mask when recommended.
What is a "variant of concern?"
CDC and the WHO track new variants. Sometimes a new variant may be more dangerous than previous ones. That might be because compared to other variants:
- It is more likely to make people very sick.
- Tests do not detect it as well.
- Vaccines and past infections do not protect people as well.
- It is more contagious than other variants.
If at least one of those is true, CDC may call the new variant a "variant of concern." For example, the Delta variant was considered a variant of concern because it caused more severe illness in many people. The Omicron variant is not as likely to cause severe illness, but it is very contagious.
The WHO also calls some new strains of the coronavirus "variants of concern." What's the difference? The WHO tracks which variants are dangerous around the world. CDC focuses on how the variant affects people in the U.S.
Do COVID-19 vaccines still work against new variants?
So far, all available vaccines offer protection against variants of COVID-19. But vaccine protection can fade over time. Stay protected by following CDC's guidelines on when to get your next booster.
For more information on COVID-19, visit our Coronavirus health topic center. |
Within a community there is a need to belong, and this sense is created by the people within it. Notions of culture and identity are two major concepts that lie within the conformities for an individual to belong. Alienation is often the outcome if an individual does not fit the criteria for identity within that community. Romulus, My Father, a biography by Raimond Gaita, emphasises the importance placed on identity and how one's beliefs and philosophies can shape an individual's potential to belong. Spirited Away, an anime film by Hayao Miyazaki, shows how racial identity and the physical traits of an individual can separate one's potential to belong.
Sturt's (the famous explorer's) perspective and the Aborigines' (the indigenous peoples') perspective. The ideas and language in the poem convey the reality of their thoughts. We notice that the two points of view differ based on their background knowledge and understanding of their world and its concepts. In Stanza 1, line 1 we are introduced to Captain Charles Sturt. "Charlotte called him…Charlies dear", stating the connection between him and his wife.
The process of belonging involves both choosing and being chosen. Do you agree? Belonging is a complex process whereby we gain our identity: a sense of who and what we are. This sense of identity comes from the connections we make physically, emotionally and spiritually. These connections are integral to this identity and for each of them to be fully realised the things we choose must choose or accept us in return.
The dreaming is infinite and links the past with the present to determine the future. It is the natural world, created by the spirit beings, to which a person belongs; this therefore provides the spiritual link between the people and the dreaming. Aboriginal people regard land as sacred, formed during the Dreaming through the journeys of the Ancestor beings. Different tribal groups have different beliefs but they all share in the common belief that their ancestors created the land around them. During the course of many thousands of years Aboriginal people have developed an intimate relationship between themselves and their environment.
Belonging allows for the substantiation of characters through the formation of identity and connections. The sense of belonging humans naturally seek in life reflects the feeling of security and being accepted. They struggle with their identity as they make the choice whether to reject the individuality and belong to a community or group. When individuals seek to belong and rigidly follow society’s norms and practices, they must adhere to the strict rules of their society. In doing do, the desire to belong comes into conflict with the need to be an individual.
Belonging is a tricky concept, as you can argue that people who don’t want to belong actually "belong" to a group of people who don’t want to belong. But what is belonging? One idea is that you belong when you feel comfortable with people who have similar objectives, goals, and aims as you. There are many themes of belonging that are recurring in this novel such as ‘Belonging is based on people rather than places.’ The text The Simple Gift written by Steven Herrick is a verse novel that incorporates many aspects of belonging. The theme ‘belonging is
Area of Study Questions: Questions: 1. A sense of belonging or not belonging can emerge from connections made with people, places, groups, communities and the larger world. How have the connections or the lack of connections your prescribed text and one related text of your own choosing shaped your understanding of belonging or not belonging? 2. Discuss how an individual’s choice not to belong or barriers which prevent belonging have been presented in your prescribed text and one related text of your own choosing.
Having a sense of being different makes it difficult to belong. Identity and belonging are inter-related; they go hand in hand. The groups we choose to belong to and the ways we connect with others help to form our identity. Together these issues go to the heart of who we are and how we present ourselves to the world. We humans are social creatures and the need to belong is innate. It is funny in a way, as we all long to be free, to be who we truly are, yet we conform and do everything asked of us in order to belong to some kind of community or group.
Dispossession of their land has had an impact on their rituals and responsibilities, separation from their kinship groups and ceremonial life and loss of their native language. The land for Aboriginal people was a part of them and they were a part of the land, this belief and strong will from the Aboriginal people is what led the Land Rights Movement. The fundamental principle that is linked with Aboriginal spirituality is a concept known as the Dreaming. The Dreaming is strongly connected to the land, as the land is what the dreaming is communicated through, since it is within the land that the ancestor spirits of the Dreaming continue to dwell. The influence of the Dreaming is embedded in all aspects of Aboriginal life.
These include, but are not limited to, ideology (beliefs and values), love (personal relationships), and work. It is important to note that forming identity means making an informed choice about which block of culture you want to associate with. It becomes more complex than simple when choosing which block you want to associate yourself with. The important
What is Lightning? How does it occur?
Because air is an insulator, so-called cumulonimbus storm clouds accumulate charge as they move and build up very high voltages. As the cloud charges, the ground surface beneath it becomes positively charged; only rarely (about 10% of cases) is the polarity reversed. Eventually a conductive channel forms and a discharge begins, either from the cloud to the ground or from the ground to the cloud.
A discharge between clouds is referred to as a lightning flash, while a discharge between a cloud and the ground is a lightning strike.
The formation of a lightning cloud can be examined in three stages:
Youth: Air flow increases from bottom to top and from the edges toward the centre; this stage lasts 10-15 minutes.
Maturity: Temperatures near zero and reduced buoyancy produce heavy rain. During this time, cold downdrafts flow from top to bottom, and when they reach the ground they cause a short, severe storm. This stage lasts 15-30 minutes.
Aging: The air currents die out; this stage lasts about 30 minutes.
In order for a lightning discharge to occur, the electric field strength must reach 2,500 kV/m. A cloud-to-cloud or cloud-to-earth discharge occurs when the electric field strength in the cloud increases sufficiently.
a) Ascending lightning: Pre-discharges start from positively charged pointed regions on the ground and move toward the negatively charged region of the cloud. They typically begin at the tops of tall structures (e.g. GSM towers) or at high points in mountainous terrain. Currents in the range of 1 to 10 kA are observed in these discharges.
b) Descending lightning: When the charge in the lower part of the lightning cloud is sufficient, an electron beam (a stepped leader) moves toward the ground. The first beam covers a distance of 10-15 meters at a speed of 50,000-60,000 km/s; after 30 to 100 microseconds the second, third and fourth beams each advance 30 to 50 meters beyond the previous one, so each discharge brings the tip of the lightning closer to the earth. As the pre-discharges approach the ground surface, the electric field breaks down the insulation of the air; a discharge then rises from a pointed point on the ground toward the pre-discharges, forming a conductive channel in the air through which the high voltage (around 100 million volts) flows to the ground with a current strength of around 200,000 amperes.
Effects of Lightning
a) Electrodynamic Effect: If a part of the lightning flow path is within the magnetic field of another part, great forces arise. This effect results in crushing of thin antenna pipes, short-circuit of parallel conductors, and damage of conductor clamps.
b) Pressure and sound effect: The pressure built up by the forces in the lightning channel expands explosively, and the collapse of the channel when the current ceases produces thunder.
c) Electrochemical effect: At large current intensities, metals such as iron, zinc and lead are released by electrolyte disintegration.
d) Heat Effect: The temperature increases in the conductors where the lightning discharge current passes, but since the time is very short, there is no large heat increase in the conductors.
e) Light effect: A very bright light forms around the conductive channel during a lightning discharge, causing glare or temporary visual impairment in people nearby.
Lightning Protection Methods
Here we will talk about the methods used today to protect buildings from lightning.
1) Passive catch tips: This is the oldest lightning protection method; the catch tips have no active lightning-attracting feature and simple sharp-pointed rods are used. The first application was made by Franklin in the mid-18th century. It is based on placing pointed rods on the structures to be protected and connecting them to earth with a conductor. At that time the length of the rod was calculated as half of the protection diameter, while today the length of the rod is taken as the protection diameter. This method was later developed further into what is called the FARADAY CAGE.
In a Faraday cage, the smaller the mesh openings, the better the protection obtained, but this also means higher costs. The method has several further drawbacks:
- Oxidation at the joints between the horizontal and vertical conductors is an obstacle to achieving the desired protection.
- Maintenance is difficult; the conductors and their condition are hard to inspect.
- Applying the method to already completed structures increases costs, and it is almost impossible to cage the lower part of the building.
- In structures in earthquake-risk areas it is almost impossible to detect damage to the system caused by tectonic movements.
The most tragic example of a poorly implemented cage system is the Mont Blanc Observatory.
In that case the horizontal surface where the observatory sits on the ground should also have been caged, but this was not done; instead the conductivity of the soil was relied upon to close the cage. Because the soil conductivity was insufficient, lightning strikes caused accidents at the observatory.
2) Active catch tips (lightning rods)
Active catch tips create an ionized path toward the lightning cloud by emitting ions, breaking down the insulation of the air so that a conductive channel for the lightning discharge is opened in the air.
Active Lightning Rods (ESE-TYPE)
The latest development in lightning protection is the electrostatic active lightning rod, which uses a capacitor system. Lightning discharges occur at the highest and most pointed parts of the lightning rod.
ESE (Early Streamer Emission) lightning rods have also entered the standards in France and the USA.
The principle of operation is very simple: the device operates only when there is a risk of lightning in the air and does not cause unnecessary discharges at other times. A single rod of this type can achieve the same effect as erecting catch bars many meters tall.
Active SAT-ESE lightning rods manufactured in Turkey hold the national and international quality and test documents and certificates required. In our country these lightning rods are manufactured with stainless steel bodies so that they are not affected by meteorological conditions and impacts, and they are resistant, in terms of insulation and sealing, to the effects of weather conditions and lightning discharges. They do not require maintenance and do not arc. Their operation can easily be tested with the test device available at the points of sale.
Technical Specification for Lightning Protection and Grounding
1) Customer Address
2) Address of the Construction Project
3) Available Documents
4) General Information
5) Concept Definition of Lightning Protection System
6) Lightning Impulse and Surge Protection (Surge Arrester) Measures
7) Grounding System
8) Lightning Current Parameters
1. Customer Address
2. Address of project
3. Available Documents
Drawings DWG / PDF
4. General Information
Lightning, a natural phenomenon, cannot be controlled. It is therefore necessary to take measures to protect structures against:
– Threats to human life
– Destruction of security systems (eg fire alarm systems, burglar alarm systems, etc.)
– Accidental activation of fire extinguishing systems
– Damage to electronic devices
– Destruction or dysfunction of measuring and control systems
– Change or loss of electronically stored data
The contents of parts 1 to 4 of the IEC 62305 standard are general concepts.
4.1 Lightning Protection System
The lightning protection system is a whole system and is applied to reduce the physical damage caused by direct lightning strikes in the building. It has an external lightning protection system and an internal lightning protection system.
External lightning protection system:
a) Directs the lightning strokes to the capture terminals.
b) Ensures the safe transmission of lightning current to earth by means of down-conductors.
c) Provides the distribution of lightning current to the ground by means of grounding system.
Fig. 4.1 – Lightning protection system
The internal lightning protection system shall be constructed by means of the equipotential bonding between the electrically conductive parts and the external lightning protection parts or by the separation distance method (safe electrical separation method).
4.2 Separation distance
Electrical isolation between the external lightning protection system and internal conductors (that is, between the catch ends, down-conductors and parts of the natural lightning protection system on one side and the metal and electrical installations to be protected inside the building on the other) is provided by the separation distance "s" (fig. 4.2).
Fig. 4.2 – Separation distance
Calculation formula for the separation distance: s = ki * (kc / km) * l [m]
where:
- ki – induction factor, a function of the selected lightning protection class (fig. 4.3)
- kc – current division coefficient, a function of the geometric arrangement of the down-conductors
- km – material factor (material multiplier) at the proximity point (fig. 4.3)
- l – length, in meters, along the catch-end or down-conductor system from the nearest equipotential bonding point to the point of proximity under consideration
Fig. 4.3 – ki and km coefficients
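To make the formula concrete, the sketch below evaluates it for one set of placeholder coefficients. The values of ki, kc and km are assumptions for illustration only; in a real design they are taken from the tables in IEC 62305-3 for the chosen protection class, conductor arrangement and material.

```python
def separation_distance(ki: float, kc: float, km: float, length_m: float) -> float:
    """Separation distance s = ki * (kc / km) * l, result in meters."""
    return ki * (kc / km) * length_m

# Placeholder example values (not taken from this project):
#   ki = 0.04  - induction factor for the chosen lightning protection class
#   kc = 0.5   - current division coefficient for the down-conductor arrangement
#   km = 1.0   - material factor for air at the proximity point
#   l  = 10 m  - length from the nearest equipotential bonding point
s = separation_distance(ki=0.04, kc=0.5, km=1.0, length_m=10.0)
print(f"required separation distance: {s:.2f} m")  # prints 0.20 m
```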
5. Conceptual description of external lightning protection system to be made:
5.1. In the XXXXX project, the Lightning Protection System specified in the latest IEC 62305 standard shall be made. Implementation shall be made according to the measures specified in risk management.
5.2. Risk management is a whole and failure to take the specified measures in risk management invalidates the practice.
5.3. In accordance with IEC 62305-3 E.4.3.3, all interconnected reinforcing steel in the building's reinforced concrete shall be considered as a natural down-conductor.
5.4. The connection continuity of this reinforcing steel shall be checked as described in IEC 62305-3 E.4.3.1, and the measured impedance value shall be below 0.2 ohm. If this value is not achieved, external HVI insulated conductors shall be run down from the columns.
5.5. Since the insulated catch tip system will be installed with tripods and clamps as specified in the project, no direct lightning stroke to the protected structure is expected when the specified lightning current parameters are taken into consideration.
5.7. All metal building parts (façade, building reinforced concrete steel, metal conductors / pipes…) should be connected to each other with the capacity to carry lightning current. In this way, the different potential differences between the parts will be drawn to a single potential to prevent the formation of dangerous arcs.
5.8. The catch ends to be mounted on the tripods shall be fixed with insulated support pipes and HVI insulated conductor shall be passed through these insulated support posts.
5.9. The equipotentialization of the outer sheath of the HVI insulated conductor shall be carried out through the support post.
5.10. The external diameter of the HVI insulated conductor shall be 20 mm and the copper conductor inside it shall have a cross-section of at least 19 mm2.
5.11. The HVI insulated conductor shall be capable of carrying a lightning current of 150 kA (10/350 µs), and its equivalent separation distance "s" in air shall be 75 cm. Since this property depends on the conductor's insulation, tests regarding the separation distance shall be documented.
5.12. The dielectric strength of the HVI insulated down-conductor and the creepage (surface) discharge resistance test on its outer sheath insulation shall be documented. It shall thus be documented that no dangerous current passes through the sheath of the conductor while it provides a separation distance of 75 cm.
5.13. The HVI insulated conductor shall be easy to assemble, with a minimum bending radius of 200 mm.
5.14. When connecting HVI insulated conductors to the building's reinforcing steel, the tested accessories and clamps of the HVI insulated conductor system shall be used.
5.15. As stated in the project, the length of the catch pole with support posts will be 4.2 m and 5.7 m at 14 and 2 points respectively.
5.16. Tripods will be fixed with necessary concretes to withstand 110km / h wind speed.
5.17. The tripod's metal body shall be connected to the closest equipotential busbar formed on the building, using aluminum conductors if the contact surfaces are not lime-based and St/St stainless steel conductors if they are lime-based.
5.18. The HVI insulated conductor must be placed on 1 m troughs with concrete feet to prevent dangerous displacement in the event of impact and the HVI insulated conductor will be connected to the reinforced concrete with a StSt stainless steel clamp, tested to IEC 62561 and carrying a lightning current of at least 150kA (10 / 350us) .
5.19. The total length of the insulated support pipes to be attached to the structures shall be 4.2 m together with the catch ends and the 50 cm portion of the support pipe shall be secured with clamps approved by the manufacturer and the metal part of the support pipe shall be connected to the closest equipotential busbar.
5.20. After the HVI insulated conductor support pipe to be installed in the structures, it must be fixed to the roof with plastic pedestal mounting parts approved by the related manufacturer every 1 meter and if there is a section from the ground to the ground point, the concrete feet should be placed on the ground with 1 meter spaces. The HVI insulated conductor is tested in accordance with IEC 62561 and will be connected to the reinforced concrete with a StSt stainless steel clamp that will carry at least 150kA (10 / 350us) lightning current
5.21. All products used must be tested in accordance with IEC 62561. The accessories and HVI insulated conductors must be of the same brand as the HVI insulated conductor and its attachments are tested together and the entire system is approved.
5.22. The HVI insulated conductor must be connected to the insulated support pipe internally and to the earth point with the StSt stainless termination parts of the insulated conductor.
6. Lightning Current and Impact Protection Measures (Surge Arrester)
6.1. In the XXXXX project, the Lightning Protection System specified in the latest IEC 62305 standard shall be constructed. Implementation shall be made according to the measures specified in the risk management report.
6.2. Risk management is a whole and failure to take the specified measures in risk management invalidates the practice.
6.3. Surge arresters specified below should be used where there is a low voltage power line and data / signal connections.
6.4. Fire alarm and automatic fire extinguishing equipment is a risk reducing factor in risk management and these systems are considered to exist in risk management when the energy and signal sides are protected with surge arrester.
6.5. For the air-conditioning and ventilation equipment to be installed in the building, necessary surge protection measures must be taken.
6.6. At the main panel entrance of the building, a compact Type 1 + Type 2 spark-gap 3+1 surge arrester with a voltage protection level of less than 1.5 kV and a 10/350 µs Iimp value of 50 kA should be used.
6.7. Type 2 3+1 surge arresters with an Imax value of 120 kA and a nominal discharge current of 80 kA should be used in the sub-boards.
6.8. Type 3 3 + 1 surge arresters should be used in shops and offices.
6.9. Type 1P1 surge arresters suitable for 10kA 4-wire connection should be used in signal connections of fire alarm / extinguishing and air conditioning / ventilation systems.
6.10. The Ethernet connections in the IT room should also be protected against the direct and induced effects of lightning.
6.11. Surveillance cameras and the Ethernet switches connected to them must also be protected.
7. Grounding system
7.1. In accordance with IEC 62305-3: 2010-12, type B ring ground electrode or basic ground electrode shall be used.
7.2. The foundation earth electrode to be used, tested and documented in accordance with IEC 62561-2, must be a 30 × 3.5 mm hot-dip galvanized steel strip with an average coating thickness of 70 µm (microns) or a V4A St/St 30 × 3.5 mm stainless steel strip.
7.3. The total earthing resistance of the entire earthing system shall be ensured to be less than 2 ohms.
7.4. The basic earth electrode must be connected to the rebar every 2 meters with clamps approved by the manufacturer and in accordance with IEC 62561.
7.5. The basic earth electrode must be connected to equipotential busbars located at the appropriate locations within the building.
7.6. For the foundation grounding, hot dip galvanized steel strip with 30 × 3.5mm average 70um (micron) surface coating or V4A St / St 30 × 3.5mm stainless steel strip should be used in the bottom row of the iron to be left in the concrete inside the building.
7.7. The foundation earthing must be made in the form of a closed ring and placed on the foundations of the exterior walls of the building or within the foundation platform.
7.8. The basic earthing must be arranged in such a way that all sides are covered with concrete. The end points should be removed from the foundation and sufficiently fastened.
7.9. The connection tails should be made of 50 × 5 mm galvanized strip, to be connected to the equipotential busbars in the building, fixed to the foundation reinforcement with vertical fixing pieces and laid out with a minimum length of 2 m.
7.10. If a white tank is to be built with waterproof concrete, the earth electrode must be installed according to DIN 18014.
7.11. For this application, a V4A St/St stainless steel ring earth electrode of 10 mm diameter with a 10 m × 10 m mesh will be placed under the lean (blinding) concrete, and connection tails will be brought out every 10 meters with the clamp pictured below.
7.12. Above the lean concrete, a V4A St/St stainless steel ring earth electrode of 10 mm diameter with a 20 m × 20 m mesh, tested according to IEC 62561-2, shall be placed into the reinforced concrete to be applied and connected to the construction iron every 2 meters with the clamps specified in the following drawing.
7.13. At least every 20 meters, and in at least one section, enough earthing tails shall be provided to bring out one equipotential point, and these tails must be connected to waterproof wall bushings rated for a water pressure of 1 bar in order to form the equipotential points.
8. Lightning Current Parameters
The design is based on the latest IEC 62305 Lightning Protection Standard.
The Lightning Protection Level (LPL) III is based on the following Lightning Current parameters.
Ipeak = 100 kA (10/350 µs)
Qimpulse = 50 C
Specific energy = 2.5 MJ/Ω
Rolling sphere = 45 m
Lowest peak current = 10 kA
Impacts greater than 10 kA = 91%
Since the lightning protection level has been chosen as LPL III, the lowest lightning current considered is 10 kA, which covers 91% of all possible lightning strikes. |
Tiered instruction and support programs are often implemented on a school or district-wide level. Response to Intervention (RTI) and Multi-tiered Systems of Support (MTSS) are common programs that provide a framework for assessment, delivery of instruction, provision of supports, and guidelines for referrals for eligibility in special education. The law requires that teams document efforts to support students in general education settings before they can be referred for special education evaluation. Not all students who struggle have a documented disability or meet eligibility requirements. Teachers and staff must address screening and assessment to identify the needs of students and make data-driven decisions regarding delivery of instruction, needed supports, and referrals when necessary for additional supports and services. RTI is a framework that includes general supports for all students, supports that are more specific for at-risk students, and identification of students who need interventions that are more intensive.
Select a grade level Class Profile relevant to your field of study and imagine you are the classroom teacher for these students. Review the profile and formulate a small group of students with whom you will implement RTI strategies. Using this group as an example, create a 15-20 slide digital presentation providing an introduction to the RTI for new educators.
Within your presentation, provide:
- An overview of RTI and MTSS, including an explanation of the tiers and how these systems can be used to enhance and adapt instruction.
- An explanation of the role of the child study team.
- A discussion and examples of the types of data collected throughout the RTI process.
- An explanation of what factors determine appropriate student placement within the RTI tiers. Refer to your example student group to illustrate specific ideas.
- An explanation of how the RTI model can help meet the needs of students without disabilities and as a means of adapting instruction prior to evaluating students for a disability.
- Five examples of research-based intervention strategies that you will use with students in your example student group who are struggling in English language arts or mathematics. Include justification for each intervention strategy.
- Description of the data that would be collected as part of implementing the research-based intervention strategies. Explain how this data could be used to evaluate the effectiveness of the RTI strategies and how this information could be used as justification for testing for possible special education eligibility.
- A title slide, reference slide, and presenter’s notes.
Support your presentation with a minimum of three scholarly resources. |
Table of Contents
- 1 What are 3 different types of proteins?
- 2 What types of proteins are enzymes?
- 3 What are the different types of proteins?
- 4 What are the two main types of protein?
- 5 How do you tell if a protein is an enzyme?
- 6 Which of the following is the main function of protein?
- 7 What are examples of enzymes?
- 8 What are the 2 types of protein?
What are 3 different types of proteins?
The three structures of proteins are fibrous, globular and membrane, which can also be broken down by each protein’s function. Keep reading for examples of proteins in each category and in which foods you can find them.
What types of proteins are enzymes?
Enzymes are mainly globular proteins – protein molecules where the tertiary structure has given the molecule a generally rounded, ball shape (although perhaps a very squashed ball in some cases). The other type of proteins (fibrous proteins) have long thin structures and are found in tissues like muscle and hair.
What are the 3 structures of enzymes?
In this lesson, the three-dimensional structure of proteins will be discussed: the primary structure of polypeptides, secondary structures in proteins (α-helix, β-sheet), and the tertiary structure. The concept of an enzyme active site will be introduced.
What are the different types of proteins?
There are seven types of proteins: antibodies, contractile proteins, enzymes, hormonal proteins, structural proteins, storage proteins, and transport proteins.
What are the two main types of protein?
There are two main categories (or sources) of proteins – animal and plant based. Animal proteins include:
- Whey (dairy)
- Casein (dairy)
What is the main function of protein?
Protein has many roles in your body. It helps repair and build your body’s tissues, allows metabolic reactions to take place and coordinates bodily functions. In addition to providing your body with a structural framework, proteins also maintain proper pH and fluid balance.
How do you tell if a protein is an enzyme?
Which of the following is the main function of protein?
What are proteins and what do they do?
| Protein type | What it does | Example |
| --- | --- | --- |
| Structural component | These proteins provide structure and support for cells. On a larger scale, they also allow the body to move. | Actin |
| Transport/storage | These proteins bind and carry atoms and small molecules within cells and throughout the body. | Ferritin |
What is the primary level of protein structure?
The simplest level of protein structure, primary structure, is simply the sequence of amino acids in a polypeptide chain. For example, the hormone insulin has two polypeptide chains, A and B, shown in diagram below.
What are examples of enzymes?
Examples of specific enzymes
- Lipases – a group of enzymes that help digest fats in the gut.
- Amylase – helps change starches into sugars.
- Maltase – also found in saliva; breaks the sugar maltose into glucose.
- Trypsin – found in the small intestine, breaks proteins down into amino acids.
What are the 2 types of protein?
Different Types of Protein
- When it comes to protein, there are 20 different amino acids that make up each molecule of protein, and these are split into 2 categories: Non-Essential Amino Acids and Essential Amino Acids (EAAs)
- There are two main categories (or sources) of proteins – animal and plant based.
How many types of proteins are there in the human body?
In humans, up to ten different proteins can be traced to a single gene. Proteome: It is now estimated that the human body contains between 80,000 and 400,000 proteins. However, they aren’t all produced by all the body’s cells at any given time. Cells have different proteomes depending on their cell type. |
Regular physical activity plays a vital role in the management of Diabetes along with a good diet plan and the medication prescribed by your doctor. Physical activity increases the effectiveness of insulin and this effect persists several hours after exercise. Physical activity is also important for your overall well-being and to control many other health conditions.
At least 30 minutes of moderate exercise for 5 days in a week plays a very important role in your overall health. Before increasing usual patterns of physical activity or an exercise program, the individual with diabetes mellitus should undergo a detailed medical evaluation with appropriate diagnostic studies. Preparing the individual with diabetes for a safe and enjoyable physical activity program is as important as physical activity itself.
The young individual in good metabolic control can safely participate in most activities. The middle-aged and older individual with diabetes should be encouraged to be physically active. If you are overweight, combining physical activity with a reduced-calorie eating plan can lead to even more benefits.
Be sure to drink water before, during, and after exercise to stay well hydrated. Because physical activity lowers your blood glucose, you should protect yourself against low blood glucose levels, also called hypoglycemia. You are most likely to have hypoglycemia if you take insulin or certain other diabetes medicines. In such cases your doctor may suggest you to take less insulin or take a carbohydrate snack before, during or after exercise. Checking your blood glucose levels before and after physical activity is also very important.
Make a commitment to exercise and make it a priority. Your long-term health depends on it, so as tough as it may be to find time or to motivate yourself to exercise, keep at it. It will help you lose weight (if you need to do that), and it will make your body more efficient at using its insulin and glucose.
- Controls your blood glucose levels.
- Being physically active helps prevent long-term diabetes complications. People with prediabetes can delay the onset of diabetes.
- Lowers blood pressure and cholesterol.
- Makes you physically fit.
- Lowers risk of cardiovascular disease and stroke.
- Reduces stress and increase the feeling of wellbeing.
- Burns extra calories so you can keep your weight down if needed.
- Helps you sleep better
- Reduces symptoms of depression and improves quality of life
- Reduces the risk of osteoporosis and arthritis. |
Expectations and purposes of homework
Individual abilities are contributing factors in determining how long a student will spend on any given task. Homework should make authentic use of students’ reading and writing skills.
Homework should be a meaningful experience, designed to develop independent work habits that will assist students during their years of study. Students need to learn to organize their work and budget their time, both for daily and long-range assignments. Charlton Heights provides all students in grades 1-5 with an individual Student Agenda Planner to help them learn how to organize their time and to facilitate home-school communication. One important aspect of homework in elementary school is to instill in a student the idea that homework doesn’t always have to be written. Studying spelling words, reading independently, studying for tests, doing a science project, etc. all qualify as “homework.”
Homework should be seen as a reinforcement of skills learned at school, a way of practice and possible remediation or enrichment. In addition, it is an opportunity to complete unfinished class assignments if deemed necessary by the teacher.
Homework should provide a means of communication between the home and the school and an opportunity for parents to become involved in their child’s education.
The success of a homework program depends upon the cooperative efforts of students, parents, teachers and administrators.
Students: What are your responsibilities?
- Think of your homework assignment as part of your learning experience. It is an opportunity to grow in your skill and knowledge and to pursue your interests.
- Refer to your Agenda and collect the necessary materials.
- Be responsible for completing the assignment on time and returning it to your teacher. Follow the expected standards of quality. With your parent, set up a suitable environment for homework time.
- Carefully plan your activities and interests so that you will complete your homework assignment successfully.
Parents: What are your responsibilities?
- Understand that homework is an important part of your child’s learning process that helps him or her accept the responsibilities of school life and develop and reinforce lifelong skills.
- Provide your child with the time and space needed to complete assignments.
- Show an interest by asking to see your child’s homework on a regular basis. This reinforces the importance of homework and provides an opportunity for you to keep informed about your child’s progress. Remember that homework is your child’s responsibility. You are not responsible for doing your child’s work, but should be concerned that he or she does it carefully and accurately. Be available to provide guidance and answer questions without doing the homework.
- Encourage your child to have an organized approach to homework by providing requested materials such as notebooks, etc.
- Work closely and cooperatively with your child’s teacher(s) especially if there have been difficulties with homework. Your child needs to see a connection between home and school, with consistent expectations coming from parents and teachers. This will be the most effective way to help children improve in their responsibility.
- Check your child’s Agenda daily and sign if expected.
- Homework grades will be determined by individual teachers. Contact your child’s teacher if you have any questions.
- Contact your child’s teacher regarding any difficulties your child may be having with homework or projects. Contact may be made in several ways:
- All teachers can be reached through the district email system, which uses the staff member’s first initial and entire last name and the extension @bhbl.org with no spaces. For instance, Jane Doe’s email would be: jdoe@bhbl.org.
- Use the Agenda to write a note to the teacher.
- Leave a message at the school office asking the teacher to return your call.
Teachers: What are your responsibilities?
- Establish homework assignments at the correct level for each student to ensure that skills taught in the classroom can be practiced and reinforced at home successfully.
- Use the Agenda and check for signatures, notes, etc.
- Establish acceptable standards of neatness and quality allowing for individual challenges.
- Provide a rubric for all long-term projects, if graded.
- To ensure students understand how to study, encourage them to ask questions in class when they are not sure of something or do not understand. Also, go over and encourage them to read the student responsibility section of this guideline.
- Set up a system for handling late or incomplete assignments and make certain the system is understood by students and parents.
- When making long-range assignments, make sure students have guidelines for completing the assignments and understand the due dates. Make parents aware of long-range assignments so they can help their child budget time at home properly. Work with other teachers to assure there is not an abundance of homework.
- Contact parents and seek their cooperation when you are unable to satisfactorily resolve homework problems with a student.
- Make sure students know and are capable of the basic study skill techniques appropriate for the grade level. These may include:
- Using resource materials: glossaries, encyclopedia, etc.
- Concentrating and developing self-discipline.
- Gathering, organizing, relating and communicating data in their own words.
- Making and using an outline.
- Reading for main ideas, specific information and details.
Estimated Homework Times
Kindergarten: About 15 minutes per night including reading and writing with occasional special projects
Grades 1 – 4: About 30 minutes per night, including reading, math facts, and spelling practice with occasional special projects
Grade 5: About 45 minutes per night. Daily reading and math facts are not included in this time frame. (According to state and district expectations, all math facts should be mastered by the end of fourth grade.) Typically, weekend homework is reserved for long-term projects only.
Requesting assignments for students who are ill
If your child is absent from school and you feel that he or she is capable of working on assignments during an illness, you may request work. When leaving a message on the Absence Calling number 518-399-9141, ext. 85510 please include a message requesting homework and indicate how you will obtain the assigned work. (For instance, would you like it sent home with another child or will you pick it up at the main office after school?) The teachers will do their best to accommodate you. Please remind your children to bring their books back when they return to school.
Assignments for students on trips
Burnt Hills-Ballston Lake discourages the practice of taking children out of school for an extended period of time. Since homework is both an extension and a reinforcement of class work, it is not as effective when done as an isolated exercise. The educational benefits derived from discussions, presentations, and demonstrations cannot be duplicated by merely reading a textbook. Teachers also find it difficult to project accurately exactly what will be taught during a child’s extended absence. It is not an easy task to predict how concepts will be grasped and content understood by the group.
For all these reasons formal homework assignments will NOT be prepared for these extended periods. General suggestions for reinforcing reading, math, and writing skills may be made in lieu of specific homework assignments. The specific assignments can be gathered during the period of absence and provided to the child when he or she returns to school. |
How does ICAN determine what a student knows about certain subjects? Or how does ICAN determine an individual's level of skill in a certain area? One of the most common ways to do this is to use level tests. Level tests are designed to measure a person's level of skill, accomplishment, or knowledge in a specific area.
New students are regularly expected to demonstrate their learning and proficiency in a variety of subjects. In most cases, certain scores on these tests are needed in order to provide them with proper classes and curriculum.
Research on the impact of tests on learning indicates that tests exert a strong influence on classrooms and education policy in general. Good tests lead to good teaching, which leads to good learning. For example, a test which includes a speaking component will likely lead to classrooms which focus on developing communicative skills.
In ICAN, we use an analyzer after the students have taken the level tests.
The Analyzer is used to get the accurate level of the student in the following criteria:
4 Macro Language Skills: Reading, Writing, Interview and Listening (Comprehension and Note-Taking)
5 Articulation Skills: Narrative, Descriptive, Expository, Argumentative and Persuasive
Fundamental Language Skills: Grammar and Vocabulary
Access your child/children's analyzer/s online with a few clicks.
The testing of grammar is one of the mainstays of language testing. While such tests test the ability to either recognize or produce correct grammar and usage, they do not test the ability to use the language to express meaning.
It's helpful to determine a student's reading level so you can find books that are appropriate for them to read on their own: not too difficult but challenging enough to encourage growth. Reading level classification is a convenient tool you can use when searching online or at the library.
Educators should test vocabulary that they expect their students to know or to use. Research shows that learners can recognize more words than they can actually use. Teachers need to decide between testing high-frequency words and more specialized technical vocabulary.
Listening enhances children's ability to use the other language arts. Teaching listening allows students to follow directions, understand expectations, and make sense of oral communication. As children improve as listeners, they learn to use the same strategies to improve their command of the other language arts.
Writing assessment can be used for a variety of appropriate purposes, both inside the classroom and outside: providing assistance to students, awarding a grade, placing students in appropriate courses, allowing them to exit a course or sequence of courses, certifying proficiency, and evaluating programs, to name a few.
When students are taught digital literacy, they get more than just exposure to technology: they develop important life skills that lead to a deeper understanding of the digital world and curate content in a way that is useful and relatable. |
Authentication: I understand that authentication is basically a digital signature.
You can use a digital signature for authentication, be it entity authentication (e.g. in the TLS protocol) or message authentication (e.g. in the PGP protocol). It is however also possible to use other means, e.g. a MAC if you share a secret key: the digital signature is a means to an end.
In RSA encryption, we use public key(of whose sender or receiver?) for encryption and private key(of whom?) for decryption.
You'd use the public key of the receiver. You first need to establish trust in the public key though. This is why you can e.g. sign keys in PGP, so that you can utilize its web of trust.
They say hash is encrypted using RSA. But why are we using PRIVATE KEY (of sender) here (instead of public key) and public key of sender instead of private key?
The private key of the sender is used in signature generation. It is possible to show that the private key of the sender was used by verifying the signature with the corresponding public key. The hash over the data is used so that the integrity and authenticity of the message are maintained.
It's best not to think of that as encryption with a private key, so they got that wrong. Both signature generation and encryption in RSA depend on modular exponentiation. However, that's where the similarities end. For more information see my self answered question here.
Note that two key pairs are used if you want to encrypt and sign. The encryption part is performed using the key pair of the receiver, while the signature generation is performed using the key pair of the sender. Encryption is always performed using the public key, decryption with the private key. For signature generation the private key is used, for verification the public key.
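To make the key-pair roles concrete, here is a minimal sketch in Python using the third-party cryptography package. The key sizes, padding schemes (PSS for signing, OAEP for encryption) and the message are illustrative choices, not a prescription from any particular protocol.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Two separate key pairs: the sender's is used for signing,
# the receiver's is used for encryption.
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"meet at noon"

# Signature generation: sender's PRIVATE key. The library hashes the
# message with SHA-256 and signs that hash using PSS padding.
signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Encryption: receiver's PUBLIC key, OAEP padding.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = receiver_key.public_key().encrypt(message, oaep)

# Decryption: receiver's PRIVATE key.
recovered = receiver_key.decrypt(ciphertext, oaep)

# Verification: sender's PUBLIC key; raises InvalidSignature if tampered with.
sender_key.public_key().verify(
    signature,
    recovered,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified, message:", recovered.decode())
```

The pattern mirrors the rule above: encrypt with the receiver's public key, decrypt with the receiver's private key, sign with the sender's private key, verify with the sender's public key.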
Often a data or session key is encrypted instead of the message directly. That's just because symmetric encryption is more efficient (not just in compute time but also the resulting ciphertext size). Similarly usually you sign the hash instead of the message - although there are also some security related reasons for that. |
Definition of Camillo Golgi
1. Noun. Italian histologist noted for work on the structure of the nervous system and for his discovery of Golgi bodies (1844-1926).
Literary usage of Camillo Golgi
Below you will find example usage of this term as found in modern and/or classical literature:
1. The Nervous System and Its Constituent Neurones: Designed for the Use of by Lewellys Franklin Barker (1899)
"... method was first described by its inventor, camillo golgi, of Pavia, as early as 1873.* But little attention was paid to it by investigators in other ..."
2. The Lancet (1898)
"For the examination of detail central and peripheral in the nervous system the use by camillo golgi In 1880-81 and by -Santiago Ramon y Cajal in 1889 of the ..."
3. Contemporary Psychology by Guido Villa (1903)
"The most important 1 The researches of camillo golgi (professor at Pavia) are developed in his work Sulla fine anatomia degli organi centrali del sistema ..."
4. The Scientific Monthly by American Association for the Advancement of Science (1916)
"In 1885—6, camillo golgi,8 at Pavia, showed that the Laveran parasites reproduce by formation of spores, and that the paroxysm of fever begins, ..."
5. The Never-ceasing Search by Francis Otto Schmitt (1990)
"In 1909 Cajal, along with the Italian cytol- ogist, camillo golgi, whose famous "Golgi stain" was effectively used by Cajal, shared the Nobel Prize in ..."
6. Science by American Association for the Advancement of Science (1903)
"PROFESSOR EJ MAREY, of Paris, and Professor camillo golgi, of Padua, have been elected foreign corresponding members of the Vienna Academy of Sciences. ..." |
Here are 10 free science websites full of resources, experiments, printables, activities, lesson plans and more to expand your children’s science learning experiences.
- Let’s Talk Science – This terrific initiative helps kids to learn and love Science. There are different areas to branch out to, such as CurioCity – for teens and IdeaPark – for preschool and the early years. They have activities to try at home, right from the main menu list, and they offer a free cross-Canada community outreach program where university students come and do science activities with local groups. Plus, they host an annual Let’s Talk Science Challenge for kids in grades 6-8.
- School of Dragons – Based on the hit movies, How to Train Your Dragon, the School of Dragon website offers a great list of hands-on experiments, plus great additional resources that help teach the scientific method through activities and worksheets. If you visit Hiccup’s Science Workshop, you can watch videos and more. They’ve even created a grade by grade list of games on the site that co-ordinate with various curriculum guidelines.
- Ontario Science Centre – Take a virtual tour of the centre and see experiments and related activities along the way. They also have a section on the site specifically for the early years that includes video, lessons, and activities.
- Science: Government of Canada – Not only does this government site round up a great collection of games and activities from various different departments and museums around the country, but, when you explore the Educational Resources section, they also offer the opportunity for kids to “Ask A Scientist” questions (or read answers to past questions), links to recommended other resources, lesson plans, and – my favourite – a collection of 6 Activity Books that you can download and print out that are full of experiments, lessons, activities and more.
- Science.ca – Not your typical site of kids’ experiments, this site challenges students to look at experiments from a scientist’s point of view. Some of them are easy – such as looking for fossils, whereas others are extremely complex – like using a special downloadable calculator to do some equations. The site also offers a few interactive activities, featured scientists, an opportunity to ask questions, what one 14-year-old did for a science fair project, and a pretty challenging quiz.
- Science Bob – Featured on TV shows like Jimmy Kimmel, Dr. Oz, and more, Science Bob’s website is full of experiment videos, and instructions on how to do them yourself. He has put together lots of great information on science fair resources. Plus, he’s written a set of 4 science based fiction adventure stories for kids 9+ all about a set of siblings named Nick & Tesla.
- Science is Fun – from the lab of a university chemistry Professor Bassam Z. Shakhashiri, this website has a lot of information – including a great collection of experiments you can do at home, the periodic table, recommended websites and resources, chemical of the week and more. It’s a little cluttered, but it’s still a great site!
- Science Kids – Based out of New Zealand, this website is jam-packed full of resources for teaching and learning science and technology. Here you can find: games, activities, lessons, videos, experiments, pictures, science fair ideas, quizzes and more. Everything is clearly sorted and organized by topic and subtopics, making navigation easy to use.
- Science World – Thanks to the Telus World of Science Museum in British Columbia, this website has a lot of great kids’ science games such as Ernie the Electric Eel, the changing Vancouver landscape, and how bodies work. Many of these games are based on exhibits in the museum. Plus, you can visit the Big Science For Little Hands @ Home section to for some experiments to try.
- Wonderville – This Canadian website helps kids explore more than the usual science experiments. It includes lots of information on careers in a wide variety of scientific fields. There are also a lot of great interactive games, videos, experiments, and stories. You can look by topic or by grade to help find exactly what you need.
BONUS: Link to Learning: Science – Lists of grade specific lesson plans, ideas, and activities. |
Three stories beneath the telescope dome at the Hale Solar Laboratory in Pasadena, California, a rusty spiral staircase marks the top of a nearly 80-foot-deep pit, concealed by a wooden trapdoor in the basement floor. At the bottom lies a grating meant to split light into a rainbow to allow scientists to study the makeup of the sun. The building’s current owners dare not descend, deterred by the lack of oxygen and impenetrable darkness below.
When architects Liz Moule and Stefanos Polyzoides bought the observatory in 2006, they knew they were purchasing a piece of history. The original owner, astronomer George Ellery Hale, established the world’s most powerful telescopes in the first half of the 20th century, including at Mount Wilson Observatory, high above Pasadena. Moule, who runs a local architecture firm with Polyzoides, regards Hale as “a model citizen” for his influence on Pasadena’s cultural landscape and civic architecture. The Hale Solar Laboratory, with its Egyptian-style relief of the sun beaming down over the front door, grand library on the first floor, telescope dome on the roof and ominous pit in the basement, was Hale’s private refuge just a few blocks south of the university he helped found, the California Institute of Technology.
Moule and Polyzoides had no idea the building, constructed in 1924, came with hidden astronomical treasures. The whole basement was a cluttered mess of furniture, papers and boxes of junk when they purchased the historic facility (along with the more modern stucco home in front of it). “We thought we were left with stuff we were just going to get rid of,” Moule says.
In the observatory basement, Moule and volunteers from Mount Wilson—Don Nicholson and Larry Webster—discovered hundreds of glass photographic plates from the 1880s to 1930s stacked in boxes in a large wooden cabinet. The collection includes images of sunspots and solar prominences—tendrils of plasma that snake out from the sun—and solar spectra, or series of lines that represent components of light, revealing the sun’s chemical composition. Larger plates depict the cratered moon, edged with ripples from basement water damage. Some of the plates are from Hale’s telescopes, while others were clearly gifts from far-flung astronomers.
All told, there were more than 1,100 plates and other artifacts from Hale’s private collection hidden in the Solar Laboratory’s basement, says Dan Kohne, who volunteered with the nearby Carnegie Observatories’ Pasadena office to inventory the find. Polyzoides and Moule donated the historic plates to the Carnegie archives.
These photographic plates represent the painstaking way astronomers used to work, hand-positioning a telescope on an object for long enough to capture it on a glass plate coated with emulsion, then developing the plate like film in a darkroom. The first daguerreotype photograph of a star other than the sun was taken in 1850 by William Cranch Bond, the first director of Harvard College Observatory, who made a 90-second exposure of Vega. For the next 150 years or so, scientists catalogued the universe on these glass plates, about as thick as a window pane.
While technological advances in photography, telescope guidance and computing have largely made plate-based skywatching obsolete, studying glass plates was how astronomers reached historic revelations, such as the existence of galaxies beyond the Milky Way and the fact that the very fabric of the universe is expanding in all directions.
Historic plates aren’t just relics. They represent a record of the sky at particular moments in the past that can never be revisited—not even with the most powerful space observatories. Today, humanity’s most advanced telescopes can reveal distant objects that periodically brighten, dim and pop in and out of view. The European Space Agency’s (ESA) Gaia space telescope, for example, is compiling the most complete star maps yet. Some of the objects going through changes right now could have also varied in the late 19th and early-to-mid 20th centuries, and they may have been captured on glass telescope plates.
As astronomers seek to tell more complete stories of how celestial objects evolve over time, these dusty old plates may prove all the more relevant.
“We’re not time travelers, are we?” says Michael Castelaz, associate professor of physics at Brevard College in North Carolina. “So how do you ever go back in time to investigate the night sky except with the data we have already?”
By some estimates there are more than 2 million glass plates made by professional astronomers in the U.S. alone. Worldwide there are likely more than 10 million, says Rene Hudec of the Academy of Sciences of the Czech Republic in Ondrejov, including many that may still be hiding in unexpected places. While there is an online database of over 2.5 million plates from more than 570 archives, there is no truly comprehensive list. Having visited more than 70 plate archives himself, Hudec reports some repositories are well kept and cataloged, but others are a “sad experience” with little funding and no one to manage them.
Harvard, thought to house the biggest collection in the world, has some 550,000 plates, including images once analyzed by such luminaries as Henrietta Swan Leavitt and Annie Jump Cannon. As Dava Sobel chronicles in The Glass Universe: How the Ladies of the Harvard Observatory Took the Measure of the Stars, women “computers” like Leavitt and Cannon not only classified and catalogued thousands of stars from the telescope plates but also made breakthrough discoveries that inform our view of the cosmos today. Edward Pickering, director of the observatory who hired these women, wrote in 1890: “For many purposes the photographs take the place of the stars themselves, and discoveries are verified and errors corrected by daylight with a magnifying glass instead of at night with a telescope.”
Hale’s collection from the Solar Laboratory basement joined more than 200,000 plates housed by Carnegie Observatories, including the 1923 “VAR!” plate, which convinced Edwin Hubble that Andromeda is a separate galaxy from the Milky Way. The Yerkes 40-inch telescope, the Mount Wilson 60-inch, the Mount Wilson 100-inch and the Palomar 200-inch, all Hale’s projects, have each taken turns enjoying the title of “world’s largest telescope.” Their results are stored in drawers behind a short black vault door in the basement of Carnegie Observatories’ main office building in Pasadena.
Farther afield, North Carolina’s Pisgah Astronomical Research Institute (PARI) has about 350,000 items including plates, as well as film and other data. These telescope plates largely come from the United States and Canada, from universities and other institutions that didn’t have room for their collections, as well as those uncovered by accident in “14 lawn-and-leaf bags” in someone’s garage, says Castelaz, who was formerly the science director of PARI. “I could live in that plate vault. It’s so exciting.”
In 2015, Holger Peterson stumbled upon boxes containing about 300 plates when he went to the basement to make tea at the Niels Bohr Institute in Copenhagen. Some of the artifacts were clearly identifiable: a 1950 exposure from the Palomar Samuel Oschin Telescope showing a large number of galaxies, and a copy-plate from the 1919 solar eclipse expedition to Sobral, Brazil, that helped confirm Einstein’s theory of general relativity. (Einstein predicted that the sun’s gravity should bend the fabric of space around it, so the positions of background stars would shift from our perspective when the moon blocks the sun during a total solar eclipse. Measurements on glass plates were used to confirm this.) But for many plates in this collection, now located at Copenhagen University, the details of the exposures have been lost, Peterson says in an e-mail.
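For a sense of how delicate those eclipse measurements were, the deflection Einstein predicted for starlight grazing the sun is minuscule. A minimal back-of-the-envelope calculation, using standard textbook values for the sun's mass and radius rather than anything taken from these particular plates, gives:

```latex
\alpha = \frac{4 G M_\odot}{c^{2} R_\odot}
       \approx \frac{4\,(6.67\times10^{-11})\,(1.99\times10^{30})}
                    {(3.00\times10^{8})^{2}\,(6.96\times10^{8})}
       \approx 8.5\times10^{-6}\ \text{rad} \approx 1.75''
```

Resolving a positional shift of less than two arcseconds in star images recorded on emulsion is part of what made the 1919 expedition results so remarkable.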
Also in Europe, the Archives of Photographic Plates for Astronomical USE (APPLAUSE) currently comprises about 85,000 plates from five institutes in Germany and Estonia. Highlights include plates from Ejnar Hertzsprung, who helped show the relationship between stellar temperature and intrinsic brightness, and Karl Schwarzschild, who was instrumental in developing mathematical descriptions of black holes.
In Argentina, the plate archive at Cordoba Observatory houses some of the first photographs of stars in the Southern Hemisphere with about 20,000 photographs and spectra on plates dating from 1893 to 1983. The plate situations in Asia and Africa have not been as thoroughly researched. Hudec has visited various locations in China with plates and estimates some 40,000 have been both collected and digitized. Bosscha Observatory in Indonesia additionally has about 20,000 plates, he says. About 19,000 plates taken at the UK Schmidt telescope in Australia are stored in Edinburgh, Scotland, says David Malin, a photographic scientist at the Anglo-Australian Observatory. The Anglo-Australian Telescope in Siding Spring retains under 3,000 plates that were taken there, while other plates likely remain with observers who never handed them over to the observatory collections.
By the early 1990s, professional astronomers had abandoned the practice of capturing celestial images on glass in favor of digital methods that are both quicker and allow for more sophisticated computational analysis. The invention of charge-coupled devices (CCDs), which also enable smartphone cameras, has revolutionized astronomical observations. Techniques as simple as digitally “zooming in” and heightening contrast on a computer are powerful tools for studying distant, faint objects.
But historical records of the sky have multiple layers of value. As a matter of cultural preservation, telescope plates encapsulate the process by which knowledge was once acquired and represent the state of science when they were used. For roughly 150 years, astronomical data was recorded on glass, a practice that has now ended.
“Knowing about the precursors is in many ways something which even informs how we do astronomy now, so we shouldn’t forget,” says Harry Enke of the Leibniz Institute for Astrophysics Potsdam in Germany, one of the leaders of the APPLAUSE collaboration.
Astronomers can even use historical records to make discoveries today. While many cosmic processes take billions of years to evolve, “transient” objects in the sky, such as exploding stars called supernovae, change markedly over periods of weeks to years. Variable stars brighten and dim periodically, and plates can be used to determine if that period is constant or not. In 2016, one astronomer even used the Carnegie archive to point out evidence for exoplanets in a 1917 stellar spectrum, a plate made some 75 years before anyone would discover planets beyond our solar system.
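As an illustration of the kind of analysis a digitized plate series makes possible, here is a minimal sketch that searches for a variable star's period in historical brightness measurements. The dates and magnitudes below are invented placeholders rather than real plate data, and the Lomb-Scargle approach is just one plausible way to handle the irregular sampling typical of plate archives, not the specific method used by the researchers mentioned above.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical plate photometry: Julian dates and magnitudes measured
# from digitized plates spanning roughly 1900-1950 (placeholder values).
rng = np.random.default_rng(0)
true_period_days = 5.37
t = np.sort(rng.uniform(2415020, 2433282, 200))
mag = (10.0 + 0.4 * np.sin(2 * np.pi * t / true_period_days)
       + rng.normal(0, 0.05, t.size))  # noisy sinusoidal variable star

# The Lomb-Scargle periodogram copes with irregular sampling, since
# exposures were taken whenever telescope time and weather allowed.
frequency, power = LombScargle(t, mag).autopower(
    minimum_frequency=1 / 50.0,   # periods up to 50 days
    maximum_frequency=1.0)        # periods down to 1 day

best_period = 1 / frequency[np.argmax(power)]
print(f"Best-fit period: {best_period:.2f} days")

# Comparing this period with one derived from modern CCD photometry
# would show whether the star's period has drifted over a century.
```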
“Our sky is moving very slowly for our human feelings of time,” Enke says. “Modern astronomy and the modern instruments with CCDs and so on, this is barely 40 years old. If you can add another hundred years to that, that’s great.”
The study of black holes is one reason Jonathan Grindlay at Harvard got interested in digitizing old plates. He is the principal investigator of a massive plate-digitizing effort called DASCH, the Digital Access to a Sky Century @ Harvard.
When a sun-like star and a “stellar mass” black hole—typically seven times the mass of the sun—orbit a common center of gravity, the star provides a steady stream of matter ripped away by the black hole. But instead of falling directly into the black hole, the material first piles up in an accretion disk around it. After about 30 to 60 years, the disk becomes unstable and the black hole devours some of the accumulated material, resulting in a very bright outburst in optical and X-ray light. DASCH provides the first full-sky record of more than a century of these rare outbursts, allowing scientists to measure how long they are visible and how many flashes occur across the sky.
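To illustrate the kind of measurement such a century-long record enables in principle, here is a minimal sketch that scans a long, irregularly sampled light curve for outburst episodes and reports their durations. The light curve, the one-magnitude threshold, and the helper function are invented for illustration and do not represent DASCH's actual pipeline.

```python
import numpy as np

def find_outbursts(jd, mag, quiescent_mag=None, threshold=1.0):
    """Flag epochs at least `threshold` magnitudes brighter than the
    quiescent level (smaller magnitude means brighter)."""
    if quiescent_mag is None:
        quiescent_mag = np.median(mag)
    in_outburst = mag < (quiescent_mag - threshold)

    # Group consecutive bright epochs into episodes and report when each
    # began and roughly how long it stayed above the threshold.
    episodes, start = [], None
    for i, flag in enumerate(in_outburst):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            episodes.append((jd[start], jd[i - 1] - jd[start]))
            start = None
    if start is not None:
        episodes.append((jd[start], jd[-1] - jd[start]))
    return episodes

# Placeholder light curve: a ~15th-magnitude source, roughly 1900-2000,
# with one simulated outburst inserted by hand.
jd = np.linspace(2415020, 2451545, 500)
mag = np.full_like(jd, 15.0)
mag[(jd > 2430000) & (jd < 2430400)] = 13.2

for start_jd, duration in find_outbursts(jd, mag):
    print(f"Outburst starting JD {start_jd:.0f}, visible ~{duration:.0f} days")
```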
Many more telescope plates exist in the world than there are digital versions of them, and financial support for digitization and detailed cataloging is limited. A group of Czech astronomers led by Hudec visited Carnegie, PARI, Yerkes, Lick, Mount Palomar and nine other major U.S. locations from 2008 to 2012 to scope out the historical plate offerings. They found that some archives had not stored their plates properly, or had even damaged them. They tested out a transportable scanning device and recommend that institutions scan and catalog their treasures. So far, Hudec’s group has created about 50,000 plate scans across the world.
DASCH has been able to digitize about 350,000 of Harvard’s plates, which are all searchable online, and plans to get to the total of 450,000 photographs by October 2020. The last 100,000 plates are stellar spectra which, while also interesting, are not being scanned because only the direct images can show visual changes in brightness over time. The whole process of cleaning and scanning is “like a choreographed ballet,” Grindlay says. In Europe, APPLAUSE is also digitizing its plates, taking inspiration from DASCH in some of its methods but using commercial scanners instead of custom-made devices.
The digitization enterprise stirred up controversy when some historians balked at the idea that original markings on the plates would be cleaned off in the scanning process, Grindlay says. From one perspective, if an astronomer of the past drew a circle around an object of interest, cleaning the plate could reveal more stars hiding behind the curve. But the markings are also a record of the scientific process. A 2016 study prompted by DASCH found that many astronomers and historians alike value the annotations on plates and their covers but also believe that photographing or scanning those markings before cleaning them off is sufficient for preservation, unless the plate is particularly important in the history of astronomy. DASCH follows this protocol, photographing all original markings, including on the plate’s “jacket” cover, before cleaning. The original annotations are saved on the most valuable plates, such as those made by Henrietta Swan Leavitt, “in deference to historians,” Grindlay says.
Even passionate archivists like Grindlay agree that once a plate is properly scanned and cataloged, there’s nothing more one can learn from the physical object that can’t be obtained from a high-resolution digital copy and a photograph of the annotations. Nonetheless, Grindlay says, “the original plates are the ultimate record and must be fully preserved, as they have been at the Harvard College Observatory.”
For Kohne, the plates are akin to works of art. Much of the archive at Pasadena’s Carnegie Observatories office, including the loot from the architect couple’s basement, represents Hale’s “studios,” the way a painting done in Raphael’s workshop by a different artist would be credited to the studio of the famous painter. In addition to being scientists, 20th-century telescope operators were skilled craftspeople.
“They’re capturing the light rays that have been traveling for thousands and millions of light-years, and getting it on the negative exposed exactly right,” Kohne says. “In the history of photography, it should be in there somehow.”
Hale’s iconic Solar Laboratory telescope in Pasadena will not remain dormant. A Mount Wilson volunteer crew is working to aluminize the mirrors so the telescope can clearly project the sun onto a viewing area in the basement. They plan to have local students learn to use the telescope for solar observing, too. Eventually, Moule hopes the team can get the diffraction grating at the bottom of the pit working again, or install a new one, allowing a new generation to examine the sun’s composition as Hale did.
On a perfectly sunny Southern California day in March, Mount Wilson volunteer Ken Evans opened the dome to work on its restoration. Evans, Kohne and Moule excitedly talked of viewing sunsets through the telescope and perhaps having a summer solstice party, if the mirrors are ready in time. When Evans, a retired engineer, rotated the dome’s slit to face Mount Wilson, the group lamented that a tree blocked the view of Hale’s other temples of astronomy in the distance.
Moule and Polyzoides have donated Hale’s journals, also discovered in the basement, to Caltech. Hale’s typewriter and desk remain on the first floor in the sunny, elegant library, a booklover’s dream, with an Egyptian-style bas-relief of a figure holding a bow on a chariot. The ancient Egyptians likely interested Hale because they worshipped the sun, Moule says. There’s even a crate in the basement addressed to him with another bas-relief inside—the next Hale mystery Moule plans to tackle. She describes her role at Hale’s Solar Laboratory as “lighthouse keeper.”
“Sadly solar astronomy has moved on past the technology of that building, so it’s not something of regular use, in the way that a lot of lighthouses are not used for what they were originally intended for either,” Moule says. “But it’s an important monument, and I’m a caretaker.”
This particular lighthouse guards a telescope that once used an instrument plunged almost 80 feet in the darkness to split sunlight from 93 million miles away. And thanks to Mount Wilson volunteers, the sun may soon stream through the cosmic lighthouse once again.
If we want to know more about whether life could survive on a planet outside our solar system, it's important to know the age of its star. Young stars have frequent releases of high-energy radiation called flares that can zap their planets' surfaces. If the planets are newly formed, their orbits may also be unstable. On the other hand, planets orbiting older stars have survived the spate of youthful flares, but have also been exposed to the ravages of stellar radiation for a longer period of time.
Scientists now have a good estimate for the age of one of the most intriguing planetary systems discovered to date -- TRAPPIST-1, a system of seven Earth-size worlds orbiting an ultra-cool dwarf star about 40 light-years away. Researchers say in a new study that the TRAPPIST-1 star is quite old: between 5.4 and 9.8 billion years. This is up to twice as old as our own solar system, which formed some 4.5 billion years ago.
The seven wonders of TRAPPIST-1 were revealed earlier this year in a NASA news conference, using a combination of results from the Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile, NASA's Spitzer Space Telescope, and other ground-based telescopes. Three of the TRAPPIST-1 planets reside in the star's "habitable zone," the orbital distance where a rocky planet with an atmosphere could have liquid water on its surface. All seven planets are likely tidally locked to their star, each with a perpetual dayside and nightside.
At the time of its discovery, scientists believed the TRAPPIST-1 system had to be at least 500 million years old, since it takes a star of TRAPPIST-1's low mass (roughly 8 percent that of the Sun) at least that long to contract to its minimum size, just a bit larger than the planet Jupiter. However, even this lower age limit was uncertain; in theory, the star could be almost as old as the universe itself. Are the orbits of this compact system of planets stable? Might life have enough time to evolve on any of these worlds?
"Our results really help constrain the evolution of the TRAPPIST-1 system, because the system has to have persisted for billions of years. This means the planets had to evolve together, otherwise the system would have fallen apart long ago," said Adam Burgasser, an astronomer at the University of California, San Diego, and the paper's first author. Burgasser teamed up with Eric Mamajek, deputy program scientist for NASA's Exoplanet Exploration Program based at NASA's Jet Propulsion Laboratory, Pasadena, California, to calculate TRAPPIST-1's age. Their results will be published in The Astrophysical Journal.
It is unclear what this older age means for the planets' habitability. On the one hand, older stars flare less than younger stars, and Burgasser and Mamajek confirmed that TRAPPIST-1 is relatively quiet compared to other ultra-cool dwarf stars. On the other hand, since the planets are so close to the star, they have soaked up billions of years of high-energy radiation, which could have boiled off atmospheres and large amounts of water. In fact, the equivalent of an Earth ocean may have evaporated from each TRAPPIST-1 planet except for the two most distant from the host star: planets g and h. In our own solar system, Mars is an example of a planet that likely had liquid water on its surface in the past, but lost most of its water and atmosphere to the Sun's high-energy radiation over billions of years.
However, old age does not necessarily mean that a planet's atmosphere has been eroded. Given that the TRAPPIST-1 planets have lower densities than Earth, it is possible that large reservoirs of volatile molecules such as water could produce thick atmospheres that would shield the planetary surfaces from harmful radiation. A thick atmosphere could also help redistribute heat to the dark sides of these tidally locked planets, increasing habitable real estate. But this could also backfire in a "runaway greenhouse" process, in which the atmosphere becomes so thick that the planet's surface overheats, as on Venus.
"If there is life on these planets, I would speculate that it has to be hardy life, because it has to be able to survive some potentially dire scenarios for billions of years," Burgasser said.
Fortunately, low-mass stars like TRAPPIST-1 have temperatures and brightnesses that remain relatively constant over trillions of years, punctuated by occasional magnetic flaring events. The lifetimes of tiny stars like TRAPPIST-1 are predicted to be much, much longer than the 13.7 billion-year age of the universe (the Sun, by comparison, has an expected lifetime of about 10 billion years).
"Stars much more massive than the Sun consume their fuel quickly, brightening over millions of years and exploding as supernovae," Mamajek said. "But TRAPPIST-1 is like a slow-burning candle that will shine for about 900 times longer than the current age of the universe."
Some of the clues Burgasser and Mamajek used to measure the age of TRAPPIST-1 included how fast the star is moving in its orbit around the Milky Way (speedier stars tend to be older), its atmosphere's chemical composition, and how many flares TRAPPIST-1 had during observational periods. These variables all pointed to a star that is substantially older than our Sun.
Future observations with NASA's Hubble Space Telescope and upcoming James Webb Space Telescope may reveal whether these planets have atmospheres, and whether such atmospheres are like Earth's.
"These new results provide useful context for future observations of the TRAPPIST-1 planets, which could give us great insight into how planetary atmospheres form and evolve, and persist or not," said Tiffany Kataria, exoplanet scientist at JPL, who was not involved in the study.
Future observations with Spitzer could help scientists sharpen their estimates of the TRAPPIST-1 planets' densities, which would in turn inform understanding of the planets' compositions.
For more information about TRAPPIST-1, visit:
News Media Contact: Elizabeth Landau
Jet Propulsion Laboratory, Pasadena, Calif.