With the contribution of the LIFE programme of the European Union - LIFE14 ENV/GR/000611

Gases that trap heat in the atmosphere are called greenhouse gases. The main ones are:
- Carbon dioxide (CO2): Carbon dioxide enters the atmosphere through the burning of fossil fuels (coal, natural gas, and oil), solid waste, trees and wood products, and also as a result of certain chemical reactions (e.g., the manufacture of cement). Carbon dioxide is removed from the atmosphere (or "sequestered") when it is absorbed by plants as part of the biological carbon cycle.
- Methane (CH4): Methane is emitted during the production and transport of coal, natural gas, and oil. Methane emissions also result from livestock and other agricultural practices and from the decay of organic waste in municipal solid waste landfills.
- Nitrous oxide (N2O): Nitrous oxide is emitted during agricultural and industrial activities, as well as during the combustion of fossil fuels and solid waste.

CO2 is the primary greenhouse gas: in 2015 it accounted for about 82.2% of all U.S. greenhouse gas emissions from human activities, while CH4 accounted for 10% and N2O for 5%.

The LIFE GYM [LIFE14 ENV/GR/000611] project is co-funded by the LIFE programme, the EU financial instrument for the environment. The sole responsibility for the content of this report lies with the authors. It does not necessarily reflect the opinion of the European Union. Neither the EASME nor the European Commission is responsible for any use that may be made of the information contained therein. Start Date: 15 September 2015 – Duration: 35 months
A coherent text can be described as a text where the information is organised and connected into a logically connected unit, with cohesive devices joining the parts so that the text makes sense. One important cohesive device is the topic sentence. This is the sentence which introduces the subject of the text and usually occurs at the beginning of the text. The continuity and organisation of the information is also an important factor in constructing a coherent text. In addition, there are many words called linking words, which act as links between clauses and sentences in a text.

Examples of Linking Devices

- and, but, or, so, nor, for, yet, also, too

Other sentence connectors
- Ordering: firstly, secondly, next, in addition, furthermore, finally, in conclusion
- Contrasting: however, on the other hand, in contrast, in comparison, nevertheless
- Drawing conclusions: as a result, thus, therefore, consequently, in conclusion

Pronouns: I, he, she, it, we, you, they, them, us, etc.
Demonstratives: this, that, these, those

Subordinating conjunctions (these connect clauses to form a sentence and can come at the beginning or in the middle of the sentence)
- Comparing and contrasting: while, whereas, although, though, even though, besides
- Time: after, before, when, until
- Cause: since, because, so that

Cause and Effect
- Because of
- Due to
- As
- Owing to
- Thus (formal)
- As a consequence
- As a result

Comparison and Contrast
- In contrast to
- In comparison
- On the contrary
- Even though
- Compared with/to
- On the other hand
- Just as
- The same is true for
- In the same way
- The same can be said for

Purpose
- So as to
- In order to
- For the purpose of
- So that

Addition and Amplification
- As well as
- In addition
- In fact
- For example
- For instance
- Such as
- That is to say
- And by this I mean
- This shows
- This means
- In other words
- This indicates that

Reference and Introducing
- I would like to start by (-ing)
- What I want to discuss is
- I am going to discuss/write about ...
- My objectives are
- N.N. mentions that ...
- N.N. claims that ...
- According to N.N. ...
- What N.N. seems to think is ...

Turning to a New Topic
- Now I would like to turn to
- The next point I would like to deal with is ...
- The next aspect I would like to present is ...
- Another point to consider is ...

Returning to a Point
- As I mentioned earlier ...
- To return to what I wrote earlier ...
- As I said / wrote in the introduction ...

Emphasising
- It is quite clear that ...
- What this shows is ...
- As you can see ...
- It is evident that ...

Concluding
- So, to sum up ...
- I would like to conclude by (-ing)
- In conclusion ...
- Finally ...
- Finally, I could say that ...
- Eventually, I would say that ...

Attitude and Intention
- I believe that ...
- I think ...
- What I am trying to say ...
- In my opinion ...
- As far as I am concerned ...
- It seems to me that ...
- I feel ...
- The point I am trying to make ...
- As I see it ...
- What I feel is ...

Example of Text Cohesion

Compare these two texts and identify the linking devices in the second text.

Text 1: Bobby was a Skye Terrier. Bobby roamed the streets of Edinburgh. Bobby met John Grey in the 1850s. Grey worked as a night watchman in the Edinburgh police. Bobby kept John Grey company. The winters in Edinburgh can be very cold. Grey fell sick with tuberculosis. Tuberculosis was a fatal disease back in the 1800s. On 15 February 1858, Grey died. Bobby followed John Grey to his grave at Greyfriars Kirkyard in the old part of Edinburgh. Bobby did not leave the grave except for when he was hungry. Bobby did not leave the grave except for when he was very cold. People started to notice the dog in the churchyard. People started worrying about Bobby. The City of Edinburgh had decided that ownerless dogs should be shot. The city council bought a licence for Bobby. Bobby could keep on watching his master’s grave. Bobby survived his master by 14 years. He died in 1872. He was buried just inside the gate of the churchyard. He could not be buried together with his master. The church ground is sacred.

Text 2: Bobby was a Skye Terrier roaming the streets of Edinburgh in the 1850s until he met John Grey. Grey worked as a night watchman in the Edinburgh police and Bobby kept him company. The winters in Edinburgh can be very cold and one day Grey fell sick with tuberculosis. This was a fatal disease back in the 1800s and on 15 February 1858, Grey died. Bobby followed him to his grave at Greyfriars Kirkyard in the old part of Edinburgh and he did not leave the grave except for when he was hungry or very cold. People started to notice the dog in the churchyard and they started worrying about Bobby because the City of Edinburgh had decided that ownerless dogs should be shot. However, the city council bought him a licence and he could keep on watching his master’s grave. Bobby survived his master by 14 years, and when he died in 1872 he was buried just inside the gate of the churchyard. He could not be buried together with his master, since church ground is sacred.

Tasks and Activities

Try the following tasks:
- Text Cohesion - Linking Devices - Drag and Drop
- Text Cohesion, Linking Words - Drag and Drop
Future settlers take note: Galoshes and umbrellas are a must on Saturn's moon Titan, where mornings are eclipsed by dreary drizzles of methane. Getting drenched would be the least of your worries, however, as Saturn's largest satellite plunges to a bone-chilling -297 degrees Fahrenheit (-183 degrees Celsius) at the surface and its swirling orange atmosphere is full of hydrocarbons, such as methane, which is natural gas—and no oxygen.

“Crude oil minus the sulfur is a decent estimate of what the haze is,” said lead author of a new study of Titan's weather, Mate Adamkovics, an astronomer at the University of California, Berkeley. “Really we don't know for sure, but I would describe it as tiny particles of wax that are really, really cold, or waxy snowflakes.” Adamkovics added that while scientists are not sure how toxic the particles are, the lack of oxygen would be much more of a hazard.

Using near-infrared images from Hawaii's W. M. Keck Observatory and Chile's Very Large Telescope, the team of astronomers reveals a nearly global cloud cover at high elevations on Titan. They also found persistent morning drizzle made of methane over the western foothills of Xanadu, Titan's largest “continent.” Measuring about 3,200 miles (5,150 kilometers) across, Titan is larger than Mercury and Pluto and about 40 percent the diameter of Earth. It is the only moon in the solar system with a dense, planet-like atmosphere (10 times denser than Earth's).

As is the case on Earth, where features like lakes and mountains can morph and direct weather systems, Titan's terrain also could be a rain maker. “Titan's topography could be causing this drizzle,” said study team member Imke de Pater, an astronomy professor at UC Berkeley. “The rain could be caused by processes similar to those on Earth: Moisture-laden clouds pushed upslope by winds condense to form a coastal rain.”

The new findings, detailed in the Oct. 11 issue of Science Express, an online version of the journal Science, provide strong evidence supporting past cloud-cover observations and possible indicators of methane drizzle over parts of Titan. In 2005, the Huygens probe that had been aboard NASA's Cassini spacecraft gathered data supporting the existence of frozen methane clouds at higher elevations and liquid methane clouds, with possible drizzle, lower in the atmosphere. But the extent of such clouds was unclear. “A single weather station like Huygens cannot characterize the meteorology on a 'planet-wide' scale,” said UC Berkeley astronomer Michael Wong, who was part of the recent study. And until now, liquid rain was inferred from reports of lakes of liquid hydrocarbon, which scientists presumed were filled by methane precipitation.

Adamkovics and his team analyzed infrared measurements throughout Titan's atmosphere. By subtracting out the absorption and scattering due to aerosols low in the atmosphere as well as light from the surface, they were left with a signal that was due to actual droplets of liquid methane. Using a “radiative transfer model,” the scientists distinguished between minuscule drops inside clouds and larger ones that form drizzle. The results paint a dreary picture with a global cloud of frozen methane hovering at a height of about 16 to 22 miles (25 to 35 kilometers), liquid methane clouds below 12 miles (20 kilometers) and drizzling methane at lower elevations. “We show that the solid cloud covers the globe and the drizzle happens predominantly in the morning,” Adamkovics told SPACE.com.
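The subtraction step described above can be pictured with a toy calculation. The sketch below is only an illustration of the general idea; the array values, the simple additive model, and the fixed threshold are all assumptions for the sake of the example, not the team's actual radiative transfer pipeline.

```python
import numpy as np

# Toy near-infrared spectra (arbitrary units) at one spot on Titan's disk.
observed = np.array([1.00, 0.95, 0.90, 0.97, 1.02])  # measured signal
surface  = np.array([0.60, 0.58, 0.55, 0.57, 0.61])  # modelled light from the surface
aerosol  = np.array([0.35, 0.33, 0.30, 0.33, 0.34])  # modelled low-altitude haze scattering

# Whatever is left over is attributed to liquid methane droplets.
droplet_signal = observed - surface - aerosol
print(droplet_signal)  # roughly [0.05 0.04 0.05 0.07 0.07]

# A radiative transfer model would then be fitted to this residual to decide
# whether it is better explained by tiny in-cloud droplets or by larger,
# drizzle-sized drops; here a fixed threshold stands in for that fit.
DRIZZLE_THRESHOLD = 0.06  # purely illustrative
print(droplet_signal > DRIZZLE_THRESHOLD)
```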
The methane droplets inside Titan's clouds are estimated to be a thousand times larger than those in terrestrial clouds. Yet, both contain similar moisture contents, Adamkovics said. And if a cosmic cloud wringer were to empty out Titan's clouds, about six-tenths of an inch (1.5 centimeters) of water would blanket the moon's surface. The drizzle or mist appears to dissipate after about 10:30 a.m. local Titan time, which is about three Earth days after sunrise. Titan takes nearly 16 Earth days to rotate once.
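The "three Earth days after sunrise" figure can be checked with a quick back-of-the-envelope calculation. The snippet below assumes a 24-hour local Titan clock with sunrise at 6:00 a.m.; both of those conventions are our assumptions, not statements from the article.

```python
# Rough check: how long after sunrise is 10:30 a.m. local Titan time, in Earth days?
titan_rotation_earth_days = 16            # "nearly 16 Earth days to rotate once"
earth_days_per_local_hour = titan_rotation_earth_days / 24

local_hours_after_sunrise = 10.5 - 6.0    # drizzle dissipates around 10:30 a.m. local time
print(local_hours_after_sunrise * earth_days_per_local_hour)  # 3.0 Earth days
```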
There are a variety of sores that can occur in or around the mouth. Most are benign, but some may be indicative of cancer.

Canker sores are small, creamy white ulcers with a red border that always appear inside the mouth. Canker sores can be painful, but they are not contagious. They usually heal in one to two weeks. Prescription drugs and over-the-counter topical treatments can help reduce the pain.

Also known as fever blisters, cold sores are fluid-filled blisters that form on the lips or around the mouth. Cold sores are usually caused by the herpes simplex virus, and are both contagious and painful. Fever, sunburn, trauma, hormonal changes or emotional upset can trigger their appearance. While there is currently no cure, cold sores can be treated with prescription ointments to help alleviate the pain. It is also important to wash your hands frequently and avoid sharing personal products to help prevent the spread of the infection to other people.

Also known as oral thrush, candidiasis is a mouth sore caused by a fungal infection. Painful red and cream-colored patches form on moist areas of the mouth. Candidiasis can cause difficulties with swallowing and taste. It is most commonly seen in denture wearers or people who have problems with their immune systems. Sometimes it occurs as a result of an unrelated antibiotic treatment, which can decrease normal bacterial development in the mouth. Saliva substitutes and antifungal creams are used to treat candidiasis.

Chronic irritations inside the mouth, such as cheek chewing, dentures or braces, sometimes cause benign white patches to form inside the mouth. The treatment is to alleviate the irritation to allow for natural healing.

Leukoplakias consist of thick, white lesions that most commonly form beneath or around the tongue, cheeks or gums. Leukoplakias are painless, but can become cancerous over time. These mouth sores are most often seen in tobacco users. A biopsy may be needed to accurately diagnose leukoplakias.

Oral cancers appear as red or white patches of mouth tissue or small ulcers that look like canker sores, but are painless. Oral cancers usually form on the tongue or floor of the mouth, but can occur on any tissue in and around the mouth. This includes cancers of the tonsils, adenoids, uvula (soft palate), roof of the mouth (hard palate), inside the lining of the cheeks, the gums, teeth, lips, the area behind the wisdom teeth and salivary glands. Some of these lesions may be benign, others may be malignant, and still others are precancerous. The most common types of precancerous cells in the mouth are:
- Leukoplakias: Leukoplakias consist of thick, white lesions that most commonly form beneath or around the tongue, cheeks or gums. These mouth sores are most often seen in tobacco users.
- Erythroplakias: These lesions appear as a red, raised area in the mouth and have a higher incidence of becoming malignant than leukoplakias.
A biopsy is often needed to diagnose leukoplakias and erythroplakias. Squamous cell carcinomas are the most common type of oral cancer. Less common are lymphoma and salivary gland cancers. Most oral cancers occur in people age 45 and older. When cancers of the mouth do metastasize, they are most likely to spread to the lymph nodes in the neck.

If you have a mouth sore that won’t heal, please contact our office and schedule an appointment with one of our otolaryngologists.
I. Introduction
  A. Definition of mental health
  B. Importance of mental health
II. Physical Health Benefits of Mental Health
  A. Reduced risk of chronic diseases
  B. Improved immune system
  C. Better sleep quality
III. Emotional Health Benefits of Mental Health
  A. Enhanced mood and emotional resilience
  B. Reduced stress and anxiety
  C. Improved self-esteem and self-confidence
IV. Social Health Benefits of Mental Health
  A. Stronger relationships and social support
  B. Improved communication and emotional intelligence
  C. Increased community involvement
V. Cognitive Health Benefits of Mental Health
  A. Enhanced focus and productivity
  B. Improved memory and cognitive function
  C. Increased creativity and problem-solving abilities
VI. Overall Well-being Benefits of Mental Health
  A. Greater life satisfaction and happiness
  B. Increased resilience and ability to cope with challenges
  C. Better quality of life and overall well-being

Article: The Mental Health Benefits You Should Know

Mental health is a crucial aspect of our overall well-being. It refers to our emotional, psychological, and social well-being. When our mental health is in good shape, it allows us to cope with the stresses of life, make positive choices, and maintain healthy relationships. In this article, we will discuss the various benefits of mental health and how it can positively impact different areas of our lives.

Physical Health Benefits of Mental Health

Good mental health has a direct impact on our physical well-being. When we prioritize our mental health, we can experience the following physical health benefits:

1. Reduced Risk of Chronic Diseases
Studies have shown that individuals with good mental health have a lower risk of developing chronic diseases such as heart disease, diabetes, and autoimmune disorders. This is because mental health influences our lifestyle choices, such as engaging in regular exercise, eating a balanced diet, and avoiding smoking and excessive alcohol consumption.

2. Improved Immune System
A strong mental state supports a healthy immune system. Stress and negative emotions can weaken our immune system, making us more susceptible to illnesses. On the other hand, positive mental health can strengthen our immune response, helping us fight off infections and diseases more effectively.

3. Better Sleep Quality
Mental health plays a significant role in our sleep patterns. When we have good mental health, we are more likely to have restful and rejuvenating sleep. Adequate sleep is essential for our overall health, as it allows our body and mind to repair and recharge, leading to increased energy levels and improved cognitive function.

Emotional Health Benefits of Mental Health

Prioritizing mental health positively impacts our emotional well-being. Here are some emotional health benefits associated with good mental health:

1. Enhanced Mood and Emotional Resilience
When our mental health is in balance, we experience improved mood and emotional resilience. We can navigate through negative emotions more effectively, bounce back from setbacks, and maintain a positive outlook on life. This leads to better emotional well-being and a higher overall satisfaction with life.

2. Reduced Stress and Anxiety
Good mental health helps us manage stress and anxiety effectively. It empowers us with coping mechanisms and relaxation techniques to handle challenging situations. By reducing stress levels, we can improve our mental clarity, decision-making abilities, and overall quality of life.

3. Improved Self-esteem and Self-confidence
Mental health plays a significant role in our self-esteem and self-confidence. When we prioritize our mental well-being, we develop a positive self-image, believe in our abilities, and have a greater sense of self-worth. This enables us to take on challenges, assert ourselves, and have healthier relationships with ourselves and others.

Social Health Benefits of Mental Health

Strong mental health positively impacts our social interactions and relationships. Here are some social health benefits associated with good mental health:

1. Stronger Relationships and Social Support
Good mental health improves our ability to build and maintain strong relationships. When we are emotionally well, we have healthier communication patterns, empathy, and understanding towards others. This fosters deeper connections with loved ones and provides a strong social support system during challenging times.

2. Improved Communication and Emotional Intelligence
Mental health influences our communication skills and emotional intelligence. When our mental well-being is prioritized, we develop better listening skills, empathy, and the ability to express ourselves effectively. This enhances our relationships, reduces conflicts, and creates a positive social environment.

3. Increased Community Involvement
Individuals with good mental health are often more involved in their communities. They actively participate in social activities, volunteer work, and initiatives that contribute to the well-being of their community. This engagement fosters a sense of belonging, purpose, and fulfillment.

Cognitive Health Benefits of Mental Health

Mental health has a significant impact on our cognitive abilities. Here are some cognitive health benefits associated with good mental health:

1. Enhanced Focus and Productivity
When our mental health is in check, we experience improved focus and productivity. We can concentrate better on tasks at hand, manage distractions effectively, and maintain high levels of performance. This leads to increased efficiency and success in both personal and professional pursuits.

2. Improved Memory and Cognitive Function
Good mental health positively affects our memory and cognitive function. A sound mental state promotes better information processing, retention, and retrieval. It also supports the development of cognitive skills such as problem-solving, critical thinking, and decision-making.

3. Increased Creativity and Problem-solving Abilities
Good mental health fosters creativity and enhances problem-solving abilities. When our mind is in balance, we can think more innovatively, generate new ideas, and approach challenges from different perspectives. This allows us to adapt to changing situations and come up with effective solutions.

Overall Well-being Benefits of Mental Health

Prioritizing mental health brings about numerous benefits to our overall well-being. Here are some overarching benefits associated with good mental health:

1. Greater Life Satisfaction and Happiness
Individuals with good mental health tend to have greater life satisfaction and overall happiness. When our mental well-being is nurtured, we experience a sense of fulfillment, contentment, and purpose in life. This positively impacts every aspect of our existence.

2. Increased Resilience and Ability to Cope with Challenges
Good mental health equips us with the resilience to navigate life's challenges effectively. It enables us to bounce back from setbacks, learn from failures, and adapt to new circumstances. This resilience enhances our ability to cope with stressors and grow stronger through adversity.

3. Better Quality of Life and Overall Well-being
Ultimately, prioritizing mental health leads to a better quality of life and overall well-being. When we take care of our mental well-being, we experience improved physical health, emotional stability, satisfying relationships, and cognitive abilities. This holistic well-being contributes to a more fulfilling and meaningful life.

In conclusion, mental health encompasses various dimensions of our well-being, including physical, emotional, social, and cognitive aspects. By prioritizing mental health, we can experience numerous benefits that positively impact every facet of our lives. It is essential to pay attention to our mental well-being and take proactive steps in maintaining it to live a healthier and happier life.

Frequently Asked Questions

Q1: How does mental health affect physical health?
A1: Mental health influences physical health by reducing the risk of chronic diseases, improving the immune system, and enhancing sleep quality.

Q2: Can good mental health improve relationships?
A2: Yes, good mental health improves relationships by fostering stronger connections, improving communication, and developing emotional intelligence.

Q3: What are the cognitive benefits of good mental health?
A3: Good mental health enhances cognitive abilities such as focus, memory, problem-solving, and creativity.

Q4: Can mental health impact overall well-being?
A4: Yes, mental health has a significant impact on overall well-being, leading to greater life satisfaction, resilience, and a better quality of life.

Q5: How can I prioritize my mental health?
A5: You can prioritize your mental health by practicing self-care, seeking support from loved ones or professionals, engaging in stress-relieving activities, and maintaining a healthy work-life balance.
The LHCb collaboration at CERN has seen, for the first time, the matter-antimatter asymmetry known as CP violation in a particle dubbed the D0 meson. The finding, presented today at the annual Rencontres de Moriond conference and in a dedicated CERN seminar, is sure to make it into the textbooks of particle physics.

“The result is a milestone in the history of particle physics. Ever since the discovery of the D meson more than 40 years ago, particle physicists have suspected that CP violation also occurs in this system, but it was only now, using essentially the full data sample collected by the experiment, that the LHCb collaboration has finally been able to observe the effect,” said CERN Director for Research and Computing, Eckhard Elsen.

The term CP refers to the transformation that swaps a particle with the mirror image of its antiparticle. The weak interactions of the Standard Model of particle physics are known to induce a difference in the behavior of some particles and of their CP counterparts, an asymmetry known as CP violation. The effect was first observed in the 1960s at Brookhaven Laboratory in the US in particles called neutral K mesons, which contain a “strange quark”, and, in 2001, experiments at the SLAC laboratory in the US and the KEK laboratory in Japan also observed the phenomenon in neutral B mesons, which contain a “bottom quark”. These findings led to the award of two Nobel prizes in physics, one in 1980 and another in 2008.

CP violation is an essential feature of our universe, necessary to induce the processes that, following the Big Bang, established the abundance of matter over antimatter that we observe in the present-day universe. The size of CP violation observed so far in Standard Model interactions, however, is too small to account for the present-day matter-antimatter imbalance, suggesting the existence of additional as-yet-unknown sources of CP violation.

Read the paper:
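For reference, the quantity behind measurements of this kind is the CP asymmetry of a given decay. The definitions below are the standard textbook ones, not quoted from the LHCb paper itself:

\[
A_{CP}(f) = \frac{\Gamma(D^0 \to f) - \Gamma(\bar{D}^0 \to f)}{\Gamma(D^0 \to f) + \Gamma(\bar{D}^0 \to f)},
\qquad
\Delta A_{CP} = A_{CP}(K^+K^-) - A_{CP}(\pi^+\pi^-),
\]

where \(\Gamma\) denotes a decay rate and \(f\) is a final state accessible to both the D0 meson and its antiparticle. The LHCb observation was made through the difference \(\Delta A_{CP}\) between the K+K− and π+π− final states, which was found to differ from zero.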
What are the causes of diabetic retinopathy? Long-term diabetes, and the changes in blood-sugar levels that come with it, is the main culprit. People suffering from diabetes generally develop diabetic retinopathy after at least ten years of having the disease. Once you are diagnosed with diabetes, it is essential to have an eye exam once a year or more.

In the early stage of diabetic retinopathy, called background or non-proliferative retinopathy, high blood sugar in the retina damages blood vessels, which bleed or leak fluid. This leaking or bleeding causes swelling in the retina, which forms deposits.

In the later stage of diabetic retinopathy, called proliferative retinopathy, new blood vessels begin to grow on the retina. These new blood vessels may break, causing bleeding into the vitreous, which is the clear gelatinous matter that fills the inside of the eye. This breakage can cause serious vision difficulties. This form of diabetic retinopathy can cause blindness, and is therefore the more serious form of the disease.

It is not hard to greatly reduce your risk of diabetic retinopathy by following some simple steps and being aware of your overall health. The most important factor you can control is maintaining your blood sugar at a healthy level. Eating a healthy diet will help greatly in controlling blood sugar levels. A regular exercise regimen is also a great help. Finally, make sure to listen to your doctor’s instructions.
Predictive tests can provide information about how a patient may respond (or be resistant) to treatment. Some DNA variants that lead to cancer also make the cancer cells susceptible to the effects of certain drugs. These drugs are called targeted therapies, because they target the genetic changes as a way of fighting the cancer. The targeted therapy is specifically designed for a particular pathway in the cancer cell making it more likely than other non-targeted treatment options to kill the cancer cells. Genomic testing of the tumor tissue (also called tumor profiling), which is a predictive test, can involve one specific tumor gene or several tumor genes depending on validated and repeated trials. Targeted therapies are available for colorectal, lung, and breast cancers, and a few others. Unfortunately, even the most promising targeted therapies that report a highly statistically significant and clinically relevant reduction in the risk of disease events are unlikely to benefit all or even most patients. Large scale studies are needed to understand the incredible variability of cancer disease expression. The use of predictive genomic testing clearly has enormous clinical potential to help very rare, select individuals, but more information is needed to apply this in a larger population. The term predictive testing may also refer to predisposition testing which is different because it looks at the inherited (or germline) DNA. Click here to learn more about scheduling a genetic counseling appointment for questions about pediatric or adult genetic conditions.
- What are the differences in instructional approach and sequencing in English and Spanish language arts? Does this vary by program model and grade level?
- How much coordination should there be in literacy instruction across the two languages? Does this vary by program model or grade level?
- What literacy skills transfer across English and Spanish and which need to be taught explicitly in each language?
- Are there standards for Spanish language arts? Should they be different for L1 and L2 learners?
- What characteristics are important when choosing basal readers and other curricular materials for Spanish literacy instruction in TWI programs?
- What literacy skills are taught through the content areas and what are taught through language arts lessons?
- How do you teach a classroom of students with varying levels of literacy and reading readiness?
- Are any special supports given to students while they are developing literacy skills in their second language as opposed to their first?

5. What characteristics are important when choosing basal readers and other curricular materials for Spanish literacy instruction in TWI programs?

Literacy specialists recommend high-interest materials that take into account students' backgrounds, levels of proficiency, and learning preferences. In TWI programs, the following types of reading materials are particularly appropriate:
- Original texts in the partner language, rather than translations of English resources. Translated texts undermine the goal of cross-cultural awareness and true biculturalism, as they lack authenticity of themes, character motivation, and underlying values represented. The language is also unnatural and is not designed to capitalize on the playfulness, rhythm, and rhyme of each language.
- Texts that relate to students' backgrounds
- Leveled texts (see Question #10 in the Language Development section)
- Both fiction and nonfiction books in a variety of genres and by many different authors
- A combination of language-rich materials that allow for teaching part-to-whole aspects of literacy.
The Mingo people are an Iroquoian group of Native Americans made up of peoples who migrated west to the Ohio Country in the mid-eighteenth century. Anglo-Americans called these migrants mingos, a corruption of mingwe, an Eastern Algonquian name for Iroquoian-language groups in general. Mingos have also been called “Ohio Iroquois” and “Ohio Seneca”. Most were forced to move to Kansas and later Indian Territory (Oklahoma) under Indian Removal programs. Their descendants reorganized as a tribe recognized in 1937 by the federal government as the Seneca-Cayuga Tribe of Oklahoma. The Mingos were an independent group in the Six Nations of the Iroquois Confederacy: Cayuga nation, Mohawk, Oneida, Seneca nation in Western New York State, Tuscarora, and Onondaga and were mostly Senecas and Cayugas. The etymology of the name Mingo derives from the Delaware Indian’s Algonquian word mingwe or Minque, meaning treacherous. The Mingos were noted for having a bad reputation and were sometimes referred to as Blue Mingos or Black Mingos for their misdeeds. The people who became known as Mingos migrated to the Ohio Country in the mid-eighteenth century, part of a movement of various Native American tribes to a region that had been sparsely populated for decades but controlled as a hunting ground by the Iroquois. The “Mingo dialect” that dominated the Ohio valley from the late 17th to early 18th centuries is considered a variant most similar to the Seneca language. After the French and Indian War (1754-1763), the Cayuga moved to Ohio, where the British granted them a reservation along the Sandusky River. They were joined there by the Shawnee of Ohio and the rest of the Mingo confederacy. Their villages were increasingly an amalgamation of Iroquoian Seneca, Wyandot and Susquehannock; and Algonquian-language Shawnee and Delaware migrants. Although the Iroquois Confederacy had claimed hunting rights and sovereignty over much of the Ohio River Valley since the late 17th century, these people increasingly acted independently. When Pontiac’s Rebellion broke out in 1763, many Mingo joined with other tribes in the attempt to drive the British out of the Ohio Country. At that time, most of the Iroquois nations were closely allied to the British. The Mingo-Seneca Chief Guyasuta (c. 1725–c. 1794) was one of the leaders in Pontiac’s War. Another famous Mingo leader was Chief Logan (c. 1723–1780), who had good relations with neighouring white settlers. Logan was not a war chief, but a village leader. In 1774, as tensions between whites and Indians were on the rise due to a series of violent conflicts, a band of white outlaws murdered Logan’s family. Local chiefs counseled restraint, but acknowledged Logan’s right to revenge. Logan exacted his vengeance in a series of raids with a dozen followers, not all of whom were Mingos. His vengeance satisfied, he did not participate in the resulting Lord Dunmore’s War, and was probably not at the climactic Battle of Point Pleasant. Rather than participate in the peace conference, he expressed his thoughts in “Logan’s Lament.” His speech was printed and widely distributed. It is one of the most well-known examples of Native American oratory. By 1830, the Mingo were flourishing in western Ohio, where they had improved their farms and established schools and other civic institutions. After the US passed the Indian Removal Act in that same year, the government pressured the Mingo to sell their lands and migrate to Kansas in 1832. 
In Kansas, the Mingo joined other Seneca and Cayuga bands, and the tribes shared the Neosho Reservation. In 1869, after the American Civil War, the US government pressed for Indian removal to Indian Territory (present-day Oklahoma). The three tribes moved to present-day Ottawa County, Oklahoma. In 1881, a band of Cayuga from Canada joined the Seneca Tribe in Indian Territory. In 1902, shortly before Oklahoma became a state, 372 members of the joint tribe received individual land allotments under a federal program to decrease common tribal land holdings and encourage assimilation to the European-American model. In 1937, after the Oklahoma Indian Welfare Act, the tribes reorganized. They identified as the Seneca-Cayuga Tribe of Oklahoma and became federally recognized. Today, the tribe numbers over 5,000 members. They continue to maintain cultural and religious ties to the Six Nations of the Iroquois.
31 Mar Soil Life Changes Along With the Crop Oceans cover roughly 70% of the earth’s surface. That is an incredibly large area for microscopic life to occupy, and the diversity of life found living in the oceans is just as large. However, what scientists are now finding out is that the variety of microscopic life found living in soil might be even larger. Soil microbiologist Dr. Marcia Monreal has been researching the wide range of diversity found within the soil, and how agriculture systems can cause it to change over time. What has been found is that different species of microbes become dominant when different crops are planted. The roots of each different crop will release different substances into the soil, such as sugars, acids or enzymes. This causes changes in the soil environment, which leads to different species of bacteria becoming dominant. What the roots themselves are made of can also affect the soil life. Some microbes are stimulated by roots that are hard and fibrous, while others prefer roots that are soft and mushy. As the growing environment changes, so does the microbe population. Researchers still have a lot to learn about the dynamics of life in the soil. As they do, they will also discover the effects that microbes have on each other, as well as on plants. From that information, it is hoped that new and unique ways can be discovered to help maximize crop yields and health and to make agriculture even more productive. Image source: XiteBio Technologies Inc.
In the United States, Flag Day is celebrated on June 14 and commemorates the Second Continental Congress passing the Flag Act of 1777. While the passage of this act is seen as a very important event today, it wasn’t that big of a deal in 1777. A major reason why is that the concept of a national flag was very new during the late 18th century, and those that did exist were ensigns – national flags used exclusively on sailing ships. When combined with the fact that the Flag Resolution of 1777 was created by the Continental Congress’ Marine Committee, it seems very likely that the “stars and stripes” design was intended to be an ensign.

The act is also surprisingly vague in its description of the flag’s design, particularly the stars. This led to many different 13-star “constellations” showing up on early American flags, featuring anywhere from five- to eight-pointed stars. A popular design that many people falsely attribute as being the original “Stars and Stripes” is called the Betsy Ross flag – a design featuring a circle of 13 five-pointed stars facing outward. Not only was this design not the first American flag, Betsy Ross may not have even designed or sewed it (you can read more about the myth and legend of Betsy Ross here). While we cannot be 100 percent certain who designed the first Stars and Stripes flag, historical records point to a very likely candidate – Francis Hopkinson. Not only was he Chairman of the Continental Navy Board’s Middle Department (remember that the Marine Committee proposed the Flag Resolution of 1777), but he was the only person to claim that he or she designed the flag during his or her lifetime (you can read more about Hopkinson and the first Stars and Stripes here). Hopkinson’s design featured 13 six-pointed stars in alternating rows of three and two across.

It would be over a century before the U.S. Flag’s anniversary started to get recognition. The first recorded celebration of a Flag Day was in 1885 at the Stony Hill School in Waubeka, Wisconsin. This celebration was spearheaded by school teacher Bernard Cigrand, who would spend the next several decades advocating for a National Flag Day, stressing the importance of patriotism. Cigrand eventually became the president of the American Flag Day Association and later the National Flag Day Society. Though there would be a handful of other local flag day observances in the late 1800s, President Woodrow Wilson was the first to proclaim June 14 as a National Flag Day in 1916. However, this was just a one-off proclamation not intended to become an annual holiday. Eventually, some states began celebrating Flag Day as a state holiday, starting with Pennsylvania in 1937. A little over a decade later, in 1949, an Act of Congress established National Flag Day. Despite popular belief, the act does not make Flag Day an official federal holiday and it is up to the President to officially declare a day of observance each year. The same congressional act that established Flag Day also details a National Flag Week, held the same calendar week as Flag Day. The President historically issues a proclamation during Flag Week urging U.S. citizens to fly national flags for the duration of the week.
Phthalates are plasticisers that are added to plastics to make them transparent, flexible and durable whilst giving longevity. The group of chemicals known as phthalates includes many different chemical esters, and globally we produce 8.4 million tonnes per year. Phthalates are found in food packaging, personal care products, cosmetics, household cleaners and often in food itself, due to manufacturing processes that might include plastic tubing or similar exposure of food to plastics. A good example of this is milk that has been transported through plastic tubing after the cow has been milked, as well as butter, cream cheese, pates etc sold in plastic containers, as the fat in these foods leaches phthalates from the plastic into the food you unknowingly eat.

Phthalates are not just taken into the body via food and food packaging; they are also absorbed through the skin when using cosmetics and personal care products, such as shampoos, shower gels and soaps, and are breathed in through house dust. Young children are especially vulnerable to the latter due to their instinct to put things in their mouths, as well as when crawling around on the floor. If the floor is linoleum or a highly polished floor, it is likely that they are ingesting phthalates.

Back in 2008 the Consumer Products Safety Bill was passed in the USA banning phthalates in children’s toys and products. At the same time this Bill also included the directive to take a closer look at phthalate chemicals and their impact on human health. In 1999 the EU banned the use of phthalates in children’s toys, for exactly the reason mentioned above.

Why are phthalates harmful? More research is indicating links between phthalates and several health conditions including Attention Deficit Disorder (ADHD), various autism spectrum disorders, obesity, breast cancer, male fertility issues, Type 2 diabetes, and asthma. The biggest concern is that phthalates are hormone-disrupting chemicals and the effects of this can be multiple.

Tips for lessening the phthalate load on the body and in the home:
- Remove any food from plastic containers and put it into a glass container.
- Avoid storing food in any type of plastic container.
- Avoid buying foods that come in soft, squeezable plastics such as sauces, honey and mayonnaise.
- Use natural household products and avoid products with “fragrance” listed as one of the ingredients, as this is likely a phthalate. Look for phthalate-free or DEP (Diethyl phthalate) free packaging.
- Buy wooden toys rather than plastic toys.
- Plastic food containers should have the code number 1, 2 or 5 in the “recyclable symbol”, as these are phthalate free. The number 3 indicates that it has been manufactured using phthalates.
- Do a liver cleanse at least once a year, as this will lessen the toxic load on the liver and help it function more effectively in detoxifying your blood, producing bile needed to digest fat, breaking down hormones, and storing essential vitamins, minerals and iron.
Many people think that racial populations are “equal” with respect to their genetic potential for cognitive traits. On this website, we look at a lot of data having to do with racial differences in various traits to assess the validity, or lack thereof, of this assumption. Sometimes, though, it is important to step back and recognize just how impossible the notion of equality truly is. If the races really are genetically equal with respect to most psychological traits, it is nothing short of an evolutionary miracle, and in this article, I will explain why. We all accept that the races differ in various ways for genetic reasons. For instance, East Asians are shorter than Africans and Europeans. Certain body types were more likely to evolve in different climates. In response to environmental variables such as UV radiation, we evolved differences in skin color, hair color, hair texture, etc. Some historic populations had cows available to milk while others did not and, so, some populations are lactose tolerant while others are not. Some populations had to face malaria, while others did not, and this led to differences in our blood. The list could go on. This is all utterly uncontroversial. The races also differ in brain size. This has been shown repeatedly, all over the world, dating back more than a century (Last, 2016). More recently, it has been shown that you can predict someone’s race by looking at the shape of their brain (Fann et al. 2015). Yet, it is supposed that, unlike the racial differences in virtually every other part of the body, these ones are due entirely to the environment. This is obviously a political move. Evolution doesn’t care that genetic differences in personality are politically controversial, it sees the brain as just another organ. If we evolved differences in all the others, we probably evolved differences in the brain too. In fact, the brain is a more likely site for genetic differences between races than most other parts of the body are. Why? Because researchers have shown that genes involved in the brain are the ones that differ most between the races (Wu and Zhang, 2011) . “Other genes that showed higher levels of population differentiation include those involved in pigmentation, spermatid, nervous system and organ development, and some metabolic pathways, but few involved with the immune system.” – Wu and Zhang (2011) (emphasis added) Given this, if anything we should expect racial differences in the brain to be larger than other racial differences. The assumption that they are infinitely smaller, such that they do not exist, is not genetically plausible. Ultimately, this is just common sense. Populations around the world had different food sources. They hunted different kinds of animals and picked different kinds of plants. They lived in different climates. They fought different diseases. These differences impact behavior. For instance, some animals require more group work to kill than others. Harsh winters require more pre-planning and delayed gratification (saving food) than more temperate climates. The more easily acquirable food is around, the less important working with the group is. The more predators and other humans are around, the more physical strength and aggression will be needed. The more pathogens are present, the more important cleanliness will be. This list could go on infinitely. And maybe you think one of these explanations is wrong and an environmental difference will have the opposite effect of what I have said. 
That is certainly possible, but the idea that any one of these environmental differences, let alone all of them together, will have no effect whatsoever on the selective pressures for any mental traits is completely implausible. And this is all before culture comes into the picture. Once that happens, these differences are magnified times a hundred. In some cultures, being smart is the best way to have lots of kids. In others, physical strength, or determination, or social intelligence, etc., will be the most effective way. The notion that in every culture every psychological variable has the exact same association with fertility, which is the logical implication of egalitarianism, is obviously insane. That culture has sped up evolution is evident in our own DNA. By looking at our genome, researchers can estimate how the speed of evolution has changed over time. In 2007, a landmark paper was released showing that evolution sped up by a factor of 100 within the last 5000 years, suggesting that the development of civilization, which happened at different times and in different ways around the globe, had an extremely dramatic impact on evolution (Hawks et al., 2007). Even more recently, we are starting to get some idea of how culture influenced evolution. For instance, a 2014 paper found that England’s “war on murder”, a time in which criminals were essentially sent to die for fairly petty crimes, had a significant eugenic effect on the population in terms of criminality (Frost and Harpending, 2015). And, after all, how couldn’t it? If you kill a ton of criminals every generation, genes that predispose people towards criminality are obviously going to become less common. My pointing in bringing this up is not to suggest that England is especially non-criminal. Other countries no doubt had similar periods and England has had its share of crime problems in its history. Nonetheless, the “war on murder” is a vivid example of the fact that culture can, and in fact must, impact evolution. Anything that differentially impacts people’s probability of reproducing will. Given this, and given the enormous amount of culture diversity which has existed on earth for millennia, it is, once again, lunacy to suggest that this all led to every population on earth possessing the exact same genetic predisposition for every mental trait there is. On top of all this, there’s the Neanderthals (and others). After humans left Africa they met, and bred with, other species or subspecies of human. These other humans had been evolving separately from us for a really long time and they are universally accepted to have been different from us physically, and mentally, due to evolution. Some populations bred with these groups more than others, and Africans didn’t breed with them at all. This has led to the races differing in their degree of Neanderthal admixture. Moreover, Neanderthal DNA is associated with various traits, including mental ones. For instance, one researcher described their findings from early last year thusly: “We discovered associations between Neanderthal DNA and a wide range of traits, including immunological, dermatological, neurological, psychiatric and reproductive diseases.” Specifically, they found that Neanderthal DNA was related to traits like nicotine addiction, depression, and other mental traits. How is it even possible, you might ask, for the races to differ in their level of Neanderthal admixture and still be “equal” if Neanderthals weren’t “equal”? It’s not. 
For this, and all the other reason’s laid out here, equality is, practically speaking, a biological impossibility.
The Appalachian region of the United States has a unique culture centered on a rich history and a combination of northern European ancestries that makes the area distinct. The area stretches from southern New York State down to the northern part of Georgia, Alabama, and Mississippi in the general vicinity of the Appalachian Mountains. The region was originally home to Native American tribes such as the Cherokee, but it was settled in the eighteenth and nineteenth centuries primarily by Scottish, Irish, and English settlers. The people who settled in the Appalachian region were known as hearty people who lived in an often difficult environment. They were deeply religious, and that aspect of the region still carries on today. In fact, religion has played a major role in shaping the communities and the historical events that have taken place over the course of centuries. Because many of the settlers were of Scots-Irish descent, they brought their traditional music with them to the Appalachian region. Irish and English ballads were a popular form of music among early settlers, and that music evolved into New World ballads that were written entirely in the Appalachian region, as opposed to the "Old World" ballads people brought with them from Europe. The Appalachian region was also the home to bluegrass music and country music, which evolved from traditional Appalachian music combined with blues music played primarily by African Americans. In more recent years, the Appalachian region has become known for outdoor recreation as well. The Appalachian trail meanders through the Appalachian mountains from its starting point at Springer Mountain in Georgia all the way up through the Appalachian states and beyond to Mount Katahdin in Baxter State Park in Maine. The mountainous region has become synonymous with pristine wilderness and fantastic views, often in narrow corridors of wilderness bounded by cities and other urban areas. The Appalachian region is also home to national parks and several national forests besides the Appalachian trail. Unfortunately, not all the reasons that make the region unique are positive ones. Poverty has been a problem in the Appalachians since it was first recognized as a distinct region. For a brief period of time, residents prospered off the lumber and coal mining industries, but as those industries faded, families became impoverished, and that poverty persisted. The education system in the Appalachians has also suffered as a result, and due to a lack of funding and emphasis on the importance of education, the school systems in the region are often far behind the national trends.
Fundamentals of Mathematics

Fundamentals of Mathematics is a work text that covers the traditional topics studied in a modern prealgebra course, as well as the topics of estimation, elementary analytic geometry, and introductory algebra. It is intended for students who:
- have had a previous course in prealgebra,
- wish to meet the prerequisite of a higher level course such as elementary algebra, and
- need to review fundamental mathematical concepts and techniques.

This text will help the student develop the insight and intuition necessary to master arithmetic techniques and manipulative skills. It was written with the following main objectives:
- to provide the student with an understandable and usable source of information,
- to provide the student with the maximum opportunity to see that arithmetic concepts and techniques are logically based,
- to instill in the student the understanding and intuitive skills necessary to know how and when to use particular arithmetic concepts in subsequent material, courses, and nonclassroom situations, and
- to give the student the ability to correctly interpret arithmetically obtained results.

We have tried to meet these objectives by presenting material dynamically, much the way an instructor might present the material visually in a classroom. (See the development of the concept of addition and subtraction of fractions in Section 5.3, for example.) Intuition and understanding are some of the keys to creative thinking; we believe that the material presented in this text will help the student realize that mathematics is a creative subject. This text can be used in standard lecture or self-paced classes. To help meet our objectives and to make the study of prealgebra a pleasant and rewarding experience, Fundamentals of Mathematics is organized as follows.
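As an illustration of the kind of step-by-step development mentioned above, the addition of two fractions with unlike denominators might be presented along the following lines (this worked example is ours, not reproduced from Section 5.3):

\[
\frac{1}{4} + \frac{2}{3} = \frac{1 \cdot 3}{12} + \frac{2 \cdot 4}{12} = \frac{3 + 8}{12} = \frac{11}{12}
\]

Here 12 is the least common multiple of the denominators 4 and 3, so each fraction is first rewritten with that common denominator and the numerators are then added.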
The concentric layers or laminae constitute the auxiliary apparatus and do not take part in the generation of electric potentials. When the nerve endings of these receptors, either inside or outside, are subjected to increased pressure, the membrane potential in the region of the nerve endings is decreased, with the result that an action potential, in the form of a nerve impulse, is developed; according to the local circuit theory, this is transmitted down the axon and ultimately reaches the central nervous system, which acts accordingly.

Pain is a sensory experience initiated by injurious or threatening stimuli. There are both painful sensations and reflex actions in response to painful stimuli. Intensive stimulation of almost any sensory neuron appears capable of producing a painful sensation, and only recently has it been shown that specialized receptors for pain exist. Generally the sensation of pain is evoked by either of two sets of nerve fibres. Some fibres, when sufficiently stimulated, result in burning pain; small fibres function in signalling pricking pain. The latter is usually a short-term effect, while the former may continue for long periods of time, as long as the stimulus remains.

Touch receptors of various kinds are located in the dermis or in the subepidermal connective tissue; they have a basic structure of nerve endings or tactile cells enclosed in layers of connective tissue. Tactile receptors are found all over the skin in animals and include a wide variety of morphological types. Many have the form of bulbs (Krause end bulbs) or corpuscles (Meissner’s corpuscles). Meissner’s corpuscles are generally found in the outer layer of the skin. Touch receptors not only convey information about the presence of tactile stimuli but also possess discriminatory abilities for both the intensity of the stimulus and the spatial direction or arrangement of the stimulus. The greatest ability to discriminate between intensities of stimulation, or to determine the extent of spatial separation of separate stimuli, occurs in areas of greatest concentration of sensory receptors.
Computing is organised into three strands: Computer Science, Information Technology and Digital Literacy, as well as developing computational thinking skills. These strands often overlap in practice: Computer Science is how computers and computer systems work, and how they are designed and programmed. Information Technology deals with using computer systems to solve real-world problems and is the productive, creative and explorative use of technology. Digital Literacy is the ability to safely, effectively and responsibly create digital items using a range of technologies. From Reception to KS2, there is mostly an even weighting between the three strands with Computer Science being more heavily weighted in KS3. In addition to annual, relevant e-safety units, all pupils learn about algorithms and programming, game design and animation, do lots of creative projects with video, music and digital art, learn how computers actually work, and much more! Pupils have one lesson each week in the Computing Lab and are taught by subject specialists. There are regular Open Lab sessions during break times where children can use the computers, iPads or programmable robots. There is also a Computing-related club run after school each term, such as Animation Club, and Programming and Game Design Club. The curriculum covered in Computing is based on the programmes of study of the National Curriculum. Topics covered in Computing are also integrated where possible with other curriculum areas and are planned in conjunction with classroom teachers. Because technology is constantly changing and evolving, our curriculum, software and resources are regularly updated to meet the demands of a dynamic and exciting technological world.
The three main types of asbestos exposure are occupational, environmental, and secondary (second-hand). Second-hand exposure occurs when the person exposed to asbestos at work brings home asbestos fibers on their clothing or other equipment, exposing their children and spouse. Occupational asbestos exposure is when someone is exposed to asbestos at work; it occurred on construction sites, among firefighters, and in manufacturing plants, mines, oil refineries, and chemical plants. Environmental asbestos exposure typically occurs when people are exposed to naturally occurring asbestos that contaminates groundwater supplies, or when fibers are disturbed and become airborne. Some environmental asbestos exposure actually comes from soil, water and trees that have been contaminated by nearby asbestos manufacturing plants or mines. Although asbestos exposure has primarily occurred within occupational situations (ranging from the initial mining of the fiber to the final removal process of older construction materials containing asbestos), secondary and environmental exposure to asbestos fibers has also presented a significant hazard to the health of different members of society. Men who have been exposed to asbestos fibers while working represent the largest group currently suffering from an asbestos-related disease, yet numerous women and children who were exposed to fibers which clung to the clothing and/or hair of their husband or father have developed health problems later in life. Additionally, those who live near naturally occurring deposits of asbestos are often subject to environmental exposure, accounting for a significant number of worldwide deaths each year. Dating back to the late 1800s, asbestos was used in many different industrial applications and a countless number of workers were exposed to the fibers on the job. The use of asbestos was highest during the middle of the twentieth century, when an estimated 27 million workers were exposed to this material on a nearly daily basis. Any worker who spent time in one of the following industries or facilities may be at risk of developing an asbestos-related illness:
- Commercial or residential construction;
- Automotive manufacturing or repair shops;
- Companies involved in the mining of asbestos;
- Manufacturers of steel, abrasives and/or sand, construction-related products and many others;
- Shipbuilders and anyone working near shipyards;
- Firefighters and other emergency service providers;
- Railroad, power plant and oil refinery workers, etc.
Secondary Exposure Through Clothing
Women represent eight percent of all current mesothelioma victims in the country. Most of these individuals were never exposed to asbestos fibers while working on a job site, but inhaled the fibers after their husbands returned home in clothing covered in the material. Due to the long latency period of up to 50 years or more, many of the women who are newly diagnosed with an asbestos-related disease date back to a generation in which most women were responsible for all the domestic duties of the household, including the shaking out and laundering of clothing. Such duties resulted in millions of women being exposed to asbestos fibers on a daily basis; children were also exposed when they greeted their father upon his return from work and/or visited the place of his employment.
Additionally, scientists have discovered that people with smaller lung volumes (such as women and children) are more likely to develop an asbestos-related disease when exposed to lower concentrations of asbestos than their male counterparts.
Proximity to an Asbestos Mine or Manufacturing Facility
Another common source of secondary exposure occurred with those who lived in close proximity to an asbestos mine and/or a manufacturing facility which utilized this hazardous material. Such individuals were often exposed to airborne asbestos particles every time they left their homes or spent time outdoors. Women and children were impacted to a greater degree than men, likely as a result of their smaller lung capacities. In certain parts of the world, naturally occurring deposits of asbestos have exposed millions of people to this material and have resulted in thousands of deaths every year. Studies conducted on residents of rural villages in central Turkey reveal the dangers of living in an environment where asbestos fibers contaminate the soil. More women in these villages are affected than men, as they are responsible for the “whitewashing” of the family home with soil containing these naturally occurring asbestos fibers. In some Australian locations, asbestos-related deaths due to environmental exposure have been so significant that little evidence remains of the former local population. Fortunately, few locations within the United States are heavily contaminated with naturally occurring asbestos when compared to other locations around the world, yet areas which lie in close proximity to an asbestos mine and/or manufacturing facility may have soil which will remain hazardous for many decades to come.
A set of fossils collected 35 years ago belonged to the oldest-known scorpion species to date, a new study reports. The scorpion lived around 437 million years ago and was surprisingly versatile, having the ability to breathe both on land and underwater, the team explains. This fossil helps us make better sense not only of the scorpions’ evolutionary path, but also of how animals transitioned from an aquatic lifestyle to living on dry land.
The first scorpion
“We’re looking at the oldest known scorpion — the oldest known member of the arachnid lineage, which has been one of the most successful land-going creatures in all of Earth history,” said Loren Babcock, an author of the study and a professor of earth sciences at The Ohio State University. In a new study describing the fossils, researchers named the new species Parioscorpio venator, meaning “parent scorpion hunter”. The fossil was first unearthed in 1985 in Wisconsin at a site that was once a shallow pool at the base of an island cliff face. For 30 years, it was kept in a museum at the University of Wisconsin until Andrew Wendruff, paper co-author and now an adjunct professor at Otterbein University in Westerville, decided to examine it in detail. This scorpion is about 2.5 centimeters long, similar to many wild scorpions today. Wendruff looked at the fossil under a microscope, taking high-resolution photographs of it from different angles. This process helped highlight bits of the animal’s internal organs, allowing Wendruff to identify its venom appendages and the remains of its respiratory and circulatory systems. The discovery provides new information about how animals transitioned from living in the sea to living entirely on land: the scorpion’s respiratory and circulatory systems are almost identical to those of our modern-day scorpions — which spend their lives exclusively on land — but operate similarly to those of a horseshoe crab, which lives mostly in the water but is capable of forays onto land for short periods of time. The oldest-known scorpion prior to this study had been found in Scotland and dated to about 434 million years ago — it was one of the first animals (that we know of) to live fully on land. This fossil, found in Wisconsin in the Brandon Bridge Formation, is between 1 million and 3 million years older, the authors explain. These scorpions were likely alive between 436.5 and 437.5 million years ago, during the Silurian period of the Paleozoic era. “What is of even greater significance is that we’ve identified a mechanism by which animals made that critical transition from a marine habitat to a terrestrial habitat. It provides a model for other kinds of animals that have made that transition including, potentially, vertebrate animals. It’s a groundbreaking discovery.” The paper “A Silurian ancestral scorpion with fossilised internal anatomy illustrating a pathway to arachnid terrestrialisation” has been published in the journal Scientific Reports.
A Kyoto University graduate student and her team of scientists found that cats utilize basic ideas of physics when tracking prey. Saho Takagi released a paper in Animal Cognition detailing the experiment and the reaction of the cats to each of the stimuli the team of scientists presented to them. According to Quartz, Takagi and her team brought 30 domestic cats into the experiment and shook containers close to each cat individually. At first, the containers either had something inside or they did not, and the container made noise if it was full or did not make noise if it was empty. After a few shakes, the scientists shook empty containers that still made noise or full containers that did not make noise. With each shake, the team recorded the reaction of each cat. As Tech Times shares, the consistent response to the noise, whether real or generated, was longer eye contact with the container than when there was no noise. In addition, the cats showed confusion when a container made a noise but had nothing in it when turned upside down, and vice versa. The determination the scientists made was that cats display a causal-logical understanding based on their sense of hearing to find objects that were not immediately visible. According to Science Daily’s report on this study, the findings of this experiment correlate to a cat’s natural hunting abilities or instincts. Cats likely use the sound made by their prey to infer either their distance or location in the dark, as most cats do the majority of their hunting at night. More research needs to be undertaken to determine whether cats are able to determine the size or number of objects or prey they hear. Domestic cats often surprise us with their antics, and now we know there is sometimes a method to their madness.
As a parent, do you often wonder how children use their parents’ expressions to form their responses to certain events and situations? You are not the only one. If you wish to know more about your infant’s emotional development, then you have come to the right place, as we will discuss the theory of social referencing in a child’s development and the impact it has along the way. Read on to find out everything that you need to know about social referencing in infants.
Social Referencing in Infants: A Definition
The definition of social referencing is a simple one. It is basically the process by which infants take cues from the emotive displays of adults (parents or caregivers) to form their responses to certain events or adjust their behavior towards other people and objects. The emotive displays of adults can come through facial expressions, vocal sounds or body language. Social referencing is a vital tool that tends to help infants get a grasp of their new environment and the people and objects that form a part of it.
Children Using the Social Referencing Tool
Infants as young as six months of age usually use social referencing as a way to gain a deeper understanding of their immediate environment. As they grow older, kids use social referencing with increased frequency. By the age of eighteen months, kids may be using their parents’ or other adults’ emotive displays to form responses for all their actions. Babies may use social referencing for a wide variety of things. For instance, they see a new shiny object on the floor and are obviously intrigued by it. They look at their adults to see if it is okay for them to touch it. The adult’s smile or frown acts as a referencing cue for the baby and will determine whether they proceed to touch the object or avoid it.
The Role of Social Referencing in the Child’s Development
It is not clear how social referencing differs for every infant. But psychologists have pointed out numerous ways in which social referencing helps a child’s development, such as the following:
- Social referencing helps a great deal in your child’s emotional development. Infants invariably learn the various meanings of different emotive expressions and accompanying sounds and how they relate to different people and things. This helps them understand their surroundings in a much clearer way.
- Babies tend to use social referencing to make decisions about what actions they need to perform. Therefore, it becomes their first foray into the crucial decision-making skills they will use later in life. This is how babies learn the art of decision-making.
- Infants also begin to understand positive and negative connotations derived from different expressions by observing the adults around them. It may not be clear from their behavior at that age, because they are just learning and experimenting. But learning these concepts begins with social referencing, alongside other influences such as the home environment and how the adults around them behave.
Using Social Referencing for Your Child’s Development
Parents and other adults in charge can use social referencing for babies and toddlers as a tool to impart knowledge. You can do so by being more mindful about your reactions in various situations and by keeping your body language and voice clear and consistent.
- Always use facial expressions when playing and interacting with your child.
Let them see your emotive displays at close range for a better understanding of what they could mean in different situations. Your reactions could affect how your child also reacts to certain situations and people in general.
- Try to ensure that your voice syncs with your body language when you are around your child. You may pretend to greet a neighbor with a smile, but if you are not actually happy about it, it can show up as ambiguity in your body language and vocal tones. This may be subtle, but your infant may pick up on it and become confused when they cannot fully make sense of it.
- Always use social referencing to teach your child new things, such as making decisions about food choices. Your expressions can be crucial in encouraging your child to try new food groups or to behave differently.
- Social referencing affects the emotional development of your child, and it begins in infancy. It is best to use this tool to maximum benefit by staying mindful and aware of your own reactions around your child.
To conclude, social referencing could be a great way to build your relationship with your child and aid in the growth and development of their personality.
The simplest example of straight linear motion is straight-line movement at constant speed. A linear movement is a movement in which the body travels equal lengths during equal amounts of time. Example: A car covers 8 m of road during each 2 s time interval. From the above example, we can conclude that the speed is constant: in two seconds the car moves 8 meters, which means that during 1 second the car moves 4 meters, and its speed is 4 m/s, v = const. When the body travels 2 meters in 1 second, we say its speed is 2 meters per second. This value is obtained when we divide the length of the traveled path by the time needed for the travel. The speed of straight linear motion is equal to the path traveled per unit of time. The SI unit of speed is the meter per second (m/s). A body moves at a speed of 1 m/s if it travels 1 meter of path every second. Example: The speed of a car that has traveled 45 meters in 15 seconds is v = s / t => v = 45 m / 15 s = 3 m/s. We can calculate the speed if we know the relationship between the time and the length of the traveled path. From the same relation we can calculate the time of movement if the traveled path and the velocity are known. Example: A car travels 12 m at a speed of 8 m/s, and we need to find the time of the car’s movement. When we divide the length of the traveled path by the speed, we find that the time in which the car covered the path is t = s / v = 12 m / (8 m/s) = 1.5 s. If we know the speed and the time, we can also calculate the traveled path (s = v · t). Any of the three quantities that determine the motion can be calculated if the other two are known to us. Speed is also often measured in kilometers per hour (km/h) or kilometers per second (km/s). The value of the speed in meters per second for some examples:
- Nails grow at a speed of 0.000 000 001 m/s.
- A snail moves at a speed of 0.000 002 m/s.
- A plane flies at a speed of 250 m/s.
- Light travels at 300 000 000 m/s.
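To make the relations v = s / t, t = s / v and s = v · t concrete, here is a minimal Python sketch (an illustration of the formulas above, not part of the original lesson; the function names are my own) that reproduces the worked examples:

def speed(path_m, time_s):
    # v = s / t: speed in m/s from path in meters and time in seconds
    return path_m / time_s

def travel_time(path_m, speed_ms):
    # t = s / v: time in seconds from path in meters and speed in m/s
    return path_m / speed_ms

def path(speed_ms, time_s):
    # s = v * t: path in meters from speed in m/s and time in seconds
    return speed_ms * time_s

print(speed(45, 15))        # 3.0 m/s, the car that covered 45 m in 15 s
print(travel_time(12, 8))   # 1.5 s, the car that covered 12 m at 8 m/s
print(path(4, 2))           # 8.0 m, the car moving at 4 m/s for 2 s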
Free Math Worksheets For 4th Grade Multiplication
Our grade 4 math worksheets help build mastery in computations with the 4 basic operations, delve deeper into the use of fractions and decimals, and introduce the concept of factors. These printable PDF activities are a suitable resource for math practice by fourth graders, teachers and parents. Fourth grade math is a transitional stage where the focus shifts from many of the basic math facts towards applications. Choose your grade 4 topic. The worksheets are randomly generated, printable from your browser, and include the answer key. Help your students kick their math skills up a notch with these fourth grade multiplication worksheets and printables. With the assistance of a good printable worksheet, students can pick up fourth grade math swiftly and easily. All worksheets are printable PDF files. The resources covered below include worksheets, flashcards, games, math quizzes and other materials. These math sheets can be printed as extra teaching material for teachers, extra math practice for kids, or as homework material parents can use. These 4th grade worksheets provide practice in mental multiplication skills, ranging from simple multiplication math facts to multiplying 3-digit by 1-digit numbers in your head. Multiplying numbers in columns is a math skill which requires a fair degree of practice to attain proficiency. There is still a strong focus on more complex arithmetic such as long division and longer multiplication problems, and you will find plenty of math worksheets in this section for those topics. This is a comprehensive collection of free printable math worksheets for fourth grade, organized by topics such as addition, subtraction, mental math, place value, multiplication, division, long division, factors, measurement, fractions and decimals. Our grade 4 multiplication-in-columns worksheets range in difficulty from 2-digit by 1-digit to 3-digit by 3-digit, and they complement our grade 4 mental multiplication worksheets. Begin by reinforcing students’ times tables knowledge with basic multiplication equations, or let them jump right into multi-digit multiplication word problems and finding factors.
MUSTARD GAS First used during World War I, mustard gas is a colorless, odorless liquid at room temperature and causes extreme blistering. The name stems from its color and smell in its impure state. It's not related to the condiment mustard in any way. It's commonly referred to as a gas because the military designed it for use as an aerosol. Even slight exposure leads to deep, agonizing blisters that appear within four to 24 hours of contact. If it gets into the eyes, they swell shut, and blindness can result. If inhaled at high doses, the respiratory system bleeds internally, and death is likely. Exposure to more than 50 percent of the body's skin is usually fatal. It also causes cancer. It was the most common type of chemical weapon dumped. It was dumped in 1-ton canisters and artillery shells for decades. Mustard agent is heavier than seawater, so it sinks and rolls around on the ocean floor with the prevailing current. It lasts at least five years in seawater in a concentrated gel. NERVE GAS The most deadly of chemical warfare agents, one drop of nerve gas can kill a person within a minute. Death comes through seizure. It's colorless and odorless, with the texture of high-grade motor oil. It's easily spread through the air. It attacks the human nervous system, causing almost-instant spasms before preventing involuntary muscle actions, such as the heart's pumping. The Germans developed nerve gas during World War II, and only a few countries are known to possess any of it now. LEWISITE Developed too late for use in World War I, Lewisite is a blister agent akin to mustard gas. It's oily in its pure form and can appear amber or black in its impure state. It smells a bit like geraniums. It can easily penetrate clothing and rubber masks. Exposure results in painful blisters and lesions that begin within seconds and last for two to three days. Lewisite was meant to incapacitate enemy forces -- not necessarily kill -- and thus clog hospitals and cause terror. Intense nausea, diarrhea and vomiting are common, and shock from low blood pressure is likely. Eye exposure can cause blindness. Extensive exposure can cause systemic arseniclike poisoning, leading to liver damage or death. After an antidote was found during World War II, the Army decided that it wasn't as useful as other chemical weapons. PHOSGENE Used during World War I, it's a highly toxic gas that has no color but smells vaguely like moldy hay. It's particularly insidious, in that exposure doesn't result in symptoms until 24 to 72 hours later. The gas combines with water in the respiratory tract to form hydrochloric acid, which dissolves lung membranes. Fluid then fills the lungs, and death comes from a combination of shock, blood loss and respiratory failure. Unlike nerve agents, phosgene must be inhaled to cause harm. It slowly dissolves in seawater, eventually converting to its chlorine base and dissipating. CYANOGEN CHLORIDE A cyanide-based chemical weapon also used during World War I, this is a rapid killer that's easily dispersed. It's known as a blood agent, circulating quickly through the bloodstream on exposure through either inhalation or skin contact. It's colorless but has a biting, pungent odor similar to almonds. But the aroma probably won't be detected because exposure causes almost-instant agony: The skin quickly turns cherry red. Seizures follow within 15 to 30 seconds, accompanied by vertigo and vomiting. Death is likely in six to eight minutes. 
HYDROGEN CYANIDE The gas used by the Nazis in their concentration camps under the infamous brand name Zyklon B, it's either colorless or pale blue and has a faint almondlike odor. It's easily dispersed in the air and readily absorbed through skin contact or inhalation. At the cellular level, it cuts the body's ability to take in oxygen. In high doses, the effects are quick and catastrophic, including gasping for breath, seizures, the collapse of the cardiovascular system and coma. Death comes within minutes. The gas is flammable and potentially explosive. It's lighter than air, so if it's released in the sea, it would bubble to the surface. WHITE PHOSPHORUS This is a colorless, waxy solid with a garliclike odor. It reacts rapidly on contact with air, bursting into a flame that's difficult to extinguish. Breathing it in small doses can cause coughing or irritation of the throat. Eating or drinking it in even small amounts can cause stomach cramps, vomiting, drowsiness and death. It's heavier than water, so it sinks to the bottom.
Gigantopithecus blacki was thought to stand almost three meters tall and tip the scales at around 600 kg. Now, scientists have obtained molecular evidence from a two-million-year-old fossil molar tooth found in China. The mysterious ape turns out to be a distant relative of orangutans, sharing a common ancestor around 12 million years ago. “It might have been a distant cousin (of orangutans), in the sense that its closest living relatives are orangutans, compared to other living great apes such as gorillas or chimpanzees or us,” said Dr. Frido Welker, from the University of Copenhagen. The analysis, reported in Nature, is based on comparing the ancient protein sequences from the tooth of the extinct ape, believed to be a female, with those of apes alive today. Obtaining skeletal protein from a two-million-year-old fossil is rare, if not unprecedented, raising hopes of being able to look even further back in time at other ancient ancestors, including humans, who lived in warmer regions. There is a much lower chance of recovering ancient DNA or proteins in tropical climates, where samples tend to degrade faster. “This study suggests that ancient proteins might be a suitable molecule surviving across most of recent human evolution, even for areas like Africa or Asia, and we may thereby in the future study our evolution as a species over a long period.” Gigantopithecus blacki was first recognized in 1935, based on a single tooth sample. The ape is thought to have lived in Southeast Asia from two million years ago until 300,000 years ago. Many teeth and four partial jawbones have been identified; however, the animal’s relationship to other great ape species has been difficult to decipher. The ape reached massive proportions, exceeding those of living gorillas, based on analysis of the few bones that have been found. It is thought to have gone extinct when the environment changed from forest to savannah.
Solar phenomena are the natural phenomena occurring within the magnetically heated outer atmospheres in the Sun. These phenomena take many forms, including solar wind, radio wave flux, energy bursts such as solar flares, coronal mass ejection or solar eruptions, coronal heating and sunspots. These phenomena are generated by a helical dynamo near the center of the Sun's mass that generates strong magnetic fields and a chaotic dynamo near the surface that generates smaller magnetic field fluctuations. The sum of all solar fluctuations is referred to as solar variation. The collective effect of all solar variations within the Sun's gravitational field is referred to as space weather. A major weather component is the solar wind, a stream of plasma released from the Sun's upper atmosphere. It is responsible for the aurora, natural light displays in the sky in the Arctic and Antarctic. Space weather disturbances can cause solar storms on Earth, disrupting communications, as well as geomagnetic storms in Earth's magnetosphere and sudden ionospheric disturbances in the ionosphere. Variations in solar intensity also affect Earth's climate. These variations can explain events such as ice ages and the Great Oxygenation Event, while the Sun's future expansion into a red giant will likely end life on Earth.
The hexactinellids are exclusively marine. Today, the roughly 500 species are mostly known from deeper waters, 200 to 2,000 meters. Many modern hexactinellids are found living on soft substrates. Fossil hexactinellids are often found in strata of fine-grained limestones and shales, suggesting that the group has been associated with quiet waters upon which soft sediments slowly accumulate for their entire history. Although they are most common at great depths today, they are more abundant and diverse at shallower depths of the polar regions. The sponge above, viewed from the side and from above, is from Antarctica where the hexactinellids are incredibly abundant. Frequently they are the most conspicuous form of benthic life in these chilly waters. Interestingly, it appears that some hexactinellids may be important in structuring biodiversity on the continental slopes, as well as on the continental shelf of Antarctica. Large mats of their spicules provide a hard substratum that may allow for a greater number of species to exist in a given area. Modern Sponge Reefs Recently, it has been discovered that some hexactinellid sponges form impressive reefs off the coast of British Columbia, Canada, in waters 180 to 250 meters deep. Reefs are defined as any biologically created hard structures that rise from the sea floor. And, some of these sponge reefs tower as high as 18 meters above the surrounding sea floor, with nearly vertical sides. These reefs in Canada are particularly interesting because they provide a living comparison to extensive reefs created by hexactinellids throughout the Jurassic.
Galileo's first telescope saw the moons of Jupiter and forever destroyed the idea that the Earth was at the center of the universe. Edwin Hubble used the Mount Wilson Observatory in 1917 to show that other galaxies exist and that the Milky Way was but one of many. The Vela satellites orbiting the Earth in the 1960s were designed to detect the gamma rays that accompany a nuclear explosion. They worked for that purpose, but they also discovered gamma ray bursts from space, which were eventually identified as explosions of stars so violent that they could be seen across the entire universe. Wednesday, scientists made a momentous announcement resulting from the latest form of "telescopes" -- detectors of gravitational waves. And the scientists instrumental in the detection have now been awarded the Nobel Prize. What they revealed is the observation of a cosmic calamity. Literally a long time ago and in a galaxy far, far away, two black holes, locked for eons in a dance of death, finally slammed into one another. Over the course of a few milliseconds, energy equivalent to the mass of three stars the size of our sun was released as gravitational waves that roared across the cosmos. Gravitational waves occur when the fabric of space and time is distorted by the movement of large masses. Their existence was predicted in 1916 by Albert Einstein. In this announcement, black holes of 31 and 25 solar masses merged into a larger black hole with a mass 53 times that of our sun. For that brief instant, the gravitational energy emitted by that collision outshined all of the light emitted by all the galaxies throughout the known universe. After traveling about 1.8 billion light-years, the death scream of these two ancient stars passed through the Earth. On August 14, three detectors recorded the passage of these gravitational waves. Two detectors in the United States -- one in Hanford, Washington, and the other in Livingston, Louisiana -- are called the Laser Interferometer Gravitational-Wave Observatory, or LIGO. The other detector, located near Pisa, Italy, is called Virgo. All three detectors are L-shaped, with each leg being about two miles long. Using lasers and mirrors, these phenomenal pieces of scientific equipment are able to measure tiny changes in the length of the legs of the detectors and identify the passage of gravitational waves. In February 2016, scientists announced that the first direct observation of gravitational waves had been made using just the two LIGO detectors. That was followed by a second announcement in June 2016. Because Virgo was undergoing an extensive upgrade, it was not operating during these first observations. Virgo's upgrades were completed and the facility became operational August 1, and thus it also recorded the gravitational waves of August 14. The addition of a third facility is an enormous improvement in capability. Like seismographs on Earth, gravitational wave detectors are non-directional and individually cannot determine the location from which the gravitational waves originated. However, by employing multiple detectors and carefully recording the arrival time of gravitational waves at each detector, scientists can triangulate and vastly improve the directional precision of the measurement. By including Virgo with the two LIGO measurements, the estimate of the location in the sky from which the waves originated was improved tenfold.
A proposed additional facility in India called Indigo, an exact copy of the LIGO equipment, will result in an even greater improvement if it is built. So why are gravitational wave observatories interesting? Well, the simplest answer is that they can verify that Einstein's theory of general relativity is right, but that's actually not a very satisfying one. There have already been many other tests of general relativity, including the simple fact that the GPS on your phone would simply not work if the theory were not correct. A better answer involves astronomy. Black holes are just that -- black. They are the corpses of dead stars, so massive and compact that not even light can escape them. They literally cannot be seen, and before LIGO came online, their existence could only be inferred by their gravitational effect on their neighbors or because of light (often X-rays) emitted by hot gas falling into the black hole. But an isolated black hole is invisible. It interacts via gravity and, even then, it only emits gravitational radiation when it is moving. So detectors like LIGO or Virgo are the only way to see them. They are essentially black hole telescopes. With even just a few observations of gravitational waves, the LIGO measurements have already perplexed scientists. Prior to 2016, astronomers thought that there were two classes of black holes: stellar-class black holes, with masses no more than about 10 times that of our sun, and massive, monstrous black holes at the center of galaxies with masses in the range of hundreds of thousands to billions of solar masses. Black holes with masses in the range of 30 solar masses or so were unexpected. And yet, that's just what LIGO (and now LIGO plus Virgo) have observed. If history teaches us anything, it's that a new telescope means we should expect the unexpected. Studying gravitational waves will teach us something that can't be observed in any other way. There's no way to know what we'll learn. But I am positive that it will be fascinating. Correction: An earlier version of this article said that scientists made the first direct observation of gravitational waves in February 2016. In fact, that was the month they announced the observation, which had been made in September 2015.
A device that wants to access the Internet must first obtain a valid IP configuration (an IP address, subnet mask, gateway and DNS server). Generally, the primary router's DHCP server is enabled by default, so devices connected to the primary router are assigned a valid IP address by that DHCP server and can access the Internet. If the secondary router's DHCP server is disabled, devices connected to the secondary router will not be assigned an IP address through it and cannot access the Internet. Even when the secondary router's DHCP server is enabled, the addresses it hands out with its default settings are only suitable for reaching the Internet through the secondary router itself; they cannot reach the WAN through the primary router and are, in effect, invalid. Because the primary router acts as the gateway for the LAN, the gateway address handed out by the secondary router's DHCP server should be set to the primary router's LAN port IP address. Next comes the DNS server: to reach the Internet, the primary router needs a valid DNS server address, and it usually obtains two DNS server addresses through its WAN port or via dynamic IP, one primary DNS server and one backup DNS server. These DNS server addresses should then also be set in the secondary router's DHCP server, so that the IP configuration assigned by the secondary router can reach the Internet through the primary router's WAN connection. In this way, the addresses assigned by the secondary router's DHCP server are valid, and devices connected to the secondary router can get online.
Wireless routers are now everywhere, but people living in apartment buildings often feel that the signal is weak. Here is how to boost the signal without changing the router, using a small everyday object. Step 1: Prepare an empty drink can and a pair of scissors. Step 2: Use the scissors to cut an opening around the bottom of the can, trying to keep the cut even. Step 3: Cut the entire bottom off the can; be careful not to cut your hand, and trim off any burrs left on the cut edge with the scissors. Step 4: Cut toward the mouth of the can. Step 5: As shown in the picture, make cuts from the mouth in two directions so that the can opens out. Step 6: After cutting, spread the can open; the wider it is opened, the more it will amplify the signal. Step 7: Place it over the antenna of the wireless router. Whichever direction you point it is the direction in which the signal is strengthened; generally this can improve the signal by one or two bars.
The image at right is Hubble's close-up view of the myriad stars near the galaxy's core, the bright whitish region at far right. An image of the entire galaxy, taken by the European Southern Observatory's Wide Field Imager on the ESO/MPG 2.2-meter telescope at La Silla, Chile, is shown at left. The white box outlines Hubble's view. Credit: NASA, ESA, R. O'Connell, B. Whitmore, M. Dopita, and the Wide Field Camera 3 Science Oversight Committee/European Southern Observatory
The Hubble Space Telescope's powerful new camera has taken the most detailed image yet of star birth in the nearby spiral galaxy M83. Nicknamed the Southern Pinwheel, M83 is undergoing more rapid star formation than our own Milky Way galaxy, especially in its nucleus. In this galaxy, the sharp eye of Hubble's Wide Field Camera 3 (WFC3), newly installed this summer during the telescope's fourth and final servicing mission, has captured hundreds of young star clusters, ancient swarms of globular star clusters, and hundreds of thousands of individual stars, mostly blue supergiants and red supergiants. WFC3's broad wavelength range, from ultraviolet to near-infrared, reveals stars at different stages of evolution, allowing astronomers to dissect the galaxy's star-formation history. The new image reveals in unprecedented detail the current rapid rate of star birth in this spiral galaxy. The newest generations of stars are forming largely in clusters on the edges of the dark dust lanes, the backbone of the spiral arms. These fledgling stars, only a few million years old, are bursting out of their dusty cocoons and producing bubbles of reddish glowing hydrogen gas. Gradually, the young stars' fierce winds (streams of charged particles) blow away the gas, revealing bright blue star clusters. These stars are about 1 million to 10 million years old. The older populations of stars are not as blue. A bar of stars, gas, and dust slicing across the core of the galaxy may be instigating most of the star birth in the galaxy's core. The bar funnels material to the galaxy's center, where the most active star formation is taking place. The brightest star clusters reside along an arc near the core. The remains of about 60 supernova blasts, the deaths of massive stars, can be seen in the image, five times more than known previously in this region. M83 is located 15 million light-years away in the Southern Hemisphere constellation Hydra.
Social - Emotional Development We can see the beginnings of social-emotional development in our very young children. Can your baby begin to soothe herself back to sleep? Can your toddler negotiate an extra book at bedtime? Can your preschooler show caring when you stub your toe on the sidewalk or wait through a phone call before asking to go to the park with you? Social and emotional development is the change over time in children’s ability to react to and interact with their social environment—other people in their lives. Social and emotional development is complex and includes many different areas of growth. Social and emotional milestones are generally harder to pinpoint than signs of physical development. Research shows that social skills and emotional development (reflected in the ability to pay attention, make transitions from one activity to another, and cooperate with others) are a very important part of school readiness, getting along with others, and beginning to advocate for oneself when needed.
Poplar trees are part of the willow family and make useful landscaping trees for many scenarios. As a group, the poplars do best in moist soil. As many as 15 different types exist that are native to North America. Also known as a cottonwood tree, the poplars feature, for the most part, large leaves that often quiver when even a hint of a breeze arises. Examine the leaves of poplars and distinguish their shapes. Study those of the white poplar, looking for three or five distinctive lobes that make it resemble a maple leaf in its shape. The leaves are usually from 2 to 4 inches long, with green on the top and silver-white on their undersides. Other poplar leaves, like those of the eastern cottonwood and the plains cottonwood, will quickly remind you of the ace of spades from a deck of playing cards. Listen when the wind blows and you will hear the poplar leaves rustling. Their shape combines with their long stems to create a leaf that the wind will move back and forth in the canopy of the poplar. White poplar leaves, with their silver-white undersides, almost seem to glimmer in the wind. Look for the fall colors on poplars. One cultivar of white poplar called "Richardii" has leaves that are yellow on their top during the spring and summer. Other white poplars turn reddish in autumn, while eastern cottonwood leaves turn yellow. Estimate the height of a tree you suspect is a poplar. The average white poplar will fall anywhere between 60 to 100 feet tall, according to the University of Connecticut Plant Database website. However, cultivars like the "Pyramidalis" are shorter, usually well less than 60 feet high. Richardii is also not as tall as the typical white poplar. The eastern cottonwood is able to grow to 100 feet and possesses a much wider trunk than most poplars. Watch the woods and landscape for the poplar seeds floating on the wind. You will be able to see them as they come off the poplar tree, attached to what looks like a cotton parachute that carries them away from the tree when the capsules containing them burst apart. Poplars are either male or female, with only the female trees producing these seeds.
Purple Shore Crab (Hemigrapsus nudus) Shore crabs are members of a large group called grapsoid crabs that dispel the notion that crabs are only marine animals. Not only are they the dominant crabs along shorelines in many parts of the world, but many of the species are “land crabs,” common in terrestrial habitats at and near the coast in tropical latitudes. Some of them ascend tropical streams and rivers to high elevations in the mountains. Grapsoids have in common a body that is more or less square. The Purple Shore Crab occurs from southern Alaska to northern Mexico along the Pacific coast. Adult crabs are usually purple but vary to reddish brown or even olive or yellow; younger ones are even more variable. The largest adults have a carapace width of 56 mm in males and 34 mm in females. Shore crabs are most common on rocky coasts but may occur in brackish estuaries. They are easily found on and under boulders and cobble, where they can reach high densities. Most of their feeding is on green algae and single-celled organisms such as desmids and diatoms, but animals such as tiny crustaceans, newly growing bivalves, and even snail eggs make up a minor part of their diet. They are in turn eaten by fish, gulls, and scoters. Mating occurs during midwinter, the male clasping a female by her chelipeds and guiding her to fertilization with his walking legs. The first pleopods (swimming legs) of the male are modified to transfer sperm to the female. Female crabs have wider abdomens than males, allowing them to carry masses of fertilized eggs around until they hatch. The largest females may produce clutches of over 30,000 eggs at one time. Shore crabs are not always easy to catch, but when you do catch one, watch out; they can pinch! Claws notwithstanding, they are eaten by fishes, diving birds such as loons, and gulls.
A stalagmite is a formation found on the floor of a cave. It rises from the floor of a limestone cave when mineralized solutions drip from the ceiling and deposits of calcium carbonate build up into columns on the ground. The name (Greek Σταλαγμίτης) comes from stalagma, meaning "drop" or "drip". The corresponding formation on the ceiling of a cave is known as a stalactite. There are several methods to help remember which formation hangs from the ceiling (stalactite) and which rises from the floor (stalagmite):
- StalaCtite has a "c" for "ceiling".
- StalaGmite has a "g" for "ground".
- The T in StalacTite resembles one hanging from the ceiling, while the M in StalagMite resembles a formation rising from the floor.
When touring caves with stalactites and stalagmites, you might be asked not to touch the rock formations. This is generally because the formations are considered to still be growing. Since the rock buildup is formed by minerals solidifying out of the water solution, skin oils can disturb where the mineral-laden water will cling, so the development of the rock formation will be affected and will no longer be natural. Stalactites and stalagmites can also form on concrete ceilings and floors, but they form much more rapidly there than in the natural cave environment.
[Part 1 offers an overview and introduction to the sources of distortion in power amplifiers.] The input stage of an amplifier performs the critical duty of subtracting the feedback signal from the input, to generate the error signal that drives the output. It is almost invariably a differential transconductance stage; a voltage-difference input results in a current output that is essentially insensitive to the voltage at the output port. Its design is also frequently neglected, as it is assumed that the signals involved must be small, and that its linearity can therefore be taken lightly compared with that of the voltage amplifier stage (VAS) or the output stage. This is quite wrong, for a misconceived or even mildly wayward input stage can easily dominate HF distortion performance. The input transconductance is one of the two parameters setting HF open-loop (o/l) gain, and thus has a powerful influence on stability and transient behaviour as well as distortion. Ideally the designer should set out with some notion of how much o/l gain at 20 kHz will be safe when driving worst-case reactive loads (a precise measurement method of open-loop gain was outlined last month), and from this a suitable combination of input transconductance and dominant-pole Miller capacitance can be chosen. Many of the performance graphs shown here are taken from a model (small-signal stages only) amplifier with a Class-A emitter-follower output, at +16dBu on 15V rails. However, since the output from the input pair is in current form, the rail voltage in itself has no significant effect on the linearity of the input stage. It is the current swing at its output that is the crucial factor.
Vive la differential
The primary motivation for using a differential pair as the input stage of an amplifier is usually its low DC offset. Apart from its inherently lower offset due to the cancellation of the Vbe voltages, it has the added advantage that its standing current does not have to flow through the feedback network. However a second powerful reason is that its linearity is far superior to single-transistor input stages. Figure 1 shows three versions, in increasing order of sophistication. The resistor-tail version in Figure 1(a) has poor CMRR and PSRR and is generally a false economy; it will not be further considered. The mirrored version in Figure 1(c) has the best balance, as well as twice the transconductance of that in Figure 1(b).
Figure 1: Three versions of an input pair: (a) Simple tail resistor; (b) Tail current-source; (c) With collector current-mirror to give inherently good Ic balance.
Intuitively, the input stage should generate a minimal proportion of the overall distortion because the voltage signals it handles are very small, appearing as they do upstream of the VAS that provides almost all the voltage gain. However, above the first pole frequency P1, the current required to drive Cdom dominates the proceedings, and this remorselessly doubles with each octave, thus:
Ipk = 2π · f · Cdom · Vpk
For example, the current required at 100 W into 8 Ω at 20 kHz, with a 100 pF Cdom, is 0.5 mA peak, which may be a large proportion of the input standing current, and so the linearity of transconductance for large current excursions will be of the first importance if we want low distortion at high frequencies. Figure 2, curve A, shows the distortion plot for a model amplifier (at +16dBu output) designed so that the distortion from all other sources is negligible compared with that from the carefully balanced input stage.
With a small-signal class-A stage this essentially reduces to making sure that the VAS is properly linearised. Plots are shown for both 80 kHz and 500 kHz measurement bandwidths to show both HF behaviour and LF distortion. It demonstrates that the distortion is below the noise floor until 10 kHz, when it emerges and heaves upwards at a precipitous 18 dB/octave. Figure 2: Distortion performance of model amplifier differential pair at A compared with singleton input at B. The singleton generates copious second-harmonic distortion. This rapid increase is due to the input stage signal current doubling with every octave to drive Cdom; this means that the associated third harmonic distortion will quadruple with every octave increase. Simultaneously the overall NFB available to linearise this distortion is falling at 6 dB/octave since we are almost certainly above the dominant pole frequency P1. The combined effect is an 18 dB/octave rise. If the VAS or the output stage were generating distortion, this would be rising at only 6 dB/octave and would look quite different on the plot. This form of non-linearity, which depends on the rate-of-change of the output voltage, is the nearest thing to what we normally call TID, an acronym that now seems to be falling out of fashion. Slew-induced distortion SID is a better description of the effect. If the input pair is not accurately balanced, then the situation is more complex. Second as well as third harmonic distortion is now generated, and by the same reasoning this has a slope of closer to 12 dB/octave. This vital point requires examination.
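As a quick numerical check of the peak-current formula Ipk = 2π · f · Cdom · Vpk quoted earlier, here is a small Python sketch (my own illustration, not from the original article) that reproduces the 0.5 mA figure for 100 W into 8 Ω at 20 kHz with a 100 pF Cdom, and shows the current demand doubling with each octave:

import math

def peak_current(freq_hz, cdom_farads, vpk_volts):
    # Ipk = 2*pi*f*Cdom*Vpk: peak current needed to drive the dominant-pole capacitance
    return 2 * math.pi * freq_hz * cdom_farads * vpk_volts

vpk = math.sqrt(2 * 100 * 8)   # 100 W into 8 ohms gives a peak voltage of 40 V
for f in (5e3, 10e3, 20e3):
    print(f, peak_current(f, 100e-12, vpk))
# 20 kHz gives about 5.0e-4 A, i.e. roughly 0.5 mA peak, and each higher octave doubles it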
The January issue of Chemistry World includes a warning on impending shortages of certain elements. Among them are the rare earth elements, in particular neodymium, production of which, it is reckoned, will have to increase five times to build enough magnets for the number of wind turbines deemed necessary for a fully renewable future. Rough calculations indicate that this would still take 50–100 years to implement, depending on exactly what proportion of the renewable electricity budget would be met from wind power, and if the manufacturing capacity and other resources of materials and energy needed for this Herculean task will prevail. Neodymium is a rare earth metal used extensively to produce permanent magnets found in everything from computer hard disks and cell phones to wind turbines and cars. Neodymium magnets are the strongest permanent magnets known, and a neodymium magnet of a few grams can lift a thousand times its own weight. The magnets that drive a Toyota Prius hybrid’s electric motor use around 1 kilogram of neodymium, while 10–15 kg of lanthanum is used in its battery. Interestingly, neodymium magnets were invented in the 1980s to overcome the global cobalt supply shock that occurred as the result of internal warfare in Zaire (now Congo). Around one ton of Rare Earth Elements (REE) based permanent magnets is needed to provide each MW of wind-turbine power. By alloying neodymium with dysprosium, magnets are created that more readily maintain their magnetism at the high temperatures of hybrid car engines. However, far more dysprosium relative to neodymium is required than occurs naturally in the REE ores, meaning that another source of dysprosium must be found if hybrid cars are to be manufactured at a seriously advancing rate. 97 percent of REEs come from China, and it appears that China will run out of dysprosium within 15 years, or sooner if demand continues to soar. Peak oil may already be with us, and peak coal in 10–15 years, while peak lithium remains a subject of speculation. Peak neodymium is the latest threat to green energy, while doubt emerges over the security of many other element groups including the rare earths, the platinum group metals, and elements such as antimony, beryllium, gallium, germanium, graphite, indium, magnesium, niobium, tantalum, and tungsten. Helium (used to cool superconducting magnets in hospital MRI scanners) and phosphorus (in agricultural fertilizers) are also under threat. If even "renewables" cannot save us from waning fossil fuel depletion, the only solution is to begin the deceleration of consumption to a lower-energy society based around local communities immediately, with vastly reduced inputs of energy and all kinds of "mined" resources. Recycling must be key to this most difficult transitional step, in hand with a new concept of a "circular economy," that aims to model nature where nothing is wasted. Source: Chris Rhodes, Forbes/Energy Source blog
Instruction Set Basics As you might expect, the instruction set is very simple. The way it is laid out it is very easy to read and write the hex op codes directly (although I have a cross assembler I'll tell you about shortly). Figure 1 shows the basic format. Note that the functional units have an 8-bit address and a 4-bit subfunction code. So an instruction that doesn't use registers and has no conditions looks like: 00 sSS dDD Where SS is the source unit, s is the source sub unit, and the D's represent the destination unit and sub unit. Constants are easy to identify also. The short form takes advantage of the fact that if the condition mask is zero, there's no meaningful reason to have the condition match bits set. So a short constant will have a 1 in the first position and you simply have to mask the top bit off. So 1000003F is the constant 3F hex (note that the constants are sign extended to 32-bits so 18000000 is the constant F8000000). Full 32-bit constants start with 00000F01 followed by the full 32-bit constant (this requires two cycles to execute and does not respect condition codes). Of course, the real meat to this processor isn't the format of the instructions, it is the possible values for the source and destination. Table 1 shows the standard functional units and their addresses. Note the constant functional unit is unit zero which allows for some mnemonic subfunctions. So 00000001 (source unit 0, subunit 0) loads a zero into the program counter while 00A00001 loads a constant A. Of course, that only goes so far and some of the subfunctions are just arbitrarily assigned. Although every instruction is technically a move, you might prefer to give some mnemonic aliases to special moves (my cross assembler does this). For example, moving something into the program counter is a jump. A subroutine return involves moving the top of the stack to the program counter. You might be wondering: This all sounds interesting, but how can I implement a custom CPU? The answer is to use a Field Programmable Gate Array (FPGA) device. (For more on programmable logic devices like FPGAs, see my article Programmable Logic and Hardware). In a nutshell, an FPGA implements a large number of logic cells and a programmable way to interconnect them. You can create your design by drawing schematics (only feasible for small designs) or by using a hardware description language (like Verilog, the language used to implement One-Der). A program running on a PC takes your description and converts it into a configuration of the FPGA's logic cells. The result is downloaded to the FPGA by the PC's parallel port. You can also download the result to a EEPROM which can automatically configure the FPGA on startup once you are happy with your design. Working with FPGAs used to be very expensive. Today, vendors offer development boards that work with free software for well under US$100. My prototype of the One-Der architecture works on a development board available from Digilent (see Resources) that has the equivalent of about 1,000,000 logic gates (it costs a bit more than $100, but was well under $200). In addition to the Xilinx Spartan 3 FPGA chip and assorted support functions, the board also contains some memory devices along with a handful of I/O devices such as switches, LEDs, and a serial port. Of course, none of these things do anything unless your logic makes them do something. The One-Der prototype can read the switches, light the LEDs, and implements a serial port you can use in your programs. 
It also provides 16 bits of general-purpose outputs and another 16 bits of input available on the board's edge connector.

If you aren't familiar with Verilog, you'll notice it strongly resembles C. However, the way it is handled is very different since the result isn't object code, it's a hardware configuration. Consider the following C snippet:

x = a | b | ~c;
y = a & b;

This actually generates code that sets x at some discrete time, then sets y a short time later. After this code executes, x and y won't change unless the program reexecutes this code, or executes some other code that modifies x and y. In Verilog you might write:

assign x=a|b|~c;
assign y=a&b;

This will create several logic gates: one set that drives the x output and another that drives the y output. These gates will constantly compute the equations in hardware, and the value of x (or y) at any instant will reflect the current values of the inputs. There's no sequence of computations implied by the line order. The resulting logic is all in parallel. You can specify gates directly or use other methods, but I find them all clunky compared to using assign. For completeness though, here are two other ways you could drive the x output equivalently:

or(x,a,b,~c);
always @(a,b,c) x=a|b|~c;

All of these build asynchronous logic in Verilog. You can also create synchronous logic, which makes up the bulk of the CPU. With synchronous logic, everything is referenced to a clock pulse. This way, outputs have until the next clock pulse to "settle" and you can build very complex logic without worrying too much about the different delays encountered by the various signals. For example, consider generating parity on a serial bit stream. The idea is to use a flip-flop to remember the current parity value. When the serial data arrives (at the rising edge of the clock), the new parity will be the old parity exclusive-ORed with the data bit. Here's a Verilog module to implement this logic:

module serialparity(input clk, input reset, input data, output reg even, output odd);
assign odd=~even;
always @(posedge clk)
  if (reset) even=1'b0; else even=even^data;
endmodule

This defines a module (like a parallel subroutine) that "executes" on each positive clock edge. If you read C, you can probably puzzle out most of the syntax. The 1'b0 means a literal zero that is 1-bit long (the b is for binary). In English, the module takes each data bit and exclusive-ORs it with the previous parity value to form the next parity value. Of course, a complete Verilog tutorial is beyond the scope of this article, but there are plenty of resources on the Internet. The CPU is relatively simple, but it probably isn't the ideal first Verilog project. On the other hand, adding functional units and manipulating existing ones is well within the reach of the Verilog neophyte.
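To make the difference between sequential software and clocked hardware concrete, here is a small software model of the parity logic above, written in plain Python rather than Verilog. It is only an illustration of the behaviour described in the text, not part of the One-Der sources.

# Software model of the serialparity module: the "flip-flop" is just a variable
# that is updated once per simulated clock edge.
def serial_parity(bits):
    even = 0                                  # reset state of the flip-flop
    trace = []
    for bit in bits:                          # one loop iteration = one rising clock edge
        even ^= bit                           # new parity = old parity XOR incoming data bit
        trace.append((bit, even, 1 - even))   # (data, even, odd)
    return trace

for data, even, odd in serial_parity([1, 0, 1, 1]):
    print(f"data={data} even={even} odd={odd}")

Returning to the instruction format described at the start of this article, here is a minimal decoding sketch in Python. The field layout is inferred from the worked examples in the text (low byte = destination unit, next nibble = destination sub-function, then the source unit and sub-function, with bit 28 marking a short, sign-extended 28-bit constant). The condition-mask and condition-match bits and the 00000F01 long-constant form are not handled, and none of this is the official One-Der encoding.

def decode(word):
    """Decode a 32-bit One-Der-style move word (illustrative layout only)."""
    if (word >> 28) & 0x1:                    # short-constant marker bit set
        value = word & 0x0FFFFFFF             # mask the marker off
        if value & 0x08000000:                # sign-extend the 28-bit constant
            value -= 0x10000000
        return ("short_const", value & 0xFFFFFFFF)
    dest_unit = word & 0xFF                   # DD: destination functional unit
    dest_sub  = (word >> 8) & 0xF             # d : destination sub-function
    src_unit  = (word >> 12) & 0xFF           # SS: source functional unit
    src_sub   = (word >> 20) & 0xF            # s : source sub-function
    return ("move", src_unit, src_sub, dest_unit, dest_sub)

print(decode(0x00A00001))   # constant A (unit 0, sub-unit A) -> program counter (unit 1)
print(decode(0x1000003F))   # short constant 0x3F
print(decode(0x18000000))   # sign-extends to 0xF8000000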
Paraeducators and IDEA 2004: Understanding IDEA Terminology Why This Topic Is Important to Paraeducators Too often in education, unexplained terms and jargon are used that can undermine effective communication. Even worse, terms and acronyms are sometimes used inappropriately. When people find themselves unsure of what a term or phrase means, they may hesitate to ask for clarification so as not to appear uninformed. You want to be sure you are doing your job properly. An understanding of the commonly used terms found in IDEA 2004 will help you discuss and share information relevant to students with disabilities. You should familiarize your self with the glossary in the following section. How to Understand IDEA 2004—A Short Glossary of Terms Assistive technology device —any item, piece of equipment, or product system—whether acquired commercially off the shelf, modified, or customized—that is used to increase, maintain, or improve the functional capabilities of a child with a disability. The term does not include a medical device that is surgically implanted or the replacement of that device. Assistive technology service —any service that directly assists a student with a disability in the selection, acquisition, or use of an assistive technology device. This term covers: - Evaluating the needs of the student, including a functional evaluation of the student in his or her customary environment. - Purchasing, leasing, or otherwise providing for the acquisition of assistive technology devices by such student. - Selecting, designing, fitting, customizing, adapting, applying, maintaining, repairing, or replacing assistive technology devices. - Coordinating and using other therapies, interventions, or services with assistive technology devices, such as those associated with existing education and rehabilitation plans and programs. - Providing training or technical assistance for a child with a disability, or, where appropriate, the family of the child. - Providing training or technical assistance for professionals (including individuals providing education and rehabilitation services), employers, or other individuals who provide services to, employ, or are otherwise substantially involved in the major life functions of the child [emphasis added]. This would include paraeducators. Child with a disability —a child evaluated in accordance with the requirements of IDEA as having mental retardation, a hearing impairment (including deafness), a speech or language impairment, a visual impairment (including blindness), a serious emotional disturbance, an orthopedic impairment, autism, traumatic brain injury, an other health impairment, a specific learning disability, and/or deaf-blindness or multiple disabilities, and who, by reason of the preceding condition(s), needs special education and related services. Equipment —includes machinery, utilities, built-in equipment and any necessary enclosures or structures to house such machinery, utilities or equipment, and all other items necessary for the functioning of a particular facility for the provision of education services to students with disabilities. Such items include instructional equipment and necessary furniture; printed, published, and audiovisual instructional materials; telecommunications, sensory, and other technological aids and devices; and books, periodicals, documents, and other related materials. 
Free appropriate public education —often referred to as FAPE, this includes special education and related services that are provided at public expense, under public supervision and direction, and without charge; that meet the standards of the state education agency; that include an appropriate preschool, elementary school, or secondary school education; and that are provided in conformity with the individualized education program. Individualized education program —often referred to as the IEP, this is a written statement for a child with a disability that is developed, reviewed, and revised in accordance with IDEA’s requirements. The IEP team includes parents, teachers, and others who are deemed, by either the agency or the parents, to have special knowledge or expertise regarding the student. Paraeducators often fall into the category of those having special knowledge or expertise of a student. Individualized family service plan —a written plan for providing early intervention services to infants and toddlers. The contents of the plan include: - Statement of the infant’s or toddler’s present level of development in the following areas: physical, cognitive, communication, social or emotional, and adaptive. This statement shall be established using objective criteria. - Statement of the family’s resources, priorities, and concerns relating to the enhancement of the child’s development. - Statement of the measurable results or outcomes to be achieved, including pre-literacy and language skills, as developmentally appropriate for the child, and the criteria, procedures, and timelines used to determine progress toward the results or outcomes and whether modifications or revisions are necessary . - Statement of specific early intervention services based on peer-reviewed research, to the extent practicable, necessary to meet the infant’s or toddler’s and the family’s unique needs, including the frequency, intensity, and method of delivering services. - Statement of natural environments in which early intervention services will appropriately be provided, including a justification of the extent, if any, to which any services will not be provided in a natural environment. - Projected dates for initiation of services and the anticipated length, duration, and frequency of the services. - Identification of service coordinator from the profession most immediately relevant to the infant’s, toddler’s, or family’s needs, who will be responsible for the implementation of the plan, and the coordination with other agencies and persons, including transition services. - Steps to be taken to support the transition of the toddler with a disability to preschool or other appropriate services. Individuals with Disabilities Education Improvement Act —often referred to as IDEA 2004, the major federal education program for students with disabilities. IDEA 2004 remains in effect until it is reauthorized. It contains comprehensive requirements and authorizes state and local aid for special education and related services for children with disabilities. Related services —transportation and such developmental, corrective, and other supportive services as are required to assist a child with a disability in benefiting from special education. This includes: - Early identification and assessment of disabling conditions in children. 
- Developmental, corrective, and supportive services including speech-language pathology and audiology services; psychological, physical, and occupational therapy; recreation, including therapeutic recreation; social work services; counseling services including rehabilitation counseling; and orientation and mobility services. - Medical services included in related services shall be for diagnostic and evaluation purposes only. Related services do not include a medical device that is surgically implanted, the optimization of device functioning, maintenance of the device, or replacement of such a device. Supplementary aids and services —aids, services, and other supports that are provided in regular education classes or other education-related settings to enable children with disabilities to be educated with non-disabled children to the maximum extent appropriate. [Source: Definitions section of IDEA, located in Volume 20 United States Code Section 1401.]
Gender parity in education is a critical milestone along the way to universal primary education and gender equality. The previous chapters highlight the reality of accomplishments and challenges as understood by education and development specialists. Like a map, they locate where each region stands today. There is also a need to highlight some concrete measures that are necessary to move beyond present day realities to reach the ultimate destination – a world where all children complete a quality basic education. Abolish Fees, Cap Total Costs, Provide Incentives Children’s right to education cannot be held ransom to poverty. First and foremost, governments need to put a ceiling on the total household cost of schooling. This means abolishing tuition fees and implies limiting or eliminating the hidden costs related to education. - Provide scholarships for all disadvantaged children. Direct costs and school-related expenses need to be covered. - Cap fees and other school charges. Fees for parent teacher associations could be made illegal, school uniforms could be eliminated and textbooks could be rented rather than the more costly alternative of purchasing books. - Provide financial incentives to disadvantaged families in return for their children’s school enrolment and regular attendance. Impoverished families must often choose between sending children to school or engaging them in paid work, domestic labour and subsistence activities. Incentives could be in the form of cash or take-home food rations. - Abolish school fees and other charges as national policy. This intervention has the potential to flood schools with new students, but can be successfully addressed through strategic planning as well as phased-in approaches such as eliminating fees by grade over several years. Provide ‘Essential Learning Package’ When emergencies strike, the world somehow rises to the occasion and produces sterling results under dire circumstances. Within a short time frame millions of children are enrolled in school and learning because certain essential supplies and services have been widely distributed. The key is to utilize the lessons learned during these crises and adapt them to countries that face chronic problems that require emergency type solutions. Along with major education advocacy campaigns, certain actions are required: - Quick needs assessments as starting points for rapid progress - Identification and costing of key elements of supplies and services that constitute feasible ‘essential learning packages’ - Negotiating package elements and costs that are affordable in the medium to long term to ensure sustainability - Initial infusion of major funding by donors (front loading) to take up the costs involved in the short to medium term - Technical support for developing, evaluating and distributing the ‘essential learning package’ in a rapid, efficient and large scale manner - Partnerships between external agencies and governments to make full use of existing national systems, procedures and instruments rather than creating new or parallel systems. Promote Schools as One-Stop Centres for ‘Learning Plus’ School can be the gateway for ameliorating external barriers that threaten the delivery of essential services to children, including education. Schools that promote ‘Learning Plus’ can be the difference between hunger and nutrition, illness and health, fear and fun, and ignorance and knowledge. 
- Provide school feeding such as mid-day meals or take-home rations - Implement school health programmes that include deworming treatment, micronutrients and immunization - Provide care and support for orphans and other vulnerable children - Establish hygiene and health education that encourages practices for disease prevention - Establish safe and protective environments for children to learn and play. Date with Destiny Progress towards gender parity in education has been slow but steady. And while some countries have a long road to travel to meet this goal, cautious optimism abounds. Achievement of universal primary education by 2015 is more tenuous, and achievement of gender equality will be more elusive. For world leaders to be able to hold their heads high, and, more importantly, for all children to attain their right to quality education, definitive action must be taken now. The clock is ticking.
Art teachers need to include careers in art in their lesson plans. One career that uses the same principles as art is that of an interior decorator. After you teach the students the basics of perspective, they can use those skills to draw and decorate a bedroom. Interior decorators often draw a picture of the room to show to the customer before they begin the work. The students can practice decision-making skills by choosing the style and colors used in the room. Just as art has movements, fashion and interior decorating have movements called trends or styles. Some of the styles of interior decorating include traditional, Early American, modern and Art Deco.

To make this project, you will need:
- Art print: The Room by Vincent Van Gogh
- Crayons or colored pencils
- Pictures of furniture from newspaper advertisements that show different styles

Review perspective from the previous lesson. Demonstrate how to draw a room, using a vanishing point that you can't see because the wall is in front of it. Draw four lines from this point to represent the top and bottom of the side walls. Draw a rectangle by adding vertical and horizontal lines to make the ceiling and floor. Use the art print of Van Gogh's "The Room" as an example. Point out how furniture rests on the floor by drawing another parallel perspective line to the vanishing point. If you don't draw a parallel perspective line for both the top and bottom of the chest of drawers, the chest appears to be coming out of the wall instead of resting on the floor. Point out the parallel lines in the art print that are used to draw the furniture.

Once the students have the basic room drawn, they can add furniture in the style of their choice. Have examples of different styles for them to draw from. Have them sketch first, make any needed adjustments, and then do a finished drawing using color and applying a color scheme. Point to the color wheel poster for examples of color schemes: complementary, monochromatic, and analogous, which they were introduced to before.

- Used parallel perspective lines to draw walls
- Used parallel perspective lines to draw furniture
- Sketched first and was able to make adjustments and erasures easily
- Used a color scheme

Sunshine State Standards VA.E.1.3.2 understands the skills artists use in various careers and how they can be developed in art school or college or through internships.
©Paula Hrbacek All rights reserved. Please link to this article instead of reposting it. For reprint rights use the contact form at www.paulahrbacek.weebley.com.
In the Jim Crow South, poll taxes were one tactic commonly used to keep blacks from voting. Towns would arbitrarily impose taxes on people pursuing their right to vote. By 1904, every former Confederate state had put in place some form of poll tax. Coupled with literacy tests, which disqualified black voters for a single wrong answer but did not penalize whites, poll taxes were part of a system of widespread disenfranchisement in the South. Practices like these prompted civil rights groups to push for the Voting Rights Act, which prevented states from placing onerous burdens on citizens pursuing their right to vote. As part of the act, certain states and jurisdictions that were deemed to have a history of disenfranchising voters had to obtain federal approval before changing their voting laws or practices.
The Union of Soviet Socialist Republics was one of the greatest empires of the twentieth century. Its territory took shape between 1922 and 1956. In its final form, the USSR consisted of 15 republics.

Formation of the USSR
The date on which the creation of the USSR was officially proclaimed is considered to be December 29, 1922. The initiators of the memorandum on forming the state association were the Russian Socialist Federative Soviet Republic, Belarus, Ukraine and the Transcaucasian republics. The main factors that influenced the unification of these countries into a single union were:
- ideology - the republics of the USSR were ruled by one party, the Bolsheviks;
- common historical traditions - all the subjects had, in one way or another, formerly been part of the Russian Empire;
- the need to unite for collective military security - minimizing costs by creating a unified army.
The day after the declaration of unification within the framework of the Union of Soviet Socialist Republics, the first All-Union Congress of Soviets was held, which proclaimed the unification of the republics into a single state, and the declaration and the text of the union treaty were signed. December 30, 1922 is therefore taken as the starting point in the history of the USSR. The declaration stated that the Union of Soviet Socialist Republics was to unite the republics on the principles of equality and nationality. The form of organization fully corresponded to the basic Bolshevik theses, such as the dictatorship of the proletariat and the poorest peasants.

Like any state, the Union of Soviet Socialist Republics received its own Constitution. It was adopted in January 1924 and was a partial copy of the constitution of the "main" founder, the RSFSR. The first version of the Constitution of the USSR was considered quite modern and fulfilled the tasks expected of a fundamental law: it defended the basic rights and freedoms of citizens. This constitution remained in force until 1936. It was replaced because the composition of the USSR had expanded, and because foreign economic and foreign policy factors also called for adjustments to the fundamental law.

Republics of the USSR in the twenties and thirties
Soon after the first constitution was adopted in 1924, the number of republics increased. The Uzbek and Turkmen Soviet Socialist Republics were the first to join the newly formed state. These republics were created on territory that had been part of the RSFSR. On October 27, 1924, the Turkestan ASSR was abolished, and these two new subjects of the USSR appeared in its place. The next stage of reorganization took place on December 5, 1929, when the Tajik Autonomous Soviet Socialist Republic was separated from the Uzbek Soviet Socialist Republic and became a separate union republic.

Transformation of the Transcaucasian SFSR
Seven years later, on December 5, 1936, the Transcaucasian Socialist Federative Soviet Republic was abolished. On this day, it was decided to create three new republics in its territories: the Georgian, Armenian and Azerbaijani Soviet Socialist Republics. In the same period, a second separation of territory from the RSFSR was completed with the creation of two more union republics. Thus, on December 5, the Kazakh and Kyrgyz Soviet Socialist Republics emerged, based on the respective ASSRs.
The subsequent addition of new republics was directly linked to political events taking place in Europe. The expansion was due to the Second World War and the military action against Finland. On March 31, 1940, the Karelo-Finnish Soviet Socialist Republic was established. It appeared as a result of the defeat of Finland and Finland's loss of a substantial part of its territory to the USSR. Finland ceded such areas as part of the Rybachy (Fisherman's) Peninsula, the Lake Ladoga region, and the Karelian Isthmus. The Karelo-Finnish territory kept the status of a union republic until July 16, 1956. On that day it was downgraded to an ASSR and became part of the RSFSR. Part of the territory was also returned to Finland, and the word "Finnish" disappeared from its name.

The territory of present-day Ukraine also did not have stable outlines within the USSR. The first change to the Ukrainian Soviet Republic came within the framework of the secret agreements between the USSR and Germany known as the Molotov-Ribbentrop Pact, under which Ukraine received its modern western regions. In effect, the USSR took back lands of the former Russian Empire that had been ceded to Poland under the treaty of 1921. This happened in the autumn of 1939. Then, on August 2, 1940, part of Ukraine, in the form of the Moldavian ASSR together with Bessarabia taken from Romania, became the Moldavian SSR.

Accession of the Baltic States
The last expansion of the USSR was the annexation of the Baltic states. The Republic of Estonia entered the USSR on June 22, 1940, when a corresponding declaration was adopted. Lithuania joined the USSR on August 3, 1940; the Lithuanian SSR had been formed on July 21, 1940. The last of the Soviet republics to join was the Latvian SSR, on August 5, 1940; with Latvia's accession, the formation of the main outlines of the Union's territory was complete. As a result of the unification of the 15 Soviet republics, the Soviet Union became the largest country in the world, occupying one-sixth of the Earth's land area.
A census is the procedure of systematically acquiring and recording information about the members of a given population. It is a regularly occurring and official count of a particular population. The term is used mostly in connection with national population and housing censuses; other common censuses include agriculture, business, and traffic censuses. The United Nations defines the essential features of population and housing censuses as "individual enumeration, universality within a defined territory, simultaneity and defined periodicity", and recommends that population censuses be taken at least every 10 years. United Nations recommendations also cover census topics to be collected, official definitions, classifications and other useful information to co-ordinate international practice.

An online (computerized) census system would have the following advantages and disadvantages:
- The main advantage of such a system is that officers do not need to go from house to house to collect data.
- If there is an error or typo in the data, an officer can request that it be corrected.
- A census need not be carried out only every 10 years; users can update their details whenever they change.
- The main disadvantage is that users who do not have an internet connection cannot access the system.
- A possible future enhancement is to set up registration camps where users who lack access to the system can provide their details through assigned officers.
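The list above sketches the idea of an online, self-service census record that citizens keep up to date themselves, with officers acting on behalf of people who lack internet access. The following is a minimal, purely hypothetical Python sketch of that idea; the field names, identifiers, and the "assigned officer" channel are all invented for illustration and do not describe any real census system.

# Minimal in-memory sketch of a self-service census record store.
census = {}

def register(citizen_id, name, address, household_size):
    census[citizen_id] = {"name": name, "address": address,
                          "household_size": household_size}

def update(citizen_id, field, value, submitted_by="self"):
    """Update one field; 'submitted_by' records whether an officer filed it."""
    record = census.get(citizen_id)
    if record is None:
        raise KeyError(f"no record for {citizen_id}")
    record[field] = value
    record["last_updated_by"] = submitted_by

register("C-001", "A. Citizen", "12 Old Road", 4)
update("C-001", "address", "7 New Street")                        # citizen updates online
update("C-001", "household_size", 5, submitted_by="officer-17")   # via a registration camp officer
print(census["C-001"])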
An AB pattern has two repeating parts. This is a simple pattern that can be found in math, music, and even movement! In this activity demo you will see how A and B are represented by locomotor and nonlocomotor movements. It starts with A (march) and B (flick) being performed for a total of eight counts each. The AB pattern is repeated two times for a total of thirty-two counts. The next step is adding dance concepts to the locomotor (march) and nonlocomotor (flick) movements. For example, you will see “march” (locomotor movement) combined with a pathway (straight) and effort (strong), and “flick” (nonlocomotor movement) combined with a level (high) and effort (light). By adding dance concepts to march and flick they become more dynamic in the way they are performed. The final step in this demo is adding the concept of “freeze” to both A and B where the movement stops for a total of eight counts. In this example, “freeze” is demonstrated as a curved shape. Important: this is only one of many possible combinations when creating AB patterns with students. Watch the demo for other ideas! Here is the completed AB pattern as demonstrated: A = march (locomotor), strong (effort), straight (pathway) + freeze (curved shape) B = flick (nonlocomotor), light (effort), high (level) + freeze (curved shape) Counts: A = march/freeze (sixteen) + B = flick/freeze (sixteen) = thirty-two total
For a long time, scientists wondered how bulbs, which grow from seeds that germinate at or near the surface of the soil, end up being buried so deeply. And they discovered that it's because they produce unique roots called contractile roots. These begin the season like any other root, penetrating deep into the soil, but then they begin to contract, becoming condensed and wrinkled, like an accordion that deflates. Since the lower end of the root clings to the soil particles in its vicinity, the result is that the contracting roots pull the bulb downward. Thus, the bulb actually plants itself!

The Right Depth
Each species of bulb has its preferred soil depth, one that shelters it from predators and drought, and it may take a few years to reach it, descending deeper and deeper each season as the bulb, tiny when it germinates, grows in size. The bulbs even adjust to the type of soil, descending deeper in light, airy soils than in dense, heavy ones. Studies have shown that the main influence on bulb depth is actually sunlight. Contractile roots react to light penetrating the soil, specifically rays at the blue end of the spectrum, pulling the bulb deep enough into the soil that the penetration of blue rays no longer influences it. And the degree of tolerance to blue rays varies according to the species, explaining why some bulbs descend more deeply than others.

In Your Own Garden
If you pay attention, you can actually observe the phenomenon of contractile roots in your own garden. Here are two examples:
- While planting bulbs in the fall, you accidentally drop one and it lies on the ground all winter. (And who hasn't done that!) You'll find the lost bulb in the spring, as it will bloom even if it's only partly buried. But come summer, it will have disappeared entirely, pulled underground by its contractile roots. Within two or three years, it will have planted itself at the right depth for the type of bulb in question.
- Or you planted lily bulbs at a depth of 6 inches (15 cm), as recommended on the planting instructions that came with them. Now, some lilies adapt very well to that depth and will remain there. However, when you decide to divide your lilies five years later, you will find that some have "migrated" to a depth of 8 inches (20 cm) and others to a full foot (30 cm), according to their true preferred depth.

Tulips Like It Deep
Most tulips actually like much deeper plantings than bulb companies usually recommend: some botanical tulips will descend to a depth of 2 feet (60 cm) over the years! The theory for this extreme depth is that the bulb is trying to protect itself from the bulb-loving marmots found in its natural environment. The squirrels of your own garden also like tulip bulbs, but they won't dig anywhere near 2 feet (60 cm) to find them. They'll even give up past 8 inches (20 cm)! That's one of the reasons why tulip bulbs planted 1 foot (30 cm) deep often perennialize better than tulip bulbs planted at the usually recommended 6 inches (15 cm). (Read more about this phenomenon at Deep Planting Prolongs Tulips.) The bulbs that plant themselves: ain't nature wonderful?
Breathing oxygen at a higher-than-normal air pressure might ease some of the symptoms of Alzheimer's disease, if recent research done in mice has the same results in humans. Mice genetically engineered to develop some human features of Alzheimer's disease showed significant reductions in physical and behavioral symptoms after 2 weeks of daily treatment with hyperbaric oxygen therapy (HBOT). This was the result that a team hailing from Tel Aviv University (TAU) in Israel reported in a paper that was published recently in the journal Neurobiology of Aging. "This research is extremely exciting," notes lead investigator Uri Ashery, a professor in TAU's Faculty of Life Sciences, "as it explores a new therapy that holds promise as a treatment of Alzheimer's disease."

Alzheimer's is a progressive disease that gradually destroys brain tissue and people's ability to remember, think, communicate, and lead independent lives. It is the most common form of dementia. In the United States — where an estimated 5.5 million people have Alzheimer's — the disease is the only one of the top 10 leading causes of death for which there is currently no cure and no treatment to prevent or slow it. The burden of the disease is growing as the population of the U.S. ages. While deaths from other major causes are falling, deaths from Alzheimer's are rising fast. During 2000–2014, deaths from heart disease — the number one killer — fell by 14 percent, while deaths from Alzheimer's rose by 89 percent. A classic hallmark of Alzheimer's disease is the presence of "plaques" of amyloid protein fragments and "tangles" of another protein called tau in the brain. Damage to cells by free radicals, known as "oxidative stress," is another hallmark, as is brain inflammation.

HBOT is a type of treatment during which the person breathes oxygen at a pressure that is greater than normal air pressure. The treatment, which is delivered inside a pressurized chamber, can cause the lungs to absorb up to three times more oxygen than usual. It is thought that, by improving the blood's delivery of oxygen, HBOT helps affected tissue to fight infection or recover from injury by releasing stem cells and growth factors. In the U.S., the Food and Drug Administration (FDA) have approved HBOT for more than a dozen medical uses, including: treating burns caused by heat or fire; carbon monoxide poisoning; and embolism, a condition wherein bubbles of air or gas can block the bloodstream.

The researchers note in their study paper that, while HBOT "has been used successfully to treat several neurological conditions," its effects on Alzheimer's disease "have never been thoroughly examined." Therefore, for their investigation, the team used a mouse model of Alzheimer's disease to test the effects of HBOT on behavioral symptoms and physical hallmarks. This involved using "transgenic mice" that had been engineered to develop some of the hallmarks of Alzheimer's disease. Although they do not fully replicate human Alzheimer's disease, transgenic mice are widely used in preclinical studies of potential new treatments and as "tools for developing insights into the biological basis" of the disease. In a hyperbaric oxygen chamber that they custom-built for the small animals, the researchers gave the transgenic mice 1 hour of HBOT every day for 14 days. They also gave another group of normal mice (the controls) the same treatment. After this, the team observed the mice as they completed a number of behavioral tests.
They also examined their brain tissue for effects of the treatment on the physical hallmarks of Alzheimer’s. They compared the results with the control mice. The researchers’ analysis showed various biological and biochemical signs that HBOT had reduced inflammation in the brain. Additionally, it revealed that HBOT reduced oxygen starvation, “amyloid burden,” and the type of tau protein seen in Alzheimer’s. There was also evidence of improvement in behavioral symptoms. The results showed that, compared with the control mice, HBOT reduced both disease-related plaques and brain inflammation by 40 percent, and it also reduced “behavioral deficits” in the transgenic mice. The team suggests that the findings show that HBOT shows promise as a treatment for Alzheimer’s disease, especially given that it “is used in the clinic to treat various indications, including neurological conditions.” “We assume that the main challenge in human use will be to initiate the treatment at early stages before significant amount of brain tissue is lost.” Prof. Uri Ashery
The outlook on the COVID-19 coronavirus is changing every day. As of this moment, the United States has just declared a national emergency — other countries will soon likely follow. It's difficult to contextualize rapidly evolving circumstances, but it's also equally necessary to help ourselves and others. This is why a list comparing prevalent diseases of today with the 2019 coronavirus follows. The COVID-19 pandemic has spread to every corner of the world. The landscape is murky, and while Chinese sources claim the outbreak is slowing down in its eastern origin, some global experts are claiming we should prepare for months of disruptions before the virus is controlled. As of writing, there are 125,048 cases of coronavirus infections in over 114 countries, and the death toll is now over 5,000 and continuing to rise. After days and weeks of constant developments, it might feel like the COVID-19 coronavirus is the only thing we hear about. But the very novelty of coronavirus — first diagnosed in November 2019 according to Chinese officials — is why we know relatively little about it, compared to other diseases, and why it's important to follow the latest updates and the advice of local authorities. Current cases (March 13, 2020): 145,336 Current death toll (same date): 5,416 1. The seasonal flu/Influenza The infectious disease COVID-19 is perhaps most often compared to the common flu, or influenza, with many people saying the coronavirus is "just the flu." It's not. Though many of the common symptoms are similar — muscle aches, sore throat, and fever — the reproduction rate for the COVID-19 coronavirus is significantly higher than that of the seasonal flu; experts estimate that each COVID-19 sufferer infects between two to three other people, while the seasonal flu typically infects 1.3 new people for each infected person. Then there's the death rate. COVID-19 has been shown to be fatal in roughly 3.5% of confirmed cases, as ScienceAlert reports. While we don't have enough data to know the exact mortality rate — many milder cases may have gone undiagnosed — the seasonal flu typically kills only 0.1% of those infected. Then there's the fact that we don't have a vaccine, as well as the fact that the coronavirus pandemic has the potential to overwhelm health systems worldwide, leading to deaths for people with other ailments that would have otherwise been treated. Yearly cases: approx. 3 to 5 million Yearly death toll: approx. 290,000 to 650,000 As the other most prominent coronavirus in recent times, SARS is also often compared to the COVID-19 coronavirus. SARS, also known as severe acute respiratory syndrome, was first identified in November 2002 in the Guangdong province of southern China. The SARS coronavirus, which also caused a viral respiratory illness, was eventually contained in July 2003. Before it did so, it spread to 26 countries in North America, South America, Europe, and Asia. Though the global health community has taken on many of the lessons of SARS in the containment and treatment of COVID-19, this year's coronavirus has far outdone the damage caused by SARS. During the outbreak, there were 8,098 reported cases of SARS and 774 deaths. As per the Centers for Disease Control and Prevention, there have been no known new cases of SARS since 2004. Though SARS killed 10% of patients, making it deadlier to sufferers than COVID-19, it infected a fraction of the people over a longer period of time. 
Total reported cases: 8,098
Death toll: 774

Another recent coronavirus, MERS, or Middle East Respiratory Syndrome, was first reported in Saudi Arabia as recently as 2012. It spread to 27 countries in Europe, Africa, Asia, and North America. Much in the same way that COVID-19 likely originated in bats and was subsequently passed on to humans by an as yet unknown bridge animal, MERS is thought to have jumped to humans via camels that originally caught the disease from bats. Since it was first identified, there have been 2,494 reported cases of MERS, and 858 deaths. Infections occurred mainly due to close face-to-face contact between humans. Though MERS's fatality rate is a very high 34% (much higher than that of COVID-19), its low transmission compared to the coronavirus that originated in Wuhan means that the death toll has stayed relatively low.
Total reported cases: 2,494
Death toll: 858

Did you know that the COVID-19 coronavirus isn't the only ongoing pandemic in the world? The HIV/AIDS pandemic began in 1960 and continues to this day. However, as World Atlas points out, the peak of the hysteria surrounding the disease came in the 1980s, when the world became widely informed about its existence. From 1960 to 2020, the virus has caused over 39 million deaths. Treatment first became available for people with HIV/AIDS in 1987, and just last week the second person ever to be cured of HIV was announced. Today, there are approximately 37 million people living with HIV; cases have been reduced by 40% since their peak in 1997, and access to antiretroviral medicines has greatly extended life expectancy. Today, approximately 68% of global HIV/AIDS cases are found in Sub-Saharan Africa. This is due largely to poor economic conditions and a lack of sex education.
People living with HIV (end of 2018): 32.7 million–44.0 million
Death toll (2019): 570,000–1.1 million

Unlike the COVID-19 coronavirus, Ebola, also known as EVD, is not an airborne disease; infection occurs solely when someone comes into direct contact with the bodily fluids of someone who is infected. Recent outbreaks of the viral infection, which was first detected during an outbreak in 1976 near the Ebola River in what is now known as the Democratic Republic of Congo, have led to alarming spikes in deaths from the virus. Ebola is another virus that is thought to have originated in bats — in this case, specifically, fruit bats, which are a local delicacy where the outbreak started. Ebola caused the deaths of approximately 11,325 people between 2014 and 2016, and the fatality rate sits at an average of 50%, according to the World Health Organization.
Cases (Aug 2018 – Nov 2019): 3,296
Deaths (Aug 2018 – Nov 2019): 2,196

Meningitis is an inflammation of the meninges, the membranes that cover the brain and spinal cord. The infectious disease is often caused by fungi, viruses, and bacteria, though it is also possible to get it after suffering a head injury, having brain surgery or having specific types of cancer. According to the World Health Organization, small outbreaks of meningitis occur sporadically worldwide, except in the African Meningitis Belt, where large outbreaks are common and account for most deaths. The disease can cause flu-like symptoms, as well as vomiting, nausea, increased sensitivity to light and a confused mental state.
Yearly cases: approx. 1.5 million
Yearly death toll: approx. 170,000

Malaria is caused by a parasite that is carried by mosquitoes.
The initial symptoms include fever, chills and other flu-like complaints, which can quickly progress into more serious complications. The disease was eliminated from the U.S. in 1951 thanks to the pesticide DDT. Campaigns are ongoing to distribute mosquito nets to help prevent the disease in poorer countries. As the WHO says, "Africa carries a disproportionately high share of the global malaria burden." In 2018, Africa saw 93% of malaria cases and 94% of malaria deaths.
Cases (2018): 228 million
Deaths (2018): 405,000
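Returning to the reproduction numbers quoted at the start of this comparison (roughly 1.3 new infections per case for seasonal flu versus an estimated 2 to 3 for COVID-19), the small sketch below shows why that gap matters: compounding over successive generations of spread produces very different totals. It is a toy calculation only, ignoring immunity, interventions, and overlapping generations, and the generation count is an arbitrary choice for the example.

# Toy illustration of how different reproduction numbers compound.
def cumulative_cases(r, generations, seed=1):
    total, current = seed, seed
    for _ in range(generations):
        current *= r              # each infected person infects r others on average
        total += current
    return round(total)

for r in (1.3, 2.5):              # flu-like value vs. the COVID-19 estimate quoted above
    print(f"R = {r}: ~{cumulative_cases(r, 10):,} cumulative cases after 10 generations")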
An aeroplane is flying with velocity v at right angles to the Earth's magnetic field B near the North pole of the Earth, as shown in the plan view below. The plane's wingspan (distance between wingtips) is L. The wingtips are labelled P and S.
(a) Consider an electron of charge magnitude e in the metal wing of the plane at the point shown by a dot in the figure.
(i) In what direction will this electron experience a magnetic force due to its motion in the magnetic field? Draw a vector on the diagram to represent the force.
(ii) State an expression for the magnitude of the force on the electron in this situation.
(b) While the plane is flying steadily in the magnetic field, the electrons in the wing experience this magnetic force but do not move along the wing; such motion is opposed by an electric field arising in the wing.
(i) Explain how this electric field originates, and draw a vector in the diagram to show its direction.
(ii) Explain why the electric force on the electron is exactly equal to the magnetic force in this situation. (Hint: imagine this were not the case and consider what would happen.)
(c) Show that the magnitude of the electric field produced in the wing is given by E = vB.
(d) Derive an expression for the induced potential difference which arises between the tips of the moving wing. (In terms of the length L, speed v and magnetic field strength B.)
(e) Calculate the potential difference developed between the wingtips if the plane flies at 200 m s^−1 (720 km hr^−1) in the Earth's field of 8×10^−5 T near the pole and the wingspan is
(f) Would the potential difference between the wingtips still arise if the plane were flying near the Equator? Explain.
(g) Suppose one wanted to check if there really was a potential difference between wingtips. If one connected a moving-coil voltmeter between the wingtips, would it give a reading?
(h) Will there also be a potential difference between the nose and the tail of the plane? Explain why or why not.
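A compact worked sketch of parts (b)–(e), set out in LaTeX. Note that the wingspan value is missing from the problem text above, so the number at the end assumes a span of L = 60 m purely for illustration; with the actual span, substitute accordingly.

The magnetic force on an electron moving with the wing is
\[ F_{\text{mag}} = evB . \]
Electrons pushed toward one wingtip leave the other tip positively charged, setting up an electric field along the wing. Charge stops accumulating when the electric force balances the magnetic force:
\[ eE = evB \quad\Rightarrow\quad E = vB . \]
For a uniform field along a wing of span $L$, the potential difference between the tips is
\[ V = EL = BvL . \]
With $v = 200\ \mathrm{m\,s^{-1}}$, $B = 8\times10^{-5}\ \mathrm{T}$ and an assumed $L = 60\ \mathrm{m}$,
\[ V = (8\times10^{-5})(200)(60) \approx 0.96\ \mathrm{V} . \]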
Social studies: Making Ancient Egyptian Canopic Jars
As part of our studies of Ancient Egypt, grades 5 and 6 learned about canopic jars. Canopic jars were used in the Egyptian mummification process to protect the dead person's organs. There were four jars, each holding a different organ: the stomach, intestines, lungs, and liver. It was believed that these organs would be needed in the afterlife. Each jar was protected by a different god.
For our canopic jar art project, the students first made a base model using large plastic cups, cardboard and clay, and painted the base. The students then sculpted their chosen canopic jar figure using paper clay and sculpting tools. After perfecting the shape and allowing it to dry, the students painted the tops of the jars in gold paint. The next step was to paint in the details on the canopic jar clay sculpture. The students each wrote their name, along with their chosen canopic jar god's name, in hieroglyphs to place on the front of the jar. Finally, the students added some gold décor to finish their jars. Here you can see the students standing proudly with their finished pieces!
The smallest dinosaur that ever lived A spectacular amber fossil holds the skull of the smallest dinosaur ever found: a bird-like creature that lived more than 99 million years ago and grew no bigger than a bee. Scientists have discovered the fossilised skull of a tiny dinosaur. The delicate fossil is no longer than a thumbnail, but it is safely preserved in a pebble of glassy amber. They estimate that the creature it belonged to would not have been much larger than a 50p coin, weighing in at just 2g. Find out more The fossil, which was discovered in Myanmar, has given scientists new insights into life during the Cretaceous period. The razor-like beak and narrow skull suggest that the dinosaur was a similar size and shape to a bee hummingbird. Rather than feasting on nectar, this tiny creature was almost certainly a predator of small insects. It had more than 100 teeth and bulging eyes that could spot fast-moving prey. Amber can preserve extremely delicate fossils without damaging them, so we can study small creatures as well as huge dinosaurs. Palaeontologists are now developing technology that will scan fossil DNA, bringing the bird to life by revealing the colour of its feathers and how it flew. Can we learn anything useful from studying dinosaurs? Not really. While it may be interesting to find unusual creatures from the past, there’s nothing useful we can learn. There are plenty of endangered species to protect today. Rather than spending time, effort, and money learning about extinct species, we should be focussing our efforts on studying living animals and preserving biodiversity now. Of course, we can! Studying dinosaurs means we can learn about how species evolved to survive and thrive in different environments. These creatures were champions of resilience, reigning unchallenged for the better part of 165 million years. By researching these extinct animals we can learn how to preserve ecosystems for future generations. - What is your favourite dinosaur? - Draw an imaginary dinosaur, make up a name for it, and create a fact file. Some People Say... “Nature has a habit of placing some of her most attractive treasures in places where it is difficult to locate and obtain them.”Charles Doolittle Walcott, paleontologist What do you think? - Amber is fossilized tree resin, which has been appreciated for its color and natural beauty since Neolithic times. - A country in Southeast Asia, bordered by India, Bangladesh, China, Laos, and Thailand. - Cretaceous period - A geological period that lasted from about 145 to 66 million years ago. It is the third and final period of the Mesozoic Era and came just after the Jurassic Period. - Bee hummingbird - A species of hummingbird which is the world’s smallest bird. It gets its name because of its size, which is the same as a large bumblebee. - Scientists who study prehistoric life. This includes fossils, rocks, and dinosaur remains. - DNA carries genetic information. It is a chemical made up of two long molecules, arranged in a spiral. We refer to this as the double-helix structure. DNA stands for deoxyribonucleic acid. - The word used to describe a number of different living species. Experts are slowly realising that the future of our species on Earth depends on maintaining high biodiversity. Biodiversity is important for human wellbeing as it provides food, potential foods, industrial materials, and new medicines. - Toughness; the ability to recover quickly from difficulties. 
- An ecosystem is a natural environment and includes the flora (plants) and fauna (animals) that live and interact within that environment. Flora, fauna, and bacteria are the biotic or living components of the ecosystem. - The animals of a particular region, habitat, or geological period are known as fauna. Plants are known as flora.
Involved Parents: The Hidden Resource in Their Child's Education Although parents conscientiously send their children off to school every day and expect them to do well, they can add an important extra ingredient that will boost their children's success. Parent participation is the ingredient that makes the difference. Parents' active involvement with their child's education at home and in school brings great rewards and has a significant impact on their children's lives. According to research studies, the children of involved parents: - are absent less frequently - behave better - do better academically from pre-school through high school - go farther in school - go to better schools Research also shows that a home environment that encourages learning is even more important than parents' income, education level, or cultural background. By actively participating in their child's education at home and in school, parents send some critical messages to their child; they're demonstrating their interest in his/her activities and reinforcing the idea that school is important. Becoming involved - Laying the groundwork in the elementary school years The reality is that some parents have more time than others to become involved, but it's important for even very busy parents to examine their priorities and carve out some time, even if it's brief. Some schools are working out more flexible schedules so that working parents have more options. The National Education Association recommends some specific ways for parents to become more involved in their child's education. - Read to your child - reading aloud is the most important activity that parents can do to increase their child's chance of reading success - Discuss the books and stories you read to your child - Help your child organize his/her time - Limit television viewing on school nights - Talk to your child regularly about what's going on in school - Check homework every night Meet with a teacher or other school staff member to determine where, when and how help is needed and where your interests fit in. Volunteer time. Parents can: - Be a classroom helper - Tutor or read with individual children - Assist children with special needs - Help in special labs, such as computer or science - Plan and work in fundraising - Plan and accompany classes on field trips - Assist coaches at sporting events - Help out with arts and crafts workshops - Assist with a special interest club or drama group - Speak to classes about your career or special expertise - Help write press releases, local news articles - Work as library assistant; help with story time The possibilities are endless. - Vote in school board elections - know what the candidates stand for - Participate in parent-teacher associations and school decisions - Help your school set challenging academic standards - Become an advocate for better education in your community and state. Staying involved - The middle and high school years In adolescence children become more independent and usually don't want their parents in school. In middle and high school students have to deal with more courses and more teachers in a more impersonal way, so parent involvement, although less direct, is still critical. Parents can participate in events at school, monitor homework, provide experiences and materials that supplement course work, and help children with organizational strategies. Parents can influence their children's academic progress by encouragement, reinforcement and modeling. 
Children learn from their parents' own learning style and activities such as discussions, newspapers and other reading materials, television habits and other quests for information and knowledge.

How parent involvement pays off
When parents contribute effort and time, they have the opportunity to interact with teachers, administrators and other parents. They can learn first-hand about the daily activities and the social culture of the school, both of which help them understand what their child's school is like. The child and the school both benefit, and parents serve as role models as they demonstrate the importance of community participation. In addition to improving academic progress, parent involvement pays off in other significant ways. Numerous studies have shown that parent involvement is a protective factor against adolescent tobacco use, depression, eating disorders, poor academic achievement, and other problems. By staying involved with their child and teenager, parents can be a source of support, create a climate for discussing tough issues and serve as a role model for responsible and empathic behavior.

About the NYU Child Study Center
The New York University Child Study Center is dedicated to increasing the awareness of child and adolescent psychiatric disorders and improving the research necessary to advance the prevention, identification, and treatment of these disorders on a national scale. The Center offers expert psychiatric services for children, adolescents, young adults, and families with emphasis on early diagnosis and intervention. The Center's mission is to bridge the gap between science and practice, integrating the finest research with patient care and state-of-the-art training utilizing the resources of the New York University School of Medicine. The Child Study Center was founded in 1997 and established as the Department of Child and Adolescent Psychiatry within the NYU School of Medicine in 2006. For more information, please call us at (212) 263-6622 or visit us at www.AboutOurKids.org. Reprinted with the permission of the NYU Child Study Center. © NYU Child Study Center.
In this lesson students will gain an overview of what climate change is. They will read some short texts, exchange information and make notes. They will also learn some useful vocabulary on the topic.
- to help students develop their understanding of what climate change is
- to introduce and practise some vocabulary of climate change
- to ask and answer questions to find out about climate change
- to develop note-taking skills
By Owain Llewellyn
However, the more parents know about what triggers a child's chronic headaches, the easier it is to help prevent and treat them. Each child responds to headaches differently, and it is important for parents to recognize their child's headache symptoms. As in adults, headaches in children can be classified as primary or benign (migraine, tension-type), and secondary (due to an underlying condition). If symptoms persist for more than two weeks or the child is unable to attend school, it's time to see a pediatrician or specialist so more serious conditions can be ruled out. If a neurological exam comes back normal, the diagnosis might be a migraine or tension headache.

The average age of the onset of migraine symptoms in children is six years. In children, the pain is often bilateral (on both sides of the head), and attacks tend to be briefer than those in adults, often lasting less than an hour. Associated symptoms, such as nausea, vomiting, abdominal pain, and vertigo, may be more prominent than the headache itself. Nausea and vomiting are especially prevalent in children, occurring in up to 90 percent of attacks. Other symptoms may include diarrhea, increased thirst, sweating, shivering or edema (swelling). Among pediatric patients who have headaches, the most common are migraines. Most patients with migraines have a family history and must take daily medication to prevent migraines. But migraines can be effectively managed. With the help of a health care provider, parents can identify and alleviate their child's headache triggers.

Medications generally fall into two categories:
- Preventive: Taken daily, preventive medications can help reduce the number of attacks in patients who experience more than two migraines per month.
- Acute: Acute therapy treats the symptoms of migraine after the attack begins. Many medications must be taken as soon as the attack occurs, otherwise they may be less effective.

While a child who has chronic migraines is likely to have inherited a predisposition to them, stress, food or environment can also trigger headaches. These are classified as tension headaches. A child's tension headache is a result of personal, family or school-related stress. It's important for a parent to understand what causes the child's stress and determine how to manage it. While learning to deal with stress is a normal life-management skill, counseling could be helpful here. Tension headaches, the most common form of headache, are probably caused by chemical and neuronal imbalances in the brain and may be related to muscle tightening in the back of the neck or scalp. The pain is pressing or tightening, occurs on both sides of the head and is not worsened by routine physical activity. Rarely are there associated symptoms, such as nausea or sensitivity to light or noise.

Tension headaches fall into three categories, based on how often they occur:
- Episodic: Episodic tension headaches occur less than once per month and are usually triggered by temporary stress, anxiety, fatigue or anger. They are commonly considered "stress headaches." They may disappear with the use of over-the-counter analgesics, withdrawal from the source of stress or a brief period of rest.
What is a citation? A citation is a reference to a particular piece of writing, such as a tutorial, an article or a report, produced by specific authors or editors. A citation clearly identifies the place where you can find the work. Different fields of study have different citation formats, but all citations include the author(s), the title, and the date of publication. An insufficient citation can make a source difficult or impossible to find. While dealing with a certain major field of study, we have to use essential vocabulary, document references, and format text in an appropriate way. Now we will consider the APA style and how to do citations in a research paper. Take into consideration that the term "citation" is a more specific term than "reference." A citation can also be called a bibliographic reference. Most sources of information include a list of citations at the end. These are often called works cited or references. If the list is called a bibliography, however, it may also include citations to sources not used explicitly in a given work but suggested by the authors for further reading. While dealing with general knowledge or well-known facts in the introduction, we do not need a citation. The same applies to common knowledge, something that is known by everyone or nearly everyone in a specific field, academic discipline or community. Common knowledge is acquired as you research a topic and notice some facts and concepts repeated in sources without citation. You will keep acquiring common knowledge for as long as you research. Statistical information that is not common knowledge and is associated with a single source must be cited. You may choose not to quote but to put an author's information into your own words, i.e., to paraphrase or summarize. When you paraphrase, do not simply replace a few words with synonyms. Truly put it in your own words. This is a good choice because a quotation should be used only when the original wording is crucial for some reason. Citation in the introduction provides clarification of your thoughts. The in-text citation refers to the full reference citation on the references page at the end of a paper. In the body of the sentence, the citation data should appear in parentheses next to the material taken from the source. Citation in the body of a research paper While writing body paragraphs, you should state the topic sentence, explain and support the main points and conclude the paragraph. It is a good idea to provide a citation while supporting the essential idea of the paragraph. It is common to use the findings of various scholars to support the point you have just written in your own words. Doing so shows that what you claim is accepted by experts in this field, and therefore your assertion is given credibility. State an opinion or observation in your own words, then back up what you state with credible source material, especially quotes. Doing your research and thinking about the topic, you form specific opinions and positions; they belong to you, and you should express them in your own words. They do not need to be cited. Anything in your paper that does not have quotation marks is considered to be your thoughts. But because this is an academic research essay, you must add the voices of published authorities to support your assertions.
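For illustration, an APA-style in-text citation and its matching reference-list entry might look like this (the author, year, and title here are invented examples, not real sources): in the text you would write "Reading fluency improves fastest when students choose their own texts (Smith, 2019, p. 42)," and the references page would then carry the full entry "Smith, J. (2019). Reading in the digital age. Academic Press." The in-text citation gives just enough information (author, year and, for a quotation, a page number) for the reader to locate the full details at the end of the paper.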
The use of a powerful quote is justified by its precise language and strong construction, so you can use the author's exact words in your paper where they serve as strong evidence for your position. You should think carefully about how to set up a quote with your leading sentence. The quote should be a natural continuation of the sentence right before it. The quote should be formatted in regular font with no italics or bold. Remove irrelevant portions of the quotation and replace them with bracketed ellipses. The concluding sentence should be in your own words. So you can only do two things with a source: either summarize information from it in your own phrases and sentences or quote from it directly. Before opting for which way to follow, be sure to have a good reason to do either one or the other, since it should be relevant. It is always a good idea to introduce any quote with a signal phrase. You should introduce the researcher, scholar or expert. It should be clear how the quote relates to the sentence immediately before it. If the scholar is neutral, perhaps he or she comments, illustrates, notes, describes or explains something. If it is a somewhat controversial point, maybe the author argues, claims, maintains, insists or contends something. If it is a conclusion drawn from the research, you can simply write that he or she concluded, predicted, proposed, found, suggested, considered or revealed something. One more issue to consider is the tense used. When you are referring to what an author says or to what you conclude about the topic, use the present tense. If you are referring to information that is part of a clinical study, something that happened in the past, use the past tense. Use quotations sparingly. Make sure you understand the source material you are using. If you have trouble putting it into your own words because you do not understand it well, then it will be hard for you to use it to support your argument, so it may be better to find another source that is easier to understand. How many citations should be in a research paper? An in-text citation can only apply to the sentence that includes it. You must cite the source as many times as you use it. But you should not state the author's last name, date of publication and page, for example, three times in one paragraph; that is too much. You should give the source's publication information once per paragraph, which is why avoiding needless repetition of the author's last name is necessary. While discussing a single source in one paragraph, use phrases such as "the result," "this approach," or "the researcher" to make it clear that the information is taken from the same source that you have already mentioned.
Scorpions - Class: Arachnida, Order: Scorpiones Scorpions are nocturnal arthropods, which means they are active at night, searching for mates and food items. During the day they hide in dark areas, out of sight. Their main food sources are insects, millipedes, centipedes, other scorpions and any other small animal they can grasp and sting. They have one set of pincers or claws, eight legs and a stinger at the end of the tail that is typically held over the back, ready to sting for protection or to kill their food. There are over 1,500 different species of scorpions that have been described. Only about 25 are known to be a danger to humans when they sting. Usually the larger the scorpion species, the less danger it poses; the smaller, the more dangerous. If you live near wooded areas, you have a greater chance of encountering scorpions than if you live in an urban area with no wooded areas nearby. Controlling insects will greatly reduce the number of scorpions seen in or near structures. Spiders & Scorpions Spiders - Class: Arachnida, Order: Araneae Spiders are arthropods that have two body regions, the cephalothorax and abdomen, and have eight legs. There are approximately 40,000 known species of spiders worldwide. Almost all spiders are beneficial because their major food items are insects, so they help keep insect populations in check. However, a vast majority of people are uncomfortable with spiders being around them. Only a few spiders are important from a human suffering standpoint, such as the black widow and brown recluse spiders. Pest management of spiders concentrates on the elimination or reduction of the conditions that cause spiders to be present. Turning off exterior lights on buildings that remain on at night will have a huge impact on the number of spiders found in and around a structure. These lights attract large numbers of insects, which the spiders are attracted to for food. Reducing the number of insects in a building will also reduce the number of spiders inside. Black Widow – Latrodectus mactans (Fabricius). The black widow is one of the most important spiders due to its ability to bite humans and inject venom. The adult female of the spider is glossy jet black in color, with a ball-shaped abdomen and a much smaller cephalothorax. Markings can range in color from yellow or orange to the usual red and appear as lines and hourglass markings. The hourglass is on the underside of the abdomen. Black widows make irregular-looking webs and are usually found behind items stored against walls on the outside of structures or in storage areas such as sheds and garages. Brown Recluse – Loxosceles reclusa (Gertsch & Mulaik). Also known as the "violin spider," this spider causes the most serious bite wound of any of the spiders occurring in the United States. The bite area can turn into a large open ulcerous wound as the venom causes the tissue to die. Unfortunately, these bites look a lot like wounds that appear when a break in the skin gets infected with Staphylococcus spp. bacteria, leading to misdiagnosis of a brown recluse spider bite when it's not. The spider has a "violin" marking on the top of the cephalothorax and has very long legs compared to the body, with a grayish-colored abdomen. Wolf Spiders – Family Lycosidae. The wolf spiders are hunting spiders which, although they have the ability to spin webs, do not do so for catching prey. The female carries the egg sac with her wherever she goes.
Once the young spiderlings hatch from the egg sac, they ride on the mother’s back until old enough to fend for themselves. Some wolf spider species are quite large causing a fear reaction in most people as they are also very quick. They feed on insects and other spiders. Orbweaver Spiders – Family Araneidae. Orbweavers are the spiders that seem to make the biggest, most beautiful webs that we see. Looking like wheels, these spiral webs are typically constructed between two items that are usually a few feet apart. The spider may be found in the center of the web resting face-down or up in the corner if there is shelter. They feed on anything that gets trapped in the web, especially insects and other spiders. They have even been known to feed on small birds that are caught.
The goal of the course is to give students the practical skills and acoustical background to build musical instruments of their own design. Underlying this goal is 1) to enhance the musical background students already have by giving them a deeper understanding of the theoretical acoustic principles of musical instruments and 2) to introduce engineering tools and concepts (in the context of musical instrument design) to an audience that may not otherwise be exposed to such techniques. The course is available to all Yale College students. It is designed for any student with an interest in how musical instruments work and how they can be built using modern engineering and "maker" technology. On the theoretical side, the students learn about the physical acoustics of string, wind, and percussion instruments: sound propagation, standing waves, traveling waves, harmonic series, spectral analysis, etc. They also learn about musical tuning systems and electronic music synthesis. On the practical side, they learn how to use the tools necessary for designing and building devices in the physical and electronic realm. The students become proficient in hand tools, laser cutters, 3D printers, machine shop tools, SolidWorks design and simulation tools, and basic electronics including microprocessors, sensors, and actuators. They also learn to use sound recording and analysis equipment. Prerequisites: basic knowledge of or interest in music or musical instruments; a solid background in math through pre-calculus, with knowledge of calculus preferred; and basic knowledge of physics including Newton's laws and the concepts of kinetic and potential energy. Students must become members of the Yale Center for Engineering Innovation and Design (CEID) by the end of the first week of class. During the lab portion of the course, the students are trained in the areas mentioned above (in hand tools, laser cutters, 3D printers, machine shop tools, SolidWorks design and simulation tools, and basic electronics including microprocessors). These tools are formally introduced with instruction provided as a component of the class. This course is taught in the Center for Engineering Innovation and Design, which is an academic makerspace at Yale that innovates in the area of human-centered design and engineering. Hence, this course borrowed many ideas from earlier courses in its pedagogical philosophy. One specific example would be the Yale CEID course "Engineering, Innovation, and Design," which also teaches students practical maker skills (along with more typical engineering and design concepts) and culminates with a client-based project. Resources come from a variety of areas. We refer when appropriate to the following texts: Bart Hopkins, Making Simple Musical Instruments. Altamont Press, 1995. Rossing, Thomas. The Science of Sound. Addison-Wesley, 2001. Bennett, William Ralph, Jr. The Science of Musical Sound, Vol. 1. Armstrong, Newton. An Enactive Approach to Digital Musical Instrument Design: Theory, Models, Techniques. AV Akademikerverlag, 2012. The students also take advantage of SolidWorks tutorials, tutorials for spectral analysis freeware, as well as guides developed in the CEID for use of the laser cutter, 3D printers, etc. Some resources are presented in class and posted online, such as Excel worksheets for looking at Fourier analysis and wave propagation, as well as modeling of instrument tuning and tuning systems.
The students used the laser-cutter to construct a single-string instrument with fret positions at many exact integer ratios of the string length. They measured the frequencies and frequency ratios of all the possible fretted positions and also measured the string density and tension. This lab taught the students how to actually make a simple instrument (which in the second half of the lab they generalized to a four-string instrument of novel design), but it also taught them about the rules for tuning strings as well as the ideas involved in different tuning methods (equally tempered versus just). It was a great example of "learning by making." Quite simply, by interleaving theoretical concepts with active "making," we learned that students could achieve a more profound understanding of the material than if either were done alone. For example, a student may understand theoretically that the frequency of a wooden bar is inversely proportional to the square of its length, but when they must cut one to size to achieve a musical note, the idea comes very much alive!
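As a rough illustration of the arithmetic behind this lab (a sketch only, not the course materials; the scale length, tension, and linear density below are invented example values), the following Python snippet computes the open-string frequency from Mersenne's law and compares equal-tempered and just-intonation fret positions:

import math

SCALE_LENGTH_M = 0.65         # assumed string length, roughly a guitar scale
TENSION_N = 60.0              # assumed string tension
LINEAR_DENSITY_KG_M = 0.0012  # assumed mass per unit length

def open_string_frequency(length, tension, mu):
    """Fundamental of an ideal string (Mersenne's law): f = (1 / 2L) * sqrt(T / mu)."""
    return math.sqrt(tension / mu) / (2 * length)

def equal_tempered_ratio(semitones):
    """Frequency ratio of n equal-tempered semitones above the open string."""
    return 2 ** (semitones / 12)

def fret_distance_from_nut(length, ratio):
    """A fret that raises the pitch by 'ratio' sits where the sounding length is length / ratio."""
    return length - length / ratio

just_ratios = {  # simple just-intonation ratios for comparison
    "major third": 5 / 4,
    "perfect fourth": 4 / 3,
    "perfect fifth": 3 / 2,
    "octave": 2 / 1,
}

f0 = open_string_frequency(SCALE_LENGTH_M, TENSION_N, LINEAR_DENSITY_KG_M)
print(f"open string: {f0:.1f} Hz")

for name, semis in [("major third", 4), ("perfect fourth", 5), ("perfect fifth", 7), ("octave", 12)]:
    et = equal_tempered_ratio(semis)
    just = just_ratios[name]
    print(f"{name:>14}: ET ratio {et:.4f} at {fret_distance_from_nut(SCALE_LENGTH_M, et) * 1000:.1f} mm, "
          f"just ratio {just:.4f} at {fret_distance_from_nut(SCALE_LENGTH_M, just) * 1000:.1f} mm")

On this assumed 65 cm string the equal-tempered and just major-third frets land roughly 4 mm apart, a discrepancy large enough for students to measure on the instrument they build.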
Distance running has become one of the most popular, accessible and cost-efficient forms of exercise. The health and fitness benefits of distance running fall under the broader category of endurance exercise and include adaptations like improved metabolism, reduction of cardiovascular risk, reduced all-cause and cardiovascular mortality, increased breakdown of fat stores and many more. What might be less evident is how well designed the human body is for running. The ability of the human body to adapt to its environment and survival needs is phenomenal. Throughout human history our bodies have changed according to the instinctual needs to evade predators and catch our food. Bramble and Lieberman (2004) suggest four specific reasons why we have evolved to become great distance runners: The connective tissue in human beings has great elastic capacity; specifically, we have springy tendons and ligaments in the legs, such as the Achilles tendon, and muscles in the arches underneath the feet that can reduce the metabolic cost of running by 50%. These structures function like a spring, allowing us to store energy during each foot-strike which is then used to propel us forward as we push off of our toes. Running exposes our bodies to much higher forces than walking, especially when the foot collides with the ground, producing a shock wave that passes up the body from the heel through the spine to the head. These forces are approximately twice as high during running as during walking and may approach 3–4 times body weight at higher speeds. One evolutionary adaptation has been to increase the surface area of our joints in order to dissipate these imposed forces. This can be seen in certain joints and bones through the knees, hips, pelvis and low back. The shift from moving about on all fours to walking on our feet created an unstable situation which called for the development of special mechanisms to improve stability and balance during running. Some of these developments include expansions in joint surface area through our pelvic region, increased size of our bum muscles to help push us forward, and the ability to counteract the rotation through our hips by moving our trunk and arms in the opposite direction. We also have a very strong ligament connecting the back of our head to our spine which increases stability by reducing the amount of forward head movement that happens when we run. Temperature Regulation & Breathing As we all know, movement generates heat. Early human evolution witnessed a decrease in body hair, which would otherwise trap heat, and an increase in sweat glands, which help to cool us off. We also developed an elaborate network of blood vessels carrying venous blood that plays a role in cooling hot arterial blood before it reaches the brain. Finally, we saw a change from nose to mouth breathing, which allows us to unload excess heat, allows for higher air flow rates and reduces the amount of work that the breathing muscles have to do. Technological advances have meant, in part, that we move less. Everything can be delivered to us. Just because we don't need to catch our food and run away from predators doesn't mean that we should stop running, an activity that is built into human history. Developments in civilisation have also meant that we now move about on hard, stable, unforgiving surfaces like concrete, and wear thick-soled shoes which deprive the receptors in our feet of information related to changes in temperature, pressure and contours.
This is very different from the barefoot running on land that we began with in our early days. But as always our versatile human body will work to adapt to our environment, and with a bit of help from a health and exercise practitioner along the way if needed, we can return to one of the most natural human activities and enjoy it.
Since MERLIN began work in 1980, its images have helped shed light on the mysterious processes going on inside radio galaxies and quasars. High-resolution observations reveal that a typical radio galaxy or quasar has a bright, compact core from which issue two narrow "jets", which appear to impact on the twin lobes which have been known since the 1950s. Although not all sources show these features, it appears that the jets are transporting energy from the nucleus of the galaxy and depositing it in the lobes. Radio image of the giant elliptical galaxy known as Messier 87 or Virgo A. (US National Radio Astronomy Observatory; JL Nieto). Quasar 3C179, showing two lobes on either side of the core; one of the lobes is connected to the core by a jet. MERLIN image of the core and jet in quasar 3C273. How this energy is generated, and how it is transported, is a key problem in astrophysics, but most theorists agree that only a massive black hole can provide enough energy to power these objects. In simple terms, a black hole is a region of space in which the gravitational pull is so strong that not even light can escape from it. To power a quasar, a black hole would need to contain a mass a billion times that of the Sun. Material falling into the hole is heated to extremely high temperatures, emitting x-rays that can be detected by telescopes on board satellites. This release of gravitational energy can be more than ten times as efficient as nuclear fusion, the process that powers the Sun and the hydrogen bomb. By means not yet understood, some of this energy escapes from the nucleus along the jets and energises the lobes. Jets contain a mixture of charged particles and magnetic fields, which together produce powerful radio emission ideally suited for study with MERLIN. The physics is undoubtedly complex, and the continuing study of radio jets will be a major task for MERLIN in the 1990s.
Heart failure currently affects 5 million Americans, with 550,000 new cases being diagnosed each year. The condition is more common in women, African Americans, and those over 65. Heart failure doesn't mean the heart is about to stop working. Rather, it means that the heart has become weak and is having trouble pumping blood efficiently. Treatment options include lifestyle changes, medications, and specialized care. Heart failure, which is also called "congestive heart failure" (or CHF), is one of the most common reasons why people over the age of 70 are hospitalized. An estimated 5 million people are affected by it in the United States. The words "heart failure" make it sound like the heart has actually stopped working. But this isn't true. The heart is still beating, but it has become weak and has trouble pumping enough blood to keep up with your body's needs. This can cause extra fluid to get backed up in several places throughout your body. This process is called "congestion," and that's why this condition is called "congestive" heart failure. Heart failure isn't a disease itself. Rather, it's a condition that develops as a result of other diseases or health problems. Heart failure develops over time as the pumping action of the heart grows weaker. It can affect the left side, the right side, or both sides of the heart. Most cases involve the left side, where the heart cannot pump enough oxygen-rich blood to the rest of the body. With right-sided heart failure, the heart cannot effectively pump blood to the lungs, where the blood picks up oxygen. Key information about heart failure is as follows: - Heart failure is a condition in which the heart cannot pump enough blood throughout the body. - This does not mean that your heart has stopped or is about to stop working. But it does mean that your heart is not able to pump blood the way that it should. - Heart failure is a serious condition that develops over time as the pumping action of the heart grows weaker. - Heart failure is caused by other diseases or conditions that damage or overwork the heart muscle. - The leading causes of this condition are coronary artery disease (CAD), high blood pressure, and diabetes. - About 5 million people in the United States have heart failure. Each year, 550,000 people are diagnosed with this condition. It causes or contributes to about 300,000 deaths each year. - Heart failure can happen to anyone, but is more common in people over 65 years of age, among women, and in African Americans. - The most common symptoms are shortness of breath; feeling tired; and swelling in the ankles, feet, legs, and sometimes the abdomen (stomach). - An echocardiogram is the most useful test when diagnosing heart failure. - Treatment options include lifestyle changes, medications, and specialized care for those with severe forms of heart disease. - People with severe heart failure are frequently admitted to the hospital. - If you have a disease or condition that makes heart failure more likely, you may be able to prevent it by controlling or treating the disease or condition. - Heart failure usually cannot be cured, and you will likely have to take medication for the rest of your life. It is important that you understand your symptoms may get worse over time. As your symptoms worsen, you may not be able to do many of the things that you did before you had this condition. - If you have severe heart disease and symptoms at rest, you can expect your condition to get worse. 
It is important to discuss this with your family and also discuss your final treatment options with your doctor while you are still able to do so.
Frequency modulation (FM): variation of the frequency of a carrier wave (commonly a radio wave) in accordance with variations in the audio signal being sent. Developed by the American electrical engineer Edwin H. Armstrong in the early 1930s, FM is less susceptible than AM to outside interference and noise, such as thunderstorms and nearby machinery. Such noise generally affects the amplitude of a radio wave but not its frequency, so an FM signal remains virtually unchanged. FM is also better able to transmit sounds in stereo than AM, and commercial FM broadcasting stations transmit their signals in the frequency range of 88 megahertz (MHz) to 108 MHz.
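As a numerical illustration of the idea (not part of the original entry, and with arbitrary example values rather than real broadcast frequencies), the short Python sketch below generates a frequency-modulated carrier from a single audio tone by letting the audio signal push the carrier's instantaneous frequency up and down while the amplitude stays constant:

import math

SAMPLE_RATE = 48_000   # samples per second (assumed)
CARRIER_HZ = 10_000    # example carrier; real FM broadcast carriers sit at 88-108 MHz
TONE_HZ = 440          # example audio signal: a pure 440 Hz tone
DEVIATION_HZ = 2_000   # how far the instantaneous frequency swings away from the carrier

def fm_signal(duration_s=0.01):
    """Return FM samples: the carrier's phase advances faster or slower as the audio varies."""
    samples = []
    phase = 0.0
    for n in range(int(SAMPLE_RATE * duration_s)):
        t = n / SAMPLE_RATE
        audio = math.sin(2 * math.pi * TONE_HZ * t)    # the message signal, in [-1, 1]
        inst_freq = CARRIER_HZ + DEVIATION_HZ * audio  # frequency varies, amplitude does not
        phase += 2 * math.pi * inst_freq / SAMPLE_RATE
        samples.append(math.cos(phase))
    return samples

print(fm_signal()[:5])

Because the information rides on the frequency swing rather than the amplitude, added amplitude noise leaves the recoverable audio largely intact, which is the property described above.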
Now that we know a little bit more about CSS syntax, I want to take a moment to focus on selectors in a bit more detail. Remember, selectors allow us to tell the browser which elements on the page we want to style. In some cases, you are going to want to apply styles broadly to a number of elements all across your site. In other situations, you are going to want to target a smaller number of elements, or even a single element. Understanding how selectors work will allow you to do just that. The first selector I want to start with is the most basic, the element selector. Element selectors are global in nature, meaning they are going to affect every element of that type in a style document. You simply need to know what the tag is for a given element in order to style it. Now unlike HTML, we don't need the angle brackets around the tag name, just the tag itself. For example, to style paragraphs, you'd use the p; for heading 1s, you'd use an h1; for unordered lists you'd use a ul, and so on. While these selectors are very efficient, they're also very broad, which is why they are most often used to set global site-wide styles. Another basic selector type is the class selector. Classes are HTML attributes that can be set on any HTML element. You can name a class anything you want, and you can use it on as many elements, and as many times on the page, as you need. As you can imagine, that makes classes pretty popular when it comes to writing CSS. Now here, for example, the browser would look through all the elements on the page and apply styling to any elements with a class attribute of subheading. Note that classes are identified in CSS by the period in front of their name. Now ID selectors are similar to class selectors in that they represent an HTML attribute. They differ from classes in one very important aspect: IDs must be unique to the page, meaning that if you assign an ID to a page element, no other element on that page may have that specific ID. In this example, the browser would find the element on the page that has the ID attribute of sidebar and then apply the styling. Now, IDs are identified by the octothorpe, or as it is more commonly known, the pound symbol, in front of the ID name. You can also make class and ID selectors element-specific by adding an element to the front of the selector. This limits the styling to only elements with the specific class or specific ID applied to it. For example, here styling would only be applied to heading 2s with a class of subheading, or divs with an ID of sidebar. This allows you to write a single general class, or ID style, and then follow that with a more focused element-specific style if necessary. Classes and IDs can be anything you want them to be, but you do need to follow some naming conventions. First, don't use any spaces or special characters. Also, remember that they are case sensitive. If you use uppercase letters, you are going to need to remember that when writing the styles for them. Honestly, it doesn't really matter which convention you use, as long as you're consistent. Another type of selector I want to discuss is the descendant selector. Descendant selectors allow you to target an element based on where it's found within another element. You simply string the selectors together, separating them with whitespace. The parent selectors are added first, followed by each successive nested selector.
For example, in this example the browser would find any span elements inside of paragraph elements which were also inside of div elements. Now there isn't any limit as to how many of those you can string together, but more than three starts to become extremely inefficient. Let's take a look at a couple of examples. Now in this first example, the browser would locate any paragraph found within a div tag and then apply the styling. In the second one, it would find any span element inside of a paragraph which is also inside of a div, and then apply the styling. As you can see, descendant selectors allow you to be extremely specific. Another thing I want to point out here is that descendant selectors apply to any nested element, no matter how deep it's found within the page structure. Going back to that first example, it's going to apply to paragraphs inside the div, not just ones that are immediately inside the div. You can also group selectors together by using commas between the selectors themselves. Now this can make writing styles more efficient by grouping selectors together that need the same styling. Instead of writing three separate selectors like this, for example, you can simply write one group selector, and that's a lot more efficient. Although there are certainly more selector types available than the ones that I've shown here, the overwhelming bulk of your styles will probably be written through the basic selector types that we've covered. Learning how to write efficient selectors based on your document structure is among the most important CSS skills that you can cultivate. By mastering the different types of selectors available, you'll find that you have a much broader set of options for writing effective styles.
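To pull the selector types from this lesson together in one place, here is a small illustrative stylesheet; the property values are arbitrary examples, and subheading and sidebar are simply the class and ID names used in the narration above:

/* element selector: every paragraph on the page */
p { line-height: 1.5; }

/* class selector: any element with class="subheading" */
.subheading { color: gray; }

/* ID selector: the single element with id="sidebar" */
#sidebar { width: 300px; }

/* element-specific class and ID selectors */
h2.subheading { font-size: 1.2em; }
div#sidebar { background: #eeeeee; }

/* descendant selector: span elements inside paragraphs inside divs */
div p span { font-weight: bold; }

/* grouped selectors sharing one rule */
h1, h2, p { font-family: Georgia, serif; }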
As a result of competition and enmity among the European nations, wars erupted in which the nations in question fought against each other. The wars led to the destruction of property and resources and the weakening of economies, while at the same time claiming the lives of many. European integration started after the Second World War ceased, when nations decided to come together by laying down their differences. This paper will discuss European integration. Nazi Germany was the actual culprit behind the tragic armed conflict. At the end of the Second World War, the European nations wanted to pacify everyone, which would help economies get back on their feet. The European nations were well aware that working towards reviving their economies was the only technique for future development. As a result, a program that came to be known as the European Recovery Plan, alias the Marshall Plan, involved United States aid to Europe following the Second World War's shocking effects. The Marshall Plan, which President Harry Truman signed, was mainly focused on the infrastructure and economic development of Western European nations. The nations integrated to receive the aid under the Marshall Plan totaled 16, including but not limited to Norway, Britain, West Germany, France, the Netherlands, and Belgium (HISTORY, 2021). The nations came together to facilitate a collective defense mechanism as well as social, cultural, and economic collaboration. The Treaty of Paris, for example, revolutionized industrialization in Europe through the coal and steel community, which was to necessitate the regulation of the European markets. The European Coal and Steel Community boosted the European member states' growth, improving people's living standards (Europejskiportal.eu, 2021). The cooperation of the European nations encouraged the members to join hands to enhance their political position as well as their collective defense. Through the collaboration, the European nations' foreign policies and trading markets were championed through the European Political Community. The Treaties of Rome were successful in eliminating the trading restrictions and barriers in the Western European nations' economies and markets, enabling free movement of resources, people, food, and services. The Single European Act treaty gave life to the Treaties of Rome by enhancing the decision-making processes that streamlined political cooperation and the European market by incorporating technology, science, and the economic, socio-cultural, and governance (ESG) dimensions. The fall of communism led to the formation of the European Union Treaty, aimed at helping the European nations attain sustainable development. To regulate the issues raised in the European Union Treaty, the Amsterdam Treaty was initiated. However, the Amsterdam Treaty was deficient given the fact that the members were growing in number, which resulted in the Treaty of Nice. Free movement of people between neighboring nations was facilitated by the Schengen Agreement, eliminating all border restrictions and barriers (Europejskiportal.eu, 2021). Europejskiportal.eu. (2021). History of European Integration - The European Portal of Integration and Development. Retrieved 13 April 2021, from http://europejskiportal.eu/history-of-european-integration/. HISTORY. (2021). Marshall Plan.
Retrieved 13 April 2021, from https://www.history.com/topics/world-war-ii/marshall-plan-1.
A large body of social science research has established that students tend to overestimate the amount of alcohol that their peers consume. This overestimation causes many to have misguided views about whether their own behaviour is normal and may contribute to the 1.8 million alcohol-related deaths every year. Social norms interventions that provide feedback about own and peer drinking behaviours may help to address these misconceptions. Erling Moxnes has looked at this problem from a dynamic perspective, in Moxnes, E. and L. C. Jensen (in press), "Drunker than intended; misperceptions and information treatments," Drug and Alcohol Dependence. From an earlier Athens SD conference paper, Overshooting alcohol intoxication, an experimental study of one cause and two cures: Juveniles becoming overly intoxicated by alcohol is a widespread problem with consequences ranging from hangovers to deaths. Information campaigns to reduce this problem have not been very successful. Here we use a laboratory experiment with high school students to test the hypothesis that overshooting intoxication can follow from a misperception of the delay in alcohol absorption caused by the stomach. Using simulators with a short and a long delay, we find that the longer delay causes a severe overshoot in the blood alcohol concentration. Behaviour is well explained by a simple feedback strategy. Verbal information about the delay does not lead to a significant reduction of the overshoot, while pre-test experience with the mouse-driven simulator removes the overshoot. The latter policy helps juveniles lessen undesired consequences of drinking while preserving the perceived positive effects. The next step should be an investigation of the effect of simulator experience on real drinking behaviour.
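The following Python sketch is a minimal stock-and-flow illustration of the mechanism the experiment targets, not the authors' simulator: alcohol first sits in the stomach and only reaches the blood after a delay, so a drinker who keeps drinking until the felt effect hits a target overshoots it. All rates and units below are invented for illustration.

# Two-stock model: stomach contents -> blood alcohol, with first-order absorption.
# A naive "drink until the felt level reaches the target" rule keeps pouring while
# absorption lags behind, so the blood alcohol level overshoots the intended level.

DT_MIN = 1.0               # time step (minutes)
ABSORPTION_RATE = 1 / 30   # fraction of stomach alcohol absorbed per minute (assumed)
ELIMINATION_RATE = 0.002   # fraction of blood alcohol eliminated per minute (assumed)
DRINK_RATE = 1.0           # units of alcohol ingested per minute while drinking (assumed)
TARGET_BLOOD_LEVEL = 20.0  # desired blood alcohol, arbitrary units

stomach, blood, peak = 0.0, 0.0, 0.0
for minute in range(240):
    drinking = blood < TARGET_BLOOD_LEVEL          # feedback on the *felt* level only
    intake = DRINK_RATE if drinking else 0.0
    absorbed = ABSORPTION_RATE * stomach
    stomach += (intake - absorbed) * DT_MIN
    blood += (absorbed - ELIMINATION_RATE * blood) * DT_MIN
    peak = max(peak, blood)

print(f"target {TARGET_BLOOD_LEVEL}, peak reached {peak:.1f}")  # peak ends up well above target

Raising ABSORPTION_RATE (a shorter delay) shrinks the overshoot, which mirrors the short-delay versus long-delay comparison in the experiment.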
While having a username and password is better than an open MQTT server, the credentials are still being sent in plain text, so anyone with a packet sniffer could intercept them and then gain access to our server. TLS provides a chain of trust, allowing two computers that know nothing about each other to trust each other. Let's look at how this works in the context of a browser hitting a secure website, as this is an example we should all be familiar with. At a really high level, both your computer and the server have a Certificate Authority (CA) that they both implicitly trust (there is actually more than one, but let's keep it simple). When you hit a website that is secured, the server sends its certificate to the browser, which does some complicated maths and verifies the certificate against the CA - if the numbers work out, your browser can be confident that the certificate is legitimate and that the secure site is who it says it is. Web servers generally don't care who the client is, so they rarely ask for a client certificate, but in other applications (like this garage door opener) the server will also verify the client against its CAs. After both computers are happy they are talking to the right people, they will exchange a set of symmetric keys that both computers will use to communicate. Symmetric keys are much faster to encrypt and decrypt with than asymmetric keys, but they require both sides to have the same key. To do that securely, a symmetric key is generated, encrypted using the other computer's public key, sent securely across the wire and then decrypted - now both computers have the key and they can start talking.

Setting up our own trust network

Enough theory! The first thing to do is generate a CA. If you have OpenSSL installed on your computer (Linux and OSX users more than likely will), you can generate a CA on the command line:

openssl req -new -x509 -days 3650 -extensions v3_ca -keyout ca.key.pem -out ca.crt.pem

You'll be asked for a PEM password - make this something difficult to guess and then store it in your password manager. This is literally the key to the city. If someone gets hold of your ca.crt.pem and ca.key.pem, and can guess your password, then they can generate their own valid certificates - not good. Fill in the details as you are asked - these details aren't super important as this is a personal CA, but make the values something you will recognise. The common name can be whatever you want. Now that you have your CA certificate (ca.crt.pem) and your CA key (ca.key.pem) you can generate some certificates. An aside: if you have a password manager that supports notes, I'd recommend saving these files in there and deleting them from your file system after you have generated any certificates that you need.

Generate a server certificate

For this to work, it's best to use actual domain names for servers and clients - the easiest way to do this is setting up mDNS, via avahi (Linux) or Bonjour (OSX) - Windows probably has something too - let's assume that the name of the computer running MQTT is mqtt.local and the name of the garage door opener will be garage.local.

# Generate a key
openssl genrsa -out mqtt.local.key.pem 2048

# Create a certificate signing request
openssl req -out mqtt.local.csr.pem -key mqtt.local.key.pem -new

Again, fill in the details as you are asked. The MOST important one is Common Name. IT MUST MATCH THE DOMAIN.
So in our case: mqtt.local

# Sign the certificate
openssl x509 -req -in mqtt.local.csr.pem -CA ca.crt.pem -CAkey ca.key.pem -CAcreateserial -out mqtt.local.crt.pem -days 365

You will be asked for the password you entered when you created your CA. Note that the certificate is valid for 365 days - you'll have to generate a new one in a year's time. You can decide if you want to make it longer or shorter.

Generate the client certificate

Generating a client certificate looks a lot like the generation of a server certificate - just with different filenames.

# Generate a key
openssl genrsa -out garage.local.key.pem 2048

# Create a certificate signing request
openssl req -out garage.local.csr.pem -key garage.local.key.pem -new

The common name this time should be garage.local.

# Sign the certificate
openssl x509 -req -in garage.local.csr.pem -CA ca.crt.pem -CAkey ca.key.pem -CAcreateserial -out garage.local.crt.pem -days 365

Setting up Mosquitto

Now we have all of the certificates that we need, let's set up Mosquitto to use them. Create a ca_certificates and certs directory in the mosquitto/etc docker folder

mkdir mosquitto/etc/ca_certificates
mkdir mosquitto/etc/certs

and copy ca.crt.pem to the ca_certificates folder and mqtt.local.crt.pem and mqtt.local.key.pem to the certs folder. Finally update the etc/mosquitto.conf file to look something like this:

persistence true
persistence_location /var/lib/mosquitto/
password_file /etc/mosquitto/passwd
allow_anonymous false
cafile /etc/mosquitto/ca_certificates/ca.crt.pem
certfile /etc/mosquitto/certs/mqtt.local.crt.pem
keyfile /etc/mosquitto/certs/mqtt.local.key.pem
port 8883
tls_version tlsv1.1
include_dir /etc/mosquitto/conf.d

Notice we have changed the port to 8883, which is the standard port for secure MQTT. We are also still authenticating via username and password. We are explicitly forcing TLS version 1.1 because the ESP8266 implementation of 1.2 is buggy, and will fail. Save the file, and run

docker-compose build mosquitto

Because we aren't verifying the certificates, we can use the garage.local certificate on the command line to test everything out. Create the ca_certificates and certs directories, this time in mosquitto-client

mkdir mosquitto-client/ca_certificates
mkdir mosquitto-client/certs

Copy ca.crt.pem to the ca_certificates directory and both the garage.local.crt.pem and garage.local.key.pem files to the certs directory. Now, modify the Dockerfile.sub ENTRYPOINT to look like this:

ENTRYPOINT [ "mosquitto_sub", "-h", "mosquitto", "-p", "8883", "--tls-version", "tlsv1.1", "--cafile", "/ca_certificates/ca.crt.pem", "--insecure", "--cert", "/certs/garage.local.crt.pem", "--key", "/certs/garage.local.key.pem" ]

and the Dockerfile.pub ENTRYPOINT to look like:

ENTRYPOINT [ "mosquitto_pub", "-h", "mosquitto", "-p", "8883", "--tls-version", "tlsv1.1", "--cafile", "/ca_certificates/ca.crt.pem", "--insecure", "--cert", "/certs/garage.local.crt.pem", "--key", "/certs/garage.local.key.pem" ]

Then run

docker-compose up

and everything should connect again, but this time securely! All of these instructions gratuitously stolen from: https://mosquitto.org/man/mosquitto-tls-7.html
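Before wiring the certificates into the containers, it's worth sanity-checking that each one really chains back to your CA (assuming the filenames used above):

openssl verify -CAfile ca.crt.pem mqtt.local.crt.pem
openssl verify -CAfile ca.crt.pem garage.local.crt.pem

Each command should report "OK"; if it doesn't, the certificate was probably signed with a different CA key, and the TLS handshake will later fail with a much less helpful error.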
whole story as far as active tectonics is concerned. In many parts of the world, within both ocean basins as well as continental interiors, there are concentrations of earthquake and volcanic activity that seem divorced from plate-tectonic precepts. Hawaii is a dramatic example. The volcanoes of the Hawaiian Islands-Emperor Seamount chain are generally believed to result from upwelling of magmas from a source beneath the Pacific lithospheric plate. Although the geometry of the volcanic chain appears to be due to the motion of this plate over the underlying mantle “hot spot,” the hot spot itself is arguably a phenomenon independent from the overlying global plate framework. Regions such as southeast Asia and the western United States, where plate boundaries cut into continents, seem especially prone to intraplate tectonics. The volcanic and seismic activity in such areas is conspicuous (Figure 2.1) and often dramatic (Figure 2.2). The semantics of whether tectonic activity in such areas should be considered truly intraplate or treated as some kind of distributed effect of a distant plate boundary largely begs the issue. Regardless of nomenclature, the fundamental causes of many such phenomena remain unclear, and their place in the plate-tectonic framework unresolved. The ancient Precambrian cores of the world’s continents contain special problems for understanding active tectonics. Although the most obvious manifestations of contemporary tectonics, such as seismicity, are decidedly less pronounced than along the plate boundaries, or even in the geologically younger intraplate regions, earthquakes do occur in the craton and near its periphery. In the United States, two of the most destructive earthquakes in history occurred not along the San Andreas Fault but in the nominally “stable” eastern United States near New Madrid, Missouri, in 1811–1812, and near Charleston, South Carolina, in 1886. Seismicity continues in many parts of the eastern United States, and faults of relatively recent geologic vintage have been identified (York and Oliver, 1976). That parts of the cratonic interior are tectonically active in the present time should perhaps not be surprising in view of the geologic record. Features such as the Michigan Basin and the Adirondack Dome (Figure 2.3) are incontrovertible evidence that the cratons were subject to major vertical motions in the past that lack a clear connection to the plate-tectonic scenarios of those times. Geologic strata attest to many gentle inundations and uplifts of the interior platforms that reflect relative mo-
Gene regulatory network A gene regulatory network or genetic regulatory network (GRN) is a collection of DNA segments in a cell which interact with each other indirectly (through their RNA and protein expression products) and with other substances in the cell to govern the gene expression levels of mRNA and proteins. In general, each mRNA molecule goes on to make a specific protein (or set of proteins). In some cases this protein will be structural, and will accumulate at the cell membrane or within the cell to give it particular structural properties. In other cases the protein will be an enzyme, i.e., a micro-machine that catalyses a certain reaction, such as the breakdown of a food source or toxin. Some proteins though serve only to activate other genes, and these are the transcription factors that are the main players in regulatory networks or cascades. By binding to the promoter region at the start of other genes they turn them on, initiating the production of another protein, and so on. Some transcription factors are inhibitory. In single-celled organisms, regulatory networks respond to the external environment, optimising the cell at a given time for survival in this environment. Thus a yeast cell, finding itself in a sugar solution, will turn on genes to make enzymes that process the sugar to alcohol. This process, which we associate with wine-making, is how the yeast cell makes its living, gaining energy to multiply, which under normal circumstances would enhance its survival prospects. In multicellular animals the same principle has been put in the service of gene cascades that control body-shape. Each time a cell divides, two cells result which, although they contain the same genome in full, can differ in which genes are turned on and making proteins. Sometimes a 'self-sustaining feedback loop' ensures that a cell maintains its identity and passes it on. Less understood is the mechanism of epigenetics by which chromatin modification may provide cellular memory by blocking or allowing transcription. A major feature of multicellular animals is the use of morphogen gradients, which in effect provide a positioning system that tells a cell where in the body it is, and hence what sort of cell to become. A gene that is turned on in one cell may make a product that leaves the cell and diffuses through adjacent cells, entering them and turning on genes only when it is present above a certain threshold level. These cells are thus induced into a new fate, and may even generate other morphogens that signal back to the original cell. Over longer distances morphogens may use the active process of signal transduction. Such signalling controls embryogenesis, the building of a body plan from scratch through a series of sequential steps. They also control and maintain adult bodies through feedback processes, and the loss of such feedback because of a mutation can be responsible for the cell proliferation that is seen in cancer. In parallel with this process of building structure, the gene cascade turns on genes that make structural proteins that give each cell the physical properties it needs. It has been suggested that, because biological molecular interactions are intrinsically stochastic, gene networks are the result of cellular processes and not their cause (i.e. cellular Darwinism). However, recent experimental evidence has favored the attractor view of cell fates.
At one level, biological cells can be thought of as "partially mixed bags" of biological chemicals – in the discussion of gene regulatory networks, these chemicals are mostly the mRNAs and proteins that arise from gene expression. These mRNA and proteins interact with each other with various degrees of specificity. Some diffuse around the cell. Others are bound to cell membranes, interacting with molecules in the environment. Still others pass through cell membranes and mediate long range signals to other cells in a multi-cellular organism. These molecules and their interactions comprise a gene regulatory network. A typical gene regulatory network looks something like this: The nodes of this network are proteins, their corresponding mRNAs, and protein/protein complexes. Nodes that are depicted as lying along vertical lines are associated with the cell/environment interfaces, while the others are free-floating and diffusible. Implied are genes, the DNA sequences which are transcribed into the mRNAs that translate into proteins. Edges between nodes represent individual molecular reactions, the protein/protein and protein/mRNA interactions through which the products of one gene affect those of another, though the lack of experimentally obtained information often implies that some reactions are not modeled at such a fine level of detail. These interactions can be inductive (the arrowheads), with an increase in the concentration of one leading to an increase in the other, or inhibitory (the filled circles), with an increase in one leading to a decrease in the other. A series of edges indicates a chain of such dependences, with cycles corresponding to feedback loops. The network structure is an abstraction of the system's chemical dynamics, describing the manifold ways in which one substance affects all the others to which it is connected. In practice, such GRNs are inferred from the biological literature on a given system and represent a distillation of the collective knowledge about a set of related biochemical reactions. To speed up the manual curation of GRNs, some recent efforts try to use text mining and information extraction technologies for this purpose. Genes can be viewed as nodes in the network, with input being proteins such as transcription factors, and outputs being the level of gene expression. The node itself can also be viewed as a function which can be obtained by combining basic functions upon the inputs (in the Boolean network described below these are Boolean functions, typically AND, OR, and NOT). These functions have been interpreted as performing a kind of information processing within the cell, which determines cellular behavior. The basic drivers within cells are concentrations of some proteins, which determine both spatial (location within the cell or tissue) and temporal (cell cycle or developmental stage) coordinates of the cell, as a kind of "cellular memory". The gene networks are only beginning to be understood, and it is a next step for biology to attempt to deduce the functions for each gene "node", to help understand the behavior of the system in increasing levels of complexity, from gene to signaling pathway, cell or tissue level (see systems biology). Mathematical models of GRNs have been developed to capture the behavior of the system being modeled, and in some cases generate predictions corresponding with experimental observations. 
In some other cases, models have proven to make accurate novel predictions, which can be tested experimentally, thus suggesting new approaches to explore in an experiment that sometimes wouldn't be considered in the design of the protocol of an experimental laboratory. The most common modeling technique involves the use of coupled ordinary differential equations (ODEs). Several other promising modeling techniques have been used, including Boolean networks, Petri nets, Bayesian networks, graphical Gaussian models, and stochastic process calculi. Conversely, techniques have been proposed for generating models of GRNs that best explain a set of time series observations. Recently it has been shown that the ChIP-seq signal of histone modifications is more correlated with transcription factor motifs at promoters than the RNA level is. Hence it is proposed that time-series histone modification ChIP-seq could provide more reliable inference of gene-regulatory networks than methods based on expression levels. It is common to model such a network with a set of coupled ordinary differential equations (ODEs) or stochastic ODEs, describing the reaction kinetics of the constituent parts. Suppose that our regulatory network has $N$ nodes, and let $S_1(t), S_2(t), \ldots, S_N(t)$ represent the concentrations of the corresponding substances at time $t$. Then the temporal evolution of the system can be described approximately by

$$\frac{dS_j}{dt} = f_j(S_1, S_2, \ldots, S_N), \qquad j = 1, \ldots, N,$$

where the functions $f_j$ express the dependence of $S_j$ on the concentrations of other substances present in the cell. The functions $f_j$ are ultimately derived from basic principles of chemical kinetics or from simple expressions derived from these, e.g. Michaelis-Menten enzymatic kinetics. Hence, the functional forms of the $f_j$ are usually chosen as low-order polynomials or Hill functions that serve as an ansatz for the real molecular dynamics. Such models are then studied using the mathematics of nonlinear dynamics. System-specific information, like reaction rate constants and sensitivities, is encoded as constant parameters. By solving for the fixed points of the system, $\frac{dS_j}{dt} = 0$ for all $j$, one obtains (possibly several) concentration profiles of proteins and mRNAs that are theoretically sustainable (though not necessarily stable). Steady states of the kinetic equations thus correspond to potential cell types, and oscillatory solutions to the above equation correspond to naturally cyclic cell types. Mathematical stability of these attractors can usually be characterized by the sign of higher derivatives at critical points, and then corresponds to biochemical stability of the concentration profile. Critical points and bifurcations in the equations correspond to critical cell states in which small state or parameter perturbations could switch the system between one of several stable differentiation fates. Trajectories correspond to the unfolding of biological pathways, and transients of the equations to short-term biological events. For a more mathematical discussion, see the articles on nonlinearity, dynamical systems, bifurcation theory, and chaos theory. The following example illustrates how a Boolean network can model a GRN together with its gene products (the outputs) and the substances from the environment that affect it (the inputs). Stuart Kauffman was amongst the first biologists to use the metaphor of Boolean networks to model genetic regulatory networks.
- Each gene, each input, and each output is represented by a node in a directed graph in which there is an arrow from one node to another if and only if there is a causal link between the two nodes. - Each node in the graph can be in one of two states: on or off. - For a gene, "on" corresponds to the gene being expressed; for inputs and outputs, "on" corresponds to the substance being present. - Time is viewed as proceeding in discrete steps. At each step, the new state of a node is a Boolean function of the prior states of the nodes with arrows pointing towards it. The validity of the model can be tested by comparing simulation results with time series observations. Continuous network models of GRNs are an extension of the Boolean networks described above. Nodes still represent genes, and connections between them represent regulatory influences on gene expression. Genes in biological systems display a continuous range of activity levels, and it has been argued that using a continuous representation captures several properties of gene regulatory networks not present in the Boolean model. Formally, most of these approaches are similar to an artificial neural network, as inputs to a node are summed up and the result serves as input to a sigmoid function; proteins do, however, often control gene expression in a synergistic, i.e. non-linear, way. However, there is now a continuous network model that allows grouping of inputs to a node, thus realizing another level of regulation. This model is formally closer to a higher order recurrent neural network. The same model has also been used to mimic the evolution of cellular differentiation and even multicellular morphogenesis. Stochastic gene networks Recent experimental results have demonstrated that gene expression is a stochastic process. Thus, many authors now use a stochastic formalism, building on early work in this area. Works on single gene expression and small synthetic genetic networks, such as the genetic toggle switch of Tim Gardner and Jim Collins, provided additional experimental data on the phenotypic variability and the stochastic nature of gene expression. The first versions of stochastic models of gene expression involved only instantaneous reactions and were driven by the Gillespie algorithm. Since some processes, such as gene transcription, involve many reactions and could not be correctly modeled as an instantaneous reaction in a single step, it was proposed to model these reactions as single-step multiple delayed reactions in order to account for the time it takes for the entire process to be complete. From here, a set of reactions was proposed that allows generating GRNs. These are then simulated using a modified version of the Gillespie algorithm that can simulate multiple time delayed reactions (chemical reactions where each of the products is provided a time delay that determines when it will be released in the system as a "finished product"). For example, basic transcription of a gene can be represented as a single-step delayed reaction involving the RNA polymerase (RNAP), the RNA ribosome binding site (RBS), and the promoter region Pro i of gene i. Furthermore, there seems to be a trade-off between the noise in gene expression, the speed with which genes can switch, and the metabolic cost associated with their functioning. More specifically, for any given level of metabolic cost, there is an optimal trade-off between noise and processing speed, and increasing the metabolic cost leads to better speed-noise trade-offs.
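To make the stochastic formalism concrete, here is a minimal Gillespie-style simulation in Python of a single gene producing and degrading mRNA molecules (a sketch only: it has no delayed reactions, and the two rate constants are invented for illustration):

import random

PRODUCTION_RATE = 0.5    # mRNA molecules produced per minute (assumed)
DEGRADATION_RATE = 0.05  # per-molecule degradation rate per minute (assumed)

def gillespie(t_end=200.0, seed=1):
    """Exact stochastic simulation of: gene -> gene + mRNA, mRNA -> (degraded)."""
    random.seed(seed)
    t, mrna = 0.0, 0
    trajectory = [(t, mrna)]
    while t < t_end:
        a_production = PRODUCTION_RATE
        a_degradation = DEGRADATION_RATE * mrna
        a_total = a_production + a_degradation
        t += random.expovariate(a_total)               # waiting time until the next reaction
        if random.random() < a_production / a_total:   # pick which reaction fires
            mrna += 1
        else:
            mrna -= 1
        trajectory.append((t, mrna))
    return trajectory

final_time, final_count = gillespie()[-1]
print(f"mRNA count at t = {final_time:.0f} min: {final_count} "
      f"(steady-state mean is {PRODUCTION_RATE / DEGRADATION_RATE:.0f})")

Repeated runs with different seeds give different trajectories scattered around the same mean, which is exactly the run-to-run variability that deterministic ODE models average away.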
A recent work proposed a simulator (SGNSim, Stochastic Gene Networks Simulator) that can model GRNs where transcription and translation are modeled as multiple time-delayed events, and whose dynamics are driven by a stochastic simulation algorithm (SSA) able to deal with multiple time-delayed events. The time delays can be drawn from several distributions and the reaction rates from complex functions or from physical parameters. SGNSim can generate ensembles of GRNs within a set of user-defined parameters, such as topology. It can also be used to model specific GRNs and systems of chemical reactions. Genetic perturbations such as gene deletions, gene over-expression, insertions and frame-shift mutations can also be modeled.

The GRN is created from a graph with the desired topology, imposing in-degree and out-degree distributions. Gene promoter activities are affected by other genes' expression products that act as inputs, in the form of monomers or combined into multimers, and set as direct or indirect. Next, each direct input is assigned to an operator site, and different transcription factors can be allowed, or not, to compete for the same operator site, while indirect inputs are given a target. Finally, a function is assigned to each gene, defining the gene's response to a combination of transcription factors (promoter state). The transfer functions (that is, how genes respond to a combination of inputs) can be assigned to each combination of promoter states as desired.

In other recent work, multiscale models of gene regulatory networks have been developed that focus on synthetic biology applications. Simulations have been used that model all biomolecular interactions in transcription, translation, regulation, and induction of gene regulatory networks, guiding the design of synthetic systems.

Other work has focused on predicting gene expression levels in a gene regulatory network. The approaches used to model gene regulatory networks have been constrained to be interpretable and, as a result, are generally simplified versions of the network. For example, Boolean networks have been used due to their simplicity and ability to handle noisy data, but lose information by representing genes in a binary fashion. Similarly, artificial neural networks often omit the hidden layer so that they can be interpreted, losing the ability to model higher-order correlations in the data. Using a model that is not constrained to be interpretable, a more accurate model can be produced. Being able to predict gene expression more accurately provides a way to explore how drugs affect a system of genes, as well as to find which genes are interrelated in a process. This has been encouraged by the DREAM project, which runs a competition for the best prediction algorithms. Some other recent work has used artificial neural networks with a hidden layer.

Structure and evolution

Gene regulatory networks are generally thought to be made up of a few highly connected nodes (hubs) and many poorly connected nodes nested within a hierarchical regulatory regime. Thus gene regulatory networks approximate a hierarchical scale-free network topology. This is consistent with the view that most genes have limited pleiotropy and operate within regulatory modules. This structure is thought to evolve due to the preferential attachment of duplicated genes to more highly connected genes. Recent work has also shown that natural selection tends to favor networks with sparse connectivity.
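To illustrate the kind of hub-dominated, sparsely connected topology described above (for instance as a starting point for a simulator such as SGNSim), the sketch below generates a directed scale-free graph and lists its most highly connected regulators. The use of NetworkX and of its default preferential-attachment parameters is an assumption made purely for illustration.

```python
# Generate a random directed network with scale-free in- and out-degree
# distributions, as a stand-in for a synthetic gene regulatory network topology.
import networkx as nx

G = nx.scale_free_graph(200, seed=42)   # directed multigraph built by preferential attachment
G = nx.DiGraph(G)                       # collapse parallel edges

# Hubs: the "genes" that regulate the largest number of targets.
hubs = sorted(G.nodes, key=G.out_degree, reverse=True)[:5]
print("most connected regulators:", [(g, G.out_degree(g)) for g in hubs])
print("mean out-degree:", sum(d for _, d in G.out_degree()) / G.number_of_nodes())
```

Graphs of this kind, with a few hubs and many poorly connected nodes, reproduce the qualitative structure described above.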
There are primarily two ways that networks can evolve, both of which can occur simultaneously. The first is that network topology can be changed by the addition or subtraction of nodes (genes), or by parts of the network (modules) being expressed in different contexts. The Drosophila Hippo signaling pathway provides a good example. The Hippo signaling pathway controls both mitotic growth and post-mitotic cellular differentiation. Recently it was found that the network in which the Hippo signaling pathway operates differs between these two functions, which in turn changes the behavior of the pathway. This suggests that the Hippo signaling pathway operates as a conserved regulatory module that can be used for multiple functions depending on context. Thus, changing network topology can allow a conserved module to serve multiple functions and alter the final output of the network. The second way networks can evolve is by changing the strength of interactions between nodes, such as how strongly a transcription factor may bind to a cis-regulatory element. Such variation in the strength of network edges has been shown to underlie between-species variation in vulva cell fate patterning of Caenorhabditis worms.

Bacterial regulatory networks

Regulatory networks allow bacteria to adapt to almost every environmental niche on earth. A network of interactions among diverse types of molecules, including DNA, RNA, proteins and metabolites, is utilised by the bacteria to achieve regulation of gene expression. In bacteria, the principal function of regulatory networks is to control the response to environmental changes, for example nutritional status and environmental stress. A complex organization of networks permits the microorganism to coordinate and integrate multiple environmental signals.
- Body plan
- Cis-regulatory module
- Genenetwork (database)
- Systems biology
- Weighted gene co-expression network analysis
- "Transcriptional Regulatory Networks in Saccharomyces cerevisiae". Young Lab.
- Davidson E, Levin M; Levin (April 2005). "Gene regulatory networks". Proc. Natl. Acad. Sci. U.S.A. 102 (14): 4935. doi:10.1073/pnas.0502024102. PMC 556010. PMID 15809445.
- Florian Leitner, Martin Krallinger, Sushil Tripathi, Martin Kuiper, Astrid Lægreid and Alfonso Valencia, Mining cis-Regulatory Transcription Networks from Literature, Proceedings of BioLINK Special Interest Group, 5-12, ISBM/ECCB, 2013
- Vibhor Kumar, Masafumi Muratani, Nirmala Arul Rayan, Petra Kraus, Thomas Lufkin, Huck Hui Ng and Shyam Prabhakar, Uniform, optimal signal processing of mapped deep-sequencing data, Nature biotechnology, 2013
- Chu D, Zabet NR, Mitavskiy B; Zabet; Mitavskiy (April 2009). "Models of transcription factor binding: sensitivity of activation functions to model assumptions". J. Theor. Biol. 257 (3): 419–29. doi:10.1016/j.jtbi.2008.11.026. PMID 19121637.
- Kauffman, Stuart (1993). The Origins of Order. ISBN 0-19-505811-9.
- Kauffman SA (1969). "Metabolic stability and epigenesis in randomly constructed genetic nets" (PDF). Journal of Theoretical Biology 22 (3): 437–467. doi:10.1016/0022-5193(69)90015-0. PMID 5803332.
- Vohradsky J (September 2001). "Neural model of the genetic network". J. Biol. Chem. 276 (39): 36168–73. doi:10.1074/jbc.M104391200. PMID 11395518.
- Geard N, Wiles J; Wiles (2005). "A gene network model for developing cell lineages". Artif. Life 11 (3): 249–67. doi:10.1162/1064546054407202. PMID 16053570.
- Schilstra MJ, Bolouri H (2 January 2002).
"Modelling the Regulation of Gene Expression in Genetic Regulatory Networks". Biocomputation group, University of Hertfordshire. - Knabe JF, Nehaniv CL, Schilstra MJ, Quick T (2006). "Evolving Biological Clocks using Genetic Regulatory Networks". Proceedings of the Artificial Life X Conference (Alife 10). MIT Press. pp. 15–21. CiteSeerX: 10 .1 .1 .72 .5016. - Knabe JF, Nehaniv CL, Schilstra MJ (2006). "Evolutionary Robustness of Differentiation in Genetic Regulatory Networks". Proceedings of the 7th German Workshop on Artificial Life 2006 (GWAL-7). Berlin: Akademische Verlagsgesellschaft Aka. pp. 75–84. CiteSeerX: 10 .1 .1 .71 .8768. - Knabe JF, Schilstra MJ, Nehaniv CL (2008). "Evolution and Morphogenesis of Differentiated Multicellular Organisms: Autonomously Generated Diffusion Gradients for Positional Information" (PDF). Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems. MIT Press. - Elowitz MB, Levine AJ, Siggia ED, Swain PS; Levine; Siggia; Swain (August 2002). "Stochastic gene expression in a single cell". Science 297 (5584): 1183–6. doi:10.1126/science.1070919. PMID 12183631. - Blake WJ, KAErn M, Cantor CR, Collins JJ; Kaern; Cantor; Collins (April 2003). "Noise in eukaryotic gene expression" (PDF). Nature 422 (6932): 633–7. doi:10.1038/nature01546. PMID 12687005. - Arkin A, Ross J, McAdams HH; Ross; McAdams (August 1998). "Stochastic kinetic analysis of developmental pathway bifurcation in phage lambda-infected Escherichia coli cells". Genetics 149 (4): 1633–48. PMC 1460268. PMID 9691025. - Raser JM, O'Shea EK; O'Shea (September 2005). "Noise in Gene Expression: Origins, Consequences, and Control". Science 309 (5743): 2010–3. doi:10.1126/science.1105891. PMC 1360161. PMID 16179466. - Elowitz MB, Leibler S; Leibler (January 2000). "A synthetic oscillatory network of transcriptional regulators". Nature 403 (6767): 335–8. doi:10.1038/35002125. PMID 10659856. - Gardner TS, Cantor CR, Collins JJ; Cantor; Collins (January 2000). "Construction of a genetic toggle switch in Escherichia coli". Nature 403 (6767): 339–42. doi:10.1038/35002131. PMID 10659857. - Gillespie DT (1976). "A general method for numerically simulating the stochastic time evolution of coupled chemical reactions". J. Comput. Phys. 22 (4): 403–34. doi:10.1016/0021-9991(76)90041-3. - Roussel MR, Zhu R; Zhu (November 2006). "Validation of an algorithm for delay stochastic simulation of transcription and translation in prokaryotic gene expression". Phys Biol 3 (4): 274–84. doi:10.1088/1478-3975/3/4/005. PMID 17200603. - Ribeiro A, Zhu R, Kauffman SA; Zhu; Kauffman (November 2006). "A general modeling strategy for gene regulatory networks with stochastic dynamics". J. Comput. Biol. 13 (9): 1630–9. doi:10.1089/cmb.2006.13.1630. PMID 17147485. - Zabet NR, Chu DF; Chu (June 2010). "Computational limits to binary genes". Journal of the Royal Society Interface 7 (47): 945–954. doi:10.1098/rsif.2009.0474. PMC 2871807. PMID 20007173. - Chu DF, Zabet NR, Hone ANW; Zabet; Hone (May–Jun 2011). "Optimal Parameter Settings for Information Processing in Gene Regulatory Networks". BioSystems 104 (2–3): 99–108. doi:10.1016/j.biosystems.2011.01.006. PMID 21256918. - Zabet NR (September 2011). "Negative feedback and physical limits of genes". Journal of Theoretical Biology 248 (1): 82–91. doi:10.1016/j.jtbi.2011.06.021. PMID 21723295. - Ribeiro AS, Lloyd-Price J; Lloyd-Price (March 2007). "SGN Sim, a stochastic genetic networks simulator". Bioinformatics 23 (6): 777–9. 
doi:10.1093/bioinformatics/btm004. PMID 17267430. - Kaznessis YN (2007). "Models for synthetic biology". BMC Syst Biol 1: 47. doi:10.1186/1752-0509-1-47. PMC 2194732. PMID 17986347. - "The DREAM Project". Columbia University Center for Multiscale Analysis Genomic and Cellular Networks (MAGNet). - Gustafsson M, Hörnquist M; Hörnquist (2010). "Gene Expression Prediction by Soft Integration and the Elastic Net—Best Performance of the DREAM3 Gene Expression Challenge". PLoS ONE 5 (2): e9134. doi:10.1371/journal.pone.0009134. PMID 20169069. - Smith MR, Clement M, Martinez T, Snell Q (2010). "Time Series Gene Expression Prediction using Neural Networks with Hidden Layers" (PDF). Proceedings of the 7th Biotechnology and Bioinformatics Symposium (BIOT 2010). pp. 67–69. - Barabasi, A.; Oltvai, Z. N. (2004). "Network biology: understanding the cells' functional organization". Nature Reviews Genetics 5 (2): 101–113. doi:10.1038/nrg1272. PMID 14735121. - Wagner, G. P. and J. Zhang. 2011. The pleiotropic structure of the genotype-phenotype map: the evolvability of complex organisms. Nature Review Genetics 12: 204-213 - Robert D Leclerc (August 2008). "Survival of the sparest: robust gene networks are parsimonious". Molecular Systems Biology 4 (1): 213. doi:10.1038/msb.2008.52. PMC 2538912. PMID 18682703. - Jukam; Xie, D. B.; Rister, J.; Terrell, D.; Charlton-Perkins, M.; Pistillo, D.; Gebelein, B.; Desplan, C.; Cook, T. et al. (2013). "Opposite feedbacks in the Hippo pathway for growth control and neural fate". Science 342: 211–219. doi:10.1126/science.1238016. - Hoyos, E.; Kim, K.; Milloz, J.; Barkoulas, M.; Penigault, J.; Munro, E.; Felix, M. (2011). "Quantitative variation in autocrine signaling and pathway crosstalk in the Caenorhabditis vulva network". Current Biology 21 (7): 527–538. doi:10.1016/j.cub.2011.02.040. PMID 21458263. - Filloux, AAM (editor) (2012). Bacterial Regulatory Networks. Caister Academic Press. ISBN 978-1-908230-03-4. - Gross, R; Beier, D (editor) (2012). Two-Component Systems in Bacteria. Caister Academic Press. ISBN 978-1-908230-08-9. - Requena, JM (editor) (2012). Stress Response in Microbiology. Caister Academic Press. ISBN 978-1-908230-04-1. - Bolouri, Hamid; Bower, James M. (2001). Computational modeling of genetic and biochemical networks. Cambridge, Mass: MIT Press. ISBN 0-262-02481-0. - Kauffman SA (1969). "Metabolic stability and epigenesis in randomly constructed genetic nets". J. Theor. Biol. 22: 434–67. - Gene Regulatory Networks — Short introduction - Open source web service for GRN analysis - BIB: Yeast Biological Interaction Browser - Graphical Gaussian models for genome data — Inference of gene association networks with GGMs - A bibliography on learning causal networks of gene interactions - regularly updated, contains hundreds of links to papers from bioinformatics, statistics, machine learning. - http://mips.gsf.de/proj/biorel/ BIOREL is a web-based resource for quantitative estimation of the gene network bias in relation to available database information about gene activity/function/properties/associations/interactio. - Evolving Biological Clocks using Genetic Regulatory Networks - Information page with model source code and Java applet. 
- Engineered Gene Networks - Tutorial: Genetic Algorithms and their Application to the Artificial Evolution of Genetic Regulatory Networks - BEN: a web-based resource for exploring the connections between genes, diseases, and other biomedical entities - Global protein-protein interaction and gene regulation network of Arabidopsis thaliana
December 21, 1994: The Hubble telescope's crisp vision has captured a first-time view of one of the smallest stars in our Milky Way Galaxy. Called Gliese 623b or Gl623b, the diminutive star is 10 times less massive than the Sun and 60,000 times fainter. (If it were as far away as the Sun, it would be only eight times brighter than the full Moon). Located 25 light-years from Earth in the constellation Hercules, Gl623b is the smaller component of a double-star system, where the separation between the two members is only twice the distance between Earth and the Sun (approximately 200 million miles). The small star completes one orbit around its larger companion every four years.
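As a rough plausibility check on the brightness comparison in that caption, the standard magnitude relation can be applied; the apparent magnitudes assumed below for the Sun (about -26.7) and the full Moon (about -12.7) are textbook values and are not part of the original release.

```python
# Rough check: a star 60,000 times fainter than the Sun, placed at the Sun's
# distance, compared with the full Moon.
# Assumed apparent magnitudes (textbook values): Sun ~ -26.7, full Moon ~ -12.7.
import math

m_sun, m_moon = -26.7, -12.7
flux_ratio_star_vs_sun = 1 / 60_000            # "60,000 times fainter"

m_star = m_sun - 2.5 * math.log10(flux_ratio_star_vs_sun)
brighter_than_moon = 10 ** ((m_moon - m_star) / 2.5)

print(f"apparent magnitude of Gl623b at 1 AU: {m_star:.1f}")
print(f"times brighter than the full Moon:    {brighter_than_moon:.1f}")
```

The result, roughly seven times brighter than the full Moon, is in line with the release's "only eight times brighter" once the rounding of the assumed magnitudes is taken into account.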
Need to know the basic facts about William Shakespeare and the quartos? Or want to refresh your knowledge? We have created this section to get you up to speed.

Who was William Shakespeare?

Shakespeare was born in Stratford-upon-Avon, Warwickshire, in 1564. Very little is known about his life, but by 1592 he was in London working as an actor and a dramatist. Between about 1590 and 1613, Shakespeare wrote at least 37 plays and collaborated on several more. Many of these plays were very successful both at court and in the public playhouses. In 1613, Shakespeare retired from the theatre and returned to Stratford-upon-Avon. He died and was buried there in 1616.

What did he write?

Shakespeare wrote plays and poems. His plays were comedies, histories and tragedies. His 17 comedies include A Midsummer Night’s Dream and The Merry Wives of Windsor. Among his 10 history plays are Henry V and Richard III. The most famous among his 10 tragedies are Hamlet, Othello, and King Lear. Shakespeare’s best-known poems are The Sonnets, first published in 1609.

What are the quartos?

Shakespeare’s plays began to be printed in 1594, probably with his tragedy Titus Andronicus. This appeared as a small, cheap pamphlet called a quarto because of the way it was printed. Eighteen of Shakespeare’s plays had appeared in quarto editions by the time of his death in 1616. Another three plays were printed in quarto before 1642. In 1623 an expensive folio volume of 36 plays by Shakespeare was printed, which included most of those printed in quarto.

Why are the quartos important?

None of Shakespeare’s manuscripts survives, so the printed texts of his plays are our only source for what he originally wrote. The quarto editions are the texts closest to Shakespeare’s time. Some are thought to preserve either his working drafts (his foul papers) or his fair copies. Others are thought to record versions remembered by actors who performed the plays, providing information about staging practices in Shakespeare’s day.

Tell me more: Quotation in context

“To be, or not to be, that is the question”
Hamlet: Act 3, Scene 1

The first “bad” quarto of 1603 reads ‘To be, or not to be, I there’s the point’.
The second “good” quarto of 1605 reads ‘To be, or not to be, that is the question’.

Compare the two copies.
Pilgrims, Puritans, and Jamestown Webquest and Video Analysis with Key: this is a 14-page document that contains a webquest and a completed teacher's key for easy marking. It contains 37 questions based on a Crash Course U.S. History video and the History.com website. Students will be introduced to the initial English colonies in the New World by watching the video: Crash Course U.S. History: When is Thanksgiving? Colonizing America. The video will introduce the Jamestown colony in Virginia and discuss how tobacco was the commodity that allowed the first successful English colony to thrive. The mysterious Roanoke colony is also mentioned. The New England colonies, which were settled by the Puritans and the Pilgrims, are also discussed at great length. The difference between the two groups, their devout faith, and the hardships they endured in the New World are all covered. Following the video, students will proceed to the History.com website where they will watch additional videos about the Puritans and the Pilgrims, read corresponding articles on the two groups, and gain a firmer understanding of how these settlers began to shape the nation that would later emerge from these early colonies. This webquest can be easily used as an introduction to the topic, a study guide, or a quick and easy sub plan.
Io (pronounced /ˈaɪ.oʊ/, or as Greek Ἰώ) is the innermost of the four Galilean moons of the planet Jupiter and, with a diameter of 3,642 kilometres, the fourth-largest moon in the Solar System. It was named after Io, a priestess of Hera who became one of the lovers of Zeus. With over 400 active volcanoes, Io is the most geologically active object in the Solar System. This extreme geologic activity is the result of tidal heating from friction generated within Io's interior by Jupiter's varying pull. Several volcanoes produce plumes of sulfur and sulfur dioxide that climb as high as 500 km (310 mi). Io's surface is also dotted with more than 100 mountains that have been uplifted by extensive compression at the base of the moon's silicate crust. Some of these peaks are taller than Earth's Mount Everest. Unlike most satellites in the outer Solar System (which have a thick coating of ice), Io is primarily composed of silicate rock surrounding a molten iron or iron sulfide core. Most of Io's surface is characterized by extensive plains coated with sulfur and sulfur dioxide frost. Io's volcanism is responsible for many of that satellite's unique features. Its volcanic plumes and lava flows produce large surface changes and paint the surface in various shades of red, yellow, white, black, and green, largely due to the sulfurous compounds. Numerous extensive lava flows, several longer than 500 kilometres (311 mi) in length, also mark the surface. These volcanic processes have given rise to a comparison of the visual appearance of Io's surface to a pizza. The materials produced by this volcanism provide material for Io's thin, patchy atmosphere and Jupiter's extensive magnetosphere. Io played a significant role in the development of astronomy in the 17th and 18th centuries. It was discovered in 1610 by Galileo Galilei, along with the other Galilean satellites. This discovery furthered the adoption of the Copernican model of the Solar System, the development of Kepler's laws of motion, and the first measurement of the speed of light. From Earth, Io remained nothing more than a point of light until the late 19th and early 20th centuries, when it became possible to resolve its large-scale surface features, such as the dark red polar and bright equatorial regions. In 1979, the two Voyager spacecraft revealed Io to be a geologically active world, with numerous volcanic features, large mountains, and a young surface with no obvious impact craters. The Galileo spacecraft performed several close flybys in the 1990s and early 2000s, obtaining data about Io's interior structure and surface composition. These spacecraft also revealed the relationship between the satellite and Jupiter's magnetosphere and the existence of a belt of radiation centered on Io's orbit. Io receives about 3,600 rem of radiation per day. The exploration of Io continued in the early months of 2007 with a distant flyby by Pluto-bound New Horizons. While Simon Marius is not credited with the sole discovery of the Galilean satellites, his names for the moons have stuck. In his 1614 publication Mundus Jovialis, he named the innermost large moon of Jupiter after the Greek mythological figure Io, one of the many lovers of Zeus (who is also known as Jupiter in Roman mythology). Marius' names fell out of favor, and were not revived in common use until the mid-20th century. 
In much of the earlier astronomical literature, Io is simply referred to by its Roman numeral designation (a system introduced by Galileo) as "Jupiter I", or simply as "the first satellite of Jupiter". The most common adjectival form of the name is Ionian. Features on Io are named after characters and places from the Io myth, as well as deities of fire, volcanoes, the Sun, and thunder from various myths, and characters and places from Dante's Inferno, names appropriate to the volcanic nature of the surface. Since the surface was first seen up close by Voyager 1, the International Astronomical Union has approved 225 names for Io's volcanoes, mountains, plateaus, and large albedo features. The approved feature names used for Io include patera (volcanic depression), mons, mensa, planum, and tholus (various types of mountain, with morphologic characteristics such as size, shape, and height determining the term used), fluctus (lava flow), vallis (lava channel), regio (large-scale albedo feature), and active eruptive center (location where plume activity was the first sign of volcanic activity at a particular volcano). Examples of named features include Prometheus, Pan Mensa, Tvashtar Paterae, and Tsũi Goab Fluctus.

Observational history

The first reported observation of Io was made by Galileo Galilei on January 7, 1610. The discovery of Io and the other Galilean satellites of Jupiter was published in Galileo's Sidereus Nuncius in March 1610. In his Mundus Jovialis, published in 1614, Simon Marius claimed to have discovered Io and the other moons of Jupiter in 1609, one week before Galileo's discovery. Galileo doubted this claim and dismissed the work of Marius as plagiarism. Given that Galileo published his work before Marius, Galileo is credited with the discovery. For the next two and a half centuries, Io remained an unresolved, 5th-magnitude point of light in astronomers' telescopes. During the 17th century, Io and the other Galilean satellites served a variety of purposes, such as helping mariners determine their longitude, validating Kepler's Third Law of planetary motion, and determining the time required for light to travel between Jupiter and Earth. Based on ephemerides produced by astronomer Giovanni Cassini and others, Pierre-Simon Laplace created a mathematical theory to explain the resonant orbits of Io, Europa, and Ganymede. This resonance was later found to have a profound effect on the geologies of the three moons. Improved telescope technology in the late 19th and 20th centuries allowed astronomers to resolve (that is, see) large-scale surface features on Io. In the 1890s, Edward E. Barnard was the first to observe variations in Io's brightness between its equatorial and polar regions, correctly determining that this was due to differences in color and albedo between the two regions and not due to Io being egg-shaped, as proposed at the time by fellow astronomer William Pickering, or two separate objects, as initially proposed by Barnard. Later telescopic observations confirmed Io's distinct reddish-brown polar regions and yellow-white equatorial band. Telescopic observations in the mid-20th century began to hint at Io's unusual nature. Spectroscopic observations suggested that Io's surface was devoid of water ice (a substance found to be plentiful on the other Galilean satellites). The same observations suggested a surface dominated by evaporites composed of sodium salts and sulfur.
Radio telescopic observations revealed Io's influence on the Jovian magnetosphere, as demonstrated by decametric wavelength bursts tied to the orbital period of Io. The first spacecraft to pass by Io were the twin Pioneer 10 and 11 probes on December 3, 1973 and December 2, 1974 respectively. Radio tracking provided an improved estimate of Io's mass, which, along with the best available information of Io's size, suggested that Io had the highest density of the four Galilean satellites, and was composed primarily of silicate rock rather than water ice. The Pioneers also revealed the presence of a thin atmosphere at Io and intense radiation belts near the orbit of Io. The camera on board Pioneer 11 took the only good image of Io obtained by either spacecraft, showing its north polar region. Close-up images were planned during Pioneer 10's encounter with Io, but those observations were lost because of the high-radiation environment. When the twin probes Voyager 1 and Voyager 2 passed by Io in 1979, their more advanced imaging system allowed for far more detailed images. Voyager 1 flew past the satellite on March 5, 1979 from a distance of 20,600 km (12,800 mi). The images returned during the approach revealed a strange, multi-colored landscape devoid of impact craters. The highest-resolution images showed a relatively young surface punctuated by oddly shaped pits, mountains taller than Mount Everest, and features resembling volcanic lava flows. Shortly after the encounter, Voyager navigation engineer Linda A. Morabito noticed a plume emanating from the surface in one of the images. Analysis of other Voyager 1 images showed nine such plumes scattered across the surface, proving that Io was volcanically active. This conclusion was predicted in a paper published shortly before the Voyager 1 encounter by Stan J. Peale, Patrick Cassen, and R. T. Reynolds. The authors calculated that Io's interior must experience significant tidal heating caused by its orbital resonance with Europa and Ganymede (see the "Tidal heating" section for a more detailed explanation of the process). Data from this flyby showed that the surface of Io is dominated by sulfur and sulfur dioxide frosts. These compounds also dominate its thin atmosphere and the torus of plasma centered on Io's orbit (also discovered by Voyager). Voyager 2 passed Io on July 9, 1979 at a distance of 1,130,000 km (702,150 mi). Though it did not approach nearly as close as Voyager 1, comparisons between images taken by the two spacecraft showed several surface changes that had occurred in the four months between the encounters. In addition, observations of Io as a crescent as Voyager 2 departed the Jovian system revealed that eight of the nine plumes observed in March were still active in July 1979, with only the volcano Pele shutting down between flybys.

Galileo encounters with Io:

| Date | Distance (km) |
| --- | --- |
| December 7, 1995 | 897 |
| November 4, 1996 | 244,000 |
| March 29, 1998 | 252,000 |
| July 2, 1999 | 127,000 |
| October 11, 1999 | 611 |
| November 26, 1999 | 301 |
| February 22, 2000 | 198 |
| August 6, 2001 | 194 |
| October 16, 2001 | 184 |
| January 17, 2002 | 102 |
| November 7, 2002 | 45,800 |

The Galileo spacecraft arrived at Jupiter in 1995 after a six-year journey from Earth to follow up on the discoveries of the two Voyager probes and ground-based observations taken in the intervening years.
Io's location within one of Jupiter's most intense radiation belts precluded a prolonged close flyby, but Galileo did pass close by shortly before entering orbit for its two-year primary mission studying the Jovian system. While no images were taken during the close flyby on December 7, 1995, the encounter did yield significant results, such as the discovery of a large iron core, similar to that found in the rocky planets of the inner solar system. Despite the lack of close-up imaging and mechanical problems that greatly restricted the amount of data returned, several significant discoveries were made during Galileo's primary mission. Galileo observed the effects of a major eruption at Pillan Patera and confirmed that volcanic eruptions are composed of silicate magmas with magnesium-rich mafic and ultramafic compositions, with sulfur and sulfur dioxide serving a similar role to water and carbon dioxide on Earth. Distant imaging of Io was acquired for almost every orbit during the primary mission, revealing large numbers of active volcanoes (both thermal emission from cooling magma on the surface and volcanic plumes), numerous mountains with widely varying morphologies, and several surface changes that had taken place both between the Voyager and Galileo eras and between Galileo orbits. The Galileo mission was twice extended, in 1997 and 2000. During these extended missions, the probe flew by Io three times in late 1999 and early 2000 and three times in late 2001 and early 2002. Observations during these encounters revealed the geologic processes occurring at Io's volcanoes and mountains, excluded the presence of a magnetic field, and demonstrated the extent of volcanic activity. In December 2000, the Cassini spacecraft had a distant and brief encounter with the Jupiter system en route to Saturn, allowing for joint observations with Galileo. These observations revealed a new plume at Tvashtar Paterae and provided insights into Io's aurorae.

Subsequent observations

Following Galileo's fiery demise in Jupiter's atmosphere in September 2003, new observations of Io's volcanism came from Earth-based telescopes. In particular, adaptive optics imaging from the Keck telescope in Hawaii and imaging from the Hubble telescope have allowed astronomers to monitor Io's active volcanoes. This imaging has allowed scientists to monitor volcanic activity on Io, even without a spacecraft in the Jupiter system. The New Horizons spacecraft, en route to Pluto and the Kuiper belt, flew by the Jupiter system and Io on February 28, 2007. During the encounter, numerous distant observations of Io were obtained. These included images of a large plume at Tvashtar, providing the first detailed observations of the largest class of Ionian volcanic plume since observations of Pele's plume in 1979. New Horizons also captured images of a volcano near Girru Patera in the early stages of an eruption, and several volcanic eruptions that have occurred since Galileo. There are currently two forthcoming missions planned for the Jupiter system. Juno, scheduled to launch in 2011, has limited imaging capabilities, but it could provide monitoring of Io's volcanic activity using its near-infrared spectrometer, JIRAM. The Europa/Jupiter System Mission (EJSM), a joint NASA/ESA project approved in February 2009 and scheduled to launch in 2020, would study Io using two spacecraft, NASA's Jupiter Europa Orbiter and ESA's Jupiter Ganymede Orbiter.
While most of the observations of Io would be acquired from a distance as both spacecraft focus primarily on the icy Galilean satellites, the Jupiter Europa Orbiter would perform four close flybys of Io in 2025 and 2026 prior to going into orbit around Europa. ESA's contribution will still face funding competition from other ESA projects. In addition to these missions already approved by NASA, several dedicated Io missions have been proposed. One, called the Io Volcano Observer, would launch in 2015 as a Discovery-class mission and involve multiple flybys of Io while in orbit around Jupiter; however, at present this mission remains in the concept study phase.

Orbit and rotation

Io orbits Jupiter at a distance of 421,700 km (262,000 mi) from the planet's center and 350,000 km (217,000 mi) from its cloudtops. It is the innermost of the Galilean satellites of Jupiter, its orbit lying between those of Thebe and Europa. Including Jupiter's inner satellites, Io is the fifth moon out from Jupiter. It takes 42.5 hours to revolve once (fast enough for its motion to be observed over a single night of observation). Io is in a 2:1 mean-motion orbital resonance with Europa and a 4:1 mean-motion orbital resonance with Ganymede, completing two orbits of Jupiter for every one orbit completed by Europa, and four orbits for every one completed by Ganymede. This resonance helps maintain Io's orbital eccentricity (0.0041), which in turn provides the primary heating source for its geologic activity (see the "Tidal heating" section for a more detailed explanation of the process). Without this forced eccentricity, Io's orbit would circularize through tidal dissipation, leading to a geologically less active world. Like the other Galilean satellites of Jupiter and the Earth's Moon, Io rotates synchronously with its orbital period, keeping one face nearly pointed toward Jupiter. This synchronicity provides the definition for Io's longitude system. Io's prime meridian intersects the north and south poles, and the equator at the sub-Jovian point. The side of Io that always faces Jupiter is known as the subjovian hemisphere, while the side that always faces away is known as the antijovian hemisphere. The side of Io that always faces in the direction that the moon travels in its orbit is known as the leading hemisphere, while the side that always faces in the opposite direction is known as the trailing hemisphere.

Interaction with Jupiter's magnetosphere

Io plays a significant role in shaping the Jovian magnetic field. The magnetosphere of Jupiter sweeps up gases and dust from Io's thin atmosphere at a rate of 1 tonne per second. This material is mostly composed of ionized and atomic sulfur, oxygen and chlorine; atomic sodium and potassium; molecular sulfur dioxide and sulfur; and sodium chloride dust. These materials ultimately originate from Io's volcanic activity, but the material that escapes to Jupiter's magnetic field and into interplanetary space comes directly from Io's atmosphere. These materials, depending on their ionized state and composition, ultimately end up in various neutral (non-ionized) clouds and radiation belts in Jupiter's magnetosphere and, in some cases, are eventually ejected from the Jovian system. Surrounding Io (up to a distance of 6 Io radii from the moon's surface) is a cloud of neutral sulfur, oxygen, sodium, and potassium atoms.
These particles originate in Io's upper atmosphere but are excited from collisions with ions in the plasma torus (discussed below) and other processes into filling Io's Hill sphere, which is the region where the moon's gravity is predominant over Jupiter. Some of this material escapes Io's gravitational pull and goes into orbit around Jupiter. Over a 20-hour period, these particles spread out from Io to form a banana-shaped, neutral cloud that can reach as far as 6 Jovian radii from Io, either inside Io's orbit and ahead of the satellite or outside Io's orbit and behind the satellite. The collisional process that excites these particles also occasionally provides sodium ions in the plasma torus with an electron, removing those new "fast" neutrals from the torus. However, these particles still retain their velocity (70 km/s, compared to the 17 km/s orbital velocity at Io), leading these particles to be ejected in jets leading away from Io. Io orbits within a belt of intense radiation known as the Io plasma torus. The plasma in this doughnut-shaped ring of ionized sulfur, oxygen, sodium, and chlorine originates when neutral atoms in the "cloud" surrounding Io are ionized and carried along by the Jovian magnetosphere. Unlike the particles in the neutral cloud, these particles co-rotate with Jupiter's magnetosphere, revolving around Jupiter at 74 km/s. Like the rest of Jupiter's magnetic field, the plasma torus is tilted with respect to Jupiter's equator (and Io's orbital plane), meaning Io is at times below and at other times above the core of the plasma torus. As noted above, these ions' higher velocity and energy levels are partly responsible for the removal of neutral atoms and molecules from Io's atmosphere and more extended neutral cloud. The torus is composed of three sections: an outer, "warm" torus that resides just outside Io's orbit; a vertically extended region known as the "ribbon", composed of the neutral source region and cooling plasma, located at around Io's distance from Jupiter; and an inner, "cold" torus, composed of particles that are slowly spiraling in toward Jupiter. After residing an average of 40 days in the torus, particles in the "warm" torus escape and are partially responsible for Jupiter's unusually large magnetosphere, their outward pressure inflating it from within. Particles from Io, detected as variations in magnetospheric plasma, have been detected far into the long magnetotail by New Horizons. To study similar variations within the plasma torus, researchers measure the ultraviolet-wavelength light it emits. While such variations have not been definitively linked to variations in Io's volcanic activity (the ultimate source for material in the plasma torus), this link has been established in the neutral sodium cloud. During an encounter with Jupiter in 1992, the Ulysses spacecraft detected a stream of dust-sized particles being ejected from the Jupiter system. The dust in these discrete streams travel away from Jupiter at speeds upwards of several hundred kilometres per second, have an average size of 10 μm, and consist primarily of sodium chloride. Dust measurements by Galileo showed that these dust streams originate from Io, but the exact mechanism for how these form, whether from Io's volcanic activity or material removed from the surface, is unknown. Jupiter's magnetic field lines, which Io crosses, couples Io to Jupiter's polar upper atmosphere through the generation of an electric current known as the Io flux tube. 
This current produces an auroral glow in Jupiter's polar regions known as the Io footprint, as well as aurorae in Io's atmosphere. Particles from this auroral interaction act to darken the Jovian polar regions at visible wavelengths. The location of Io and its auroral footprint with respect to the Earth and Jupiter has a strong influence on Jovian radio emissions from our vantage point: when Io is visible, radio signals from Jupiter increase considerably. The Juno mission, planned for the next decade, should help to shed light on these processes.

Io is slightly larger than Earth's Moon. It has a mean radius of 1,821.3 km (about five percent greater than the Moon's) and a mass of 8.9319×10²² kg (about 21 percent greater than the Moon's). It is a slight ellipsoid in shape, with its longest axis directed toward Jupiter. Among the Galilean satellites, in both mass and volume, Io ranks behind Ganymede and Callisto but ahead of Europa. Composed primarily of silicate rock and iron, Io is closer in bulk composition to the terrestrial planets than to other satellites in the outer solar system, which are mostly composed of a mix of water ice and silicates. Io has a density of 3.5275 g/cm3, the highest of any moon in the Solar System; significantly higher than that of the other Galilean satellites and higher than that of Earth's Moon. Models based on the Voyager and Galileo measurements of the moon's mass, radius and quadrupole gravitational coefficients (numerical values related to how mass is distributed within an object) suggest that its interior is differentiated between an outer, silicate-rich crust and mantle and an inner, iron- or iron sulfide–rich core. The metallic core makes up approximately 20% of Io's mass. Depending on the amount of sulfur in the core, the core has a radius between 350 and 650 km (220 to 400 mi) if it is composed almost entirely of iron, or between 550 and 900 km (310 to 560 mi) for a core consisting of a mix of iron and sulfur. Galileo's magnetometer failed to detect an internal magnetic field at Io, suggesting that the core is not convecting. Modeling of Io's interior composition suggests that the mantle is composed of at least 75% of the magnesium-rich mineral forsterite, and has a bulk composition similar to that of L-chondrite and LL-chondrite meteorites, with higher iron content (compared to silicon) than the Moon or Earth, but lower than Mars. To support the heat flow observed on Io, 10–20% of Io's mantle may be molten, though regions where high-temperature volcanism has been observed may have higher melt fractions. The lithosphere of Io, composed of basalt and sulfur deposited by Io's extensive volcanism, is at least 12 km (7 mi) thick, but is likely to be less than 40 km (25 mi) thick.

Tidal heating

Unlike the Earth and the Moon, Io derives its main source of internal heat from tidal dissipation rather than radioactive isotope decay, the result of Io's orbital resonance with Europa and Ganymede. Such heating is dependent on Io's distance from Jupiter, its orbital eccentricity, the composition of its interior, and its physical state. Its Laplace resonance with Europa and Ganymede maintains Io's eccentricity and prevents tidal dissipation within Io from circularizing its orbit. The resonant orbit also helps to maintain Io's distance from Jupiter; otherwise tides raised on Jupiter would cause Io to slowly spiral outward from its parent planet.
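Before looking at the size of the tides themselves, the bulk and orbital figures quoted above (a mass of 8.9319×10²² kg, a mean radius of 1,821.3 km, an orbital radius of 421,700 km and a 42.5-hour period) can be checked for mutual consistency with a few lines of arithmetic. The gravitational constant and Jupiter's mass used below are standard reference values rather than figures from this article, and Io is treated as a uniform sphere.

```python
# Consistency check on Io's bulk density and orbital period using figures
# quoted in the article.  G and Jupiter's mass are standard reference values
# (assumptions), and Io is approximated as a perfect sphere.
import math

G     = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2 (reference value)
M_jup = 1.898e27      # mass of Jupiter, kg (reference value, not from the text)
M_io  = 8.9319e22     # mass of Io, kg
R_io  = 1_821.3e3     # mean radius of Io, m
a     = 421_700e3     # orbital radius, m

density = M_io / (4 / 3 * math.pi * R_io**3)             # bulk density, kg/m^3
period  = 2 * math.pi * math.sqrt(a**3 / (G * M_jup))     # Kepler's third law, s

print(f"mean density  : {density / 1000:.2f} g/cm^3")
print(f"orbital period: {period / 3600:.1f} hours")
```

Both results agree with the quoted density of 3.5275 g/cm3 and the 42.5-hour orbital period to well within a percent.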
The vertical differences in Io's tidal bulge, between the times Io is at periapsis and apoapsis in its orbit, could be as much as 100 m (330 ft). The friction or tidal dissipation produced in Io's interior due to this varying tidal pull, which, without the resonant orbit, would have gone into circularizing Io's orbit instead, creates significant tidal heating within Io's interior, melting a significant amount of the moon's mantle and core. The amount of energy produced is up to 200 times greater than that produced solely from radioactive decay. This heat is released in the form of volcanic activity, generating its observed high heat flow (global total: 0.6 to 1.6×10¹⁴ W). Models of its orbit suggest that the amount of tidal heating within Io changes with time, and that the current heat flow is not representative of the long-term average.

Based on their experience with the ancient surfaces of the Moon, Mars, and Mercury, scientists expected to see numerous impact craters in Voyager 1's first images of Io. The density of impact craters across Io's surface would have given clues to the moon's age. However, they were surprised to discover that the surface was almost completely lacking in impact craters, but was instead covered in smooth plains dotted with tall mountains, pits of various shapes and sizes, and volcanic lava flows. Compared to most worlds observed to that point, Io's surface was covered in a variety of colorful materials (leading Io to be compared to a rotten orange or to pizza) from various sulfurous compounds. The lack of impact craters indicated that Io's surface is geologically young, like the terrestrial surface; volcanic materials continuously bury craters as they are produced. This result was spectacularly confirmed as at least nine active volcanoes were observed by Voyager 1.

Surface composition

Io's colorful appearance is the result of various materials produced by its extensive volcanism. These materials include silicates (such as orthopyroxene), sulfur, and sulfur dioxide. Sulfur dioxide frost is ubiquitous across the surface of Io, forming large regions covered in white or grey materials. Sulfur is also seen in many places across the satellite, forming yellow to yellow-green regions. Sulfur deposited in the mid-latitude and polar regions is often radiation damaged, breaking up normally stable 8-chain sulfur. This radiation damage produces Io's red-brown polar regions. Explosive volcanism, often taking the form of umbrella-shaped plumes, paints the surface with sulfurous and silicate materials. Plume deposits on Io are often colored red or white depending on the amount of sulfur and sulfur dioxide in the plume. Generally, plumes formed at volcanic vents from degassing lava contain a greater amount of S2, producing a red "fan" deposit, or in extreme cases, large (often reaching beyond 450 km (280 mi) from the central vent) red rings. A prominent example of a red-ring plume deposit is located at Pele. These red deposits consist primarily of sulfur (generally 3- and 4-chain molecular sulfur), sulfur dioxide, and perhaps Cl2SO2. Plumes formed at the margins of silicate lava flows (through the interaction of lava and pre-existing deposits of sulfur and sulfur dioxide) produce white or gray deposits. Compositional mapping and Io's high density suggest that Io contains little to no water, though small pockets of water ice or hydrated minerals have been tentatively identified, most notably on the northwest flank of the mountain Gish Bar Mons.
This lack of water is likely due to Jupiter being hot enough early in the evolution of the solar system to drive off volatile materials like water in the vicinity of Io, but not hot enough to do so farther out. - Main article: Volcanism on Io The tidal heating produced by Io's forced orbital eccentricity has led the moon to become one of the most volcanically active worlds in the solar system, with hundreds of volcanic centres and extensive lava flows. During a major eruption, lava flows tens or even hundreds of kilometres long can be produced, consisting mostly of basalt silicate lavas with either mafic or ultramafic (magnesium-rich) compositions. As a by-product of this activity, sulfur, sulfur dioxide gas and silicate pyroclastic material (like ash) are blown up to 500 km (310 mi) into space, producing large, umbrella-shaped plumes, painting the surrounding terrain in red, black, and white, and providing material for Io's patchy atmosphere and Jupiter's extensive magnetosphere. Io's surface is dotted with volcanic depressions known as paterae. Paterae generally have flat floors bounded by steep walls. These features resemble terrestrial calderas, but it is unknown if they are produced through collapse over an emptied lava chamber like their terrestrial cousins. One hypothesis suggests that these features are produced through the exhumation of volcanic sills, and the overlying material is either blasted out or integrated into the sill. Unlike similar features on Earth and Mars, these depressions generally do not lie at the peak of shield volcanoes and are normally larger, with an average diameter of 41 km (25 mi), the largest being Loki Patera at 202 km (126 mi). Whatever the formation mechanism, the morphology and distribution of many paterae suggest that these features are structurally controlled, with at least half bounded by faults or mountains. These features are often the site of volcanic eruptions, either from lava flows spreading across the floors of the paterae, as at an eruption at Gish Bar Patera in 2001, or in the form of a lava lake. Lava lakes on Io either have a continuously overturning lava crust, such as at Pele, or an episodically overturning crust, such as at Loki. Lava flows represent another major volcanic terrain on Io. Magma erupts onto the surface from vents on the floor of paterae or on the plains from fissures, producing inflated, compound lava flows similar to those seen at Kilauea in Hawaii. Images from the Galileo spacecraft revealed that many of Io's major lava flows, like those at Prometheus and Amirani, are produced by the build-up of small breakouts of lava flows on top of older flows. Larger outbreaks of lava have also been observed on Io. For example, the leading edge of the Prometheus flow moved 75 to 95 km (47 to 59 mi) between Voyager in 1979 and the first Galileo observations in 1996. A major eruption in 1997 produced more than 3,500 km2 (1,350 sq mi) of fresh lava and flooded the floor of the adjacent Pillan Patera. Analysis of the Voyager images led scientists to believe that these flows were composed mostly of various compounds of molten sulfur. However, subsequent Earth-based infrared studies and measurements from the Galileo spacecraft indicate that these flows are composed of basaltic lava with mafic to ultramafic compositions. This hypothesis is based on temperature measurements of Io's "hotspots", or thermal-emission locations, which suggest temperatures of at least 1300 K and some as high as 1600 K. 
Initial estimates suggesting eruption temperatures approaching 2000 K have since proven to be overestimates since the wrong thermal models were used to model the temperatures. The discovery of plumes at the volcanoes Pele and Loki were the first sign that Io is geologically active. Generally, these plumes are formed when volatiles like sulfur and sulfur dioxide are ejected skyward from Io's volcanoes at speeds reaching 1 km/s (0.6 mps), creating umbrella-shaped clouds of gas and dust. Additional material that might be found in these volcanic plumes include sodium, potassium, and chlorine. These plumes appear to be formed in one of two ways. Io's largest plumes are created when sulfur and sulfur dioxide gas dissolve from erupting magma at volcanic vents or lava lakes, often dragging silicate pyroclastic material with them. These plumes form red (from the short-chain sulfur) and black (from the silicate pyroclastics) deposits on the surface. Plumes formed in this manner are among the largest observed at Io, forming red rings more than 1,000 km (620 mi) in diameter. Examples of this plume type include Pele, Tvashtar, and Dazhbog. Another type of plume is produced when encroaching lava flows vaporize underlying sulfur dioxide frost, sending the sulfur skyward. This type of plume often forms bright circular deposits consisting of sulfur dioxide. These plumes are often less than 100 km (62 mi) tall, and are among the most long-lived plumes on Io. Examples include Prometheus, Amirani, and Masubi. Io has 100 to 150 mountains. These structures average 6 km (4 mi) in height and reach a maximum of 17.5 ± 1.5 km (10.9 ± 1 mi) at South Boösaule Montes. Mountains often appear as large (the average mountain is 157 km (98 mi) long), isolated structures with no apparent global tectonic patterns outlined, as is the case on Earth. To support the tremendous topography observed at these mountains requires compositions consisting mostly of silicate rock, as opposed to sulfur. Despite the extensive volcanism that gives Io its distinctive appearance, nearly all its mountains are tectonic structures, and are not produced by volcanoes. Instead, most Ionian mountains form as the result of compressive stresses on the base of the lithosphere, which uplift and often tilt chunks of Io's crust through thrust faulting. The compressive stresses leading to mountain formation are the result of subsidence from the continuous burial of volcanic materials. The global distribution of mountains appears to be opposite that of volcanic structures; mountains dominate areas with fewer volcanoes and vice versa. This suggests large-scale regions in Io's lithosphere where compression (supportive of mountain formation) and extension (supportive of patera formation) dominate. Locally, however, mountains and paterae often abut one another, suggesting that magma often exploits faults formed during mountain formation to reach the surface. Mountains on Io (generally, structures rising above the surrounding plains) have a variety of morphologies. Plateaus are most common. These structures resemble large, flat-topped mesas with rugged surfaces. Other mountains appear to be tilted crustal blocks, with a shallow slope from the formerly flat surface and a steep slope consisting of formerly sub-surface materials uplifted by compressive stresses. Both types of mountains often have steep scarps along one or more margins. Only a handful of mountains on Io appear to have a volcanic origin. 
These mountains resemble small shield volcanoes, with steep slopes (6–7°) near a small, central caldera and shallow slopes along their margins. These volcanic mountains are often smaller than the average mountain on Io, averaging only 1 to 2 km (0.6 to 1.2 mi) in height and 40 to 60 km (25 to 37 mi) wide. Other shield volcanoes with much shallower slopes are inferred from the morphology of several of Io's volcanoes, where thin flows radiate out from a central patera, such as at Ra Patera. Nearly all mountains appear to be in some stage of degradation. Large landslide deposits are common at the base of Ionian mountains, suggesting that mass wasting is the primary form of degradation. Scalloped margins are common among Io's mesas and plateaus, the result of sulfur dioxide sapping from Io's crust, producing zones of weakness along mountain margins. Io has an extremely thin atmosphere consisting mainly of sulfur dioxide (SO2) with a pressure of a billionth of an atmosphere. The thin Ionian atmosphere means any future landing probes sent to investigate Io will not need to be encased in an aeroshell-style heatshield, but instead will require retrorockets for a soft landing. The thin atmosphere also necessitates a rugged lander capable of enduring the strong Jovian radiation, which a thicker atmosphere would attenuate. The same radiation (in the form of a plasma) strips the atmosphere so that it must be constantly replenished. The most dramatic source of SO2 is volcanism, but the atmosphere is largely sustained by sunlight-driven sublimation of SO2 frozen on the surface. The atmosphere is largely confined to the equator, where the surface is warmest and most active volcanic plumes reside. Other variations also exist, with the highest densities near volcanic vents (particularly at sites of volcanic plumes) and on Io's anti-Jovian hemisphere (the side that faces away from Jupiter, where SO2 frost is most abundant). High-resolution images of Io acquired while the satellite is experiencing an eclipse reveal an aurora-like glow. As on Earth, this is due to radiation hitting the atmosphere. Aurorae usually occur near the magnetic poles of planets, but Io's are brightest near its equator. Io lacks a magnetic field of its own; therefore, electrons traveling along Jupiter's magnetic field near Io directly impact the satellite's atmosphere. More electrons collide with the atmosphere, producing the brightest aurora, where the field lines are tangent to the satellite (i.e., near the equator), since the column of gas they pass through is longer there. Aurorae associated with these tangent points on Io are observed to rock with the changing orientation of Jupiter's tilted magnetic dipole. See also Edit - ↑ In US dictionary transcription, us dict: ī′·ō. - ↑ 2.0 2.1 Rosaly MC Lopes (2006). "Io: The Volcanic Moon". in Lucy-Ann McFadden, Paul R. Weissman, Torrence V. Johnson. Encyclopedia of the Solar System, Academic Press. pp. 419–431. ISBN 978-0120885893. - ↑ 3.0 3.1 Lopes, R. M. C.; et al. (2004). "Lava lakes on Io: Observations of Io’s volcanic activity from Galileo NIMS during the 2001 fly-bys". Icarus 169: 140–174. doi:10.1016/j.icarus.2003.11.013. - ↑ 4.0 4.1 4.2 4.3 Schenk, P.; et al. (2001). "The Mountains of Io: Global and Geological Perspectives from Voyager and Galileo". Journal of Geophysical Research 106 (E12): 33201–33222. doi:10.1029/2000JE001408. - ↑ http://zimmer.csufresno.edu/~fringwal/w08a.jup.txt - ↑ Marius, S. (1614). 
Properties change with pressure
We already know that material properties change at high pressure. As pressure increases, the distance between the atoms decreases, and the outer electrons, the highly mobile valence electrons, interact with each other. It is also the valence electrons that determine a material's properties. For example, under high pressure a shiny, electrically conductive metal such as sodium becomes a transparent insulator, and a gas such as oxygen solidifies and conducts electricity. The oxygen can even become superconductive. But while the valence electrons are highly mobile, the inner electrons continue to move steadily around their atomic nuclei.
Twice as high a pressure
The highest pressure achieved thus far is 4 million atmospheres, or 400 GPa, which is roughly the pressure at the Earth's centre. But thanks to a newly developed method, the researchers have been able to achieve a pressure that is twice as high as at the Earth's centre and 7.7 million times higher than at the Earth's surface. With great precision they have then been able to measure both the temperature and the relative positions of atoms in a small crystalline piece of osmium. Osmium is the metal with the highest density and is almost as incompressible as diamond. On compressing osmium to this high pressure, the researchers found an unexpected anomaly in the relationship between the interatomic distances. "The high pressure didn't result in any significant change to the valence electrons, which surprised us. It made us rethink things, and go back to the theories", explains Prof. Abrikosov. Advanced supercomputer calculations at the National Supercomputer Centre (NSC) in Linköping later revealed how the innermost electrons start to interact with each other as a result of the extreme pressure. "This is a perfect example of collaboration between experimental and theoretical materials research", says LiU researcher Dr Marcus Ekholm, co-author of the article.
Long-standing collaboration
This breakthrough is the result of a long-standing collaboration between the research team at LiU and researchers in Germany, the United States, the Netherlands, France and Russia. The researchers at Bayreuth University in Germany developed the method that makes it possible to apply twice the pressure that was previously possible, while still being able to measure and maintain control. Such high pressures could exist at the centres of planets larger than ours. "Interaction between inner electrons has not previously been observed, and the phenomenon means that we can start searching for brand new states of matter", says Prof. Abrikosov. The results have been published in the highly ranked journal Nature. "We're really delighted, and it's exciting as it opens up a whole box of new questions for future research", says Prof. Abrikosov.
The Most Incompressible Metal Osmium at Static Pressures above 750 GPa, L. Dubrovinsky, N. Dubrovinskaia, E. Bykova, M. Bykov, V. Prakapenka, C. Prescher, K. Glazyrin, H.-P. Liermann, M. Hanfland, M. Ekholm, Q. Feng, L. V. Pourovskii, M. I. Katsnelson, J. M. Wills, and I. A. Abrikosov. Advance online publication on Nature's website, 24 August 2015. DOI: 10.1038/nature14681
Diamond anvil cell, see Wikipedia: https://en.wikipedia.org/wiki/Diamond_anvil_cell This method has been in use since the late 1950s. The researchers in Bayreuth, with the help of nanotechnology, have developed a small synthetic diamond that is positioned halfway between two ordinary diamonds, on either side of the osmium crystal.
The small diamonds are just a few thousandths of a centimetre in diameter. Because pressure is force divided by area, concentrating the same force on this much smaller contact area produces a correspondingly higher pressure.
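As a rough, illustrative check on the figures quoted above (the exact forces and anvil dimensions are not given here, so what follows is simply a unit conversion and a general relation, not data from the study): converting the quoted pressures from pascals to atmospheres, with 1 atm ≈ 101,325 Pa,

\[
400\ \mathrm{GPa} \;=\; \frac{4.0\times10^{11}\ \mathrm{Pa}}{1.013\times10^{5}\ \mathrm{Pa/atm}} \;\approx\; 3.9\times10^{6}\ \mathrm{atm},
\qquad
780\ \mathrm{GPa} \;\approx\; 7.7\times10^{6}\ \mathrm{atm},
\]

so the "7.7 million atmospheres" quoted above corresponds to roughly 780 GPa, consistent with the paper's title figure of "above 750 GPa". The gain from the tiny secondary anvils follows from the definition of pressure:

\[
P \;=\; \frac{F}{A}, \qquad A \;\rightarrow\; \frac{A}{k} \;\Longrightarrow\; P \;\rightarrow\; kP,
\]

that is, for a fixed force, shrinking the contact area by some factor raises the pressure by the same factor.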
This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes. . . . Take a rectangle of paper and fold it in half, and half again, to make four smaller rectangles. How many different ways can you fold it up? A dog is looking for a good place to bury his bone. Can you work out where he started and ended in each case? What possible routes could he have taken? In a square in which the houses are evenly spaced, numbers 3 and 10 are opposite each other. What is the smallest and what is the largest possible number of houses in the square? Design an arrangement of display boards in the school hall which fits the requirements of different people. 10 space travellers are waiting to board their spaceships. There are two rows of seats in the waiting room. Using the rules, where are they all sitting? Can you find all the possible ways? How will you go about finding all the jigsaw pieces that have one peg and one hole? This 100 square jigsaw is written in code. It starts with 1 and ends with 100. Can you build it up? Here you see the front and back views of a dodecahedron. Each vertex has been numbered so that the numbers around each pentagonal face add up to 65. Can you find all the missing numbers? Can you shunt the trucks so that the Cattle truck and the Sheep truck change places and the Engine is back on the main line? How many DIFFERENT quadrilaterals can be made by joining the dots on the 8-point circle? In how many ways can you fit two of these yellow triangles together? Can you predict the number of ways two blue triangles can be fitted together? A magician took a suit of thirteen cards and held them in his hand face down. Every card he revealed had the same value as the one he had just finished spelling. How did this work? What is the smallest cuboid that you can put in this box so that you cannot fit another that's the same into it? A tetromino is made up of four squares joined edge to edge. Can this tetromino, together with 15 copies of itself, be used to cover an eight by eight chessboard? How can you arrange the 5 cubes so that you need the smallest number of Brush Loads of paint to cover them? Try with other numbers of cubes as well. What is the best way to shunt these carriages so that each train can continue its journey? You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows, columns and diagonals have an even number of red counters? Hover your mouse over the counters to see which ones will be removed. Click to remove them. The winner is the last one to remove a counter. How can you make sure you win? Swap the stars with the moons, using only knights' moves (as on a chess board). What is the smallest number of moves possible? Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be? Can you work out how many cubes were used to make this open box? What size of open box could you make if you had 112 cubes? Cut four triangles from a square as shown in the picture. How many different shapes can you make by fitting the four triangles back together? How many different ways can you find of fitting five hexagons together? How will you know you have found all the ways? Building up a simple Celtic knot. Try the interactivity or download the cards or have a go on squared paper.
What is the greatest number of counters you can place on the grid below without four of them lying at the corners of a square? How many different triangles can you make on a circular pegboard that has nine pegs? This article for teachers describes how modelling number properties involving multiplication using an array of objects not only allows children to represent their thinking with concrete materials,. . . . Can you make a 3x3 cube with these shapes made from small cubes? How many different cuboids can you make when you use four CDs or DVDs? How about using five, then six? How can you arrange these 10 matches in four piles so that when you move one match from three of the piles into the fourth, you end up with the same arrangement? Can you fit the tangram pieces into the outline of this sports car? Can you fit the tangram pieces into the outline of this goat and giraffe? Make a cube out of straws and have a go at this practical challenge. Looking at the picture of this Jomista Mat, can you describe what you see? Why not try and make one yourself? Can you fit the tangram pieces into the outlines of the chairs? Is it possible to rearrange the numbers 1,2......12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours? Can you fit the tangram pieces into the outlines of the watering can and man in a boat? Exploring and predicting folding, cutting and punching holes and making spirals. Use the lines on this figure to show how the square can be divided into 2 halves, 3 thirds, 6 sixths and 9 ninths. Can you fit the tangram pieces into the outlines of the workmen? Can you find ways of joining cubes together so that 28 faces are visible? Can you fit the tangram pieces into the outline of Mai Ling? Can you fit the tangram pieces into the outlines of the candle and sundial? Can you fit the tangram pieces into the outlines of Mai Ling and Chi Wing? Can you fit the tangram pieces into the outline of Little Ming and Little Fung dancing? Here's a simple way to make a Tangram without any measuring or ruling lines. Can you cut up a square in the way shown and make the pieces into a triangle? Investigate the number of paths you can take from one vertex to another in these 3D shapes. Is it possible to take an odd number and an even number of paths to the same vertex?
Egyptomania 2: Hieroglyphics
Part 2 of a four-program series. See details under Egyptomania 1: Introduction to Daily Life. Learn how to decipher some of the sacred writing of Egypt in this fascinating look at an over 4,000-year-old writing system. Teaching extensions to the lesson provide instruction for writing like a scribe and creating a personalized cartouche.
- Open discussion drawing on students' previous knowledge of Egyptian culture.
- Discuss the importance of written communication.
- Compare and contrast hieroglyphics with English writing.
- Introduce students to hieroglyphics through related artifacts such as mummy cases, sculpture, and architectural fragments.
- Hieroglyphic writing activity.
- To introduce students to the symbolic nature of written languages.
- To compare similarities and differences between hieroglyphs and the English alphabet.
- To acquaint students with the role of the scribe and his importance in ancient Egypt.
- To investigate the materials, techniques and equipment used to produce hieroglyphic inscriptions.
Introduction to Economics
Definition of Economics: Economics explores the behaviour of financial markets, including interest rates and stock prices. It examines why some people and countries have high incomes while others are poor, and suggests ways in which the incomes of the poor can be raised without harming the economy. It studies business cycles – the ups and downs of unemployment and inflation – along with policies. It also studies international trade and finance and the impacts of globalization, looks at the growth of developing countries, and proposes ways to encourage the efficient use of resources. Government policies may be used to promote rapid economic growth, the efficient use of resources, full employment, price stability and a fair distribution of income.
Definition of Economics: Economics is the study of how societies use scarce resources to produce valuable commodities and distribute them among different people. It is concerned with the production, consumption, distribution and investment of goods and services.
Adam Smith's Contribution: Adam Smith, who is generally regarded as the father of economics, defined economics as "a science which enquires into the nature and causes of the wealth of nations". He emphasized the production and growth of wealth as the subject matter of economics. Smith considered how individual prices are set, studied the determination of the prices of land, labour and capital, and inquired into the strengths and weaknesses of the market mechanism. He identified the remarkable efficiency properties of markets and saw that economic benefit comes from the self-interested actions of individuals.
Criticism of the wealth-oriented definition:
- It treated economics as a dismal or selfish science.
- It defined wealth in a very narrow and restricted sense, considering only material and tangible goods.
- It gave emphasis only to wealth and reduced man to a secondary place in the study of economics.
A. Marshall's Welfare Concept: According to A. Marshall, "Economics is a study of mankind in the ordinary business of life; it examines that part of individual and social action which is most closely connected with the attainment and with the use of material requisites of well-being. Thus, it is on one side a study of wealth; and on the other, and more important side, a part of the study of man."
Characteristics of the welfare definition:
- It is primarily the study of mankind.
- It takes into account the ordinary business of life – it is not concerned with the social, religious and political aspects of man's life.
- It emphasizes material welfare as the primary concern of economics, i.e., that part of human welfare which is related to wealth.
- It limits the scope of economics to activities amenable to measurement in terms of money.
Criticisms of the welfare-oriented definition: The definition has been criticized for treating economics as a social science rather than a human science; it restricts the scope of economics to the study of persons living in organized communities only. It has also been criticized for the distinction it draws between economic and non-economic activity, and because welfare itself has a wide meaning that is not made clear in the definition.
Macroeconomics of Keynes: Macroeconomics is concerned with the overall performance of the economy.
Keynes, in his revolutionary work The General Theory of Employment, Interest and Money, developed an analysis of what causes business cycles, with their alternating spells of high unemployment and high inflation. Macroeconomics examines a wide variety of areas, such as how total investment and consumption are determined and how central banks manage money and interest rates.
Approaches in Economics: the scientific approach; econometrics.
Problems with Economic Reasoning: the post hoc fallacy; the failure to hold other things constant; the fallacy of composition.
Aluminum is all around us, in almost every rock, plant, and animal. It is the third most plentiful element in the earth's crust, after oxygen and silicon, and its compounds make up about 15 percent of the weight of the earth's crust. But nowhere in nature is aluminum found in a pure form; it is always combined with other elements. Some parts of the earth's crust contain large amounts of aluminum compounds, and these regions are mined for their aluminum ore. The most common aluminum ore is called bauxite. The aluminum in bauxite is separated from the other elements by passing electrical currents through it, a process called electrolysis. This produces pure aluminum for aluminum foil and other materials. Aluminum is used for foil because it is light, weighing less than half as much as iron or copper, and can be pressed into very thin sheets. Aluminum is a good conductor of electricity, and foil reflects heat and blocks light, which makes it well suited as a food wrap. The small continent of Australia is the world's largest source of bauxite, producing more than twice as much bauxite as any other nation!
While nobody likes to cough, it can actually be good for you. In order to better understand this, it is important to understand what mucus is, how it differs from phlegm and the role that both play in our bodies, especially in relation to coughs. Mucus is the slippery liquid made by our mucous membranes, or mucosa. These membranes line the passageways in our bodies that come into contact with the outside environment; these include the nose, mouth, airways, digestive tract, the reproductive tract, the white part of the eye and the inside of the eyelids. Mucus is a useful material with important functions in the body.
- It acts as a thin, protective blanket preventing the tissue underneath from drying out. Without mucus, the mucosa would be exposed to elements from the outside world, causing it to dry out and crack, so mucus serves an important role in keeping these tissues healthy.
- Mucus is also able to trap unwanted substances like bacteria and dust before they get into our bodies.
- Mucus contains elements of the immune system that kill any invaders it traps.
How mucus moves along the mucous escalator
The respiratory tract is a mucus-making machine, producing over a litre of mucus a day. This ensures that the protective mucus blanket is constantly supplied with newly made mucus. Many cells lining the airways have long, tail-like hairs called cilia, which beat 10 to 12 times per second. The mucus blanket rests on top of the cilia, which propel it forward like an escalator. Once mucus reaches the throat, it is swallowed, usually unnoticed, and recycled in the stomach. The normal amount of mucus produced daily is very effectively handled and cleared by the mucous escalator, which prevents it from accumulating.
Phlegm: Mucus Accumulation
A bad cold or an allergy can throw the body's mucus production into overdrive. This is the body's way of flushing away infection, irritants or allergens. However, the mucous escalator may not be able to keep up with the increased volume of mucus, or it may become inefficient because of the stickiness of the mucus. As a result, large volumes of thick, sticky mucus accumulate in the airways. Mucus from the lungs is sometimes referred to as phlegm and is produced by the lower airways.
A Chesty Cough
When the mucous escalator can't keep up, the body deploys other strategies, such as coughing. A cough that produces mucus is known as a chesty or wet cough. Unlike a dry cough, a wet cough should be encouraged because it prevents mucus from pooling in the lungs, which can impair breathing and the lungs' ability to fight infection. Mucus plays an essential role in the maintenance of a healthy body and respiratory tract. Infections, irritants and allergies can stimulate mucus overproduction, causing large volumes of thick, sticky mucus to accumulate in the respiratory tract. A wet cough helps to remove mucus and should be encouraged rather than suppressed, and a mucolytic can break the chemical bonds that hold mucus together, making it less thick, less sticky and easier to cough up.
The dune tiger beetle (Cicindela maritima) was once thought to be a subspecies of the very similar species Cicindela hybrida, but is now recognised as a species in its own right (2). Like most tiger beetles, the dune tiger beetle is predatory, and has large jaws. It is generally reddish-brown in colour with blotchy yellowish markings (4). Adults breed during spring and summer, and the larvae occur later in the year inside burrows in the sand. Either pupae or adults hibernate through the winter, and the adults emerge the following spring in order to breed, completing the annual life cycle (3). Adults feed on insects; they are fast runners and fly well, chasing their prey (3). This coastal species is currently found on both sides of the Bristol Channel, in northwest Wales, Norfolk and Kent (3). Historical records are from Lincolnshire, Cornwall and Hampshire (3). It is widespread in Europe, where it is not tied to the coastline (3). Coastal development, disturbance of beach habitats by recreational use, and the erosion of sand dunes are factors that threaten the survival of this tiger beetle (3). Holiday and urban development is a particular threat (5). Many sites supporting populations of this scarce tiger beetle are designated Sites of Special Scientific Interest (SSSIs) or National Nature Reserves (NNRs); the species therefore benefits from a degree of protection in such areas (3). This beetle has been identified as a priority species under the UK Biodiversity Action Plan (UK BAP). The Species Action Plan produced as a result of this prioritisation aims to maintain the current range of the dune tiger beetle (3). Furthermore, English Nature has included it in its Species Recovery Programme. The UK Biodiversity Action Plan for this species is available at UK BAP.
Glossary
Hibernation: A winter survival strategy characteristic of some mammals in which an animal's metabolic rate slows down and a state of deep sleep is attained. Whilst hibernating, animals survive on stored reserves of fat that they have accumulated in summer. In insects, the correct term for hibernation is 'diapause', a temporary pause in development and growth. Any stage of the lifecycle (eggs, larvae, pupae or adults) may enter diapause, which is typically associated with winter.
Larva: Stage in an animal's lifecycle after it hatches from the egg. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
Pupa: Stage in an insect's development when huge changes occur, which reorganise the larval form into the adult form. In butterflies the pupa is also called a chrysalis.
Subspecies: A population usually restricted to a geographical area that differs from other populations of the same species, but not to the extent of being classified as a separate species.
The term anterograde amnesia refers to a selective memory deficit characterized by the inability to create new memories after an incident. People affected by this type of amnesia are therefore usually unable to remember recent events, but their long-term memories from before the incident are left intact. Interestingly, anterograde amnesia does not affect habit or skill memories. A person affected by anterograde amnesia might therefore learn a new skill that he will remember the next day, even though he won't remember the events that occurred the previous day. That is mostly because skill and habit memories (procedural memories) are stored in the cerebellum, while episodic memories are stored in the cortex. Other symptoms might include disorientation, confusion and confabulation, in which the person remembers invented memories. Research is underway, as the mechanism is not yet well understood. There are two important causes of anterograde amnesia: benzodiazepine drugs and traumatic brain injury. There are also rare cases in which anterograde amnesia is caused by illness. Herpes encephalitis, for example, if left untreated for four days, can damage the hippocampus, leading to anterograde amnesia. Sometimes patients with certain illnesses have parts of their brain removed to prevent a more serious disorder. For example, people with severe seizures might have one or both sides of the hippocampus removed, and patients with brain tumors might also have parts of the brain removed. Strokes can also lead to anterograde deficits when they involve the temporal cortex and the temporal lobe. Alcohol intoxication is another possible cause of anterograde amnesia; the phenomenon is known as a blackout. Studies show that blackouts tend to occur when people drink large amounts of alcohol very quickly, and the effect is stronger and more likely to occur when the alcohol is ingested on an empty stomach. A lack of oxygen can lead to anterograde amnesia as well; it can be caused by respiratory distress, carbon monoxide poisoning or heart attack. Anyone who experiences unexplained memory loss should call a doctor immediately. Sometimes someone else has to call the ambulance, as a person with anterograde amnesia might not be able to say where he is and might not have the presence of mind to call a doctor. Before seeing a doctor, people who experience memory loss should be well prepared. First and foremost, they should be accompanied by a friend or a member of the family. They should also write down all the symptoms as they happen, along with all the important personal details and information, as well as the medications that were recently used. In case of memory loss the doctor will make a detailed evaluation, which will include a medical history (family history of neurological disease, depression, headaches, seizures or cancer), alcohol and drug use (if any), the type of memory loss, triggering factors and its evolution. The doctor might then do a physical exam to check neurological functions such as reflexes, balance and sensory function, followed by a cognitive test in which the person's judgment, thinking and long- and short-term memory are assessed. Imaging and laboratory tests might include a CT scan, MRI and blood tests. So far there is no medication to treat anterograde amnesia.
What is an Aneurysm? An aneurysm occurs as the result of a weakening in the wall of an artery. That weakened part of the artery wall swells from the force of blood pushing through it. Aneurysms are most commonly seen in the aorta, the main artery that carries blood from the heart to the rest of the body. However, aneurysms can occur in nearly any artery in the body, usually in the area where an artery branches into other vessels. What is Carotid Artery Disease? Carotid artery disease occurs when fatty, waxy deposits called plaques clog your carotid arteries. Your carotid arteries are a pair of blood vessels that deliver blood to your brain and head. The buildup of plaques in these arteries blocks the blood supply to your brain and increases your risk of stroke. What is P.A.D.? Peripheral Arterial Disease is a disease in which plaque builds up in the arteries that carry blood to your head, organs, and limbs. When plaque builds up in the body’s arteries, the condition is called atherosclerosis. Plaque is made up of fat, cholesterol, calcium, fibrous tissue, and other substances in the blood. Over time, plaque can harden and narrow the arteries. This limits the flow of oxygen-rich blood to your organs and other parts of your body. What is Deep Vein Thrombosis? DVT occurs when a blood clot forms in the deep venous system, predominantly in the legs. Several medical conditions increase the risk of DVT, including cancer and trauma. Other risk factors include older age, surgery, oral contraceptives, pregnancy, the postnatal period, and genetic factors, such as non-O blood type. Additionally, being immobile due to bed rest or sitting on long flights can also increase the risk for DVT.
What is Meningitis?
Meningitis is defined as an inflammation of the lining of the brain and spinal cord.[1] It is caused when the protective membranes around the brain and spinal cord, known as the meninges, become infected.[1,2] There are actually several types of meningitis, but bacterial and viral meningitis are the 2 most common.[2]
Meningococcal Meningitis and Meningococcal Sepsis
Meningococcal meningitis occurs when bacteria called meningococci infect the lining of the brain and spinal cord.[3,4] When these same bacteria get into the bloodstream, they can cause another serious condition known as meningococcal sepsis.[3,5] Meningococcal disease, which includes meningococcal meningitis and meningococcal sepsis, is defined as any infection that's caused by the bacteria meningococci.[1,3] Although rare, it's very serious and potentially life-threatening.[1,3] It can potentially kill an otherwise healthy young person within 1 day after the first symptoms appear.[6] Meningococcal disease can be difficult to recognize, especially in its early stages, because meningitis symptoms are similar to those of more common viral illnesses.[1] About 1 in 5 people who survive meningococcal meningitis suffer permanent consequences, such as[7-9]:
- Amputation of limbs, fingers, or toes
- Severe scarring
- Brain damage
- Hearing loss
- Kidney damage
Who's at Risk for Meningococcal Meningitis?
Teens and young adults are at increased risk. But anyone can get meningitis, even people who are usually healthy, such as athletes or college students.[1,3]
Is Meningitis Contagious?
Yes, the bacteria that cause it can be spread through the exchange of saliva, which can occur during common activities, such as[3,10]:
- Kissing
- Sharing utensils and drinking glasses
Risk factors for meningococcal meningitis include[3,11,12]:
- Living in close quarters (i.e., dormitories)
- Smoking or being exposed to smoke
Lifestyle may also play a part. For example, staying out late and irregular sleeping habits can make teens feel run down and might also put them at greater risk for meningitis by weakening their immune system.[13]
What Can I Do?
Get your teen vaccinated. Because you can't watch your teen every minute of every day, your best option is to talk to your child's school nurse or other health care provider about the importance of vaccination. If you ever suspect that your child has meningitis, go to the emergency room right away, where he or she can be evaluated and receive prompt medical care.[1,14,15]
This is the second part of a series of three on supersymmetry, the theory many believe could go beyond the Standard Model. First I explained what the Standard Model is and showed its limitations. I now introduce supersymmetry and explain how it would fix the main flaws of the Standard Model. Finally, I will review how experimental physicists are trying to discover "superparticles" at the Large Hadron Collider (LHC) at CERN. Theorists often have to wait for decades to see their ideas confirmed by experimental findings. This was the case for François Englert, Robert Brout and Peter Higgs, whose theory, elaborated in 1964, was only confirmed in 2012 with the discovery of the Higgs boson by the LHC experiments. Today, many theorists who participated in the elaboration of what is now known as supersymmetry are waiting to see what the LHC will reveal. Supersymmetry is a theory that first appeared as a mathematical symmetry in string theory in the early 1970s. Over time, several people contributed new elements that eventually led to a theory that is now one of the most promising successors to the Standard Model. Among the pioneers, the names of two Russian theorists, D. V. Volkov and V. P. Akulov, stand out. In 1973, Julius Wess and Bruno Zumino wrote the first supersymmetric model in four dimensions, paving the way for future developments. The following year, Pierre Fayet generalized the Brout-Englert-Higgs mechanism to supersymmetry and introduced superpartners of Standard Model particles for the first time. All this work would have remained a pure mathematical exercise had people not noticed that supersymmetry could help fix some of the flaws of the Standard Model. As we saw, the Standard Model has two types of fundamental particles: the grains of matter, the fermions, with spin ½, and the force carriers, the bosons, with integer values of spin. The mere fact that bosons and fermions have different values of spin makes them behave differently: each class follows different statistical laws. For example, two identical fermions cannot exist in the same quantum state; that is, something – one of their quantum numbers – must be different. Quantum numbers refer to various properties: their position, their charge, their spin or, for quarks, their "colour" charge. Since everything else is identical, two electrons orbiting on the same atomic shell must have different directions for their spin: one must point up, the other down. This means at most two electrons can cohabit on an atomic shell, since there are only two possible orientations for their spins. Hence, atoms have several atomic shells to accommodate all their electrons. On the contrary, there are no limitations on the number of bosons allowed in the same state. This property is behind the phenomenon called superconductivity. A pair of electrons forms a boson, since adding two half spins gives a combined state with a spin of 0 or 1, depending on whether or not they are aligned. In a superconductor, all pairs of electrons can be identical, with exactly the same quantum numbers, since this is allowed for combined spin values of 0 or 1. Hence, one can interchange two pairs freely, just like two grains of sand of identical size can swap positions in quicksand, which is what makes it so unstable. Likewise, in a superconductor, all pairs of electrons can swap positions with others, leaving no friction. An electric current can then flow without encountering any resistance. Supersymmetry builds on the Standard Model and associates a "superpartner" with each fundamental particle.
Fermions get bosons as superpartners, and bosons get associated with fermions. This unifies the building blocks of matter with the force carriers. Everything becomes more harmonious and symmetric.
[Diagram: Supersymmetry builds on the Standard Model and comes with many new supersymmetric particles, represented with a tilde (~) on them. Taken from the movie "Particle Fever", reproduced with permission from Mark Levinson.]
But there are other important consequences. The number of existing fundamental particles doubles: supersymmetry gives a superpartner to each Standard Model particle. In addition, many of these partners can mix, giving combined states such as charginos and neutralinos. This fact has many implications. First major consequence: the two superpartners of the top quark, called the stops, can cancel out the large contribution from the top quark to the mass of the Higgs boson. Second implication: the lightest supersymmetric particle (in general one of the electrically neutral mixed states, called the neutralino) has just the properties one thinks dark matter should have. Not only would supersymmetry fix the flaws of the Standard Model, it would also solve the dark matter problem. Killing two huge birds with one simple stone. There is just one tiny problem: if these supersymmetric particles exist, why have we not found any yet? I will address this question in the next part of this series.
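As an aside for readers who like to see the idea in symbols: the stop/top cancellation mentioned above is usually written schematically (numerical factors omitted, with the cutoff Λ standing for the energy up to which the Standard Model is assumed to hold, y_t the top coupling to the Higgs, and the stop mass written with a tilde) as

\[
\delta m_H^2 \;\sim\; \underbrace{-\,\frac{y_t^2}{16\pi^2}\,\Lambda^2}_{\text{top-quark loop}}
\;+\; \underbrace{\frac{y_t^2}{16\pi^2}\,\Lambda^2}_{\text{stop loops}}
\;+\; \mathcal{O}\!\left(m_{\tilde t}^2\,\ln\frac{\Lambda}{m_{\tilde t}}\right),
\]

so the dangerous pieces that grow like Λ² cancel between the top quark and its two superpartners, leaving only a much milder, logarithmic dependence. This is a schematic sketch of the standard textbook argument, not a formula taken from this post.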
October was the sixth month in a row of the warmest temperatures ever recorded. That's according to the National Oceanic and Atmospheric Administration. And El Niño is not fully to blame. Greenhouse gas emissions are a big part of the problem. Researchers at Virginia Tech and Sweet Briar College are working on ways to remove more carbon dioxide from the air. Here's how Thomas O'Halloran explains the difference between weather and climate: "The weather tells you what you need to wear today and climate tells you what should be in your closet." O'Halloran is a Research Assistant Professor in the Department of Environmental Resources and Conservation at Virginia Tech. A self-described weather geek, he and his team are measuring not only short-term fluctuations, but also creating long-term models with the data. He designed and built a brand new observation tower, perched 60 feet above the tree tops near the Blue Ridge Mountains, that is gathering information 24/7 in the middle of a loblolly pine forest planted 25 years ago. "A pine plantation is a managed ecosystem. It's the kind of thing, in the context of climate, if we want to take carbon dioxide out of the atmosphere and put it into forests, it might be advisable to expand the forested area and plant more trees. So part of our research is asking the question, what kinds of vegetation, trees or forests are most beneficial to climate?" The question needs answering before policy decisions can be made on the worldwide effort to combat global warming. "So the scientific community has set this goal of 2 degrees Celsius of warming; we've said if we can limit climate warming to 2 degrees or less, we think the effects won't be catastrophic. And so that is our goal." Scientists say transitioning from burning fossil fuels to more renewable energy is one part. "But another thing that needs to be part of our portfolio is in managing the land surface. Right now the terrestrial biosphere, the global land surface, the vegetation, the plants, the forests, do us the favor of taking about a quarter of the carbon we put in the atmosphere back out. And the ocean does another quarter." So how many dollars do those quarters add up to? As political leaders discuss the possibility of carbon credit markets, it will be important to know the numbers. "If we're going to put a dollar value on carbon then we need to know how to put it in the forest, how to keep it in forests, how sensitive they are to drought. How prone are they to fire. Because if you spend money and say we're going to put 'x' amount of carbon into a forest and that forest burns up, that carbon is now back in the atmosphere, so we have to be very careful about crediting these things and insuring against those kinds of scenarios." O'Halloran is working closely with Quinn Thomas, also at Virginia Tech. Thomas was instrumental in 'saving' the state-of-the-art Land-Atmosphere Research Station when it looked like Sweet Briar College in central Virginia might close its doors last year. Scientists around the world were concerned about its fate until Virginia Tech stepped in with additional new technology and invited O'Halloran to join its faculty. Now that Sweet Briar has remained up and running, the tower is a collaboration between the two institutions. And one of the things researchers are finding is that climate itself is something of a collaboration between forests and the atmosphere, as they exchange energy and carbon dioxide.
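To put the "quarter" figures quoted above into rough numbers (these are illustrative, round values commonly cited for the global carbon budget, not measurements from the Virginia Tech tower):

\[
\underbrace{\sim 10\ \mathrm{GtC\,yr^{-1}}}_{\text{emitted}}
\;-\; \underbrace{\sim 2.5\ \mathrm{GtC\,yr^{-1}}}_{\text{land sink}\ (\approx 1/4)}
\;-\; \underbrace{\sim 2.5\ \mathrm{GtC\,yr^{-1}}}_{\text{ocean sink}\ (\approx 1/4)}
\;\approx\; 5\ \mathrm{GtC\,yr^{-1}}\ \text{left in the atmosphere},
\]

which is why roughly half of the carbon dioxide emitted each year ends up staying in the air.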
Studies from the new tower are showing that when temperatures rise, the pine forest produces more particles that become airborne, slightly lowering the air temperature. "And what's even cooler than that, is that those particles that the forests make, because they interact with clouds and radiation, they're potentially modifying its own environment. If it can affect radiation in a cloud, then that actually affects the photosynthesis of the forest. So now you have a mechanism, not just where the atmosphere affects the forest, but the forest affects the atmosphere. We say that's a 'coupled system.' Essentially the forest and the atmosphere are talking to each other and we're just starting to get an idea of how that system works." A webcam at the research tower updates every half hour with data and pictures from the site. The data are available in real time so that weather geeks and climate watchers can check in any time.
Individualized Education Program: IEP
As mandated by the U.S. Department of Education, all students are guaranteed a "free and appropriate public education." Children with learning, health, or mental disabilities must be offered special accommodations and services by the public school system. Services may include an Individualized Education Program (IEP), which outlines goals and services for the school year. Accommodations may include extra time to take tests, a note taker, support from a speech or resource teacher, adapted PE, and more.
Attachment and Reactive Attachment Disorder: RAD
Reactive Attachment Disorder is a rare but serious condition in which young children do not form healthy bonds with a parent or caregiver. The disorder normally stems from neglect or abuse, or occurs in orphaned children. When affection and nurturing are not present, caring attachments with others are not established. This can permanently change the child's growing brain. Reactive Attachment Disorder is a lifelong condition, but with treatment children can develop more stable relationships with others.
Attention Deficit Hyperactivity Disorder: ADHD
Central Auditory Processing Disorder: CAPD or APD
An Auditory Processing Disorder means that something is adversely affecting the processing of auditory information. Children do not recognize subtle differences between sounds and words. An example is, "How is a cow and a crow alike?" This may be understood as, "How is a couch and a row alike?" This happens more often when a person with APD is in a noisy environment, or when complicated information is being given.
Down Syndrome: DS (also called Trisomy 21)
Learning Disabilities: LD
Obsessive Compulsive Disorder: OCD
Phonological processing is the ability to remember, separate, blend, and manipulate speech sounds. Phonological processing difficulties include problems retrieving and using phonological codes in memory, problems storing information, and deficits in phonological awareness.
Tip #5 – The Use Of Dogs In Camps
Domestication of canines might have fallen short of what it is today if the relationship had been constrained solely by those factors. However, both humans and canines share the concept of territoriality, viewing a certain area as theirs to live in and, if necessary, defend. Wild canines living around a human camp would view it as part of their territory, and as such would rally to the camp's defense if dangers such as rival bands of humans or wild animals presented themselves. The clamor created by the wild canines would serve to alert the camp's inhabitants, who would then take over defense of the camp. Human settlements benefited greatly from the security granted by their companion canine packs, especially at night.
The greenhouse in the sky?
Venus could be the ultimate example of what can happen when an atmospheric greenhouse effect runs away. A mission to the planet four billion years ago might have shed some light on what is happening on Earth. But Esa's Venus Express probe will instead focus on understanding the planet's atmosphere, as Richard Corfield explains.
- The European Space Agency's Venus Express probe will enter the planet's orbit on 11 April with the aim of building up a picture of Venus' thick, carbon dioxide-rich, sulfuric acid-laced atmosphere
- Some believe that if life evolved on Venus before its ocean boiled away, it may have moved into the Venusian atmosphere, where microbes could exploit the intense solar UV radiation that the planet receives
- Nasa is looking into the possibility of sending a probe to Venus to search for life on the planet
One of the most famous speeches in the history of space exploration was made at Rice University Stadium, Texas, US, on 12 September 1962. In it John F Kennedy set the agenda for the next decade of space exploration - committing the US to placing a man on the Moon by 1970. At the time Kennedy gave this speech a small and very primitive space probe was already speeding towards Venus. Despite its simplicity, this first probe to our nearest planetary neighbour had been dispatched to answer a crucial, nagging question: was it the surface of the planet that was a prominent microwave emitter, or were the emissions coming from the swirling clouds that eternally veiled the face of the planet? This was important because if the answer was the former then no life could exist on Venus' surface; if it was the latter, then... possibly.
The surface of Venus
The answer that the tiny probe returned was unequivocal. The surface of Venus was hot enough to melt lead. No life could possibly exist there. At the time it was enough to make the US lose interest in Venus as a potential extraterrestrial colony, and yet today scientific interest in Venus is at an all-time high.
Venus Express launches
A small European-built spacecraft is currently drawing close to the planet and will enter orbit on 11 April. Venus Express, the European Space Agency's (Esa) first mission to Venus, set off from Baikonur Cosmodrome in Kazakhstan in November last year. It demonstrates the quiet revolution occurring in planetary exploration. The mission has been conceived and run by European scientists and launched atop a Russian rocket. It was only approved in March 2001 and yet it was launched less than five years later. Gone are the days of decadal turn-around times and billion-dollar budgets. But what is the attraction of Venus? It does not have the cachet of Mars, where exploration has been driven by the search for water and - always, in the background - the possibility that the next mission to Mars will be the one that discovers evidence of life. Despite the loss of the Beagle 2 probe, Esa's Mars Express mission was, and continues to be, a colossal success. Images continue to pour back from the Mars Express orbiter and it is now confirmed that not only did Mars once have liquid water on its surface, it almost certainly still has frozen water just below.
[Image: Launch of Venus Express from Baikonur Cosmodrome, Kazakhstan, in November 2005]
Venus has none of these romantic enticements despite being named after the goddess of love.
In fact, with a surface temperature of 470°C, a pressure of 90 atmospheres and an atmosphere composed of carbon dioxide laced with clouds of concentrated sulfuric acid, Venus has a good claim to be the most inhospitable of the inner, rocky planets. But, if some of the press claims surrounding the Venus Express mission are to be believed, then the reason we are sending this probe is intimately linked to her inhospitable features: Venus, say some, is the ultimate example of what can happen when an atmospheric greenhouse effect runs away, and going there will inform us about our own planet's greenhouse effect. Venus is the bogeyman in the sky who warns us what the consequences of our own actions could be unless we take evasive action. To examine the accuracy of this statement we need to understand the background to Venus just a little better. There was a time when Venus was considered to be a tropical Eden. Science-fiction stories of the early 20th century portrayed it as a planet wallowing in a Carboniferous time-warp where towering cycads and tree ferns sheltered dragonflies whose wingspans were measured in metres. To top this vision of a Palaeozoic paradise, it was thought that Venus' swamps would be populated by amphibian-like creatures busily making the same transition from water to land that we ourselves did about 400 million years ago during the Devonian and Carboniferous periods of Earth's history. By 1975 this myth had been dispelled. Ground-based spectroscopic observations had shown that not only were there no swamps on Venus but that the atmosphere was composed almost completely of carbon dioxide (with trace amounts of chlorine and fluorine). In addition, the swirling cloud layers were made not of water vapour but sulfuric acid. The 1962 Mariner 2 probe had shown a surface broiling at the same temperature as a self-cleaning oven under pressures equivalent to those found beneath the Earth's oceans. For decades Venus' atmosphere had thwarted any attempts to image the surface, but the development of extraterrestrial radar imaging changed that. In the 1960s ground-based radar probing from the Goldstone observatory in California's Mojave desert, US, and the Arecibo observatory in Puerto Rico showed first that Venus' rotation was retrograde (in the opposite sense to most other planets) and that the Venusian day is 243 Earth days long. By the 1970s refinements in radar imaging techniques began to show up some of Venus' surface features, including the bright (radar-reflective) highlands known as Alpha Regio, Beta Regio and Maxwell Montes. After the US' Mariner 2 fly-by, Venus became the Russian planet. While the US turned its attention to the potentially more habitable Mars, it was left to the Russians to more or less single-handedly continue the exploration of Venus, and through the late 1970s and early 1980s they barely missed a launch opportunity to explore the Venusian atmosphere and surface with their spectacularly successful Venera series of probes. In addition to Venera 9 - which sent back the first pictures from the surface of another world - Veneras 7, 8, 10, 11, 12, 13 and 14 all successfully soft-landed. It is ironic that it was the tantalising glimpses of the surface of Venus sent back by the Venera probes that reignited US interest in Venus. In 1978 the US was ready to return and dispatched the Pioneer Venus mission. The aim of the mission was two-fold.
A cluster of four probes was designed to sample the atmosphere, while a radar-equipped orbiter probed beneath the veiling clouds for a closer look at Venus' strange topography. The images, despite being relatively low resolution, confirmed that Venus was a world vastly different from our own. There were small continental areas, vast plains and huge nurseries of active volcanoes. The evidence of active volcanism, coupled with the indisputable fact that Venus' atmosphere was so conspicuously acidic, led to one inescapable conclusion: the surface of Venus was geologically young and volcanically active. It was this that opened the debate on the nature of Venus' strange atmosphere - had it always been like this or was the greenhouse in the sky new? The success of the Pioneer Venus mission raised a series of questions that required follow-up with still better eyes. Thus the Magellan mission of the late 1980s was born. The questions that Magellan was tasked with answering were comprehensive and specific. Just what, exactly, is the nature of Venusian geology? Is Venus dominated by a plate tectonic regime similar to that of the Earth, or is it something completely different? What is the age of Venus' surface? The face of the Earth is a composite hodge-podge of several billion years' worth of varying forces, with rocks ranging from 3.8 billion years old to only a few thousand years old. Does the surface of Venus have a similarly varied age structure? Does erosion shape the surface of Venus in a similar manner to the way it does on Earth? But the main question that the Magellan scientists wanted answered was: had there ever been running water (the main agent of erosion on Earth) on the surface of Venus? It was a question that belied the biggest question of all: had the Venusian greenhouse effect always existed or had it developed over geological time, and if so, could it develop on other planets, such as our own? With its high-resolution synthetic aperture radar, the results that Magellan sent back were nothing short of stunning and revealed for the first time just how alien our sister planet really is. The most surprising conclusion was one that must have gladdened the hearts of the Soviets, for they had promoted the idea since the days of their own Venera probes. Venus has no plate tectonics, that is to say, there is no cycle of crustal creation in certain regions balanced by crustal consumption in other areas. The main geological force operating on Venus is volcanism. Similarly, Magellan's radar imaging showed that the density of craters across the face of Venus was such that the surface could not be more than 500 million years old - very recent in geological terms and contemporary with the rise of complex life in the oceans of our own planet. It appears that the lack of plate tectonics allows stresses and strains to build up in Venus' crust that are relieved periodically and cataclysmically, rather than more or less continuously as on Earth. In fact the face of Venus was apparently resurfaced more or less instantaneously half a billion years ago. Perhaps though the most significant result for the study of the Venusian greenhouse effect came from combining the erosion results with previous measurements of the ratio of deuterium to hydrogen in the Venusian atmosphere. Magellan found no evidence for erosion on the surface of Venus, thus confirming the lack of surface water.
The D/H ratio results returned by the Pioneer Venus probe had earlier suggested that Venus had lost its isotopically light hydrogen (and therefore its water) early in its history and was now dominated by deuterium which, being heavier, was more easily retained by gravity. The conclusion was inescapable: Venus had once had water - perhaps even oceans and life - but now it was all gone and the agent for its removal had been Venus' terrifyingly powerful greenhouse effect. When and how Venus lost its water are two outstanding mysteries. Was it simply its closer proximity to the Sun that boiled off the ocean? And how long did this take after the formation of the planet 4.5 billion years ago? Whichever, the lack of water meant that a major CO2 sink available on earth - the oceans and therefore carbonate sediments - was denied Venus and CO2 accumulated in the atmosphere. As the seas boiled away they added water vapour (another potent greenhouse gas) to the atmosphere so plunging Venus into a positive-feedback greenhouse loop of unprecedented severity. After eons the isotopically light hydrogen in the water leached into space leaving only heavy deuterium - and a scorched Venus - behind. Today, despite being closer to the Sun than us, Venus reflects more sunlight because of its bright clouds - clouds that may be the result of those long ago events. Therefore it is no longer exposure to the sun's rays that is responsible for the present extreme heat of the planet, it is its greenhouse effect. Return to the shrouded planet All of which brings us back, a decade later, to the Venus Express mission where the emphasis is now firmly on understanding the atmosphere of this forbidding planet. Hakan Svedhem, ESA project scientist, sees the mission much more as an investigation of the uniqueness of Venus rather than to compare it to Earth. Life could exist in Venus' clouds The seven instruments Venus Express carries will give us a comprehensive look. The emphasis will be on building up a three-dimensional picture of Venus's thick, CO2-rich, sulfuric acid-laced atmosphere. Specific questions will focus on the causes of the super-fast rotation of the atmosphere, the reasons for the differentiation of the atmosphere into several cloud and haze layers at different altitudes, the processes that control the atmosphere's chemical balance and those that govern the escape of atmospheric components into space. The underlying philosophy of the Venus Express mission is to use different instruments to investigate the same phenomenon simultaneously, thus providing a series of interdependent checks. Thus atmospheric temperatures will be investigated using the PFS, VIRTIS, VeRa and SPICAV instruments; chemical composition will be investigated using VIRTIS, SPICAV and PFS; and the interaction of the atmosphere with the solar wind will be tracked by ASPERA, MAG and VeRa etc. One instrument that the Venus Express orbiter is not carrying though is one that might answer the most interesting question of all - the possibility that there might be life yet on Venus, in its clouds. Recent studies have shown that Earthly bacteria can live in a host of unlikely and inhospitable spots, including clouds. Louis Irwin of the University of Texas at El Paso, US, believes that if life evolved on Venus before its ocean boiled away then it may have moved into the Venusian atmosphere when the going got tough, where pressure and temperature conditions are no more arduous than in some areas on Earth. 
It may be that such Venusian microbes actively exploit the intense solar UV radiation that the planet receives, as David Grinspoon of the Southwest Research Institute at Boulder, Colorado, US, believes. While Venus Express is not equipped for a life-search on Venus, it will be asking why so much UV is absorbed by Venus' dense atmosphere; after all, the reason could be the presence of the biggest prize in comparative planetology - life itself. In the longer term Nasa is looking into the possibility of sending a probe to Venus to search actively for life. Irwin believes that the optimum mission design would be a balloon-equipped spacecraft that would collect samples from the Venusian atmosphere during descent and then blast off from the surface to return them to Earth. Imagine the expression on the faces of those who study Mars if Venus, after all, is the planet with evidence of life. But, will the Venus Express mission really contribute to our understanding of Earth's greenhouse effect as some press reports would have us believe? It seems unlikely. The two planets are just too different now. Perhaps if the Venus Express probe had been launched when Venus was beginning to lose her water (about four billion years ago) the results would have been revealing. This highlights an important point about studying climate change on this, or indeed any, planet. For it to be truly useful somehow we must grasp a historical perspective of what happened and when. Only then can we hope to arrive at the how and why. Understanding Venus' greenhouse effect will probably have to wait until we have made real progress in unravelling its geological history, something that is well advanced here on Earth and which is just beginning to happen on Mars. Robot geologists such as Nasa's Spirit and Opportunity rovers are now an integral part of Mars exploration. Such devices need to go to Venus too. In the meantime it is enough to go to Venus and do basic research to pave the way for the robot - and maybe human - geologists of the future. As John F Kennedy said when he launched the Apollo programme at Rice University stadium in 1962: 'We choose to go to the Moon... we choose to go to the Moon in this decade... not because it is easy but because it is hard.'
Richard Corfield is a freelance science writer
Instruments on board Venus Express
ASPERA (analyser of space plasma and energetic atoms), developed by the Institute of Space Physics in Sweden, will study the interaction of the solar wind and the Venusian atmosphere. MAG, the Venus Express magnetometer, developed by Austrian scientists, will measure the magnetic field around the planet that originates from the interaction of the solar wind with the atmosphere. The Italian PFS, planetary Fourier spectrometer, will gather data on the vertical temperature structure of the atmosphere from the surface to its boundary with space and will help determine the composition of the atmosphere and aid in the search for volcanic activity. The imaging SPICAV spectrometer (spectroscopy for investigation of characteristics of the atmosphere of Venus), developed by France and Russia, will look for traces of water vapour, molecular oxygen, and sulfur compounds. The German VeRa (Venus radio science) experiment will use the radio communications between Venus Express and Earth to conduct radio sounding of the planet's atmosphere, ionosphere, and the solar corona. Results will yield information about the density, temperature and pressure of the upper atmosphere.
The joint Italian-French visible and infrared thermal imaging spectrometer, VIRTIS, will use three observation channels to determine the composition and cloud structure of the lower atmosphere. The German VMC (Venus monitoring camera) will capture images at visible, near-infrared, and ultraviolet wavelengths to study global cloud dynamics.
Dental in phonetics means that the tip of the tongue is against the back of the upper teeth, or even further forward. The T, D, N and L of French, Spanish, and Italian are dental, but in English and German these sounds are alveolar, made further back on the alveolar ridge. The difference is slight, except that a few languages contrast them: in Dravidian and Australian Aboriginal languages, and in several Sudanese languages like Dinka and Nuer, there are both dental and alveolar versions of these sounds, and they can change the meaning of a word. For example, in Bidyara of Queensland gundu 'away' contrasts with gundhu 'go across', where D and DH represent the contrasting stops. In such languages the dental stops may be interdental, that is, with the tongue actually between the teeth. An interdental sound may be heard in English in a strong Jewish accent: it is more noticeably different from the lamino-dental (upper teeth) variant of French or Spanish. The initial consonants in English 'thin' and 'this', both written TH, are dental. The T sound may be dental by assimilation in a word like 'eighth', which is pronounced with T followed by TH. Dental affricates (the T+TH sound, but as a single fused sound) are very rare, but I have a vague recollection that some Iroquoian language, possibly Mohawk, has one.
heic1203 — Science Release Hubble finds relic of a shredded galaxy 15 February 2012 Astronomers using the NASA/ESA Hubble Space Telescope have found a cluster of young blue stars surrounding a mid-sized black hole called HLX-1. The discovery suggests that the black hole formed in the core of a now-disintegrated dwarf galaxy. The findings have important implications for understanding the evolution of supermassive black holes and galaxies. Astronomers know how massive stars collapse to form small black holes a few times the mass of the Sun. However, it is not clear how supermassive black holes, which can have masses of millions or even billions of times the Sun's, form in the cores of galaxies. One idea is that supermassive black holes may build up through the merger of small and mid-sized black holes, a view supported by a new study using Hubble. Sean Farrell of the Sydney Institute for Astronomy in Australia and the University of Leicester, UK, discovered a middleweight black hole in 2009 using the European Space Agency’s XMM-Newton X-ray space telescope. Black holes can be spotted using X-rays because of radiation coming from matter heating up as it swirls around and falls into the black hole. This phenomenon is known to astronomers as an accretion disc. Known as HLX-1 (Hyper-Luminous X-ray source 1), this black hole weighs in around 20 000 times the mass of the Sun and lies towards the edge of galaxy ESO 243-49, which is 290 million light-years from Earth. Now, Farrell’s team has studied HLX-1 in ultraviolet, visible and infrared light using Hubble, and simultaneously in X-rays using the NASA/STFC/ASI Swift satellite. “For a unique source we needed a unique telescope,” explains Mathieu Servillat, second author of the study. “Hubble provided such precision in its images that it helped us understand the origin and environment of this intermediate-mass black hole.” Because HLX-1 is around 290 million light-years away, it is too far for Hubble to measure the individual stars around the black hole. However, a great deal can be deduced from the light that comes from it. Hubble’s images of the region show an excess of red light, which cannot be explained by emissions from the accretion disc alone. This light, the team concludes, is evidence of a cluster of hot stars surrounding the black hole as the brightness and colour of the light is similar to that from star clusters in nearby galaxies. “What we can definitely say with our Hubble data,” says Farrell, “is that we require both emission from an accretion disc and emission from a stellar population to explain the colours we see.” The existence of a star cluster around the black hole in turn gives clues about where the intermediate mass black hole may have come from, and why it lies in its present location in ESO 243-49. “The fact that there’s a very young cluster of stars indicates that the intermediate-mass black hole may have originated as the central black hole in a very low-mass dwarf galaxy,” Farrell explains. “The dwarf galaxy was then swallowed by the more massive galaxy.” As the dwarf galaxy was ripped apart, the black hole with some of its surrounding material would have survived. The future of the black hole is uncertain at this stage. It depends on its trajectory, which is currently unknown. It’s possible that the black hole may spiral into the centre of ESO 243-49 and merge with the supermassive black hole there. Alternatively, the black hole could settle into a stable orbit around the galaxy. 
Either way, it’s likely to fade away in X-rays as it depletes its supply of gas. The team has more observations planned this year to track the history of the interaction between the two galaxies. The new findings are being published on 15 February in the Astrophysical Journal. The Hubble Space Telescope is a project of international cooperation between ESA and NASA. The international team of astronomers in this study consists of S. A. Farrell (Sydney Institute of Astronomy, Australia, and University of Leicester, UK), M. Servillat (Harvard-Smithsonian Center for Astrophysics, USA), J. Pforr (University of Portsmouth, UK), T. J. Maccarone (University of Southampton, UK), C. Knigge (University of Southampton, UK), O. Godet (University of Toulouse, France, and CNRS IRAP, France), C. Maraston (University of Portsmouth, UK), N. A. Webb (University of Toulouse, France, and CNRS IRAP, France), D. Barret (University of Toulouse, France, and CNRS IRAP, France), A. Gosling (University of Oxford, UK), R. Belmont (University of Toulouse, France, and CNRS IRAP, France), K. Wiersema (University of Leicester, UK). These results are reported in a paper entitled “A young stellar population around the intermediate mass black hole ESO 243-49 HLX-1”, published in the Astrophysical Journal on 15 February. Image credit: NASA, ESA and S. Farrell (University of Sydney, Australia and University of Leicester, UK)
Figure 1: Seismicity and Faults in California (USGS)
Just as earthquakes are neither evenly nor randomly distributed throughout the world (see blog entry of September 29, 2008), so California also has a few earthquake zones as well as vast areas which are essentially devoid of tremors. There are actually distinct bands and clusters of seismicity in our state which can be clearly spotted on a map of earthquakes. The most famous zone of all is, of course, the San Andreas Fault. It snakes almost all the way through the Golden State, from the Salton Sea in the Imperial Valley in the south, to Cape Mendocino in the north. At first glance it looks as if earthquake foci line up along this zone like pearls on a string. But looking a little bit more closely one finds that the San Andreas Fault is not just one clear thin line but a zone of tectonic movement, which can be up to several dozen miles wide. Sometimes the zone consists of several faults, which parallel each other. The picture gets murkier in the Los Angeles Basin. The reason is that the crust under LA is cracking along dozens of short fault segments, many of them not yet even named by seismologists. The ultimate cause for the earthquakes along the San Andreas system and in the LA basin is the sliding of the Pacific Plate against the North American Plate with a velocity of about 2.5 inches per year. Where the San Andreas Fault ends in the north, the seismicity fans out into the Pacific Ocean like an elephant's trunk to the west of Cape Mendocino. There the "Mendocino Fracture Zone" takes over the steady slide of the tectonic plates. Other clear bands of seismicity occur along the Garlock Fault in Southern California's Transverse Ranges, along its continuation through the Owens Valley, and further north along the steep eastern flank of the Sierra Nevada. The earthquakes there are not caused only by the sliding of the plates, but are also a consequence of the slow lifting of the Sierra Nevada, which has occurred during the last several million years. And last but not least, there is a third class of earthquakes in California. These temblors can be found in clusters around the Geysers along the Sonoma and Lake County border, around Mammoth Lakes east of Yosemite, and to a smaller extent around the Coso Field near Ridgecrest. The causes of these mostly small quakes are remnants of volcanic and geothermal activity, like the restless Long Valley caldera next to Mammoth, which blew up in a gigantic volcanic eruption 760,000 years ago. (hra007)
Earthquakes occur everywhere - so, at least, it seems. Temblors happen on all continents and beneath the deep oceans. They shake the world's highest mountains, the Himalayas, and the Earth's deepest valley, the Dead Sea. Even from under the ice caps of both polar regions, seismometers regularly record rumblings in the Earth's crust. But a more detailed look reveals that the distribution of earthquake foci in the world is by no means random. And neither are they evenly or regularly spaced. Instead, when plotted on a world map, earthquake locations look like narrow bands winding through the continents and oceans (see map). What are these zones and why are most earthquake foci concentrated there? Simply put, temblors happen when rock breaks under force. Inside the Earth, the most important of such rock-crushing forces is the "tectonic stress."
It is exerted on the Earth's crust by the movement of the giant, rigid plates, which float on a subterranean sea of hot and plastic rock called the asthenosphere. There are about twelve huge and another dozen smaller plates. Wherever such plates crash into or slide past each other during their respective drifts on the Earth's surface, the collision is able to break the rock, thus causing earthquakes. In principle, the effects of such plate collisions are similar to a car wreck where two automobiles hit each other, albeit on a much larger scale. The bands of earthquake foci in the map reflect these collision zones of the tectonic plates. In fact, they very clearly mark the boundaries of the plates. Look for instance at North America. The underlying plate is much bigger than the continent itself. It stretches from Iceland in the East all the way to the most far-flung Aleutian Islands in the West and reaches from Alaska to the Caribbean and beyond to the Azores, the island archipelago in the middle of the Atlantic Ocean. But earthquakes happen not only where plates collide. They also occur where two plates move away from each other in the so-called "spreading zones." One of these zones is the Mid-Atlantic Ridge where Europe moves away from North America at the rate of about one inch per year. You will find such ridges in every major ocean basin. In fact, there are many more miles of plate boundaries under the oceans than on land. As a consequence, the number of submarine earthquakes is also larger than the number of quakes on land. (hra006)
Everybody has probably done it while frolicking at the beach: Stand with both feet on the wet sand and move your body quickly up and down several times without lifting your feet. After a short while the ground gives way. You sink a few inches into the sand and you might even lose your balance. Well, you just simulated one of the most dangerous effects seismic waves can have on buildings: soil liquefaction. How can soil, which is hard enough for you to walk on, lose its strength and stiffness just because it is shaken a little bit? The secret is the water in the soil. Liquefaction occurs only in soils in which the space between individual sand particles is completely filled with water. Such soils are called "water saturated". The water exerts pressure on the particles, which in turn determines how tightly they are packed together. Before an earthquake, the water pressure is low and static. During the dynamic shaking, however, the water pressure can increase so much that the particles can move freely past each other. Once that happens, the soil loses its strength and becomes a gooey, slippery liquid. "Sand boils" are a relatively harmless consequence of such liquefaction; they look like small mud volcanoes (Figure 1). The overpressure inside the soil causes the sand to squirt out like lava from a volcanic crater. However, when soil liquefaction occurs under a building, it may sink into the soil, like your feet did during your experiment at the beach. The building might even tip over. The first time seismologists fully recognized the devastating effects of soil liquefaction was in 1964. During an earthquake under Niigata in Japan several apartment buildings sank into the ground and tipped over, because the water-saturated soil on which they were built liquefied (Figure 2). The Bay Area is by no means immune to liquefaction, because many buildings in low-lying areas are built on soils saturated with water from the Bay.
Liquefaction can occur in all areas shaded brown and yellow in the map in Figure 3. The Association of Bay Area Governments (ABAG) has published a booklet with detailed information about the hazards of liquefaction in our region. (hra005)
Figure 3: Map of liquefaction susceptibility in the San Francisco Bay Area (courtesy of USGS)
Figure 1: Wreckage of a twenty-one-story, steel-constructed building in the Pino Suárez Apartment Complex. Photo Mehmet Celebi, USGS
Fifteen-story reinforced concrete structure. Part of the building was only slightly damaged, while another part of it collapsed. Photo Mehmet Celebi, USGS
One of the worst natural disasters in the Americas occurred 23 years ago today, when at 7:19 am local time an earthquake of magnitude 8.1 struck in the subduction zone off the west coast of Mexico. The epicenter was located approximately six miles offshore near the town of Zihuatanejo in the state of Michoacan. Although there was severe damage in the coastal regions, the real disaster happened 220 miles away in Mexico City. Less than 15 minutes after the quake, thousands of people in the capital lay dead and the Mexican economy was shattered for years to come. To this day, nobody really knows how many people perished as a result of the earthquake. Official figures for the number of fatalities vary between 9,500 and 35,000. Most people died in Mexico City, where 412 multistory buildings collapsed completely and another 3,124 were seriously damaged, including 13 hospitals. Most of the destroyed structures were between 8 and 18 stories high. How can an earthquake cause so much damage over 200 miles from its focus? What happened 23 years ago in Mexico is comparable to a temblor occurring along the San Andreas Fault near San Francisco leaving Bakersfield in ruins. To answer this question, we have to go back in history almost 700 years. In 1325 the Aztecs, one of the high civilisations of Mesoamerica, founded their capital Tenochtitlan. They built it on an artificial island in a shallow lake in Mexico's central altiplano. Although the old capital was flooded again and again, the Spanish did not abandon the site in what they called Lago de Texcoco, but enlarged it instead. After Mexico's independence the settlement became the capital of the newly founded country. During the last century, the lake was completely drained to make room for the housing needs of the ever-growing population of Mexico City. A lake bed in a basin, however, is one of the worst grounds for constructing a building. While hard rock simply shakes with the same frequency and amplitude as seismic waves, the unconsolidated sediments of an ancient lake bed react differently: they can amplify the shaking and, even worse, they can lose their consistency and become a liquid. Such site amplification and liquefaction occurred when the waves of the distant earthquake shook the bed of former Lake Texcoco under Mexico City. Poorly founded multistory buildings lost their footing and collapsed. Read more about the dangers of liquefaction in the next blog entry. (hra004)
In the refrain of a famous German lullaby children are asked: "Do you know how many stars are twinkling in the sky?" The answer, of course, depends on how you look for them and on the brightness of the heavenly objects. Venus, Jupiter and Sirius can be spotted even under bright city lights. If you go out into the country, on a moonless night you can see hundreds of stars with the naked eye and thousands through binoculars.
Using the Hubble Space Telescope astronomers are able to spot millions. The situation is very similar when you ask how many earthquakes occur during a year. The answer depends on how strong the temblors are, how far away you are from their focus, and how you try to detect them. The shaking of most moderate and all strong earthquakes is so obvious that they are felt by everybody, sometimes even hundreds of miles from their focus. You may not notice smaller rattlings, say an earthquake of magnitude 4, when you are busily running around. Sitting down quietly at home in Orinda, the blogger has felt even microearthquakes of magnitude 2 occurring almost five miles away on the Hayward Fault under Kensington. However, each year in the Bay Area alone seismologists detect hundreds of earthquakes which are never felt. They use seismometers which are so sensitive that they pick up the small rumblings of a car driving by hundreds of yards away. Using networks of such seismometers, scientists have gained a pretty complete picture of how many large earthquakes occur worldwide per year (Figure 1). On average, for every really big shaker of magnitude 8 or larger, there are 17 quakes with magnitudes between 7.0 and 7.9 and 134 temblors with magnitudes in the "sixes". Simply said: with every step down on the magnitude scale, the number of earthquakes worldwide increases by a factor of ten. Looking at California, the earthquake statistics get somewhat murkier. During the last century we had not a single temblor of magnitude 8 or greater. In the same interval 16 earthquakes occurred with magnitudes in the "sevens" and 39 quakes had magnitudes between 6.0 and 6.9. For magnitude 5's, the number is in the low hundreds and it reaches just about one thousand for temblors with magnitudes between 4.0 and 4.9. The uncertainty about the number of earthquakes which occur rises significantly as we look for smaller quakes. As of this writing, more than 400 earthquakes had occurred in California during the last week alone (see the list of current earthquakes). But only eight of them had magnitudes over 3. The rest were all microearthquakes, which would have passed mostly unnoticed if it weren't for the more than 600 seismometers which the California Integrated Seismic Network (CISN) operates in our State. (hra003)
Figure 1: Worldwide statistics for large earthquakes in the 1990s (courtesy of USGS)
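The factor-of-ten rule quoted above is the Gutenberg-Richter relation. As a rough illustration (not part of the original blog entry), the short Python sketch below scales expected worldwide annual counts from the rate of magnitude-8 events; the anchor value of one magnitude-8 quake per year and the slope of exactly ten per magnitude step are simplifying assumptions for the example.

```python
# Illustrative sketch of the "factor of ten per magnitude step" rule
# (Gutenberg-Richter): log10(N) = a - b*M, with b assumed to be 1 here.

def expected_quakes_per_year(magnitude, quakes_at_m8=1.0, b=1.0):
    """Rough expected annual number of earthquakes at or above `magnitude`,
    scaled from an assumed rate of one magnitude-8 event per year."""
    return quakes_at_m8 * 10 ** (b * (8.0 - magnitude))

for m in (8, 7, 6, 5, 4):
    print(f"M >= {m}: about {expected_quakes_per_year(m):,.0f} per year")
# M >= 7 gives ~10 and M >= 6 gives ~100, the same order of magnitude as the
# 17 and 134 events per year quoted in the blog entry.
```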
This tip is fairly well known, but should not be ignored. We’ve all heard that breakfast is the most important meal of the day. Sometimes, students skip breakfast because they’re sleepy, they’re running late, or they’re just not hungry. Be sure your students know to eat a healthy breakfast and avoid excessively stimulating foods, such as those containing caffeine or sugar, on the day of a test. A healthy breakfast should contain some protein. Low fat meats, eggs, and milk are good sources of protein. A healthy breakfast should also contain some fiber. Whole grains, vegetables, and fruits are good sources of fiber. An example of a healthy breakfast that can help fuel students’ minds might be something simple like an egg, a bowl of whole grain cereal with low fat milk, and some blueberries. Encourage students to stay away from sugary cereals because they are digested quickly and will leave students hungry and tired in a couple of hours. You can also keep a basket of fruits like apples and bananas in your classroom for students who may not have gotten a good breakfast.
An observant person can tell exactly what creatures have passed through an area from the impressions that they have left in the snow, soft earth, mud or sand. A nature walk can be made more interesting by learning to "read" the tracks left by animals.
Science Is Elementary: Let's Explore Animals (#102)
Students will be able to:
- Identify animal tracks.
- Classify animal tracks such as webbed feet, hooves, claws and paws.
- Match tracks to the correct animal.
- Measure each other's feet and compare the findings to a 12 inch ruler (Is a foot really a foot?).
- Measure for area using one inch graph paper.
- Discover that the same amount of space can be represented through varied forms, despite the tracks being various sizes and shapes.
- Print with rubber feet stampers.
Materials:
- various pictures of animals and the tracks they make
- rubber stampers of animal feet (can be borrowed from the Pennsylvania Game Commission)
- one inch graph paper
- large sponge
- water paint
- bucket and soap
Ask students: Do you know when you first started walking? How old do you think you were? How old are animals when they start to walk? (Most can walk within hours of their birth.) What do people and animals need to walk? (Legs and feet.) To give students a specific responsibility while viewing, ask them to watch for the different things that feet can do. (Climb, run, stomp, walk.) Also ask students to see how many different kinds of animals they see in the video. Focus will be on animals and what they do with their feet.
Start the tape just after a picture of an elephant, armadillo and turtle; the narrator will say "You move with your feet and legs." Pause when the narrator says "How do animals move?" and let students offer answers. Resume the tape and ask students to watch until the poem FEET by Aileen Fisher is finished. Stop.
After viewing the video ask children: What different kinds of animals did you see? Do you remember what kind of feet they had and how their feet helped them? (Hooves, webbed feet, claws; climb, walk, swim.) Show the rubber feet (supplied by the Pennsylvania Game Commission) and ask: Can you tell what animal these feet belong to? Ask students: Does the size of the feet determine the size of the animal? Let each child describe the size. (Encourage them to use words such as inches, or bigger than ..., or smaller than ....) Explain that you are going to show them how to measure an animal foot using one inch graph paper.
Place enough water paint on a sponge so that it acts like a stamp pad for the rubber foot stampers. Push the rubber stamper onto the sponge and press it on a piece of one inch graph paper. Next, let the student press their own foot onto the sponge and make a print beside the animal track. When both prints have dried, show the student how to find the area by counting the full squares inside each foot print. Record the findings and compare the biggest and the smallest prints. Let children cut out the animal tracks and build a bar graph by gluing them to a large piece of butcher paper, from smallest to largest.
In the primary grades assessment will be done by observation and informal questioning while the children are exploring the science area. Ask them to find the area of specific things that the teacher provides. Ask students to name several animals and ways that their feet help them. Let room parents help clean up by having a bucket of soapy water nearby to clean feet.
Plan a field trip to a park, zoo or nature reserve. Invite a naturalist to visit the classroom. Subscribe to a nature magazine such as Ranger Rick or Backyard Friends. While on a nature walk, look for animal tracks.
Once found, pour a mixture of plaster of Paris and water into the track. Let it set and later lift it and scrub with a brush to remove the dirt. Each child can make a plaster cast of an animal print. Find other outdoor objects such as leaves, pine cones, acorns, or rocks and measure for area. Encourage the children to visit the library and take out books on animals. Make a list of safety rules stating what to do if a person encounters a wild animal. Write poems and stories with an animal as the main character. Write a letter to Aileen Fisher, the author of the poem FEET. From tape marks on the floor, children will practice hopping or jumping with both feet and then with one foot. They will then record the measurement. Students will measure each other's feet using a 12-inch ruler, yard stick or a shoe store's measuring tool. The teacher will make a foot pattern that is 12 inches in length. Children will then go on a search for a person with that size foot. They will then determine if a "foot" is really a "foot". Afterwards read the story How Big Is a Foot? by Rolf Myller. Read other stories about feet such as 10 in a Bed or How Many Feet in the Bed? Rewrite the poem FEET using different animals as characters. Using acrylic paint, paint on a t-shirt with the rubber feet stampers. Show snow shoes, rubber flippers, or football shoes and ask the students how they help people. Then ask the students to identify animals with similar feet.
Master Teacher: Grace Bickert
Lesson Plan Database
Thirteen Ed Online
This is the quintessential ‘What If’ question. It is counterfactual because now we can never know what would have happened if India had not been partitioned. But we can speculate about the possibilities and try and construct plausible scenarios for purposes of understanding and discussion. In this post we argue against the scenario presented by Aakar Patel in his op-ed in The News on September 22, 2008. Aakar Patel’s one-line conclusion is that an unpartitioned India would have been a disaster for both Hindus and Muslims. Let us first list the points we aim to contend: - Unpartitioned India would be the word’s largest country (1.4 billion people), the world’s largest Muslim country (500 million) and… the world’s poorest country (over 600 million hungry). - In undivided India, religion would have dominated political debate, as it did in the 30s and 40s, and consensus on reform would be hard to build internally. All energy would be sucked into keeping the country together. Undivided India would have separate electorates, the irreducible demand of the Muslim League and the one that Nehru stood against. A democracy with separate electorates is no democracy at all. - Hindus would never have been able to rule Punjab, Sindh, Balochistan or the Frontier. - Without Partition there would have been no Nizam-e-Mustafa. - The fault line of national politics in undivided India would have remained Hindu versus Muslim. Jinnah alone understood that from the start. Nehru and Patel understood it much later, agreeing to Partition. Gandhi never understood it; if he did, he never accepted it. - Three parts of undivided India had a Muslim majority. The west became Pakistan, the east became Bangladesh. Sooner or later, the north will become something else: the Muslims of Kashmir do not want to be India. But Indians do not understand that. Let us now respond in order and present a different perspective: - Undivided India need not have been the world’s poorest country. The resources, attention and energy that have gone into the continued hostility since Partition could have been channeled into development. (See the cost of conflict estimated by the Strategic Foresight Group, Mumbai). The huge market and the complementarities of arbitrarily divided ecosystems could have yielded great benefits. Huge investments went into making up for the division of the Indus water system, for example. - A democracy need not be a mechanical and rigid system. Malaysia, with three, not two, hostile communities found a way to adjust its system of governance to suit its constraints. South Africa, with its bitter history of apartheid, found a way in its constitution to work around the hostilities. There was no reason India could not have found a similarly workable formula. - There is no reason to think in terms of one community ruling the other. Indeed, that is a framework that is incompatible with democratic governance. The fact is that almost right up to Partition, the Punjab’s Unionist Party had found a mechanism to govern with a coalition of the major communities. - Even after Partition there is no Nizam-e-Mustafa. The fact that a large number of Hindus in India today want the Kingdom of Ram does not mean that their demand needs to lead to a redefinition of India. These kinds of demands need to be resolved in the political arena. - Jinnah did not feel from the start that the fault-line in undivided India would have remained Hindus versus Muslims. 
In fact, Jinnah was the advocate of Hindu-Muslim unity because he believed it was possible. The management of any fault line is up to the leadership as shown by the examples of Malaysia and South Africa mentioned earlier. Ireland is another example. - Three parts of undivided India had a Muslim majority but the demand for Pakistan did not originate in these areas. In fact the Muslim majority areas of the west were the last to sign on and even then very reluctantly. The Muslims of Kashmir seemed quite satisfied with the situation under the Farooq Abdullah government. Their attitude is more a function of India’s mismanagement (and post-partition Pakistan’s incitements) than of some innate hatred of Hindus. There is no cure for mismanagement. Even the Muslim west and east could not coexist in the face of political folly. It is quite possible to argue that there were many possible resolutions of the situation that prevailed in India in the 1930s and 1940s. It was a failure of leadership that the worst possible alternative was chosen. India lacked a statesman of the caliber of Mandela who could see beyond the immediate political gains and losses. The cost of the Partition is hard to imagine – almost a million deaths, ten million homeless, and continued conflicts. Add to this the subsequent costs in Bangladesh and the ongoing ones in Kashmir. If the inability of Hindus and Muslims to live together is given as the sole reason for the Partition, it should be considered that in all the one thousand years that Muslims lived in India, there was never once this scale of conflict or bloodshed. It was possible to live together. In fact Hindus and Muslims continue to live together in India even though their relations were poisoned and made immensely difficult by the fact of the Partition. One could just as well argue that the Partition was a disaster for both Hindus and Muslims as also for the Sikhs whose homeland was cut into two. A united India would never have allowed the Saudis or the Americans to set up madrassas and train jihadis within its territories. Dim-witted dictators would never have been able to occupy the positions of power they were in post-Partition Pakistan and Bangladesh. We can say that Manto in Toba Tek Singh had the right perspective on the partition of India.
A glacier is a mass of ice moving on the surface of the Earth. In this article, we shall discuss how glaciers form and move; we shall discuss the geological features associated with glaciers; and we shall show how recognizing these features can allow us to tell where glaciers have been in past ages of the Earth's history. Formation and motion of glaciers A glacier forms at an accumulation point, that is, a place where more snow accumulates than melts. This snow then piles up and compacts under its own weight to form ice. Even if this happened on a perfectly level surface, as the ice mounted up it would eventually start to squidge outwards under the pressure of its own weight; and often glaciers will form on mountaintops, where gravity is also a factor. Under the effects of pressure and/or gravity, the ice will flow. A glacier flows in two ways: by sliding along its base, and by "plastic flow" of the molecules of ice within the glacier. You may recall from science class that ice tends to melt under pressure; this means that the base of a glacier is often lubricated by water. The overall speed of a glacier can be measured by simple methods: hammer a stake into a glacier, wait a while, come back, and see how far it's moved. The speeds so measured range from centimeters to meters per day, depending on the glacier. One significant different between the flow of ice and the flow of water is this: a river is pulled downwards by gravity. This happens to glaciers too, when flowing downhill; but glaciers are also pushed by the pressure behind them: as a result, glaciers can and do flow uphill. Once in motion, the ice in the glacier will keep flowing until it reaches a point where the ice ablates: either it reaches the sea, breaking up into bergs, or it reaches a zone where the climate is warm enough to melt the advancing glacier. In the latter case, the end of such a glacier represents an equilibrium state at which the rate of melting is just sufficient to balance the rate of flow of the glacier. Now, while this equilibrium is maintained the glacier as a whole will stay still. The ice in the glacier will move, starting off at the accretion point and ending up at the ablation point, but the glacier as a whole stays in one place: it is like a conveyor belt of ice moving from accumulation to ablation. The length of the glacier will change with the climate: for example, if the climate gets warmer around the ablation end of the glacier, then the glacier won't be able to progress as far before reaching a zone in which the rate of melting equals the rate of flow, so the glacier will retreat (note that the ice in the glacier will still be moving forward while this is going on). Conversely, of course, a drop in temperature will let the glacier get further from the accumulation point. Changes at the accumulation site of the glacier will also affect its length: the more it snows at the accumulation point, the greater the volume of flowing ice, and the further it will get before it melts. It follows that global cooling will cause glaciers to extend further from accumulation points, and global warming will see them extend less far, or vanish entirely if the temperature rises so much that the snow melts at the former point of accumulation. Types of glacier A glacier originating where the snow accumulates on a mountaintop and flows down the mountain is known as a valley glacier or an alpine glacier: the terms are synonymous. 
The larger glaciers, such as can be found today covering the surface of Greenland and Antarctica, are known as continental glaciers, or sheet glaciers or ice sheets. Again, these are synonymous and the diversity of terms does not indicate that some sort of distinction is being drawn. Whereas valley glaciers flow downwards from the point of accumulation, the sheet glaciers of Greenland and Antarctica flow outwards in all directions from the point of accumulation. Erosion associated with glaciers A glacier moving over a landscape will sweep up topsoil and loose rocks as it goes, transporting them towards the ablation end of the glacier and revealing the bedrock beneath. This bedrock will be polished by the passage of the ice over it; it will also be grooved and scored in the direction of travel by the rocks contained in the glacier: such grooves are known as striations or striae, and the rocks are said to be striated. The picture to the right shows an example of striation: The passage of a glacier will produce debris of all sizes, from enormous chunks of rock plucked from the bedrock to very fine rock flour produced by the grinding action of the glacier. A valley glacier will also carry along any rock fragments that fall from the valley walls. Some glaciers, known reasonably enough as "rock glaciers", consist mostly of rocks cemented together by ice. One erosional feature commonly associated with glaciers is the roche moutonée, caused when a glacier slides over a hummock of rock. As the glacier slides up the hill, it polishes and striates it; flowing down the other side, it plucks fragments from the rock, leaving a steeper and more ragged face in its wake. A valley glacier will create a bowl known as a cirque at the accumulation point, with about a quarter of the rim of the bowl missing in the direction in which the glacier leaves the cirque. A mountain eroded by glaciers will have a rugged, jagged topography, with knife-edge ridges where two cirques or glacial valleys adjoin. The valleys carved out by a valley glacier will have a characteristic U-shaped cross-section quite different from that produced by a river, which produces V-shaped valleys. Deposition associated with glaciers The rock fragments transported and deposited by glaciers are known as till. This is unrounded and unsorted by size. This may not sound very remarkable, until we reflect that the action of wind or water cannot produce sediments like this: such an unsorted jumble of shards is characteristic of glacial action. In a valley glacier, there will be concentrations of till along the border of the glacier, where it has been plucked or ground from the valley sides. Such an accumulation of till is known as a lateral moraine. Where two valley glaciers meet, lateral moraines will merge into medial moraines in the middle of the larger glacier so formed, as shown in the photograph to the right. Ground moraine is till deposited over a wide area either when the till on the underside of the glacier lodges against something, or when a glacier retreats in response to climatic changes. Ground moraine is often found in small hills, shaped somewhat like the back of a spoon, known as drumlins. No-one is really sure how these form, but the fact that they are composed of till and found in conjunction with other signs of glacial action confirm that they are glacial in origin; also, they are invariably found oriented with their long axis in the direction of the flow of the glacier (as determined by study of striations, roches moutonées, etc). 
At the ablation end of a glacier, the sediments transported by the glacier will be dumped to form an end moraine, resulting in a ridge of till in the same convex shape as the lobes typically found at the end of a glacier. Beyond the ablation zone, where the glacier melts, the water from it will be carried away, typically in a braided stream (a term which will be explained in more depth in the article on rivers). This will carry with it the lighter sediments, known as outwash, which will be deposited in front of the glacier as an outwash plain. One interesting sedimentary feature can be seen in lakes fed by glacial waters (proglacial lakes). In summer, relatively coarse outwash of sand and gravel will be deposited in these lakes; in winter, when the lake freezes over, the calm conditions below the ice allow fine particles of clay and of organic material to settle. The result is the formation of couplets of sedimentary material, one fine, one relatively coarse, repeated over and over, each couplet being known as a varve. Because the deposition of varves is an annual event, the study of varves is of interest in dating, as will be discussed in a later article. Kettles are another feature we associate with glaciers. When a glacier retreats, we often observe that it leaves a large block of ice behind it. The outwash sediment from the retreating glacier will then build up around the orphaned block of ice. When it has melted, which may take many years, the result is a depression in the outwash plain: this is a kettle. If it lies below the water table, it will fill up to produce a small lake with outwash banks. Former glaciers: how do we know? It is not difficult to detect the passage of a glacier, even if it is no longer present: for if you take away the glacier, you are still left with the patterns of deposition and erosion that glaciers produce; and these are highly distinctive and cannot be produced by other mechanisms. We may note that we can see glaciers disappearing today: for example, Glacier National Park in Montana has at the time of writing only 26 named glaciers, down from 150 in the year 1850; so our statements about what evidence former glaciers leave behind are by no means hypothetical, but rather are based on direct observation. With all former glaciers, whether valley glaciers or continental glaciers, we see characteristic patterns of erosion: we see such things as bedrock polishing, striation, roches moutonées, and so forth. We also see till. As we have remarked, till is a very distinctive sort of sediment, which cannot be produced by the action of wind and water, as shown by its unsorted, unrounded nature. The arrangement of till can also be quite distinctive; a pile of till in a crescent-shaped end moraine admits of no other explanation except that a glacier deposited it there. Erratic boulders, when we find them, present another blatant clue. An erratic boulder is one which, in terms of the rock of which it is composed, has nothing in common with the geology of its surroundings, and which must have been transported to its present location over some distance; in some cases, hundreds of kilometers. Ice, as we can observe, can transport such enormous rocks; water and wind do not. We are therefore left with some unambiguous signs of former glaciers. 
When we see smoothed bedrock marked with striations leading to a semi-circular moraine of unrounded and unsorted rocks, many of them way out of their original geological context, beyond which is what looks suspiciously like an outwash plain, there is really no other conclusion that we can draw, except that we are looking at where a glacier once flowed and terminated. Besides the erosional and depositional features already mentioned, glaciers leave behind some highly distinctive landforms. Consider for example the photograph to the right. There is no longer a glacier present, and, indeed, a lake has formed at the former accumulation point. (Such a lake is known as a tarn.) Despite the absence of any actual ice, the reader should have no trouble in recognizing the landforms associated with a valley glacier, which are as clear and distinctive as an elephant's footprint. Here, surely, is the great bowl of a cirque, and just where we would expect to find it, near the peak where the temperatures are lowest; in the foreground, where the rim of the cirque is open, we see a valley with the distinctive U-shaped cross-section of a glacial valley, where the glacier once exited the cirque. Sheet glaciers do not carve out the same forms, but they do leave some large-scale clues behind. They often sweep away soil and other sediment, leaving large expanses of bare rock. In doing so, they also erase the drainage systems that were present in the landscape before their arrival, so that after they retreat, the landscape is poorly drained: such features are a sign to geologists to look for other indications of glaciation. We should mention a couple more signs of glaciation. The first of these is isostatic rebound. Ice is heavy, and for reasons that we shall go into more thoroughly in later articles, the weight of a continental glacier should press the crust of the Earth down into the mantle, and, when the glacier is gone, the Earth's crust should slowly "bounce" back up. This happens rapidly enough as to leave its marks over mere centuries: so, in parts of Scandinavia, we can see former harbors now standing uselessly distant from the sea. Today, the rate of rebound in post-glacial areas is measured directly by a GPS monitoring system called BIFROST: the maximum rate of rebound is about 1cm/year. This on its own would not prove the former presence of glaciers, but in combination with the less ambiguous signs of erosion and deposition, the phenomenon of rebound does confirm the hypothesis of glaciation. There is one more prediction that we can make and confirm. We can use geological dating methods, of which more will be said in later articles, to establish the times at which sheet glaciers covered northern North America and Eurasia. Now, if we are correct in attributing these striations, moraines, and so forth to glacial action, then we ought to find that at the same time, we have other evidence of a colder climate, such as flora and fauna adapted to colder climates; lower sea levels caused by water being locked up in continental glaciers; temperature-dependent changes in the composition of shells; and so forth. And this is exactly what we do find, providing an independent confirmation of an ice age. We shall deal more fully with these topics in later articles on geological dating methods, on paleoclimatology, and on ice ages; for now we shall simply note that these techniques can be used, and that they confirm what we can learn from studying landforms, sediments, and erosional features.
Learning and neural networks
An Overview of Neural Networks
The Perceptron and Backpropagation Neural Network Learning
Single Layer Perceptrons
A Perceptron is a type of feedforward neural network which is commonly used in Artificial Intelligence for a wide range of classification and prediction problems. Here, however, we will look only at how to use them to solve classification problems. Consider the problem below. Suppose you wanted to predict what someone's profession is based on how much they like Star Trek and how good they are at math. You gather several people into a room and you measure how much they like Star Trek and give them a math test to see how good they are at math. You then ask what they do for a living. After that you create a plot, placing each person on it based upon their Star Trek and math scores. You look at the plot and you see that if you draw a few lines, you can create borders between the groups of people. This is very handy; if you grabbed another person off the street and gave them the same math and Star Trek test, ideally they should score similarly to their peers. As such, as you add more samples, people's scores should fall close to what other people who share the same profession score. Note that in our example, we can classify people with total accuracy, whereas in the real world noise and errors might make things much more messy. As a side note, single layer perceptrons can be derived analytically in one step. However, we will train this one to illustrate how it is done since, with multi-layer perceptrons, no closed analytical solution is known to exist. We will train the neural network by adjusting the weights in the middle until it starts to produce the correct output. The neural network starts out kind of dumb, but we can tell how wrong it is and, based on how far off its answers are, we adjust the weights a little to make it more correct the next time. We do this over and over until its answers are good enough for us. It is important to note that we control the rate of learning, in this case via a constant learning rate η. This is because if we learn too quickly we can overshoot the answer we want to get. If we learn too slowly, then the drawback is that it takes longer to train the neural network.
- Note: The difference between t and y is that t is what you want the network to produce while y is what it actually outputs. If the network is well trained, y should be very close to t.
Steps in training and running a Perceptron:
- Get samples of training and testing sets. These should include:
  - What the inputs (observations) are
  - What outputs (decisions) you expect it to make
- Set up the network:
  - Create input and output nodes
  - Create weighted edges between each node. We usually set initial weights randomly from 0 to 1 or -1 to 1.
- Run the training set over and over again and adjust the weights a little bit each time.
- When the error converges, run the testing set to make sure that the neural network generalizes a good answer.
These steps can also be applied to the multi layer Perceptron. A minimal code sketch of this training loop is given below.
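The following is a minimal sketch, in Python, of the single-layer training loop just described, given here before we move on to multi-layer networks. The data set, learning rate and number of passes are invented for illustration; the update used is the standard perceptron rule (adjust each weight in proportion to the error and the input).

```python
import numpy as np

# A minimal sketch of the single-layer training loop described above.
# Inputs are hypothetical (math score, Star Trek score) pairs; the target is
# 1 or 0 for two made-up professions. All values are illustrative only.

rng = np.random.default_rng(0)
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.2, 0.3], [0.3, 0.1]])  # observations
t = np.array([1, 1, 0, 0])                                      # desired outputs

w = rng.uniform(-1, 1, size=2)   # weights, initialised randomly in [-1, 1]
b = rng.uniform(-1, 1)           # bias term
eta = 0.1                        # constant learning rate

for epoch in range(100):                             # run the training set over and over
    for x, target in zip(X, t):
        y = 1.0 if np.dot(w, x) + b > 0 else 0.0     # thresholded output
        error = target - y                           # how wrong we were
        w += eta * error * x                         # adjust the weights a little
        b += eta * error

print([1.0 if np.dot(w, x) + b > 0 else 0.0 for x in X])  # should match t
```

Because this toy training set is linearly separable, the loop settles on weights whose predictions match the desired outputs; the same loop would never settle for a problem like the one discussed next.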
In the previous example, we drew a few lines and created contiguous single regions to classify people. However, what if the members of one class lie in between the members of another class? As an example, suppose the state of Ohio annexed the state of Illinois and created the state Ohionois (remember, the s is silent). We wouldn't be able to draw a single shape that contained both Illinois and Ohio without also including Indiana, which lies between the two states. Thus, the new state of Ohionois is not linearly separable, since Indiana divides it. In order to deal with this kind of classification problem, we need a classification scheme that can understand that for the class of people with any given job, there may be some other people with another job whose math and Star Trek scores fall in between.

One way to accomplish the classification of non-linearly separable regions of space is, in a sense, to sub-classify the classification. Thus we add an extra layer of neurons on top of the ones we already have. When the input runs through the first layer, the output from that layer can be numerically split or merged, allowing regions that do not touch each other in space to still yield the same output. To add layers we need to do one more thing other than just connect up some new weights. We need to introduce what is known as a non-linearity. In general, the non-linearity we will use works to make the outputs from each layer more crisp. This is accomplished by using a sigmoidal activation function. This tends to get rid of mathematical values that are in the middle and force values which are low to be even lower and values which are high to be even higher. It should be noted that there are two basic, commonly used sigmoidal activation functions: the logistic (standard sigmoid) function and the hyperbolic tangent.

- Note: The sigmoid activation function at the output is optional. Only the activation function following the hidden layer must be used. The reason for using a sigmoid at the output is to force the output values to normalize between 0 and 1. It should be noted that other output functions can be used, such as step functions and soft-max functions, at the final output layer. For more on transfer functions see: Transfer Function

Training and Back Propagation

The standard way to train a multi layer perceptron is using a method called back propagation. This is used to solve a basic problem called the assignment of credit, which comes up when we try to figure out how to adjust the weights of edges coming from the input layer. Recall that in the single layer perceptron, we could easily know which weights were producing the error because we could directly observe those weights and the output from their weighted edges. However, we now have a new layer whose output must pass through another layer of weights. As such, the contribution of the new weights to the error is obscured by the fact that the data will pass through a second set of weights or values.

To give a better idea about this problem and its solution, consider this toy problem: A mad scientist wants to make billions of dollars by controlling the stock market. He will do this by controlling the stock purchases of several wealthy people. The scientist controls information that can be given by Wall Street insiders and has a device to control how much different people can trust each other. Using his ability to input insider information and control trust between people, he will control the purchases by wealthy individuals. If purchases can be made that are ideal for the mad scientist, he can gain capital by controlling the market.
As a mad scientist, you will need to adjust this social network in order to create optimal actions in the marketplace. You do this using your secret Trust 'o' Vac 2000. With it you can increase or decrease each trust weight however you see fit. You then observe the trades that are made by the rich dudes. If the trades are not to your liking, then we consider this to be error. The more to your liking the trades are, the less error they contain. Ideally, you want to slowly adjust the network so that it gets closer and closer to what you want and contains less error. In general terms this is referred to as gradient descent.

There are many ways in which we can adjust the trust weights, but we will use a very simple method here. Each time we place some insider information, we watch the trades that come from our rich dudes. If there is a large error coming from one rich dude, then they are getting bad information from someone they trust too much, or are not getting good information from someone they should trust more. When the mad scientist sees this, he uses the Trust 'o' Vac 2000 to weaken a strong trust by a little and strengthen a weak trust by a little. Thus, we try to slowly cut off the source of bad information and increase the source of good information going to the rich dudes.

We can take the ideas above and make them more mathematically formal. One should notice that while the feedforward network uses sigmoid activation functions for the non-linearity, when we propagate the error backwards, we use the derivative of the activation function. This way we adjust the weights at a rate that reflects the curvature of the sigmoid. In the center of the sigmoid, more information is passed through the layer. As a result, we can assign credit more reliably from values that pass through the center.

First recall the activation function used for each neuron:

g(x) = \frac{1}{1 + e^{-x}}

which has the very nice property that the derivative can be expressed simply as:

g'(x) = g(x)\,(1 - g(x))

Next we need to know the error. While we can compute it in many different ways, it is most common to simply use the sum of squared error function:

E = \frac{1}{2} \sum_{k} (d_k - y_k)^2

where y_k is the output we got, while d_k is what we wanted to get. Thus, we compute how different the network output was from what we wanted it to be. Additionally, in this example, we use a simple update based on the general error. This leads to a straightforward computation for the output error term \delta_k, which is simply:

\delta_k = (d_k - y_k)\, y_k (1 - y_k)

If we decided to omit the sigmoid activation on the output layer it's even simpler:

\delta_k = d_k - y_k

In its general form, the adjustment of a weight w_{jk} by an input x_j with learning rate \eta can be described as:

\Delta w_{jk} = \eta\, \delta_k\, x_j

We can speed this up greatly by introducing a momentum term that toggles the learning rate to be faster by adding in some of the last step's error adjustments:

\Delta w_{jk}(t) = \eta\, \delta_k\, x_j + \alpha\, \Delta w_{jk}(t-1)

Notice that at its maximum we can see the momentum term as:

\Delta w_{jk}(t) \approx \eta\, \delta_k\, x_j\,(1 + \alpha + \alpha^2 + \alpha^3 + \dots)

which leads to:

\Delta w_{jk}(t) \approx \frac{\eta}{1 - \alpha}\, \delta_k\, x_j

Here \alpha is a number from 0 to 1 that is a momentum controlling constant. This causes the effective learning rate to range as:

\eta \le \eta_{\text{effective}} \le \frac{\eta}{1 - \alpha}

Additionally, there are many other advanced learning rules that can be used, such as conjugate gradient and quasi-Newton methods. These and other methods like them use heuristics to speed up the learning rate, but have the downside of using more memory. Additionally, they make more assumptions about the topology of your sample space, which can be a drawback if your space has an odd shape.

From this example observe that we can keep adding more and more layers. We need not stop with only two layers; we can add three, four, or however many we want. In general, two layers are usually sufficient.
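Putting the update rules above together, the following is a minimal sketch of a two layer network trained with backpropagation and momentum, written in Python with NumPy rather than NSL. The layer sizes, toy data, learning rate, and momentum constant are illustrative assumptions, not values taken from the model files used in the projects below; bias terms are omitted to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 inputs -> 2 desired outputs d (values invented for illustration).
X = rng.random((20, 2))
D = (X.sum(axis=1, keepdims=True) > 1.0).astype(float)
D = np.hstack([D, 1.0 - D])

def g(x):                                 # sigmoid activation
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out = 2, 4, 2
W1 = rng.uniform(-1, 1, (n_in, n_hid))    # input -> hidden weights
W2 = rng.uniform(-1, 1, (n_hid, n_out))   # hidden -> output weights
eta, alpha = 0.5, 0.9                     # learning rate and momentum constant
dW1_prev = np.zeros_like(W1)
dW2_prev = np.zeros_like(W2)

for epoch in range(2000):
    # Forward pass
    H = g(X @ W1)                         # hidden layer activations
    Y = g(H @ W2)                         # network outputs y

    # Backward pass: deltas use the derivative g'(x) = g(x) * (1 - g(x))
    delta_out = (D - Y) * Y * (1.0 - Y)               # credit assigned at the output layer
    delta_hid = (delta_out @ W2.T) * H * (1.0 - H)    # credit passed back to the hidden layer

    # Weight updates with momentum: add in a fraction of the previous step's adjustment.
    dW2 = eta * H.T @ delta_out + alpha * dW2_prev
    dW1 = eta * X.T @ delta_hid + alpha * dW1_prev
    W2 += dW2
    W1 += dW1
    dW1_prev, dW2_prev = dW1, dW2

print("final sum of squared error:", 0.5 * np.sum((D - Y) ** 2))
```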
Adding extra layers is only helpful if the topology of our sample space becomes more complex. Adding extra layers allows us to fold space more times.

Another way of looking at this

One can also look at a feedforward neural network trained with back propagation as a Simulink circuit diagram.

Self Guided Neural Network Projects

The Well Behaved Robot

This project has been used at the University of Southern California for teaching core concepts on Back Propagation Neural Network Training.

Your great uncle Otto recently passed away, leaving you his mansion in Transylvania. When you go to move in, the locals warn you about the werewolves and vampires that lurk in the area. They also mention that both vampires and werewolves like to play pool, which is alarming to you since your new mansion has a billiard room. Being a savvy computer scientist, you come up with a creative solution. You will buy a robot from Acme Robotics (of Walla Walla, Washington). You're going to use the robot to guard your billiard room and make sure nothing supernatural finds its way there.

To train your robot you need to select a set of features which the robot can detect and which can also be used to tell the difference between humans, vampires, and werewolves. Further, after having your nephew Scotty ruin one of your priceless antique hair dryers, which you keep in the billiard room, you decide that the robot should also detect children entering the room. After reading up on the nature of the undead and after taking careful measurements, you realize that the two best features for detection are how tall the person entering the room is and how hairy they are. This works because vampires are tall and completely bald; werewolves are either short and totally covered in fur, or they are the mutant type that is extremely tall but no hairier than a human. An adult human is taller than a child and slightly hairier. The chart below shows samples that you took to validate your hypothesis.

The next thing your robot will need to do, in addition to detecting what creatures enter your billiard room, is take an action that is appropriate for the situation. Since you want your robot to be polite, it will greet every human that enters the room. Additionally, if a child enters the room, when it greets the child it will scream so that you know to look at your closed circuit television and see what is happening in the room. When the robot detects a vampire, it will scream and impale it with a stake. Since robots are no match for werewolves, if the robot detects a werewolf, it will scream and then run away. Thus, your robot can take any of four actions: it can impale something entering the room, it can scream, it can run away, and it can greet people. Any of these actions can be performed following the detection of anything entering the room. Your job is to train the robot so that it performs the correct actions whenever it detects something entering the room.

Your first task is to train your robot. Take the training data marked train1.dat and plug it into bpt1.nsls located in 2layer. Compile the model and run it. After you train the model, test it with bpr.nsls. The output from testing can then be found in out.bin.dat or out.dec.dat. These are tab-delimited files with the test results. You will want to take this data and make a scatter plot which shows a map of how the robot will react when it observes different heights and different amounts of hairiness.
The way to interpret the output is as follows: there are four actions the robot can take; if the robot will take a given action, the corresponding output is a 1, and if it will not, the output is a 0. The four actions in order are Impale, Scream, Run Away, and Greet. Thus, if the output is 0,1,0,1 then that means the robot will scream and greet.

- Take and plot the actions the robot takes over the space of possible inputs. Out.bin.dat and out.dec.dat contain the same information. However, out.bin.dat contains the binary coarse code for the output while out.dec.dat contains the decimal equivalent. You can use either for creating the plot. The decimal version may be easier to use. It's up to you. For the plot, make the x-axis the height of the visitor and the y-axis the amount of hair measured. Each point on the plot should show the robot's action for that input. Note: you may create the plot with any method you choose, so long as it is neat and clear.
- Compare the plot of the robot's test actions against the training data. Does the network do a good job of generalizing over the training data? Why or why not?
- Does the robot always behave as programmed, or does it commit actions that do not fit the patterns for people, children, werewolves, or vampires? Explain.
- Notice that it reacts to things entering the room as if they were vampires in two regions of space not visibly connected to the vampire training data. Why is that?

Being an inquisitive lad or lass, you decide you would like to find out how your robot would perform if you added a third layer to your back prop. Recall the equations from the NSL back prop lecture. Derive equations for a 3 layer back prop. This can be done by extending a two layer perceptron to three layers the same way a one-layer perceptron is extended to two layers. The figure below shows the schematic of the three-layer perceptron. Define the weights, activations, and error terms for each layer using the same notation from the NSL slides.

- Using figure 2 as a guide and your results from 3.a, extend the 2 layer back prop model to a three layer back prop model. Do this on the model in the folder 3layer. Some parts have been filled in already to help guide the process. When you are finished, run the model on the same testing and training data as question 2. Create a scatter plot in the same manner and compare the two.
- How good a job does the three layer network do at generalizing on the problem?
- How does it compare to the results from the two layer network?
- Does it do a better job? Why or why not?
- As it turns out, the evil Dr. Moriarty has created vampires that are similar in height and hairiness to adult humans. You have discovered his fiendish plan and must now train a new network. The figure below shows a scatter plot of the new training data. How fortunate for you that almost no one in Transylvania is the same height and hairiness as the mutant vampires, but which neural network should you use? Train both the two layer network and the three layer network on the new data. Create scatter plots for both results in the same manner as before.
- How well do the two networks perform on the new data?
- Which of the two networks performs better at generalization?
- Specifically, why does the one that performs better do so?
- Having analyzed the outcome of two networks on two different sets of data, list several pros and cons to using either network and explain for each one why it is a pro or a con.
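The exact column layout of out.bin.dat is defined by the NSL model and is not reproduced here; purely as an illustration, the sketch below assumes a hypothetical tab-delimited layout of height, hairiness, and the four action bits, decodes each row into an action label, and draws the requested scatter plot.

```python
# Hypothetical layout per row of out.bin.dat: height, hairiness, impale, scream, run, greet
import matplotlib.pyplot as plt

ACTIONS = ["Impale", "Scream", "Run Away", "Greet"]

def decode(bits):
    # e.g. [0, 1, 0, 1] -> "Scream+Greet"
    names = [name for bit, name in zip(bits, ACTIONS) if bit == 1]
    return "+".join(names) if names else "None"

heights, hairiness, labels = [], [], []
with open("out.bin.dat") as f:
    for line in f:
        if not line.strip():
            continue
        fields = line.split("\t")
        h, hair = float(fields[0]), float(fields[1])
        bits = [int(round(float(v))) for v in fields[2:6]]
        heights.append(h)
        hairiness.append(hair)
        labels.append(decode(bits))

# One scatter series per distinct action combination, so the legend doubles as the key.
for label in sorted(set(labels)):
    xs = [h for h, l in zip(heights, labels) if l == label]
    ys = [hr for hr, l in zip(hairiness, labels) if l == label]
    plt.scatter(xs, ys, label=label)

plt.xlabel("Height of visitor")
plt.ylabel("Amount of hair measured")
plt.legend()
plt.show()
```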
The Royal Society of Devious Werewolves has figured out what method you use to train your robot. So before coming over to play pool, they all put on disguises. However, not being as bright as they think they are, they all wear the same disguise. Your faithful servant Igor, having infiltrated the Royal Society, can phone ahead to tell you what features the werewolves' disguises have, so you can retrain your robot before they come over. The only problem is that werewolves drive Italian sports cars, and since they are not known to drive with caution, time is of the essence in training. You decide to augment the training error in your network to use momentum.

- Plug the extra momentum term into the training error. Train the two layer network on the first data set WolfData1.txt both with and without the momentum term. Give a printout of the network's error.
- Does the network train faster with the new momentum term? What is your intuition as to why this is or is not the case?

Project Materials and Data

- Other Projects in Artificial Intelligence at Cool-ai.org
- The NSL (Neural Simulation Language) Home Page
- Old but Relevant CSCI564 Course Web Page - This is the course in which this project was first used
- 11th Joint Symposium on Neural Computation - Videos of the presentations from the Joint Symposium on Neural Computation held at the University of Southern California.
- NSL Back Propagation Example Applet
- 1hr video lecture on Neural Networks by Prof. P. Dasgupta, IIT Kharagpur.

Further Reading

- Weitzenfeld, A., Arbib, M. A., Alexander, A. (2002) The Neural Simulation Language: A System for Brain Modeling, The MIT Press
- Bishop, C. M. (1995) Neural Networks for Pattern Recognition, Oxford University Press
Acute Lymphoblastic Leukemia (Acute Lymphocytic Leukemia, ALL)

Acute lymphoblastic leukemia (ALL) is a cancer of the blood and bone marrow, in which the bone marrow makes too many lymphocytes (a type of white blood cell). ALL begins in immature lymphocytes and progresses very quickly. It may occur at any age, but is most common in children, or as the end stage of a chronic leukemia of adults called chronic lymphocytic leukemia. Childhood ALL was one of the first cancers for which effective chemotherapy was developed and remains highly curable today.

Cancer occurs when cells in the body (in this case, lymphocytes) become abnormal and divide without control or order. Leukemia is cancer of the white blood cells and their parent cells. Leukemia cells do not function normally and cannot do what normal blood cells do, such as fight infections. This means that a person with leukemia is more likely to become infected with viruses or bacteria. Because the cancerous cells also overgrow the bone marrow, they can force out other normal components, such as the cells that make platelets. Platelets are necessary for proper blood clotting, and so people with untreated leukemia may also have unexpected bleeding or may bruise easily.

The cause of ALL is unknown, but research shows that certain risk factors are associated with the disease. The following factors increase your chances of developing ALL:
- Sex: male
- Race: white
- Age: children and elderly (adults older than 70)
- Previous chemotherapy or radiation therapy treatment
- Exposure to atomic bomb radiation or a nuclear reactor accident
- Exposure to toxic chemicals such as benzene (common in agriculture, dye works, and paint manufacturing and use)
- Certain genetic disorders, such as Down's syndrome, Bloom syndrome, Fanconi's anemia, ataxia-telangiectasia, neurofibromatosis, Shwachman syndrome, IgA deficiency, and congenital X-linked agammaglobulinemia

The following risk factors are specific to childhood ALL:
- Having a brother or sister with leukemia
- Race: white or Hispanic
- Living in the US
- Exposure to x-rays before birth
- Exposure to radiation
- Previous chemotherapy or other treatment that weakens the immune system
- Certain genetic disorders (see above)

Symptoms include:
- Paleness (a sign of anemia)
- Easy bruising or bleeding
- Petechiae (flat, pinpoint spots under the skin caused by bleeding)
- Shortness of breath
- Weight loss
- Loss of appetite
- Bone or joint pain
- Stomach pain
- Pain or a feeling of fullness below the ribs
- Painless lumps in the neck, underarm, stomach, or groin
- Swelling of the liver and/or spleen

Note: These symptoms may also be caused by other, less serious health conditions. Anyone experiencing these symptoms should see a doctor.

Your doctor will ask about your symptoms and medical history, and perform a physical exam. He or she may also check for swelling of the liver; spleen; or lymph nodes in the armpits, groin, or neck. You will likely be referred to an oncologist, a doctor who specializes in treating cancer.
Tests may include the following:
- Blood tests – to check for changes in the number or appearance of different types of blood cells
- Bone marrow aspiration – removal of a sample of liquid bone marrow to test for cancer cells
- Bone marrow biopsy – removal of a sample of liquid bone marrow and a small piece of bone to test for cancer cells
- Spinal tap – removal of a small amount of cerebrospinal fluid to check for cancer cells
- Routine microscopic exam – examination of a sample of blood, bone marrow, lymph node tissue, or cerebrospinal fluid
- Blood, bone marrow, lymph node tissue, or cerebrospinal fluid tests – to distinguish among types of leukemia
- Cytogenetic analysis – a test to look for certain changes in the chromosomes (genetic material) of the lymphocytes; certain genetic abnormalities are associated with ALL, including:
  - The relocation of genetic material from one chromosome to another
  - The presence of a particular gene (Ikaros)
  - The presence of a variant gene, which controls an enzyme that influences folate metabolism
- Immunophenotyping – a process by which the proteins on the surface of the cells and the antibodies the body produces in response are examined. This helps distinguish lymphoblastic from myeloid leukemia and may also be used to determine what types of therapy are most likely to be effective.
- Chest x-ray – x-rays of the chest that may detect signs of lung infection or enlarged lymph nodes in the chest
- CT scan – a type of x-ray that uses a computer to make pictures of structures inside the body
- MRI scan – a test that uses a magnetic field and radio waves to make pictures of structures inside the body
- Gallium scan and bone scan – injection of a radioactive chemical into the bloodstream to detect areas of cancer or infection
- Ultrasound – a test that uses sound waves to examine masses and organs inside the body

Talk with your doctor about the best treatment plan for you. Treatment of ALL is done in two phases: remission induction therapy (to kill leukemia cells) and maintenance therapy (to kill any remaining leukemia cells that could grow and cause a relapse). Treatment options include:

Chemotherapy is the use of drugs to kill cancer cells. It may be given in many forms, including pills, injections, and via a catheter. The drugs enter the bloodstream and travel through the body, killing mostly cancer cells but also some healthy cells. For ALL that has spread, or may spread, to the brain and spinal cord, intrathecal chemotherapy (chemotherapy drugs placed directly into the spinal column) may be used.

Radiation therapy involves the use of radiation to kill cancer cells and shrink tumors. The type of radiation therapy usually performed to treat ALL is external radiation therapy. In external radiation therapy, the radiation is directed at the tumor from a source outside the body. This treatment is used for ALL that has spread, or may spread, to the brain and spinal cord.

Chemotherapy With Stem Cell Transplant

Chemotherapy is followed by a transplantation of stem cells (immature blood cells) to replace blood-forming cells destroyed by cancer treatment. Stem cells are removed from the blood or bone marrow of the patient or a donor, and infused into the patient.

Biologic therapy, which is still being tested in clinical trials, is the use of medications or substances made by the body to increase or restore the body's natural defenses against cancer. This type of therapy is also called biological response modifier therapy.

There are no guidelines for preventing ALL, since its cause is unknown.
Social and economic factors influence a broad array of opportunities, exposures, decisions, and behaviors that promote or threaten health. Although many factors contribute to tobacco use, socioeconomic status is the single greatest predictor. Tobacco and poverty create a vicious cycle: low income people smoke more, suffer more, spend more, and die more from tobacco use. Low socioeconomic status populations include low-income individuals with less than 12 years of education, the medically underserved, the unemployed, and the working poor. They can also be prisoners, gays and lesbians, blue collar workers, and the mentally ill.

A study conducted by the National Network of Smoking Prevention and Poverty found that cigarettes served as a tool for those of low socioeconomic status to cope with boredom, relieve stress, and as a companion to alcohol and caffeine. Cigarettes served as a loyal "friend" – a theme recognizable from cigarette advertising. Other results of the studies conducted on this population included:

There are other factors unique to this community that hinder prevention. Education materials may not be culturally or linguistically appropriate for this segment, and often members of this population live in communities where tobacco advertising is more prominent. Those who work in manufacturing, construction, or transportation are more likely to be exposed to secondhand smoke at work. Also, quitting occurs less often in this segment, and when individuals do quit they are more likely to relapse, a fact not helped by smoking often being considered the norm among family and peers. And, for many members of this population, smoking simply meets a need that cannot be met with something else.

One of the greatest risks in tobacco prevention is smoking during pregnancy. This risk leaves pregnant mothers of low socioeconomic status seriously exposed. Smoking rates skyrocket for women with lower levels of education and in lower income brackets. Furthermore, studies now indicate that babies are exposed to smoke through the air and dust in a smoker's home and through residue on clothes, rugs, and walls, even when parents don't smoke around the child, pointing up the importance of smoke-free homes and cars. Find out how pregnant mothers can benefit from brief interventions at our Provider page.

As many as 700,000 people are homeless on any given night. The homeless population is heavily represented by the mentally ill, over half have drug or alcohol dependency, and a disproportionate number of homeless people are ethnic minorities. Homelessness makes people exceedingly susceptible to smoking. Studies suggest that between 70% and 99% of homeless adults smoke. The homeless experience high stress, and feel vulnerable, overwhelmed, and helpless. Also, mental illness and chemical dependency can increase susceptibility to marketing efforts that suggest that tobacco can help them cope. Hand-rolled cigarettes without filters, using recycled tobacco from butts, and group smoking can increase the dangers of each cigarette smoked, further endangering the health of this vulnerable population. The hazardous consequences of these behaviors can be seen in the dramatic increase in throat and mouth cancer among homeless people.
The uneven heating of the Earth's surface creates hot and cold spots, creating the potential for convective currents. Air above a warm spot rises, creating a low pressure zone near the surface. Colder, denser air near the surface in a higher pressure zone then flows in, creating wind.

Heat's effect on the wind cycle is most easily observed near the sea. During the day, the ocean warms more slowly than the land, keeping the air above it cooler than the air above the land. When the warm air rises from the shore, the cooler air from the ocean blows in to take its place. At night, the land cools quickly, while the ocean radiates the heat it absorbed, keeping the air above it warm. This reverses the effect, causing the air offshore to rise and creating a breeze from the land.

Uneven heating can also occur due to geographical features. A mountainside exposed to the sun will warm more than a shadowed valley below it, creating a wind that blows uphill during the day and downhill at night.

The same principle is responsible for global wind patterns. The polar regions receive less solar radiation than areas near the equator, resulting in a global shift in temperature according to latitude. This, along with differences in temperature and density in various layers of the atmosphere, creates global wind patterns like the jet stream that help drive weather systems.
Welcome to our final installment of The World Through Sound. Last time, we learned about linearity, non-linearity, and how linearization allows scientists to treat complicated systems like much simpler analogs through approximation. In this article, we will explore the concept of acoustic absorption and how sound can take many forms, including pressure, flow, and temperature. Sounds don’t last forever. In an open environment (like the outdoors) a sound will spread over a growing area from its starting point, the energy spreading out, causing the sound to get quieter and quieter until it finally dies away. Similarly, in an enclosed room the sound will also spread, but only until it fills the room. In both of these cases, though, there is an effect beyond the spreading of acoustic energy that causes sound to grow quieter: acoustic absorption. Acoustic absorption describes pretty much any process that causes sound energy to be dissipated, and intelligent use of acoustic absorption is how we keep concert halls from turning into echo chambers or ensure that speech in offices and classrooms is intelligible. What part of a room is responsible for most of the absorption though? It might be natural to assume that air absorbs a lot of the sound since that’s where the sound spends most of its time. But because air is a reasonably good acoustic medium, sound can move through it with only a little bit of loss. These losses only become noticeable at very long distances such as in outdoor acoustics. But what about the walls and other surfaces of an enclosed space? With lots of jutting angles to break up the waves, you might expect that good solid walls and objects help to bring down the noise. Again, though, solid walls are good reflectors of sound, acting much like how a mirror would for light. While walls are good for redirecting sound, they aren’t that great for getting rid of it. With only solid walls and air for absorption, sound can reverberate for a startlingly long time. Certain materials are very good for absorbing sound. Any sort of cloth or porous material, for example, will generally convert sound energy into heat. One way to understand this is to think about how sound causes air to move. While sound is generally thought of as a pressure wave, there is also air motion associated with the sound. The regions of high and low pressure are caused by sound flowing into and out of different regions, bunching up in some areas and thinning out in others. The equation a scientist would use here to quantify this relationship involves specific acoustic impedance, which gives a relationship between pressure and air flow, and the strength of that connection depends on air density and sound speed. Naturally, any porous material that restricts the flow of air is going to prevent that motion, and thus reduce the energy in the sound wave. At least, that’s the way the explanation usually goes. There’s a bit of a wrinkle to this explanation, albeit one that requires a bit of background knowledge about how sound moves in a room. For many of these absorbing materials that impede air flow, like carpets and wall tiles, the absorbers are on or near a solid surface. It turns out that because those surfaces reflect sound, there’s not actually a lot of flow in those areas. Instead, there’s a trade-off so that the flow is low but the pressure is high. But if this is the case, then why are carpets and acoustic tiles still effective absorbers? Shouldn’t the low flow prevent them from being useful? 
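As an aside, the pressure-flow relationship alluded to above can be written down compactly. For a plane sound wave, the acoustic pressure and the particle velocity (the flow) are linked by the specific acoustic impedance of the medium, which for air is simply the product of the ambient density and the speed of sound:

p = z_0 \, u, \qquad z_0 = \rho_0 c

where p is the acoustic pressure, u is the particle velocity, \rho_0 is the density of air, and c is the speed of sound. This is why choking off the flow u with a porous material removes energy from the wave, and also why the trade-off near a reflecting surface, where flow is low but pressure is high, matters so much for the question just posed.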
To find the answer, we must first consider a rather surprising connection between sound and temperature. As we have previously discussed, sound is a pressure wave. But, as you may remember from physics class, a gas under pressure will increase in temperature. The equation that should come to mind here is the ideal gas law, which directly relates pressure and temperature. As a result, where the pressure of a sound wave increases, so does the temperature. I know that I was surprised to learn this fact, and I was even more surprised to learn of an entire branch of acoustics devoted to using sound to manipulate heat, called thermoacoustics, that has led to successful acoustic refrigerators and even sound lasers!

So what does temperature have to do with sound absorption? A lot, it turns out. Just like how materials that impede flow are good at dissipating the acoustic energy associated with flow, materials that insulate temperature are great for dissipating the acoustic energy of temperature. This is why hiding under a blanket is good for dulling sound, why fiberglass insulation helps keep out unwanted noise, and why carpeted rooms are so much quieter than rooms with tile. Even better, because reflecting surfaces cause the pressure (and therefore temperature change) to peak near a wall or floor, thermally insulating acoustic tiles and carpets are perfectly positioned for maximum effect in those locations! The topic of controlling sound in a room will be highlighted in an article by Dr. Bonnie Schnitta in the fall 2016 issue of Acoustics Today, and techniques for managing reverberations in large spaces were covered by Russ Berger in *From Sports Arena to Sanctuary: Taming a Texas-sized Reverberation Time*.

Physics is full of equations that connect seemingly different values. There are equations like the ideal gas law that relate pressure and volume to temperature. There are equations of motion and so-called laws (like Newton's laws) that tell how objects move. In all of these equations, we draw connections between dissimilar quantities, but in the process we can see how those concepts are really related. This is the power of substitution, probably the most useful mathematical tool to which we have access. With substitution, science can make the leaps of insight that further our understanding of the world. Sound is made up of waves of pressure, but it's also made up of waves of flow and waves of temperature change and waves of density. Depending on which angle we consider, different solutions present themselves.

This has been the final installment in this series of The World Through Sound. For those of you that are new to the world of acoustics, I hope that I have taught you something and shared just a touch of the enthusiasm I have for this branch of science. For those of you that are experienced acousticians, I hope that I have done our field justice and that the perspectives I've given might help you to share acoustics with those around you. And if you are hungry for more acoustics reading (or you came into this series late) you can find more of The World Through Sound and lots of other articles on acoustics at AcousticsToday.org. If you want to see more popular acoustics from me, you can check out my blog over at ListenToThisNoise.com or follow me on Twitter @ListenToNoise.

In closing, I would like to thank Acoustics Today and the Acoustics Today Advisory Board for this internship and the opportunities for outreach that it has given me.
I would especially like to thank my editor Arthur Popper for his feedback and help throughout this series and webmaster Daniel Farrell for converting my articles into such an attractive online format. And, of course, I would like to thank all of my readers for following this series over the last year. It really has been a pleasure sharing my love of acoustics with all of you, and I truly hope that I have passed at least a little of that love on to you. Andrew “Pi” Pyzdek is a PhD candidate in the Penn State Graduate Program in Acoustics. Andrew’s research interests include array signal processing and underwater acoustics, with a focus on sparse sensor arrays and the coprime array geometry. Andrew also volunteers his time doing acoustics outreach and education as a panelist and moderator on the popular AskScience subreddit and by curating interesting acoustics news for a general audience at ListenToThisNoise.com. Contact info: [email protected]
Often, as teachers, we are so focused on making sure our lesson plans fully engage, assess, and develop multi-tiered levels of learning that we forget one of the most substantial ingredients in helping our students excel… creativity! Here are five tips to help students of all learning abilities and across all subjects experience more active engagement by stimulating their senses and captivating their attention.

- READ ALOUD
Yes, this is something done early and often in students' scholastic careers, but the technique should be employed throughout all levels. No matter what text is assigned, there is opportunity for the teacher to lead students in creative ways to cover the information. Have students take turns, choose who reads at random, and change it up often. Finding ways to make covering the information engaging and spontaneous for all students will keep them actively involved.
- ADD COLORS
Often, students need to take notes on the subject at hand. Have students color-code their notes with colored pencils. For each important new piece of information they need to write down, change up the color. It's a simple way to create a more vivid collection of content primed for studying.
- EMPOWER STUDENTS TO TEACH
A creative method to assess students on the lesson is to have them share their discoveries and newfound understandings by teaching the content back to the entire classroom. Set aside time to give students the stage and let them share and teach what they've learned. It will help students commit to following along and processing what is being taught, so they can be ready to teach it back to their peers.
- CONNECT WITH STUDENTS' LIVES
At some point in the lesson, ask students to write a journal entry about something in their lives that relates to the subject at hand. Students will always engage more with material if they understand how it connects to their own lives. These parallels are key and well worth the discovery time on the teacher's side.
- CREATE A CREATIVE WALL SPACE
Encouraging students to share their work and newfound knowledge is key to unlocking a feeling of pride in their accomplishments and motivating others to do the same. Have a wall in the room where students can share visual proof of their learning. It can feature student work, and even be a place where students can share facts and information they want to highlight from their daily lessons. Create a place for students to write, draw, and share their thoughts and insights. This will encourage self-expression and become another way to support review of the topics you are teaching in your current lesson plan.

What are some creative ways you have enhanced your lesson plans? We'd love to hear from you!
In every industry, technicians and engineers who work with automation, instrumentation, and control equipment and systems deal with measurement issues every day. Knowing that sensors convert characteristics of a physical environment or process into electrical signals is only part of the job; a lot can happen to these signals on the way to the equipment or systems that require their input. Signal conditioning is how they get measurable signals out of these sensors.

- What is signal conditioning?
- Types of signal conditioning
- Signal conditioning systems
- Mitigating noise for signal measurement
- Ground loops and noise

What is signal conditioning?

Technicians and engineers know that most industries are not necessarily signal-friendly. Ground loops and noise can threaten signal integrity. Some sensor signals are weak, some require conversion, and some can't produce signals without an external power source. Technicians and engineers who deal with measurement issues often use signal conditioning to obtain, maintain, and improve the quality of these measurements.

Signal conditioning is the preparation of an analog signal before it becomes digital. This manipulation of a signal prepares it so technicians and engineers can accurately and precisely measure it using digital tools. Signal conditioning protects desired signals from unwanted components or effects by using isolation or filtration. Signal conditioning devices use isolation to minimize the effects of interference caused by ground loops or noise, which can affect measurement accuracy. Filters are designed to minimize noise by rejecting signals within a specific frequency range while allowing others to pass. For example, the low pass filters built into many Fluke digital multimeters and clamp meters effectively block high frequency noise components, like those from VFDs, in measurement signals.

Signal accuracy vs signal precision

Accuracy and precision are often confused with one another, but they are not the same thing. Accurate results show how close the output is to the desired value, while precise results show how close measurements are to one another. If you're looking at a bullseye, highly accurate results would look like a bunch of hits close to the center of the bullseye, as the center is the value you want to hit. Highly precise results could be anywhere on the bullseye, as long as the hits are clustered close to one another. Taking the time to understand signal conditioning, and taking the necessary steps to convert signals when needed, can help ensure more accurate and precise measurement results.

Types of signal conditioning

Signal conditioning acts to translate the measurements from sensors to the devices that use their signals. Many measurement devices and their associated systems are picky when it comes to the types of signals they require. Therefore, the sensor signals must be translated, or converted, before measurement devices can use them. Depending on the starting and ending signal, a couple of different types of signal conditioning could be used.

Signal conversion takes the output signal from one device and changes it into a signal suitable for another. Types of signal conversion include amplification, attenuation, linearization, cold-junction compensation, and excitation. Some measurement devices prefer voltage, while others prefer current; it just depends on what you're working with. Most of these devices require signals at a much higher level than most sensors produce.
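As a concrete, purely illustrative example of signal conversion, the sketch below amplifies a millivolt-level sensor signal and maps an engineering value onto the common 4-20 mA current-loop range. The sensor range, gain, and reading are invented values and do not describe any particular instrument.

```python
def amplify(sensor_mv, gain=100.0):
    """Amplification: boost a millivolt-level sensor signal to volt level."""
    return sensor_mv * gain / 1000.0  # result in volts

def to_current_loop(value, value_min, value_max):
    """Map an engineering value onto a 4-20 mA current-loop signal."""
    span = value_max - value_min
    fraction = (value - value_min) / span
    fraction = min(max(fraction, 0.0), 1.0)   # clamp out-of-range readings
    return 4.0 + 16.0 * fraction              # 4 mA = bottom of range, 20 mA = top

# Hypothetical sensor that produces 0-50 mV over its full measurement range.
reading_mv = 23.7
volts = amplify(reading_mv)                        # ~2.37 V after amplification
milliamps = to_current_loop(reading_mv, 0.0, 50.0)
print(f"{volts:.2f} V, {milliamps:.2f} mA")
```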
Thermocouples remain the classic signal-conversion example, requiring three of the five signal conversion types: amplification, linearization, and cold-junction compensation. Thermocouples are used in many applications, but it can be tricky to understand how they work and how to calibrate them. Fluke Calibration has a series of application notes to understand more about thermocouples:
- Thermocouple Fundamentals
- How to Select Thermocouple Calibration Equipment
- Calculating Uncertainties in a Thermocouple Calibration System
- How to Calibrate a Thermocouple

Thermocouples produce voltages in the millivolt range. However, some of the temperature measurement devices with which they are used require a larger signal, either 1-5 V DC (Direct Current), 0-10 V DC, or 4-20 mA, for proper operation. In these cases the thermocouple signal must be amplified to be useful.

Thermocouples are notoriously non-linear. However, other sensors such as resistance temperature detectors (RTDs), strain gauges, and accelerometers are non-linear as well. A sensor is said to be linear if the change in the value of the signal it produces is directly proportional to the change in value of the physical environment or process it measures. When sensors produce signals that do not relate linearly to their physical measurements, linearization must be used to make sense of them.

The connection between the thermocouple and the terminal of the measurement device also forms a junction, called the cold junction. If left as is, the cold junction will produce a measurement error. However, cold-junction compensation makes use of a thermally-sensitive device, such as a thermistor or diode, to sense the cold-junction temperature and apply the appropriate correction so the thermocouple can be measured accurately and precisely.

Some sensors do not produce a voltage or current on their own and consequently require excitation in the form of an external voltage or current. Excitation is the process of supplying this external electrical input so that a passive sensor can produce a measurable output. Some sensors that require excitation include RTDs, thermistors, strain gauges, and accelerometers.

Signal conditioning systems

When using signal conditioning in measurement systems, each sensor needs an individual signal conditioner based on the signal that sensor type produces. Signal conditioning modules and devices are available for most analog signal types; however, stocking spares for every type can become expensive and difficult to manage. Fortunately, there are also universal signal conditioners.

Universal signal conditioners, or universal signal conditioning systems, accept a wide range of inputs and provide multiple outputs as well. These are an easier, more cost-effective solution. Universal signal conditioners convert, isolate, and transmit signals from many types of sensors as well as controllers and other devices. These versatile signal conditioners typically accept current, voltage, thermocouple, RTD, and resistance inputs. Typical universal signal conditioner outputs include current, voltage, and programmable relays.

Technicians and engineers who work with automation, instrumentation, and control equipment and systems apply signal conditioning to improve measurement accuracy and control system performance by optimizing signals from process sensors. However, these control systems must also be properly maintained and periodically calibrated. Fortunately, Fluke tools can help maintain the reliability, accuracy, and safety of process measurement and control systems.
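To show how linearization and cold-junction compensation fit together in software, here is a sketch built around a small, hypothetical voltage-to-temperature lookup table. The table values are invented; real thermocouple work should use the published reference tables or polynomials for the specific thermocouple type, which are not reproduced here.

```python
import bisect

# Hypothetical (NOT standard) lookup table: thermocouple EMF in mV -> temperature in degC.
MV = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
DEG_C = [0.0, 25.3, 49.8, 73.9, 97.6, 121.0]

def mv_to_temp(mv):
    """Linearization: interpolate between lookup-table points to get a temperature."""
    i = bisect.bisect_right(MV, mv) - 1
    i = min(max(i, 0), len(MV) - 2)
    frac = (mv - MV[i]) / (MV[i + 1] - MV[i])
    return DEG_C[i] + frac * (DEG_C[i + 1] - DEG_C[i])

def temp_to_mv(temp_c):
    """Inverse lookup, needed for cold-junction compensation."""
    i = bisect.bisect_right(DEG_C, temp_c) - 1
    i = min(max(i, 0), len(DEG_C) - 2)
    frac = (temp_c - DEG_C[i]) / (DEG_C[i + 1] - DEG_C[i])
    return MV[i] + frac * (MV[i + 1] - MV[i])

def compensated_temperature(measured_mv, cold_junction_temp_c):
    # Add back the EMF "lost" at the cold junction (its temperature is measured
    # with a thermistor or diode), then linearize the corrected voltage.
    corrected_mv = measured_mv + temp_to_mv(cold_junction_temp_c)
    return mv_to_temp(corrected_mv)

print(compensated_temperature(measured_mv=2.4, cold_junction_temp_c=23.0))
```

The same interpolation idea extends to any non-linear sensor for which a calibration table is available.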
Mitigating noise for measurement signals

Process indicators, data acquisition and collection equipment, PLCs (Programmable Logic Controllers), process control systems, and any other measurement devices require the appropriate signals, and nothing but the signals. In other words, these signals must be "clean" and noise-free. Motors, variable frequency drives (VFDs), welding equipment, and transients that occur due to starting and stopping equipment can produce noise that can affect signals traveling from sensors to their intended measurement devices.

Electrical and instrumentation wiring best practices, including proper grounding and shielding, go a long way toward minimizing unwanted noise. Using a portable oscilloscope, you can monitor noise on control loops to get an idea of what you're working with. Well designed and properly installed systems have fewer noise problems, but many times good wiring alone is not enough to get a clean measurement. Knowing when, where, and how to apply signal conditioning can help keep these signals happy and make measurements easier.

Ground loops and noise

Ground loops and noise can cause interference in measurement signals. A ground is supposed to be at ground potential and referenced to Earth. However, a ground loop exists when a single control signal has two or more grounds at different potentials and there is a conducting path between the grounds. This difference in ground potential can create an unpredictable magnitude of additional current in the loop. This current is not part of the desired signal, and therefore distorts the signal and the measurement. Signal conditioning devices maintain signal integrity by breaking the ground loop current path.

Industrial equipment produces electrical noise that can affect measurement signals. Normal mode noise typically occurs in AC circuits between the neutral and "hot" wires. Common mode noise typically occurs between the neutral and ground wires but can also occur between the hot and ground wires. While normal mode noise can also occur in DC measurement circuits, common mode noise is more regularly seen and causes the most measurement errors. In either case, isolating the signal can help prevent noise from affecting measurement accuracy.
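Filtering is normally done in hardware or inside the instrument, but a simple first-order low-pass filter written in software shows the idea of rejecting fast noise while keeping the slower process signal. The cutoff frequency, sample rate, and waveform below are invented for illustration.

```python
import math

def low_pass(samples, cutoff_hz, sample_rate_hz):
    """First-order low-pass filter: passes slow signal changes, attenuates fast noise."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)          # smoothing factor between 0 and 1
    filtered = [samples[0]]
    for x in samples[1:]:
        filtered.append(filtered[-1] + alpha * (x - filtered[-1]))
    return filtered

# A slow 1 Hz "process" signal with fast 60 Hz noise riding on it (all values invented).
rate = 1000.0
noisy = [math.sin(2 * math.pi * 1.0 * t / rate) + 0.3 * math.sin(2 * math.pi * 60.0 * t / rate)
         for t in range(1000)]
clean = low_pass(noisy, cutoff_hz=5.0, sample_rate_hz=rate)
print(max(abs(n - c) for n, c in zip(noisy, clean)))
```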