On November 23, 1887, the Louisiana Militia, aided by bands of prominent white citizens, shot and killed 30 to 60 unarmed striking Black sugar workers in what became known as the Thibodaux Massacre. Black Louisiana sugarcane workers, in cooperation with the racially integrated Knights of Labor, had gone on strike at the beginning of November 1887 over their meager pay, which was issued in scrip, not cash. The scrip was redeemable only at the company store, where excessive prices were charged. Much of this history was documented through research by John DeSantis, author of The Thibodaux Massacre (History Press). The story of how he learned of and found accounts of the massacre in pension records is a history lesson in and of itself. Years after the Thirteenth Amendment brought freedom, cane cutters’ working lives were still “barely distinguishable” from slavery, argues journalist and author DeSantis. With no land to own or rent, workers and their families lived in old slave cabins. They toiled in gangs, just as their ancestors had for nearly a century. Growers gave workers meals but paid famine wages of as little as 42 cents a day (about 91 cents per hour in today’s money for a 12-hour shift).
SCIENCE STANDARD 3 TERM 1 SCHOOL: DOW VILLAGE GOV'T PRIMARY SCHOOL TEACHER: VIJAY RAMNATH This is a single session. It bridges the gap between the Standard 3 understanding of fractions and where Standard 4 begins. The big idea: fractions are more meaningful if we know their value. To follow along in this lesson, students must be accustomed to: - Drawing bar models. - Working with set models. - Trying new ways of understanding concepts. - Multiplying 3-digit numbers by 1-digit numbers. - Dividing 3-digit numbers by 1-digit numbers. Vowel sounds in some words can lead a writer to make errors. Practice using contextual passages in which these spelling words appear. NB - Some of the passages may be nonsensical in nature or non-factual. If any teacher has a passage to contribute to the course, please email the passage to me at [email protected] and I will be sure to include it in future activities within the same course. This course explores the uses of the comma. The course begins at the 3rd session; earlier sessions will be added to the unit of work within the coming weeks. If you are interested in working together to produce lessons like this, please let me know and I will try to assist. Hopefully we can build out some of the content needed to ensure an education for our future leaders. Send me an email at: [email protected]
In the spring of 1945, German forces were reeling from a series of devastating defeats on the eastern and western fronts. In the west, US forces contained and repelled a German counteroffensive in the Battle of the Bulge. On March 7, units of the American 9th Armored Division captured the Ludendorff Bridge at Remagen and established a bridgehead over the Rhine River, the largest remaining obstacle confronting American forces in their drive into Germany. In the east, decimated German units retreated in the face of Soviet assaults on a front that stretched from Yugoslavia to Lithuania. Citizens of Hitler’s Third Reich also could not escape the horrors of war. Germany’s cities lay in ruins from the Anglo-American bombing campaign, which dropped more than 45,000 tons of bombs on German population centers between January 1944 and January 1945. Although in hindsight it is tempting to view March 1945 as the closing act of the war in Europe, German forces continued to inflict heavy casualties on Allied forces. On March 6, the German Army launched a final offensive near Lake Balaton in Hungary in a bid to protect valuable oil fields in the region. In the west, Germany launched thousands of V-2 rockets at targets in the United Kingdom, France, Belgium, and the Netherlands. More than 3,000 of the rockets had struck Allied-held territory by March 1945, killing in excess of 4,500 people. Similarly, the British and American air campaign against Germany did not show any signs of abating in March 1945. In the spring of 1945, the Anglo-American air forces launched increasingly large and destructive raids on German cities. Between February 13 and February 15, American and British bombers destroyed the virtually undefended city of Dresden, Germany, killing more than 25,000 civilians. On March 11, the Royal Air Force reduced much of the city of Essen to rubble when 1,079 British aircraft dropped over 4,700 tons of bombs on the city. Meanwhile, the US Army Air Force dropped approximately 600 tons of bombs on German rail yards every month from September 1944 to April 1945. In the midst of so many devastating raids, one of the most significant air battles of the war has frequently been overlooked. On the morning of March 18, 1,329 bombers and 733 fighters of the US Eighth Air Force formed up over England and set a course for northern Germany. The target for 1,221 of the bombers was Berlin. This mission, the largest wartime raid on Berlin, was intended to support the Russian advance by attacking rail stations and tank factories in the city. During the previous two weeks, the American bombers had encountered little fighter resistance on their daylight bombing raids. In the pre-mission briefings on March 18, however, intelligence officers warned crews to be on the lookout for a new German jet fighter, the formidable Messerschmitt Me 262. With a top speed of 540 mph, the Me 262 was over 100 mph faster than the best American fighter of the war, the North American P-51 Mustang. As the American bomber fleet and its fighter escorts approached the German capital, more than 70 German fighters intercepted the invaders. Historian Donald Miller called the ensuing battle the “most tremendous air battle of 1945.” The German force consisted of three dozen jets and an equal number of piston-engine fighters. Despite being outnumbered 25 to 1, the German fighters used cloud cover to evade the American fighter escort and close with the bomber formations. 
Thirty jets from the newly formed Jagdgeschwader 7 streaked through the formation and shot down seven B-17 bombers in just eight minutes. The German jets each carried two dozen rockets slung under their wings. These projectiles could bring down a bomber with a single hit. To increase their lethality, the German fighters lined up abreast and fired their rockets into the bombers at close range. Fragments of the bombers rained down over the German countryside, while six American fighters that engaged the German defenders were also shot down. The armada next encountered German flak, which inflicted even more casualties. More than half of the bombers, 714 planes, sustained damage from German anti-aircraft fire. Sixteen suffered hits so severe that they had to crash land behind Soviet lines. In total, 24 bombers and six fighters were lost on the mission. Some 178 Americans were killed, wounded, or captured in the raid. The German Luftwaffe lost just three pilots. Despite the Luftwaffe’s success, it was unable to prevent the vast majority of the American bombers from dropping their payloads on Berlin. Although the bombers targeted the city’s rail yards, their customary inaccuracy combined with the intermittent cloud cover meant that more than 3,000 tons of bombs impacted all over the city. Owing to the previous devastation of Berlin and the untold number of refugees in the city, it is impossible to know the exact number of Germans killed in the raid. Conservative estimates put German losses around 3,000 civilians. The German Me 262 jet fighter returned the qualitative edge to Hitler’s air force in the last days of the war, and German jets shot down a total of 63 bombers in the war’s final months. Yet the belated appearance of the best fighter of the war could not prevent Germany’s ultimate defeat. On April 25, American and Soviet forces linked up on the Elbe River near Torgau, while the Soviet Army fought its way into the heart of Berlin. American and British bombers finally ran out of targets in April 1945, but they had gutted as many German cities as possible in their effort to compel Germany’s surrender. March 18 was the largest Allied raid on Berlin during the war, yet the bombing mission rated only a single paragraph in the US Army Air Force’s official history of World War II. One possible reason for the mission’s obscurity is that it was one of more than 350 wartime bombing missions that targeted the German capital. The postwar years have also witnessed a contentious debate over the role of airpower in bringing about Germany’s defeat. US Army Air Force leaders in Europe commanded the most powerful strategic air force in history by the beginning of 1945, and they were determined to prove that the resources devoted to the bomber force were justified. This goal was all the more important since their demands on American manpower had contributed to a dire shortage of infantrymen in Europe in the final six months of the war. In addition, Army Air Force leaders looked ahead to their postwar fight to become an independent service and secure appropriations for a permanent strategic bomber force. Consequently, American air commanders exhibited particular zeal in their spring 1945 campaign. Historian Tami Biddle argued that the United States Strategic Air Forces’ “wide-ranging targeting choices for the month of February revealed an almost desperate quest for a decisive use of strategic air power.” The same could be said of the March 18 raid, which had an undetermined impact on Germany’s defeat. 
Though the raid had a strategic justification, there was also no denying the psychological appeal of targeting Hitler’s capital with the full power of the US Eighth Air Force, the largest bomber force in American history.
Commercially, we can find Hall effect sensors both in their simple form and in a bridge configuration. One of the advantages of the bridged configuration is that it supports detecting field variations in both directions, simplifying the design of detector circuits. Hall effect sensors can be defined as transducers that vary their output voltage in response to a magnetic field. The operation of these sensors is based on the Hall effect. The Hall effect tells us that if we measure the voltage across a current-carrying conductive element, perpendicular to the current, we find that it is zero volts; but when a magnetic field is applied to this same element, a small voltage appears between its two edges. This potential difference is due to a force (the Lorentz force) that pushes the moving electrons toward one side of the element. It is from this potential difference that the Hall effect sensor derives its output. These sensors are generally best used for measuring rotor speed in electric motors rather than position.
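To make the relationship concrete, here is a minimal sketch (not taken from the article) that computes the Hall voltage from the standard relation V_H = I·B / (n·q·t) for a thin conducting plate; the material constants and geometry below are assumed example values.

```python
# Minimal sketch (assumed values, not from the article): Hall voltage across a
# thin current-carrying plate in a perpendicular magnetic field.
# Standard relation: V_H = I * B / (n * q * t)

ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def hall_voltage(current_a, field_t, carrier_density_m3, thickness_m):
    """Transverse (Hall) voltage that appears when a magnetic field is applied."""
    return current_a * field_t / (carrier_density_m3 * ELEMENTARY_CHARGE * thickness_m)

# Assumed example: 10 mA through a 0.1 mm-thick semiconductor strip
# (carrier density ~1e22 per m^3) in a 0.1 T field.
v_h = hall_voltage(current_a=0.010, field_t=0.1,
                   carrier_density_m3=1e22, thickness_m=1e-4)
print(f"Hall voltage: {v_h * 1000:.2f} mV")  # a few millivolts: the "small voltage" the text describes
```

With the field set to zero the computed voltage is zero, matching the no-field behaviour described above.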
What achievements did the Song dynasty accomplish? Just a few of these advancements included improvements in agriculture, development of movable type, uses for gunpowder, invention of a mechanical clock, superior shipbuilding, the use of paper money, compass navigation, and porcelain production. What belief system did the Song dynasty have? Buddhism flourished in the Tang and Song dynasties along with religious Daoism and a revival of Confucian thinking (referred to as “Neo-Confucianism”). What were the important events and accomplishments during the Song dynasty? China’s rice production booms, which contributes to population growth and an economic revolution. - Song Dynasty. - Emperor Taizu founds the Song dynasty. - The Song agree to annual payments to prevent northern invasion. - A new variety of early-ripening rice leads to an economic revolution. - Bi Sheng invents movable type. (c. What are some significant cultural contributions of the Song Dynasty? Rice Cultivation. A Labor-Intensive Crop. How did the Song dynasty use Confucianism? The revived Confucianism of the Song period (often called Neo-Confucianism) emphasized self-cultivation as a path not only to self-fulfillment but to the formation of a virtuous and harmonious society and state. What were some of the great cultural achievements of the Tang and Song Dynasties? The Song and Tang dynasties in China were equivalent to Rome’s Pax Romana in terms of scientific advancement. A few of these advancements were the development of primitive gunpowder, porcelain, and paper money during the Tang and Song Dynasties, as well as the magnetic compass. Which set of accomplishments occurred during the Song dynasty of China? The advances that occurred during the Song Dynasty were the inventions of the dragon backbone, the cultivation of cotton, and fast-ripening rice. What 3 inventions during the Tang and Song Dynasties impacted the world the most? The development of gunpowder, in time, led to the creation of explosive weapons such as bombs, grenades, small rockets, and cannons. Other important inventions of this period include porcelain, the mechanical clock, paper money, and the use of the magnetic compass for sailing. Did the Song Dynasty invent the compass? The magnetic compass was first invented as a device for divination as early as the Chinese Han Dynasty and Tang Dynasty (since about 206 BC). The compass was used in Song Dynasty China by the military for navigational orienteering by 1040–44, and was used for maritime navigation by 1111 to 1117. What were the achievements of the Song dynasty? The Song government was the first in world history to issue banknotes or true paper money nationally and the first Chinese government to establish a permanent standing navy. This dynasty also saw the first known use of gunpowder, as well as the first discernment of true north using a compass. What was the religion of the Song dynasty? Statue of Confucius. Confucian teaching was the dominant religion and philosophy of the Southern Song. During the Song Dynasty era, the religions of Daoism and Buddhism became less popular among the ruling class than in previous eras. How did the Song dynasty try to limit the influence of Buddhism? The cultural and economic power of the monasteries made many emperors nervous, and the Song rulers attempted to limit Buddhist influence by requiring monks to purchase tax-exemption certificates before joining a monastery. 
What was the legal system like in the Song dynasty? The Song judicial system retained most of the legal code of the earlier Tang dynasty, the basis of traditional Chinese law up until the modern era. Roving sheriffs maintained law and order in the municipal jurisdictions and occasionally ventured into the countryside.
Little Explorers: Bugs is part of an informative series in which young children can learn all about various types of bugs and insects. This picture book is a perfect way for young children to discover the vast world of little creatures that roam the earth. Find out which bugs are the smallest and which ones are the biggest, how they help maintain a balance in our ecosystem and why we need them in nature, why some insects have bright colors, which insects are the prettiest, and which fall into the category of squirmy and creepy! Little Explorers: Bugs calls out to all the little explorers with over 30 robust and colorful flaps that children can lift to read, with eye-catching illustrations and cute artwork that make this picture book stand out. With this highly informative book, young children are inspired to learn more about a variety of little creatures, bugs, and insects in a fun way that will keep them engaged and excited. It is an amazingly detailed picture book that ignites an interest in young readers to learn fascinating facts about various kinds of insects such as wasps, butterflies, dragonflies, snails, beetles, and many more, and keeps them intrigued throughout their exploration. All the interesting information that fills these colorful pages of Little Explorers: Bugs is divided into small chunks of informative text so that it is easier to read, learn, and remember. It’s also full of captivating facts and trivia related to insects and bugs that make it an enjoyable read for both young children and adults alike. Little Explorers: Bugs will also play a crucial part in introducing new vocabulary related to the world of insects into young children’s day-to-day lives and in teaching various concepts and terminology to expand their knowledge in an effortless way.
Data communication refers to the transfer of digital bitstreams across point-to-point and networked devices. These communications can be arranged as either parallel or serial transmission. In parallel transmission, several electrical wires are used to send a signal; usually, these wires are connected to a device such as a computer. Moreover, there are proprietary protocols that help increase the efficiency of retransmission, reducing the overall number of retransmissions required. Nevertheless, data transfer may result in information loss. Consequently, a checksum method is used to detect errors. The checksum value is normally appended to each packet in the sequence; if the receiver's recomputed checksum does not match the transmitted one, an error is indicated. It is also vital to be aware of timing skew, since this can corrupt communications. To avoid this, the receiver should be synchronized with the transmitter. Generally, timing skew tends to worsen as the distance between the source and the destination increases. For efficient data transmission, the receiver has to be able to read the individual binary bits, which is possible when the receiver has proper timing information. Likewise, the receiving device must be able to detect errors, which is done by using error-detecting codes. In addition, the transmitting device should be able to adjust its transmission power. These are the major components of a data communication system. However, the system can only work if the protocol rules are followed. Therefore, it is essential that the communication protocols are defined in advance.
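The checksum idea mentioned above can be illustrated with a short sketch. It uses a generic 16-bit ones'-complement checksum, a style common in network protocols; this is an assumed example, since the passage does not name a specific scheme.

```python
# Illustrative sketch of an error-detecting checksum (assumed 16-bit
# ones'-complement style; the passage does not specify which scheme is used).

def ones_complement_checksum(data: bytes) -> int:
    """Return the 16-bit ones'-complement checksum of the payload."""
    if len(data) % 2:                 # pad odd-length payloads with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back into 16 bits
    return ~total & 0xFFFF

def verify(data: bytes, checksum: int) -> bool:
    """Receiver side: recompute and compare; a mismatch indicates an error."""
    return ones_complement_checksum(data) == checksum

payload = b"example packet"
chk = ones_complement_checksum(payload)           # sender appends this to the packet
assert verify(payload, chk)                       # intact packet passes
assert not verify(b"exbmple packet", chk)         # a corrupted byte is detected
```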
Every person 6 months of age and older, who has not had a serious reaction to the flu shot in the past, should be vaccinated each flu season, according to the Centers for Disease Control and Prevention (CDC). The flu shot reduces your risk of getting sick with the flu by 40 to 60 percent or more, and reduces the severity of flu if you do get sick. It also reduces your child's risk of flu-related illness. Babies under the age of 6 months cannot be given the flu shot directly. Even after they start receiving flu shots, their protection is incomplete until they have had two shots approximately 28 days apart. It generally takes about two weeks after the second shot for babies to acquire maximum protection. According to the Infant Risk Center at Texas Tech University, a breastfeeding mother who has been vaccinated with the flu vaccine will transfer antibodies to her baby through her milk, giving her newborn some protection from the flu virus. “Cocooning”—ensuring that parents, siblings, grandparents, and other caregivers have received a flu shot during the current flu season—can help protect your baby from the flu until he is old enough to be vaccinated. If you or your child becomes ill with the flu, keep breastfeeding. Contact your health care provider and your child’s health care provider right away. The American Academy of Pediatrics and the CDC recommend that individuals at higher risk for flu-related complications (including women within the first two weeks postpartum, and infants) consider taking an antiviral medication, such as Tamiflu. Given that only a small amount of the drug transfers into human milk, Tamiflu is considered compatible with breastfeeding. For more information about the flu and the flu vaccine, click here.
What ecological effect do cattle actually have on grasslands? Grazing is actually very important for grasslands! Just like I prefer to eat pizza over hamburgers, grazers may prefer one type of plant over another. By eating their preferred meal, grazers give the less yummy plant a break so that the playing field becomes more even. Thus, grazers’ eating habits help maintain greater biodiversity on the plains. Grazers assist the ecosystems in other ways as well, such as through the cycling of nutrients (Blair et al.). If grazing is so good, why do we regulate how much livestock can graze on public lands? Like all things, grazing is good in moderation. Just as too little grazing would deprive the grasslands of much needed ecological benefits, too much or overgrazing can wreak havoc on a grassland ecosystem. The selective eating that is so good at intermediate intensity causes extinction and endangerment of plants at high levels, causing intense ripple effects throughout the rest of the food web (Blair et al.). Additionally, the nutrient cycling that cattle do so effectively at high volumes causes excess nutrient runoff that can lead to a wide range of downstream effects including algal blooms and dead zones (Center for Biological Diversity). Too many cattle can also lead to the compaction of soil, making it difficult for plants to grow roots. Thus fewer plants can survive in compacted soil and the rich topsoil that they would have been keeping in place is now free to blow away with the wind. So, what do we do? Effective management is the key to maintaining grassland biodiversity. We must find the sweet spot where we are grazing just enough cattle to take advantage of the economic goods and services that grasslands provide for humans, while not grazing so many cattle that the grasslands are suffering biodiversity loss (Bureau of Land Management). “Grassland Ecology” by John Blair, Jesse Nippert, and John Briggs Fact Sheet on the BLM’s Management of Livestock Grazing by The Bureau of Land Management Grazing by Center for Biological Diversity
This mini-unit for senior-level classes helps students to understand and analyze the key ideas and challenges that preceded Alberta’s entry into Confederation. The first section deals with the debates in the provincial and/or federal legislatures, while the second section addresses more specifically the founding treaty negotiations with the First Nations. Each section can be taught independently. The activities and attached materials will help students understand the diversity of ideas, commitments, successes and grievances that underlie Canada’s founding. By the end of this mini-unit, your students will have the opportunity to: 1. Use the historical inquiry process—gathering, interpreting and analyzing historical evidence and information from a variety of primary and secondary sources—in order to investigate and make judgements about issues, developments and events of historical importance. 2. Hone their historical thinking skills to identify historical significance, cause and consequence, continuity and change, and historical perspective. This mini-unit has been broadly designed for Alberta senior-level classes. The activities described in the following pages, for example, fulfill the outcomes listed in the “Grade 9: Canada: Opportunities and Challenges” curriculum guide. The mini-unit can be accessed here: Before each province and territory became a part of Canada, their local legislatures (and the House of Commons after 1867) debated the extent, purposes and principles of political union between 1865 and 1949. In addition to creating provinces, the British Crown also negotiated a series of Treaties with Canada’s Indigenous Peoples. Although these texts, and the records of their negotiation, are equally important to Canada’s founding, as the Truth and Reconciliation Commission recently explained, “too many Canadians still do not know the history of Indigenous peoples’ contributions to Canada, or understand that by virtue of the historical and modern Treaties negotiated by our government, we are all Treaty people.” The vast majority of these records, however, remain inaccessible and many can only be found in provincial archives. By bringing together these diverse colonial, federal and Indigenous records for the first time, and by embracing novel technologies and dissemination formats, The Confederation Debates encourages Canadians of all ages and walks of life to learn about past challenges, to increase political awareness of historical aspirations and grievances, to engage with present-day debates, and to contribute to local, regional and national understanding and reconciliation.
In their new study, published in Nature Communications, the researchers say that the Antarctic ice sheet reached the point of no return in terms of ice loss earlier than previously thought. “We may already be in the middle of this stage,” they add, which could have dire consequences for global sea level rise and for the natural environment that Antarctica’s animals depend on. Back to the past To reach these results, the researchers went back into the past and studied the history of the continent over the past twenty thousand years – the last ice age – using sediment cores extracted from the sea floor. To put it more simply, the scientists explain that when icebergs break off from the glaciers in Antarctica, they float along a main channel known as Iceberg Alley. The icebergs then drift out to sea, and when they melt, the debris stuck to them accumulates on the sea floor, giving researchers a historical record about 3.5 kilometers underwater. By combining this natural record of iceberg drift with computer models of ice sheet behavior, the team was able to identify eight stages of ice sheet retreat over that period. The study showed that the same pattern of sea level rise occurs in each of the eight stages, with global sea levels affected for centuries and even a thousand years in some cases, and statistical analysis identified turning points for these changes. A new turning point The study's findings bolster recent satellite observations, which only go back about 40 years, showing an increase in ice losses from the interior of the Antarctic ice sheet, not just changes in the ice shelves that are already floating on the water. If we interpret the current shift in Antarctica’s ice in the same way as the past events identified in the study, we may already be in the midst of a new tipping point, like those we have seen in the Arctic in recent years, the researchers say. “Our findings are consistent with a growing body of evidence suggesting that the acceleration of Antarctic ice-mass loss in recent decades may mark the irreversible beginning of ice sheet retreat and global sea level rise,” says geophysicist Michael Weber of the University of Bonn in Germany.
In the quest to make machines that can think like a human or solve problems that are superhuman, it seems our brain provides the best blueprint. Scientists have long sought to mimic how the brain works using software programs known as neural networks and hardware such as neuromorphic chips. Last month we reported on attempts to make the first quantum neuromorphic computers using a component, called a quantum memristor, that exhibited memory by simulating the firing of a brain’s neurons. Going a more Cronenbergian route, Elon Musk (and others) are experimenting with hard-wiring chips into a person’s neural network to remotely control technology via brainwaves. Now, computer scientists at Graz University in Austria have demonstrated how neuromorphic chips can run AI algorithms using just a fraction of the energy consumed by ordinary chips. Again, it is the memory element of the chip which has been remodelled on the human brain and found to be up to 1000 times more energy efficient than conventional approaches. As explained in the journal Science, current networks of long short-term memory (LSTM) operating on conventional computer chips are highly accurate, but the chips are power hungry. To process bits of information, they must first retrieve individual bits of stored data, manipulate them, and then send them back to storage. And then repeat that sequence over and over and over. At Graz University, they’ve sought to replicate a memory storage mechanism in our brains called after-hyperpolarizing (AHP) currents. By integrating an AHP neuron firing pattern into neuromorphic neural network software, the Graz team ran the network through two standard AI tests. The first challenge was to recognise a handwritten ‘3’ in an image broken into hundreds of individual pixels. Here, they found that when run on one of Intel’s neuromorphic Loihi chips, their algorithm was up to 1000 times more energy efficient than LSTM-based image recognition algorithms run on conventional chips. In a second test, in which the computer needed to answer questions about the meaning of stories up to 20 sentences long, the neuromorphic setup was as much as 16 times as efficient as algorithms run on conventional computer processors, the authors report in Nature Machine Intelligence. As always, we’re on the outskirts of the breakthrough making a real-world impact. Neuromorphic chips won’t be commercially available for some time, but advanced AI algorithms could help these chips gain a commercial foothold. “At the very least, that would help speed up AI systems,” says Anton Arkhipov, a computational neuroscientist at the Allen Institute, speaking to Science. The Graz University project leader Wolfgang Maass speculates that the breakthrough could lead to novel applications, such as AI digital assistants that not only prompt someone with the name of a person in a photo, but also remind them where they met and relate stories of their past together.
Welcome to Spotlight on Strategies Challenge! Our S.O.S series provides help, tips, and tricks for integrating DE media into your curriculum. Understanding a situation from multiple perspectives is an important skill for students to master. In addition, the theory of Multiple Intelligences (proposed by Howard Gardner in 1983) has been leveraged in classrooms across the world as a way to differentiate instruction and appeal to students’ strengths and interests. This week’s strategy uses images as a way to engage students in multiple perspectives of a topic in a way that plays to their unique learning styles. •Show students the large size image of Federal Troops At Rest •Have students look at the image through multiple perspectives and give them an activity to complete based on the perspective. For example: - Look at the man in the upper right hand corner. He is reading a newspaper. Take on the perspective of a news reporter from either side of the Civil War and write an article that might be found in that newspaper (Verbal/Linguistic) - Look at the men in the middle of the image. They are playing a game. What other types of games could you create and play in large open fields with limited supplies that you can carry and take along with you? (Kinesthetic) - Look at the man in the lower right hand corner. He is reading a letter from home. Take on the perspective of his wife, his child, or a sibling and write the letter; or take on his perspective and write a response (Intrapersonal) - Look at the man in the middle of the image towards the top. He is sitting on a drum. Take on his perspective on the war and write a song about what life is like as a Federal soldier. (Musical) •Have students work individually or in small groups to complete the activities above. •Share their work with the class - Select an image or video segment that matches your current curriculum. - Have students analyze the image or watch the segment. - Give students multiple perspectives to think through and multiple activities to complete that address a variety of learning styles - Have students share their work with the class You can take the challenge by: - Implementing this strategy and letting us know how it went by posting a comment below. - Using this strategy in your grade level planning discussions and/or professional development and reporting your events. (Remember we consider an event anytime 3 or more educators gather together… doesn’t have to be in a computer lab… could be sitting around the lunch table) - Photocopying the flier and distributing it in your colleagues’ boxes and/or posting it to your own BulleDEN board.
Radiation Glossary Q-R

Quantitative Risk Assessment
a detailed analysis that provides a numerical probability that a particular kind of injury will occur (for example, the number of additional cases of cancer in a group of 10,000). (See also qualitative risk assessment.)

Rad
(See Roentgen Absorbed Dose)

Radiation
energy given off as either particles or rays from the unstable nucleus of an atom

Radiation Protection Guide (RPG)
radiation dose which should not be exceeded without careful consideration for doing so; every effort should be made to encourage the maintenance of radiation doses as far below this guide as practicable

Radiation Sickness (syndrome)
the set of symptoms that results when the whole body (or a large part of it) has received an exposure of greater than 50 rads of ionizing radiation. The earliest symptoms are nausea, fatigue, vomiting, and diarrhea. Hair loss, hemorrhaging, inflammation of the mouth and throat, and general loss of energy may follow. If the exposure has been approximately 1,000 rad or more, death may occur within two to four weeks.
- Radiation Health Effects: This page describes the effects of both long-term and acute exposure to radiation.

Radiation Warning Symbol
an officially prescribed symbol (a magenta or black trefoil) on a yellow background. It must be displayed where certain quantities of radioactive materials are present or where certain doses of radiation could be received.
- Symbols in Radiation Protection: This page provides definitions and images of radiation symbols.

Radioactive Contamination
a deposit of radioactive material in any place where it may harm persons, equipment, or the environment.

Radioactive Decay
the process in which an unstable (radioactive) nucleus emits radiation and changes to a more stable isotope or element. A number of different particles can be emitted by decay. The most typical are alpha or beta particles.
- Can Unstable Atoms Become Stable? This link explains unstable atoms.

Radioactivity
spontaneous transformation of the nucleus of an atom, resulting in a new element, generally with the emission of alpha or beta particles often accompanied by gamma rays

Radiochemical Analysis
a test to detect and determine the amount of radioactive materials present that emit ionizing radiation. It will detect transuranic nuclides, uranium, fission and activation products, naturally occurring radioactive material and medical isotopes.

Radiogenic
caused by exposure to ionizing radiation
- Ionizing and Non-ionizing Radiation: This page explains ionizing and non-ionizing radiation.

Radiological Dispersion Device (RDD)
a device or mechanism that is intended to spread radioactive material from the detonation of conventional explosives or other means. An RDD is commonly known as a "dirty bomb."

Radiography
using radiation sources to "photograph" internal structures, such as turbine blades in jet engines. A sealed radiation source, usually iridium-192 or cobalt-60, beams gamma rays at the object to be checked. Gamma rays passing through flaws in the metal or incomplete welds strike special photographic film (radiographic film) on the opposite side. The process is very similar to taking an x-ray to check for broken bones.

Radioisotopes
isotopes of an element that have an unstable nucleus. Radioactive isotopes are commonly used in science, industry, and medicine. The nucleus eventually reaches a more stable number of protons and neutrons through one or more radioactive decays. Approximately 3,700 natural and artificial radioisotopes have been identified.

Radionuclide
an unstable form of a nuclide

Radium
Radium is a naturally-occurring radioactive metal. Radium is a radionuclide formed by the decay of uranium and thorium in the environment. It occurs at low levels in virtually all rock, soil, water, plants, and animals. Radon is a decay product of radium. This fact sheet describes the basic properties and uses, and the hazards associated with this radionuclide. It also discusses radiation protection related to it.

RadNet
a nationwide system of air, water, and milk sampling stations that monitor radiation in the environment. This page provides information about different RadNet sampling programs.

Radon
Radon is a naturally occurring radioactive gas found in soils, rock, and water throughout the U.S. Radon causes lung cancer, and is a threat to health because it tends to collect in homes, sometimes to very high concentrations. As a result, radon is the largest source of exposure to naturally occurring radiation.

Resource Conservation and Recovery Act (RCRA)
RCRA gives EPA the authority to control hazardous waste from cradle to grave. This includes the minimization, generation, transportation, treatment, storage, and disposal of hazardous waste. RCRA also sets forth a framework for the management of non-hazardous solid wastes. RCRA focuses only on active and future facilities and does not address abandoned or historical sites, which are regulated under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA).

a regulatory limit expressed in terms of dose or risk.

Residual Radioactivity
radioactivity in structures, materials, soils, groundwater, and other media at a site resulting from activities under the cognizant organization's control. This includes radioactivity from all sources used by the cognizant organization, but excludes background radioactivity as specified by the applicable regulation or standard. It also includes radioactive materials remaining at the site as a result of routine or accidental releases of radioactive material at the site and previous burials at the site, even if those burials were made in accordance with the provisions of 10 CFR Part 20.

Risk
the probability of injury, disease, or death under specific circumstances. Risk can be expressed as a value that ranges from zero (no injury or harm will occur) to one hundred percent (harm or injury will definitely occur).

Risk-based Standards
Risk-based standards limit the risk that releasing a contaminant to the environment may pose rather than limiting the quantity that may be released.

Absolute Risk
the excess risk attributed to irradiation, usually expressed as the numeric difference between irradiated and non-irradiated populations (e.g., 1 case of cancer per million people irradiated annually for each rad). Absolute risk may be given on an annual basis or lifetime basis.

Relative Risk
the ratio between the number of cancer cases in the irradiated population and the number of cases expected in the unexposed population. A relative risk of 1.1 indicates a 10 percent increase in cancer due to radiation, compared to the "normal" incidence.

Roentgen
a unit of exposure to ionizing radiation. It is an indication of the strength of the ionizing radiation. One Roentgen is the amount of gamma or x-rays needed to produce ions carrying 1 electrostatic unit of electrical charge in 1 cubic centimeter of dry air under standard conditions.

Roentgen Absorbed Dose (rad)
a basic unit of absorbed radiation dose. It is being replaced by the "gray," which is equivalent to 100 rad. One rad equals a dose of 100 ergs of energy delivered per gram of material.

Roentgen Equivalent Man (rem)
a unit of equivalent dose. Rem relates the absorbed dose in human tissue to the effective biological damage of the radiation. Not all radiation has the same biological effect, even for the same amount of absorbed dose.
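Several of the entries above are related arithmetically (1 gray = 100 rad, and rem relates absorbed dose to biological effect through a radiation weighting factor). A small illustrative sketch, with assumed example weighting factors, shows how the units connect:

```python
# Illustrative unit relationships drawn from the glossary entries above:
# 1 gray (Gy) = 100 rad, and rem scales absorbed dose (rad) by a radiation
# weighting factor. The example factor below is an assumption for illustration.

def rad_to_gray(rad: float) -> float:
    return rad / 100.0              # the gray is replacing the rad; 1 Gy = 100 rad

def rad_to_rem(rad: float, weighting_factor: float) -> float:
    # Not all radiation has the same biological effect for the same absorbed dose.
    return rad * weighting_factor

absorbed_rad = 50.0                 # the whole-body exposure cited for radiation sickness
print(rad_to_gray(absorbed_rad))    # 0.5 Gy
print(rad_to_rem(absorbed_rad, 1.0))  # 50 rem, assuming gamma/x-rays (factor ~1)
```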
New analysis published in Nature Climate Change shows that Indonesia is losing primary forest at a staggering rate. The country now has the highest rate of loss in tropical primary forests in the world, overtaking Brazil. Primary tropical forests are the most carbon- and biodiversity-rich type of forest ecosystem. Led by Belinda Margono (also co-author of this blog), researchers from the University of Maryland and WRI published the analysis, the first ever to map and quantify annual loss of primary forests in Indonesia. It builds on the groundbreaking global analysis led by Professor Matthew Hansen in partnership with Google published in Science last year. The study reveals that from 2000 to 2012, Indonesia lost more than 6 million hectares of primary forest – an area half the size of England. In recent years, Indonesia even surpassed Brazil in deforestation, losing almost twice as much primary forest as Brazil in 2012. Perhaps most worryingly, the new data shows that the problem is getting worse: Indonesia’s primary forest loss is increasing by an average of 47,600 hectares every year, with an increasing proportion of loss occurring in wetlands, which often results in massive greenhouse gas emissions from peat soils. Where Is Forest Loss in Indonesia Happening? According to the new study, 38 percent of all tree cover loss in Indonesia during the study period occurred in primary forests, the most pristine and biodiverse of all the country’s forest land. Notably, 40 percent of this loss (within official forest areas) happened within zones that restrict forest clearing, such as national parks, protected forests, and even areas protected under Indonesia’s forest moratorium. Less surprisingly, 51 percent of the primary forest loss occurred in relatively flat, accessible areas (or “lowlands”), while higher areas remain relatively intact. Further, 43 percent of primary forest loss occurred in wetlands, a sign that lowland forests are exhausted and the agricultural frontier is shifting into ecologically sensitive, carbon-rich ecosystems. Most of this change occurred on the islands of Sumatra and Kalimantan, with the former (already a longtime destination for expanding large- and small-scale agriculture) seeing more primary forest loss happening in wetlands, often by draining or burning carbon-rich peat and contributing significantly to climate change. It’s Important to Reduce Primary Forest Loss Indonesia is one of the world’s biggest greenhouse gas emitters, in large part due to its deforestation and clearing of peat for agribusiness. More loss of primary forests releases disproportionately higher greenhouse gas emissions while also damaging the livelihoods of the people who rely on forests. This trend also shows that despite Indonesia’s forest management commitments—such as its forest moratorium and a national pledge to reduce carbon emissions from land use—there is much work to be done. The new study shows that Indonesia needs to do much more to translate its forest and climate goals into real change on the ground. Specifically, the country could: Better enforce laws and regulations. As the study shows, 40 percent of primary forest loss is happening in regions where clearing is restricted. The country’s national, state, and local governments must investigate where illicit forest clearing is occurring and hold the responsible parties accountable. 
Define the scope of Indonesia’s forest moratorium to include all primary forests (it currently does not expand protection to areas that may be considered degraded through activities such as small-scale logging.) According to the study, more than 90 percent of primary forest loss occurred within already degraded areas. Although degraded, these forests are still important stores for carbon and biodiversity. Ensure better coordination between ministries and agencies in the natural resources sector—such as ministries of agriculture, forest, mining, and planning–as well as between national, provincial, and district governments. This will make sure all forest stakeholders are aligned on goals and action plans and, most importantly, streamline execution of these plans. Pursue smarter land-use planning in allocating land to agribusiness, forestry companies and smallholders, such as by expanding agriculture onto already-degraded lands as opposed to pristine forest. Indonesia can also learn from other leaders in forest conservation. A few years ago, Brazil had the highest deforestation rates of any tropical country. However, a recent study showed that with satellite and other monitoring systems, law enforcement, and financial incentives, Brazil has decreased deforestation by 70 percent in recent years. With good policies and consistently updated, high-quality data, Indonesia can and needs to pursue a similar path. LEARN MORE: The newly published maps of primary forests will be available soon on Global Forest Watch, an interactive forest monitoring and alert system designed for non-technical users.
What's in this article? Pulmonary tuberculosis (TB) is a contagious bacterial infection that mainly involves the lungs, but may spread to other organs. It is a potentially deadly disease, but it is curable if you get medical help right away and follow your doctor’s instructions. The cause of Pulmonary Tuberculosis The bacterium (germ) that causes TB is called Mycobacterium tuberculosis. This germ causes other kinds of TB, but the most common is pulmonary TB. You can get sick with TB if you inhale the droplets exhaled by a person who has the disease. Although TB can be treated and prevented, according to the World Health Organization (WHO), up to 66% of people who get sick with TB will die if they do not get proper medical care. If you believe that you have pulmonary tuberculosis, see your doctor immediately. If you’ve been exposed to pulmonary TB, ask for a TB test. The following are at high risk of active TB: - People with weakened immune systems, for example due to AIDS, diabetes, chemotherapy, or medicines that weaken the immune system Your risk of catching TB may increase if you: - Are around people who have TB (such as during overseas travel) - Live in crowded or unclean living conditions - Have poor nutrition The following factors increase the rate of TB infection in a population: - Increase in HIV infections - Increase in the number of homeless people (poor environment and nutrition) - Appearance of drug-resistant strains of the disease Symptoms of Pulmonary Tuberculosis The primary stage of the disease usually doesn’t cause symptoms. When symptoms of pulmonary TB occur, they may include: - Cough (sometimes producing phlegm) - Coughing up blood - Excessive sweating, especially at night - Unintentional weight loss Other symptoms may also occur with this disease: - Breathing difficulty - Chest pain Treatment of Pulmonary Tuberculosis The goal of treatment is to cure the infection with drugs that fight the pulmonary TB bacteria. Treatment of active pulmonary TB always involves a combination of several drugs (usually 4). All of the drugs are continued until lab tests show which medicines work best. Commonly used drugs include: Other drugs that may be used to treat TB include: - Para-aminosalicylic acid You may need to take many different pills at different times of the day for 6 months or longer. It is very important that you take the pills the way your health care provider instructed. When people do not take their TB medications as instructed, the infection becomes much more difficult to treat. The TB bacteria can become resistant to treatment, which means that the drugs no longer work. When there is a concern that a patient may not take all the medication as directed, a health care provider needs to watch the person take the drugs as prescribed. This approach is called directly observed therapy. In this case, the drugs may be given 2 or 3 times per week, as prescribed by the doctor. You may need to stay in your home or be admitted to a hospital for 2 – 4 weeks to avoid spreading the disease to others until you are no longer contagious. Your doctor or nurse is required by law to report your TB illness to the local health department. Your health care team will ensure that you receive the best care. Pulmonary TB can cause permanent lung damage if not treated early. 
Medicines used to treat TB may cause side effects, including: - Changes in vision - Orange or brown-colored tears and urine - Liver inflammation A vision test may be done before treatment so your doctor can monitor any changes in the health of your eyes. Call your health care provider if: - You think or know that you have been exposed to TB - You develop symptoms of TB - Your symptoms continue despite treatment - New symptoms develop TB is preventable, even in those who have been exposed to an infected person. Skin testing for TB is used in high-risk populations or in people who may have been exposed to TB, such as health care workers. People who have been exposed to TB should be skin tested immediately and have a follow-up test at a later date if the first test is negative. A positive skin test means you have come into contact with the TB bacteria. It does not mean that you have active disease or are contagious. Talk to your doctor about how to prevent getting tuberculosis. Prompt treatment is extremely important to prevent the spread of TB from those who have active TB disease to those who have never been infected with TB. Some countries with a high incidence of TB give people a BCG vaccination to prevent the disease. However, the effectiveness of this vaccine is limited and it is not routinely used in the United States. People who have had BCG may still be skin tested for TB. Discuss the test results (if positive) with your doctor.
Land line telegraph keys are fitted with a circuit closer switch. Radio telegraph keys were not, but many radio operators used land line keys because they were so readily available. This includes the rather ubiquitous J-38, about which more will be found below. The arcing that occurred at the key contacts of the early spark radios caused radio stations to switch to purpose-made keys which had more robust contacts and, in some cases, arc shields. In North American practice, the idle condition of a land telegraph line was with a current of ~60 - 90 milliamperes flowing continuously, and the ferrous metal armatures of all Relays, Main Line Sounders, and Registers were held pulled to their energized position by the magnetic flux of the electromagnet which is part of all land line telegraph receiving apparatus. As others have indicated, the receiving apparatus were connected in series in the lines which passed through each station. When the line was idle, the continuous current flowing through the line had to pass through each set of Relay or Main Line Sounder electromagnet coils in series. The earliest batteries used in land line telegraphy were Gravity Cells, so called because the only thing separating the two liquids that made up the battery's electrolyte was the difference in their specific gravity. In order for those Gravity Cells to remain functional they had to be supplying current continuously. If the circuit was opened for any long period of time, the specific gravities of the two liquids would become too similar and they would then mix, rendering the battery useless until current flow was reestablished and the two liquids separated again after the passage of sufficient time. That was the largest factor in telegraph lines in North America being normally closed, constant-current systems: they started out that way. If any telegrapher left their circuit closer open while they were not transmitting, it would cripple the line. Without all other circuit closers being closed, no operator could take control of the current flow to transmit a message. By opening the circuit closer switch, the telegrapher took control of the line and caused all of the other sounders on the line to de-energize. The absence of current flow through the electromagnets of all of the Relays and Main Line Sounders, connected in series in the line, would result in the loss of the magnetic flux generated by each set of coils, thus releasing the armature of the Relays and Main Line Sounders to be pushed, in the case of sounders, or pulled, in the case of relays, to their relaxed condition by their return springs. The single click made by the Sounders when the return spring pushed the hammer bar up against the limit stop of their frame alerted the other stations' operators to listen for their station's call to see if the message was for them. If they heard their station's call, they would open their key's circuit closer, and the operator who was sending the call would know it because the sending operator's sounder would go silent in the up position. The telegrapher who called the station would then close their circuit closer to allow the other operator to respond. Occasionally the operator who opened the line would be someone other than the station called. This might be done by a telegrapher with urgent traffic, such as a railway dispatcher controlling the movement of trains or, especially during World War Two, an operator who needed to send an Army Flash message. 
The J-38 Telegraph Key is a US Army Signal Corps LAND LINE key that was already in use when the United States entered World War Two. Every permanent Army post had a telegraph office to send and receive messages which could not wait for the US Mail. The fact that it was a line key can be deduced from the presence of a shunt bar for the other side of the line and from the markings on the base plate, which indicate that the LINE is to be connected to one shunt terminal and one key terminal, and that the Relay or Main Line Sounder is to be connected to the shunt terminal and key terminal on the other side of the base plate, labeled TEL for telegraph equipment. Since the J-38 was already designed, prototyped, tested, tooled, and in production at the outbreak of WW2, it was quicker to use the J-38 for many applications than it would have been to wait for purpose-made telegraph keys for individual uses to become available. As a consequence, many thousands of J-38 keys were made during the early years of the war until the keys which were made for those other uses became available. When those other keys, such as the J-43 and its kin, became available, there were already thousands of J-38 keys in the supply chain, which became surplus after the war. That is why the J-38 is by far the most commonly available military surplus key.
Letter from President Eisenhower to Senator John Stennis (D-MS), October 7, 1957 Context for the Letter: In 1954 the Supreme Court ruled in Brown v. Board of Education that segregated schools were "inherently unequal." In 1957, Central High School in Little Rock, Arkansas was ordered to desegregate. However, Arkansas Governor Orval Faubus ordered the Arkansas National Guard to prevent nine African American students who had enrolled at Central High School from entering the school on September 3. A Federal District Court ruled against the use of the National Guard at the school. When the students returned to the school, they were met by an angry mob of 1,000 segregationists, and police removed them for their own protection. President Eisenhower then ordered federal troops to Central High in Little Rock. Original document in the holdings of the Dwight D. Eisenhower Presidential Library, Abilene, Kansas. The Eisenhower Library is administered by the National Archives and Records Administration. 1. How does President Eisenhower explain the role of the executive branch of government in the conflict? 2. What are his goals in exercising executive power in the situation, and what do you think of them? 3. Under the separation of powers doctrine, what is the role of the Supreme Court in this conflict, and what do you think of it? 4. President Eisenhower did not personally agree with the Court's decision in Brown v. Board of Education, yet he upheld the desegregation rulings. What do you think his actions demonstrate about the Constitution, and what President Eisenhower thought about his constitutional role as president? 5. How does this situation demonstrate the strengths or weaknesses of the doctrine of separation of powers in action?
Rocket Thrust Calculator The rocket thrust calculator uses Newton's third law and calculates the net rocket propulsion, taking into account the pressure difference between the ambient pressure and the pressure at the rocket nozzle. This calculator can be used not only for rockets themselves but also for any kind of vehicle that uses a jet-rocket engine as its main propulsion, since the thrust formula is vehicle-agnostic. Rocket thrust: where does the rocket propulsion come from? Newton's third law states that for every action there is a reaction of equal strength and opposite direction. This means that when you pull a rope, the rope pulls you back with the same strength. This is the basis of rocket propulsion and rocket physics in general. The amount and speed at which the burnt fuel is exhausted out of the rocket nozzle determine how fast the rocket will accelerate and what amount of kinetic energy it will gain. This is the reason why a jet rocket engine consumes large amounts of fuel, but also why it is so powerful. In designing a jet rocket engine, it is important to balance the size of the rocket nozzle in relation to the body of the rocket itself. Given the nature of fluid and rocket physics, a smaller nozzle will make the exhausted fuel move faster but will allow less of it to be exhausted per unit time, so it is important to build a rocket nozzle with the proper size for the desired rocket thrust. The importance of the rocket nozzle area is shown in this rocket thrust calculator by the variable Ae - you can test how different sizes would impact the net rocket propulsion. It can be very interesting to use this rocket thrust calculator in conjunction with the ideal rocket calculator in order to understand how a rocket works, or just to use rocket physics to perform quick calculations and see how each of the variables affects the rocket thrust, speed, acceleration... Rocket thrust calculator: understanding the thrust formula and rocket physics First of all, we should look at the rocket thrust equation underlying the rocket thrust calculator: F = dm/dt * Ve-opt + Ae * (Pe - Pamb) The right side of the rocket thrust equation is the one used in the rocket thrust calculator, where it is important to point out that dm/dt represents the variation of mass with time (which is the mass exhausted by the jet rocket per unit time). Let's now see what all those variables are and introduce some typical values for them. We will use rounded values based on the characteristics of the Merlin 1D rocket engine, which is used by SpaceX in its Falcon 9 and Falcon Heavy rockets. Ve-opt: Effective exhaust velocity at the rocket nozzle if Pamb = Pe. For our example we used a typical value for a liquid-propellant rocket: 3 km/s. Ae: Flow area at the nozzle exit plane. In the case of the Merlin 1D engine it has a diameter of 1.25 m, which we convert to an area of 1.227185 m2 using the circumference calculator. dm/dt: Rate at which mass is exhausted. We obtain a value of 273.6 kg/s using the advanced mode. Pamb: Ambient pressure around the rocket (check the pressure conversion tool for using different units). For our purposes, we will use atmospheric pressure (default value) of 101325 Pa. Pe: Static pressure at the jet rocket exhaust. For our example we will set it to a reasonable 84424 Pa. F: Net force or rocket propulsion (rocket thrust); it is the main quantity of interest. We obtain a thrust of 800 kN, which is well within the capacity of a Merlin 1D engine (maximum thrust is 825 kN at sea level). 
- dm: Mass expelled at the rocket nozzle in a time dt. In our example, the total fuel mass of the first stage of the Falcon 9 is 44320 kg per engine.
- dt: Time elapsed in the expulsion of the aforementioned mass dm. The total fuel burn time of the first stage of the Falcon 9 is 162 s.

One of the interesting takeaways from this thrust formula is that the net thrust of a given jet rocket engine will increase with altitude: since the ambient pressure decreases with altitude, the negative contribution of the Pamb term is reduced, increasing the total rocket propulsion. Once the net thrust (or force) is obtained, you can use a tool like the acceleration calculator to obtain the acceleration at which such a rocket can be launched.
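To make the worked example concrete, here is a minimal Python sketch of the same thrust formula, plugging in the Merlin 1D values quoted above. The variable names are our own, chosen purely for illustration, and the numbers are the rounded example figures rather than official engine data.

    import math

    # Example values quoted above, loosely based on the Merlin 1D engine
    mass_flow = 273.6        # dm/dt: propellant mass flow rate [kg/s]
    v_exhaust = 3000.0       # Ve-opt: effective exhaust velocity [m/s]
    nozzle_diameter = 1.25   # nozzle exit diameter [m]
    p_exit = 84424.0         # Pe: static pressure at the nozzle exit [Pa]
    p_ambient = 101325.0     # Pamb: ambient pressure at sea level [Pa]

    # Flow area at the nozzle exit plane: Ae = pi * d^2 / 4  (about 1.227 m^2)
    area_exit = math.pi * nozzle_diameter ** 2 / 4

    # Thrust formula: F = dm/dt * Ve-opt + Ae * (Pe - Pamb)
    thrust = mass_flow * v_exhaust + area_exit * (p_exit - p_ambient)

    print(f"Nozzle exit area: {area_exit:.4f} m^2")
    print(f"Net thrust: {thrust / 1000:.0f} kN")  # roughly 800 kN at sea level

Setting p_ambient to 0 in the same sketch shows how the pressure term stops working against the engine in vacuum, which is the altitude effect described above.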
The AQA Specification – Education

Students need to know….

- The role and functions of the education system, including its relationship to the economy and to class structure (the perspectives: functionalism etc.)
- Differential educational achievement of social groups by social class, gender and ethnicity in contemporary society.
- Relationships and processes within schools, with particular reference to teacher/pupil relationships, pupil identities and subcultures, the hidden curriculum, and the organisation of teaching and learning.
- The significance of educational policies, including policies of selection, marketisation and privatisation, and policies to achieve greater equality of opportunity or outcome, for an understanding of the structure, role, impact and experience of and access to education; the impact of globalisation on educational policy.

How most text books break the specification down further….

Topic 1 – Perspectives on Education ('role and function of education')

There are 4 main perspectives:

- Functionalism
- Marxism
- The New Right
- Postmodernism

You can also use knowledge from these perspectives: Feminism/ Social Democratic/ Liberalism

Functionalism

- Focuses on the positive functions performed by the education system. There are four positive functions that education performs:
- Creating social solidarity (value consensus)
- Teaching skills necessary for work
- Bridge between home and school
- Role Allocation and meritocracy

Marxism

- Traditional Marxists see the education system as working in the interests of ruling class elites. The education system performs three functions for these elites:
- Reproduces class inequality.
- Legitimates class inequality.
- The Correspondence Principle – school works in the interests of capitalist employers.

The New Right

- Created an 'education market' – schools were run like businesses, competing with each other for pupils, and parents were given choice. This required league tables.
- Schools should teach subjects that prepare pupils for work; education should be aimed at supporting economic growth. Hence: New Vocationalism!
- The state was to provide a framework in order to ensure that schools were all teaching the same thing and transmitting the same shared values – hence the National Curriculum.

Postmodernism

- Not a major perspective on education.
- Used to criticise the relevance of the previous three perspectives.
- A 'one size fits all' education system does not fit with a post-modern society.
- Education needs to be more flexible and targeted to individuals.

Topic 2 – In-School Processes

Make sure you explain the difference between Interactionism and Structural Theories.

School Ethos and The Hidden Curriculum

Teacher Stereotyping and the halo effect

- The ideal pupil
- Labelling and the Self-Fulfilling Prophecy
- Banding, streaming and setting
- Definitions of banding/ streaming/ setting
- Summaries of evidence on the effects of banding etc.
- Unequal access to classroom knowledge
- Educational triage

Student responses to the experience of schooling: school subcultures

- Differentiation and Polarisation
- Pro-school subcultures
- Anti-school (or counter-school) subcultures
- Between pro and anti-school subcultures: a range of responses

Gender and differential educational achievement

There are three main types of question for gender and education – achievement, subject choice, and the trickier question of how gender identities affect the experience of schooling and how school affects gender identities.
Distinguishing between out-of-school and in-school factors in explaining these differences is one of the key analytical skills for this topic (and in class/ ethnicity).

Achievement (why do girls generally do better than boys?)

- In the 1980s boys used to outperform girls.
- Today, girls do better than boys by about 8 percentage points at GCSE.
- There are about 30% more girls in University than boys.

Subject Choice (why do they choose different subjects?)

- Subject choice remains heavily 'gendered'.
- Typical boys' subjects = computing/ VOCATIONAL, especially trades/ engineering.
- Typical girls' subjects = dance, sociology, humanities, English, hair and beauty.

Experience of Schooling/ Gender Identity

- Pupils' gender identities may influence the way they experience school.
- Schools may reinforce traditional (hegemonic) masculinity and femininity.
- Gender identity varies by social class and ethnicity.

Out of school factors and differential educational achievement

- Changes in Employment – rise of the service sector, decline in the manufacturing sector, crisis of masculinity.
- Changes in the family – dual earner households, more female worker role models. LINK TO FAMILY MODULE
- Changing girls' ambitions – from marriage and family to career and money (Sue Sharp)
- Differential socialisation – girls socialised to be more passive/ toys related to different subjects (Becky Francis). LINK TO FUNCTIONALISM/ PARSONS.
- Parental attitudes – traditional working class dads may expect boys not to try hard at school.
- Impact of Feminism – equal opportunity policies.
- Policy changes – introduction of coursework in 1988/ scaling back of coursework in 2015.

Gender and In-School Factors

- Teacher Labelling – typical boys = disruptive, low expectations; typical girls = studious, high expectations (Jon Abraham) – LINK TO INTERACTIONISM, Self-Fulfilling Prophecy
- Subcultures – boys more likely to form counter-school cultures (Willis) – LINKS to out of school factors.
- Feminisation of teaching – the increase in female teachers puts boys off.
- Subject counsellors advise boys to choose boys' subjects.
- Gendered subject images match traditional gender domains.
- Boys' domination of equipment puts girls off practical subjects like PE.
- Traditional masculine identities – boys just don't see school as a 'boy thing' – working class boys saw school as 'queer', middle class boys work hard but hide this (Mac An Ghail).
- Hyper-feminine identities (hair/ make up) clash with the school (Carolyn Jackson).
- Verbal abuse – boys who study hard get called 'gay' as a term of abuse.

Social class differences in educational achievement

Why do working class kids do worse than middle class kids? (Free School Meals are used as the measure, not class!)

- Lots of ways!
- Hidden costs
- The cycle of deprivation
- Selection by mortgage

Cultural Deprivation – blame the working classes

- Immediate/ deferred gratification
- Restricted/ elaborated speech codes

Cultural Capital – Marxist – blame the middle classes

- Skilled and Disconnected Choosers

In-School Processes

- Labelling, the ideal pupil (Becker)
- Counter School Culture (Willis)
- Aspirational culture in school (links to cultural capital)

Ethnicity and differential educational achievement

Chinese/ Indian kids do best/ African-Caribbean, Gypsy Roma worst.
- Differences in income/ class don't explain the difference (poor Chinese kids compared to poor white kids)
- Family structure (single parent households)
- Parental attitudes (Steve Strand 2007)
- Language differences (linguistic deprivation)
- Black anti-school masculine street cultures (Tony Sewell)
- Teacher racism/ labelling (Gilborn)
- Subcultures and anti-school attitudes (Tony Sewell)
- Subcultures as a means of resisting racism (Mac An Ghail)
- Banding and Streaming/ Educational Triage
- Ethnocentric Curriculum
- Experiences of institutional racism and racism from other pupils (Crozier)
- Also – racism in admissions at Oxford University

Education Policies

- 1944 – The Tripartite System
- 1965 – Comprehensivisation
- 1988 – New Right – Education Act – Marketisation
- 1997 – New Labour – Academies, Expansion of HE, Sure Start, EMA.
- 2010 – Coalition/ New Right – Forced Academisation, Free Schools, Funding Cuts, Pupil Premium, and MORE STATE GRAMMARS.
- Compensatory Education – e.g. EMA.
- Vocationalism – e.g. Apprenticeships.

Policies – key questions

- To what extent have policies raised achievement?
- To what extent have policies improved equality of opportunity?
- How have policies changed the way schools select pupils, and what are the consequences (apply the perspectives)?
- In what ways has education become more privatised, and what are the consequences (apply the perspectives)?
- What is the relationship between globalisation and education policy?
See Messier 20, the Trifid Nebula

The Trifid Nebula (Messier 20 or M20) is one of the many binocular treasures in the direction of the center of our Milky Way galaxy. Its name means "divided into three lobes," although you'll likely need a telescope to see why. On a dark, moonless night – from a rural location – you can star-hop upward from the spout of the Teapot in Sagittarius to another famous nebula, the Lagoon, also known as Messier 8. In the same binocular field, look for the smaller and fainter Trifid Nebula as a fuzzy patch above the Lagoon.

To locate this nebula, first find the famous Teapot asterism in the western half of Sagittarius. The Teapot is just a star pattern, not an entire constellation. Nonetheless, most people have an easier time envisioning the Teapot than the Centaur that Sagittarius is supposed to represent. How can you find it? First, be sure you're looking on a dark night, from a rural location. Then look southward in the evening from Earth's Northern Hemisphere. If you're in Earth's Southern Hemisphere, look northward, closer to overhead, and turn any star charts upside-down. Want a more exact location for the Teapot in Sagittarius? We hear good things about Stellarium, which will let you set a date and time for your exact location on the globe.

Whether the close-knit nebulosity of the Trifid and the Lagoon represents a chance alignment or an actual kinship between the two nebulae is open to question. Both the Trifid and Lagoon are thought to reside about 5,000 light-years away, suggesting the possibility of a common origin. But these distances are not known with precision and may be subject to revision. Both the Trifid and Lagoon are vast cocoons of interstellar dust and gas. These are stellar nurseries, actively giving birth to new stars. The Trifid and Lagoon Nebulae are a counterpart to another star-forming region on the opposite side of the sky: the Great Orion Nebula.

The Trifid Nebula (M20) is at RA: 18h 02.6m; Dec: -23° 02′

Bottom line: The Trifid is a famous binocular object located in the direction of the center of the Milky Way galaxy. Its name means "divided into three lobes."
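If you like to double-check such coordinates programmatically before an observing session, here is a small, optional Python sketch using the third-party astropy package. This is our own illustration, not part of the original article, and a planetarium program such as Stellarium works just as well.

    from astropy.coordinates import SkyCoord
    import astropy.units as u

    # Trifid Nebula (M20), using the coordinates quoted above:
    # RA 18h 02.6m, Dec -23 deg 02'
    m20 = SkyCoord(ra=(18 + 2.6 / 60) * u.hourangle,
                   dec=-(23 + 2 / 60) * u.deg,
                   frame="icrs")

    print(m20.to_string("hmsdms"))   # sexagesimal form, about 18h02m36s -23d02m00s
    print(m20.ra.deg, m20.dec.deg)   # decimal degrees, handy for charting apps

Feeding the decimal-degree values into a charting app, along with your location and time, will tell you when M20 sits highest in your sky.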
The petrochemical manufacturing process begins with a "feedstock". A feedstock is a raw material that is used to make a useful product in an industrial process. Natural gas liquids, along with naphtha created from crude oil during the refining process, are used as feedstocks to manufacture a wide variety of petrochemicals.

When using natural gas liquids to create petrochemicals, we separate them into ethane, propane, and butanes. Most people know propane as what you use to power your barbeque and butane as what fuels your lighter. They can also be used as feedstocks to create petrochemicals. Ethane is an important feedstock because its structure is the simplest of the hydrocarbons, and it can be transformed into novel plastics that have remarkable and useful properties.

Ethane is first converted to ethylene using a process called cracking. Ethane is fed into a large, complex piece of equipment called a cracker, which uses high temperatures to crack the bonds between atoms. The carbons then form two bonds with each other (known as a double bond), making a new hydrocarbon molecule called ethylene. Propane and butane undergo additional processes to make propylene and butylenes (e.g., butadienes). Propane and butane can be cracked to make propylene and butylenes, the same way ethane is cracked to make ethylene. They can also undergo a newer process that plucks hydrogen atoms from them to form the double bonds.

Ethylene, propylene, and butylenes, along with benzene, toluene, and xylenes, are the fundamental building blocks of plastics. These six petrochemicals can be made into plastics, nylons, polyesters, etc., that are then transformed into items like bicycle helmets, lightweight car bumpers, space suits, medical devices, and wind turbines.

Petrochemicals don't just make plastic; they also enable progress. These chemicals and the specialty chemicals created from them have special properties that simply make the products we use better. They make nylon stronger so seatbelts and parachute straps are safer. They make workout clothes sweat resistant. They make cars lighter so they are more fuel efficient.
What are the main differences between AFib and VFib?

Atrial fibrillation (AFib) and ventricular fibrillation (VFib) are both types of abnormal heart rhythm, or heartbeat, called an arrhythmia. One of the main differences between these two heart conditions is that ventricular fibrillation is life threatening if treatment isn't begun immediately, while atrial fibrillation generally is not immediately life threatening but can cause problems with heart function that are very dangerous if not treated effectively.

- AFib produces irregular electrical signals in the upper chambers of the heart muscle called the atria (and may include the AV node), causing the heart's atria to beat irregularly and usually faster than normal. AFib usually is not an immediately life-threatening abnormal heartbeat (arrhythmia).
- VFib produces irregular electrical signals in the lower chamber heart muscles (ventricles) that are so chaotic that the heart muscles can't pump blood effectively. This type of heart condition is life threatening, and must be treated immediately or the person will likely die.

What are the main similarities between these two heart conditions?

Both types of heart disease are a type of abnormal heartbeat (arrhythmia). AFib and VFib can be detected by ECGs and defibrillator machines (devices that can identify arrhythmias and, if needed, deliver shocks, or electrical impulses, to the heart to treat a life-threatening arrhythmia like VFib).

What are AFib and VFib, and how do they affect the heart?

To understand AFib and VFib, you need to know a little about your heart and how it normally works. The heart is composed of four muscular chambers, two upper and two lower. The two upper chambers are called the atria. The two lower chambers are called the ventricles.

- The impulse is first generated at the sinoatrial node (SA node), which causes the right atrium to contract, sending blood to the right ventricle.
- The right ventricle then sends blood to the lungs to get rid of carbon dioxide (CO2) and to pick up oxygen (O2).
- The lungs then return the fresh oxygenated blood to the left atrium, which contracts to fill the left ventricle.
- The left ventricle muscle tissue contracts, generates the pulse, and sends fresh oxygenated blood under pressure (blood pressure) to your body's organs.
- Each heartbeat repeats the process and normally produces an electrical signal that is consistent from beat to beat. When the electrical signal is irregular in any way, the patient has an abnormal heart rhythm.

Picture of a Cross Section of the Heart Including the Atria and Ventricles.

AFib and VFib are both termed arrhythmias (abnormal heart rhythms). AFib is a type of arrhythmia termed supraventricular tachycardia, meaning that the problem occurs above the ventricles. In AFib, the abnormal heart rhythms are due to irregular electrical activity in the atria, mainly the right atrium. It usually results in a fast and irregular heartbeat. In contrast, VFib occurs when the electrical signal is chaotic within the ventricular muscle tissue and results in no effective heartbeat, so no effective blood pressure or pulse is generated, which results in sudden cardiac death if the abnormal heartbeat continues and is not treated immediately.

Picture of the Electrical Activity of the Heart during Ventricular Fibrillation.
Atrial Fibrillation vs. Atrial Flutter: Differences in Symptoms and Signs

Atrial fibrillation (AFib) and atrial flutter can "feel" similar, with symptoms and signs like shortness of breath, blurry vision, lightheadedness, and heart palpitations. One of the main differences between AFib and atrial flutter is that in atrial flutter the pulse is regular even though it's fast, while in atrial fibrillation (AFib) the pulse is fast and irregular.

Is AFib or VFib more serious and dangerous?

By far, VFib is more serious. If ventricular fibrillation isn't treated immediately, the patient will have a "sudden death" or "cardiac arrest" and die.

Differences between how AFib and VFib feel to a person (signs and symptoms)

Atrial fibrillation signs and symptoms

A person with AFib may have no symptoms, but in general, they may notice an irregular and rapid heartbeat. Other symptoms that may occur are:

Ventricular fibrillation symptoms and signs

In contrast, ventricular fibrillation (VFib) has very short-lived signs and symptoms. About an hour or so before the person suddenly collapses due to ventricular fibrillation, some people may have these signs and symptoms.

What causes these two heart diseases?

Many underlying medical problems may contribute to the development of AFib and/or VFib. Some causes that are common to both heart conditions include:

How do the EKG patterns differ for AFib and VFib?

The EKG patterns in most cases are diagnostic for AFib and/or VFib because of the characteristic wave forms each produces.

Normal ECG wave strip pattern: enlarged P wave, QRS complex, and T wave in a normal ECG wave pattern.

Atrial Fibrillation ECG wave strip pattern: AFib shows irregular P wave patterns (the small "spike" just before the QRS, or big spike, pattern), which indicate irregular atrial contractions interrupted by QRS patterns (heartbeats, or effective ventricular cardiac blood pumping).

ECG wave pattern strip for Ventricular Fibrillation: an ECG (electrocardiogram or EKG) of VFib shows only fast, irregular electrical tracings with no tracings showing a QRS (the large "spike" pattern on a normal ECG) indicative of a heartbeat (ventricular contraction).

What is the treatment for AFib and VFib?

In a few people, atrial fibrillation may automatically revert to normal sinus rhythm and require no treatment. Many people with AFib can be treated with heart-rate-controlling or rhythm-controlling medications (see prevention). Moreover, some people with AFib may respond well to electrical cardioversion. This is performed by giving the heart an electrical shock that resets the heart's normal electrical pattern. Ablation techniques destroy malfunctioning heart tissue responsible for the abnormal atrial electrical activity.

Ventricular fibrillation is an emergency heart condition that requires immediate therapy. VFib can be treated with an electrical shock to the heart with a defibrillator. While a defibrillator is being located, CPR (cardiopulmonary resuscitation) with chest compressions is used to keep the person alive until a defibrillator shock terminates VFib. This allows the heart to produce an effective electrical current that causes the ventricular function to become organized enough to pump blood (for example, return to normal cardiac rhythm). Ventricular fibrillation may be the final sign of a dying heart and may be difficult to treat in some instances. Defibrillation in these cases may not work and the patient may die due to cardiac arrest.
Can these heart conditions be prevented?

Preventive measures for VFib include:

Preventive measures for AFib include:

- Rhythm and rate controlling drugs
- Ablation techniques to destroy cardiac tissue that is generating abnormal electrical patterns
- A pacemaker to regulate the heartbeat in case the electrical activity of the heart becomes too fast or too slow
- A surgical technique termed the Maze procedure, in which a surgeon creates small cuts in the heart to form scars that interfere with the electrical impulses that can cause AFib

What's the life expectancy for someone with AFib or VFib?

Underlying causes usually determine the life expectancy in people with AFib. People who are treated for the causes or triggers of AFib (for example, alcohol intake, metabolic problems, coronary artery disease, sepsis, and many others) usually will have a normal life expectancy. Those who respond poorly to treatments will have a poorer prognosis.

VFib needs immediate treatment (CPR and defibrillation) or the person will likely die within a few minutes. However, if VFib is treated immediately, treatment may reduce the chances of having another episode of VFib. In people who survive VFib, the survival rate and life expectancy are similar to those of AFib if the causes and triggers of VFib are treated and managed.

Different terms and abbreviations used for atrial fibrillation and ventricular fibrillation

- Abbreviations for atrial fibrillation: AFib, Afib, AF, afib
- Abbreviations for ventricular fibrillation: VFib, Vfib, VF, vfib

Caution should be taken when using the short form "AF." AF is also a short form for another similar heart arrhythmia – atrial flutter – that is closely related to AFib.

American Heart Association. "What are the symptoms of atrial fibrillation (AFib or AF)?" Updated: Feb 06, 2017. American Heart Association. "Ventricular Fibrillation." Updated: Sep 2016.
Does it look like a rainbow? That is not the main point of the project. All painting students learned how to mix paint colors. Students even got to choose their own subject matter.

- The three primary colors are red, yellow, and blue. These cannot be made by mixing other colors. Think of Superman!
- The three secondary colors are orange, green, and purple. These are made by mixing the primary colors.
- The six intermediate (tertiary) colors are red-orange, yellow-orange, blue-green, yellow-green, red-violet, and blue-violet. These are made by mixing primary and secondary colors.

All 12 colors make up the color wheel. When put together, you, the viewer, can see the color spectrum.
In 3rd grade you’ll likely see a significant leap cognitively in your child, and as a result more will be expected of him at school. Your child will not only learn how to write in cursive, with letters joined together, he’ll stretch beyond the paragraph writing of the 2nd grade and begin to compose short essays. Teaching to the Test States are required to test students annually in language arts, beginning in the 3rd grade. These tests assess standards for reading, listening, and writing. Tests generally consist of two types of questions: multiple-choice and open-ended. In reading, students read several passages representing a variety of genres, then answer questions that demonstrate their understanding of the passages. For listening, students hear a passage read out loud, then answer comprehension questions. Along with formalized testing, another hallmark of 3rd grade is learning to write in cursive, or longhand. For many 8 year olds, cursive separates them from the little kids — and they love it. In the majority of classrooms across the country, cursive is taught in 3rd grade (although some 2nd grade teachers introduce it toward the end of the school year). Over the years, some letters have been modified to make them easier to write and recognize. Today’s cursive Q and X may look quite different to someone who learned to write them a generation ago. Now that cursive has made a comeback, teachers begin the school year by devoting one week to each letter and spending a few minutes each day in review. Upping the Vocabulary Ante In 3rd grade, students are ready for solid work in written composition. Their thinking is more abstract and their stories less simplistic. Using transitions and writing in paragraph form remain challenging, but your child will have plenty of opportunities to practice these difficult skills. Now your child will work to enrich his stories through word choice, with a continued emphasis on using adjectives to enliven his compositions. In addition, your child will be introduced to reference books, such as the thesaurus (a book of synonyms and antonyms), to help him select more interesting words. Writing as Process Writing as a craft is a fairly new classroom concept. “Learning to write well isn’t considered a one-shot deal,” says Cynthia Graves, a 3rd grade teacher at Forest Avenue School in Verona, New Jersey. “It’s a process that evolves over time.” While the focus may vary from school to school, you can expect that your child’s work will progress through the following phases: - Prewriting, or brainstorming, includes activities such as creating a story web with ideas related to a main topic. - The first draft, or “sloppy copy,” is a student’s initial attempt at converting his thoughts into sentences and paragraphs. - Feedback involves sharing the first draft with classmates and/or the teacher to strengthen the work. The reviewer reads the piece, then tells the writer what’s good, bad, or confusing about the story. - The student incorporates the feedback during rewriting. - Correcting grammar, punctuation, and spelling mistakes takes place during the proofreading phase. - The final copy is either handwritten or typed on the computer. - Publishing is the last step, and each teacher handles it differently. It may mean turning the story into a book with illustrations, adding it to a class book, reading the work out loud to the class, or submitting it to a children’s publication. As in 2nd grade, in 3rd grade your child will be expected to write in a variety of genres. 
A narrative assignment might ask your child to write about a personal experience, such as her favorite day. A typical nonfiction assignment in 3rd grade would require her to write a simple report using facts gleaned from different sources of information (for example, an encyclopedia, a Web site, or a book on the subject). An informative writing assignment might ask her to explain how to make or do something (for example, my daughter wrote instructions for doing a handstand). Persuasive writing could be a letter to the editor, and finally, penning a poem might cover creative writing.
Sleep is a vital part of obtaining good wellbeing. We spend a third of our lives asleep. Despite what many of us believe, it is impossible to be in good health without enough quality sleep. Lack of sleep has been said to impact our health as much as not eating, not drinking, or not breathing. It has even been said that we can survive longer without food than we could without sleep (Everson et al. 1989). This is because sleeping allows our bodies to repair and enables our brains to consolidate and process information; the physical effects of stress on the human body are well documented. When your mind is not functioning optimally or is plagued by negative thoughts and emotions, eventually your body will suffer the consequences. Common problems that arise due to poor sleep include weakened immune systems and increased mental health problems, such as anxiety and depression.

The recommended amount of sleep for the average adult ranges from 7 to 9 hours, and it takes the average human around 14 minutes to fall asleep. A healthy amount of sleep has been thought to have a far higher impact on wellbeing than a 50% increase in disposable income. If you feel that you are not managing to reach the recommended amount of sleep and you feel like you've tried absolutely everything, here are some tips that we are sure you haven't tried before!

1. If you can't get to sleep, try not to think about not going to sleep! Now we don't mean scroll some more on social media, or go downstairs and whip yourself up a lovely fresh coffee. Do simple tasks, such as: think about your day, keep your eyes wide open and blink, listen to some music that soothes you, start knitting, meditate, read, think about your happy place, and visualise your goals. These will all help you to relax and in turn switch off from the negative thought cycle.

2. Headstands – headstands circulate refreshed blood to our glands and brain, including the hypothalamus and pituitary, which control all the other glands in our body. They also cleanse and detoxify the adrenal glands, putting us into a positive state of mind and decreasing the amount of negative thoughts that we have. If you cannot do a headstand – learn. The key to this is strengthening your upper body; do this by carrying out the half-boat pose.

3. Only inhale through your left nostril – some say that in yogic science, the left nostril allows us to access Ida Energy, which represents moon energy. Moon energy slows us down and ensures that we stay as relaxed as possible; it is calming, soothing, passive, female and reflective. So when you cannot sleep because your brain is racing around thinking about anything and everything, and writing these thoughts down on paper hasn't helped, breathing through your left nostril activates your parasympathetic response. This will rescue you from your fight-or-flight mode.

4. Curl your toes – so easy yet so effective. This exercise is monotonous, which relaxes both your body and mind. To do this, curl your toes, hold for a few seconds and release. Repeat this until you feel relaxed.

5. Roll your eyes – this will trigger your brain to release melatonin, the natural sleep hormone. To do this, simply close your eyes and roll your eyes downwards then upwards at a slow and relaxing pace.
9 - 12 While many view genetically modified crops as a promising innovation, there is controversy about their use. This lesson provides students with a brief overview of the technology, equipping them with the ability to evaluate the social, environmental, and economic arguments for and against genetically modified crops. Interest Approach and Activity 1: - Internet and video projection capability - GMO PowerPoint - Food Label Cards, 1 set per group - Critically Thinking GMOs handout, 1 per student (this handout will be used throughout the lesson) - GMO Fact or Fiction PowerPoint or Kahoot app and electronic devices - Ball (8" or larger plastic ball or an inflatable beach ball) - Crop Supply and Demand Challenge Cards, 1 copy per class Essential Files (maps, charts, pictures, or documents) GMO Fact or Fiction PowerPoint Critically Thinking GMOs handout Food Label Cards Crop Supply and Demand Challenge Cards GMO Crop Spotlight Sheets (optional activity) gene: a region of the DNA that encodes a protein or part of a protein transgenic: containing a gene that has been transferred from one organism to another and acts as a synonym for genetically modified GMO: genetically modified organism genetic engineering (GE): process of directly modifying an organism’s genes using biotechnology to produce desired traits selective breeding: process by which humans use animal or plant breeding to selectively develop particular traits in an offspring; also known as artificial selection crossbreeding: selectively breeding two plants or animals of different breeds or cultivars to produce a superior offspring sometimes called a hybrid inbreeding: selectively breeding closely related plants or animals in an effort to isolate and perpetuate a desired trait mutagenesis: a method of selective breeding in plants where seeds are exposed to chemicals or radiation to promote DNA mutations that could result in developing new traits in offspring plants hybrid: the offspring of two plants or animals of different species or varieties acrylamide: a chemical substance which forms in starchy foods after high-temperature cooking processes such as frying, roasting, and baking Did you know? (Ag Facts) - 89% of the corn grown in the United States in 2015 was produced from seed varieties developed through genetic modification technologies.1 - As the use of genetically engineered crops has risen, the use of insecticides has decreased.2 - As the use of genetically engineered crops has risen, the use of herbicides has increased.2 - Many science organizations throughout the world, including the World Health Organization, find genetically modified crops to be safe for consumption.3 - Although significant science supports the safety of GM foods, many consumers are skeptical and perceive that non-GM foods are healthier.4 Background - Agricultural Connections This lesson provides a brief introduction to genetic engineering in plants. After the introduction, students assess the risks and benefits of genetic engineering, learn why farmers would choose to grow a GMO crop, and begin to recognize various perspectives about this controversial topic. To learn the scientific steps of creating a GMO, see the lesson The Science of a GMO. Plant Breeding Methods Traditional plant breeding has been used since humans began domesticating plants for food production. Early crop domestication was accomplished by using basic plant selection techniques to identify and promote ideal food plants. This is known as selective breeding. 
Crossbreeding, inbreeding, and hybridization are specific plant breeding methods that fall under the umbrella of selective breeding. These methods have allowed farmers to isolate genes for specific characteristics and progressively create more plants well suited to provide an abundant supply of nutritious food (e.g., fruits, vegetables, and grains). For example, tomatoes come in many varieties, including large slicing tomatoes and smaller roma, cherry, and grape tomatoes. Tomatoes also come in a variety of colors, including bright red, orange, yellow, and even a dark burgundy color. In addition to color and size, these plants also vary in taste, shelf life, and the amount of time they take to grow from seed to fruit. All of these characteristics were brought about by selective breeding: identifying desirable traits and continually cross-pollinating plants with those traits to eventually create a variety with desirable characteristics. Often in traditional plant breeding processes, plants will gain either a resistance or a propensity toward disease. All of these characteristics vary from variety to variety due to the plants' changing genetics from generation to generation.

While these traditional plant breeding methods have been successful, they can take a significant amount of time (years or decades) to achieve the desired result, and it can be difficult to isolate individual traits such as disease or pest tolerance, color, flavor, or any number of other traits. In addition, the desired gene or characteristic must already be available in the plant's gene pool.

Another method of plant breeding is called mutation breeding, or mutagenesis. This is the process of exposing seeds to chemicals or radiation in order to promote DNA mutations to maximize genetic diversity in an effort to create new traits in plants.5 In biology, a mutation is a permanent alteration in the DNA sequence. Some mutations cause little effect on an organism and others cause dramatic change. Mutations occur randomly, but are accelerated by exposure to UV rays, radiation, and some chemicals. Through the years, mutagenesis has helped create genetic variability and produce desired characteristics in crops such as wheat, barley, rice, cotton, sunflowers, and grapefruit.5 Mutagenesis can elicit results much faster than crossbreeding or inbreeding. However, the changes are random and unpredictable.

Crossbreeding, inbreeding, hybridization, and mutagenesis are all traditional plant breeding techniques and do not use biotechnology. The resulting plants are not GMOs, although many people hold the common misconception that they are.

The Development of GM Crops

A genetically modified organism (GMO) is defined by the United Nations Food and Agriculture Organization as "any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology."6 Common and synonymous terminology for genetically modified organisms includes GMO (genetically modified organism), GM (genetically modified), GE (genetically engineered), biotech, and biotech engineering. Watch the video clip, What is a GMO?, for more illustration and comparisons of plant breeding techniques.

The first genetically engineered plant was created in 1983 when an antibiotic resistance gene was inserted into a tobacco plant.7 The first genetically modified food was the Flavr Savr tomato, created in 1994. This tomato had an extended shelf life, allowing it to be vine ripened and then shipped to grocery stores without rotting.
However, production of the Flavr Savr tomato stopped three years later. Although the fruit had the desired extended shelf life without rotting, it still softened, making it little better than its traditional counterparts.8 Since that time, novel genes have been inserted into many crop plants. Genetically engineered crops have specific traits such as the following:

- Herbicide tolerance: This trait allows farmers to spray their crop with an herbicide that will kill the weeds but not the crop. (Transgenic)
- Pest tolerance: These GM plants have a natural resistance to pests. For example, the European corn borer is a destructive pest that bores into corn stalks. When the bacterium Bacillus thuringiensis (Bt) is present in the corn, it produces a protein called Cry, which is toxic to the European corn borer. (Transgenic)
- Disease resistance: Just like people, plants are susceptible to diseases caused by fungi, bacteria, and viruses. Some GM crops are developed to be resistant to specific diseases. Examples include the papaya and some varieties of squash. (GE, Nontransgenic)
- Drought tolerance: Some crop varieties can be genetically engineered to be hardier in drought conditions and use less water. (GE, Nontransgenic)
- Shelf life extended/Spoilage resistance: Crops must travel from the farm to the consumer without spoiling or being damaged. Some crops must even be harvested before they are ripe to increase their shelf life; tomatoes are an example. (GE, Nontransgenic)

Current GM Crops approved by FDA

The Safety of GM Crops

While many view genetically modified crops as a promising innovation, there is still much controversy surrounding their use. This debate is taking place worldwide. There are many questions being raised in the minds of consumers. How do GMOs impact the environment? Are GMOs safe for consumption? How does the production of GM crops impact communities and economics from various points of view?

It takes many years for a GM crop to be developed, tested, and finally approved for commercial release. Prior to their release, GMO foods are monitored and regulated by three primary agencies in the United States:

- Food and Drug Administration: "FDA regulates the safety of food for humans and animals, including foods produced from genetically engineered (GE) plants. Foods from GE plants must meet the same food safety requirements as foods derived from traditionally bred plants."9
- United States Department of Agriculture: "The USDA, EPA, and FDA work to ensure that crops produced through genetic engineering for commercial use are properly tested and studied to make sure they pose no significant risk to consumers or the environment."9
- Environmental Protection Agency: The EPA focuses on reviewing the environmental impacts of a GE crop prior to field-testing and the commercial release of the seed. They ensure there are no unintended consequences to honeybees, other beneficial insects, earthworms, fish, or the environment in general.10 They also look for any possible impact on other crops.

After careful consideration by these three agencies, a GM crop may be approved. After approval, seeds are made available for purchase and farmers can choose to grow the GM crop or not. It's important to understand that not all farmers choose to grow GM crop varieties even when they are available. Some choose to use conventional crop varieties and control pests, weeds, and disease using other methods.
A small percentage of farmers choose to grow and market food that can be certified and labeled as organic. Organic foods cannot be grown from genetically modified seeds and have specific regulations for how weeds and pests are managed. Although many studies have been conducted, none have proven that organic foods are nutritionally superior to conventionally grown foods.11 However, many consumers still choose organic. The organic food industry has shown increased consumer demand over the last decade, and some farmers have adopted this production method to meet the consumer demand.12

Although there are benefits to utilizing biotechnology, it's important to recognize the risks as well. While great effort and extensive research are put into the development and approval process of GM crops, scientists continue to look for negative impacts that could still be observed. One risk is that plants, particularly weeds, develop tolerance to the herbicides that are used to kill them. This is possible through the simple biological process of evolution, as the weed can become hardier with each generation and eventually become tolerant of the chemical.13 This process can happen with non-GM crops, but scientists are aware that it could happen faster with GM crops. Other risks that scientists test prior to release of a GM crop, and that they continue monitoring carefully, include how the crop affects non-target organisms (insects, fungi, soil biota), how the crop affects the biodiversity of the ecosystem, and whether transgenes could escape and affect other plants and/or organisms.

As a consumer, it can be difficult to wade through information about GMOs. There are many groups who strongly advocate against GMOs as well as others who advocate for GMOs. It is important to seek credible scientific evidence, then make your choice as you purchase your food. When making a decision about the production and consumption of GMOs, the science and safety may be sound, but there may be other considerations: the local community, the local and global economy, and the sustainability of the inputs and infrastructure required to plant and cultivate GM crops. In some situations a GMO may be a solution; in others, a GM crop may solve one problem but create another.

Interest Approach – Engagement

- Project the first slide of the GMO PowerPoint. Tell your students to imagine they are grocery shopping. As they are selecting their food items they begin to notice all of these labels. Hold a short class discussion about the labels and discuss what they might mean. Move on to slide two. Ask students if they have seen either of the two "non-GMO" labels. Ask students, "Are there any common food labels that could be misleading or meaningless?"
- Divide your class into small groups and give each group one set of the Food Label Cards. Instruct your students to look through the cards and tell you what words are contained on every food package. (non-GMO)
- Explain to your students that within their stack of cards there are 18 foods with labels that are "imposters." Explain that an imposter is something that is disguised. Some of the foods in their stack of cards are imposters because the ingredients in these foods are derived from crops that have currently not been genetically modified. (Allow students time to separate their cards. Use slide three as a visual.)
- Project slide four of the GMO PowerPoint.
Use the slide to explain that there are currently only 10 crops that have been genetically modified and approved for commercial use by farmers. Therefore, only foods containing these ingredients even have the possibility of being genetically modified. Once you have listed the crops, ask the students if they need to make any changes to their piles. - Give students the correct answers and list which foods could have GM ingredients and which foods could not actually be genetically modified because no GM form of the food exists. - Foods that could have GMOs: Soymilk (soybean), cinnamon crunch cereal (sugar could be from sugar beet), rice milk (canola oil), wheat bread (sugar and soybean oil), pita bread (sugar with unspecified source, canola/soybean oil), and margarine (canola and soybean oil). - Foods that currently do not have GMOs: 2% milk, graham crackers, clementines, yogurt, mango baby food, banana baby food, flax seed, rye flour, wheat flour, sweetener, sugar (this label specifies it is from sugar cane plant), shredded wheat, tea, coffee beans, rice, orange juice, sour cream, and cottage cheese. - Note to teacher: The two primary sources of table sugar are the sugar cane plant and the sugar beet. Many food labels list "cane sugar." Cane sugar or sugar cane is not an approved GM crop. If it does not specify, it could be from either plant. It could be genetically modified if it came from a sugar beet. - Introduce the lesson topic to the students by helping them see that as a consumer, every time they enter a grocery store they may have the opportunity to buy (or not buy) a GM food. In this lesson we will be talking about what GMOs really are and why some food companies are labeling their foods even though their food product could not possibly contain GMOs. Be sure students understand that the foods in their "imposter" pile are indeed non-GMO. Clarify that while the "non-GMO" label is accurate, it impacts consumer perceptions of the food potentially leading to misconceptions about food safety and the total number of GMO crops found in our food supply. Activity 1: GE and Me What is genetic engineering and why does it matter? - To begin, students will be learning what GMOs are and what they are not. (Students should still have their Food Cards after completing the Interest Approach section of the lesson.) Give each student a copy of the handout Critically Thinking GMOs. Have students fill out the Venn Diagram located on the first page of the handout as you go through Activity 1. Remind them along the way to make notes on this handout. - Show the video, How Are GMOs Created? Prior to showing the video, ask students if they have ever eaten papaya or drunk papaya juice. Show students the picture of the papaya and the papaya tree and explain that it is a tropical fruit grown mostly in Hawaii. Prepare students for the video by explaining that they will be learning how GMOs are created using the example of the papaya. - Optional: To further illustrate what a GMO is, show the inFact video The Unpopular Facts about GMOs. This video uses terminology and comparisons that will be familiar to your students, adding to their understanding of what a GMO is. - Display the GMO Crop Table (found on slide five of the GMO PowerPoint). Emphasize that the 10 crops listed in the first column are the only plants in our food supply with the potential of being genetically modified. The second column lists the trait that was "copied and pasted" into the genetic structure of these plants. 
- Next, teach what a GMO is not. Refer your students to their pile of food cards which have not currently been genetically modified. State that, "These foods have not been genetically modified, but they are different than their wild counterpart. They have changed through the years. How did this happen?" Draw on students' prior knowledge of science and genetics. Use guided questions to lead them to recognize that methods of natural and artificial selection have been used to improve our food crops for centuries. Review the following plant breeding techniques, using the information found in the Background Agricultural Connections section of the lesson to further define if needed: - Natural Selection - Artificial Selection - Cross breeding/Hybridization - Clearly explain that these traditional plant breeding processes have been used for many years to produce desired characteristics in plants. None of these processes use genetic engineering or genetic modification. - Summarize the difference between GM crops and crops created through traditional plant breeding by reviewing what students have recorded on the Venn Diagram found on page one of their handout. Check for understanding and help students fill in gaps as needed. An example can be found on slide seven of the GMO PowerPoint. Activity 2: Assessing the Risks and Benefits of GMO crops What are the risks and benefits of genetically modified crops? - Ask your students if they have ever seen news reports, memes, blogs, or other social media posts in strong opposition or support of GMOs. Hold a class discussion about some of the specific ideas and concerns students have or that they have heard from others. Summarize the discussion by concluding that it can be difficult to distinguish the facts (supported by credible evidence) from fiction (unsubstantiated opinions). - Conduct a Fact or Fiction class activity using either the attached PowerPoint or the Kahoot game linked below. As you conduct this activity, students should be taking notes on page two of their handout, Critically Thinking GMOs, by listing the benefits and risks of GMO crops. - PowerPoint version: Project the attached PowerPoint, GMO Fact or Fiction? Tell your students that you will be going through a list of claims regarding GM crops. Assign a signal to represent fact and a signal to represent fiction. (hold up a "fact" or "fiction" card, thumbs up for fact and thumbs down for fiction, etc.) Go through each slide individually. Project the claim and give students time to respond by giving the fact or fiction signal. Next, display the answer and the clarification. Discuss as needed. - Kahoot version: Access the "GMO Fact or Fiction?" Kahoot. Follow the basic Kahoot instructions or watch online tutorials for using this application in your classroom. Each student will need internet access through a tablet, smart phone, or computer to play the game. Explanations for each answer can be found in the PowerPoint version of the game. - Teacher tip: You will find some additional explanation in the Notes portion of each PowerPoint slide. Hyperlinks are also included with several of the slides. You may also find more detailed answers on several subjects on the webpage Top 10 Consumer Questions About GMOs, Answered. - Using the information found in the Background Agricultural Connections section of the lesson, explain to your students some of the regulatory processes that must take place prior to the commercial use of GM crops. 
- After completing the fact or fiction activity, summarize and help students synthesize what they have learned. Refer again to the pile of "imposter" food cards and ask, "Why are so many foods at the grocery store labeled as "non-GMO" when that particular food product does not have a GMO counterpart?" (Likely due to heightened fear, misinformation, and consumers' lack of understanding of what GMOs are. In response, food companies have begun labeling their products.) As a follow-up question ask, "Do you think this labeling practice helps or hurts the food industry? Why?" (Answers will vary) Activity 3: How genetic engineering is used in the production of our food How can genetic engineering address the supply (farm production) and demand (needs) of agricultural products? - Refer to the instructions for the Have a Ball activity. As directed, use a ball with several numbers written on it to provide an object lesson about perspective and points of view. Help students understand that the use and implementation of biotechnology has many perspectives. Discuss how the point of view of a farmer, a scientist, and a consumer could have both differences and similarities. List these three people on the board and any others your students identify as having a different perspective. - Explain to your students that two factors determine the success of producing a crop. First, the farmer needs to be able to grow a safe product and produce an adequate harvest to be viable economically. Farmers provide our food supply. Second, consumers create the demand for a product when they purchase the product to meet their needs. The production of our food follows simple laws of supply and demand. - Use the following steps to draw a sketch on the board similar to the one below to illustrate: - Begin by writing the goal in the center of the board. Explain to your students that a successful crop satisfies the farmer and the consumer. - Next, draw two roads meeting together at the goal. Label one road for the farmer (supply) and the other road for the consumer (demand). - Last, explain that challenges will arise in meeting the ultimate goal. Illustrate the challenges by drawing a rock in each road. Explain that some challenges may be big and others may be small. Some challenges may stop the production or consumption of food altogether, and others may just slow it down. - Print the Crop Supply and Demand Challenge Cards and cut them in half. Distribute them to groups in your class. Ask each group to read the card and prepare to explain the challenge to their peers. - Have each student group present their challenge to the class. Determine if the challenge is faced by the farmer in order to produce a supply of food or if it is a "demand" from the consumer. Tape the card to the board on the appropriate side. Students should continue to make notes on page two of their Critically Thinking GMOs handout by continuing to list benefits and risks of GMOs. - Optional: After each student group presents a challenge, ask students to raise their hands and identify a perspective on that topic. Refer to the list of people you made in step 1 of this activity. Call on the students by tossing them the ball to present the perspective. - For example, after discussing the "Pests" card a student may identify that a farmer's perspective would be to grow GM crops to eliminate a pest problem without the use of insecticides. 
Another student may identify that a consumer may choose food labeled as "organic" even if the cost is greater because of what they have read on social media about GMOs or chemicals used to control pests. Another student may point out that a different consumer would have no problem purchasing a GM crop, especially if it's cheaper.

- Repeat step five until all the challenges have been presented and discussed.
- Teacher tip: If time is short, speed this activity up by eliminating the student group participation outlined in steps 4-5. Instead, briefly introduce and describe the challenges to the students and place them on the board.
- Summarize by reminding students that there are many methods and tools available to overcome these challenges. Methods available to farmers range from organic (without the use of chemicals) to conventional (using chemicals if necessary), and tools include the use of various traditional methods of selective breeding as well as the use of biotechnology to create GMOs.
- Discuss the reality that although the science of genetic modification is sound, it still must be accepted by consumers to succeed. Consumers create the demand. For example, the development of Golden Rice was a scientific success but a social failure. Share the video, What is a GMO?, to illustrate. (The segment about Golden Rice begins at 2:15.) After watching the video, ask the following questions:
- What important nutrient did Golden Rice contain? (beta carotene, which the body converts into vitamin A)
- Why was Golden Rice rejected by the people it was designed to help? (they feared it)

Help students distinguish between biological science and social science. Based on what they have learned in this lesson, students should be able to distinguish between the two and recognize the impact of both. While biological science has confirmed the safety of GMOs in our food system, social science still impacts the acceptance of the technology in our society.

Concept Elaboration and Evaluation

- After conducting these activities, students should recognize that the use of GM crops has scientific and social implications. Explain that socioscientific issues such as these are open-ended problems which may have multiple solutions. Evaluate student learning by following the instructions found on pages 3-4 of the Critically Thinking GMOs handout. Begin by dividing students into teams of two and assigning one student to be in favor of GMOs and the other to be against GMOs. Then have students follow the remaining instructions on the handout to complete the activity.
- Review and summarize the following key concepts with your students:
- Biotechnology is one tool that may help address challenges in food production (e.g., drought, pests, and disease) to meet the growing demand for food.
- GM crops can increase crop yields (harvest) due to decreased crop loss from pests, disease, and drought.
- Although significant research is performed to evaluate the safety of GM crops for consumption as well as to assess the potential for harm to the environment, some consumers remain concerned by the social and economic issues related to increased use of biotechnology and GM crops.
- The discussion on the safety of GM crops can be viewed from many perspectives (e.g., farmers, consumers, scientists, nutritionists). - As a formative assessment, assign students to find something in the news or on social media about GMOs and determine, based on scientific evidence, if the claim/opinion is accurate or not. - Use the Biotech Cheese Kit to make cheese in your classroom. Your students may not know that most cheese is made using an enzyme developed through biotechnology. Historically, cheese was made using an enzyme called rennet, which was obtained from the lining of a calf or other ruminant animal's stomach. Rennet is an enzyme which coagulates milk in the cheese making process. Biotechnology was later used to produce chymosin, the key milk-clotting enzyme in rennet, and this fermentation-produced chymosin is now used widely in commercial cheese production. - As a homework assignment, have students visit the GMO Answers website and enter a question they have about GMOs. This website is designed for consumers to ask questions about GMOs. (Most likely a similar question has already been asked and they will find an answer.) Assign students to find two questions or topics that interest them and then write a response to each question in their own words using what they learn through the given responses and linked articles. - Use the attached GMO Crop Spotlight sheets to assign individual students or groups of students to research the current GM crops available on the market. The ISAAA website contains a crop database with pertinent information for students to complete the assignment successfully. - Watch The Journey to Harvest (3:01 mins) and learn about the 20-year journey of the Arctic Apple®. As a class, discuss how Arctic® apples could decrease food waste and provide other consumer benefits such as convenient packaging and nutrition. Visit the Arctic Apple® website for more information. - Orient students to the overall adoption and use of GM corn, cotton, and soybeans by visiting the USDA Economic Research Service webpage. Project the chart titled Adoption of genetically engineered crops in the United States, 1996-2015. Help orient the students to the graph by explaining that it represents the adoption and use of GMO corn, cotton, and soybeans in the United States since 1996. Explain that "HT" stands for herbicide tolerance and "Bt" stands for Bacillus thuringiensis, a bacterium whose genes make the crop insect resistant. Ask students, "What is the general trend for the adoption and use of GM corn, cotton, and soybeans?" (generally increasing with some years/crops showing a small dip) Suggested Companion Resources GMO Case Study (Activity) GM Soybean Seed Kit (Kit) Biotech Cheese Kit (Kit) GMO Infographics (Poster, Map, Infographic) Crop Modification Techniques (Poster, Map, Infographic) The Life of a Seed- Jake, a GMO Seed (Multimedia) Genetically Engineered Crops in the United States Report (Multimedia) Genetically Engineered Crops Report (Multimedia) How Are GMOs Created? (Multimedia) Natural GMO? Sweet Potato Genetically Modified 8,000 Years Ago (Multimedia) Why are GMOs Bad? (Multimedia) Crop Genetic Engineering Simulation (Multimedia) Genetically Modified Food: Good, Bad, Ugly (Multimedia) Give it a Minute: Organic & Conventional Farming (Multimedia) Biotech in Focus (Booklets & Readers) Journey of a Gene (Website) What's In My Food? (Website) Food Dialogues (Website) GMO Answers (Website) Ann Butkowski, science teacher at Humboldt High School in St. Paul, MN, wrote the original lesson for the Minnesota Agriculture in the Classroom program in 2013.
The lesson was rewritten and updated in 2016 by National Agriculture in the Classroom. The Critically Thinking GMOs worksheet was developed using the concepts taught in the NSTA publication Making Critical Friends, written by Sara Raven, Vanessa Klein, and Bahadir Namdar. National Agriculture in the Classroom and Minnesota Agriculture in the Classroom
What is a Scarab and what does it mean? It is a representation or image of a beetle, much used among the ancient Egyptians as a symbol, seal, amulet or a gem cut to resemble a beetle. Scarabs are a common type of amulet, seal or ring bezel found in Egypt, Nubia and Syria from the 6th Dynasty until the Ptolemaic Period (2345-30 BC). The earliest were purely amuletic and uninscribed: it was only during the Middle Kingdom (2055-1650 BC) that they were used as seals. The scarab seal is so called because it was made in the shape of the sacred scarab beetle (Scarabaeus sacer), which was personified by Khepri, a sun god associated with resurrection. The flat underside of the scarab, carved in stone or moulded in faience or glass, was usually decorated with designs or inscriptions, sometimes incorporating a royal name. Scarabs, however, have proven to be an unreliable means of dating archaeological contexts since the royal name is often that of a long-dead ruler, such as Menkheperra, the prenomen of Thutmose III (1479-1425 BC). During the reign of Amenhotep III (1390-1352 BC), a series of unusually large scarabs were produced to celebrate certain events or aspects of Amenhotep's reign, from the hunting of bulls and lions to the listing of the titles of Queen Tiy. There were also a number of funerary types of scarabs such as the large winged scarab, virtually always made of blue faience and incorporated into the bead nets covering mummies, and the heart scarab, usually inscribed with Chapter 30B of the Book of the Dead, which was included in burials from at least the 13th Dynasty (1795-1650 BC). The term scaraboid is used to describe a seal or amulet, which has the same oval shape as a scarab but may have its back carved in the form of some creature other than the scarab beetle. This appears to have developed out of the practice of carving two-dimensional animal forms on the flat underside of the scarab. This period is known as the First Intermediate Period (2181-2055 BC). Finely carved scarabs were used as seals; inscribed scarabs were issued to commemorate important events or buried with mummies. Lapis lazuli was a mineral often used. It is a metamorphosed form of limestone, rich in the blue mineral lazurite, a complex feldspathoid that is dark blue in color and often flecked with impurities of calcite, iron pyrites or gold. The Egyptians considered that its appearance imitated that of the heavens and considered it to be superior to all materials other than gold and silver. They used it extensively in jewelry until the Late Period (747-332 BC) when it was particularly popular for amulets. It was frequently described as "true" khesbed to distinguish it from imitations made in faience or glass. Its primary use was as inlay in jewelry and carved beads for necklaces. Scarabs have come a long way from shells and bones; it's fascinating how the art of body adornment using decorative objects has evolved through time. Scarabs serve many purposes other than to serve as mere decorations. Over the ages, they have been used to symbolise wealth, serve as currency, act as fashion accessories and also serve as a form of artistic expression. Precious metals and stones were used from very early ages as a sign of wealth and opulence. Royalty have always used scarabs as a means for securing and consolidating wealth and even to the present day, some of the most precious pieces of jewelry are antiques. Royal jewels rank among the most expensive and luxurious assets of all time.
Many forms of jewelry that we use today have their genesis as purely functional pieces. Pins, brooches and buckles were initially created to serve specific practical purposes, but they later evolved into more decorative versions and began to be considered as jewelry for adornment. Jewelry has also been an important part of religion and social groups, used to signify membership in a group and status within it. A long history and many religious beliefs surround the scarab symbol, which was one of the most important religious symbols in the mythology of ancient Egypt. The scarab beetle was also called the dung beetle because of its practice of rolling a ball of dung across the ground, which it then used as a food source. The scarab beetle symbolised the sun because the ancient Egyptians saw a likeness between the scarab beetle rolling the dung and the sun god rolling the sun, making it shine on Earth. In ancient Egyptian religion the scarab was also a symbol of immortality, resurrection, transformation and protection, much used in funerary art. The life of the scarab beetle revolved around the dung balls that the beetles consumed, laid their eggs in, and fed to their young; this life cycle represented rebirth. When the eggs hatched, the scarab beetle would seem to appear from nowhere, making it a symbol of spontaneous creation, resurrection, and transformation. A scarab amulet provided the wearer with protection and confidence in the certain knowledge of reincarnation.
Stonehenge is a monumental circular setting of large standing stones surrounded by a circular earthwork, built in prehistoric times beginning about 3100 BC and located about 13 km (8 miles) north of Salisbury, Wiltshire, Eng. The modern interpretation of the monument is based chiefly on excavations carried out since 1919 and especially since 1950. The Stonehenge that visitors see today is considerably ruined, many of its stones having been pilfered by medieval and early modern builders (there is no natural building stone within 21 km [13 miles] of Stonehenge); its general architecture has also been subjected to centuries of weathering and depredation. The monument consists of a number of structural elements, mostly circular in plan. On the outside is a circular ditch, with a bank immediately within it, all interrupted by an entrance gap on the northeast, leading to the Avenue. At the center of the circle is a stone setting consisting of a horseshoe of tall uprights of sarsen (Tertiary sandstone) encircled by a ring of tall sarsen uprights, all originally capped by horizontal sarsen lintels. Within the sarsen stone circle were also configurations of smaller and lighter bluestones (igneous rock of diabase, rhyolite, and volcanic ash), but most of these bluestones have disappeared. Additional stones include the so-called Altar Stone, the Slaughter Stone, two Station stones, and the Heel Stone, the last standing on the Avenue outside the entrance. Small circular ditches enclose two flat areas on the inner edge of the bank, known as the North and South barrows, with empty stone holes at their centers. Archaeological excavations since 1950 suggest three main periods of building--Stonehenge I, II, and III, the last divided into phases. In Stonehenge I, about 3100 BC, the native Neolithic people, using deer antlers for picks, excavated a roughly circular ditch about 98 m (320 feet) in diameter; the ditch was about 6 m (20 feet) wide and 1.4 to 2 m (4.5 to 7 feet) deep, and the excavated chalky rubble was used to build the high bank within the circular ditch. They also erected two parallel entry stones on the northeast of the circle (one of which, the Slaughter Stone, still survives). Just inside the circular bank they also dug--and seemingly almost immediately refilled--a circle of 56 shallow holes, named the Aubrey Holes (after their discoverer, the 17th-century antiquarian John Aubrey). The Station stones also probably belong to this period, but the evidence is inconclusive. Stonehenge I was used for about 500 years and then reverted to scrubland. During Stonehenge II, about 2100 BC, the complex was radically remodeled. About 80 bluestone pillars, weighing up to 4 tons each, were erected in the center of the site to form what was to be two concentric circles, though the circles were never completed. (The bluestones came from the Preseli Mountains in southwestern Wales and were either transported directly by sea, river, and overland--a distance of some 385 km [240 miles]--or were brought in two stages widely separated in time.) The entranceway of this earliest setting of bluestones was aligned approximately upon the sunrise at the summer solstice, the alignment being continued by a newly built and widened approach, called the Avenue, together with a pair of Heel stones. The double circle of bluestones was dismantled in the following period. 
The initial phase of Stonehenge III, starting about 2000 BC, saw the erection of the linteled circle and horseshoe of large sarsen stones whose remains can still be seen today. The sarsen stones were transported from the Marlborough Downs 30 km (20 miles) north and were erected in a circle of 30 uprights capped by a continuous ring of stone lintels. Within this ring was erected a horseshoe formation of five trilithons, each of which consisted of a pair of large stone uprights supporting a stone lintel. The sarsen stones are of exceptional size, up to 9 m (30 feet) long and 50 tons in weight. Their visible surfaces were laboriously dressed smooth by pounding with stone hammers; the same technique was used to form the mortise-and-tenon joints by which the lintels are held on their uprights, and it was used to form the tongue-and-groove joints by which the lintels of the circle fit together. The lintels are not rectangular; they were curved so that together they form a circle. The pillars are tapered upward. The jointing of the stones is probably an imitation of contemporary woodworking. In the second phase of Stonehenge III, which probably followed within a century, about 20 bluestones from Stonehenge II were dressed and erected in an approximate oval setting within the sarsen horseshoe. Sometime later, about 1550 BC, two concentric rings of holes (the Y and Z Holes, today not visible) were dug outside the sarsen circle; the apparent intention was to plant upright in these holes the 60 other leftover bluestones from Stonehenge II, but the plan was never carried out. The holes in both circles were left open to silt up over the succeeding centuries. The oval setting in the center was also removed. The final phase of building in Stonehenge III probably followed almost immediately. Within the sarsen horseshoe the builders set a horseshoe of dressed bluestones set close together, alternately a pillar followed by an obelisk followed by a pillar and so on. The remaining unshaped 60-odd bluestones were set as a circle of pillars within the sarsen circle (but outside the sarsen horseshoe). The largest bluestone of all, traditionally misnamed the Altar Stone, probably stood as a tall pillar on the axial line. About 1100 BC the Avenue was extended from Stonehenge eastward and then southeastward to the River Avon, a distance of about 2,780 m (9,120 feet). This suggests that Stonehenge was still in use at the time. Why Stonehenge was built is unknown, though it probably was constructed as a place of worship of some kind. Notions that it was built as a temple for Druids or Romans are unsound, because neither was in the area until long after Stonehenge was last constructed. Early in the 20th century, the English astronomer Sir Norman Lockyer demonstrated that the northeast axis aligned with the sunrise at the summer solstice, leading other scholars to speculate that the builders were sun worshipers. In 1963 an American astronomer, Gerald Hawkins, proposed that Stonehenge was a complicated computer for predicting lunar and solar eclipses. These speculations, however, have been severely criticized by most Stonehenge archaeologists. "Most of what has been written about Stonehenge is nonsense or speculation," said R.J.C. Atkinson, archaeologist from University College, Cardiff. "No one will ever have a clue what its significance was." Excerpt from the Encyclopedia Britannica without permission.
“Diaspora” (from the Greek word for “scattering”) refers to the dispersion of a people from their homeland. A simple definition of diaspora literature, then, would be works that are written by authors who live outside their native land. The term identifies a work’s distinctive geographic origins. But diaspora literature may also be defined by its contents, regardless of where it was written. For example, the story of Joseph (Gen 37-50) is often called a “diaspora story” because although its final form was written within the land of Israel, it describes how Joseph learns to survive outside his homeland. The book of Job, too, may be an example of diaspora literature because it was likely written in the wake of the Babylonian destruction, which gave rise to the question, Why would God punish Israel, the chosen people, with such mass suffering? The term diaspora comes to us from the Greek translation of the Hebrew Bible, particularly Deut 28:25. This translation was called the Septuagint and was the project of Greek-speaking Jews living in the Egyptian diaspora. In the broadest possible terms, the entire Septuagint could be described as diaspora literature, because it is the work of Jews living outside their homeland—and their translation reflects that orientation. But specific books within it, such as the books of Tobit and Judith, which feature Jewish protagonists living outside the land or under foreign domination and which reflect on how the Jews might conduct themselves in this situation, could be described as especially diasporic because of their contents and concerns. We could also draw a distinction between exile and diaspora to further define what diaspora literature is. The difference between exile and diaspora may lie in a book’s attitude toward the homeland and toward the migration. Exile emphasizes the forced nature of the migration and the freshness of the experience of leaving the homeland; exile is not neutral and exiled peoples usually possess a single-minded desire to return to their homeland. Time is also a factor: exilic literature may be written during the Babylonian exile of the sixth century B.C.E., when the experience and memory of it was still vivid. In contrast, living “in diaspora” may assume a certain accommodation to living away from the homeland—and a sense that it is possible to survive and even thrive in the adopted country. Diaspora implies a more neutral or even a more positive view than exile does. Diasporic literature may be mindful of the ancestral native land, but the nostalgia for it has lessened, if not disappeared. And diasporic literature is, moreover, engaged by the possibilities of the new location. Finally, it may be written well after the Babylonian exile by Jews who chose not to return. Diasporic living stops short of assimilation because the community still maintains its distinctive identity and its status as a minority people. The diasporic book of Daniel, for example, celebrates Daniel’s refusal to assimilate to the pressures of the gentile court—such as his refusal to eat the nonkosher food at the king’s table. The book of Esther could also be described as diaspora literature, regardless of where it was written, because it reflects on what it means to be a Jew living outside the land—with all the accompanying dangers and opportunities. 
These books' subtle reflections of the instabilities of diasporic existence have given them lasting appeal; their meditations on leadership and self-sacrifice for the good of the community resonate with those who wrestle with the vicissitudes of their own diasporic existences.
8th Grade Math What is 8th grade math all about? In eighth grade, students make several advances in their algebraic reasoning, particularly as it relates to linear equations. Students extend their understanding of proportional relationships to include all linear equations, and they consider what a "solution" looks like when it applies to a single linear equation as well as a system of linear equations. They learn that linear equations can be a useful representation to model bivariate data and to make predictions. Functions emerges as a new domain of study, laying a foundation for more in-depth study of functions in high school. Lastly, students study figures, lines, and angles in two-dimensional and three-dimensional space, investigating how these figures move and how they are measured. How did we order the units? In Unit 1, Exponents & Scientific Notation, students start off the year with a study of patterns and structure, using this structure to formalize properties of exponents. They reach back to skills learned in sixth grade to simplify complex exponential expressions and to represent and operate with very large and very small numbers. In Unit 2, Solving One-Variable Equations, students continue to hone their skill of solving equations. Students solved equations in sixth and seventh grades, and in eighth grade, students become more efficient and more strategic in how they approach and solve equations in one variable. Including this unit at this point in the year allows time for spiraling and incorporating these skills into future units. In Unit 3, Transformations & Angle Relationships, students formalize their understanding of congruence and similarity as defined by specific movements of figures in the coordinate plane. They experiment with, manipulate, and verify hypotheses around how shapes move under different transformations. Studying similarity, students observe how ratios between similar triangles stay the same, which sets them up for understanding slope in Unit 5. Students also make informal arguments, which prepares them for more formal proofs in high school geometry. Unit 4, Functions, introduces students to the concept of a function, which relates inputs and outputs. Students analyze and compare functions, developing appropriate vocabulary to use to describe these relationships. They investigate real-world examples of functions that are both linear and nonlinear, and use functions to model relationships between quantities. This introductory study of functions prepares students for Unit 5, in which they focus on a particular kind of function—linear equations. Unit 5, Linear Relationships, and Unit 6, Systems of Linear Equations, are all about lines. Students make the connection between proportional relationships, functions, and linear equations. They deepen their understanding of slope, making the connection back to similar triangles in Unit 3. Students think critically about relationships between two quantities: how they are represented, how they compare to other relationships, and what happens when you consider more than one linear equation at a time. Throughout these two units, students utilize their skills from Unit 2 as they manipulate algebraic equations and expressions with precision. In Unit 7, Pythagorean Theorem & Volume Applications, students discover the Pythagorean Theorem, which is supported by a study of irrational numbers. Students now have a full picture of the real number system.
Lastly, in Unit 8, Bivariate Data, students analyze data in two variables using linear equations and two-way tables. They use these structures to make sense of the data and to make justifiable predictions. Note that this course follows the 2017 Massachusetts Curriculum Frameworks, which include the Common Core Standards for Mathematics.
- Unit 1: Exponents & Scientific Notation (15 lessons)
- Unit 2: Solving One-Variable Equations (12 lessons)
- Unit 3: Transformations & Angle Relationships (22 lessons)
- Unit 4: Functions (12 lessons)
- Unit 5: Linear Relationships (15 lessons)
- Unit 6: Systems of Linear Equations (11 lessons)
- Unit 7: Pythagorean Theorem & Volume Applications (16 lessons)
- Unit 8: Bivariate Data (9 lessons)
Reading is a complex process. It involves auditory and visual discrimination in addition to cognitive construction. Montessori educators understand that children must reach certain developmental milestones in order for learning to become their own. The Montessori teacher prepares the environment to support this development, and her role as observer and guide is key in nurturing this independence. Continue reading as we provide some helpful information in this regard. Montessori Education Builds Development Needed Before Reading Learning to read is not a race, though some popular media and some contemporary early childhood programs would lead us to believe otherwise. Parents who are driven to give their child an edge in a world of information are pushed to expose their child to the rudiments of reading before the child is developmentally ready. Children who are not developmentally ready to read will only be frustrated when pushed too fast, too soon. Learning to read becomes a distasteful process. Montessori, on the other hand, focuses primarily on developing a lifelong love of learning through a very specific, developmentally-appropriate curriculum. Cognitive Developmental Preparation - Sensorial development aids reception of information - Perceptual development aids in organizing and integrating information - Neurological development aids in physical processing of information - Social development aids in understanding relationships between people and events - Symbolic development aids in decoding - Concept formation - Verbal and visual language - Gross & fine motor control - Eye-hand coordination - Ability to perceive figures in space - Ability to organize spatial relationships - Ability to differentiate contrasting symbols and sounds - Ability to classify - Ability to understand meaning in content - Strong auditory discrimination - Ability to focus - Ability to understand and follow verbal directions As much as possible, NAMC's web blog reflects the Montessori curriculum as provided in its teacher training programs. We realize and respect that Montessori schools are unique and may vary their schedules and offerings in accordance with the needs of their individual communities. We hope that our readers will find our articles useful and inspiring as a contribution to the global Montessori community. © the North American Montessori Center - originally posted in its entirety at Montessori Teacher Training on Wednesday, April 28, 2010.
Effect of Frequency on Inductive Reactance In an a.c. circuit, an inductor produces inductive reactance, which causes the current to lag the voltage by 90 degrees. Because the inductor "reacts" to a changing current, it is known as a reactive component. The opposition that an inductor presents to a.c. is called inductive reactance (XL). This opposition is caused by the inductor "reacting" to the changing current of the a.c. source. Both the inductance and the frequency determine the magnitude of this reactance. This relationship is stated by the formula: XL = 2πfL, where XL is the inductive reactance in ohms, f is the frequency in hertz, and L is the inductance in henries. As shown in the equation, any increase in frequency, or "f," will cause a corresponding increase of inductive reactance, or "XL." Therefore, the INDUCTIVE REACTANCE VARIES DIRECTLY WITH THE FREQUENCY. As you can see, the higher the frequency, the greater the inductive reactance; the lower the frequency, the less the inductive reactance for a given inductor. This relationship is illustrated in figure 1-2. Increasing values of XL are plotted in terms of increasing frequency. Starting at the lower left corner with zero frequency, the inductive reactance is zero. As the frequency is increased (reading to the right), the inductive reactance is shown to increase in direct proportion. Figure 1-2. - Effect of frequency on inductive reactance. Effect of Frequency on Capacitive Reactance In an a.c. circuit, a capacitor produces a reactance which causes the current to lead the voltage by 90 degrees. Because the capacitor "reacts" to a changing voltage, it is known as a reactive component. The opposition a capacitor presents to a.c. is called capacitive reactance (XC). The opposition is caused by the capacitor "reacting" to the changing voltage of the a.c. source. The formula for capacitive reactance is: XC = 1/(2πfC), where XC is the capacitive reactance in ohms, f is the frequency in hertz, and C is the capacitance in farads. In contrast to the inductive reactance, this equation indicates that the CAPACITIVE REACTANCE VARIES INVERSELY WITH THE FREQUENCY. When f = 0, XC is infinite and decreases as frequency increases. That is, the lower the frequency, the greater the capacitive reactance; the higher the frequency, the less the reactance for a given capacitor. As shown in figure 1-3, the effect of capacitance is opposite to that of inductance. Remember, capacitance causes the current to lead the voltage by 90 degrees, while inductance causes the current to lag the voltage by 90 degrees. Figure 1-3. - Effect of frequency on capacitive reactance. Effect of Frequency on Resistance The formulas for XL and XC both contain "f" (frequency). Any change of frequency changes the reactance of the circuit components as already explained. So far, nothing has been said about the effect of frequency on resistance. In an Ohm's law relationship, such as R = E/I, no "f" is involved. Thus, for all practical purposes, a change of frequency does not affect the resistance of the circuit. If a 60-hertz a.c. voltage causes 20 milliamperes of current in a resistive circuit, then the same voltage at 2000 hertz, for example, would still cause 20 milliamperes to flow. NOTE: Remember that the total opposition to a.c. is called impedance (Z). Impedance is the combination of inductive reactance (XL), capacitive reactance (XC), and resistance (R). When dealing with a.c. circuits, the impedance is the factor with which you will ultimately be concerned. But, as you have just been shown, the resistance (R) is not affected by frequency. Therefore, the remainder of the discussion of a.c. circuits will only be concerned with the reactance of inductors and capacitors and will ignore resistance.
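To make the direct and inverse relationships above concrete, the short Python sketch below computes XL = 2πfL and XC = 1/(2πfC) at several frequencies. It is an illustrative addition, not part of the original course text, and the component values (a 10 millihenry inductor and a 1 microfarad capacitor) are arbitrary assumptions chosen only for the example.

import math

def inductive_reactance(frequency_hz, inductance_h):
    # XL = 2 * pi * f * L -- increases in direct proportion to frequency
    return 2 * math.pi * frequency_hz * inductance_h

def capacitive_reactance(frequency_hz, capacitance_f):
    # XC = 1 / (2 * pi * f * C) -- decreases as frequency increases
    return 1 / (2 * math.pi * frequency_hz * capacitance_f)

L = 0.010   # assumed 10 mH inductor
C = 1e-6    # assumed 1 uF capacitor
for f in (60, 600, 6000):   # frequencies in hertz
    xl = inductive_reactance(f, L)
    xc = capacitive_reactance(f, C)
    print(f"{f} Hz: XL = {xl:.1f} ohms, XC = {xc:.1f} ohms")

Each tenfold increase in frequency multiplies XL by ten and divides XC by ten, while a purely resistive value would remain unchanged.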
A.c. Circuits Containing Both Inductive and Capacitive Reactances A.c. circuits that contain both an inductor and a capacitor have interesting characteristics because of the opposing effects of L and C. XL and XC may be treated as reactors which are 180 degrees out of phase. As shown in figure 1-2, the vector for XL should be plotted above the baseline; the vector for XC (figure 1-3) should be plotted below the baseline. In a series circuit, the effective reactance, or what is termed the RESULTANT REACTANCE, is the difference between the individual reactances. As an equation, the resultant reactance is: X = XL - XC Suppose an a.c. circuit contains an XL of 300 ohms and an XC of 250 ohms. The resultant reactance is: X = XL - XC = 300 - 250 = 50 ohms (inductive) In some cases, the XC may be larger than the XL. If XL = 1200 ohms and XC = 4000 ohms, the difference is: X = XL - XC = 1200 - 4000 = -2800 ohms (capacitive). The total carries the sign (+ or -) of the greater number (factor). Q.1 What is the relationship between frequency and the values of (a) XL, (b) XC, and (c) R?
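The resultant-reactance rule can be checked the same way. The following minimal Python sketch (again an illustrative addition, not part of the original course text) reproduces the two worked examples above and labels the net reactance as inductive or capacitive.

def resultant_reactance(xl_ohms, xc_ohms):
    # Series circuit: resultant reactance X = XL - XC.
    # A positive result is net inductive; a negative result is net capacitive.
    x = xl_ohms - xc_ohms
    if x > 0:
        kind = "inductive"
    elif x < 0:
        kind = "capacitive"
    else:
        kind = "zero (purely resistive)"
    return x, kind

print(resultant_reactance(300, 250))    # (50, 'inductive')
print(resultant_reactance(1200, 4000))  # (-2800, 'capacitive')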
Long-term care for livestock requires you to have some knowledge about the nutrient density of the food you feed them. Different kinds of hay have differing amounts of minerals and protein, so knowing which kind you feed your animals is essential for deciding whether you will need to supplement their feed to keep them healthy. What Is Hay? Hay is made from cutting grasses during different growth periods of the plant's life. The cycles of plant life range from the leafing period, to budding, flowering and going to seed. Most hay is cut between the bud and bloom phase, which maximizes the nutritional content of the hay. Farmers then allow the cut hay to dry in the field, curing it for baling and storage for future use, either to be sold or to be used as feed for livestock during the winter. Categories of Hay Farmers divide hay into different categories, depending on what plants make it up. Categories of hay are legume, grass, cereal grain and a mixture of both legumes and grass. Red clover is one example of legume hay, while bermuda grass and fescue are two kinds of hay that come from grasses. A mixed hay would have both a legume and a grass. An example of a cereal grain would be oat straw. Nutritional Content of Hay Grass hays tend to have more sugar if they grow in the winter versus those, like coastal bermuda grass, that grow in the summer. Hays from legumes tend to have more minerals like calcium and phosphorus. Hays that come from grains contain a lot of nitrates, especially if they have a growth spurt following a period of drought. Cereal grains should be tested for nitrate content before being fed to livestock because too many nitrates can poison an animal. Mixed Hays and Balanced Nutrition Hay from grass and hay from legumes tend to complement each other as livestock feed because they provide a balanced diet. Grasses tend to have half or less the protein of hay from legumes, while grass has a higher amount of fiber and a more appealing flavor. Since grass-eating livestock have sensitive stomachs, you should keep their diet consistent, or change from one kind of hay to another gradually so as not to upset their digestion. Protein Content of Hay Comparing different kinds of hay, like alfalfa to timothy grass, shows you how different the protein content of hays can be. The protein content of an alfalfa plant is between 15 and 20 percent. Compare this to timothy grass hay, in which the protein content is about half that of alfalfa, yielding about 7 to 11 percent crude protein. Tall fescue grass has a protein content of between 5 and 9 percent, orchard grass hay between 7 and 11 percent and red clover hay between 13 and 16 percent crude protein.
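As a rough illustration of what those protein percentages mean in practice, the short Python sketch below estimates the pounds of crude protein in a 50-pound bale for several of the hays mentioned above. It is a hypothetical example added for clarity: the 50-pound bale weight and the mid-range protein values are assumptions, not figures from the original article.

# Estimate crude protein per bale using mid-range values for each hay type.
BALE_WEIGHT_LB = 50  # assumed small square bale weight

crude_protein_fraction = {
    "alfalfa": 0.17,      # article range: roughly 15-20 percent
    "timothy": 0.09,      # article range: roughly 7-11 percent
    "tall fescue": 0.07,  # article range: roughly 5-9 percent
    "red clover": 0.145,  # article range: roughly 13-16 percent
}

for hay, fraction in crude_protein_fraction.items():
    protein_lb = BALE_WEIGHT_LB * fraction
    print(f"{hay}: about {protein_lb:.1f} lb crude protein per {BALE_WEIGHT_LB} lb bale")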
Physical education is a course that focuses on developing physical fitness in young people. Same as Music, Gym and Math, this is a required course in primary and secondary school. Most of the time, it is also required in college. To understand physical education, we must understand physical fitness, which it intends to promote. Physical fitness comprises the following: Cardiovascular fitness - This is the ability of your heart and lungs to deliver the oxygen your body needs for its daily tasks. This is the fitness component that is addressed by such aerobic activities as brisk walking, jogging, running, dancing and swimming. Strength - This is the amount of physical power that a muscle or group of muscles can use against a weight or resistance. This is addressed by such activities as weight lifting and body weight training. Endurance - This is the ability of a muscle or group of muscles to repeat movements or hold a position over a certain period of time. Long-distance running is an activity that helps to develop endurance. Flexibility - This refers to the body's range of movement. Pilates, yoga and gymnastics help promote this particular fitness component. Body composition - This refers to the ratio of the body's fat component vs. its lean mass. Exercises that address cardiovascular fitness, strength, endurance and flexibility also promote the reduction of fat and the build-up of muscle. Students of Music, Gym and Math often have to be challenged, in order to be interested. To break the monotony of the traditional Physical Education courses, many schools have updated their programs. These are some of the trends that are pervading the Physical Education programs across the country: The inclusion of activities that the students can use for life, like brisk walking, Frisbee and bowling. The principle behind this is that if students learn to like these activities early, they can easily adopt them into their current lifestyle and even carry them into adulthood. The inclusion of non-traditional sports - This makes Physical Education a cultural immersion at the same time. It teaches cultural sensitivity and can be a lot of fun. Patterning the Physical Education program after health club programs - The advantage of this is that the student is exposed to a whole variety of activities that can only make Physical Education more fun for her. Here, the student may do Tae-bo one day and do yoga the next. The combination of cardio and strength training activities also promotes overall fitness. Adopting a sports league model - In this scenario, the Physical Education class is run like a sports league. Students take turns playing the roles of referees, players, scorers and coaches. This aims to develop the students into better-rounded, balanced individuals. Including martial arts and self-defense - Not only do these activities capture the interest of the students - they also promote their safety and well-being. This is a practical improvement on the usual Physical Education program. Inclusion of health and nutrition topics - Most Physical Education programs in the US include health and nutrition topics such as the following: hygiene, stress and anger management, self-esteem and bullying.
Some states even require that Physical Education teachers are also certified as Health teachers. Exposure to technological enhancements - Students are taught how to use modern gym equipment as well as other fitness-related devices such as pedometers and heart-rate monitors. Although the primary goal of Physical Education is still to promote the physical fitness and well-being of each student, all these trends and advancements have changed the face of Physical Education forever. Music, Gym and Math will never be the same!
We also know that learning takes place most effectively in classrooms where knowledge is clearly and powerfully organized, students are highly active in the learning process, assessments are rich and varied, and students feel a sense of safety and connection (National Research Council, 1990; Wiggins & McTighe, 1998). We know that learning happens best when a learning experience pushes the learner a bit beyond his or her independence level. When a student continues to work on understandings and skills already mastered, little if any new learning takes place. On the other hand, if tasks are far ahead of a student's current point of mastery, frustration results and learning does not (Howard, 1994; Vygotsky, 1962). In addition, we know that motivation to learn increases when we feel a kinship with, interest in, or passion for what we are attempting to learn (Piaget, 1978). Further, we go about learning in a wide variety of ways, influenced by how our individual brains are wired, our culture, and our gender (Delpit, 1995; Gardner, 1983; Heath, 1983; Sternberg, 1985; Sullivan, 1993). In the end, we can draw at least three powerful conclusions about teaching and learning. First, while the image of a “standard issue” student is comfortable, it denies most of what we know about the wide variance that inevitably exists within any group of learners. Second, there is no substitute for high-quality curriculum and instruction in classrooms. Third, even in the presence of high-quality curriculum and instruction, we will fall woefully short of the goal of helping each learner build a good life through the power of education unless we build bridges between the learner and learning. These three conclusions are the engine that drives effective differentiation. They, along with our best knowledge of what makes learning happen, are nonnegotiables in a classroom where a teacher sets out to make each learner a captive of the mystery and power of knowing about the world in which those learners will live out their lives. Mixed-ability classrooms that are ambiguous about learning goals, that evoke little passion, that cast the teacher as the centerpiece of learning, and that lack responsiveness to student variance show little understanding of these various learning realities. They lack the foundation of all powerful learning, top quality curriculum and instruction—as well as a key refinement of superior curriculum and instruction, differentiated or responsive instruction. In regard to the first-named deficit, these classrooms operate as though clarity of understanding can be achieved through ambiguity and that fires of inquiry will be ignited in the absence of a flame. In regard to the latter deficit, they imply that all students need to learn the same things in the same way over the same time span. Ensuring rock solid clarity about where we want students to end up as a result of a sequence of learning is fundamental to educational success. Remembering that we cannot reach the mind we do not engage ought to be a daily compass for educational planning. Offering multiple and varied avenues to learning is a hallmark of the kind of professional quality that denotes expertise. Our students—each of them—is a message that we can never stop attending to the craftsmanship and artistry of teaching. The focus of this book is on the refinement of high-quality, alluring instruction that we call “differentiation.” This book, however, calls for clarity and quality in what we differentiate. 
It is an exercise in futility to try to meet the needs of learners by low quality, incoherent approaches to differentiation. They provide learners with several varieties of gruel. They will fall short for virtually all students. Looking at a Classroom through Many Eyes Their teacher cares about her work. She likes kids and she likes teaching. She works hard and is proud of her profession. The kids know that, and they like her for all those things. But the day seems long too often for many of the students. Sometimes their teacher knows it. Often she does not. Lin does not understand English. No one understands her language either as far as she can tell. The teacher smiles at her and assigned a classmate to help her. That classmate does not speak her language. The classmate smiles too. Sometimes smiles help. Sometimes they seem like music without sound. In math, Lin understands more. Numbers carry fewer hidden meanings than words. No one expects her to understand, however, and so no one asks her to go to the board and work problems. That's okay, because if she went, she wouldn't have words to tell about her numbers. Rafael wants to read aloud, wants to ask for more books about the people in history, wants to add his questions to the ones the other kids ask in discussions. He doesn't. His friends are down on school. They say it's not for them—not for kids like him. Learning belongs to another kind of person, they say. Where would grades get him? they ask. Maybe they're right. He knows he won't go to college or get a big deal job—but he secretly thinks about it. And he wants to know things. But it's hard to ask. Serena reads her mom's books at home. She reads the magazine that comes with the Sunday Times. She and her friends write and produce a neighborhood play every summer. Lots of people come. In school, she's learning 4th grade spelling words. She gets A's on the tests. She gets A's on everything. She doesn't work hard like when she's getting the plays ready. In school, she feels dishonest. She makes up stories in her head while she waits for other students to learn. They try hard and don't get A's. That makes her feel dishonest too. Trevor hates reading. He misbehaves sometimes, but it's not that he wants to. He's just tired of seeming stupid in front of everyone. He thinks he sounds worst in the class when he reads aloud. The odd thing is that he understands what the pages are about when somebody else reads them. How can you understand what you can't read? And how can you be a normal 4th grader and not be able to read? Lesley knows she doesn't learn like the other kids do. She knows people think she's “slow.” She has a special teacher who comes to class to help her, or takes her to a special room to learn things. She likes that teacher. She likes her main teacher too. She doesn't like the fact that having two teachers makes her feel different. She doesn't like the fact that what she studies seems so unlike what everyone else studies. She doesn't like feeling like she's on the edge of the action all the time. Danny likes coming to school because people don't yell there all the time. Nobody hits at school—or if they do, they get in trouble. There are things to play with at school. His teacher smiles. She says she's glad he's there. He's not sure why. He doesn't do well. He wants to, but it's hard to concentrate. He worries about his mom. He worries about his sister. He forgets to listen. At home, it's hard to do homework. He gets behind. 
Theo keeps listening for questions that sound like something a person in his house would ask. He keeps listening for language that sounds like his. He keeps waiting for a signal that the people he studies in school have some connection with him. He keeps waiting to see how the knowledge fits in with his neighborhood. He doesn't mind learning. He just wants to know why. He's restless. Their teacher works hard on preparing their lessons. They know that. Sometimes—many times—it seems like she's teaching lessons, not kids. Sometimes it seems like she thinks they are all one person. Sometimes it's like they are synonyms for test scores. Sometimes school is like a shoe that's shaped for somebody else's foot. Perhaps a good way to begin an exploration of differentiated teaching is to look at the classroom through the eyes of two broad categories of students—those who are advanced and those who struggle. Those two categories, of course, encompass many different sorts of students, but they do at least provide a place to begin thinking about the readiness of academically diverse learners and the range of needs they bring to school. In later chapters we'll look at needs related to student interest and learning profile. Understanding the Needs of Advanced Learners Whatever label we use—“gifted learners,” “high-end learners,” “academically talented learners,” or “advanced learners”—it seems to bother many people. In this book, “advanced learners” is used for two reasons. First, this label doesn't seem to carry some of the more controversial overtones of some other descriptors. Second, it says to the teacher in a mixed-ability classroom, “Don't worry so much about identification processes and formal labeling. Take a look at who is ahead of where you and the curriculum guide expect your students to be. Then you have a place to start.” Some students may be advanced in September and not in May—or in May, but not in September. Some may be advanced in math, but not in reading; or in lab work, but not in memorization of related scientific formulas. Some may be advanced for a short time, others throughout their lives but only in certain endeavors. Some learners are consistently advanced in many areas. Because the primary intent of differentiated instruction is to maximize student capacity, when you can see (or you have a hunch) that a student can learn more deeply, move at a brisker pace, or make more connections than instructional blueprints might suggest, that's a good time to offer advanced learning opportunities. But advanced learners, like other learners, need help in developing their abilities. Without teachers that coach for growth and curriculums that are appropriately challenging, these learners may fail to achieve their potential. For example, when a recent study compared Advanced Placement Exam results of the top 1 percent of U.S. students with top students in 13 other countries, U.S. students scored last in biology, 11th in chemistry, and 9th in physics (Ross, 1993). There are many reasons why advanced learners don't achieve their full potential. - Advanced learners can become mentally lazy, even though they do well in school. We have evidence (Clark, 1992; Ornstein & Thompson, 1984; Wittrock, 1977) that a brain loses capacity and “tone” without vigorous use, in much the same way that a little-used muscle does. If a student produces “success” without effort, potential brainpower can be lost. - Advanced learners may become “hooked” on the trappings of success. 
They may think grades are more important than ideas, being praised is more important than taking intellectual risks, and being right is more valuable than making new discoveries. Unfortunately, many advanced learners quickly learn to do what is “safe” or what “pays,” rather than what could result in greater long-term learning. - Advanced learners may become perfectionists. We praise them for being the best readers, assign them to help others who can't get the math, and compliment them when they score highest on tests. When people get excited about their performance, these students often assume it's possible to keep being the best. Because they attach so much of their self-worth to the rewards of schooling and because those rewards are accessible for years at a time, advanced learners often don't learn to struggle or fail. Failure then becomes something to avoid at all costs. Some advanced learners develop compulsive behaviors, from excessive worry to procrastination to eating disorders, and occasionally even suicide. Many advanced learners simply become less productive and less satisfied. Creative production typically has a high failure-to-success ratio. Students who have the capacity to be producers of new knowledge but who are afraid of failure are unlikely to see their productive capacity realized. - Advanced learners may fail to develop a sense of self-efficacy. Self-esteem is fostered by being told you are important, valued, or successful. Self-efficacy, by contrast, comes from stretching yourself to achieve a goal that you first believed was beyond your reach. Although many advanced learners easily achieve a sort of hollow self-esteem, they never develop a sense of self-efficacy. These students often go through life feeling like impostors, fearfully awaiting the inevitable day the world will discover they aren't so capable after all. - Advanced learners may fail to develop study and coping skills. When students coast through school with only modest effort, they may look successful. In fact, however, success in life typically follows persistence, hard work, and risk. In many cases, advanced learners make good grades without learning to work hard. Then when hard work is required, they become frightened, resentful, or frustrated. In addition, they “succeed” without having to learn to study or grapple with ideas or persist in the face of uncertainty. We graduate many highly able students with “evidence” that success requires minimal effort, and without the skills necessary to achieve when they discover that evidence is invalid. Advanced learners, like all learners, need learning experiences designed to fit them. When teachers are not sensitive to that need, they may set learning goals for advanced students that are too low or that develop new skills too infrequently. Then, if students are successful anyhow, they often fail to develop the desirable balance between running into walls and scaling them. Advanced learners share other learners' need for teachers who can help them set high goals, devise plans for reaching those goals, tolerate frustrations and share joys along the way, and sight new horizons after each accomplishment. Several key principles are useful when coaching advanced learners for growth. - Continually raise the ceilings of expectations so that advanced learners are competing with their own possibilities rather than with a norm. - Make clear what would constitute excellence for the advanced learner so she knows, at least in large measure, what to aim for in her work. 
- As you raise ceilings of expectation, raise the support system available to the student to reach his goals. When tasks are appropriately challenging, you'll find high-end learners need your support and scaffolding to achieve genuine success, just as other learners do. - Be sure to balance rigor and joy in learning. It's difficult to imagine a talented learner persisting when there is little pleasure in what the learner once thought was fascinating. It's also difficult to imagine growth toward expertise when there is all joy and no rigor. Understanding the Needs of Struggling Learners Labels are tricky with struggling learners, too. The term “slow learners” often carries with it a negative connotation of being shiftless or lazy, yet many struggling learners work hard and conscientiously—especially when tasks are neither boring (such as a steady diet of drill and skill) nor anxiety-producing (such as tasks that require more than they can deliver even when they work hard). The term “at-risk” overlooks the portion of the learner that may well be “at-promise.” One child's struggle stems from a learning disability, another's home life takes all her energy, and another just finds a subject his nemesis. Further, just like with an advanced learner, the learning profile of a struggling learner may shift over time; for example, suddenly a student becomes an eager reader after trailing the class in decoding and comprehension for some time. Many students whom we perceive to be “slow,” “at-risk,” or “struggling,” may actually be quite proficient in talents that schools often treat as secondary, such as leadership among neighborhood peers, story telling, or building contraptions out of discarded materials. Nonetheless, many students do struggle with school tasks. They are a diverse group who can challenge the artistry of the most expert teacher in listening deeply, believing unconditionally, and moving beyond a recipe or blueprint approach to teaching to shape classrooms that offer many avenues and timetables to understanding. Here are some principles that can be helpful in ensuring that struggling learners maximize their capacity in school. - Look for the struggling learner's positives. Every student does some things relatively well. It's important to find those things, to affirm them in private conversations and before peers, to design tasks that draw on those strengths, and to ensure that the student can use strengths as a means of tackling areas of difficulty. A student with kinesthetic ability and a weakness in reading, for example, may find it easier to comprehend a story by pantomiming the events in it as someone else reads aloud, and then reading the story to herself. - Don't let what's broken extinguish what works. Few adults elect to spend the majority of their days practicing what they can't do. The difference between us and students is that we have a choice. Struggling learners are more likely to retain motivation to learn when their days allow them to concentrate on tasks that are relevant and make them feel powerful. Many learning-disabled gifted learners, for example, find school intolerable because educators spend so much time “remediating” their flaws that there's no space for enhancing their strengths. It's important to avoid this temptation with struggling learners in general. - Pay attention to relevance. 
It's easy to understand why many struggling learners believe school is not “their place.” They don't “do school” well today, and we keep insisting that persistence will pay off “someday”—often in another grade or level of school in which the child believes he has little prospect for success. Dewey (1938) reminds us that if school isn't for today, it will often turn out to be for nothing. He believed this to be true for all learners. Certainly it is so for many struggling learners. A skilled teacher conscientiously works to make each day's explorations compelling for that day. - Go for powerful learning. If struggling learners can't learn everything, make sure they learn the big ideas, key concepts, and governing principles of the subject at hand. Not only does this approach help struggling learners see the big picture of the topic and subject, but it also helps build a scaffolding of meaning, a requisite framework for future success. - Teach up. Know your struggling students' learning profiles. Create tasks for struggling learners (individuals or groups with similar profiles) that are a chunk more difficult than you believe they can accomplish. Then teach for success (by encouraging, providing support, guiding planning, delineating criteria, and so on.) so that the seemingly unattainable moves within the learners' reach. A strong sense of self-efficacy comes not from being told we're terrific, but rather from our own recognition that we've accomplished something we believed was beyond us. - Use many avenues to learning. Some students learn best with their ears, some with their eyes, some with touch or movement. Some are solitary learners, some must interact with friends in order to learn. Some students work well by gathering details and constructing a bird's-eye view of what is being studied. Others will not learn unless the bird's-eye view is clear to them before they encounter the details. Struggling learners sometimes become more successful learners just because their way of learning is readily accessible through both teacher design and student choice. - See with the eyes of love. Some kids come at the world with their dukes up. Life is a fight for them in part because the belligerence that surrounds them spawns belligerence in them. These kids are no less difficult for a teacher to embrace than for the rest of the world. But behind the tension and combativeness abundant in the world of the angry child, what's lacking is the acceptance and affection he disinvites. Perhaps a good definition of a friend is someone who loves us as we are, and envisions us as we might be. If so, these students need a teacher who is a friend. The eyes of love reflect both unconditional acceptance and unwavering vision of total potential. It's not easy, but it is critical. Here are a few important principles to recall as you plan for success for students who struggle with school. - Be clear on what students must know, understand, and be able to do in order to grow in their grasp of a subject. Teacher fog will only obscure an already difficult view for struggling students. - Set important goals of understanding and use of ideas for struggling students, then figure out how to build scaffolding leading to student success in those goals. Don't dilute the goals. - Work for learning-in-context. In other words, help the student see how ideas and skills are part of their own families and neighborhoods and futures. 
Helping students connect their lives with ideas and skills presupposes that, as teachers, we understand the students' neighborhoods, cultures, and families and what connections are possible. - Plan teaching and learning through many modalities. If a student has heard about an idea, sung about it, built a representation of it, and read about it, success is far more likely than if one avenue to learning predominates. - Continually find ways to let the student know that you believe in him or her—and reinforce legitimate success whenever it happens. If I believe in you, I'll find a way to ensure that you succeed, and will be sure to point out that success to you whenever it is genuine and earned. Differentiating Learning Experiences to Address Academic Diversity Differentiated instruction is not simply giving a “normal” assignment to most students and “different” assignments to students who are struggling or advanced. That approach usually creates a “pecking order” among students, which then tends to cause other troubles. Students assigned a remedial assignment, which looks simple to others, can take it as a message that they are inferior. Advanced assignments tend to look more interesting to nearly everyone except the advanced learner, who may perceive it as more work. These strategies can backfire, causing both advanced and struggling students to feel different from those who do the “real” assignment. In a differentiated classroom, a number of things are going on in any given class period. Over time, all students complete assignments individually and in small groups, and whole-group instruction occurs as well. Sometimes students select their group size and tasks, sometimes they are assigned. Sometimes the teacher establishes criteria for success, sometimes students do. And setting standards for success is often a collaborative process. Because there are many different things happening, no one assignment defines “normal,” and no one “sticks out.” The teacher thinks and plans in terms of “multiple avenues to learning” for varied needs, rather than in terms of “normal” and “different.” The goal for each student is maximum growth from his current “learning position.” The goal of the teacher is coming to understand more and more about that learning position so that learning matches learner need. A Final Thought In the end, all learners need your energy, your heart, and your mind. They have that in common because they are young humans. How they need you, however, differs. Unless we understand and respond to those differences, we fail many learners. Some of us are drawn to teach struggling learners, some are natural champions of advanced learners, and some have an affinity for the sort of “standard” student who matches our image of the 4th or 8th or 11th grader we thought we'd be teaching. That we have preferences is, again, human. The most effective teachers spend a career meticulously cultivating their appreciation for children not so easy for them to automatically embrace, while continuing to draw energy from those students whom they more automatically find delightful.
Historians often refer to key periods in time as "inflection points" -- times when the course of human events began to veer away from one particular direction toward another. The history of space exploration is replete with such turning points: the launch of Sputnik, the first Apollo Moon landing, and the explosion of the Space Shuttle Challenger are among the most well known. Today, NASA's highly successful robotic solar system exploration program, and the Mars exploration program in particular, is on the brink of its own major inflection point. The time has come, from both a scientific and exploration standpoint, for NASA to embark on a robotic mission to bring rock and soil samples back from Mars, but the Agency -- and the administration -- appear to be shying away from the challenge. Will the balance tip toward progress and discovery, or delay and stagnation? The past 15 years have seen an amazing renaissance in our exploration of the Red Planet. A well-coordinated program of four robotic orbiters (three from NASA, one from the European Space Agency (ESA)), two stationary landers, and three mobile rovers has delivered an incredible and steady supply of images, maps and chemical/mineral data about Mars. These missions have enabled astronomers, planetary scientists and astrobiologists to discover that Mars -- though today cold and dry and barren on its surface -- was once much more like the Earth, with active volcanoes and tectonics and flowing surface and subsurface liquid water. The place was, by all reasonable measures, a habitable world. A new rover, dubbed "Curiosity," launched recently and is set for an August 5 landing and a two-Earth-year mission to further investigate the habitability of our planetary neighbor, focusing especially on the possible presence of organic molecules preserved in an ancient, once-watery environment. Another NASA orbiter, called "MAVEN," is set for launch next year to study even more details about the planet's atmosphere and climate. Presidents and politicians talk about "special relationships" between nations. For example, the U.S. has a special relationship with England because of our close-knit shared heritage and culture. Similarly, many planetary scientists feel that the Earth and Mars have a special relationship, because of our close-knit origins and early histories and because Mars is -- today -- still the most Earth-like place in our solar system besides the Earth itself. We see photos of its stark ruddy landscapes and feel a sense of familiarity with the place, despite knowing the reality of it being an incredibly hostile and alien environment for humans. If we imagine turning back the clock, though, 3 or 4 billion years, we can reasonably wonder if life began or thrived in that earlier warmer, wetter environment, as it did on the early Earth. Nowhere else in our solar system can we find a closer, more inviting place than Mars to try to answer the question, "Is (or was) there other life out there?" As we continue to make incredible discoveries about the immensity of the cosmos and our tiny place within it, almost no other question resonates as strongly in the human soul as "Are we alone?" And this is precisely why we are at a critical inflection point. 
Despite the incredible scientific discoveries and enormous public excitement and good will towards NASA that have come from the recent Mars Exploration Program, despite the fact that we are now able to search for and potentially detect the presence of biologic organic molecules (or their remains) in the Martian environment today, and despite the fact that NASA has assembled an incredibly competent and motivated nationwide cadre of thousands of skilled engineers, managers and scientists who know how to land and operate complex robots on and above Mars unlike anyone else in the world -- despite all that -- funding for future Mars missions, including the next step of sample return, is being dramatically cut. Some of our political leaders appear ready to take this exciting and inspirational part of our space exploration program off the rails altogether. This is "penny wise and pound foolish" thinking. The proposed cuts to the Mars program, including NASA's withdrawal from planned missions with ESA for a new orbiter in 2016 and a new rover in 2018 to build on recent discoveries, amount to less than about 2 percent of NASA's annual budget, which is itself less than 0.5 percent of the entire U.S. budget. But that small percentage would end up severely damaging one of the most successful and most publicly supported and admired programs that NASA has. It has been argued that the Mars program can take these cuts because it has been so successful, and because the latest rover mission, Curiosity, and the soon-to-be-launched MAVEN orbiter, have passed or will soon pass their peak spending rates, providing a logical opportunity to scale back funding. Hogwash. Cutting off such a successful, carefully formulated program of missions now would be like cancelling the Apollo landings after finishing only the Mercury and Gemini programs -- close enough, right? U.S. taxpayers have made a significant investment in NASA over the past few decades to build this program of systematic, paradigm-shifting solar system exploration missions. Now is the time to reap the scientific, educational and inspirational profits from that investment, not to turn back and squander the opportunity to answer fundamental questions about our place in the universe. The planetary science community and the National Academy of Sciences recently conducted an extensive survey to identify the highest-priority new robotic missions for the next 10 years. Topping that list of plans for NASA's most ambitious new "flagship" class of missions is the desire to bring back a set of carefully selected Martian rock and soil samples for detailed study in laboratories here on Earth. The ability to use the latest technologies in multiple laboratories on Earth and to selectively bring back sedimentary rock samples that have the best chance to preserve potential past Martian biosignatures, combined with the demonstrated scientific advances made by previous human and robotic sample return missions from the Moon, asteroids, and comets, helped the community arrive at a clear consensus: in order to make the next big advances in our search for life on Mars (as well as our understanding of the origin of life on Earth), Mars sample return must be the next step. Many in the astronaut community also think that bringing samples back with a robotic mission is an essential precursor to an eventual human exploration mission to Mars. 
As currently envisioned, such a Mars sample return campaign would have begun with the 2018 joint NASA-ESA rover to cache some well-selected samples, to be followed by two other missions to launch that cache into Mars orbit (in 2025) and return them to Earth (in 2026). Stable -- flat -- funding for NASA's missions would have allowed this plan to begin. But the proposed 20 percent cut to the planetary science portfolio would make it impossible to carry out. The proposed cuts to NASA's planetary science budget go deeper than just the Mars program and its flagship mission, however. The cuts would nix hopes for missions to other exciting astrobiology hotspots in the outer solar system, too, like Jupiter's moon Europa (which likely harbors the largest water ocean in the solar system), or Saturn's large moon Titan (which has a thick atmosphere of organic molecules and liquid "seas" of methane, ethane and propane). Lunar exploration plans would be scaled back, and smaller-class missions would be delayed. Even basic research and education funding would be scaled back. We're facing the prospects of declining U.S. leadership in the exploration of our solar system, including no flagship-class missions for at least a decade. Imagine what gaps we would have in our textbooks without the likes of Viking, Voyager, Galileo, and Cassini. Flagship missions provide amazing science returns on investment. The last such inflection point for NASA was in the late 1970s/early 1980s, when NASA's science program was sharply curtailed to help fund the Space Shuttle and the nascent Space Station. Grassroots efforts, like those of The Planetary Society (co-founded by Carl Sagan) helped to save NASA's planetary exploration program back then and thus to enable the incredible discoveries in solar system exploration that we have all been fortunate to experience recently. Similar efforts are needed today to help save NASA's science programs from the budget axe, which is why I'm spending time this week, during a critical time in the U.S. budget cycle, to join The Planetary Society and other organizations like the American Astronomical Society's Division for Planetary Sciences, the American Geophysical Union, the Geological Society of America, the Meteoritical Society, the SETI Institute, and others to appeal to our elected representatives on the relevant House and Senate Appropriations Subcommittees in Congress to restore the proposed cuts to NASA's planetary exploration program. If you care about this issue, now is the time to join in this fight as well. Nothing less than the future is at stake. Jim Bell is an astronomer and planetary scientist, a professor in the School of Earth and Space Exploration at Arizona State University in Tempe, and the president of The Planetary Society, the world's largest public space advocacy organization. He is the lead scientist for the Pancam color stereo cameras on the NASA Mars rovers Spirit and Opportunity, is a member of the science camera team on NASA's Curiosity rover, and has authored several space photography books, including Postcards from Mars, Mars 3-D and Moon 3-D.
The Story of Jamestown, written by Eric Braun, is a book that is presented in a format similar to a comic book; for that reason it may be attractive to boys in the classroom. It tells the story of the settlement of Jamestown, from the initial voyage that sailed from England in December, 1606 to the 1698 fire that destroyed the settlement. It tells of their interaction with the Indians and their struggle to survive. This book can be used to support Virginia Studies SOL VS.3a/b, as it helps to explain the reasons for English colonization and how the geography influenced the decision to settle in Jamestown. - This website is an interactive site that allows students to make choices about where and how they would settle if they had been on the first ships to arrive from England. - This National Geographic website offers an animated video of the voyage to Jamestown, and includes additional games the students can play to help reinforce the material they are learning about early Virginia history. - This website is quite interesting. It provides a great deal of information about the Powhatan Village, the English Ships, and the James Fort. Students are given options of tabs to click on to open pages that give detailed information about fort life, gender roles, navigational tools, and many other aspects of this time period. It also has photographs of artifacts and of the re-created settlements. Book: The Story of Jamestown Author: Eric Braun Illustrators: Steve Erwin, Keith Williams, and Charles Barnett III Publisher: Capstone Press Publication Date: 2006 Grade Range: 3rd-5th
Intaglio Printing (Etching)
Intaglio is the form of printing and printmaking techniques in which the image is incised into a surface, and the incised line or sunken area holds the ink. It is the direct opposite of a relief print. Normally, copper or zinc plates are used as a surface or matrix, and the incisions are created by etching, engraving, drypoint, aquatint or mezzotint. Acid is also used to create these grooves that are then filled with ink. The image is created when the ink is retrieved from the recessed areas of the plate. Etching is a technique through which prints are made by inscribing an image onto a metal plate (copper, zinc or steel most commonly used) that is then bitten with acid. The plate is first coated with an acid-resistant waxy substance (called ground) and a sharp tool or etching needle is used to create the image. The plate is then dipped in a bath of acid. The acid bites into the metal where it is exposed, leaving behind lines sunk into the plate. The longer the plate is immersed in the acid, the coarser the exposed lines will be. The plate is then inked all over and then the ink wiped off the surface, leaving behind only the ink in the etched lines. The plate is then put through a high-pressure printing press with a sheet of paper, and the paper picks up the ink from the etched lines, making a print. Copper is a traditional metal, and is still preferred among many printers for etching, as it bites evenly, holds texture well, and does not distort the colour of the ink when wiped. The type of metal used for the plate affects the number of prints the plate will produce. The firm pressure of the printing press slowly rubs out the finer details of the image with every pass through.
One way in which this statement could be said to be true would be related to climate change. Scientists have argued that extreme weather (e.g., hurricanes, flooding, and tornadoes) is intensified by climate change, which has itself been exacerbated by human activity. Another way is that people have developed areas where natural disasters are likely to happen. One example of this is the construction of expensive real estate along barrier islands in the southeastern United States. This area has long been ravaged by storms, and the economic cost of these storms has been made disastrous not so much by a major difference in the storms themselves, but by the fact that over the last fifty years, billions of dollars' worth of real estate has been constructed in their path. Similarly, people in the West have moved into areas often subject to brush and forest fires, thus making what is essentially an eons-old natural phenomenon a natural disaster, and countless millions of people live atop dangerous fault lines. Finally, human activity can create a risk for natural disasters. Through deforestation in many places, we create a risk for more erosion, which can lead to deadly mudslides. By draining marshes, we eliminate a safety valve for floodwater. So humans can intensify, create, or worsen the impact of natural disasters through their decisions.
Hydrogen sulfide is heavier than air and may travel along the ground. It collects in low-lying and enclosed, poorly-ventilated areas such as basements, manholes, sewer lines, underground telephone vaults and manure pits. For work within confined spaces, use appropriate procedures for identifying hazards, monitoring and entering confined spaces. The primary route of exposure is inhalation and the gas is rapidly absorbed by the lungs. Absorption through the skin is minimal. People can smell the "rotten egg" odor of hydrogen sulfide at low concentrations in air. However, with continuous low-level exposure, or at high concentrations, a person loses his/her ability to smell the gas even though it is still present (olfactory fatigue). This can happen very rapidly and at high concentrations, the ability to smell the gas can be lost instantaneously. Therefore, DO NOT rely on your sense of smell to indicate the continuing presence of hydrogen sulfide or to warn of hazardous concentrations. In addition, hydrogen sulfide is a highly flammable gas and gas/air mixtures can be explosive. It may travel to sources of ignition and flash back. If ignited, the gas burns to produce toxic vapors and gases, such as sulfur dioxide. Contact with liquid hydrogen sulfide causes frostbite. If clothing becomes wet with the liquid, avoid ignition sources, remove the clothing and isolate it in a safe area to allow the liquid to evaporate. Hydrogen sulfide is both an irritant and a chemical asphyxiant with effects on both oxygen utilization and the central nervous system. Its health effects can vary depending on the level and duration of exposure. Repeated exposure can result in health effects occurring at levels that were previously tolerated without any effect. Low concentrations irritate the eyes, nose, throat and respiratory system (e.g., burning/ tearing of eyes, cough, shortness of breath). Asthmatics may experience breathing difficulties. The effects can be delayed for several hours, or sometimes several days, when working in low-level concentrations. Repeated or prolonged exposures may cause eye inflammation, headache, fatigue, irritability, insomnia, digestive disturbances and weight loss. Moderate concentrations can cause more severe eye and respiratory irritation (including coughing, difficulty breathing, accumulation of fluid in the lungs), headache, dizziness, nausea, vomiting, staggering and excitability. High concentrations can cause shock, convulsions, inability to breathe, extremely rapid unconsciousness, coma and death. Effects can occur within a few breaths, and possibly a single breath. Before entering areas where hydrogen sulfide may be present: Air must be tested for the presence and concentration of hydrogen sulfide by a qualified person using air monitoring equipment, such as hydrogen sulfide detector tubes or a multi-gas meter that detects the gas. Testing should also determine if fire/ explosion precautions are necessary. A level of H2S gas at or above 100 ppm is Immediately Dangerous to Life and Health (IDLH). Entry into IDLH atmospheres can only be made using: 1) a full facepiece pressure demand self-contained breathing apparatus (SCBA) with a minimum service life of thirty minutes, or 2) a combination full facepiece pressure demand supplied-air respirator with an auxiliary self-contained air supply. If H2S levels are below 100 ppm, an air-purifying respirator may be used, assuming the filter cartridge/canister is appropriate for hydrogen sulfide. 
A full facepiece respirator will prevent eye irritation. If air concentrations are elevated, eye irritation may become a serious issue. If a half-mask respirator is used, tight-fitting goggles must also be used. Workers in areas containing hydrogen sulfide must be monitored for signs of overexposure. NEVER attempt a rescue in an area that may contain hydrogen sulfide without using appropriate respiratory protection and without being trained to perform such a rescue. This is one in a series of informational fact sheets highlighting OSHA programs, policies or standards. It does not impose any new compliance requirements. For a comprehensive list of compliance requirements of OSHA standards or regulations, refer to Title 29 of the Code of Federal Regulations.
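The respirator-selection rule above lends itself to a simple decision sketch. The following Python fragment is a minimal, hypothetical illustration of that logic (SCBA or a combination supplied-air respirator at or above the 100 ppm IDLH level, an air-purifying respirator with a suitable cartridge below it); the function name and wording are invented for illustration and are no substitute for testing by a qualified person with calibrated monitoring equipment under the applicable OSHA requirements.

```python
# Illustrative sketch of the respirator-selection logic described in this
# fact sheet. It is NOT a substitute for a qualified person, calibrated
# air-monitoring equipment, or the governing OSHA requirements.

IDLH_PPM = 100.0  # H2S level Immediately Dangerous to Life and Health (per the text)

def respiratory_protection(h2s_ppm: float) -> str:
    """Return the minimum class of respiratory protection for a measured H2S level."""
    if h2s_ppm < 0:
        raise ValueError("concentration cannot be negative")
    if h2s_ppm >= IDLH_PPM:
        # IDLH atmosphere: only these two options are acceptable
        return ("full facepiece pressure demand SCBA (minimum 30-minute service life), "
                "or combination full facepiece pressure demand supplied-air respirator "
                "with auxiliary self-contained air supply")
    if h2s_ppm > 0:
        return ("air-purifying respirator acceptable, provided the cartridge/canister "
                "is appropriate for hydrogen sulfide (a full facepiece also prevents eye irritation)")
    return "no hydrogen sulfide detected; keep monitoring before and during entry"

print(respiratory_protection(150.0))  # above the 100 ppm IDLH threshold
print(respiratory_protection(20.0))   # below the IDLH threshold
```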
In the Solar System, the region beyond Neptune. It includes more than 70,000 small objects, a number of them up to asteroidal size. It is located from 30 to 50 AU and was discovered in 1992. The Kuiper belt may be the source of the short-period comets (like Halley's comet). The Kuiper belt was named for the Dutch-American astronomer Gerard P. Kuiper, who predicted its existence in 1951. In general, any belt beyond the outermost large planet of a solar system, consisting mainly of icy objects (ice dwarfs and ice planetesimals).
The Structure of the Common Core State Standards for Math (CCSSM)

| Grouping | Standards for Mathematical Practice |
|---|---|
| Overarching habits of the mind of a productive mathematical thinker | Make sense of problems and persevere in solving them; Attend to precision |
| Reasoning and explaining | Reason abstractly and quantitatively; Construct viable arguments and critique the reasoning of others |
| Modeling and using tools | Model with mathematics; Use appropriate tools strategically |
| Seeing structure and generalizing | Look for and make use of structure; Look for and express regularity in repeated reasoning |

The CCSSM call for mathematical practices (MP) and mathematical content (MC) to be connected as students engage in mathematical tasks. These connections are essential to support the development of students’ broader mathematical understanding—students who lack understanding of a topic may rely too heavily on procedures. The MP standards must be taught as carefully and practiced as intentionally as the Standards for Mathematical Content. Neither should be isolated from the other; effective mathematics instruction occurs when the two halves of the CA CCSSM come together as a powerful whole. The higher mathematics standards specify the mathematics that all students should study in order to be college and career ready. In California, the CCSSM for higher mathematics are organized into both model courses and conceptual categories. The model courses for higher mathematics are organized into two pathways: traditional and integrated. The traditional pathway consists of the higher mathematics standards organized along more traditional lines into Algebra I, Geometry, and other higher math courses, such as Advanced Placement Probability and Statistics and Calculus. An integrated pathway is an optional path that presents higher mathematics as a connected subject, in that each course contains standards from all six of the conceptual categories. The standards for higher mathematics are also organized into six conceptual categories:
- Number and Quantity
- Algebra
- Functions
- Modeling
- Geometry
- Statistics and Probability
The conceptual categories portray a coherent view of higher mathematics based on the realization that students’ work on a broad topic, such as functions, crosses a number of traditional course boundaries.
Connecting the Standards for Mathematical Practice to the Standards for Mathematical Content
The Standards for Mathematical Practice describe ways in which developing student practitioners of the discipline of mathematics increasingly ought to engage with the subject matter as they grow in mathematical maturity and expertise throughout the elementary, middle and high school years. Designers of curricula, assessments, and professional development should all attend to the need to connect the mathematical practices to mathematical content in mathematics instruction. The Standards for Mathematical Content are a balanced combination of procedure and understanding. Expectations that begin with the word “understand” are often especially good opportunities to connect the practices to the content. Students who lack understanding of a topic may rely on procedures too heavily. Without a flexible base from which to work, they may be less likely to consider analogous problems, represent problems coherently, justify conclusions, apply the mathematics to practical situations, use technology mindfully to work with the mathematics, explain the mathematics accurately to other students, step back for an overview, or deviate from a known procedure to find a shortcut. 
In short, a lack of understanding effectively prevents a student from engaging in the mathematical practices. In this respect, those content standards which set an expectation of understanding are potential “points of intersection” between the Standards for Mathematical Content and the Standards for Mathematical Practice. These points of intersection are intended to be weighted toward central and generative concepts in the school mathematics curriculum that most merit the time, resources, innovative energies, and focus necessary to qualitatively improve the curriculum, instruction, assessment, professional development, and student achievement in mathematics.
Canada’s Copyright Act has two main goals: to protect creative works and to encourage the creation of new works. The basic rule is that a work protected by copyright cannot be used without the permission of its owner. What is copyright? Copyright literally means the right to copy. But it includes more than the right to copy a work. Rather, it is a bundle of rights that includes the right to produce, copy, sell, publish or perform an original creative work. The copyright owner is the only one who can “use” the work in these ways. No one else can use the work, or a substantial part of it, without the owner’s consent. The law protects four major categories of creative works: - literary works, such as books and computer programs - dramatic works, such as plays, films and choreography - musical works, such as musical compositions with or without words - artistic works, such as paintings, photos, maps, sculptures and architectural plans The law also protects other works, including these: - public performances of, for example, songs, poetry and dance - sound recordings, such as CDs, records and tapes - radio communication signals - some collections of works, such as a book of poems or drawings Ideas, Works and Original Works The law does not protect ideas – it protects the expression of ideas. This is an important distinction. For example, an idea for a novel (for example, a young man who goes to a school for wizards) is just an idea while it is in your mind. But if you express this idea in a book or another form, it can become a work protected by copyright (for example, the Harry Potter novels). To be protected, a work must also be original. This means that it did not exist before the creator created it and is not just a copy of another work. The creator needs to have put her own skill and judgment into it. How long does copyright last? Copyright does not last forever. As a rule, copyright in Canada lasts for the creator’s lifetime plus 50 years. After that, the work can be used freely by anyone. It becomes part of the “public domain.” The Copyright Owner The creator of a work is almost always the first owner of the copyright. But sometimes the creator of the work is not the first owner. For example, when the work is created by an employee as part of her job, the first owner would be the employer. The rights that copyright gives can be classified into economic rights and moral rights. Economic rights include the right to copy, translate, publish and communicate the work to the public. These rights can be used to earn money. Creators can give others permission to use their works. This is called giving a licence for use and the person who has this permission is called the licensee. Creators can decide which economic rights to licence. An example is when a famous artist lets one of his paintings be reproduced on posters. Creators can also assign (sell or give away) their economic rights to another person or organization, who then becomes the new owner. For example, when cartoonists assign their rights in a cartoon to a production studio, that studio can use the cartoon as it wishes…with one exception: the creator’s moral rights (see section below). Economic rights are enjoyed by the copyright owner (whether the original creator or someone who purchased the copyright) or a licensee. In Canada, creators have moral rights. Moral rights give creators the right to the integrity of their work. This means that the work cannot be changed or used in a way that could harm their honour or reputation. 
For example, a court found that putting ribbons around a sculpture as a holiday decoration violated the sculptor’s moral rights. Only the creator can decide how a work can be used, assert that she made it or remain anonymous! Moral rights can never be sold or given away, even if the work’s economic rights have been. But creators can waive or give up their moral rights over a work. Copyright and Intellectual Property The law on intellectual property is not limited to copyrighted works. It also protects many other types of intellectual creations. Here are some examples: - An invention or an improvement to an invention can be protected by a patent. - Words, slogans, shapes and sounds, among other things, can be registered as trademarks. - The visual features (appearance) of some objects can be protected as an industrial design. This article explains in a general way the law that applies in Quebec. This article is not a legal opinion or legal advice. To find out the specific rules for your situation, consult a lawyer or notary.
Hypothermia is a dangerously low body temperature. Hypothermia is often regarded as a cold injury, because it can be caused or made worse by exposure to cold surroundings. Being in an environment that is too cold, having certain disorders, being unable to move, or a combination can cause body temperature to become too low. The person shivers at first and later may become confused and lose awareness. Getting warm and dry can lead to recovery unless the body temperature is very low. If the body temperature is very low, doctors may warm the person with warmed oxygen and heated fluids given intravenously or passed into the bladder, stomach, abdominal cavity, or chest cavity through plastic tubes. Doctors also provide heat to the outside of the body. Hypothermia causes about 600 deaths each year in the United States. Hypothermia also increases the risk of death in people with heart, blood vessel, and nerve disorders. Hypothermia results when the body loses more heat than can be replaced by increasing the amount of heat generated by the body through exercise or by increasing warming from external sources, such as a fire or the sun. Wind increases heat loss, as does sitting or lying on a cold surface or being immersed in water. Sudden immersion in very cold water may cause fatal hypothermia in 5 to 15 minutes. However, a few people, mostly infants and young children, have survived for as long as 1 hour completely submerged in ice water. The shock can shut off all systems, essentially protecting the body (see Effects of submersion in cold water). Hypothermia may also occur after prolonged exposure in only moderately cool water. People at greatest risk are those who are lying immobile in a cold environment—such as people who have had a stroke or a seizure or who are unconscious due to intoxication, those with a low blood sugar (glucose) level, or those with an injury. Because they are not moving, these people generate less heat and also are unable to leave the cold environment. Such people are at risk of becoming hypothermic even when the surrounding temperature may be only as cold as 55 or 60° F (about 13 to 16° C). The very young and the very old are at particular risk. People in these age groups often do not compensate for cold as well as young adults and are dependent on others to anticipate their needs and keep them warm. Very old people may become hypothermic while indoors if they remain immobile in a cold room for hours. Infants lose body heat rapidly and are particularly susceptible to hypothermia. Sometimes a disorder, such as a widespread infection or underactivity of the thyroid gland (hypothyroidism), causes or contributes to hypothermia. Initial symptoms include intense shivering and teeth chattering. As body temperature falls further, movements become slow and clumsy, reaction time slows, and thinking becomes clouded. These symptoms may develop so gradually that people, including companions of the affected person, do not realize what is happening. People may fall, wander off, or simply lie down to rest. When shivering stops, people become more sluggish and slip into a coma. The heart and breathing rates become slower and weaker. If they are very slow, the person may seem to have no signs of life (no heartbeat or attempts to breathe) even though the heart is beating very weakly. Eventually the heart does stop. The lower the body temperature is, the higher the risk of death. Death may occur at body temperatures below 88° F (about 31° C) but is most likely to occur below 83° F (about 28° C). 
Doctors diagnose hypothermia by measuring a body temperature less than 95° F (35° C), typically with a rectal thermometer. Conventional thermometers do not record temperatures below 94° F (about 34° C). Thus, electronic thermometers are needed to measure temperatures in severe hypothermia. Blood and sometimes other tests are done to see whether a disorder such as an infection or hypothyroidism caused hypothermia. If a person has no signs of life, doctors may use cardiac ultrasonography to determine whether the heart is still beating. In the early stages, drying the body, changing into warm, dry clothing, being covered with warm blankets, and drinking hot beverages can bring about recovery. In people who are found unconscious, further heat loss is prevented by wrapping them in a warm, dry blanket and, if possible, removing wet clothing and moving them to a warm place while arrangements are made for immediate transportation to a hospital. Cardiopulmonary resuscitation (CPR) outside of a hospital is not recommended, particularly by bystanders, if there are any signs of life, which may be very difficult to detect. For example, it may be difficult, particularly for untrained people, to detect very faint respirations and heartbeats. Often, even if no pulse can be felt and no heartbeat can be heard, the heart may be beating. Also, a severely hypothermic person must be handled gently, because a sudden jolt may cause an irregular heart rhythm (arrhythmia) that could be fatal. In the hospital, doctors warm the person with warmed oxygen given by inhalation and heated fluids given intravenously or passed into the bladder, stomach, abdominal cavity, or chest cavity through plastic tubes inserted into those areas. In addition, the blood may be warmed through the process of hemodialysis (in which the blood is pumped out of the body, through a filter with a heating attachment, and back into the body) or with a heart-lung machine (which pumps blood out of the body, heats the blood, adds oxygen, and then returns the blood to the body). Doctors may need to help the person breathe by inserting a plastic breathing tube through the mouth into the windpipe (endotracheal intubation) and using mechanical ventilation. If the heart has stopped, CPR is done. Because some people with hypothermia who have arrived at the hospital with no signs of life have recovered, doctors may continue resuscitation efforts until the person is warmed but still shows no heartbeat or other signs of life.
The scene in the small, stifling room is not hard to imagine: the scribe frowning, shifting in his seat as he tries to concentrate on the words of the woman in front of him. A member of one of the wealthiest families in Sippar, the young priestess has summoned him to her room to record a business matter. When she entered the temple, she explains, her parents gave her a valuable inheritance, a huge piece of silver in the shape of a ring, worth the equivalent of 60 months' wages for an estate worker. She has decided to buy land with this silver. Now she needs someone to take down a few details. Obediently, the scribe smooths a wet clay tablet and gets out his stylus. Finally, his work done, he takes the tablet down to the archive. For more than 3,700 years, the tablet languished in obscurity, until late-nineteenth-century collectors unearthed it from Sippar's ruins along the Euphrates River in what is now Iraq. Like similar tablets, it hinted at an ancient and mysterious Near Eastern currency, in the form of silver rings, that started circulating two millennia before the world's first coins were struck. By the time that tablet was inscribed, such rings may have been in use for a thousand years. When did humans first arrive at the concept of money? What conditions spawned it? And how did it affect the ancient societies that created it? Until recently, researchers thought they had the answers. They believed money was born, as coins, along the coasts of the Mediterranean in the seventh or sixth century B.C., a product of the civilization that later gave the world the Parthenon, Plato, and Aristotle. But few see the matter so simply now. With evidence gleaned from such disparate sources as ancient temple paintings, clay tablets, and buried hoards of uncoined metals, researchers have revealed far more ancient money: silver scraps and bits of gold, massive rings and gleaming ingots. In the process, they have pushed the origins of cash far beyond the sunny coasts of the Mediterranean, back to the world's oldest cities in Mesopotamia, the fertile plain created by the Tigris and Euphrates rivers. There, they suggest, wealthy citizens were flaunting money at least as early as 2500 B.C. and perhaps a few hundred years before that. "There's just no way to get around it," says Marvin Powell, a historian at Northern Illinois University in De Kalb. "Silver in Mesopotamia functions like our money today. It's a means of exchange. People use it for a storage of wealth, and they use it for defining value." Many scholars believe money began even earlier. "My sense is that as far back as the written records go in Mesopotamia and Egypt, some form of money is there," observes Jonathan Williams, curator of Roman and Iron Age coins at the British Museum in London. "That suggests it was probably there beforehand, but we can't tell because we don't have any written records." Just why researchers have had such difficulties in uncovering these ancient moneys has much to do with the practice of archeology and the nature of money itself. Archeologists, after all, are the ultimate Dumpster divers: they spend their careers sifting through the trash of the past, ingeniously reconstructing vanished lives from broken pots and dented knives. But like us, ancient Mesopotamians and Phoenicians seldom made the error of tossing out cash, and only rarely did they bury their most precious liquid assets in the ground. Even when archeologists have found buried cash, though, they've had trouble recognizing it for what it was. 
Money doesn't always come in the form of dimes and sawbucks, even today. As a means of payment and a way of storing wealth, it assumes many forms, from debit cards and checks to credit cards and mutual funds. The forms it took in the past have been, to say the least, elusive. From the beginning, money has shaped human society. It greased the wheels of Mesopotamian commerce, spurred the development of mathematics, and helped officials and kings rake in taxes and impose fines. As it evolved in Bronze Age civilizations along the Mediterranean coast, it fostered sea trade, built lucrative cottage industries, and underlay an accumulation of wealth that might have impressed Donald Trump. "If there were never any money, there would never have been prosperity," says Thomas Wyrick, an economist at Southwest Missouri State University in Springfield, who is studying the origins of money and banking. "Money is making all this stuff happen." Ancient texts show that almost from its first recorded appearance in the ancient Near East, money preoccupied estate owners and scribes, water carriers and slaves. In Mesopotamia, as early as 3000 B.C., scribes devised pictographs suitable for recording simple lists of concrete objects, such as grain consignments. Five hundred years later, the pictographs had evolved into a more supple system of writing, a partially syllabic script known as cuneiform that was capable of recording the vernacular: first Sumerian, a language unrelated to any living tongue, and later Akkadian, an ancient Semitic language. Scribes could write down everything from kingly edicts to proverbs, epics to hymns, private family letters to merchants' contracts. In these ancient texts, says Miguel Civil, a lexicographer at the Oriental Institute of the University of Chicago, "they talk about wealth and gold and silver all the time." In all likelihood, says Wyrick, human beings first began contemplating cash just about the time that Mesopotamians were slathering mortar on mud bricks to build the world's first cities. Until then, people across the Near East had worked primarily on small farms, cultivating barley, dates, and wheat, hunting gazelles and other wild game, and bartering among themselves for the things they could not produce. But around 3500 B.C., work parties started hauling stones across the plains and raising huge flat-topped platforms, known as ziggurats, on which to found their temples. Around their bases, they built street upon twisted street of small mud-brick houses. To furnish these new temples and to serve temple officials, many farmers became artisans--stonemasons, silversmiths, tanners, weavers, boatbuilders, furniture makers. And within a few centuries, says Wyrick, the cities became much greater than the sum of their parts. Economic life flourished and grew increasingly complex. "Before, you always had people scattered out on the hillsides," says Wyrick, "and whatever they could produce for their families, that was it. Very little trade occurred because you never had a large concentration of people. But now, in these cities, for the first time ever in one spot, you had lots of different goods, hundreds of goods, and lots of different people trading them." Just how complex life grew in these early metropolises can be glimpsed in the world's oldest accounting records: 8,162 tiny clay tokens excavated from the floors of village houses and city temples across the Near East and studied in detail by Denise Schmandt-Besserat, an archeologist at the University of Texas at Austin. 
The tokens served first as counters and perhaps later as promissory notes given to temple tax collectors before the first writing appeared. By classifying the disparate shapes and markings on the tokens into types and comparing these with the earliest known written symbols, Schmandt-Besserat discovered that each token represented a specified quantity of a particular commodity. And she noticed an intriguing difference between village tokens and city tokens. In the small communities dating from before the rise of cities, Mesopotamians regularly employed just five token types, representing different amounts of three main goods: human labor, grain, and livestock like goats and sheep. But in the cities, they began churning out a multitude of new types, regularly employing 16 in all, with dozens of subcategories representing everything from honey, sheep's milk, and trussed ducks to wool, cloth, rope, garments, mats, beds, perfume, and metals. "It's no longer just farm goods," says Schmandt-Besserat. "There are also finished products, manufactured goods, furniture, bread, and textiles." Faced with this new profusion, says Wyrick, no one would have had an easy time bartering, even for something as simple as a pair of sandals. "If there were a thousand different goods being traded up and down the street, people could set the price in a thousand different ways, because in a barter economy each good is priced in terms of all other goods. So one pair of sandals equals ten dates, equals one quart of wheat, equals two quarts of bitumen, and so on. Which is the best price? It's so complex that people don't know if they are getting a good deal. For the first time in history, we've got a large number of goods. And for the first time, we have so many prices that it overwhelms the human mind. People needed some standard way of stating value." In Mesopotamia, silver--a prized ornamental material--became that standard. Supplies didn't vary much from year to year, so its value remained constant, which made it an ideal measuring rod for calculating the value of other things. Mesopotamians were quick to see the advantage, recording the prices of everything from timber to barley in silver by weight in shekels. (One shekel equaled one-third of an ounce, or just a little more than the weight of three pennies.) A slave, for example, cost between 10 and 20 shekels of silver. A month of a freeman's labor was worth 1 shekel. A quart of barley went for three-hundredths of a shekel. Best of all, silver was portable. "You can't carry a shekel of barley on your ass," comments Marvin Powell (referring to the animal). And with a silver standard, kings could attach a price to infractions of the law. In the codes of the city of Eshnunna, which date to around 2000 B.C., a man who bit another man's nose would be fined 60 shekels of silver; one who slapped another in the face paid 10. How the citizens of Babylon or Ur actually paid their bills, however, depended on who they were. The richest tenth of the population, says Powell, frequently paid in various forms of silver. Some lugged around bags or jars containing bits of the precious metal to be placed one at a time on the pan of a scale until they balanced a small carved stone weight in the other pan. Other members of the upper crust favored a more convenient form of cash: pieces of silver cast in standard weights. These were called har in the tablets, translated as "ring" money. 
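Wyrick's point that barter pricing "overwhelms the human mind" can be made concrete with a bit of arithmetic: with n goods and no common standard, roughly n(n-1)/2 pairwise exchange ratios are needed, whereas a silver standard requires only n prices. The short Python sketch below is a minimal illustration rather than anything from the historical record; the function names, the example good counts, and the midpoint value chosen for a slave are illustrative choices, while the shekel figures and the shekel-to-ounce conversion come from the text above.

```python
# Illustrative arithmetic behind the barter-versus-standard comparison.
# In pure barter every pair of goods needs its own exchange ratio; with a
# single standard of value (silver by weight), each good needs one price.

def barter_ratios(n_goods: int) -> int:
    """Number of pairwise exchange ratios among n goods: n * (n - 1) / 2."""
    return n_goods * (n_goods - 1) // 2

def standard_prices(n_goods: int) -> int:
    """With a common standard of value, each good needs just one price."""
    return n_goods

for n in (5, 16, 100, 1000):
    print(f"{n:>5} goods: {barter_ratios(n):>7} barter ratios vs {standard_prices(n):>5} prices in silver")

# Prices quoted in the text, in shekels of silver (1 shekel = about 1/3 ounce):
prices_in_shekels = {
    "slave": 15,                     # the text gives a 10-20 shekel range; 15 is a midpoint
    "month of a freeman's labor": 1,
    "quart of barley": 0.03,         # three-hundredths of a shekel
}
OUNCES_PER_SHEKEL = 1 / 3
for good, shekels in prices_in_shekels.items():
    print(f"{good}: {shekels} shekels = about {shekels * OUNCES_PER_SHEKEL:.2f} oz of silver")
```

At 1,000 goods the sketch reports roughly half a million distinct barter ratios against just 1,000 silver prices, which is the scale of simplification the article attributes to adopting a single standard of value.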
At the Oriental Institute in the early 1970s, Powell studied nearly 100 silver coils--some resembling bedsprings, others slender wire coils--found primarily in the Mesopotamian city of Khafaje. They were not exactly rings, it was true, but they matched other fleeting descriptions of har. According to the scribes, ring money ranged from 1 to 60 shekels in weight. Some pieces were cast in special molds. At the Oriental Institute, the nine largest coils all bore a triangular ridge, as if they had been cast and then rolled into spirals while still pliable. The largest coils weighed almost exactly 60 shekels, the smallest from one-twelfth to two and a half shekels. "It's clear that the coils were intended to represent some easily recognizable form of Babylonian stored value," says Powell. "In other words, it's the forerunner of coinage." The masses in Mesopotamia, however, seldom dealt in such money. It was simply too precious, much as a gold coin would have been for a Kansas dirt farmer in the middle of the Great Depression. To pay their bills, water carriers, estate workers, fishers, and farmers relied on more modest forms of money: copper, tin, lead, and above all, barley. "It's the cheap commodity money," says Powell. "I think barley functions in ancient Mesopotamia like small change in later systems, like the bronze currencies in the Hellenistic period. And essentially that avoids the problem of your being cheated. You measure barley out and it's not as dangerous a thing to try to exchange as silver, given weighing errors. If you lose a little bit, it's not going to make that much difference." Measurable commodity money such as silver and barley both simplified and complicated daily life. No longer did temple officials have to sweat over how to collect a one-sixth tax increase on a farmer who had paid one ox the previous year. Compound interest on loans was now a breeze to calculate. Shekels of silver, after all, lent themselves perfectly to intricate mathematical manipulation; one historian has suggested that Mesopotamian scribes first arrived at logarithms and exponential values from their calculations of compound interest. "People were constantly falling into debt," says Powell. "We find reference to this in letters where people are writing to one another about someone in the household who has been seized for securing a debt." To remedy these disastrous financial affairs, King Hammurabi decreed in the eighteenth century B.C. that none of his subjects could be enslaved for more than three years for failing to repay a debt. Other Mesopotamian rulers, alarmed at the financial chaos in the cities, tried legislating moratoriums on all outstanding bills. While the cities of Mesopotamia were the first to conceive of money, others in the ancient Near East soon took up the torch. As civilization after civilization rose to glory along the coasts of the eastern Mediterranean, from Egypt to Syria, their citizens began abandoning the old ways of pure barter. Adopting local standards of value, often silver by weight, they began buying and selling with their own local versions of commodity moneys: linen, perfume, wine, olive oil, wheat, barley, precious metals--things that could be easily divided into smaller portions and that resisted decay. And as commerce became smoother in the ancient world, people became increasingly selective about what they accepted as money, says Wyrick. "Of all the different media of exchange, one commodity finally broke out of the pack. 
It began to get more popular than the others, and I think the merchants probably said to themselves, 'Hey, this is great. Half my customers have this form of money. I'm going to start demanding it.' And the customers were happy, too, because there's more than just one merchant coming around, and they didn't know what to hold on to, because each merchant was different. If everyone asked for barley or everyone asked for silver, that would be very convenient. So as one of these media of exchange becomes more popular, everyone just rushes toward that." What most ancient Near Easterners rushed toward around 1500 B.C. was silver. In the Old Testament, for example, rulers of the Philistines, a seafaring people who settled on the Palestine coast in the twelfth century B.C., each offer Delilah 1,100 pieces of silver for her treachery in betraying the secret of Samson's immense strength. And in a well-known Egyptian tale from the eleventh century B.C., the wandering hero Wen-Amon journeys to Lebanon to buy lumber to build a barge. As payment, he carries jars and sacks of gold and silver, each weighed in the traditional Egyptian measure, the deben. (One deben equals 3 ounces.) Whether these stories are based on history or myth, they reflect the commercial transactions of their time. To expedite commerce, Mediterranean metalsmiths also devised ways of conveniently packaging money. Coils and rings seem to have caught on in some parts of Egypt: a mural painted during the fourteenth century B.C. in the royal city of Thebes depicts a man weighing a stack of doughnut-size golden rings. Elsewhere, metalsmiths cast cash in other forms. In the Egyptian city of el-Amarna, built and briefly occupied during the fourteenth century B.C., archeologists stumbled upon what they fondly referred to as a crock of gold. Inside, among bits of gold and silver, were several slender rod-shaped ingots of gold and silver. When researchers weighed them, they discovered that some were in multiples or fractions of the Egyptian deben, suggesting different denominations of an ancient currency. All these developments, says Wyrick, transformed Mediterranean life. Before, in the days of pure barter, people produced a little bit of everything themselves, eking out a subsistence. But with the emergence of money along the eastern Mediterranean, people in remote coastal communities found themselves in a new and enviable position. For the first time, they could trade easily with Phoenician or Syrian merchants stopping at their harbors. They no longer had to be self-sufficient. "They could specialize in producing one thing," says Wyrick. "Someone could just graze cattle. Or they could mine gold or silver. And when you specialize, you become more productive. And then more and more goods start coming your way." The wealth spun by such specialization and trade became the stuff of legend. It armed the fierce Mycenaean warriors of Greece in bronze cuirasses and chariots and won them victories. It outfitted the tomb of Tutankhamen, sending his soul in grandeur to the next world. And it filled the palace of Solomon with such magnificence that even the Queen of Sheba was left breathless. But the rings, ingots, and scraps of gold and silver that circulated as money in the eastern Mediterranean were still a far cry from today's money. They lacked a key ingredient of modern cash--a visible guarantee of authenticity. Without such a warranty, many people would never willingly accept them at their face value from a stranger. 
The lumps of precious metal might be a shade short of a shekel, for example. Or they might not be pure gold or silver at all, but some cheaper alloy. Confidence, suggests Miriam Balmuth, an archeologist at Tufts University in Medford, Massachusetts, could be won only if someone reputable certified that a coin was both the promised weight and composition. Balmuth has been trying to trace the origins of this certification. In the ancient Near East, she notes, authority figures--perhaps kings or merchants--attempted to certify money by permitting their names or seals to be inscribed on the official carved stone weights used with scales. That way Mesopotamians would know that at least the weights themselves were the genuine article. But such measures were not enough to deter cheats. Indeed, so prevalent was fraud in the ancient world that no fewer than eight passages in the Old Testament forbid the faithful from tampering with scales or substituting heavier stone weights when measuring out money. Clearly, better antifraud devices were needed. Under the ruins of the old city of Dor along northern Israel's coast, a team of archeologists found one such early attempt. Ephraim Stern of Hebrew University and his colleagues found a clay jug filled with nearly 22 pounds of silver, mainly pieces of scrap, buried in a section of the city dating from roughly 3,000 years ago. But more fascinating than the contents, says Balmuth, who recently studied this hoard, was the way they had been packaged. The scraps were divided into separate piles. Someone had wrapped each pile in fabric and then attached a bulla, a clay tab imprinted with an official seal. "I have since read that these bullae lasted for centuries," says Balmuth, "and were used to mark jars--or in this case things wrapped in fabric--that were sealed. That was a way of signing something." All that remained was to impress the design of a seal directly on small rounded pieces of metal--which is precisely what happened by around 600 B.C. in an obscure Turkish kingdom by the sea. There traders and perfume makers known as the Lydians struck the world's first coins. They used electrum, a natural alloy of gold and silver panned from local riverbeds. (Coincidentally, Chinese kings minted their first money at roughly the same time: tiny bronze pieces shaped like knives and spades, bearing inscriptions revealing places of origin or weight. Circular coins in China came later.) First unearthed by archeologists early this century in the ruins of the Temple of Artemis in Ephesus, one of the Seven Wonders of the ancient world, the Lydian coins bore the essential hallmarks of modern coinage. Made of small, precisely measured pieces of precious metal, they were stamped with the figures of lions and other mighty beasts--the seal designs, it seems, of prominent Lydians. And such wealth did they bring one Lydian king, Croesus, that his name became a byword for prosperity. Struck in denominations as small as .006 ounce of electrum--one-fifteenth the weight of a penny--Lydia's coinage could be used by people in various walks of life. The idea soon caught on in the neighboring Greek city-states. Within a few decades, rulers across Greece began churning out beautiful coins of varied denominations in unalloyed gold and silver, stamped with the faces of their gods and goddesses. These new Greek coins became fundamental building blocks for European civilization. 
With such small change jingling in their purses, Greek merchants plied the western Mediterranean, buying all that was rare and beautiful from coastal dwellers, leaving behind Greek colonies from Sicily to Spain and spreading their ideas of art, government, politics, and philosophy. By the fourth century B.C., Alexander the Great was acquiring huge amounts of gold and silver through his conquests and issuing coins bearing his image far and wide, which Wyrick calls "ads for empire building." Indeed, says Wyrick, the small change in our pockets literally made the Western world what it is today. "I tell my students that if money had never developed, we would all still be bartering. We would have been stuck with that. Money opened the door to trade, which opened the door for specialization. And that made possible a modern society."
Significant contributions to decision theory were made by Chester I. Barnard, a successful business executive in the telephone industry. Barnard introduced the concept of the social environment as a constraint influencing the final outcome of a decision. He stated that nature, legal regulations, social responsiveness, competitors, and customer needs are all environmental constraints that affect the final outcome of a decision. To Barnard, the decision maker must have an understanding of the environment so that the effects of alternative solutions can be predicted. Environmental factors, he felt, become more distinct and exacting as objectives are redefined and made more explicit. He noted, as well, that objectives have no meaning except in an environment with a set of restrictions or limitations. The previous two sections of this chapter should dispel the belief that managers rationally proceed step by step to define a problem, identify a wide range of alternatives with their consequences and risks, collect and analyze all data thoroughly, and select a specific alternative that is totally acceptable to all persons involved. Along with the problems of rationality and environmental constraints, the role of human behavior also restricts the decision-making process. This is particularly significant when we recognize that actions by dominant individuals or coalitions influence organizational decisions through negotiation and compromise. Seldom does a manager make a decision that is supported by all organizational units. Some decisions will be bitterly opposed, some well supported, while others will fall within a zone of comparative indifference. Thus, from a behavioral viewpoint, managers must be sensitive to the power structures within an organization in order to predict which decision will gain the greatest support in the pursuit of goals. Moreover, goal achievement depends upon how well managers influence the decisions of those personnel who are involved in performing tasks and implementing programs at lower levels of authority. Some of the behavioral elements that influence the decision process are given below:
- Group (peer) pressures, both supportive and non-supportive, that are exerted on the decision maker.
- The decision maker's stated position within various groups (managerial and non-managerial).
- Personal attitudes about what and how others will think if a particular decision is made.
- Feelings about the expectations of others and the weight of public opinion.
- Ability to use the reactions of key individuals and groups to assess the impact of a decision on the organization.
- The stress to which the decision maker is exposed.
Personal knowledge and skill can also affect a decision, especially the amount of knowledge one has about a problem and its alternatives. Clearly, the manager must have the ability and resources if a decisional problem is to be dealt with meaningfully. For instance, if a manager knows very little about production but is very knowledgeable about computers, a decision concerning capital expenditures might be to purchase a new computer. A decision is also influenced by the personal values of the decision maker at the time each alternative action is being considered. Thus, the motivation to develop a strong work force may be sufficient to lead the manager to see the importance of creating a personnel department. However, this would not be likely unless it was felt that the department would better fulfill the company's overall objectives than some other available alternative. 
Unfortunately, in many organizations, managers perceive that problems of this type exist but are not sufficiently motivated to try to solve them.
Module 3 - Introduction to Circuit Protection, Control, and Measurement Rectifier for AC Measurement Figure 1-12. - Rectifier action. A rectifier is a device that changes alternating current to a form of direct current. The way in which this is done will be covered later in this training series. For now, it is necessary to know only the information presented in figure 1-12. Figure 1-12 shows that an alternating current passed through a rectifier will come out as a "pulsating direct current." Figure 1-13. - Compass and conductor; rectified AC. What happens to the compass now? Figure 1-13 answers that question. When the compass is placed close to the wire and the frequency of the alternating current is high enough, the compass will vibrate around a point that represents the average value of the pulsating direct current, as shown in figure 1-13. Q10. How would a compass react when placed close to a conductor carrying alternating current at a low frequency? Q11. How would the compass react if the alternating current through the conductor was a high frequency? Q12. What is the purpose of a rectifier in a meter? By connecting a rectifier to a d'Arsonval meter movement, an alternating current measuring device is created. When ac is converted to pulsating dc, the d'Arsonval movement will react to the average value of the pulsating dc (which is the average value of one-half of the sine wave). Another characteristic of using a rectifier concerns the fact that the d'Arsonval meter movement is capable of indicating current in only one direction. If the d'Arsonval meter movement were used to indicate alternating current without a rectifier, or direct current of the wrong polarity, the movement would be severely damaged. The pulsating dc is current in a single direction, and so the d'Arsonval meter movement can be used as long as proper polarity is observed. A problem that is created by the use of a rectifier and d'Arsonval meter movement is that the pointer will vibrate (oscillate) around the average value indication. This oscillation will make the meter difficult to read. The process of "smoothing out" the oscillation of the pointer is known as DAMPING. There are two basic techniques used to damp the pointer of a d'Arsonval meter movement. The first method of damping comes from the d'Arsonval meter movement itself. In the d'Arsonval meter movement, current through the coil causes the coil to move in the magnetic field of the permanent magnet. This movement of the coil (conductor) through a magnetic field causes a current to be induced in the coil opposite to the current that caused the movement of the coil. This induced current will act to damp oscillations. In addition to this method of damping, which comes from the movement itself, most meters use a second method of damping. Figure 1-14. - A typical meter damping system. The second method of damping used in most meter movements is an airtight chamber containing a vane (like a windmill vane) attached to the coil (fig. 1-14). As the coil moves, the vane moves within the airtight chamber. The action of the vane against the air in the chamber opposes the coil movement and damps the oscillations of the pointer. Q13. How can a d'Arsonval meter movement be adapted for use as an ac meter? Q14. What is damping? Q15. What are two methods used to damp a meter movement? Q16. What value does a meter movement react to (actually measure) when measuring ac? Q17. What value is indicated on the scale of an ac meter? 
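To make the idea of "pulsating direct current" and its average value concrete, here is a minimal numerical sketch (not part of the NEETS module itself). The 1-ampere peak, the single normalized cycle, and the use of an ideal full-wave rectifier are assumptions chosen only for illustration.

```python
import numpy as np

# Illustration only: what the "pulsating direct current" out of an ideal
# full-wave rectifier looks like numerically, and the average value the
# compass (or a d'Arsonval movement) settles around. The 1 A peak and the
# single normalized cycle are assumed values, not figures from the module.
peak = 1.0                                          # peak current in amperes (assumed)
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)   # one cycle of normalized time
ac = peak * np.sin(2 * np.pi * t)                   # alternating current
pulsating_dc = np.abs(ac)                           # ideal full-wave rectifier output

average = pulsating_dc.mean()                       # point the pointer vibrates around
print(f"average of the pulsating dc: {average:.4f} A")   # about 0.6366 A (2/pi of the peak)
```

Running the sketch shows that every sample of the rectified waveform is positive (current in a single direction), while its average sits well below the peak, which is the value the movement indicates.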
An additional advantage of damping a meter movement is that the damping systems will act to slow down the coil and help keep the pointer from overshooting its rest position when the current through the meter is removed. Indicating Alternating Current Another problem encountered in measuring ac is that the meter movement reacts to the average value of the ac. The value used when working with ac is the effective value (rms value). Therefore, a different scale is used on an ac meter. The scale is marked with the effective value, even though it is the average value to which the meter is reacting. That is why an ac meter will give an incorrect reading if used to measure dc. Other Meter Movements The d'Arsonval meter movement (permanent-magnet moving-coil) is only one type of meter movement. Other types of meter movements can be used for either ac or dc measurement without the use of a rectifier. When galvanometers were mentioned earlier in this topic, it was stated that they could be either electromagnetic or electrodynamic. Electrodynamic meter movements will be discussed at this point. Electrodynamic Meter Movement Figure 1-15. - Electrodynamic meter movement. An electrodynamic movement uses the same basic operating principle as the basic moving-coil meter movement, except that the permanent magnet is replaced by fixed coils (fig. 1-15). A moving coil, to which the meter pointer is attached, is suspended between two field coils and connected in series with these coils. The three coils (two field coils and the moving coil) are connected in series across the meter terminals so that the same current flows through each. Current flow in either direction through the three coils causes a magnetic field to exist between the field coils. The current in the moving coil causes it to act as a magnet and exert a turning force against a spring. If the current is reversed, the field polarity and the polarity of the moving coil reverse at the same time, and the turning force continues in the original direction. Since reversing the current direction does not reverse the turning force, this type of meter can be used to measure both ac and dc if the scale is changed. While some voltmeters and ammeters use the electrodynamic principle of operation, the most important application is in the wattmeter. The wattmeter, along with the voltmeter and the ammeter, will be discussed later in this topic. MOVING-VANE METER MOVEMENTS The moving-vane meter movement (sometimes called the moving-iron movement) is the most commonly used movement for ac meters. The moving-vane meter operates on the principle of magnetic repulsion between like poles (fig. 1-16). The current to be measured flows through a coil, producing a magnetic field which is proportional to the strength of the current. Suspended in this field are two iron vanes. One is in a fixed position; the other, attached to the meter pointer, is movable. The magnetic field magnetizes these iron vanes with the same polarity regardless of the direction of current flow in the coil. Since like poles repel, the movable vane pulls away from the fixed vane, moving the meter pointer. This motion exerts a turning force against the spring. The distance the vane will move against the force of the spring depends on the strength of the magnetic field, which in turn depends on the amount of current flowing through the coil. Figure 1-16. - Moving-vane meter movement. Figure 1-17. - Hot-wire meter movement. Figure 1-18. - A thermocouple meter. These meters are generally used at 60-hertz ac, but may be used at other ac frequencies. 
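The relationship between the average value the movement responds to and the rms value marked on the scale can be checked numerically. The following is a minimal sketch, assuming an ideal full-wave rectified sine wave; the factor of roughly 1.11 it prints is the standard form factor an ac scale effectively builds in, and it is not a figure quoted in the module itself.

```python
import numpy as np

# Illustration only: the movement reacts to the average of the rectified
# waveform, but the scale is marked in rms (effective) values. For a sine
# wave the two differ by the "form factor" of about 1.11.
peak = 1.0
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
ac = peak * np.sin(2 * np.pi * t)

avg_rectified = np.abs(ac).mean()        # what the movement responds to (~0.637 * peak)
rms = np.sqrt(np.mean(ac ** 2))          # what the scale is marked in (~0.707 * peak)

print(f"average of rectified ac: {avg_rectified:.4f}")
print(f"rms (effective) value:   {rms:.4f}")
print(f"form factor rms/average: {rms / avg_rectified:.4f}")   # ~1.1107 for a sine wave
```

This also shows why a dc reading taken on an ac scale is off: the scale silently multiplies the average response by a sine-wave form factor that does not apply to steady dc.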
By changing the meter scale to indicate dc values rather than ac rms values, moving-vane meters will measure dc current and dc voltage. This is not recommended due to the residual magnetism left in the vanes, which will result in an error in the instrument. One of the major disadvantages of this type of meter movement occurs due to the high reluctance of the magnetic circuit. This causes the meter to require much more power than the d'Arsonval meter to produce a full-scale deflection, thereby reducing the meter's sensitivity. HOT-WIRE AND THERMOCOUPLE METER MOVEMENTS Hot-wire and thermocouple meter movements both use the heating effect of current flowing through a resistance to cause meter deflection. Each uses this effect in a different manner. Since their operation depends only on the heating effect of current flow, they may be used to measure both direct current and alternating current of any frequency on a single scale. The hot-wire meter movement deflection depends on the expansion of a high-resistance wire caused by the heating effect of the wire itself as current flows through it. (See fig. 1-17.) A resistance wire is stretched taut between the two meter terminals, with a thread attached at a right angle to the center of the wire. A spring connected to the opposite end of the thread exerts a constant tension on the resistance wire. Current flow heats the wire, causing it to expand. This motion is transferred to the meter pointer through the thread and a pivot. The thermocouple meter consists of a resistance wire across the meter terminals, which heats in proportion to the amount of current. (See fig. 1-18.) Attached to this wire is a small thermocouple junction of two unlike metal wires, which connect across a very sensitive dc meter movement (usually a d'Arsonval meter movement). As the current being measured heats the heating resistor, a small current (through the thermocouple wires and the meter movement) is generated by the thermocouple junction. The current being measured flows through only the resistance wire, not through the meter movement itself. The pointer turns in proportion to the amount of heat generated by the resistance wire. Q18. List three meter movements that can measure either ac or dc without the use of a rectifier. Q19. What electrical property is used by all the meter movements discussed so far? An ammeter is a device that measures current. Since all meter movements have resistance, a resistor will be used to represent a meter in the following explanations. Direct current circuits will be used for simplicity of explanation. Ammeter Connected in Series Figure 1-19. - A series and a parallel circuit. In figure 1-19(A), R1 and R2 are in series. The total circuit resistance is R1 + R2, and total circuit current flows through both resistors. In figure 1-19(B), R1 and R2 are in parallel. The total circuit resistance is the product over the sum, (R1 x R2) / (R1 + R2), and the total circuit current does not flow through either resistor alone. If R1 represents an ammeter, the only way in which total circuit current will flow through the meter (and thus be measured) is to have the meter (R1) in series with the circuit load (R2), as shown in figure 1-19(A). In complex electrical circuits, you are not always concerned with total circuit current. You may be interested in the current through a particular component or group of components. In any case, an ammeter is always connected in series with the circuit you wish to test. 
Figure 1-20 shows various circuit arrangements with the ammeter(s) properly connected for measuring current in various portions of the circuit. Connecting an ammeter in parallel would give you not only an incorrect measurement, it would also damage the ammeter, because too much current would pass through the meter. Figure 1-20. - Proper ammeter connections. Figure 1-21. - Current in a parallel circuit. Effect on Circuit Being Measured The meter affects the circuit resistance and the circuit current. If R1 is removed from the circuit in figure 1-19(A), the total circuit resistance is R2 and the circuit current is E/R2. With the meter (R1) in the circuit, the circuit resistance is R1 + R2 and the circuit current falls to E/(R1 + R2). The smaller the resistance of the meter (R1), the less it will affect the circuit being measured. (R1 represents the total resistance of the meter, not just the resistance of the meter movement.) Ammeter sensitivity is the amount of current necessary to cause full-scale deflection (maximum reading) of the ammeter. The smaller the amount of current, the more "sensitive" the ammeter. For example, an ammeter with a maximum current reading of 1 milliampere would have a sensitivity of 1 milliampere, and be more sensitive than an ammeter with a maximum reading of 1 ampere and a sensitivity of 1 ampere. Sensitivity can be given for a meter movement, but the term "ammeter sensitivity" usually refers to the entire ammeter and not just the meter movement. An ammeter consists of more than just the meter movement. If you have a meter movement with a sensitivity of 1 milliampere, you can connect it in series with a circuit and measure currents up to 1 milliampere. But what do you do to measure currents over 1 milliampere? In figure 1-21(B), the voltage is increased to 100 volts; in figure 1-21(C), the voltage is reduced from 100 volts to 50 volts. In each case, notice that the relationship (ratio) of IR1 and IR2 remains the same: IR2 is nine times greater than IR1, and IR1 is one-tenth of the total circuit current. If R1 is replaced by a meter movement that has 10 ohms of resistance and a sensitivity of 10 amperes, the reading of the meter will represent one-tenth of the current in the circuit and R2 will carry nine-tenths of the current. R2 is called a shunt resistor because it diverts, or shunts, a portion of the current from the meter movement (R1). By this method, a 10-ampere meter movement will measure current up to 100 amperes. By adding a second scale to the face of the meter, the current can be read directly. By adding several shunt resistors in the meter case, with a switch to select the desired resistor, the ammeter will be capable of measuring several different maximum current readings or ranges. Most meter movements in use today have sensitivities of from 5 microamperes to 1 milliampere. Figure 1-22 shows the circuit of an ammeter that uses a meter movement with a sensitivity of 100 microamperes and shunt resistors. This ammeter has five ranges (100 microamperes; 1, 10, and 100 milliamperes; 1 ampere) selected by a switch. 
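A quick way to see how the shunt values are chosen is to note that the shunt sits in parallel with the movement, so it must drop the same voltage while carrying the rest of the current. The sketch below sizes shunts for the five-range, 100-microampere ammeter just described; the movement's internal resistance of 1,000 ohms is an assumed value used only to make the arithmetic concrete, not a figure from the module.

```python
# Illustration only: sizing shunt resistors for the five-range ammeter
# described above. The 100-microampere sensitivity and the ranges come from
# the text; the 1,000-ohm internal resistance of the movement is assumed.

I_MOVEMENT = 100e-6      # full-scale current of the movement, in amperes
R_MOVEMENT = 1_000.0     # internal resistance of the movement, in ohms (assumed)

def shunt_resistance(range_full_scale: float) -> float:
    """Shunt needed so only I_MOVEMENT flows through the movement at full scale."""
    i_shunt = range_full_scale - I_MOVEMENT   # current the shunt must divert
    v_full_scale = I_MOVEMENT * R_MOVEMENT    # voltage across the movement (and the shunt)
    return v_full_scale / i_shunt

# The 100-microampere range uses the bare movement; the higher ranges need shunts.
for rng in (1e-3, 10e-3, 100e-3, 1.0):        # 1 mA, 10 mA, 100 mA, 1 A ranges
    print(f"{rng:8.3f} A range -> shunt resistor of {shunt_resistance(rng):8.3f} ohms")
```

The output shows why the shunt resistance shrinks as the range grows: at 1 ampere full scale the shunt must carry almost all of the current, so only a fraction of an ohm is needed.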
What is STEM? STEM stands for science, technology, engineering, and mathematics. STEM is important because it pervades every part of our lives. Science is everywhere in the world around us. Technology is continuously expanding into every aspect of our lives. Engineering covers the basic design of roads and bridges, but it also tackles the challenges of changing global weather and of making environmentally friendly changes to our homes. Mathematics is in every occupation and in every activity we do in our lives. By exposing students to STEM and giving them opportunities to explore STEM-related concepts, we help them develop a passion for it and, hopefully, pursue a job in a STEM field. A curriculum that is STEM-based uses real-life situations to help the student learn. Programs like Engineering For Kids integrate multiple classes to provide opportunities to see how concepts relate to life, in order to spark a passion for a future career in a STEM field. STEM activities provide hands-on and minds-on lessons for the student. Making math and science both fun and interesting helps the student to do much more than just learn. “In the 21st century, scientific and technological innovations have become increasingly important as we face the benefits and challenges of both globalization and a knowledge-based economy. To succeed in this new information-based and highly technological society, students need to develop their capabilities in STEM to levels much beyond what was considered acceptable in the past.” STEAM education incorporates the “A” for the arts, recognizing that to be successful in technical fields, individuals must also be creative and use critical thinking skills, which are best developed through exposure to the arts.
Proteins are a large category of organic compounds formed from the elements
- Carbon (C),
- Hydrogen (H),
- Oxygen (O),
- and, in some cases, also Sulphur (S) and Phosphorus (P).
There are many different protein molecules - all have complex structures formed by one or more polypeptide chains of linked amino acids. Why are proteins important?
- Protein is one of the three main parts of the human diet - the others being fat and carbohydrates.
- Proteins are essential as chemical "building-blocks" within the body because they form the material structures of many tissues, muscles and organs.
- Proteins are also important because of their roles in regulating bodily functions, enzymes and hormones.
More about the Digestive System: This section includes pages about:
- Introduction to the Digestive System
- Terminology about Digestion
- Passage through the alimentary tract
- Component Parts of the Digestive System, incl. Teeth, Stomach, Liver, Small Intestine, Large Intestine
- Chemical Processes in the Digestive System (introductory level)
- Diseases and Disorders of the Digestive System
- the study or collection of coins, banknotes, and medals. - a rapidly evolving branch of Archaeology, studies the origin and development of ancient coins and monetary systems, throwing light on how people lived long ago. Go cashless, we’re told. But, this is about when the world was “cashless”. When the disadvantages of the barter system were realized and a new common material was fixed as a medium for the exchange of goods, metal was preferred to others. Sanskrit and Prakrit literature gives us an insight into the coinage systems used in ancient India. Literary references mention various names of coins such as shatamana, padas, nishka, hiranya panda, karshapana and others. The oldest coins we found The actual evidence of the earliest coins in India starts appearing from c. 6th Century B.C. This is also attested by the Buddhist Jataka tales. These silver and copper coins of various sizes, found all over the country, are known as the Punch Marked Coins (as the technique of their manufacturing was punching). Legend and portraits appeared on coins in c. 2nd Century B.C., issued by the Indo-Greek kings. These kings were actually the descendants of the provincial governors whom Alexander had appointed to look after the conquered territory in India. They became independent after his death and initially ruled over Bactria (parts of modern Pakistan and Afghanistan) then later on over the North Western parts of India. On the obverse of these coins the legends are generally in Greek script and Greek language. On the reverse the same legends are given sometimes in Brahmi or sometimes in Kharoshthi. Such coins using two scripts and two languages to convey the same information have provided a major clue for the decipherment of the two forgotten ancient scripts of Brahmi and Kharoshthi. All that glitters might be gold Gold was used by many rulers for national and international trade and usually silver, copper and bronze were used at the regional and local levels. Along with these metals, alloys of some metals were also used to mint coins. These findings throw light on development in the fields of metallurgy and technology in ancient India. The coins issued by the Satavahanas (c. 1st CenturyB.C. to c. 3rd Century A.D.) and Western Kshatrapas (c. 1st Century A.D. to c. 4th Century A.D.) in Maharashtra and Gujarat attest to their long rules over their respective regions. This was a major breakthrough for historians while writing the history of these two important dynasties. Why are these coins important to people who aren’t pirates? Symbols, pictures and portraits on the coins have been used to a great extent to understand the socio-religious conditions in ancient India. For example, the Kushanas (who ruled over the northern parts of India from about 1st Century A.D. to c. 3rd Century A.D) were the first rulers in the history of India who issued gold coins. It was during the reign of Wima Kadphises that gold coins were minted. His devotion to Shiva is evident by the fact that the reverse side of his coins bear the portrait of Shiva. This is the earliest depiction of Shiva in human form with four hands. And though the Kushanas issued gold coinage on an extensive scale, the most beautiful issues from the artistic point of view, were brought out by the Gupta emperors, who ruled over a major part of India from c. 4th Century A.D. to c. 6th Century A.D. These coins are found on a large scale all over North India. 
On one side of these coins, the kings are shown in various activities like playing a musical instrument, slaying a tiger or lion, or riding a horse. What is important is that the legend is written in the Brahmi script and the contemporary classical Sanskrit language. The metrical compositions on the obverse generally record the king’s achievements on earth and the attainment of heaven in future through his merits. Figures of deities like Durga, Ganga and Lakshmi are also featured on the reverse. More gold coins than those of silver and copper have been found, suggesting it was a prosperous period. The coming of the Muslims into India brought an altogether new look to coinage in the country. As the representation of figures was not acceptable to Islam, Muslim rulers introduced inscriptions in Arabic or Persian script. Mahmud Gazni, who invaded India from 1001 A.D. to 1021 A.D., issued coins with the Kalima in the Kufic script (Kufic is the name of the earliest form of Arabic script) on one side, with the translation of this in Sanskrit language and Sharada script on the other. When Akbar came to power, he issued a new variety of coins, known as the Ilahi, in 1585 A.D. The fact that he was a liberal ruler is borne out by his coins. The most interesting type of coins issued by Akbar is generally known as the ‘Rama-Siya’ type, issued in the beginning of his fiftieth year as emperor. These were gold and silver coins bearing the effigies of Ram and Sita with the words ‘Rama-Siya’ in the Nagari script. In the 17th century Shivaji Bhosale (1630 to 1680 A.D.) established his kingdom in Maharashtra. The Maratha power was a great challenge for the Mughals. Shivaji issued gold as well as copper coins. His gold coins are known as Hon and the copper coins are known as Shivarai. On one side is ‘Raja Shiva’, on the other side ‘Chhatrapati’. A similar variety is also found in silver, but here the legend on one side is ‘Raja Shiva Chhatrapati’ and on the other is ‘Jagadamba Prasanna’ (such coins are extremely rare). Even though we are now in the 21st Century, when currency notes and coins are being replaced by credit cards, the enigmatic wonder of old coins lingers, bearing with it the story of times that have gone and how the world might have been. Written by: Manjiri Bhalerao Photographs: Namrata Khandekar-Boileau Copy editor: Alice Agarwal Coins Courtesy: Basti Solanki, Amol Bankar
1.29 Calculating masses Chemical equations are a useful shorthand way of describing the changes that take place when a chemical reaction happens. Word equations, which name the reactants and products, can be helpful, but it is often more helpful to use the chemical formulae of the reactants since the formulae show the actual elements involved. For example, if you were predicting a word equation for when methane burns in oxygen, you might suggest an equation that treats methane as if it were a single element. This is wrong of course because methane is not a single element; it is a compound of carbon and hydrogen (CH4). When it burns in a plentiful supply of oxygen (O2) the carbon in methane becomes carbon dioxide (CO2) and the hydrogen becomes water (H2O). This is more easily shown as a symbol equation: CH4 + 2O2 → CO2 + 2H2O. 1.29 Conserving mass 1.29 calculate reacting masses using experimental data and chemical equations Use the animation to see how to balance the equation for the combustion of methane. Notice that when it is balanced, the equation shows that the total mass of the reactants is equal to the total mass of the products. The law of conservation of mass states that the mass of the products in a chemical reaction must equal the mass of the reactants. We use this idea throughout the course. 1.29 Activity 1. Masses reacting The video here shows a classic reaction where hydrogen gas (H2) is used to remove oxygen from copper (II) oxide (CuO). The reactants are therefore hydrogen and copper oxide. The products are water and copper. Study the video closely. You will see that the black copper oxide slowly becomes copper coloured as the oxygen is removed from it. The water produced in this reaction leaves the tube as vapour, and so the mass of the tube decreases as the reaction continues. 1.29 Keep it in proportion The combustion of methane in oxygen produces carbon dioxide gas as well as water vapour. This exercise allows us to calculate the "carbon footprint" produced by the burning of this fossil fuel. Use the equation and the relative mass data in the video to calculate the following (a short sketch of how such reacting-mass calculations can be set up follows below): the mass of:
- carbon dioxide produced when 32g methane is burnt in excess oxygen?
- water produced when 1.6 g methane is burnt in excess oxygen?
- oxygen required for the complete combustion of 640g methane?
- carbon dioxide produced by the complete combustion of 1g of methane
- carbon dioxide produced by the complete combustion of 1 tonne of methane?
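As a rough illustration of how these questions can be tackled, here is a short sketch using the balanced equation above and the standard relative formula masses (CH4 = 16, O2 = 32, CO2 = 44, H2O = 18). These values are standard, but they are assumed here to match the data given in the video.

```python
# Illustration only: reacting-mass calculations for CH4 + 2O2 -> CO2 + 2H2O.
# Relative formula masses (standard values, assumed to match the video data):
M_CH4, M_O2, M_CO2, M_H2O = 16.0, 32.0, 44.0, 18.0

def mass_co2_from_ch4(mass_ch4: float) -> float:
    # 1 mol CH4 (16 g) gives 1 mol CO2 (44 g)
    return mass_ch4 * M_CO2 / M_CH4

def mass_h2o_from_ch4(mass_ch4: float) -> float:
    # 1 mol CH4 gives 2 mol H2O (2 x 18 g)
    return mass_ch4 * 2 * M_H2O / M_CH4

def mass_o2_for_ch4(mass_ch4: float) -> float:
    # 1 mol CH4 needs 2 mol O2 (2 x 32 g)
    return mass_ch4 * 2 * M_O2 / M_CH4

print(mass_co2_from_ch4(32))          # 88.0 g of CO2 from 32 g of methane
print(mass_h2o_from_ch4(1.6))         # 3.6 g of water from 1.6 g of methane
print(mass_o2_for_ch4(640))           # 2560.0 g of oxygen for 640 g of methane
print(mass_co2_from_ch4(1))           # 2.75 g of CO2 per gram of methane
print(mass_co2_from_ch4(1_000_000))   # 2,750,000 g (2.75 tonnes) of CO2 per tonne of methane
```

The same proportional reasoning applies by hand: convert the given mass to moles, use the mole ratio from the balanced equation, then convert back to a mass.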
Research published in Scientific Reports has found that exercise could help with preventing colds and infections from developing. While the study was conducted on mice, it does have implications for human health too. The study involved 28 lab mice of average body weight, which were given tests to determine their blood and fat cell inflammation levels. The mice were divided into two groups, and one of them started a swimming workout routine where they swam in a pool for 10 minutes, 5 days a week. As mice aren’t natural swimmers, the workout is the equivalent of jogging for half an hour in human terms. The second group of mice didn’t do any exercise, and the inflammation levels of both groups were monitored carefully over three weeks. Researchers predicted that the swimming mice would have higher levels of inflammation; the theory was that their bodies needed to work to heal the minimal tissue damage that normally occurs after working out, which is the way that muscles gain strength. The researchers were right: the mice that were exercising had higher levels of inflammation than the sedentary mice. To test the mice’s immunity, half of each group were injected with Staphylococcus germs after the three weeks finished. This is known as a Staph infection, and the germs can lead to severe lung problems resembling pneumonia. While both injected groups got ill, their bodies responded very differently to the germs, and this depended on whether they had been exercising or not. The swimming mice had more active immune systems and lower inflammation levels than the sedentary mice; they also didn’t get as ill and experienced far less lung damage. The research concluded that exercise strengthens the immune system and helps the body to recover faster following an illness. How this works exactly isn’t certain, but the two observable effects were fat reduction and regular stress. Fat is associated with inflammation, so as the mice lowered the size and number of their fat cells, they also decreased the level of inflammation in the body. Exercise also causes small trauma to the body tissue, forcing the body into the healing process. According to the research, this might act as a kind of practice for the body to draw experience from when dealing with unwanted illness or trauma. The process can cause mice to develop a stronger immune system and be able to better regulate inflammatory levels, keeping any infections at bay. While the study on mice may not be able to draw precise parallels with humans, many of the results can be applied to humans according to the leaders of the research. It’s therefore a great idea to exercise regularly if you want to avoid the symptoms of a cold. The best exercise is fast walking to improve total circulation and oxygen uptake. The symptoms of a cold can also be eased by taking enzymes like Serrapeptase such as Serra Enzyme 250,000IU, a good probiotic like Prescript-Biotics, and even using acupressure in the form of the HealthPoint™ device, all of which are recommended and can be purchased from Good Health Naturally, and can help with easing inflammation and pain.
Cholesterol is a waxy substance located in cells throughout your body that helps with the production of hormones and vitamin D, and aids in digestion. Your body produces enough cholesterol to support these functions, but you can also consume cholesterol in food. If you have too much cholesterol in your blood, it can combine with other substances to form plaque, causing your arteries to become narrow or completely blocked. If plaque in an artery ruptures, a blood clot can form and block blood flow. This may result in oxygen-rich blood not being carried throughout the body as necessary and may lead to heart disease, a heart attack or a stroke. Types of Cholesterol Lipoproteins are made up of fat and protein. High-density lipoprotein (HDL) carries cholesterol to the liver, which removes it from the body. HDL is often referred to as “good” cholesterol. Low-density lipoprotein (LDL) can cause plaque to build up in arteries and increase the risk of heart disease. LDL is often referred to as “bad” cholesterol. Very low-density lipoprotein (VLDL) can also cause plaque buildup in the arteries. VLDL mostly transports triglycerides, whereas LDL carries cholesterol. How Your Diet and Lifestyle Can Affect Your Cholesterol Levels An unhealthy diet filled with foods that contain saturated and trans fats can lead to high levels of LDL cholesterol. Trans fat can also lower your HDL level. Red meat, full-fat dairy products and deep-fried, baked and processed foods often contain high levels of saturated fat. Some processed and fried foods also contain high amounts of trans fat. Nutritional labels may refer to trans fat as “partially hydrogenated vegetable oil.” Focus on reducing your consumption of foods that contain saturated and trans fats and eating more foods with healthy fats, such as lean meat and nuts. Use unsaturated oils, such as olive, canola and safflower, for cooking. Foods that contain soluble fiber and plant stanols or sterols prevent the absorption of cholesterol. Include more fruits, vegetables, legumes and whole grains in your diet. Fish that are rich in omega-3 fatty acids, such as salmon, tuna and mackerel, will not reduce your level of LDL cholesterol. They may, however, increase your level of HDL cholesterol and reduce your risk of developing blood clots and inflammation, as well as the risk of having a heart attack. Lack of exercise can reduce your level of HDL cholesterol. Smoking can lower your HDL cholesterol and increase your level of LDL cholesterol. Your age, weight and genetics can also affect cholesterol levels. Talk to Your Physician If the results of a blood test have revealed that your LDL cholesterol is at a high level, discuss ways to address it with your doctor. He or she may recommend modifying your diet and activity level, or taking other steps. Seek professional guidance before making any dramatic changes to your eating habits or lifestyle.
Herpes Simplex Virus/Cold Sores What are cold sores? Cold sores are small blisters around and/or inside the mouth, caused by the herpes simplex virus. They are often referred to as "fever blisters." Herpes simplex type 1 is the most common cause of cold sores. Following initial infection, the herpes simplex virus becomes dormant for long periods of time but may reactivate, during which time cold sores reappear. Most episodes of cold sores do not last longer than two weeks. Extreme temperature changes, viral respiratory infections (the common cold), stress, or a weakened immune system are several of the triggers for recurrence of herpes simplex virus symptoms. As herpes simplex viruses are contagious, they are readily spread to others by kissing, sharing cups or utensils, sharing wash cloths or towels, or by direct touching of the cold sore before it is healed. The virus may also be spread to others in the day or two before the cold sore appears. What are the symptoms of cold sores? Some children and adults never experience any symptoms with the first infection; others have severe flu-like symptoms (fever, body aches) and ulcers or blisters in and around the mouth. Recurrences of cold sores are usually not as severe as the original outbreak. Although each individual may experience different symptoms, the most common symptoms are: - A small blister or cluster of blisters on the lips, mouth, nose, gums or tongue that gradually enlarge, scab and crust over - Tingling, itching, and soreness of the lips and mouth lasting from three to seven days - Tingling or burning sensation of the lips may be a “warning sign” of a recurrent infection The symptoms of cold sores may resemble other dermatologic conditions or medical problems. Always consult your children's primary care provider for a diagnosis. What is the treatment for cold sores? Specific treatment for cold sores will be determined by your child's primary care provider based on: Your child's age, overall health, and medical history Extent of the disease Your child's tolerance for specific medications, procedures, or therapies Expectations for the course of the disease Your opinion or preference Although the herpes simplex virus infection that causes cold sores cannot be cured, treatment may help alleviate and shorten the course of symptoms. Treatment may include oral antiviral medication, topical medication and/or pain relievers. Several of the available treatments are sold without prescription. Always consult your child's primary care provider. Reviewed by Debra D. Weissbach, MD, FAAP
Joints may be classified in different ways.
- From an anatomical point of view, with regard to the substances and the arrangement of the substances by which the constituent parts are united.
- From a physiological standpoint, with regard to the greater or smaller mobility at the seat of union.
- From a physical standpoint, either the shapes of the portions in contact being mainly considered or the axes round which movement can occur.
- A combination of the preceding methods may be adopted, and this is the plan most generally followed.
None of the classifications hitherto used is quite satisfactory, but perhaps, on the whole, that suggested by Prof. Alex. Macalister is the least open to objection, and therefore with slight modification it is utilised here. There are three chief groups of joints:
- Synarthroses. In joints of this class the bones are united by fibrous tissue.
- Synchondroses, or joints in which the uniting substance intervening between the bones is cartilage.
- Diarthroses. The constituent parts of joints of this class are (a) two or more bones, each covered by articular hyaline cartilage; (b) a fibrous capsule uniting the bones; and (c) a synovial membrane which lines the fibrous capsule and covers any part of bone enclosed in the capsule and not covered with articular cartilage. An interarticular plate of cartilage may or may not be present.
- Sutures or immovable joints, in which the fibrous tissue between the bones is too small in amount to allow movement.
- Harmonic. The edges of the bones are comparatively smooth and are in even apposition, e.g., vertical plate of palate and maxilla.
- Squamous. The margin of one bone overlaps the other, e.g., temporal and parietal.
- Serrate. The opposed edges interlock by processes tapering to a point.
- Dentate. The opposed edges are dovetailed, e.g., occipital and parietal.
- Limbous. The opposed edges alternately overlap, e.g., parietal and frontal.
- Schindylesis. A ridge or flattened process is received into a corresponding socket, e.g., rostrum of sphenoid and vomer.
- Gomphosis. A peg-like process is lodged in a corresponding socket, e.g., the fangs of the teeth.
- Syndesmoses. Movable joints in which the fibrous tissue between bones or cartilages is sufficiently lax to allow movement between the connected parts, e.g., thyreo-hyoid membrane; interosseous membranes of forearm and leg.
In all synchondroses a certain amount of movement is possible, and they are often called amphiarthroses.
- True synchondroses. The cartilage connecting the bones is the remains of the bar in which the bones were ossified, e.g., occipito-sphenoidal joint.
- False synchondroses. The plate of cartilage intervening between and connecting the bones is fibro-cartilage and is not part of the cartilage in which the bones were ossified, but is developed separately, e.g., intervertebral joint and pubic symphysis. The articular end of each bone may be covered with hyaline cartilage and there may be a more or less well-marked cavity in the intervening plate of fibro-cartilage.
In diarthrodial joints the surfaces in contact may be equal and similar or unequal and dissimilar. In the former case the joints are homomorphic; in the latter, heteromorphic.
- Plane or arthrodial. Flat surfaces, admitting gliding movement, e.g., intercarpal and acromio-clavicular joints.
- Ephippial. Saddle-shaped surfaces placed at right angles to each other, admitting free movement in all directions, e.g., metacarpo-phalangeal joint of thumb. 
- Enarthrodial. Ball-and-socket, allowing the most free movement, e.g., hip and shoulder joints.
- Condylarthroses. The convex surface is ellipsoidal, and fits into a corresponding concavity, e.g., wrist and metacarpo-phalangeal joints.
- Ginglymi. One surface consists of two conjoined condyles or of a segment of a cone or cylinder, and the opposite surface has a reciprocal contour. In these joints movement is only permitted round one axis, which may be transverse, e.g., elbow, ankle; or it may be vertical, in which case the joint is trochoid, e.g., odontoid process of axis with atlas, radius with ulna.
Such a classification should be considered as being purely academic, and the student must always remember that it is not enough to discuss a joint by assigning it to a particular class in any scheme; for he must be familiar with the actual conditions present in every joint. No classification, however perfect, must be taken as final, and each joint should be studied as a separate thing altogether apart from any general systematic arrangement.
Accessibility is the design of devices, environments, services, and products for people with disabilities that include a range of physical, sensory, mental, cognitive, developmental or intellectual impairments. In an effort to best provide equal access to physical spaces and environments as well as social, political, and economic resources, accessibility design attempts to offer both direct (unassisted) and indirect (assisted with technology) universal access to the world. Common forms of assistive technologies for humans include mobility technologies such as wheelchairs, prosthetics, walkers, canes, visual technologies like braille and screen readers, auditory technologies in hearing aids and listening devices, and cognitive aids from software to memory devices. ADA stands for the American with Disabilities Act and is a federal law that was enacted to prohibit discrimination against any individual with a disability. This includes employment, hiring, promotions, discharge, training, and benefits of employment. ADA is enforced by the US Equal Employment Opportunity Commission. Under the ADA a disability is defined as a person who has a physical or mental impairment that limits one or more major life activity. This includes people who may have a record of an impairment, but do not currently have a disability. A disability is a physical or mental condition that limits a person’s movement, senses, or activities. Short term disability lasts only a specific period of time, which is typically several months and up to 1 year. Generally, you receive benefits for a short term disability from your insurance after a waiting period of up to 14 days.
Using sound to communicate Since time immemorial, humankind has used sound as a means of communication: to signal danger, to pass on a message, to call people and domesticated animals home, to scare off enemies and wild animals, to express emotion – and for ritual communication with the great unknown. As our ancestors evolved, so did the use of sound and sound-tools. Let us take a closer look at one such tool: the ancient birch trumpet. Or as the Norwegians call it: the lur. What is it? Simply put, the lur is a hollowed-out piece of wood, designed to create a loud sound. It is narrow at one end and wider at the other. By blowing air into the narrow end – at the same time as shaping and vibrating one’s lips – sound is created and further enhanced on its journey through the hollow wood-pipe. The sound is similar to that of a trumpet – and can be heard over long distances. The longer the lur, the more tones it creates. Tones are solely generated by the shape of the player’s lips, and the pressure of the airflow coming from her lungs. A long and a short version Historically, there have been both a long and a short version of the lur. (1) The short version is a hollowed-out whole piece of wood. (2) The long version is made from a young tree trunk or a branch – split into two halves – hollowed-out – and then put back together again. The outer surface is often clad with birch bark. Scaring wild animals away In Norwegian folklore and historical storytelling, the instrument is strongly associated with the lonely existence of the summer dairy milkmaid and the young herder girl or boy. Every summer, the milkmaid and the herder took the animals out into the woods or up into the mountains – and lived there alone in simple seasonal abodes until the autumn. Living alone with the farm animals could be dangerous in areas with aggressive predators – like wolves and bears. Usually, people did not have weapons. Instead, they used noise to scare unwanted intruders away. The lur was one such weapon of sound. The milkmaid communicated with other nearby milkmaids by blowing the horn – and – depending on distance and topography – also with the people back home at the main farm. She also sounded the lur to call the herder and the domestic animals back for milking in the late afternoon. Maybe it was during her limited spare time in the evenings that she started creating simple tunes with the crude instrument; expressing loneliness and longing; turning the lur into more than just a practical tool. The meaning of the word The word lur comes from the old Norse lúðr and simply means a hollowed-out tree trunk or branch. When hearing the word lur, most modern-day Norwegians think of a fog horn – a tåkelur. Today, the sound guiding the seafarers lost in the fog comes from automated machines. But once upon a time – a man or a woman blew the horn – like a human beacon of sound – helping the sailor to find his way to safety. In 1894, Norwegian archaeologists found 2 bronze trumpets outside the city of Stavanger, dating back some 3000 years. The wooden version of lur is believed to predate these metal instruments by millennia. Is the birch trumpet still in use? Today, only enthusiasts and folk music groups play the instrument, keeping the traditions and the old tunes alive. Sadly, the milkmaids and the young herders are long since gone. If you search the internet, you will find video-examples containing the birch trumpet’s sound, an ancient link back to our distant ancestors. 
And if you close your eyes and listen carefully – you may hear their laughter, and see their smiling faces appear before your very eyes. Main source: «Det gjallar og det læt» by Reidar Sevåg – Det Norske Samlaget 1973.
How do I write a lesson plan? Your lesson plan should include:
- An objective or statement of learning goals: Objectives are the foundation of your lesson plan.
- Materials needed: Make a list of all necessary materials and ensure they are available well in advance of the lesson.
What are the 5 parts of lesson plan? The 5 Key Components Of A Lesson Plan
How do you write a lesson plan for beginners? Steps to building your lesson plan
- Identify the objectives.
- Determine the needs of your students.
- Plan your resources and materials.
- Engage your students.
- Instruct and present information.
- Allow time for student practice.
- Ending the lesson.
- Evaluate the lesson.
What is the structure of a lesson plan? A lesson structure maps out the teaching and learning that will occur in class. A clearly thought out lesson has set steps that need to be achieved, with parts in between to be filled with more knowledge through scaffolding.
What are the 7 E’s of lesson plan? So what is it? The 7 Es stand for the following: Elicit, Engage, Explore, Explain, Elaborate, Extend and Evaluate.
What are the 3 types of lesson plan?
- Detailed lesson plan. A detailed plan covers everything and gets teachers fully prepared for the lesson ahead.
- Semi detailed lesson plan.
- Understanding by design (UbD)
- Stage 1: Desired Results.
- Stage 2: Assessment Evidence.
What is 4 A’s lesson plan? The 4-A Model Lesson plans are an important part of education. They’re a written plan of what a teacher will do in order to achieve the goals during the school day, week, and year. Typically, lesson plans follow a format that identifies goals and objectives, teaching methods, and assessment.
What is 4a’s method? The Four A Technique is a strategy to connect the content you are teaching to the life experiences of learners. The strategy is broken into four parts: Anchor, Add, Apply and Away, which describe four possible parts of learning tasks.
What are the 5 methods of teaching? Teacher-Centered Methods of Instruction
- Direct Instruction (Low Tech)
- Flipped Classrooms (High Tech)
- Kinesthetic Learning (Low Tech)
- Differentiated Instruction (Low Tech)
- Inquiry-based Learning (High Tech)
- Expeditionary Learning (High Tech)
- Personalized Learning (High Tech)
- Game-based Learning (High Tech)
What are the basic parts of lesson plan? The most effective lesson plans have six key parts:
- Lesson Objectives.
- Related Requirements.
- Lesson Materials.
- Lesson Procedure.
- Assessment Method.
- Lesson Reflection.
What is a good lesson plan? Each lesson plan should start by considering what students will learn or be able to do by the end of class. They should be measurable, so teachers can track student progress and ensure that new concepts are understood before moving on, and achievable considering the time available.
What is lessons plan? A lesson plan is a teacher’s guide for facilitating a lesson. A lesson plan refers to a teacher’s plan for a particular lesson. 
Here, a teacher must plan what they want to teach students, why a topic is being covered and decide how to deliver a lecture. How do you structure a class? Beginning class with effective transitions - Ask questions related to today’s topic. Start with a few questions and ask students to consider the answer. - Activate prior knowledge with recaps. Begin class by asking students to recap what was covered in the last class. - Short writing exercises help students focus.
“We know remarkably little about how the Shenandoah Valley’s African Americans negotiated the vexing uncertainties of secession, civil war, and Reconstruction. This compelling and accessibly written narrative foregrounds the struggles of freedom-seeking enslaved persons in America’s most turbulent era.”—Brian Matthew Jordan, author of Marching Home: Union Veterans and Their Unending Civil War “A groundbreaking study that demonstrates how African Americans shaped the Civil War era. Noyalas systematically dismantles the old myth that the Shenandoah Valley did not have enslaved populations and instead weaves a compelling story of African American resistance and perseverance in a region deeply contested by war.”—James J. Broomall, author of Private Confederacies: The Emotional Worlds of Southern Men as Citizens and Soldiers Slavery and Freedom in the Shenandoah Valley during the Civil War Era examines the complexities of life for African Americans in Virginia’s Shenandoah Valley from the antebellum period through Reconstruction. Although the Valley was a site of fierce conflicts during the Civil War and its military activity has been extensively studied, scholars have largely ignored the Black experience in the region until now. Correcting previous assumptions that slavery was not important to the Valley, and that enslaved people were treated better there than in other parts of the South, Jonathan Noyalas demonstrates the strong hold of slavery in the region. He explains that during the war, enslaved and free African Americans navigated a borderland that changed hands frequently—where it was possible to be in Union territory one day, Confederate territory the next, and no-man’s land another. He shows that the region’s enslaved population resisted slavery and supported the Union war effort by serving as scouts, spies, and laborers, or by fleeing to enlist in regiments of the United States Colored Troops. Noyalas draws on untapped primary resources, including thousands of records from the Freedmen’s Bureau and contemporary newspapers, to continue the story and reveal the challenges African Americans faced from former Confederates after the war. He traces their actions, which were shaped uniquely by the volatility of the struggle in this region, to ensure that the war’s emancipationist legacy would survive. Jonathan A. Noyalas is director of the McCormick Civil War Institute at Shenandoah University. He is the author or editor of several books, including Civil War Legacy in the Shenandoah: Remembrance, Reunion and Reconciliation.
Using Pictures to Read the Past
In this Using Pictures to Read the Past lesson plan, students use primary and secondary sources, using the Internet and other media.
See similar resources:
Primary and Secondary Sources: Show your class the difference between primary sources and secondary sources. The first page provides a list of examples of each type of source. While they research, pupils can refer back to the list quickly to make sure they are reading... (7th - 12th, Social Studies & History)
Straight to the Source: Research famous figures from history through the primary sources they created! Explore how these types of documents can enrich our study of the past with your middle and high school learners. They create picture books to illustrate... (6th - 12th, Social Studies & History)
Evaluating Print Sources: Not all sources are created equal, so how do you evaluate them? Writers learn how to evaluate print sources based on elements such as audience, tone, and argument in the sixth handout of 24 in the Writing the Paper series. (9th - Higher Ed, English Language Arts)
Using Primary Sources: Wide Open Town: A picture speaks a thousand words, no matter how old! Scholars use political cartoons from the era of Prohibition and the Temperance Movement to analyze what a primary document (in this case, a bootlegger's notebook) is telling them. (6th - 12th, English Language Arts)
Getting Oriented: A Move to Alternative Fuels: The challenge surrounding a transition to alternative fuels may fall on the next generation of scientists, innovators, and creative thinkers. Get them started on their journey with a research and discussion activity that focuses on the issue. (9th - 12th, Science)
Early English Settlements History Detectives: Young historians play the role of history detectives as they investigate some primary source texts and images related to the early colonization of America, the Jamestown Settlement, and the Mayflower Compact. (6th - 8th, Social Studies & History)
Primary and Secondary Sources - 7th: A link to a beautiful Animoto presentation is included, giving examples of primary sources that a student might want to contact when doing research. Using the Topaz Internment Camp in Utah as a sample topic, middle schoolers view a slide presentation. (7th - 9th, English Language Arts)
Mondrian - Primary/Secondary Color Study: Utilizing computer software, learners demonstrate the color spectrum. They investigate the life of the artist Piet Mondrian and define his style of artwork. Then they use Photoshop to recreate some of his designs while discovering the use of primary and secondary colors. (7th - 12th, Visual & Performing Arts)
March 29, 2018 The mean-square value of a set of numbers, xn, n = 1, …, N, is given by the following equation: mean-square value = (x1² + x2² + … + xN²) / N. The mean-square value measures the average strength, or power, of a signal. Figure 3.2 plots the square of the vibration signal in Figure 3.1 and shows the mean-square value. For random signals with a mean value of zero, the mean-square value is the quantity that can be added when summing two signals. For example, consider the sum of two random variables, A and B. The square of (A+B) is given by (A+B)² = A² + 2A·B + B². If A and B are independent random variables with a zero mean value, then the mean value of A·B is zero. Therefore, the mean-square value of the sum is equal to the sum of the mean-square values (with zero mean).
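To make the definition concrete, here is a minimal sketch in Python. The signal values and array sizes are invented for illustration; only the formula itself comes from the text above.

```python
import numpy as np

def mean_square(x):
    """Average of the squared samples: (x1^2 + x2^2 + ... + xN^2) / N."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** 2)

# Two independent, zero-mean random "signals" (illustrative only).
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 100_000)   # mean-square value close to 1
b = rng.normal(0.0, 2.0, 100_000)   # mean-square value close to 4

print(mean_square(a), mean_square(b), mean_square(a + b))
# The last value is close to mean_square(a) + mean_square(b),
# illustrating that mean-square values add for independent zero-mean signals.
```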
The War that Split the Pantheon The gods were in control of the livelihoods of the Greek people, so understanding those gods and their personalities was essential to survival. Not only did the people need to know which god to pray to in any given situation, they needed to know how to go about worshipping that specific god or goddess. For example, if the people of Athens were trying to pray for a successful battle, they would pray to Athena, goddess of war. Also, they needed to know that one way to properly worship her would have been a tribute of sorts at a place that was representative of her, such as the Acropolis. Homer's Iliad and Odyssey portray the personalities of the gods and goddesses and demonstrate their human natures. Above all, they demonstrate how the human natures of the gods and goddesses managed to cause and exacerbate the Trojan War. At the top of the Pantheon of the Gods is Zeus, god of the sky. Zeus was famous for his extramarital affairs, as well as for his attempts to keep the gods and goddesses united. This was not an easy task. He was not usually known for using his intelligence; however, consider this example. When he was given an apple inscribed "to the fairest", he realized his dilemma. In choosing the recipient of the apple, he needed to pick Hera, his wife and goddess of marriage. But he also needed to pick his daughter Athena, the goddess of war and wisdom, because she protected him from Hera. And he also needed to pick Aphrodite because she provided him with the women for his affairs. Unable to choose, he assigned the job to Paris, the youngest prince of Troy, who chose Aphrodite. Zeus's delegation of power would prove the catalyst that sparked not only war in the mortal world, but also a civil war among the gods and goddesses. This exhibit was curated by Caroline Whyler. Whyler received her BA in history from the University of California, Davis. Her first MA is in history, from California State University, Sacramento (CSUS). She is currently working on her second MA in Public History from CSUS, which she will receive in 2017. Her research has centered on ancient and medieval times, including, but not limited to, Alexander the Great and his campaign east, the Julio-Claudian Dynasty, and medieval monsters.
There are many similarities between communicating with adults and with children and young people. In either case we need to maintain eye contact and interest, respond to what they are saying, and treat them with courtesy and respect. However, when communicating with children and young people, we also need to think about how we can maintain the relationship of carer to child and what this means in the context of an educational setting. As teaching assistants, we must look at the situation we are in; for example, in a classroom with children, in the playground, in a meeting with other professionals, or at a parents' evening. Generally, when we communicate with people we automatically adapt our communication and use the appropriate language. In fact, when we are in a meeting with other professionals, we act and speak in a formal and professional manner, whilst when speaking to a child we are more animated and less formal. When communicating with children, we need to make sure we are listening actively to what they say, and we should be available for conversation whenever they feel the need to share something. Being clear and concise when giving instructions or explaining something to them is essential, as is using vocabulary and grammatical structures suitable to their age and abilities. General politeness and empathy not only show respect, but also teach them mannerly conversation. When talking with adults we tend to use a more serious and formal way of communicating, unlike when we are communicating with children and young people. This is because we adapt our type of communication; children need clearer instructions, an age-appropriate vocabulary, a calm tone, and body language that will not send mixed messages. Adults, instead, are more on the same wavelength as each other. However, clear and concise language should always be used when communicating with both children and adults. Also, protecting the individual rights and dignity of children, young people and older individuals must always be a priority while communicating. Therefore, the main differences between communicating with a child, young person or adult lie in our tone of voice, body language, facial expressions, gestures and the vocabulary we use. We need to adapt these depending on the age, needs or ability of the person we are speaking to. Children have many ways of communicating. They can express themselves through play, drawings, modelling, music, singing, dancing and writing. If we are communicating with a small child we may do this by playing with toys or reading a story, using silly voices. When communicating with a young person, instead, we would need to adapt our tone of voice and the phrases we use, as young people have a more varied vocabulary. In fact, communication with teenagers is different from communicating with younger children and can cause conflict and stress; the most important thing is to keep the lines of communication open. We can give young people more complex instructions, and they can also appreciate jokes and word play. We need to listen to and respect their ideas and keep up with their interests. For example, we can discuss past events, allowing them to express their feelings and emotions. With regard to communicating with an adult, this would be done differently, as we would normally do this by having a conversation face to face or by telephone, going out to a social event, by texting, or maybe by sending an email.
As said previously, we also need to consider the differences when communicating with anyone from a different culture or social background. Below is a summary of the main characteristics of, and differences between, communicating with adults and communicating with children and young people. Communicating with adults: – Use language that will be understood; – Maintain professionalism and support comprehension; – Make eye contact and use other non-verbal skills if needed; – Respect others' ideas even if we are not sure about them; – Use written forms of communication when needed, such as email, letters, notices, or texts; – Avoid assumptions; – Summarise and confirm the key points to ensure understanding; – Resolve areas of poor communication by discussing them; – Comply with policies for confidentiality, sharing information and data protection. Communicating with children and young people: – Speak clearly and only give as much information as is needed; – Use a vocabulary appropriate to their ages, needs and abilities: verbal expressions must be at the right level for the children; – Actively listen and respond positively; – Ask and answer questions to prompt responses and check understanding; – Adapt communication to their language and abilities; – Focus on what the child says; – Use not only verbal but also non-verbal communication skills, such as smiling, nodding, eye contact etc.; – Praise and encourage them to keep the conversation going; – Give support while communicating with children; – Never interrupt them whilst they are speaking; – Never dismiss what children say, because it lowers their self-esteem; – Never laugh at what children say; – Never hurry children when they are speaking. In conclusion, communication with adults (professionals and parents) is generally more formal and characterised by more complex language, discussion, and negotiation. With younger children, there is a much bigger emphasis placed on body language, facial expressions, and the use of simpler language. However, it is important to maintain a high level of professionalism when communicating with both adults and children. It is vital to maintain a high level of respect when communicating with adults and children/young people, as this helps to build trust and foster positive relationships. The use of verbal as well as non-verbal skills through effective body language helps to convey concepts to adults and enhances their participation in the communication process. When communicating with other adults who are not colleagues in the school, e.g. parents and carers, it is not advisable to use technical language with people who are not experts in this area. Also, a teaching assistant must be careful not to try to answer questions that are beyond their knowledge and expertise. In these cases, the parents' questions should be referred to the class teacher or a teacher specialised in the field. Respecting other people's thoughts and ideas, while maintaining the confidentiality of shared information, helps to build trust and confidence when communicating with adults.
Everybody gets a dry mouth from time to time. Temporary mouth dryness can be brought on by dehydration, stress, or simply the normal reduction in saliva flow at night. But persistent mouth dryness, a condition known as xerostomia, is cause for concern. Xerostomia occurs when your salivary glands, which normally keep your mouth moist by secreting saliva, are not working properly. A chronic lack of saliva has significant health implications. For one thing, it can be difficult to eat with a dry mouth; tasting, chewing and swallowing may also be affected. This could compromise your nutrition. Also, a dry mouth creates ideal conditions for tooth decay. That's because saliva plays a very important role in keeping decay-causing oral bacteria in check and neutralizing the acids these bacteria produce; it is the acid in your mouth that erodes tooth enamel and starts the decay process. A dry mouth can also cause bad breath. There are several possible causes for xerostomia, including: - Medications. For most people suffering from dry mouth, medications are to blame. According to the U.S. Surgeon General, there are more than 500 medications (both prescription and over-the-counter) that have this side effect. Antihistamines (for allergies), diuretics (which drain excess fluid), and antidepressants, are high on the list of medications that cause xerostomia. Chemotherapy drugs can also have this effect. - Radiation Therapy. Radiation of the head and neck can damage salivary glands—sometimes permanently. Radiation to treat cancer in other parts of the body will not cause xerostomia. - Disease. Some systemic (general body) diseases can cause dry mouth. Sjögren's syndrome, for example, is an autoimmune disease that causes the body to attack its own moisture-producing glands in the eyes and mouth. Other diseases known to cause dry mouth include diabetes, Parkinson's disease, cystic fibrosis and AIDS. - Nerve Damage. Trauma to the head or neck can damage the nerves involved in the production of saliva. If you are taking any medication regularly, it's possible that your physician can either suggest a substitute or adjust the dosage to relieve your symptoms of dry mouth. If this is not possible or has already been tried, here are some other things you can do: - Sip fluids frequently. This is particularly helpful during meals. Make sure what you drink does not contain sugar and isn't acidic, as these will both increase your risk of tooth decay. All sodas, including diet varieties, should be avoided, as they are acidic and attack the tooth surface. - Chew sugarless gum. This will help stimulate saliva flow if your salivary glands are not damaged. Choose a variety that contains xylitol, a natural sugar substitute that can be protective against tooth decay. - Avoid drying/irritating foods and beverages. These include toast and crackers, salty and spicy foods, alcohol and caffeinated drinks. - Don't smoke. This can dry out the mouth and also increase your risk of gum disease. - Use a humidifier. Running a cool-mist humidifier at night can be soothing. - Use saliva stimulants/substitutes. There are prescription and over-the-counter products that can either stimulate saliva or act as a substitute oral fluid. We can give you some recommendations. - Practice good oral hygiene. Brush at least twice a day with a fluoride toothpaste; this will remove bacterial plaque and add minerals to strengthen your teeth. Don't forget to floss. - Have an exam/cleaning. 
If you have dry mouth, it's more important than ever to maintain your regular schedule of visits to the dental office. Please be sure to let us know what medications you are taking, particularly if there have been any changes recently. We will do our best to help relieve any dry-mouth symptoms you are experiencing.
Stand-Alone Solar PV systems are not connected to the electricity grid and are typically installed in remote areas where there is limited connection to the grid, or in areas of low electricity demand. Unlike their grid-connected counterparts, these systems must have batteries or back-up generation to provide supply at night. In many cases, they will also include a diesel or petrol generator to supplement energy supply. How do Stand-Alone Solar Power Systems work? A stand-alone solar system uses solar panels to charge large batteries, which are then used for power during non-daylight hours. During the day, the electricity generated is used to power the home and charge the batteries. At night, and during rainy days, all necessary power is provided by the batteries. In some cases, where it is important that power is always available, a stand-alone system may also have another source of power such as a diesel generator. Power generated by a stand-alone system is DC (direct current); it is stored in a battery and converted to AC (alternating current).
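As a rough illustration of why the battery bank is the heart of a stand-alone system, here is a back-of-the-envelope sizing sketch. The figures used (two days of autonomy, 50% depth of discharge, a 24 V bank, 90% inverter efficiency, a 3 kWh daily load) are assumptions chosen only for the example, not recommendations from the article.

```python
def battery_bank_ah(daily_load_wh, autonomy_days=2, depth_of_discharge=0.5,
                    system_voltage=24, inverter_efficiency=0.9):
    """Very rough battery-bank size (ampere-hours) for an off-grid home."""
    energy_needed_wh = daily_load_wh * autonomy_days / inverter_efficiency
    # Oversize the bank so it is never drained below the allowed depth of discharge.
    bank_wh = energy_needed_wh / depth_of_discharge
    return bank_wh / system_voltage

# Example: a small household drawing about 3 kWh per day.
print(round(battery_bank_ah(3000)))  # roughly 556 Ah at 24 V under these assumptions
```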
What is it? Lowland heath is found from sea-level up to about 300m and is characterised by heathers, gorse and grasses. On infertile, well-drained sands and gravels in the drier and colder east of the country, heather (ling) and European gorse are dominant. Such heaths generally support relatively few plant species, and a lack of competition allows lichens to flourish (particularly in Breckland, where there are a number of rarities and lichens form a distinctive short sward with grasses). As the climate becomes increasingly damp towards the west, dwarf gorse and bell heather become frequent, then western gorse and bristle bent and purple-moor grass. Where there is variety in the physical structure (e.g. short young and bushy old heather, patches of gorse and exposed bare ground) heaths can support a specialist fauna including characteristic ground-nesting birds, reptiles and invertebrates. Wet heath is found where either shallow peat or mineral soils are seasonally waterlogged and supports cross-leaved heath, heather (ling) and bog mosses. Wet heath may be more species-rich than dry heath, with lousewort and heath spotted-orchids. Particularly damp areas support the distinctive (if small) white beak-sedge and sundews. In shallow valleys in heathland, a type of lowland fen known as valley mire may be found, and is characterised by bog-mosses, rushes and purple moor-grass. Why is it like this? Lowland heathland is defined by the poor fertility of its soils, which discourages other plants, and a long history of human management. The process of natural succession (in which nutrients gradually accumulate allowing larger, more vigorous plants to become established at the expense of smaller, less competitive species) results in heathland eventually developing into birch or pine woodland if left to its own devices. Traditional heathland activities such as livestock grazing and burning have played a vital role in stalling succession and allowing heathland to persist over the centuries. Only plants adapted to the poor, acidic heathland soils flourish, and some resort to unusual means to gain nutrients - carnivorous sundews have glandular tentacles with sticky droplets on their leaves. These catch unwary insects and curl inwards, holding the insect while they ooze digestive enzymes and gradually absorb the nutrients released. The pretty, pink lousewort has an alternative strategy, and parasitizes the roots of other plants to gain nutrients, while gorse hosts bacteria in its root nodules that can grab nitrogen from the air present within soil. Although many species are restricted to heathland in the UK, this is, in some cases, a reflection of the presence of suitable microhabitat and the climate. Many specialists require warm, sunny conditions with bare ground and nearby vegetation for basking or shelter, conditions which in warmer countries are not so restricted to heathland. Distribution in the UK Lowland heathland is scattered across the UK, although largely absent from Scotland and north Wales. What to look for In early spring, listen out for what is arguably Britain’s most beautiful bird song, that of the woodlark. Between May and August, the distinctive churring of nightjar may be heard at dusk from mature heath. The enigmatic stone curlew is best seen at publicised reserves, for example in Breckland. 
All six British reptiles can be found on heathland - look on sunny mornings in spring when the air is still cool and they need to bask in the sun to warm up (sand lizard and smooth snake are mostly restricted to Dorset and Hampshire). Invertebrates are at their best in June and July – heathland supports many rarities that are hard to find, but the striking green tiger beetle and the beautiful day-flying emperor moth are more common, while eye-catching clouds of silver-studded blue butterflies can be seen flickering over short open heath. On damp heathland tracks in the south-west, look out for a group of specialist plants, including the tiny gentian, yellow centaury, and the starry-flowered white water buttercup, three-lobed water crowfoot. From April, many heaths turn yellow as gorse blossom is at its most abundant, but heathers flower later and are at their best in August. Look for Dorset heath and Cornish heath if you are in the right area. Also in August, look out for the lovely blue marsh gentian on wet heath. Lowland heathland requires some kind of human intervention if it is to persist. Over the last few decades, this intervention has largely been instigated by conservationists in an attempt to save the meagre 20% of lowland heathland that survived the last two centuries. There are, however, notable exceptions, such as the New Forest, where the tradition of common grazing by farmers and smallholders continues. Extensive grazing using hardy breeds of cattle, ponies or sometimes sheep, cutting heather and gorse and some controlled burning combined with activities to create bare ground such as scraping or turf removal can create the largely open conditions with sufficient structural variation to support a range of heathland species. Restoration work and reintroduction of some species has also been carried out. Despite this, not all heathland species are flourishing, and more targeted work, often around the creation of bare ground and the prevention or counteraction of nutrient enrichment (for example from air pollution) is needed. The level of recreational use of some heaths also presents challenges (including arson, trampling and disturbance to ground nesting birds) and on urban heaths in particular the needs of recreation and wildlife conservation need to be carefully balanced.
In botany, flowering plants are divided into two groups (called "classes"): monocots and dicots. A flower's classification is based on the physical structures of the plant, including characteristics such as how many petals a flower has. Some of these structures are easy to see and identify, whereas others (such as the pores on a grain of pollen) need special training or equipment to see and identify. It is usually possible to determine to which class a plant belongs by looking at the flower and its leaves and stem. Count the number of petals on the flower. If there are three, or a multiple of three (six, nine, and so forth), then the flower is likely a monocot. If there are four or five petals, or a multiple of four or five, then the flower is likely a dicot. Count the stamens. The number of stamens should correlate to the number of petals to confirm your findings so far. If there are three stamens, or a multiple of three, the flower is likely a monocot. If there are four or five stamens, or a multiple of four or five, the flower is likely a dicot. Examine the leaves. Veins in the leaves are usually clearly visible either from the top of the leaf or the underside. If the veins of the leaf are all parallel (running in the same direction) with little or no branching, then the flower is probably a monocot. If the veins branch out or form a spider-web-like pattern, then the flower is probably a dicot. Cut through the stem of the flower with secateurs or a sharp knife. Examine the cut end. If the stem has concentric rings, like the growth rings of a tree, then the flower is likely a dicot. Otherwise, the flower is likely a monocot. In some cases, the only way to see the structure of the stem will be to cut a very thin slice and examine it under a microscope. Things You Will Need - Sharp knife or secateurs - Microscope (optional) - Another way to tell if a flower is a monocot or dicot is to examine the seeds, if they are available. Dicot seeds separate easily into two halves, like split peas or beans. Monocot seeds cannot be separated into parts. - Sometimes it is not possible to tell if a flower is a monocot or a dicot based on easy-to-see characteristics. Some plants have contradictory characteristics, and some flowers may have mutations which make it difficult to classify them.
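The checks above amount to a simple decision procedure, which the following sketch restates in code. It is only a tally of the textbook clues described in this article (the function name and scoring scheme are invented for illustration) and is no substitute for a proper botanical key.

```python
def likely_class(petals=None, stamens=None, parallel_veins=None, stem_rings=None):
    """Tally the monocot/dicot clues described above; any argument may be omitted."""
    monocot = dicot = 0
    if petals is not None:
        monocot += petals % 3 == 0
        dicot += petals % 4 == 0 or petals % 5 == 0
    if stamens is not None:
        monocot += stamens % 3 == 0
        dicot += stamens % 4 == 0 or stamens % 5 == 0
    if parallel_veins is not None:          # parallel veins suggest a monocot
        monocot += bool(parallel_veins)
        dicot += not parallel_veins
    if stem_rings is not None:              # concentric rings in the stem suggest a dicot
        dicot += bool(stem_rings)
        monocot += not stem_rings
    if monocot == dicot:
        return "unclear"
    return "monocot" if monocot > dicot else "dicot"

print(likely_class(petals=6, parallel_veins=True))  # monocot
print(likely_class(petals=5, stem_rings=True))      # dicot
```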
Kids learn at an astonishing rate. For example, young children are able to acquire language and music skills far faster than adults due to enhanced "brain plasticity" (lasting changes in brain anatomy and physiology due to exposure to the environment) of children's developing auditory cerebral cortex. Examples of brain plasticity include experience-dependent growth of neuronal dendrites and axons (input and output processes of neurons) and establishment of new connections among those axons and dendrites to form new memories. Although adults retain some brain plasticity and can learn new languages or play musical instruments, adult acquisition of language and music skills is typically much slower than in children, and often not as comprehensive. For instance, non-native speakers can usually acquire a new language and speak without an accent if exposed to the new language before age 7. But the ability to speak without a foreign accent starts to decline sharply after age 8. Until recently, the underlying mechanisms that make children's brains learn faster than adult brains were a mystery, but new research provides tantalizing hints about what goes on in the brains of children that makes them "sponges" for absorbing new material. And perhaps most exciting, these fresh insights into the underlying mechanisms of enhanced brain plasticity suggest ways that we might one day "reset the clock" on adult brains and restore the astonishing ability to learn that those brains originally possessed in their youth. According to Dr. Julie Miwa, a Lehigh University neuroscientist and leading researcher in brain plasticity, some of the most promising recent discoveries into the mechanisms of enhanced brain plasticity of children center around a particular set of cholinergic (acetylcholine-sensitive) neuronal receptors in the brain called "nicotinic receptors" (yes, these are the same receptors that bind to nicotine in tobacco smoke). Dr. Miwa and colleagues have shown in animals that re-energizing these receptors by "knocking down" a protein called lynx1, which normally acts as a "molecular brake" on nicotinic receptor excitation, can enable adult brains to retain their youthful ability to learn and grow, possibly by sustaining increased activity of nicotinic receptors that promote formation of new dendrites, axons and synapses (connections) among those axons and dendrites. For example, through genetic manipulations in mice that knock out the genes that code for the lynx1 protein, Dr. Miwa's team have demonstrated enhanced motor learning ability that persists throughout life. And knocking out the lynx1 molecular brake on nicotinic receptor activity also improves associative learning (e.g. Pavlovian conditioning) and other forms of memory, according to Dr. Miwa.
Finally, working with Dr. Morishita of Harvard Medical School, Dr. Miwa helped establish that removing the lynx1 molecular brake in mice extends the "critical period" for visual brain plasticity well past the normal end of the critical period (postnatal day 60) for mice. Taken together, these results suggest that it might someday be possible to improve the ability of human adult brains to absorb and retain new information and skills by knocking down the lynx1 molecular brake. Dr. Miwa said in a recent interview that one way to ease up on the lynx1 brake in humans could be through administration of interfering RNA molecules that would change the way lynx1 genes are expressed in neurons, thereby "down-regulating" the lynx1 protein and freeing up neurons with nicotinic receptors to grow and form new connections. "Such a treatment would be especially beneficial for adult stroke patients and patients with other forms of memory decline such as in Alzheimer's, Parkinson's and Huntington's diseases," Dr. Miwa said, "helping to return their brains to a youthful state where re-learning of vital skills such as speech and memory could be accelerated." A substantial amount of research remains—including work to establish the efficacy and safety of such a treatment (there could be side effects, for instance)—but Dr. Miwa's work and that of other neuroscientists studying brain plasticity shows new ways that we might restore function in damaged adult brains, and even turbocharge healthy adult brains. Such work is being explored by Ophidion, Inc., where the company is developing an interfering RNA against lynx1, and a brain delivery shuttle to deliver it to the brain. They are moving toward clinical tests to tackle the cognitive decline associated with various neurodegenerative diseases, such as Alzheimer's disease. Would these novel treatments amount to a "smart pill" that restores child-like learning abilities in healthy adults? I'm not yet smart enough to say. But if I ever do take the lynx1 knock-out "smart pill" I'll get back to you with a definitive answer.
In this Discussion Board you are asked to explain the tools used to create e-learning. Do the following: 1) Define the term "e-learning authoring system" 2) Name four e-learning authoring systems and give their URLs 3) Briefly describe each system E-learning authoring systems and tools support the creation of interactive courseware and online discussions that help the cognitive and social processes of learning by allowing peers to engage in meaningful dialogue together with their tutor. These authoring systems encourage student collaboration. They improve team-working skills and promote independent thinking. They also motivate students as they continue to learn more in online teaching-learning environments. The online learning community and the World Wide Web offer a wide array of e-learning tools. Educators and developers check and try these tools to determine which ones fit their pedagogical needs before they decide which tool to incorporate into their e-learning strategy. A few examples of e-learning authoring systems or CMS/LMS are applications such as Blackboard, Moodle, CourseLab, Articulate Rapid E-Learning Studio, etc. These systems organize the content of the instruction. They offer self-contained ... The solution defines e-learning authoring systems and describes four examples. References included.
Interesting and confusing facts about time / time zones: +11:00 or UTC+11:00/GMT+11:00 - means that the current place is 11 hours ahead of UTC (Universal Time Coordinated) or GMT (Greenwich Mean Time). For example, current Vladivostok standard time is UTC+11. -08:00 or UTC-08:00/GMT-08:00 - means that the current place is 8 hours behind UTC (Universal Time Coordinated) or GMT (Greenwich Mean Time). For example, San Francisco standard time (PST) is UTC-08. - Some places/countries use time offsets that are not an integral number of hours from UTC/GMT. - Some places use a quarter-hour offset from UTC/GMT. - The usual Daylight Saving Time (Summer time) rule is altering the clocks ahead by one hour. There is an exception: - Lord Howe Island (Australia) advances its clocks by half an hour in the summer. Lord Howe Island is UTC/GMT +10:30 during local winter and UTC/GMT +11:00 during local summer. - Australia has both horizontal and vertical time zones in summer. Queensland and Western Australia do not observe DST. The middle of Australia (Northern Territory and South Australia) uses a half-hour offset from nearby Western Australia and the eastern states of Australia. Additionally, the Northern Territory does not use DST and South Australia does. In this instance, for example, two places, Nullarbor and Darwin, located at relatively the same longitude in the middle of Australia, have the same time during local winter (1 hour and 30 minutes ahead of Perth time, Western Australia). However, during local summer, Nullarbor and Darwin maintain a 1 hour difference. Darwin is 1 hour and 30 minutes ahead of Perth time and Nullarbor is 2 hours and 30 minutes ahead of Perth time (Western Australia). - Prior to 1995, the International Date Line split the country of Kiribati. The result was that the eastern part of Kiribati was a whole day and two hours behind the western part of the country where its capital is located. In 1995 Kiribati decided to move the International Date Line far to the east, which placed the entire country into the same day. Now eastern Kiribati and Hawaii, which are located in approximately the same area of longitude, are a whole day apart. - If two places are located in the northern hemisphere and both places use DST - for example Amsterdam (Netherlands) and New York (USA) - the time difference between those two places can vary by 1 hour during the year. This happens because Europe shifts to DST on the last Sunday in March and the U.S.A. goes to DST on the second Sunday in March. The time difference between New York and Amsterdam for most of the year is +06 hours (Amsterdam time is 6 hours ahead of New York time); however, for a few weeks in the spring and autumn the time difference is +05 hours. The same situation applies between most European cities and North American cities during those weeks. - If two places are located in opposite hemispheres and both places use DST, the time difference between those two places can vary by 1-3 hours during the year. New York (northern hemisphere) and Chile (southern hemisphere), for example, could have:
- 0 hours difference (time is the same in both places); - 1 hour (Chile time is 1 hour behind NYC time); and - 2 hours (Chile time is 2 hours behind NYC time). A similar example is the time difference between local time in New York and local time in Rio de Janeiro (Brazil): Rio de Janeiro time could be +1, +2 or +3 hours ahead of New York time depending on the time of the year. - Equatorial and tropical countries (lower latitudes) usually do not observe Daylight Saving Time, as the duration of day and night is very much the same - 12 hours. However, Fiji (UTC/GMT +12:00) and Samoa (Apia, UTC/GMT +13:00) do observe DST. - Although Russia is geographically spread over 12 time zones, it officially observes only 9 time zones (from March 2010). - Usually, when one travels in an easterly direction, a different time zone is crossed every 15 degrees of longitude (which is equal to one hour in time). However, there are exceptions. Since Japan is located to the east of Vladivostok (southern part of the Russian Far East), one would assume that Japan time would be either similar to or ahead of Vladivostok time. In this case the situation is completely opposite: Japan time is 2 hours behind Vladivostok time. - Mongolia once used to have 3 time zones; now it uses one time zone, UTC/GMT +08:00. - China observes one time zone, UTC/GMT +08:00, which makes this time zone uncommonly wide. In the extreme western part of China the sun is at its highest point at 15:00, in the extreme eastern part at 11:00. - "Daylight Saving Time" (DST) is the name commonly used in North America. Some regions (Europe, South America) more commonly use the name "Summer Time". This could create some confusion in the meaning of some time zone abbreviations, as ST could stand for "Summer Time" +1 hour (Europe, South America) and for "Standard Time" (North America). - Brazil sets its Summer time by decree every year. Some states / counties observe Summer time on a year to year basis. - The state of Arizona does not observe DST. However, the Navajo Reservation (see USA map) does change to Daylight time. The Hopi Reservation is within the Navajo Reservation and does not observe DST (like the rest of the state). - Some countries use different rules to start and end DST. For example, a law in Israel requires that summer must last at least 150 days. - Greenwich time (the Greenwich Observatory is located in London) is the same as London time during winter; however, London is 1 hour ahead of GMT during summer time. - The military of some nations refer to time zones as letters, for example: Z (Zulu) = Zero Meridian (UTC or GMT). Letters A to M move eastwards and N to Y move westwards. The letter J (Juliet) is skipped and refers to the current local time of the observer. - The Antarctic Amundsen-Scott South Pole Station uses New Zealand's time zone: +12:00 hours during local winter and +13:00 hours during local summer (which is winter in the northern hemisphere). - The International Space Station uses UTC/GMT.
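The New York-Amsterdam claim above is easy to verify with Python's standard zoneinfo database; the two dates below are just arbitrary sample days from 2018 chosen to fall on either side of Europe's DST switch.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

def amsterdam_ahead_of_new_york(date_str):
    """How many hours Amsterdam is ahead of New York at noon (New York time)."""
    ny = datetime.fromisoformat(date_str + "T12:00").replace(tzinfo=ZoneInfo("America/New_York"))
    ams = ny.astimezone(ZoneInfo("Europe/Amsterdam"))
    return (ams.utcoffset() - ny.utcoffset()).total_seconds() / 3600

print(amsterdam_ahead_of_new_york("2018-03-20"))  # 5.0 (US already on DST, Europe not yet)
print(amsterdam_ahead_of_new_york("2018-06-20"))  # 6.0 (both on summer time)
```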
In each module of this track, youth will learn about a different aspect of robotics and design and build a robot using what they have learned. This track emphasizes developing knowledge and skills as participating youth design and build their own robots. Youth will use their Robotics Notebook to record their learning experiences, robotic designs and the data from their investigations. The Junk Drawer Robotics curriculum has three books. In Book 1, Give Robots a Hand, you will explore the design and function of robotic arms, hands and grippers and build a robotic arm that really moves! In Book 2, Robots on the Move, you will design and build machines that roll, slide, draw or move underwater and explore robot mobility - movement, power transfer and locomotion. Book 3, Mechatronics, is all about the connection between the mechanical and electronic elements of robots; you will explore sensors, write programs, build circuits and design your own robot. Try the Sample Activity The Clipmobile Challenge is a sample activity taken from Module 1: Get Things Rolling of Book 2, Robots on the Move. In this activity, you learn about friction, about the engineering design process and how understanding friction and design can help you build more efficient machines. Like all the modules in Junk Drawer Robotics, there are three types of activities: (1) To Learn activities where you learn about how things work - the science behind robotics, (2) a To Do activity where you learn about engineering and design your own solution to the Clipmobile Challenge, and (3) a To Make activity where you apply technology as you actually build and test your design. Junk Drawer Robotics Facilitator Resources These resources will help you implement Junk Drawer Robotics and make science, engineering and technology engaging and meaningful in the lives of young people. The activities in Junk Drawer Robotics encourage youth to use the processes and approaches of science; the planning and conceptual design of engineering; and the application of technology in each module. Your role as a facilitator is to assist learners in developing their own knowledge and problem-solving skills. This is done by bringing together a scientific inquiry and engineering design approach to learning. You assist youth in developing understanding by asking questions and prompting them to share and talk about their ideas, designs and results. You may also work with a team of teen presenters who enliven the activities they lead with their own experience and enthusiasm. In that case, you will mentor or coach the teen presenters by providing support, training and guidance as they lead the activities. Facilitating Junk Drawer Robotics Resources to help you get started leading activities or working with teen leaders Junk Drawer Robotics Toolbox Resources and safety information about tools and materials Junk Drawer Robotics Activity Supplies Resources to help with ordering or creating your own supplies for activities
Why Does Black Absorb Heat? Black as a color absorbs heat because of specific properties of the color and of the light. When light shines on an object, the object's color either absorbs or reflects the light. A red object, for example, absorbs every color except for red, which reflects back to the eyes. The color of the object depends on the light radiated back to the eyes. If that same red object, like a red apple, were to be illuminated by a light source that had no red wavelength, it would appear almost black, because there would be no wavelength of light to reflect back to the eyes. White light actually has all of the wavelengths in it, which causes the multitude of colors in life. When white light hits a black object, the object absorbs all of the wavelengths, and none are reflected back, which is why the object appears black in the first place. Now you can think of light as energy in almost all situations: for example, all of the energy that you get comes from the ability of plants to absorb that light energy and store it in sugars. When you eat meat, the animal that you are eating had to get its energy from plants (at the very bottom of the food chain) as well. So when black absorbs the light, it is also absorbing energy. The object then radiates the energy by emitting it at a longer wavelength that is invisible to the eye, but is still energy. It is emitted at the infrared level, which is heat. The key to understanding the transformation of light into heat is that it conserves all energy, one of the laws of thermodynamics, the study of energy conversion between heat and mechanical work. No heat is lost; it is just transformed into a new form, a new wavelength.
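The idea that a black surface keeps nearly all of the light energy while a white one rejects most of it can be put into rough numbers. The reflectance figures below are illustrative values, not measurements from this article, and transmission through the surface is ignored (the surfaces are assumed opaque).

```python
def absorbed_power(incident_w_per_m2, reflectance):
    """For an opaque surface, whatever is not reflected is absorbed (and later re-emitted as heat)."""
    return incident_w_per_m2 * (1.0 - reflectance)

sunlight = 1000.0  # W/m^2, a typical clear-sky figure at ground level
print(absorbed_power(sunlight, reflectance=0.05))  # matte black surface: ~950 W/m^2 absorbed
print(absorbed_power(sunlight, reflectance=0.80))  # white paint: ~200 W/m^2 absorbed
```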
Integrated arts practice refers to inter-disciplinary art, art research, development, production, presentation, or artistic creation of work that fully uses two or more art disciplines to create a work for a specific audience. The Integrated Arts experience is defined as an activity which induces in the learner an emotional or kinesthetic state which is in some way analogous to the concept being taught, and which occurs through heightened sensory awareness on the part of the learner. The term integrated is used to denote the integration of the arts experience into the learning process. The Integrated Arts experience provides opportunities for the following processes to occur: metaphorical modes of thinking and the use of alternative forms of communication. In many schools integrated arts has been implemented and is widely used to teach subjects in ways that retain the attention of students. The rationale for implementing the integration of arts in the education system has been discussed by many art educators and others concerned with the completeness of general education. Approaches include taking the students to a museum, telling them about the paintings and the background of each painting, and then asking the students to write an essay in English about their visit to the museum. Another example of teaching students with the help of the arts is having them first read a chapter of history and then asking them to create a summary of the chapter through painting. The teaching of history can also be done by the role-play method, where the students enact the characters in class. Another method, which can be used to teach students of first grade or below about animals, is to play them the sounds of different animals and ask them to recognize each one. Integration of arts in a classroom helps students to connect with the world they live in. The fundamental principles for integrating arts can be thought of as follows: there is a similarity across arts and other subjects; incorporating arts into other subjects helps in accelerating and facilitating the learning process; art promotes creativity; and integrated arts programs are more economical than separate instruction in each area. The claim for this type of curriculum is that the students who were taught by the integration-of-arts method had more knowledge and understanding of the inter-relationships among the arts and also developed an appreciation for the arts. It is also claimed that integrated art activities help the child to connect his knowledge with the world outside and help him in transferring his learning about the arts to many of his other subjects. Integrated arts often also refers to hybrid art forms in which new practices are invented and/or combined. Integrated arts practice is related to new media art, computer-based art, and web-based art. While new media is more computer-centric, integrated media (integrated arts) often involves computers plus some other discipline. An example of integrated art that involves new media might be a musical performance done on a computerized interactive multimedia sculpture. In public media, integrated media and media meshing also refer to the use of multiple orthogonal and perhaps interactive forms, such as news releases, websites, polls, wikis, blogs or forum sources, rather than just a single broadcast mode.
Have you ever heard of force? Force is a push or a pull on an object, which causes objects to move. Force is a vector because it has both size and direction. Force can be weak or strong. The SI unit for force is the newton. One object can have more than one force acting on it at the same time. In fact, all objects on Earth have at least two forces acting on them at all times. Those forces are gravity and an upward force. The combined forces on any object are called the net force. When two forces act on an object in opposite directions and the opposing forces are unbalanced, the net force will be greater than zero. If the opposing forces are balanced, the net force will be zero. This means that if one side has a greater force, the object will move toward that side; two forces can also act on an object in the same direction. Throughout the lesson, I felt like I understood a lot, and it can also be applied in real life to what I'm doing daily. I think it's one of the most interesting lessons to learn in my round 5. Sometimes we meet this kind of action many times but never realize that it all connects to lessons of science. I would really love to focus and pay more attention to the lessons that I'm going to learn next year. For this round, we have learned many interesting topics such as solutions, motion and force. One of my favorite topics is solutions, which I'm going to talk about. The simple meaning of solution is a means of solving a problem or dealing with a difficult situation. In chemistry, a solution means one substance dissolved in another. For example, rock dissolved in water forms a solution. The substance that dissolves in another is called the solute. The substance that the solute dissolves in is called the solvent. For example, ocean water is a solution in which the solute is salt dissolved in the solvent, water. In this example a solid dissolves in a liquid. However, matter in any state can be a solute or a solvent. Whenever a solute dissolves in a solvent it will change to the same state as the solvent. If the solute and solvent are already in the same state, the one present in greater quantity is considered to be the solvent. In a solution there are three factors that affect the rate of dissolving. First is stirring. Stirring is one of several factors that affect how fast the solute dissolves in the solvent. Second is temperature. A solid solute dissolves faster at a higher temperature. For example, sugar dissolves faster in hot coffee than in iced coffee. The third one is the surface area of the solute. In this round, I feel like I've learned a lot from the lesson and I feel more confident in asking questions to my facilitator. I feel like I've improved a lot in learning. I hope to learn more interesting topics in STEM. It's quite challenging for me but I think I can handle it. Carbon is one of the main topics that we focused on in STEM class for this round. Carbon is a nonmetal element. Carbon has four valence electrons, so it needs four more electrons in order to complete its outer energy level. In order to achieve this it forms covalent bonds, which form between nonmetals when they share a pair of electrons. Usually, carbon forms bonds with hydrogen. There are three types of bonds between carbon and other carbon atoms. Those bonds are single, double and triple bonds. A single bond is when they share one pair of electrons. A double bond is when they share two pairs of electrons and a triple bond is when they share three pairs of electrons. Compounds with only single bonds are called Alkanes. Compounds with a double bond are called Alkenes and compounds with a triple bond are called Alkynes.
We also learned about hydrocarbons. Hydrocarbons are compounds that contain only carbon and hydrogen. There are two types of hydrocarbons, which are saturated hydrocarbons and unsaturated hydrocarbons. Saturated hydrocarbons contain only single bonds, which is the alkane type. The name of an alkane always ends with "ane". The first part of the name indicates how many carbon atoms each has. The smallest is called Methane, which has only one carbon atom. The next are Ethane, Propane, Butane, Pentane, Hexane, Heptane, Octane, Nonane and Decane, with two to ten carbon atoms. Their molecular formulas follow the pattern CnH2n+2; for example, methane is CH4, ethane is C2H6 and decane is C10H22. Unsaturated hydrocarbons can have double or triple bonds, which are the Alkene or Alkyne types. The name of an Alkene always ends with "ene" and the name of an Alkyne always ends with "yne". The first part of the name also indicates how many carbon atoms each has. Overall, I feel it has been a great round for me. I think I have understood a lot more about STEM. Also, I feel more confident in asking questions and have increased my passion for STEM. I hope to see more challenge and fun in the next round. In STEM class I've learned a lot, but for this round we mainly focused on the periodic table and chemical bonds. We still use the same website, which is CK-12. The first person to create a periodic table was Mendeleev. There are 118 elements in the periodic table. At that time Mendeleev arranged the elements by their atomic mass. Atomic mass is the mass of the protons and neutrons in the atom. Nowadays people use a modern periodic table in which the elements are arranged by atomic number. Atomic number is the number of protons in an atom of an element. There is also a chemical symbol to represent each element, which consists of one or two letters that come from the chemical's name in English or other languages. For example, the symbol of lead is Pb, which comes from the Latin word plumbum. The elements are also divided into three types, which are metals (blue), metalloids or semimetals (orange), and nonmetals (green). The rows of the periodic table are called periods and the columns of the periodic table are called groups. Some periods of the modern periodic table are longer than others: you can see the first has only two elements, while periods 6 and 7, in contrast, are so long that some of their elements are placed below the main part of the table. As you can see, most of the elements are metals. Metals are the elements that are good conductors of electricity and have a relatively high melting point, so almost all are solid at room temperature. They are the largest of the classes in the modern periodic table. Most of the metals are also good conductors of heat. Nonmetals are the elements that do not conduct electricity and are poor conductors of heat; they are the second largest class of the elements. Nonmetals generally have properties that are the opposite of those of metals. Metalloids are the elements that are in between metals and nonmetals. A metalloid has some properties of metals and some properties of nonmetals. For example, many metalloids can conduct electricity but only at a certain temperature, and these metalloids are called semiconductors. There are also some groups of elements, which are the Alkali Metals, Alkaline Earth Metals, Halogens and Noble Gases. Alkali Metals have just one valence electron and are highly reactive. Alkali Metals are the most reactive of all the metals. Most Alkali Metals are soft enough to cut with a knife. Alkali Metals also have low density. Alkaline Earth Metals are the elements that are in group two. They have two valence electrons. They are also very reactive, but not like the alkali metals.
They are harder and denser than the alkali metals. Groups 3-12 are called Transition Metals. Transition Metals have more valence electrons and are less reactive than the metals in the first two groups. They are shiny and very hard, with high melting points and boiling points. Halogens are all the elements that are in group 17. They are highly reactive nonmetals with 7 valence electrons. The Halogen group includes gases, a liquid and solids, and they react violently with alkali metals. The last group is the Noble Gases. The nonmetal elements that are in group 18 are called the Noble Gas group. They are the least reactive group because their outer energy level is full, and they are colorless, odorless gases. In STEM I've learned a lot of things such as chemistry, energy, weather, matter, astronomy, etc. For these past few weeks we mainly focused on matter and also dug a little into chemistry. The website that our facilitator uses to teach and read from is called CK-12. This website is really helpful to me because it explains things in great detail and it uses a lot of examples to explain to us. I can learn it and understand more about it by myself, without the facilitator, just by reading. I've learned many things about matter. Matter is defined as anything that has mass and volume. There are two kinds of properties of matter, which are physical properties and chemical properties. Chemical properties are the properties of matter that can be measured or observed only when the matter changes form and becomes something else. Physical properties are the properties that can be measured and observed without the matter changing into a different substance. One interesting thing about mass and weight is that most people make the mistake of mixing up those words. Mass is the amount of matter in a substance or object and its unit is the kilogram, but weight is a measure of the force of gravity pulling on an object and its unit is known as the newton. So you should say your mass is 62 kg, not your weight is 62 kg. Don't be confused!!! One more thing that I've learned about, which is related to STEM, is atoms. An atom is the smallest particle of an element that still has the element's properties. So in matter there are elements, in an element there are atoms, in an atom there are a nucleus and electrons, and in the nucleus there are protons and neutrons. You might wonder what protons and neutrons are made from. Well, protons and neutrons are made from quarks and the electron is a lepton. That's most of the things that I've learned so far in STEM class. Hope you enjoy!!
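The group facts from the reflections above can be collected into a tiny lookup table. This is only an illustrative summary of the families mentioned, not a full periodic table, and helium's 2-electron exception is noted in a comment.

```python
# Valence electrons for the element families discussed above (illustrative summary only).
valence_electrons = {
    "alkali metals (group 1)": 1,          # soft, low density, most reactive metals
    "alkaline earth metals (group 2)": 2,  # reactive, but less so than group 1
    "halogens (group 17)": 7,              # highly reactive nonmetals
    "noble gases (group 18)": 8,           # full outer shell; helium is the exception with 2
}

for family, electrons in valence_electrons.items():
    print(f"{family}: {electrons} valence electron(s)")
```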
The electrostatic force between a hydrogen atom bonded to a strongly electronegative atom and the lone pair electrons of another strongly electronegative atom is called hydrogen bonding. Hydrogen bonds are not really chemical bonds in the formal sense. They are weaker than covalent bonds. However, hydrogen bonds are stronger than dipole-dipole interactions, which are stronger than London dispersion forces. Explanation with examples: Hydrogen bonding in water Hydrogen bonding occurs among polar covalent molecules containing H and one of the small-sized, strongly electronegative elements such as N, O or F. Hydrogen bonding occurs in NH3, H2O, and HF etc. The boiling point and heat of vaporization of H2O are higher than those of H2S. This is because H2O molecules attract each other through hydrogen bonding, whereas H2S molecules attract each other through dipole-dipole interactions, which are a weaker attractive force than hydrogen bonding. Hydrogen Bonding between Chloroform & Acetone Hydrogen bonding is not limited to the above-mentioned electronegative atoms. The three chlorine atoms in chloroform are responsible for H-bonding with other molecules. These atoms deprive the carbon atom of its electrons, and the partially positively charged hydrogen can form a strong hydrogen bond between chloroform and acetone. Hydrogen Bonding in Ammonia Hydrogen Bonding in Hydrofluoric acid (HF is a weaker acid than HCl & HBr) The exceptionally low acidic strength of the HF molecule as compared to HCl, HBr and HI is due to this strong hydrogen bonding, because the partially positively charged hydrogen is entrapped between two highly electronegative atoms. So, hydrogen bonding is the electrostatic force of attraction between a partially positive hydrogen atom and a highly electronegative atom like F, O or N belonging to another molecule. The stronger hydrogen bonding between H and F hardly allows the hydrogen to become a proton, as compared to HCl or HBr. Importance of Hydrogen Bonding: - Thermodynamic properties of covalent hydrides. - Solubility of hydrogen-bonded molecules. - Structure of ice. - Cleansing action of soaps and detergents. - Hydrogen bonding in biological compounds and food materials. - Hydrogen bonding in paints, dyes and textile materials. Hydrogen bonding in DNA and RNA Application of Hydrogen Bonding in DNA and RNA (Biological Compounds) The molecules of living systems contain hydrogen bonds. Proteins are an important part of living organisms. Hair, silk and muscles consist of long chains of amino acids. These long chains are coiled about one another into a spiral. This spiral is called a helix. Such a helix may be either right-handed or left-handed. In the case of a right-handed helix, groups like >NH and >C=O are vertically adjacent to one another and they are linked together by hydrogen bonds. These H-bonds link one spiral to the other. X-ray analysis has shown that on average there are 27 amino acid units for each turn of the helix. Deoxyribonucleic acid (DNA) has two spiral chains. These are coiled about each other on a common axis. In this way, they give a double helix. This is 18-20 Å in diameter. The chains are linked together by H-bonding between their sub-units. Hydrogen Bonding in Paints and Dyes In paints and dyes, adhesive action is developed due to hydrogen bonding. A similar type of hydrogen bonding makes glue and honey sticky substances. Hydrogen Bonding in Clothing & Food materials We use cotton, silk or synthetic fibers for clothing.
Hydrogen bonding is of vital importance in these thread-making materials: it is responsible for their rigidity and tensile strength. Food materials such as carbohydrates, including glucose, fructose and sucrose, all have -OH groups responsible for hydrogen bonding.

Structure of ice
The electronic structure of water is tetrahedral. Two lone pairs of electrons on the oxygen atom occupy two corners of the tetrahedron. In the liquid state, water molecules are extensively associated with each other, and these associations continually break and re-form because the molecules of the liquid are mobile.
Also known as tipis, Native American teepees were handsome, conical dwellings that kept people snug and warm during cold winters, while providing shade during the heat of summer. Teepees were used primarily by hunting and gathering tribes of the Great Plains, seasonally in the eastern plains and all year by the semi-nomadic tribes of the western plains. Teepees provided great mobility for hunting buffalo and elk, because they could be dismantled and reconstructed very quickly. Evolution of Teepees in America Early day teepees were small, consisting of only three poles with an average diameter of 10 to 14 feet. The size was limited because the teepees were transported by backpack or by dogs, which couldn't handle the heavy weight of larger structures. Later teepees stood 18 to 20 feet high and weighed more than 500 pounds. Eventually, horses arrived on the American plains, and soon became a symbol of great wealth and prestige. Horses allowed Native Americans to transport larger teepees. The greater mobility increased the average daily travel distance from about five miles to 10 to 15 miles. Teepees consisted of a framework of 15 to 20 poles made of straight, dry trees that were available in the area. Although lodgepole pine was favored, poles were also made from cedar, yellow pine or tamarack. The poles were smoothed, then three or four of the strongest were lashed at the top to form a tripod-like frame. The remaining poles were attached to the frame for shape and support, with the two lightest poles serving as a base for the smoke flap, which provided ventilation and allowed smoke from a fire to escape. The structure was then covered, and the bottom of the covering was secured to the ground with rocks or blocks of sod. Teepee coverings, usually made by women, were determined by what was available. Materials included grass, bark or tanned hides from caribou, elk or buffalo. Buffalo hides were often used because the hides were warm and dry, but their translucency allowed light to enter. The hides were sewn together patchwork style, using sinews and bone needles. Teepees were lined with tanned animal hides during the winter, and the space between the liner and the outer covering was stuffed with dry grass for extra warmth. Additionally, straw or grass was piled around the exterior base of the teepee to keep cold air from entering at the bottom. Hides, which became stained and brittle from constant use, were replaced every two to four years. Cloth became available in the 1840s, and teepees were often made from heavy canvas. Modern teepees are often covered with synthetic materials. Traditionally, Plains Indians tribes located teepees with the door facing the rising sun in the east. The oldest male slept opposite the door on the west side of the teepee. Otherwise, people slept around the fire pit in the center, with the women on the south side of the teepee and men on the north. Symbols and Decorations Some Native American tribes, including the Sioux, decorated teepees with circular shapes to represent the universe, the horizon and the cycles of the seasons. Teepees decorated with circles were arranged in rings around a special, ceremonial teepee in the center. Other tribes, including the Blackfeet, Kiowa and Cheyenne, were known for elaborate paintings that represented scenes from hunting or battles, or images from vision quests. Designs were outlined with charcoal, then filled in with paint made from vegetables or minerals and a sticky substance made from buffalo hooves. 
Originally posted on DrexelNow. Army ants, the nomadic swarming predators underfoot in the jungle, can take down a colony of prey animals without breaking a sweat. But certain army ant species can’t take the heat. According to a new study from Drexel University, underground species of army ants are much less tolerant of high temperatures than their aboveground relatives—and that difference in thermal tolerance could mean that many climate change models lack a key element of how animal physiology could affect responses to changing environments. At face value, this is not surprising, noted Sean O’Donnell, PhD, a professor in Drexel’s College of Arts and Sciences and senior author of the study published in the Journal of Animal Ecology. Ants that live above ground are exposed to higher temperatures than subterranean ants, so they should be expected to tolerate hotter conditions—but the relationship between microhabitat and heat tolerance simply hadn’t been tested before. No one knew whether the subterranean ant species might have been capable of handling high temperatures, even if they prefer cooler ones. Current models of climate change are built on predictions that animal species may shift their geographic ranges to new latitudes or elevations when the temperature rises. But these models typically use temperature averages taken at a macro resolution—commonly measured 1 meter above ground, and encompassing areas of a square kilometer or more—and they don’t generally take into account that different species have varied levels of heat tolerance. “A few inches of soil can make a big difference in temperature,” said Kaitlin Baudier, a doctoral student in O’Donnell’s lab and lead author of the study. To look at whether factors like microhabitat preference—living above ground vs. below ground—affected an animal’s thermal range, Baudier, O’Donnell and colleagues looked at nine closely related species of army ants that lived in the same general area in Costa Rica’s tropical forests. They sampled ants from each of the species and experimentally tested their maximum heat tolerance in the lab. They found that the best predictor of heat tolerance was how active the ant species was in above-ground environments. Above-ground army ants were most tolerant of higher temperatures, while species that lived mostly below-ground had much lower tolerance to heat, and species that used a combination of above- and below-ground environments had intermediate levels of tolerance. Body size and habitat type also interacted, so that in comparisons of same-size individuals from different species, the species that are active above-ground were more heat tolerant—another indicator that habitat use signals the species’ tolerance to heat. Within each species, smaller ants were more sensitive to rising temperatures than the larger ants of the same type. “The takeaway message is that an animal’s adaptation to its microhabitat is relevant to its thermal physiology,” said Baudier. “This shows us that the ways these species respond to a changing climate will be different depending on habitat type, and it’s important to know that microhabitat could be an indicator of heat tolerance,” O’Donnell said.
Rubella is a usually mild, contagious viral disease characterized by fever, mild upper respiratory congestion, and a fine red rash lasting a few days; if contracted by a woman during early pregnancy, it may cause serious damage to the fetus. Rubella is also called German measles or three-day measles. Rubella infection in the first trimester of pregnancy can lead to fetal death, premature delivery, and serious birth defects. Rubella causes a skin rash and joint pain; the infection is mild for most people, but it can cause death or birth defects in an unborn baby. The rubella vaccine is available in combined vaccines that also protect against other serious and potentially fatal diseases. A rubella blood test detects antibodies that are made by the immune system to help kill the rubella virus. These antibodies remain in the bloodstream for years. The presence of certain antibodies can indicate a recent infection, a past infection, or that you have been vaccinated against the disease. Rubella is caused by a virus that spreads from person to person when an infected person coughs or sneezes. It is also spread by direct contact with the nose or throat secretions of an infected person. No treatment will shorten the course of a rubella infection, and symptoms are often so mild that treatment usually isn't necessary. However, doctors often recommend that infected people stay isolated from others, especially from pregnant women, during the infectious period.
The Romans first set foot on British soil in 55 BC. The Roman army had first encountered the Britons in Gaul (modern France), where the Britons were helping the Gauls fight off the legions. Julius Caesar, who led the Roman army in Gaul, decided that the time was right to teach the Britons a lesson and invade Britain. Caesar landed about six miles from Dover with the intention of taking the town. However, the large number of Britons who had gathered to prevent an invasion drove the Roman army back into the sea. Eventually the Romans did fight off the Britons, but Caesar concluded that the country was unlikely to submit to his army and returned to Rome.

While Julius Caesar did not stay to conquer Britain, his visit to the isles did mark the start of a trading relationship between the two peoples. Roman traders began to travel regularly to Britain, learning more and more about the country, and as a result it soon came onto the Roman radar once more. Having re-established an interest in Britain, the Romans returned to invade the country in 43 AD under the reign of Claudius, with the intention of fully conquering it. A Roman army of 40,000 infantry and cavalry was sent over, quickly establishing authority over the locals. While some decided to resist the invaders, many recognised their strength and attempted to make peace. However, there were still many clashes between the British population and the Roman army over the following years, and Britain eventually submitted, becoming the Roman province of Britannia. By the year 84 AD almost the whole of Britain was ruled by a governor appointed by the emperor.

Although the Romans were less than impressed with the Britons themselves, describing them as "bandy-legged" (Strabo) and "savages" (Tacitus), they did note how impressed they were with the skills many Britons held. Julius Caesar wrote of the British chariot riders: "They combine the easy movement of cavalry with the staying power of foot soldiers. Regular practice makes them so skilful that they can control their horses at a full gallop, even on a steep slope. And they can stop and turn them in a moment. The warriors can then run along the chariot pole, stand on the yoke and get back into the chariot as quick as lightning."

See also: Roman Education
"The Romans in Britain". HistoryLearning.com. 2019. Web.
4D printing a shape memory polymer

Additive manufacturing makes it possible to use a single material to create complex shapes and "impossible" geometries. While designers and scientists focus on optimizing the 3D printing process in terms of design and manufacturing with a given range of rigid materials (such as 3D printing with plastics, resins or metals), researchers at MIT and the Singapore University of Technology and Design are looking in another direction. Shape memory polymers have little in common with traditional 3D printing materials: these materials are programmed to change shape over time. Along with the constant growth of 3D printing, we therefore foresee the rise of a new technology: 4D printing. Let's have a look at this new trend in the 3D printing field and see how a shape memory polymer could be a true revolution for the additive manufacturing industry.

What are shape memory polymers?
According to the researchers' report on the topic, a shape memory polymer (SMP) is a kind of material that can show large elastic deformation in response to environmental stimuli. In other words, it has the ability to change shape in various ways and then come back to its original form when external energy is applied to it. To trigger this change, this emerging kind of 3D printing material must be exposed to heat, light, electricity, moisture, or an environment with a specific pH. Experiments on shape memory polymers are conducted using a high-resolution projection microstereolithography (PμSL) technique, described in the following paragraphs. The shape memory polymers are constructed by combining mono-functional monomer resins as linear chain builders and multi-functional oligomer resins as crosslinkers. The 3D printer used for processing the two types of resins and converting them into shape memory polymers is a commercial Polyjet 3D printer.

How shape memory polymers connect the 3D printing and 4D printing technologies
You may wonder: since shape memory polymers are 3D printing materials, why do we talk about 4D printing? What are the differences between these two technologies, and how are they connected? As we explained in the introduction of this blog post, 3D printing is about creating solid, rigid parts, or in other words, objects that do not change shape when exposed to external energy. On the other hand, as our recent article explains, 4D printing deals with pre-programmed parts of the printed structure that react when exposed to a stimulus. So basically, 4D printing is 3D printing that changes over time. You can differentiate the two technologies by the material they use as input. In the case of 3D printing, the material could be any of the 3D printing materials that exist on the market today. In the case of 4D printing, though, a "smart material" is needed to make the printing process work successfully. "Smart materials" include hydrogels and shape memory polymers. In one of our previous articles about hydrogels, we explained how they work: they swell when solvent molecules diffuse into the polymer network. A shape memory polymer, on the other hand, is capable of transforming itself into different shapes and shows shape memory behavior that can be controlled and programmed. In other words, a shape memory polymer is the kind of material needed to make a 4D printing procedure work successfully.

How is a shape memory polymer fabricated?
The following figure represents the way a multimaterial shape memory polymer is produced using projection microstereolithography (PμSL). Because the part is fabricated with 3D printing technology, a 3D file is needed. The 3D model of the object is first sliced into horizontal layers with 3D modeling software, a step that applies to all 3D printing techniques. Next, these sliced images are transferred to a digital micro-display, which works as a dynamic photo mask. An LED then projects UV light to form the pattern corresponding to each sliced image and illuminates the surface of the photocurable polymer solution. When the material has solidified to form the corresponding layer, the next sliced image is projected on top of the previous one. The procedure is repeated layer by layer until the whole structure is formed. The 4D printing process is considered a multi-material process because it allows the exchange between monomer and oligomer resins to create the final shape memory polymer. The material containers in the figure correspond to each type of resin. The shape memory polymers are printed using commercial Polyjet 3D printers, because these can create materials with properties ranging between rigid and elastomeric by mixing the two base resins that the polymers are made of.

How do shape memory polymers work?
Shape memory polymers have the ability to return to their initial shape in a short period of time after being exposed to certain conditions, such as heat or water. The key attribute that allows the polymers to change shape is the thermomechanical behavior of the resins they are made of. Thanks to the polymer resin preparations and the way the two kinds of resins react chemically during the 3D printing process, the polymers can change shape. Moreover, to activate the shape-changing effect successfully, precisely prescribed shape memory polymer fibers are placed during the printing procedure. In the following picture, you can see how a 3D printed shape memory polymer changes form, step by step. It represents a simple structure that can be used as a gripper: it can grab and release objects. On the right side of the picture we can see different temporary forms that the gripper can take during its transformation. We can also see its initial and final positions, which correspond to its as-printed form and its form after being heated. In the following picture we see the shape memory polymer in snapshots during the process of grabbing a screw.

Limitations in the use of shape memory polymers by the 3D printing industry
Even if the commercial use of shape-changing materials is very limited so far in the 3D printing industry, it has big potential for future use. The most limiting factor is that of the commercially available manufacturing techniques. Up to now, shape memory polymers have been explored only at an experimental level. This means that it remains unknown how the printed parts perform in the long term. We do not know how efficient they remain after being put through many transformations, or up to what point they stay fully functional. In addition, it is challenging to produce materials that exhibit the desired thermomechanical behavior at large scale and in any kind of design complexity. It will take some time for scientists to experiment with printing active materials and to make this procedure available for commercial use.
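To make the sequence of steps easier to follow, here is a minimal, purely illustrative sketch of the layer-by-layer loop described above. It is not the researchers' actual control software; the function names, the two-second exposure time, and the slice file names are hypothetical placeholders for the slicing, masking, curing and resin-exchange steps mentioned in the text.

```python
# Conceptual sketch of the layer-by-layer PμSL workflow described above.
# The functions swap_resin, display_mask and cure_layer are hypothetical
# stand-ins: show each slice on the digital micro-display as a photo mask,
# cure that layer with UV light, and repeat, exchanging monomer/oligomer
# resins to build a multi-material part.

from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    index: int
    mask_image: str   # stands in for the 2D slice used as the dynamic photo mask
    resin: str        # "monomer" or "oligomer"

def swap_resin(resin: str) -> None:
    print(f"  exchanging vat to {resin} resin")

def display_mask(mask_image: str) -> None:
    print(f"  micro-display shows mask {mask_image}")

def cure_layer(uv_exposure_s: float) -> None:
    print(f"  UV LED cures layer for {uv_exposure_s:.1f} s")  # exposure time is an assumed value

def print_smp(layers: List[Layer]) -> None:
    """Run the PμSL loop: project each mask, cure it, then move to the next layer."""
    current_resin = None
    for layer in layers:
        print(f"layer {layer.index}:")
        if layer.resin != current_resin:
            swap_resin(layer.resin)       # resin exchange enables the multi-material process
            current_resin = layer.resin
        display_mask(layer.mask_image)    # the display acts as a dynamic photo mask
        cure_layer(uv_exposure_s=2.0)
    print("structure complete")

if __name__ == "__main__":
    demo = [Layer(0, "slice_000.png", "oligomer"),
            Layer(1, "slice_001.png", "oligomer"),
            Layer(2, "slice_002.png", "monomer")]
    print_smp(demo)
```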
Which are the possible applications of shape memory polymers?
Many fields will be positively impacted by the use of shape memory polymers. The biggest impact is expected in the health industry. Additive manufacturing is already widely used in the medical sector. Every kind of 3D printing material is used for various purposes, from 3D printed bones to bioprinting with innovative materials. Considering the constantly growing need for new 3D printing materials and technologies in the medical sector, the use of shape memory polymers is expected to have a great impact on people who depend on medicine and technology. A possible application would be the creation of devices made out of shape memory polymers. These could be drug-delivery devices that are inserted into the body and programmed to respond to the heat changes that the body exhibits. For example, they could release medicine or antibiotics when they detect a fever or some other change in body temperature.

Another field where shape memory polymers could find valuable applications is the energy industry. A possible use of shape memory materials would be in solar panels that work as sensors, detecting the sun and automatically rotating toward the right direction. By combining materials science and robotics, it will be possible to build smart, shape-changing solar panels that automatically adjust their inclination for maximum energy efficiency.

One thing is certain: whether integrated into electronics, health care or the energy sector, shape memory polymers are expected to be quite disruptive for the engineering field. Nowadays it is a huge trend to experiment with different and innovative 3D printing materials, a fact that leads us to the conclusion that the rise of 4D printing is just around the corner. Yet scientists have a long way to go before this new technology is industrialized. Until then, you can enjoy all the benefits that 3D printing technology offers and start with the creation of your own project. All you have to do is upload your file here!

Photo Credits: SciNews YouTube
Polyps are growths in the colon or rectum. They protrude into the lining of the intestine and can be flat or have a stalk. Polyps are one of the most common conditions affecting the colon and rectum. They occur in about 15% to 20% of the adult population and equally in both men and women. The cause of most colon polyps is not known. Although most polyps are benign (harmless or non-cancerous), as they continue to grow they can develop into cancer. One type of polyp called adenomatous is more likely than others to develop into cancer over time. Early diagnosis of adenomatous polyps may help prevent cancer or identify cancer at a stage when it might be treated more successfully. Research has shown that the best way to prevent colon cancer is early detection and removal of polyps. Almost all colon cancers develop from polyps, but they grow slowly, often over a period of years. Most polyps do not cause symptoms. But large polyps can cause symptoms such as rectal bleeding or a change in bowel habits. Polyps are diagnosed either by looking at the colon lining directly (colonoscopy) or by x-ray study (barium enema). Polyps can be removed or samples of tissue (biopsies) can be taken during a colonoscopy procedure. You will not experience any pain or sensation as the polyp is removed and you can usually resume normal activity the next day.
The Medicine/Physiology and Physics prizes went to the popular topics of body clocks and gravitational waves. The subject of the Chemistry prize is perhaps a little more obscure, yet very important for those interested in the molecules of life. The three chemistry prize winners, Jacques Dubochet, Joachim Frank and Richard Henderson, all work in the field of cryo-electron microscopy, which enables large biological molecules to be observed in solution in water.

Looking at molecules
Even the largest protein molecules are far too small to be seen under microscopes that use ordinary visible light. One way to obtain images of these molecules is to use very short wavelength X-rays in the method known as X-ray crystallography. Many previous Nobel prizes have been awarded for developments in and the use of this technique, not least the 1962 award for the discovery of the structure of DNA by Crick, Watson, Wilkins and, of course, Rosalind Franklin, who actually made the images. X-ray crystallography, as its name suggests, requires crystals of the material being studied. Crystals contain lots of the molecules in a fixed, regular arrangement. Not all biological molecules form crystals easily. Indeed, it was Franklin's exceptional skill in making DNA crystals that allowed the images to be obtained at all. In life, that is, in cells, biological molecules float around in solution in water, constantly moving and rotating. X-ray crystallography cannot be used to take pictures in those circumstances.

Another way of looking at tiny objects is electron microscopy. A beam of electrons is bounced off or passed through the object and collected by a sensor, in the same way that a camera collects light reflected from an object. However, electrons have an effective wavelength much smaller than visible light and so can produce images of objects less than a micrometre in length. Ernst Ruska, a German, was belatedly awarded the Nobel Prize for Physics in 1986 for developing the first electron microscope in 1933. Electron microscopes of various types have since found uses in many scientific disciplines. They could not be used for looking at molecules in solution, however. They require a total vacuum, so the water would evaporate. Also, the energy of the electrons destroys fragile proteins and other biological molecules. Those disadvantages did not stop this year's Nobel Prize winners.

Richard Henderson was born in Edinburgh, Scotland just after the end of the Second World War in 1945. He studied Physics at Edinburgh University but, after obtaining his degree, went to the Laboratory of Molecular Biology at Cambridge University to work on X-ray crystallography. Having completed his doctorate and a few years of research at Yale in the USA, he returned to Cambridge in 1973. He was joined by Nigel Unwin to investigate the structure of particular protein molecules. The molecule they chose was bacteriorhodopsin, a light-sensitive molecule found packed together in membranes on the surface of some bacteria. This protein could not be obtained in a form suitable for X-ray crystallography, so they turned to electron microscopy as an alternative. They covered the protein sample with a glucose solution that would not dry out completely in the vacuum of the electron microscope. The normal beam strength would destroy their sample, so they used a very weak beam. The molecules in the sample scattered the electrons in a pattern from which they could deduce a rough shape of the molecule. Henderson saw that his method had possibilities.
He changed the angle of the beam to obtain diffraction patterns in different directions and gradually succeeded in improving the resolution of his images. By 1990 he had improved his technique to build a picture that showed the arrangement of all the atoms in the molecule of the protein. The method worked for proteins that occurred in neat arrangements but did it have wider uses? In New York, Joachim Frank was working on just this problem. Frank was born in Siegen in western Germany in 1940. He studied in Freiburg and Munich and did research in the USA and UK before settling in New York in 1975. In 1987 he spent a short time with Richard Henderson at Cambridge. Back in 1975 Frank had come up with the idea of using a low intensity electron beam to produce an image of protein molecules in solution. The image showed the 2-dimensional shadows of the molecules floating in a thin film of water. Frank developed a computer programme to analyse the shadow cast by each molecule to build a 3-dimensional model of the molecule. In the late 1980s he had managed to produce an image of ribosomes, the protein building machines in cells. A problem that remained was stopping the water in a protein solution form being evaporated by the vacuum of an electron microscope. Henderson’s method using glucose solution only worked for certain proteins but Jacques Dubochet provided the answer. Dubochet was born in 1942 in Switzerland. He obtained his PhD from the University of Geneva and worked at the European Molecular Biology Laboratory at Heidelberg in Germany. He was a professor at the University of Lausanne from 1987 to 2007. Freezing the water would prevent it evaporating away in the electron microscope. However, ice crystals could block the electron beam or break up the protein molecules like they destroy soft fruit put in a freezer. Dubochet thought that if the sample could be cooled fast enough there wouldn’t be time for ice crystals to grow and the water would be turned into a glass-like, vitrified state. His team had success in 1982 when they cooled a drop of water to below -190oC in liquid ethane itself cooled by liquid nitrogen. Now Dubochet was able to develop cryo-electron microscopy. A thin film of the protein sample in water is spread over a fine metal mesh. The water is vitrified in liquid ethane and then exposed to the electron beam. In 1984 Dubochet obtained his first sharp images of viruses which are simply packets made of protein. Cryo-e.m. – a new tool A combination of Dubochet’s vitrification method and Frank’s computer programme allows cryo-electron microscopy to be used to obtain images of proteins showing the position of each individual atom in the molecule. Now scientists can use cryo-electron microscopy to explore large proteins in action in and on cells. For example: the way salmonella bacteria attack cells; the molecules responsible for regulating our body clocks; and the pressure sensitive molecules in our ears that allow us to hear sounds. The epidemic of the Zika virus in South America, particularly in Brazil during the 2016 Olympic Games, provided another case for cryo-e.m. The method was used to determine the structure of the proteins on the surface of the virus and hence suggest a vaccine for the disease. The work of Henderson, Frank and Dubochet has provided another valuable tool to look in to the workings of the molecules of life. - Why is cryo-e.m. more useful than X ray crystallography for determining the structure of many proteins? 
- Why are normal electron microscopes not suitable for imaging protein molecules? - Why is discovering the structure of proteins important? - What does the “cryo” part of the name cryo-electron microscopy mean? - What are the countries of birth and the countries where they work of the three winners of the 2017 Nobel Prize for Chemistry? - It is more than 25 years since cryo-e.m. was developed by the three Nobel winners. Why do you think it has taken that long for them to receive the award? - This year’s award winners are white males in their 70s. What are your opinions on this fact? - Follow the news in the media and online about the Nobel Prize and find out more about the winners.
From ancient times, the value of a samurai was assessed in terms of his prowess at kyuba no michi, the way of horse and bow, whereby the elite warrior would deliver arrows from his longbow while riding a horse. Unlike the light horse-archers of the steppes, the samurai was quite heavily armoured and used his horse as a mobile ‘gun platform’. It was only when hunting that archery would be conducted at a gallop, a martial art performed nowadays at festivals under the name of yabusame. The bow itself was made of bamboo sections wound with rattan and lacquered. It was loosed from one third of the way up its shaft for convenience when riding a horse. Some bows were very powerful, and stories are told of the finest archers sending arrows through an enemy’s arm or even shattering the planking of a boat so that the samurai on board were drowned.
As a woman, you know how to get pregnant (i.e. sex), but you may not know where exactly fertilization occurs. Pregnancy can be a complex, complicated topic, and it's perfectly normal to be confused or unsure of how certain processes actually happen. Here are 10 things you need to know about fertilization, including where it occurs.

Table of Contents
- 10 Things You Should Know About Fertilization
- ① Fertilization Occurs in the Fallopian Tube
- ② Fertilization Can Only Occur During Ovulation
- ③ Ovulation Isn't Always Obvious
- ④ Fertilization is a Miraculous Thing
- ⑤ Once Fertilized, the Egg Changes
- ⑥ Implantation Occurs in the Uterus
- ⑦ Implantation May Cause Light Bleeding
- ⑧ Implantation Usually Occurs 6-10 Days after Ovulation
- ⑨ Implantation Isn't Always Successful
- ⑩ Miscarriages Are Common Even After Implantation

10 Things You Should Know About Fertilization

① Fertilization Occurs in the Fallopian Tube
It's easy to assume that fertilization occurs in the uterus (most women do), but sperm actually fertilizes the egg in the fallopian tube. In some cases, the fertilized egg implants outside the uterus, which can be dangerous. This is known as an ectopic pregnancy, and it can be a life-threatening condition.

② Fertilization Can Only Occur During Ovulation
Ovulation is the only time sperm can fertilize an egg. To understand how this works, you need to understand your menstrual cycle. Every month, a group of eggs starts to grow inside your ovaries in tiny sacs called follicles. These eggs continue to grow until one is released from its follicle. This is called ovulation. Typically, ovulation occurs two weeks after the first day of your period, but this is not true for every woman. Some women ovulate earlier or later, depending on the length of their cycle.

③ Ovulation Isn't Always Obvious
If you're trying to get pregnant, you may not know when you're ovulating. The signs aren't quite as obvious as your period. The most common indication of ovulation is the secretion of a white, sticky discharge that looks like egg whites. While this type of discharge is often a sign of ovulation, there's a chance that the fluid can be normal discharge or even a sign of early pregnancy for some women. One of the best ways to determine if you're ovulating is to measure your basal body temperature. A slight increase in your body's temperature is often an indication that you're fertile.

④ Fertilization is a Miraculous Thing
The average ejaculate contains about 150 million sperm. The job of the sperm is to swim up the fallopian tube and fertilize the egg. They only have about 12-48 hours to tackle this task before they die. Only about 85% of sperm will reach the fallopian tube, and only 15% will make it all the way up to the egg. When all is said and done, only about 1,000 sperm are left to fertilize the egg, and they not only have to find their way to the egg – they also have to pick the right fallopian tube. But only one sperm will fertilize an egg.

⑤ Once Fertilized, the Egg Changes
All it takes is just one sperm to fertilize an egg, and when it does, it burrows into the egg. After this happens, the egg begins to change to prevent other sperm from getting in and fertilizing it. As soon as the egg is fertilized, the baby's sex and genes are set. Sperm with an X chromosome will produce a baby girl, while sperm with a Y chromosome will produce a baby boy. Within 24 hours, the egg starts quickly dividing into many cells.
⑥ Implantation Occurs in the Uterus
A fertilized egg will stay in the fallopian tube for 3-4 days. During the first few days after fertilization, the egg, as it's dividing, will move slowly down through the fallopian tube and into the uterus. Once it arrives in the uterus, the egg will burrow into the wall of your uterus. This is known as implantation.

⑦ Implantation May Cause Light Bleeding
Implantation can cause very light bleeding and cramping. As the egg burrows into the uterine wall, it may cause some shedding and bleeding as a result (similar to what happens when you get your period). Light bleeding is normal during implantation, but if you notice a heavy flow or the bleeding lasts more than a few days, you may have had a miscarriage, or your period may have come.

⑧ Implantation Usually Occurs 6-10 Days after Ovulation
If a sperm successfully fertilizes an egg, it will take up to 10 days for implantation to occur, depending on when the egg was fertilized and the length of your cycle. For women with a 28-day cycle, ovulation usually occurs on the 14th day after your last period started. It takes about three days for the egg to make its way into the uterus.

⑨ Implantation Isn't Always Successful
Even if a single sperm manages to fertilize the egg, there's no guarantee that implantation will be successful. About three-fourths of lost pregnancies are caused by failed implantation, so it's not uncommon. Implantation is a sticky process – literally. The early embryo expresses a type of protein known as L-selectin, and at the same time, the uterus is enriched with carbohydrates. The L-selectin protein binds with the carbohydrates briefly, which creates a sticking and unsticking interplay between the uterus and egg. This process slows down the embryo's progress as it moves along the uterine wall. Once the embryo rests, it has the chance to burrow into (or attach to) the uterine wall. At this point, the embryo starts gaining nourishment from the placenta, and pregnancy can begin.

⑩ Miscarriages Are Common Even After Implantation
While a large percentage of miscarriages are caused by failed implantation, other factors can lead to a lost pregnancy even after implantation has occurred. According to doctors, mismatched chromosomes account for 60% of early miscarriages. Every one of us has 23 pairs of chromosomes – one chromosome in each pair from our mother, and one from our father. When the egg and sperm meet, there's a chance that the chromosomes don't match up properly or one is faulty. When chromosomal abnormalities occur, the pregnancy typically results in a miscarriage. Miscarriages can also occur if you have issues with your cervix or uterus. Thyroid problems, uncontrolled diabetes and other medical issues can lead to miscarriages as well. Even if implantation does occur, there is a chance that your pregnancy can end in miscarriage early on. While devastating, this is not an uncommon occurrence.
By Aoife Lyons

There is much research showing that children who have learning disabilities are at risk of having lower self-esteem and self-worth than their peers. From an early age, children compare themselves with others in areas such as academics, the ability to make and keep friends, and athletic prowess. For younger children the comparisons and subsequent self-judgment can be rather simplistic or "black and white." Children with learning disabilities may judge themselves as "stupid," "slow" or "dumb" based on academic comparisons with other children. These self-judgments are often global in nature, such that a child who is having difficulty at school may perceive themselves negatively in all areas of their development.

Children who are diagnosed with learning disabilities have likely been having difficulty in school for many years before the actual diagnosis. Because the diagnosis of a learning disability is often based on a discrepancy between a child's academic competence and their measured IQ score, it is more difficult to diagnose children before 1st or 2nd grade, simply because expectations for academic achievement are not that high. Consequently, children with learning disabilities may have endured many years of negatively comparing themselves to their peers and developing lowered self-esteem and self-worth before being formally diagnosed. After a diagnosis is made, children and families need help understanding the diagnosis and label. For some children and families, the diagnosis can bring relief, as they now have a label to help explain the academic difficulties. For other children and families, the label may be stigmatizing and can lead to a more negative appraisal of the child's abilities.

As professionals who work with children with learning disabilities, we have important roles in helping these children recognize both their areas of difficulty and their areas of strength. We also need to educate children and families about the nature of learning disabilities. Families need to hear that children with learning disabilities are bright; they just have a deficit in a particular area of learning. The message that learning disabled children have average or even above average IQ is one that bears repeating often, to children as well as families. Learning disabled children have often spent many years struggling in school and feeling "stupid". They have likely felt confused, discouraged and hopeless as their efforts do not produce a desired result. Some learning disabled children become immobilized by failure and develop "learned helplessness", an attitude of "why bother when I always fail?" It is our job to help children undo these negative self-evaluations and see themselves in a realistic light. Learning disabled children often need support in assimilating both positive and negative characteristics into their self-image. While much time is spent helping learning disabled children master academic skills, we should also be working on improving self-esteem through recognition and appreciation of their areas of strength.

Here are some ideas for parents, educators and others who work with LD children.

Help the child feel special and appreciated. There is research that shows that the presence of at least one adult who makes a child feel special and appreciated leads to greater resilience and hopefulness in the child.
Children feel special when their efforts are appreciated, when adults notice what makes them different in a positive light, and when adults carve out special time to spend with the child.

Help the child with problem-solving and decision-making skills. Solid problem-solving skills have been linked to higher self-esteem. Instead of providing a child with the solution to their difficulty (whether academic, social, etc.), help the child brainstorm possible solutions and the possible consequences of different decisions.

Avoid judgmental comments and praise the effort children put into their work. Often children with learning disabilities are putting effort into their work but still struggle. Help the child find new strategies for learning that will help them feel more successful. Be empathic about the child's special learning needs and their level of frustration when learning.

Don't compare learning disabled children with peers or siblings.

Highlight a child's strengths in non-academic areas, whether in music, art, athletics, etc., or highlight the strengths of their personality (kindness, tenacity, helpfulness, sense of humor, etc.).

Provide opportunities for a child to help. Helping others shows a child that they have something to offer their family and community. Children often enjoy participating in volunteer activities with their friends and family. Helping others bolsters self-esteem.

Have realistic expectations. When we have realistic expectations about a child's performance, it helps the child develop a sense of control.

By working together, professionals and parents can help the learning disabled child overcome both academic difficulties and the self-esteem difficulties that often follow. If we can appreciate the learning disabled child in a holistic manner, the child should also learn to appreciate their own unique strengths.
By Elisabeth Waugaman

African American naming traditions were dramatically influenced by slavery. From the sixteenth to the nineteenth centuries, between nine and twelve million Africans were shipped to the New World as slaves. Existing slave ship manifests for the Atlantic slave trade record numbers, gender, approximate age of slaves, and occasionally "nation" (tribal identity). Given names were only recorded on slave ships after the beginning of the international abolitionist movement circa 1820. Once sold into slavery, Africans were given Anglicized names. Plantation records list mostly diminutive first names (e.g. Tom, Dolly) and more rarely biblical (e.g. Abraham, Israel), well-known historical (e.g. Matilda, Pascol), classical (e.g. Scipio, Venus), and place names (York, London, Hampton). In rare instances, plantation slave lists reveal a name that appears to be African (e.g. Cudjo Lewis). A surviving African name suggests that the slave was able to gain enough respect to maintain his ethnic name. Such was the case for Ayuba Suleiman Diallo, an educated Muslim who could read and write Arabic and eventually published one of the earliest U.S. slave narratives. Scholarly estimates are that most Africans brought to America were animist, ten to thirty percent may have been Muslim, and three to five percent Christian. Biblical slave names may be those of Muslim or Christian slaves. For an extensive list of evolving African-American names over the centuries, check out Nameberry's Satran and Rosenkrantz's Beyond Ava and Aiden: The Enlightened Guide to Naming your Child.

Within their own quarters, slaves secretly called one another by their African names. However, since African families were repeatedly split, living in a foreign culture with a foreign language, among diverse African ethnic groups, and with their owners suppressing African customs and religions, maintaining African traditions over generations became almost impossible. With emancipation, liberated slaves abandoned diminutive names like Betty or Tom for their full forms (Elizabeth, Thomas). For surnames they had a wide range of choices: that of their former owners, that of prominent leaders, or one based on their occupation, a city or town, etc.

In the 1950s and 60s, Malcolm X became a prominent spokesman for the advancement of African Americans. He chose to call himself X because African Americans had no way of knowing their family history, and by using X as his surname he knew it would not be the name of a slave owner. Some Black Americans decided to liberate their identity by intentionally misspelling their given name so that it would be theirs alone and would never have been used by a slave owner (e.g. Dawne). The Civil Rights movement of the 60s and 70s strengthened the sense of Black pride and identity, inspiring African Americans to discover more about their origins. The horrors of slavery and racism were exposed as never before. Because slave owners were Christians and some slaves were Muslims, African Americans began to explore Islam, and Islamic given names began to appear in the African American community. In 1976, Alex Haley published the Pulitzer Prize-winning book Roots: The Saga of an American Family, which was made into a TV miniseries that won nine Emmy awards and made a strong impact. The series inspired many in the Black community to give their children African names (e.g. Ama) or African-sounding names (e.g. Tanisha).
Creating African-sounding names led to making up totally unique names, which is an ongoing trend in the African-American community. Because of the vibrant Creole culture in Louisiana, there is also a French influence in some African-American names. This includes not only French surnames but also given names beginning with "La" (e.g. Lawanda) or "De" (e.g. Deandre') and names using apostrophes (e.g. Andre', Mich'ele), which represent accents that were not available on American typewriters at the time.

Africa has the world's greatest linguistic diversity: there are over 3,000 languages within Africa's six language families, which offer an amazing array of given names. With the Internet, there is more and more information readily available about African culture, including African names. In addition to Nameberry's Awesome African Names, Behind the Name has a list of African names with origins and meanings, and OnlineNigeria has another extensive list. You can find even more information by Googling names from different African languages, such as Bantu, Hausa, and Yoruba names.

Elisabeth P. Waugaman is the author and illustrator of the medal-winning Follow Your Dreams: The Story of Alberto Santos-Dumont and the author of Women, Their Names, and the Stories They Tell. She is a blogger for Psychology Today Online and is on the faculty of the New Directions Writing Program of the Washington Center for Psychoanalysis in Washington, D.C.
sickle cell anemia (SIH-kul sel uh-NEE-mee-uh) An inherited disease in which the red blood cells have an abnormal crescent shape, block small blood vessels, and do not last as long as normal red blood cells. Sickle cell anemia is caused by a mutation (change) in one of the genes for hemoglobin (the substance inside red blood cells that binds to oxygen and carries it from the lungs to the tissues). It is most common in people of West and Central African descent. Also called sickle cell disease.
How did the Second Great Awakening affect the evolution of women's roles in society in 1815-1860? I'm writing a paper, and I don't know much about how the Second Great Awakening affected the evolution of the roles of women. I was wondering if you could just give me some background on the years from 1815-1860?

The Second Great Awakening impacted women's roles in our country. The Second Great Awakening was a religious revival in which people became more connected with their churches and with religious teachings. Many of the people involved in the Second Great Awakening were women. One concept that evolved from the Second Great Awakening was allowing a greater role for women, at first within the household structure, and later in society at large. There was a more equal sharing of responsibilities at home between husband and wife. Prior to this time, the man ruled the house. That began to change as a result of some of the ideas that developed from the Second Great Awakening. Women also began to develop roles outside of the house. Women began to work in other reform movements, some of which reflected the values developed from the Second Great Awakening. Women became involved in the abolition movement and the temperance movement. More women began to teach in schools. The voice of women became louder in the various reform movements, and their roles and actions in the Second Great Awakening contributed to this. Eventually, women began to advocate for their own rights, as seen at the Seneca Falls Convention in 1848.

The Second Great Awakening impacted women's roles significantly between 1815 and 1860. The major impact of the Second Great Awakening in this regard was that it led to the involvement of women in the reform movements of the time. The Second Great Awakening helped to create a propensity for reform in American society. It emphasized the idea that society could and should be perfected. Since women were (under the ideas of Republican Motherhood) supposed to be in charge of raising children to be good citizens, it also made sense for them to be at the forefront of reforming society in general. This is exactly what happened. Women took the lead in many reform movements. There were female abolitionists and female temperance workers. There were female educational reformers and female prison reformers. The Second Great Awakening emphasized the idea of reforming and perfecting society, and it encouraged the idea that women should take part in those reforms.
PART ONE: HISTORICAL BACKGROUND

By the end of the 16th century, scientists were becoming increasingly aware that many of the speculations about the atmosphere were inadequate and erroneous. Much of this failure was due to the absence of precise meteorological instruments. Science as it is practiced today did not yet fully exist. Intellectual thought about the nature of the universe was dominated by the conclusions of natural philosophers who interpreted observations in ways that supported their preconceived notions (Frisinger, 1977). This is epitomized by Aristotle’s Meteorologica, published around 340 BC, which examined atmospheric phenomena such as rainfall, cloud formation, hail, long-term climate patterns, thunderstorms, and temperature (Aristotle, 1952). Aristotle believed that weather phenomena could be explained by the mixing of four basic elements – namely earth, water, air, and fire – between different levels. Meteorologica, which became the standard resource on atmospheric phenomena until the 17th century, used qualitative observations to support its conclusions rather than the quantitative procedures that modern meteorologists would consider to be consistent with experimental scientific methods (Nutter, 2001).

Frustration over the inadequacies of the qualitative approach led scientists of the 17th century to yearn for quantitative instruments that would provide precise methods for measuring atmospheric phenomena. Galileo Galilei helped to lead the way when he created the first known thermometer at the tail end of the 16th century. This gave rise to the field of thermometry and to numerous attempts at expanding upon Galileo’s work. Robert Hooke published a document in 1664 that described his creation of a four-foot-long thermometer filled with wine (Frisinger, 1977). Christiaan Huygens introduced the idea in 1665 of using the boiling point and the freezing point of water as fixed reference points on the thermometric scale (Zinszer, 1944). Sir Isaac Newton published a paper in 1701 which described the creation of a thermometer that was three feet long and that had a two-inch-diameter bulb filled with oil (Newton, 1701). None of the above-mentioned scales, however, became widely accepted as a standard for measuring temperature.

A young instrument maker by the name of Daniel Gabriel Fahrenheit therefore saw an opportunity. Fahrenheit was born in Danzig, Poland in 1686. He lived most of his life in Holland. When both of his parents died in 1701, Fahrenheit was sent by his guardian to study business in Amsterdam, where he took a special interest in scientific instruments (Rozell, 1996). His primary desire was to go into business as a successful instrument maker, and it is believed that he was more interested in being a tradesman than he was in being a natural philosopher (Middleton, 1966). Fahrenheit combined business skills with instrument-making skills in such a way that he was able to market his thermometers widely. The widespread availability of his thermometers explains why both scientists and society quickly embraced his scale as a standard. The scientific community, however, would for the most part eventually reject Fahrenheit’s scale in favor of the centigrade scale, which was created by Anders Celsius around 1740. Celsius suggested using the fixed points of 0 degrees and 100 degrees to represent the boiling point of water and the freezing point of water respectively.
The scale was later inverted because placing the colder value at 100 degrees did not make intuitive sense to most scientists (Rozell, 1996). The eventual acceptance of the centigrade scale as a scientific standard was seen as a logical choice because it was consistent with the decimal based counting system that had been adopted by society (Middleton, 1977). Multiples and powers of ten are considered to be the round numbers in such a counting system. The centigrade scale with 100 degrees between two fixed points was therefore seen by scientists as more reasonable than Fahrenheit’s scale which was not consistent with a decimal based counting system. Fahrenheit was greatly influenced by a Danish astronomer by the name of Olaus Roemer who, around 1702, developed an alcohol thermometer to record daily atmospheric temperatures. Roemer set one of his fixed points at the boiling point of water, which he labeled 60 degrees, and the other fixed point at the melting point of snow, which he labeled 7.5 degrees (Cohen, 1944). Fahrenheit met Roemer in 1708 and eventually based his own thermometers on the fundamental principles of Roemer’s thermometer. Fahrenheit did make some changes, however. He used mercury instead of alcohol. He also modified Roemer’s scale because, according to Fahrenheit’s letters, he had no desire to work with "inconvenient and awkward fractions" (Rozell, 1996). Fahrenheit also was not initially concerned with using the boiling point of water as a fixed point because his primary interest was to use thermometers to measure atmospheric temperatures. A thermometer graduated as high as the boiling point of water is not very useful in a meteorological context (Middleton, 1977). Fahrenheit therefore used fixed points that were similar to temperatures that could be observed in the atmosphere. Fahrenheit designed and crafted the first mercury thermometer with trustworthy scales in 1714 (Frisinger, 1977) and divided his degrees into quarters (Middleton, 1966). An interesting account of how Fahrenheit arrived at the fixed points for the scale on his thermometers can be found in a document that he published in an issue of Philosophical Transactions (Fahrenheit, 1724). In this work, Fahrenheit stated that the length of his thermometers varied with the temperature range needed for a given task, but the distance between the scaled degrees did not deviate from one thermometer to another. The scale could be lengthened by adding more spaces of equal length whenever the situation required the use of higher values. The partitioning of the scale on all his thermometers was based on three fixed points. The first point was fixed at 0 degrees, which he considered the beginning of the scale, and was obtained by placing a thermometer into a mixture of ice, water, and sal-ammoniac, or also sea salt. This was an experiment that Fahrenheit claimed worked better in the winter. It produced what was essentially the coldest temperature that Fahrenheit could obtain with the materials and tools that were available to him. The temperature obtained from the mixture was therefore seen by Fahrenheit as a logical point to begin his thermometer scale. The second fixed point on Fahrenheit’s scale, also explained in the 1724 document that he published in Philosophical Transactions, was set by placing a thermometer in a mixture of water and ice without the salts (as Roemer had similarly done). 
Fahrenheit set this at a value of 32 degrees and referred to this point as "the beginning of congelation, for in winter stagnant waters are already covered with a very thin layer of ice when the liquid in the thermometer reaches this degree" (Middleton, 1966). The third fixed point on Fahrenheit's scale was obtained by placing the thermometer in the mouth or under the armpit of a healthy man. Roemer had done something similar and had placed the value of "blood heat" at 22.5 degrees (Cohen, 1944). Fahrenheit, however, set the value at 96 degrees. The boiling point of water, or 212 degrees, replaced 96 degrees as the upper fixed point shortly after Fahrenheit's death. It was discovered that 98.6 degrees, and not 96 degrees, was the actual average temperature of a healthy human body. Once Fahrenheit's scale was recalibrated to reflect this knowledge, scientists realized that the boiling point of water fortuitously lined up exactly with 212 degrees on the Fahrenheit scale (Middleton, 1977). It therefore became a common practice to use 212 degrees as the upper fixed point rather than either the erroneous 96 degrees or the corrected 98.6 degrees temperature of a healthy human body. The melting point of ice, or 32 degrees, was maintained as the lower fixed point. However, 0 degrees is no longer considered a fixed point on Fahrenheit's scale, most likely because it cannot be established that 0 degrees will always result if Fahrenheit's method is used. As one author notes, "The mere fact that either of two salts was to be used in his freezing mixture, and the note that 'the experiment succeeds better in winter than in summer,' should have warned readers that such a zero would not be even approximately a fixed point" (Middleton, 1977). Fahrenheit died in 1736 at the young age of 50. He left behind a legacy that would influence society for centuries to come. His scale, while no longer in use by most countries of the world or by the general scientific community, continues to reign as the thermometric scale of choice within American culture.
PART II: THE LESSON
The lesson outlined in this section is intended for introductory earth science, physical geography, and meteorology courses that are taught at the university level. All such courses have extensive sections that deal with the phenomenon of temperature, especially as it relates to the atmosphere. Several class sessions have been dedicated to this topic in every introductory earth science, physical geography, and meteorology course that I have either taught as an instructor or attended as a student. A dilemma that is encountered in introductory earth science, physical geography, and meteorology courses (at least, within my own teaching experience) is that students sometimes have difficulty understanding the significance of temperature scales. Some students express frustration over the fact that lab experiments usually require the use of the centigrade temperature scale, which they are generally unfamiliar with, rather than the Fahrenheit temperature scale, which they are more thoroughly familiar with. Students sometimes complain that they do not understand what the numbers on the centigrade scale mean unless they convert them to Fahrenheit values, and they wonder why they cannot do everything using the Fahrenheit scale. The intent of this lesson, therefore, is to create an exercise that responds to these concerns by helping students understand why one scale can be preferred over another in the context of science.
It is the goal of the lesson to provide students, by means of the historical case study of Fahrenheit creating the first mercury thermometer with reliable scales, with a richer insight into the significance of the arbitrary nature of thermometer scales in general. After examining the historical account, students should begin to understand that the arbitrary nature of temperature scales can make some scales more useful when the fixed values are placed at logically convenient points. In the context of a decimal based counting system, for example, it is more reasonable to have a temperature scale with 100 degrees between the two fixed points than it is to have 180 degrees between the two fixed points (as it is on the Fahrenheit scale). The lesson will begin with students individually reading the account that is found in Part I of this paper (or a similar account that could be written later) that describes the background of Fahrenheit creating his mercury thermometer and temperature scale. Students will be asked the following questions to help them think more thoroughly about the nature of temperature scales: How did Fahrenheit arrive at the "0" point of his scale? What were the other two fixed points that he chose? Define the terms "objective" and "arbitrary." Was Fahrenheit's scale based on "objective" standards or "arbitrary" standards? What is a "decimal based" counting system? What are some possible disadvantages of Fahrenheit's temperature scale? What might be the advantages of using the centigrade scale (which places the melting point of ice at a value of "0" degrees and the boiling point of water at a value of "100" degrees)? Which scale is consistent with a decimal based counting system? Students will work in small groups of two to four to answer these questions. After students have spent adequate time constructing various responses to the above questions, the instructor will initiate a class-wide discussion. Groups will be asked to present their ideas to the rest of the class. Students will also be asked to evaluate the comments made by other students. The instructor should encourage students to think carefully about what the questions are asking before constructing appropriate responses. It is hoped that the students themselves (with as little suggestion from the instructor as possible) will begin to recognize the arbitrary nature of temperature scales. Once this has been accomplished, it is hoped that students will also recognize that the centigrade scale is a more reasonable scale in the context of a decimal based counting system. Following this introductory exercise, students will work with a thermometer that uses the Fahrenheit scale and a thermometer that uses the centigrade scale. Students will be asked to record the air temperature in the classroom with both a centigrade thermometer and a Fahrenheit thermometer. The temperatures found by both thermometers should be recorded on paper. Students will also be asked to go out of the building and record the outside temperature using both thermometers. As with the indoor temperatures, the outside temperature found by both thermometers should be recorded on paper. Students will return to the lab where they will perform an experiment using water and ice. Students will be asked to fill a small container with tap water. (It is up to the instructor's discretion as to whether to have "room temperature" water prepared or to allow students to use cooler water from the sink.) Students will record the initial temperature of the water.
They will slowly add chunks of ice to the water and, using both the centigrade and Fahrenheit thermometers, observe what happens to the temperature of the water as increasing amounts of ice are added to the water. Students should notice that the temperature of the water approaches 0 degrees on the Celsius scale and 32 degrees on the Fahrenheit scale. Students will be asked the following follow-up questions that are again designed to stimulate thought about the arbitrary nature of temperature scales: What is the melting point of ice on the centigrade scale? What is the melting point of ice on the Fahrenheit scale? The melting point of ice is different from one scale to another. What does this tell us about the nature of thermometer scales? What would the possible benefits be of placing the melting point of ice at 0 degrees rather than 32 degrees? It has already been noted that having 100 degrees between the fixed points is consistent with a decimal based counting system. There are also other advantages to the centigrade scale. For example, the melting point of water is extremely significant in meteorology because it is a transitional point between very different kinds of weather (namely, frozen versus wet). Whether the temperatures at various levels in the atmosphere are above, below, or equal to this point determines whether precipitation will fall in the form of snow, sleet, freezing rain, or rain. Above or below freezing temperatures will determine whether there will be dew or frost in someone's garden on a spring morning, and they will determine whether conditions on the highways will be made more hazardous by icy pavement or less hazardous by wet pavement. Thus, it may be extremely beneficial to understand the temperature in terms of whether it is above the melting point of ice (positive values) or below the melting point of ice (negative values). Therefore, the following questions will be included in order to stimulate further thought about this idea: Why is the melting point of ice a significant value for meteorology and weather forecasting? What temperature did the water approach (on both scales) as you continued to put increasing amounts of ice into a container of water? What temperature (on both scales) did you record both inside and outside? What temperature values would you expect outside (on both scales) during the day in the middle of January? Are these values positive or negative? What temperature values would you expect outside (on both scales) during the middle of July? Are these values positive or negative? Why would it be more useful to meteorologists to use a scale that places the "0" point at the melting point of ice? As with the previous questions, the instructor will initiate a class-wide discussion after students have spent adequate time constructing various responses. Groups will again be asked to present their ideas to the rest of the class and will be asked to evaluate the statements made by other students. A final discussion will take place between the students and the instructor that will be based on the following two questions related to students' personal reactions to the historical account: Did the historical account of Fahrenheit creating his thermometer scale make the discussion of thermometers more interesting to you? Did the historical account help you to gain a better understanding of the concepts that were being discussed?
If so, in what ways did reading this historical account benefit your understanding of thermometer scales? These questions are meant as a reminder to the students that their thoughts and opinions are valuable and important. Instructors could also use students' responses to these questions as somewhat of a gauge of the lesson's effectiveness. The lesson that has been outlined in this section is potentially effective in teaching only a small amount of the content that students should be learning about temperature and thermometers. Instructors should also explain what thermometers are actually measuring, and they should explain the difference between temperature and heat. This lesson could serve as a foundation for teaching other topics related to temperature such as isotherms, wind chill factor, mathematical conversion between temperature scales, and atmospheric lapse rates.
PART III: PHILOSOPHICAL JUSTIFICATION
It is not enough to teach students how to read and use various thermometer scales without giving them at least some notion of how these scales came into existence. The use of historical case studies in the science classroom has been advocated by many researchers and philosophers of science education for quite some time. Defenders of its use as a pedagogical tool believe that the growth of science should be examined more closely, giving special consideration not only to the paths that have been taken by scientists, but also to the paths that have not been taken by scientists (Duschl, 1994). Various arguments have been made in defense of this type of belief. Three of the stronger arguments for the use of history in science classrooms are summarized in a paper by Jenkins (1991) that described the use of history in science classrooms in Great Britain. First, history can be seen as providing a "humanizing" aspect to what is usually a "dehumanizing" science education structure. This argument was primarily used in Great Britain after the end of World War I. Second, the history of science can be seen as offering "common ground" between the specialists in the arts and the specialists in the sciences. This argument was dominant in Great Britain during the 1950s and 1960s. Third (and likely the most enduring argument), the inclusion of history in the science classroom can provide students with a richer insight into the "nature of science" itself. I have stated that the goal of the lesson presented in Part II of this paper is to provide students, by means of the historical case study of Fahrenheit creating his thermometer scale, with a richer insight into the significance of the arbitrary nature of thermometer scales in general. It is therefore Jenkins's third argument (the use of history in science can be used to provide a richer insight into the "nature of science" itself) that provides the basis for the lesson that has been presented in this paper. It has been argued that the teaching of science without an emphasis on the rich history of its development leaves students with a superficial and inadequate understanding of the nature of science. Matthews (1994) uses Boyle's Law to illustrate this point. It is inadequate to teach Boyle's Law without considering what the definition of a "law" is, who Boyle was, what Boyle did, and what kind of cultural environment and background influenced Boyle's work.
So too, it is inadequate to teach temperature without considering how temperature scales were arbitrarily created, who the creators were, what they did, and what kind of cultural environments and backgrounds influenced the creators’ work. In short, it is not enough simply to teach students how to read and use a thermometer because no real insight is being provided into the nature of science itself. Instead, students must be introduced to the very procedures that have governed scientific progress. In order to gain such insights, students must see science as being part of a larger surrounding cultural heritage (Jenkins, 1989). It is therefore imperative for science educators to furnish students (at least somewhat) with the richness of the influential history of science and to involve them in some of the questions that scientists have engaged in (Matthews, 1994). Students should realize that science does not create itself. In other words, science does not exist in a cultural vacuum. Much to the contrary, real people engaging in real activities have molded science into the vast field of knowledge that it is today. Benchmarks asserts a similar idea when it states, "History provides another avenue to the understanding of how science works. Students should come to realize that much of the growth of science has resulted from the gradual accumulation of knowledge over many centuries" (American Association for the Advancement of Science, 1993). Similar ideas have been expressed by other researchers and philosophers of science education. Brush (1989) argued that the focus of a historical approach to the teaching of science is not merely on the conclusions reached by the scientists, but rather it is on the processes that were used by the scientists to reach those conclusions. In short, the historical approach to the teaching of science requires thinking on the part of the students. Brush goes on to argue that traditional science teaching has focused far too much on the "objective facts" of science, and that this has resulted in the inaccurate but common notion of the arrogant scientist who has obtained infallible knowledge. However, the use of history in the teaching of science avoids this extreme by demonstrating that science is a human product that is subject to change. One researcher argued that a broad cultural approach to science that includes historical elements should replace the traditional focus on "correct-for-now" contents with history-based instruction that reveals the non-linear processes by which scientists have attained their knowledge (Galili. 2000). It is important to note, however, that the presentation of historical material will not necessarily guarantee an improved understanding of science (Russel, 1981). I am therefore not advocating replacing the teaching of science content with the teaching of science as a historical discipline. Rather, I am arguing that part of the science curriculum should include historical case studies that can provide students with a richer insight into the procedures and nature of science. There is evidence that indicates that the portrayal of history in science classrooms is most effective when it is used to bring out the "specific characteristics" of science (Russel, 1981). 
Thus, instructors are not advised to teach the history of science as merely a series of cold, hard historical events; rather, instructors are advised to focus on specific historical cases and events that emphasize the explicit concepts and procedures that provide insights into the very nature of science. It is with the above considerations in mind that the lesson developed in Part II of this paper was created. It is my hope that the procedures and principles that determine the nature of thermometers will come alive for students after they read the historical account of Fahrenheit's creation of the first mercury thermometer with reliable scales. Students are doing more than simply using a thermometer to measure temperature, and they are going beyond simply learning the abstract principle that thermometer scales are chosen in an arbitrary manner. They are seeing a concrete example of the principle in action. By studying this particular example students should begin to realize that arbitrary standards are used to determine the fixed values on a thermometric scale, and that this arbitrary nature can make some scales more useful than others when the fixed values are placed at logical points. It was noted in Part II of this paper that students sometimes have difficulty understanding the significance of temperature scales. Students may express frustration because they are using the centigrade scale instead of the Fahrenheit temperature scale that they are more thoroughly familiar with. It was therefore stated that it is important that this lesson responds to these concerns. Monk and Osborne (1997) argue that there are two modes of support for introducing historical accounts into the classroom. First, it is comforting to students to realize that others have thought in the same way that they do and that they are therefore not stupid for thinking in the ways that they do. Second, it allows students to realize that some modes of thought are part of the past and that the current line of thought offers an improvement. These two principles can be easily applied to the teaching of thermometers. Students should begin to understand that the Fahrenheit scale is primarily a thing of the past in science because the centigrade scale offers an improvement. Students should understand that they are not stupid for wanting to hold on to the Fahrenheit scale. In fact, the scale was created by a brilliant man, was adopted by society, and was used at times by the scientific community. This will hopefully address students' concerns about using a scale that they are unfamiliar with. It should help them to understand that it is reasonable for the scientific community to develop and use a temperature scale that is consistent with the decimal based counting system that governs society. In summary, I have argued that it is possible for students to gain a richer insight into the arbitrary nature of thermometer scales in general by examining the specific historical case study of Fahrenheit creating his thermometer scale. Knowledge is not simply about the "products" of science; it must include the "processes" of science as well, which can be defined as the technical and intellectual ways that science develops its ways of understanding the world around us (Matthews, 1994). If students are only aware of the "products" of science, then science may seem distant to them. This is why it is beneficial to discuss the historical example of Fahrenheit's thermometer.
Students will not only know what a thermometer is and how to use it, but they will also know how it was developed. Knowing how it was developed will help them to understand the processes by which scientists can develop thermometer scales in general, and this may open the door for them to understand why using different fixed points may be more useful to scientists. Hopefully, this recognition will result in their being less hesitant to use the centigrade scale for their experiments in class. They may begin to understand the value of the centigrade scale.
[About the author: Robert J. Ruhf received his Ph.D. in Science Education from the Mallinson Institute for Science Education at Western Michigan University in Kalamazoo, Michigan in December 2006. He also received a Meteorology degree from Central Michigan University in 1998, a master's degree in Geography from Western Michigan University in 2000, and a Communications degree from Cornerstone University in 1990. He currently works for Science and Mathematics Program Improvement (SAMPI) at Western Michigan University.]
American Association for the Advancement of Science. (1993). Benchmarks for Science Literacy. New York: Oxford University Press.
Aristotle. (1952). Meteorologica: With an English Translation by H. D. P. Lee. Cambridge: Harvard University Press.
Brush, Stephen J. (1989). "History of Science and Science Education." Interchange, 20(2), 60-70.
Cohen, Bernard. (1944). Roemer and the First Determination of the Velocity of Light. New York: The Burndy Library, Inc.
Duschl, Richard A. (1994). "Research in History and Philosophy of Science," in Dorothy L. Gabel (ed.), Handbook of Research on Science Teaching and Learning: A Project of the National Science Teachers Association. New York: Macmillan, 443-465.
Fahrenheit, Daniel G. (1724). Philosophical Transactions, 33, 78-89.
Frisinger, H. Howard. (1977). The History of Meteorology: to 1800. New York: Science History Publications.
Galili, Igal & Amnon Hazon. (2000). "The Effects of a History-Based Course in Optics on Students' Views about Science." Science & Education, 10, 7-32.
Jenkins, Edgar. (1991). "The History of Science in British Schools: Retrospect and Prospect," in Michael R. Matthews (ed.), History, Philosophy, and Science Teaching: Selected Readings. Toronto: OISE Press, 33-41.
Matthews, Michael R. (1994). Science Teaching: The Role of History and Philosophy of Science. New York: Routledge.
Middleton, W. E. (1966). A History of the Thermometer and its Use in Meteorology. Baltimore: The Johns Hopkins Press.
Monk, Martin & Jonathon Osborne. (1997). "Placing the History and Philosophy of Science on the Curriculum: A Model for the Development of Pedagogy." Science Education, 81, 405-424.
Newton, Isaac. (1701). Philosophical Transactions, 22, 824-829.
Nutter, Paul. (2001). "An Historical Overview of Meteorology: From Speculation to Science." Class notes from METR 1014, University of Oklahoma's School of Meteorology. http://weather.ou.edu/~metr1014/chapter1/met_hist.html
Rozell, Ned. (1996). "Daniel Fahrenheit, Anders Celsius Left Their Marks." Geophysical Institute, University of Alaska Fairbanks. http://www.aspirations.com/neat_stuff_on_snow_ice.htm
Russel, Thomas L. (1981). "What History of Science, How Much, and Why?" Science Education, 65(1), 51-64.
Zinszer, Harvey A. (1944). "Meteorological Mileposts." Scientific Monthly, 58, 261-264.
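As a possible companion to the "mathematical conversion between temperature scales" extension topic mentioned in Part II, instructors comfortable with a little programming could show students the linear relationship between the two scales numerically. The short Python sketch below is not part of the original lesson; the function names and printed checks are my own illustrative choices, and only the standard fixed points (0 °C / 32 °F and 100 °C / 212 °F) are taken as given.

```python
# A minimal sketch showing the linear relationship between the two scales
# discussed above.  The fixed points used here are the standard ones:
# ice melts at 0 °C / 32 °F and water boils at 100 °C / 212 °F.

def celsius_to_fahrenheit(c):
    """Convert a Celsius temperature to Fahrenheit."""
    return c * 9.0 / 5.0 + 32.0

def fahrenheit_to_celsius(f):
    """Convert a Fahrenheit temperature to Celsius."""
    return (f - 32.0) * 5.0 / 9.0

if __name__ == "__main__":
    # The two fixed points line up as expected.
    print(celsius_to_fahrenheit(0))    # 32.0  (melting point of ice)
    print(celsius_to_fahrenheit(100))  # 212.0 (boiling point of water)

    # The same physical interval is divided into 100 Celsius degrees but
    # 180 Fahrenheit degrees, which is why one Celsius degree is "larger"
    # than one Fahrenheit degree by a factor of 9/5.
    span_f = celsius_to_fahrenheit(100) - celsius_to_fahrenheit(0)
    print(span_f)                      # 180.0
```

Running the script simply confirms the 100-versus-180 contrast that the lesson asks students to reason about; it is an optional aid, not a required part of the activity.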
Electronics Components: Transistors as a Magic Potentiometer
A transistor within an electronic circuit works like a combination of a diode and a variable resistor, also called a potentiometer or pot. But this isn't just an ordinary pot; it's a magic pot whose knob is mysteriously connected to the diode by invisible rays. When forward voltage is applied to the diode, the knob of the magic pot turns much like the needle on a voltmeter. This changes the resistance of the potentiometer, which in turn changes the amount of current that can flow through the collector-emitter path. Note that a magic potentiometer is wired so that when bias voltage increases, resistance decreases. When bias voltage decreases, resistance increases. Besides being connected to the diode by invisible rays, the magic pot is magic in one more way: its maximum resistance is infinite. Real-world potentiometers have a finite maximum resistance, such as 10 kΩ or 1 MΩ, but the magic pot has infinite maximum resistance. With this knowledge of the magic pot's properties in mind, you can visualize how a transistor works. There are three positions that the magic knob can be in, which correspond to these three operating modes for a transistor:
- Infinite resistance: When there's no bias voltage, the magic pot's knob is spun all the way in one direction, providing infinite resistance, so no current flows through the transistor. Remember that the base of the transistor is like a diode, which means that a certain amount of forward voltage is required before current begins to flow through the base. The magic pot stays at its infinite setting until that voltage (usually about 0.7 V) is reached. This state is called cut-off because current is cut off. No amps for you!
- Some resistance: As the bias voltage moves past 0.7 V, the diode begins to conduct, and the invisible rays start turning the knob on the magic pot. Thus current begins to flow. How much current flows depends on how far the bias voltage has caused the knob to turn.
- No resistance: Eventually, the bias voltage turns the knob to its stopping point, and there's no resistance at all. Current flows unrestricted through the collector-emitter circuit. You can continue to increase the bias voltage, but you can't lower the resistance below zero! This state is called saturation.
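The magic-pot analogy can also be expressed as a tiny numerical model. The Python sketch below is a toy illustration only, not a physically accurate transistor model: the 0.7 V turn-on value is the diode drop mentioned in the text, while the 1.0 V saturation point, the 10 kΩ starting resistance, and the linear resistance curve are invented purely to make the three operating modes visible.

```python
# Toy model of the "magic potentiometer" analogy.  All numbers are
# illustrative assumptions, not real device parameters.

def magic_pot_resistance(v_bias, v_on=0.7, v_sat=1.0):
    """Return the 'magic pot' resistance (ohms) for a given bias voltage."""
    if v_bias < v_on:
        return float("inf")          # cut-off: no collector current at all
    if v_bias >= v_sat:
        return 0.0                   # saturation: no resistance left to remove
    # Between the two limits, resistance falls as the bias voltage rises.
    fraction_turned = (v_bias - v_on) / (v_sat - v_on)
    max_resistance = 10_000.0        # arbitrary starting resistance (10 kΩ)
    return max_resistance * (1.0 - fraction_turned)

def collector_current(v_supply, v_bias, r_load=1_000.0):
    """Current through a load resistor in series with the magic pot."""
    r_pot = magic_pot_resistance(v_bias)
    if r_pot == float("inf"):
        return 0.0
    return v_supply / (r_load + r_pot)

for v in (0.0, 0.5, 0.8, 0.9, 1.2):
    print(f"bias={v:0.1f} V  current={collector_current(9.0, v) * 1000:0.2f} mA")
```

Sweeping the bias voltage shows exactly the three regions described above: zero current below about 0.7 V, a rising current in between, and a maximum set only by the external load once the pot reaches zero resistance.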
Debate surrounds the influence carbon dioxide has on global warming and the future climate of the Earth. Despite documented increases in atmospheric CO2, and understood sources of these emissions, naysayers argue that there is no clear evidence that increasing greenhouse gas emissions are altering our climate. However, there is clear and unambiguous scientific evidence that documents how rising atmospheric CO2 is leading to increasingly acidic seawater. There is no debate as to what is causing ocean acidification. As a result, ‘the other CO2 problem’ leads us to one solution – limit CO2 emissions and mitigate future atmospheric CO2 levels. Seawater chemistry is rapidly changing as atmospheric CO2 levels rise. The ocean absorbs the excess CO2 through air-sea gas exchange as the partial pressure of CO2 in the atmosphere equilibrates with it. It is estimated that the oceans have become 30% more acidic since the beginning of the industrial revolution (Feely, 2004). The concentration of CO2 in the atmosphere is now approximately 385 parts per million (350 ppm is a target level suggested to be the safe upper limit) and is likely to increase at 0.5% per year throughout the 21st century, a rate 100-times faster than has occurred in the past 650,000 years (Meehl et al. 2007). This increase will likely lead to a pH drop in the oceans of 0.3-0.4 units by the end of the century (Feely et al. 2008), a 150% increase in acidity (since the industrial revolution). This increasing acidity is a significant problem for ocean ecosystems. Marine animals that use calcium carbonate to make shells, skeletons or tests such as pteropods, coccolithophores, corals, oysters, clams, and sea urchins (to name a few) are likely to have increasing difficulty building and maintaining their carbonate structures (Guinotte and Fabry 2008). As acidity increases, conditions become more corrosive to calcified organisms; water with low pH becomes depleted of calcium carbonate ions and is referred to as “undersaturated” with respect to the two major forms of calcium carbonate used by organisms, calcite and aragonite. This will result in negative impacts on growth, metabolism and survival, and ultimately, reef growth may cease and/or reverse. At projected future pH levels, reef-building corals will erode faster than they can build up—leading to a net loss of coral reefs and the species they support. As a result of increasing CO2, commercial fisheries are now confronted with unknown future impacts from acidification. First, fish may experience direct physiological changes that impact metabolism, growth and reproduction. Second, the food web that supports them may be altered as their prey (e.g., pteropods), which require calcium carbonate structures, decline. Finally, key habitats such as coral forests and reefs will be affected. This mix of ecological impacts is enough cause for alarm, but the combination of changes in ocean chemistry and pressure from destructive fishing methods such as bottom trawling, is a recipe for disaster. Cold-water coral communities in Alaskan waters are highly diverse (141 species to date) and far more abundant than most other high latitude areas, with an unusually high number of endemic species (Heifetz et al. 2009). National Marine Fisheries Service biologist, Robert Stone (pers. comm.) estimates that up to 50% of corals species and 30% of sponge species found in the Aleutian Islands are endemic. 
The most important structure-forming taxa in Alaskan coral ecosystems are gorgonians (>60 species) and hydrocorals (primarily stylasterids) (>25 species) (Stone and Shotwell 2007), also known as lace corals. Large Paragorgia or bubblegum corals (gorgonians) may be 2-3 m in height and some of the erect Stylaster species may be over 1 m in diameter. The structures formed by these coral colonies provide important habitats for ocean life. Biologically diverse deep coral ecosystems are found throughout the world’s oceans in areas where the combination of rocky or hard-bottom substrate is exposed to rich ocean currents. In these areas, productivity is high and the corals are important nurseries and spawning sites for a number of commercially important fish species. Cold, deep waters are lower in pH than waters bathing shallow reefs, and future projections indicate that 70% of cold-water corals could experience corrosive conditions by the end of this century (Guinotte et al. 2006). Coral communities, whether in the shallow or deep sea, are among the most biologically diverse marine environments, and understanding the origins of this diversity is an important conservation objective. A study on Stylasterid corals (Lindner et al. 2008) shows that this important group of tropical shallow water marine animals originated and diversified in the deep sea and subsequently invaded shallow waters. It is also possible that deep-sea black corals, gorgonians and stony corals have also contributed to shallow water communities (Lindner et al. 2008). This has very important implications for deep-sea coral communities of the North Pacific, the likely evolutionary origin of these corals. Adding to this richness, sponges are commonly found in association with deep-water corals. The sponges in Alaska are diverse and abundant and a structurally important component of these ecosystems. From the few collections that have been made so far, 126 species have been identified, mostly demosponges (Stone, pers. comm.), which are a different taxa from the hexactinellids or ‘glass sponges’ usually found on deep coral reefs. Coral reefs in Alaska are therefore unusual in many ways, and the complex structures formed by the rich assortment of coral and sponge colonies provide habitat for myriad invertebrate and fish communities, including commercially valuable fish species. Many pharmaceutically promising compounds have come from sponges, and the diverse Alaskan species are completely untapped. Better understanding of the threats, future changes and mitigation options to these cold-water coral communities is an important conservation objective. This project will provide a better understanding of the locations of these communities, current protections for these areas, and how and where increasing acidification is likely to impact these communities. We will use this analysis to educate decision makers and interested parties, and we fully expect that this information will add impetus to take action to improve management measures and reduce CO2 emissions. Guinotte, J., J. Orr, S. Cairns, A. Freiwald, L. Morgan, and R. George (2006) Will human-induced changes in seawater chemistry alter the distribution of deep-sea scleractinian corals? Frontiers in Ecology and the Environment 4(3): 141-146 Guinotte, J.M. and V.J. Fabry (2008) Ocean acidification and its potential effects on marine ecosystems. In The Year in Ecology and Conservation Biology 2008. R.S. Ostfeld & W.H. Schlesinger, Eds. Annals of the New York Academy of Sciences. 
Heifetz, J., B.L. Wing, R.P. Stone, P.W. Malecha and D.L. Courtney (2005) Corals of the Aleutian Islands. Fisheries Oceanography 14 (Suppl. 1): 131-138.
Morgan, L.E., Tsao, C.F. and J. Guinotte (2006) Status of deep-sea corals in US waters, with recommendations for their conservation and management. Marine Conservation Biology Institute, Bellevue, WA. 64 pp.
Stone, R.P. (2006) Coral habitat in the Aleutian Islands of Alaska: depth distribution, fine-scale species associations, and fisheries interactions. Coral Reefs 25(2): 229-238.
Turley, C.M., Roberts, J.M. and J.M. Guinotte (2007) Corals in deep-water: will the unseen hand of ocean acidification destroy cold-water ecosystems? Coral Reefs 26: 445-448.
Watling, L. and E.A. Norse (1998) Disturbance of the seabed by mobile fishing gear: A comparison with forest clear-cutting. Conservation Biology 12(6): 1180-1197.
Morgan, L.E. and R. Chuenpagdee (2003) Shifting Gears: Addressing the collateral impacts of fishing methods in US waters. PEW Science Series, Island Press, Washington DC. 42 pp.
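As a footnote to the acidity figures quoted above, the relationship between a pH change and the corresponding change in hydrogen-ion concentration follows directly from the definition pH = -log10[H+]. The short Python check below is my own illustration, not part of the project description: it simply shows that a drop of roughly 0.1 pH units corresponds to the "30% more acidic" figure, and that a projected drop of 0.3-0.4 units corresponds to roughly a 100-150% increase.

```python
# Convert a pH *drop* into a percent increase in hydrogen-ion concentration,
# using only the definition pH = -log10[H+].

def acidity_increase(delta_ph):
    """Percent increase in [H+] for a pH drop of delta_ph units."""
    return (10 ** delta_ph - 1.0) * 100.0

# ~0.1 unit drop since the industrial revolution -> roughly 26-30% more acidic.
print(round(acidity_increase(0.1)))   # ~26

# A projected drop of 0.3-0.4 units by 2100 -> roughly 100-150% more acidic,
# matching the figures quoted in the text.
print(round(acidity_increase(0.3)))   # ~100
print(round(acidity_increase(0.4)))   # ~151
```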
February 13, 2014 Revision To Rules To Decipher Color In Dinosaurs Suggests Connection Between Color And Physiology New research that revises the rules allowing scientists to decipher color in dinosaurs may also provide a tool for understanding the evolutionary emergence of flight and changes in dinosaur physiology prior to its origin. At the same time, the team unexpectedly discovered that ancient maniraptoran dinosaurs, paravians, and living mammals and birds uniquely shared the evolutionary development of diverse melanosome shapes and sizes. (Diversity in the shape and size of melanosomes allows scientists to decipher color.) The evolution of diverse melanosomes in these organisms raises the possibility that melanosome shape and size could yield insights into dinosaur physiology. Melanosomes have been at the center of recent research that has led scientists to suggest the colors of ancient fossil specimens covered in fuzz or feathers. Melanosomes contain melanin, the most common light-absorbing pigment found in animals. Examining the shape of melanosomes from fossil specimens, scientists have recently suggested the color of several ancient species, including the fuzzy first-discovered feathered dinosaur Sinosauropteryx, and feathered species like Microraptor and Anchiornis. According to the new research, color-decoding works well for some species, but the color of others may be trickier than thought to reconstruct. Comparing melanosomes of 181 extant specimens, 13 fossil specimens and all previously published data on melanosome diversity, the researchers found that living turtles, lizards and crocodiles, which are ectothermic (commonly known as cold-blooded), show much less diversity in the shape of melanosomes than birds and mammals, which are endothermic (warm-blooded, with higher metabolic rates). The limited diversity in melanosome shape among living ectotherms shows little correlation to color. The same holds true for fossil archosaur specimens with fuzzy coverings scientists have described as "protofeathers" or "pycnofibers." In these specimens, melanosome shape is restricted to spherical forms like those in modern reptiles, throwing doubt on the ability to decipher the color of these specimens from fossil melanosomes. In contrast, in the dinosaur lineage leading to birds, the researchers found an explosion in the diversity of melanosome shape and size that appears to correlate to an explosion of color within these groups. The shift in diversity took place abruptly, near the origin of pinnate feathers in maniraptoran dinosaurs. "This points to a profound change at a pretty discrete point," says author Julia Clarke of The University of Texas at Austin's Jackson School of Geosciences. "We're seeing an explosion of melanosome diversity right before the origin of flight associated with the origin of feathers." What surprised the researchers was a similarity in the pattern of melanosome diversity among ancient maniraptoran dinosaurs, paravians, and living mammals and birds. "Only in living, warm-blooded vertebrates that independently evolved higher metabolic rates do we see the melanosome diversity we also see in feathered dinosaurs," said co-author Matthew Shawkey of The University of Akron. Many of the genes involved in the melanin color system are also involved in other core processes such as food intake, the stress axis, and reproductive behaviors. 
Because of this, note the researchers, it is possible that the evolution of diverse melanosome shapes is linked to larger changes in energetics and physiology. Melanosome shape could end up offering a new tool for studying endothermy in fossil specimens, a notoriously challenging subject for paleontologists. Because the explosion of diversity in melanosomes appears to have taken place right at the origin of pinnate feathers, the change may indicate that a key shift in dinosaurian physiology occurred prior to the origin of flight. "We are far from understanding the exact nature of the shift that may have occurred," says Clarke. "But if changes in genes involved in both coloration and other aspects of physiology explain the pattern we see, these precede flight and arise close to the origin of feathers." It is possible, notes Clarke, that a diversity in melanosome shape (and correlated color changes) resulted from an increased evolutionary role for signaling and sexual selection that had a carryover effect on physiology, or that a change in physiology closely preceded changes in color patterning. At this point, she stresses, both ideas are speculative. "What is interesting is that trying to get at color in extinct animals may have just started to give us some insights into changes in the physiology of dinosaurs."
New study shows that up to 30% of the Greenland icecap melting is due to cloud cover that is helping to raise temperatures − and accelerate sea level rise. LONDON, 30 January, 2016 – Researchers have identified another piece in the climate machinery that is accelerating the melting of the Greenland ice cap. The icy hills are responding to the influence of a higher command system: the clouds. An international research team led by scientists from the Catholic University of Leuven in Belgium report in Nature Communications journal that cloud cover above the northern hemisphere’s largest single volume of permanent ice is raising temperatures by between 2° and 3°C and accounting for 20-30% of the melting. The conclusion, based on imaging from satellites and on computer simulations, is one more part of the global examination of the intricate climate systems on which human harvests, health and happiness ultimately depend. “With climate change at the back of our minds, and the disastrous consequences of global sea level rise, we need to understand these processes to make more reliable projections for the future,” says the study leader, Kristof Van Tricht, a Ph.D research fellow in Leuven’s Division of Geography and Tourism. “Clouds are more important for that purpose than we used to think. “Clouds always have several effects. On the one hand, they help add mass to the ice sheet when it snows. On the other, they have an indirect effect on the ice sheet as well. “They have an impact on the temperature, and snow and ice react to these changes by melting and refreezing. That works both ways. Clouds block the sunlight, which lowers the temperature. At the same time, they form a blanket that keeps the surface warm, especially at night.” In one sense, such research changes nothing: scientists have repeatedly confirmed that Greenland is melting at an increasing rate as a consequence of increasing concentrations of carbon dioxide and other greenhouse gases in the atmosphere, as a result of the human combustion of fossil fuels that drives global warming. “Many of the countries most susceptible to sea level rise tend to be the poorest, and don’t have the money to deal with it” Melting ice flows into the oceans, and sea level rise now seems inexorable. But what matters is the rate of rise. “Over the next 80 years, we could be dealing with another foot (0.3 metres) of sea level rise around the world,” says Tristan L’Ecuyer, Assistant professor of atmospheric and oceanic sciences at the University of Wisconsin-Madison, and one of the study’s co-authors. “Parts of Miami and New York City are less than two feet above sea level. Another foot of sea level rise, and suddenly you have water in the city.” Such conclusions are driven by data from new imaging and exploratory instruments in orbit aboard CloudSat and CALIPSO, two NASA satellites dedicated to examining the depth, thickness and composition of the planet’s cloud cover. “Once you know what the clouds look like, you know how much sunlight they’re going to reflect and how much heat from the Earth’s surface they are going to keep in,” Dr L’Ecuyer says. The snowpack melts a little during the day, and on a clear night, most of that would freeze again. But on an overcast night, the temperatures remain a little higher and less of the meltwater turns back into ice. The rest drains away to the sea. This knowledge will pay off in a more sure understanding of the rate at which sea levels will rise. 
“Many of the countries most susceptible to sea level rise tend to be the poorest and don’t have the money to deal with it,” Dr L’Ecuyer says. “This is something we have to get right if we want to predict the future.” – Climate News Network
According to Minority Rights Group International's State of the World's Minorities 2008, not only are ethnic, religious, and cultural minorities and indigenous groups suffering disproportionately from the effects of climate change, they are also less likely to benefit from humanitarian relief and more likely to be harmed by certain efforts to combat climate change. The report draws attention to the fact that the plight of minorities is often neglected in the international community's discussions of climate change. Frequently residing on marginal land, minority and indigenous groups also tend to be directly dependent on natural resources for their livelihoods, and therefore are more vulnerable to changes in the environment. Some efforts to mitigate climate change—particularly increasing the production and use of biofuels—have forced minority and indigenous communities off their land. For example, as of 2005, more than 90 percent of the land planted with oil palms in Colombia had belonged to Afro-Colombians. The report also asserts that certain humanitarian relief efforts have been deliberately discriminatory, noting the slow pace of relief to the Dalits (members of the lowest Hindu caste) after last year's floods in India. Minority and indigenous communities will continue to be at risk until policymakers seriously address these issues.
Here are three simple ideas to help your young child become a better reader.
- Keep a reading log. Purchase an inexpensive small notebook or blank journal. Date and write down the titles of books you read with your child and/or books they read on their own. Leave about a half-page between entries.
- After reading to your child, or listening to your child read, ask: "What do you think this story is about?" If your child is having difficulty getting the main idea, help them sort it out. Explain what you think the main idea of the story is and find sentences in the story to support it. This teaches your child to look for similar clues in new stories, and helps increase reading comprehension. Then, go back to the reading log and add a sentence or two about the main idea after the title entry. This will help your child easily recall the story when they reference the reading log.
- Do a "word of the week." Young children love learning "grown-up" words. For example, while reading "Mr. Popper's Penguins" to my class we came across the word "promenade." After many incorrect guesses, I told them that it was a fancy way of saying "taking a walk." Every time you encounter a new word in a story, stop and try to determine what that new word might mean. Make it your family's "word of the week." Look for opportunities to use it correctly again during the week.
Resolve to read a few minutes more with your child each day. Reading to your child is the very best thing you can do to foster a life-long love of reading!
Lithium-sulfur batteries may be the power storage devices of the future. Newly developed porous nanoparticles containing sulfur deliver optimized battery performance. From smartphones to e-bikes, the number of mobile electronic devices is steadily growing around the world. As a result, there is an increased need for batteries that are small and light, yet powerful. As the potential for the further improvement of lithium-ion batteries is nearly exhausted, experts are now turning to a new and promising power storage device: lithium-sulfur batteries. In an important step toward the further development of this type of battery, a team led by Professor Thomas Bein of LMU Munich and Linda Nazar of the University of Waterloo in Canada has developed porous carbon nanoparticles that utilize sulfur molecules to achieve the greatest possible efficiency. In prototypes of the lithium-sulfur battery, lithium ions are exchanged between lithium- and sulfur-carbon electrodes. The sulfur plays a special role in this system: under optimal circumstances, it can absorb two lithium ions per sulfur atom. It is therefore an excellent energy storage material due to its low weight. At the same time, sulfur is a poor conductor, meaning that electrons can only be transported with great difficulty during charging and discharging. To improve this battery's design, the scientists at the Nanosystems Initiative Munich (NIM) strive to generate sulfur phases with the greatest possible interface area for electron transfer by coupling them with a nanostructured conductive material. To this end, Thomas Bein and his team at NIM first developed a network of porous carbon nanoparticles. The nanoparticles have 3- to 6-nanometer wide pores, allowing the sulfur to be evenly distributed. In this way, almost all of the sulfur atoms are available to accept lithium ions. At the same time they are also located close to the conductive carbon. "The sulfur is very accessible electrically in these novel and highly porous carbon nanoparticles and is stabilized so that we can achieve a high initial capacity of 1200 mAh/g and good cycle stability," explains Thomas Bein. "Our results underscore the significance of nano-morphology for the performance of new energy storage concepts." The carbon structure also reduces the so-called polysulfide problem. Polysulfides form as intermediate products of the electrochemical processes and can have a negative impact on the charging and discharging of the battery. The carbon network binds the polysulfides, however, until their conversion to the desired dilithium sulfide is achieved. The scientists were also able to coat the carbon material with a thin layer of silicon oxide which protects against polysulfides without reducing conductivity. Incidentally, the scientists have also set a record with their new material: according to the latest data, their material has the largest internal pore volume (2.32 cm3/g) of all mesoporous carbon nanoparticles, and an extremely large surface area of 2445 m2/g. This corresponds roughly to an object with the volume of a sugar cube and the surface of ten tennis courts. Large surface areas like this might soon be hidden inside our batteries.
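The "two lithium ions per sulfur atom" figure quoted above fixes the theoretical ceiling on sulfur's specific capacity, which makes the reported 1200 mAh/g easier to interpret. The back-of-the-envelope Python calculation below is my own and is not taken from the article; it uses only Faraday's constant and the molar mass of sulfur.

```python
# Theoretical specific capacity of sulfur, assuming each sulfur atom takes up
# two lithium ions (and thus two electrons) to form Li2S.

FARADAY = 96485.0        # coulombs per mole of electrons
MOLAR_MASS_S = 32.06     # grams per mole of sulfur
ELECTRONS_PER_ATOM = 2   # two Li+ / two electrons per sulfur atom

charge_per_gram = ELECTRONS_PER_ATOM * FARADAY / MOLAR_MASS_S   # C/g
capacity_mah_per_g = charge_per_gram / 3.6                       # 1 mAh = 3.6 C

print(round(capacity_mah_per_g))   # ~1672 mAh/g theoretical ceiling

# The 1200 mAh/g initial capacity reported above is therefore roughly
# 70% of that theoretical maximum.
print(round(1200 / capacity_mah_per_g * 100))   # ~72 (%)
```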
Diffuse reflection is the reflection of light from a surface such that an incident ray is reflected at many angles rather than at just one angle as in the case of specular reflection. An illuminated ideal diffuse reflecting surface will have equal luminance from all directions which lie in the half-space adjacent to the surface (Lambertian reflectance). A surface built from a non-absorbing powder such as plaster, or from fibers such as paper, or from a polycrystalline material such as white marble, reflects light diffusely with great efficiency. Many common materials exhibit a mixture of specular and diffuse reflection. The visibility of objects, excluding light-emitting ones, is primarily caused by diffuse reflection of light: it is diffusely-scattered light that forms the image of the object in the observer's eye. Diffuse reflection from solids is generally not due to surface roughness. A flat surface is indeed required to give specular reflection, but it does not prevent diffuse reflection. A piece of highly polished white marble remains white; no amount of polishing will turn it into a mirror. Polishing produces some specular reflection, but the remaining light continues to be diffusely reflected. The most general mechanism by which a surface gives diffuse reflection does not involve exactly the surface: most of the light is contributed by scattering centers beneath the surface, as illustrated in Figure 1 at right. If one were to imagine that the figure represents snow, and that the polygons are its (transparent) ice crystallites, an impinging ray is partially reflected (a few percent) by the first particle, enters it, is again reflected by the interface with the second particle, enters it, impinges on the third, and so on, generating a series of "primary" scattered rays in random directions, which, in turn, through the same mechanism, generate a large number of "secondary" scattered rays, which generate "tertiary" rays... All these rays walk through the snow crystallites, which do not absorb light, until they arrive at the surface and exit in random directions. The result is that the light that was sent out is returned in all directions, so that snow is white despite being made of transparent material (ice crystals). For simplicity, "reflections" are spoken of here, but more generally the interface between the small particles that constitute many materials is irregular on a scale comparable with light wavelength, so diffuse light is generated at each interface, rather than a single reflected ray, but the story can be told the same way. This mechanism is very general, because almost all common materials are made of "small things" held together. Mineral materials are generally polycrystalline: one can describe them as made of a 3D mosaic of small, irregularly shaped defective crystals. Organic materials are usually composed of fibers or cells, with their membranes and their complex internal structure. And each interface, inhomogeneity or imperfection can deviate, reflect or scatter light, reproducing the above mechanism. Few materials don't follow it: among them are metals, which do not allow light to enter; gases, liquids, glass, and transparent plastics (which have a liquid-like amorphous microscopic structure); single crystals, such as some gems or a salt crystal; and some very special materials, such as the tissues which make the cornea and the lens of an eye.
These materials can reflect diffusely, however, if their surface is microscopically rough, like in a frost glass (Figure 2), or, of course, if their homogeneous structure deteriorates, as in the eye lens. A surface may also exhibit both specular and diffuse reflection, as is the case, for example, of glossy paints as used in home painting, which give also a fraction of specular reflection, while matte paints give almost exclusively diffuse reflection. Virtually all materials can give specular reflection, provided that their surface can be polished to eliminate irregularities comparable with light wavelength (a fraction of a micrometer). A few materials, like liquids and glasses, lack the internal subdivisions which give the subsurface scattering mechanism described above, so they can be clear and give only specular reflection (not great, however), while, among common materials, only polished metals can reflect light specularly with great efficiency (the reflecting material of mirrors usually is aluminum or silver). All other common materials, even when perfectly polished, usually give not more than a few percent specular reflection, except in particular cases, such as grazing angle reflection by a lake, or the total reflection of a glass prism, or when structured in certain complex configurations such as the silvery skin of many fish species or the reflective surface of a dielectric mirror. Diffuse reflection from white materials, instead, can be highly efficient in giving back all the light they receive, due to the summing up of the many subsurface reflections. Up to now white objects have been discussed, which do not absorb light. But the above scheme continues to be valid in the case that the material is absorbent. In this case, diffused rays will lose some wavelengths during their walk in the material, and will emerge colored. More, diffusion affects in a substantial manner the color of objects, because it determines the average path of light in the material, and hence to which extent the various wavelengths are absorbed. Red ink looks black when it stays in its bottle. Its vivid color is only perceived when it is placed on a scattering material (e.g. paper). This is so because light's path through the paper fibers (and through the ink) is only a fraction of millimeter long. Light coming from the bottle, instead, has crossed centimeters of ink, and has been heavily absorbed, even in its red wavelengths. And, when a colored object has both diffuse and specular reflection, usually only the diffuse component is colored. A cherry reflects diffusely red light, absorbs all other colors and has a specular reflection which is essentially white. This is quite general, because, except for metals, the reflectivity of most materials depends on their refraction index, which varies little with the wavelength (though it is this variation that causes the chromatic dispersion in a prism), so that all colors are reflected nearly with the same intensity. Reflections from different origin, instead, may be colored: metallic reflections, such as in gold or copper, or interferential reflections: iridescences, peacock feathers, butterfly wings, beetle elytra, or the antireflection coating of a lens. Looking at one's surrounding environment, the vast majority of visible objects are seen primarily by diffuse reflection from their surface. 
This holds with few exceptions, such as glass, reflective liquids, polished or smooth metals, glossy objects, and objects that themselves emit light: the Sun, lamps, and computer screens (which, however, emit diffuse light). Outdoors it is the same, with perhaps the exception of a transparent water stream or of the iridescent colors of a beetle. Additionally, Rayleigh scattering is responsible for the blue color of the sky, and Mie scattering for the white color of the water droplets of clouds. Diffuse interreflection is a process whereby light reflected from an object strikes other objects in the surrounding area, illuminating them. Diffuse interreflection specifically describes light reflected from objects which are not shiny or specular. In real life terms what this means is that light is reflected off non-shiny surfaces such as the ground, walls, or fabric, to reach areas not directly in view of a light source. If the diffuse surface is colored, the reflected light is also colored, resulting in similar coloration of surrounding objects. In 3D computer graphics, diffuse interreflection is an important component of global illumination. There are a number of ways to model diffuse interreflection when rendering a scene. Radiosity and photon mapping are two commonly used methods.
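For readers coming at this from the computer-graphics side mentioned above, an ideal diffuse (Lambertian) surface is usually modeled with Lambert's cosine law: the reflected intensity scales with the cosine of the angle between the surface normal and the light direction. The Python sketch below is a minimal illustration of that single diffuse term, not a full global-illumination or interreflection renderer; the vectors and albedo value are arbitrary examples.

```python
import math

def normalize(v):
    """Scale a 3-component vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    """Dot product of two 3-component vectors."""
    return sum(x * y for x, y in zip(a, b))

def lambert_diffuse(normal, light_dir, albedo, light_intensity=1.0):
    """Reflected intensity from an ideal diffuse (Lambertian) surface.

    The intensity falls off with the cosine of the angle between the surface
    normal and the light direction, and is clamped to zero when the light is
    behind the surface.  'albedo' is the fraction of light diffusely reflected.
    """
    n = normalize(normal)
    l = normalize(light_dir)
    return albedo * light_intensity * max(0.0, dot(n, l))

# Light hitting the surface head-on versus at 60 degrees off the normal:
up = (0.0, 0.0, 1.0)
oblique = (0.0, math.sin(math.radians(60)), math.cos(math.radians(60)))
print(lambert_diffuse(up, up, albedo=0.8))        # 0.8
print(lambert_diffuse(up, oblique, albedo=0.8))   # ~0.4
```

Global-illumination methods such as the radiosity and photon-mapping techniques named above repeatedly apply this kind of diffuse term as light bounces between surfaces.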
Coffee Supply and Demand

In economics, they say a picture is worth a thousand words. Below, you will find two scenarios. Your assignment is to discuss each situation by writing out the solutions, and then show the solutions and how you arrived at them in one or more graphs or flowcharts.

Scenario One

Supply and demand are foundational concepts in understanding economic theory. Whether you are a coffee drinker or not, you have been tasked to examine the impact of supply and demand in the coffee retail industry. A few companies probably come to mind. Pick a major coffee retailer and then contemplate what has been happening to both the supply of and demand for this product.

To begin, the following scenario describes what happened in the coffee industry at the beginning of the last decade. In the early part of the last decade, there was an overproduction of coffee. The price dropped so low that producers' costs were higher than the market price. This happened because market prices had been high in the preceding years, and the supply of coffee increased substantially. In the meantime, demand for coffee and everything else remained the same. The price of coffee, as a supply input to retailers, went down. At the same time, gourmet coffee houses began appearing and charging a premium for coffee during this period of decreasing wholesale prices. Gourmet coffee houses tend to open in high-rent areas and cater to higher-income consumers. Because of the change they created in tastes and preferences, and their higher-income market, the gourmet coffee houses enjoyed a win-win: falling wholesale prices and rising retail prices.

Explain the changes in supply and demand by creating a supply and demand curve based on the above information. In this graph, be sure to demonstrate how these changes affected the price and quantity levels of supply and demand. Based on this analysis, how were coffee retailers faring in the marketplace?

Now, fast forward to the current conditions in the coffee retail industry. Based on your research of your coffee retailer, what types of changes are occurring as they relate to supply and demand?
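As an illustrative aside (not part of the assignment text), the overproduction scenario above can be sketched numerically with made-up linear supply and demand curves: a rightward shift in supply with unchanged demand lowers the equilibrium price and raises the quantity traded, which is how the market price can fall below producers' costs.

```python
# Illustrative sketch with assumed numbers: linear demand Qd = a - b*P and
# linear supply Qs = c + d*P. Equilibrium is where Qd = Qs => P* = (a - c) / (b + d).

def equilibrium(a, b, c, d):
    """Return (price, quantity) where demand a - b*P equals supply c + d*P."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Hypothetical coffee market (units are arbitrary).
a, b = 100.0, 10.0               # demand: Qd = 100 - 10*P (demand stays the same)
c_before, c_after = 10.0, 40.0   # supply shifts right: intercept rises from 10 to 40
d = 20.0

p0, q0 = equilibrium(a, b, c_before, d)
p1, q1 = equilibrium(a, b, c_after, d)
print(f"before the supply increase: P = {p0:.2f}, Q = {q0:.1f}")
print(f"after the supply increase:  P = {p1:.2f}, Q = {q1:.1f}")
# The supply increase pushes the equilibrium price down and quantity up,
# which is why producers' costs could end up above the market price.
```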
Could moly sulfide be the key to cheaper hydrogen production?

By Grant Banks
February 9, 2014

Chemical engineers have found a 30-year-old recipe that stands to make future hydrogen production cheaper and greener. The recipe has led researchers to a way to liberate hydrogen from water via electrolysis using molybdenum sulfide – moly sulfide for short – as the catalyst in place of the expensive metal platinum.

While hydrogen is relatively abundant here on Earth, it is generally bound to either carbon or oxygen, forming methane and water respectively. Producing hydrogen currently involves liberating it from methane at a cost of between US$1 and $2 per kilogram. And the world's hunger for hydrogen continues to grow: we currently consume 55 billion kilograms of the element per year, making freeing it from methane or water big business. With numerous automakers dipping their tires in the hydrogen fuel waters, it's set to get much bigger.

The other side of the equation is the by-product of production. When hydrogen is freed from methane, the waste product is carbon dioxide, which is released into the atmosphere and furthers climate change. Producing hydrogen from water, on the other hand, produces oxygen as waste.

The limiting factor to getting hydrogen from water in the past has been the expense of electrolysis, the process whereby hydrogen atoms are liberated from their bond with oxygen in water by passing an electrical current through an electrode immersed in the water. The main expense in this process has been the use of platinum as the electrode: until now, the efficiency of platinum in catalyzing the breaking of hydrogen-oxygen bonds in water to free the hydrogen has been unmatched.

Enter moly sulfide. Since World War II, moly sulfide has been used by petroleum engineers in the refinement of oil. It was thought to be inefficient for the electrolysis of hydrogen from water because of the molecular structure at its surface. That was until Stanford Engineering's Jens Nørskov, then at the Technical University of Denmark, noticed that this structure differed at the edges of the crystal lattice. Around the edges, hydrogen production was possible because the structure has only two chemical bonds rather than the three seen elsewhere. This meant moly sulfide was capable of electrolyzing hydrogen, if only at the edges.

Next came the Eureka moment, when the researchers uncovered a 30-year-old recipe for double-bonded moly sulfide. Using this recipe, nanoclusters of double-bonded moly sulfide were synthesized and deposited on an electrically conductive sheet of graphite to form a cheap electrode alternative to platinum. Initial tests show the new technology works at an efficiency approaching that of platinum. Early cost predictions for factory-scale production range from $1.60 to $10.40 per kilogram, which at the lower end would be competitive with current methane-based methods.

"There are many pieces of the puzzle still needed to make this work and much effort ahead to realize them," said Stanford Engineering Assistant Professor Thomas Jaramillo. "However, we can get huge returns by moving from carbon-intensive resources to renewable, sustainable technologies to produce the chemicals we need for food and energy."

Findings of the research, which is a collaboration between Stanford University and Aarhus University in Denmark, were published in Nature Chemistry.
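For context (added here, not part of the original article), the standard half-reactions that water electrolysis drives in acidic solution, and the overall water-splitting reaction, can be written as:

\begin{align}
\text{cathode (reduction): } & 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{H_2} \\
\text{anode (oxidation): } & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{overall: } & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align}

The catalyst discussed in the article (platinum, or the proposed moly sulfide electrode) lowers the energy barrier of the hydrogen-evolving cathode reaction; it does not change the overall stoichiometry.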
Source: Stanford University