University of Houston Research Team Finds "Holy Grail" of Air Quality Forecasting

Ozone levels in the earth's troposphere (the lowest layer of our atmosphere) can now be forecast accurately up to two weeks in advance, a remarkable improvement over current systems, which can accurately predict ozone levels only three days ahead. The new artificial intelligence (AI) system developed in the University of Houston's Air Quality Forecasting and Modeling Lab could lead to improved ways to control high ozone problems and even contribute to solutions for climate change issues.

"This was very challenging. Nobody had done this previously. I believe we are the first to try to forecast surface ozone levels two weeks in advance," said Yunsoo Choi, professor of atmospheric chemistry and AI deep learning at UH's College of Natural Sciences and Mathematics. The findings are published online in Scientific Reports, a Nature journal.

Ozone, a colorless gas, is helpful in the right place and amount. As part of the earth's stratosphere ("the ozone layer"), it protects us by filtering out UV radiation from the sun. But when there are high concentrations of ozone near earth's surface, it is toxic to the lungs and heart. "Ozone is a secondary pollutant, and it can affect humans in a bad way," explained doctoral student Alqamah Sayeed, a researcher in Choi's lab and the first author of the research paper. Exposure can lead to throat irritation, trouble breathing, asthma, even respiratory damage. Some people are especially susceptible, including the very young, the elderly and the chronically ill.

Ozone levels have become a frequent part of daily weather reports. But unlike weather forecasts, which can be reasonably accurate up to 14 days ahead, ozone levels have been predicted only two or three days in advance – until this breakthrough.

The vast improvement in forecasting is only one part of the story of this new research. The other is how the team made it happen. Conventional forecasting uses a numerical model, meaning it is based on equations for the movement of gases and fluids in the atmosphere. The limitations were obvious to Choi and his team: the numerical process is slow, results are expensive to obtain, and accuracy is limited. "Accuracy with the numerical model starts to drop after the first three days," Choi said.

The research team used a unique loss function in developing the machine learning algorithm. A loss function guides the optimization of an AI model by mapping decisions to their associated costs. In this project, the researchers used the index of agreement, known as IOA, as the loss function for the AI model in place of conventional loss functions (a code sketch follows this article). IOA is a mathematical comparison of the gaps between what is expected and how things actually turn out. In other words, team members added historical ozone data to the trials as they gradually refined the program's reactions. The combination of the numerical model and the IOA loss function eventually enabled the AI algorithm to accurately predict outcomes of real-life ozone conditions by recognizing what had happened before in similar situations.

It is much like how human memory is built. "Think about a young boy who sees a cup of hot tea on a table and tries to touch it out of curiosity. The moment the child touches the cup, he realizes it is hot and shouldn't be touched directly. Through that experience, the child has trained his mind," Sayeed said. "In a very basic sense, it is the same with AI.
You provide input, the computer gives you output. Over many repetitions and corrections, the process is refined over time, and the AI program comes to 'know' how to react to conditions it has been presented with before. On a basic level, artificial intelligence develops in the same way the child learned not to be in such a hurry to grab the next cup of hot tea."

In the lab, the team used four to five years of ozone data in what Sayeed described as "an evolving process" of teaching the AI system to recognize ozone conditions and estimate forecasts, getting better over time.

"Applying deep learning to air quality and weather forecasting is like searching for the holy grail, just like in the movies," said Choi, who is a big fan of action plots. "In the lab, we went through some difficult times for a few years. There is a process. Finally, we've grasped the holy grail. This system works. The AI model 'understands' how to forecast. Despite the years of work, it somehow still feels like a surprise to me, even today."

Many commercial steps remain before success in the laboratory can translate into real-world service. "If you know the future – air quality in this case – you can do a lot of things for the community. This can be very critical for this planet. Who knows? Perhaps we can figure out how to resolve the climate change issue. The future may go beyond weather forecasting and ozone forecasting. This could help make the planet secure," said Choi.

Sounds like a happy ending for any good action story.

- Sally Strong, University Media Relations
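For readers curious what an IOA-based loss might look like in code, here is a minimal sketch. It assumes Willmott's standard index of agreement; the article does not give the team's exact formulation, and the function name and the toy ozone numbers below are illustrative, not taken from the paper.

```python
import numpy as np

def ioa_loss(observed, predicted):
    """Loss based on the index of agreement (IOA).

    IOA = 1 - sum((O - P)^2) / sum((|P - Obar| + |O - Obar|)^2),
    where Obar is the mean of the observations. IOA ranges from 0
    (no agreement) to 1 (perfect agreement), so minimizing
    1 - IOA pushes predictions toward the observations.
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    o_mean = observed.mean()
    squared_error = np.sum((observed - predicted) ** 2)
    potential_error = np.sum(
        (np.abs(predicted - o_mean) + np.abs(observed - o_mean)) ** 2
    )
    ioa = 1.0 - squared_error / potential_error
    return 1.0 - ioa  # a perfect forecast gives a loss of 0

# Toy example: hourly surface-ozone values in ppb (made up for illustration).
obs = [31.0, 45.0, 52.0, 60.0, 48.0]
good_forecast = [30.0, 44.0, 53.0, 58.0, 49.0]
flat_forecast = [50.0, 50.0, 50.0, 50.0, 50.0]
print(ioa_loss(obs, good_forecast))  # ~0.004, close to perfect agreement
print(ioa_loss(obs, flat_forecast))  # ~0.71, penalized for missing the variation
```

Note the design intuition: unlike a plain mean-squared error, the IOA denominator scales the penalty by how much the series varies around its mean, so a forecast that flattens out the peaks and troughs of an ozone episode scores poorly even if its average error is modest.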
Between the pre-classic (1500 BC) and post-classic (1521 AD) periods, Mesoamerican cultures held similar beliefs about death and the afterlife and practiced similar ceremonies. Foremost, the duality of the universe was central to their belief system. Death was an integral part of life. Humans were the bridge between heaven and earth – the point of contact between the divine and profane, the spiritual and material, the rational and irrational. Man was the union of opposites, responsible for maintaining the balance between the contradicting forces of the universe.

Many beliefs of the Nahua people, from the central high plain of Mesoamerica, illustrate the origins of Día de los Muertos traditions. For the Nahua people, death signified the dispersal and fragmentation of the human being. The soul, however, being a divine creation, was indestructible and was therefore allowed into the afterlife. There were special destinations for those who died in battle or of water-related causes, for women who died in childbirth, and for babies who died prematurely. Most people, however, went to Chicunamictlán (Land of the Dead), which was ruled by Mictlantecuhtli and Mictecacihuatl. Chicunamictlán consisted of nine levels, which the dead traversed in a four-year voyage, meeting dangerous challenges at each level before reaching Mictlán, their final resting place. As a point of honor, families of the deceased provided their dead with the tools, food and water necessary for the journey. This journey and the provisions for the dead are the foundation for the ofrendas (offerings) of food, water and symbolic items placed on altars during Día de los Muertos celebrations today.

Christian evangelization of pagan Europe took around a thousand years. Pagans resisted the Roman Catholic Church and its insistence that pagan rituals associated with death be erased. Pagans honored the dead with ceremonies at the spring and autumn equinoxes, much as Mesoamerican cultures honored the dead each autumn. For these ceremonies families built bonfires around gravesites, brought offerings of food and wine, and danced and sang throughout the night. The Church, which celebrated mass in catacombs around the graves of martyrs and saints, had not yet developed funerary rites and rituals of its own. Pope Boniface IV (7th century) established All Saints Day in May to honor Catholic saints and martyrs. Pope Gregory III (8th century) subsequently moved this feast to November 1st. Finally, Pope Urban II (11th century) established All Souls Day on November 2nd for the dead baptized as Christians. In contemporary Mexico, November 1st and 2nd are the dates for Día de los Muertos festivities.

The Church also conceded to pagan traditions by unofficially accepting certain pagan rituals for All Souls Day. Medieval Spanish traditions included taking wine and specially prepared pan de ánimas (soul bread) to graves covered in flowers and lighting oil lamps so that souls could find their way back to their earthly homes. Other traditions in northern Spain included setting a table with the finest tableware and a special meal that nobody ate until the following day, or leaving a bed with fresh linens empty in the belief that the deceased used it to rest before the long journey back to paradise.
NCERT Solutions for Class 3 EVS Chapter 10: What is Cooking. English and Hindi Medium solutions from the new NCERT textbook Environmental Studies – Looking Around, updated for the 2022-2023 session. Class 3 EVS Chapter 10 helps us learn about the different utensils used in the kitchen. All the questions given in the chapter are solved in a simple format using pictures and text. Get NCERT Grade 3 EVS Chapter 10 Solutions here for free.

NCERT Solutions for Class 3 EVS Chapter 10

NCERT Exercises Answers for Class 3 EVS Chapter 10

Look at the picture. Colour the spaces which have dots in them. What do you see?
Pan, Ladle, Pot, Mesh Skimmer, Roti Tawa, Frying Pan, Cooker, Turning Spatula.

What are utensils made of?
Steel, Iron, Copper, Glass, Bronze and Earth (earthen pots), etc.

Ask some elderly people what kinds of utensils were used earlier. What were they made of?
In earlier times utensils were bigger than they are today. There were beautiful carvings on them. They were made of bronze, brass and copper.

We do not cook all the things that we eat. Find out which things we eat raw and which ones we cook before eating. Which are the things we eat both cooked and raw? Fill in the table given below.

| Things that are eaten raw | Things that are eaten cooked | Things that are eaten both raw and cooked |

Go to the kitchen and observe something being cooked. What all was done to cook it? Write the sequence. Don't forget to write the name of the item being cooked. Look at the notebooks of your classmates and discuss in a group.
Name of the item: Rice
- Take a bowl of rice grains.
- Wash them with plain water.
- Take about two glasses of water in a pan.
- Put the rice grains in the pan.
- Boil on the gas stove.
- Cook till the rice is done.

Given below are different methods of cooking. Write the names of two things cooked by each of these methods. Add some more methods of cooking to the list and give examples too.

| Method of Cooking | Name of Things | Name of Things |

Identify the pictures given below and write their names. What produces heat in each of them? Match the picture with the list. Matching can be with more than one thing also.

Make and Eat
1. Soak whole moong seeds overnight in water. In the morning wrap the soaked moong in a wet cloth and cover it. Take it out after a day. Do you find any difference?
Ans. Yes, the moong seeds germinate after a day. Add sliced onions, tomatoes, salt and lemon juice to the moong and mix. Share it with your classmates.
2. Which other things can you prepare without cooking? Write their names and the method of preparing them. One example is given below.
1. Lemon water: Mix sugar in water → Add lemon juice → Strain it → Lemon water is ready
2. Fruit salad: Chop some fruits into small pieces → Sprinkle salt, sugar and chaat masala → Fruit chaat is ready
3. Lassi: Take one cup of curd → Add sugar → Add crushed ice → Mix well using a mixer → Lassi is ready

Extra MCQs with answers and explanation
- Cooking is a process which makes the food
- Cooking utensils are made up of
- Some food items like __________ are eaten raw.
- A ladle is a

Why do many utensils have wooden handles?
Handles of cooking utensils are made of wood because wood is an insulator of heat and so does not get heated while cooking. If they were made of iron, they would get heated and we would not be able to hold them. Cooking pots are thus provided with wooden or plastic handles because these are bad conductors of heat. This prevents the heat of the pan from transferring to our hands.
Thus, we can hold the utensils comfortably with our hands.

- Roasting is a method of cooking in which
- We bake different food items like
- Food is fried to make it
- Which out of the following gives more smoke from the flame?

How is chapatti made?
There are several things to be done for this:
- Taking out flour in a utensil,
- Kneading it into a dough,
- Making small balls of the dough,
- Rolling out the balls, and
- Then cooking them on fire.

Name the different utensils used in the kitchen and their uses.
The different utensils used in the kitchen are as follows:
- Tongs: They are used to remove a hot pan from the stove.
- Wok: It is used for deep frying.
- Pot: It is used for storing food and water.
- Saucepan: It is used for cooking different types of food items.
- Pressure cooker: It is used to cook food faster than other utensils.
- Ladle: It is used to stir and serve food.
- Rolling pin: It is used to roll dough to make chapatti.

Class 3 EVS Chapter 10 Extra Important Question Answers

What is steaming? Give a few examples of food made by steaming.
Steaming is a moist-heat method of cooking that works by boiling water, which vaporizes into steam; it is the steam that carries heat to the food and cooks it. This is often done with a food steamer, a kitchen appliance made specifically to cook food with steam, but food can also be steamed in a wok. Steaming is considered a healthy cooking technique that can be used for many kinds of foods. A few examples of steamed food items are idli, momos and dhokla.

Name the different forms of fuels used for cooking.
The different fuels used for cooking are wood, charcoal, cow-dung cakes, kerosene, biogas, LPG, etc. Among all these, biogas and LPG are the least polluting. Liquid fuels include kerosene, methanol, ethanol and plant oil, whilst renewable gaseous fuels include wood gas and biogas. The fossil gaseous fuels comprise liquefied petroleum gas (LPG) and natural gas.

How does the fuel used in our homes affect the environment?
When we burn oil, coal and gas, we don't just meet our energy needs; we also pollute the environment and worsen the global warming crisis. Fossil fuels produce large quantities of carbon dioxide when they are burned. Carbon emissions trap heat in the atmosphere and lead to climate change.

Which cooking fuel is most environment-friendly and why?
For cooking, natural gas, methane and LPG are used, as they are environment-friendly and cause little pollution. Of these fuels, natural gas is very efficient. Gas burns cleanly with no soot or ash and therefore produces lower emissions. It is considered the most environmentally friendly fossil fuel.

How does cooking food in a pressure cooker save time and fuel?
Cooking food in a pressure cooker saves time and fuel because, under pressure, heat builds faster and the temperature is maintained better, so cooking time is reduced. The increased pressure inside the cooker raises the boiling point of water above 100°C, so food cooks at a higher temperature and therefore cooks faster. A pressure cooker can save as much as two-thirds of your energy use: the higher temperature shortens the cooking time, so less fuel is consumed and food is cooked at reduced cost.

What is kneading?
Kneading dough is a simple process which involves pushing the dough away from you with the heel of your palm, folding it over itself with your fingers, and pulling it back. This repeated push-pull cross-knits the protein strands, developing a strong gluten network.

Give reasons why fruits and vegetables should not be washed after cutting.
If we wash fruits and vegetables after cutting them, many useful nutrients that are soluble in water may get drained away with the water and be lost from our diet. That is why we should not wash fruits and vegetables after cutting.

Is Chapter 10 of the Class 3 EVS book about preparing dishes?
Chapter 10 of Class 3 EVS shares information about preparing the Indian flatbread called "roti". The exercise also covers the development of the cooking process, all the way from the chulha to the gas stove.

How do you think Chapter 10 could be an interesting topic for students in Class 3 EVS?
Students are always keen to try something new, and almost all of them watch their parents or guardians prepare food for them. This chapter will make them more interested in learning to prepare something and sharing it with their friends.

Which of the activities given in Chapter 10 do you like the most?
At the end of Chapter 10 of Class 3 EVS there is an activity whose reward you get to share with your friends. In that activity you prepare a drink similar to lemonade; I suggest trying a mango drink with milk and ice and then sharing it with friends. This is the activity I like the most.
Parts of the moon have stable temperatures fit for humans, researchers find

Hoping to live on the moon one day? Your chances just got a tiny bit better. The moon has pits and caves where temperatures stay at roughly 63 degrees Fahrenheit, making human habitation a possibility, according to new research from planetary scientists at the University of California, Los Angeles. Although much of the moon's surface fluctuates from temperatures as high as 260 degrees Fahrenheit during the day to as low as 280 degrees below zero at night, researchers say these stable spots could transform the future of lunar exploration and long-term habitation. The shadowed areas of these pits could also offer protection from harmful elements, such as solar radiation, cosmic rays and micrometeorites. For perspective, a day or night on the moon is equivalent to a little over two weeks on Earth — making long-term research and habitation difficult with such extremely hot and cold temperatures.

Some pits are likely collapsed lava tubes

About 16 of the more than 200 discovered pits most likely come from collapsed lava tubes — tunnels that form from cooled lava or crust, according to Tyler Horvath, a UCLA doctoral student and head of the research. The researchers think overhangs inside these lunar pits, which were first discovered in 2009, could be the reason for the stable temperature. The research team also includes UCLA professor of planetary science David Paige and Paul Hayne at the University of Colorado Boulder. Using images from NASA's Diviner Lunar Radiometer Experiment to determine the fluctuation of the moon's pit and surface temperatures, the researchers focused on an area about the size of a football field in a section of the moon called the Mare Tranquillitatis. They used modeling to study the thermal properties of the rock and lunar dust in the pit.

"Humans evolved living in caves, and to caves we might return when we live on the moon," said Paige in a UCLA press release. There are still plenty of other challenges to establishing any sort of long-term human residence on the moon — including growing food and providing enough oxygen. The researchers made clear that NASA has no immediate plans to establish a base camp or habitations there.

Copyright 2022 NPR. To see more, visit https://www.npr.org.
Hello, I'm Dr. Katharine Price, an oncologist at Mayo Clinic. In this video, we'll cover the basics of oral cancer: What is it? Who gets it? The symptoms, diagnosis and treatment. Whether you're looking for answers for yourself or someone you love, we're here to give you the best information available.

Oral cancer, also called mouth cancer, forms in the oral cavity, which includes all the parts of your mouth that you can see if you open wide and look in the mirror: your lips, gums, tongue, cheeks, and the roof and floor of the mouth. Oral cancer forms when cells on the lips or in the mouth mutate. Most often it begins in the flat, thin cells that line your lips and the inside of your mouth. These are called squamous cells. Small changes to the DNA of the squamous cells make the cells grow abnormally. These mutated cells accumulate, forming a tumor that grows in the mouth and often spreads to lymph nodes in the neck. Oral cancer is curable if detected at an early stage. And like other cancers, a large amount of effort has been dedicated to determining causes and improving treatments.

The average age of those diagnosed with oral cancer is 63, and just over 20% of cases occur in patients younger than 55. However, it can affect anyone. There are several known risk factors that could increase your risk of developing oral cancer. If you use any kind of tobacco (cigarettes, cigars, pipes, chewing tobacco, and others), you're at a greater risk. Heavy alcohol use also increases the risk. Those with HPV, the human papillomavirus, have a higher chance of developing oral cancer as well. Other risk factors include a diet that lacks fruits and vegetables, chronic irritation or inflammation in the mouth, and a weakened immune system.

Oral cancer can present itself in many different ways, which could include: a lip or mouth sore that doesn't heal, a white or reddish patch on the inside of your mouth, loose teeth, a growth or lump inside your mouth, mouth pain, ear pain, and difficulty or pain while swallowing, opening your mouth or chewing. If you're experiencing any of these issues and they persist for more than two weeks, see a doctor. They'll be able to rule out more common causes first, like an infection.

To determine if you have oral cancer, your doctor or dentist will usually perform a physical exam to inspect any areas of irritation, such as sores or white patches. If they suspect something is abnormal, they may conduct a biopsy, where they take a small sample of the area for testing. If oral cancer is diagnosed, your medical team will then determine how far along the cancer is, or the stage of the cancer. The stage of the cancer ranges from 0 to 4 and helps your doctor counsel you on the likelihood of successful treatment. In order to determine the stage, they may perform an endoscopy, where doctors use a small camera to inspect your throat, or they may order imaging tests, like CT scans, PET scans, and MRIs, to gather more information.

What your treatment plan looks like will depend on your cancer's location and stage, as well as your health and personal preferences. You may have just one type of treatment, or you may need a combination of cancer treatments. Surgery is the main treatment for oral cancer. Surgery generally means removing the tumor and possibly lymph nodes in the neck. If the tumor is large, reconstruction may be required. If the tumor is small and there's no evidence of spread to lymph nodes, surgery alone may be enough treatment.
If the oral cancer has spread to lymph nodes in the neck or is large and invading different areas of the mouth, more treatment is required after surgery. This could include radiation, which uses high-power beams of energy to target and destroy the mutated cancerous cells. Sometimes chemotherapy is combined with the radiation. Chemotherapy is a powerful cocktail of chemicals that kills the cancer. Immunotherapy, a newer treatment which helps your immune system attack the cancer, is also sometimes used. Learning you have oral cancer can be difficult. It can leave you feeling helpless. But remember, information is power when it comes to your health. This disease is survivable - now more than ever. Be informed. Take control of your health. And partner with your medical team to find a treatment that's right for you. If you'd like to learn even more about mouth cancer, watch our other related videos or visit mayoclinic.org. We wish you well.
Reactive ion etching

Reactive ion etching (RIE) is a high-resolution mechanism for etching materials. Samples are first masked by one of many patterning processes. They are then placed into a vacuum chamber. Gases are introduced into the chamber and activated by RF or microwave power to create a plasma. This plasma consists of a wide variety of reactive species, ions, and electrons. A negative DC bias is induced at the substrate by the free electrons. This bias accelerates ions in the plasma perpendicular to the sample surface, giving the etch a directional, physical driving force. Generally some form of passivating component is incorporated so that the etch proceeds only where energetic ions strike the surface. A well-tuned etch can be very anisotropic, in contrast to wet etches, which are typically isotropic.

RIE is a key enabling technology allowing IC processes to continue to approach feature sizes of a few nanometers. The same technology may be used as a machining process for nano- and micro-scale devices. As such, RIE is also a key enabling technology for MEMS and nanofabrication.

Technology Overview/Workshop Presentation

Reactive-ion etching (RIE) is an etching technology used in microfabrication. It uses chemically reactive plasma to remove material deposited on wafers. The plasma is generated under low pressure (vacuum) by an electromagnetic field. High-energy ions from the plasma attack the wafer surface and react with it.
Directions: Say “point to” ... - A letter you know - A word you know - The letter with which your first (last) name begins - A letter in your name - Your favorite letter - The letter with which your friend’s name begins - The letter ______ - The letter with the sound ______ - The letter before ______ - The letter after ______ - The letter between ______ and______ - A lowercase letter______ - An uppercase letter______ - A small word - A medium-sized word - A large word - A word with one (two, three, etc.) letter(s) - A word that begins with ______ - A word that ends with ______ - A word with one, (two, three, etc.) syllable(s) - The word ______ - A word that means about the same as ______ - A word that is the opposite of______ - A word that rhymes with______ - A compound word - A color word - An action word - The name of a person, place, or thing - A word with the ending (ly, ed, etc.) - The first word on the page we are going to read - The last word on the page we are going to read Make labels, using 3 x 5 index cards or sticky notes to label items in your home such as door, chair, window, lamp, plant, drawer, etc., and tape or display them at eye level for young learners. (My 2-year-old learned to read at a very early age when I did this). Use magnetic letters on the fridge, table or on a magnetic board. Children love to move the magnetic letters around to make words. Scramble the letters of the words they know and have them put them back together as fast as they can. Finally, run your finger from left to right under the words as you read books together. This allows your child to see the sequence of letters across the words at the same time he/she hears them. Enjoy your time with your young reader. Anne Burns is a retired reading specialist and trained Reading Recovery teacher. SHARE YOUR BEST LEARNING FROM HOME TIPS AND RESOURCES With the arrival of the new school year, many districts throughout the region have started the year remotely or are splitting time between the classroom and home. To help connect students, parents and teachers with additional resources, every day in Life we will provide an educational lesson from our partners at News In Education. We also invite teachers or educational community groups throughout the region to share ideas for lessons or fun educational activities from home for K-12 students, as well as tips and tricks for successful learning from home including getting organized, creating routines, setting up effective learning workspaces, plus fun ideas for exercise breaks, art and craft projects, nature play, science experiments you can safely do at home, nutritious lunch and snack ideas and more. To submit a guest article for publication, please send in an article no more than 500 words, along with a related photo to [email protected] with the subject line Learning from Home. If you have questions or want to learn more about this project and how you can help, please contact Life section editor Michelle Fong at [email protected].
It's officially National Pollinator Week and we are ecstatic! Plant Sentry™ prides itself on helping make the Earth a better place for growing and sharing plants, so naturally, Pollinator Week is right up our alley! While underestimated in their value and importance, pollinators include around 200,000 species. Besides insects like bees, butterflies and beetles, there are 1,000 vertebrates on the list, such as birds, bats, and other small mammals. Because of their impact, pollinators are some of the most important species on the planet.

The Key to Pollinating

A large portion of the pollinator population is made up of what are known as keystone species. Keystone species are essential to the environmental survival of their habitats. Many times keystone species become compromised when hunting, habitat degradation, and agricultural pursuits alter their ecosystems in ways the species cannot keep up with. If a keystone species can no longer survive in its habitat, then the ecosystem it supports can no longer survive either.

Pollination Can Get Batty

Lemurs aren't the only pollinators at risk in our modern world. Due to recent innovations in wind energy, bat pollination is also at risk. Wind farms were responsible for killing somewhere between 650,000 and 1.3 million bats between 2000 and 2011. The bat species seen as most at risk are the two federally endangered species, the Hawaiian hoary bat (Lasiurus cinereus semotus) and the Indiana myotis (Myotis sodalis). It is uncertain what exactly attracts the bats to the wind turbines, but scientists are working diligently to figure it out.

The Buzzzziest Pollinator

In recent years, bees have finally been recognized for all of their hard work! The efforts of bee pollination add up to approximately $235-$577 billion USD in global food production annually. Bees are responsible for the pollination of crops such as apples, broccoli, cranberries, melons, and sometimes cherries and blueberries. A combination of habitat loss, pollutants, climate change, the Varroa mite, bacterial diseases, travel, and irresponsible chemical usage are all contributing factors that make it difficult for bee populations to survive in high numbers. The common solution many humans turn to is becoming a honey beekeeper in hopes of boosting the population. But the honey bee isn't the only bee: there are roughly 25,000 other bee species on our planet. Depending upon the food source and the environment of the wild bees, the honey bee can be invasive. While many of these species look similar to the honey bee, the ecosystems they sustain are often drastically different.

Your Impact & Responsibility

As the human population continues to grow, there is an ever-increasing need for more food. Part of the critical role that pollinators play is pollinating a number of crops for humans. Due to the worldwide decline of pollinators, it was reported in 2016 that farmers in China had turned to pollination by hand. To achieve the same pollination of their pear trees that had once been provided by bees and other insects, people were paid to use a brush to transfer pollen from male to female trees. It's estimated that a human can pollinate only 5-10 trees a day, merely a fraction of the number bees can cover. This research led to the question: "What if this is our future normal?" The idea that someday swarms of insects will no longer exist to fulfill the task of pollination raises many red flags.
Beyond the scope of agricultural needs is the concept of ecosystem structure. As mentioned earlier, entire environments depend on the role of keystone species, and on the species they affect, in order for ecosystems to thrive and survive. The policies and procedures that we as humans have used for centuries may have been enough in the past, but how humans interact with our planet and our environments going forward will depend on the changes we implement and the care we take to preserve and restore what we have damaged.

As attention has continued to be paid to the decline of the pollinator population, humans are more eager than ever to lend a helping hand in rehabilitating these species. Knowing where to start in helping the pollinator population can seem challenging at first, but organizations such as the Pollinator Partnership and the National Wildlife Federation have developed programs that can locate pollinator plants suited to your area. Growing landscapes for bees and other pollinators is a great way to help recover the loss of pollinators without taking on too much work. Pollinator plants attract pollinators and give them the sustenance they need to keep moving. While science is continually advancing, it will be the responsibility of communities to implement the checks and balances necessary to keep our pollinators alive.

Plant Sentry™ practices this belief in the services we offer our clients, helping mitigate pests and protect against disease. It's not always easy to decide what the right move is, but with our help, the load feels a lot lighter. For more information on how you can help protect pollinators and your plants, be sure to visit the Our Services page to learn more about our practices. If you have questions about or interest in our services, Contact Us for more information.

References:
- Black and White Ruffed Lemur. 17 Feb. 2020, lemur.duke.edu/discover/meet-the-lemurs/black-white-ruffed-lemur/.
- Bats & Wind Energy. www.batcon.org/resources/for-specific-issues/wind-power.
- "U.S. Forest Service." Forest Service Shield, www.fs.fed.us/wildflowers/pollinators/animals/bats.shtml.
- "Shrinking Bee Populations Are Being Replaced by Human Pollinators." Global Citizen, www.globalcitizen.org/en/content/life-without-bees-hand-human-pollination-rural-chi/.
- Wojcik, Victoria A., Lora A. Morandin, Laurie Davies Adams, and Kelly E. Rourke. "Floral Resource Competition Between Honey Bees and Wild Bees: Is There Clear Evidence and Can We Guide Management and Conservation?" Environmental Entomology, Volume 47, Issue 4, August 2018, Pages 822–833, https://doi.org/10.1093/ee/nvy077.
- Farah, Troy. "While We Worry About Honeybees, Other Pollinators Are Disappearing." Discover Magazine, 3 Aug. 2018.
Have you been wondering how scopes and closures work in Python? Maybe you've just heard about object.__closure__, and you'd like to figure out what exactly it does. In this Code Conversation video course, you'll use the debugger Thonny to walk through some sample code and get a better understanding of scopes and closures in Python.

In this Code Conversation video course, you'll:
- Clarify code by refactoring it with descriptive names
- Learn how functions access variables in local and nonlocal scopes
- Understand how inner and outer function calls open and close their own scopes

You'll also take a deep dive into the inner workings of Python by inspecting dunder objects to find out how Python handles and stores variables. To get the most out of this Code Conversation, you should be familiar with scopes and variables in Python. You should also be comfortable defining your own functions and distinguishing between inner and outer functions. For more information on the concepts covered in this lesson, you can check out:
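The course's own sample code isn't reproduced here, but a minimal, self-contained example of the kind of closure such a walkthrough inspects might look like this; the make_counter name and the values are illustrative, not taken from the course.

```python
def make_counter(start=0):
    count = start  # lives in the outer function's local scope

    def increment():
        nonlocal count  # rebind count in the enclosing (nonlocal) scope
        count += 1
        return count

    return increment


counter = make_counter(10)
print(counter())  # 11
print(counter())  # 12

# The inner function keeps the enclosing variable alive in a closure cell,
# which is exactly what the __closure__ dunder attribute exposes:
print(counter.__code__.co_freevars)          # ('count',)
print(counter.__closure__)                   # (<cell at 0x...: int object at 0x...>,)
print(counter.__closure__[0].cell_contents)  # 12
```

Stepping through this in a debugger like Thonny shows each call to increment opening its own local scope while count survives between calls inside the closure cell, even though make_counter has long since returned.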
A Christmas Celebration

What is Christmas all about?

The History of Christmas

Christmas is the most recent in origin of the Christian festivals. The term "Christmas" came about by contracting "Christ's mass." The actual celebration of Christmas did not come about until the Middle Ages. At that time, Christians were more likely to celebrate a person's death than the person's birthday. As a result, the church had an annual observance of the death of Christ and also honored many of the early martyrs on the day of their death. Before the fourth century, churches in the East—Egypt, Asia Minor, and Antioch—observed Epiphany, the manifestation of God to the world. The Epiphany celebrated Christ's baptism, His birth, and the visit of the magi.

The controversy regarding whether Jesus was truly God or a created being arose during the fourth century. This controversy resulted in an increased emphasis on the doctrine of the incarnation. The early Church sought to affirm that "the Word became flesh and dwelt among us" (John 1:14). The urgency to proclaim the incarnation appears to have been an important factor in the spread of the celebration of Christmas during the early part of the fourth century. The practice started in the Church in Rome and spread, so that most parts of the Christian world observed Christmas by the end of the century.

The exact date of the birth of Christ is not known; however, the December 25 date was chosen as much for practical reasons as for theological ones. Throughout the Roman Empire, various festivals were held in conjunction with the winter solstice. In Rome, the Feast of the Unconquerable Sun celebrated the beginning of the return of the sun. As Christianity expanded throughout the Roman Empire, the Church either had to suppress the pagan festivals or transform them. The winter solstice seemed an appropriate time to celebrate Christ's birth. Thus, the festival of the sun became a festival of the Son, the Light of the world.

Christ in One

Often we forget the true meaning of Christmas. We forget that Jesus, the one beloved Son of God, came into the world to die for the sins of men. The Jews of that time were looking for the Messiah to free them from the oppression of the Romans. In addition, their tradition did not allow them to consider Jesus to be the Messiah. Their logic would go like this:
- Anyone who hangs on a cross is cursed.
- Jesus hung on a cross.
- Jesus is cursed.
- Therefore, Jesus is not the Messiah.

The early Jews did not understand that Jesus would come once to suffer and to die (Isa 53). However, many did understand the purpose of the first advent of Christ. These early Christians were considered an odd sort by the "more sane" people of the time. In the Letter to Diognetus, which dates back to the second century A.D., an anonymous writer describes a strange people who are in the world but not of the world.

"Christians are not differentiated from other people by country, language, or customs; you see, they do not live in cities of their own, or speak some strange dialect… They live in both Greek and foreign cities, wherever chance has put them. They follow local customs in clothing, food, and other aspects of life. But at the same time, they demonstrate to us the unusual form of their own citizenship.

"They live in their own native lands, but as aliens… Every foreign country is to them as their native country, and every native land as a foreign country.

"They marry and have children just like everyone else, but they do not kill unwanted babies.
They offer a shared table, but not a shared bed. They are passing their days on earth, but are citizens of heaven. They obey the appointed laws and go beyond the laws in their own lives.

"They love everyone, but are persecuted by all. They are put to death and gain life. They are poor and yet make many rich. They are dishonored and yet gain glory through dishonor. Their names are blackened and yet they are cleared. They are mocked and bless in return. They are treated outrageously and behave respectfully to others.

"When they do good, they are punished as evildoers; when punished, they rejoice as if being given new life. They are attacked by Jews as aliens and are persecuted by Greeks; yet those who hate them cannot give any reason for their hostility."

The word "Christian" has lost much of its meaning in our culture. It means "Christ in one." As we communicate the Good News of Jesus Christ during this Christmas season, we would do well to remember that Christians are set apart by God for His purpose. We celebrate Christmas because Christ is all in one: He is the Savior; He is the example of how to live; He is the One who came to free us from the penalty of our sin. We should celebrate Christmas for no other reason than that Jesus came to earth for us. God could give us no greater gift. We could give God no greater gift than to live our lives as thank-you letters to the God who loved us so much (Rom 5:8).

Can This Be Christmas?

This Christmas season let's consider the following poem:

What's all this hectic rush and worry?
Where go these crowds who run and scurry?
Why all the lights—the Christmas trees?
The jolly "fat man," tell me please!
Why, don't you know? This is the day
For parties and for fun and play;
Why this is Christmas!

So this is Christmas, do you say?
But where is Christ this Christmas day?
Has He been lost among the throng?
His voice drowned out by empty song?
No. He's not here—you'll find Him where
Some humble soul now kneels in prayer,
Who knows the Christ of Christmas.

But see the many aimless thousands
Who gather on this Christmas Day,
Whose hearts have never yet been opened,
Or said to Him, "Come in to stay."
In countless homes the candles burning,
In countless hearts expectant yearning
For gifts and presents, food and fun,
And laughter till the day is done.
But not a tear of grief or sorrow
For Him so poor He had to borrow
A crib, a colt, a boat, a bed
Where He could lay His weary head.

I'm tired of all this empty celebration,
Of feasting, drinking, recreation;
I'll go instead to Calvary.
And there I'll kneel with those who know
The meaning of that manger low,
And find the Christ—this Christmas.

I leap by faith across the years
To that great day when He appears
The second time, to rule and reign,
To end all sorrow, death, and pain.
In endless bliss we then shall dwell
With Him who saved our souls from hell,
And worship Christ—not Christmas!

– M.R. DeHaan, M.D., Founder, Radio Bible Class

How are you celebrating Christmas this year? Please join with me as we put "Christ" back into Christmas.
This course is an introduction to the basic concepts of programming languages, with a strong emphasis on functional programming. The course uses the languages ML, Racket, and Ruby as vehicles for teaching the concepts, but the real intent is to teach enough about how any language “fits together” to make you more effective programming in any language -- and in learning new ones. This course is neither particularly theoretical nor just about programming specifics -- it will give you a framework for understanding how to use language constructs effectively and how to design correct and elegant programs. By using different languages, you will learn to think more deeply than in terms of the particular syntax of one language. The emphasis on functional programming is essential for learning how to write robust, reusable, composable, and elegant programs. Indeed, many of the most important ideas in modern languages have their roots in functional programming. Get ready to learn a fresh and beautiful way to look at software and how to have fun building it. The course assumes some prior experience with programming, as described in more detail in the first module. The course is divided into three Coursera courses: Part A, Part B, and Part C. As explained in more detail in the first module of Part A, the overall course is a substantial amount of challenging material, so the three-part format provides two intermediate milestones and opportunities for a pause before continuing. The three parts are designed to be completed in order and set up to motivate you to continue through to the end of Part C. The three parts are not quite equal in length: Part A is almost as substantial as Part B and Part C combined. Week 1 of Part A has a more detailed list of topics for all three parts of the course, but it is expected that most course participants will not (yet!) know what all these topics mean.
There is a long tradition among scientists of comparing scientific models, images, theories and experiments to works of art. We are told that Albert Einstein’s theory of relativity, the double helix structure of DNA molecules, and images of colliding particles are beautiful, and that, just like works of art, they evoke in us aesthetic responses. Scientists themselves, like artists, are praised for their creativity, originality and aesthetic sensibility. Einstein, for one, regarded the American physicist Albert Michelson, who with Edward Morley co-designed the experiment to measure the velocity of Earth relative to the ether, as ‘the Artist in Science’, claiming that not only did Michelson care for devising a good experiment, he wanted his creations to be beautiful too. So what does it mean for an experiment to be beautiful? Let us start with the example of Foucault’s pendulum, which allows us to illustrate three important ways in which an experiment can be beautiful. In 1851, the French physicist Léon Foucault devised a way of demonstrating that Earth rotates on its axis. He hung a heavy metal weight from a long cable fixed to the inside of the dome of the Panthéon in Paris. When he set this pendulum in motion, it slowly swung back and forth, tracing lines in sand placed beneath it. After some time, it became clear that the traces were not all in one line because of Earth’s rotation beneath the pendulum. This pendulum was beautiful in an obvious visual sense – as a kind of kinetic sculpture with an almost mesmeric, slow, back-and-forth movement. Scientific equipment can indeed be beautiful, and often is. Take, for instance, chemical retorts, prisms, microscopes and complicated structures built in laboratories. Museums house beautifully crafted scientific instruments and equipment from the past because we can appreciate their aesthetic features. The phenomena we study in the experimental setup – such as copper sulphate crystals, rainbows produced by prisms, and the microscopic structure of cells – can be beautiful too. But what made Foucault’s pendulum a beautiful experiment is more than its visual beauty. Far more importantly, it showed the effects of Earth’s rotation, something important that hadn’t been demonstrated before, and it did this in a clever, imaginative and elegant way. The pendulum itself was beautiful, but the ultimate beauty of the experiment is a combination of its significance and its design. It’s hard to identify the most beautiful experiment in science, but one of the most beautiful in biology was designed by the American geneticists Matthew Meselson and Franklin Stahl to discover how DNA replicates. What is it about this experiment that makes it beautiful? To understand this, first let us reflect on what the experiment set out to do. It was performed only a few years after one of the most exciting scientific breakthroughs of the 20th century: the discovery in 1953 of the double helix structure of DNA. With the structure of DNA established, the next question the scientific community needed to address was how DNA replicated. 
There were already three different hypotheses proposed: 1) conservative replication, according to which the parent DNA molecule remains intact and an entirely new daughter molecule is made; 2) semi-conservative replication, according to which one strand of the parent DNA is conserved in each daughter DNA; and 3) dispersive replication, according to which the parent DNA chains break at intervals, with the parental segments combining with new segments to form the daughter DNA. Meselson and Stahl set out to answer the question of how DNA replicates by designing an experiment that would decisively discriminate between the three proposed hypotheses. In 1958, the two scientists published the results of that experiment. To determine the correct hypothesis on DNA replication, they fed bacteria nutrients containing a heavy nitrogen isotope that gets incorporated into the bacterial molecules through metabolising. They studied the genetic material through the next generations, knowing the rates at which bacteria multiply. Instead of using radioactive labelling of the DNA strands, which was common at the time, they decided to use density, and separated the heavy DNA from the light using density-gradient centrifugation. By studying the ratios of light, heavy and hybrid DNA that they obtained, Meselson and Stahl were able to eliminate the conservative and dispersive replication hypotheses, confirming that DNA in fact replicates semi-conservatively. One of the reasons why this experiment is celebrated in science is that it is an example of an experimentum crucis, or a 'crucial experiment'. Such experiments are important; they deliver a decisive answer to a question. Usually, an experiment is considered crucial when it confirms a hypothesis among alternative competitors, and thus settles a dispute. It is from this ability to deliver a decisive answer to the question of how DNA replicates that the Meselson-Stahl experiment can be considered to derive its beauty. The results are taken to have been clear and straightforward, to have decisively spoken in favour of one of the contender hypotheses, and to have dismissed the alternatives. In his book Beauty and the Beast (1999), the German historian of science Ernst Peter Fischer argues that 'the Meselson-Stahl experiments speak … for themselves and made all further commentary superfluous'. In his history Meselson, Stahl, and the Replication of DNA (2001), Frederic Lawrence Holmes further argues that the simplicity and clarity of the result make this experiment easily presentable even to students, serving as an exemplar experiment for science education. The above considerations focus on what the Meselson-Stahl experiment did, the significance of its results, but a further aspect of its aesthetic value is not only what this research taught us but also how it did so, and this latter consideration concerns its design. The Meselson-Stahl experiment is regarded as having an elegant and apt design that is optimally suited for the purpose it set out to achieve. Following the reasoning behind their experimental setup reveals the ingenious beauty of the design that the scientists created. By making the genetic material initially heavy and then light, Meselson and Stahl could extract and measure its weight through the next generations.
It is this idea that makes their design original and elegant, and they used the optimal materials and techniques for the job. As such, their simple experiment integrates innovative and creative thinking, and shows aptness by delivering on what the elegant experiment was designed to do, conclusively deciding which hypothesis was correct. Having established that the beauty of experiments lies in the elegance and simplicity of their design, the significance of their results and the creative thinking of their designers, I now want to consider whether these aesthetic ideals have carried over to contemporary experiments in science. There is a clear asymmetry between experiments of a century ago and experiments today. Past experiments, such as the Meselson-Stahl experiment, often involved a few scientists in a room, relatively cheap equipment, and generally the results could be perceived or established without lengthy interpretative work. Experiments today look rather different. Take, for instance, CERN's experiments at the Large Hadron Collider (LHC) near Geneva, which not too long ago made one of the most significant discoveries in the history of particle physics: the detection of the Higgs boson that was predicted by the Standard Model. This experiment involves highly complex machinery (which has its own kind of beauty) and data analysis; it is the result of collaborative work between thousands of scientists; and the very boundary of the experiment transcends the borders of countries. Given their complexity and size, do large-scale experiments fit with previous aesthetic ideals? Have they become aesthetically disvalued, or can they still be praised for their aesthetic features? It is undeniable that such contemporary experiments are highly complex: from the machinery they use to the data they produce, they can overwhelm with their size and setup. However, perhaps there is still scope to assign them an aesthetic value similar to that of experiments from a century ago. It seems to me that, despite the complexity behind large-scale experiments, design and aptness continue to be the subject of aesthetic appreciation, and so are the creativity and originality exhibited in the experiment's design. This is despite the fact that the creativity exhibited in the experimental design is now better ascribed to the collective thinking of the community, rather than necessarily to a specific individual, and the optimality and aptness of the design is concealed beneath the complexity of the experimental setup and the machinery involved. Even if we could make the argument that similar aesthetic features get praised across different experimental traditions, there is a further difference between experiments from a century ago and those of today that perhaps should force us to reconsider the claim of similarity. The difference concerns the way in which we arrive at experimental results. As mentioned earlier, one of the aspects we can appreciate in the experiments on DNA replication is their immediacy: we can observe the results in a 'singular historical event', as Holmes noted. Such immediacy was characteristic of many, although certainly not all, early experiments. Let's go back to the experiments that natural philosophers performed with the air pump, which allowed them to study many phenomena with a newly invented instrument that could extract air from a cylinder.
As the painting An Experiment on a Bird in the Air Pump (1768) by Joseph Wright of Derby depicts, one could immediately perceive the effect on the living creature when the air from the cylinder was extracted using the air pump, which in turn generated a variety of responses in the audience, from fascination and awe to terror. Similarly, we can see the rainbow through Isaac Newton’s prisms, and we can see the bands of light-, heavy- and intermediate-density genetic material in the Meselson-Stahl experiments. But is there anything immediate when it comes to detecting particles and forces in a huge particle accelerator such as those detected by the LHC? Is there any sense in which the experimenter perceives the result in a ‘singular historical event’? Large-scale experiments often involve lengthy statistical data analysis before the experimental scientists can agree whether the data collection indicates an ‘event’ and whether the event constitutes a discovery. But this asymmetry between experiments with immediately perceivable results and those whose results involve time-consuming analysis, while noteworthy, is not surprising. It simply shows us that the appreciation of beauty in science, and in art, does not always involve perceptual features, and the aesthetic response is not necessarily immediate to us. After all, while many paintings and sculptures might evoke in us an immediate aesthetic response, this is certainly not necessary, and is certainly not the case when we consider artworks such as novels and concertos. We need some time and work to get through an entire novel or a concerto before we can fully appreciate its beauty and aesthetic significance more generally. Similarly with the results of many, especially very complex, experiments, it is ultimately the appreciation of the interplay between design and the significance of the result that provokes the aesthetic response. So far, we have seen that design is an integral part of our aesthetic appreciation of experiments, but we also seem to value those experiments that did something important: whose results helped to confirm a theory or to make a discovery. A final point I want to explore is how we can understand the significance of experiments that produce null results – can such experiments be beautiful too? Both the abovementioned experiments – the Meselson-Stahl and the Michelson-Morley – were highly original and elegant in their design. But while the former helped confirm the nature of DNA replication, the latter did not deliver on what it was designed to do: the interferometer never detected ether drift, despite continued attempts. Is the Michelson-Morley experiment then disqualified from being beautiful, because of the null results? I have argued that our understanding of what qualifies as a successful experiment needs to be much broader than simply its confirmation of a hypothesis or its discovery of a predicted particle. For example, the existence of the ether was once an undisputed assumption, until Michelson and Morley set up their incredibly careful experiment, which showed creativity and imagination in its elegant design. Even though it turned out that there is no such thing as an ether, their null results challenged accepted beliefs at the time and prompted the exploration of a new paradigm. 
So the Michelson-Morley experiment is beautiful because it offered the most elegant way to achieve its goal: the two scientists exhibited original and creative thinking, inventing one of the period's most precise measurement devices, and delivering an experiment of immense scientific import. In contrast to Meselson and Stahl, who delivered an answer aligning with scientific expectation, the results of the Michelson-Morley experiments were disruptive. But it is this disruptive nature of their results that prompted the identification of the limitations of our knowledge and opened the door to revisiting and reformulating our physics, leading to the acceptance of Einstein's special theory of relativity and the abandonment of the Newtonian framework. The design was beautiful, the setup careful and original, and the results were disruptive, surprising and awe-provoking. Just like many artworks that challenge our fundamental assumptions about ourselves and our place in the world, beautiful experiments can deliver results that prompt us to reconsider our working assumptions. Their aesthetic significance is intricately related to our state of understanding, and illustrates the diverse nature of the aesthetic experiences that scientific products and artworks can elicit.
Chlorine gas reacts with fluorine gas according to the balanced equation: Cl2(g) + 3 F2(g) → 2 ClF3(g). If the first figure represents the amount of fluorine available to react, and assuming that there is more than enough chlorine, which figure best represents the amount of chlorine trifluoride that would form upon complete reaction of all of the fluorine? Figure (b) represents the amount of ClF3 that would be formed upon the complete reaction of the fluorine molecules, because the balanced equation fixes the ratio of ClF3 produced to F2 consumed at 2:3.
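As a quick check of that ratio (using a hypothetical count of six F2 molecules, since the original figures are not reproduced here): n(ClF3) = (2/3) × n(F2), so 6 F2 molecules would yield (2/3) × 6 = 4 ClF3 molecules, with the excess Cl2 supplying the 4 Cl atoms required.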
ESL Teaching Terms Defined
The field of English as a second language is rife with confusing acronyms, and teachers themselves are often confused about what to call their profession. TEFL, TESL, CTEFLA, and TESOL are the most common designations and are often used by teachers interchangeably, even though they do have slightly different meanings depending on who's being taught. ESL schools usually distinguish between the terms, though, so it's good to know the difference. TEFL stands for Teaching English as a Foreign Language, and correctly refers only to teaching English to pupils who are in a non-English-speaking area, such as the Czech Republic, Hungary, and most Eastern European countries. When checking out ESL schools, make sure that they offer TEFL training if you want to teach overseas. TESL means Teaching English as a Second Language, and describes the teaching of English to non-native students in countries like America, where the students need to learn English to function in everyday society. TESL can also refer to teaching English in a country like Singapore or Hong Kong, where English is an official language but most residents have a different first or "home" language, such as Mandarin Chinese, that they use at home or in most social situations. TESOL, meaning Teachers of English to Speakers of Other Languages, is a good catch-all term, since it refers to pretty much any kind of English teaching as long as it's to non-native English speakers. This term can be confusing, though, since it also refers to TESOL, Inc., which is the world's largest organization of ESL instructors. CTEFLA stands for Certificate for Teaching English as a Foreign Language to Adults and is generally used in reference to the certificate given to successful participants in the Cambridge certificate program. This certification is also known as RSA (Royal Society of Arts) certification, RSA Cert, RSA Cambridge CTEFLA, or RSA/CTEFLA. This program is the most widely recognized EFL program in the world, and courses are offered at universities and training centers all over the world. Courses last either one month (full-time) or two months (part-time), and instruction is aimed particularly at English teaching in foreign countries (EFL). We've noted schools below that offer the Cambridge program.
REMEMBERING THE PAST: A TRIBUTE TO HAITIAN SOLDIERS
Feb 22, 2016
On October 9, 1779, a force of more than 500 Haitian gens de couleur libre (free men of color) joined American colonists and French troops in an unsuccessful push to drive the British from Savannah. The regiment, known as the Chasseurs Volontaires de Saint-Domingue, was the largest unit of men of African descent to fight in the American Revolution. As battered American and French soldiers fell back, the Haitian troops moved in to cover the retreat. Due to their inexperience, the Chasseurs suffered a high number of casualties. These Haitian soldiers made a significant contribution to our freedom with their bravery. After 228 years, they were finally recognized for their heroic actions: a monument to the Haitian soldiers was placed in Franklin Square in Savannah, Georgia. Children of Fallen Patriots Foundation pays respect to soldiers of all ethnicities who have served honorably from the Revolutionary War to today's military activities. Chief of Operations Katelyn N. Brewer stopped at the memorial to honor the Haitian soldiers on Fallen Patriots' behalf. It is Fallen Patriots' sincere hope that the heroic actions of soldiers of all ethnicities continue to be honored throughout history.
What is it?
A pleural effusion is excess fluid between the two membranes (pleura) that surround the lungs. There is normally a small amount of fluid between the pleura, which acts like a lubricant for the membranes. The fluid that collects in the pleural space can be serous fluid, blood, chyle (lymphatic fluid) or pus. An effusion is further classified as either transudative or exudative. Transudative pleural effusions are caused by fluid leaking into the pleural cavity, due to a low protein concentration in, or a high blood pressure within, the blood vessels. A common cause of transudative effusions is left-sided heart failure. Exudative pleural effusions are commonly a result of inflammation of the pleura, which makes the blood vessels more "leaky", allowing fluid to collect in the pleural space.
How is it diagnosed?
The diagnosis of a pleural effusion is made from the patient's history, the examination findings and test results. Chest X-rays are effective at confirming the presence of an effusion. Thoracentesis, the procedure of obtaining a sample of pleural fluid, will be required to determine the nature of the effusion and also for symptomatic relief.
What are the symptoms?
Common symptoms include:
- Chest pain, especially on breathing in
- Shortness of breath
- Fever, if there is an inflammatory/infective process
What is the treatment?
The underlying cause of the pleural effusion will need to be treated. Some pleural fluid can be removed for symptom relief (therapeutic aspiration or chest drain insertion). When a sample is obtained, the pleural fluid will need to be sent for analysis to further investigate its nature and the cause of the effusion. Repeated effusions may require pleurodesis. This can be done either with drugs or surgically. The chest tube that is inserted for pleurodesis will have to remain in place until fluid drainage stops.
Cutting carbon emissions through better land management
Land management research helps farmers cut carbon emissions. Farmers are helping UK and European governments meet greenhouse gas emission targets, thanks to extensive research by the University of Hertfordshire's Agriculture and Environment Research Unit (AERU). Since 2005, AERU has examined the effect of farmland management practices on climate change. While some practices can lead to more carbon being captured from the atmosphere through natural processes and stored in soil and vegetation, others can have a negative effect, with stored carbon released as soil is disturbed. AERU's research into the most effective practices has widely influenced high-profile agricultural guidance materials and policies. For example, it highlighted a number of practice alterations to the Environmental Stewardship scheme, which pays English farmers to protect and enhance biodiversity and the environment. These included reducing soil cultivation depth to decrease fuel consumption, encouraging springtime manure application to improve crop nutrient availability, and increasing the width of non-cropped margins around woodland to eliminate soil disturbance. AERU's methodology has also been used by the Department for Environment, Food and Rural Affairs, the National Trust and the EU to improve a variety of farming practices. It is difficult to quantify the impact on reducing greenhouse gas emissions. However, in 2009, 2.44 million hectares of farmland were managed under Environmental Stewardship agreements. If improved management of just five per cent of this land showed a modest 0.5 per cent increase in soil organic carbon in the top 10 cm of soil, around an extra 2.7 million tonnes of carbon dioxide would be captured and stored. That's the equivalent of 500,000 passenger flights around the world.
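That headline figure can be roughly reproduced with back-of-the-envelope arithmetic. The Python sketch below assumes a soil bulk density of about 1,200 kg per cubic metre and reads the 0.5 per cent increase as percentage points of soil mass; neither assumption is stated in the original text, so this is an illustrative reconstruction only.

import math  # not strictly needed; shown for completeness

# Rough reconstruction of the 2.7-million-tonne figure above.
# Bulk density and the reading of "0.5 per cent" are assumptions.
hectares = 2.44e6 * 0.05        # 5% of 2.44 million hectares
area_m2 = hectares * 10_000     # 1 hectare = 10,000 square metres
depth_m = 0.10                  # top 10 cm of soil
bulk_density = 1200             # kg of soil per cubic metre (assumed)

soil_mass = area_m2 * depth_m * bulk_density   # kg of soil affected
extra_carbon = soil_mass * 0.005               # +0.5% soil organic carbon
co2_kg = extra_carbon * 44 / 12                # convert carbon mass to CO2

print(f"~{co2_kg / 1e9:.1f} million tonnes of CO2")  # prints ~2.7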
Combine Icons of Depth and Complexity and Content Imperatives
These tools are powerful in isolation, but become even more powerful when combined with other critical thinking tools. In combination, these tools take understanding to an entirely new level. Show students how to combine the prompts from the Depth and Complexity framework to enhance understanding. The goal is for students to begin to combine the icons themselves as they become more proficient with the different tools.
Combine Icons: Multiple Points of View
The Multiple Perspectives icon can be combined with any of the other Depth and Complexity or Content Imperative prompts. Combining the prompts provides students with the opportunity to explore things at a higher level. For example, consider how exploring details of WWII from multiple points of view gives a better understanding of the war. Alternately, think about how examining words (and word choice) from different perspectives can provide a deeper and more complex understanding of literature, of establishing and maintaining power, and of conflict.
Combine Icons: Change Over Time
Combine the Change Over Time icon with any of the other Depth and Complexity or Content Imperative prompts. Consider having students explore how (and why) rules have changed over time. Rules can be defined to include both written and unwritten rules. You might also have students explore why rules change for people of certain ages. Another combined prompt is Perspectives Change Over Time. This can apply to many different areas of study and exploration. Students might explore how (and why) perspectives on civil rights have changed over time. Think about combining the Ethical Issues and Change Over Time icons. Here, you might prompt students to consider, "How and why have ethical issues changed over time?" Alternately, you might prompt them to consider what contributes to something being defined as an ethical issue. Students can use the Iconic Statement Frame to plug in thinking tools to create a statement that they will then prove with evidence. Teachers should model how to use this frame to create statements and prompts, but the real power comes in transitioning the ownership to students. Statements can be left as is, providing room for broad application: "Change over time influences perspectives" can be applied to many different concepts and events. Statements can also be turned into more specific prompts: "Prove by explaining how time has changed our perspective on acceptable mining techniques since the Gold Rush era."
The slavery issue was a major part of states' rights. In the decades preceding the Civil War, the states' rights issue hung over the nation like a sword. The doctrine held that certain rights and powers remained part of the sovereignty of individual states and that the exercise of that sovereignty lay in the will of the states' citizens. Through elected officials the citizenry bestowed certain powers on the federal government, such as conducting diplomacy and declaring war. But the states had powers denied to the federal government. In the antebellum years the authority granted to the federal government by the Constitution was held to be vague, and differing opinions about that authority tended to be regionally held. Conflicting interpretations about slavery escalated into regional disputes. Congress passed a fugitive slave act in 1793 as a means to protect Southern "property" rights concerning chattel slavery. As the Northern states abolished slavery they instituted personal liberty laws to safeguard free blacks, and over time these laws made the 1793 act ineffective. With the spread of Northern and Western antislavery sentiments, a new fugitive slave act became a critical part of the Compromise of 1850. It was the one concession to Southern states written into the legislation and a test of the North's commitment to personal property rights. Under the act, Northern officials were responsible for returning fugitive slaves to their owners. Any person found guilty of assisting a fugitive slave was subject to six months' imprisonment and a $1,000 fine (at this time a skilled workman like a blacksmith or carpenter made a wage of about $1 per day) plus, if the slave had not been recaptured, reimbursement of the market value of the slave. The act denied fugitives a jury trial or habeas corpus protection. Many Northerners regarded the act as a flagrant violation of fundamental personal rights, and Northern state legislatures passed new personal liberty laws which weakened the 1850 fugitive slave act. Although politicians had expected the fugitive slave act to relieve regional tensions, they soon saw that it had become a propaganda tool for abolitionists, who deliberately violated the act. In the decade before the Civil War, fugitives who made it to the North were rarely returned to their masters. The act sharpened the rift between North and South. More than anything, it grew into a symbol of determined resistance for both pro- and anti-slavery factions and became one of the key issues leading to irreconcilable disunion in 1861.
Born: December 27, 1822
Died: September 28, 1895
French chemist and biologist
The French chemist and biologist Louis Pasteur is famous for his germ theory and for the development of vaccines. He made major contributions to chemistry, medicine, and industry. His discovery that diseases are spread by microbes, which are living organisms—bacteria and viruses—that are invisible to the eye, saved countless lives all over the world. Louis Pasteur was born on December 27, 1822, in the small town of Dole, France. His father was a tanner, a person who prepares animal skins to be made into leather. The men in Pasteur's family had been tanners back to 1763, when his great-grandfather set up his own tanning business. Part of the tanning process relies on microbes (tiny living organisms). In tanning, microbes prepare the leather, allowing it to become soft and strong. Other common products such as beer, wine, bread, and cheese depend on microbes as well. Yet, at the time Pasteur was a child, few people knew that microbes existed. Pasteur's parents, Jean-Joseph Pasteur and Jeanne Roqui, taught their children the values of family loyalty, respect for hard work, and financial security. Jean-Joseph, who had received little education himself, wanted his son to become a teacher at the local lycée (high school). Pasteur attended the École Primaire (primary school), and in 1831 entered the Collège d'Arbois. He was regarded as an average student, who showed some talent as an artist. Nonetheless, the headmaster encouraged Pasteur to prepare for the École Normale Supérieure, a training college for teachers located in Paris. With this encouragement he applied himself to his studies. He swept the school prizes during the 1837 and 1838 school years. Pasteur went to Paris in 1838 at the age of sixteen. His goal was to study and prepare for entering the École Normale. Yet, he returned to Arbois less than a month later, overwhelmed with homesickness. In August of 1840 he received his bachelor's degree in letters from the Collège Royal de Besançon and was appointed a tutor at the Collège. In 1842, at age twenty, he received his bachelor's degree in science. He then returned to Paris, and was admitted to the École Normale in the autumn of 1843. His doctoral thesis (a long essay resulting from original work in college) was on crystallography, the study of the forms and structures of crystals. In 1848, while Pasteur was professor of physics at the lycée of Tournon, the minister of education granted him special permission for a leave of absence. During this time, Pasteur studied how certain crystals affect light. He became famous for this work. The French government made him a member of the Legion of Honor and Britain's Royal Society presented him with the Copley Medal. In 1852 Pasteur became chairman of the chemistry department at the University of Strasbourg, in Strasbourg, France. Here he began studying fermentation, a type of chemical process in which sugars are turned into alcohol. His work resulted in tremendous improvements in the brewing of beer and the making of wine. He also married at this time. In 1854, at the age of thirty-one, Pasteur became professor of chemistry and dean of sciences at the new University of Lille. Soon after his arrival at Lille, a producer of vinegar from beet juice requested Pasteur's help. The vinegar producer could not understand why his vinegar sometimes spoiled and wanted to know how to prevent it. Pasteur examined the beet juice under his microscope.
He discovered it contained alcohol and yeast; the yeast was causing the juice to ferment. In 1865 Pasteur was asked to help the ailing silk industry in France. An epidemic among silkworms was ruining it. He took his microscope to the south of France and set to work. Four months later he had isolated the microorganism causing the disease. After three years of intensive work he suggested methods for bringing it under control. Pasteur's scientific triumphs coincided with personal and national tragedy. In 1865 his father died. His two daughters were lost to typhoid fever in 1866. Overworked and grief-stricken, Pasteur suffered a cerebral hemorrhage (a bleeding caused by a broken blood vessel in the brain) in 1868. Part of his left arm and leg were permanently paralyzed. Nevertheless, he pressed on. Pasteur saw the trains of wounded men coming home from the Franco-German War (1870–71; a war fought to prevent unification under German rule). He urged the military medical corps to adopt his theory that disease and infection were caused by microbes. The military medical corps reluctantly agreed to sterilize their instruments and bandages, treating them with heat to kill microbes. The results were spectacular, and in 1873 Pasteur was made a member of the French Academy of Medicine—a remarkable accomplishment for a man without a formal medical degree. A particularly devastating outbreak of anthrax, a killer plague that affected cattle and sheep, broke out between 1876 and 1877. The anthrax bacillus (a type of microbe shaped like a rod) had already been identified by Robert Koch (1843–1910) in 1876. It had been argued that the bacillus did not carry the disease, but that a toxic (poisonous) substance associated with it did. Pasteur proved that the bacillus itself was the disease agent, or the carrier of the disease. In 1881 Pasteur had convincing evidence that gentle heating of anthrax bacilli could so weaken their strength that they could be used to inoculate animals. Inoculation is a process of introducing a weakened disease agent into the body. The body gets a mild form of the disease, but becomes immunized against (strengthened to resist) the actual disease. Pasteur inoculated one group of sheep with the vaccine and left another untreated. He then injected both groups with the anthrax bacillus. The untreated sheep died and the treated sheep lived. Pasteur also used inoculation to conquer rabies. Rabies is a fatal disease of animals, particularly dogs, which is transmitted to humans through a bite. It took five years to isolate and culture the rabies virus microbe. Finally, in 1884, in collaboration with other investigators, Pasteur perfected a method of growing the virus in the tissues of rabbits. The virus could be weakened by exposing it to sterile air. A vaccine, or weakened form of the microbe, could then be prepared for injection. The success of this method was greeted with excitement all over the world. The question soon arose as to how the rabies vaccine would act on humans. In 1885 a nine-year-old boy, Joseph Meister, was brought to Pasteur. He was suffering from fourteen bites from a rabid dog. With the agreement of the child's physician, Pasteur began his treatment with the vaccine. The injections continued over a twelve-day period, and the child recovered. In 1888 a grateful France founded the Pasteur Institute. It was destined to become one of the most productive centers of biological study in the world. In 1892 Pasteur's seventieth birthday was the occasion of a national holiday.
A huge celebration was held at the Sorbonne. Unfortunately, Pasteur was too weak to speak to the delegates who had gathered from all over the world. His son read his speech, which ended: "Gentlemen, you bring me the greatest happiness that can be experienced by a man whose invincible belief is that science and peace will triumph over ignorance and war.… Have faith that in the long run … the future will belong not to the conquerors but to the saviors of mankind." On September 28, 1895, Pasteur died in Paris. His last words were: "One must work; one must work. I have done what I could." He was buried in a crypt in the Pasteur Institute. Years later Joseph Meister, the boy Pasteur saved from rabies, worked as a guard at his tomb.
Can you write a Python program to check whether a sequence of numbers is an arithmetic progression or not? To refresh your memory, a sequence is a set of things (usually numbers) that are in order. In an arithmetic sequence the difference between one term and the next is a constant. In other words, we just add the same value each time. For example, the sequence 5, 7, 9, 11, 13, 15 ... is an arithmetic progression with a common difference of 2. We can write an arithmetic sequence as a rule: xn = a + d(n−1). How would you write it using Python? Try it yourself. If you cannot figure it out, see the sample solution at the end of this section.
The Fibonacci sequence was first observed by the Italian mathematician Leonardo Fibonacci in 1202. He was investigating how fast rabbits could breed under ideal circumstances, using a set of simplifying assumptions, and asked how many pairs of rabbits would be produced in one year. Can you create the numbers yourself? Remember to count the 'pairs' of rabbits and not the individual ones. Try it. Were you able to come up with the Fibonacci numbers? If not, here is how you would do it. The pattern comes out to be 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233. Fibonacci numbers are of interest to biologists and physicists because they are frequently observed in various natural objects and phenomena. For example, the branching patterns in trees and leaves are based on Fibonacci numbers. On many plants, the number of petals is a Fibonacci number: buttercups have 5 petals; lilies and iris have 3 petals; some delphiniums have 8; corn marigolds have 13 petals; some asters have 21; and daisies can be found with 34, 55 or even 89 petals.
How can we create a rule (algorithm) for the Fibonacci series (sequence)? First, the terms are numbered from 0 onwards like this:
n  = 0  1  2  3  4  5  6  7   8   9   10 ...
xn = 0  1  1  2  3  5  8  13  21  34  55 ...
What rule can we create here? Well, if you look, x3 = x2 + x1 (2 = 1 + 1) and x4 = x2 + x3 (3 = 1 + 2), etc. So we can write the rule (algorithm) as: xn = x(n-1) + x(n-2). Example: term 7 is calculated as: x7 = x(7-1) + x(7-2) = x6 + x5 = 8 + 5 = 13.
Let's write programs in Python to calculate the Fibonacci numbers.
1. With looping:

def fib(n):
    # n-th Fibonacci number, counting x1 = 1, x2 = 1
    a, b = 1, 1
    for i in range(n - 1):
        a, b = b, a + b
    return a

2. With recursion:

def fib(n):
    if n == 1 or n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)

N.B.: Do not copy and paste the Python code carelessly, as indentation is important.
The greatest common divisor (GCD) or the highest common factor (HCF) of two numbers is the largest positive integer that perfectly divides the two given numbers. Solving this problem for a specific set of numbers is easy. For example, find the GCD of 12 and 18. The divisors of 12 are 1, 2, 3, 4, 6, 12 and those of 18 are 1, 2, 3, 6, 9, 18. The common factors are 1, 2, 3, and 6, so the greatest common factor is 6. How would you find the GCD for any pair of numbers? Here the problem is more challenging. Here is one solution. Let's take two integers a and b passed to a function which returns the GCD. In the function, we first determine the smaller of the two numbers, since the GCD (HCF) can only be less than or equal to the smaller number. For example, the GCD of 12 and 14 can only be less than or equal to 12, not greater. We then use a for loop to go from 1 to that number. In each iteration, we check if our number perfectly divides both the input numbers. If so, we store the number as the GCD. At the completion of the loop we end up with the largest number that perfectly divides both the numbers. Below is the algorithm in Python.
def computeGCD(a, b):
    # The GCD cannot exceed the smaller of the two numbers.
    if a < b:
        smaller = a
    else:
        smaller = b
    gcd = 1
    for i in range(1, smaller + 1):
        if (a % i == 0) and (b % i == 0):
            gcd = i
    return gcd

N.B.: Do not cut and paste the above code without care; make sure the indentation is correct.
The above method is easy to understand and implement, but not efficient. A much more efficient method to find the GCD (HCF) is the Euclidean algorithm. The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. That is a mouthful! Let's make it simple by taking an example. 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 147 (252 − 105). Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, that value is the GCD of the original two numbers. A more efficient version of the algorithm shortcuts these steps: instead, we divide the greater number by the smaller and take the remainder. Then we divide the smaller number by this remainder, and repeat until the remainder is 0. For example, if we want to find the HCF of 54 and 24, we divide 54 by 24. The remainder is 6. Now, we divide 24 by 6 and the remainder is 0. Hence, 6 is the required GCD.
Python code for the Euclidean algorithm:

def euclidAlgo(a, b):
    # Replace (a, b) with (b, a mod b) until b is 0.
    while b:
        a, b = b, a % b
    return a

Python code for the Euclidean algorithm using recursion:

def euclidAlgo(a, b):
    if b == 0:
        return a
    return euclidAlgo(b, a % b)

Sources: Wikipedia; https://www.programiz.com/python-programming/examples/hcf
Finding Prime Numbers
Prime numbers are very important, yet many students do not see the value of learning them. Primes have several applications, most importantly in information technology, such as public-key cryptography, which relies on the difficulty of factoring large numbers into their prime factors. One key challenge is to find prime numbers. Interestingly, prime numbers and their properties were first studied extensively by the ancient Greek mathematicians. Euclid, for example, proved that there are infinitely many prime numbers. Just to refresh our memory, a number greater than 1 is called a prime number if it has only two factors, namely 1 and the number itself.
Proof by Contradiction
One of the oldest known proof techniques is the method of contradiction, and it can be used to limit the search for prime factors of large numbers. Testing small numbers for primality is easy. For example, the factors of 17 are 1 and 17, so it is a prime number. What about large numbers? Let's look at the argument. If a number n is not a prime, it can be factored into two factors a and b, such that n = a*b. For example, let's say a * b = 100, for various pairs of a and b. If a = b, we have a*a = 100, or a^2 = 100, or a = 10, the square root of 100. If one of the numbers is less than 10, then the other has to be greater to make the product 100. For example, take 4 × 25 = 100: 4 is less than 10, so the other number has to be greater than 10. In other words, in a product a * b, if one factor goes down, the other has to get bigger to compensate so the product stays at 100. Put mathematically, the factors revolve around the square root of their product. Let's test whether 101 is a prime number. You could start dividing 101 by 2, 3, 5, 7, etc., but that is very tedious. A better way is to take the square root of 101, which is roughly equal to 10.049875621.
So you only need to try the integers up through 10, including 10. 8, 9, and 10 are not themselves prime, so you only have to test up through 7, which is prime. If there were a pair of factors with one of the numbers bigger than 10, the other of the pair would have to be less than 10; if no such smaller factor exists, there is no matching larger factor of 101. Let's now build an algorithm using this method to test any number for primality.
Algorithm in Python:

import math

def is_prime(num):
    # Numbers below 2 are not prime by definition.
    if num < 2:
        return False
    # Only divisors up to the square root need to be checked.
    for i in range(2, int(math.sqrt(num)) + 1):
        if num % i == 0:
            return False
    return True

N.B.: Do not just copy the code, because you have to be careful with indentation in Python. Try the above algorithm and let us know if you found it useful or have alternative solutions.
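And here, as promised, is one possible solution to the arithmetic progression exercise from the start of this section. This is a minimal sketch: the function name and the approach of comparing successive differences are illustrative choices, not the original author's.

def is_arithmetic(seq):
    # A sequence shorter than two terms is trivially arithmetic.
    if len(seq) < 2:
        return True
    d = seq[1] - seq[0]  # the common difference
    # Every successive gap must equal the common difference.
    return all(seq[i + 1] - seq[i] == d for i in range(len(seq) - 1))

print(is_arithmetic([5, 7, 9, 11, 13, 15]))  # True, common difference 2
print(is_arithmetic([1, 2, 4, 8]))           # False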
THEME: Road Trains
SUBJECT AREA: history
TOPIC: transition from cattle drives to truck drives
After World War II, the demand for cattle and beef from Australia became strong worldwide, particularly from Britain. The vast distances between cattle stations and the length of time it took to drive cattle cross-country for shipment to worldwide markets proved quite a challenge, as well as time consuming. In the 1930s, the development of a vehicle designed to haul large numbers of cattle began in earnest. This early mode of transportation was referred to as a 'road train' due to its ability to haul several trailers pulled by a large truck, resembling a railroad engine and cattle cars. Due to the lack of major railway lines in the Northern Territory, road trains seemed the next best thing. The next challenge to be dealt with in this fledgling transportation industry was the roads, or lack thereof. In the early days of cattle stations, horses had been used to pack in supplies to the remote outstations, and people didn't travel often for leisure. In the Victoria River District, where the cattle industry thrived, acceptable roads for transporting cattle to markets were a necessity. Vesteys, a huge British company that owned more than 100,000 sq. kilometres of stations in the Territory, developed the 'road train' for cattle hauling, and the Commonwealth government began pouring money into 'beef roads'. By 1975, $30 million had been spent on over 2,500 km of roads. One of these single-lane bitumen roads is the Delamere Road, which runs from the Victoria Highway to Wave Hill station (once a Vesteys property). Today's road trains continually haul cattle around the Territory. Designed with two decks (an upper and a lower), the trailers can haul on average approximately twenty head per level, or forty head per trailer, depending upon the size of the cattle. Three trailers (the most one truck can haul, and equal to nine and one half car lengths!) can transport over 100 head of cattle at a time. Beef transportation has come a long way in seventy years!
Suggested learning activities: Investigate the development of the transportation industry in your area. How is it related to other area industries as far as the transporting of goods and products? Is the agriculture industry in your region dependent upon transportation and, if so, in what respects? Compare and contrast the trucking industry in your area with the road trains in Australia's Outback regions. (Consider the total truck length/tonnage allowed on the restricted roadways.)
First, be sure your child has a balanced diet. A balanced diet includes foods from the following major food groups: fruits and vegetables; breads and cereals; dairy products; and meat, fish and eggs. Foods containing sugars and starches can cause tooth decay. Foods with starch include breads, pasta, and such snacks as pretzels, goldfish crackers, and potato chips. There is hidden sugar in all sorts of foods. It can be added to your peanut butter, mayonnaise, salad dressing or many other processed foods. Be a label reader! Not at all! You just need to be selective about what you give him/her. The stickier the food is, the higher the chance that it will form a cavity. The best time is just after a meal and not in between meals. While your child is eating a meal, extra saliva is produced which helps to wash out the sugars. Each bite or sip of a sugary or starchy food or drink produces acids on the teeth for 20 minutes. So, if your child walks around with a sippy cup filled with a sugary drink, every sip results in 20 minutes of acid production. The child is better off drinking the whole cup at one sitting at a meal. Don't nurse your child to sleep or put him/her to bed with a bottle. If he/she must have a bottle at night, put only water in it. Sugars left in the mouth at night do the most damage. Most children leave the milk in their mouth while sleeping, sucking occasionally; early childhood caries is caused by this habit. Cheeses, yogurt, nuts, fresh fruits and vegetables, popcorn and unsweetened breads are good to give to your child. Water is the best drink to give him/her between meals. This helps him/her to establish a good lifelong habit of drinking water during the day. Sugary items such as soft drinks, sports drinks, sweet teas, lemonade and candies should be saved for special occasions only. Avoid other sticky or chewy foods, including fruit roll-ups, gummy bears, taffy, and chewy candies.
Fiscal Policy = Loanable Funds = Real Interest Rate
Monetary Policy = Money Market = Nominal Interest Rate
1) Loanable Funds
The loanable funds graph has the real interest rate on the vertical axis and the quantity of loanable funds on the horizontal. The supply curve shows the amount of loanable funds that people have saved and are willing to loan. The demand curve represents the businesses and individuals that would like to borrow. The equilibrium shows the real interest rate for the country. The real rate of interest is crucial in making investment decisions. Business firms want to know the true cost of borrowing for investment. If inflation is positive, which it generally is, then the real interest rate is lower than the nominal interest rate. If we have deflation, and the inflation rate is negative, then the real interest rate will be larger than the nominal rate. (Pride)
Loanable Funds = Money in banks that can be loaned out to individuals and firms
The real rate of interest is (keeping an eye on the price level/inflation):
- the opportunity cost of borrowing/loaning money
- expressed in constant dollars (inflation-adjusted value)
- the value or purchasing power of the money used
- the percentage increase in purchasing power the borrower pays (adjusted for inflation)
The real interest rate measures the percentage increase in purchasing power the lender receives when the borrower repays the loan with interest. The supply of loanable funds is based on the savings of the private sector.
2) Money Market
The money market graph has the nominal interest rate on the vertical axis, and the horizontal axis is labeled the quantity of money. The supply of money is perfectly inelastic, as the money supply is controlled by the Fed. The demand for money curve is downward sloping. Price level changes will affect the demand for money, as will interest rate changes. The nominal rate of interest is:
- the opportunity cost of holding money
- expressed in current dollars (non-inflation-adjusted value)
- the price paid for the use of money (no eye on inflation)
- the percentage increase in money the borrower pays (not adjusted for inflation)
The supply of money is based on the actions of the Fed.
Why nominal rates? The money supply deals with inflation and nominal value, not real value, and an increase in the money supply is the cause of price level changes (inflation). In the long run an increase in the money supply will cause the demand for money to increase and the nominal interest rate to rise.
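To make the real/nominal distinction concrete, the two rates are linked by the standard Fisher approximation (a textbook relationship, though the notes above do not spell it out):

real interest rate ≈ nominal interest rate − expected inflation

For example, a loan at a 7% nominal rate during 3% inflation earns the lender a real return of roughly 7% − 3% = 4%. Under 1% deflation (inflation of −1%), the same loan earns about 7% − (−1%) = 8%, which is why the real rate exceeds the nominal rate when prices are falling.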
Solar Array Definition
A solar array can be defined as solar panels arranged in a group to capture the maximum amount of sunlight and convert it into usable electricity. The idea of the solar array came into being when it was realized that the power produced by a single solar panel was not sufficient for domestic or commercial purposes. Many such solar modules are linked together to form a solar array. Solar panels can be arranged in different patterns to suit one's purpose. Depending upon what the need is, different structures can be built. Generally there are two traditional styles of solar energy panels used in residential and industrial areas – photovoltaic and photothermal. Most solar arrays to date have been a mixture of these two. Because they are relatively low-tech, solar arrays are a very cost-effective approach for people who believe in DIY technology.
How a Solar Array Works
There are many ways to build or use a solar array, either buying it commercially, or buying the parts and doing it yourself. Most arrays use an inverter to convert the DC power produced by the modules into alternating current. The panels in a solar array are usually first connected in series to obtain the desired voltage; the individual strings are then connected in parallel to allow the system to produce more current.
Solar Array Design
The design of a solar array is very important, as a bad design can lead to huge losses over the lifetime of the array. Note that even partial shading of one solar panel in the array can degrade the performance of all the panels, leading to a huge shortfall in the desired electricity output. There are numerous cases of bad design, which must be avoided. Choosing the right EPC or solar installer is of utmost importance, and the design of the solar array must be done with utmost care. The right technology must also be chosen; for example, in a space-constrained environment, solar panels of higher efficiency should be used even if they cost more per watt. Here are some important points to consider while designing a solar array.
- Position – The solar array must be positioned in a way that will capture maximum direct sunlight.
- Solar Tracker – Solar trackers can become very useful tools for increasing the efficiency of solar arrays over a longer duration of the day.
- Consumer Needs – Every solar array needs to comply with the needs of its consumers.
- Shade – Any shade or light obstruction will decrease the solar array's effectiveness.
- Technology – Possible solar module combinations may include one or many of photovoltaic, photothermal, photosynthetic, holographic photovoltaics or PV-TV.
Solar Array Uses
- Photosynthetic/PV-TV arrays can be used as a covering over buildings. These are quite powerful and help in cutting electricity bills to a great extent.
- Some, used as two-way mirrors, can double as low-light billboards or theater screens.
Solar Panel Array Costs
A solar panel is generally 250 watts. The cost per watt these days is around $1.50 for a small array used in residential and small commercial installations. Hence a solar panel costs around $350–$400 per panel (250 W × $1.50/W ≈ $375). Now, one solar array contains about 6–20 solar panels, which means the cost of a solar array should be somewhere between $2,000 and $8,000. Other factors that govern the price of a solar panel include its technology, power, brand and quality.
The price of thin-film panels is lower than that of crystalline silicon panels. Higher-power panels have a higher price, since they generate more electricity. Chinese solar panels generally cost less than those from the premium brands. Read more about Solar Panels.
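Those ballpark numbers are easy to sanity-check in code. The short Python sketch below uses the rough figures quoted above; the panel wattage, price per watt and panel counts are illustrative assumptions, not quotes from an installer.

PANEL_WATTS = 250       # typical panel size quoted above (W)
PRICE_PER_WATT = 1.50   # rough small-array price quoted above ($/W)

def array_cost(num_panels):
    # Estimated cost of an array built from num_panels panels.
    return num_panels * PANEL_WATTS * PRICE_PER_WATT

for n in (6, 20):  # the 6-20 panel range quoted above
    print(f"{n} panels (~{n * PANEL_WATTS} W): ~${array_cost(n):,.0f}")
# 6 panels  (~1500 W): ~$2,250
# 20 panels (~5000 W): ~$7,500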
Overview of Canine Allergic Dermatitis
Allergic dermatitis is a general term to describe a group of skin allergies that may be caused by a multitude of factors in dogs. Allergies are immune reactions to a given substance (allergen), which the body recognizes as foreign. These reactions occur following initial exposure to the allergen, with subsequent development of a hypersensitivity that causes itching and inflammation upon future exposures. The most common classes of allergic dermatitis seen in dogs are:
- Flea bite allergy
- Food allergy
- Atopy – an allergic condition caused by inhaled allergens, or absorption of allergens through the skin
Less common are:
- Drug reactions
- Hormonal allergies
- Bacterial allergies
- Allergies to other parasites (mites, intestinal worms, ticks)
- Contact allergies (due to topical treatments or exposure to fibers, floor polish and detergents)
Atopy and flea bite allergy are usually seen in young adults, whereas food allergy can be seen at any age. There are a number of canine breeds predisposed to the development of atopy. And some animals may be prone to development of certain allergies due to genetic factors. Allergic signs may be seasonal, depending on the cause of the allergy.
What to Watch For
Signs of allergic dermatitis in dogs may include:
- Scratching, licking, chewing or biting the skin, feet and ears
- Red, raised, scaly areas on the skin
- Bumps, crusts or pus-filled vesicles on the skin
- Increased skin pigmentation
- Thickened skin
- Loss of hair
- Salivary staining (brown color)
- Head shaking
Diagnosis of Allergic Dermatitis in Dogs
The specific diagnostic protocol may vary depending on what type of allergy or other skin disease is suspected in your dog. Every diagnostic test listed below may not need to be performed.
- History and physical exam
- Skin scraping
- Skin cytology
- Complete blood count and biochemical profile
- Allergy blood tests
- Intradermal allergy testing
- Dietary trials
Treatment of Allergic Dermatitis in Dogs
The treatment prescribed by your veterinarian will vary with the type of allergy diagnosed. The following list includes the possible treatments that may be required.
- Avoidance of offending allergens when possible
- Anti-itch and/or antibacterial shampoos
- Topical anti-inflammatory or antibacterial drugs
- Antihistamines
- Corticosteroid therapy
- Immunotherapy (allergy vaccines)
- A new drug, oclacitinib (Apoquel), which has been effective in some dogs
- Fatty acid supplementation
- Dietary management
- Antibiotics to treat secondary bacterial skin infections
Home care is a crucial part of treatment for any dermatologic condition. Careful adherence to your veterinarian's recommendations regarding oral medications and bathing is very important. Some animals may require bathing several times per week. Additionally, medications are often required even after the clinical signs have resolved. Although allergic dermatitis cannot be prevented, limiting exposure to allergens will help alleviate some of the clinical signs. Flea control in the environment is imperative for animals diagnosed with flea allergy dermatitis. Treating the pet alone is not sufficient to control the problem. Environmental reduction of any known allergens is advised. This may require keeping pets inside when pollen counts are high, avoiding long grass or freshly cut grass, and limiting dust and mold in the household. Eliminating exposure to certain foods is crucial to effective treatment of food allergy dermatitis.
Information In-Depth on Allergic Dermatitis in Dogs
As discussed, there are multiple types of allergies. In addition to the different classes of allergy, there are a number of other causes of dermatitis that result in the same clinical signs. The following is a list of possible diagnoses in animals with itchy, red, crusty, scaly skin.
- Flea bite hypersensitivity – Animals with this type of allergy can have severe dermatitis even with a low flea burden. In some cases the fleas are not easily identified on the patient. This usually occurs in 3-6 year old animals. The distribution of skin lesions is predominantly on the back end of the pet.
- Atopy – This condition is also known as allergic inhalant dermatitis. Most patients with this disorder are 1-3 years of age. There are known breed predispositions in dogs. The face, feet and armpits are the areas of the body most commonly affected by atopy. As the disease progresses, the signs may spread to the whole body.
- Food allergy – Animals may develop an allergy to a certain component of their diet. This can occur at any age, and often occurs after an animal has been eating the diet for an extended period of time. In addition to dermatitis, some pets with food allergies will also develop vomiting and diarrhea.
- Drug allergy – Many drugs, especially certain antibiotics, have been shown to cause allergic reactions. The signs may range from scratching and redness, to hives, to severe illness and sloughing of the skin. If a drug allergy is suspected, the drug in question should be discontinued immediately.
- Contact allergy or irritant – Animals can be allergic to fibers in a carpet, finishes on a floor, or topical shampoos or medications. Additionally, some substances may cause irritation even in animals that do not have an allergy. The dermatitis is often confined to ventral areas (along the underside of the body) or areas where the haircoat is sparse.
- Pyoderma – A bacterial skin infection can occur alone, or in conjunction with allergic dermatitis. Many animals develop secondary pyoderma from chewing and licking at their skin. The normal skin has many bacteria, which will colonize an area of inflamed or irritated skin and worsen the clinical signs.
- Yeast infection – Infection with skin yeast can also occur secondary to allergy. Many patients (especially dogs) will have yeast and bacterial ear infections secondary to allergies.
- Scabies – This is an intensely itchy disorder caused by mites. Human family members can contract this as well.
- Cheyletiellosis – This is another type of mite that may cause minimal to severe scratching. Humans may also be infected.
- Pediculosis – Lice infestation.
2. "Intelligence is not a single thing, but relies on a 'toolkit' of different complex cognitive abilities (e.g. Emery & Clayton, 2004)". Discuss the validity of this statement.
Definition: Seeing intelligence not as a single thing but as a toolkit combining a number of abilities is important when generalizing about intelligence in relation to human beings and other animals alike. Charles Spearman studied students' grades in various subjects. The statistical method Spearman used compares variability across multiple tasks, and is called factor analysis. Spearman found that high-performing students tend to do well across all subjects, not just in the subjects in which they're especially strong. A broad definition of intelligence was proposed by a group of psychologists led by Ulric Neisser. They suggest that intelligence is the 'ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning and to overcome obstacles by taking thought.' According to Neisser and his colleagues, people are smart if they can succeed at a variety of interrelated tasks. Research shows that we are not born with a biologically predetermined amount of intelligence that remains fixed for our whole lives. Environmental influences have been shown to impact test results. Another psychologist, Raymond Cattell, noticed that there are two different forms of intelligence: fluid intelligence, the ability to learn new ways of doing things, and crystallized intelligence, the accumulation of knowledge throughout our lives; both have been shown to change over time. As you age through adulthood, crystallized intelligence increases, whereas fluid intelligence decreases after late adolescence. Intelligence is affected by both genetic and environmental factors; it's the result of both nature and nurture. Genetic predisposition and environmental...
A research team from the University of Florida and the US Geological Survey suggests the lack of genetic diversity in manatee populations may have an impact on the marine mammals' long-term survival. Even though the population is growing, the animal is still considered endangered. Researchers used skin samples collected from the tails of 362 manatees to build genetic maps. Results revealed a moderate level of inbreeding within the Gulf Coast study population. In the case of manatees, lower genetic variation means they may be less able to overcome environmental threats that promote disease, or to cope with increased inbreeding due to a limited number of breeding-age partners. In order to survive, the manatees will have to adapt. Manatees, also known as sea cows, are large, fully aquatic, primarily herbivorous marine mammals. They measure up to 10 feet long, weigh as much as 1,200 pounds, and have paddle-like flippers and tails. Females tend to be larger and heavier than the males. Manatees typically breed once every two years. Gestation lasts about 12 months. They don't have natural enemies and can live as long as 60 years. These aquatic animals have short snouts, widely spaced eyes, and no incisors or canines. Manatees have sets of cheek teeth which are replaced throughout their lifespan; old ones are ejected as new ones grow in. They have a large, flexible, prehensile upper lip used to gather food and eat. This specialized upper lip is also used for social interaction and communication. Manatees are the only animal known to have a vascularized cornea. Prior to this study, the US Fish and Wildlife Service considered reclassifying the status of manatees from endangered to threatened on the conservation status scale. The change in rating is currently under debate.
My home in the coral reefs is being damaged by ocean acidification—which occurs when the ocean absorbs carbon and becomes acidified. I love living among thriving reefs, but increasing acidification degrades the physical structure of these reefs, putting my habitat and food supply at risk. This affects all the creatures living among the reef—not just my team of fellow blacktip reef sharks. Blacktip reef sharks are viviparous with a yolk-sac placenta, with a gestation period of about 10 months and a litter size of 2-4 pups. Size at birth ranges from 33-52 cm. Males mature at about eight years of age and 95-105 cm in length; females mature at about 9 years old and a length of 93-110 cm. Courtship features one or more males following closely behind a female. Reproductive behavior includes distinct pairing with embrace, in which the male grasps the female's pectoral fin between his teeth and mates belly to belly. There is one breeding season in the central and western Pacific, but two seasons in the Indian Ocean. Females rest for 8-14 months between pregnancies to rebuild their energy stores. Blacktip reef sharks are preyed upon by other sharks and large groupers. This is a socially complex species that performs a variety of group behaviors. One useful definition distinguishes reefs from mounds as follows: both are considered to be varieties of organosedimentary buildups – sedimentary features, built by the interaction of organisms and their environment, that have synoptic relief and whose biotic composition differs from that found on and beneath the surrounding sea floor. Reefs are held up by a macroscopic skeletal framework. Coral reefs are an excellent example of this kind. Corals and calcareous algae grow on top of one another and form a three-dimensional framework that is modified in various ways by other organisms and inorganic processes. By contrast, mounds lack a macroscopic skeletal framework (see stromatolite). Mounds are built by microorganisms or by organisms that don't grow a skeletal framework. A microbial mound might be built exclusively or primarily by cyanobacteria. Excellent examples of biostromes formed by cyanobacteria occur in the Great Salt Lake in Utah, and in Shark Bay on the coast of Western Australia. The Caribbean reef shark has an interdorsal ridge from the rear of the first dorsal fin to the front of the second dorsal fin. The second dorsal fin has a very short free rear tip. The snout of C. perezi is moderately short and broadly rounded. It has poorly developed, low anterior nasal flaps and relatively large circular eyes. Caribbean reef sharks also have moderately long gill slits, with the third gill slit lying above the origin of the pectoral fin. Despite its abundance in certain areas, the Caribbean reef shark is one of the least-studied large requiem sharks. They are believed to play a major role in shaping Caribbean reef communities. These sharks are more active at night, with no evidence of seasonal changes in activity or migration. Juveniles tend to remain in a localized area throughout the year, while adults range over a wider area. These sharks prefer the shoreline from Florida to Brazil, and the tropical parts of the western Atlantic Ocean—including the Caribbean Sea, from which the species gets its common name—are home to this variety of shark. Normally found on the outer edges of reefs, the Caribbean reef shark prefers to live in coral reefs and their shallow waters, as well as continental shelves and insular shelves.
These sharks are found quite commonly at a depth of about 100 feet (30 meters) and are known to dive to incredible depths of around 1,250 feet (380 meters). The Caribbean reef shark is known to be relatively passive and typically doesn't pose much of a threat to scuba divers, snorkelers, swimmers, or other humans it comes into contact with. They actually tend to avoid human interaction entirely. As per the International Shark Attack File, there have been 27 attacks documented since 1960, of which none have been fatal. Of those attacks, it's believed that 4 of them were caused because the shark mistakenly thought the person was a food source. The rest were provoked attacks, such as sharks caught in fishing equipment biting the fisherman. Reproduction is viviparous; once the developing embryos exhaust their supply of yolk, the yolk sac develops into a placental connection through which they receive nourishment from their mother. Mating is apparently an aggressive affair, as females are often found with biting scars and wounds on their sides. At the Fernando de Noronha Archipelago and Atol das Rocas off Brazil, parturition takes place at the end of the dry season from February to April, while at other locations in the Southern Hemisphere, females give birth during the austral summer in November and December. The average litter size is four to six, with a gestation period of one year. Females become pregnant every other year. The newborns measure no more than 74 cm (29 in) long; males mature sexually at 1.5–1.7 m (59–67 in) long and females at 2–3 m (79–118 in). The Caribbean reef shark is found throughout tropical waters, particularly in the Caribbean Sea. This shark's range includes Florida, Bermuda, the northern Gulf of Mexico, Yucatan, Cuba, Jamaica, the Bahamas, Mexico, Puerto Rico, Colombia, Venezuela, and Brazil. It is one of the most abundant sharks around the Bahamas and the Antilles. Although Caribbean reef sharks are found near reefs in southern Florida, surveys using long-line gear off the east coast of Florida reveal that Caribbean reef sharks are extremely rare north of the Florida Keys.
by Institute for American Indian Studies, Washington, CT
To what extent are we, as citizens and communities, responsible for scientifically and/or historically significant finds on public property?
- How do we best preserve animal remains such as dinosaurs, Ice Age animals, and other animal specimens, which are not affiliated with any particular group of people?
- Who should be responsible for the physical and financial maintenance of these specimens?
- What is our responsibility to ensure finite resources, such as the skeletons of extinct animals, are available for future generations?
- To what extent is the discussion different if the find is a man-made artifact rather than a natural specimen?
Things you will need to teach this lesson:
For Part I: Debate – Download the Perspectives for Debate pdf
For Part II: The Pope Hill-Stead Mastodon – Download the History of the Pope Hill-Stead Mastodon: A Case Study pdf
Part I: Debate
This activity involves a debate around the following fictional scenario. A complete mastodon skeleton has been unearthed in your town, on town-owned property, and a decision must be made as to who will care for it and where it will be stored. There are parameters which must be met in order to properly maintain the condition of the mastodon.
- The bones are very large and will require a considerable amount of storage space, roughly 300 square feet. An image of a mastodon skeleton on display is included in the toolkit to give students a sense of the size of these bones and the space that they take.
- It is important to note that these are bones and not fossils. The bones must be kept at a consistent temperature and humidity level to prevent cracking and slow decomposition.
- The bones must be covered to prevent dust or debris from entering any cracks or imperfections within the bones, which would cause the cracks/imperfections to enlarge.
- Each bone must be periodically examined and checked for any developing deterioration, which would need to be addressed by qualified professionals as soon as possible, or the bones will begin to decay.
Since the remains were found on town land, they belong to the town, and the town must decide what to do with them. Students will research and argue a variety of viewpoints (see the Perspectives for Debate document) in a town meeting-style debate. The goal of this activity is to have students decide on the best course of action, knowing that the specimen will require continual maintenance and monitoring in order to preserve it indefinitely, and that this will have an impact on town resources. There are two options for setting up the debate:
Option 1) Assign students to specific roles that they will research and represent in a town hall-style debate, with the goal of convincing a majority of the class to vote on a specific course or plan of action.
Option 2) Assign students to specific roles, but have another class act as town members in a town hall-style debate. At the end of this debate the "town members" will vote on the course of action they found the most convincing.
Part II: The Pope Hill-Stead Mastodon
The second part of this activity involves an exploration of the true historical incident that inspired the fictional debate: the accidental discovery in 1913 of a mastodon skeleton on the grounds of the Pope estate (Hill-Stead) in Farmington, Connecticut. A selection of historic photographs from the excavation and a short history of the Pope Hill-Stead mastodon are included in the toolkit above.
After familiarizing themselves with the Pope Hill-Stead case, students will discuss how finding a mastodon skeleton in 1913 might have differed from finding one today, and what questions are raised if a specimen is discovered on private property rather than public property. Cultural preservation has taken many different forms over the course of our history. At one time, private individuals had almost free rein to collect and/or display objects of interest in whatever way they saw fit. The struggle between what is considered "progress" and safeguarding the past often pits developers against preservationists. The rights of Native Americans to retain their cultural material (including human remains) were not officially recognized until the latter part of the 20th century. There is now a growing understanding, however, that multiple perspectives need to be considered when making decisions about preserving our diverse history and culture. Have students investigate two federal laws in place for the protection of cultural materials in the United States: the National Historic Preservation Act of 1966 and the Native American Graves Protection and Repatriation Act of 1990. Federal law does not currently address the preservation of extinct animals, such as the mastodon in this case study. Using what they have learned through this activity and their investigations of the two existing federal laws, have students draft a letter to a legislator arguing whether or not animal specimens should be protected by preservation laws as well.
Clinical Study Title: Varying role of vitamin D deficiency in the etiology of rickets in young children vs. adolescents in northern India.
Plain English Summary: This study looked at the role of vitamin D deficiency in the development of rickets in children versus adolescents. Nutritional levels for the children and adolescents were obtained, and participants were then given either a calcium supplement or a calcium supplement with vitamin D. Children with rickets were able to heal with either supplement. Adolescents, however, only responded to the treatment that included vitamin D. This suggests that for adolescents who have rickets, vitamin D deficiency may play an important role, and that treatment of rickets in adolescents should include a vitamin D supplement.
This moment of déjà vu is brought to you by a new paper published in the February issue of Astrobiology, in which a team of scientists from NASA's Johnson Space Center in Houston, Texas, and the Jet Propulsion Laboratory in Pasadena, Calif., describe the results of work on a 14 kilogram (30 pound) meteorite called Yamato 000593 (Y000593). The meteorite sample contains strong evidence that Mars used to be a lot wetter than it is now, but the researchers also report the discovery of evidence for "biological processes" that occurred on the Red Planet hundreds of millions of years ago. Although this sounds exciting, there will likely be some skepticism, but the researchers appear to have foreseen the media circus that "Mars life" always inspires and declined to appear overly excited about some pretty fascinating evidence for ancient microbial life.
In 1996, President Clinton made a high-profile announcement on national television that evidence for life had been discovered by NASA scientists inside another Martian meteorite called Allan Hills 84001 (ALH84001). The discovery centered on scanning electron microscope images of the microscopic detail of ALH84001. The team, led by David McKay of Johnson Space Center, identified "biogenic structures" inside the meteorite that were theorized to have been formed by indigenous life on Mars. The controversial media storm surrounding that 1996 announcement stirred a backlash that threw McKay's team's findings into doubt. However, McKay's team defended the work after ruling out terrestrial contamination and other factors that may have created the nanometer-sized worm-like structures. McKay also worked on the Y000593 study until his death in February 2013.
Not Your Average Space Rock
This new work focuses on a meteorite that was discovered in the Yamato Glacier, Antarctica, by a Japanese Antarctic Research Expedition in 2000. Analysis of the meteorite shows that it formed on the surface of Mars 1.3 billion years ago from a lava flow. Then, around 12 million years ago, a powerful impact event shattered the region, blasting quantities of Martian crust, containing any hypothetical lifeforms (and evidence thereof), into space. These chunks of Mars rock then traveled through interplanetary space until one of the samples, Y000593, encountered Earth and fell onto Antarctica some 50,000 years ago. There are many known samples of Mars crust that have fallen to Earth as meteorites; they are considered incredibly valuable scientific specimens that can be used as time capsules into Mars' geologic past. These meteorites are nature's 'sample return missions,' no spaceship required.
"While robotic missions to Mars continue to shed light on the planet's history, the only samples from Mars available for study on Earth are Martian meteorites," said lead author Lauren White, of NASA's Jet Propulsion Laboratory, in a news release. "On Earth, we can utilize multiple analytical techniques to take a more in-depth look into meteorites and shed light on the history of Mars. These samples offer clues to the past habitability of this planet. As more Martian meteorites are discovered, continued research focusing on these samples collectively will offer deeper insight into attributes which are indigenous to ancient Mars.
Furthermore, as these meteorite studies are compared to present day robotic observations on Mars, the mysteries of the planet's seemingly wetter past will be revealed."
In their research, the scientists describe features associated with Martian clay deposits: micro-tunnels threading throughout the Y000593 sample. When compared with terrestrial samples, the Martian shapes appear to closely resemble "bio-alteration textures" in basaltic glasses. This basically means that this Mars meteorite contains microscopic features that resemble mineral formations created by bacteria on Earth. Another factor is the discovery of nanometer- to micrometer-sized spherules sandwiched between the layers of rock in the meteorite. These spherules are distinct from the minerals inside the rock and are rich in carbon, another sign that they may have been formed through biological interactions inside the rocky material.
The First Rule of "Mars Life": Don't Talk About "Mars Life"
Is this proof of Martian bacteria munching through Mars rock? Sadly, that's one conclusion that cannot be made from this study, and the researchers are very cautious not to write the word "life" at any point in their publication; it is replaced by technical terms like "biogenic origins" and "biotic activity." "We cannot exclude the possibility that the carbon-rich regions in both sets of features may be the product of abiotic mechanisms," the scientists write in their paper. 'Abiotic' means mechanisms that are not caused by microbial life, such as some chemical reaction in the rock's geology. "However, textural and compositional similarities to features in terrestrial samples, which have been interpreted as biogenic, imply the intriguing possibility that the martian features were formed by biotic activity."
Their caution has been applauded by other astrobiologists. "(The authors) have done well not to cry wolf and to scientifically speculate on the tubules' origins, accepting that, as of yet, they do not know whether they are of biological origin or not," said Louisa Preston of the U.K.'s Open University. "This is no smoking gun," said White. "We can never eliminate the possibility of (terrestrial) contamination in any meteorite. But these features are nonetheless interesting and show that further studies of these meteorites should continue."
Since the 1996 ALH84001 controversy, many other researchers have come forward with meteorite studies that appear to show evidence for life on Mars and other interplanetary locations, but most have been published in sketchy journals with little to no peer review, which serves to blur the valuable research being carried out by astrobiologists. Therefore, skepticism of any Mars life study is often high. So, until we can detect and analyze DNA of extraterrestrial origin or have the ability to return pristine samples from Mars, work like this will be filed under "fascinating but not conclusive" in the profound hunt for life beyond Earth.
Publication: "Putative Indigenous Carbon-Bearing Alteration Features in Martian Meteorite Yamato 000593," Astrobiology, 2014, 14(2): 170-181. doi:10.1089/ast.2011.0733
Transposition of the Great Arteries
Transposition of the great arteries is a birth defect that causes a life-threatening condition in which there is a reversal, or switch, in the truncal connections of the two main (great) blood vessels to the heart, the aorta and the pulmonary artery.
Normally, the pulmonary artery carries blood from the right ventricle to the lungs, and the aorta carries blood from the left ventricle to the vessels of the rest of the body. Blood returning to the heart is depleted of oxygen. It goes first to the right atrium of the heart and then to the right ventricle, where it is pumped to the lungs. While in the lungs, the blood picks up more oxygen. After the lungs, the blood flows to the left atrium, then the left ventricle, which pumps the blood out through the aorta to the rest of the body, thereby supplying the body with oxygenated blood.
Transposition of the great arteries results in oxygen-depleted blood going to the body, because the connection of the two great arteries is reversed. In this case, the aorta is connected to the right ventricle. Blood returning to the heart goes to the right atrium and ventricle, which is normal. Then, when the right ventricle pumps the blood out, it goes into the aorta for distribution throughout the body. At the same time, blood in the lungs goes to the left atrium and the left ventricle, but then back to the lungs, because the pulmonary artery is connected to the left ventricle. The result is that highly oxygenated blood keeps recycling through the lungs, while oxygen-depleted blood recycles through the body without going through the lungs to reoxygenate. This condition develops during the fetal stage and must be treated promptly after birth if the newborn is to survive. The newborn can survive for a few days because the foramen ovale, a small hole in the septum that separates the two atria, is open, allowing some oxygenated blood to escape and mix into the blood that is being pumped throughout the body. However, the foramen ovale normally closes within a few days after birth.
Causes and symptoms
Transposition of the great arteries is a birth defect that occurs during fetal development; there is no identifiable disease or cause. The main symptom is a "blue" baby appearance, caused by a general lack of oxygen in the body's tissues. Diagnosis is made immediately after birth, when it is observed that the newborn is lacking oxygen. This is noted by the bluish color of the newborn, indicating cyanosis, a lack of oxygen. A definite diagnosis is made by x ray, electrocardiography (ECG), and echocardiography.
The only treatment for this condition is prompt heart surgery shortly after birth. In surgery, the two great arteries are reconnected to their proper destinations, restoring the normal blood flow pattern. The coronary arteries are also reconnected so that they can supply blood to the heart itself. A catheter may be used to maintain or enlarge the opening between the two atria until surgery can be performed. Left untreated, this disease is fatal within the first weeks of life. Because there is no identifiable cause, there is no way to prevent this condition.
corrected transposition of the great vessels: (1) Anatomically corrected malposition of the great arteries, more popularly termed transposition of the great arteries. (2) Physiologically corrected transposition of the great arteries.
Even if you are a complete beginner, you will be surprised at the many similarities that exist between Spanish and English. Spanish, like Italian, Portuguese and French, is derived from Latin, so many Latin-based words in English are similar to Spanish ones: mayor (major), vehículo (vehicle), villa (villa), etc. You may be wondering whether, by learning Latin American Spanish, you will be able to communicate with Spanish speakers in Spain. The answer is yes. The differences between Latin American Spanish and the Spanish spoken in Spain are something like the variations between British English and American, Australian or Canadian English. They are mainly in the spoken language, particularly in pronunciation and intonation. Differences also exist in vocabulary: some more general, others specific to certain countries. A mobile phone, or cellphone in American English, for example, is un (teléfono) celular in Latin America and un (teléfono) móvil in Spain. So you will be understood. There are a few minor differences in grammar, but you will soon be able to identify them and they will not hinder your communication in any way. By and large, however, the grammar of Spanish is one and the same, and there are no differences in spelling such as those you find between British and American English (e.g. neighbour/neighbor). Naturally, there are language variations within Latin America itself. Some variations have their roots in the Spanish colonisation of the region; others stem from the influence of indigenous languages and from that of non-Spanish settlers, mainly African and European. This has given rise to distinctive linguistic areas within the region. The Spanish spoken in Mexico, for instance, sounds quite different from that spoken in the River Plate region, in countries like Argentina and Uruguay. This in turn differs from that of the Andean countries or that spoken around the Caribbean. Yet, despite such differences, speakers from all over the Spanish-speaking world can communicate with each other.
New research on the types of bacteria living in babies' noses could offer clues as to why some recover quickly from their first cough or cold, while others suffer for longer. The study, published in ERJ Open Research, suggests that babies who have a wide variety of different bacteria living in their noses tend to recover more quickly from their first respiratory virus, compared to those who have less variety and more bacteria from either the Moraxellaceae or Streptococcaceae family. The researchers say their findings do not offer an immediate solution to help babies recover more quickly from coughs and colds. However, the results could help scientists understand the importance of the bacteria living in the respiratory tract, and how they influence infections and longer-term conditions such as asthma. Dr Roland P Neumann from the University Children's Hospital of Basel, University of Basel, Switzerland, was one of the researchers. He explained: "It's well known that different types of bacteria live in our gut. The respiratory tract is also home to a wide variety of bacteria and we are beginning to understand that the types and numbers of these bacteria, what we refer to as the microbiota, can influence our respiratory health. "We know that babies often suffer with coughs, runny noses, sore throats and ear infections, and in some babies the symptoms seem to drag on for weeks. These are usually caused by a virus such as the common cold, but we wanted to investigate whether the microbiota of the nose might also have a role in how long symptoms last. This is important not only in terms of babies feeling unwell but also because respiratory infections in the early years are linked to the development of asthma in later life." The research was part of a larger study that is following a group of babies from birth to investigate the complex interactions of genetic and environmental factors and their influence on lung health. Parents taking part in this part of the study were asked to contact the researchers as soon as their babies developed symptoms of their first respiratory infection, meaning more than two consecutive days when their babies were coughing, had a runny nose, or showed signs of an ear infection or sore throat. Researchers took swabs from the noses of babies at that point and then took swabs again three weeks later. They analysed the swabs by testing for the presence of respiratory viruses, such as the common cold, and for the types and numbers of different bacteria. Working with sets of swabs from 183 babies, researchers were able to group the babies according to the makeup of their nasal microbiota. On average, the babies' symptoms lasted around two weeks. Babies who were free of symptoms by the time the three-week swab was taken were more likely to have a wider mixture of bacteria in their noses and a microbiota that was not dominated by bacteria from the Moraxellaceae or Streptococcaceae family. Among babies whose symptoms lasted three weeks or longer, researchers found less variety in the types of bacteria living in the babies' noses, and the microbiota were more likely to be dominated by bacteria from the Moraxellaceae or Streptococcaceae family. These families include specific types that are known to be linked with respiratory disease. They found no clear link between the type of respiratory virus and the persistence of symptoms.
Researchers took account of other factors that are known to have an impact on respiratory health, including the babies' age, the season of the year, whether they had siblings or attended nursery, and whether they were exposed to cigarette smoke. They say this study cannot explain why the link exists, but a possible explanation is that certain types of bacteria may be more likely to result in inflammation and a worsening of symptoms. Or, it could be that a more diverse set of bacteria offers some protective effect. Professor Urs Frey, Chair of Paediatrics at the University Children's Hospital of Basel, University of Basel, Switzerland, was also a researcher on the study. He said: "This study helps us to understand how bacteria that naturally live in the upper airways are important for respiratory health. "We know that antibiotics and environmental factors, such as season and childcare, can alter the numbers and types of bacteria in babies' noses. We do not yet know what combination of bacteria would be 'ideal' and this would need to be known before we understand how we might manipulate it." Professor Tobias Welte, from Hannover University, Germany, is President of the European Respiratory Society and was not involved in the study. He said: "There is an association between respiratory symptoms in babies in the first year of life and the development of asthma by school-age. We do not yet fully understand this link but the bacteria living in the upper airways could play a role. We need to do more research to understand the relationship between these bacteria, respiratory infections and long-term lung health."
Between 1545 and 1548, an epidemic swept through the indigenous people of Mexico that is unlike anything else described in the medical literature. People bled from their faces while suffering high fevers, black tongue, vertigo, and severe abdominal pain. Large nodules sometimes appeared behind their ears, which then spread to cover the rest of the face. After several days of hemorrhage, most who had been infected died. The disease was named cocoliztli, after the Nahuatl word for "pest." By contemporary population estimates, cocoliztli killed 15 million people in the 1540s alone, about 80 percent of the local population. On a demographic basis, it was worse than either the Black Death or the Plague of Justinian. For several centuries, its origin remained a mystery. Then, about two decades ago, researchers began to compare the known cocoliztli outbreaks with clues etched in the tree rings of modern-day Mexico. They found that cocoliztli struck during an apparent "mega-drought," a decades-long period with little rain. Central Mexico suffered two mega-droughts in the 16th century, but, paradoxically, 1545 was a comparatively wet year within the drought. Cocoliztli itself also presented a problem: unlike smallpox, which devastated the indigenous Mexican population starting in 1520, cocoliztli's symptoms don't resemble those of any known Old World disease. So researchers advanced a hypothesis: cocoliztli was some kind of animal-spread hantavirus or arenavirus normally contained in Mexico's highlands. When a brief wet period allowed the population of rodents (or some other host) to boom, cocoliztli was able to take hold. The disease may still lurk in the highlands, waiting for an opportunity to arise. That, at least, is the hypothesis. Researchers are hampered in part because the 16th century is the last time the deserts of southern North America experienced a mega-drought. Alas, they may get another opportunity soon. A new study, published last week in Science Advances, says that climate change will make a similar mega-drought far more likely in the American Southwest. In fact, this kind of phenomenon could become a near certainty: if carbon emissions continue unabated, the risk of a mega-drought could exceed 99 percent. "This will be worse than anything seen during the last 2,000 years and would pose unprecedented challenges to water resources in the region," says Toby Ault, a professor of earth science at Cornell University and one of the authors of the study, in a statement. "As we add greenhouse gases into the atmosphere—and we haven't put the brakes on stopping this—we are weighting the dice for mega-drought conditions." Here's what "weighting the dice" looks like from a scientific perspective: the study found that if carbon emissions continue on their current trajectory, and if global warming does not create a generally rainier climate in the American West, then the chance of a mega-drought would exceed 90 percent. But if climate change decreases precipitation, which is what most models predict, then the rate would sit at 99 percent, making a mega-drought a virtual certainty. Somewhat counterintuitively, even if climate change makes the West rainier, a mega-drought is still more likely than it would be otherwise. A warmer world will put more demand on trees and other plants, requiring them to pull more water out of the ground; water will also evaporate faster from reservoirs and the soil. So even if global warming increases the amount of rain, the chance of a mega-drought would still exceed 70 percent.
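To make "weighting the dice" a little more concrete, here is a toy Monte Carlo sketch (an illustration of our own, with invented numbers; it is not calibrated to the climate models in the Science Advances study). It shows how a modest shift in the mean of a year-to-year moisture distribution sharply raises the probability of a long run of dry decades:

```python
import random

def megadrought_risk(mean_shift, trials=100_000, decades=10, dry_threshold=-0.5):
    """Toy Monte Carlo: chance that at least 4 of 10 decades are 'dry'.

    mean_shift stands in for a warming-driven drying trend. All the
    numbers here are invented for illustration only.
    """
    hits = 0
    for _ in range(trials):
        dry_decades = sum(
            1 for _ in range(decades)
            if random.gauss(mean_shift, 1.0) < dry_threshold
        )
        if dry_decades >= 4:
            hits += 1
    return hits / trials

print(f"stable climate:      {megadrought_risk(0.0):.0%}")
print(f"modest drying trend: {megadrought_risk(-0.6):.0%}")
```

Even though the shift in the mean is small compared with the year-to-year variability, the probability of a multi-decade dry run jumps dramatically; that, in miniature, is the weighted dice the researchers describe.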
Jonathan Overpeck, a professor at the University of Arizona, coined the word mega-drought in the 1990s with his colleague Connie Woodhouse. He was not involved with this paper. He told me that this new paper examines an especially intense form of the phenomenon: a mega-drought that would spell infernal heat and terrific dryness for at least 35 years. "This isn't like the drought of our grandfathers," he says. "It's a drought that everyone would agree would be devastating as all heck." Overpeck said that the paper laid out two scenarios for the American Southwest. In the first, the world continues to pump carbon dioxide into the atmosphere, and a post-2050 "searing mega-drought" becomes a near certainty. In the second, carbon emissions soon begin to decrease, and the region sees about a 66 percent chance of a "warm mega-drought" in the second half of this century. The "searing drought" possibility scares him most. Most research predicts that a climate-addled "searing mega-drought" would be much worse than anything described in the 2,000-year-long store of evidence left behind in lake beds and tree rings. Toxic dust storms could rage across the region, making driving extremely dangerous. The vast majority of trees in the region would die. Agriculture would become all but impossible. "The Southwest could be a really difficult place to live and make a living, at least as we know it today," he told me. "It's scary. I just cannot imagine a drought that long and that hot." A warm mega-drought, on the other hand, could probably be endured. (The water journalist John Fleck talked to Vox last month about how that could be done.) And even in a world with no climate change, mega-droughts happen every couple of centuries, and the "natural" risk of one occurring in any given year is about 10 percent. But making even the "warm mega-drought" a possibility will require a historic effort. Last week, the Paris Agreement, the first global treaty to limit carbon emissions, entered into force. But even if every country upholds its promises to restrict greenhouse-gas emissions, the world could still overshoot its 2-degree "emissions budget" by 2030. So we're left thinking about the cocoliztli hypothesis. That idea speaks to the devastating consequences that can follow from even a modest disruption of the normal climate. It suggests that a climatic tragedy can trigger a human one, and that a lack of rain can bring about consequences far more dire than a forest of dead trees. A drought, in other words, is never just a drought.
The narrow gorge of Canyon Creek has long served as a travel corridor. Native Americans likely trekked this canyon for thousands of years. Alexander McLeod of the Hudson's Bay Company provided the first written account of the route in 1829, while traveling from Fort Vancouver on the Columbia River to California's central valley. The U.S. Exploring Expedition, under Lt. George Emmons, followed the trail in 1841 making scientific observations. In 1846, this defile became part of the Applegate Trail, an effort by early emigrants to find an alternative to the treacherous Columbia River portion of the Oregon Trail. Prospectors and packers labored up the canyon en route to California's gold fields, beginning in 1848. Stagecoaches followed the rocky route in the 1870s, and today, Interstate 5 carries millions of vehicles over the steep pass.
CCSS.ELA-Literacy.RF.4: Read with sufficient accuracy and fluency to support comprehension.
Today, my students spent some of our reading time practicing fluency. Especially at this time of year, I go out of my way to make it fun. (Brain research shows us that "fun" is a big motivator, but I think teachers knew that before the research was done!) There are four important parts to fluency:
- automaticity in word recognition
- accurate word recognition
- rate (speed) of reading
- prosody, or expression
In linguistics, the Sapir-Whorf Hypothesis (SWH) states that there are certain thoughts of an individual in one language that cannot be understood by those who live in another language. SWH states that the way people think is strongly affected by their native languages. It is a controversial theory championed by linguist Edward Sapir and his student Benjamin Whorf. First discussed by Sapir in 1929, the hypothesis became popular in the 1950s following the posthumous publication of Whorf's writings on the subject. In 1955, Dr. James Cooke Brown created the Loglan language (which led to the offshoot Lojban) in order to test the hypothesis. After vigorous attack from followers of Noam Chomsky in the following decades, the hypothesis is now believed by most linguists only in the weak sense that language can have some small effect on thought.
Central to the Sapir-Whorf hypothesis is the idea of linguistic relativity: that distinctions of meaning between related terms in a language are often arbitrary and particular to that language. Sapir and Whorf took this one step further by arguing that a person's world view is largely determined by the vocabulary and syntax available in his or her language. The extreme ("Weltanschauung") version of this idea, that all thought is constrained by language, can be disproved through personal experience: all people have occasional difficulty expressing themselves due to constraints in the language, and are conscious that the language is not adequate for what they mean. Perhaps they say or write something, and then think "that's not quite what I meant to say," or perhaps they cannot find a good way to explain a concept they understand to a novice. This makes it clear that what is being thought is not a set of words, because one can understand a concept without being able to express it in words.
The opposite extreme, that language does not influence thought at all, is also widely considered to be false. For example, it has been shown that people's discrimination of similar colors can be influenced by how their language organizes color names. Another study showed that deaf children of hearing parents may fail on some cognitive tasks unrelated to hearing, while deaf children of deaf parents succeed, the difference being attributed to the hearing parents being less fluent in sign language. Computer programmers who know different programming languages often see the same problem in completely different ways.
The Neuro-Linguistic Programming (NLP) analysis of the problem is direct: most people do some of their thinking by talking to themselves, and some by imagining images and other sensory phantasms. To the extent that people think by talking to themselves, they are limited by their vocabulary and the structure of their language and their linguistic habits. (However, it should also be noted that individuals have idiolects.) John Grinder, a founder of NLP, was a linguistics professor who perhaps unconsciously combined the ideas of Chomsky with the Sapir-Whorf hypothesis. A seminal NLP insight came from a challenge he gave to his students: coin a neologism to describe a distinction for which you have no words. Student Robert Dilts coined a word for the way people stare into space when they are thinking, and for the different directions they stare. These new words enabled users to describe patterns in the ways people stare into space, which led directly to NLP, as pretty a validation of the weak hypothesis as one could ask for.
Programming Languages
The hypothesis is sometimes applied to computer science: programmers skilled in a certain language may lack a deep understanding of some concepts of other languages, even if they can acquire a superficial one. By the Church-Turing thesis, any language that can simulate a Turing machine can express any effective algorithm, so the differences lie not in what the languages can ultimately compute, but in how they shape the way programmers think about problems. A small illustration follows.
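As a concrete sketch of this point (our own illustration, not drawn from the original text): the same task, summing the squares of the even numbers in a list, is naturally conceived as step-by-step mutation of state in an imperative idiom, but as a single pipeline of transformations in a functional idiom. Both snippets below are Python, yet each reflects a different habit of thought:

```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8]

# Imperative habit of thought: a loop that mutates an accumulator.
total_imperative = 0
for n in numbers:
    if n % 2 == 0:
        total_imperative += n * n

# Functional habit of thought: describe the result as a transformation.
total_functional = sum(n * n for n in numbers if n % 2 == 0)

# Both "world views" compute the same value.
assert total_imperative == total_functional == 120
```

A programmer fluent only in the first idiom may simply never "see" the second formulation, which is the weak Sapir-Whorf claim applied to code: a language shapes, without strictly limiting, what its speaker finds natural to think.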
Lesson Plans - Teaching with Historic Places
A Nation Repays Its Debt: About This Lesson
This lesson is based on the National Register of Historic Places nomination for the Central Branch, National Home for Disabled Volunteer Soldiers in Dayton, Ohio, and other sources. The lesson was written by Paul LaRue, history teacher at Washington Senior High School in Washington Court House, Ohio, with help from his 2003-2004 history class students. The lesson was edited by the History Program staff of the National Cemetery Administration, Department of Veterans Affairs, and the Teaching with Historic Places staff. This lesson is one in a series that brings the important stories of historic places into classrooms across the country.
Where it fits into the curriculum
Topics: This lesson covers aspects of late 19th-century U.S. history, social studies, the Civil War, Reconstruction, and geography. Students will gain an understanding of and appreciation for the challenges involved in carrying out a program to care for the needs of Civil War veterans and to mark their graves after their deaths.
Time period: Civil War Era to 1929
Arc Flash Definition
An electrical arc, also called an arc discharge, is an electrical breakdown of a gas (e.g., air) that produces an ongoing plasma discharge, resulting from current flowing through a normally nonconductive medium. An arc flash is the consequence of an electric arc, which can occur where there is sufficient voltage in an electrical system and a path to ground or to a lower voltage. It is usually caused by a short circuit of energized conductors. An arc flash caused by an electric arc carrying 1,000 amperes or more can cause substantial damage, fire or injury. The massive energy released in the fault rapidly vaporizes the metal conductors involved, blasting molten metal and expanding plasma outward with extreme force. An arc flash incident can be inconsequential, but it can easily escalate into a severe explosion. The violence of the event can destroy the equipment involved, start fires, and injure not only the worker but also nearby people. (Pressures may exceed 100 kPa (kilopascals), debris may be ejected at up to 300 meters per second, and temperatures can reach 20,000 °C.) In addition to the explosive blast of such a fault, destruction also arises from the intense radiant heat produced by the arc. The metal plasma arc produces tremendous amounts of light energy, from far infrared to ultraviolet. Surfaces of nearby people and objects absorb this energy and are instantly heated to vaporizing temperatures. The effects of this can be seen on adjacent walls and equipment; they are often ablated and eroded by the radiant effects. The incident thermal energy on the worker can cause severe skin burns or have lethal consequences.
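To get a feel for the energy scales involved, here is a rough back-of-the-envelope sketch (a simplification of our own; it is not the IEEE 1584 procedure that real arc-flash hazard assessments use): the total energy released in an arcing fault is roughly the arc voltage times the fault current times the clearing time.

```python
def arc_energy_joules(arc_voltage_v, fault_current_a, clearing_time_s):
    """Crude estimate of total arc energy: E = V * I * t.

    Illustrative only. Real arc-flash studies use detailed standards
    (e.g., IEEE 1584) that model electrode gap, enclosure effects,
    and the worker's distance from the arc.
    """
    return arc_voltage_v * fault_current_a * clearing_time_s

# Hypothetical example: 480 V system, 20 kA arcing fault, cleared in 0.2 s.
energy = arc_energy_joules(480, 20_000, 0.2)
print(f"{energy / 1e6:.2f} MJ released")  # ~1.92 MJ, roughly half a kilogram of TNT
```

Even this crude estimate makes the point: a fault lasting a fraction of a second on an ordinary low-voltage system can release explosive-scale energy, which is why clearing time matters so much for worker safety.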
An In-Depth Look at Eagle Eyes
Bird vision has impressed and baffled humans for centuries. Scientists consider bird eyes to be the finest in the animal kingdom, and raptors have the finest vision of all. Small wonder just about everyone knows the expressions "bird's eye view" and "eagle eyes"!
(Image: a bird skull, showing how large bird eyes really are!)
Long ago, scientists observed eagles fishing, hawks and falcons dive-bombing prey from great distances, robins cocking their heads before pulling out a worm, and nighthawks snatching moths out of midair, and figured these birds must have extraordinary vision. When people examined dead birds, they noticed that the eyes fill a huge portion of the head. Bird eyes sometimes even weigh more than the bird's entire brain!
Eyes Have It
It's impossible to know for sure what the world looks like to an eagle, but we know from studying the anatomy of their eyes that their view must be enlarged and magnified compared to ours. Eagle eyes are the same size (weight) as human eyes (though a full-grown adult Bald Eagle weighs no more than about 14 pounds!). But an eagle eye has a much different shape from ours. The back is flatter and larger than the back of our eye, giving an eagle a much larger image than we can see. And its retina has much more concentrated rod and cone cells, the cells that send sight information to the brain. Some animals, including humans, have a special area on their retina called the fovea, where there is an enormous concentration of these vision cells. In a human, the fovea has 200,000 cones per square millimeter, giving us wonderful vision. In the central fovea of an eagle there are about a MILLION cones per square millimeter. That's about the same number of visual cells as the finest computer monitor has on its entire screen when set at its highest resolution. The resolution for a person would be similar to setting a computer's screen at a much lower resolution.
(Images: how much clearer an eagle's view of a distant dragonfly would be compared to a human's view, if the fovea were the only difference between our eyes. Captions: Eagle Vision vs. Human Vision; how a distant dragonfly might look to an eagle, and how the same dragonfly might look to a person.)
Let's assume eagles have exactly 1,000,000 cones per square millimeter in their central fovea, and humans have exactly 200,000. If this were the only difference between our eyes, and if the farthest we could clearly see a 3-inch mouse was 200 feet, what would be the farthest an eagle could clearly see that same mouse? (The answer, with a worked calculation, appears at the end of this section.)
(Image: a ruler in a glass of water. The ruler isn't bent; light hitting it bends as it passes from air to water.)
Notice how light passing from air to water makes a ruler seem bent. This refraction can make it hard for eagles to know exactly where fish are in the water. Their eyes don't seem to have any adaptations to correct for refraction, but their brains do! The first fish young eagles successfully catch are often dead ones floating right on the surface of the water. They miss live prey a lot when they're first learning to fish. Fortunately, with experience they slowly learn how to correct for refraction.
Look at the little boy's eyes and the Bald Eagle's eyes. (The boy's name is Tommy. We don't know what the eagle's name is.) The eagle has a little bit of bare skin between its eyes and its beak, and a bony ridge above its eyes. That bony ridge makes its face appear fierce to us. Look at Tommy's eyebrows, and feel above your own eye.
You have a bony ridge above your eyes, too, but in most people it's not quite as noticeable as on an eagle, and it certainly doesn't make Tommy look fierce!
- Why do you think people have a bony ridge above their eyes? Why might this bony ridge be so much more noticeable in eagles?
- Why might the skin right in front of the eagle's eye be bare?
Tommy's and the eagle's eyes are wide open! Like you, Tommy has a big top eyelid with long eyelashes, and a small bottom eyelid with shorter eyelashes. His lids open and close from top to bottom, but not from side to side. Eagles (and other birds) have 3 eyelids! The outside two are the ones we usually see. On eagles the bottom eyelid is bigger than the top eyelid, so they blink up instead of down. Birds also have an inner eyelid, called a nictitating membrane. This eyelid is transparent, and sweeps across the eye from side to side. It grows in the inner corner of the eye, right next to the tear duct. Look in your partner's eye or in a mirror and see if you can see a tiny hole in both the upper and the lower eyelids, right in the inner corner of your eye. These are tear ducts. Can you see tissue in the corner of your eye that is related to a bird's nictitating membrane? Why do you think birds have a nictitating membrane?
Did you know that your tear glands are always making tears, even when you're not sad or peeling onions? Tears help to keep the eye moist, and have a special chemical called a lysozyme that kills bacteria, protecting the eyes from infection. Birds have tear glands that secrete watery tears like ours, and birds that spend a lot of time in the ocean have another, special kind of gland that secretes oily tears too, to protect the eyes against salt water. Eagles have these glands, but they're smaller and not as important for eagles as they are for cormorants and other ocean birds.
- Why do you think tears are salty? And if we humans and birds are always making tears, where do they go when we're not crying?
The tiny speck of white in the center of Tommy's eye is just a reflection from the flash when the picture was taken. Tommy's irises are so dark brown that it's hard to see his pupils in this photo. The eagle's irises are pale yellow. The white part of Tommy's eye, which isn't a seeing part of the eye at all, is called the sclera. This eagle's eye also has a sclera, but it's hidden under the eyelid. If there were no skin to hide them, all eyes would appear bigger and round from the front. But both humans and birds have skin covering part of the eye. The eyelid openings for human eyes are oval-shaped. The eyelid openings for bird eyes are round. Why do humans need oval-shaped eyelid openings to see well? Why do birds need round ones?
Tommy is two years old. His eyes are already just about as big as they're ever going to be! This Bald Eagle's eyes are about the same size as Tommy's. The eagle's head is smaller than Tommy's, but its eyes are just as big, or even a little bigger.
Look again at the picture of a bird skull. Bird eyes are MUCH bigger relative to their head size than human eyes! And their brains are much smaller. People used to think that meant that birds were stupid compared to mammals, but now they are learning that birds are more intelligent than they thought!
What lies beneath
In 1578, a writer named Guillaume de Salluste, Seigneur Du Bartas, described eyes as "these lovely lamps, these windows of the soul."
Eyes DO work like windows, opening to all the beautiful sights in the world outside our bodies, from the tiniest hairs growing out of our own skin to enormous stars so far away that it takes thousands of years for their light to reach the earth. For humans and birds both, much of the information that we perceive about the world is processed by the eyes. This diagram of an eagle eye and a human eye shows them as cross-sections, as if looking down on them from above the head. Look at your own eye in a mirror or look at one of your classmates' eyes.
- Click on the diagram of the human eye so you can see the large, labeled picture, and compare it to your eye or a classmate's eye. Which of the labeled parts can you actually see on a real eye? Which layers of the eye does light pass through to reach the retina? Why does the pupil of a real eye look so black?
- If your class ever has an opportunity to dissect an animal (fetal pig, frog, cat, pigeon, or something else), make sure you get a good look at the eye!
An open and shut case
Your iris and a Bald Eagle's iris may be different colors, but they have the same job: to control the amount of light that shines onto your retina. There are two kinds of muscles in the iris. Circular muscles encircle the iris close to the pupil, and straight muscles radiate out like rays of the sun. When the inner, circular ones contract, the iris gets bigger, making the pupil smaller. When the outer, radiating ones contract, the iris gets smaller, making the pupil bigger.
Work with a partner. Take turns watching your partner's pupil and iris change as the amount of light changes.
- Close your eyes for one minute, then open them while your partner watches.
- Go into a room without windows, like a closet. If the room doesn't have a light switch, bring a flashlight. Keep your eyes open in the dark for one minute. Then turn on the light or shine the flashlight on your eyes for just a few seconds while your partner watches.
- In a partly lit room, shine the flashlight close to your eyes (but not directly into them) while your partner watches. After 30 seconds, turn the flashlight off while your partner watches.
- In a dimly lit room, put a hand between your eyes and shine a light on one eye while your partner watches for differences between your two eyes.
- Which muscles (the circular ones or the radiating ones) work when the light suddenly gets brighter? Which work when the light suddenly gets dimmer?
Cornea: window to the world
The first lens that light passes through into the eye is the cornea. This is a clear window with a curve that we describe as "convex." When light passes through any curved lens, it bends. The bending of light through a convex lens like the cornea makes it "converge." The image formed by the cornea is upside down and reversed from right to left. If the cornea were the only curved "window" that light passed through in the eye, far objects would focus very easily, but near objects would not. A human's cornea can't change its shape in order to bring objects into focus, but fortunately, one part of our eye CAN change shape. In order to help us focus on close objects, the LENS of our eye changes shape. This is called accommodation. Tiny fibers called ligaments, together with muscles, change the shape of the lens, making it thinner to focus on far objects or thicker to focus on near objects. Eagles can change the shape of their lens, and can also change the shape of their corneas.
This allows them more precise focusing and accommodation than we humans can get.
Retina: where vision happens
The retina is where vision actually takes place. Every single thing we see is projected, upside down and backward, onto our retina, onto special cells called rods and cones. Our human eyes have millions of rods and cones; an eagle's eyes have tens or hundreds of millions. Each microscopic cone cell is connected to a nerve that goes straight to the brain. When a tiny particle of light from an object hits a particular cone, the brain instantly sees it as a particle of color. All the cones together work like the tiny dots on your computer screen. Your brain flips the image and puts all the dots together to tell you exactly what you're seeing the moment you see it.
(Image: how an image appears in the human eye, upside down and backward.)
To see how you would appear on a retina, look at your reflection in a spoon. You'll be upside down and backward!
Rod cells don't see color; they simply see light. And several rod cells network with each other, sending the brain messages on a single nerve. So vision with the rod cells isn't as precise, but it is very fast. Rod cells may see only black and white, but they are extremely sensitive to light, so they help us see in the dark and notice quick movements. Eagles have a higher percentage of cone cells than we humans do, so they can't see as well as we can at night, even if they do see better in daylight.
If a human eye is shaped exactly right, things focus precisely on the retina. Sometimes the eye is longer than it should be, and the picture focuses in front of the retina. This condition is called "myopia," or nearsightedness. If the eye is shorter than it should be, the picture focuses behind the retina. This is called "hyperopia," or farsightedness. People wear glasses or contact lenses with exactly the right curve to move the focus onto the retina. Eagles with eyes that are shaped wrong can't wear glasses. Since good vision is so critical to their ability to get food, eagles with less than perfect vision quickly starve, and never get old enough to reproduce. So eagle parents all have great vision, and luckily their babies take after them!
Fovea: Magnifying the view
Some lucky vertebrates (including us humans and just about all birds) have a special area on the retina called a fovea, where rod and cone cells are extraordinarily densely packed. As we noted above, a human's fovea has about 200,000 cone cells per square millimeter, and an eagle's central fovea has over a million. Plus, certain birds that have especially good vision, including eagles, have a second fovea. Some scientists consider a long, narrow, ribbon-shaped area that connects the two eagle foveae to actually be a third fovea!
We at Journey North wondered exactly what things look like to an eagle compared with how they look to us. There is no way to be sure! But we took into account the difference between the number of cone cells in the central fovea and the difference in the shape of the eye to make these images of a squirrel at a backyard bird feeder.
(Images: how the squirrel at a backyard bird feeder may look to an eagle, and how the same squirrel looks to a human.)
In the diagram comparing an eagle eye and a human eye, did you notice that the eagle eye had one feature that the human eye didn't? Birds are the only animals with this unique part, called the pecten. What's it for? No one knows for sure. One other difference between bird eyes and human eyes is that the retina in mammals gets a supply of blood through tiny blood vessels.
This is important for the nutrition and health of the retina, but actually makes our vision a little poorer. Birds don't have blood vessels in their retinas, but they DO have the pecten. Here are some theories about why birds have this unique feature:
- To keep the retina nourished and healthy without blood vessels.
- To keep the fluids in the vitreous body at the right pressure.
- To absorb light to reduce the chance of reflections inside the eye, which can distort vision.
- To help birds to perceive motion.
- To provide shade from the sun.
- To sense magnetism.
There is some data that supports the first four; the last two are simple guesses without evidence to support them. Which of these theories makes sense to you?
Answers to Journaling Questions
- An eagle could see the mouse from about 447 feet away; it can see about 2.24 times as far as humans can. This is how we figured out the answer: the LENGTH of the mouse is 3 inches, so we can't think about the AREA of 1 square millimeter, but about the LENGTH of its edge. An eagle has 1,000 cones along an edge of that area (the square root of 1,000,000), and a human has about 447 (the square root of 200,000). So, just considering the fovea, an eagle could see about 2.24 times as far as we can (1,000 divided by 447), and 200 feet times (1,000/447) is about 447 feet.
- What do you think is a good reason why people have a bony ridge above their eyes? Why might this bony ridge be so much more noticeable in eagles? Answer: The bony ridge above human and eagle eyes does two jobs: it protects the eyes from blows and helps shade the eyes from sunlight. Our skulls and eagle skulls have a fairly similar job of protecting against physical injuries. But eagles, sitting at the tops of trees or fishing in the open on lakes and rivers, need more protection than we do to keep the sun out of their eyes. Humans can stay in the shade on bright days, and our eyebrows help protect our eyes, so we don't need as big a bony ridge for protection. Plus, WE can wear sunglasses.
- Why might the skin right in front of the eagle's eye be bare? Answer: The only covering birds have on their skin is feathers. If even tiny feathers grew in front of an eagle's eye, they might block the view, get caught in the eye, or brush against it, especially when the eagle was flying. This would make it hard to see and maybe even scratch the eye!
- Why do you think tears are salty? And if we humans and birds are always making tears, where do they go when we're not crying? Answer: Our tears are salty because they come from body tissues, and our bodies (our blood and tissues) are just as salty! Our tears drain into the nasolacrimal duct, which empties into the nasal cavity. No wonder when we cry our noses get snuffly! Eagles don't produce as many tears as we humans do, and their tears are constantly being swept across the eye by the nictitating membrane, so eagles don't get snuffly. Lucky, too, because they don't have a nose to blow.
- Click on the diagram of the human eye so you can see the large, labeled picture, and compare it to your eye or a classmate's. Which of the labeled parts can you actually see on a real eye? Which layers of the eye does light pass through to reach the retina? Why does the pupil of a real eye look so black? Answer: Which parts can we see on a real eye? If you look at a classmate's eye from the side, you might be able to see the clear cornea sticking out like a thin bubble. (You can't see your own cornea unless you use two mirrors, and even then it's almost impossible!) You can see the black pupil, which is really just the hole that lets light pass through to the lens. You can see the colored iris. The lens is too clear to see at all.
You can see the white sclera. The rest you just have to imagine!
- What eye parts does light pass through? The cornea, aqueous body, lens, and vitreous body.
- Why does the pupil of a real eye look so black? Answer: Inside the sclera of the eye is a thin layer called the choroid coat, which has special pigments that make it look very dark. These pigments absorb extra light inside the eye so the only light we see is what is actually on the retina, giving us clearer vision.
- Which muscles (the circular ones or the radiating ones) work when the light suddenly gets brighter? Which work when the light suddenly gets dimmer? Answer: When the light suddenly gets brighter, the circular muscles contract to close the pupil a bit. When the light suddenly gets dimmer, the radiating muscles contract to pull the pupil more open.
Science Education Standards
- Each plant or animal has different structures that serve different functions in growth, survival, and reproduction.
- Living systems at all levels of organization demonstrate the complementary nature of structure and function.
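For readers who want to check the journaling arithmetic above, here is a short script (our own sketch) reproducing the cone-density calculation. Acuity along one dimension scales with cones per unit length, i.e., with the square root of cones per unit area, so the eagle's advantage is the square root of 1,000,000 / 200,000, which is the square root of 5, about 2.24:

```python
import math

EAGLE_CONES_PER_MM2 = 1_000_000   # cones per square mm, eagle central fovea
HUMAN_CONES_PER_MM2 = 200_000     # cones per square mm, human fovea
HUMAN_RANGE_FT = 200              # farthest a human clearly sees the 3-inch mouse

# Acuity depends on cones per unit LENGTH, not per unit AREA,
# so compare the square roots of the two densities.
advantage = math.sqrt(EAGLE_CONES_PER_MM2 / HUMAN_CONES_PER_MM2)
eagle_range_ft = HUMAN_RANGE_FT * advantage

print(f"Eagle sees about {advantage:.2f} times as far")  # about 2.24
print(f"Eagle range: about {eagle_range_ft:.0f} feet")   # about 447 feet
```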
The Deafblind Manual Alphabet
The deafblind manual is the best way to communicate with someone who is deafblind. You can learn it quickly, and here's how you do it. Stick out your index finger (the finger next to your thumb) on your right hand, and fold your other fingers out of the way. Think of this finger as your pen. You are going to use it to write, not on paper, but on your deafblind friend's left hand, which they will hold out for you. First learn the vowels. They're easy: just remember the order A, E, I, O, U.
The Block Alphabet
This is a simple system used by some deafblind people. With your forefinger, draw the clear shape of capital letters on the palm of the deafblind person's hand. Use the whole palm for each letter, keeping them large and clear. Place one letter over the top of the last; do not attempt to write across the palm as you would on a sheet of paper, and keep your pen in your pocket! Pause slightly at the end of each word, making sure the person is able to follow what you are saying. Letters should generally be drawn from left to right and from top to bottom. The letters M, N and W should be drawn keeping the finger on the palm, not in separate strokes. Numbers can alternatively be drawn as figures. Do not use the continental seven (7), as this is easily confused with a two (2).
Communicating using braille on the hand
Good braillists may like to use braille contractions for speed, and some will indicate that words or sentences need not be complete because they have a good grasp of language. Braillists may prefer to use dots 4, 5 and 6 for word signs, if the deafblind person wishes and the sender knows braille.
Hands-on Signing
This is used by British Sign Language users whose vision no longer allows them to see sign language; they therefore 'feel' sign language by resting their hands on the communicator's.
Sign Language
Some deafblind people were deaf from birth and became blind as teenagers or adults. They prefer the sign language used by deaf people. Instead of watching the hands and arms of friends, they touch the hands of the person making the signs to learn what is being said. It is usually necessary to restrict the movements involved in making signs so that a deafblind person can follow along conveniently. This system can lead to confusion, and it requires the speaker to have extensive training in sign language. However, it is possible to interpret as quickly as English is spoken using this method.
Tadoma
Tadoma is tactile lipreading (or tactile speechreading). The Tadoma user feels the vibrations of the speaker's throat and the positions of the face and jaw as he or she speaks. Unfortunately, this requires years of training and practice, and can be slow. Although highly skilled Tadoma users can comprehend speech at near listening rates, most Tadoma users are much slower, and the added restriction of the user having to be in contact with the speaker adds to the problems associated with the Tadoma method. It is not very popular because it is hard to do and not very accurate. Tadoma is named after the first two children to whom it was taught, Winthrop "Tad" Chapman and Oma Simpson.
Visual Frame Signing
A way of modifying and using sign language in a restricted space to suit the visual needs of the individual receiving it.
What is Alzheimer's disease? Alzheimer's disease is the most common cause of dementia, a loss of brain function that affects memory, thinking, language, judgment and behavior. In Alzheimer's disease, large numbers of neurons stop functioning, lose connections with other neurons, and die. Irreversible and progressive, Alzheimer's disease slowly destroys memory and thinking skills and, eventually, the ability to carry out the simplest tasks of daily living. The stages of the disease typically progress from mild to moderate to severe. Symptoms usually develop slowly and gradually worsen over a number of years; however, progression and symptoms vary from person to person. The first symptom of Alzheimer's disease usually appears as forgetfulness. Mild cognitive impairment (MCI) is a stage between normal forgetfulness due to aging and the development of Alzheimer's disease. People with MCI have mild problems with thinking and memory that do not interfere with everyday activities. Not everyone with MCI develops Alzheimer's disease. Other early symptoms of Alzheimer's include language problems, difficulty performing tasks that require thought, personality changes and loss of social skills. As Alzheimer's disease progresses, symptoms may include a change in sleep patterns, depression, agitation, difficulty doing basic tasks such as reading or writing, violent behavior and poor judgment. People with severe Alzheimer's disease are unable to recognize family members or understand language. How is Alzheimer's disease evaluated? No single test can determine whether a person has Alzheimer's disease. A diagnosis is made by determining the presence of certain symptoms and ruling out other causes of dementia. This involves a careful medical evaluation, including a thorough medical history, mental status testing, a physical and neurological exam, blood tests and brain imaging exams, including: - CT imaging of the head: Computed tomography (CT) scanning combines special x-ray equipment with sophisticated computers to produce multiple images or pictures of the inside of the body. Physicians use a CT of the brain to look for and rule out other causes of dementia, such as a brain tumor, subdural hematoma or stroke. - MRI of the head: Magnetic resonance imaging (MRI) uses a powerful magnetic field, radio frequency pulses and a computer to produce detailed pictures of organs, soft tissues, bone and virtually all other internal body structures. MRI can detect brain abnormalities associated with mild cognitive impairment (MCI) and can be used to predict which patients with MCI may eventually develop Alzheimer's disease. In the early stages of Alzheimer's disease, an MRI scan of the brain may be normal. In later stages, MRI may show a decrease in the size of different areas of the brain. - PET and PET/CT of the head: A positron emission tomography (PET) scan is a diagnostic examination that uses small amounts of radioactive material (called a radiotracer) to diagnose and determine the severity of a variety of diseases. A combined PET/CT exam fuses images from a PET and CT scan together to provide detail on both the anatomy (from the CT scan) and function (from the PET scan) of organs and tissues. A PET/CT scan can help differentiate Alzheimer's disease from other types of dementia. Another nuclear medicine test called a single-photon emission computed tomography (SPECT) scan is also used for this purpose. 
Using PET scanning and a new radiotracer called C-11 PIB, scientists have recently imaged the build-up of beta-amyloid plaques in the living brain. Radiotracers similar to C-11 PIB are currently being developed for use in the clinical setting. How is Alzheimer's disease treated? There is no cure for Alzheimer's disease. However, medications that slow the progression of the disease and manage symptoms are available.
What causes a migraine disorder? The body's nervous system depends on millions of neurons that transfer information from the sensory organs of the body, such as the pressure sensors of a fingertip, to the brain, and on a second set of nerves that sends a signal back, telling the finger muscles to pull away from the heat of a candle. Nerve cells have small, thin extensions of their cell membranes which, when many are clumped together, form the white nerve fibers of the body. Those nerve fibers can be amazingly long, but no single nerve cell reaches all the way from the brain to the sensory nerves; the message must be passed, or relayed, from one nerve to the next. To accomplish this task, the nerves have developed a chemical message sent between individual nerves, or neurons. These chemical messengers are called neurotransmitters; common neurotransmitters are serotonin and norepinephrine. They are released from one end of a neuron and float across a small gap, the synapse, to be recognized by the next neuron. The recognition of the neurotransmitter on the other side of the synapse triggers and renews the electrical signal. The theory is that this delicate transfer of information may be making mistakes that distort the information as it is transferred to or within the brain. Furthermore, it is likely that hormones, such as estrogen, affect the sensitivity of this chemical interaction between nerve cells. This may explain why migraine headaches and their other neural relatives vary during puberty, menstruation, pregnancy, menopause and aging.
11.A.1a Describe an observed event. What Is Biodegradable? Discuss with your children what is biodegradable and what is not. Then bury things in your playground; a few weeks later, dig them up and show your children the difference between biodegradable substances and non-biodegradable items. It is never too early for children to learn to care for the Earth and about environmentally friendly activities such as recycling. Recycling is the process by which recyclable materials such as paper, plastic, glass and metal are reconstituted into new products or materials. Recycling Lesson Materials: Four large cardboard boxes for sorting; four pictures representing metal, paper, plastic, and glass objects; recyclable paper, plastic, metal and glass items; a letter to parents. Recycling Lesson Preparation: A week or two before the activity, send a note home to parents describing the activity and asking them to collect metal, paper, plastic, and glass items to send to school with their preschooler. Be sure to include recyclable item suggestions and remind the parents to wash or rinse recyclable items if needed. Create four pictures representing each category of recyclable and glue or tape one to the front of each box. Recycling Lesson Procedure: Invite the children over to the recycling station. Describe how and why people recycle and the types of materials that are recycled. Explain to the children that they will be sorting recyclables into each box. Either give each child a few recyclable items and encourage them to deposit each item into the appropriate box, or cooperatively sort all items and then place them in the correct boxes. After they are all sorted, explain that you will be taking them for recycling. Recycling Lesson Objectives: To develop an understanding of classification by encouraging children to sort recyclables by material. Verbal Cue A: Can you sort the items into paper and plastic? Verbal Cue B: Let's put the metal items in one container and the glass items in another one.
Africa is highly vulnerable to the various manifestations of climate change. Six situations that are particularly important are:
- Water resources, especially in internationally shared basins, where there is a potential for conflict and a need for regional coordination in water management
- Food security, at risk from declines in agricultural production and uncertain water supplies
- Natural resource productivity at risk, and biodiversity that might be irreversibly lost
- Vector- and water-borne diseases, especially in areas with inadequate health infrastructure
- Coastal zones vulnerable to sea-level rise, particularly roads, bridges, buildings, and other infrastructure exposed to flooding and other extreme events
- Exacerbation of desertification by changes in rainfall and intensified land use
The historical climate record for Africa shows warming of approximately 0.7°C over most of the continent during the 20th century, a decrease in rainfall over large portions of the Sahel, and an increase in rainfall in east central Africa. Climate change scenarios for Africa, based on results from several general circulation models using data collated by the Intergovernmental Panel on Climate Change (IPCC) Data Distribution Center (DDC), indicate future warming across Africa ranging from 0.2°C per decade (low scenario) to more than 0.5°C per decade (high scenario). This warming is greatest over the interior of the semi-arid margins of the Sahara and central southern Africa. Projected future changes in mean seasonal rainfall in Africa are less well defined. Under the low-warming scenario, few areas show trends that significantly exceed natural 30-year variability. Under intermediate warming scenarios, most models project that by 2050 north Africa and the interior of southern Africa will experience decreases during the growing season that exceed one standard deviation of natural variability; in parts of equatorial east Africa, rainfall is predicted to increase in December-February and decrease in June-August. With a more rapid global warming scenario, large areas of Africa would experience changes in December-February or June-August rainfall that significantly exceed natural variability. Water: Africa is the continent with the lowest conversion factor of precipitation to runoff, averaging 15%. Although the equatorial region and coastal areas of eastern and southern Africa are humid, the rest of the continent is dry subhumid to arid. The dominant impact of global warming is predicted to be a reduction in soil moisture in subhumid zones and a reduction in runoff. Current trends in major river basins indicate a decrease in runoff of about 17% over the past decade. Reservoir storage shows marked sensitivity to variations in runoff and periods of drought; lake storage and major dams have reached critically low levels, threatening industrial activity. Model results indicate that global warming will increase the frequency of such low-storage episodes. Natural Resources Management and Biodiversity: Land-use changes as a result of population and development pressures will continue to be the major driver of land-cover change in Africa, with climate change becoming an increasingly important contributing factor by mid-century. Resultant changes in ecosystems will affect the distribution and productivity of plant and animal species, water supply, fuelwood, and other services. Losses of biodiversity are likely to be accelerated by climate change, such as in the Afromontane and Cape centers of plant endemism.
Projected climate change is expected to lead to altered frequency, intensity, and extent of vegetation fires, with potential feedback effects on climate. Human Health: Human health is predicted to be adversely affected by projected climate change. Temperature rises will extend the habitats of vectors of diseases such as malaria. Droughts and flooding, where sanitary infrastructure is inadequate, will result in increased frequency of epidemics and enteric diseases. More frequent outbreaks of Rift Valley fever could result from increased rainfall, and increased temperatures of coastal waters could aggravate cholera epidemics in coastal areas. Food Security: There is wide consensus that climate change, through increased extremes, will worsen food security in Africa. The continent already experiences a major deficit in food production in many areas, and potential declines in soil moisture will be an added burden. Food-importing countries are at greater risk of adverse climate change, and impacts could have as much to do with changes in world markets as with changes in local and regional resources and national agricultural economies. As a result of water stress, inland fisheries will be rendered more vulnerable because of episodic drought and habitat destruction. Ocean warming also will modify ocean currents, with possible impacts on coastal fisheries. Settlements and Infrastructure: The basic infrastructure for development (transport, housing, services) is inadequate now, yet it represents substantial investment by governments. An increase in damaging floods, dust storms, and other extremes would result in damage to settlements and infrastructure and affect human health. Most of Africa's largest cities are along coasts, and a large percentage of Africa's population lives in land-locked countries that depend on them; thus, coastal facilities are economically significant. Sea-level rise, coastal erosion, saltwater intrusion, and flooding will have significant impacts on African communities and economies. Desertification: Climate change and desertification remain inextricably linked through feedbacks between land degradation and precipitation. Climate change might exacerbate desertification through alteration of spatial and temporal patterns in temperature, rainfall, solar insolation, and winds. Conversely, desertification aggravates carbon dioxide (CO2)-induced climate change through the release of CO2 from cleared and dead vegetation and reduction of the carbon sequestration potential of desertified land. Although the relative importance of climatic and anthropogenic factors in causing desertification remains unresolved, evidence shows that certain arid, semi-arid, and dry subhumid areas have experienced declines in rainfall, resulting in decreases in soil fertility and agricultural, livestock, forest, and rangeland production. Ultimately, these adverse impacts lead to socioeconomic and political instability. Potential increases in the frequency and severity of drought are likely to exacerbate desertification. Given the range and magnitude of the development constraints and challenges facing most African nations, the overall capacity for Africa to adapt to climate change is low. Although there is uncertainty in what the future holds, Africa must start planning now to adapt to climate change. National environmental action plans and their implementation must incorporate long-term changes and pursue "no regrets" strategies.
Current technologies and approaches, especially in agriculture and water, are unlikely to be adequate to meet projected demands, and increased climate variability will be an additional stress. Seasonal forecasting, for example linking sea-surface temperatures to outbreaks of major diseases, is a promising adaptive strategy that will help save lives. It is unlikely that African countries on their own will have sufficient resources to respond effectively. Climate change also offers some opportunities. The process of adapting to global climate change, including technology transfer, offers new development pathways that could take advantage of Africa's resources and human potential. Examples would include competitive agricultural products, as a result of research into new crop varieties and increased international trade, and industrial developments such as solar energy. Regional cooperation in science, resource management, and development is already increasing. This assessment of vulnerability to climate change is marked by uncertainty. The diversity of African climates, high rainfall variability, and a very sparse observational network make predictions of future climate change difficult at the subregional and local levels. Underlying exposure and vulnerability to climatic changes are well established; sensitivity to climatic variations is established but incompletely understood. However, uncertainty over future conditions means that there is low confidence in projected costs of climate change. Improvements in national and regional data and in the capacity to predict impacts are essential. Developing African capacity in environmental assessment will increase the effectiveness of aid. Regional assessments of vulnerability, impacts, and adaptation should be pursued to fill in the many gaps in information.
The interior walls of a human being's small intestine are covered with a multitude of threadlike, tubular projections called the intestinal villi. These fingerlike projections, although tiny, are very complex and serve as sites for the absorption of necessary nutrients and fluids into the body. To aid in this process, the villi increase the small intestine's surface area, facilitating absorption of nutrients. In this manner, they play a crucial role in proper digestion. Intestinal villi coat the interior mucous membrane of the small intestine like a carpet. Each villus extends approximately 0.04 inches (about 1 mm) into the lumen, which is the empty chamber inside the small intestine. Inside each villus, a capillary bed and a lymphatic vessel can be found. The outsides of the villi are covered by layers of cells. Nutrients pass through certain cells in this layer, are taken up by the capillary network and lymphatic vessels, and are thus transported by the blood and lymphatic system to the rest of the body. The types of cells that cover the surfaces of intestinal villi include mature absorptive enterocyte cells, mucus-secreting goblet cells, and antimicrobial Paneth cells. The surfaces of the enterocyte cells are covered with microvilli, which allow the cells to absorb the nutrients. The cells that cover the villi live for only a few days; when the cells die, they are shed into the lumen, digested, and absorbed into the body. In between the villi are areas called crypts, which are moatlike structures that produce the cells found on the surface of the villi. At the bases of the crypts are stem cells, and to replace dying cells, the stem cells keep dividing, creating daughter cells continuously. While some of these daughter cells remain to become stem cells, most migrate up the villi and differentiate into other types of cells. Some become mature absorptive enterocyte cells, while others become mucus-producing goblet cells. Other migrating cells become Paneth cells, whose job is to sterilize the interior of the small intestine by secreting antimicrobial peptides. Thanks to the intestinal villi, the surface area of the small intestine is much larger than anyone would guess: about 656 square feet (200 square meters), roughly 100 times the surface area of a person's skin. Without the intestinal villi, the human body would not be able to absorb the nutrients necessary to survive.
Air Resistance and Aerodynamics: This science lesson is for second and third graders. Students will build parachutes out of three simple materials. After testing their parachutes, students can illustrate or photograph and describe their results. This would be a nice showcase for our project.
How Things Fly: Activities for Teaching Flight: These K-3 lessons are from The Smithsonian's Center for Education and Museum Studies. The lessons are designed to help young children understand the basic physics of flight, and the lesson plans include printable materials. Students can share their learning experiences on our project.
Document Based Questions: Third grade students can practice their keyboarding and writing skills as they answer the constructed-response questions about the Wright Brothers. Their answers will be posted on the web site.
Transportation in the United States: The objective of this Xpeditions lesson plan is to have students in grades K-2 learn how products are transported. It supports the National Geography Standard "The patterns and networks of economic interdependence on Earth's surface." Students will be asked to write and illustrate stories about transportation. We would love to showcase their work on this project.
Aviation Pioneer Hall of Fame: Students will write short biographies about pioneers in aviation. Although this lesson plan is for older children, it can easily be adapted for our project. Biographies can be written as a Shared or Interactive Writing activity. Invite your students to write biographical or acrostic poems about the Wright Brothers. They can create a character map in Kidspiration or summarize book chapters with Kid Pix. Any of these activities can be nicely showcased on this project.
Edible Wright Flyer: Simple directions that require no cooking!
Styrofoam Wright Flyer 1903: Students construct a model of the 1903 Wright Flyer with simple materials.
TeacherLINK: TeacherLINK offers activities for students on aeronautics that were developed by NASA Aerospace Education Program education specialists. The wonderful hands-on science activities were designed to be easy and fun. We would love to publish your students' reflections and pictures about their learning experiences.
The "Wright Stuff": A one-day collaborative Internet project which will take place on December 17, 2003. There will be an online treasure hunt, links and teacher materials. Registration will begin on December 1, 2003. Recommend this one to all of your colleagues!
With Wings as Eagles: The Library of Congress has the most awesome collection of primary sources that document the history of flight. Fascinating for students of all ages!
Wright Again: In this web site treasure chest, you will find primary documents, activities, lesson plans, and student activities.
Meet the Wright Brothers: If you're looking for ideas for lower primary students, you've found the place! This web site is perfect to use as a center activity.
Flights of Inspiration: The Franklin Institute Online gives excellent examples of how teachers can use this web site as a resource. You will want to share this with your colleagues.
The Mystery Lives On: In addition to learning about the Wright Brothers, you may want your students to study other aviation pioneers. National Geographic Kids has an easy-to-read description of Amelia Earhart.
After reading the information, students can take a poll on what they think happened to Amelia Earhart.
Amelia Earhart: You can count on The Library of Congress to provide excellent online content for students. This resource includes primary documents, a timeline and a story written on a level that your students will be able to understand.
The Wright Brothers: Wilbur and Orville: Young students will find this a perfect web site for online research. The printable worksheets are excellent.
Remember the Wright Brothers' Historic Flight: A page of resources compiled by Horace Mann Educated Financial Solutions.
The Wright Brothers: If you were impressed with Flights of Inspiration, you will be amazed by The Franklin Museum's latest and greatest. Celebrate the Centennial of Flight! You will understand why we picked this topic for a collaborative project. Don't miss their free 2003 Flight Forecast project that you can join. This is an educator's dream!
My Brothers' Flying Machine: Written by Jane Yolen and perfect for your lower primary students.
First to Fly: How Wilbur and Orville Wright Invented the Airplane: Although this book is recommended for older students, it could be a nice read-aloud. If your students have upper grade reading buddies, they can enjoy this book together.
Magic School Bus Taking Flight: Ms. Frizzle is at it again. This time she and her students are shrunk inside a model airplane!
First Flight: The Story of Tom Tate and the Wright Brothers (I Can Read Chapter Book Series): Written for students in grades 1-3, this book tells the story of how Tom Tate helped the Wright Brothers with experiments for their flying machine.
Taking Flight: The Story of the Wright Brothers: Students will read how the Wright Brothers came to build their airplane. It is recommended for students ages 7-9 and is a Level 3 book.
Children learn about science by doing science - by observing and describing, questioning and searching for answers. If you want to find out how fast woolly bear caterpillars crawl, you've got to follow them! Doing science includes working with others - collaborating on research, tossing ideas around, sharing your findings. By working together, scientists (and curious children) build upon their knowledge. They test ideas, make connections, and maybe even change their views on how things work. What can we, as parents and teachers, do to help our children become more comfortable exploring science? We can give them time to work out their ideas. It might take hours, days, weeks, maybe months to observe, investigate and test a single idea. As every scientist knows, the minute you start investigating one question, a gazillion more leap out at you. Scientists need a workspace. A small table, with reference books and field guides nearby and a place to store supplies - we piled journals, hand lenses and other things in a plastic bin that fit under the table. If you're afraid messy experiments will hurt your carpets, then put down a plastic sheet under the table (that's what we did). Children learn a lot of skills as they do science:
- Classifying (sorting things into groups using a system)
- Creating models (graphs, charts, 3-D models, diagrams)
- Formulating hypotheses (tentative explanations for how things work)
- Generalizing (drawing conclusions)
- Identifying variables (factors that influence their project)
- Making decisions
- Using the tools of science
- Measuring things
- Observing closely
- Making predictions
- Recording data
- Sharing what they discover with others
If you are looking for ways to get your children (or students) involved in projects with real scientists, check out the list of projects under "Get Involved In Real Science" on the home page.
Today we pick back up with our journey through evolution and natural history. Last time we met the Spider Conch, and today we meet some of its relatives, the bivalves. As the name suggests, these animals have two shells, and the ones you probably know best are oysters and clams. Today I will write about the various feeding methods of these animals, and the next post will be on movement and vision. There are bivalves which resemble a two-shelled animal we met earlier, the brachiopod (HERE), and so can be easily confused with it. The first bivalve I would like you to meet today is Pedum spondyloideum, or the blue-lipped coral oyster. I am mostly showing you this one because I think it is spectacularly beautiful. This is a teeny tiny scallop (a member of the family Pectinidae, to give it its proper name), which lives between corals. These are in the same order as oysters (Ostreoida), so they are related to them, but they are in a different family from what you and I know as oysters. There is an important difference between these and the other molluscs we have met so far; I mentioned before (HERE) that molluscs have a rather cool tongue called a radula, which is essentially lots of rows of tiny teeth that they use for scraping food off of surfaces. If you look at the diagram above, there is no label saying "radula". This is because bivalves do not have one! (They also do not have a head!) The image below shows the internal structure of a clam, and will help me explain what they do instead of scraping food: In the image above, you can see something labelled the "incurrent siphon" and the "excurrent siphon". As these animals breathe (by extracting oxygen from the water), they cause small currents around their gills. These currents contain not just water, but yummy particles of food, which get moved towards the gills. There are cilia (those small hair-like wavy things we have bumped into a lot) on the gills, which move these currents towards tiny pores. If you take a peek at the top diagram, there is something labelled as the "labial palps". These, and the gills, produce mucus (like you do when you have a cold), and this covers the food particles, which fall down towards the mouth where they are eaten. So yes, they do eat food covered in snot! Large particles like sand fall down into the mantle, and are carried out by cilia again (those little hairs just get everywhere, don't they?). Sometimes these particles get stuck in the mantle and become irritating, at which point they become pearls (although not the sort we use for decoration; those are formed differently). This method of feeding is known as filter feeding, and is how most bivalves eat. There are some species, however, who obtain their food using a method known as deposit feeding, which is thought to be the original form of feeding for bivalves. Instead of the gills assisting in filtering food, they are used purely for breathing, whilst the labial palp has tubes attached to it which stick out to grab food from the sand or mud. Food which is caught in currents moving towards the gills is also grabbed and eaten. Still other species use symbiosis with small organisms (a lot like the corals do), whereby these organisms carry out photosynthesis and the bivalve gets most of its nutrition that way, while doing a small amount of filter feeding. The most well-known example of this is the giant clam, which is a huge animal, up to 1.2 m or so long.
These animals are so huge that they are not able to move, so they sit on the sea floor, often in places like the Great Barrier Reef: The bacteria and dinoflagellates, which I wrote about HERE, obtain food by photosynthesis, like plants do, and the giant clam feeds on the by-products produced, as shown in this video: One final point about bivalve feeding: because they filter feed, they also perform a role in cleaning water, which benefits other organisms in their ecosystem, and mussels can be used as an indicator of how polluted a body of water is. This is because, as they feed, heavy metals and other pollutants are filtered out and build up within their bodies, since the animals are unable to process them (like us with mercury and the like). So, if you measure the levels of these pollutants in mussels and other bivalves, it gives you an idea of how polluted the area is. This video shows oysters and how they can function as filterers of water: As mentioned in the video, populations of bivalves are decreasing in some areas, and this means they are less able to filter the water, which in turn has an impact on the other animals and plants in the ecosystem.
Scientists looking to capture evidence of dark matter — the invisible substance thought to constitute much of the universe — may find a helpful tool in the recent work of researchers from Princeton University and New York University. The team unveiled in a report in the journal Physical Review Letters this month a ready-made method for detecting the collision of stars with an elusive type of black hole that is on the short list of objects believed to make up dark matter. Such a discovery could serve as observable proof of dark matter and provide a much deeper understanding of the universe's inner workings. Postdoctoral researchers Shravan Hanasoge of Princeton's Department of Geosciences and Michael Kesden of NYU's Center for Cosmology and Particle Physics simulated the visible result of a primordial black hole passing through a star. Theoretical remnants of the Big Bang, primordial black holes possess the properties of dark matter and are one of various cosmic objects thought to be the source of the mysterious substance, but they have yet to be observed. If primordial black holes are the source of dark matter, the sheer number of stars in the Milky Way galaxy — roughly 100 billion — makes an encounter inevitable, the authors report. Unlike larger black holes, a primordial black hole would not "swallow" the star, but cause noticeable vibrations on the star's surface as it passes through. Thus, as the number of telescopes and satellites probing distant stars in the Milky Way increases, so do the chances of observing a primordial black hole as it slides harmlessly through one of the galaxy's billions of stars, Hanasoge said. The computer model developed by Hanasoge and Kesden can be used with these current solar-observation techniques to offer a more precise method for detecting primordial black holes than existing tools. "If astronomers were just looking at the sun, the chances of observing a primordial black hole are not likely, but people are now looking at thousands of stars," Hanasoge said. "There's a larger question of what constitutes dark matter, and if a primordial black hole were found it would fit all the parameters — they have mass and force so they directly influence other objects in the universe, and they don't interact with light. Identifying one would have profound implications for our understanding of the early universe and dark matter." Although dark matter has not been observed directly, galaxies are thought to reside in extended dark-matter halos based on documented gravitational effects of these halos on galaxies' visible stars and gas. Like other proposed dark-matter candidates, primordial black holes are difficult to detect because they neither emit nor absorb light, stealthily traversing the universe with only subtle gravitational effects on nearby objects. Because primordial black holes are heavier than other dark-matter candidates, however, their interaction with stars would be detectable by existing and future stellar observatories, Kesden said. When crossing paths with a star, a primordial black hole's gravity would squeeze the star, and then, once the black hole passed through, cause the star's surface to ripple as it snaps back into place. "If you imagine poking a water balloon and watching the water ripple inside, that's similar to how a star's surface appears," Kesden said. "By looking at how a star's surface moves, you can figure out what's going on inside.
If a black hole goes through, you can see the surface vibrate."
Eyeing the sun's surface for hints of dark matter
Kesden and Hanasoge used the sun as a model to calculate the effect of a primordial black hole on a star's surface. Kesden, whose research includes black holes and dark matter, calculated the masses of a primordial black hole, as well as the likely trajectory of the object through the sun. Hanasoge, who studies seismology in the sun, Earth and stars, worked out the black hole's vibrational effect on the sun's surface. Video simulations of the researchers' calculations were created by NASA's Tim Sandstrom using the Pleiades supercomputer at the agency's Ames Research Center in California. One clip shows the vibrations of the sun's surface as a primordial black hole — represented by a white trail — passes through its interior. A second movie portrays the result of a black hole grazing the sun's surface. Marc Kamionkowski, a professor of physics and astronomy at Johns Hopkins University, said that the work serves as a toolkit for detecting primordial black holes, as Hanasoge and Kesden have provided a thorough and accurate method that takes advantage of existing solar observations. A theoretical physicist well known for his work with large-scale structures and the universe's early history, Kamionkowski had no role in the project, but is familiar with it. "It's been known that as a primordial black hole went by a star, it would have an effect, but this is the first time we have calculations that are numerically precise," Kamionkowski said. "This is a clever idea that takes advantage of observations and measurements already made by solar physics. It's like someone calling you to say there might be a million dollars under your front doormat. If it turns out to not be true, it cost you nothing to look. In this case, there might be dark matter in the data sets astronomers already have, so why not look?" One significant aspect of Kesden and Hanasoge's technique, Kamionkowski said, is that it narrows a significant gap in the range of masses that can be probed by existing methods of searching for primordial black holes. The search for primordial black holes has thus far been limited to masses too small to include a black hole, or so large that "those black holes would have disrupted galaxies in heinous ways we would have noticed," Kamionkowski said. "Primordial black holes have been somewhat neglected, and I think that's because there has not been a single, well-motivated idea of how to find them within the range in which they could likely exist." The current mass range in which primordial black holes could be observed was set based on previous direct observations of Hawking radiation — the emissions from black holes as they evaporate into gamma rays — as well as of the bending of light around large stellar objects, Kesden said. The difference in mass between those phenomena, however, is enormous, even in astronomical terms. Hawking radiation can only be observed if the evaporating black hole's mass is less than 100 quadrillion grams. On the other end, an object must be larger than 100 septillion (24 zeroes) grams for light to visibly bend around it. The search for primordial black holes therefore covered a swath of mass that spans a factor of 1 billion, Kesden explained — similar to searching for an unknown object with a weight somewhere between that of a penny and a mining dump truck.
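A quick back-of-the-envelope check of that observational window, in Python; the two bounds are simply the article's own figures restated in scientific notation:

```python
# Observable-mass window for primordial black holes, per the article:
# below ~1e17 g (100 quadrillion grams), Hawking radiation is detectable;
# above ~1e26 g (100 septillion grams), light visibly bends around the object.
hawking_limit_g = 1e17
lensing_limit_g = 1e26

factor = lensing_limit_g / hawking_limit_g
print(f"unprobed window spans a factor of {factor:.0e}")  # 1e+09, i.e. one billion
```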
He and Hanasoge suggest a technique to give that range a much-needed trim, establishing more specific parameters for spotting a primordial black hole. The pair found through their simulations that a primordial black hole larger than 1 sextillion (21 zeroes) grams — roughly the mass of an asteroid — would produce a noticeable effect on a star's surface. "Now that we know primordial black holes can produce detectable vibrations in stars, we could try to look at a larger sample of stars than just our own sun," Kesden said. "The Milky Way has 100 billion stars, so about 10,000 detectable events should be happening every year in our galaxy if we just knew where to look." This research was funded by grants from NASA and by the James Arthur Postdoctoral Fellowship at New York University.
Adult supervision is required when working with volunteers.
Project Time Frame
This project explores the human behavior known as lying.
- To define lying in general, and to categorize types of lies.
- To see how well people can tell when other people are lying.
- To encourage more realistic discussions of the ethics of lying, not to mention its countless practical uses.
Materials and Equipment
- Computer with internet access
- Digital video camera (or mobile phone video camera)
- Typical office/craft supplies (such as paper, pens & poster-board)
Simply put, a lie is any deliberately misleading action (or inaction), spoken or unspoken. Lying is a loaded gun. People say it's wrong, and yet everyone does it. We're taught that honesty is the best policy, and yet we all believe that there are such things as "good" lies. How good are we at the lying game?
- Why do people lie?
- What are the benefits and disadvantages of lying?
- Do lie detectors really work?
- Can we tell when someone is lying? If so, how?
Terms, Concepts and Questions to Start Background Research
- Familiarity with computers and video software.
- A basic knowledge of statistics would be helpful.
- Read overviews of relevant topics (see bibliography).
- Design a list of five simple questions like "How old are you?" or "What is 2+2?"
- Recruit volunteers who don't mind being filmed.
- Instruct some volunteers to lie, and others to tell the truth. All lies must be close enough to the truth to be believable. For instance, a 38-year-old man can give his age as 37.
- Film volunteers answering the questions.
- Get as many people as you can to participate in this test: show the film (with the questions edited out so only the answers are viewed), and have viewers try to guess whether each person on the film is lying or telling the truth.
- Score viewers based on the percentage of correct answers (a scoring sketch follows below).
- Analyze the data.
- Interpret findings in a detailed report.
- Show results visually using charts and graphs.
- Display relevant photos taken throughout the course of the experiment.
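For the scoring and analysis steps, each viewer's accuracy is just the share of clips they judged correctly. A minimal Python sketch; the answer key and viewer guesses below are illustrative placeholders, not real data:

```python
# Score each viewer on the lie-detection test as the percentage of
# volunteers they judged correctly. All data below are placeholders.
answer_key = ["lie", "truth", "lie", "truth", "truth"]  # what each volunteer actually did

viewer_guesses = {
    "viewer_1": ["lie", "truth", "truth", "truth", "lie"],
    "viewer_2": ["lie", "lie", "lie", "truth", "truth"],
}

for viewer, guesses in viewer_guesses.items():
    correct = sum(g == a for g, a in zip(guesses, answer_key))
    print(f"{viewer}: {100 * correct / len(answer_key):.0f}% correct")
```

Comparing these percentages against the 50% that random guessing would produce is a simple way to frame the "can we tell when someone is lying?" question.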
Concentration Camp Symbols of World War II
During the Nazi era in Germany (1933-1945), which spanned the World War II years, the government created a state policy under which 'undesirable' groups within Germany and any of its occupied territories were isolated from the general population. These groups were identified as Jews, homosexuals, gypsies, Jehovah's Witnesses, criminals, political prisoners, and emigrants. Once identified, they were forced to wear a distinctively designed cloth badge on their clothing so that the general population could see which persecuted group they belonged to. Eventually and systematically, those wearing the cloth badges were moved as groups and imprisoned in outdoor concentration camps. The Jewish population was one of the largest groups forced into concentration camps. While there, they were routinely decimated through forced labor, starvation, disease, and outright extermination. The particular symbol chosen to identify the Jewish population as a whole was the Magen David, or Shield of David. This six-pointed, star-shaped design is actually made by the intertwining of two triangles. It is said that the triangles represent the intertwining of the Jewish people, or that one triangle points upward to G-d and the other points down to earth. However, early Jewish texts do not specifically identify this symbol as that of the Jewish people. There are some references to its use on synagogues as early as the 17th century, but no record of how it was chosen to represent the Jewish religion. Still, this symbol was adopted by the late 19th-century Zionist movement and eventually incorporated into the national flag of Israel. This Magen David, the Star of David, was the symbol most used by the Nazi regime to identify its Jewish population. There are many versions of the Magen David used in different regions of Nazi influence. The red star armband above is only one design possibly used in the Eastern European concentration camps. However, the newness of the armband suggests that it is a more recent fabrication and not authentic to the period. Once prisoners were incarcerated in concentration camps, the triangle seems to have prevailed as the common symbol, with only the color identifying the group to which the prisoner belonged. The symbols for the persecuted groups in the concentration camps are identified as:
- yellow triangles for Jewish prisoners
- red triangles for political (Communist) prisoners
- purple triangles for Jehovah's Witnesses
- pink triangles for homosexuals
- green triangles for criminals
- black triangles for Gypsies
- blue triangles for emigrants
Fakes and Forgeries
It is evident and unfortunate that too many of the concentration camp memorabilia offered on online auction sites are not authentic. Many are being made from original cloth of the period, which can make it hard to know for sure whether a piece is authentic or not. However, there is one rule of thumb to consider: if it looks too new, it is. The Black Light Test: the other, more reliable way to know is to move a black light over the piece; if the thread glows, it is synthetic, a material not available during this period. That's true of any painted object, too. If it glows under black light, it is of recent origin. Collecting original concentration camp memorabilia is important, as its very existence informs future generations that this shall not happen again.
The wonder material graphene has recently tackled another dimension and found another exciting application for the future of technology. If your phone has ever overheated on a hot day, you're going to want to read this. Hexagonal boron nitride (h-BN), a material similar to graphene that is known as white graphene, is an electrical insulator. Normally a 2-D material, white graphene shows serious heat-withstanding capabilities in a newly proposed, complex 3-D lattice structure. In most materials used to create electronic devices, heat moves along a plane, rather than moving between layers to dissipate more evenly, which frequently results in overheating. This is also the case with 2-D hexagonal boron nitride, but not when this same material is simulated in a 3-D structure. Rouzbeh Shahsavari and Navid Sakhavand, research scientists at Rice University, have just completed a theoretical analysis that produced a 3-D, lattice-like white graphene structure. It uses hexagonal boron nitride and boron nitride nanotubes to create a configuration in which heat-carrying phonons move in multiple directions: not only over planes, but across and through them as well. This means that electrical engineers now have the opportunity to move heat through and away from key components in electronic devices, which opens the door to significant cooling opportunities for many of the electronic items we use daily, from cell phones to massive data server storage facilities. In an interview with Fortune, Shahsavari clarifies this process even further in an explanation about 3-D thermal-management systems. Essentially, the shape of the material, and its mass from one point to another, can actually shift the direction of the heat's movement. Even when the heat is more inclined to go in one direction, a switch is created that shifts the direction in reverse, better distributing heat through the object. The boron nitride nanotubes are what enable the transfer between these layers to occur. For most of us, this just means that in the near future we may be able to worry less about our smartphones and tablets overheating. For engineers, it may mean an entirely new approach to cooling through the use of white graphene, which could potentially provide a better or alternative solution to cooling mechanisms like nanofluids. Those interested in an incredibly complex scientific explanation can read more about how this dimensional crossover works here or here. photo credit: Shahsavari Group/Rice University
The alignment of the lines in a Word document is crucial for maintaining the shape and symmetry of the page. Sometimes aligning a line to the left makes the paragraph look pleasant and acceptable, and in some cases right-aligning the text of a line does the job nicely for the writer. But you may wonder how to right-align only part of a line in Word. It may sound difficult when you are new to MS Word, but if you pay attention to the discussion below, the whole task becomes a lot easier.
Why You May Need to Right Align Part of a Line in Word
Alignment of the text in a line is one of the most important ornaments of your MS Word article. At the same time, it is one of the must-haves, and without it, the article you have written may look dull. You may need to either right-align or left-align the text in a line based on your needs. When you right-align part of a line, the other part of that line should be left-aligned; so this is also the answer if you are looking for how to left- and right-align on the same line in Word. Aligning text both left and right on the same line is new for most new Word users, so the first priority should be knowing why you may need to apply it in your Word document:
- When you want to create a bigger gap between two sections of a line, you may need to right-align part of it
- If the line indicates a topic of the form "anything versus anything," you may have to right-align part of that line
- Sometimes you may need to start writing from each end of the line, and at that time, both left and right alignment of a single line is a must
How to Right Align Part of a Line in Word - The Steps to Follow
If you are puzzled about how to align text in Word on both sides, the answer is not far away. Stay calm, read the procedure step by step, and then apply it while you write your Word article. Now it is time to take an attentive look at the steps to right-align part of a line in Word while left-aligning the other part of it:
Step 1: Select the Document and the Line for Left and Right Alignment
If you want to apply this partial right alignment to an already written article, you will have to open the Word document first. If you are writing the article and want to apply the alignment to a line instantly, no extra step is necessary. Either way, you must select the line in which you want to apply both left and right alignment.
Step 2: Access the Paragraph Section
While you are writing, or once you have opened the article in Word, there will be a ribbon of options above. From that panel, find "Home" and click on it to make sure you are in the correct tab. Below the ribbon there is a group of options just above where the text-writing area starts. The last option of that group is "Paragraph," and there is an icon beside it known as the "Paragraph Settings" icon; press it to access this portion. You will be directed to the "Paragraph" dialog box, which appears as a small window.
Step 3: Left-Align the Text in the Line
In the dialog box, you will find three different sections; the first one is "Indents and Spacing," and you must click on it.
Under the "General" portion of this section, there is an alignment box with a dropdown menu. There are four alignment options; select "Left" and click on it. You can also simply select the text line and press "Ctrl + L" to left-align it.
Step 4: Right-Align the Text in the Line
After you have left-aligned the text line, the next part is to align the rest of the same line to the right side of the page. First, you will see a small symbol just above the top-left corner of the text-writing area: the tab selector, which looks like the icon in the picture below. Keep clicking on that icon to cycle through the tab types; after two or three clicks, the right-tab symbol will be revealed. That symbol is also shown in the picture we have down here. There should be a ruler around the writing area; if it is not visible, you can easily bring it up by checking "Ruler" on the "View" tab of the ribbon. Once the ruler is visible, find the point on it above the right-hand end of the selected line, and click that spot on the ruler to place a right tab stop. Then simply press the "Tab" key on your keyboard, and the cursor will jump to that part of the line. As you start writing from there, the text will flow to the left of the tab stop as you add more words.
As we discussed above, proper alignment is a crucial ornament for your Word document, so the article should have alignment that matches the topic and the needs of each line. Combined right and left alignment is relatively unfamiliar to new users, but your knowledge will remain incomplete without learning how to right-align part of a line in Word. Though it is a bit more complex than a single left or right alignment, it is a crucial technique to learn.
I have read lately that the estimated age of the universe is approx. 11 to 14 billion years, the Milky Way galaxy is approx. 10 billion years old, and our Sun is approx. 4 billion years old. Where, in relation to the entire Universe, is the Milky Way located, and how far in light years is the furthest detected object? It is difficult to say where the Milky Way is located in relation to the Universe, since we don't think that the Universe has a center, and (on large enough scales) it is completely homogeneous (i.e. made of mostly the same stuff everywhere) and isotropic (i.e. it doesn't change depending on the direction you look). On smaller scales the Universe contains a lot of structure (for example, us!). The largest known structures are the superclusters of galaxies, which form at the nodes of the filament-like distribution of galaxies throughout the Universe (see here). The Milky Way galaxy is found in a small group of galaxies (known as the Local Group) towards the edge of a relatively small supercluster which we call the Local Supercluster (or sometimes the Virgo Supercluster, after the Virgo Cluster, the largest cluster of galaxies in it). How far away the furthest detected object is depends on what you call an object. We detect the CMB radiation, which comes from the time when the ions and electrons in the Universe first combined into atoms. That happened at a redshift of z=1000, or only about 300,000 years after the Big Bang (making it almost 13.7 billion light years away, since that's how old the Universe is). The furthest discrete object we have detected, however, is a quasar at a redshift of about z=6. Exactly how far away that is depends on your choice of cosmology (i.e. the amount of dark matter, dark energy, etc., in the Universe), but it's many billions of light years (although obviously closer than the CMB scattering surface).
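To make the cosmology dependence concrete, here is a minimal sketch using the astropy library; the two parameter sets are illustrative choices, not the values assumed in the answer above:

```python
# Comoving distance to a z = 6 quasar under two illustrative choices of
# cosmological parameters, showing how "how far away" depends on them.
from astropy.cosmology import FlatLambdaCDM

for H0, Om0 in [(70, 0.3), (65, 0.4)]:     # hypothetical parameter choices
    cosmo = FlatLambdaCDM(H0=H0, Om0=Om0)  # flat universe with dark energy
    d = cosmo.comoving_distance(6)         # returns a Quantity in megaparsecs
    print(f"H0={H0}, Om0={Om0}: {d:.0f}")
```

Both runs give a comoving distance on the order of several thousand megaparsecs, i.e. many billions of light years, but the exact figure shifts with the assumed parameters, which is the point the answer makes.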
Most horizontal-launch systems envisioned in the past that used electromagnetics as a launch assist were subsonic. In order to conserve fuel during launch and to keep engine weight to a minimum, a combination of electromagnetic propulsion (i.e. maglev or rail guns) and scramjet engines may offer a viable alternative for manned systems to LEO. Substantial electrical-energy storage would be required for the launch infrastructure to reach the velocity at which the orbital vehicle's scramjet engines could ignite. An example of a lower-speed system is here. If the launch velocity could be increased to between Mach 2 and Mach 4, a manned system using only scramjet engines could be constructed; final orbit insertion would still need some additional rocket engines. The launch track would need to be at least 10 miles in length to keep acceleration under 3 g's and to offer launch-abort opportunities. The potential of a horizontal launch within the earth's atmosphere is very much limited by the atmosphere itself. At low altitude the maximum speed is limited by the high atmospheric pressure, and at medium altitude there is not enough oxygen left for the scramjet. But to get into a low orbit, much more speed and height are necessary: about 8 km/s instead of some 1.3 km/s for Mach 4, and 200 km of altitude instead of only about 25 km. The difference in kinetic energy is impressive: about 38 times more for 8 km/s than for 1.3 km/s.
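That kinetic-energy ratio is easy to check, since kinetic energy grows with the square of velocity and the vehicle mass cancels out of the comparison:

```python
# Kinetic energy scales as v**2 (KE = 0.5 * m * v**2), so for a fixed
# vehicle mass the ratio depends only on the two velocities.
v_orbit  = 8.0   # km/s, roughly low-Earth-orbit speed
v_launch = 1.3   # km/s, roughly Mach 4 at altitude
print(round((v_orbit / v_launch) ** 2, 1))  # -> 37.9, i.e. about 38x
```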
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
PCR Test Current Affairs, GK & News
The polymerase chain reaction (PCR) process forms the basis for the tests currently being used to assess SARS-CoV-2 infection among the masses. Polymerase Chain Reaction: PCR is used to create multiple copies of a DNA sequence. It uses the enzyme polymerase to create the copies exponentially through a chain-reaction process. On September 29, 2020, the World Health Organization announced that it would launch 120 million rapid diagnostic tests for COVID-19 together with its partners. This will help lower- and middle-income countries close the testing gap with richer countries. Highlights: The rapid diagnostic tests launched by the WHO will be antigen-based. The number of COVID-19 cases has been increasing across the world, and countries are racing to test larger portions of their populations. Steps Involved in Testing: There are 5 steps involved in testing a sample for the virus. Collection: the sample is collected from the throat and nasal cavity and stored in a transport medium.
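The exponential copying that the passage describes is easy to make concrete: in the ideal case, every thermal cycle doubles the number of DNA copies. A minimal sketch; the starting copy number and cycle counts are illustrative:

```python
# Ideal PCR amplification: each thermal cycle doubles the copies, so
# n cycles multiply the starting number of DNA templates by 2**n.
initial_copies = 10  # illustrative starting amount
for cycles in (10, 20, 30):
    print(f"after {cycles:2d} cycles: {initial_copies * 2**cycles:,} copies")
# 30 cycles turn 10 templates into ~10.7 billion copies in the ideal case.
```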
Sound! It's almost impossible to imagine a world without it. It's probably the first thing you experience when you wake up in the morning - when you hear birds chirping or your alarm clock bleeping away. Sound fills our days with excitement and meaning: when people talk to us, when we listen to music, or when we hear interesting programs on the radio and TV. Sound may be the last thing you hear at night as well, when you listen to your heartbeat and drift gradually into the soundless world of sleep. What is sound? Sound is the term used to describe what is heard when sound waves pass through a medium to the ear. All sounds are made by the vibration of the molecules through which the sound travels. Sound can propagate through a medium such as air, water or a solid as longitudinal waves, and also as transverse waves in solids. How does sound travel? When you hear an alarm clock ringing, you're listening to energy making a journey. It sets off from somewhere inside the clock, travels through the air, and arrives some time later in your ears. It's a little bit like waves traveling over the sea: they start out from a place where the wind is blowing on the water (the original source of the energy, like the bell or buzzer inside your alarm clock), travel over the ocean surface (the medium that allows the waves to travel), and eventually wash up on the beach (similar to sounds entering your ears). The decibel (abbreviated dB) is the unit used to measure the intensity of a sound. Frequency is the speed of the vibration, and this determines the pitch of the sound.
- LOW FREQUENCY: sound waves with a frequency below the lower limit of ordinary audibility are termed LF.
- MID-RANGE FREQUENCY: typically the frequency range between 300 Hz and 5,000 Hz. Reproduction of the midrange frequencies should sound natural and uncolored, with excellent detail.
- HIGH FREQUENCY: high-frequency sound is sound whose frequency lies between 8 and 20 kHz. High-frequency sound above 16 kHz can hardly be heard, but it is not completely inaudible.
Vibration in Sound: Vibration is the periodic back-and-forth motion of the particles of an elastic body or medium, commonly resulting when almost any physical system is displaced from its equilibrium condition and allowed to respond to the forces that tend to restore equilibrium. Vibrations fall into two categories: free and forced. Free vibrations occur when the system is disturbed momentarily and then allowed to move without restraint. Forced vibrations occur if a system is continuously driven by an external agency. The speed of sound: now that you know sound carries energy in a pattern of waves, you can see that the speed of sound means the speed at which the waves move - the speed at which the energy travels between two places. Sound travels at different speeds in solids, liquids, and gases, and even its speed in one material can change. When we say that a jet airplane "breaks through the sound barrier," we mean that it accelerates so fast that it overtakes the incredibly high-intensity (that is, noisy) sound waves its engines are making, producing a horrible noise called a sonic boom in the process. That's why you'll see a fighter plane whizz overhead a second or two before you hear the vicious scream of its jet engines. What is acoustics in sound? What are the types of acoustics? Room acoustics is the broad term that describes how sound waves interact with a room. Each room, and all the objects in it, will react differently to different frequencies of sound.
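Since the decibel mentioned above is defined on a logarithmic intensity scale (the standard formula is dB = 10 log10(I/I0), with reference intensity I0 = 1e-12 W/m^2, the threshold of hearing), a quick sketch shows how physical intensity maps onto familiar dB numbers:

```python
import math

# Sound level in decibels from physical intensity:
#   dB = 10 * log10(I / I0), with I0 = 1e-12 W/m^2 (threshold of hearing).
I0 = 1e-12
for intensity in (1e-12, 1e-6, 1.0):  # W/m^2
    level = 10 * math.log10(intensity / I0)
    print(f"{intensity:g} W/m^2 -> {level:.0f} dB")
# -> 0 dB (threshold of hearing), 60 dB (conversation level), 120 dB (near the pain threshold)
```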
Every speaker will sound different in different rooms. Acoustics is the science of sound, including its production, transmission, and effects, including biological and psychological effects; room acoustics covers those qualities of a room that, together, determine its character with respect to auditory effects.
TYPES OF ACOUSTICS
- Full Acoustics
- Smart Acoustics
- Quick Acoustics
- Customized Acoustics
How does a speaker work? A speaker works by converting electrical energy into sound waves. In a speaker, a current is sent through the voice coil, which produces a magnetic field that interacts with the magnetic field of the permanent magnet attached to the speaker. Like poles repel each other, and opposite poles attract.
What is an amplifier? It is a stage where the incoming electrical signals are amplified (increased) and the sound frequency is divided into the parts needed by each speaker driver, such as the tweeter, mid-range, and subwoofer. An amplifier is an electronic device that amplifies signals, such as those from a radio receiver or an electric guitar pickup, to a level high enough to drive loudspeakers or headphones.
What is an A/V receiver? An audio/video receiver is a component used in home theaters. Its purpose is to receive audio and video signals from a number of sources, process them, and drive loudspeakers and displays such as a television, monitor, or video projector. The standard for A/V receivers is five channels of amplification; these are usually referred to as 5.1 receivers. This provides for left, right, center, left-surround, and right-surround speakers plus a subwoofer.
What is a power amp? A power amp is a device that amplifies (adds extra power to) a signal received from a radio receiver or an electric guitar pickup to a level high enough to drive loudspeakers.
If you find this article helpful, please leave a comment and share.
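The speaker description above is, at its core, the motor principle: a current-carrying wire in a magnetic field feels a force F = B·I·L. A toy calculation under assumed values (the field strength, current, and coil wire length below are invented for illustration, not taken from any real driver):

```python
# Force on a voice coil: F = B * I * L
# B: magnetic flux density (tesla), I: signal current (ampere),
# L: length of coil wire sitting in the magnetic gap (metre)
def coil_force_newtons(b_tesla: float, current_a: float, wire_len_m: float) -> float:
    return b_tesla * current_a * wire_len_m

# Hypothetical driver: 1.0 T gap field, 2 A peak current, 5 m of wire in the gap.
print(coil_force_newtons(1.0, 2.0, 5.0))  # 10.0 N, alternating with the signal
```

Because the force reverses whenever the current does, the cone moves back and forth at the signal frequency, reproducing the sound.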
This lesson is designed for middle school students with no previous knowledge of astronomy or the history of astronomy. I often prepare my images as a slideshow or as printed, large-size images for students to understand the story of the scientists behind the science we are studying. This lesson should take approximately thirty to forty-five minutes, depending on the amount of time you allow students to examine the works of Leonardo da Vinci and whether you plan to include any additional background information about the Renaissance.
- Describe the culture in Europe at the time of the Renaissance and the dramatic changes in thought
- Describe the work and contributions of Leonardo da Vinci
- Describe the platform created by the Renaissance to support the paradigm shifts of the Scientific Revolution
Why was the Scientific Revolution important, and how did it contribute to progress?
Images of Leonardo da Vinci, his journals, his paintings, and his inventions (please note: as I have collected a great number of images in high resolution, they are too large to attach to this wiki page; you can find them in the Leonardo da Vinci Images folder located in this lesson folder). A dictionary or two if you want your students to look up definitions of words as you are introducing new concepts.
- Doing da Vinci. This is a series produced by the Discovery Channel. A group of engineers builds life-size models of Leonardo's inventions and tests them. Some free video clips are also available.
- Leonardo da Vinci. This is a site created by the Museum of Science in Boston about a Leonardo da Vinci exhibit they had a few years ago. The site contains interesting biographical information as well as some good lesson plans and classroom activities.
[Note: This lesson, in its entirety, and references for the images used are attached as a zipped pdf file.]
Summary of the Lesson
- Introduce and define the Renaissance
- Provide a brief biography of Leonardo da Vinci
- Introduce the idea of a paradigm shift and why the Renaissance created a platform for the Scientific Revolution
Lesson: The Renaissance and Leonardo da Vinci
We'll now jump ahead over 1,500 years in our study of astronomy to the next significant contributions to the structure of the solar system. It's now the 16th century, and Europe is just emerging from the Middle Ages, or Medieval times. The Middle Ages were a period of great cultural, political, and economic change in Europe. During the Middle Ages the Islamic civilisation had flourished, and its astronomers made many significant contributions to the field, including careful observations and the discovery and cataloging of stars. Astronomers in Europe made advances in the explanations of the motion of the stars and planets. The 16th century fell within the period of European history characterized as the Renaissance (def: "rebirth" -- you may want to have someone look this up). The Renaissance began in Italy near the end of the Black Death and slowly spread through the rest of Europe. Renaissance thinkers returned to the ancient texts of Greek and Latin philosophers to improve and perfect their worldly knowledge. In particular, the Renaissance saw significant change in the way the universe was viewed and in the methods with which philosophy sought to explain natural phenomena. Leonardo da Vinci is often considered the quintessential (vocab: quintessential – the most perfect embodiment of something) Renaissance man, whose curiosity was equaled only by his powers of invention.
He is widely considered one of the greatest painters of all time and perhaps the most diversely talented person ever to live. The son of a 25-year-old notary, Ser Piero, and a peasant girl, Caterina, Leonardo was born on April 15, 1452, in Vinci, Italy, just outside Florence. His father took custody of the little fellow shortly after his birth, while his mother married someone else and moved to a neighboring town. Growing up in his father's Vinci home, Leonardo had access to scholarly texts owned by family and friends. He was also exposed to Vinci's longstanding painting tradition, and when he was about 15 his father apprenticed him to the renowned workshop of Andrea del Verrocchio in Florence. Even as an apprentice, Leonardo demonstrated his colossal talent. For example, one of Leonardo's first big breaks was to paint an angel in Verrocchio's "Baptism of Christ," and Leonardo's angel was so much better than his master's work that, the story goes, Verrocchio was so moved by the expression on the angel's face that he put down his brush and resolved never to paint again. Leonardo stayed in the Verrocchio workshop until 1477, when he set up shop for himself. He spent much of his time sketching and drawing the things he saw around him; in fact, he always carried his sketchbook attached to a strap around his waist. Leonardo loved to study movement and was one of the first artists to accurately capture the movement of his subjects. Leonardo was also among the first artists to study proportions, and specifically the physical proportions of people. He used these two techniques to accurately capture the movement of proportionally accurate subjects in his paintings. Artists have always found it difficult to make a living off their art. Even a master like Leonardo was forced to sell out in order to support himself, so he adapted his drawing skills to the more lucrative fields of architecture, military engineering, canal building, and weapons design. Although a peacenik at heart, Leonardo landed a job working for the Duke of Milan by calling himself a military engineer and outlining some of his sinister ideas for weapons and fortifications. Like many art school types in search of a salary, he only briefly mentioned to the Duke that he could paint as well. The Duke kept Leonardo busy painting, sculpting, and designing elaborate court festivals, but he also put Leonardo to work designing weapons, buildings, and machinery. Lucky for Leonardo, he was actually really talented as an engineer. Good illustrators were a dime a dozen in Renaissance Italy, but Leonardo had the brains and the diligence to break new ground, usually leaving his contemporaries in the dust. Like many crackpot geniuses, Leonardo wanted to create "new machines" for a "new world." For him, the most interesting part was the use of mechanical gears, and he studied them with relish. Based on the gear, he came up with loads of different thingamajigs, including the bicycle, a helicopter, an "auto-mobile," and, of course, some gruesome weapons. Things are not always as they appear, though. While Leonardo may have earned a living designing war machines and new architecture for the Duke, he was actually an extremely peaceful man. He was famous for buying birds in the marketplace just to set them free, and for being a vegetarian who had no desire to hurt other living creatures. He was also obsessed with water. Recall that nobody had harnessed electricity yet, so water was at that point the ultimate source of power.
Leonardo studied all forms of water (liquid, steam, and ice), and he had all sorts of swell ideas about what to do with it. He cooked up plans for a device to measure humidity, a steam-powered cannon, many different waterwheels, and oodles of useful industrial machines powered by flowing water. He was also able to finish some of his most famous paintings while working for the Duke. One of these paintings was The Last Supper, although almost as soon as he had finished it, the painting started disintegrating; ever the scientist, Leonardo had been experimenting with a new form of paint that didn't hold up well. The Duke was forced out of power in 1499, 17 years after Leonardo had started working for him. So Leonardo was on his own, free to paint, invent, and design as he liked. In 1503, Leonardo started working on his most famous painting, the Mona Lisa. Mona Lisa was the wife of an important citizen in Florence, although the painting became so important to Leonardo that he ended up keeping it for himself. In March of 1516, King Francis I of France offered him the title of "Premier Painter and Engineer and Architect of the King." His last and perhaps most generous patron, Francis I provided Leonardo with a cushy job, including a stipend and a manor house near the royal chateau at Amboise. He spent most of his time studying science, either by going out into nature and observing things or by locking himself away in his workshop, cutting up bodies or pondering universal truths. Leonardo died on May 2, 1519, in Cloux, France. Legend has it that King Francis was at his side when he died, cradling Leonardo's head in his arms. So, now that you have a snapshot of the Renaissance and some of the creative and innovative ideas happening during this time, how do you think the culture of Europe during this time contributed to the Scientific Revolution? At the end of this lesson I ask students to create their own replica of a Leonardo da Vinci invention or painting and research its impact on society. I also ask them to read chapter 3, "On Revolutions and Fools," in The Story of Science: Newton at the Center by Joy Hakim (Smithsonian, 2005). (The replica assessment is available on its own page as wiki content and as a downloadable pdf and doc file.) The Renaissance and Leonardo da Vinci Lesson (zipped pdf)
KIRTON LANE PRIMARY SCHOOL CURRICULUM
Our curriculum is based around our vision of independence and fluent use of basic skills. We recognise that if our learners are to be fully independent in the core skills of reading, writing and maths, these subjects must be core elements of a broad and enriched curriculum. We instil this by providing the children with a bank of memorable experiences that deepen their understanding, give them the opportunity to apply their skills and widen their knowledge of all aspects of the national curriculum. We aim to engage our pupils right from the start of projects with trips, visits and hands-on experiences. We then develop their skills, knowledge and understanding and provide opportunities for them to innovate and express their ideas and viewpoints in order to make the project their own. For more information on the National Curriculum's Programmes of Study, please click here. Below we have added a document which details our curriculum plan and how it is thought out and applied. It also includes further information on the subjects each year group is to focus on and what they are learning.
At Kirton Lane we recognise that reading is a basic skill which is key to success in all subject areas, and in partnership with parents we aim to raise achievement for all. There is overwhelming evidence that reading ability has a significant relationship to a child's life chances; however, evidence shows that only 1 in 5 parents easily find the opportunity to read with their children. Literacy skills and a love of reading can break this vicious cycle of deprivation and disadvantage. Parents and teachers are the most important reading role models for children and young people. In the early years, our approach is to develop an early love of books through sharing high-quality stories and rhymes, making reading exciting, fun and relevant. We use a structured synthetic phonics approach which builds knowledge of letter sounds. This is done on a daily basis through ability-led small-group work. As children develop the ability to decode words and build a sight vocabulary for 'tricky' non-phonetic words, we broaden the range of texts available to the children, and greater focus is placed on the comprehension of texts and the development of a bank of new vocabulary. For any children who reach Key Stage 2 without a firm grasp of their letter sounds, extra phonics group support is in place to help with this. We also run a successful phonics intervention called Toe by Toe, which provides daily one-to-one support for children with their reading. As well as individual reading to an adult at school, each teacher runs guided reading groups which help children to discuss and enjoy texts at a deeper level, aiding their comprehension of more challenging reading materials. The school follows the Read Write Inc teaching of phonics. Click here to look at the school policy RWI Policy 2021. Here is the link for RWI for parents who wish to know more: https://www.ruthmiskin.com/en/
At Kirton Lane we use the Maths Mastery approach when teaching mathematics. We believe all our children can achieve in maths, and our lessons focus on developing children's conceptual understanding and their ability to reason and explain, making links where appropriate. Once children secure fluency in mathematical concepts, they are challenged in their learning through problem solving in order to gain a deeper understanding.
Key features of the maths mastery curriculum are:
- High expectations for every child
- Fewer topics, greater depth
- Number sense and place value come first
- Research-based curriculum
- Objects and pictures always before numbers and letters
- Problem solving is central
- Calculate with confidence and understand why it works
Maths mastery embeds a deeper understanding of maths by utilising a concrete, pictorial and abstract approach, so that pupils understand what they are doing rather than just learning to repeat routines without grasping what is happening.
Science activities are primarily practical, and there is a strong emphasis on observation, discussion, investigation and the solving of problems. Children often work in mixed-ability groupings in science lessons, and children are encouraged to work co-operatively with others in a group situation as well as individually and as part of a whole class. The school has taken part in the BBC 'Terrific Science Investigations'; this included experiments that were published nationally. Click here to find out more.
Personal, Social, Health, Economic Education (Circle Time)
Throughout the school, all staff support children in their personal, social, moral, emotional and spiritual needs. The school has a trained Family Manager who is able to run nurture groups and parenting groups when required. The school has a sensory room and a space that is available for circle time. Staff teach regular PSCHE lessons, and Spiritual, Moral, Social, Cultural elements run through our curriculum. The aims of the SMSC curriculum are:
Spiritual: explore beliefs and experience; respect faiths, feelings and values; enjoy learning about oneself, others and the surrounding world; use imagination and creativity; reflect.
Moral: recognise right and wrong; respect the law; understand consequences; investigate moral and ethical issues; offer reasoned views.
Social: use a range of social skills; participate in the local community; appreciate diverse viewpoints; participate, volunteer and cooperate; resolve conflict; engage with the 'British values' of democracy, the rule of law, liberty, respect and tolerance.
Cultural: appreciate cultural influences; appreciate the role of Britain's parliamentary system; participate in cultural opportunities; understand, accept, respect and celebrate diversity.
Sports and PE (see our sports and PE page by clicking here)
Sport and physical exercise are an important part of our daily lives here at Kirton Lane, whether it be the early morning yoga class with Miss Fraser or the preparation for the Doncaster Fit and Healthy School Competition. Our Sports Premium is used to employ specialists who not only work with pupils to help them progress in PE but also support staff CPD too. The premium has also provided our children with wider sport activities and opportunities. Take a look below at our skills in action in the garden of the Palace of Westminster!
Modern Foreign Languages
Children have an early introduction to Spanish, German and French to prepare them for KS2 through songs, games and story reading. Children will start learning languages through phonics, basic vocabulary and syntactical structures, which will give them a head start in KS2 and will develop their interest in languages.
The school follows the Doncaster Agreed Syllabus for Religious Education, which has an important role to play as part of a broad, balanced and coherent curriculum to which all pupils are entitled. This actively promotes the values of truth, justice, respect for all and care of the environment.
It places specific emphasis on: ■ pupils valuing themselves and others; ■ the role of family and the community in religious belief and activity; ■ the celebration of diversity through understanding similarities and differences; ■ sustainable development of the earth. The syllabus also recognises the changing nature of society, including changes in religious practice and expression, and the influence of religions in the local, national and global community. Religious studies supports the key purposes of the school curriculum, which are to: ■ provide opportunities for all pupils to learn and achieve; ■ promote pupils' spiritual, moral, social and cultural development and prepare all pupils for the opportunities, responsibilities and experiences of life.
Computers are now part of everyday life. For most of us, technology is essential to our lives, at home and at work. 'Computational thinking' is a skill children must be taught if they are to be ready for the workplace and able to participate effectively in this digital world. The new national curriculum for computing has been developed to equip young people in England with the foundational skills, knowledge and understanding of computing they will need for the rest of their lives. Computing is concerned with how computers and computer systems work, and how they are designed and programmed. Pupils studying computing will gain an understanding of computational systems of all kinds, whether or not they include computers. Computational thinking provides insights into many areas of the curriculum, and influences work at the cutting edge of a wide range of disciplines.
Writing and SPaG
Children are encouraged to read and write a range of genres in their time at our school. Each year they will focus on various text types such as narrative, non-fiction and poetry; story-writing lessons in particular help develop their story structure, grammar and punctuation skills. In Key Stages One and Two the children use 'Talk for Writing'; this enables children to imitate the language they need for a particular topic orally before reading and analysing it and then writing their own story.
The children have access to a broad and balanced curriculum. The school's Art Week each year focuses on a famous artist or a specific art skill such as printing or working with textiles. The school uses the National Curriculum as a base for long-term and medium-term planning. Children have the opportunity to learn an instrument: this year it is the guitar and ukulele.
Researchers at the University at Buffalo, New York, have developed a new process for creating 3D artificial tissue that could improve experimental drug testing and the quality of artificial organs. The method, described in Advanced Science, is based on compression buckling, a structural-engineering principle that explains why figures pop out from the pages of a children's pop-up book. "When you open a new page, you apply a force. When this force pulls on the figure, it opens the creases and the figure pops out," said research co-author Dr. Ruogang Zhao, Associate Professor of Biomedical Engineering. "This study shows that the same principles can be applied to engineered tissue."
In a series of experiments, the researchers used the compression-buckling method to produce a variety of three-dimensional polymer structures, including not only simple shapes such as boxes and pyramids but also an eight-legged design resembling an octopus. To demonstrate the usefulness of the method in tissue engineering, the team created an osteon-like structure. The osteon is the basic building block of compact bone tissue and is characterized by a sparse distribution of bone cells within a mineralized scaffold; each bone cell sits in a small cavity known as a lacuna, and the cells are connected via canaliculi, small channels through the bone scaffold.
The results are important, Zhao says, because most tissue engineers rely on two-dimensional fabrication methods that produce very thin tissues, which do not represent the volume of real human tissue. The planar nature of these tissue models limits their application to disease modeling and drug testing, he says. Compression buckling can quickly transform 2D tissue into 3D tissue of considerable thickness, allowing researchers to create more realistic tissue and opening new possibilities for tissue engineering and regenerative medicine. It can also outperform other 3D human-tissue-engineering techniques, such as 3D bioprinting, in fabrication speed and spatial resolution, Zhao says.
Zhaowei Chen et al., Compression Buckling Manufacture of Fine Structures Including 3D Cells, Advanced Science (2021). DOI: 10.1002/advs.202101027
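The pop-up mechanism the researchers borrow is classical compressive buckling, and the load at which a slender element buckles is given by Euler's formula, P_cr = π²EI/(KL)². A rough sketch under made-up values (the modulus, cross-section, and length below are illustrative, not parameters from the paper):

```python
import math

# Euler critical buckling load for a slender column: P_cr = pi^2 * E * I / (K * L)^2
# E: Young's modulus (Pa), I: second moment of area (m^4),
# L: length (m), K: effective-length factor (1.0 for pinned-pinned ends)
def euler_buckling_load(e_pa: float, i_m4: float, length_m: float, k: float = 1.0) -> float:
    return math.pi ** 2 * e_pa * i_m4 / (k * length_m) ** 2

# Hypothetical soft-polymer micro-beam: 1 MPa modulus,
# 10 um x 10 um square cross-section, 200 um long.
side_m = 10e-6
i_square = side_m ** 4 / 12  # second moment of area of a square section
print(euler_buckling_load(1e6, i_square, 200e-6))  # critical load in newtons
```

Compress such a beam past this tiny threshold and it pops sideways into a controlled 3D arch, which is the shape-forming step the article describes.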
Upon losing his position at the University of Pisa, Galileo was appointed to the chair of mathematics at the University of Padua. It was here that Galileo improved upon some of the technologies of the day to produce a superior compass for military applications and a precursor to the thermometer known as the "thermoscope," and he greatly improved the design of the telescope, producing instruments that had three times, then 30 times, the magnifying power. Using the 30-power telescope enabled Galileo to scan the night skies and develop as an astronomer. Galileo is credited as the first person to describe the surface of the moon, view the four largest moons of Jupiter, record the planet Neptune (though he did not recognize it as a planet), view the rings around Saturn, and describe sunspots. As an astronomer, Galileo championed the Copernican theory that the earth revolved around the sun, along with the other planets in our solar system. This once again clashed with the conventional wisdom that the sun and other planets revolved around the earth. For his inability to follow conventional wisdom, Galileo was called to Rome to answer charges of heresy, and he was ultimately placed under house arrest for the remainder of his life. It was during this phase of his life that Galileo published the book "Two New Sciences." This book, which was first published in Holland, is considered to be the first book of modern physics. In addition to expounding on many physical properties from a mathematical perspective, Galileo outlined the modern scientific method, whereby theories are tested by experimentation. Despite Galileo's status as one of the great thinkers of the day, many of his theories were not accepted during his lifetime. However, time would prove many of his theories to be correct as the world emerged into the modern age. As a result of "Two New Sciences" and his other publications, many of the great minds throughout history consider Galileo to be the individual who ushered in the age of modern science.
Article by FCFCDB
Biology, Psychology, Parapsychology
Date: Feb. 2021
Source: Current Biology
A team of researchers has developed a methodology for two-way communication with dreamers during their sleep. They used EEG (electroencephalography) and facial or ocular responses to correlate with stimuli. Dreamers were able to hear simple arithmetic questions and to provide correct answers via a pre-established protocol (ocular or facial movements). Once awakened, the dreamers confirmed to the experimenters that they had received the questions and transmitted the answers, with some distortion in certain cases. This experiment confirms the sleep-learning phenomenon and opens the door to a better understanding of dream states.
Read more: Current Biology
- Centralized authority: the power or right to give orders, make decisions that other parties must follow, and enforce obedience. This kind of authority ignores the natural autonomy of the other parties.
- Decentralized authority: the power or right, freely endowed by other parties to the authority, to make decisions, formulate ideas, set rules, and so on, which those parties will adopt and follow because they think it is in their own interest to do so.
The purpose of a centralized authority is to further its own objectives by using (even to the point of exploiting) other parties, thereby possibly disregarding the objectives and interests of those parties if the authority deems that necessary. The purpose of a decentralized authority is to further the objectives of the parties that have endowed it with its powers or rights. It is an objective of such an authority to support these parties in the pursuit of their own, individual objectives.
Here are printable materials and some suggestions to present letter O. This activity is part of the Bible alphabet A to Z, available in block handwriting formats.
Letter O is for Offering, from David's Psalm (Song) of Thanks: "Praise the Lord for the glory of his name. Bring your offering to him. Worship the Lord because he is holy." 1 Chronicles 16:29, in David's Psalm of Thanks
Present and display your choice of alphabet printable materials listed in the materials column.
Finger and Pencil Tracing: Trace letter O's in upper and lower case with your finger as you also sound out the letter. Invite the children to do the same on their coloring page. Encourage the children to trace the dotted letter, and demonstrate the direction of the arrows and numbers that help them trace the letter correctly. During the demonstration, you may want to count out loud as you trace so children become aware of how the number order aids them in the writing process.
Find the Letter O's: Have the children find all the letter O's in upper and lower case on the page and encourage them to circle or trace/shade them first.
Coloring Activity: Encourage the children to color the image on the coloring page or poster.
Letter O Worksheet and Mini Book: These materials can be used to reinforce letter practice and to identify related O words. Read the suggested instructions for using the worksheet. Discuss other letter O words and images found in the worksheet. You can also display other O posters and coloring pages, or even make a letter O classroom book using coloring images or color posters. Visit Letter O printable activities to make your choice.
Letter O Word Search & Handwriting Practice: The four-word search game features letter O words with pictures and handwriting practice.
Advanced Handwriting Practice: Print your choice of drawing and writing lined paper. Encourage children to draw a depiction of an offering and practice writing.
Learning Math: Measurement
Fundamentals of Measurement
Part A: Measuring Accurately (45 minutes)
Session 2, Part A. In This Part:
- Conservation, Transitivity, and Unit Iteration
- Partitioning on a Number Line
Conservation, Transitivity, and Unit Iteration
In Session 1, we established that in order to measure something, we have to (1) select an attribute of the thing to be measured; (2) choose an appropriate unit of measure; and (3) determine the number of units. In conjunction with these three steps, many educators have noted that there are three components of measuring that contribute to students' ability to make meaningful and accurate measurements: conservation, transitivity, and unit iteration.
Conservation is the principle that an object maintains the same size and shape even if it is repositioned or divided in certain ways. If you understand this principle, you realize that a pencil's length remains constant when it is placed in different orientations. For example, two pencils that are the same length remain equal in length when one pencil is placed ahead of the other. You also realize that two differently shaped figures have the same area if they have the same component pieces. For instance, a jigsaw puzzle covers the same amount of space whether the puzzle is completed or in separate pieces.
When you can't compare two objects directly, you must compare them by means of a third object. To do this, you must intuitively understand the mathematical notion of transitivity (if A = B and B = C, then A = C; if A < B and B < C, then A < C; if A > B and B > C, then A > C). For example, to compare the length of a bookshelf in one room with the length of a desk in another room, you might cut a string that is the same length as the bookshelf. You can then compare the piece of string with the desk. If the string is the same length as the desk, then you know that the desk is the same length as the bookshelf. Developmentally, conservation precedes the understanding of transitivity, because you must be sure that a tool's length (area, volume, etc.) will stay the same when moved in the process of measuring.
In order to determine the correct unit for measurement, you must understand the attribute you are measuring. For instance, when measuring distance, a linear measurement is appropriate. When measuring area, you need two-dimensional units, such as squares, to cover the surface. When measuring volume, you need a three-dimensional unit. Another key point to grasp is that the chosen unit influences the number of units. For example, weighing a package in grams results in a larger number of units (2,000 g) than weighing it in kilograms (2 kg). This inverse relationship (a larger number of smaller units) is a conceptually difficult idea.
Unit iteration is the repetition of a single unit. If you are measuring the length of a desk with straws, it is easy enough to lay out straws across the desk and then count them. But if only one straw is available, then you must iterate (repeat) the unit (straw). You first have to visualize the total length in terms of the single unit and then reposition the unit repeatedly.
Counts of a number of objects are exact (e.g., you can have either three chairs or four chairs around the table, not between three and four chairs), yet measurements cannot be made exactly. Why is that so? What makes a count different from a measure?
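The grams-versus-kilograms point above is worth seeing numerically: the count of units grows exactly as fast as the unit shrinks. A minimal sketch using the 2 kg package from the text:

```python
# One quantity, three unit choices: smaller units give proportionally larger counts.
MASS_KG = 2.0

units_per_kg = {"kilograms": 1.0, "grams": 1_000.0, "milligrams": 1_000_000.0}
for unit_name, per_kg in units_per_kg.items():
    print(f"{MASS_KG * per_kg:>13,.0f} {unit_name}")
```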
The units on measurement instruments, such as rulers and thermometers, run together; they are not distinct as are, for example, the number of books on a shelf.
- Why might this aspect of measurement cause confusion?
- How is understanding a length of 7 in. or a temperature of 63 degrees Fahrenheit different from understanding that you have seven balloons or 63 pennies?
Where else in mathematics is the concept of transitivity used? Give an example other than measurement.
"Conservation, Transitivity, and Unit Iteration" adapted from Chapin, S. and Johnson, A. Math Matters: Understanding the Math You Teach, Grades K-6. pp. 178-180. © 2000 by Math Solutions Publications. Used with permission. All rights reserved.
Let's look more closely at the idea of a unit and how one goes about partitioning that unit into subunits. How are rational numbers (fractions and decimals) interpreted in measurement situations?
Imagine that you are timing a swim meet. If you timed a 100 m backstroke race to the nearest hour, you would not be able to distinguish one swimmer's time from another's. If you refined your timing by using minutes, you still might not be able to tell the swimmers apart. If the swimmers were all well trained, you might not be able to decide on a winner even if you measured in seconds. In high-stakes competitions among well-trained athletes (the Olympics, for example), it is necessary to measure in tenths and hundredths of seconds.
Now suppose that you are working on a project that requires some precision. You need to determine the exact length of a strip of metal in inches. Holding the strip up to your ruler, with one end at 0, you see that the other end lies between 4 and 5 in. (Note that only the right end of the metal strip is shown here.) What would you say its length is? You might think to yourself, "The length is between 4 9/16 and 4 10/16, so I'll call it 4 19/32."
These situations illustrate the measurement interpretation of rational numbers. A unit of measure can always be divided into finer and finer subunits so that you can take as accurate a reading as you need. On a number line, a graduated beaker, a ruler, a yardstick or meterstick, a measuring cup, a dial, or a thermometer, some subdivisions of the unit are marked. The marks on these common measuring tools allow readings that are accurate enough for most general purposes, but if the amount of the object you are measuring doesn't exactly meet one of the provided hash marks, it certainly doesn't mean that you can't measure it. Rational numbers provide us with a means to measure any amount of stuff. If meters will not do, we can partition into decimeters; when decimeters will not do, we can partition into centimeters or millimeters, and so on.
When we talk about rational numbers as measures, the focus is on successively partitioning the unit. Certainly partitioning plays an important role in other models and interpretations of rational numbers, but there is a difference. In measurement, there is a dynamic aspect; instead of comparing the number of equal parts you have to a fixed number of equal parts in a unit, the number of equal parts in the unit can vary, and what you name your fractional amount depends on how many times you are willing to keep up the partitioning process. In the above example, you've seen how the units were first divided into 16 equal parts and then into 32 equal parts (the fractional amount was thus expressed in 16ths or 32nds, respectively).
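The 4 19/32 reading is just the midpoint of the two nearest sixteenth marks, and exact fraction arithmetic reproduces it. A minimal sketch of that computation:

```python
from fractions import Fraction

# The strip's end falls between the 4 9/16 and 4 10/16 marks on the ruler.
low = 4 + Fraction(9, 16)
high = 4 + Fraction(10, 16)

# Splitting the difference refines the reading from 16ths to 32nds.
estimate = (low + high) / 2
print(estimate)      # 147/32
print(estimate - 4)  # 19/32 -> the length is read as 4 19/32 inches
```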
If necessary, you could further partition the unit into 64 or more equal parts, each time refining the precision of your measurement.
In your own words, clarify the difference between the measurement interpretation of rational numbers and the part-whole interpretation of rational numbers. The part-whole interpretation of rational numbers refers to dividing one or more units into equal-sized parts. You can think of it as pieces of a pie: 3/4 would mean three equal-sized slices from a total of four.
Why is the concept of partitioning so important in measurement?
"Partitioning" adapted from Lamon, Susan J. Teaching Fractions and Ratios for Understanding. pp. 113-121. © 1999 by Lawrence Erlbaum Publishers. Used with permission. All rights reserved.
Partitioning on a Number Line
How many partitions of a number line are possible? To use a rational number to describe how far a point on the number line is from 0, you can begin by partitioning the unit interval into an arbitrary number of equal parts. Each of those parts can then be partitioned into an arbitrary number of equal parts, and those, in turn, can be partitioned again. This process is actually a composition of operations. You can use arrow notation to keep a record of your partitioning actions, as well as the size of the subintervals being produced. For example, what if you wanted to locate 17/48 on a number line from 0 to 1? You would start by drawing the number line on a piece of paper and repeatedly folding it, making sure to mark the locations of 0 and 1 before you start folding. Here's one set of partitioning actions to find 17/48.
In this video segment, the participants place a fractional value on a number line using the method of partitioning. They explore the reciprocal relationship that exists between partitioning and the number of units in a measure. Is there more than one way to do the partitioning to arrive at a particular fraction? You can find this segment on the session video approximately 2 minutes and 40 seconds after the Annenberg Media logo.
Take It Further: Find another way (or ways) to locate the fractions in Problem A6 (a) and (b). Start with a new number line.
The compensatory principle states that the smaller the subunit you use to measure the distance, the more of those subunits you will need; the larger the subunit, the fewer you will need. When multiples of two different subunits cover the same distance, different fraction names result. There is only one rational number associated with a specific distance from 0, so these fractions are equivalent. For example, when measuring the diameter of a pencil using two different subunits, one reading might be 1/4 and the other 2/8. But 1/4 and 2/8 are equivalent fractions, so these are the same measurements.
State the compensatory principle in your own words. What type of relationship exists between the size of a measuring unit and the number of that unit needed to measure a property?
Take a few minutes to read the information about conservation, transitivity, and unit iteration. Whereas adults conserve measures, we can sometimes become confused (as with the tangram activity in Session 1) by a visual image. Transitivity is used in algebra and geometry (for example, as justification for steps in a proof) as well as in measurement, when comparing the equality of a number of measures. Examining the concept of units leads us to consider the kind of units that are used when we count versus when we measure. If you are working in a group, discuss Problems A1-A3 together.
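Both successive partitioning and the compensatory principle can be checked with exact rational arithmetic. A short sketch locating 17/48 by refining twelfths into forty-eighths (12 × 4 is just one of several partitioning routes to 48ths):

```python
from fractions import Fraction

target = Fraction(17, 48)

# Partition the unit into 12 equal parts, then each of those into 4: 48ths.
subunit = Fraction(1, 12) / 4
print(subunit)           # 1/48
print(target / subunit)  # 17 -> count off 17 subunits from 0

# Compensatory principle: half the subunit size, twice the count, same point.
assert Fraction(1, 4) == Fraction(2, 8)
```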
When discussing Problem A2, consider the fact that young children first learn about numbers using discrete quantities. How does that differ from measurement, which is never exact (discrete), as we can infinitely divide continuous quantities? Rational numbers are what is known as a dense set: a dense set is such that for any two elements you choose, you can always find another element of the same type between the two. To learn more about the concept of density, go to Learning Math: Number and Operations, Session 2. To learn more about rational numbers and the part-whole interpretation of fractions, go to Learning Math: Number and Operations, Session 8.
If you are working in a group, work in pairs on both parts of Problem A6. First use the fraction given to find one unit; then consider how you can use partitioning and equivalence to locate the desired fraction.
The compensatory principle is an important mathematical idea. The idea of an inverse relationship between the size of a unit and the number of units can be examined numerically (e.g., the area of a surface that is 1 m² can also be expressed as 10,000 cm²). An inverse relationship can also be shown graphically. A linear inverse relationship produces a straight line that is drawn diagonally from the upper left to the lower right in the first quadrant. Be sure to reflect on or discuss other inverse relationships when working on Problem A8.
Counts are exact; they are not on a scale, nor are they ratios. In a count, the unit is absolute. In contrast, measurements are not exact; the units are relative, and typically they don't directly match what we're measuring. For example, a person's height, measured in centimeters, is very unlikely to be an exact number of centimeters, so we approximate. A measurement is continuous, not discrete; someone can be 180 cm tall, 181 cm tall, or any number in between.
- It is not possible to just "count" inches or centimeters, since the result of a measurement may not be an exact number in those units. Also, depending on the measuring device used, the unit of the measurement can change; for example, the same measurement could be expressed as 6 (in.), 0.5 (ft.), or 1/6 (of a yard).
- Again, it's a question of relative vs. absolute: when we hear that the temperature is 63 degrees, this means that the temperature has been rounded off to the nearest whole number. When we count that we have 63 pennies, there is no rounding off; we have exactly 63 pennies.
Transitivity is used in many places; in parallelism, for example. If lines A and B are parallel, and lines B and C are parallel, then lines A and C are parallel.
Answers will vary. One possible answer is that in the part-whole interpretation, the number of parts that the whole is divided into is predetermined, whereas in measurement, you can vary the number of equal parts according to whatever is most appropriate for your measurement situation.
Partitioning is important in measurement because the measurements taken depend entirely on the partitioning. The example of timing a swim meet is relevant here, since the partitioning of time determines the measured times in the event (to the nearest second, hundredth of a second, and so on).
There are an infinite number of possible partitions of the number line, since we can always break any partition into a smaller one.
a. Since the number line between 0 and 1 is already partitioned into 12 equal parts, we will need to partition the 12ths into two equal parts so that each is 1/24.
Then, since 1/3 = 8/24, count one partition to the left of 1/3.
b. The number line between 0 and 1 is now partitioned into 18 equal parts (since 1/6 is three partitions over). To locate 3/8, partition each of the 18 parts into four equal parts so that each is 1/72 (so 3/8 = 27/72). Since 1/6 = 12/72, count over to 2/6 (i.e., 24/72), then three partitions beyond it.
Answers will vary. In either case, it is also possible to start with a new number line and make partitions different from the ones you made before. For example, to locate 7/24, you could partition the number line into thirds and then partition those into eighths, which would also result in 24ths (as 3 and 8 are both factors of 24). Other combinations with other factors of 24 are also possible.
Write and Reflect
Session 1: What Does It Mean To Measure? Explore what can be measured and what it means to measure. Identify measurable properties such as weight, surface area, and volume, and discuss which metric units are more appropriate for measuring these properties. Refine your use of precision instruments, and learn about alternate methods such as displacement. Explore approximation techniques, and reason about how to make better approximations.
Session 2: Fundamentals of Measurement. Investigate the difference between a count and a measure, and examine essential ideas such as unit iteration, partitioning, and the compensatory principle. Learn about the many uses of ratio in measurement and how scale models help us understand relative sizes. Investigate the constant of proportionality in isosceles right triangles, and learn about precision and accuracy in measurement.
Session 3: The Metric System. Learn about the relationships between units in the metric system and how to represent quantities using different units. Estimate and measure quantities of length, mass, and capacity, and solve measurement problems.
Session 4: Angle Measurement. Review appropriate notation for angle measurement, and describe angles in terms of the amount of turn. Use reasoning to determine the measures of angles in polygons based on the idea that there are 360 degrees in a complete turn. Learn about the relationships among angles within shapes, and generalize a formula for finding the sum of the angles in any n-gon. Use activities based on GeoLogo to explore the differences among interior, exterior, and central angles.
Session 5: Indirect Measurement and Trigonometry. Learn how to use the concept of similarity to measure distance indirectly, using methods involving similar triangles, shadows, and transits. Apply basic right-angle trigonometry to learn about the relationships among steepness, angle of elevation, and height-to-distance ratio. Use trigonometric ratios to solve problems involving right triangles.
Session 6: Area. Learn that area is a measure of how much surface is covered. Explore the relationship between the size of the unit used and the resulting measurement. Find the area of irregular shapes by counting squares or subdividing the figure into sections. Learn how to approximate the area more accurately by using smaller and smaller units. Relate this counting approach to the standard area formulas for triangles, trapezoids, and parallelograms.
Session 7: Circles and Pi (π). Investigate the circumference and area of a circle. Examine what underlies the formulas for these measures, and learn how the features of the irrational number pi (π) affect both of these measures.
Session 8: Volume. Explore several methods for finding the volume of objects, using both standard cubic units and non-standard measures. Explore how volume formulas for solid objects such as spheres, cylinders, and cones are derived and related.
Session 9: Measurement Relationships. Examine the relationships between area and perimeter when one measure is fixed. Determine which shapes maximize area while minimizing perimeter, and vice versa. Explore the proportional relationship between surface area and volume. Construct open-box containers, and use graphs to approximate the dimensions of the resulting rectangular prism that holds the maximum volume.
Session 10: Classroom Case Studies, K-2. Watch this program in the 10th session for K-2 teachers. Explore how the concepts developed in this course can be applied through case studies of K-2 teachers (former course participants who have adapted their new knowledge to their classrooms), as well as a set of typical measurement problems for K-2 students.
Session 11: Classroom Case Studies, 3-5. Watch this program in the 10th session for grade 3-5 teachers. Explore how the concepts developed in this course can be applied through case studies of grade 3-5 teachers (former course participants who have adapted their new knowledge to their classrooms), as well as a set of typical measurement problems for grade 3-5 students.
Session 12: Classroom Case Studies, 6-8. Watch this program in the 10th session for grade 6-8 teachers. Explore how the concepts developed in this course can be applied through case studies of grade 6-8 teachers (former course participants who have adapted their new knowledge to their classrooms), as well as a set of typical measurement problems for grade 6-8 students.
It has always been a mystery how the universe began and whether, and when, it will end. Astronomers construct hypotheses called cosmological models that try to find the answer. There are two main types of models: Big Bang and Steady State. However, through many lines of observational evidence, the Big Bang theory can best explain the creation of the universe. The Big Bang model postulates that about 15 to 20 billion years ago, the universe violently exploded into being, in an event called the Big Bang. Before the Big Bang, all of the matter and radiation of our present universe were packed together in the primeval fireball, an extremely hot, dense state from which the universe rapidly expanded. 1 The Big Bang was the start of time and space. The matter and radiation of that early stage rapidly expanded and cooled. Several million years later, it condensed into galaxies. The universe has continued to expand, and the galaxies have continued moving away from each other ever since. Today the universe is still expanding, as astronomers have observed.
The Steady State model says that the universe does not evolve or change in time. There was no beginning in the past, nor will there be an end in the future. This model assumes the perfect cosmological principle. This principle says that the universe is the same everywhere on the large scale, at all times. 2 It maintains the same average density of matter forever.
There is observational evidence suggesting that the Big Bang model is more reasonable than the Steady State model. First, the redshifts of distant galaxies. Redshift is a Doppler effect: if a galaxy is moving away, its observed spectral lines are shifted toward the red end of the spectrum. The faster the galaxy moves, the greater the shift. If the galaxy is moving closer, the spectral lines show a blue shift. If the galaxy is not moving, there is no shift at all. However, as astronomers have observed, the more distant a galaxy is from Earth, the more redshift it shows in its spectrum. This means the farther away a galaxy is, the faster it moves. Therefore, the universe is expanding, and the Big Bang model seems more reasonable than the Steady State model.
The second piece of observational evidence is the radiation produced by the Big Bang. The Big Bang model predicts that the universe should still be filled with a small remnant of radiation left over from the original violent explosion of the primeval fireball in the past. The primeval fireball would have sent strong shortwave radiation in all directions into space. In time, that radiation would spread out, cool, and fill the expanding universe uniformly. By now it would strike Earth as microwave radiation. In 1965 physicists Arno Penzias and Robert Wilson detected microwave radiation coming equally from all directions in the sky, day and night, all year. 3 And so it appears that astronomers have detected the fireball radiation that was produced by the Big Bang. This casts serious doubt on the Steady State model, which cannot explain the existence of this radiation and so cannot best explain the beginning of the universe.
Since the Big Bang model is the better model, the existence and the future of the universe can also be explained. Around 15 to 20 billion years ago, time began. The points that were to become the universe exploded in the primeval fireball called the Big Bang. The exact nature of this explosion may never be known.
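The redshift argument above can be made quantitative with the small-redshift Doppler approximation v ≈ c·z together with Hubble's law v = H0·d. A rough sketch (the sample redshifts and the Hubble-constant value are illustrative assumptions, not data from the essay):

```python
# Recession velocity from redshift, then distance via Hubble's law.
C_KM_S = 299_792.458  # speed of light in km/s
H0 = 70.0             # assumed Hubble constant, km/s per megaparsec

def recession_velocity_km_s(z: float) -> float:
    """Small-z Doppler approximation: v = c * z."""
    return C_KM_S * z

for z in (0.001, 0.01, 0.05):
    v = recession_velocity_km_s(z)
    print(f"z = {z:<6} v = {v:>8.0f} km/s  d = {v / H0:>6.1f} Mpc")
```

The pattern in the output (more redshift, more speed, more distance) is exactly the expansion signature described above.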
However, recent theoretical breakthroughs, based on the principles of quantum theory, have suggested that space, and the matter within it, masks an infinitesimal realm of utter chaos, where events happen randomly, in a state called quantum weirdness. Before the universe began, this chaos was all there was. At some time, a portion of this randomness happened to form a bubble, with a temperature in excess of 10^34 kelvins. Being that hot, naturally it expanded. For an extremely brief period, billionths of billionths of a second, it inflated. At the end of the period of inflation, the universe may have had a diameter of a few centimetres. The temperature had cooled enough for particles of matter and antimatter to form, and they instantly destroyed each other, producing fire and a thin haze of matter, apparently because slightly more matter than antimatter was formed. 5 The fireball, and the smoke of its burning, was the universe at an age of a trillionth of a second.
The temperature of the expanding fireball dropped rapidly, cooling to a few billion degrees in a few minutes. Matter continued to condense out of energy: first protons and neutrons, then electrons, and finally neutrinos. After about an hour, the temperature had dropped below a billion degrees, and protons and neutrons combined and formed hydrogen, deuterium, and helium. Within a billion years, this cloud of energy, atoms, and neutrinos had cooled enough for galaxies to form. The expanding cloud cooled still further until today its temperature is a couple of degrees above absolute zero.
In the future, the universe may end up in one of two possible situations. From the initial Big Bang, the universe attained a speed of expansion. If that speed is greater than the universe's own escape velocity, then the universe will not stop its expansion. Such a universe is said to be open. If the velocity of expansion is slower than the escape velocity, the universe will eventually reach the limit of its outward thrust, just as a ball thrown in the air comes to the top of its arc, slows, stops, and starts to fall. The crash of the long fall may be the Big Bang that begins another universe, as the fireball formed at the end of the contraction leaps outward in another great expansion. 6 Such a universe is said to be closed, and pulsating.
If the universe has achieved escape velocity, it will continue to expand forever. The stars will redden and die, and the universe will be like a limitless empty haze, expanding infinitely into the darkness. This space will become even emptier as the fundamental particles of matter age and decay through time. As the years stretch on into infinity, nothing will remain. A few primitive atom-like pairs of particles, such as positrons and electrons, will be orbiting each other at distances of hundreds of astronomical units. 7 These particles will spiral slowly toward each other until touching, and they will vanish in a last flash of light.
After all, the Big Bang model is only an assumption. No one knows for sure exactly how the universe began or how it will end. However, the Big Bang model is the most logical and reasonable theory to explain the universe in modern science.
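The open-versus-closed question is conventionally stated in terms of the critical density, ρ_c = 3H²/(8πG): a universe denser than this eventually recollapses, while a thinner one expands forever. A sketch of that computation, assuming a Hubble constant of 70 km/s per megaparsec:

```python
import math

G = 6.674e-11                   # gravitational constant, m^3 kg^-1 s^-2
M_PER_MPC = 3.0857e22           # metres in one megaparsec
H0 = 70.0 * 1_000 / M_PER_MPC   # Hubble constant converted to 1/s

# Critical density: rho_c = 3 * H0^2 / (8 * pi * G)
rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"{rho_c:.2e} kg/m^3")  # roughly 9e-27 kg/m^3, a few hydrogen atoms per cubic metre
```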
Why did General Grant adopt the total war strategy?
After the Civil War ended, how was the North affected economically?
- The farming economy suffered and declined.
- Industry thrived and the economy soon recovered.
- The rise in inflation bankrupted many businesses.
- Economic uncertainty led businesses to fail.
Which statement best describes the problems the North and South faced after the Civil War?
- The North faced severe economic problems, while the South faced severe social problems.
- Though both regions suffered from the war, the South fared much worse than the North.
- The North faced many rebuilding difficulties in its cities, while the South rebuilt its cities quickly.
- Though both regions suffered from the war, the North fared much worse than the South.
Why did General Lee fail during his last stand at the Appomattox Court House?
- He refused to use a total war strategy.
- His troops were weak and surrounded.
- His reinforcements did not arrive in time.
- His troops had deserted him completely.
During General Sherman's "March to the Sea," Union soldiers
- traveled from Savannah to Atlanta.
- fought on battlefields along the way.
- destroyed buildings, railroads, and crops.
- attacked mostly military resources.
At the time, the number of soldiers killed during the Civil War
- was much less than the number killed in the Revolutionary War.
- was the same as the number killed during the War of 1812.
- was the same as the number killed in the Spanish-American War.
- was more than the number killed in all previous US wars combined.
When Grant broke the Confederate lines on April 2, 1865, Lee
- alerted President Davis to evacuate Richmond.
- gathered his remaining troops for one last stand.
- surrendered to Grant immediately.
- fled with his army to Savannah.
After the Civil War, slavery came to an end in the South, and
- the population sharply decreased.
- a new labor system was needed.
- prejudice and racism decreased.
- the agricultural economy improved.
Who assassinated President Lincoln after the South's defeat in the Civil War?
- William J. Black
- John Wilkes Booth
- George McClellan
- Jefferson Davis
The loss of farms, crops, and enslaved labor as a result of the Civil War meant that the
- Northern economy almost totally shut down.
- Southern economy almost totally shut down.
- entire nation's economy had to be rebuilt.
- entire nation had to go without food.
As a slaveholder and a Democrat, President Andrew Johnson was
- part of the negotiation that ended the Civil War.
- unable to lead the nation through Reconstruction.
- often at odds with Republicans in Congress.
- assassinated at the end of the Civil War.
How did General Grant's total war strategy affect the presidential election of 1864?
- It made Lincoln a popular candidate for the Democratic Party, which approved of Grant's strategy.
- It allowed Sherman to capture Atlanta, which increased Lincoln's popularity in the election.
- It horrified countless voters in the North, who chose to support Andrew Johnson instead of Lincoln.
- It lost Lincoln support among abolitionists, who wanted him to focus on freeing enslaved workers instead.
In the wake of the Civil War, compared to the South, the North
- had sustained very little destruction.
- had to create a new labor system.
- had more economic challenges.
- had to repair damaged property.
Which event led to the capture of Richmond, the Confederate capital?
- the Shenandoah Valley Campaign
- the "March to the Sea"
- the capture of Atlanta
- the Siege of Petersburg
Who became president in the wake of Abraham Lincoln's assassination?
- Andrew Johnson
- Ulysses S. Grant
- Philip H. Sheridan
- George McClellan
Sources: The Americans (Gerald A. Danzer, J. Jorge Klor de Alva, Larry S. Krieger, Louis E. Wilson, Nancy Woloch); The American Vision (Alan Brinkley, Albert S. Broussard, Donald A. Ritchie, James M. McPherson, Joyce Appleby).
Perhaps the most important consideration of an ADC is its resolution. Resolution is the number of binary bits output by the converter. Because ADC circuits take in an analog signal, which is continuously variable, and resolve it into one of many discrete steps, it is important to know how many of these steps there are in total. For example, an ADC with a 10-bit output can represent up to 1024 (2^10) unique conditions of signal measurement. Over the range of measurement from 0% to 100%, there will be exactly 1024 unique binary numbers output by the converter (from 0000000000 to 1111111111, inclusive). An 11-bit ADC will have twice as many states to its output (2048, or 2^11), representing twice as many unique conditions of signal measurement between 0% and 100%. Resolution is very important in data acquisition systems (circuits designed to interpret and record physical measurements in electronic form). Suppose we were measuring the height of water in a 40-foot tall storage tank using an instrument with a 10-bit ADC. 0 feet of water in the tank corresponds to 0% of measurement, while 40 feet of water in the tank corresponds to 100% of measurement. Because the ADC is fixed at 10 bits of binary data output, it will interpret any given tank level as one out of 1024 possible states. To determine how much physical water level will be represented in each step of the ADC, we need to divide the 40 feet of measurement span by the number of steps in the 0-to-1024 range of possibilities, which is 1023 (one less than 1024). Doing this, we obtain a figure of 0.039101 feet per step. This equates to 0.46921 inches per step, a little less than half an inch of water level represented for every binary count of the ADC. This step value of 0.039101 feet (0.46921 inches) represents the smallest amount of tank level change detectable by the instrument. Admittedly, this is a small amount, less than 0.1% of the overall measurement span of 40 feet. However, for some applications it may not be fine enough. Suppose we needed this instrument to be able to indicate tank level changes down to one-tenth of an inch. In order to achieve this degree of resolution and still maintain a measurement span of 40 feet, we would need an instrument with more than ten ADC bits. To determine how many ADC bits are necessary, we need to first determine how many 1/10 inch steps there are in 40 feet. The answer to this is 40/(0.1/12), or 4800 1/10 inch steps in 40 feet. Thus, we need enough bits to provide at least 4800 discrete steps in a binary counting sequence. 10 bits gave us 1023 steps, and we knew this by calculating 2 to the power of 10 (2^10 = 1024) and then subtracting one. Following the same mathematical procedure, 2^11 - 1 = 2047, 2^12 - 1 = 4095, and 2^13 - 1 = 8191. 12 bits falls shy of the amount needed for 4800 steps, while 13 bits is more than enough. Therefore, we need an instrument with at least 13 bits of resolution. Another important consideration of ADC circuitry is its sample frequency, or conversion rate. This is simply the speed at which the converter outputs a new binary number. Like resolution, this consideration is linked to the specific application of the ADC. If the converter is being used to measure slow-changing signals such as level in a water storage tank, it could probably have a very slow sample frequency and still perform adequately. Conversely, if it is being used to digitize an audio frequency signal cycling at several thousand times per second, the converter needs to be considerably faster.
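Here is a quick sketch of the tank-level arithmetic in C (the program and its variable names are ours, not from the text); it reproduces the 4800-step, 13-bit result derived above:

#include <math.h>
#include <stdio.h>

int main(void) {
    double span_ft = 40.0;        /* measurement span: 40 feet of water    */
    double step_ft = 0.1 / 12.0;  /* desired resolution: 0.1 inch, in feet */
    double steps = span_ft / step_ft;         /* 4800 discrete steps       */
    int bits = (int)ceil(log2(steps + 1.0));  /* need 2^n - 1 >= steps     */
    printf("steps needed: %.0f, bits needed: %d\n", steps, bits);
    return 0;
}

Compiled with a C99 compiler (link with -lm), this prints "steps needed: 4800, bits needed: 13", matching the hand calculation.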
Consider the following illustration of ADC conversion rate versus signal type, typical of a successive-approximation ADC with regular sample intervals: Here, for this slow-changing signal, the sample rate is more than adequate to capture its general trend. But consider this example with the same sample time: When the sample period is too long (too slow), substantial details of the analog signal will be missed. Notice how, especially in the latter portions of the analog signal, the digital output utterly fails to reproduce the true shape. Even in the first section of the analog waveform, the digital reproduction deviates substantially from the true shape of the wave. It is imperative that an ADC’s sample time is fast enough to capture essential changes in the analog waveform. In data acquisition terminology, the highest-frequency waveform that an ADC can theoretically capture is the so-called Nyquist frequency, equal to one-half of the ADC’s sample frequency. Therefore, if an ADC circuit has a sample frequency of 5000 Hz, the highest-frequency waveform it can successfully resolve will be the Nyquist frequency of 2500 Hz. If an ADC is subjected to an analog input signal whose frequency exceeds the Nyquist frequency for that ADC, the converter will output a digitized signal of falsely low frequency. This phenomenon is known as aliasing. Observe the following illustration to see how aliasing occurs: Note how the period of the output waveform is much longer (slower) than that of the input waveform, and how the two waveform shapes aren’t even similar: It should be understood that the Nyquist frequency is an absolute maximum frequency limit for an ADC, and does not represent the highest practical frequency measurable. To be safe, one shouldn’t expect an ADC to successfully resolve any frequency greater than one-fifth to one-tenth of its sample frequency. A practical means of preventing aliasing is to place a low-pass filter before the input of the ADC, to block any signal frequencies greater than the practical limit. This way, the ADC circuitry will be prevented from seeing any excessive frequencies and thus will not try to digitize them. It is generally considered better that such frequencies go unconverted than to have them be “aliased” and appear in the output as false signals. Yet another measure of ADC performance is something called step recovery. This is a measure of how quickly an ADC changes its output to match a large, sudden change in the analog input. In some converter technologies especially, step recovery is a serious limitation. One example is the tracking converter, which has a typically fast update period but a disproportionately slow step recovery. An ideal ADC has a great many bits for very fine resolution, samples at lightning-fast speeds, and recovers from steps instantly. It also, unfortunately, doesn’t exist in the real world. Of course, any of these traits may be improved through additional circuit complexity, either in terms of increased component count and/or special circuit designs made to run at higher clock speeds. Different ADC technologies, though, have different strengths. Here is a summary of them, each ranked from best to worst:
Resolution/complexity ratio: single-slope integrating, dual-slope integrating, counter, tracking, successive approximation, flash.
Speed: flash, tracking, successive approximation, single-slope integrating & counter, dual-slope integrating.
Step recovery: flash, successive-approximation, single-slope integrating & counter, dual-slope integrating, tracking.
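The aliasing arithmetic can also be sketched in a few lines of C (the helper function below is ours, for illustration). An out-of-band input frequency folds back into the 0-to-Nyquist band, which is why the digitized output appears falsely low:

#include <math.h>
#include <stdio.h>

/* Fold an input frequency into the 0..fs/2 band that sampling at fs
   can represent; anything above the Nyquist frequency aliases down. */
double aliased(double f_in, double fs) {
    double r = fmod(f_in, fs);
    return (r > fs / 2.0) ? fs - r : r;
}

int main(void) {
    double fs = 5000.0;                      /* sample frequency, in Hz */
    printf("Nyquist: %.0f Hz\n", fs / 2.0);  /* 2500 Hz                 */
    /* A 6000 Hz input sampled at 5000 Hz shows up as 1000 Hz: */
    printf("6000 Hz input appears as: %.0f Hz\n", aliased(6000.0, fs));
    return 0;
}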
Please bear in mind that the rankings of these different ADC technologies depend on other factors. For instance, how an ADC rates on step recovery depends on the nature of the step change. A tracking ADC is equally slow to respond to all step changes, whereas a single-slope or counter ADC will register a high-to-low step change quicker than a low-to-high step change. Successive-approximation ADCs are almost equally fast at resolving any analog signal, but a tracking ADC will consistently beat a successive-approximation ADC if the signal is changing slower than one resolution step per clock pulse. I ranked integrating converters as having a greater resolution/complexity ratio than counter converters, but this assumes that precision analog integrator circuits are less complex to design and manufacture than precision DACs required within counter-based converters. Others may not agree with this assumption.
Using Minecraft as a Digital Tool in Applied Design Skills and Technology (ADST) In Applied Design Skills and Technology (ADST), students use the design thinking process to research and develop solutions to problems through prototyping, testing, and seeking feedback. One of the digital tools Surrey educators can use to engage students in the design thinking process is Minecraft: Education Edition, a learning platform that promotes creativity, collaboration and problem-solving in an immersive digital environment. Students can build and explore worlds on their own or in groups. From building or viewing Biomes and Ancient Civilizations to recreating settings from the novel a student is reading to demonstrate their learning, there are many ways Minecraft connects to the curriculum across a variety of subjects. Surrey educator Scott Smith discusses how he uses Minecraft to promote student creativity, collaboration and problem-solving. “It is not necessarily what we know, but managing what they know because they will test and they will do things that are so interesting.” – Scott Smith
The purpose of Unity Day is to celebrate the differences that make us unique and contribute to what unifies us. Unity Day is the Monday of Thanksgiving week, and all PTAs/PTSAs are encouraged to work with their school and community to promote this special day. Celebrate cultural differences through art, music, and performances. Ideas for Unity Day: Multi-Cultural Meal - Have a meal and invite each family to bring a traditional family food. Plan to teach a game or dance unique to each culture represented. Multi-Cultural Panel Discussion - Give parents an opportunity to discuss their own school experiences and create a venue for parents and teachers to share with each other their views and expectations regarding education. “We Are One” Poster - Create a “We Are One” poster using photos and construction paper. Have participants decorate the poster with photos of their friends or pictures of children of all races interacting together. Display the posters for all to see. Linking Hands - Using construction paper, have children draw outlines of their hands. To make a long chain, have the children do this multiple times. Cut the hands out of paper and glue them together to make a chain. Let the children decorate the hands with markers, stickers, and glitter. Hang the hands around a room or along a hallway. Unity Quilt - Students used squares of paper and wrote something unique about themselves. The squares were connected to make a quilt that was hung in the hallway with a banner, “Our School Has Unity Covered”. Watterson Elementary PTA
Today we studied the Hero’s Journey. The Hero’s Journey is a popular theory that shows how most popular fiction follows a certain sequence of events. Joseph Campbell originated the theory, which has become a cornerstone of fiction studies. We discussed the elements of the hero’s journey and shared some examples such as Star Wars, The Wizard of Oz and Harry Potter. We watched “Spirited Away,” an animated movie about a girl who is compelled into a quest to save her parents. Students are asked to take notes on the movie, noting the events that fit the sequence of the hero’s journey. The hero’s journey explained (watch only the first 3 minutes 46 seconds for the explanation) Examples of the hero’s journey Watch the movie on youtube The Hero’s Journey: http://youtu.be/KGV1BvnyvGo Today we read “All Summer in a Day” by Ray Bradbury. We discussed how weather can affect mood, and how people handle emotions and empathy towards each other. Once we had read and discussed the story, the students watched the short film based on the story. Students were asked to complete a reflection based on questions posed in their assignment. Read the story and watch the film by clicking on the links below. Today we started talking about treaties such as the ones created between the First Nations and the government of Canada in the late 19th century. These treaties were sacred agreements between First Nations and the Government of Canada, and they continue to be discussed and debated to this day. We observed some basic background information as provided by the Treaty Relations Commission of Manitoba and responded to the information with some essential critical thinking questions. Students are to choose 5 of the 7 questions in the assignment to critically respond to. The information we went over in class Map of the treaties of Canada Map of the treaties of Manitoba
What is RGB Color? Introduction to RGB Color The RGB color system is one of the most well-known color systems in the world, and perhaps the most ubiquitous. As an additive color system, it combines red, green, and blue light to create the colors we see on our TV screens, computer monitors, and smartphones. Although used extensively in modern technology, RGB color has been in existence since the mid-1800s, and was originally based on theories developed by physicists such as Thomas Young, Hermann Helmholtz, and James Maxwell. Some early examples of RGB color in use were in vintage photographs (the above photo was taken in 1861) and cathode ray tubes. In modern technology, LCD displays, plasma displays, and Light Emitting Diodes are also configured to display RGB color. How does RGB Color Work? The parts of the human eye that are responsible for color perception are called cone cells or photoreceptors. RGB is called an additive color system because the combinations of red, green, and blue light create the colors that we perceive by stimulating the different types of cone cells simultaneously. As shown above, the combinations of red, green, and blue light will cause us to perceive different colors. For example, a combination of red and green light will appear to be yellow, while blue and green light will appear to be cyan. Red and blue light will appear magenta, and a combination of all three will appear to be white. How Do You Use RGB Color? RGB color is best suited for on-screen applications, such as graphic design. Each color channel is expressed from 0 (least saturated) to 255 (most saturated). This means that 16,777,216 different colors can be represented in the RGB color space. Advantages of RGB Color Almost every well-known application is compatible with RGB, such as Microsoft Office, Adobe Creative Suite (InDesign, Photoshop, etc.), and other digital editors. Drawbacks of RGB Color One of the major limitations of the RGB color system is that it doesn’t translate well to print, which uses the CMYK system. This has led to a great deal of frustration when people print out documents from Microsoft Office, only to have them turn out to be the wrong color. In addition, different devices often use different types of LEDs. This means that the same color coordinates do not display consistently across smartphones, TV screens, or even monitors. This can present some unique problems for professionals who work with precise digital color, from special effects to graphic or print design.
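As a small illustration (this sketch is ours, not from the article), the three 0-255 channels are commonly packed into a single 24-bit value, which is where the familiar six-digit hex codes come from:

#include <stdio.h>

/* Pack 8-bit red, green, and blue channels (0-255) into one 24-bit value. */
unsigned int pack_rgb(unsigned int r, unsigned int g, unsigned int b) {
    return (r << 16) | (g << 8) | b;
}

int main(void) {
    printf("yellow  = #%06X\n", pack_rgb(255, 255, 0));   /* red + green  */
    printf("cyan    = #%06X\n", pack_rgb(0, 255, 255));   /* green + blue */
    printf("magenta = #%06X\n", pack_rgb(255, 0, 255));   /* red + blue   */
    printf("white   = #%06X\n", pack_rgb(255, 255, 255)); /* all three    */
    return 0;
}

The 256 × 256 × 256 = 16,777,216 possible values of this packed number are exactly the color count quoted above.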
Rational Approaches to Solving Rational Equations Lesson 1 of 12 Objective: SWBAT solve a rational equation for a specified variable. To start off this unit, I want to first see what level of proficiency my students have with basic fraction operations. To do this, I am going to ask students to complete the three warm-up problems on slide 2 of the PowerPoint. I selected these problems to assess students’ knowledge of the rules of adding/subtracting fractions, multiplying/dividing fractions, and factoring. Question 3 is already simplified. I am expecting some students to still try to divide the terms. Once students have completed their warm-up problems, I plan to model how to simplify these for any students who got stuck. Before we proceed with rational equations, I want to talk with students about factoring, simplifying, and "canceling." There are often many misconceptions surrounding the factoring of rational expressions for students. They don’t know what ‘cancels’, when it ‘cancels,’ and what is left over. See Teaching Notes about Factoring for more detail about how I will lead students through this conversation. Closure: Here’s how… As we finish today's lesson I will present Rational Approaches Closure Slide (slide 24 from the PowerPoint). I ask students to complete a Here’s How to close out today’s learning. I plan to assign Homework 1 - Rational Functions for homework this evening. I also plan to start talking with my students about the presentations I would like them to do at the end of the week (4 lessons from now). I will begin this discussion as I present slide 14 in the PowerPoint. I want students to start researching an explanation that makes sense to them: Why does dividing by zero cause a function to be undefined?
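A worked example can anchor the factoring conversation; the expression below is hypothetical (not taken from the lesson's slides). The factor (x − 2) "cancels" only because it is a factor of the entire numerator and the entire denominator, and the values excluded from the domain come from the original denominator:

\[
\frac{x^{2}-4}{x^{2}-2x} = \frac{(x-2)(x+2)}{x(x-2)} = \frac{x+2}{x}, \qquad x \neq 0,\ x \neq 2.
\]

The excluded value x = 2 also previews the closing question: substituting it into the original denominator forces a division by zero, which is exactly why the function is undefined there.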
The entrancing, dramatic, magical, colourful curtains of light found in both the northern and southern hemispheres have become a fascinating attraction for tourists as well as scientists. Showcasing a palette of vivid colours like green, red, yellow, blue and violet, the auroras in the two hemispheres have different names: - Northern hemisphere – ‘Aurora Borealis’ or ‘northern lights’ - Southern hemisphere – ‘Aurora Australis’ or ‘southern lights’ Cause of the magical show Auroras are created through collisions between gaseous particles in the Earth’s atmosphere and charged particles released from the sun’s atmosphere. The colour variations occur due to the different kinds of gases that collide. Green auroras are the most common ones, made by oxygen molecules located about 60 miles above the earth. Nitrogen molecules produce blue or purple ones, while the rarest remain the red auroras, which are caused by oxygen molecules at high altitudes (200 miles or more). Best places to see Auroras: - Central and northern Alaska and Canada, Greenland, northern Scandinavia and Russia in the Northern Hemisphere - Antarctica, southern Australia, New Zealand, and Chile in the Southern Hemisphere
What Happened to the Hominids Who May Have Been Smarter Than Us? Two neuroscientists say that a now-extinct race of humans had big eyes, child-like faces, and an average intelligence of around 150, making them geniuses among Homo sapiens. by Gary Lynch and Richard Granger found at: http://discovermagazine.com/2009/the-brain-2/28-what-happened-to-hominids-who-were-smarter-than-us From the Brain special issue; published online December 28, 2009 A sketched reconstruction of the Boskop skull done in 1918. Shaded areas depict recovered bone. Courtesy the American Museum of Natural History The following text is an excerpt from the book Big Brain by Gary Lynch and Richard Granger, and it represents their own theory about the Boskops. The theory is a controversial one; see, for instance, paleoanthropologist John Hawks’ much different take. Copyright © 2008 by the authors and reprinted by permission of Palgrave Macmillan, a division of Macmillan Publishers Limited. All rights reserved. In the autumn of 1913, two farmers were arguing about hominid skull fragments they had uncovered while digging a drainage ditch. The location was Boskop, a small town about 200 miles inland from the east coast of South Africa. These Afrikaner farmers, to their lasting credit, had the presence of mind to notice that there was something distinctly odd about the bones. They brought the find to Frederick W. FitzSimons, director of the Port Elizabeth Museum, in a small town at the tip of South Africa. The scientific community of South Africa was small, and before long the skull came to the attention of S. H. Haughton, one of the country’s few formally trained paleontologists. He reported his findings at a 1915 meeting of the Royal Society of South Africa. “The cranial capacity must have been very large,” he said, and “calculation by the method of Broca gives a minimum figure of 1,832 cc [cubic centimeters].” The Boskop skull, it would seem, housed a brain perhaps 25 percent or more larger than our own. The idea that giant-brained people were not so long ago walking the dusty plains of South Africa was sufficiently shocking to draw in the luminaries back in England. Two of the most prominent anatomists of the day, both experts in the reconstruction of skulls, weighed in with opinions generally supportive of Haughton’s conclusions. The Scottish scientist Robert Broom reported that “we get for the corrected cranial capacity of the Boskop skull the very remarkable figure of 1,980 cc.” Remarkable indeed: These measures say that the distance from Boskop to humans is greater than the distance between humans and their Homo erectus predecessors. Might the very large Boskop skull be an aberration? Might it have been caused by hydrocephalus or some other disease? These questions were quickly preempted by new discoveries of more of these skulls. As if the Boskop story were not already strange enough, the accumulation of additional remains revealed another bizarre feature: These people had small, childlike faces. Physical anthropologists use the term pedomorphosis to describe the retention of juvenile features into adulthood. This phenomenon is sometimes used to explain rapid evolutionary changes. For example, certain amphibians retain fishlike gills even when fully mature and past their water-inhabiting period. Humans are said by some to be pedomorphic compared with other primates. Our facial structure bears some resemblance to that of an immature ape.
Boskop’s appearance may be described in terms of this trait. A typical current European adult, for instance, has a face that takes up roughly one-third of his overall cranium size. Boskop has a face that takes up only about one-fifth of his cranium size, closer to the proportions of a child. Examination of individual bones confirmed that the nose, cheeks, and jaw were all childlike. The combination of a large cranium and immature face would look decidedly unusual to modern eyes, but not entirely unfamiliar. Such faces peer out from the covers of countless science fiction books and are often attached to “alien abductors” in movies. The naturalist Loren Eiseley made exactly this point in a lyrical and chilling passage from his popular book, The Immense Journey, describing a Boskop fossil: “There’s just one thing we haven’t quite dared to mention. It’s this, and you won’t believe it. It’s all happened already. Back there in the past, ten thousand years ago. The man of the future, with the big brain, the small teeth. He lived in Africa. His brain was bigger than your brain. His face was straight and small, almost a child’s face.” Boskops, then, were much talked and written about, by many of the most prominent figures in the fields of paleontology and anthropology. Yet today, although Neanderthals and Homo erectus are widely known, Boskops are almost entirely forgotten. Some of our ancestors are clearly inferior to us, with smaller brains and apelike countenances. They’re easy to make fun of and easy to accept as our precursors. In contrast, the very fact of an ancient ancestor like Boskop, who appears un-apelike and in fact in most ways seems to have had characteristics superior to ours, was destined never to be popular. The history of evolutionary studies has been dogged by the intuitively attractive, almost irresistible idea that the whole great process leads to greater complexity, to animals that are more advanced than their predecessors. The pre-Darwin theories of evolution were built around this idea; in fact, Darwin’s (and Wallace’s) great and radical contribution was to throw out the notion of “progress” and replace it with selection from among a set of random variations. But people do not easily escape from the idea of progress. We’re drawn to the idea that we are the end point, the pinnacle not only of the hominids but of all animal life. Boskops argue otherwise. They say that humans with big brains, and perhaps great intelligence, occupied a substantial piece of southern Africa in the not very distant past, and that they eventually gave way to smaller-brained, possibly less advanced Homo sapiens—that is, ourselves. We have seen reports of Boskop brain size ranging from 1,650 to 1,900 cc. Let’s assume that an average Boskop brain was around 1,750 cc. What does this mean in terms of function? How would a person with such a brain differ from us? Our brains are roughly 25 percent larger than those of the late Homo erectus. We might say that the functional difference between us and them is about the same as between ourselves and Boskops. Expanding the brain changes its internal proportions in highly predictable ways. From ape to human, the brain grows about fourfold, but most of that increase occurs in the cortex, not in more ancient structures. Moreover, even within the cortex, the areas that grow by far the most are the association areas, while cortical structures such as those controlling sensory and motor mechanisms stay unchanged.
Going from human to Boskop, these association zones are even more disproportionately expanded. Boskop’s brain size is about 30 percent larger than our own—that is, a 1,750-cc brain to our average of 1,350 cc. And that leads to an increase in the prefrontal cortex of a staggering 53 percent. If these principled relations among brain parts hold true, then Boskops would have had not only an impressively large brain but an inconceivably large prefrontal cortex. The prefrontal cortex is closely linked to our highest cognitive functions. It makes sense out of the complex stream of events flowing into the brain; it places mental contents into appropriate sequences and hierarchies; and it plays a critical role in planning our future actions. Put simply, the prefrontal cortex is at the heart of our most flexible and forward-looking thoughts. While your own prefrontal area might link a sequence of visual material to form an episodic memory, the Boskop may have added additional material from sounds, smells, and so on. Where your memory of a walk down a Parisian street may include the mental visual image of the street vendor, the bistro, and the charming little church, the Boskop may also have had the music coming from the bistro, the conversations from other strollers, and the peculiar window over the door of the church. Alas, if only the Boskop had had the chance to stroll a Parisian boulevard! Expansion of the association regions is accompanied by corresponding increases in the thickness of those great bundles of axons, the cable pathways, linking the front and back of the cortex. These not only process inputs but, in our larger brains, organize inputs into episodes. The Boskops may have gone further still. Just as a quantitative increase from apes to humans may have generated our qualitatively different language abilities, possibly the jump from ourselves to Boskops generated new, qualitatively different mental capacities. We internally activate many thoughts at once, but we can retrieve only one at a time. Could the Boskop brain have achieved the ability to retrieve one memory while effortlessly processing others in the background, a split-screen effect enabling far more power of attention? Each of us balances the world that is actually out there against our mind’s own internally constructed version of it. Maintaining this balance is one of life’s daily challenges. We occasionally act on our imagined view of the world, sometimes thoroughly startling those around us. (“Why are you yelling at me? I wasn’t angry with you—you only thought I was.”) Our big brains give us such powers of extrapolation that we may extrapolate straight out of reality, into worlds that are possible but that never actually happened. Boskop’s greater brains and extended internal representations may have made it easier for them to accurately predict and interpret the world, to match their internal representations with real external events. Perhaps, though, it also made the Boskops excessively internal and self-reflective. With their perhaps astonishing insights, they may have become a species of dreamers with an internal mental life literally beyond anything we can imagine. Even if brain size accounts for just 10 to 20 percent of an IQ test score, it is possible to conjecture what kind of average scores would be made by a group of people with 30 percent larger brains. We can readily calculate that a population with a mean brain size of 1,750 cc would be expected to have an average IQ of 149. 
This is a score that would be labeled at the genius level. And if there was normal variability among Boskops, as among the rest of us, then perhaps 15 to 20 percent of them would be expected to score over 180. In a classroom with 35 big-headed, baby-faced Boskop kids, you would likely encounter five or six with IQ scores at the upper range of what has ever been recorded in human history. The Boskops coexisted with our Homo sapiens forebears. Just as we see the ancient Homo erectus as a savage primitive, Boskop may have viewed us in somewhat the same way. They died and we lived, and we can’t answer the question why. Why didn’t they outthink the smaller-brained hominids like ourselves and spread across the planet? Perhaps they didn’t want to. Longer brain pathways lead to larger and deeper memory hierarchies. These confer a greater ability to examine and discard more blind alleys, to see more consequences of a plan before enacting it. In general this enables us to think things through. If Boskops had longer chains of cortical networks—longer mental assembly lines—they would have created longer and more complex classification chains. When they looked down a road as far as they could, before choosing a path, they would have seen farther than we can: more potential outcomes, more possible downstream costs and benefits. As more possible outcomes of a plan become visible, the variance among judgments between individuals will likely lessen. There are far fewer correct paths—intelligent paths—than there are paths. It is sometimes argued that the illusion of free will arises from the fact that we can’t adequately judge all possible moves, with the result that our choices are based on imperfect, sometimes impoverished, information. Perhaps the Boskops were trapped by their ability to see clearly where things would head. Perhaps they were prisoners of those majestic brains. There is another, again poignant, possible explanation for the disappearance of the big-brained people. Maybe all that thoughtfulness was of no particular survival value in 10,000 B.C. The great genius of civilization is that it allows individuals to store memory and operating rules outside of their brains, in the world that surrounds them. The human brain is a sort of central processing unit operating on multiple memory disks, some stored in the head, some in the culture. Lacking the external hard drive of a literate society, the Boskops were unable to exploit the vast potential locked up in their expanded cortex. They were born just a few millennia too soon. In any event, Boskops are gone, and the more we learn about them, the more we miss them. Their demise is likely to have been gradual. A big skull was not conducive to easy births, and thus a within-group pressure toward smaller heads was probably always present, as it still is in present-day humans, who have an unusually high infant mortality rate due to big-headed babies. This pressure, together with possible interbreeding with migrating groups of smaller-brained peoples, may have led to a gradual decrease in the frequency of the Boskop genes in the growing population of what is now South Africa. Then again, as is all too evident, human history has often been a history of savagery. Genocide and oppression seem primitive, whereas modern institutions from schools to hospices seem enlightened. Surely, we like to think, our future portends more of the latter than the former. 
If learning and gentility are signs of civilization, perhaps our almost-big brains are straining against their residual atavism, struggling to expand. Perhaps the preternaturally civilized Boskops had no chance against our barbarous ancestors, but could be leaders of society if they were among us today. Maybe traces of Boskops, and their unusual nature, linger on in isolated corners of the world. Physical anthropologists report that Boskop features still occasionally pop up in living populations of Bushmen, raising the possibility that the last of the race may have walked the dusty Transvaal in the not-too-distant past. Some genes stay around in a population, or mix themselves into surrounding populations via interbreeding. The genes may remain on the periphery, neither becoming widely fixed in the population at large nor being entirely eliminated from the gene pool. Just about 100 miles from the original Boskop discovery site, further excavations were once carried out by Frederick FitzSimons. He knew what he had discovered and was eagerly seeking more of these skulls. At his new dig site, FitzSimons came across a remarkable piece of construction. The site had been at one time a communal living center, perhaps tens of thousands of years ago. There were many collected rocks, leftover bones, and some casually interred skeletons of normal-looking humans. But to one side of the site, in a clearing, was a single, carefully constructed tomb, built for a single occupant—perhaps the tomb of a leader or of a revered wise man. His remains had been positioned to face the rising sun. In repose, he appeared unremarkable in every regard…except for a giant skull.
The payback period refers to the amount of time it takes to recover the cost of an investment. Moreover, it's how long it takes for the cash flow of income from the investment to equal its initial cost. This is usually expressed in years. Most of what happens in corporate finance involves capital budgeting — especially when it comes to the values of investments. Most corporations will use payback period analysis in order to determine whether they should undertake a particular investment. But there are drawbacks to using the payback period in capital budgeting. Payback Period Analysis Payback period analysis is favored for its simplicity, and can be calculated using this easy formula: Payback Period = Initial Investment ÷ Estimated Annual Cash Flow This analysis method is particularly helpful for smaller firms that need the liquidity provided by a capital investment with a short payback period. The sooner money used for capital investments is replaced, the sooner it can be applied to other capital investments. A quicker payback period also reduces the risk of loss occurring from possible changes in economic or market conditions over a longer period of time. When considering two similar capital investments, a company will be inclined to choose the one with the shortest payback period. The payback period is determined by dividing the cost of the capital investment by the projected annual cash inflows resulting from the investment. Some companies rely heavily on payback period analysis and only consider investments for which the payback period does not exceed a specified number of years. So, longer investment periods are typically not desired. Limitations of Payback Period Analysis Despite its appeal, the payback period analysis method has some significant drawbacks. The first is that it fails to take into account the time value of money (TVM) and adjust the cash inflows accordingly. The TVM is the idea that the value of cash today will be worth more than in the future because of the present day's earning potential. Thus, an inflow return of $15,000 from an investment that occurs in the fifth year following the investment is viewed as having the same value as a $15,000 cash outflow that occurred in the year the investment was made despite the fact the purchasing power of $15,000 is likely significantly lower after five years. Furthermore, the payback analysis fails to consider inflows of cash that occur beyond the payback period, thus failing to compare the overall profitability of one project as compared to another. For example, two proposed investments may have similar payback periods. But cash inflows from one project might steadily decline following the end of the payback period, while cash inflows from the other project might steadily increase for several years after the end of the payback period. Since many capital investments provide investment returns over a period of many years, this can be an important consideration. The simplicity of the payback period analysis falls short in not taking into account the complexity of cash flows that can occur with capital investments. In reality, capital investments are not merely a matter of one large cash outflow followed by steady cash inflows. Additional cash outflows may be required over time, and inflows may fluctuate in accordance with sales and revenues. This method also does not take into account other factors such as risk, financing or any other considerations that come into play with certain investments. 
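A short C sketch of this formula (the function and the figures are ours, for illustration). It also handles the uneven multi-year inflows that the one-line formula glosses over, interpolating within the year in which the cost is finally recovered:

#include <stdio.h>

/* Payback period for a stream of (possibly uneven) annual inflows.
   Returns the period in years, or -1 if the cost is never recovered. */
double payback_years(double cost, const double inflows[], int years) {
    double remaining = cost;
    for (int y = 0; y < years; y++) {
        if (inflows[y] >= remaining)
            return y + remaining / inflows[y]; /* fraction of final year */
        remaining -= inflows[y];
    }
    return -1.0;
}

int main(void) {
    double inflows[] = { 3000.0, 4000.0, 5000.0, 5000.0 };
    /* A $10,000 investment recovered partway through year three: */
    printf("payback: %.2f years\n", payback_years(10000.0, inflows, 4));
    return 0;
}

With these illustrative figures the program prints "payback: 2.60 years": two full years of inflows recover $7,000, and the remaining $3,000 is three-fifths of year three's $5,000.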
Due to its limitations, payback period analysis is sometimes used as a preliminary evaluation, and then supplemented with other evaluations, such as net present value (NPV) analysis or the internal rate of return (IRR). The Bottom Line The payback period can be a valuable tool for analysis when used properly to determine whether a business should undertake a particular investment. However, this method does not take into account several key factors including the time value of money, any risk involved with the investment or financing. For this reason, it is suggested that corporations use this method in conjunction with others to help make sound decisions about their investments.
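For contrast, here is a minimal sketch of the NPV calculation mentioned above (again with illustrative figures of our own). Unlike the payback method, it discounts each year's inflow, so a dollar arriving in year four counts for less than a dollar arriving in year one:

#include <math.h>
#include <stdio.h>

/* Net present value: the negated initial cost plus each inflow
   discounted back to today at the given annual rate. */
double npv(double rate, double cost, const double inflows[], int years) {
    double value = -cost;
    for (int y = 0; y < years; y++)
        value += inflows[y] / pow(1.0 + rate, y + 1);
    return value;
}

int main(void) {
    double inflows[] = { 3000.0, 4000.0, 5000.0, 5000.0 };
    printf("NPV at 10%%: $%.2f\n", npv(0.10, 10000.0, inflows, 4));
    return 0;
}

A positive NPV (here about $3,204.70) says the investment earns more than the 10% discount rate, information the raw 2.60-year payback figure cannot provide.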
“Make” one of your child’s favorite meals and get them excited! This activity helps your child recognize shapes and numbers. It also allows your child to act as a little chef by adding their own “pizza toppings”! - Shape recognition - Number recognition AGE: 1.5 – 3 Years Old TIME: 10 Mins CATEGORY: Arts & Crafts, Numeracy, Shapes - Colored paper - At step 4, engage your child by asking them about the shape of the pizza topping they are adding so that they can practice recognizing shapes as well. - Add another level to the activity by introducing the colors of typical pizza toppings! E.g. cheese is usually yellow, vegetables are green, pepperoni is red, etc., for them to recognize colors too.
We take the example of the sign of the product of two linear expressions. Two ways for determining the sign of a linear expression are demonstrated: - one uses results about linear expressions' sign (method 1), - the other uses results about variation of linear functions (method 2). Then we will use the rule of signs of a product or of a quotient for the sign of f. Create the function f. You can use function by formula. However, it is important to check the domain so that the mini-diagram appears. In the expressions list, click on the expression of f. Figure 62 - Menu Calculate / Sub-expression Click on the Calculate button; in the contextual menu which appears, choose Sub-expressions. The following box appears; it lists the sub-expressions. Figure 63 - Sub-expressions list The selection of list elements works as in Windows™ Explorer: keeping the « Ctrl » key pressed, click on non-consecutive elements in order to select them. Keeping the « Shift » key pressed, click on the first, then on the last element of the list to be selected; the All button selects all the elements. In this example choose All, then click OK. The expressions and functions lists are completed by two functions named f0 and f1 defined from the selected expressions. Figure 64 - Sub-expressions (functions list) Figure 65 - Sub-expressions (expressions list) Highlight f1 in the functions list as seen above. In the Justify menu, choose Sign: linear. Figure 66 - Menu Justify / Sign - linear A dialog box appears. Figure 67 - Box "Sign - linear - Condition of application" Fill in Coeff of x and null at, then click on the Evaluate button. If there is an error, information appears in the Notepad. Figure 68 - Sign linear completed box with error Figure 69 - Box with correct answer If the correct values are entered, after clicking Evaluate, a new box opens. Figure 70 - Conclusion box for the sign of linear function On each line, successive clicks in the blank box make negative or positive appear; evaluate your propositions by clicking the Evaluate Prop button. Figure 71 - Conclusion box with first line filled When propositions are correct, the OK button is active; please press it. Figure 72 - Conclusion box with correct answers The zero of f1 is added in the x-values list, and in the functions list the sign of f1 is shown with green bullets (positive if the bullet is above the horizontal line passing through 0, otherwise negative). Figure 73 - Displaying sign of f1 In order to determine the sign of (π – 2x), create the zero of the expression f0 (π/2) in the x-values list (click New Value), then select f0 in the functions list. In the Justify menu, choose Variations: reference functions. Figure 74 - Menu Justify / Variations - reference functions This box opens: Figure 75 - Box "Variations - reference functions - Conditions of application" You can choose the type of function in the list, and you will get a graphical representation of the chosen function’s type. Click on Evaluate. After a correct answer, a Conclusion box opens. Figure 76 - Conclusion box for variations For each line, click on the blank box in order to make decreasing or increasing appear. Evaluate your propositions with the Evaluate Prop button. When propositions are correct, the OK button is active; please press it. Figure 77 - Conclusion box for variations with correct answers. In the functions list, results appear in the mini-diagrams (arrows).
Figure 78 - Displaying variations of f0 After the variations are established, the sign must be justified; in the Justify menu, choose Sign: known variations. Figure 79 - Menu Justify / Sign - known variations The box Sign: Known variations: application conditions opens. At one of the values in the drop-down menu, indicate if the function is negative, zero or positive. Here, choose x2 and zero function. It means that you recognised that the function is decreasing and null at x2, and then Casyopée will know that the function is positive for x < x2 and negative for x > x2. When the proposition is correct, the Evaluate button is active; please press it. Figure 80 - Box "Sign - known variations - Conditions of application" A Conclusion box opens. For each line, click on the blank box in order to make negative or positive appear. Evaluate your propositions with the Evaluate Prop button. When propositions are correct, the OK button is active; please press it. Figure 81 - Conclusion box for sign of a function with known variations The signs of f0 are denoted by the position of arrows. Above the line passing through 0 the function is positive, and below it the function is negative. Figure 82 – Sign diagram of f0 Finally, in the functions list, select f, then click on the Justify menu and choose Sign: product, quotient. Figure 83 - Menu Justify / Sign - product, quotient At the bottom of the dialogue box, select the two lines, because f is the product of f0 and f1. Figure 84 - Box "Sign - product, quotient - Conditions of application" Click on Evaluate; the Conclusion box opens. The meaning of this window has already been explained. After pressing Evaluate Prop. and OK, the results appear as signs in the functions list. Figure 85 - Conclusion box for the sign of a product In the functions list, the table of signs is totally filled. The Notepad allows you to keep track of the different steps and to write a justification. Figure 86 - Sign table of functions
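To make the final table concrete, here is a minimal worked example of the rule of signs. The text defines f0(x) = π − 2x (decreasing, zero at π/2); the formula for f1 is never shown in this excerpt, so f1(x) = 2x + 1, with positive leading coefficient and zero at −1/2, is a hypothetical stand-in:

\[
\begin{array}{c|ccccc}
x & \left(-\infty,-\tfrac{1}{2}\right) & -\tfrac{1}{2} & \left(-\tfrac{1}{2},\tfrac{\pi}{2}\right) & \tfrac{\pi}{2} & \left(\tfrac{\pi}{2},+\infty\right) \\ \hline
f_1(x) = 2x+1 & - & 0 & + & + & + \\
f_0(x) = \pi-2x & + & + & + & 0 & - \\
f(x) = f_0(x)\,f_1(x) & - & 0 & + & 0 & -
\end{array}
\]

Each factor's row comes from the linear-sign or variation argument above, and the last row applies the rule of signs column by column, which is exactly the table Casyopée assembles in Figure 86.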
FRIDAY, 18 NOVEMBER 2011 The study was conducted in two parts: the subjects were first presented with two boards of food: bananas for the chimps and gummy frogs for the children. The subject could reach one of the boards by pulling on both ends of a rope themselves. To reach the second board, however, they required a partner in a neighbouring room to pull on the other end of the rope. The children or chimps were allowed access to only one board, but were free to choose which. In this scenario, the chimpanzees chose to collaborate only 58% of the time, which is not significantly different from chance, whereas the children showed a significant preference for working together, choosing to do so 78% of the time. In that experiment, however, the partner always received the food reward regardless of whether or not they pulled on a rope. The researchers therefore wondered whether the children were actually choosing to avoid a decision that caused their partner to get something for doing nothing. To test this, they set up a second study in which the subject child never saw their partner receive a reward. The children still chose to work together 81% of the time, confirming the strong preference for collaboration seen in the first study. Chimps possess much of the cognitive ability required for cooperation and do work together at times, as they have been observed carrying out border patrols and hunting in groups. The results of this study may therefore be due to a difference in motivation to cooperate, with humans simply having more of a preference for working together. The development of this preference may have been one of the initial steps in the evolution of human collaboration, which eventually led to the establishment of our highly complex society. Future work will focus on other primates, such as the bonobo, to further elucidate the evolutionary history of cooperation. Written by Catherine Moir
Most of us are proud to be Americans—and with good reason. Other than the United States of America, no other nation on earth was designed with the primary purpose of promoting and protecting the rights of the individual. In this respect, our republic stands alone. The Only Government Established to Protect Individual Rights In all other countries, the collective rights of the nation have been established as primary, and the rights of its individual citizens as secondary. The founding fathers of the United States—revolutionaries to the core—flipped this notion, identifying and implementing the principles that the purpose of government is to secure individual rights and that governments derive their powers from the consent of the governed. Since the American government was established to secure (not create) individual rights, it follows that these rights existed before governments. Rights are inherent to humans’ rational nature. The most fundamental individual right is the right to life; all other individual rights spring from it. Secondarily, since the power of governments comes from the consent of the governed, it follows that the governed cannot empower governments with rights that individual citizens themselves do not possess. For example, since no individual has the right to deprive another of his/her personal belief or the freedom to express it, the government does not, either. Examining Freedom of Speech in America Our First Amendment right to free speech is clearly stated: “Congress shall make no law . . . abridging the freedom of speech.” Imagine, then, this scenario: A privately funded company posts a controversial position on social media. Some respondents claim to be offended by the expressed opinion, and a heated public discussion ensues. The administrators delete some of the responses. Is this a violation of the right to free speech? The Constitution protects—from government interference—an individual’s right to believe anything and to freely express those beliefs. In contrast, private entities are free to permit or prohibit speech on their own property and have no moral obligation to allow or promote speech that violates their sincerely held beliefs, motivations, or value systems. For example, because the people have granted limited power to the government, it is constrained from prohibiting demonstrations based solely on the demonstrators’ beliefs. A private publishing company, however, has no such constraint and is free to publish or not publish as it deems appropriate—it has the freedom to decline to publish any content that promotes ideas and beliefs with which it disagrees. A common misconception about the right to free speech is that it includes freedom from criticism. A derivative effect of this belief prompts some educational entities to create “safe spaces” where certain opinions are suppressed because they make some people feel uncomfortable. Such a policy may be implemented on private property, but not on any property (such as a public school) that receives any taxpayer money. Public funds obtained from taxpayers may not be used to violate freedom of speech—a precious individual right that our government was established to protect. The right not to be criticized or the right not to be offended does not exist in a moral society. You have the right to say what you want, but other people have the right to dislike and criticize what you say.
Again, in America, government may not use the force of law to restrict the expression of your personal beliefs as long as your actions do not violate anyone else’s rights. Challenger’s Commitment to Individual Rights Challenger is committed to teaching its students to respect individual rights. This important objective is interwoven into our policies, philosophy, curriculum, and teaching and behavior management methods. We teach students to value their individuality and their inalienable rights. We promote free speech and active debate as we encourage students to think for themselves and come to their own conclusions based on their examination of facts. Likewise, we recognize that the crucial choice of who will educate your children is always yours. We’re glad you have chosen Challenger as a partner in teaching your children, and we’ll continue to focus on important principles that can positively affect their learning, development, and happiness.
Dark matter probably forms an incredible 80 percent of the mass in the universe. But this single fact is nearly the extent of our knowledge of this mysterious, all-pervading component: scientists are not sure what it is or how it came to be. Now a groundbreaking study has suggested dark matter may be more unusual than first thought, as its origin may actually predate the beginning of the Universe as we usually date it – the Big Bang. Dark matter is difficult to understand because it cannot be observed directly. Scientists know that dark matter outweighs ordinary matter in the universe by more than five times because galaxies spin so fast that, by the known laws of physics, they should fly apart. The Milky Way, for example, rotates so fast that it should contain about 30 times more dark matter than ordinary matter. Tommi Tenkanen, a postdoctoral fellow in Physics and Astronomy at Johns Hopkins University and the author of the study, believes that he has found a new connection between particle physics and astronomy. He states: "If dark matter is composed of new particles born before the Big Bang, it affects the way galaxies are distributed in the sky in a unique way. This connection can be used to reveal their identity and to draw conclusions about the time before the Big Bang." The findings contradict the long-held assumption that dark matter is a remnant of the Big Bang. Mr Tenkanen added: "If dark matter were really a leftover of the Big Bang, then in many cases researchers should have seen a direct signal of dark matter in particle physics experiments already." The study illustrates how dark matter could have been produced before the Big Bang – the event that begins the existing cosmological model for the observable Universe. Dark matter can be born during cosmic inflation, when space-time expanded at an extraordinarily rapid rate. This expansion is thought to have led to the copious production of certain exotic particles called scalars; so far, only one scalar particle has been discovered, the well-known Higgs boson. Mr Tenkanen added: "We do not know what dark matter is, but if it has anything to do with any scalar particles, it may be older than the Big Bang. In the proposed mathematical scenario, we do not need to assume new kinds of interactions between visible and dark matter beyond gravity, which we already know is there." The study offers a possible mathematical scenario for the origins of dark matter, and the research could lead to a new way of testing the theory: observing the signatures dark matter leaves in the distribution of matter in the Universe.
Microbes could provide a clean, renewable energy source and use up carbon dioxide in the process, suggested Dr James Chong at a Science Media Centre press briefing today. "Methanogens are microbes called archaea that are similar to bacteria. They are responsible for the vast majority of methane produced on earth by living things" says Dr Chong from York University. "They use carbon dioxide to make methane, the major flammable component of natural gas. So methanogens could be used to make a renewable, carbon neutral gas substitute." Methanogens produce about one billion tonnes of methane every year. They thrive in oxygen-free environments like the guts of cows and sheep, humans and even termites. They live in swamps, bogs and lakes. "Increased human activity causes methane emissions to rise because methanogens grow well in rice paddies, sewage processing plants and landfill sites, which are all made by humans." Methanogens could feed on waste from farms, food and even our homes to make biogas. This is done in Europe, but very little in the UK. The government is now looking at microbes as a source of fuel and as a way to tackle food waste in particular. Methane is a greenhouse gas that is 23 times more effective at trapping heat than carbon dioxide. "By using methane produced by bacteria as a fuel source, we can reduce the amount released into the atmosphere and use up some carbon dioxide in the process!"
Students enter silently. They will all have their journals on their desks. Many have been falling behind during the opening part of class in the last two weeks and will get an opportunity to complete missing journal entries. Those who are finished with their journal entry will work on an operations puzzle that allows for the opportunity to earn achievement points. As students complete each puzzle, they may raise their hands to have their work checked and earn stars on their achievement card. After 5 minutes all students will be asked to turn in their homework from the weekend and get ready for the task. Essential Question: How is subtracting signed numbers related to adding signed numbers? Ask a student to sit at the table with the document camera to model the intro to this lesson. We will use the essential question above to guide how we think about the exercises on counters. For each expression, students will model with their chips and draw a representation of their model on their paper. I first start out by asking them to place 3 red chips on their desk for the first problem and also draw three circles with a positive sign in front. I ask them to volunteer the integer that represents this model. We write 3 on our paper. Then I ask them to show me 3+2 by modeling 3 red chips in a group and 2 red chips in another group. I remind them to draw the representation and fill in their answer. Then I read the 3rd problem as “3 take away 2” and ask someone to model 3 and then “take away two”. Some students may want to place two blue chips, the opposites of two red chips. This is not entirely wrong. It is in fact the aim of the lesson, but it is important to translate the problem as “take away”, so that other students can connect the dots later. We ask this particular student to write down 3+(-2) as the expression for their drawing and ask them to continue thinking about the differences and similarities between the two expressions. We come back to ask this student about his comparison once we get closer to forming the relationship between addition and subtraction as a class. As we move through examples, I ask student volunteers to take turns at the document camera to display their model and draw their representation. Students continue with examples #4-9. I encourage students to speak with their neighbor and share out as a class if they are noticing a relationship between addition and subtraction. If any student gives an explanation that is too vague or broad, I offer counterexamples and ask them to continue to think about the relationship (i.e. student states, “addition and subtraction are the same”, teacher responds, “what do you mean they are the same? So 9-5 is the same as 9+5? How are they the same?”). If a student words something in a succinct and mathematically correct way, we ask the student to repeat what he stated so that we can write it in our journals. I write it on a sheet of paper and project it on the document camera. I stop the class after 3-4 minutes to complete #1-9 and ask them to complete #10 with me. We start out with 3 blue chips. We read the problem as “negative 3 take away 3”. I ask students whether we need to take away 3 negatives or 3 positives. When we see that we need to take away positives, or red chips, and we have none, I ask if there is a way we can add red chips without adding any more value to the number currently on the table (use of zero pairs). If we add three zero pairs, we can take away the 3 red chips (+) and we’re left with a total of 6 blue chips.
Students volunteer the use of the additive inverse to re-write the expression as an addition sentence. Students complete #12 with partners and are asked to try #13 in their pairs as well. I walk around until I find the first group that correctly models #13 and ask one of them to draw it on the white board using red and blue markers. To close the lesson, I will ask students to discuss the answer to today's essential question with their neighbor, and then I will have 2-3 students share out. Exit ticket – Students are given an exit ticket to complete before they leave. Homework is placed on student desks while they work. The exit ticket includes 6 addition and subtraction questions (e.g., 8 + (-4), -46 - (-5)). Homework includes 8 addition/subtraction questions, 1 error analysis, and 1 word problem involving change.
memset() is used to fill a block of memory with a particular value. The syntax of the memset() function is as follows:

    // ptr ==> starting address of the memory block to be filled
    // x   ==> value to be filled in
    // n   ==> number of bytes to be filled, starting from ptr
    void *memset(void *ptr, int x, size_t n);

Note that ptr is a void pointer, so we can pass a pointer of any type to this function.

A simple example in C (see the full listing at the end of this article) replaces part of a string with '.' characters and prints:

Before memset(): GeeksForGeeks is for programming geeks.
After memset(): GeeksForGeeks........programming geeks.

Explanation: (str + 13) points to the first space (0-based index 13) of the string "GeeksForGeeks is for programming geeks.", and memset() writes the character '.' into the 8 character positions starting at that first ' ', which produces the output shown above.

memset() is also commonly used to zero out an integer array; printing a 10-element int array after a call such as memset(arr, 0, sizeof(arr)) gives:

0 0 0 0 0 0 0 0 0 0

Now predict the output when the same call is made with the value 10 instead of 0. The code does not set the array values to 10, because memset() works byte by byte and an integer contains more than one byte (or character). However, if we replace 10 with -1, we do get -1 values, because the representation of -1 contains all 1s in the case of both char and int.

Reference: memset man page (linux)

This article is contributed by MAZHAR IMAM KHAN.
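For concreteness, here is a minimal, self-contained C program along the lines of the examples discussed above. The variable names and exact listing are illustrative rather than taken from the original article, but the program reproduces all three outputs:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Example 1: overwrite part of a string with '.' characters. */
        char str[] = "GeeksForGeeks is for programming geeks.";
        printf("Before memset(): %s\n", str);
        memset(str + 13, '.', 8);  /* fill 8 bytes starting at the first space */
        printf("After memset(): %s\n", str);

        /* Example 2: zero out an int array; all-zero bytes represent
           the int value 0, so this is safe for any integer type. */
        int zeros[10];
        memset(zeros, 0, sizeof(zeros));
        for (int i = 0; i < 10; i++)
            printf("%d ", zeros[i]);
        printf("\n");

        /* Example 3: the pitfall. Each byte is set to 10 (0x0A), so on
           a typical machine with 4-byte ints each element becomes
           0x0A0A0A0A (168430090), not 10. With -1 instead of 10, every
           byte is all 1s, so each element reads back as -1. */
        int arr[5];
        memset(arr, 10, sizeof(arr));
        for (int i = 0; i < 5; i++)
            printf("%d ", arr[i]);
        printf("\n");

        return 0;
    }

Compiled as C99 or later (e.g., gcc -std=c99), this prints the string transformation, the row of zeros, and then five copies of 168430090 on a system with 4-byte ints.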
A short topical play introduces students to the fields of bioinformatics, genetic testing, direct-to-consumer genetic testing, and ethical considerations. Students discuss some of the broad implications and ethical questions raised by gaining information through genetic testing. Students then consider a number of genetic tests and their potential usefulness and value and, as a class, explore the website of 23andMe, a company that offers direct-to-consumer genetic tests. The lesson wraps up as it began—by engaging students in a story. Through a short video, students are introduced to a family impacted by breast cancer. In Lesson One, students also learn how bioengineers might use bioinformatics tools in their careers.

Download Genetic Testing Lesson 1 (pdf)
Download Genetic Testing Lesson 1 PowerPoint (ppt)
Astronomers have discovered strange and unexpected behavior around the supermassive black hole at the heart of galaxy NGC 5548. The international team of researchers detected a clumpy gas stream flowing quickly outward and blocking 90 percent of the X-rays emitted by the black hole. This activity could provide insights into how supermassive black holes interact with their host galaxies. The discovery of the unusual behavior in NGC 5548 is the result of an intensive observing campaign using major European Space Agency and NASA observatories, including the NASA/ESA Hubble Space Telescope. In 2013 and 2014, the international team carried out the most extensive monitoring campaign of an active galaxy ever conducted. There are other galaxies that show gas streams near a black hole, but this is the first time that a stream like this has been seen to move into the line of sight. The researchers say that this is the first direct evidence for the long-predicted shielding process that is needed to accelerate powerful gas streams, or winds, to high speeds. “This is a milestone in understanding how supermassive black holes interact with their host galaxies,” said Jelle Kaastra of the SRON Netherlands Institute for Space Research. “We were very lucky. You don’t normally see this kind of event with objects like this. It tells us more about the powerful ionized winds that allow supermassive black holes in the nuclei of active galaxies to expel large amounts of matter. In quasars larger than NGC 5548, these winds can regulate the growth of both the black hole and its host galaxy.” As matter spirals down into a black hole, it forms a flat disk known as an accretion disk. The disk is heated so much that it emits X-rays near the black hole and less energetic ultraviolet radiation farther out. The ultraviolet radiation can create winds strong enough to blow gas away from the black hole, which otherwise would have fallen into it. But the winds only come into existence if their starting point is shielded from X-rays. Earlier observations had seen the effects of both X-rays and ultraviolet radiation on a region of warm gas far away from the black hole, but these most recent observations have shown the presence of a new gas stream between the disk and the original cloud. The newly discovered gas stream in the archetypal Seyfert galaxy (NGC 5548) — one of the best-studied sources of this type over the past half-century — absorbs most of the X-ray radiation before it reaches the original cloud, shielding it from X-rays and leaving only the ultraviolet radiation. The same stream shields gas closer to the accretion disk. This makes the strong winds possible, and it appears that the shielding has been going on for at least three years. Directly after Hubble had observed NGC 5548 on June 22, 2013, the team discovered unexpected features in the data. “There were dramatic changes since the last observation with Hubble in 2011. We saw signatures of much colder gas than was present before, indicating that the wind had cooled down, due to a strong decrease in the ionizing X-ray radiation from the nucleus,” said team member Gerard Kriss of the Space Telescope Science Institute in Baltimore. After combining and analyzing data from the six observatories involved, the team was able to put the pieces of the puzzle together. NGC 5548’s persistent wind, which scientists have known about for two decades, reaches velocities exceeding 2.2 million mph (3.5 million km/h).
But a new wind has arisen that is much stronger and faster than the persistent wind. “The new wind reaches speeds of up to 18 million km/h [11 million mph] but is much closer to the nucleus than the persistent wind,” said Kaastra. “The new gas outflow blocks 90 percent of the low-energy X-rays that come from close to the black hole, and it obscures up to a third of the region that emits the ultraviolet radiation at a distance of a few light-days from the black hole.” Strong X-ray absorption by ionized gas has been seen in several other sources, and it has been attributed, for instance, to passing clouds. “However, in our case, thanks to the combined XMM-Newton and Hubble data, we know this is a fast stream of outflowing gas very close to the nucleus,” said Massimo Cappi of INAF-IASF Bologna. “It may even originate from the accretion disk,” added team member Pierre-Olivier Petrucci of CNRS, IPAG Grenoble.
Risk Perceptions and Risk Characteristics

Summary and Keywords

Risk perception refers to people’s subjective judgments about the likelihood of negative occurrences such as injury, illness, disease, and death. Risk perception is important in health and risk communication because it determines which hazards people care about and how they deal with them. Risk perception has two main dimensions: the cognitive dimension, which relates to how much people know about and understand risks, and the emotional dimension, which relates to how they feel about them. Several theoretical models have been developed to explain how people perceive risks, how they process risk information, and how they make decisions about them: the psychometric paradigm, the risk perception model, the mental noise model, the negative dominance model, the trust determination model, and the social amplification of risk framework. Laypeople have been found to evaluate risks mostly according to subjective perceptions, intuitive judgments, and inferences made from media coverage and limited information. Experts try to base their risk perceptions more on research findings and statistical evidence. Risk perceptions are important precursors to health-related behaviors and other behaviors that experts recommend for either dealing with or preventing risks. Models of behavior change that incorporate the concept of risk perception include the Health Belief Model, Protection Motivation Theory, the Extended Parallel Process Model, and the Risk Perception Attitude framework. Public awareness and perceptions of a risk can be influenced by how the media cover it. A variety of media factors have been found to affect the public’s risk perceptions, including the following: (1) amount of media coverage; (2) frames used for describing risks; (3) valence and tone of media coverage; (4) media sources and their perceived trustworthiness; (5) formats in which risks are presented; and (6) media channels and types. For all of these media factors, albeit to varying degrees, there is theoretical and empirical support for their relevance to risk perceptions. Particularly related to media channels and genres, two hypotheses have emerged that specify different kinds of media influences. The impersonal impact hypothesis predicts that news media mainly influence how people see risks as affecting other individuals, groups, nations, or the world population in general (societal-level risk perceptions). By contrast, the differential impact hypothesis predicts that, while news media influence people’s societal-level risk perceptions, entertainment media have stronger effects on how people see risks as affecting themselves (personal-level risk perceptions). As the media environment becomes increasingly diverse and fragmented, future research on risk perception needs to examine more of the influences that various media, including social media, have on risk perception. Also, the accounts of how those influences work need to be further refined. Finally, since people’s risk perceptions lead them to either adopt or reject recommended health behaviors, more research needs to examine how risk perceptions are jointly affected by media, audience characteristics, and risk characteristics.
Keywords: risk perception, optimistic bias, psychometric paradigm, impersonal impact hypothesis, differential impact hypothesis, personal- and societal-level risk perceptions, risk presentation format, health and risk message design and processing

Risk and Risk Perception: Definitions and Dimensions

Risks are pervasive issues both within and across national borders. Noteworthy examples include natural disasters such as hurricanes and earthquakes, human-made disasters such as radiation exposure, and recent instances of global infectious diseases such as Ebola, Middle East Respiratory Syndrome (MERS), and the Zika virus. The concept of risk refers to the probability of experiencing harm or hazards. Hazards refers to threats to people and the things they value. Probability refers to the likelihood of a harm’s or hazard’s occurrence, which will tend to be perceived with some degree of uncertainty. The uncertain aspect of risk is related to people’s disagreements about a given risk’s magnitude and severity. People experience uncertainty when a situation is ambiguous, unpredictable, or probabilistic. Interpretations and other subjective judgments about risks are known as risk perceptions (Slovic, 2000). Risk perceptions are important determinants of health- and risk-related decisions such as adopting healthy behaviors, curtailing unhealthy behaviors, and accepting or rejecting a certain level of risks (e.g., policy decisions on nuclear plants, GMO foods, processed meats). A common assumption in risk perception research is that people’s knowledge and certainty about a risk determine how they will perceive it. This assumption is based on the rational choice model of decision making, which portrays people as evaluating the possibility of outcomes after they calculate potential costs and benefits. This way of evaluating risks is predominantly ascribed to experts, who are assumed to rely on scientific information and objective assessment. By contrast, laypeople are commonly assumed to evaluate risks by using heuristics and other informal thought processes. For example, when people are more aware of certain risks, they tend to believe that those risks happen more frequently than they actually do. This tendency is known as the availability heuristic (Kahneman, Slovic, & Tversky, 1982). Consider some examples from health contexts: If you have any friends or family members who died of colon cancer, you are more likely than other people to perceive that disease as posing a fatal risk. Also, people who have been heavily exposed to media coverage of an infectious disease such as the H1N1 flu may perceive it to be more prevalent and risky than those who have not. Other ways of misperceiving the frequency and magnitude of risks can occur due to individual characteristics. A notable one is optimistic bias or unrealistic optimism, the tendency to believe that risks pose a less serious threat to oneself than they do to other people (Weinstein, 1980). For example, smokers who have a strong optimistic bias are likely to believe that smoking may be hazardous to other people’s health but not to their own. Heuristics such as these, as well as other individual tendencies, make people perceive risks in different ways. In addition, because laypeople often don’t have access to detailed information about risks, they tend to perceive them more in conjunction with emotions such as dread and fear. This tendency can lead people to overestimate hazards’ actual frequency and severity.
When risk perception was initially explored, researchers focused on people’s cognitive judgments about the magnitude and likelihood of risks. Eventually, however, they acknowledged the important role that emotions such as dread, fear, and outrage play in evaluating risks. Slovic and his associates called attention to the affect heuristic, which in the context of risk perception refers to people’s tendency to rely on their current emotions when they make judgments about risks. If we feel intense dread when we perceive a risk, we are likely to evaluate it as more threatening and more prevalent. Similarly, the risk-as-feelings hypothesis predicts that emotional reactions to risks are often independent of cognitive assessments of them, and that they are stronger determinants of people’s behavior (Loewenstein, Weber, Hsee, & Welch, 2001). As these emotional aspects of risk perception were being explored in the field of psychology, a similar view of emotions’ role in risk perception was developed separately in the field of risk communication. Risk communication refers to the process of informing and persuading the public about risks so that they will be able to perceive them accurately and make appropriate decisions about them (Walaski, 2011). As the field of risk communication matured, researchers discovered that laypeople often do not understand risks in the same ways experts do. Laypeople’s responses to risks are determined more by their subjective perceptions, and they tend to have less knowledge about objective risk factors. This was yet another discovery that affirmed the importance of the emotional dimension of risks and risk characteristics. One researcher who paid close attention to how laypeople perceive and respond to risks was Peter Sandman. Sandman (1989) defines risk as a combination of “hazards and outrage.” Hazards constitute the technical component of risk, and outrage is a nontechnical component that refers to an amalgam of voluntariness, control, responsiveness, trust, dread, and other non-rational responses (Walaski, 2011). Although Sandman’s concept of outrage is idiosyncratic and needs further clarification, it is notable for including emotional components such as dread and fear. In his efforts to understand the roles of media (particularly journalism) in people’s risk perceptions and subsequent behaviors, Sandman particularly highlights the emotional and sensational nature of media coverage, and he suggests risk communication strategies according to different levels of hazards and outrage. (The role of media in risk perceptions is discussed in more detail below.)

Risk Characteristics and Relevant Models

One theoretical framework that incorporates both the cognitive and emotional dimensions of risk perceptions is the psychometric paradigm developed by Slovic and his associates (Slovic, 2000).
According to the psychometric paradigm, people judge the riskiness of a hazard based on the combination of a range of (perceived) risk characteristics, which include the following: the severity of the risk is not controllable; the risk makes people feel dread; the risk could be globally catastrophic; the risk is certain to be fatal; people will experience the risk in unequal ways; many people are exposed to the risk; the risk could threaten future generations; the risk is increasing; exposure to the risk is involuntary; the risk affects us personally; the risk is not observable; people do not know whether they are exposed to the risk; the risk’s effects are immediate; the risk is new and unfamiliar; the risk is unknown to science (Slovic, Fischhoff, & Lichtenstein, 2000). The psychometric paradigm classifies this range of risk characteristics under two broad labels, dread risk and unknown risk. Dread risk includes “perceived lack of control, dread, catastrophic potential, fatal consequences and the inequitable distribution of risks and benefits” (Slovic, 2000, p. 225). Unknown risk includes “hazards judged to be unobservable, unknown, new, and delayed in their manifestation of harm” (Slovic, 2000, p. 226). Critics of the psychometric paradigm claim that these labels are ambiguous. Some have proposed that dread and unknown risk should instead be viewed as two dimensions of risk judgments, cognitive and emotional (Coleman, 1993; Dunwoody & Neuwirth, 1991). But despite such criticisms, the psychometric paradigm has increased our understanding of the complex psychology behind people’s risk perceptions. It has also helped explain why certain risk issues (e.g., radiation from nuclear plants) are perceived to be more serious than others—even when they in fact are not (Paek, 2014). Using several of these risk characteristics in the context of risk communication, Covello proposed four theoretical models that explain how people perceive risks, how they process risk information, and how they make decisions accordingly. First, the risk perception model identifies a wide variety of factors that influence people’s risk perceptions. They include voluntariness, controllability, familiarity, equity, benefits, understanding, uncertainty, dread, trust in institutions, reversibility, personal stake, ethical/moral nature, human versus natural origin, and catastrophic potential. For example, if people perceive a risk to be voluntarily incurred, they are more likely to accept it because they understand their role in experiencing the implications of the risk. Similarly, if people have less intense and less fearful emotions toward a risk, they are more likely to accept it. These factors have been used to inform risk and crisis communication strategies (Walaski, 2011). Second, the mental noise model posits that events producing a higher level of mental noise (or stress) reduce people’s ability to process risk-related information. Factors that cause a high level of mental noise include controllability, voluntariness, familiarity, cause of the disaster (human-made versus natural), dread, uncertainty, and the victim’s vulnerability (e.g., child, pregnant woman). These factors closely resemble those identified in the risk perception model. Third, the negative dominance model predicts that situations producing risks and subsequent emotions such as fear, dread, and anxiety create an environment where people are more likely to focus on negative messages.
Fourth, the trust determination model highlights the importance of the communicator’s perceived trustworthiness in shaping people’s perceptions of and reactions to given risks. It highlights several trust determination factors that help build the communicator’s trust, such as caring and empathy, competence and expertise, and honesty and openness. Of these four models, the risk perception model has been used most widely. The psychometric paradigm and Covello’s four models focus on how individuals’ psychological characteristics affect risk perceptions. Other approaches have indicated a variety of cultural and social influences on people’s perceptions of and responses to risks. For example, the social amplification of risk framework (SARF) attempts to show the relations among the technical analysis of risk and the cultural, social, and individual response structures that shape people’s experience of risk (Kasperson et al., 1988). SARF presumes that risk events interact with psychological, social, and cultural processes in ways that can heighten or attenuate public perceptions of risk and related risk behaviors. An important feature of SARF is that it highlights the roles played by communication channels in risk amplification and attenuation. One channel is informal and interpersonal communication networks. Friends, family, and co-workers may amplify or attenuate risk perceptions by giving one another information or reinforcing habitual perceptions and cultural biases. The other channel is the news media, which can determine which risks receive public attention. The media tend to pay more attention to (and thereby amplify) unusual or dramatically striking risks, and they pay less attention to well-known or dramatically uninteresting risks, even though such risks may continue to be severe. Taken together, the psychometric paradigm, Covello’s four theoretical models, and SARF highlight how people’s risk perceptions are determined by various risk characteristics and factors of individual psychology, societal institutions, and communication channels.

Theoretical Perspectives in Risk Perceptions

Risk perceptions are important precursors to behaviors that experts recommend for either dealing with or preventing risks, for example, vaccination, hand washing, wearing seat belts, and early screening for diseases. Several theories of behavior change have incorporated risk perception variables. Some of these theories—the Health Belief Model, Protection Motivation Theory, the Extended Parallel Process Model, and the Risk Perception Attitude framework—try to predict behaviors. Other theories try to explain how risk perceptions are formed and changed, and they pay special attention to the roles that media play in these processes. Because of this volume’s focus on health and risk message design and processing, this second type of theory is explored in more detail in the later parts of this chapter. The Health Belief Model (HBM) assumes that people want to avoid illness and that they will adopt behaviors which they believe will protect them from illness. The HBM identifies four types of risk perceptions as determinants of health behavior: perceived susceptibility, perceived severity, perceived benefits, and perceived barriers. Perceived susceptibility refers to people’s subjective beliefs about how vulnerable and susceptible they are to a disease or other health risk (Janz & Becker, 1984).
Perceived severity refers to how serious people believe a health risk to be, and whether it will have adverse physical consequences such as death, disability, and pain, and adverse social consequences such as ostracism, stigma, and shame. Perceived benefits refers to people’s beliefs about whether a health behavior will enable them to manage a health risk. Perceived barriers refers to people’s beliefs about whether the costs or negative aspects of adopting a health behavior will prevent them from doing so. Perceived susceptibility and severity also play roles in Protection Motivation Theory (PMT; Rogers, 1983) and the Extended Parallel Process Model (EPPM; Witte, 1992). Both theories explain that perceived susceptibility and severity constitute people’s perceived threat, which is a precursor to adopting a recommended health behavior. A high level of perceived threat is a necessary but not sufficient condition for adopting such a behavior. If people are not confident that they can carry out recommended actions (self-efficacy), or if they doubt those actions can control the threat (response efficacy), they will not adopt the recommended behavior. Similar to EPPM, the Risk Perception Attitude (RPA) framework also presumes that risk perceptions (both perceived susceptibility and severity) play key roles in motivating people’s changes in health behaviors (Rimal & Real, 2003). While EPPM tries to understand the underlying mechanism of how fear appeals work, RPA is useful for predicting individuals’ motivations and self-protective behaviors and for segmenting audience characteristics accordingly. Altogether, these health behavior theories and theoretical models highlight the importance of understanding risk perceptions as determinants of preventive and protective health behaviors. These theories are relevant to health and risk communication because they can guide message design. To take the case of vaccination, if people do not adopt behaviors such as getting shots for themselves or their children, formative research could be conducted to identify which risk perceptions in the HBM are relatively low and which of them are most closely related to people’s behavioral intentions. If people perceive that the benefits of getting vaccinations are most closely related to their vaccination behavior, vaccination campaign messages could be designed to amplify the perceived benefits of vaccination. Other major theories of risk perception address the issue of how risk perceptions are formed and changed. We learn about many risk issues indirectly—either from other people or from the media. Media play critical roles in forming and affecting risk perceptions, and researchers have identified a variety of media factors that affect the general public’s risk perceptions (McCarthy, Brennan, Boer, & Ritson, 2008). These factors include the following: (1) amount of media coverage; (2) frames used for presenting risks; (3) valence and tone of media coverage; (4) types and trustworthiness of risk information sources; (5) media message formats; and (6) types of media. To varying degrees, the relevance to risk perception of each of these media factors has received theoretical and empirical support.

1. Media coverage. If the media devote a lot of coverage to a risk issue, it will become more salient to the public. In turn, the public will regard the issue as important.
Media research on agenda setting has explicated this link between the media agenda (the issues that journalists and other media professionals consider to be worth covering) and the public agenda (the issues that members of the general public care about). For risk communicators, this link has important implications. If increased media coverage can heighten the public’s perceptions of risk issues, risk communicators should take special care to provide credible risk-related information and to identify specific actions that the public should take.

2. Media framing of risk issues. Compared to the amount of coverage that the media devote to a risk issue, what is often more important is the way they present it. One important aspect of presentation is media framing, a topic widely studied in communication research. Framing refers to the process of “selecting and highlighting some facets of events or issues, and making connections among them so as to promote a particular interpretation, evaluation, and/or solution” (Entman, 2004, p. 5). Researchers have studied many of the relations between the way the public perceives risk issues and the way the mass media frame them. Media frames, which are also called news frames, consist of “the words, images, phrases, and presentation styles that a speaker (e.g., a politician, a media outlet) uses when relaying information about an issue or event to an audience” (Chong & Druckman, 2007, p. 100). When the media cover a scientific issue that may be related to a risk, they tend to use frames that emphasize the issue’s dramatic characteristics. News stories on risk tend to emphasize who is responsible for causing or solving the risk, what actions people can take to deal with it, and what information should make them feel reassured (Oh et al., 2012). Many studies have content-analyzed media coverage to understand how journalists frame risk issues and how frames appear in certain patterns in media coverage. However, few studies have determined which exact types of media frames most strongly affect people’s risk perceptions.

3. Valence and tone of media coverage. News media tend to pay special attention to the emotional aspects of risk issues and to select issues that generate strong emotions. Emotions felt by the public with respect to risk issues typically include dread, worry, anger, distrust, and distress (Sandman et al., 1993). Journalists may highlight such emotions over information and statistics about risks. Sometimes they even create spurious phenomena such as “media pandemics” (Gainor & Menefee, 2006; Paek et al., 2008). Journalists also tend to focus more on human interest topics, highlight worst-case scenarios, and describe risks with sensationalistic and emotionally charged language. For example, a content analysis of the environmental risk news stories that were chosen as best articles by newspaper editors found that 68% of the stories featured conflicts and emotionally charged opinions and did not include any risk information (Sandman et al., 1987). In another study, Johnson, Sandman, and Miller (1992) found that information about people’s emotional reactions to risk had a substantial effect on risk perceptions, while technical details about the risk had no effect. Through these and other empirical studies, Sandman and his colleagues have affirmed the importance of emotional content in shaping public perception of risk.

4. Risk information sources.
The types of sources that are used in media coverage on risk issues can also influence people’s risk perceptions. Journalists may favor sources who have strong opinions and can generate exciting debates, or they may seek out sources who help them balance publicly expressed views on controversial issues. They tend to use government, industry, and expert sources to represent the “safe” side of risk debates, and they tend to use activists and laypeople to represent the “risky” side (Sandman, 1997). People’s risk perceptions may also be affected by their perceptions of sources’ trustworthiness. In risk communication literature, trust has been found to play a significant role in predicting people’s risk perceptions, risk-preventive behaviors, and support for government. For uncertain health risk issues such as Ebola, MERS, and the Zika virus, or for abrupt and unexpected natural or human-made risks, people may rely on the scientists, experts, or government officials who appear as sources in media coverage. However, if people distrust any of these sources, they will doubt the information they provide, and this doubt will in turn affect their risk perceptions. The more people trust the institutions that deal with risk issues, the more likely they are to accept certain risks (Peters, Covello, & McCallum, 1997). By contrast, when risk communication efforts prove ineffective, lack of trust may be the cause. According to the asymmetry principle, people tend to notice negative and trust-destroying events more than positive and trust-building events, and they tend to consider sources of bad and trust-destroying news to be more credible than sources of good news (Slovic, 1993). While building trust in risk situations is important, such psychological tendencies provide additional challenges to health and risk communicators.

5. Risk presentation formats. The ways in which media present risk information can also affect people’s risk perceptions. Different message formats may convey uncertainty differently. Uncertainty is a central issue of risk perception because it affects how people perceive the risk itself, how they interpret risk information, and how motivated they will be to seek additional information about it (Powell et al., 2007). The two basic formats for presenting risk information are verbal and numerical estimates (Wardekker et al., 2008). Verbal estimates present risks without numbers, and the words in them tend to be vague, such as likely, unlikely, probably, possibly, etc. (Hove et al., 2015; Wallsten et al., 1986). Numeric estimates present risk information with numbers that either stand alone, or appear in ranges, or are accompanied by verbal qualifiers. Empirical studies have shown that these different risk presentation formats may have varying effects on audiences’ perceptions of and reactions to risks. On the issue of which format is more effective, there is still no consensus, and relatively little research has examined how media stories using the different formats affect people’s risk perceptions. One exception is an experimental study gauging audience reactions to media stories on H1N1, mad cow disease, and carcinogenic hazards in South Korean contexts (Hove & Paek, 2015). Findings were not consistent across all these topics, but the numeric presentation format generally yielded a higher level of risk perception than the verbal presentation format.

6. Genres and types of media. The genres and types of media in which risk messages and information appear can also affect risk perceptions.
Two basic genres of media—news and entertainment—have been found to influence people’s risk perceptions, sometimes in different ways. Compared to other media factors, types of media have been more systematically analyzed for how they affect risk perceptions. Two competing theoretical hypotheses have emerged—the impersonal impact hypothesis and the differential impact hypothesis. The impersonal impact hypothesis focuses on the effects of news media, and it makes a distinction between people’s personal-level and societal-level judgments about risk. Personal-level judgments refer to individuals’ beliefs about how much a risk threatens themselves, while societal-level judgments refer to their beliefs about how much a risk threatens collectives such as a city, a nation, or the world population (Tyler & Cook, 1984). This distinction is important because personal-level risk perception may directly lead to preventive behaviors, while societal-level risk perception may not have such a direct influence. The impersonal impact hypothesis predicts that news media exert a more powerful impact on societal-level than on personal-level risk judgments. The reason may be that when the news media feature a risk issue, journalists are more likely to describe it as a threat posed to generalized others whom audiences do not imagine as being similar to themselves. Different media types (television, print) and media genres (news, entertainment) may play different roles in influencing risk perceptions. For example, one study found that television exposure predicted personal-level risk perceptions, while frequent use of newspaper news predicted societal-level risk perceptions for voluntary health and risk issues (heart disease, AIDS, smoking) (Coleman, 1993). Based on such findings, an alternative hypothesis has been proposed to acknowledge the potentially different roles played by different types of media. The differential impact hypothesis predicts that entertainment media are more likely to influence people’s personal-level risk perceptions, while news media are more likely to influence their societal-level risk perceptions. Entertainment media tend to present risks in dramatic and emotional ways. Compared to news, dramas and movies may make a given health threat seem more salient and personally relevant. For example, a study on the portrayal of AIDS in the media found that movies and situation comedies were significantly related to personal-level risk perception (Snyder & Rouse, 1995). It is possible that exposure to news media affects the cognitive dimension of people’s risk judgments, while exposure to entertainment media affects the emotional dimension of risk judgments and personal-level risk perceptions. More research needs to explore how new and hybrid media genres such as health infotainment and neomedical documentaries affect people’s risk perceptions compared to traditional news and entertainment genres. A study comparing portrayals of smoking in Korean media found the following: exposure to news programs predicted smokers’ personal-level risk perceptions; exposure to entertainment programs predicted nonsmokers’ personal-level risk perceptions; and exposure to infotainment programs predicted both smokers’ and nonsmokers’ societal-level risk perceptions (So et al., 2011). These findings indicate that media type/channel/genre and audience characteristics (e.g., personal relevance, behavioral status, motivations) may also interact to affect people’s types of risk perceptions.
Discussion of the Literature

There has been growing recognition that risk communicators need to understand the many dimensions of people’s risk perceptions in order to do their jobs effectively. However, more efforts need to be devoted to understanding the determinants of risk perceptions and the underlying mechanisms through which risk perceptions affect subsequent behaviors. Some scholars have argued that the concept of risk perception is overly complex and vague, and that it should instead be called risk judgment because it has not only perceptual but cognitive, affective, and behavioral dimensions (Dunwoody & Neuwirth, 1991). However, the concept of risk judgment seems to focus more on the cognitive and rational aspects of risk perceptions and to overlook the various emotional ways in which people respond to risks. This emphasis on the rational and cognitive aspects of risk perceptions is also reflected in health behavior theories such as the HBM, PMT, EPPM, and RPA framework. These theoretical models commonly include perceived susceptibility and perceived severity, and they consider these risk perceptions as precursors to health behaviors. Even on the cognitive side of risk perceptions, some researchers have argued that more dimensions need to be explored. For example, the authors of a meta-analysis proposed adding perceived likelihood, the probability that one will be harmed by a risk (Brewer et al., 2007). Based on the 34 studies that these authors reviewed, perceived likelihood seems to be a distinct component of risk perceptions that has consistently been found to be related to health behaviors. The emotional side of risk perceptions has been extensively studied by Slovic and his associates as part of their work on the affect heuristic and the risk-as-feelings hypothesis. Their theoretical arguments overlap with Sandman’s argument defining risks as a combination of hazards and outrage. Covello’s identification of risk perception factors also overlaps with these concepts. However, researchers still need to determine how and to what extent discrete emotions (e.g., anger, fear, worry) affect risk perceptions and subsequent behaviors. Although a variety of risk characteristics have been identified under the headings of unknown and dread risk, little research has explored which exact ones have stronger or weaker impacts on risk perceptions and subsequent behaviors. Such detailed explorations would also need to be more fully integrated into research that examines how interpersonal and mediated communication affect perceived risk characteristics and risk perceptions. A recent study found evidence for the differential roles of the cognitive and emotional dimensions of people’s perceived risk characteristics in risk perceptions, as well as for a more promising role of entertainment media in the process (Oh et al., 2015).
Explicating the underlying mechanisms through which news and entertainment media affect risk perceptions in the context of H1N1 in South Korea, that study found the following: (1) exposure to news media is positively correlated with the cognitive dimension of risk characteristics, while exposure to entertainment media is positively correlated with both the cognitive and emotional dimensions; (2) the emotional, but not the cognitive, dimension of risk characteristics is positively related to both personal- and societal-level risk perceptions; and (3) exposure to entertainment media affects personal-level risk perceptions—but only indirectly, through the emotional dimension of risk characteristics. Media continue to play a critical role in communicating risks to the public, and scholars continue to try to identify media factors that affect risk perceptions. While some media factors, particularly media frames, have been extensively researched, others such as risk presentation formats need more attention. Efforts to understand how various verbal and numerical risk presentation formats can affect risk perceptions and subsequent behavior could help health and risk communicators develop more effective messages and campaigns. However, such efforts would also need to take account of audience characteristics such as numeracy and relevant personal traits (e.g., uncertainty avoidance, risk-seeking tendency, optimistic bias). Research on media channels and genres has generated interesting theoretical hypotheses. Distinguishing the personal and the societal levels of risk perceptions has moved risk perception research one step further. This distinction is important because these two levels of risk perception have differential impacts on subsequent behaviors. For example, if people think infectious diseases are more likely to affect only other people or society in general, they may not take preventive actions or follow government recommendations such as quarantining and stockpiling. Researchers have also made promising new discoveries regarding the different ways in which the media genres of news and entertainment influence risk perceptions. Recently, several studies have explored how and why people use hybrid genres of media such as edutainment, infotainment, and genre-specific media, and how these new media types affect risk perceptions. However, this area of research is limited because it still relies on survey measurements of media exposure (e.g., frequency, amount of media use). Such methods cannot capture the fact that, even within each type of informative or entertainment media, the way risk information is presented and framed could affect risk perceptions and subsequent behaviors. Researchers have also not yet adequately explicated the mechanisms through which media exposure affects risk perceptions. However, some attempts have been made to address these limitations and to examine how emotionally charged news media affect risk perceptions. One recent study’s findings suggest that the distinction between informative and entertainment media has become blurrier, and that people’s emotional reactions (i.e., fear) could also have different effects on their personal- and societal-level risk perceptions (Paek et al., 2016). Future research should try to replicate such findings and examine the differential effects of discrete emotions such as shame and anger on risk perceptions and subsequent behaviors.
Finally, research that examines how media types/genres/channels/platforms affect risk perceptions needs to catch up with developments in social media. On a variety of new media platforms, people who were formerly passive receivers of risk information from traditional media have now become active producers and disseminators. During outbreaks of infectious diseases such as Ebola in the United States and MERS in South Korea, communicators on social media such as Twitter played critical roles in the rapid production, sharing, and dissemination of information, but often without much regard for accuracy. In some cases, rumors and misinformation about MERS that were originally disseminated through social media managed to influence the agendas of traditional media and the public. Because of the emerging role of social media in risk and crisis situations, the World Health Organization (WHO) has now issued outbreak communication guidelines that recommend how to handle rumors on social media. However, academic research has not yet fully addressed such issues, with the exception of some studies in public relations and crisis communication (e.g., Schultz, Utz, & Göritz, 2011; Utz, Schultz, & Glocka, 2013). More serious theoretical and empirical efforts should be made to integrate research on social media across disciplines in order to have a better understanding of how source, medium, message, risk/crisis type, and audience characteristics interact to affect the public’s risk perceptions and subsequent behaviors. Such understanding could enhance risk communication by making it more effective in giving people appropriate risk perceptions and motivating them to carry out recommended actions.

References

Brewer, N. T., Chapman, G. B., Gibbons, F. X., & McCaul, K. D. (2007). Meta-analysis of the relationship between risk perception and health behavior: The example of vaccination. Health Psychology, 26(2), 136–145.
Chong, D., & Druckman, J. N. (2007). A theory of framing and opinion formation in competitive elite environments. Journal of Communication, 57, 99–118.
Coleman, C. L. (1993). The influence of mass media and interpersonal communication on societal and personal risk judgments. Communication Research, 20, 611–628.
Dunwoody, S., & Neuwirth, K. (1991). Coming to terms with the impact of communication on scientific and technological risk judgments. In L. Wilkins & P. Patterson (Eds.), Risky business: Communicating issues of science, risk, and public policy (pp. 11–30). New York: Greenwood.
Entman, R. M. (2004). Projections of power: Framing news, public opinion, and U.S. foreign policy. Chicago: University of Chicago Press.
Gainor, D., & Menefee, A. (2006). Avian flu: A media pandemic. Retrieved from http://www.businessandmedia.org/news/2006/news20060308.asp
Hove, T., & Paek, H.-J. (2015). Effects of risk presentation format and fear message on laypeople’s risk perceptions. Journal of Public Relations, 19(1), 162–182.
Hove, T., Paek, H.-J., Yoon, M., & Jwa, B. (2015). How newspapers represent environmental risk: The case of carcinogenic hazards in South Korea. Journal of Risk Research, 18(10), 1320–1336.
Janz, N. K., & Becker, M. H. (1984). The health belief model: A decade later. Health Education Quarterly, 11(1), 1–47.
Johnson, B. B., Sandman, P. M., & Miller, P. M. (1992). Testing the role of technical information in public risk perception. Risk: Issues in Health and Safety, 3, 341–364.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge, U.K.: Cambridge University Press.
Kasperson, R. E., Renn, O., Slovic, P., Brown, H. S., Emel, J., Goble, R., Kasperson, J., & Ratick, S. (1988). The social amplification of risk: A conceptual framework. Risk Analysis, 8(2), 177–187.
Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127(2), 267–286.
Lundgren, L. E., & McMakin, A. H. (2013). Risk communication: A handbook for communicating environmental, safety, and health risks (5th ed.). Piscataway, NJ: IEEE.
McCarthy, M., Brennan, M., Boer, M. D., & Ritson, C. (2008). Media risk communication—what was said by whom and how was it interpreted. Journal of Risk Research, 11(3), 375–394.
Oh, H. J., Hove, T., Paek, H.-J., Lee, B. K., Lee, H., & Song, S. (2012). Attention cycles and the H1N1 pandemic: A cross-national study of U.S. and Korean newspaper coverage. Asian Journal of Communication, 22, 214–232.
Oh, S. H., Paek, H.-J., & Hove, T. (2015). Cognitive and emotional dimensions of perceived risk characteristics, genre-specific media effects, and risk perceptions: The case of H1N1 influenza in South Korea. Asian Journal of Communication, 25, 14–32.
Paek, H.-J. (2014). Risk perceptions. In T. Thompson (Ed.), Encyclopedia of health communication (Vol. 3, pp. 1189–1191). Los Angeles, CA: SAGE.
Paek, H.-J., Hilyard, K., Freimuth, V., Barge, K., & Mindlin, M. (2008). Public support for government actions during a flu pandemic: Lessons learned from a statewide survey. Health Promotion & Practice, 9, 60S–72S.
Paek, H.-J., Oh, S. H., & Hove, T. (2016). How fear-arousing news messages affect risk perceptions and intention to talk about risk. Health Communication.
Peters, R. G., Covello, V. T., & McCallum, D. B. (1997). The determinants of trust and credibility in environmental risk communication: An empirical study. Risk Analysis, 17(1), 43–54.
Powell, M., Dunwoody, S., Griffin, R., & Neuwirth, K. (2007). Exploring lay uncertainty about an environmental health risk. Public Understanding of Science, 16(3), 323–343.
Rimal, R. N., & Real, K. (2003). Perceived risk and efficacy beliefs as motivators of change. Human Communication Research, 29(3), 370–399.
Rogers, R. W. (1983). Cognitive and physiological processes in fear appeals and attitude change: A revised theory of protection motivation. In J. Cacioppo & R. Petty (Eds.), Social psychophysiology. New York: Guilford.
Sandman, P. M. (1989). Hazard versus outrage in the public perception of risk. In V. T. Covello, D. B. McCallum, & M. T. Pavlova (Eds.), Effective risk communication: The role and responsibility of government and nongovernment organizations. New York: Plenum.
Sandman, P. M. (1997). Mass media and environmental risk: Seven principles. In R. Bate (Ed.), What risk? Science, politics and public health (pp. 275–284). Oxford: Butterworth-Heinemann.
Sandman, P. M., Miller, P., Johnson, B. B., & Weinstein, N. D. (1993). Agency communication, community outrage, and perception of risk: Three simulation experiments. Risk Analysis, 13(6), 585–598.
Sandman, P. M., Sachsman, D. B., Greenberg, M. R., & Gochfield, M. (1987). Environmental risk and the press: An exploratory assessment. New Brunswick, NJ: Transaction.
Schultz, F., Utz, S., & Göritz, A. (2011). Is the medium the message? Perceptions of and reactions to crisis communication via Twitter, blogs, and traditional media. Public Relations Review, 37(1), 20–27.
Slovic, P. (1993). Perceived risk, trust, and democracy. Risk Analysis, 13, 675–682.
Slovic, P. (2000). Perception of risk. In P. Slovic (Ed.), The perception of risk (pp. 220–231). Sterling, VA: Earthscan. (Original work published in 1987.)
Slovic, P. (Ed.). (2000). The perception of risk. Sterling, VA: Earthscan.
Slovic, P. (Ed.). (2010). The feeling of risk: New perspectives on risk perception. Sterling, VA: Earthscan.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (2000). Facts and fears: Understanding perceived risk. In P. Slovic (Ed.), The perception of risk (pp. 137–153). Sterling, VA: Earthscan. (Original work published in 1981.)
Snyder, L. B., & Rouse, R. A. (1995). The media can have more than an impersonal impact: The case of AIDS risk perceptions and behavior. Health Communication, 7, 125–145.
So, J., Cho, H., & Lee, J. (2011). Genre-specific media and perceptions of personal and social risk of smoking among South Korean college students. Journal of Health Communication, 16, 533–549.
So, J., & Nabi, R. (2013). Reduction of perceived social distance as an explanation for media’s influence on personal risk perceptions: A test of the risk convergence model. Human Communication Research, 39, 317–338.
Tyler, T. R., & Cook, F. L. (1984). The mass media and judgments of risk: Distinguishing impact on personal and societal level judgments. Journal of Personality and Social Psychology, 47, 693–708.
Utz, S., Schultz, F., & Glocka, S. (2013). Crisis communication online: How medium, crisis type, and emotions affected public reactions in the Fukushima Daiichi nuclear disaster. Public Relations Review, 39, 40–46.
Walaski, P. (2011). Risk and crisis communication: Methods and messages. New York: Wiley.
Wallsten, T. S., Budescu, D. V., Rapoport, A., Zwick, R., & Forsyth, B. (1986). Measuring the vague meanings of probability terms. Journal of Experimental Psychology, 115(4), 348–365.
Wardekker, J. A., Van der Sluijs, J. P., Janssen, P. H. M., Kloprogge, P., & Petersen, A. C. (2008). Uncertainty communication in environmental assessments: Views from the Dutch science policy interface. Environmental Science and Policy, 11(7), 627–641.
Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39, 806–820.
Witte, K. (1992). Putting the fear back into fear appeals: The extended parallel process model. Communication Monographs, 59, 329–349.
Agriculture is the single largest consumer of freshwater and a major driver of surface and groundwater degradation. Water resource challenges include too much rain, too little rain, and high rates of evapotranspiration. Hydrological processes and regimes are projected to change going forward, which will affect the availability, quality, use and management of these resources. Water scarcity is an example of a reduced ecosystem service that affects livelihoods and agricultural production.

Natural resources are central to the productivity of agricultural systems and the livelihoods of those who depend on them. Land degradation can be attributed to the following major causes: inappropriate agricultural practices; unsustainable use of natural resources; limited application of knowledge and technologies by farmers; and insecure land-tenure systems. Land degradation is further accelerated by climate change and by the inability of some land users to adapt. Ecosystem stability, functions and services depend upon soils and land capability.

Agricultural land use and biodiversity conservation have traditionally been viewed as incompatible. However, flora and fauna play a vital role in the optimal functioning of a healthy ecosystem. The role of biodiversity in agriculture is multifunctional, and it plays an essential part in ecosystem services such as pollination and biological control. Structurally complex ecosystems enhance agricultural production, health and resilience to adversity. The climate is a key determinant of which biophysical resources exist in a region, and of their quality and quantity. Projected climate change impacts include changes to rainfall and temperature patterns, carbon dioxide levels and other climatic variables that, if realised, are likely to affect forage, food and fibre yields, animal welfare, and proper ecosystem functioning.

Advances in information and communication systems, services that provide greater efficiencies, and technologies to enhance agroecological systems are vital moving forward. Access to knowledge, goods and services must be available to every farmer in order to make the most of their agricultural business without resource degradation or exhaustion. Traditional practices and indigenous knowledge play an integral role in the planning, design and implementation of local sustainable practices. However, their intrinsic value is often overshadowed by a narrow mindset focused on tangible assets and economic growth. Agrarian communities that embrace culture and heritage can build resilience to change, stabilise communities, and improve agricultural security.

The impact of climate change and land degradation on agriculture has consequences that extend far beyond food supply. The economic health of many countries and their peoples is linked closely to the productivity of their farming communities. Instability of agricultural systems results in greater sensitivity to extreme events (droughts, floods, fires), migration to urban areas and across borders, and political and economic volatility. Access to markets affects the ability of farmers to sell their goods, and the supply of those goods to consumers. Without coordinated and just food systems, efficiency suffers, wastage increases, and food insecurity lingers.
Under the trees, the river flows silently. Meandering in the shade, the water looks dark and cool. As the eyes get used to the dim light, the dense undergrowth can be discerned at the banks. Slowly the canoe drifts along; the banks of the river come to meet, then fall away behind. Now a small overgrown bank appears, dividing the river for a moment, quickly followed by another. The branches of the grey alder trees reach down to the river surface, almost touching it. After a turn the river broadens somewhat, and a small archipelago opens up. The sun glimmers at the centre of the stream. A branch divides off to the right, flows for some 50 meters, then joins the main body of water again. After a while the river narrows again, making a left turn, now flowing as a whole once more.

What causes the branching of the river? A flowing river has a certain capacity to carry sediments with it. Along its route it will, depending on the conditions, carry the sediments on, deposit them, or erode more material to carry away. The motion of the water, particularly its speed, determines the carrying capacity (a rough numerical illustration follows the reading list below). But other factors may affect the flow pattern, and thus the river morphology.

The Austrian forester Viktor Schauberger (1885–1958) pointed out how the relation between the temperatures of air, water and river bank would affect the flow pattern of the river, and ultimately the river morphology. Together with his son, the environmentalist Walter Schauberger (1914–1994), he developed the following image: the shade of the bank vegetation and the roots of the trees cool the river bank. The shade prevents the sunlight from heating the water surface, evaporation from the leaves has a cooling effect, and the roots bring up cool groundwater; the tree thereby acts as a refrigerator for the river bank, preventing the river from heating up excessively.

As the river widens, the cooling effect is lost and the water surface heats up. As the water warms, the flow pattern of the river alters and its ability to carry sediments diminishes, so sand banks and small islands form in the river. Soon grass and small bushes colonize the sand banks, and after a while this small vegetation gives way to the water-loving trees. The shade of the trees and their roots begin to cool the flow, altering the flow patterns and increasing the transport capacity of the flow. The deposition of sediments slows down. A kind of dynamic equilibrium is established between the water flow, the sediment transport, the bank formation and the cooling vegetation, all forming a kind of ecosystem, structurally stable when left on its own. A small archipelago has emerged.

Then, Schauberger observed, man intervenes: cutting down the trees, removing the overgrown sand banks to "clear up" the river for boats, straightening the bends. The cooling effect disappears, the flow patterns are altered, and the sensitive dynamic equilibrium of the ecosystem is destroyed. The river starts silting up, giving the dredgers endless work.

The following lecture by Walter Schauberger summarizes his view on the co-operation between river flow and vegetation and its inherent self-stabilization:
- Schauberger, Walter. The destruction of water. Lecture at Neviges, 1961. Reprinted (in Swedish translation) in: Schauberger, Walter & Alexandersson, Olof (Ed.), Kompendium i Implosionsteknik, Institutet för Ekologisk Teknik, Linköping, 1986; see particularly pp. 22–24.
Olof Alexandersson's biography of Viktor Schauberger gives an introduction to his perspective on water management:
- Olof Alexandersson, Living Water: Viktor Schauberger and the Secrets of Natural Energy, Gateway, 2002. English translation from the Swedish original. Some chapters in the English edition are dated, but the chapter on river management gives an essentially correct view of Schauberger's perspective. There is also a more recent (expanded and updated) German edition, as well as French, Spanish, Czech and Greek editions.

The Institute of Ecological Technology has an ongoing research programme on areas related to Viktor Schauberger; see particularly the area Alternative water flow:
- Institute of Ecological Technology (research overview)

The Schauberger Family Trust maintains an archive with many publications by Viktor Schauberger and Walter Schauberger.
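Schauberger's argument above is qualitative, but the extreme sensitivity of sediment transport to flow speed is a standard observation in river hydraulics, and a rough numerical sketch makes it concrete. The snippet below uses the textbook "sixth-power law" of stream competence as an illustrative assumption – it is not Schauberger's own model, and the reference velocity and exponent are arbitrary choices:

```python
# Rough illustration only: the classic "sixth-power law" of stream competence,
# not Schauberger's model. The heaviest particle a stream can move scales
# roughly with the sixth power of flow velocity, so even a small slowdown
# sharply cuts carrying capacity and sediment begins to settle out.

def relative_capacity(v: float, v_ref: float = 1.0, exponent: float = 6.0) -> float:
    """Carrying capacity relative to a reference flow velocity (dimensionless)."""
    return (v / v_ref) ** exponent

for v in (1.0, 0.9, 0.8, 0.5):
    print(f"velocity {v:.1f} m/s -> {relative_capacity(v):.1%} of reference capacity")
```

A flow that slows by just 20% keeps only about a quarter of its carrying capacity, which is why, in the picture sketched above, a warming or widening river drops its sediment so quickly and bars begin to form.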
- Introductory Voyage – Walk through the interior of a Spanish ship and learn why colonists would risk their lives traveling 4,000 miles to La Florida. Experience life aboard – hear the sounds of arrival; feel the weight of colonial cannonballs; see real colonial gold and silver. - Life in the New Colony – Meet actual residents of the First Colony and view precious artifacts that reveal daily life and relationships with the native Timucua. Fly into the settlement, recreated in 3D, to interact with the residents and learn about their personal experiences. Sit down at the First Thanksgiving table to find out who attended and what was served. - How Do We Know? – Explore the science of archaeology and discover the First Colony through numerous artifacts unearthed by archaeologists. Become an archaeologist by using the hands-on multimedia centerpiece to excavate a site, discover and collect artifacts, and reveal the history they disclose. - Convergence of Cultures – Put your “street smarts” to the test! Learn about town planning and build your own town in an interactive game based on Spanish law. Stroll through a colonial streetscape and explore households and daily activities, from religious practices to work lives to leisure. See how the military defined the settlement and how life on its frontiers evolved. Find out why we don’t speak Spanish today! - Where Are We Now? – Globalize your world view with stories of cultural blending today. Plot your family origins on an interactive map; create a multimedia collage that reflects your own cultural background; see how the life of a modern American woman mirrors that of a woman from the First Colony. Our lives may be more similar than you think.
Tuesday, February 21, 2006
Digital Ethics #8
(Note: This article is part of a series I started on my BrettStuff website, so if you want to read the previous articles, pay it a visit.)
Theft has existed since organisms began competing for resources. In fact, stealing is an evolutionary advantage for individual creatures – why go to the trouble of finding, killing or growing stuff when you can filch the result from some other hard-working idiot? However, theft is destructive to societies. For this reason, communities of all eras and cultures have instituted heavy penalties for all forms of stealing.
What constitutes stealing is also an ancient issue, and as societies developed, the definition of theft was refined and extended. However, the arena of 'intellectual property' has always been difficult to define.
Two (linked) historical developments made the concept of intellectual property a significant issue. The expansion of trade between European cities and the appearance of non-church universities in the 12th and 13th centuries produced a literate, educated group of people interested in accumulating and exchanging information. This typically took the form of handwritten manuscripts, and a new trade of 'stationers' emerged, who would, for a fee, produce handwritten copies of your publication. The largest patrons of stationers were libraries (you paid to visit a library in those days), eager to stock the widest range of contemporary texts.
Hand-copying of books was slow and costly. The man most often credited with changing this is Johannes Gutenberg (1398–1468), a metalworker and inventor who devised a method for casting individual metal letters, setting them in blocks, applying a thin film of ink to their surface, and transferring this ink to sheets of paper. Gutenberg didn't invent printing – China and East Asia had libraries with thousands of books printed from hand-carved wooden blocks as early as the 12th century – but he did refine and commercialise the process.
Gutenberg's presses led to a boom in the production of texts in Europe. His most famous work, the Gutenberg Bible, sold for the 'bargain' price of 300 florins a copy. This was still the equivalent of three years' wages for an average citizen, but a lot cheaper than a handwritten Bible, which could take a scribe as long as 20 years to produce!
Printing revolutionised the distribution of knowledge by making it possible to produce a large number of copies of a single work at a reasonable cost in a relatively short time. In fact, one of the (not very snappy) names used for printing at the time was 'the art of multiplying books'. The process spread to other German cities by the 1450s, to Italy in the 1460s, and then to France and the rest of Europe.
The rapid spread of knowledge made possible by Gutenberg's printing press contributed to the Renaissance, the Scientific Revolution, and the Protestant Reformation. For this reason, Gutenberg was chosen by Time Magazine as the 'Man of the Millennium'.
In the next installment we'll look at some of the consequences of Gutenberg's new toy.
Protists and their role in the tree of life
Protists are eukaryotes that may be either single-celled or multicellular. They are essentially any organism that has a nuclear envelope around its DNA but is not classified as a plant, animal, or fungus. As a result, protists come in many different forms. Eukaryotes arose through many layers of innovation, which can be seen in the diversity of protists; this includes the "absorption" of mitochondria or chloroplasts, as proposed by the endosymbiotic theory of evolution, or it may be due to random chance. While most plants and animals are terrestrial, many protists are found in aquatic ecosystems, making them important in the web of life.
Protists are a diverse group of organisms, ranging from photosynthetic forms to parasites such as Toxoplasma gondii. Either way, they make up a large share of the organisms in the world around us, and as such should not go unnoticed.
Beech bark disease
Beech bark disease (BBD) is caused by the combined actions of an insect, the beech scale (Cryptococcus fagisuga), and a fungus (Neonectria faginata). The insect feeds on the tree, creating holes in the bark that become entry points for the fungus and, by stressing the tree, decreasing its resistance to the subsequent fungal infection. The beech scale, probably along with the fungus, arrived in Nova Scotia from Europe around 1890 on infested beech seedlings. The disease was first noticed in Halifax in 1920 and by the early 1930s was found throughout the Maritime Provinces. It has since moved southwestward and was first detected in Quebec in 1965 and in Ontario in 1999.
Beech scale is a tiny insect (up to 1 mm long) that feeds only on beech tree sap. There are only female scale insects, which reproduce parthenogenetically (i.e., the female reproduces without mating). The adult scale lays her eggs on the trunk of the tree in mid-summer, and nymphs (also called crawlers) hatch from the eggs later that summer or fall. Each nymph crawls to find a place to feed and inserts its stylet mouthparts into the bark to suck sap from the inner bark of the tree. Once the scale begins feeding, it becomes immobile and eventually secretes a distinctive woolly white covering (see photo in sidebar). The nymph becomes an adult the following spring. Scale insects are spread by wind, by animals, or with infested wood.
The scale itself does not cause BBD, but it weakens the tree and reduces its ability to resist the fungus. Within two or more years (up to 10) of infestation by the scale insect, fungal spores are carried to the tree bark by rain or wind, and they enter the tree through the feeding punctures made by the scale insect. The fungus grows in the tissue under the tree's bark (the phloem and cambium), killing this tissue. The disease stresses the tree, reducing its growth, making it more susceptible to other pests and pathogens, and sometimes deforming the tree with multiple cankers. Over time the disease can destroy the inner bark around the circumference of the tree, girdling and killing it; this may take many years or even decades. Beech trees killed by BBD are more susceptible to decay fungi and insects, become fragile, and can break in high wind ("beech snap").
Trees at risk
As the name suggests, BBD affects beech trees; both American beech (Fagus grandifolia) and European beech (Fagus sylvatica) are vulnerable here in Canada. The greatest impact of the disease is on larger trees – those 25 cm or more in diameter at breast height (dbh) – but it can kill trees as small as 10 cm dbh. Some beech trees (about 1% of them) are resistant to the scale and the fungus, and others are partly resistant. Beech bark disease is now found throughout most of the natural range of beech in Canada.
When trees are infested with beech scale, the bark looks as if it has woolly material on it. This is most often seen on rough areas of bark, near branch stubs, or under larger branches. As the infestation progresses, these white patches become more extensive and can cover most of the tree's trunk. When a tree is infected with the fungus, cankers are usually seen on the lower part of the trunk. The fruiting bodies of the fungus can also be found on the cankers, though they may be difficult to see when infection is light. Initially, small whitish patches appear on the canker; these are followed by more conspicuous red fruiting bodies (see sidebar).
The tree canopy may show signs of decline or dieback, especially in the branches immediately above patches of dead bark.
What you can do
Since BBD is widespread, the approach is to manage it. If you have beech trees in your woodlot, read about management recommendations for beech bark disease.
Yesterday during a writing lecture, the presenter emphasized the act of examination as students study text and respond in writing. I like the idea of introducing students to the process of examination in a deliberate way, and I'll do that today.
First, we'll discuss the word: What does it mean to examine? Examine: to inspect in detail, to investigate thoroughly. Then I'll ask, "What tools will help us to examine math concepts?" Once they offer their thoughts, I'll say, "Today we're going to examine the 'behavior' of place value again as we watch another SCRATCH animated math model. Let's see what we notice today. Watch carefully. Jot down notes and questions that you have. Also write down ideas about how I could have made this animated model better, and if needed, use the calculator to check my work or try out an idea you have as you watch the animation."
Then we'll share ideas. If time permits, we'll examine the concept further by writing the number in the film in base-ten numeral form (standard form), expanded form, word form, expanded notation, and scientific notation. We'll also use the calculators to examine how the decimal point moves in this number when we multiply by 10, 100, 1000, 1/10, 1/100, and 1/1000. Later in the day, students will have a chance to apply their learning as they create their own place value movies, complete metric number lines, and practice skills using Khan Academy.
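For anyone who wants to check the decimal-point pattern outside of class, here is a minimal Python sketch; the number 5306 and the expanded_form helper are my own illustrative choices, not part of the lesson or the SCRATCH model:

```python
def expanded_form(n: int) -> str:
    """Write an integer as a sum of each nonzero digit times its place value."""
    s = str(n)
    return " + ".join(f"{d}*{10 ** (len(s) - i - 1)}"
                      for i, d in enumerate(s) if d != "0")

n = 5306
print("standard form:  ", n)                 # 5306
print("expanded form:  ", expanded_form(n))  # 5*1000 + 3*100 + 6*1
print("scientific form:", f"{n:.3e}")        # 5.306e+03

# Multiplying by 10, 100, 1000 moves the decimal point right;
# multiplying by 1/10, 1/100, 1/1000 (i.e. dividing) moves it left.
for power in (1, 2, 3):
    print(f"{n} x 10^{power}  = {n * 10**power}")
    print(f"{n} x 10^-{power} = {n / 10**power}")
```

The same exercise works for any number students pick, so they can compare their handwritten expanded and scientific forms against the program's output.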
Meningitis – what is it? The brain and the spinal cord are covered by membranes called the meninges, and inflammation of these membranes is called meningitis. It is a very severe disease with high mortality in children of early age.
The clinical signs arising from involvement of the meninges (whether of inflammatory or non-inflammatory origin) are referred to as the meningeal syndrome. Its most frequent signs are:
- Headache – in children of early age it shows as a monotonous cry, i.e. a cry monotonous in sound.
- Nausea and vomiting – in small children, protrusion and pulsation of the anterior fontanelle is a very significant sign for the pediatrician.
- General hyperesthesia – a normally painless touch of the child's skin makes the child anxious, crying and shouting.
- Rigidity of the occipital (neck) muscles – the doctor cannot bend the patient's head forward. To elicit this sign, one hand is placed on the chest and the other behind the head; the chest is then pressed downwards while the head is raised, and the force of resistance is assessed.
- Meningeal posture – the child lies on one side with the head thrown back and the legs drawn up to the abdomen.
- Kernig's sign – if the Kernig reflex is still present in a child after 4 months of age, it is a sign of pathology.
- Brudzinski's signs (named for the Polish pediatrician Józef Brudziński):
- Upper – the doctor bends the patient's head forward, and the legs flex spontaneously at the knee and hip joints.
- Middle – in response to pressure above the pubis, the lower limbs flex as described above.
- Lower – in response to the doctor flexing one leg at the knee and hip joint, the patient flexes the other leg at the knee and hip joints.
- Zygomatic – in response to pressure on the cheekbone, the child raises the shoulders and flexes the arms at the elbow joints (characteristic of tuberculous meningitis).
- Lesage's sign – when the child is lifted by holding him or her under the arms, the legs are drawn up towards the abdomen.
The dissociation of the meningeal syndrome – meaning that while one sign is present, others may be absent – is characteristic of meningioma.
The Magna Carta was signed on 15 June 1215 between the barons of medieval England and King John. "Magna Carta" is Latin and means "Great Charter". The Magna Carta is an important piece of English history in which the rights of individuals are protected against the power of the King or Queen. It was signed at Runnymede, on the banks of the River Thames, near Windsor Castle.
Magna Carta Memorial at Runnymede near Windsor
The document was a series of written promises between the king and his subjects that he, the king, would govern England and deal with its people according to the customs of feudal law. It was a last-ditch attempt to stop a civil war. The Magna Carta is a significant document in the evolution of civil rights and is often considered the first document of human freedom. It placed England on the road to a democratic state and introduced lawyers in England to the concept of human rights as we know it now. Clause 39 still resonates today as one of the most powerful sentences in history:
"No free man shall be seized or imprisoned, or stripped of his rights or possessions, or outlawed or exiled, or deprived of his standing in any other way, nor will we proceed with force against him, or send others to do so, except by the lawful judgment of his equals or by the law of the land."
King John made himself very unpopular during his reign with his constant demands for money. The leading barons tried to impose limits on his powers by drawing up the Magna Carta after they captured London during a revolt against John's tax policies and his conduct in general. King John, however, found the terms of the Magna Carta unacceptable. He signed the document only to keep peace with the rebel barons – to buy time – and did not keep to what he had agreed. Civil war broke out in England.