The word "tropics" derives from the Tropic of Cancer and the Tropic of Capricorn, the parallels at a latitude of about 23½° that mark the outer limit of the areas where the sun can ever be at the zenith. The tropics are the regions of the Earth around the equator, varying in width from about 40° to 60° of latitude; these areas lie between the Tropic of Cancer in the Northern Hemisphere and the Tropic of Capricorn in the Southern Hemisphere. These are the places on Earth where the sun reaches a point directly overhead at least once a year. The tropics are also called the tropical zone and the torrid zone. It is generally understood that the tropical areas lie mainly between these two lines (the Tropics of Cancer and Capricorn), and they are therefore the region of low latitudes; but the outer limits of the low latitudes are not easily recognized, because the Tropics of Cancer and Capricorn themselves are unsuitable as boundaries. They are too rigid: some regions with clearly tropical characteristics extend beyond 23½°, while, on the other hand, some clearly non-tropical areas are found much closer to the equator. The best way to determine the outer limits of the tropical areas is therefore to use common characteristics which distinguish these regions from the rest of the world. As latitude is the main factor controlling climatic conditions, the most important of these common features are those of climate. Other typical characteristics of the low latitudes, such as those of vegetation, soils, agriculture and economic development, are all directly or indirectly related to their common climatic conditions. The tropics receive the most direct sunlight throughout the year, which is believed to be favourable to plant and animal life, provided there is adequate moisture or precipitation. One of the problems with the tropics is that the soils are usually of poor quality, with many of the nutrients leached out.
In economic terms, most tropical regions belong to the group of developing nations, which are characterized by low standards of living, a strong concentration on agriculture and a predominance of production of raw materials rather than industrial products. The poverty of the tropical countries is illustrated by the fact that more than two-thirds of them have a Gross National Product below the world median of $310 per capita. Tropical agriculture is of course largely controlled by climatic conditions. It is mainly devoted to the growing of food crops such as rice, maize, cassava, coconuts, bananas, groundnuts, sorghum, palm oil and cocoa. Most of these crops are produced on a subsistence basis. Politically, the tropical countries are much more important than they are economically. This is not solely the result of their numbers (over 50 countries represented in the United Nations are situated in the tropics); their political significance is also the consequence of their common policy: their united stand is frequently independent of the big power blocs. This attitude is largely due to a common history of colonial domination. One of the main origins of colonialism was the desire of the European countries to control the production and trade of tropical agricultural products. Colonial rule was therefore heavily concentrated in the tropics. Without any doubt, the most important common climatic feature of the tropics is the absence of a cold season; this is often expressed in the old phrase "where winter never comes". The tropics cannot, however, be defined by this lack of a winter season with a simple temperature limit, because in the tropics climatic conditions vary gradually over long distances and, moreover, vary a great deal from year to year. The tropics are often assumed to include only regions where sufficient rain is received to carry out most forms of crop production without any form of irrigation.
Tropical regions experience only two seasons in a year, a "hot" season and a "wet" season, especially where the seasons are governed by monsoons. According to Köppen's classification of climate, for instance, the mean temperature of the coldest month of the year must be at least 18 °C. This criterion would exclude the tropical highlands, where temperatures frequently remain well below this limit; yet these areas are truly tropical because they experience no winter. They can easily be included in the tropics by using not the actual temperatures but temperatures reduced to sea level. Countries in the Tropics: Some of the countries in the tropics include Mexico, Malaysia, the Philippines, Singapore, India, Thailand, Nigeria, Kenya, Ghana, Zambia, Burkina Faso, Côte d'Ivoire, Bolivia, Costa Rica, Brazil, The Gambia, the Bahamas, Guinea, Liberia, Mali, Mauritania, Niger, etc. Dr. Brown, the founder of Jotscroll, is a medical doctor, entrepreneur and author. Dr. Razi Brown holds a medical degree from the University of San Diego. He has invested in many startups and is currently working on his fifth book, to be published in the upcoming year.
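The highland adjustment described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original article: the function names are mine, and the 6.5 °C per 1000 m lapse rate is an assumed standard average used to reduce an observed temperature to its sea-level equivalent before applying the 18 °C Köppen threshold.

```python
# Sketch of the Köppen tropical criterion: the coldest-month mean
# temperature, reduced to sea level, must be at least 18 °C.
# Assumption: an average environmental lapse rate of 6.5 °C per 1000 m.

LAPSE_RATE_C_PER_M = 6.5 / 1000.0  # assumed average lapse rate

def sea_level_temperature(observed_c: float, elevation_m: float) -> float:
    """Reduce an observed temperature to its sea-level equivalent."""
    return observed_c + LAPSE_RATE_C_PER_M * elevation_m

def is_tropical(coldest_month_mean_c: float, elevation_m: float = 0.0) -> bool:
    """Köppen test: coldest-month mean (reduced to sea level) >= 18 °C."""
    return sea_level_temperature(coldest_month_mean_c, elevation_m) >= 18.0

# A lowland station with a 19 °C coldest month qualifies directly:
print(is_tropical(19.0))            # True
# A hypothetical highland station: 8 °C coldest month at 2500 m elevation
# reduces to roughly 24 °C at sea level, so it also qualifies:
print(is_tropical(8.0, 2500.0))     # True
```

With actual temperatures alone, the highland station would wrongly be excluded; the sea-level reduction is what keeps winterless highlands inside the tropics.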
Students are about to begin a new unit on Japan. They have been gathering fuel to travel on a plane to Japan by reading at home each night. The hours they read convert to 10 miles each, allowing each child to help read to Japan. It usually takes the class from September to February to have enough miles to arrive in Tokyo. As our plane has now arrived, I want to introduce students to Japan. I tell them that Japan is made up of many small islands. I show them the numerical area of Japan in square miles, and the numerical area of the United States in square miles. I ask them to look at these two very large numbers and decide which is larger. Because of the place value difference, students are able to see that the US is larger in area. I hold up 4 rulers to show what a square foot is and tell students that if we could make Japan a square and measure it in miles it would be 145,882 square miles in size, meaning that inside my square (I point to my rulers) there would be 145,882 squares. I remind students that my little square is only a square foot, but if we were to get in the bus and drive to the beach and then to Coastal Ridge School and then to Stonewall Kitchen and back to school (each leg of the trip about a mile - you can do this with landmarks about a mile from your school) that would be a total of 1 square mile. If we made our country square it would be a total of 3,790,000 square miles inside the square. I ask students how they know which is larger. (Again, students use their place value understanding to identify that 3 million is way bigger than a number in the thousands.) Next I say that Japan is smaller. Can you think of a tool (MP5) we could use to figure out how much smaller Japan is than the US? (calculator). I ask students to take out their calculators and I ask which number would be put in first? (3,790,000) We do this together. Now do we add or subtract to find out how much smaller? (subtract) Ok so hit the - key. Now can you type in Japan's size?
Don't forget to hit equal. And what do you get? (Students just had to make sense of very large numbers together to solve the problem (MP1).) Next I tell students that Japan is very crowded. There are 127,600,000 people in Japan and 313,900,000 people in the US. Which has more? (The US.) I draw a picture of the US as a square and put a large person in it. Next I draw a small square to be Japan and I put a person that is almost as big as the US person in the little space. I explain that Japan has more people for each square mile than the US. (This is an introduction, so I am just giving students a quick visual of how the size and population of the US and Japan are not evenly proportional.) Students were working with very large numbers today. I did not expect them to manipulate such large numbers on their own. We worked together, but I did want them to see that large numbers can refer to real things. I am hoping students will gain more insight into place value with larger numbers during this introductory lesson. I show students some introductory pictures of Japan and tell them that if they had just arrived by plane, they would first need to get some money, because Japan uses the yen and not the dollar. I tell them that then they might travel by very fast train, the bullet train, around Japan. I tell them that they will work with partners to figure out their money and how far they can travel. I give students a paper with 3 questions on it. I tell them they may work with one other person for 5 minutes and may use tools from their math suitcases to help them. I tell them that if they are going to use the calculator tool, they will need to first write the equation they will type into the calculator and share it with me, and then they could use that tool, if it is the one they select. The problems are: 18 children travel to Japan. They each have $20.00 to spend. At the bank they trade their dollars for yen. Each dollar is worth 98 yen.
How much money will each student have in yen to spend in Japan? The bullet train (shinkansen) can go 320 km/hour. How far could you go in 2 hours? How far could you go in 4 hours? If you want to travel to Mt. Fuji from Tokyo, it is 100 km. About how long will it take to get there if you travel on the bullet train? I walk around to help partners who are struggling with the problems. At the end of 10 minutes, I suggest that students combine with another group and share their work so far. At the end of another 10 minutes I call the group together to share how they solved the problems. I bring students together to share their strategies for solving these complex tasks. We look at several different strategies and talk about why they did or did not work. I want students to walk away with a sense of pride in their attempts to solve these difficult problems, rather than feeling bad if they did not get the correct answer. I reinforce how hard these problems were, and how proud I was of everyone for trying to figure them out.
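For a teacher checking answers ahead of time, the arithmetic from this lesson can be worked through in a short script. All figures come from the lesson text; only the variable names are mine.

```python
# Area comparison from the introduction: how much smaller is Japan?
us_sq_miles = 3_790_000
japan_sq_miles = 145_882
print(us_sq_miles - japan_sq_miles)          # 3644118 square miles smaller

# Problem 1: each student trades $20 for yen at 98 yen per dollar.
dollars_per_student = 20
yen_per_dollar = 98
print(dollars_per_student * yen_per_dollar)  # 1960 yen each

# Problem 2: the bullet train travels 320 km/hour.
train_speed_km_per_hour = 320
print(train_speed_km_per_hour * 2)           # 640 km in 2 hours
print(train_speed_km_per_hour * 4)           # 1280 km in 4 hours

# Problem 3: Tokyo to Mt. Fuji is 100 km. Time = distance / speed.
tokyo_to_fuji_km = 100
hours = tokyo_to_fuji_km / train_speed_km_per_hour
print(round(hours * 60))                     # about 19 minutes
```

The Mt. Fuji problem is the trickiest for students because the answer is a fraction of an hour; converting to minutes makes "about how long" concrete.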
Lee has shared the work of his group below: PDC group 4 has looked at the research surrounding the field of meta-cognition. Meta-cognition is a process where learners think deeply about their own thought processes when approaching a task. This enables students to think about the skills they already have, what skills they need to develop, how they can access help and support, and where the new knowledge links back to previous knowledge. A number of systematic reviews and meta-analyses have consistently found that meta-cognition has a high level of impact on students' learning, with pupils making an average of eight months' additional progress. In the PDC group, 2 techniques were trialled: - Questions to promote meta-cognition. The group members had a list of nine questions to ask: three questions before, three during and three after the completion of a task. The teachers used different methods for collecting the answers: verbally, in writing, or as a learning diary on Showbie. All teachers fed back positive results, stating that the exercise: - Reminded students to think about relevant skills learnt in other subjects - Gave students more options of where they could access help - Enabled the students to work more independently - Enabled the students to see links where the work could be applied to other subjects - The KWL (Know, Want to know, have Learnt) grid. At the start of a topic, students filled in the grid with what they already knew about the topic and what they wanted to know. The teacher used this information to structure the learning and advised that it engaged the students and promoted deeper thinking around the subject. The teacher also advised that understanding what the students wanted to learn allowed them to reshape the delivery of content to address the students' questions about the topic, helping to engage the students and allowing them to link the material in a manner suited to the students' developing knowledge.
All members of the PDC group felt that they had benefitted from trialling the techniques to improve meta-cognition and would continue to use and experiment with these techniques.
Scorpions are ancient land-dwelling arthropods, cousins of spiders, ticks and mites, that have been around for millions of years. There are 1,400 species of these venomous creatures found throughout the world, and about 70 can be found in the United States. Most scorpions in the United States, with the exception of the Arizona bark scorpion or sculptured scorpion, have a sting that is no more painful than a bee sting. If you get stung by a scorpion, clean the area with soap and water, apply ice and elevate. Some people can have an allergic reaction to a scorpion sting, so seek medical attention immediately if you experience symptoms like vomiting, twitching and an accelerated heart rate. Surprisingly, scorpions can be found in a wide range of habitats, from grasslands and deserts to rainforests and caves. Most scorpions are nocturnal and hunt primarily at night. They eat insects, spiders and even other scorpions. Larger species will sometimes eat snakes, mice and lizards. Scorpions like to hide and ambush prey when it comes within reach. Scorpions are mostly solitary creatures that prefer to live and hunt in specific territories. During the day, they seek shelter in cool, moist areas in burrows, under rocks or behind tree bark. Scorpions are most active when temperatures are above 70 degrees Fahrenheit at night. Predators of scorpions include owls, snakes, lizards, bats and tarantulas. Scorpions reproduce using a special "mating dance". Depending on the species, females can be pregnant anywhere from several months to more than a year. The mother has a live birth and her tiny white offspring quickly scramble onto her back for protection. Females usually give birth to 25 to 35 young. After their first molt, and about one to three weeks after being born, the young will leave their mother to live on their own. Scorpions typically live between three and five years, but some species have been known to live for 25 years.
Scorpions become pests when their natural habitat is changed or damaged by things like flooding, construction or logging. Displaced scorpions may then seek shelter and food in houses. Don't be alarmed if you find a scorpion in your home; they are solitary creatures, so there usually aren't more than one or two. Types of Scorpions Scorpions have eight legs, a pair of pincers and a long segmented tail that ends with a barb-like stinger. On a scorpion's underbelly, there is a pair of sense organs that brush the ground like a comb in order to read the surface texture and vibrations. Scorpions found in the United States usually range from 2 to 4 inches long, although Giant Hairy Desert Scorpions can get as long as 5 inches. There are a few look-alikes out there that can be confused with scorpions. These include pseudoscorpions, solifuges or wind scorpions, and whip scorpions. Scorpions In The Home Scorpions only occasionally invade homes, when outdoor conditions make it difficult for them to find food and water. There are a variety of things you can do to prevent scorpions from getting into your home. Remove logs, rocks and other debris from around the perimeter of your house that scorpions could use for shelter. Regularly mow your lawn and trim bushes and tree branches to make it more challenging for scorpions to reach your home. Store your garbage cans and firewood off the ground and away from the house. Check that window and door screens are in good condition. Install weather-stripping and door sweeps. Seal cracks in the foundation, roof eaves and where pipes enter the house. Frequently Asked Questions How do you treat a scorpion sting? Each scorpion species has its own venom "recipe" that is a mixture of neurotoxins and other chemicals. It is important to know that while all scorpions have the ability to sting, running and hiding are their first reactions when they feel threatened. Most people have the same reaction to a scorpion sting as they would to a bee sting.
In the United States, only the Arizona bark scorpion has venom that is strong enough to be considered dangerous to people. Reactions to a sting can vary depending on the type of scorpion, the age of the person and whether the person has an allergic reaction. If you get stung by a scorpion, wash the sting area with soap and water, apply a cold compress and elevate the area to prevent swelling. If your symptoms get worse, you may be having an allergic reaction and should seek emergency medical help. Benefits of Professional Scorpion Pest Control A pest management professional can reduce areas where scorpions like to hide and treat for other insect pests that may be enticing scorpions to come inside. They will also capture and remove any scorpions found in your home during their visit. Pest management professionals have the skills and tools necessary to find and eliminate scorpions from your home, and to prevent them from coming back.
Do you own a valuable painting? Perhaps it is what you do not see which makes it valuable. X-ray use has become a common practice among art authenticators. Not only does it unlock secrets underneath paintings, but it helps to establish authenticity. Types of paper, materials, preparatory sketches, changes to the composition, and other clues can be discovered through the use of an x-ray to prove the nature and origin of a painting. X-rays can also be used to detect traces of minerals and other elements within the paint. These traces can be clues to when the painting was executed and where. For example, an x-ray of Vermeer's "Girl with a Pearl Earring" reveals that there were traces of lead in the paint that he used. [Images: Vermeer, Girl with a Pearl Earring; x-ray of Girl with a Pearl Earring] During Vermeer's day, lead was a primary component in white paint. The brighter areas on the x-ray show where Vermeer used white, creating the luminous glow that this picture has become famous for. Even though this is unmistakably a Vermeer, this specific applied technique confirms the painting was produced at the time when lead was in use. Another example where x-ray research was used on a famous painting was Pablo Picasso's "The Old Guitarist" (1903). Through x-ray research, it was revealed that this painting initially started as an old woman with her head bent over. X-rays also reveal a cow's head in the top right corner. While ultraviolet examinations can be done in-house, heavy-duty x-ray photography must be done at the laboratory level. Art authenticators have been using x-rays to identify and authenticate paintings for more than 100 years. The first documented use of x-rays in art authentication was in 1896 in Frankfurt, Germany. So how do x-rays work in the art authentication world? It's actually a simple process.
If you have ever had an x-ray performed at the doctor's office, you already have a basic understanding of how they work; x-rays are used to see different layers in your body that can't be seen by the naked eye. There are two types of x-rays used in art authentication: stereoradiography, which operates the same way as medical x-rays, and autoradiography, which uses beta particles. Each type of x-ray can show different things in a painting that would otherwise not be seen. Like at your doctor's office, x-rays can see through different layers. But instead of flesh, these x-rays see different layers of paint. X-rays can show where touch-ups have been made, or where places were painted over. [Image: x-ray of Old Man, painting by Mildred Peel, oil on canvas, 1904] In order to create a new picture of the layers of the painting, the rays pass through the painting and create a negative of the darker areas on film. Think of it as reversed photography. After the rays are passed through the painting, old layers of paint can be seen and the investigation can begin. Is this consistent with the known preparation and painting method of the artist? Are the hidden compositions similar to the style that the artist used? In order to find these "hidden paintings", the x-ray technician will apply a certain amount of kilovoltage. The kilovoltage is basically a measurement of how intense or weak the x-ray beam is. The more kilovoltage is applied, the more it reveals of the paintings underneath. It can be compared to changing the contrast on a television set when it goes from white to black. The more kilovoltage that is used, the better you can see the picture underneath. It has been said that kilovoltage is used by the radiographer to "paint" the picture (Graham and Eddie). Time is also an element that radiographers use to make x-ray exposures. In the same way that you can under- or over-expose film in a camera, the same can be said for x-rays.
Radiographers use a short x-ray exposure to show the deepest layers. The longer the exposure, the shallower the x-ray will be. Generally, art laboratories use a series of "soft" x-rays known as grenz rays. These wavelengths are long, but less intense, and are ideal for art authenticators. To produce these grenz rays, and to use them in a way that is convenient for authenticators, a machine like the Gilardoni Radiolite x-ray machine is typically used. Radiographers must stand very far away when conducting x-rays with this machine, due to the possible radiation exposure. Amazing things can be seen with the x-ray technique. For example, the painting "A Spanish Grandee" by El Greco shows that underneath the painting of this aristocrat is a layer showing a portion of a still life. [Images: El Greco, A Spanish Grandee; x-ray scan of A Spanish Grandee] The same can be said of the painting "Tobias and the Angel" by Rembrandt. What we see as a landscape was originally a portrait of a man. [Images: Rembrandt, Tobias and the Angel; x-ray scan of Tobias and the Angel] Through a combination of Morellian analysis, documentary research and x-ray examinations, authenticators can determine if a painting is the genuine article. For example, it is well known that most artists would recycle their canvases. Painting over a rejected picture was a common practice. Red flags would go up for an art authenticator if there were no sketches, modifications (pentimenti), or anything at all below the surface of a painting. Some painters, though, did not prepare a sketch. In general, however, a perfect composition may indicate that the painting is a duplicate or a copy. X-ray comparison of "A View of Pickersgill Harbour, Dusky Bay" by Hodges shows a completely different landscape underneath: the original is of icebergs, and the surface painting is of a different climate. An educated authenticator with a very trained eye can distinguish styles and methods with the use of x-rays.
They are especially useful for examining paintings on panel (wood). From x-rays and other forensic technology to systematic comparative analysis, to archival research, we use all the tools and methods available to authenticate paintings. We are now offering Mobile X-ray Examinations at your home or office location in the following areas: NY, NJ, CT, PA.
Another collaboration has blasted off between NASA and TERN that's set to dramatically improve global climate monitoring. NASA's ECOSTRESS (Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station) mission to the International Space Station launched from Cape Canaveral last week, providing critical climate data to scientists and helping them better understand how crops, the biosphere and the global carbon cycle respond to water availability and drought. Dr Joshua Fisher, Science Lead for the ECOSTRESS mission from NASA's Jet Propulsion Laboratory (JPL), is thrilled to have access to data collected and made openly available by TERN. "This is an exciting new data sharing collaboration between NASA and TERN, which will lead to a better understanding of water stress and water use by plants from different biomes, and the implications for agricultural and natural ecosystems," Dr Fisher said. "ECOSTRESS will do this by measuring plant temperatures in various locations at different times of day, using a multispectral thermal infrared radiometer on the International Space Station." The temperature images of Earth's surface from ECOSTRESS will be the most detailed ever acquired from space and will make it possible to measure the temperature of individual farm paddocks. Meanwhile, in Australia, TERN will be collecting key southern hemisphere data at sites in a wide range of major Australian biomes, such as Tasmania's eucalypt forests or Western Australia's woodlands, which will then be used by NASA to verify the validity of data collected from the mission. Specifically, NASA will use data on the exchanges (fluxes) of carbon, energy and water between the land and the atmosphere collected by TERN's nation-wide network of instrumented OzFlux towers. TERN is also supplying weather data and soil moisture data collected at the sites.
NASA's collaboration with TERN on the ECOSTRESS mission follows the success of TERN's role in NASA's Soil Moisture Active Passive (SMAP) mission to map global soil moisture. TERN Director Dr Beryl Morris believes this collaboration and the now-flowing data streams present an incredible opportunity for Australian scientists. "Australian science will directly benefit from these new NASA data, and our NCRIS funders and hosts, the University of Queensland, are at the very heart of it," Dr Morris said. "Access to these data via TERN will enable Australian researchers to undertake and deliver useful analyses for the Australian continent that wouldn't be possible without the new NASA data stream." "TERN's federally funded NCRIS infrastructure, backed by our host and partner institutions, is ensuring that our most advanced, space-based climate monitoring tools stack up, allowing us to respond to ever-evolving environmental threats, including climate impacts, food security and species loss."
Things to Review: 2nds and 3rds; Two Eighth Notes. We chose this student favorite to introduce eighth notes for a practical reason. When you play and count "1 and 2 and", you will notice that the counts match the finger numbers for most of the song. This can be helpful while students learn this new rhythm. We find that introducing eighth notes by first counting "two-eighths two-eighths", before teaching "1 and 2 and", is very successful. We loved including the explanation of the word "macaroni", which appears in the lyrics. While we think it is worth knowing, it is probably the image of pasta that students remember most!
What is Autism? Autism spectrum disorder (ASD) is a complex neurological disorder characterized by challenges with communication and social skills as well as repetitive or restrictive patterns of behavior or interests. Autism spectrum disorder (ASD) is considered "pervasive" because it affects many aspects of an individual's life. It is considered a "developmental disorder" because it occurs very early in life (often prior to the age of three) and may be apparent from early infancy or may begin to manifest after a period of apparently typical development. Experts estimate that autism is diagnosed in 1 out of every 44 children in the United States and is four times more common in boys than girls. Autism affects 1 in every 27 boys and 1 in every 116 girls and occurs regardless of race, religion, income level, or other societal factors. Autism is a life-long disorder that exists along a continuum of functioning levels, from those who have mild symptoms and can function independently to those with severe delays in all areas of development who require continuous supervision. Regardless of the nature of their delays, many individuals with ASD can make substantial gains with appropriate treatment. What are common signs of autism spectrum disorder (ASD)? The first signs of ASD usually appear before age three. The earliest signs typically include: - Poor eye contact - Lack of pointing - Difficulty in the use and understanding of language - Unusual play or lack of play Other common signs include: - Poor social skills - Over- or under-sensitivity to sound, sight, taste, touch or smell - Repetitive movements (hand flapping, body rocking) - Difficulty with changes in routine or surroundings - Challenging behaviors such as aggression, self-injury or severe withdrawal - Echolalia (repeating words instead of responding) - Not responding when called by name or appearing to be deaf How is ASD diagnosed? There is no medical test to diagnose autism.
In New York State, ASD can only be diagnosed by a licensed psychologist or physician. A diagnosis is made based on the presence or absence of certain observable behaviors. Qualified professionals should perform a comprehensive evaluation to determine the presence of symptoms and the impact that these symptoms have on an individual's life. There are many empirically validated diagnostic instruments that a professional may use to make a diagnosis. Multiple symptoms must be found in all three core areas (socialization, communication, and repetitive/restrictive behaviors) and must have a substantial impact on an individual's daily life. What causes ASD? No one really knows what causes autism, but a variety of theories exist. A substantial amount of research is underway to determine the biological causes of autism, but a single cause has not been identified. Some cases of autism have been found to have a genetic component, although a single gene is not responsible for the disorder. ASD occurs with increased frequency in identical twins. Many children with ASD also have seizure disorders. What are effective early interventions for autism? Although there is no cure for autism, children can make substantial gains with early intensive intervention. The most effective approaches require large amounts of time and effort from therapists, teachers, and parents. Interventions that have been found most effective for autism are those based on principles of applied behavior analysis (ABA). Effective ABA programs begin early in life, target all areas of development, and are delivered intensively (25 to 40 hours per week). The focus of treatment involves building skills in all areas of development in order to help a child learn, play, interact with others and become as independent as possible. Research indicates that children with a normal IQ who receive early intensive behavioral treatment and develop speech before age five tend to have the best prognosis.
While there are no medications that can cure autism spectrum disorders, medications can help manage some of the associated symptoms, including problems with attention, anxiety, and aggression. Where can I get more information online? - American Academy of Pediatrics – http://www.aap.org - Association for Science in Autism Treatment – https://asatonline.org/ - Autism Society of America – http://www.autism-society.org - Autism Speaks – https://www.autismspeaks.org/ - Centers for Disease Control – http://www.cdc.gov/ncbddd/autism/facts.html - National Institute of Health – http://www.nichd.nih.gov
A wide range of causes can contribute to problems with mental health and ongoing mental illnesses. The latest research shows that mental illnesses can be caused by a combination of psychological, environmental and biological factors. Some causes of mental illness Some mental illnesses are caused by the malfunction of pathways and nerve cells within the brain, causing a lack of the chemicals needed by the brain's neurotransmitters. Genetic causes can also be a factor in mental illnesses, as these types of disorders often run in families. Other factors that can contribute to the development of mental illnesses include stress, trauma or abuse, poor nutrition and a range of environmental factors. Some potential environmental factors that can trigger mental illness include divorce, a dysfunctional family and social life, low self-esteem and feelings of inadequacy, moving school or jobs, substance abuse, alcohol abuse, gambling and feeling unable to meet societal or cultural expectations. The global rise in mental illnesses makes it imperative for everyone to nurture their mental wellbeing at all times, in an effort to avoid developing mental health problems and also to recognize them if they should occur. Tips for mental well-being Taking care of your own health also means focusing on your mental well-being in order to remain as healthy as possible. Some things that are recognized as helping maintain stable mental health include: Ensuring you get enough sleep Sleep is just as important for mental health as it is for physical good health and helps regulate chemicals within the brain. Lack of sleep often causes individuals to feel stressed, anxious or depressed. Follow a healthy diet and try to avoid too much caffeine This can be particularly important for anyone who gets stressed or nervy. Eating a balanced diet helps ensure your body receives all the nutrients, minerals and vitamins it needs to perform at peak levels.
It’s been shown that deficiencies of iron and vitamin B12 can lead to mood disorders.
Get out in the sun as much as possible, any time of year
Sunlight causes the body to create vitamin D, which can be very important in maintaining mood because it helps with the release of serotonin and endorphins in the brain. Between 30 minutes and two hours of sunlight is recommended on a daily basis, but, of course, you should always follow safe practice with regard to skin and eye protection, as UV light can be damaging. Lack of sun in winter can result in a depressive illness known as Seasonal Affective Disorder (SAD).
Cut out tobacco, alcohol and drugs
These can all affect mental health in a detrimental way. Excess alcohol can lead to poor concentration and to anxiety or depression. It can also create thiamine deficiencies, and thiamine is very important for the brain’s functioning. Smoking regularly causes the body and brain to go into a state of withdrawal between cigarettes, and this can make you anxious or irritable. Substance abuse very often causes low mood and anxiety, while more severe effects can include delusions or paranoia. It’s also believed that drug abuse can contribute to the development of schizophrenia.
There you have it: some simple tips to help maintain your mental well-being and reduce the chances of mental illness. You can find more tips on maintaining mental well-being at the following website (10 tips to maintain mental health). The Content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
SQL (Structured Query Language) is a standardized programming language used to manage relational databases and perform various operations on the data in them. Initially created in the 1970s, SQL is regularly used not only by database administrators, but also by developers writing data integration scripts and by data analysts looking to set up and run analytical queries. The uses of SQL include modifying database table and index structures; adding, updating and deleting rows of data; and retrieving subsets of information from within a database for transaction processing and analytics applications. Queries and other SQL operations take the form of commands written as statements — commonly used SQL statements include SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP and TRUNCATE. SQL became the de facto standard programming language for relational databases after they emerged in the late 1970s and early 1980s. Also known as SQL databases, relational systems comprise a set of tables containing data in rows and columns. Each column in a table corresponds to a category of data — for example, customer name or address — while each row contains a data value for the intersecting column.
SQL standard and proprietary extensions
An official SQL standard was adopted by the American National Standards Institute (ANSI) in 1986 and then by the International Organization for Standardization, known as ISO, in 1987. More than a half-dozen joint updates to the standard have been released by the two standards development bodies since then; as of this writing, the most recent version is SQL:2011, approved that year. Both proprietary and open source relational database management systems built around SQL are available for use by organizations. They include Microsoft SQL Server, Oracle Database, IBM DB2, SAP HANA, SAP Adaptive Server, MySQL (now owned by Oracle) and PostgreSQL.
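The common statements listed above can be sketched end to end with Python’s built-in sqlite3 module. This is a minimal illustration only; the customers table, its columns and the sample rows are invented for the example:

```python
import sqlite3

# Throwaway in-memory database; table and data are invented for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: create a table of rows and columns, as described above.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# DML: add, update and delete rows of data.
cur.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Alice", "Leeds"))
cur.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Bob", "York"))
cur.execute("UPDATE customers SET city = ? WHERE name = ?", ("Hull", "Bob"))
cur.execute("DELETE FROM customers WHERE name = ?", ("Alice",))

# Query: retrieve a subset of the data.
rows = cur.execute("SELECT name, city FROM customers").fetchall()
print(rows)  # [('Bob', 'Hull')]
```

SQLite accepts the core standard statements shown here, though, as the article notes for other products, each database system adds its own dialect on top.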
However, many of these database products support SQL with proprietary extensions to the standard language for procedural programming and other functions. For example, Microsoft offers a set of extensions called Transact-SQL (T-SQL), while Oracle’s extended version of the standard is PL/SQL. As a result, the different variants of SQL offered by vendors aren’t fully compatible with one another.
SQL commands and syntax
SQL commands are divided into several different types, among them data manipulation language (DML) and data definition language (DDL) statements, transaction controls and security measures. The DML vocabulary is used to retrieve and manipulate data, while DDL statements are for defining and modifying database structures. The transaction controls help manage transaction processing, ensuring that transactions are either completed or rolled back if errors or problems occur. The security statements are used to control database access as well as to create user roles and permissions. SQL syntax is the coding format used in writing statements. Figure 1 (not reproduced here) shows an example of a DDL statement written in Microsoft’s T-SQL to modify a database table in SQL Server 2016: an ALTER TABLE statement using the WITH (ONLINE = ON | OFF) option.
SQL-on-Hadoop query engines are a newer offshoot of SQL that enable organizations with big data architectures built around Hadoop systems to take advantage of SQL skills instead of having to use more complex and less familiar languages — in particular, the MapReduce programming environment for developing batch processing applications. More than a dozen SQL-on-Hadoop tools have become available through Hadoop distribution providers and other vendors; many of them are open source software or commercial versions of such technologies.
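The kind of DDL change that Figure 1 illustrates can be sketched in portable SQL via Python’s sqlite3 module. SQLite does not support SQL Server’s WITH (ONLINE = ON | OFF) option, so only a plain ALTER TABLE is shown, and the employees table and its columns are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# DDL: define a table, then modify its structure with ALTER TABLE.
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("ALTER TABLE employees ADD COLUMN hired_on TEXT")

# DML: the new column is immediately usable.
cur.execute("INSERT INTO employees (name, hired_on) VALUES ('Ada', '2016-01-04')")

# Column names now include the one added by the DDL statement.
cols = [row[1] for row in cur.execute("PRAGMA table_info(employees)")]
print(cols)  # ['id', 'name', 'hired_on']
```

This also shows the DML/DDL split described above: CREATE and ALTER define structure, while INSERT manipulates the data inside it.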
In addition, the Apache Spark processing engine, which is often used in conjunction with Hadoop, includes a Spark SQL module that similarly supports SQL-based programming. In general, SQL-on-Hadoop is still an emerging technology, and most of the available tools don’t support all of the functionality offered in relational implementations of SQL. But they’re becoming a regular component of Hadoop deployments as companies look to get developers and data analysts with SQL skills involved in programming big data applications.
SQL-on-Hadoop is a class of analytical application tools that combine established SQL-style querying with newer Hadoop data framework elements. By supporting familiar SQL queries, SQL-on-Hadoop lets a wider group of enterprise developers and business analysts work with Hadoop on commodity computing clusters. Because SQL was originally developed for relational databases, it has to be modified for the Hadoop 1 model, which uses the Hadoop Distributed File System and MapReduce, or for the Hadoop 2 model, which can work without either HDFS or MapReduce. The different means of executing SQL in Hadoop environments can be divided into (1) connectors that translate SQL into a MapReduce format; (2) “push down” systems that forgo batch-oriented MapReduce and execute SQL within Hadoop clusters; and (3) systems that apportion SQL work between MapReduce-HDFS clusters and raw HDFS clusters, depending on the workload. One of the earliest efforts to combine SQL and Hadoop resulted in the Hive data warehouse, which featured HiveQL software for translating SQL-like queries into MapReduce jobs. Other tools that help support SQL-on-Hadoop include BigSQL, Drill, Hadapt, Hawq, H-SQL, Impala, JethroData, Polybase, Presto, Shark (Hive on Spark), Spark, Splice Machine, Stinger, and Tez (Hive on Tez).
In the world of Hadoop and NoSQL, the spotlight is now on SQL-on-Hadoop engines. Today, many different engines are available, making it hard for organizations to choose.
This article presents some important requirements to consider when selecting one of these engines. With SQL-on-Hadoop technologies, it’s possible to access big data stored in Hadoop by using the familiar SQL language. Users can plug in almost any reporting or analytical tool to analyze and study the data. Before SQL-on-Hadoop, access to big data was restricted to the happy few: you had to have in-depth knowledge of technical application programming interfaces, such as the ones for the Hadoop Distributed File System, MapReduce or HBase, to work with the data. Now, thanks to SQL-on-Hadoop, everyone can use their favorite tool. For an organization, that opens up big data to a much larger audience, which can increase the return on its big data investment.
The first SQL-on-Hadoop engine was Apache Hive, but during the last 12 months, many new ones have been released. These include CitusDB, Cloudera Impala, Concurrent Lingual, Hadapt, InfiniDB, JethroData, MammothDB, Apache Drill, MemSQL, Pivotal HawQ, Progress DataDirect, ScleraDB, Simba and Splice Machine. In addition to these implementations, all the data virtualization servers should be included, because they also offer SQL access to Hadoop data. In fact, they are designed to access all kinds of data sources, including Hadoop, and they allow different data sources to be integrated. Examples of data virtualization servers are Cirro Data Hub, Cisco/Composite Information Server, Denodo Platform, Informatica Data Services, Red Hat JBoss Data Virtualization and Stone Bond Enterprise Enabler Virtuoso. And, of course, there are a few SQL database management systems that support polyglot persistence. This means that they can store data in their own native SQL database or in Hadoop; by doing so, they also offer SQL access to Hadoop data. Examples are EMC/Greenplum UAP, HP Vertica (on MapR), Microsoft PolyBase, Actian ParAccel and Teradata Aster Database (via SQL-H).
SQL equality on Hadoop?
In other words, organizations can choose from a wide range of SQL-on-Hadoop engines. But which one should be selected? Or are they so alike that it doesn’t matter which one is picked? The answer is that it does matter, because not all of these technologies are created equal. On the outside, they all look the same, but internally they are very different. For example, CitusDB knows where all the data is stored and uses that knowledge to access the data as efficiently as possible. JethroData stores indexes to get direct access to data, and Splice Machine offers a transactional SQL interface. Selecting the right SQL-on-Hadoop technology requires a detailed study. To get started, you should evaluate the following requirements before selecting one of the available engines.
SQL dialect. The richer the SQL dialect supported, the wider the range of applications that can benefit from it. In addition, the richer the dialect, the more query processing can be pushed down to Hadoop and the less the applications and reporting tools have to do.
Joins. Executing joins on big tables fast and efficiently is not always easy, especially if the SQL-on-Hadoop engine has no idea where the data is stored. An inefficient style of join processing can lead to massive amounts of I/O and can cause colossal data transport between nodes. Both can result in really poor performance.
Non-traditional data. Initially, SQL was designed to process highly structured data: each record in a table has the same set of columns, and each column holds one atomic value. Not all big data in Hadoop has this traditional structure. Hadoop files may contain nested data, variable data (with hierarchical structures), schema-less data and self-describing data. A SQL-on-Hadoop engine must be able to translate all these forms of data to flat relational data, and must be able to optimize queries on these forms of data as well.
Storage format. Hadoop supports some “standard” storage formats for the data, such as Parquet, Avro and ORCFile. The more SQL-on-Hadoop technologies use such formats, the more tools and other SQL-on-Hadoop engines can read the same data. This drastically minimizes the need to replicate data. Thus, it’s important to verify whether a proprietary storage format is used.
User-defined functions. To use SQL to execute complex analytical functions, such as Gaussian discriminative analysis and market basket analysis, it’s important that they’re supported by SQL or can be developed. Such functions are called user-defined functions (UDFs). It’s also important that the SQL-on-Hadoop engine can distribute the execution of UDFs over as many nodes and disks as possible.
Multi-user workloads. It must be possible to set parameters that determine how the engine should divide its resources among different queries and different types of queries. For example, queries from different applications may have different processing priorities; long-running queries should get less priority than simple queries being processed concurrently; and unplanned, resource-intensive queries may have to be cancelled or temporarily interrupted if they use too many resources. SQL-on-Hadoop engines require smart and advanced workload managers.
Data federation. Not all data is stored in Hadoop. Most enterprise data is still stored in other data sources, such as SQL databases. A SQL-on-Hadoop engine must support distributed joins on data stored in all kinds of data sources. In other words, it must support data federation.
It would not surprise me if every organization that uses Hadoop eventually deploys a SQL-on-Hadoop engine (or maybe even a few). As organizations compare and evaluate the available technologies, assessing the engine’s capabilities against the requirements listed in this article is a great starting point.
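The join requirement above is easiest to see with a concrete query. Here is a toy example using Python’s sqlite3 as a single-node stand-in (not a SQL-on-Hadoop engine; the tables and rows are invented); on a distributed engine, the same query forces the engine to decide how much data to move between nodes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 10.0);
""")

# Join the two tables and aggregate per customer. Where customers and orders
# physically live is exactly what an efficient distributed join must exploit.
totals = cur.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(totals)  # [('Alice', 65.0), ('Bob', 10.0)]
```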
Answer the following problems.
1. Pistons are fitted to two cylindrical chambers connected through a horizontal tube to form a hydraulic system. The piston chambers and the connecting tube are filled with an incompressible fluid. The cross-sectional areas of piston 1 and piston 2 are A1 and A2, respectively. A force F1 is exerted on piston 1. Rank the resultant force F2 on piston 2 that results from the combinations of F1, A1, and A2 given from greatest to smallest. If any of the combinations yield the same force, give them the same ranking. (Use only “>” or “=” symbols. Do not include any parentheses around the letters or symbols.)
- F1 = 6.0 N; A1 = 1.1 m²; and A2 = 2.2 m²
- F1 = 3.0 N; A1 = 1.1 m²; and A2 = 0.55 m²
- F1 = 3.0 N; A1 = 2.2 m²; and A2 = 4.4 m²
- F1 = 6.0 N; A1 = 0.55 m²; and A2 = 2.2 m²
- F1 = 6.0 N; A1 = 0.55 m²; and A2 = 1.1 m²
- F1 = 3.0 N; A1 = 2.2 m²; and A2 = 1.1 m²
2. A bicycle tire pump has a piston with area 0.49 in². If a person exerts a force of 24 lb on the piston while inflating a tire, what pressure does this produce on the air in the pump?
3. A large truck tire is inflated to a gauge pressure of 82 psi. The total area of one sidewall of the tire is 1,330 in². What is the net outward force (in lb) on the sidewall because of the air pressure? (Enter the magnitude.)
4. A viewing window on the side of a large tank at a public aquarium measures 59 in. by 69 in. The average gauge pressure from the water is 7 psi. What is the total outward force on the window?
5. The total mass of the hydrogen gas in the Hindenburg zeppelin was 18,000 kg. What volume did the hydrogen occupy? (Assume that the temperature of the hydrogen was 0°C and that it was at a pressure of 1 atm.)
6. A large balloon used to sample the upper atmosphere is filled with 590 m³ of hydrogen. What is the mass of the hydrogen (in kg)?
7. Find the gauge pressure (in psi) at the bottom of a freshwater swimming pool that is 18.6 ft deep.
8.
The depth of the Pacific Ocean in the Mariana Trench is 36,198 ft. What is the gauge pressure at this depth?
9. An ebony log with volume 15.0 ft³ is submerged in water. What is the buoyant force on it (in lb)? (Enter the magnitude.)
10. An empty storage tank has a volume of 9,490 ft³. What is the buoyant force exerted on it by the air? (Assume the air is at 0°C and 1 atm.)
11. A modern-day zeppelin holds 9,770 m³ of helium. Compute its maximum payload at sea level. (Assume the helium and air to be at 0°C and 1 atm.)
12. A boat (with a flat bottom) and its cargo weigh 6,400 N. The area of the boat’s bottom is 5 m². How far below the surface of the water is the boat’s bottom when it is floating in water?
13. A scale reads 378 N when a piece of iron is hanging from it. What does it read (in N) when it is lowered so that the iron is submerged in water?
14. A dentist’s chair with a person in it weighs 2000 N. The output plunger of a hydraulic system starts to lift the chair when the dental assistant’s foot exerts a force of 44 N on the input piston. Neglecting any difference in the heights of the piston and the plunger, what is the ratio of the area of the plunger to the area of the piston?
15. The wing of an airplane has an average cross-sectional area of 13 m² and experiences a lift force of 91,000 N. What is the average difference in the air pressure between the top and bottom of the wing?
16. Air flows through a heating duct with a square cross-section with 9-inch sides at a speed of 4.1 ft/s. Just before reaching an outlet in the floor of a room, the duct widens to assume a square cross-section with sides equal to 15 inches. Compute the speed of the air flowing into the room (in ft/s), assuming that we can treat the air as an incompressible fluid.
17. A metal bowl with a weight of 1.45 N is placed in a larger kitchen container filled with olive oil. How much olive oil must the bowl displace in order to float?
For reference, the mass density of olive oil is about 910 g/liter and its weight density is about 8.92 N/liter. Please give your answer in liters.
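Problem 1 can be checked with Pascal’s principle: the pressure F1/A1 is transmitted undiminished through the incompressible fluid, so F2 = F1 × (A2/A1). A short sketch (the labels a–f for the six bullet combinations, taken in order, are added here for illustration and are not part of the problem):

```python
# Pascal's principle: pressure is transmitted undiminished through the fluid,
# so F2 = F1 * (A2 / A1). Forces in newtons, areas in square meters.
cases = {
    "a": (6.0, 1.1, 2.2),
    "b": (3.0, 1.1, 0.55),
    "c": (3.0, 2.2, 4.4),
    "d": (6.0, 0.55, 2.2),
    "e": (6.0, 0.55, 1.1),
    "f": (3.0, 2.2, 1.1),
}

forces = {label: F1 * (A2 / A1) for label, (F1, A1, A2) in cases.items()}
for label in sorted(forces, key=forces.get, reverse=True):
    print(label, forces[label], "N")
# Ranking: d > a = e > c > b = f
```

The same one-line formula answers problem 14 as well: the area ratio must be at least F_out/F_in = 2000/44.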
Foundation Subject Overviews
Nursery Foundation Subjects Overview
IQ How do I feel? With our class novel ‘Owl Babies’ we build on children’s experiences of their emotions and how to recognize and describe them. They are beginning to make their own friends and build independence. They have a sense of their own family and relations. This helps with nursery and, eventually, school routines.
IQ Who lives in the forest? In The Gruffalo, children explore the creatures that live in the forest, expanding on Owl Babies by looking at animal habitats and what animals eat. Reception Class expand on this with polar habitats, minibeasts, food chains and African animals.
IQ What will I see on my journey? Using the ‘Bear Hunt’ as the class novel, the children notice details of their environment and think about seasons and the weather. Reception class follow this with ‘Dinosaur Hunt’, changing the characters in the book and further exploring taking a journey.
IQ How many legs have I got? During the ‘Hungry Caterpillar’ topic, Nursery children will explore minibeast habitats and the life cycle of the butterfly. This is expanded on in Reception, where the children build on the life cycle theme.
IQ Who you gonna call? The nonfiction book ‘Emergency’ looks at the emergency services and their vehicles, and who helps us when we are hurt or ill. This is built on through RE and the Superheroes topic in Reception.
IQ Can I be a pirate? Using the series of pirate books as the underlying theme, children are encouraged to use their imagination, take on a role, sing, and build their confidence and awareness as they learn to play with others. Reception build on the Pirate topic through the ‘Explorers and Travel’ topic.
Reception Foundation Subjects Overview
IQ Who Lives at Donaldson’s Dairy? Children build on Nursery concepts of recognising emotions and self-confidence/self-awareness, and begin to look at similarities and differences through the ‘Hairy Maclary’ series.
The children also find out about where milk comes from, and this links to healthy lifestyles, which is expanded on in Year 1.
IQ Will I meet a dinosaur? Nursery children paint and draw the Gruffalo; Reception children learn about dinosaurs through ‘Tyrannosaurus Drip’ and build on these Nursery art skills to paint dinosaurs and illustrate their writing with more detail. This paves the way for Year 1 to learn more sophisticated painting techniques.
IQ Why can’t a penguin and a polar bear be friends? Being ‘explorers’, the Reception children build on their imaginative and role play experiences in Nursery and compare environments such as polar regions and Morley, using ‘Lost and Found’. (We also use ‘Where the Wild Things Are’ and ‘Pirates Love Underpants’ to further enhance the topic.) In Year 1 children build on their concept of different places through Katie Morag etc.
IQ What is space like? / IQ What happens in spring? Nursery learn about environments and places; Reception class build on this with a journey into space, reading ‘The Way Back Home’. The children then learn ‘Little Red Hen’ and have the experience of hatching chicks. Year 1 build on this with life cycles and animals.
IQ Are dragons always bad? Linking to the characters in the Gruffalo, the children in Reception learn about traditional tales and how we can change and adapt these stories. This involves reading ‘There’s No Dragon in This Story’. We also read ‘Handa’s Surprise’ and learn about Africa and African animals. Year 1 expand on this by learning about different animals.
IQ What makes me super? Building on PSED learning in Nursery, the children discover their special skills and prepare for their transition to Year 1.
Year 1 Foundation Subjects Overview
Is everywhere like Morley? Book Title: The Katie Morag Stories
With our topic ‘Our World’ there is a focus on the British Isles, and in particular the Isle of Struay, its similarities and differences to Morley, and learning more about communities.
This links to the FS2 topic of ‘houses/homes’ and the community. Building on communities, Year 2 will learn about Stone Age settlement. Children can use their prior knowledge to distinguish the differences between settlements and communities and why they differ (e.g. discussing the time period and its influence). Year 1 also have a focus on postal services around the world, looking at uniforms, means of travel and places around the world. FS2 develop children’s understanding of the postal service, as children hear the stories ‘The Jolly Christmas Postman’ (Autumn term) and ‘The Jolly Postman’ (Summer term). In history, pupils in Year 1 build on their Reception learning about ‘People Who Help’ when studying Florence Nightingale. We make basic comparisons to nursing in the present day, and this supports Year 2’s learning on Mary Seacole.
Have you been down to the woods today? Book Title: The Magic Faraway Tree
During the ‘Wonderful Woodlands’ topic children have the opportunity to learn more about animals and their habitats. FS2 study African animals, and this will allow children to understand that animals live in different environments. Year 1 build on this further and in particular look at woodland animals. Year 2 focus on one particular animal in depth - the fox - drawing on previous knowledge they have obtained. In history, FS2 learn more about how we travel and what an explorer is. Year 1 make comparisons between Christopher Columbus and James Cook, finding out in more detail how they travelled, where they explored and how they differ from today’s explorers. Year 2 focus on a more modern-day explorer, Neil Armstrong, and how his adventures were similar or different.
What makes you brave? Book Title: Charlotte’s Web
During ‘Creatures Big and Small’, pupils explore fiction texts to build upon learning from Foundation Stage topics on minibeasts.
Children in Year 1 also learn about animals, including humans, in science, and this forms a basic foundation in preparation for Year 2, when they revisit animals and humans but develop their understanding of human lifestyles, including the food pyramid, and of animals and their offspring. Children in Reception have previously studied books such as The Little Red Hen and The Hungry Hen, so they have an awareness of farm animals and their purpose. Year 1 pupils will extend this understanding of farming further during Charlotte’s Web and also compare it with farming from the past. This links well to Year 2, as they look at early farming techniques at the end of the Stone Age. Charlotte’s Web highlights a number of social and emotional aspects such as loyalty, courage and bravery. The children’s prior learning about feelings from studying Where the Wild Things Are will help them to empathise and also prepare them for content in Year 2 texts such as The Midnight Fox.
Year 2 Foundation Subjects Overview
Were They the Stepping Stones to the Modern World? Book Title: Cave Baby, Ug, Boy Genius, The Boy with the Bronze Axe, Stone Age Boy
Within our topic of ‘The Stone Age’ the children will be learning about different types of early settlements. We will be learning about how people lived in the Stone Age: their houses, tools and weapons, diet and animals. This continues from prior learning in FS2 and Year 1 around communities. They will be continuing their investigation of settlements and farming, but in another time period. In Year 2 children will progress their learning by comparing the Stone Age with modern life, looking at artefacts and pictures of Stone Age villages, using their prior knowledge to predict what objects would have been used for, and sorting different features of the Stone Age into its three different periods. The children begin their learning of historical time periods in Year 2 with the first humans and the Stone Age.
This further continues in KS2, when they learn about the Vikings, Romans and Victorians.
What Makes a Fox Fantastic? Book Title: Fantastic Mr Fox, The Midnight Fox
During the ‘Animals and their Habitats’ topic we will be building upon the children’s prior knowledge of animals, including humans, from FS2 and Year 1, and preparing them to learn about the rainforest habitat in Year 3. In FS2 the children have learnt about African animals and in Year 1 they have learnt about woodland animals; we develop this further by focusing on one animal, the fox. We look at the red fox and how it has adapted to survive, and then we will compare two foxes from differing climates - the fennec fox, which lives in the Saharan desert, and the Arctic fox - and how they have adapted to these two contrasting climates. When studying Mary Seacole the children will further develop the learning about ‘People Who Help’ from Reception and the learning they did about Florence Nightingale in Year 1. We will begin to look at Human Rights and Rights and Responsibilities, which will be further developed in Year 4’s topic of the Victorians and Year 5’s topic surrounding the novel ‘Trash’.
Whodunit? Book Title: The Secret Seven
Moving onto Year 2’s ‘Our Country’ topic, while comparing when The Secret Seven was set to now, the children will begin to look at post-war Britain, which will be further developed in Year 6 when they study World War 2. We will also be continuing the work Year 1 have done about the UK and its capital cities, further developing this by locating them on a map and looking at the characteristics of these places. We will also be studying London in the Stuart age when we study the Great Fire of London and the effects this had on London, whether positive or negative. This will be further developed in Year 4 when they study the Victorian age. They will also be further developing the work they did in Year 1 around loyalty, courage and bravery when they read ‘Charlotte’s Web’.
Year 3 Foundation Subjects Overview
How important is the rainforest? Book Title: The Jungle Book
Within our topic of ‘Radiant Rainforests’, we focus on the beauty within the rainforest and on how the rainforest has an impact on the world. We look at deforestation causing the decrease of the orangutan population, due to the high demand for palm oil in various products we use in society. In Year 2, pupils learn about the Stone Age and the tools that people would have used back then to survive; in Year 3, children progress on to understand that such tools are still used in modern society. Rainforest tribal people depend on the earth to survive and use primeval, Stone Age-style tools and pottery in everyday life, such as clay pots and grain grinders, much like the people of the Stone Age. In Year 2, pupils begin to learn about the habitats and the various animals that are in the world around us, with a focus on creatures such as foxes and badgers. During Year 3, pupils progress their learning to understand the exotic creatures that inhabit the rainforest and what they do to survive and adapt to the everyday changes in the rainforest. During the topic, we encourage pupils to compare the exotic plants and animals to the ‘everyday’ creatures in our environment - for example, how does the red-eyed tree frog differ from our type of frog? Can you compare the Amazon Rainforest to Sherwood Forest or the woodland in our school areas? This links closely to the pupils’ FS learning about the everyday world.
Can You Walk Like an Egyptian? Book Title: The Lion, the Witch and the Wardrobe
During our ‘Awesome Egyptians’ topic, pupils are expected to understand the daily lives, geographical landscape and beliefs of Ancient Egypt.
To achieve greater depth in Geography, pupils are to learn about the overall landscape, land types and landmarks that make up Ancient Egypt; this lays a good geographical basis for the pupils’ Year 4 learning about South Africa, since Egypt shares a similar climate and land type and the same continent (Africa). Pupils are encouraged to look at Egypt in the context of the whole of the African continent, which will support making those links to the Year 4 South African learning. When pupils reach Year 5, they are expected to progress onto the topic of Ancient Greece. The Ancient Greek and Egyptian civilisations were established and thriving around the same period in the timeline of history, and they therefore share certain similarities in traditions. An example of how the Year 5 and Year 3 topics link is the creation of particular inventions, pottery and burial rituals; when pupils reach Year 5, they can compare and contrast these traditions and foundations of the two civilisations, which allows them to reach the greater depth standard in History.
Would you be a Mayan? Book Title: The Nowhere Emporium
During our final topic in Year 3, ‘Madness of the Mayans’, pupils are expected to understand the importance of chocolate in the Mayan civilisation and to understand the traditions and beliefs of the Mayans (no matter how bizarre they may be). The Mayans deemed the cocoa bean to be a marvellous ‘treasure’ and were the first inventors of chocolate. During the pupils’ learning throughout school, they learn the different definitions of ‘treasure’ for different civilisations: Ancient Egypt (Year 3) and Ancient Greece (Year 5) deemed gold and precious gems to be treasure, the Stone Age people (Year 2) deemed stone to be their treasure, and during World War 2 (Year 6) there was no treasure, but people cherished food and rations! A comparison can therefore be made across these various historical times about what was defined as treasure.
During Year 4, pupils will be making an airship based on the class novel ‘Cogheart’; the skill of papier-mâché will begin in Year 3, when children use these artistic skills to create a traditional Mayan mask. Although it is not linked to another year group, the creation of a traditional mask will also be undertaken during the children’s Ancient Egypt learning, where they recreate the funerary mask of Pharaoh Tutankhamun.
Year 4 Foundation Subjects Overview
Can You Eat Off Table Mountain? Book Title: The Fastest Boy in the World
Within our topic of ‘Amazing Africa’, our focus upon Nelson Mandela and apartheid links to Human Rights and Rights and Responsibilities. Not only does this link with our own school procedures and our children’s rights and responsibilities, but also with Year 3’s topic about Ancient Egypt. Children can compare the treatment of slaves and pharaohs with the segregation and treatment of people of different races in South Africa. In Year 3’s topic about rainforests children focus upon the natural beauty of the rainforest and its ‘hidden treasures’. Links can be made between this and the beauty of the three African countries that Year 4 study (Morocco, Ethiopia and South Africa).
Is My Teacher As Mean As Ma’am? Book Title: Cogheart
During the ‘Vile Victorians’ topic, children can compare slavery in Ancient Egypt and discuss the end of slavery in the British Empire. This can be further developed in Year 5 with their topic surrounding the novel ‘Trash’, which relates closely to Human Rights and Rights and Responsibilities in a more mature and topical manner. Following on from this, Year 4’s Victorian topic looks at Human Rights in regard to the Education Act, factory and mining laws, and child labour laws. Exploring mills and local industries in the Victorian era builds upon exploring and contrasting localities in Year 1, developing children’s knowledge of Morley through the ages.
The study of Queen Victoria's reign and the start of the British Empire will be further developed in Year 6 by comparing and contrasting it with the Roman Empire. Was Elizabethan Life 'Ruff'? Book Title: A Midsummer Night's Dream Moving onto Year 4's 'Shakespearean Times' topic, while discussing explorers such as Drake and Raleigh, children can make comparisons with KS1's topics of settlements and explorers. Moreover, this will be further developed moving into Year 5 with the study of the Vikings. Children can make links not only through mapping journeys but also through decisions about where and why communities chose to settle. Furthermore, children discuss and explore the development of travel and communication and compare this to the present day. Through the study of Shakespeare and 'A Midsummer Night's Dream', children's love of language can be mirrored with the Shakespearean love of language, and with the themes of beauty and treasure previously discussed in Year 3. Year Five Foundation Subjects Overview What Legacies Have the Ancient Greeks Left Behind? Book Title: Percy Jackson and the Lightning Thief Within this topic, children will learn about the civilisation of Ancient Greece: a study of the country, its geographical landmarks and its people's beliefs. This learning builds on skills gained in Year 3 through the topic 'Awesome Egyptians', where children learn about different gods and how they were invented to explain life and how things were created. Children will research and discover that many aspects of modern-day life can be traced back to life as an Ancient Greek, especially through our learning on democracy and the Olympics. Children in Year 5 will continue to develop these skills as they progress into Year 6, where they will learn about the Romans and their impact on modern-day life. Let Us Go Viking? 
Book Title: Kensuke's Kingdom Through our 'Explorers' topic, children in Year 5 will become familiar with daily life as a Viking, which includes hierarchy, law and punishment, and survival during this barbaric era. They will also learn about slavery as an aspect of the working life of a Viking. This links to Year 3's Ancient Egyptian topic, where they will also learn about slavery, as well as Year 4's topic 'Vile Victorians', where children will learn about how slavery was abolished. In Year 1, children gain a basic understanding of explorers such as Christopher Columbus and James Cook, including how they travelled and where they explored. In Year 5, these skills will be developed in greater depth when children research other sea explorers. This will include looking at chronology and how explorers' modes of transport have advanced over time. What Are Our Rights as Humans? Book Title: Trash Through this topic, children will learn about what human rights are and how they came to be agreed within the Universal Declaration. They will also compare and contrast the rights of adults and children. Alongside this, they will be able to state the differences between justice and freedom. Children will link their learning of the Declaration with Year 2's learning about Mary Seacole, who is known and admired for her bravery and determination to help British soldiers during the Crimean War, even though her help was initially refused because of her race. Through knowledge gained about the Education Act and the factory, mining and child labour laws ('Vile Victorians', Year 4), children will begin to have a deeper understanding of our rights as individuals in the 21st century. However, children will learn through a study of Brazil that our basic needs as human beings are not always met. Year 6 Foundation Subjects Overview What Legacies Did the Ancient Romans Leave Behind? 
Book Title: The Thieves of Ostia The topic ‘Romans on the Rampage’ builds on Year 5’s topic on Ancient Greece and looks at the lasting influence the Romans had on the western world. We start by looking at the history of Rome – legend and fact – and understand where in Europe Rome is, and how the Romans came to extend their influence and create such a large and influential empire. Children understand the power and organisation of the Roman army and ask and answer historically relevant questions about why it was so successful. Looking at the Roman legacy, children come to understand how many aspects of modern life can, in effect, be traced back in some way to the Romans by studying the cities, the rule of law, Roman numerals and the calendar we use today. Was it Britain’s Finest Hour? Book Title: Letters From the Lighthouse The ‘Woeful Second World War’ topic focuses on the outbreak of World War 2, what is meant by the ‘Phoney War’ and why the Battle of Britain was such a significant turning point in British History. Looking at the location of the countries involved highlights links to the British Empire and its commonwealth countries. In addition, children will research the lives of the ordinary people who faced the Blitz with a focus on rationing, the Blackout, evacuation and the role of women. Links to human rights and responsibilities learnt in Y4 will be built on as we examine the lives of the Jewish people during World War 2 and cover the terrible discrimination, oppression and liberation of the concentration camps.
Presentation on theme: "Welcome to Ridge House Letters and Sounds Presentation" — Presentation transcript:
1 Welcome to Ridge House Letters and Sounds Presentation
2 5 Basic Skills: learning letter sounds, letter formation, blending, identifying sounds, tricky words
3 Why phonics? The aim is to secure essential phonic knowledge and skills so that children progress quickly to independent reading and writing. Reading and writing are like a code; phonics is teaching the child to crack the code. It gives us the skills of blending for reading and segmenting for spelling.
4 A phoneme is the smallest unit of sound in a word; you hear a phoneme/sound: a b c. A grapheme is the letter, or letters, representing a phoneme; you see a grapheme/letter: t ai igh
5 Blending is recognising the letter sounds in a written word, for example c-u-p, and merging or synthesising them in the order in which they are written to pronounce the word 'cup'.
6 Segmenting is identifying the individual sounds in a spoken word (e.g. 'him' = h – i – m) and writing down a letter/grapheme for each sound to form the word.
7 A digraph contains two letters and makes one sound: oa ai ee ie. For these digraphs we say 'when two vowels go walking, the first one does the talking', so you say the letter name to make the sound. Other digraphs: oo ar or er oi sh ck th ll
8 Reading and Writing. For reading: phonemes/sounds associated with particular graphemes/letters are pronounced in isolation and blended together. For writing: words are segmented into phonemes orally, and a grapheme written to represent each phoneme.
9 Phase One. In developing their phonological awareness children will improve their ability to distinguish between sounds and to speak clearly and audibly with confidence and control. Through speaking and listening activities, children will develop their language structures and increase their vocabulary. 
10 Environmental discrimination: listening games. Instrumental discrimination: matching instruments, patterns. Body percussion: pass different clapping rhythms around a circle, body noises. Rhythm and rhyme: nursery rhymes. Alliteration: Alex ate an apple. Voice sounds: animal sounds, sounds to match actions, ball bouncing 'boing'. Oral blending: cross the river if you have a d-u-ck, f-i-sh.
11 Phase One outcomes: explore and experiment with sounds and words; listen attentively; show a growing awareness and appreciation of rhyme, rhythm and alliteration; speak clearly and audibly with confidence and control; distinguish between different sounds in words; develop awareness of the differences between phonemes.
12 Phase 2: to introduce grapheme-phoneme (letter-sound) correspondences (video clip).
13 Phase Two outcomes: children know that words are constructed from phonemes and that phonemes are represented by graphemes. They have knowledge of a small selection of common consonants and vowels. They blend them together in reading simple CVC words and segment them to support spelling.
14 Tricky words: some words cannot be sounded out or spelt correctly by listening for the sounds in them. These are called 'tricky words' and have to be learnt. Examples of tricky words: the, to, I, no, go, into, he, she, me, we. The best ways of learning these are flash cards, matching games and finding words in books, magazines, comics etc.
15 High frequency words: the National Literacy Strategy "Framework for Teaching" identifies an essential set of words that your child needs to learn "even to tackle very simple texts". According to the NLS, "these words play an important part in holding together the general coherence of texts and teachers are encouraged to get their pupils to recognise them as soon as possible so they can get pace and accuracy into their reading at an early stage. 
Some of these words have irregular or difficult spellings and, because they often play an important grammatical part, they are hard to predict from the surrounding text." Below are listed the 45 high frequency words that your child will be expected to read on sight, in and out of context, by the end of their first year in school: I, go, come, went, up, you, day, was, look, are, the, of, we, this, dog, me, like, going, big, she, and, they, my, see, on, away, mum, it, at, play, no, yes, for, a, dad, can, he, am, all, is, cat, get, said, to, in.
16 Phase 3: to teach children one grapheme for each of the 44 phonemes in order to read and spell simple regular words.
17 Phase Three outcomes: children link sounds to letters, naming and sounding the letters of the alphabet. They recognise letter shapes and say a sound for each. They hear and say sounds in the order in which they occur in the word. They read simple words by sounding out and blending the phonemes all through the word from left to right. They recognise common digraphs and read some high frequency words.
18 Phase four: to teach children to read and spell words containing adjacent consonants.
19 Phase four outcomes: children are able to blend and segment adjacent consonants in words. They apply this skill when reading unfamiliar texts and in spelling.
20 Phase 5: teaching children to recognise and use alternative ways of pronouncing the graphemes and spelling the phonemes already taught.
21 Phase 5 outcomes. Children will: use alternative ways of pronouncing the graphemes and spelling the phonemes corresponding to long vowel phonemes; identify the constituent parts of two-syllable and three-syllable words and be able to read and spell phonically decodable two-syllable and three-syllable words; recognise an increasing number of high frequency words automatically; apply phonic knowledge and skills as the prime approach in reading and spelling when the words are unfamiliar and not completely decodable. 
22 Phase 6: teaching children to develop their skill and automaticity in reading and spelling, creating ever-increasing capacity to attend to reading for meaning.
23 Phase six outcomes. Children will: apply their phonics skills and knowledge to recognise and spell an increasing number of complex words; read an increasing number of high and medium frequency words independently and automatically.
24 Summary. A phoneme is the smallest unit of sound in a word; a phoneme may be represented by 1, 2, 3 or 4 letters, e.g. t, ai, igh, eigh. A syllable is a word or part of a word that contains one vowel sound. A grapheme is the letter(s) representing a phoneme. It is the written representation of a sound, which may consist of one or more letters. E.g. the phoneme 's' can be represented by the grapheme s (sun), se (mouse), c (city), sc or ce (science).
25 Summary continued. A digraph is two letters which make one sound. A consonant digraph contains two consonants, e.g. sh, ck, th, ll. A vowel digraph contains at least one vowel: ai, ee, ar, oy. A split digraph is a digraph in which the two letters are not adjacent, e.g. make. A trigraph is three letters which make one sound: igh, dge. Oral blending is hearing a series of spoken sounds and merging them together to make a spoken word (no text is used), e.g. when a teacher calls out 'b-u-s' the children say 'bus'; this skill is usually taught before blending and reading printed words. Blending is recognising the letter sounds in a written word, e.g. c-u-p, and merging or synthesising them in the order in which they are written to pronounce the word 'cup'. Segmenting is identifying the individual sounds in a spoken word, e.g. h-i-m, and writing down or manipulating letters for each sound to form the word 'him'.
26 Useful websites & resources: search 'letters and sounds'; magnetic letters; flash cards. Please ask a member of staff if you have any questions or want to borrow resources.
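The summary slides describe segmenting as a longest-match rule: when splitting a written word into graphemes, try trigraphs (e.g. igh, dge) before digraphs before single letters. As an illustrative sketch only (the grapheme sets below are a small assumed subset, not the full 44 correspondences taught in the programme), that rule can be written as:

```python
# Illustrative sketch of grapheme segmentation, longest match first.
# TRIGRAPHS and DIGRAPHS are a small assumed subset for demonstration,
# not the complete Letters and Sounds inventory.
TRIGRAPHS = {"igh", "dge"}
DIGRAPHS = {"oa", "ai", "ee", "ie", "oo", "ar", "or", "er",
            "oi", "oy", "sh", "ck", "th", "ll"}

def segment(word):
    """Split a written word into graphemes, trying longer graphemes first."""
    graphemes = []
    i = 0
    while i < len(word):
        if word[i:i + 3] in TRIGRAPHS:      # trigraph: three letters, one sound
            graphemes.append(word[i:i + 3])
            i += 3
        elif word[i:i + 2] in DIGRAPHS:     # digraph: two letters, one sound
            graphemes.append(word[i:i + 2])
            i += 2
        else:                               # single-letter grapheme
            graphemes.append(word[i])
            i += 1
    return graphemes

print(segment("cup"))    # ['c', 'u', 'p']
print(segment("night"))  # ['n', 'igh', 't']
print(segment("boat"))   # ['b', 'oa', 't']
```

Blending for reading is simply the reverse direction: the graphemes are pronounced in order and merged back into the spoken word.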
Trees and Palms An evergreen tree keeps its leaves all year round. Most conifer trees are evergreen. They can be found in most parts of the world, except the hotter parts of the tropics. Broad-leaf trees from cool climates usually lose their leaves in the autumn, but those that grow in hot, dry parts of the world are often evergreen. Their leaves usually have a waxy coating to stop them from losing too much water in the heat of the sun. You will need to compare the "Growing Conditions" given for each tree you are considering planting to the characteristics of your site, to determine which would be the best tree for you. Once established, you can expect 18”–24” of average growth per year on a fast-growing tree; in exceptionally good years, with good feeding and care, expect up to 3 feet of growth per year. There are over 2,500 species of palm trees. Palms are a member of the evergreen group. Not all palm trees are "trees," and not all plants called "palms" are truly palms. Palm trees have two different types of leaves: palmate and pinnate. There is a huge variety of sizes, shapes and textures. Most require heat, but a few varieties have adapted to colder temperatures. Flowering and colorful trees can have single trunks or be shrubby types with multiple upright trunks growing to 15 feet or taller. There are thousands of tree species with various colors of blooms to choose from, making it difficult to select the right flowering tree. The trees we select for our site are ones that are native or have adapted to our soil types and climate conditions in the Hill Country. A fruit tree is a tree which bears fruit that is consumed or used by humans and some animals — all trees that are flowering plants produce fruit, which are the ripened ovaries of flowers containing one or more seeds. Citrus is a genus of flowering trees and shrubs in the rue family, Rutaceae. 
Plants in the genus produce citrus fruits, including important crops like oranges, lemons, grapefruit, pomelo and limes.
The fundamental features of geological study, namely, field work, collection, and theory construction, were not developed until the 16th to 18th centuries. Previously, back to ancient Greek times, many scholars believed that fossils were the remains of former living things and many Christians (including Tertullian, Chrysostom, and Augustine) attributed them to the Noachian flood. But other scholars rejected these ideas and regarded fossils as either jokes of nature, the products of rocks endowed with life in some sense, the creative works of God, or perhaps even the deceptions of Satan. In the 16th and 17th centuries the debate among naturalists intensified. One of the prominent opponents of the organic origin of fossils was Martin Lister (1638–1712). John Ray (1627–1705) favored organic origin although he respected Lister’s objections. But from his microscopic analysis of fossil wood, Robert Hooke (1635–1703) confirmed that fossils had once lived. However, he did not believe they were the result of Noah’s flood. Prior to 1750, one of the most important thinkers was Niels Steensen (1638–86), or Steno, a Danish anatomist and geologist who established the principle of superposition: sedimentary rock layers are deposited in a successive, essentially horizontal fashion, so that a lower stratum is older than the one above it. In his Forerunner (1669) he expressed belief in a 6,000-year-old earth and that organic fossils and the rock strata were laid down by the Flood.1 Shortly after Steno, Thomas Burnet (1635–1715), a theologian, published his influential Sacred Theory of the Earth (1681) in which he argued from Scripture, rather than geology, for a global Flood. He made no mention of fossils and though he believed in a young earth, he took each day in Genesis 1 to be a year or longer. 
Following him, the physician and geologist John Woodward (1665–1722) invoked the Flood to explain stratification and fossilization, in An Essay Toward a Natural History of the Earth (1695). In A New Theory of the Earth (1696) William Whiston (1667–1752), Newton’s successor at Cambridge in mathematics, shared similar views to the above. But he offered a cometary explanation of the mechanism of the Flood and he added six years to Archbishop Ussher’s date of creation by his argument that each day of Genesis 1 was one year in duration. Some of his points were later used by those who favored the day-age theory for Genesis 1. In his Treatise on the Deluge (1768) the geologist Alexander Catcott (1725–79) used geological arguments to defend the Genesis account of a recent creation and global Flood which produced the geological record. On the other hand, another geologist, John Whitehurst (1713–88), contended in his Inquiry into the Original State and Formation of the Earth (1778) that the earth was much older than man and, although the Noachian flood was a global catastrophe, it was not responsible for most of the geological record. On the continent, Johann Lehmann (d. 1767) studied German mountain strata and believed the primary, non-fossil-bearing rocks were from creation week, whereas the secondary fossiliferous rocks were attributed to the Flood. Other geologists like Jean Étienne Guettard (1715–86), Nicholas Desmarest (1735–1815) and Giovanni Arduino (1714–95) denied the Flood and advocated a much older earth.2 In France, three prominent writers developed philosophically naturalistic explanations related to earth history (i.e., explaining the origin of everything by the present laws of nature). In his Epochs of Nature (1778), Comte de Buffon (1708–88) espoused the theory that the earth had originated from a collision of a comet and the sun. 
Extrapolating from experiments involving the cooling of various hot materials, he postulated that in about 78,000 years the earth had passed through seven epochs to reach its present state. He believed in spontaneous generation, rather than evolution, to explain the origin of living species. In an apparent attempt to avert religious opposition, he interpreted the days of Genesis 1 to be long ages, an idea which became popular among some 19th century British Christians. The astronomer Pierre Laplace (1749–1827) was strongly motivated by his atheism to eliminate the idea of design or purpose from scientific investigations. As a precursor to modern cosmic evolution, he proposed the nebular hypothesis to explain why the planets revolve around the sun in the same direction and in roughly the same plane. According to this theory, published in his Exposition of the System of the Universe (1796), prior to the present state there was a solar atmosphere which by purely natural progressive condensation had produced rings, like Saturn’s, which eventually coalesced to form planets. This theory made the age of creation even greater than that which Buffon had suggested. Jean Lamarck (1744–1829) was a naturalist specializing in the study of fossil and living shells. Riding the fence between deism and atheism, he had a strong aversion to any notion of global catastrophe. In Zoological Philosophy (1809) he attempted to explain the similarities and differences between living and fossil creatures by four laws of gradual evolutionary transformation commonly summarized as the inheritance of acquired characteristics. He believed in spontaneous generation, rejected the notion of extinctions, and became a fierce opponent of the catastrophist Georges Cuvier.3 So by the latter part of the 18th century, a number of factors were preparing the ground for the geological revolution of the coming century. 
Though most Christians believed in a straightforward literal reading of the creation and Flood narratives, some were suggesting that the earth was much older than Ussher had calculated. In addition, the deists and atheists were proposing alternative cosmologies to the one found in Genesis. The idea of an initially fully functioning creation, much like today’s, was beginning to be replaced by the notion of created or uncreated, initially simple matter, which gradually, by the laws of nature operating over untold ages, was transformed into the present state of the universe. A major shift in worldview, involving the existence and nature of God, the nature of His relationship to the creation, and the nature of the relationship of science to biblical interpretation, was underway. The years 1790–1820 have been called the “heroic age” of geology. During this time, geology truly became established as a separate field of scientific study. More extensive geological observations began to be made, new methods were developed for systematically arranging the rock formations, and the Geological Society of London, the first society fully devoted to geology, was born. But it was also during this period that geology became embroiled in the so-called Neptunist-Vulcanist debate.4 Neptunism was named after the Roman god of the sea and viewed water as the most important agent of geological change. Vulcanism takes its name from the Roman god of fire and saw the internal heat of the earth as the dominant factor. The founders of the two positions were, respectively, Abraham Werner (1749–1817) of Germany and James Hutton (1726–97) of Scotland. 
Werner was one of the most influential geologists of his time, even though his theory was rather quickly discarded.5 As a result of intense study of the succession of strata in his home area of Saxony, which were clearly water-deposited, he developed the theory that most of the crust of the earth had been precipitated chemically or mechanically by a slowly-receding primeval global ocean. The strata were then ordered by their mineral content. Werner did acknowledge volcanic activity, but put this as the last stage of his theory, after the primeval ocean had receded to its present level. Many objections were soon raised against his theory, but it was an attractively simple system. Furthermore, as an excellent mineralogist, Werner was an inspirational teacher for 40 years at the University of Freiberg, where he attracted the great loyalty of his students, many of whom came from foreign countries. He was not a prolific writer but recent studies of private correspondence and lecture notes have shown that he believed and taught his students that earth history lasted at least a million years. He felt that the earth’s crust provided more reliable historical information than any written documents. As a deist, he also felt no need to harmonize his theory with the Bible.6 Nevertheless, some writers, such as Richard Kirwan and André Deluc, used Werner’s theory in support of the Genesis flood. Hutton’s geological views, published in his Theory of the Earth (1795), were significantly different from Werner’s. He did most of his geological work in and around Edinburgh, which is set on volcanic rocks, and he argued that the primary geological agent was fire, not water. Rocks were of two origins, igneous and aqueous. The latter were the result of detrital matter being slowly deposited in the ocean bottoms which was gradually transformed into rock by the earth’s internal heat. 
Another characteristic of Hutton’s view was its uniformitarianism: everything in the rock record must and can be explained by present day processes of erosion, sedimentation, volcanoes, and earthquakes.7 Earth history was cyclical—a long process of denudation of the continents into the seas and the gradual raising of the sea floors (by the internal heat of the earth) to make new continents, which in turn would be eroded to the sea only to rise again later. This theory was inspired, in part at least, by his deism: God’s wise government of the rock cycle was for the benefit of all creatures.8 It obviously expanded the age of the earth almost limitlessly. In fact, Hutton denied that geology should be concerned with origins. He asserted instead that he saw “no vestige of a beginning or prospect of an end” in the geological record. His view was a clear denial of any global catastrophe, such as Noah’s flood, which was for him a geological non-event. Hutton received harsh criticism from two prominent naturalists. Richard Kirwan was an Irish mineralogist and chemist who viewed Hutton’s views as atheistic. In Geological Essays (1799), he objected that Hutton’s theory was based on false evidence and was contrary to the literal interpretation of Genesis. André Deluc, a geologist and French-born resident of England, gave a gentler, but still negative, critique of Hutton. He took a fairly literal view of Genesis, but he was severely criticized by Kirwan for believing that the days of Genesis 1 were “periods of time” and that the universal Flood left some of the mountaintops unscathed as island refuges for vegetable and animal life. In his Illustrations of the Huttonian Theory of the Earth (1802) John Playfair (1748–1819), mathematician and Scottish clergyman, republished Hutton’s ideas in a more comprehensible and less overtly deistic style. 
He defended Hutton against Kirwan’s charge of atheism by arguing that Hutton was just following the path of natural theology by observing the beautiful design in the systems of the earth: Hutton’s ceaseless cycles of geological processes were like Newton’s laws of regular planetary motion. Although Playfair made no attempt to harmonize Hutton with Scripture he did defend Hutton’s notion of the earth’s great antiquity by saying that the Bible only addresses the time scale of human history, which Hutton did not deny was relatively short, as a literal interpretation of the Bible indicated. Like Hutton, Playfair also argued that the Flood was tranquil, not a violent catastrophe. Neither the Neptunists nor the Vulcanists paid much attention to the fossils. In contrast, William Smith (1769–1839), a drainage engineer and surveyor, worked on canals for transporting coal all over Britain. After many years of studying strata (revealed in the canal and road cuttings he helped design) and the fossils in those strata, he published three works from 1815 to 1817, containing the first geological map of England and Wales and explaining the order and relative chronology of the stratigraphic formations as defined by certain characteristic fossils rather than the mineralogical character of the rocks.9 He became known as the “father of English stratigraphy” because he gave geology a descriptive methodology, which became critical for the establishment of the theory of an old earth. Though Smith believed that a global flood was responsible for producing the gravelly deposits scattered over the earth’s surface, he never explicitly linked this with the Noachian flood and believed that all of the sedimentary strata were deposited many long ages before this flood by a long series of supernaturally induced catastrophic floods and recreations of new forms of life.10 Another important development at this time in Britain was the establishment of the Geological Society of London in 1807. 
The 13 founding members were wealthy, cultured gentlemen, who lacked much in geological knowledge but made up for it by their enthusiasm to learn. They met monthly at the Freemason’s Tavern (until the society outgrew it) and after an expensive dinner they discussed the advancements of geology. The cost of membership and the initial restriction of membership to London residents were two reasons why most practical geologists associated with mining and road and canal building, such as William Smith, John Farey, and Robert Bakewell, did not become members.11 The stated purpose of the society was to gather and disseminate geological information, help standardize geological nomenclature, and facilitate cooperative geological work, though in fact it also sought, without much success, to be a stabilizing and regenerating socio-economic influence in the face of potential and actual French-style unrest in Britain.12 From its inception, it was dominated by men who held the old-earth view (the relation of Genesis to geology was never discussed in its public communications), though it did not overtly favor either uniformitarianism or catastrophism, as its first president and influential member, George Greenough, believed, on the basis of Bacon’s principles, that in the 1810s and 1820s it was too early in the data collection process to formulate theories of the earth. By the end of the 1820s the major divisions of the geological record were quite well defined. As Table 1 [below] shows, the primary rocks were the lowest and supposedly oldest and were mostly igneous or metamorphic rocks devoid of fossils. The secondary rocks were next and were predominantly sedimentary strata that were fossiliferous. The tertiary formations were above these, also containing many fossils, but which more closely resembled existing species. Lastly, were the most recent alluvial deposits of gravel, sands and boulders topped by the soils. 
In the early 1800s Georges Cuvier (1768–1832), the famous French comparative anatomist and vertebrate paleontologist, developed his theory of catastrophism13 as expressed in his Theory of the Earth (1813). This went through several English editions over the next 20 years, with an appendix (revised in each later edition) written by Robert Jameson, the leading Scottish geologist. The son of a Lutheran soldier, Cuvier sought to show a general concordance between science and religion.14 In his Theory, he seems to have treated post-flood biblical history fairly literally, but did not interact at all with the text of the scriptural accounts of the creation and the Flood. He reacted sharply against Lamarck’s evolutionary theory of the inheritance of acquired characteristics and his denial of extinctions. From his study of the fossils of large quadrupeds found in the strata of the Paris basin, Cuvier concluded that there had indeed been many extinctions, but not all at once. Rather, he theorized that in the past there had been many catastrophic floods. Like William Smith, he believed that each of the strata was characterized by wholly unique fauna. The fauna had appeared for a time and then were catastrophically destroyed and new life-forms arose. In opposing Lamarckian evolution, Cuvier presumably believed these new species were separate divine acts of special creation, but he did not explicitly explain this. He believed that earth history was very much longer than the traditional 6,000 years, but that the last flood had occurred only about 5,000 years ago. This obviously coincided with the date of Noah’s flood, but Cuvier never explicitly equated his last flood with it.15 These violent catastrophes were vast inundations of the land by the sea. But they were not necessarily global so that therefore whole species were not always eliminated in these catastrophes. According to Cuvier, man had first appeared sometime between the last two catastrophes. 
William Buckland (1784–1856) was the leading geologist in England in the 1820s and followed Cuvier in making catastrophism popular. Like many scientists of his day, he was an Anglican clergyman. He obtained readerships at Oxford University in mineralogy (1813) and geology (1818), and was a very popular lecturer. Two of his students, Charles Lyell and Roderick Murchison, went on to become very influential geologists in the 1830s and 1840s. In his efforts to get science, and especially geology, incorporated into university education (which was designed at the time to train ministers) Buckland published Vindiciae Geologicae (1820). Here he argued that geology was consistent with Genesis, confirmed natural religion by providing evidence of creation and God’s continued providence, and proved virtually beyond refutation the fact of the global, catastrophic Noachian flood. However, the geological evidence for the Flood was, in Buckland’s view, only in the upper formations and surface features of the continents; the secondary formations of sedimentary rocks were antediluvian by untold thousands of years or longer. To harmonize his theory with Genesis he considered the possibility of the day-age theory but favored the gap theory. Like Cuvier, he held to the theory of multiple supernatural catastrophes and creations and the recency of the appearance of man and the Flood. As a result of further field research, especially in Kirkdale Cave in Yorkshire, he published in 1823 his widely read Reliquiae Diluvianae, providing a further defense of the Flood. However, the uniformitarian criticisms of John Fleming and Charles Lyell eventually led Buckland to abandon this interpretation of the geological evidence. 
He publicized this change of mind in his famous two-volume Bridgewater Treatise on geology in 1836, where in only two brief comments he described the Flood as tranquil and geologically insignificant.17 Buckland showed in personal correspondence in the 1820s that, for him, geological evidence had a superior quality and reliability over textual evidence (e.g., the Bible) in reconstructing the earth’s history.18 In his view, this was because written records were susceptible to deception or error, whereas the rocks were truthful and could not be altered by man. Adam Sedgwick (1785–1873) was Buckland’s counterpart at Cambridge, receiving the chair of geology in 1818. Through the influence of these two and others (e.g., George Greenough, William Conybeare, Roderick Murchison, and Henry De la Beche), old-earth catastrophist (or diluvial) geology was widely accepted in the 1820s by most geologists and academic theologians. For several reasons most geologists at this time believed the earth was much older than 6,000 years and the Noachian flood was not the cause of the secondary and tertiary formations.19 First, it was believed that the primitive rocks were covered by an average of at least two miles of secondary and tertiary strata, in which was seen evidence of slow gradual deposition during successive periods of calm and catastrophe. Second, some strata were clearly formed from the violent destruction of older strata. Third, different strata contained different fossils; it was especially noted that strata with apparently terrestrial and fresh-water shells alternate with those containing marine shells and that strata nearest the surface contained land animals mixed with marine creatures. Fourth, generally speaking, it appeared that the lower the strata were, the greater was the difference between fossil and living species, which to old-earth geologists implied many extinctions as a result of a series of revolutions over a long time.
Fifth, the evidence that faults and dislocations occurred after the deposition and induration of many strata implied a lapse of time between the formation of the various strata. Finally, there was the fact that man was apparently only found fossilized in the most recent strata. From this evidence, the earth was believed to be tens of thousands, if not millions, of years old and the relatively recent Noachian flood was considered to be the cause only of the rounded valleys and hills carved into consolidated strata and of the loose gravels and boulders scattered worldwide over the surface of those strata.20 A massive blow to catastrophism came during the years 1830 to 1833, when Charles Lyell (1797–1875), a lawyer by training as well as a former student of Buckland, published his masterful three-volume work, Principles of Geology. Reviving the ideas of Hutton and stimulated by the writings of John Fleming, the Scottish minister and zoologist, and George Scrope, a member of Parliament and volcano expert, Lyell’s Principles set forth how he thought geology should be done. His theory was a radical uniformitarianism in which he insisted that only present-day processes at present-day rates of intensity and magnitude should be used to interpret the rock record of past geological activity. The uniformity of rates was an addition to Hutton’s theory and was the essential, distinctive feature of Lyell’s view. Although the catastrophist theory had greatly reduced the geological significance of the Noachian deluge and expanded earth history well beyond the traditional biblical view, Lyell’s work was the “coup de grace” for belief in the Flood,21 in that it explained the whole rock record by slow gradual processes (which included very localized catastrophes like volcanos and earthquakes at their present frequency of occurrence around the world), thereby reducing the Flood to a geological non-event. 
His theory also expanded the time of earth history even more than Cuvier or Buckland had done. Lyell saw himself as “the spiritual saviour of geology, freeing the science from the old dispensation of Moses.”22 Catastrophism did not die out immediately, although by the late 1830s few old-earth catastrophists in the United Kingdom, America, or Europe believed in a geologically significant Noachian deluge. Lyell’s uniformitarianism applied not only to geology, but to biology as well. Initially he had held to a sense of direction in the fossil record, but in 1827, after reading Lamarck’s work, he had chosen the steady-state theory that species had appeared and disappeared in a piecemeal fashion (though he did not explain how). Lamarck’s notion that man was simply a glorified orangutan was an affront to human dignity, thought Lyell. He held man alone to be a recent creation and even after finally accepting Darwinism he believed that the human mind could not be the result of natural selection. From the mid-1820s, geology was rapidly maturing as a science. Smith’s stratigraphic methodology (using fossils to correlate the strata) was applied more widely by a growing body of geologists to produce more detailed descriptions and maps of the geological record. There was still debate over the nature and origin of granite and although Cuvier’s interpretation of the Paris basin was widely accepted, it also was being challenged. By the early 1830s all the main elements of stratigraphic geology were established, and maps and journal articles became more technical as geology was making the transition from an amateur avocation to a professional vocation. The 1830s and 1840s saw much debate about the classification of the lowest fossiliferous formations (the Cambrian to Devonian) and the glacial theory began emerging to explain what the earlier catastrophists had attributed to the Flood. By the mid-1850s, all the main strata were identified and the nomenclature was standardized. 
However, none of these developments added any fundamentally new reasons for believing in a very old earth. So whether the scriptural geologists were arguing against the old-earth theory before or after Lyell’s Principles of Geology, they were dealing with the same basic arguments that had been dominant since around the turn of the century. So in the early 19th century there were three competing views of earth history. In response to these different old-earth theories, Christians were confronted with the choice of various ways of harmonizing them with Genesis. Many of these old-earth proponents believed in the inspiration, infallibility, and historical accuracy of Genesis. But they disagreed with the scriptural geologists about the correct interpretation, in some cases even the correct literal interpretation, of the Biblical text.
Rumours have been swirling for weeks that scientists have detected gravitational waves – tiny ripples in space and time – from a source other than colliding black holes. Now we can finally confirm that we’ve observed such waves produced by the violent collision of two massive, ultra-dense stars more than 100m light years from the Earth. The discovery was made on August 17 by the global network of advanced gravitational-wave interferometers – comprising the twin LIGO detectors in the US and their European cousin, Virgo, in Italy. It is hugely important, not least because it helps solve some big mysteries in astrophysics – including the cause of bright flashes of light known as “gamma ray bursts” and perhaps even the origins of heavy elements such as gold. As a member of the LIGO scientific collaboration, I was in raptures as soon as I saw the initial data. And the period that followed was definitely the most intense and sleep-deprived, but also incredibly exciting, two months of my career. The announcement comes just weeks after three scientists were awarded the Nobel Prize in Physics for their foundational work leading to the discovery of gravitational waves, first announced in February 2016. Since then, detecting gravitational waves from colliding black holes has started to feel like familiar territory – with four further such events detected. But as far as we know, colliding black holes offer purely a window on the dark side of the universe. We haven’t been able to register light from these events with any other instruments. But GW170817 – the catchy title for the event of August 17 – changes all that. That’s because the source of the waves this time was two “neutron stars” – incredibly dense stellar remnants the size of a city, each weighing more than the sun. These stars whizzed around each other at a sizeable fraction of the speed of light before merging in a cataclysmic collision that we’ve now seen shake the very fabric of space and time.
The cosmic concerto was just beginning, however. Astronomers have long suspected that the merger of two neutron stars could be the overture to a short gamma ray burst – an intense flash of gamma-ray light that releases more energy in a fraction of a second than the sun will pump out in ten billion years. For several decades we have observed these gamma ray bursts, but without knowing for sure what causes them. However, just 1.7 seconds after the gravitational waves from GW170817 arrived at the Earth, NASA’s Fermi satellite observed a short burst of gamma rays in the same general region of the sky. LIGO and Virgo had found the smoking gun, and the link between neutron star collisions and short gamma ray bursts was finally and clearly established. The combination of gravitational-wave and gamma-ray observations allowed the position of the cosmic explosion to be pinpointed to less than 30 square degrees on the sky – or about 100 times the size of the full moon. This, in turn, allowed a whole barrage of astronomical telescopes sensitive to light across the entire electromagnetic spectrum to search this small patch of sky for the aftermath of the explosion. And sure enough this was found – in an unfashionable backwater towards the edge of a fairly unassuming galaxy called NGC4993, in the constellation of Hydra. Over the next few days and weeks astronomers watched agog as the embers from the explosion glowed brightly and faded, beautifully matching the pattern expected for a so-called “kilonova”. This is produced when material rich in subatomic particles known as neutrons from the initial merger is ejected at great speed by the gamma ray burst. This ploughs into the surrounding region of space, triggering the production of heavy radioactive elements. These unstable elements typically split up (decay) to a stable state by emitting radiation. This is what causes the glow of the kilonova, which we have now confirmed by mapping it out in exquisite detail. 
Our observations also strongly support the theory that the stable end-products of these chains of reactions include copious amounts of precious metals like gold and platinum. While we’ve suspected neutron stars to be key to producing these elements in space, that hypothesis now looks a whole lot more convincing. Indeed, the kilonova that formed from the embers of GW170817 could have produced as much gold as the entire mass of the Earth – that is 1,000 trillion tonnes. By observing a kilonova “up close and personal” for the very first time, and seeing how well it fits into the unfolding astronomical storyboard that began with the neutron star merger, astronomers have taken a huge leap forward in our understanding of these violent cosmic events. The idea that we are all made of stardust is increasingly appreciated in popular culture – in everything from documentaries to song lyrics. But the mind-blowing concept that the gold in our wedding rings and Rolex watches is made of neutron stardust is about to catch on. Perhaps even more exciting, however, is the enormous potential now unlocked by this radical, new approach to studying the cosmos. By working together collaboratively – using instruments that operate not just across the entire spectrum of light but are sensitive to gravitational waves and even neutrinos too – astronomers are poised to fully open a completely new “multi-messenger” window on the universe, with many further discoveries to be made and cosmic mysteries to be solved. For example, we have already used our observations to make the first ever joint measurement of the expansion rate of the universe, using both gravitational waves and light. Our paper will appear in Nature on October 16. More results will also surely follow soon. The exciting new era of multi-messenger astronomy just started with a bang.
Milky Way: Does It Have A Habitable Zone? Measuring approximately 180,000 light years across, the Milky Way is enormous and contains 100 to 400 billion stars. However, what regions make up the habitable or uninhabitable zones in our galaxy, and is there a habitable zone at all? According to a source, the Milky Way has zones that are truly uninhabitable, and three such areas have been identified. Near the center of our galaxy, where the star density is greater, the combined radiation of all the stars present would make it highly unlikely for life to form. Regions closer to the galactic core are also uninhabitable because passing stars could disturb the Oort Cloud, the huge cloud of comets that surrounds the Sun, sending comets onto collision courses and leading to massive disruptions. The Milky Way's spiral arms are another uninhabitable zone because star formation is a more common occurrence there due to the increased density of the galaxy; newly forming stars pose the risk of blasting out dangerous radiation. Our planet is located around 27,000 light-years from the Milky Way's center and a whopping tens of thousands of light-years from its outer boundary. The location of Earth also means that its orbit does not cross the spiral arms, and at the same time, the planet is close enough to have profited from the solar system's activity when it gathered the elements needed for life. The Universe's earliest stars comprised only hydrogen, helium, and slight traces of other elements that remained from the Big Bang. It was only when the most massive stars exploded as supernovae that they spread heavier elements, like carbon, oxygen, gold and iron, into the surrounding areas. Many generations of stars sprinkled the solar nebula with their heavy elements, setting the stage for further evolution by providing the necessary raw materials. Therefore, as per studies, if the solar system were located further out, it wouldn't have been able to draw on the material left by the numerous generations of dead stars.
So, where or what is the galactic habitable zone? According to astrobiologists, the habitable zone of the galaxy could begin right outside the galactic bulge, i.e. around 13,000 light years from the Milky Way's center, and ends somewhere around 33,000 light years away from its center. Therefore, the approximations imply that the Earth just about makes the cut by being located 27,000 light-years from the center. At the moment, the theory has not been accepted by all astronomers, and more data and research are needed as evidence. However, for now, earthlings are still the lucky ones as, "it turns out you were super duper extra lucky. Right universe, right lineage, right solar system, right location in the Milky Way. You already won the greatest lottery in existence," a report stated.
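The figures above (an annulus from roughly 13,000 to 33,000 light-years from the galactic center, with Earth at about 27,000) can be turned into a minimal sketch of the "does Earth make the cut" check. This is only an illustration of the article's rough estimates, not settled astronomical boundaries:

```python
# Rough galactic habitable zone check, using the article's estimates.
# The boundaries (13,000-33,000 ly) are approximations from the text,
# not established values.
INNER_EDGE_LY = 13_000   # just outside the galactic bulge
OUTER_EDGE_LY = 33_000   # approximate outer limit

def in_galactic_habitable_zone(radius_ly: float) -> bool:
    """Return True if an orbital radius (in light-years from the
    galactic center) falls inside the estimated habitable annulus."""
    return INNER_EDGE_LY <= radius_ly <= OUTER_EDGE_LY

EARTH_RADIUS_LY = 27_000  # Sun's approximate distance from the center
print(in_galactic_habitable_zone(EARTH_RADIUS_LY))  # True: Earth just makes the cut
```

A radius deep in the bulge (say 5,000 ly) or far beyond the outer edge would fail the same check, which is the article's point about Earth's lucky location.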
We often hear that medieval guns were crude, inaccurate, and generally ineffective. One common mantra is that they were more dangerous to the wielder than the target. Yet one has to ask, why did armies across Europe incorporate them into their arsenals starting in the 14th century and keep using them until they eventually replaced the longbow and crossbow? Obviously early firearms had something to offer. But what? When I started researching my book Medieval Handgonnes: The First Black Powder Infantry Weapons, I was shocked that there was no other book written in English on the development of the gun before the matchlock. I found no books in any other language either. Guns are, after all, one of the most important inventions to come out of the Middle Ages. Until very recently historians just accepted the image of crude early handgonnes without testing it or even really thinking about it. While this made my research extra difficult, it also made it extra rewarding. [Image: Western Europe, 1390-1400. Courtesy PHGCOM] When I say "handgonne" I mean a handheld black powder weapon fired with a match or hot wire. There was no trigger, no automatic firing. Pretty primitive stuff! "Handgonne" was only one of many terms used in that era. In recent years reenactors and archaeologists have experimented with medieval recipes for gunpowder and tested replica handgonnes. They've found these weapons could punch through armor better than a longbow or crossbow, although they have a much shorter effective range. Scholars have long debated how effective a longbow arrow is against plate armor. Many believe arrows had more of a harassing effect, with lucky shots getting into eyeslits, joints, or hitting horses. The muzzle velocity of the bullet was higher than that of an arrow, making it more effective against armor, but since it wasn't as aerodynamic as an arrow it lost velocity pretty quickly. Accuracy was better than people assume. Skilled reenactors can hit a man-sized target most of the time even at 45 meters (49 yards).
Since men generally fought in large, compact masses, this made them even easier to hit. [Image: This 15th century hackbut ("hooked gun") braces against a loophole and crossbeam at Muider Castle, The Netherlands. Photo by Sean McLachlan] Of course longbows and crossbows had better range, speed, and accuracy. In medieval illustrations you often see bowmen and gonners standing together. This capitalized on the advantages of both weapons while negating the disadvantages. As the enemy advanced they were harassed by the bowmen, and when they got in close they were cut down by the gonners. Handgonnes truly came into their own in the 15th century, as improvements in gunpowder production made them more powerful and cheaper. Soon the ratio of guns to bows tilted in favor of the gun. While crossbows and the English longbow lasted on the battlefield for some time, the development of the matchlock in the late 15th century heralded a new era in warfare.
Scale of preference is an important concept of economics. Find out what it is and why you should learn about it. Economics is a part of modern life. In our world, it is almost impossible to live without basic knowledge of it. It can help not only in building your career, but also in everyday life. We face its issues daily. Thus, one of the important things is the scale of preference in economics. However, let’s first discover the general meaning of economics. There are many definitions, but in general, it might be described as the study of those factors which have an impact on income, wealth, and well-being. Besides, you need to know that it is a complex social science, which draws on math, statistics, several physical sciences, and such disciplines as law, politics, and so on. Basic concepts of economics The key thing is that the scale of preference is one of the concepts of economics. Let’s look at all the existing ones before we discuss this one in detail. They are: - Scarcity. It might be described as the limited supply of resources, which are used to satisfy unlimited wants. We can also say it is the inability of people to provide themselves with all the necessary and desirable things. Resources are considered scarce relative to the level of demand. Wants are endless and insatiable relative to resources; that’s why people have to prioritize. If they were not, there would be no economic problem at all. - Wants. It is quite obvious. They are the needs and desires of people to own goods and services that give them particular satisfaction. There are various groups of needs. Some of them are basic ones, such as food, accommodation, and clothing, while the others are additional. The latter may include different products, such as cars, furniture, electronics, etc. There are also wants in the form of services, which include medical assistance, tailoring, and some other things. The means of satisfying such needs are usually scarce. - Scale of Preference.
We have finally come to our issue. As resources are limited, people have to make choices. Thus, this concept is a list of unsatisfied wants, which are organized in the order of their importance for a particular person. We can just say they are arranged by priority. The most pressing want usually comes first, while the least pressing one comes last. Choice arises because desires are numerous and varied, while resources, in their turn, are scarce. - Choices. A choice is the act of selecting one thing out of several alternatives. People have to choose in order to satisfy their needs. The most essential things are preferred; their sales are usually much higher, and their manufacturing is more intense. - Opportunity cost. It is the expression of cost in terms of forgone alternatives. Thus, it is the satisfaction of somebody’s wants at the expense of other wants. This concept is connected with the desires which have been left unsatisfied in order to fulfill more pressing demands. It is vital to remember that it is not the same as money cost. The latter refers to the total amount of money spent to acquire certain goods or services. These are the most essential points of economics. People involved in any business always take them into account. The key thing is to remember that demand and supply are not equal. Importance of scale of preference in economics Along with the other concepts, it is vital to know about the importance of the scale of preference. There are several factors which make this issue essential. They are as follows: - It allows effective and efficient application of rare or limited resources. With its help, everyone can compare his or her needs and opportunities and decide what is more or less important. - With the help of such a scale, all the priorities (whether of individuals or firms) are properly organized and set.
- This concept also helps ordinary people get maximum satisfaction from limited resources. They understand which wants are more pressing and choose those, taking all the benefits. At the same time, they sacrifice less urgent ones. - A scale of preference allows people to make the right decision when it comes to the allocation of limited resources. - It gives an opportunity to make business more profitable, because sellers and manufacturers can better understand the buyers’ needs. There are many more factors which make this issue important in everyday life. It is better to know at least some of them and understand the term. Example of scale of preference in economics We have already described the term. Now we can look at a real life example and find out how it works. Consider the following situation: A farmer has 2,000 naira and wants to buy some food for the family and a new hat. The amount of food he requires costs 1,500 naira, while a new hat costs 700 naira. Thus, you can see that he has no opportunity to buy both things, simply because his total sum of money is not enough. However, he is able to buy one thing out of the two. Here a choice arises. The man should decide which one of his demands is more urgent and pressing. The answer is probably obvious. He can live without a new hat, but his family cannot live without food. Here, food is a basic need, while a new hat is an additional one. He chooses to spend the money on food and sacrifices his new hat. In fact, in this example, we can observe the whole system of economics. It contains every concept listed above, starting from the wants (food and a new hat) and ending with the opportunity cost (the farmer sacrifices his new hat to satisfy a more essential demand). With the help of such a simple example, you can understand the most essential and difficult economic processes. Economics is a very complex science. We face its concepts and processes every day in our life.
It is equally important to understand its issues both in everyday life and in business deals, as they allow you to get more profit. With the help of good examples anyone can understand what the scale of preference is. Moreover, taking into account its importance, you can take more benefit from limited resources and make more appropriate choices.
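The farmer example can be sketched as a simple priority-ranked allocation. This is only an illustrative sketch: the want names, prices, and budget come from the example above, while the greedy "satisfy the most pressing want first" rule is an assumption about how a scale of preference is applied in practice:

```python
# Illustrative sketch of a scale of preference: wants ranked by priority,
# satisfied greedily while the budget allows. Figures from the farmer example.
def allocate(budget, scale_of_preference):
    """scale_of_preference: list of (want, cost), ordered most to least pressing."""
    satisfied, forgone = [], []
    for want, cost in scale_of_preference:
        if cost <= budget:
            budget -= cost
            satisfied.append(want)
        else:
            forgone.append(want)  # the forgone want is the opportunity cost
    return satisfied, forgone, budget

# The farmer has 2,000 naira; food (1,500) ranks above a new hat (700).
satisfied, forgone, left = allocate(2000, [("food", 1500), ("new hat", 700)])
print(satisfied, forgone, left)  # ['food'] ['new hat'] 500
```

The output shows all the concepts at once: the wants are ranked, food is chosen, the hat is the opportunity cost, and 500 naira remains for other uses.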
Many efforts to smooth out the variability of renewable energy sources—such as wind and solar power—have focused on batteries, which could fill gaps lasting hours or days. But MIT's Charles Forsberg has come up with a much more ambitious idea: He proposes marrying a nuclear powerplant with another energy system, which he argues could add up to much more than the sum of its parts. Forsberg, a research scientist in MIT's Department of Nuclear Science and Engineering, describes the proposals in a paper published in the November issue of the journal Energy Policy. Now may be just the time for such new approaches, Forsberg says. "As long as you had inexpensive fossil fuels available for electricity demand, there was no reason to think about it," he says. But now, with the need to address climate change, curb greenhouse gas emissions, and secure greater energy independence, creative new ideas are at a premium. While nuclear plants are good at producing steady power at relatively low cost, their output cannot rapidly be ramped up and down. Meanwhile, renewable energy sources are also good at producing power at low operating cost, but their output is unpredictable. Fossil fuel plants can easily be switched on or off as needed, but have higher operating costs and produce greenhouse gas emissions. One solution, Forsberg suggests, is to find a way to divert excess power from a nuclear plant, making it a "dispatchable" source of electricity—one that can easily be ramped up and down to balance the disparities between production and demand. But what to do with that diverted power? The paper outlines three concepts, which Forsberg says could have potential in the coming decades. They involve pairing a nuclear plant with an artificial geothermal storage system, a hydrogen production plant, or a shale-oil recovery operation. 
The last of these ideas would locate a nuclear plant near a deposit of oil shale—a type of deposit, technically known as kerogen, that has not been used to date as a source of petroleum. Heated steam from a nuclear plant, in enclosed pipes, heats the shale; the resulting oil can be pumped out by conventional means. At first glance, that might sound like a "dirty" solution, enabling the use of more carbon-emitting fuel. But Forsberg suggests that it's quite the opposite: "When you heat it up, it decomposes into a very nice light crude oil, and natural gas, and char," he explains. The char—the tarlike residue that needs to be refined out from heavy crude oils—stays underground, he says. Today, the heating of the rock is usually accomplished by burning fossil fuels, making the process less efficient. That's where the excess heat from a nuclear plant comes in: By coupling the plant's steam output with a shale-oil well, the oil can be recovered without generating extra emissions. The process also does not need regular heat input: The nuclear plant can operate at a steady rate, providing electricity to the grid when needed, and heating oil shale at times of low electricity demand. This enables the nuclear plant to replace the burning of fossil fuels in producing electricity, further reducing the release of greenhouse gas. The world's largest oil-shale deposits are concentrated in the western United States. "We lucked out," Forsberg says. "This has the lowest carbon footprint of any source of liquid fossil fuel." The resource that could be unlocked is enormous, he says: "Some of these deposits would yield a million barrels per acre. There's no place else on Earth like it." Steven Aumeier, director of the Center for Advanced Energy Studies at the Idaho National Laboratory, says, "Many times the most formative game-changing approaches are not single new technologies, but rather novel ways of combining technologies. 
Hybrid energy systems could be a game-changing approach in enabling the cost-effective, secure, and high penetration of low-carbon energy into the economy." Aumeier adds that such systems would "afford a practical and regionally scalable means of using an 'all of the above' approach to energy security." Funding for the research was provided by the U.S. Department of Energy, the Idaho National Laboratory Institute for Nuclear Energy Science and Technology, and the French nuclear company Areva.
Matching - Halloween In this similar characters instructional activity, students match the 5 different Halloween pictures on the left of the instructional activity to the same pictures on the right by drawing a line from one to the other. Pre-K Visual & Performing Arts Picture Books and the Bill of Rights Students identify the basic freedoms of citizens in the United States. In this Bill of Rights lesson, students act out scenarios about the Bill of Rights. Students create a picture book describing the rights they've acted out. Pre-K - 2nd Social Studies & History Recognizing Positive Character Traits Connect character education to this St. Patrick's Day activity! Have each person create a rainbow on a pot of gold with construction paper, and on each color of the rainbow write a positive character trait or something they are proud of.... Pre-K - 3rd Visual & Performing Arts
Throw the bomb In this game approximately 5 to 10 students can join. Prepare a small sand bag or a ball. 1. Start by arranging the students in a circle. The distance between each student should be as far as possible. 2. The teacher stands in the middle of the circle and holds the ball/bag, which is the "bomb" in this game. Then the teacher randomly shouts out a topic such as colour, sport, place, food, etc. and throws the bag/ball to one of the students in the circle. 3. The student has to catch the bomb and answer with one word matching the topic within ten seconds (or longer, depending on the level of the students). If the answer is correct, he/she can throw the "bomb" to another student. Failing to say the right word in time, or failing to catch the "bomb", means losing the game (if the students are good enough, you can also disallow words that have already been used). 4. Play the game until only one student is left. That student is the winner.
Part of the Cool Sites series Many educators wonder how to inspire their students and motivate them to strive to learn. Standardized testing has burdened our students, and a majority walk into our classrooms probably dreading the school year. My SEETA course, How Do You Youtube?, inspired me to think about the incredible benefits of using stop motion video for student motivation. Stop motion films can inspire your students because these films are creative, visually stimulating, and often feature incredible music. Here are three of my favorites from Vimeo. Visit the links to read about the making of these films, how they were used, and how the artist came up with the storyline: Here are four places you can find several stop motion videos to inspire your students. - 50 Stunning and Inspirational Stop Motion Videos - The Animation Telling Stories - The Hyde Tube - My Digital Storytelling Ideas album on Vimeo Educational uses of these videos include having students: - discuss one of these videos as an icebreaker to a lesson involving digital storytelling - journal a response to one of these videos - work in groups to create the dialogue. The students can also create the subtitles for the video using Overstream or video editing software. - choose their favorite video to discuss in a round table discussion - work in groups and choose their favorite video to mimic One of the best ways is to have students work in groups to create a stop motion video. This can be a part of any curriculum. Have students create one to show a real world use of the learning. This can apply to math word problems, explaining scientific theory, defining an idiom, and so forth. You can also use a variety of materials. Below is an example of a stop motion film Maryna Badenhorst’s students created using clay. In the video, Daylight, the students put a nice twist on the Romeo and Juliet story.
If you work with little ones like I do, then consider having them draw and color the scenes to a group story. The video, Chiarastella, is a great example of a class story created with student drawings and voices. Visit the website which tells you more about this cultural holiday in Venice where students go caroling door-to-door to raise money. Tips and Resources You will find the following resources valuable in creating and planning your stop motion film: - The Making of Chiarastella– Watch this documentary to find out how Raffaella Traniello used free software to create this class video. - Making Stopmotion Movies– This website provides you with resources on character development, creating storyboards, and more. - Maryna Badenhorst’s list of claymation resources and tips to get your students started in creating their own claymation videos! - The BBC’s Me and My Movie– This website shows numerous examples of movies made by children and provides tips on how they can make their own. The site takes time to download. - Video: The making of a stop motion film using Post-its. - Animata– Free software for editing and creating brilliant animation films. - Scroll down to the bottom of this post to find several free video editing software programs. Have students create stop motion films. What are your favorite stop motion films? How could you use this in your class?
There are several methods to graph a linear equation; in these lessons, we will work through the main ones. Example: Draw the line with equation y = 2x – 3. Try to choose values that will give whole numbers to make it easier to plot. Step 1: Let x = 0, so y = 2(0) – 3 = –3. Step 2: Let x = 2, so y = 2(2) – 3 = 1. Step 3: Plot the two points on the Cartesian plane. Step 4: Draw a straight line passing through the two points. This video shows how to use a t-table to graph linear equations. The following video shows how to graph a linear equation by plotting points. The x-intercept is where the line crosses the x-axis. At this point, the y-coordinate is 0. The y-intercept is where the line crosses the y-axis. At this point, the x-coordinate is 0. When an equation is written in general form, it is easier to graph the equation by finding the intercepts. Step 1: Find the x-intercept: let y = 0, then substitute 0 for y in the equation and solve for x. Step 2: Find the y-intercept: let x = 0, then substitute 0 for x in the equation and solve for y. Step 3: Plot the intercepts, label each point, and draw a straight line through these points. This video shows how to find intercepts and use intercepts to graph a linear equation. This video shows how to graph linear functions by finding the x-intercept and y-intercept of the function. The slope-intercept form is y = mx + b, where m is the slope of the line and b is the y-intercept. Step 1: Find the y-intercept and plot the point. Step 2: From the y-intercept, use the slope to find the second point and plot it. Step 3: Draw a line to connect the two points. In this video, we will learn about the slope-intercept formula. We look at what slope and intercept mean as well as how to graph the equation.
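The plotting and intercept steps above can be checked with a short script (Python here, as a sketch; the function and variable names are illustrative, not from the lesson):

```python
# Check the worked example: graph y = 2x - 3 by plotting two points,
# then find the intercepts as in the general method above.

def f(x):
    """The line from the lesson: y = 2x - 3."""
    return 2 * x - 3

# Steps 1 and 2: choose x-values that give whole-number y-values.
points = [(x, f(x)) for x in (0, 2)]

# y-intercept: let x = 0 and evaluate.
y_intercept = f(0)

# x-intercept: let y = 0 and solve 2x - 3 = 0 for x.
x_intercept = 3 / 2

print(points)        # [(0, -3), (2, 1)]
print(x_intercept)   # 1.5
print(y_intercept)   # -3
```

Plotting the two computed points and drawing the line through them reproduces Steps 3 and 4 by hand.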
March 31, 1889 – The Eiffel Tower is officially opened. The Eiffel Tower is a wrought-iron lattice structure on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower. Constructed in 1889 as the entrance to that year’s World’s Fair, it was initially criticized by some of France’s leading artists and intellectuals for its design, but has become a global cultural icon of France and one of the most recognizable structures in the world. The tower is the tallest structure in Paris and the most-visited paid monument in the world: 6.98 million people ascended it in 2011. The tower received its 250 millionth visitor in 2010. The tower is 324 meters (1,063 ft) tall, about the same height as an 81-story building. Its base is square, 125 meters (410 ft) on a side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was built in 1930. Due to the addition of the aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 meters (17 ft). Not including broadcast aerials, it is the second-tallest structure in France, after the Millau Viaduct. The tower has been used for making radio transmissions since the beginning of the 20th century. Until the 1950s, sets of aerial wires ran from the cupola to anchors on the Avenue de Suffren and Champ de Mars. These were connected to longwave transmitters in small bunkers. In 1909, a permanent underground radio centre was built near the south pillar, which still exists today. On November 20, 1913, the Paris Observatory, using the Eiffel Tower as an aerial, exchanged wireless signals with the United States Naval Observatory, which used an aerial in Arlington, Virginia. The object of the transmissions was to measure the difference in longitude between Paris and Washington, D.C.
Today, radio and television signals are transmitted from the Eiffel Tower. On May 5, 1939, France issued a semi-postal stamp, 90-centimes + 50 centimes, commemorating the 50th anniversary of the tower (Scott No. B85):
As the final ice age came to an end, the weather in North America became warmer. Over many centuries, the descendants of the people who crossed the Bering land bridge traveled farther and farther south, then east. They were always searching for new animals to hunt, in part because the weather changes were causing the mammoths and other large prehistoric animals to die out. By about 11,000 B.C., there were Native Americans living in all areas of the continent, all surviving by different methods of hunting, fishing, and gathering wild plants. Slowly, as they adapted to their new environments, different groups of people began to develop different cultures.
The Red-tailed hawk ranges throughout North America to central Alaska and northern Canada, and south as far as the mountains of Panama. Although not truly migratory, they do adjust seasonally, moving to the areas of most abundant prey. In winter many of the northern birds move south. Hawks are carnivores (meat eaters) that belong to the category of birds known as raptors — birds of prey. They have strong, hooked beaks; their feet have three toes pointed forward and one turned back; and their claws, or talons, are long, curved and very sharp. Prey is killed with the long talons and, if it is too large to swallow whole, it is torn into bite-sized pieces.
Simple direct-current motors consist of a magnet or electromagnet (the stator), and a coil (the rotor) which turns when a current is passed through it because of the force between the current and the stator field. So that the force keeps the same sense as the rotor turns, the current to the rotor is supplied via a commutator – a slip-ring broken into two semicircular parts, to each of which one end of the coil is connected, so that the current direction is reversed twice each revolution. For use with alternating-current supplies, small DC motors are often still suitable, but induction motors are preferred for heavier duty. In the simplest of these, there is no electrical contact with the rotor, which consists of a cylindrical array of copper bars welded to end rings. The stator field, generated by more than one set of coils, is made to rotate at the supply frequency, inducing (see electromagnetic induction) currents in the rotor when (under load) it rotates more slowly, these in turn producing a force accelerating the rotor. Greater control of the motor speed and torque can be obtained in "wound rotor" types in which the currents induced in coils wound on the rotor are controlled by external resistances connected via slip-ring contacts. In applications such as electric clocks, synchronous motors, which rotate exactly in step with the supply frequency, are used. In these the rotor is usually a permanent magnet dragged round by the rotating stator field, the induction-motor principle being used to start the motor. The above designs can all be opened out to form linear motors producing a lateral rather than rotational drive. The induction type is the most suitable, a plate analogous to the rotor being driven with respect to a stator generating a laterally moving field. 
Such motors have a wide range of possible applications, from operating sliding doors to driving trains, being much more robust than rotational drive systems, and offering no resistance to manual operation in the event of power cuts. A form of DC linear motor can be used to pump conducting liquids such as molten metals, the force being generated between a current passed through the liquid and a static magnetic field around it.
- slide 1 of 6 Pitot tubes are used in a variety of applications for measuring fluid velocity. This is a convenient, inexpensive method for measuring velocity at a point in a flowing fluid. Pitot tubes (also called pitot-static tubes) are used, for example, to make airflow measurements in HVAC applications and for aircraft airspeed measurements. - slide 2 of 6 Static Pressure, Stagnation Pressure and Dynamic Pressure - Definitions Static pressure is what is commonly called simply the pressure of the fluid. It’s a measure of the amount that fluid pressure exceeds local atmospheric pressure. It is measured through a flat opening that is parallel with the fluid flow. Static pressure measurement is illustrated with the first U-tube manometer in the diagram at the left. Stagnation pressure is also a measure of the amount that fluid pressure exceeds local atmospheric pressure, but it includes the effect of the fluid velocity converted to pressure. It is measured through a flat opening that is perpendicular to the direction of fluid flow and facing into the fluid flow. Stagnation pressure (also called total pressure) measurement is illustrated with the second U-tube manometer in the diagram at the left. Dynamic pressure (also called velocity pressure) is a measure of the amount that the stagnation pressure exceeds static pressure at a point in a fluid. It can also be interpreted as the pressure created by reducing the kinetic energy to zero. Its measurement is illustrated with the third U-tube in the diagram at the left. - slide 3 of 6 Static Pressure, Stagnation Pressure and Dynamic Pressure - Relationships The symbol, P, is often used for static pressure. Dynamic pressure is given by the expression ½ρV².
The stagnation pressure is then given by the following equation: Pstag = P + ½ρV² + γh Where: ρ is the fluid density (slugs/ft³), γ is the specific weight of the fluid (lb/ft³), h is the height above a specified reference plane (ft), V is the average velocity of the fluid (ft/sec). With the specified units for the other parameters, pressure will be in lb/ft². - slide 4 of 6 Velocity Measurement with a Pitot Tube For pitot tube measurements and calculations, the reference plane is taken to be at the height of the pitot tube measurements, so the equation for stagnation pressure becomes: Pstag = P + ½ρV², which can be rearranged to: V = (2ΔP/ρ)^(1/2) Where ΔP = Pstag – P. The pressure difference, ΔP (or Pstag – P), can be measured directly with a pitot tube like the third U-tube in the figure above, or with a pitot tube like that shown in the figure at the right. This is a concentric pitot tube. The inner tube has a stagnation pressure opening (perpendicular to the fluid flow) and the outer tube has a static pressure opening (parallel with the fluid flow). - slide 5 of 6 Consider a pitot tube being used to measure air velocity in a heating duct. The air is at 85 °F and 16 psia. The pitot tube registers a pressure difference of 0.021 inches of water (Pstag – P). Calculate the velocity of the air at that point in the duct. Solution: Convert the pressure difference of 0.021 inches of water to lb/ft² (psf) using the conversion factor 5.204 psf/in water. 0.021 inches of water = (0.021)(5.204) psf = 0.1093 psf The density of air at 85 °F and 16 psia can be calculated using the ideal gas law to be 0.002468 slugs/ft³. (See the article, "Use the Ideal Gas Law to Find the Density of Air at Different Pressures and Temperatures," for more information.)
Now V can be calculated: V = (2ΔP/ρ)^(1/2) = [(2)(0.1093)/0.002468]^(1/2) = 9.41 ft/sec - slide 6 of 6 For a fluid with known density and measured difference between stagnation pressure and static pressure (ΔP), as measured with a pitot tube, the fluid velocity can be calculated with the equation: V = (2ΔP/ρ)^(1/2).
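The whole calculation can be wrapped in a few lines of code (Python here, as a sketch; the function name is mine, and the numbers are the worked example's values):

```python
import math

def pitot_velocity(delta_p, rho):
    """Fluid velocity from a pitot tube reading: V = (2*delta_p/rho)**0.5.

    delta_p: Pstag - P in lb/ft^2 (psf); rho: density in slugs/ft^3.
    Returns velocity in ft/sec.
    """
    return math.sqrt(2 * delta_p / rho)

# Worked example from above:
delta_p = 0.021 * 5.204   # 0.021 in. of water converted to psf
rho_air = 0.002468        # slugs/ft^3 at 85 F and 16 psia (ideal gas law)
v = pitot_velocity(delta_p, rho_air)
print(round(v, 2))        # 9.41 ft/sec
```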
Countable nouns are the names of separate people or objects which we can count. Uncountable nouns are the names of materials, liquids and other things which we do not see as separate objects. We can use the indefinite article (a/an) with singular countable nouns. A plural countable noun cannot be used with the indefinite article. Countable nouns (both singular and plural) can also be used with numbers. - A cat - Two cats - A boy - Two boys We cannot use the indefinite article or numbers with uncountable nouns. - Water (NOT a water) (NOT two waters) - Weather (NOT a weather) (NOT two weathers) A singular countable noun usually has an article or other determiner with it. We say the cat, my cat or this cat, but not just cat. Plural and uncountable nouns can be used with or without an article or other determiner. Many nouns which are normally uncountable are treated as countable in some cases. - Have you got a good shampoo? (Although shampoo is an uncountable noun, it is treated as countable to express the meaning of ‘a type of’.) - Three coffees, please. (= three cups of coffee) Some nouns that are countable in other languages are uncountable in English. Examples are: information, advice, news, scenery, accommodation etc.
Erbium is a chemical element with the symbol Er and atomic number 68. Erbium is a rare, silvery-white metallic lanthanide (an element having an atomic number between 57 and 71). It is solid in its natural state and is commonly found with several other elements in the mineral gadolinite, notably at Ytterby, Sweden. Erbium was discovered by Carl Gustaf Mosander in 1843. Mosander separated the yttria obtained from gadolinite into three components he called erbia, terbia, and yttria. Erbia and terbia, however, were confused at that time. After 1860, terbia was renamed erbia, and after 1877 what had been known as erbia was renamed terbia. Erbium oxide was isolated in 1905, and pure erbium metal wasn’t produced until 1934. It wasn’t until the 1990s that erbium oxide would be cheap enough to be used as a colorant in art glass. Erbium is never found as a free element in nature. It is found bound in monazite sand ores. It was historically difficult to separate rare earth metals from one another until the invention of ion exchange techniques in the 1950s, which also made it cheaper to produce pure metals and their compounds. The principal sources of erbium are the minerals xenotime, euxenite, and gadolinite. It has also been found recently in the clays of southern China. Two-thirds of the ore produced in China, though, is yttrium; only about 5% is erbia. The total concentration of erbium found in the Earth’s crust is less than one ounce per ton of earth. Erbium in its pure state is pliable (easily shaped). It is very soft, yet stable in air. It does not oxidize as quickly as other rare-earth metals. The salts of erbium are rose colored. The oxidized state of erbium is called erbia. Erbium doesn’t have any known biological roles, although it does stimulate metabolism. Erbium can be used as an optical medium, or for lasers and optical amplifiers. Erbium is ferromagnetic (strongly magnetic) below about -425 degrees Fahrenheit.
Between -425 and -315 degrees Fahrenheit it becomes antiferromagnetic (neighboring atoms point in opposite directions), and at temperatures above -315 degrees Fahrenheit, erbium becomes paramagnetic (only weakly magnetic). Naturally occurring erbium has 6 known stable isotopes. Erbium-166 is the most abundant (33% natural abundance). It has 29 radioisotopes, with the most stable being erbium-169 with a half-life of 9.4 days. The rest have half-lives less than 50 hours, with most having half-lives less than 4 hours, and the majority of these have half-lives less than 4 minutes. Erbium has 13 meta states. The isotopes of erbium range in atomic mass from 142.9663 to 176.9541. Erbium is used in several applications. It is commonly used as a photographic filter and a metallurgical additive. It is used in nuclear technology as a nuclear poison. It is commonly used in its erbia state as a colorant for glass, cubic zirconia, and porcelain. It is also used in sunglasses and cheap jewelry. Erbium compounds have little to moderate toxicity, although their toxicity has yet to be investigated in detail. The dust form of metallic erbium presents a fire and explosion hazard.
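The half-life figures above follow the standard exponential decay rule: after t days, the surviving fraction of a radioisotope is 0.5 raised to (t divided by the half-life). A quick sketch (Python; illustrative only, not from the article):

```python
# Fraction of a radioisotope remaining after t days, given its half-life.
# Default is erbium-169's half-life of 9.4 days, quoted above.

def fraction_remaining(t_days, t_half=9.4):
    """Surviving fraction after t_days for a half-life of t_half days."""
    return 0.5 ** (t_days / t_half)

print(round(fraction_remaining(9.4), 3))   # 0.5 after one half-life
print(round(fraction_remaining(28.2), 4))  # 0.125 after three half-lives
```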
New research describes how enzymes come together to metabolize sugar at the cellular level. For sugars to metabolize and provide energy to the cells, a series of enzymes—biological catalysts—must each, in turn, break down a reactant. In this case, the researchers used glucose, the sugar found in corn syrup and one of the two sugars that result when table sugar—sucrose—breaks down in the body. In this cascade, the first enzyme acts on the glucose supplied to the cell and the subsequent enzymes work on successive products. In the process, two adenosine triphosphate molecules—ATP—are consumed but four are produced. The hydrolysis of ATP powers many cellular processes to maintain the cell’s viability. Similar enzyme cascades are responsible for many metabolic processes in the body. Enzymes that participate in such reaction pathways have in some cases been shown to form intracellular, reversible complexes termed metabolons by the late Paul Srere of the University of Texas Southwestern Medical School. Having the enzymes in proximity to one another facilitates the series of reactions they catalyze. One such example is the purinosome discovered in chemist Stephan J. Benkovic’s Lab at Penn State that consists of six enzymes involved in the biosynthesis of purines. The researchers asked whether one of the factors contributing to metabolon formation could be a gradient of chemicals generated by the participating enzymes. “We discovered some time ago that simple catalyst molecules such as enzymes will also chemotax up the gradient of a reactant,” says Ayusman Sen, professor of chemistry at Penn State. “They move toward higher and higher concentrations of reactant.” The movement is termed chemotaxis, where individual molecules migrate along a concentration gradient of other molecules. “All living things chemotax,” says Sen. “If you are hungry and suddenly smell French fries, you will try to walk toward the fries. 
If the smell decreases, you will randomly turn to try to find the higher concentration of French-fry odor molecules until you are at the French-fry counter.” In their study, the researchers used only the first four enzymes of the glycolytic pathway—hexokinase, phosphoglucose isomerase, phosphofructokinase, and aldolase. These four steps actually consume ATP. To study the movement of the enzymes, the researchers used fluorescent tagging of hexokinase and aldolase, the first and fourth enzymes in the pathway. Each was tagged with a different fluorescent dye so the movement of both enzymes could be followed. They looked at three cases—the normal reaction where hexokinase phosphorylates glucose; the reaction of hexokinase with mannose, a sugar that binds more strongly but has a slower reaction rate; and finally, with L-glucose, a form of glucose that is not used by hexokinase. The phosphorylation requires ATP. The presence of phosphoglucose isomerase—the second enzyme—and phosphofructokinase—the third enzyme—starts the production of the reactant for aldolase. The researchers observed that the aldolase moves towards the hexokinase in their flow experiment, revealing that aldolase was chemotaxing up the reactant gradient produced by the functioning of the first three enzymes in the pathway. The chemotaxis was greatest with D-glucose, less with mannose and not observed with L-glucose. Theoretical modeling of the enzyme movement qualitatively predicted the extent of enzyme movement. Metabolons remain mysterious The researchers also looked at whether chemotaxis of enzymes would occur in a model of the exceptionally crowded intracellular environment. They added a large molecular weight substance to simulate such crowding. Chemotaxis still occurred, but at a slower rate. “Chemotaxis along a chemical gradient could be a factor in assembly of enzyme clusters such as metabolons,” says Benkovic. 
“Other factors, such as noncovalent interactions would still be expected to contribute.” The resolution of the research instrument, however, was insufficient to demonstrate in this case that the four enzymes were assembling into a metabolon. The researchers observed the formation of large aggregates of enzymes, but could not demonstrate they were functioning metabolons. The researchers report their results in the journal Nature Chemistry. Additional researchers contributing to this project are from Penn State; Columbia University; and the University of California, San Diego. The National Science Foundation and the Defense Threat Reduction Agency supported this work. Source: Penn State
The origin and nature of Mars is mysterious. It has geologically distinct hemispheres, with smooth lowlands in the north and cratered, high-elevation terrain in the south. The red planet also has two small oddly-shaped oblong moons and a composition that sets it apart from that of the Earth. New research by University of Colorado Boulder professor Stephen Mojzsis outlines a likely cause for these mysterious features of Mars: a colossal impact with a large asteroid early in the planet’s history. This asteroid – about the size of Ceres, one of the largest asteroids in the Solar System – smashed into Mars, ripped off a chunk of the northern hemisphere and left behind a legacy of metallic elements in the planet’s interior. The crash also created a ring of rocky debris around Mars that may have later clumped together to form its moons, Phobos and Deimos. The study appeared online in the journal Geophysical Research Letters, a publication of the American Geophysical Union, in June. “We showed in this paper — that from dynamics and from geochemistry — that we could explain these three unique features of Mars,” said Mojzsis, a professor in CU Boulder’s Department of Geological Sciences. “This solution is elegant, in the sense that it solves three interesting and outstanding problems about how Mars came to be.” Astronomers have long wondered about these features. Over 30 years ago, scientists proposed a large asteroid impact to explain the disparate elevations of Mars’ northern and southern hemispheres; the theory became known as the “single impact hypothesis.” Other scientists have suggested that erosion, plate tectonics or ancient oceans could have sculpted the distinct landscapes. Support for the single impact hypothesis has grown in recent years, supported by computer simulations of giant impacts. Mojzsis thought that by studying Mars’ metallic element inventory, he might be able to better understand its mysteries. 
He teamed up with Ramon Brasser, an astronomer at the Earth-Life Science Institute at the Tokyo Institute of Technology in Japan, to dig in. The team studied samples from Martian meteorites and realized that an overabundance of rare metals — such as platinum, osmium and iridium — in the planet’s mantle required an explanation. Such elements are normally captured in the metallic cores of rocky worlds, and their existence hinted that Mars had been pelted by asteroids throughout its early history. By modeling how a large object such as an asteroid would have left behind such elements, Mojzsis and Brasser explored the likelihood that a colossal impact could account for this metal inventory. The two scientists first estimated the amount of these elements from Martian meteorites, and deduced that the metals account for about 0.8 percent of Mars’ mass. Then, they used impact simulations with different-sized asteroids striking Mars to see which size asteroid accumulated the metals at the rate they expected in the early solar system. Based on their analysis, Mars’ metals are best explained by a massive meteorite collision about 4.43 billion years ago, followed by a long history of smaller impacts. In their computer simulations, an impact by an asteroid at least 1,200 kilometers (745 miles) across was needed to deposit enough of the elements. An impact of this size also could have wildly changed the crust of Mars, creating its distinctive hemispheres. In fact, Mojzsis said, the crust of the northern hemisphere appears to be somewhat younger than the ancient southern highlands, which would agree with their findings. “The surprising part is how well it fit into our understanding of the dynamics of planet formation,” said Mojzsis, referring to the theoretical impact. 
“Such a large impact event elegantly fits in to what we understand from that formative time.” Such an impact would also be expected to have generated a ring of material around Mars that later coalesced into Phobos and Deimos; this explains in part why those moons are made of a mix of native and non-Martian material. In the future, Mojzsis will use CU Boulder’s collection of Martian meteorites to further understand Mars’ mineralogy and what it can tell us about a possible asteroid impact. Such an impact should have initially created patchy clumps of asteroid material and native Martian rock. Over time, the two material reservoirs became mixed. By looking at meteorites of different ages, Mojzsis can see if there’s further evidence for this mixing pattern and, therefore, potentially provide further support for a primordial collision. “Good theories make predictions,” said Mojzsis, referring to how the impact theory may predict Mars’ makeup. By studying meteorites from Mars and linking them with planet-formation models, he hopes to better our understanding of how massive, ancient asteroids radically changed the red planet in its earliest days. News source: CU Boulder. The content is edited for length and style purposes. Figure legend: This Knowridge.com image is credited to NASA.
What is the swine flu? The swine influenza A (H1N1) virus that has infected humans in the U.S. and Mexico is a novel influenza A virus that has not previously been identified in North America. This virus is resistant to the antiviral medications amantadine (Symmetrel) and rimantadine (Flumadine), but is sensitive to oseltamivir (Tamiflu) and zanamivir (Relenza). Investigations of these cases suggest that ongoing human-to-human transmission of swine influenza A (H1N1) virus is occurring. What are the symptoms of swine flu? Although uncomplicated influenza-like illness (fever, cough or sore throat) has been reported in many cases, mild respiratory illness (nasal congestion, rhinorrhea) without fever and occasional severe disease also has been reported. Other symptoms reported with swine influenza A virus infection include vomiting, diarrhea, myalgia, headache, chills, fatigue, and dyspnea. Conjunctivitis is rare, but has been reported. Severe disease (pneumonia, respiratory failure) and fatal outcomes have been reported with swine influenza A virus infection. The potential for exacerbation of underlying chronic medical conditions or invasive bacterial infection with swine influenza A virus infection should be considered. How can swine (H1N1) flu be prevented? The best way to prevent swine flu would be the same best way to prevent other influenza infections, and that is vaccination. When a safe vaccine is developed (projected to happen in a few months), people should get vaccinated if the disease is still causing infections. The CDC says that a good way to prevent any flu disease is to avoid exposure to the virus; this is done by frequent hand washing, not touching your hands to your face (especially the nose and mouth), and avoiding any close proximity to or touching any person that may have flu symptoms.
Since the virus can remain viable and infectious for about 48 hours on many surfaces, good hygiene and cleaning with soap and water or alcohol-based hand disinfectants are also recommended. Some physicians say face masks may help prevent getting airborne flu viruses (for example, from a cough or sneeze), but others think the better use for masks would be on those people who have symptoms and sneeze or cough. The use of Tamiflu or Relenza may help prevent the flu if taken before symptoms develop or reduce symptoms if taken within about 48 hours after symptoms develop. However, taking these drugs is not routinely recommended for prevention because investigators suggest that, as occurs with most drugs, flu strains will develop resistance to these medications. Your doctor should be consulted before these drugs are prescribed. In general, preventive measures to prevent the spread of flu are often undertaken by those people who have symptoms. Symptomatic people should stay at home, avoid crowds, and take off from work or school until the disease improves or medical help is sought. Sneezing, coughing, and nasal secretions need to be kept away from other people; simply using tissues and disposing of them will help others. Quarantining patients is usually not warranted, but such measures depend on the severity of the disease.
The reason power chords are used more extensively in rock, usually with distortion, is this: A note put through a distortion unit will sound like a major chord. Let's look at why this is. We need to cover harmonics, scale theory and the properties of distortion units to do this. Here we go... Harmonics are the multiples of any fundamental note. For example, an open A string at 110Hz will have a second harmonic at 220Hz, a third at 330Hz, a fourth at 440Hz, a fifth at 550Hz and so on. On the guitar, you can produce the nth harmonic by lightly touching the string at 1/n of its length and plucking it. Touch a string very lightly above the twelfth fret and pluck it - you get the second harmonic, which is exactly one octave higher (see below). Doing the same above the seventh fret gets you the third harmonic, which is almost exactly an octave-and-fifth. You can also touch the string at any of the n equally spaced points - so you could touch at 2/3 the length, over the nineteenth fret, and get the same note. Of course, touching at 2/4 of the length will get you the second harmonic, not the fourth (as 2/4 = 1/2). You can demonstrate what is happening with a rope. Get someone to hold one end. You hold the other and start to swing the rope like a skipping rope at the frequency that feels most natural. Now try doubling, tripling or quadrupling that frequency. See the stationary points (nodes)? On the guitar, those nodes are where your damping finger is. The common scale in Western music is made up of octaves (doublings of frequency) each divided into twelve equally-spaced semitones (although read up on tempering). The ratio between each semitone is therefore the twelfth root of two (hereafter referred to as r), which is about 1.059. Solid-state distortion units tend to add odd nth harmonics (third, fifth etc), decreasing fairly rapidly as n increases. This distortion tends to sound harsh. Valve distortion tends to boost the even harmonics, which sound warmer.
Relationship between harmonics and notes

OK - so what note does the nth harmonic relate to? We need to find out how many semitones correspond to a multiple of n. This is: x = log(n) / log(r)

| n | x (semitones) | interval |
|---|---------------|----------|
| 2 | 12 | exactly an octave |
| 3 | 19.02 | octave + fifth, slightly sharp |
| 4 | 24 | exactly two octaves |
| 5 | 27.86 | two octaves + major third, fairly flat |
| 6 | 31.02 | two octaves + fifth, slightly sharp |
| 7 | 33.69 | two octaves + dominant seventh, quite flat |

...and so on. Usually, anything above the fifth harmonic is quiet enough that it merely forms part of the sound of the instrument, distinguishing it from a sine wave, rather than being heard as a distinct pitch. Using this, we see that a distorted power chord of C will contain C and G as fundamentals, G and D as third harmonics and E and B as fifth harmonics. This gives a power chord the feel of an unbalanced Cmaj9 chord. Adding your own third would muddy this further. This also shows why distorted minor chords are even grungier than major chords. This writeup is now online at http://tranchant.plus.com/guitar/power-chords
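As a quick illustration (not part of the original writeup), the formula x = log(n) / log(r) can be evaluated for the first few harmonics in a few lines of Python:

```python
import math

# Semitone ratio in twelve-tone equal temperament: r = 2**(1/12) ~ 1.059
r = 2 ** (1 / 12)

def harmonic_in_semitones(n):
    """How many equal-tempered semitones the nth harmonic lies
    above the fundamental: x = log(n) / log(r)."""
    return math.log(n) / math.log(r)

for n in range(2, 8):
    x = harmonic_in_semitones(n)
    nearest = round(x)
    # Positive offset = sharp of the nearest semitone, negative = flat.
    print(f"harmonic {n}: {x:6.2f} semitones (off by {x - nearest:+.2f})")
```

Running this reproduces the table: the 2nd and 4th harmonics land exactly on 12 and 24 semitones, while the 5th harmonic comes out about 0.14 semitones flat of a major third.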
- Lungs: The lungs’ primary function is gas exchange: oxygen is delivered to the tissues and carbon dioxide is removed from them. Breathing is an automatic, rhythmic mechanical process that delivers O2 to the tissues and removes CO2 from the tissues.
- Alveoli: The exchange of gases between the external environment and the cells of the body takes place in the individual alveolus. Oxygen and carbon dioxide exchange passively between the pulmonary capillaries and the alveoli; these gases move along their partial pressure gradients, i.e., from high to low.

Functions of the Respiratory System

- Protection: Cilia, both in the upper airways and trachea, beat and move mucus continually toward the mouth.
- Macrophages: Alveolar macrophages phagocytose inhaled particulate matter and pathogens.
- Thermoregulation: Heat loss from the respiratory system helps the body regulate internal body temperature.
- Differential pressure during inspiration: At the end of expiration, just before the beginning of inspiration, the pressure inside the lung is the same as the atmospheric pressure outside the body. When the diaphragm actively contracts, the internal lung volume increases and the pressure inside the lung decreases. This change in internal pressure causes air to rush into the lungs, down the pressure gradient.
- Differential pressure during expiration: At the end of inspiration, the diaphragm relaxes passively. The lung volume decreases, and this causes the internal pressure inside the lungs to increase to a level higher than the atmospheric pressure outside the body.
- Lung elasticity and surface tension effects: the ability of the lungs’ elastic tissue to recoil during expiration. Elastins are elastic fibers present in the walls of the alveoli, which allow the lungs to return to their resting volume after expiration.
- Pulmonary surfactant: Pulmonary surfactant is a phospholipid, similar to those found in the lipid bilayer surrounding human cells.
It is made by type II pneumocytes in the lungs.
Population sampling is a method through which a group of individuals is selected from a population for statistical analysis. The sample should be large enough to warrant statistical analysis. Conducting population sampling carefully is very important, as errors can lead to misleading data. The ideal approach would be to test every single individual to get the most accurate results. Sampling is done because it is practically impossible to test every single individual in the population; sampling is a reasonable "proxy" that also saves time, money and effort during the research. There are many techniques available for population sampling. The two main types of population sampling are probability sampling and non-probability sampling. What is probability sampling? In probability sampling, every single individual has an equal chance of being selected as a subject for the research. This method ensures that the selection procedure is random. Probability sampling is divided into 5 primary sampling strategies. They are: Random sampling: This type of sampling is used in population sampling when analyzing historical or batch data. Random sampling is done by assigning a number to each unit in the population and using a random number table to create the sample list. Using random sampling defends against favoritism or bias being introduced in the sampling process. Stratified random sampling: This type of sampling is also for analyzing historical or batch data. Stratified random sampling is used when there are different population groups (strata) and the analyst must make sure that all the groups are represented in the sample. In this sampling, independent samples are drawn from each group. The size of each sample is proportional to the relative size of the group. Systematic sampling: This sampling is used in process-sampling circumstances when real-time data is collected during process operation. Unlike population sampling, a frequency for sampling must be selected.
Systematic sampling involves collecting samples according to some systematic rule: for instance, every 4th unit, every hour, the first 5 units, and so on. One disadvantage of this type of sampling is that the systematic rule may coincide with some underlying structure, which would bias the sample. Cluster sampling: In cluster sampling, the researcher selects groups or clusters, and then from each cluster selects the individual subjects by random or systematic sampling. The most familiar cluster used in research is a geographical cluster. Disproportional sampling: This is a probability sampling method which is used to tackle the difficulty researchers come across with stratified samples of unequal sizes. This sampling method splits the population into subgroups or strata but utilizes a sampling fraction that is not identical for all strata. Advantages of probability sampling: The advantage of probability sampling is the precision of the statistical methods after the research. It can also be used to estimate population parameters, and it is considered a trustworthy method to eliminate sampling bias. What is non-probability sampling? In this type of population sampling, the population members do not have an equal chance of being selected, so it is not safe to assume that the sample fully represents the target population. The researcher also intentionally chooses the individuals who will participate in the study. This sampling is carried out when the parameters of the entire population are not required. This type of sampling is easy, cheap and quick, but not accurate. Non-probability sampling can be classified into 5 types. Convenience sampling: In this sampling, subjects are selected based on the convenience of the researcher. Consecutive sampling: This is also called sequential sampling, in which the researcher picks a single subject or group of subjects, conducts the test, examines the results, and moves on to another group of subjects if needed.
Quota sampling: This sampling is similar to stratified sampling: it involves dividing the population into classes (e.g., males and females) and then getting a sample within each class. Judgmental sampling: In this sampling, the researcher selects the subjects to be sampled based on professional judgment. Snowball sampling: Snowball sampling is the method in which the researcher begins by sampling one person, then asks that person to refer some other people, and so on. This may also be called chain-referral sampling. Advantages of non-probability sampling: Non-probability sampling is useful in pilot studies, qualitative research, hypothesis development, and case studies.
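As an illustration (not part of the original article), the random, systematic and stratified strategies described above can be sketched in a few lines of Python, using a hypothetical population of 100 numbered units:

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 numbered units

# Simple random sampling: every unit has an equal chance of selection.
random_sample = random.sample(population, 10)

# Systematic sampling: every kth unit after a random start.
k = 10
start = random.randrange(k)
systematic_sample = population[start::k]

# Stratified random sampling: independent samples drawn from each stratum,
# sized in proportion to the stratum's share of the population.
strata = {"low": population[:60], "high": population[60:]}  # a 60% / 40% split
sample_size = 10
stratified_sample = []
for name, stratum in strata.items():
    n = round(sample_size * len(stratum) / len(population))
    stratified_sample += random.sample(stratum, n)

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```

The strata and sizes here are made up for the example; the point is only the mechanics: random selection, a fixed sampling interval, and proportional allocation across strata.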
Michael Fowler, UVa. Of course, Kepler's Laws originated from observations of the solar system. The standard approach in analyzing planetary motion is to use (r, θ) coordinates, where r is the distance from the origin (which we take to be the center of the Sun) and θ is the angle between the x-axis and the line from the origin to the point in question. In the picture above, in which I have greatly exaggerated the ellipticity of the orbit, suppose the planet goes from A to B, a distance Δs, in a short time Δt, so its speed in orbit is Δs/Δt. Notice that the velocity can be resolved into vector components in the radial direction, Δr/Δt (in this case negative), and in the direction perpendicular to the radius, rΔθ/Δt. The short line BC in the diagram above is perpendicular to SB (S being the center of the Sun), and therefore becomes perpendicular to SC as well in the limit of AB becoming an infinitesimally small distance. In the limit of small Δs, then, we have (Δs/Δt)² = (Δr/Δt)² + (rω)², where the angular velocity ω = Δθ/Δt. We shall consider Kepler's Second Law (that the planet sweeps out equal areas in equal times) first, because it has a simple physical interpretation. Looking at the above picture, in the time Δt during which the planet moves from A to B, the area swept out is the approximately triangular area ABS, where S is the center of the Sun. For the distance AB sufficiently small, this area tends to that of the long thin triangle BSC, which has a base of length rΔθ and a height r. Using area of a triangle = ½ base × height, it follows immediately that ΔA/Δt = ½ r²Δθ/Δt = ½ r²ω. Now, the angular momentum L of the planet in its orbit is given by L = mr²ω, so the rate of sweeping out of area is proportional to the angular momentum, and equal to L/2m. We're now ready for Kepler's First Law: each planet moves in an elliptical orbit with the Sun at one focus of the ellipse. Let us begin by reviewing some basic facts about ellipses.
An ellipse is essentially a circle scaled shorter in one direction; in (x, y) coordinates it is described by the equation x²/a² + y²/b² = 1, a circle being given by a = b. The lengths a and b are termed the semimajor axis and the semiminor axis respectively. An ellipse has two foci, shown F1 and F2 on the diagram, which have the optical property that if a point source of light is placed at F1, and the ellipse is a mirror, it will reflect (and therefore focus) all the light to F2. One way to draw an ellipse is to take a piece of string of length 2a, tie one end to F1 and the other to F2, and hold the string taut with a pencil point. Anywhere this happens on a flat piece of paper is a point on the ellipse. In other words, the ellipse is the set of points P such that PF1 + PF2 = 2a. The eccentricity of the ellipse e is defined by writing the distance from the center of the ellipse to a focus as OF1 = ea. In fact, in analyzing planetary motion, it is more natural to take the origin of coordinates at the center of the Sun rather than the center of the elliptical orbit. It is also more convenient to take (r, θ) coordinates instead of (x, y) coordinates, because the strength of the gravitational force depends only on r. Therefore, the relevant equation describing a planetary orbit is the (r, θ) equation with the origin at one focus. For an ellipse of semimajor axis a and eccentricity e the equation is: r = a(1 − e²)/(1 + e cos θ). It is not difficult to prove that this is equivalent to the traditional equation in terms of x, y presented above. Note: I'm including the calculus derivation of the elliptic orbit, not to be found in the textbooks, just so you can see that it's calculus, not magic, that gives this result. This is an optional section, and will not appear on any exams. We now back up to Kepler's First Law: proof that the orbit is in fact an ellipse if the gravitational force is inverse square. As usual, we begin with the radial equation of motion, m(d²r/dt² − rω²) = −GMm/r². This isn't ready to integrate yet, because ω varies too.
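The string-construction property PF1 + PF2 = 2a can be checked numerically. Here is a minimal Python sketch (the values of a and e are arbitrary examples, not from the lecture):

```python
import math

a, e = 2.0, 0.6                # semimajor axis and eccentricity (example values)
b = a * math.sqrt(1 - e**2)    # semiminor axis, from b^2 = a^2(1 - e^2)
f1, f2 = (-e * a, 0.0), (e * a, 0.0)  # the two foci, at distance ea from center

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Sample points on the ellipse (a cos t, b sin t) and check PF1 + PF2 = 2a.
for k in range(12):
    t = 2 * math.pi * k / 12
    p = (a * math.cos(t), b * math.sin(t))
    total = dist(p, f1) + dist(p, f2)
    assert abs(total - 2 * a) < 1e-9
print("PF1 + PF2 = 2a holds at all sampled points")
```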
But since the angular momentum L is constant, L = mr²ω, we can get rid of ω in the equation to give: m d²r/dt² = −GMm/r² + L²/(mr³). This equation can be integrated, using two very unobvious tricks, figured out by hindsight. The first is to go from the variable r to its inverse, u = 1/r. The other is to use the constancy of angular momentum to change the variable t to θ. Substituting in the equation of motion gives: d²u/dθ² + u = GMm²/L². This equation is easy to solve! The solution is u = 1/r = GMm²/L² + A cos θ, where A is a constant of integration, determined by the initial conditions. This is equivalent to the standard (r, θ) equation of an ellipse of semimajor axis a and eccentricity e, with the origin at one focus, which is: r = a(1 − e²)/(1 + e cos θ). The time it takes a planet to make one complete orbit around the Sun, T (one planet year), is related to the semimajor axis a of its elliptic orbit by T² = 4π²a³/GM. We have already shown how this can be proved for circular orbits; however, since we have gone to the trouble of deriving the formula for an elliptic orbit, we add here the (optional) proof for that more general case. The area of an ellipse is πab, and the rate of sweeping out of area is L/2m, so the time T for a complete orbit is evidently T = πab/(L/2m) = 2πabm/L. Putting the equation u = GMm²/L² + A cos θ in the standard form gives a(1 − e²) = L²/(GMm²). Now, the top point B of the semi-minor axis of the ellipse (see diagram above) must be exactly a from F1 (visualize the string F1BF2), so using Pythagoras' theorem for the triangle F1OB we find b² = a²(1 − e²). Using the two equations above, the square of the orbital time is T² = (2πabm/L)² = 4π²a³/GM. We have established, then, that the time for one orbit depends only on the semimajor axis of the orbit: it does not depend on how elliptic the orbit is!
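As a sanity check (not part of the original lecture), Kepler's Third Law in the form T² = 4π²a³/GM reproduces the planetary years. The numerical values for the Sun's gravitational parameter and the astronomical unit below are standard approximations:

```python
import math

GM_SUN = 1.327e20   # gravitational parameter of the Sun, m^3/s^2 (approximate)
AU = 1.496e11       # astronomical unit in meters (approximate)

def orbital_period_days(a_meters):
    """Orbital period from T^2 = 4 pi^2 a^3 / (GM).
    Note it depends only on the semimajor axis, not the eccentricity."""
    T = 2 * math.pi * math.sqrt(a_meters**3 / GM_SUN)
    return T / 86400  # seconds -> days

print(round(orbital_period_days(AU)))          # Earth: ~365 days
print(round(orbital_period_days(1.524 * AU)))  # Mars (a ~ 1.524 AU): ~687 days
```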
For this assignment, we will be addressing an interesting subject in evolutionary biology: the question of sexual selection. Very often, we see clear differences between sexes in many varieties of animals (including differences in size, coloration, behavior). For example, see the duck species and peafowl below: This has been an area of inquiry dating back to Charles Darwin and Alfred Russel Wallace. Dr. Melinda Weaver alluded to some of this in her own research in the introduction to module 4 (where she discussed her research with birds and coyotes). For this assignment, you should find a case of ‘sexual dimorphism’ and write a short research paper delving into the evolutionary questions surrounding the differences between the sexes. You should attempt to explain why it is that a dimorphism occurs between the sexes of the species you choose. Questions you might think about: - Is there any kind of physical competition between members of one sex for access to the other? - Is the dimorphism visually based (color differences), size-based, or behavioral? - What does ‘parental investment theory’ have to say about the dimorphism you are documenting? (which sex is being sexually selected and why?) - Are there any negative side-effects for the displays for males or females? (Do the displays sometimes result in mortality for the competitors?) - What is the larger ecological context in which this sexual selection occurs? (what kind of environment is it occurring in?) Things to include: - Your paper should be about two pages double-spaced. - Include at least three scholarly citations to back up your work - Include a photo of the species in question (ideally of male and female)
Eras and Their Highlights How did the Renaissance spread from Italy to the rest of Europe? Eventually the ideas born in Italy during the 1300s spread northward, which is at least in part attributable to German inventor Johannes Gutenberg’s printing press (c. 1440–1450). Before long, the spirit and ideas that were taking hold in Italy reached France, Germany, England, and the Netherlands, where the Renaissance continued into the 1600s. One of the most important figures of the northern Renaissance was the Dutch humanist Desiderius Erasmus (c. 1466–1536), whose book In Praise of Folly (1509) is a blistering criticism of the clergy, scholars, and philosophers of his day. Another notable figure of the northern Renaissance was Englishman Sir Thomas More (1478–1535), who was a statesman and adviser to the king. More’s Utopia, published in 1516, criticizes the times by envisioning an ideal society in which land is communally held, men and women alike are educated, police are unnecessary, politicians are honest, and where there is religious tolerance. The works of Flemish artist Jan van Eyck (1395–1441), including his groundbreaking portrait Man in a Red Turban (1433), demonstrate that the principles of the Renaissance were felt as strongly in northern Europe as they were in Italy.
1. When: -po-
-Po- is used in the verb structure to indicate “when” in terms of time. When -po- is used with the present tense marker -na-, it can also mean “whenever.” The -po- is inserted as an infix between the tense marker (na, li, ta) and the verb stem. For example, “ninaposikia vizuri, inasababisha ninakula vizuri” (“When I feel good, it is because I eat well.”). When -po- is used in the future tense with -ta-, a -ka- is inserted after the -ta-. For instance, “nitakapoenda tutaonana” (“When I go, we will see each other.”).
2. If: -ki-
The present conditional tense is formed by inserting -ki- between the subject and verb stem. NO TENSE MARKER IS USED WITH -KI-. The verb that follows a conditional-tense verb is always in the future tense or the imperative/polite form, which can imply the future. With this form, monosyllabic verbs lose the ku- at their start. Watoto wakila ugali, watalala vizuri. (If the children eat ugali, they will sleep well.) Nikikimbia, nitachoka jioni. (If I run, I will be tired in the evening.)
3. The negative -po- and -ki-
Negative sentences with these grammar structures take -sipo- in the first verb, between the subject and the verb stem. The second verb in the sentence takes the negative -ta- or -na-. NEVER USED WITH PAST TENSE, BECAUSE IT IMPLIES SOMETHING THAT HAS NOT HAPPENED YET.

| Affirmative | Negative | Translation of Negative |
| Anapofanya mazoezi polepole anajifunza vizuri. | Asipofanya mazoezi polepole hatajifunza vizuri. | If she doesn’t do the exercise slowly, she won’t learn well. |

4. Monosyllabic verbs
With -po-, monosyllabic verbs keep the infinitive -ku- in both affirmative and negative sentences. With -ki-, monosyllabic verbs drop the -ku-.
New technology reveals the higher-order structure of DNA. Source: “Comprehensive mapping of long-range interactions reveals folding principles of the human genome” Eric S. Lander, Job Dekker, et al. Science 326: 289-293 Results: Scientists developed a tool that makes it possible to map the three-dimensional structure of the entire human genome, shedding light on how six feet of DNA is packed into a cell nucleus about three micrometers in diameter. According to the resulting analysis, chromosomes are folded so that the active genes–the ones this particular cell is using to make proteins–are close together. Why it matters: Growing evidence suggests that the way the genome is packed in a particular cell is key to determining which of its genes are active. The new findings could allow scientists to study this crucial aspect of gene regulation more precisely. Methods: Scientists treated a folded DNA molecule with a preservative in order to create bonds between genes that are close together in the three-dimensional structure even though they may be far apart in the linear sequence. Then they broke the molecule into a million pieces using a DNA-cutting enzyme. The researchers sequenced these pieces to identify which genes had bonded together and then used this information to develop a model of how the chromosome had been folded. Next steps: Scientists plan to study how the three-dimensional structure of the genome varies between different cell types, between different organisms, and between normal and cancerous cells. They also hope that improving the resolution of the technology might reveal new structural properties of the genome. They can currently analyze DNA in chunks comprising millions of bases, but they would like to zero in on sequences thousands of bases long. Stem cells derived from patients with diabetes provide a new model for studying the disease Source: “Generation of pluripotent stem cells from patients with type 1 diabetes” Douglas A. Melton et al. 
Proceedings of the National Academy of Sciences 106: 15768-15773 Results: Scientists collected cells from patients with type 1 diabetes and turned them into induced pluripotent stem cells, adult stem cells with an embryonic cell’s capacity to differentiate into many different cell types. Then they stimulated these cells to differentiate into insulin-producing pancreatic cells. Why it matters: The stem cells carry the same genetic vulnerabilities that led the patients to develop diabetes. Watching them develop into insulin-producing cells should shed light on the development and progression of diabetes. Researchers may also be able to test new treatments on the developing cells. Methods: Researchers “reprogrammed” skin cells from two diabetes patients by using a virus to insert three genes involved in normal development. The new genes caused other genes to turn on and off in a pattern more typical of embryonic cells, returning the skin cells to an earlier developmental stage. The scientists then exposed the cells to a series of chemicals, encouraging them to differentiate into insulin-producing cells. Next steps: The researchers will examine the interaction between the different cell types affected by diabetes: the pancreatic beta cells and the immune cells that attack them. Initially they will study these interactions in a test tube, but ultimately they hope to incorporate the lab-generated human stem cells into mice. This will help scientists understand which cells are affected first. Armed with that knowledge, they could begin developing treatments that involve replacing some of those cells.
Mystery Strip Activity Create one strip of paper exactly 1 meter long and do not mark the strip in any way. See example below: Then create the blue mystery strip by downloading this PDF file, making sure to follow the sizing instructions discussed in the document. Do the Activity - Put away rulers and other standard measuring devices! - Without using standard measuring devices, express the length of the mystery piece as a fraction of a meter, for example: The mystery strip is ____ of a meter. - Give team members time to try the problem individually and then discuss your approaches. It is an unfamiliar problem type, so you may find it useful to check out the solution approaches. - What understandings of fractions help in solving this problem?
Identifying the ‘flaws’ in an argument involves determining what is wrong in an argument, which mainly involves logical fallacies. This is one of the most difficult question types to master, but mastery is certainly achievable through practice. Solving ‘flaw’ questions - Identify and understand the idea of the conclusion and premises. - Identify assumptions because, from them, you can more easily identify flaws in arguments. - Keep an eye out for extreme statements, since these statements are harder to defend or prove and are therefore more susceptible to flaws or fallacies. - Read all the answer choices carefully to find the one closest to your understanding of the flaw. - Ad hominem: Ad hominem flaws are logical fallacies that involve personal attacks intended to discredit the person making a certain argument. Arguments of this kind are fallacious because instead of attacking the person’s argument, the attack is shifted to the person. - Argumentum ad populum: Argumentum ad populum is a kind of logical fallacy wherein the truth of a statement is based on how popular it is or how many people believe it. - Genetic fallacy: This fallacy means dismissing an argument based on its source or where it came from. - Appeal to authority: The appeal to authority is a fallacy that bases the validity of a statement on whether it came from an authority. In other words, it is taking the word of an authority on a subject as absolute truth. - No true Scotsman: This fallacy is dismissing the argument of a person by saying that they are not really a member of your group. Previously, we have discussed fallacies or reasoning flaws involving the relation between an argument and its source or the person saying it. In this part, we discuss flaws involving argumentative techniques. - Fallacy of faulty analogy: This fallacy is making an analogy between two things that are not similar, or an analogy that is irrelevant to the argument.
- Straw man fallacy: The straw man fallacy involves giving the impression that one is refuting the opponent’s argument, even though it is not really the argument the opponent presented.
- Slippery slope: The slippery slope fallacy assumes that if one thing happens, another thing will follow, even though in reality the thing that supposedly would follow does not necessarily have to.
- Begging the question: Also known as circular reasoning, this fallacy ‘begs the question’ by presenting the conclusion as a premise.
- Fallacy of equivocation: This fallacy occurs when a word or phrase with more than one meaning is used with different meanings at different points in an argument.
- Either/or thinking: Also known as the ‘black or white’ fallacy, this thinking assumes that there are only two possible options for something, and that no others exist.
- “All things are equal” fallacy: This fallacy assumes that background conditions are equal in all situations.
- Non sequitur: Arguments that are non sequiturs claim that a statement follows from another statement even though it does not necessarily follow.
- Special pleading: Special pleading is making an exception for an argument when it is proven to be false. It is saying that if something refutes an argument, that something is an exception to the rule.
- Part-to-whole fallacy: The part-to-whole fallacy incorrectly assumes that a part or quality of something automatically applies to the whole.
- Fallacy of fallacies: This fallacy involves saying that because an argument is fallaciously argued, its conclusion is incorrect. There are cases wherein a conclusion is completely valid but the person arguing for it commits a fallacy, which does not mean that the conclusion then becomes invalid.
Now that you’ve wrapped up the Assumptions chapter, we’re going to move on to some trap question types. Trap Answers: BEWARE OF TWINS If two answers are the same, then both are likely wrong. Beware: Scope Trap If you’ve found the main point, you must also identify what is in the range of the argument. Scope is related to more than just the general topic being discussed: it is the narrowing of the topic. Is the article about graduate-school admissions, business school admissions, or helping literature students get into the business school program of their choice? Each step represents a narrowing of the scope. Let’s look at this critical reasoning question to examine scope. Apartment building owners argue that rent control should be abolished. Although they acknowledge that, in the short term, rents would increase, they argue that the long-term effect would be a reduction in rents. This is because rent increases would lead to greater profitability. Higher profits would lead to increased apartment construction. Increased apartment construction would then lead to a greater supply of residences, and lower prices would result because potential apartment residents would have a better selection. Thus, abolishing rent control would ultimately reduce prices. Name an assumption made by the owners: (Hint: this is a difficult question, but you can eliminate four of the five answers as outside the scope of the argument.) - Current residents of rent-controlled apartments would be able to find new apartments once their rents increased. - The fundamental value of any society is to house its citizens. - Only current apartment owners would profit significantly from market deregulation. - New apartment construction will generate a great number of jobs. - The increase in the number of apartments available would exceed the number of new potential apartment residents.
Which possible answers are outside of the scope? The scope is the argument that deregulation will increase supply and lower prices. “Name an assumption” means find a direct assumption of the supply/demand argument. (E) is the only choice that isn’t outside the scope of the argument: price of rents. So, we can choose it without even seeing a question stem. The argument revolves around a supply/demand dynamic in housing, so the answer should as well. (A) is incorrect because current residents aren’t related to the issue in the argument. (B) and (C) go afield into areas that aren’t related, such as societal ethics. The issue of jobs isn’t mentioned either, which rules out (D). (E) is the only choice directly germane to the paragraph.
Nutrition may sound like a complicated subject, but it’s really quite simple. Basically, nutrition describes how the food choices that you make affect your overall wellness. This includes how your body functions at the level of your cells, tissues and organs. Overview of Nutrition and You In order to stay healthy, you have to eat foods that provide your body with the energy and nutrients that it needs to work optimally. These include: - Protein, carbohydrates and fats - Vitamins and minerals - Water - Other nutrients from plants (phytonutrients) Eating healthy can be easy. The key is to keep in mind these basic guidelines, which never go out of style: - Eat many different healthy foods, including a wide variety of fresh vegetables and fruits and whole grains - Choose lean protein sources such as beans and peas, fish, lean meats, poultry and low-fat dairy products - Limit your intake of sugar, salt and trans and saturated fats - Use alcohol in moderation - Drink plenty of water (You will also get water from fresh foods, especially fruits and vegetables.) More Detailed Nutritional Information The basics of nutrition will go a long way to keeping you healthy. However, knowing more about what’s in your food will help you make smart choices about what to eat. Here is a little more information on nutrition. Vitamins, minerals and other nutrients. Healthy foods contain a wide range of nutrients that keep you healthy. Both vitamins and minerals are needed by your body to grow and develop normally. These include, but are not limited to, vitamins C and D, and the minerals calcium and iron. Plants also contain other nutrients (phytonutrients), some of which may have cancer-fighting properties or other benefits. Superfoods. These special foods contain nutrients that are above and beyond what’s needed to simply keep your body going. Superfoods include leafy greens (such as spinach and kale), tomatoes, chocolate (i.e., dark chocolate with at least 70-percent cacao), blueberries and certain spices, such as turmeric.
Each superfood has its own benefits, so be sure to eat a wide variety of them, as often as possible. Whole grains. Whole grains include wheat, oats, brown rice, barley, cornmeal and others; they are not processed, or refined. Processed foods may include grains, but they typically include refined grains from which the nutritional value has been leached. Examples of processed foods with refined grains are breads, pasta, cereal, white flour, white rice and cookies. In contrast, whole grains contain important nutrients, including fiber. To get the most health benefits, make sure at least half of your grains — and the foods that contain them — are whole. Lean protein sources. Your body needs protein-rich foods to build muscle and carry out many functions in the cells and tissues. Some protein sources, though, contain high levels of saturated fat that can be bad for your health. To boost your health, look for lean protein sources such as lean cuts of meat, poultry, seafood, beans and peas (legumes) and nuts and seeds. Sugar, salt and trans and saturated fats. Many processed foods contain things that can damage your health if eaten in large quantities. Some of these, such as salt, sugar and trans fats, are added to processed foods to change the flavor. Others, such as saturated fats, occur naturally in foods such as meat and eggs. To stay as healthy as possible, limit your intake of these, opting instead for foods like fresh fruits and vegetables.
Key Terms

Colonialism is the establishment, exploitation, maintenance, acquisition, and expansion of colonies in one territory by people from another territory. It is a set of unequal relationships between the colonial power and the colony, and often between the colonists and the indigenous population. The European colonial period was the era from the 16th century to the mid-20th century when several European powers (particularly, but not exclusively, Portugal, Spain, Britain, the Netherlands, Russia, and France) established colonies in Asia, Africa, and the Americas. At first the countries followed mercantilist policies designed to strengthen the home economy at the expense of rivals, so the colonies were usually allowed to trade only with the mother country. By the mid-19th century, however, the powerful British Empire gave up mercantilism and trade restrictions and introduced the principle of free trade, with few restrictions or tariffs. Colonialism was always portrayed in the colonizing country (in public) as bringing benefits for the colony, including an increased standard of living, the benefits of Christianity, improved health and education, and the establishment of law and order.
On the surface, the red planet’s freeze-dried world of rocks, ice, and dust looks like an unlikely place to plant a garden. But rocks and minerals found by the Mars rovers show it must once have had warmer, habitable living conditions. Now, using photo-realistic CGI visualizations, we’ll make a science-fiction dream of Mars - a world of trees, rivers, and blue skies - a plausible future, bringing the planet to life after three-and-a-half billion years in a deep freeze. The first and most important step in making Mars habitable is to warm it up, raising the average temperature by 35 to 55 degrees Fahrenheit. (The average Mars temperature is about -81 degrees Fahrenheit.) Bold concepts for warming up Mars range from detonating hydrogen bombs and guiding space rocks on a collision course with the planet to setting up small factories on the surface designed to produce greenhouse gases. Much of the new Mars would be an icy world, like summer above the Arctic Circle, with atmospheric pressure equivalent to that atop a mountain twice the height of Mt. Everest. The next step is turning Mars green and producing a breathable atmosphere, which will be a much longer and more difficult process. Lichen and moss, which thrive on carbon dioxide, will be the first imports of plant life from Earth, perhaps 50 to 100 years after warming begins. They build soil, create more nutrients and pave the way for grass and woody shrubs. Once established, Martian forests will spread on their own, improving the soil and the atmosphere, creating a livable world for more than just plants. It could take 100,000 years for trees to transform an icy blue Mars with a carbon dioxide atmosphere into a warm, green planet with enough oxygen for humans to breathe. A fully terraformed Mars may never be as warm and wet as Earth because it’s too small and too far from the sun. Low elevations, where the atmosphere is thicker, and regions near the equator will be the warmest.
Tips, Tricks and a Handy Tool for Teaching the R Sound

Posted by Heidi | Filed under Improving Articulation

Teaching the R sound can be very challenging. Learning new ways to make teaching this sound easier fascinates me. I’d like to thank Gordy Rogers of Articulate Technologies for taking the time to share his tricks for teaching this sound, and for the company’s creative solution: a useful tool that helps teach the correct tongue placement for the R sound. Many of you, both parents and speech-language pathologists (SLPs), have experience with a child who has really struggled with his or her /r/ sound. Researchers and clinicians in the field of speech-language pathology have long known that this is the trickiest sound to correct. In fact, one leading researcher and scientific advisor to Speech Buddies, Dennis Ruscello of West Virginia University, found that 91% of SLPs experienced at least one case in which traditional methods of therapy did not work for a child struggling to produce /r/ correctly. In this blog post, I’m going to outline what makes /r/ so tricky to treat and provide some suggestions for teaching correct /r/ that you can try with your child right away. There are actually three distinct actions or behaviors that need to be performed correctly for a child (or anyone, for that matter) to produce /r/ correctly. These behaviors involve three distinct oral regions: the lips, the tongue and the pharynx, or throat. Basically, these parts of the oral anatomy must constrict, or close up slightly, so that the sound produced by the vocal cords is shaped in such a way that /r/ is produced. I am going to describe what this oral anatomy needs to do, then describe tricks or techniques that can aid in making this happen for your child. Let’s start with the lips. I want you to say “rabbit” right now and, while you do so, concentrate on what your lips are doing when you say the /r/ in “rabbit”.
They’re probably making the shape of an “O”. This rounded lip shape is the first key component of a correct /r/. Second, the tongue must create a hump in the middle of the mouth. Think of the sound coming from the vocal cords having to go over a small mountain made by the tongue. If there’s no mountain, there’s no correct /r/ sound. Finally, there’s the pharynx, or upper part of the throat, right behind the tongue. For /r/, the pharynx must be slightly constricted, or tightened, in order for the /r/ to sound correct. Now let’s discuss some tricks or techniques for achieving correct lip, tongue and upper throat shapes for /r/. For the lips, I tell my clients to make a “fish face” or to simply stick their lips out and make an “O” with them. One very powerful aid here is a visual cue, by which you’d simply have the child look at your face while you make a correct /r/, and have the child imitate what your lips are doing. With regard to the upper throat, I often have my clients gargle with water to help them learn to tighten these muscles. The action of keeping the water in the throat while producing, for example, the “ah” sound closely models what the upper throat needs to do in order to correctly say /r/. Give it a try! Now, with the tongue, I’m not going to lie: this is, without a doubt, the trickiest part of the process, and in the vast majority of cases it’s the tongue that is the primary source of the child’s /r/ difficulty. Because the tongue movement necessary to create the hump or mountain I mentioned above can be difficult to achieve, and because this is all happening behind the visual barrier of the front teeth, I recommend a tactile cue. At Articulate Technologies we’ve created a tool called the “R Speech Buddy,” which provides a very specific tactile cue. This tool allows the child to feel exactly what he needs to do with his tongue in order to produce a correct /r/ sound. Many kids are strong tactile learners, especially in elementary school.
The R Speech Buddy unlocks that sense of touch to help them learn the correct tongue movement and, as the clinical data we’ve gathered has shown, up to four times faster! The way it works is actually surprisingly simple. It involves two simple steps: placement and movement. In the placement phase, the child simply navigates his tongue to two sets of bumps. These bumps, placed right behind the upper front teeth, cue the correct starting position for /r/. Once the tongue is in place, the movement phase can begin. Here, the child simply unrolls the coil with his tongue. When the lips and throat are correctly configured, and the child fully unrolls the coil while attempting to say /r/, he will say a correct /r/; if the coil is not fully unrolled, the /r/ will not be correct – it’s as simple and reliable as that! Below is a video on how the R Speech Buddy works. You can also read additional information about the R Speech Buddy on the Speech Buddy website. I know /r/ can be a particularly frustrating sound to teach. If you continue to find that your child struggles with his or her /r/ sound, I recommend you retain the services of a licensed, certified SLP in your local area. Click the link below to see a list of preferred providers who have experience in treating the /r/ sound and would be invaluable resources to you and your family through this process: http://www.speechbuddy.com/slps/provider-program. Best of luck!

Ruscello, D. M. (1995). Visual feedback in treatment of residual phonological disorders. Journal of Communication Disorders, 28, 279–302.

Gordy Rogers, M.S. CCC-SLP is a speech-language pathologist and co-founder of Articulate Technologies, Inc.
Though not as extreme as last year, the Arctic continues to show evidence of a shift to a new warmer, greener state in 2013, according to the Arctic Report Card, an annual report that details Arctic change released by the National Oceanic and Atmospheric Administration (NOAA). The Conservation of Arctic Flora and Fauna’s Circumpolar Biodiversity Monitoring Programme (CBMP) led the development of the Arctic Report Card’s terrestrial and marine ecosystem chapters, which detail changes in plants, birds, benthos, fish, mammals and other species. The Arctic Monitoring and Assessment Programme (AMAP) coordinated scientific review. One hundred forty-seven authors from 14 countries contributed to the peer-reviewed report. “The Arctic caught a bit of a break in 2013 from the recent string of record-breaking warmth and ice melt of the last decade,” said David M. Kennedy, NOAA’s deputy under secretary for operations, during a press briefing today at the American Geophysical Union annual meeting in San Francisco. “But the relatively cool year in some parts of the Arctic does little to offset the long-term trend of the last 30 years: the Arctic is warming rapidly, becoming greener and experiencing a variety of changes, affecting people, the physical environment, and marine and land ecosystems.” Major findings of this year’s report include: - Vegetation: The Arctic is greening as vegetation responds to warmer conditions and a longer growing season. Since observations began in 1982, Arctic-wide tundra vegetation productivity (greenness) has increased, with the growing season length increasing by 9 days each decade. - Wildlife: Large land mammal populations continued trends seen over the last several decades. Muskox numbers have increased since the 1970s, in part due to conservation and introduction efforts, while caribou and reindeer herds continue to have unusually low numbers. 
- Air temperatures: While Eurasia had spring air temperatures as much as 7°F above normal, central Alaska experienced its coldest April since 1924 with birch and aspen trees budding the latest (26 May) since observations began in 1972. Summer across a broad swath of the Arctic was cooler than the previous six summers, when there had been pronounced retreat of sea ice. But Fairbanks, just below the Arctic Circle in Alaska, experienced a record 36 days with temperatures at or exceeding 80°F. - Snow cover: The snow extent in May and June across the Northern Hemisphere (when snow is mainly located over the Arctic) was below average in 2013. The North American snow cover during this period was the fourth lowest on record. A new record low was reached in May over Eurasia. - Sea ice: Despite a relatively cool summer over the Arctic Ocean, the extent of sea ice in September 2013 was the sixth lowest since observations began in 1979. The seven lowest recorded sea ice extents have occurred in the last seven years. - Ocean temperature and salinity: Sea surface temperatures in August were as much as 7°F higher than the long-term average of 1982-2006 in the Barents and Kara Seas, which can be attributed to an early retreat of sea ice cover and increased solar heating. Twenty-five percent more heat and freshwater is stored in the Beaufort Gyre, a clockwise ocean current circulating north of Alaska and Canada, since the 1970s. - Greenland ice sheet: During a summer when air temperatures were near the long-term average, melting occurred across as much as 44 percent of the surface of the Greenland ice sheet, close to the long-term average but much smaller than the record 97 percent in 2012. For the first time, scientists also released new information on marine fishes and black carbon. 
Highlights: - Marine fishes: The long-term warming trend, including the loss of sea ice and warming of waters, is believed to be contributing to the northward migration into the Arctic of some fish such as Atlantic mackerel, Atlantic cod, capelin, eelpout, sculpin and salmonids. - Black carbon: Black carbon (soot) originating from outside the Arctic has decreased by 55 percent since the early 1990s, primarily due to the collapse of the former Soviet Union. “The Arctic Report Card presents strong evidence of widespread, sustained changes that are driving the Arctic environmental system into a new state and we can expect to see continued widespread and sustained change in the Arctic,” said Martin Jeffries, principal editor of the 2013 Report Card, science adviser for the U.S. Arctic Research Commission, and research professor at the University of Alaska Fairbanks. “But we risk not seeing those changes if we don’t sustain and add to our current long-term observing capabilities. Observations are fundamental to Arctic environmental awareness, government and private sector operations, scientific research, and the science-informed decision-making required by the U.S. National Strategy for the Arctic.” For more information on the Arctic Council’s contributions to the Arctic Report Card, contact: Executive Secretary, Arctic Monitoring and Assessment Programme Executive Secretary, Conservation of Arctic Flora and Fauna
AP/Sam Ogden via The Whitehead Institute

This photo shows a mouse composed, in part, of cells that were reprogrammed to a stem cell-like state. New research has used this type of cell to create offspring from two fathers.

Reproductive scientists have used stem cell technology to create mice from two dads. The breakthrough could be a boon to efforts to save endangered species -- and the procedure could make it possible for same-sex couples to have their own genetic children. Cells from a male mouse fetus were manipulated to produce an induced pluripotent stem cell line. These iPS cells are ordinary cells that have been reprogrammed to take on a state similar to that of an embryonic stem cell, which can develop into virtually any kind of tissue in the body. About 1 percent of the iPS cell colonies spontaneously lost their Y chromosome, turning them into "XO" cells. These cells were injected into embryos from donor female mice and transplanted into surrogate mothers. The mommy mice gave birth to babies carrying one X chromosome from the original male mouse. Once these mice matured, the females were mated with normal male mice. Some of their offspring had genetic contributions from both fathers. The study authors say their technique could be applied to animal breeding efforts, so that two males with desirable traits could be crossed without mixing in traits from females. "It is also possible that one male could produce both oocytes (eggs) and sperm for self-fertilization to generate male and female progeny," the team writes. This could help save an endangered species that no longer had females to mate with, for example. In the future, scientists may be able to create human eggs from male iPS cells in vitro, allowing them to eliminate the need for the intermediate offspring, though a surrogate mother would still be needed to carry the two-father pregnancy to term.
With a variation of the technique, "it may also be possible to generate sperm from a female donor and produce viable male and female progeny with two mothers," the researchers write. The research joins a long list of stem cell breakthroughs with mouse models. Check out the stories below to learn what else researchers have done with mice. - Whole mice created from skin cells - Stem cells reverse defects in mice embryos - Sex differences found in stem cells - Mice born without a dad's DNA - Paralyzed mice given stem cells walk again John Roach is a contributing writer for msnbc.com. Connect with the Cosmic Log community by hitting the "like" button on the Cosmic Log Facebook page or following msnbc.com's science editor, Alan Boyle, on Twitter (@b0yle).
Smoke Detector Basics

Smoke detectors are designed to provide early warning for a fire involving ordinary combustible materials that is expected to progress through distinct incipient and/or smoldering stages. The type, volume, and density of smoke produced during the fire development process will vary greatly depending on the fuels involved and the amount of oxygen available. Typically, the greatest volume of visible smoke is produced during the ignition (incipient) stage and the smoldering stage. Even with ready access to low-cost and, at times, free smoke detectors, exposure to fire and smoke is responsible for thousands of deaths and billions of dollars in property damage each year. One challenge for many property owners is determining what type (i.e., design) of smoke detector to purchase and where to install the units. Some detectors respond more quickly to flaming combustion, while others respond better to smoldering combustion. Likewise, some detectors are more prone to false activation from environmental factors than others. This report provides an overview of the primary smoke detector types and discusses research into the effectiveness of each of those types.

Smoke Detector Basic Types

The two most commonly used smoke detectors are the ionization and photoelectric types.

Ionization Smoke Detectors

The ionization smoke detector reacts to both visible and invisible products of combustion. This spot-type detector contains a small radiation source that produces electrically charged air molecules called ions. The presence of these ions allows a small electric current to flow in a chamber. When smoke particles enter the chamber, they attach themselves to the ions, reducing the flow of electric current. The change in the current sets off the alarm. Many ionization detectors are of the multiple-chamber type.
Probably the most common is the dual-chamber ionization detector, which employs two sources of radiation - one in an essentially sealed chamber and one open to the atmosphere. The open chamber serves for sensing smoke particles, while the closed ionization chamber monitors ambient conditions and compensates for the effect of barometric pressure, temperature, and relative humidity on the ionization rate. This construction accepts a much wider range of pressure, temperature, and humidity changes without giving false alarms. Ionization detectors are not suitable for use in applications where high ambient radioactivity levels are to be expected, because high ambient radiation reduces the detector's sensitivity. Ionization detectors have also been known to react to non-fire-generated particles of combustion and to the presence of ozone, ammonia, or insects. Single-chamber ionization detectors installed at higher altitudes usually require a modification in sensitivity during installation.

Photoelectric Smoke Detectors

There are two types of photoelectric detectors - beam and light scattering - both of which consist of a light source, a collimating lens system, and a photosensitive cell. Aerosols generated during the combustion process affect the propagation of light as they pass through the air. The combination of the aerosol and air mixture results in two conditions that can be used to detect the presence of a fire. These are: (1) the attenuation of the light intensity integrated over a beam path length, and (2) scattering of the light (i.e., the Tyndall effect) at various angles to the beam path.

Beam-type smoke detectors. The beam-type photoelectric detector is a photoelectric smoke detector that works on the obscuration principle. A light beam is directed at a photocell.
When no smoke interferes with this beam, the receiver accepts the beam at a specified voltage level, but when smoke interferes with the beam, the infrared light reaching the receiver drops below the predetermined sensitivity level of the receiver, initiating a signal. Beam-type photoelectric detectors can be sensitive to voltage variations, dirt on the lens or mirrors, building vibration, and insects.

Light-scattering smoke detectors (Tyndall principle). The Tyndall-principle photoelectric detector is of the spot type and detects smoke by sensing the light reflected by smoke particles. The smoke particles enter the detector and reflect or scatter light from a small lamp, or LED, in the device. Some of that reflected light strikes a photocell that produces an electrical current. As the number of particles increases and more light strikes the photocell, the intensity of the electrical current increases. When the smoke particles are dense enough to reflect a predetermined amount of light, the detector’s circuit actuates the alarm. See Fire Protection Report FP-21-01, Fire Detectors, for information on other types of smoke detectors with specific applications.

Smoke Detector Effectiveness Studies

A number of groups have published reports and data related to the effectiveness of smoke detectors, including the National Fire Protection Association (NFPA), the National Institute of Standards and Technology (NIST), and Texas A&M. The opinions of these groups often differ greatly, and each group’s rationale should be carefully evaluated when selecting detectors for a specific hazard. Some of the findings include:

National Fire Protection Association (NFPA)

The NFPA Technical Committee on Single- and Multiple-Station Alarms and Household Fire Alarm Systems released a Task Group report on Smoke Detection in 2008.
That report, “Minimum Performance Requirements for Smoke Alarm Detection Technology,” reviewed data from various sources, including the 2007 NIST report, “Performance of Home Smoke Alarms: Analysis of the Response of Several Available Technologies in Residential Fire Settings” (NIST Report TN 1455-1), to evaluate the effectiveness of various detector types, their installation locations, and the probability of nuisance alarms. The 2008 NFPA report provides a detailed review of the topic and in closing offers a number of recommendations and observations, including the following: “Based on the Task Group on Smoke Detection Technology’s review of NIST TN 1455-1, using the escape scenarios considered in the NIST report, the Task Group concludes that smoke alarms using either ionization or photoelectric smoke detection technologies, installed per NFPA 72-2007, are generally providing acceptable response to smoldering fires. Additional study is needed regarding photoelectric alarm response in flaming scenarios.” There was some pushback from various groups following the release of the above report; thus, the Technical Committee commissioned a second task group to re-review the information. That Task Group released the “Task Group on Smoke Detection Follow-Up Report” in July of 2009. In short, that group also found that: “The rate at which a particular type of detector did not provide adequate warning was similar for ionization and photoelectric detectors regardless of whether the Direct Escape or Indirect Escape was used.
As expected, ionization detectors provided earlier warning to flaming fires, while photoelectric detectors provided earlier warning to smoldering fires.”

National Institute of Standards and Technology (NIST)

Much of the research into detector response times was developed by NIST and is included in the NIST report “Performance of Home Smoke Alarms: Analysis of the Response of Several Available Technologies in Residential Fire Settings,” which was originally published in 2004, then recommissioned in 2008. Both reports can be downloaded for free from the NIST Smoke Detector Study site. The NIST site provides a summary of the findings, including: “Smoke alarms of either the ionization type or the photoelectric type consistently provided time for occupants to escape from most residential fires, although in some cases the escape time provided can be short. Consistent with prior findings, ionization-type alarms provided somewhat better response to flaming fires than photoelectric alarms, and photoelectric alarms provide (often) considerably faster response to smoldering fires than ionization-type alarms.”

Texas A&M

A 1995 study by Texas A&M, “Risk Analysis of Residential Fire Detectors Performance,” used data from live fire testing. The authors developed a fault tree analysis for predictive modeling of detector reliability. The closing statement in that report echoes the findings of the later reports: “Certain types of fire detectors are more reliable for different fires; therefore, recommendations as to the type and location of the fire detector should be made including the type of fire ignition that would likely occur and the most reliable detectors that can be installed in that location.”

Smoke Detector Use

The 2011 NFPA report, “Smoke Alarms in US Home Fires,” indicates that 38% of home fire deaths occurred where there were no detectors.
Further, the report reveals that 50% of the detectors that failed to operate were missing batteries, 23% had dead batteries, and 7% had the power disconnected. In short, the single greatest cause of detector failure is not the type of detector, but a lack of power to the detector. The NFPA smoke detector webpage offers the following advice: “For each type of smoke alarm, the advantage it provides may be critical to life safety in some fire situations. Home fatal fires, day or night, include a large number of smoldering fires and a large number of flaming fires. You cannot predict the type of fire you may have in your home or when it will occur. Any smoke alarm technology, to be acceptable, must perform acceptably for both types of fires in order to provide early warning of fire at all times of the day or night and whether you are asleep or awake.” In summary, selecting the appropriate smoke detector means ensuring that the detector is designed for the expected fire scenario and that it is installed and maintained in accordance with the manufacturer’s requirements. In general, photoelectric detectors respond better to a smoldering fire and ionization-type detectors to a flaming fire. To address the differences in detector types, nearly all of the major manufacturers offer dual (ion and photo) sensor detectors. Additionally, most risk control specialists recommend that carbon monoxide detectors also be installed to ensure that all possible combustion products are detected. For more information on loss control and managing business risks, check out the American Family Insurance Loss Control Resource Center.

1. Engineering and Safety Service. Fire Detectors. FP-21-01. Jersey City, NJ: ISO Services, Inc., 2011. 2. International Codes Council (ICC). International Fire Code. 2012 ed. Falls Church, VA: ICC, 2012. 3. National Fire Protection Association (NFPA). Fire Protection Handbook. 20th ed. Quincy, MA: NFPA, 2008. 4.
---. Minimum Performance Requirements for Smoke Alarm Detection Technology, Task Group Report. Quincy, MA: NFPA, 2008. 5. ---. Smoke Alarms in US Home Fires. Quincy, MA: NFPA, 2011. 6. ---. Task Group on Smoke Detection Follow-Up Report. Quincy, MA: NFPA, 2009. 7. National Institute of Standards and Technology (NIST). Performance of Home Smoke Alarms Analysis of the Response of Several Available Technologies in Residential Fire Settings. NIST TN 1455-1. Washington, DC: NIST, 2007. 8. Texas A&M University. Risk Analysis of Residential Fire Detectors Performance. College Station, TX: Texas A&M, 1995. COPYRIGHT ©2013, ISO Services, Inc. The information contained in this publication was obtained from sources believed to be reliable. ISO Services, Inc., its companies and employees make no guarantee of results and assume no liability in connection with either the information herein contained or the safety suggestions herein made. Moreover, it cannot be assumed that every acceptable safety procedure is contained herein or that abnormal or unusual circumstances may not warrant or require further or additional procedure.
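For readers who think in code, the two detection principles described in the report above boil down to simple threshold comparisons: an ionization chamber's current drops as smoke particles attach to ions, while a photoelectric (light-scattering) cell's current rises as particles scatter light onto it. The sketch below is purely illustrative; the function names, thresholds, and current values are invented for this example and do not correspond to any real detector's firmware:

```python
# Toy sketch of the alarm logic behind the two common detector types.
# All numeric values are illustrative assumptions, not real specifications.

def ionization_alarm(open_chamber_current, reference_current, drop_ratio=0.7):
    """Dual-chamber ionization: smoke particles attach to ions and reduce
    the open chamber's current. Alarm when it falls below a fixed fraction
    of the sealed reference chamber, which tracks ambient conditions
    (pressure, temperature, humidity) and so suppresses false alarms."""
    return open_chamber_current < drop_ratio * reference_current

def photoelectric_alarm(photocell_current, trip_level=0.5):
    """Light-scattering (Tyndall) type: smoke particles scatter light onto
    the photocell, raising its current. Alarm when the current exceeds a
    predetermined trip level."""
    return photocell_current >= trip_level

# Clean air: open chamber roughly matches the reference; little scattered light.
print(ionization_alarm(1.0, 1.0))   # False - no alarm
print(photoelectric_alarm(0.05))    # False - no alarm

# Smoky air: ion current drops, scattered light rises.
print(ionization_alarm(0.6, 1.0))   # True - alarm
print(photoelectric_alarm(0.8))     # True - alarm
```

Note how the dual-chamber design compares the open chamber against a sealed reference rather than against a fixed absolute value, which is how it tolerates swings in ambient pressure, temperature, and humidity without false alarms.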
The Everglades (or Pa-hay-okee) are a natural region of tropical wetlands in the southern portion of the U.S. state of Florida, comprising the southern half of a large drainage basin and part of the neotropic ecozone. The system begins near Orlando with the Kissimmee River, which discharges into the vast but shallow Lake Okeechobee. Water leaving the lake in the wet season forms a slow-moving river 60 miles (97 km) wide and over 100 miles (160 km) long, flowing southward across a limestone shelf to Florida Bay at the southern end of the state. The Everglades experience a wide range of weather patterns, from frequent flooding in the wet season to drought in the dry season. Writer Marjory Stoneman Douglas popularized the term "River of Grass" to describe the sawgrass marshes, part of a complex system of interdependent ecosystems that include cypress swamps, the estuarine mangrove forests of the Ten Thousand Islands, tropical hardwood hammocks, pine rockland, and the marine environment of Florida Bay. Human habitation in the southern portion of the Florida peninsula dates to 15,000 years ago. Before European colonization, the region was dominated by the native Calusa and Tequesta tribes. With Spanish colonization, both tribes declined gradually during the following two centuries. The Seminole formed from mostly Creek people who had been warring to the north; they assimilated other peoples and created a new culture. After being forced from northern Florida into the Everglades during the Seminole Wars of the early 19th century, they adapted to the region and were able to resist removal by the United States Army. Migrants to the region who wanted to develop plantations first proposed draining the Everglades in 1848, but no work of this type was attempted until 1882. Canals were constructed throughout the first half of the 20th century, and spurred the South Florida economy, prompting land development.
In 1947, Congress formed the Central and Southern Florida Flood Control Project, which built 1,400 miles (2,300 km) of canals, levees, and water control devices. The Miami metropolitan area grew substantially at this time and Everglades water was diverted to cities. Portions of the Everglades were transformed into farmland, where the primary crop was sugarcane. Approximately 50 percent of the original Everglades has been developed as agricultural or urban areas. Following this period of rapid development and environmental degradation, the ecosystem began to receive notable attention from conservation groups in the 1970s. Internationally, UNESCO and the Ramsar Convention designated the Everglades a Wetland Area of Global Importance. The construction of a large airport 6 miles (9.7 km) north of Everglades National Park was blocked when an environmental study found that it would severely damage the South Florida ecosystem. With heightened awareness and appreciation of the region, restoration began in the 1980s with the removal of a canal that had straightened the Kissimmee River. However, development and sustainability concerns have remained pertinent in the region. The deterioration of the Everglades, including poor water quality in Lake Okeechobee, was linked to the diminishing quality of life in South Florida's urban areas. In 2000 the Comprehensive Everglades Restoration Plan was approved by Congress to combat these problems. To date, it is the most expensive and comprehensive environmental restoration attempt in history, but its implementation has faced political complications.
Central Auditory Processing Evaluations Little Falls Auditory Processing Disorder (APD), also known as Central Auditory Processing Disorder (CAPD), is the reduced or impaired ability to discriminate, recognize or comprehend complex sounds, such as those used in words, even though the person's hearing is normal. For example, a person may understand "boat" for "coat" or be unable to discriminate the difference between the sounds "sh" and "ch". It is a complex problem that affects about 5% to 7% of school-aged children, and it is diagnosed twice as often in boys as in girls. Although it is difficult to understand, APD is not a problem with hearing per se; the problem lies in the hearing process. In children and adults with APD, the electrical signals generated from sound waves entering the ear arrive at the brain with a delay or distortion, which makes learning and memorizing very difficult. Usually, a person with APD has normal hearing, but the brain interprets what it hears as if there were a delay or distortion to the sound. This in turn makes it difficult for the person to comprehend what has been said, so he or she is not able to retain the information, which affects short-term memory. So, although he or she may be hearing everything that is said, he or she may be struggling to process the meaning of it. Children who "pass" their hearing screening at school or in their pediatrician's office and do "ok" even when tested by the speech therapist at school, but who struggle with long assignments, have trouble following directions, appear not to hear (ask for frequent repetitions) or "mishear", misunderstand humor and idioms, or appear distracted or overly fidgety, may actually be showing signs of an auditory processing disorder. A person with APD will often have trouble focusing on schoolwork, following multi-step instructions and, surprisingly, even everyday socializing.
People with APD tend to retreat in social situations to avoid making mistakes that might expose them to ridicule, and their self-esteem can be greatly affected. APD has been a controversial diagnosis in the medical field. Many people with APD also have accompanying learning differences that are often diagnosed as the primary problem, so the APD is overlooked and not properly treated. Some medical experts argue that APD does not exist at all. However, the American Speech-Language-Hearing Association (ASHA) as well as the American Academy of Audiology have issued position statements affirming the existence of APD in children as well as adults. What Are The Symptoms? Signs of APD often appear at a young age, usually in school-age children, but the disorder can be diagnosed in high school students and adults as well. However, it is very important to understand that APD cannot be diagnosed by a checklist. Every person is different, and his or her symptoms will manifest themselves differently. Due to some overlap in symptoms, many children are misdiagnosed with ADD (Attention Deficit Disorder) or ADHD (Attention Deficit Hyperactivity Disorder) as well as other underlying conditions. The truth is that many children will have one of these disorders or delays in addition to APD, but APD can only be diagnosed by a certified audiologist after a series of specifically designed tests.
Common signs include:
- Difficulty understanding in noisy environments
- Difficulty following multi-step directions
- Difficulty distinguishing between similar sounds
- Language and/or speech delays
- Often requiring repetition or clarification (as if there were a hearing problem present)
- Being easily distracted or unusually bothered by loud or sudden noises
- Improved behavior and performance in quieter settings
- Difficulty understanding abstract information
- Difficulty with verbal math problems
- Disorganization and forgetfulness
- Poor memory for words and numbers
- Trouble understanding jokes, riddles or idioms
- Difficulty with expressive language
- Seeming to "tune out" when the conversation is complex or involves too many people
- In school, frequent difficulties with language, learning, reading and spelling
If your child has any of these symptoms and you suspect that APD may be the cause, contact a certified audiologist to make an appointment for an evaluation. Keep in mind, however, that not all audiologists work with APD testing. What Causes APD? There is currently no known definite cause of APD. Research suggests that it can be congenital (some people are born with it) or acquired; evidence suggests links to recurring middle ear infections, head injury or trauma. What Is The Proper Treatment? There are multiple treatments recommended and used by professionals for APD, but there is no single clear-cut, proven solution. As with any other medical treatment, people will respond differently to the treatment chosen for them by their parents and team of experts. What Does an Auditory Processing Evaluation Entail? A central auditory processing evaluation includes a series of tests that determine how well a person processes what he or she hears. Tests of auditory processing are designed to simulate listening tasks encountered in the real world.
These tests stress the auditory system by distorting, degrading, filtering or time-altering the speech signal. If I Have A Child With APD, How Can I Help Her At Home?
- Always talk to your child face to face; try to get him/her to look you in the eyes when talking or when giving instructions.
- Make instructions simple, no more than three steps at a time. If possible, have him/her repeat the instructions back to you.
- Speak at a slightly slower rate and a slightly louder volume.
- When you are in the car and unable to face your child, turn off the radio when carrying on a conversation. This eliminates background noise so that your child can fully concentrate on the topic at hand.
- When doing homework, have your child sit in a quiet place (no TV, radio, computer, etc.) so that he/she can concentrate.
- Try placing felt stickers on the bottoms of chairs to avoid loud noises when they are moved (tennis balls will also work for the noise, but not for the decor!)
BE PATIENT. People with APD have a hard time on a daily basis, trying to sort everything out in their brains while dealing with other people's intolerance. What they need most from us is patience, understanding and love.
Citation: Huitt, W. (2004). Observational (social) learning: An overview. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved [date], from http://www.edpsycinteractive.org/topics/soccog/soclrn.html Observational or social learning is based primarily on the work of Albert Bandura (1977). He and his colleagues demonstrated through a variety of experiments that the application of consequences was not necessary for learning to take place. Rather, learning could occur through the simple process of observing someone else's activity. This work provided the foundation for Bandura's (1986) later work in social cognition. Bandura formulated his findings in a four-step pattern which combines a cognitive view and an operant view of learning. 1. Attention -- the individual notices something in the environment. 2. Retention -- the individual remembers what was noticed. 3. Reproduction -- the individual produces an action that is a copy of what was noticed. 4. Motivation -- the environment delivers a consequence that changes the probability that the behavior will be emitted again (reinforcement and punishment). Bandura's work draws from both behavioral and cognitive views of learning. He believes that mind, behavior and the environment all play an important role in the learning process. In a set of well-known experiments, called the "Bobo doll" studies, Bandura showed that children (ages 3 to 6) would change their behavior simply by watching others. Three groups of children watched a film in which a child in a playroom behaved aggressively (e.g., hit, kick, yell) towards a "Bobo doll." The film had three different endings. One group of children saw the child praised for his behavior; a second group saw the child told to go sit in a corner and not allowed to play with the toys; a third (control) group saw a film with the child simply walking out of the room.
Children were then allowed into the playroom and actions of aggression were noted. The results are shown below. What do we learn from these data in terms of the differences and similarities between boys and girls? Among different experimental conditions? Was the "model rewarded" really an example of the use of positive reinforcement? Bandura and his colleagues also demonstrated that viewing aggression by cartoon characters produces more aggressive behavior than viewing live or filmed aggressive behavior by adults. Additionally, they demonstrated that having children view prosocial behavior can reduce displays of aggressive behavior. In more recent years, Bandura turned his attention to self-efficacy and self-regulation. He now classifies his theoretical orientation as social cognition. All materials on this website [http://www.edpsycinteractive.org] are, unless otherwise stated, the property of William G. Huitt. Copyright and other intellectual property laws protect these materials. Reproduction or retransmission of the materials, in whole or in part, in any manner, without the prior written consent of the copyright holder, is a violation of copyright law.
Teaching English listening skills successfully requires using a combination of different resources to expose your students to spoken English. The lessons can be stimulating, as you can use films, music, radio and language-learning CDs to improve their skills. Understanding spoken English is challenging because the words cannot be read, and there are no accompanying pictures or gestures to help the students understand. Therefore, it is important to keep encouraging them and to build up their confidence with easier exercises in the beginning. Use dictation exercises to teach your students how to focus on understanding the context of a conversation. Play an audio news clip from the BBC World Service website and ask the students to write down what they heard. Then get the students to listen to one another's ideas while discussing the topic in English. Play a language-learning CD such as Rosetta Stone or English for Dummies, and work through the listening exercises with the students. Teach the students to focus on how sentences are structured by writing down the subject, verb and object heard during different clips, or by answering the questions to puzzles. Write down a set of questions relating to an English-language film. Give them to the students to answer while watching the film together. Review the answers after the film has finished to assess their understanding of the plot. Allocate 10 minutes to listening to music with English lyrics. Use the music as a dictation exercise -- ask the students to write down the lyrics. This will engage the students and expose them to rhyming, slang and different accents. Teach your students to listen to one another by encouraging them to make English conversation or English-speaking friends. Use a website, such as englishconversation.org, to link up with English speakers, for example. Keep each conversation topic specific, such as hobbies, family or home town, to focus the students' attention and to use vocabulary particular to that subject.
Listen to English-speaking radio stations and teach the students about English culture, current affairs, music and listening simultaneously. Check out Radio Tower to access a number of different stations available in English.
- Unlimited supply of recycled items - Variety of materials for attaching items (string, staples, glue, tape, etc.) - The materials listed above are only a starting point. The adults and kids can be creative in coming up with plenty more materials to inspire inventions. - Allow kids to work alone or in groups. - Encourage children to look over materials and develop an idea first of what they wish to create. They can even draw out a diagram of what they want their finished product to look like. - Once designed, then kids can start building. - When products are complete, display the kids’ work by putting on an Invention Convention. Think of this as similar to a science fair. Source(s): 100 Nature Activities for Kids (activity K2) Check out the full February calendar. It includes floating holidays, specialty weeks, and specialty months.
This is a guide to how children develop speech and language between 12 and 18 months. At this stage, children will start to use language in a more recognisable way. They will also become more sociable. Children develop skills at different rates, but by 18 months, usually children will: - Enjoy games like peek-a-boo and pat-a-cake and toys that make a noise. - Start to understand a few simple words, like ‘drink’, ‘shoe’ and ‘car’. Also simple instructions like 'kiss mummy', 'kick ball' and 'give me'. - Point to things when asked, like familiar people and objects such as ‘book’ and ‘car’. - Use up to 20 simple words, such as 'cup', 'daddy' and 'dog'. These words may not always be easily recognised by unfamiliar adults. - Gesture or point, often with words or sounds to show what they want. - Copy lots of things that adults say and gestures that they make. - Start to enjoy simple pretend play, for example pretending to talk on the phone.
How can foreign language courses be made more accessible to students with disabilities? In the United States, foreign language is often a requirement for college graduation. Many college students have disabilities that impact their ability to see, hear, or process language. As a result, these students may struggle with the oral, visual, and processing tasks of learning a foreign language. However, foreign language classes can be made accessible to students with disabilities through careful planning and implementation of innovative teaching methods, such as those included in the following resources: Strategies recommended in these resources include the following: - Explicitly teach the sound system of the language. - Link grammatical structures to students' native languages. - Use auditory, visual, and kinesthetic methods of instruction and practice such as the Orton-Gillingham method. - Slow the pace of instruction and vocabulary acquisition. - Create noun and adjective ending charts to help students see language patterns.
A building structure is a system of connected components that resist forces imposed by nature and people. Ultimately the building system must transmit loads to the foundation, where they can be dissipated into the ground. The system fails when a load, by itself or in combination with other loads, exceeds the capacity of the materials. All structural systems attempt to reach a point of equilibrium. If that point falls within the capacity of the materials, then the structure is stable; otherwise, collapse ensues. Collapse can be partial or total, slow or catastrophic. Structures can fail when excessively loaded, when improperly constructed, when improperly designed, or when subjected to deterioration. Investigating a failure typically involves answering questions such as: - What precipitated the collapse? - Where did the collapse originate? - How did the failure propagate? - Which member(s) failed initially? - Which members failed as a result of the initial failure? - Was the failure due to a single cause or the result of a chain of events? - Was the structure designed and constructed according to building code and industry standard? - Could the failure have been foreseen and prevented?
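The load-versus-capacity idea above can be reduced to a one-line check. This is a deliberately simplified sketch (real design codes apply load factors and combination rules), and all names and numbers are hypothetical.

```python
# A member is stable while the combined applied loads stay within its
# capacity; otherwise it fails. Loads and capacity are in the same units.
def is_stable(loads, capacity):
    """True if the sum of applied loads does not exceed the member capacity."""
    return sum(loads) <= capacity

dead, live, snow = 12.0, 8.0, 5.0   # kN, assumed example loads
print(is_stable([dead, live, snow], capacity=30.0))  # True: 25 <= 30
print(is_stable([dead, live, snow], capacity=20.0))  # False: 25 > 20
```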
Heat Pipes Information Heat pipes are self-contained heat pumps that can transport heat at high rates over fairly substantial distances with no external pumping power. Heat pipes are made of aluminum or copper and are lined with a wicking material, such as a ceramic or carbon fiber, that provides capillary action. A liquid inside the heat pipe is heated and enters the porous wicking material, saturating the inside of the heat pipe. The vapor that results from the heated liquid carries heat from the input end of the heat pipe to the output end. Heat pipes are typically constructed with an evaporator section, where the liquid is heated, and a condenser section, where the vapor cools and condenses. Heat piping is used in applications ranging from cryogenics and aerospace to home appliances such as air conditioners and refrigerators. Heat pipe technology is also used in the circuit boards of computers, where a flat or thin heat pipe is used in conjunction with airflow from fans to regulate the computer’s temperature. Copper heat pipes are common in many applications. A related device is the heat exchanger, used mostly in industrial applications for transferring heat between two different liquids. A double pipe heat exchanger consists of two pipes, one inside the other, and is also known as a recuperator, or closed-type exchanger. In an open-type exchanger, the two liquids are allowed to mix with each other and exit the heat exchanger in one stream. In a double pipe heat exchanger, the hot and cold liquids do not contact each other. The heat energy instead flows from one liquid to the outer surface of one pipe by forced convection, then through the wall of the pipe by conduction, and from the inside surface of the pipe to the second fluid by forced convection.
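The series heat path just described (convection from the hot fluid, conduction through the pipe wall, convection to the cold fluid) can be sketched numerically as thermal resistances in series. This is a minimal illustration, not a design tool; the pipe dimensions, wall conductivity, and convection coefficients below are assumed example values, not data from the article.

```python
import math

# Steady-state heat flow through a cylindrical pipe wall, modeled as three
# thermal resistances in series: inner convection, wall conduction, outer
# convection. Q = (T_hot - T_cold) / (R_conv_in + R_cond + R_conv_out).
def heat_flow(t_hot, t_cold, r_in, r_out, length, k_wall, h_in, h_out):
    """Heat flow in watts between the two fluids of a double pipe exchanger."""
    area_in = 2 * math.pi * r_in * length        # inner convection area (m^2)
    area_out = 2 * math.pi * r_out * length      # outer convection area (m^2)
    r_conv_in = 1 / (h_in * area_in)             # convection, hot side (K/W)
    r_cond = math.log(r_out / r_in) / (2 * math.pi * k_wall * length)
    r_conv_out = 1 / (h_out * area_out)          # convection, cold side (K/W)
    return (t_hot - t_cold) / (r_conv_in + r_cond + r_conv_out)

# e.g. hot water at 80 C inside a 1 m copper pipe, cold water at 20 C outside
q = heat_flow(t_hot=80, t_cold=20, r_in=0.01, r_out=0.012,
              length=1.0, k_wall=400, h_in=1500, h_out=1000)
print(round(q), "W")  # around 2500 W for these assumed values
```

Note that with a highly conductive wall like copper, the conduction term is tiny; the two convection resistances dominate the result.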
What Causes Asthma? Since asthma has a genetic origin and is a disease you are born with, passed down from generation to generation, the question isn’t really “What causes asthma?” but rather “What causes asthma symptoms to appear?” People with asthma have inflamed airways that are super-sensitive to things that do not bother other people. These things are called triggers. Although asthma triggers vary from person to person based on whether you have allergic asthma or non-allergic asthma, some of the most common include: ⇒ Substances that cause allergies, called allergens, such as dust mites, pollens, molds, pet dander, and even cockroach droppings. In many people with asthma, the same substances that cause allergy symptoms can also trigger an asthma episode. These allergens may be things that you inhale, such as pollen or dust, or things that you eat, such as shellfish. It is best to avoid or limit your exposure to known allergens in order to prevent asthma symptoms. ⇒ Irritants in the air, including smoke from cigarettes, wood fires, or charcoal grills. Also, strong fumes or odors and irritants in the environment can bring on an asthma episode. These may include paint fumes, smog, aerosol sprays, and even perfume or scented soaps. Although people are not actually allergic to these particles, they can aggravate inflamed, sensitive airways. Today, most people are aware that smoking can lead to cancer and heart disease. What you may not be aware of, though, is that smoking is also a risk factor for asthma in children and a common trigger of asthma symptoms for people of all ages who have asthma. It may seem obvious that people with asthma should not smoke, but they should also avoid the smoke from others’ cigarettes. This second-hand smoke, or passive smoking, can trigger asthma symptoms in people with the disease.
⇒ Respiratory infections such as colds, flu, sore throats, and sinus infections. These are the number one asthma trigger in children. ⇒ Exercise and other activities that make you breathe harder. Exercise – especially in cold air – is a frequent asthma trigger. A form of asthma called exercise-induced asthma is triggered by physical activity. Symptoms of this kind of asthma may not appear until after several minutes of sustained exercise. (When symptoms appear sooner than this, it usually means that the person needs to adjust his or her treatment.) The kinds of physical activities that can bring on asthma symptoms include not only exercise but also laughing, crying, holding your breath, and hyperventilating (rapid, shallow breathing). The symptoms of exercise-induced asthma usually go away within a few hours. With proper treatment, a person with exercise-induced asthma does not need to limit his or her overall physical activity. ⇒ Weather, such as dry wind or cold air, or sudden changes in weather can sometimes bring on an asthma episode. ⇒ Expressing strong emotions like anger, fear, or excitement. When you experience strong emotions, your breathing changes – even if you don’t have asthma. When a person with asthma laughs, yells, or cries hard, natural airway changes may cause wheezing or other asthma symptoms. ⇒ Some medications, like aspirin, can also be related to episodes in adults who are sensitive to aspirin. People with asthma react in various ways to these factors. Some react to only a few; others to many. Some people get asthma symptoms only when they are exposed to more than one factor or trigger at the same time. Others have more severe episodes in response to multiple factors or triggers. In addition, asthma episodes do not always occur right after a person is exposed to a trigger. Depending on the type of trigger and how sensitive a person is to it, asthma episodes may be delayed. 
Each case of asthma is unique. If you have asthma, it is important to keep track of the factors or triggers that provoke your asthma episodes. Because the symptoms do not always occur right after exposure, this may take a bit of detective work. The Role of Heredity Like baldness, height, and eye color, the capacity to have asthma is an inherited characteristic. Yet, although you may be born with the genetic capability to have asthma, asthma symptoms do not automatically appear. Scientists do not know for certain why some people get asthma and others do not. However, researchers have found that certain traits make it more likely that a person will develop asthma. Source: Asthma and Allergy Foundation of America, www.aafa.org This article was originally published in Coping® with Allergies & Asthma magazine, May/June 2009.
Instructions: Write a program to translate a message from English to Morse code.
- Using a simple text editor like Windows Notepad, create a text file with all of the Morse code combinations for the letters and numbers. The codes for the first five letters of the alphabet are shown here. Notice that each code is on a separate line. You may want to include the letter or number that corresponds to the symbol on the same line, but that is optional.
- Read the file into a data structure of your choice. Use static methods throughout the program.
- Prompt the user to enter a message without punctuation. The program should not be case sensitive.
- Print the converted message in Morse code format to the screen.
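A minimal sketch of the translator in Python (the mention of static methods suggests the assignment targets Java; Python is used here only for illustration). The code file is replaced by an inline dictionary, and using " / " as the word separator is an assumed convention, not part of the assignment.

```python
# International Morse code for letters and digits.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..', '0': '-----', '1': '.----', '2': '..---', '3': '...--',
    '4': '....-', '5': '.....', '6': '-....', '7': '--...', '8': '---..',
    '9': '----.',
}

def to_morse(message):
    """Translate letters, digits and spaces; not case sensitive.
    Letters are separated by spaces, words by ' / '."""
    words = message.upper().split()
    return ' / '.join(' '.join(MORSE[ch] for ch in word if ch in MORSE)
                      for word in words)

print(to_morse("SOS"))  # ... --- ...
```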
Many members of non-dominant or minority language communities, especially those living in remote areas, face significant challenges when they try to get a good quality basic education: 1) Some have no access to school at all; others have access to schools, but not to trained teachers – or teachers of any kind. 2) Even if schools are adequately staffed, many of the teachers use a language that the learners do not understand. 3) Textbooks and lessons focus on the language and culture of the dominant group. If the learners are unfamiliar with that culture, as many are, it is very difficult for them to understand the concepts that are being communicated. 4) Teachers who come from the dominant language society may consider the learners “slow”. They may fail to appreciate – or may even look down on – the learners’ heritage language and culture. For these learners, school is often an unfamiliar place teaching unfamiliar concepts in an unfamiliar language.
View full lesson: http://ed.ted.com/lessons/how-do-whales-sing-stephanie-sardelis Communicating underwater is challenging. Light and odors don’t travel well, but sound moves about four times faster in water than in air — which means marine mammals often use sounds to communicate. The most famous of these underwater vocalizations is undoubtedly the whale song. Stephanie Sardelis decodes the evocative melodies composed by the world’s largest mammals. Lesson by Stephanie Sardelis, animation by Boniato Studio.
What Is Trigger Finger? Trigger finger is an inflammation of tissue inside your finger or thumb. It is also called tenosynovitis. Tendons (cordlike fibers that attach muscle to bone and allow you to bend the joints) become swollen. So does the synovium (a slick membrane that allows the tendons to move easily). This makes it difficult to straighten the finger or thumb. Repeated use of a tool, such as a drill or wrench, can irritate and inflame the tendons and the synovium. So can arthritis or an injury to the palm of the hand. But often the cause of trigger finger is unknown.
The mathematics challenge is a mixture of cerebral exercise and some physical activity, but most of all the aim is for the young mathematicians to enjoy the challenges. Each school is invited to bring two pairs, and the whole occasion is free for competing schools. MENTAL WARM UP – DICEY DICEY AND ODD ONE OUT The young mathematicians are asked to find the total of numbers which flash up on the screen. They are also given some mathematical facts and have to decide which is the odd one out. There are five examples in this activity where the young mathematicians have to use the four given values and the four operations to make the answer 24. ROUND ROBIN ACTIVITIES These are 8-minute rounds where questions on a certain topic have to be answered. Topics could be: Puzzles with tangrams or dominoes or pentominoes Ordering cards by following given clues Mazes, including addition mazes where the total has to be reached at the destination AND THE ANSWER IS ... Lateral thinking questions are given with a choice of four answers. The young mathematicians have to solve the problems and display their chosen letters to find out whether they are right or wrong. The young mathematicians always leave these challenges buzzing, no sugar rush required!
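The "make 24" warm-up above can be brute-forced with a short program: repeatedly pick two of the remaining values, combine them with one of the four operations, and recurse. This is an illustrative sketch with names of my own choosing, not part of the event's materials.

```python
def solve24(nums, target=24):
    """Return True if the given values can make the target using +, -, *, /."""
    def combine(vals):
        if len(vals) == 1:
            return abs(vals[0] - target) < 1e-6
        # Pick an ordered pair (covers both a - b and b - a), combine, recurse.
        for i in range(len(vals)):
            for j in range(len(vals)):
                if i == j:
                    continue
                rest = [vals[k] for k in range(len(vals)) if k not in (i, j)]
                a, b = vals[i], vals[j]
                candidates = [a + b, a - b, a * b]
                if abs(b) > 1e-9:          # avoid division by zero
                    candidates.append(a / b)
                if any(combine(rest + [c]) for c in candidates):
                    return True
        return False
    return combine([float(n) for n in nums])

print(solve24([4, 6, 8, 2]))   # True, e.g. (6 + 2) * 4 - 8
print(solve24([1, 1, 1, 1]))   # False
```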
Kepler Team Announces Discovery of Earth-Sized Planet in Habitable Zone Since its launch in the spring of 2009, NASA's Kepler Space Telescope has been hunting exoplanets. The holy grail is a planet that is essentially like ours in terms of size, composition, and habitability: an Earth twin. While we still haven't found a planet that exactly fits that bill, Kepler has now confirmed the discovery of an Earth-sized exoplanet in its star’s habitable zone. The announcement was made at a press conference, and the findings have been published in Science. Kepler-186f is about 10% larger than Earth and orbits an M dwarf star around 500 light-years away in the constellation Cygnus. The star is about half the size and mass of our sun, and it takes Kepler-186f about 130 Earth days to complete a revolution. On the outer edge of the star’s habitable zone, the planet receives about a third as much radiation from its parent star as we do from ours. Life as we know it requires the presence of liquid water, so a planet with the potential for life must be neither too close to the star (where it would be too hot and the water would be vapor) nor too far away (where it would be too cold and the water would be ice). Habitability requires a “Goldilocks Zone” where conditions are just right. "We know of just one planet where life exists -- Earth. When we search for life outside our solar system we focus on finding planets with characteristics that mimic that of Earth," said Elisa Quintana, lead author of the paper. "Finding a habitable zone planet comparable to Earth in size is a major step forward." Co-author Thomas Barclay added: "Being in the habitable zone does not mean we know this planet is habitable. The temperature on the planet is strongly dependent on what kind of atmosphere the planet has. Kepler-186f can be thought of as an Earth-cousin rather than an Earth-twin. It has many properties that resemble Earth."
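The "about a third" insolation figure can be sanity-checked with the inverse-square law. The stellar luminosity (roughly 4% of the sun's) and orbital distance (roughly 0.36 AU) used below are approximate values assumed here for illustration, not figures quoted in the article.

```python
# Flux at the planet relative to Earth's insolation:
# S / S_earth = (L / L_sun) / (a / 1 AU)^2
def relative_flux(luminosity_solar, distance_au):
    """Stellar flux at the planet, relative to Earth's insolation."""
    return luminosity_solar / distance_au ** 2

print(round(relative_flux(0.04, 0.36), 2))  # 0.31, roughly a third
```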
Determining the composition of planets out in the habitable zone isn’t as easy as it is for planets orbiting very close to their star, because there isn’t as much radiation from the parent star available to reveal what is or isn’t being absorbed. While previous findings have indicated that Kepler-186f is a rocky planet, further analysis must be done before any definitive conclusions can be made.
Thursday, 24 January 2013 Improving Students’ Speaking Skill Through Learn to Speak English 9.0 Software (classroom action research) Language learning is important for human social development. As a language used by more than half of the world's population, English holds the key as an international language. English is a tool of communication among the peoples of the world for trade, social-cultural, science, and technology goals. Moreover, English competence is important in career development, so students need to understand and use English to improve their confidence in facing global competition. There are four basic skills in English: reading, writing, listening and speaking. Every person needs these skills in order to interact with others and to exchange information. Speaking English is one way of exchanging information through oral communication. A person who knows and understands English well can easily communicate with other people all over the world, because English is an international language; this can also help a person get a job, spread news and transact business. In this study, the writer focuses on teaching speaking. In a speaking class, the students should be taught how to speak. The components of English speaking skill that should be taught and studied in an English speaking class are pronunciation, vocabulary, grammar, fluency, accuracy and comprehension. Speaking is the most important skill, because it is the ability to carry out a conversation in the language.
I can write using the process of inquiry. Today was divided into two parts. We reviewed the "create" step in the process of inquiry during the first period. We worked on our research during the second period. Part 1: Create: Writing Titles and Topic Sentences The purpose of our Olympic Books (see my example here) is to inform. Since the purpose is to inform, we don't need to use any pathos, and we don't need to convince our reader of anything. We just want to write the information that we have learned. The first things your reader reads on each page are the title and the first sentence of your paragraph. We want both to be interesting and effective. Topic Sentences Should: To practice this we read the following information (research) about Michael Phelps. Then we brainstormed possible titles and topic sentences for each of the four questions on the research page. Look at the Michael Phelps research. Notice that there are four questions and answers to each of the four questions. Choose one question and read the information that answers it. Next, brainstorm a list of three possible titles for an Olympic Book page about that question. Finally, SumParQuo (summarize, paraphrase, or quote) the information to create a topic sentence. Part 2: Research Remember the steps of inquiry? If you are on the "create" step of the process, then use the rubric and instructions below to help you create your Olympic Book. Term 3 Dead Day is Friday, March 2, 2018. Sign up for text message reminders about homework, due dates and more through Remind by clicking here.
Although cultures vary across the thousands of miles comprising North and South America, the countries of the Western Hemisphere are constantly connected by the sea and air creatures that trek across hundreds of miles of water and terrain. From the Arctic to Antarctica, migratory species are vital ecological and economic resources shared by the nations and inhabitants of the Western Hemisphere. They are sources of food, means of livelihood and recreation, and they have important biological, cultural, and economic values for society. Despite their great value, many migratory species in the Western Hemisphere are endangered due to overexploitation, water pollution, the alteration and destruction of their breeding and wintering habitats, illegal trade, use of pesticides and, more recently, climate change. Since migratory species do not recognize borders, the conservation of these species and their habitats and migration routes can only be achieved through the joint efforts of the Western Hemisphere’s nations. Because no single local organization can take on the entire responsibility of protecting and conserving these migratory species’ homes, homes-away-from-homes, and everything in between, wildlife agency directors and other senior officials created the Western Hemisphere Migratory Species Initiative (WHMSI) to facilitate international cooperation. WHMSI is a regular forum for the conservation of migratory wildlife in the hemisphere. Species that play a vital role in ocean life, such as the Atlantic bluefin tuna and the whale shark, face declining populations due to overfishing. Changing and depleted habitats threaten Costa Rica’s hummingbirds, the piping plover, the cerulean warbler, and the Mexican long-tongued bat, to name a few species. With their eggs in high demand among both humans and predators, the roseate tern, loggerhead turtle and green turtle have reached endangered species status.
The benefits that these animals provide for their environments abound: pest control, pollination, seed distribution, and balancing the food chains are but a few of their duties. "I see WHMSI as a forum," said Herbert Raffaele, Chair of the Steering Committee. "It’s an opportunity for countries to collaborate internationally, and to do it in an innovative way where it maximizes impacts on the ground and minimizes the political baggage associated with international cooperation." Raffaele said that WHMSI works because it's an agreement "in spirit," rather than being a politically-driven signed treaty. "By using spirit to drive us rather than technical language, we are able to accomplish more and have less politics involved in our discussions," he said. Richard Huber, Vice Chair of the WHMSI Steering Committee and the Division Chief of Biodiversity and Land Management at the Organization of American States, said that the strength of WHMSI is that it ties together 34 member states of the Organization of American States under the common objective of protecting migratory species and their habitats. “It’s all about collaboration and coordination when you want to get things done,” Huber said. WHMSI aims to expand awareness and political support throughout the hemisphere through helping countries conserve and manage migratory wildlife, improving communication on common conservation issues and informed decision making, and providing a forum for identifying and discussing emerging issues. 
WHMSI’s unique interim steering committee, made up of representatives from governments, non-governmental organizations and interested international treaties and conventions, includes representatives from: - American Bird Conservancy - Birdlife International - Colombia - Convention on Migratory Species - Costa Rica - Inter-American Convention for the Protection and Conservation of Sea Turtles - Organization of American States - Protocol on Specially Protected Areas and Wildlife of the Wider Caribbean - Ramsar Convention on Wetlands of International Importance - Saint Lucia - the United States (Chair) - Western Hemisphere Shorebird Reserve Network - The World Wildlife Fund In 2003, wildlife agency directors and other senior officials of the Western Hemisphere met in Chile to develop a cooperative measure geared towards the conservation of shared migratory species, creating an interim committee to guide such efforts and defining the following priorities: - Aiding in the building of countries’ capacities to conserve and manage migratory wildlife; - Improving communication within the hemisphere with regard to common conservation issues; - Strengthening the exchange of information needed for informed decision-making; and - Providing a forum in which emerging issues can be identified and addressed. WHMSI thereby defined its broader function as a non-prescriptive facilitator of cooperation among both governmental and non-governmental interests focused on migratory species conservation matters of broad common interest. In 2006, WHMSI held its second meeting in Costa Rica in order to identify partnerships to better aid countries in their training and conservation efforts. Representatives from 30 countries and 60 non-governmental organizations and international conventions identified and prioritized their training needs, which were then integrated into a comprehensive plan aiming to train wildlife decision-makers, government officials, and managers under the WHMSI framework.
In 2008, the Paraguayan Ministries of Environment and Tourism hosted the third Western Hemisphere Migratory Species Conference in Asuncion, Paraguay. Government wildlife officials and non-governmental organizations’ and conventions’ representatives gathered to engage in international dialogue and cooperation on migratory species. The Conference established an updated list of activities since the first conference, took further steps towards establishing a permanent forum for the conservation of migratory wildlife, and conducted thematic sessions of interest to the region, including issues such as adaptation to climate change, marine turtle conservation, and migratory bird conservation. In 2010, Miami hosted the fourth WHMSI meeting, with the U.S. Fish and Wildlife Service and Organization of American States acting as co-sponsors. Representatives from 30 countries, 20 nongovernmental organizations, four Conventions and the OAS attended. The Conference sought to: 1) Take further steps towards establishing a permanent forum for the hemispheric conservation of migratory wildlife, both terrestrial and marine; 2) Explore regional and sub-regional collaboration regarding migratory species conservation initiatives; 3) Update activities since the 2008 conference in Paraguay; and 4) Set a direction for the future and the sustainability of WHMSI. A number of important goals were accomplished at the Conference, including the adoption in principle of the document "Western Hemisphere Migratory Species Initiative Purpose and Organization." This document provides a framework and guidance for WHMSI, including its vision, mission, guiding principles, objectives, and implementation. Another important result of the Conference was the selection of a WHMSI Steering Committee.
Following guidance from the newly approved Purpose and Organization document, the Steering Committee is composed of a cross-section of partner representation including six governments of the Hemisphere, ten representatives of civil society, and two representatives from Conventions and hemispheric or sub-regional governmental bodies, all focused on migratory species and their habitats.
NASA takes climate change study to the air With the goal of shedding more light on a number of Earth system processes whose effect on our climate is incompletely understood, NASA will this year launch five new airborne field campaigns. These studies will look at long-range air pollution, warming ocean waters, melting Greenland glaciers, greenhouse gas sources, fires in Africa and clouds over the Atlantic, with the captured data to complement satellite- and surface-based observations to help provide a better understanding of the interconnected systems that affect our climate and how it is changing. The missions were selected from a list of 33 proposals, with each to be funded by up to US$30 million over five years as part of the Earth Venture class of missions. These are a series of uncoupled, relatively low-to-moderate cost, small to medium-sized, competitively selected missions, the first series of which were undertaken in 2010. The five new missions are as follows: - The Atmospheric Carbon and Transport-America project: A US-based study into greenhouse gases, which will attempt to quantify the regional sources of carbon dioxide and methane, among other gases, and then look at the way weather systems move these gases in the atmosphere. The goal is to improve the identification and prediction of methane and CO2 sources and sinks across the eastern United States using data gathered from airborne, ground-based and space-borne sources. - Observations of Aerosols above Clouds and their Interactions project: A study into the impact on clouds over the Atlantic of particles emitted by large-scale seasonal burning in Africa. The particles are caught up in the middle of the troposphere, the lowest layer of the planet’s atmosphere, and then head west over the southeast Atlantic, where they interact with permanent stratocumulus clouds, which NASA calls “climate radiators”.
These are the rounded and bumpier clouds that hang lower in the sky and are important to both the regional and global climate systems. - The Oceans Melting Greenland project: This study will examine how the Atlantic’s warmer and saltier subsurface waters affect the melting of Greenland’s glaciers, which should in turn lead to better predictions of future sea level rise by observing changes in glacier melting where ice comes into contact with seawater. Ships and several aircraft will be involved in taking measurements of the ocean bottom and seawater properties around Greenland. - The Atmospheric Tomography project: With the goal of better understanding the changes in atmospheric chemistry wrought by varied air pollutants, this project will use airborne instruments to look at the impact of man-made air pollution on certain greenhouse gases. The area to be studied is extensive, with flights leaving California and traveling to the western Arctic, the South Pacific, the Atlantic and Greenland. - The North Atlantic Aerosols and Marine Ecosystems Study: An Oregon-based study that will examine the way ocean ecosystems might change as a result of the warming of the oceans. It will look at the impact small airborne particles from marine organisms have on the North Atlantic climate and focus on the annual life cycle of phytoplankton, as NASA believes the large bloom in the North Atlantic each year may "influence the Earth’s energy budget." "These innovative airborne experiments will let us probe inside processes and locations in unprecedented detail that complements what we can do with our fleet of Earth-observing satellites," said Jack Kaye, associate director for research in NASA's Earth Science Division. In total, these five new projects will involve seven NASA centers, 25 universities and other educational institutions, three American government agencies and two industry partners.
What is Stigma? Stigma goes far beyond the misuse of words and information; it is about disrespect. Stigma is commonly defined as the use of stereotypes and labels when describing someone. Stereotypes are often attached to people who are living with a mental illness. The simple fact is that no one fully understands how the brain works and why, at times, it works differently in different people. Our society tends not to give the same acceptance to brain disorders as it gives to other “physical” disorders such as diabetes or hypertension. Even the term itself, “mental illness,” suggests that the illness is not a legitimate medical condition, but rather a problem caused by one’s own choices and actions. People may blame the individual and think that his or her condition is “all in their head.” They may think that a mental illness suggests a character flaw, that if the person were just strong enough or determined enough, he or she could “pull themselves up by their bootstraps” and “just get over it.” Such misconceptions and the resulting stigma can limit opportunities, standing in the way of jobs, housing, equality in insurance coverage, quality treatment and rehabilitation, adequate research, and the community understanding and supports afforded to those with physical illnesses. According to the U.S. Department of Health and Human Services, nearly two-thirds of all people with a diagnosable mental disorder do not seek treatment. Can you imagine your reaction if only 1 out of every 3 of your friends sought help for heart disease, cancer, or even a broken arm? That is what stigma can do! The stigma surrounding mental illness, and the resulting misunderstanding, fear, and discrimination, can and must be exposed and overcome. You can help by learning the facts, sharing this information with family and friends, and speaking out! Some Common Myths and Facts about Mental Illness: Myth: There’s no hope for people with mental illnesses.
FACT: There are more treatments, strategies, and community supports than ever before, and even more are on the horizon. People with mental illnesses lead active, productive lives. Myth: I can’t do anything for someone with mental illness. FACT: You can do a lot, starting with the way you act and how you speak. You can nurture an environment that builds on people’s strengths and promotes good mental health. For example: - Avoid labeling people with words like “crazy,” “wacko,” “loony,” or by their diagnosis. Speak up when you hear others using these damaging words, whether you are among friends in a social setting, read it in a newspaper article, or see mischaracterizations on television. - Use people first language. For instance, instead of saying someone is a “schizophrenic” say “a person with schizophrenia.” - Treat people with mental illnesses with respect and dignity, as you would anybody else. - Respect the rights of people with mental illnesses and don’t discriminate against them when it comes to housing, employment, or education. Like other people with disabilities, people with mental health needs are protected under Federal and State laws. Myth: People with mental illnesses are violent and unpredictable. FACT: In reality, the vast majority of people who have a mental illness are no more violent than anyone else. You probably know someone with a mental illness and don’t even realize it. Myth: Mental illnesses cannot affect me. FACT: Mental illnesses are surprisingly common; they affect almost every family in America. Mental illnesses do not discriminate based on geography, income, education, or other social status — they can affect anyone. Myth: Mental illness is the same as mental retardation. FACT: The two are distinct disorders. A mental retardation diagnosis is characterized by limitations in intellectual functioning and difficulties with certain daily living skills. 
In contrast, people with mental illnesses—health conditions that cause changes in a person’s thinking, mood, and behavior—have varied intellectual functioning, just like the general population. Myth: Mental illnesses are brought on by a weakness of character. FACT: Mental illnesses are a product of the interaction of biological, psychological, and social factors. Research has shown genetic and biological factors are associated with schizophrenia, depression, and alcoholism. Social influences, such as loss of a loved one or a job, can also contribute to the development of various disorders. Myth: People with mental illnesses cannot tolerate the stress of holding down a job. FACT: In essence, all jobs are stressful to some extent. Productivity is maximized when there is a good match between the employee’s needs and working conditions, whether or not the individual has mental health needs. Myth: People with mental illnesses, even those who have received effective treatment and have recovered, tend to be second-rate workers on the job. FACT: Employers who have hired people with mental illnesses report good attendance and punctuality, as well as motivation, quality of work, and job tenure on par with or greater than other employees. Studies by the National Institute of Mental Health (NIMH) and the National Alliance on Mental Illness (NAMI) show that there are no differences in productivity when people with mental illnesses are compared to other employees. Myth: Once people develop mental illnesses, they will never recover. FACT: Studies show that most people with mental illnesses get better, and many recover completely. Recovery refers to the process in which people are able to live, work, learn, and participate fully in their communities. For some individuals, recovery is the ability to live a fulfilling and productive life. For others, recovery implies the reduction or complete remission of symptoms. 
Science has shown that having hope plays an integral role in an individual’s recovery. Myth: Therapy and self-help are wastes of time. Why bother when you can just take one of those pills you hear about on television? FACT: Treatment varies depending on the individual. A lot of people work with therapists, rehabilitation professionals, their peers, psychiatrists, nurses, and social workers in their recovery process. They also use self-help strategies and community supports. Often these methods are combined with some of the most advanced medications available. Myth: Children do not experience mental illnesses. Their actions are just products of bad parenting. FACT: A report from the President's New Freedom Commission on Mental Health showed that in any given year 5-9 percent of children experience serious emotional disturbances. Just like adult mental illnesses, these are clinically diagnosable health conditions that are a product of the interaction of biological, psychological, social, and sometimes even genetic factors.
Building a Liter Lesson 11 of 18 Objective: SWBAT demonstrate understanding of the relative size of a liter. In order to fulfill the standard of students being able to demonstrate their ability to find all factor pairs from 1-100, I have allowed daily work and drill in this area to help with mastering this standard. Once a week, they are given a test of random numbers between 1-100 to find factor pairs for. This test then reveals their fluency in those facts. This test occurs on Wednesdays. I keep track of their progress by using a spreadsheet. To master the goal, my team and I decided that 80% overall was proficient. In addition to that goal, between weekly assessments of factor pairs, all students who didn't achieve 100% on the last assessment practice finding factor pairs by writing a given product down in their math notebooks and writing out the factor pairs of each. I check their answers and ask them to check the online factor pair calculator to be sure they have found all of them. I have numbered note cards 20-100. I throw them up in the air and they land around the room. Students choose one they haven't worked on before and begin to factor the number. They list the number in their notebook like this: They have chosen a card with 35 on it. I check them and, if they have found all of them, I ask them to check their findings on their factor pair calculator (this makes them take responsibility for being sure they have completed all the factor pairs). As people finish, I allow them to log onto a math app on their iPads and work for a few minutes. As soon as everyone is finished, I begin the core lesson. Setting Up the Lesson Materials: I set up for groups of 3-4 students. Sinks in the science lab (or another water source), 1 liter graduated cylinders, 100 ml cylinders, eye droppers, a 250 ml measuring cup, one styrofoam coffee cup, one 12 oz plastic beverage cup and any other common container that might interest them in knowing how many ml it contains.
I also used small pill containers. The students pour and explore capacity using the provided containers. I also used one 2 liter pop bottle and two 500 ml water bottles as samples of labeled containers they commonly see. I stapled together a four-page flipbook out of two pieces of blue paper for each child, cutting 8.5 x 11 blue sheets of paper for our liter flipbook. I chose the color blue for water so they would connect it to liters. The flipbook would be used as it had been when we studied meters and grams, but this time they would fill it out independently and draw objects at the end of the lesson and for homework. I started the lesson today in the science lab by writing these sentences on the whiteboard with blanks to fill in: I can use meters to measure _____________. I can use grams to measure _____________. We talked about what words we could use to fill in the blanks. Quickly students said "length" for meters and "weight" for grams. One student said that he thought "mass" was a better word for grams, so I erased weight and wrote in mass. I coached my students to think about width as well as length and suggested that distance was a good word too. I explained that meters measure the length, width or distance of a line or line segment. Under the sentences, I wrote as I asked: What is left to measure? Liquid! And then I continued: So what metric unit do we use to measure liquid? Everyone shouted "Liters!" I opened up my SB file to the first page and we discussed each question on the first page. I used this SB file as a lesson guide: What's a Liter SB File. I asked my students what they thought a liter looked like. Students mentioned pop bottles and water bottles. I told them that they would use the containers on the counter in front of me to measure and discover what a liter looks like. I told them that I wanted them to think about the shape of the containers and use their estimation skills to predict which container would hold one liter.
I told them that they would be pouring and measuring water in the graduated cylinders and other containers to find a liter. I randomly grouped students in groups of 3. I wrote instructions on the whiteboard: For now, you will use milliliters as your unit. You are looking for a liter. Think: How many milliliters are in a liter? Instructional Note: Remember that students have not been told that 1000 ml equals a liter. I am hoping they figure this out using their prior knowledge from studying and converting meters. Take turns sharing responsibilities without arguing! 1. Choose a container in your station that your group thinks is about a liter. 2. From the faucet, fill the container. 3. Pour the water into the larger 1000 ml graduated cylinder and record the measurement in your notebook. 4. When you have found the container that is a liter, sketch it on the first page of your flipbook. Fill out your flipbook from liter, deciliter, centiliter and milliliter, with kiloliter on the back. Extra: When you have found your liter, for fun, use the eyedropper to measure a ml. Count how many drops of water you need to reach the ml mark on the smallest graduated cylinder. Are you surprised by your finding? Why or why not? I finished by explaining that deciliter and centiliter would be left blank on their flipbooks because they are uncommon units, just as the equivalent units were for meters and grams. With that, the "learning chatter" started as teams played with the containers. I could see they were engaged, that there was discussion about how to measure and which container to use first, and that they guessed the large graduated cylinder was the liter. I roved the classroom, listening for insightful comments and remarks that I could use to guide them to mastery of the standard and the math practice standards for accuracy and using the right tools the right way. I heard one student say they thought the large graduated cylinder was too big to be a liter.
I later would use this to get them to understand that containers can be deceiving when it comes to measuring liquids. To master this standard, students need to understand that relative sizes include things that may not look like what they think the unit should be. This is demonstrated very easily in liquid measurement. Finding that liter! Students started to figure out how and why the large graduated cylinder was a liter because they made the choice to start with the largest container. I said nothing to lead them to this decision because I wanted to see if their critical thinking skills and logic would kick into gear. I noticed they were using their math reasoning and their understanding of reading units on a measuring tool well. Understanding what a liter looks like. I had hoped that they would use prior knowledge learned in the other two labs, So What's a Meter Look Like? and Finding a Gram, because those had been a little more structured and directive. I wanted them to be freer this time around to make logical decisions about which tool they would use first and how they could find that liter. (I used the SB to guide their thinking in the prior section.) Some continued to measure and work. Measuring. Some students needed clarification of how much water to put in the green cup when measuring. Clarifying the Expectations. I continued to rove and saw them start to experiment with the eyedroppers, and I stopped to ask questions about how effective an eyedropper would be for filling up a larger container. This led to discussions about the choice of measuring tools. In the end, most groups had discovered that 1000 ml was a liter, or they understood it from another group's discovery. As I visited each group of students I could see most were ready to wrap up and discuss what they had found.
I wanted to really focus on helping them understand that volume is sometimes deceiving, so I had them return to their seats as I brought out a filled 2 L pop bottle and the smaller half-liter water bottles. Using the large 1000 ml graduated cylinder, I asked them how many times I could pour the water from the cylinder into the pop bottle. Everyone had confirmed earlier that they knew the cylinder was one liter and drew it on their flipbook, yet everyone thought that the cylinder would pour just once into my pop bottle. Me Holding a Liter. They were amazed to see it went twice. Then when I held up the half liters, they knew the bottles were 500 ml. They knew both bottles would fill the cylinder, but didn't believe four of them would fill the pop bottle. We talked more about how containers can be very deceiving. Deceived by size. This gives me a clear understanding that relative size in metric liquid measurement is still difficult for them to perceive. As I moved toward closure, I reiterated being careful about choosing our measuring tool. Logic to Measuring Things. I asked for "aha" moments: Aha! Liters!: This student realizes what a liter is and explains. Aha moment with the green party cup shows a student who realized that the green party cup was not a liter. Another group was talking about how the styrofoam coffee cup held more water than they thought. Comparing the cups. For the final wrap up, I wanted students to show me, using the final page of the SB file, which objects would fit in the right place on the T-chart. Choosing the right unit. For their homework assignment, I asked them to complete their flipbook by drawing 3-5 examples of a liter, ml and kl. I wanted them to look around the house with the help of an adult (since more than likely they would have to look in medicine cabinets, pantries and possibly under the sink).
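For readers curious how an "online factor pair calculator" like the one the students check their work against might operate, here is a minimal sketch in Python. This is a hypothetical stand-in, not the actual tool the class used; it applies the usual trial-division idea of stopping once a candidate factor exceeds the square root of the number, so each pair is found exactly once.

```python
def factor_pairs(n):
    """Return all factor pairs (a, b) with a <= b and a * b == n."""
    pairs = []
    a = 1
    while a * a <= n:          # no factor pair has both members above sqrt(n)
        if n % a == 0:
            pairs.append((a, n // a))
        a += 1
    return pairs

# Example: the note card with 35 on it
print(factor_pairs(35))  # [(1, 35), (5, 7)]
```

Stopping at the square root is also why a student who finds 1 x 36, 2 x 18, 3 x 12, 4 x 9 and 6 x 6 can be confident the list for 36 is complete.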
The Palazzo Vecchio in Florence, Italy What’s the News: The walls of the Palazzo Vecchio, the centuries-old seat of Florentine government, have doubtless housed many secrets over the years. Now, a physicist, a photographer, and a researcher who uses advanced technology to analyze art are teaming up to reveal one secret that may still linger there: a long-lost mural by Leonardo da Vinci, thought to be hidden behind a more recent fresco. The team plans to use specially designed cameras, based on nuclear physics, to peer behind the fresco and determine whether the da Vinci is actually there—and if so, to take a picture of it. What’s the Context: - Leonardo began the mural, called “The Battle of Anghiari,” in the early 1500s. While copies and historical mentions of it survive, the painting itself has not been seen for centuries. - Maurizio Seracini—an engineer by training who uses technology to examine, image, and analyze art and artifacts—has been searching for “The Battle of Anghiari” since the 1970s. He’s come to suspect it lies behind a later fresco, “The Battle of Marciano” by Giorgio Vasari, in the Palazzo Vecchio’s enormous council hall. This newer work, Seracini believes, was painted on a five-inch-thick brick wall covering Leonardo’s mural. - There are myriad methods of digitally “peeling back” layers of paint or peering through grime and other barriers to detect art that lies beneath: X-ray fluorescence and infrared reflectography, among others. Although Seracini tried many of these methods, none located the lost Leonardo—nor proved it wasn’t there. Part of Vasari’s “The Battle of Marciano” How the Heck: - Seracini has now teamed up with photographer Dave Yoder and physicist Bob Smither to search for the painting using a new technique: a gamma camera, based on a device Smither developed to image tumors. - The camera would first bombard the suspected location of the painting with neutrons. 
When the neutrons hit the mural, if it is indeed there, metals in the paint would give off gamma rays. These gamma rays would pass back through the wall to hit the copper crystals the camera uses instead of a lens to form an image. (Check out Yoder’s photos and descriptions of Smither’s gamma cameras here.) - A test of the method last summer showed that it could produce fairly clear images from the sorts of pigments Leonardo used, even through a brick wall. - Building bespoke, radiation-based cameras isn’t cheap, and despite securing substantial support, the team is still short on funds. They’re working to raise an additional $266,500 for the project. - If all goes well, the team is slated to start their gamma camera hunt for the lost Leonardo next year.
Worldwide there are over 12,000 species of ferns. The Wet Tropics is home to 65% of Australia's fern species. In Australia there are 390 native species of ferns, 47 species of fern allies and 44 species of conifers, 39 of which are endemic. Forty fern species are endemic to the Wet Tropics. Epiphytic ferns are one of the most common features in rainforests. They grow on the trunks and limbs of trees but, unlike parasitic plants such as mistletoe, do not steal nutrients from their host tree. They survive instead on rainwater and the nutrients they get from trapped fallen leaves. Sometimes the host tree taps into the fern though: roots have been found growing from the host tree into the fern. 'The name Gondwana, given to the original supercontinent of which Australia was a part, comes from a region of India where the fossil seed fern Glossopteris was first described. The widespread geographic distribution of this genus in ancient times was one of the clues which led to the realisation that the continents had once been joined.' Environmental Protection Agency, Cairns. Reproduction of Ferns 'Most ferns love the tropics where the warm moist conditions, not unlike those in which they evolved, suit their requirements. As a result the Wet Tropics is home to a wide range of relict ferns - species which have survived from the earliest times. They represent all the major evolutionary fern groups.' In the evolutionary development of plants, ferns represent a great advance on all previous models. The surface cells of an aquatic alga are able to absorb nutrients and water; on land it is necessary to divide up the tasks, a step which makes the plants more complex: - Roots were a revolutionary new feature dedicated to seeking out less accessible sources of water, thus allowing plants to move inland.
- They also served to stabilise the larger plants.
- Water and nutrients, taken up by the roots, had to reach other parts of the plant, so a plumbing system - the vascular system - was needed. Woody vessels (xylem) performed this function, moving water and nutrients upwards.
- These vessels had a dual function, also providing rigidity to the tissues. With these load-bearing structures the plants were able to grow much taller and reach up to the light.
- Leaves were another fern invention - a system of solar panels dedicated to capturing the energy of the sun and turning it into food.
- Exposed to the air, these had to be sealed to prevent the water gathered by the roots from leaking away, so a waxy skin (cuticle) was developed.
- Since the process of photosynthesis requires an intake of carbon dioxide from the atmosphere and waste oxygen must be released, special design features in the cuticle - pores - allowed this exchange of gases to continue.
- Another plumbing system was needed to move the sugars and other photosynthetic products from the leaves to the rest of the plant. This function was performed by another new system of vessels, the phloem.
- Although ferns were among the earliest vascular plants (algae, lichens, mosses and liverworts are all classified as non-vascular plants), they were not the only ones.
- The fossil records tell us that at one time the world was dominated by massive clubmosses, giant horsetails and others which created magnificent forests 45 m or more in height as they used their newly developed vascular systems to reach higher and higher in competition for light.
- Many of these plants are now extinct, their relatives hanging on in comparative obscurity.

Links with the past

Although ferns are structurally more advanced than mosses, like the more primitive algae and mosses their sex life involves two generations and a dependence on water. Spores are produced by the fern plant in spore cases, usually beneath the leaf.
Once released, each spore grows into a tiny heart-shaped structure known as the thallus which, in turn, produces male sperm cells at the pointed end and female cells in the notch. In the presence of water, the sperm burst free from the thallus and, attracted by chemicals, swim to the female cells. After fertilisation, an adult plant develops, eventually dwarfing its 'parent'. This large spore-producing fern plant is the equivalent of just the tiny spore capsule and stalk of the moss plant, while the much more obvious green moss plant is the equivalent of the tiny fern thallus, which produces the sex cells. Ferns can, of course, increase their numbers asexually by spreading their rhizomes - stems which are either below the soil surface or just above it. Roots grow down from the rhizomes while fronds sprout from the top. Environmental Protection Agency, Cairns.

While many of the plants in the rainforest have been around for millions of years, ferns have been around for much longer than that! They appear in the fossil record dating back to 325 million years ago. They are one of the earliest vascular plant forms on the planet (plants which circulate water internally) and they preceded the flowering plants, the conifers and even the cycads - all of which have a more advanced means of reproduction. Forty species of ferns are endemic to the Wet Tropics (occur nowhere else) and there are many interesting species, but only a few special ones are profiled here.

The King Fern (Angiopteris evecta) looks superficially more like a palm crown growing directly out of the ground, but this is actually a relict fern from the late Paleozoic era. This is the only species from its genus in Australia, but it does occur elsewhere in Southeast Asia and Oceania. The fronds, reaching as much as 5 m (16 ½ feet), might be the longest in the world for a fern. A good place to see King Ferns is the Nandroya Falls track in Palmerston National Park, south of Cairns, and along the road to Cape Tribulation.
Promotional photographs of the Wet Tropics often feature the Tree Fern, a species which imparts a tropical yet ancient feel to the area. Tree ferns have been here since the dinosaurs, but the modern species are only small versions of their ancestors. The Scaly Tree Fern (Cyathea cooperi) is an attractive and characteristic tree fern, with node scars (scales) covering its narrow trunk and a horizontal crown of feathery fronds. The crown is said to reach up to 12 m (40 feet) wide and sits atop a thin trunk reaching up to 12 m (40 feet) tall. This species isn't restricted to the Wet Tropics and can be found in forests further down the east coast of Australia.

A primitive-looking fern indeed is the Tassel Fern, and with good reason - fossils of much larger specimens have been identified from the Carboniferous period. Two very different forms of the Tassel Fern (also known as Clubmoss) are almost opposite to each other in habit. The first is a ground-creeping version sometimes called the Pine Tree Fern (Lycopodiella cernua), as it resembles miniature pine trees only 25 cm (10 inches) tall. It prefers open sun and spreads along the ground, sending up vertical stems from along its length. If any of the tips of the erect stems should meet the soil, a new plant sprouts from the tip and grows upward to become a new vertical plant that sends out creepers. Visitors to the Flecker Botanic Gardens in Cairns can see this plant.

The other Tassel Fern group of interest is an epiphytic one (it grows on top of another plant but is not parasitic) which is frequently grown as a hanging plant. The Common Tassel Fern (Huperzia phlegmaria) likes warm, humid conditions with good air flow. At the end of each long "cat-tail" is a shorter green stem with tiny cones along its length. These contain the material for a most interesting means of reproduction: the cones release spores which drop into water, some spores being male and others being female.
These spores are then fertilised in the water as they collide, becoming a seed which can then sprout a new plant. Wet Tropics Management Authority.
There are many references in creationist literature to historical evidence of dinosaurs and man living together, such as the petroglyph in Natural Bridges National Monument, Utah, legends and stories of dragons in Europe, and frequent use of the dragon motif by the Chinese.1 But one striking physical and historical evidence in Asia is rarely mentioned: the bas-relief picture of a dinosaur in the ruins of Angkor outside of Siem Reap, Cambodia. Angkor is a collection of ruins from the ancient Khmer civilization that lived and ruled in Southeast Asia from the end of the ninth century to the end of the twelfth century. The ruins are composed of temples, palaces, libraries, monasteries, and other buildings built by the various kings and rulers of the Khmer people. These ruins now lie in an area designated as the “Angkor Archaeological Park” in the Kingdom of Cambodia. Many of these ruins have been restored over the years.2 One collection of ruins, known as Angkor Thom, has purposely not been restored. The original edifices were either constructed during the reign of King Jayavarman VII or commissioned by him (AD 1181 – ca. 1210). The most significant of the buildings of Angkor Thom is the great temple-monastery of Ta Prohm. Today, the ruins lie majestically entwined with large jungle vines and the roots of towering tropical strangler fig trees, a decision the Angkor Conservancy took to give tourists a more adventurous, exciting experience.3 Most of the great Angkor ruins have vast displays of bas-relief depicting the various gods, goddesses, and other-worldly beings from the mythological stories and epic poems of ancient Hinduism (modified by centuries of Buddhism). Mingled with these images are actual known animals, like elephants, snakes, fish, and monkeys, in addition to dragon-like creatures that look like the stylized, elongated serpents (with feet and claws) found in Chinese art. 
But among the ruins of Ta Prohm, near a huge stone entrance, one can see that the “roundels on pilasters on the south side of the west entrance are unusual in design.”4 What one sees are roundels depicting various common animals—pigs, monkeys, water buffaloes, roosters, snakes—and what appears to be a dinosaur! There are no mythological figures among the roundels, so one can reasonably conclude that these figures depict the animals that were commonly seen by the ancient Khmer people in the twelfth century. That means that only a little over 800 years ago, some dinosaurs were likely still living in the region of Cambodia! Of course this is no surprise to biblical creationists, because we know from Genesis 1 that land animals (such as dinosaurs) and humans were living together in the beginning, and that representatives of the land animals (e.g., dinosaurs) were saved on the Ark to repopulate the earth after the Flood only 4,300 years ago. [Ed. note: It has come to our attention that there are some who question the authenticity of this relief. If it is authentic, it is merely one more item that supports what we know to be authentic and 100% accurate: the account of the Word of God indicating man and land animals (including dinosaurs) lived together only a few thousand years ago. But even if this relief is found to be fraudulent, there are many other petroglyphs and legends that support the biblical account.]
Fish habitat must provide fish with a place to eat and a safe place to live and rest. Water must be natural, abundant, and clean. Flowing water is helpful because it provides the oxygen that fish need to survive and it removes pollutants that harm fish. Fast current in streams is a problem for fish, however, because it requires them to expend energy to stay in one place, and fish like to stay in one place for most of the year. Fish will hang out behind rocks or logs so that they can rest and not fight the current. Some of the other places where the current is slow are brush piles, the bottoms of deep pools, and under a streambank. Along with a slower current, these places also supply cover from predators. Fish are usually active at night, when they are harder to see, and usually rest during the day. Streams also provide food for fish in the form of small drifting aquatic insects. The insects tend to be in fast-moving, rocky, shallow places called riffle areas. Insects tend to drift more at night, which is another reason that fish become more active at night. Ideal places for fish are pools found at the end of a riffle: the current is slower, but the riffle supplies the aquatic insects that fish need for food. Foliage cover at the edge of a stream is important to shade the stream so that the water temperature does not get too warm. The roots of the plants hold the soil together so that the streambank does not wash away. The vegetation also provides cover for insects, which often fall into the stream and become food for the trout. From mid-May to mid-July, cutthroat trout leave their normal habitat to find a spawning habitat, which is usually on a gravel bar.
Commemorating the Exodus

In Parshat Bo, as the Jews prepare for the Exodus from Egypt, God designates Nissan as the first month of the Jewish calendar. This is difficult to understand, given that we commonly refer to Rosh Hashana - the first day of Tishrei - as the new year, marking the Creation of humanity. The explanation is as follows: Often people accept the idea of God as Creator, but they figure that after Creation, God sat back to let nature run its course. The Exodus, however - with all its open miracles - teaches us that God's role as Director of History is even greater than His role as Creator. And that's why at the Exodus the order of the months changed - to commemorate this new relationship between God and humanity. Actually, this helps explain another question: If Shabbat is a commemoration of the Six Days of Creation, then why are only Jews commanded to observe Shabbat? The answer is found in the text of the Friday night Kiddush, where we declare that the purpose of Shabbat is "to remember Creation and to remember the Exodus." Because while God created the entire world, it was through the Jewish Exodus from Egypt that mankind came to appreciate God as the guiding hand of history. Let's listen to the words of Prof. Nicholai Berdysev, writing in Moscow in 1935: "The survival of the Jews, their endurance under absolutely peculiar conditions, and the fateful role played by them in history - this people is governed by a [mystical] predetermination, transcending the norms of history."
Arts & Crafts: Fruit and Vegetable Book
Submitted by: masamom
Create a fruit and vegetable book. Ages 36 - 60 months, 1 - 8 tots.
Materials:
- Paper (need 5 pieces per student)
- Crayons (assorted colors)
- Markers (assorted colors)
- Stapler (adult use only)
Directions:
- Ahead of time, prepare the following pages with these titles: Book Cover: "________'s Fruit and Vegetable Book", Page 1: "My favorite vegetable", Page 2: "My favorite fruit", Page 3: "I would like to grow", and Page 4: "I would like to cook with".
- Begin the activity by making a list of fruits and vegetables with the kids.
- Give each child the 5 prepared pages of the book.
- Have the child write their name in the blank on the book cover page. Have the child draw fruits and vegetables on the book cover.
- Have the child draw a picture to illustrate each of the remaining book pages. Help the child write the fruit or vegetable name in the blank.
- Adult: Staple the pages, in order, along the left side to complete the book.
Skills:
- Cognitive development » Approaches to learning » Persistence & attentiveness
- Cognitive development » Literacy » Book appreciation & knowledge
- Cognitive development » Literacy » Print concepts & conventions
- Language & communication » Language development » Listening & comprehension
- Fine motor skills » In-hand manipulation
A study carried out at the University of Auckland in New Zealand, led by Dr. Nicholas Gant, has shown that caffeine may help with muscle fatigue caused by exercise. The chemicals released during rigorous exercise can affect the entire central nervous system, specifically its ability to drive muscle functions. This affects not only the muscles that were targeted during the exercise, but even those that control the movement and focusing of the eyes. It is believed that the caffeine in coffee may help to prevent that effect. The details of their research, which has been published in the journal Scientific Reports, focus on what is called central fatigue, a diminishing of the nervous system's ability to control muscle movement due to high levels of recent activity. In their study, the researchers explain that vigorous exercise can lower the central nervous system's ability to drive muscle function, resulting in what is known as central fatigue. This most commonly presents itself as fatigue of the muscles being used, such as a bicyclist's legs feeling tired after a long ride. However, it is postulated that this may also affect other muscles that aren't overly exerted during the exercise at all. To verify this, Dr. Gant and his team constructed an experiment that involved two groups of cyclists who were monitored during a 3-hour ride. One group consumed caffeine during the ride, an amount equal to two cups of coffee, while the other group was given a placebo. At the end of the 3-hour session, the riders' eyes were tested with the use of a head-fixed eye-tracking system. Measurements of the eye movements indicate not only that the strenuous exercise induced symptoms of central fatigue, but also that the impairment to eye movement was remedied with caffeine. The results show that caffeine is responsible for indirectly boosting the activity of certain chemicals that relay signals between brain cells, called neurotransmitters.
These findings support the results of previous studies which have suggested that impairments in neurotransmitter activity might be responsible for central fatigue. “Interestingly, the areas of the brain that process visual information are robust to fatigue. It’s the pathways that control eye movements that seem to be our weakest link,” says Dr. Gant, adding, “These results are important because our eyes must move quickly to capture new information. But there’s hope for coffee drinkers because this visual impairment can be prevented by consuming caffeine.”
Bipolar (by-POLE-ar) disorder is a condition in which periods of extreme euphoria * (yoo-FOR-ee-uh), called mania (MAY-nee-uh), alternate with periods of severe depression * . Bipolar disorder is sometimes also called manic (MAN-ik) depression.

What Is Bipolar Disorder?

Bipolar disorder is a type of depressive disorder * . People with bipolar disorder experience two (thus the prefix "bi") extremes in mood; they have periods of extreme happiness and boundless energy that are followed by periods of depression. Bipolar disorder can range from severe to mild. Different forms of bipolar disorder are distinguished from one another by the severity of mood extremes and how quickly mood swings take place. For example, full-blown bipolar disorder, or bipolar I, involves distinct manic episodes followed by depression. People with this form of bipolar disorder often experience trouble sleeping, changes in appetite, psychosis * , and thoughts of suicide. Another form of bipolar disorder called bipolar II affects some people. In bipolar II the mania is not extreme and the person does not lose touch with reality but does have periods of depression. Some people also experience mixed states where symptoms of mania and depression exist at the same time, and this form may be more common in children. Other people may experience a form of bipolar disorder in which there is a rapid cycling between "up" and "down" moods with few, if any, normal moods in between. Cyclothymia is a condition in which there are mood swings but with milder highs and lows.

Who Has Bipolar Disorder?

Ernest Hemingway, winner of the Nobel Prize in literature, showed signs of having bipolar disorder. So did presidents Abraham Lincoln and Theodore Roosevelt and the composer Ludwig van Beethoven. All of these men were intelligent, creative, successful individuals, but they all fought the two faces of bipolar disorder.
At one moment they would be on top of the world, full of ideas and creative and physical energy. Then a few days, weeks, or months later they would be sunk in the despair and lethargy of depression.

* euphoria is an abnormally high mood with the tendency to be overactive and overtalkative, and to have racing thoughts and overinflated self-confidence.
* depression (de-PRESH-un) is a mental state characterized by feelings of sadness, despair, and discouragement.
* depressive disorders are mental disorders that involve long periods of excessive sadness and affect a person's feelings, thoughts, and behavior.
* psychosis (sy-KO-sis) refers to mental disorders in which the sense of reality is so impaired that a patient cannot function normally. People with psychotic disorders may experience delusions (exaggerated beliefs that are contrary to fact), hallucinations (something that a person perceives as real but that is not actually caused by an outside event), incoherent speech, and agitated behavior.
* genes are chemicals in the body that help determine a person's characteristics, such as hair or eye color. They are inherited from a person's parents and are contained in the chromosomes found in the cells of the body.

Bipolar disorder affects about 1 out of every 100 people, or at least 2 million Americans. It affects people of all races, cultures, professions, and income levels. Men and women are affected at equal rates. Bipolar disorder tends to run in families and is believed to have an inherited genetic component. Studies on twins show that if one member of a pair of identical twins (twins who have identical genes * ) has bipolar disorder, the other twin has about a 70 percent chance of also having the disorder. If one of a pair of fraternal twins (twins who do not have identical genes) has bipolar disorder, the other twin's chance of also having it is much lower.

What are the Symptoms of Bipolar Disorder?

Bipolar disorder has two distinctive sets of symptoms.
During the depression phase, a person may experience:
- persistent feelings of sadness and anxiety
- feelings of worthlessness or hopelessness
- loss of interest in activities that were formerly enjoyable
- fatigue and decreased energy
- sleeping too much or too little; difficulty getting up or going to sleep
- eating too little or too much
- unexplained periods of restlessness, irritability, or crying
- difficulty concentrating or remembering things
- difficulty making decisions
- thoughts of suicide or suicide attempts
- increased difficulties in relationships with friends, family, teachers, or parents
- alcohol or substance abuse

During the manic or euphoric stage, a person may experience:
- great energy; the ability to go with little sleep for days without feeling tired
- severe mood changes from extreme happiness or silliness to irritability or anger
- over-inflated self-confidence; unrealistic belief in one's own abilities
- increased activity, restlessness, distractibility, and the inability to stick to tasks
- racing, muddled thoughts that cannot be turned off
- decreased judgment of risk and increased reckless behavior
- substance abuse, especially cocaine, alcohol, and sleeping pills
- extremely aggressive behavior

* Attention Deficit Hyperactivity Disorder (ADHD) is a condition that makes it hard for a person to pay attention, sit still, or think before acting.

How is Bipolar Disorder Diagnosed?

Bipolar disorder usually begins in early adulthood, although experts now recognize that younger children and teens may also have the disorder. Some children who are diagnosed with attention deficit hyperactivity disorder (ADHD) * may actually have bipolar disorder or both disorders. These children not only have symptoms of ADHD but often also have symptoms such as significant and sustained tantrums, periods of anxiety * (including separation anxiety * ), periods of irritability, and mood changes. With many children, mood states change rapidly and without warning.
Children with bipolar disorder are beginning to be researched by psychologists * and psychiatrists * , who previously did not believe that such disorders occur in early childhood. Doctors often ask family members about the person's symptoms, as people with bipolar disorder are often not aware of the changes they are experiencing. People with bipolar disorder have had at least one period of mania. Often after the first episode five or more years will pass before another manic or depressive period occurs. Despite the stretches of normal moods, bipolar disorder does not go away. Instead, the time between mania and depression gets shorter and shorter, and the symptoms may become more severe. Not infrequently, bipolar disorder can lead to psychosis or to suicide. About 19 percent of people who have required hospitalization for bipolar disorder commit suicide.

How Is Bipolar Disorder Treated?

Most people with severe mood swings can be helped by treatment. The drug lithium has been one of the medications of choice for treating bipolar disorder, and it is often very effective. Other medications have also been helpful in controlling mood swings. These include various antiseizure medications (for example, valproate and carbamazepine) and antipsychotic medications. People with bipolar disorder need to continue to take their medications even when they feel normal to prevent the recurrence of mood swings.

Living with Bipolar Disorder

Living with a loved one who has bipolar disorder can be very hard on family members. Perhaps the most effective thing that family members can do is to help the person with the disorder get treatment. Many family members find joining a support group or participating in family therapy to be helpful in understanding and managing the impact of this difficult problem. People who are talking about suicide need emergency help.
Many telephone books list suicide and mental health crisis hotlines in their Community Service sections, or help can be obtained by calling emergency services (911 in most communities).

* anxiety (ang-ZY-e-tee) can be experienced as a troubled feeling, a sense of dread, fear of the future, or distress over a possible threat to a person's physical or mental well-being.
* separation anxiety is the normal fear that babies and young children feel when they are separated from their parents or approached by strangers.
* psychologist (sy-KOL-uh-jist) is a mental health professional who can do psychological testing and provide mental health counseling.
* psychiatrist (sy-KY-uh-trist) refers to a medical doctor who has completed specialized training in the diagnosis and treatment of mental illness. Psychiatrists can diagnose mental illnesses, provide mental health counseling, and prescribe medications.

Steel, Danielle. His Bright Light: The Story of Nick Traina. New York: Dell Publishing, 2000. Romance novelist Danielle Steel tells the true story of her son's struggle with bipolar disorder.

The Child and Adolescent Bipolar Foundation (CABF), 1187 Wilmette Avenue, P.M.B. #331, Wilmette, IL 60091. CABF is an organization that provides information and support for families of children who have early-onset bipolar disorder.

United States National Institute of Mental Health (NIMH), 6001 Executive Boulevard, Room 8184, MSC 9663, Bethesda, MD 20892-9663. NIMH is a government agency that provides information about bipolar disorder.
A new, detailed view of embryos growing cell by cell has been made possible by using fluorescence microscopy and a new computational framework. With this new system, cell nuclei can be tracked with high accuracy and speed, making possible cell lineage reconstructions that allow researchers to observe embryos growing cell by cell. The study was published in the journal Nature Methods. The scientists who developed this new methodology are from the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia, and the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany. Fernando Amat and Philipp J. Keller were the lead scientists on the project. A goal in developmental biology is to be able to reconstruct cell lineages in developing organisms. Achieving this goal would mean that the developmental process of a complex multicellular organism could be understood at the level of individual cells. In this study, the new computational framework and fluorescence microscopy were used to perform the first cell lineage reconstruction of nervous system development in Drosophila melanogaster (the fruit fly) within the context of an entire developing embryo. Understanding and being able to track the development of the nervous system is of particular importance and has long been a goal of scientists. Whereas high-powered microscopes have allowed observation of cells dividing in early embryonic stages, this new method allows cells to be traced back to their origins in earlier embryonic stages. A researcher can choose a cell at any stage of development and trace its lineage backwards, or identify its trajectory forward, during the growth of an embryo. This is done by traversing the massive amount of data that was obtained with fluorescence microscopy and then stored in a computer. The authors pointed out that the software program is efficient enough to run on a desktop computer with the use of a graphics card.
The problem that was difficult to overcome in this type of study was following individual cells on a large scale over a long period of time. The volume of data is tremendous and the data are very complex. Cells in embryos have different shapes, can be densely packed, and behave differently, all of which makes it difficult to identify and track particular cells. Image quality is another feature of the methodology that needs to be at a very high level. The computer system works by grouping shapes together to spot cells, and the speed of the microscopy allows images to be captured quickly enough that cells cannot migrate far before they are captured in the next image. Four-dimensional, terabyte-size (a terabyte is a thousand gigabytes) data sets were constructed of embryonic development in three different organisms: fruit flies, zebrafish, and mice. Advanced stages of embryonic development were analyzed, including up to 20,000 cells per time point, at a speed of 26,000 cells per minute on a single computer workstation. Only two parameters needed adjustment during the process, and visualization and editing tools were available for efficient data curation. As this methodology matures, and after many types of organisms have been studied, it may become a useful method for understanding some human diseases that have been attributed to errors in the developmental process. It may be that an error in the development of a given physiological system could be pinpointed by the computerized ability to view embryos growing cell by cell. By Margaret Lutze
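The lineage queries described above - picking a cell and walking its ancestry backwards to its origin, or its descendants forward - reduce to simple graph traversals once each tracked nucleus is stored with a pointer to its parent. The following Python sketch is purely illustrative: the `LineageTree` class and its parent-pointer layout are assumptions for the sake of example, not the actual data format of the published software.

```python
# Illustrative sketch of cell-lineage traversal over tracked nuclei.
# The parent-pointer representation here is a hypothetical simplification.

from collections import defaultdict

class LineageTree:
    def __init__(self):
        self.parent = {}                  # child cell id -> parent cell id
        self.children = defaultdict(list) # parent cell id -> child cell ids

    def add_division(self, parent_id, child_id):
        """Record that parent_id divided, producing child_id."""
        self.parent[child_id] = parent_id
        self.children[parent_id].append(child_id)

    def trace_back(self, cell_id):
        """Walk from a cell back through its ancestors to the founder."""
        path = [cell_id]
        while path[-1] in self.parent:
            path.append(self.parent[path[-1]])
        return path  # [cell, parent, grandparent, ..., origin]

    def descendants(self, cell_id):
        """Collect every cell derived from this one (forward trajectory)."""
        out, stack = [], [cell_id]
        while stack:
            current = stack.pop()
            for child in self.children[current]:
                out.append(child)
                stack.append(child)
        return out

# Example: founder c0 divides into c1 and c2; c1 later produces c3.
tree = LineageTree()
tree.add_division("c0", "c1")
tree.add_division("c0", "c2")
tree.add_division("c1", "c3")
print(tree.trace_back("c3"))           # ['c3', 'c1', 'c0']
print(sorted(tree.descendants("c0")))  # ['c1', 'c2', 'c3']
```

In the real system the hard part is constructing these parent links reliably from noisy image data; once they exist, backward and forward lineage queries like the ones above are cheap, which is why the reconstruction can be explored interactively on a desktop machine.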
Accommodations and Modifications

Specially Designed Instruction (SDI)

The Individuals with Disabilities Education Act (IDEA) defines specially designed instruction as adapting, as appropriate to the needs of an eligible child, the content, methodology, or delivery of instruction:
- to address the unique needs of the child that result from the child's disability; and
- to ensure access of the child to the general curriculum, so that the child can meet the educational standards within the jurisdiction of the public agency that apply to all children.

Specially designed instruction is the instruction provided to a student with a disability who has an Individualized Education Program (IEP) in order to help him/her master IEP goals/objectives. Specially designed instruction is not a part of the Response to Intervention (RtI) or Section 504 (of the Rehabilitation Act) processes, but is specific to a student who qualifies for special education services, in order to help him/her master IEP goals/objectives and ensure access to and progress in the general curriculum. (Texas PGC Website)

Contact Information:
- Valerie Moos, Education Specialist - [email protected]
- Vasquez, Administrative Assistant - [email protected]
The Blue Mountains are a region of elevated sandstone bedrock eroded over millennia to form an impressive range of table top ridges dissected by canyons. The ridges plummet dramatically over vertical cliff walls into broad river valleys some 700 m deep. The ancient sedimentary origin of the mountains has resulted in alternating layers of groundwater-permeable sandstone, as well as impermeable layers of shale, sandstone and ironstone. At the cliff face, groundwater of the tablelands is either forced to flow over the edge, or seep out between the impermeable layers half way down the bisected rock face. These constant water seepages have led to the formation of a unique niche called a hanging swamp – accumulations of peat that quite literally hang over and drip down the vertical cliff faces. This wet, nutrient-poor and exposed habitat forms the ideal environment for Drosera binata and other carnivorous plants to thrive. Drosera binata grows on the side of these cliffs in layers where the groundwater seeps out. In certain areas the plants grow so densely that the entire cliff face is dominated by their bright sparkling foliage – a truly beautiful sight as they catch the morning sun in their dew. Certain walks in the area allow you to descend the cliff face across these hanging swamps, and Drosera binata grows on the side of these trails in abundance. The dominant variety in the area is commonly referred to as the ‘dichotoma’ form, a large lime-green plant with laminas that generally fork twice or thrice and reach up to 20 cm from tip to tip. The leaf petiole is long, reaching around 1 m so as to extend beyond the thick sedge and ferns, and gracefully drapes down from the cliff surface. The white-petaled flowers, proportionately large for the genus, are borne on fleshy stems. The sepals of the dichotoma form are sparsely hairy with finely serrated edges. The ‘T-form’ of Drosera binata also grows along these trails. 
In contrast to the ‘dichotoma’ form, the ‘T-form’ is small and red, with once-bifurcated leaves that grow to around 15 cm in length. These plants can be spotted on the side of the walking trails where the disturbed banks create a bare, exposed niche for the plants. Presumably, the dense scrub layer of the undisturbed cliff walls may choke out the diminutive plants (the ‘dichotoma’ form is able to reach through this layer with its long petioles). Of course, a question to ask is whether the ‘T-form’ and the ‘dichotoma’ form represent two separate variations, or whether the ‘T-form’ plants are merely the immature seedlings of the larger plants. Given that the two varieties occupy different niches in the same habitat, I would say that they are distinct from each other. However, if the ‘T-form’ is actually distinct, then the question is: where are all the baby dichotomas? Although it was still quite early in the reproductive season (Drosera binata usually blooms in midsummer), I could not see any small plants that distinctly resembled a seedling ‘dichotoma’. Drosera binata is well known to have sudden ‘leaf jumps’ as plants mature from seedling to adult stages, with a corresponding change in the number of bifurcations. Several medium-sized, blush-coloured plants seemed to represent an intermediate between the ‘T-form’ and the ‘dichotoma’ form – perhaps these were immature ‘dichotoma’ plants, or a hybrid? A good way to test would be to cultivate a ‘T-form’ plant to flowering size and observe whether it changes its phenotype. There is broader discussion as to whether the species Drosera binata actually represents a complex of several closely related, but distinct, species. This subject is of personal interest to me and I am continually travelling to different places to observe different populations of plants.
Generally speaking, the red ‘T-form’ is associated with cold environments – I’ve seen it on Cradle Mountain in Tasmania, in the Tongariro alpine area in New Zealand, and in the Blue Mountains, all of which experience cold winters. The ‘dichotoma’ form grows in latitudes around Sydney in a mix of lowland and highland sites. It is my personal opinion that the species complex can be divided into three valid species (or at least subspecies) – the ‘T-form’, the ‘dichotoma’ form and a complex of ‘multifida’-like plants – but that is a discussion for another day.
Theoretical breakthrough could boost data storage A trio of researchers that includes William Kuszmaul — a computer science PhD student at MIT — has made a discovery that could lead to more efficient data storage and retrieval in computers. The team’s findings relate to so-called “linear-probing hash tables,” which were introduced in 1954 and are among the oldest, simplest, and fastest data structures available today. Data structures provide ways of organizing and storing data in computers, with hash tables being one of the most commonly utilized approaches. In a linear-probing hash table, the positions in which information can be stored lie along a linear array. Suppose, for instance, that a database is designed to store the Social Security numbers of 10,000 people, Kuszmaul suggests. “We take your Social Security number, x, and we’ll then compute the hash function of x, h(x), which gives you a random number between one and 10,000.” The next step is to take that random number, h(x), go to that position in the array, and put x, the Social Security number, into that spot. If there’s already something occupying that spot, Kuszmaul says, “you just move forward to the next free position and put it there. This is where the term ‘linear probing’ comes from, as you keep moving forward linearly until you find an open spot.” In order to later retrieve that Social Security number, x, you just go to the designated spot, h(x), and if it’s not there, you move forward until you either find x or come to a free position and conclude that x is not in your database. There’s a somewhat different protocol for deleting an item, such as a Social Security number. If you just left an empty spot in the hash table after deleting the information, that could cause confusion when you later tried to find something else, as the vacant spot might erroneously suggest that the item you’re looking for is nowhere to be found in the database. 
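The insert-and-lookup procedure Kuszmaul describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the 10-slot table and the modulo hash standing in for h(x) are assumptions made for the example.

```python
class LinearProbingTable:
    """A toy linear-probing hash table, as described in the article."""

    def __init__(self, size=10):
        self.size = size
        self.slots = [None] * size  # None marks a free position

    def _hash(self, x):
        # Stand-in for the article's h(x); maps x to a position in the array.
        return x % self.size

    def insert(self, x):
        i = self._hash(x)
        # If the hashed spot is taken, move forward linearly to the next free one.
        for _ in range(self.size):
            if self.slots[i] is None:
                self.slots[i] = x
                return i
            i = (i + 1) % self.size
        raise RuntimeError("table is full")

    def contains(self, x):
        i = self._hash(x)
        # Scan forward; reaching a free position means x was never inserted.
        for _ in range(self.size):
            if self.slots[i] is None:
                return False
            if self.slots[i] == x:
                return True
            i = (i + 1) % self.size
        return False
```

Inserting 12 and then 22 into this table shows the probing: both hash to position 2, so 22 ends up one slot further along, and a lookup for either value follows the same forward scan.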
To avoid that problem, Kuszmaul explains, “you can go to the spot where the element was removed and put a little marker there called a ‘tombstone,’ which indicates there used to be an element here, but it’s gone now.” This general procedure has been followed for more than half-a-century. But in all that time, almost everyone using linear-probing hash tables has assumed that if you allow them to get too full, long stretches of occupied spots would run together to form “clusters.” As a result, the time it takes to find a free spot would go up dramatically — quadratically, in fact — taking so long as to be impractical. Consequently, people have been trained to operate hash tables at low capacity — a practice that can exact an economic toll by affecting the amount of hardware a company has to purchase and maintain. But this time-honored principle, which has long militated against high load factors, has been totally upended by the work of Kuszmaul and his colleagues, Michael Bender of Stony Brook University and Bradley Kuszmaul of Google. They found that for applications where the number of insertions and deletions stays about the same — and the amount of data added is roughly equal to that removed — linear-probing hash tables can operate at high storage capacities without sacrificing speed. In addition, the team has devised a new strategy, called “graveyard hashing,” which involves artificially increasing the number of tombstones placed in an array until they occupy about half the free spots. These tombstones then reserve spaces that can be used for future insertions. 
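Tombstone deletion can be sketched the same way. This is an illustrative toy, not the paper's code: a tombstone sentinel keeps lookup probes from stopping early, while insertions are free to reuse the marked slots, which is the behavior graveyard hashing exploits deliberately by seeding extra tombstones.

```python
TOMBSTONE = object()  # sentinel: "an element used to be here, but it's gone now"
EMPTY = None

def insert(table, x, h):
    i = h(x)
    for _ in range(len(table)):
        # Insertions may reuse tombstoned slots as well as empty ones.
        if table[i] is EMPTY or table[i] is TOMBSTONE:
            table[i] = x
            return
        i = (i + 1) % len(table)
    raise RuntimeError("table is full")

def contains(table, x, h):
    i = h(x)
    for _ in range(len(table)):
        if table[i] is EMPTY:
            return False  # only a truly free slot ends the probe
        if table[i] is not TOMBSTONE and table[i] == x:
            return True
        i = (i + 1) % len(table)  # tombstones are probed past, not stopped at
    return False

def delete(table, x, h):
    i = h(x)
    for _ in range(len(table)):
        if table[i] is EMPTY:
            return False
        if table[i] is not TOMBSTONE and table[i] == x:
            table[i] = TOMBSTONE  # mark rather than empty, so later probes stay valid
            return True
        i = (i + 1) % len(table)
    return False
```

If an empty slot were left behind instead of a tombstone, a lookup for an element stored further along the probe sequence would stop at the gap and wrongly report it missing.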
This approach, which runs contrary to what people have customarily been instructed to do, Kuszmaul says, “can lead to optimal performance in linear-probing hash tables.” Or, as he and his coauthors maintain in their paper, the “well-designed use of tombstones can completely change the … landscape of how linear probing behaves.” Kuszmaul wrote up these findings with Bender and Kuszmaul in a paper posted earlier this year that will be presented in February at the Foundations of Computer Science (FOCS) Symposium in Boulder, Colorado. Kuszmaul’s PhD thesis advisor, MIT computer science professor Charles E. Leiserson (who did not participate in this research), agrees with that assessment. “These new and surprising results overturn one of the oldest conventional wisdoms about hash table behavior,” Leiserson says. “The lessons will reverberate for years among theoreticians and practitioners alike.” As for translating their results into practice, Kuszmaul notes, “there are many considerations that go into building a hash table. Although we’ve advanced the story considerably from a theoretical standpoint, we’re just starting to explore the experimental side of things.”
Introduction and Types of Bias in AI Algorithms Artificial Intelligence (AI) has become an integral part of our lives, influencing decisions in various domains, from healthcare to finance. However, it's not always as objective as it may seem. AI algorithms can carry bias, leading to unfair or discriminatory outcomes. In this blog, we will introduce you to the concept of AI bias and explore the different types that can affect these algorithms. Understanding AI Bias Before we delve into the types of bias in AI algorithms, let's take a more detailed look at what AI bias entails and why it's a matter of concern. AI bias is not unlike the biases that humans possess; it's the partiality or unfairness that can be present in data or algorithms. When AI systems make predictions, recommendations, or decisions, they rely on data patterns and algorithms. These patterns can inadvertently include biases that exist in the data they've been trained on, reflecting historical inequalities, stereotypes, or prejudices. The most crucial point to recognize is that AI bias is often unintentional. It's not a result of a programmer or data scientist explicitly encoding prejudice into the algorithm. Instead, it's a byproduct of the data used and the mathematical processes involved in machine learning. For example, if an AI system is trained to evaluate resumes for job applications and the historical data it's trained on contains a bias toward hiring one gender over another, the AI system may perpetuate this bias. 
It could inadvertently favor one gender over the other, even if both applicants are equally qualified. AI bias isn't inherently malicious; rather, it reflects the limitations and imperfections in the data and algorithms used. This imperfection is what makes it a critical issue to address, especially given the increasing role AI plays in critical decision-making processes across various sectors. AI bias can have real-world consequences. It can lead to unfair hiring practices, discriminatory lending decisions, or biased medical diagnoses, potentially exacerbating societal inequalities. Recognizing the existence of bias and taking steps to mitigate it is essential for creating AI systems that are fair, accountable, and trustworthy. Types of Bias in AI Algorithms Now that we've laid the groundwork, let's explore the different types of bias that can manifest in AI algorithms. 1. Data Bias Data bias, also known as selection bias, occurs when the training data used to build an AI model is unrepresentative of the real-world population it's meant to serve. This can result in underrepresentation or overrepresentation of certain groups, leading to biased predictions or decisions. For instance, if a facial recognition system is trained primarily on one ethnicity, it may perform poorly on others. 2. Algorithmic Bias Algorithmic bias emerges from the way AI algorithms are designed and trained. It can occur when algorithms unintentionally incorporate human biases present in the data used for training. For instance, a biased sentiment analysis model might label positive sentiments differently based on gender or race. 3. Aggregated Bias Aggregated bias arises when seemingly unbiased individual data points combine to create biased outcomes. This is a cumulative effect of bias in data and algorithms. Even if individual data points are not explicitly biased, their aggregation may lead to discriminatory results. 4. 
Prejudice Amplification Prejudice amplification occurs when AI systems exacerbate existing societal biases. For example, an AI-powered recommendation system that recommends job opportunities based on past hiring practices could perpetuate gender or racial disparities. 5. Evaluation Bias Evaluation bias happens when the metrics used to assess the performance of AI algorithms are themselves biased. If fairness is not adequately considered in the evaluation process, it can lead to misleading results and reinforce existing biases. Understanding AI bias and its various forms is essential for building more equitable AI systems. Recognizing these biases is the first step toward addressing them and working to create algorithms that are fair and just. In our subsequent posts, we'll delve deeper into strategies for mitigating AI bias and explore real-world examples of the impact of bias in AI systems. Stay tuned for more on this critical topic.
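The data-bias category above can be made concrete with a small check: compare each group's share of a training sample against its share of the population the model is meant to serve. This is an illustrative sketch only; the group labels and the sample and population data are made up for the example.

```python
from collections import Counter

def representation_gap(sample, population):
    """Return each group's share in the sample minus its share in the population.

    A large positive gap means the group is overrepresented in the training
    data; a large negative gap means it is underrepresented.
    """
    s, p = Counter(sample), Counter(population)
    groups = set(s) | set(p)
    return {g: s[g] / len(sample) - p[g] / len(population) for g in groups}

# Hypothetical data: a balanced population, but a skewed training sample.
population = ["A"] * 50 + ["B"] * 50
sample = ["A"] * 80 + ["B"] * 20

gaps = representation_gap(sample, population)
# Group A is overrepresented by 30 percentage points, B underrepresented by 30.
```

A check like this only surfaces selection bias in the inputs; algorithmic and aggregated bias require examining the model's outputs as well.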
Table of Contents What is an example of applied utilitarianism in business? One example of utilitarianism in business is the practice of having tiered pricing for a product or service for different types of customers. Customers who fly in first or business class pay a much higher rate than those in economy seats, but they also get more amenities. What is utilitarianism in public health? Utilitarianism is a normative ethical theory that identifies the good with utility and the right with that which maximizes utility. Thus, according to utilitarianism, utility is the value that should guide actions, programs and policies. Our moral obligation, the right thing to do, is to maximize utility. What is a utilitarian crime? A utilitarian crime is a crime that produces a monetary reward. How do you become utilitarian? The utilitarian method requires you to count everyone’s interests equally. You may not weigh some people’s interests—including your own—more heavily than others. Similarly, if a government is choosing a policy, it should give equal consideration to the well-being of all members of the society. What is utilitarianism in nursing? Utilitarianism is a moral theory that focuses on the overall balance of positive and negative effects of a healthcare professional’s actions; all actions are considered on the basis of consequences, not on the basis of fundamental moral rules and principles or with regard to character traits. What’s a good example of the utilitarian theory? Utilitarianism is a philosophy or belief suggesting that an action is morally right if the majority of people benefit from it. An example of utilitarianism was the belief that dropping the atomic bomb on Japan was a good idea since it potentially saved more lives than it lost. How does utilitarianism impact society? Utilitarianism brings about more happiness, which is relevant in today’s society. 
Therefore, utilitarianism is the only practical ethical system for governing large groups of people, and it provides us with a simple yet powerful ethical guideline: to strive for happiness while minimising pain. What is utilitarianism in the workplace? Utilitarianism is concerned with actions that produce benefit and avoid harm. Utilitarian workplace values include honesty, keeping promises, professionalism, caring for others, accountability and avoiding conflicts of interest. What is the importance of utilitarianism in business? Importance of Utilitarianism Utilitarianism sets stringent ethical standards in the workplace that influence the behavior of all its members. It forms the basis of an ethical program that defines workplace conduct, ethical conduct training and advice, disciplinary action for ethical violations and the like.
In this video, Allison explains color theory. She goes into the main principles of color, why some color combos work well, and how colors can be combined into a color palette for Power BI reports. Links Mentioned in Video: Colorblind options - https://colorbrewer2.org/#type=sequen… Color theory is the study of how colors interact with each other and how they can be used effectively in design. It is important for report designers and data analysts because color can be used to convey information, highlight important data points, and create visual interest. There are several key concepts in color theory that are relevant for report designers and data analysts. When designing reports and visualizations, it’s important to consider the use of color carefully. Too many colors can be overwhelming and distract from the data, while too few can make the visualization appear dull. Additionally, certain colors may have cultural connotations or associations that should be taken into account when designing for a global audience. Overall, understanding color theory can help report designers and data analysts create more effective and engaging visualizations that effectively communicate the intended message to their audience.
Jump to section Produce and communicate clear and effective arguments and ideas formed independently. Develop an appreciation and understanding of literature’s personal, cultural, and historical significance. Demonstrate an understanding of literary forms through studying the elements, structures, and characteristics of different types of literature. Examine historical events in world civilizations, as well as large trends and themes up through 1500. Demonstrate an understanding of how societies change over time and the implications for today. Demonstrate an understanding of the political systems of Britain and the U.S.A. Demonstrate the ability to think critically about various theories and ideas in philosophy
When cosmic carbon leaves home, it may move in a real rush, according to the first sighting of a star spewing it into space. Ageing stars build elements like carbon in their core. These are eventually shed when stars throw off their surface layers. It’s a crucial process for seeding the universe with carbon and oxygen, which are important for life. “All of organic chemistry and life depends on these elements,” says Albert Zijlstra of the Jodrell Bank Centre for Astrophysics in Manchester, UK. But no one knew exactly how the elements move outwards from the core. One theory was that the temperature cycles in dying stars could help stir their inner and outer layers, dredging up elements from the core. To test this theory, Zijlstra and his colleagues used the airborne SOFIA telescope to look at a gas cloud around an older sunlike star called BD+30 3639. They found that the outer part of the cloud, made of gas that left the star long ago, is mostly oxygen. The inner cloud, which left the star recently, is full of carbon. By modelling how the gap between the layers evolved, they were able to work out that the star took 1000 years to dredge up its carbon. “Even though 1000 years sounds like a lot, for a star it really means a very short time,” says Lizette Guzman-Ramirez of the European Southern Observatory in Santiago, Chile. “What we witnessed is the equivalent of a 40-minute event in the life of a human being.” “The unique result of this paper is really that one is witnessing the transition from an oxygen to a carbon-rich object,” says Leen Decin of the Catholic University of Leuven (KUL) in Belgium, who was not involved in the study. “It is almost [like] finding a needle in a haystack.” Journal reference: Monthly Notices of the Royal Astronomical Society Letters (accepted for publication); preprint at arxiv.org/abs/1504.03349
Providing Clean Water Improves Dairy Cattle Nutrition and Production After air, water is the most important nutrient for livestock, but water quantity and quality are often overlooked on many livestock operations. Providing safe, clean water is critical to maximizing a cow’s milk production and cow reproduction performance. Cows need clean water for normal digestion, proper flow of feed through the intestinal tract, proper nutrient absorption, normal blood volume and tissue requirements. Water Quality By the Numbers Cows consume 30 to 50 gallons (115 to 190 liters) of water per day despite spending just 20 to 30 minutes per day drinking it. Water accounts for 87 percent of the milk a dairy cow produces. Drinking water provides 60 to 80 percent of dry and lactating cows’ water needs, while feed provides most of the remaining water needs. Common Signs of Poor Water Quality Poor quality drinking water can result in decreased milk production and cow reproduction failure. Some of the common signs of poor water quality in cows include: - Depressed immune function and elevated somatic cell count, which can lead to reduced milk production and quality - Increased cow reproduction failure, including conception failure, early embryonic death and abortions - Increased off-feed events and erratic eating patterns - Health or performance issues - Scours or digestive upsets in replacement animals - Deteriorating health status of newly arrived heifers or dry cows - Off flavor, smell or color of drinking water Calves Need High Quality Drinking Water Too When considering water quality needs, many people consider the needs of a cow that is producing milk or reproducing, but calves need access to quality drinking water just as much. The quality of water used to make milk replacer is critical to a calf’s health, and the availability of fresh water also affects nutrient intake and the calf’s growth. According to research by Dr. Donna M. 
Amaral-Phillips from the University of Kentucky, calves should be offered free-choice water along with calf starter feed beginning at 4 days of age. The research suggests that depriving calves of fresh water decreases starter intake by 31 percent and decreases weight gain by 38 percent. Calves fed free-choice water also had a lower incidence of scours. Some of the signs of poor water quality in dairy calves are similar to those found in mature cows: - Increased incidence of scours and digestive upsets - Decreased immune competence - Depressed daily gain and feed efficiency - Increased off-feed events and erratic eating behavior Water Quantity and Availability is Equally Important as Quality Cows spend up to four to five hours per day eating, but only 20 to 30 minutes per day drinking water, making availability and easy access to clean, safe water a critical factor in meeting a cow’s hydration needs. A cow depends on readily available water to maintain blood volume, tissue function, rumen activity and proper flow of feed through the digestive tract. To optimize dairy cows’ water consumption, you should provide direct access to clean water as cows exit the milking parlor and within 50 feet (15 meters) of the feed bunk. There should be at least two functioning waterers available per pen. Some additional ways to optimize dairy cows’ water consumption include the following: - Ensure adequate flow rate to maintain a minimum water depth of 3 inches (7.6 centimeters) in the trough - Provide available trough space of 3.5 linear inches (9 linear centimeters) per cow - Monitor stray voltage in water troughs and in the areas around them Testing Drinking Water for Quality Dairy producers should test their cows’ drinking water twice per year, in the late summer and in the winter. 
Listed below is what producers should be testing for when testing water: - Total dissolved solids, pH and hardness - Excess minerals or compounds such as sulfate, chloride, iron, manganese and nitrates - Coliform and bacterial counts - Toxic compounds, including heavy metals, organophosphates, PCBs and hydrocarbons Testing for dissolved solids is the first thing that should be evaluated and will reveal the sum of all dissolved and suspended inorganic matter present in the water sample. High concentrations of sulfate, chloride, iron, manganese and nitrates are known to significantly affect animal performance. In addition, testing water for iron is critical because it is estimated to have a 100 percent absorption rate. If the iron level in drinking water is greater than 0.3 ppm, it may cause problems for cows, including decreased palatability and increased oxidative stress, contributing to immune dysfunction. This can lead to mastitis and metritis, or decreased absorption of copper, manganese and zinc from the cows’ diet. Water contaminated with coliform bacteria can be detrimental to both humans and livestock while nitrates and nitrites can cause reproductive failure, depressed growth in young animals and can result in poor oxygen-carrying capacity of the blood. Generally, sulfates have a laxative effect on livestock, which reduces feed efficiency and performance. Sulfur and sulfates can also affect copper and selenium absorption rates, creating a need for adjustments in supplemental levels. Zinpro’s H20 Water Analysis Program Helps Assess Water Quality The Zinpro H20 Water Analysis Program is a step-by-step tool that puts livestock and poultry producers on the road to optimal water quality. The program analyzes and then compares water samples to water quality standards. The results can help nutritionists, producers and veterinarians identify areas of concern and review signs of potential toxicosis. 
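As a rough illustration of how a lab report might be screened, the sketch below flags analytes that exceed a guideline limit. Only the 0.3 ppm iron figure comes from the text above; the sulfate and nitrate limits here are hypothetical placeholders, not Zinpro or laboratory recommendations, and real interpretation belongs with a nutritionist or veterinarian.

```python
# Guideline limits in ppm. Only the iron value is taken from the text;
# the others are hypothetical placeholders for the example.
GUIDELINES_PPM = {
    "iron": 0.3,      # text: > 0.3 ppm can reduce palatability, raise oxidative stress
    "sulfate": 500.0,  # placeholder limit
    "nitrate": 100.0,  # placeholder limit
}

def flag_exceedances(results_ppm, guidelines=GUIDELINES_PPM):
    """Return the analytes (and values) that exceed their guideline limit."""
    return {analyte: value
            for analyte, value in results_ppm.items()
            if analyte in guidelines and value > guidelines[analyte]}

# A made-up water sample: iron and nitrate exceed their limits, sulfate does not.
sample = {"iron": 0.45, "sulfate": 120.0, "nitrate": 150.0}
flags = flag_exceedances(sample)
```

Keeping the limits in one table makes it easy to adjust them as standards change, and to run every pen's samples through the same check.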
Getting the Best Results A water test is only as good as the sample provided and the laboratory that conducts the test. When selecting a lab, choose one with experience testing water for livestock and dairy operations to ensure valid results. Below are additional steps for ensuring you get the most accurate results from your water test: - Use sterile, plastic bottles supplied by the testing laboratory - Return water samples to the lab within 24 hours of collection - Sample from the same water source the cows drink from - Collect samples from the stream running into the watering trough, not directly from the pool of water - Collect water samples from more than one pen, barn or water trough, in more than one location - Let water run for several minutes before beginning to collect the sample Once the analysis is complete, dairy producers can start to make decisions about correcting water quality problems. Common water treatments, depending on the problem, include disinfection, water softening, iron filtration and reverse osmosis. The Zinpro H20 Water Analysis Program recommends a follow-up analysis if the water contains any elements that approach or exceed the desired limits for livestock. Research shows that Zinpro Performance Minerals® are more metabolically available to animals in the presence of mineral antagonists in the water than other forms of trace minerals. For more information about the benefits of receiving a comprehensive water evaluation using the Zinpro H20 Water Analysis Program or about including Zinpro Performance Minerals as a part of your livestock nutrition program, contact a Zinpro representative today.
Between 1900 and the 1970s, twenty million southerners migrated north and west. Weaving together for the first time the histories of these black and white migrants, James Gregory traces their paths and experiences in a comprehensive new study that demonstrates how this regional diaspora reshaped America by "southernizing" communities and transforming important cultural and political institutions. Challenging the image of the migrants as helpless and poor, Gregory shows how both black and white southerners used their new surroundings to become agents of change. Combining personal stories with cultural, political, and demographic analysis, he argues that the migrants helped create both the modern civil rights movement and modern conservatism. They spurred changes in American religion, notably modern evangelical Protestantism, and in popular culture, including the development of blues, jazz, and country music. In a sweeping account that pioneers new understandings of the impact of mass migrations, Gregory recasts the history of twentieth-century America. He demonstrates that the southern diaspora was crucial to transformations in the relationship between American regions, in the politics of race and class, and in the roles of religion, the media, and culture.
When Parliament passed the Quebec Act of 1774, the boundaries of Canada were extended south to the Ohio River. Colonies that bordered the Ohio Country were alarmed because Americans were already expanding westward and settling in the area. After the battles of Lexington and Concord in April 1775, militia forces under the command of Ethan Allen captured Fort Ticonderoga and Crown Point on Lake Champlain, which helped give the Americans control of the Hudson River Valley. Despite this, the Continental Congress still viewed Canada as a threat. The Continental Congress organized the militia forces that were laying siege to Boston into the Continental Army and named George Washington as commander-in-chief. The Congress also decided that a takeover of Quebec was an important part of defeating the British in North America, and a plan was put in place to invade the province of Quebec and capture the cities of Montreal and Quebec. Facts About the Date and Location of the Siege - Fort St. John is located in Canada, in the Quebec Province, at the northern end of Lake Champlain. - It was built by the French, along with Fort Chambly, to help control the Richelieu River. Both forts were turned over to the British at the end of the French and Indian War. - The siege took place from September 17, 1775, to November 3, 1775. Facts About the Prelude to the Siege - On May 29, the Continental Congress sent a letter that urged the residents of Canada to join them in trying to overthrow the British. The Canadians did not join the fight for independence. - On June 13, Benedict Arnold sent a letter to Congress that argued the time was right for an invasion of Canada. Ethan Allen said the same thing when he spoke before Congress on June 23. - When the Governor of Canada, General Sir Guy Carleton, found out the Americans had taken Fort Ticonderoga, he expected an invasion. He responded by making preparations to counter the American attack and reinforced Fort St. 
John with 800 men under the command of Major Charles Preston. Two redoubts were also built at the fort. - On June 27, the Continental Congress decided to invade Canada. Congress created the Northern Department of the Continental Army and placed Major General Philip Schuyler in command. - Schuyler was ordered to go to Fort Ticonderoga and assess the situation. He was given permission to move into Canada, as long as the people of Canada did not object. Congress explicitly told him to take possession of Fort St. John, Montreal, and “any other parts of the country” that he felt he could. - Congress believed if it could drive the British out of Quebec it would not only unite North America together against Britain but also eliminate the threat of a British invasion of New England from the north. - The initial plan was for Schuyler to lead his army from Fort Ticonderoga over Lake Champlain and up the Richelieu River towards Montreal. Along the way, he would capture the forts that defended the river, which were Fort St. John and Fort Chambly. - On August 20, General Washington added a second expedition to the invasion. This one would be led by Benedict Arnold and would approach Quebec from the east in hopes of taking it by surprise. - The invasion of Canada was launched on August 25 under the command of Major General Schuyler and Brigadier General Richard Montgomery. Schuyler had roughly 2,000 troops. Montgomery was already at Fort Ticonderoga, having been transferred there on August 17. - Arnold’s expedition left for Maine on September 13. His expedition was made up of around 1,100 troops. - While he was waiting for Schuyler to arrive at Fort Ticonderoga, Montgomery found out the British were building ships so they could sail down the Richelieu River into Lake Champlain and attack Fort Ticonderoga and Crown Point. 
- Montgomery knew that if he waited for Schuyler it might be too late, so he planned to take some men and occupy Isle aux Noix, an island in the middle of the Richelieu River. He explained his actions in a letter that he sent to Schuyler, and then he and his men sailed for Isle aux Noix on August 26. Unfortunately, due to strong winds, Montgomery did not reach the island until September 5. Schuyler arrived there on the same day. Facts About Key Participants in the Siege - General Sir Guy Carleton, the Royal Governor of Canada. - Major Charles Preston - General Richard Montgomery Facts About Key Events of the Siege - On September 6, Montgomery led a small force that landed about a mile and a half from the fort. Schuyler wanted to find a position closer to the fort that was more advantageous. Montgomery marched his men through a marshy area, where they were ambushed from the left by Indians. Montgomery rallied the center and right of his force and fought the Indians off. Montgomery, not sure how large the enemy force was, retreated so that he was out of range of the British guns at the fort. - That night, Schuyler received intelligence that indicated the British forces were stronger than expected, and reinforcements were on the way. Schuyler called a council of war, where it was decided Montgomery and his men would retreat to Isle aux Noix. - Schuyler had his men build entrenchments on the island, which would allow them to attack any British ships that tried to travel south to Lake Champlain and attack Fort Ticonderoga and Crown Point. - Militia from Connecticut, led by David Wooster, arrived on September 8. New York militia, with artillery, arrived with them. - A second attempt at an assault on the fort occurred on the night of September 10. The American forces were divided into two groups, however, it was so dark that they ran into each other. Both parties thought the other was the enemy and retreated. 
Montgomery was able to get them organized and they marched on the fort. They came under heavy fire and the Americans fell back. - Montgomery called for a council of war where it was decided to try another attack on September 11. Unfortunately, it was called off when rumors spread that a British warship was headed towards them and some of Montgomery’s troops fled. Once again, Montgomery pulled back to Isle aux Noix. - Schuyler, who had health issues, wrote to the Continental Congress and informed them that his health was so bad he could barely hold his pen. He informed Congress he was going to return to Fort Ticonderoga and turn command over to Montgomery. - Montgomery assumed command of the American forces on September 16. - Montgomery made preparations to launch another attack. On the 16th, he sent some ships upriver to fight off any British ships they might come across. On the 17th, he sailed upriver with about 1,400 men and landed near the fort. - The Americans attacked Fort St. John on the 18th. Just like before, the British fought off the attack; however, they were unable to force the Americans to retreat. Montgomery ordered his men to entrench themselves around the fort, and the siege began. - Additional guns arrived from Fort Ticonderoga on September 21. Montgomery had his men bombard the east side of the fort. Unfortunately, the guns were too small to do much damage. His artillerymen were also inexperienced, so the guns had been placed too far away. - Montgomery called a council of war and suggested moving the guns to the north side of the fort, so they could be positioned closer to the walls. His plan was voted down because his subordinates were afraid the troops would feel like they were in more danger and would leave. Instead, they agreed to build a battery within striking distance of the British ship Royal Savage. 
- Montgomery needed more men, so he sent Ethan Allen and John Brown north and asked them to find Canadians sympathetic to the American cause and willing to help them fight. - Allen and Brown were able to recruit around 300 men but decided to attack Montreal, instead of returning to Fort St. John. On September 24, Allen led the attack on Montreal but was repulsed. The operation was a failure, and Allen and many of his men were captured by the British at the Battle of Montreal. For unknown reasons, Brown and his men never joined in the attack, so they were able to escape. - On October 14, the battery to launch the bombardment on the Royal Savage was completed. It opened fire on the ship and sank it in the river. - The stalemate at Fort St. John changed on October 18 when the Americans, led by James Livingston, captured Fort Chambly, which provided Montgomery with the gunpowder needed for the ground assault on Fort St. John. - After the capture of Fort Chambly, morale among the troops improved and Montgomery ordered artillery batteries to be constructed on the north side of the fort. - At Montreal, Governor Carleton saw that the situation was perilous and that the Americans might take the fort. On October 31, he led a force and tried to break through the siege lines at Longueuil. The regiment of the Green Mountain Boys, under the command of Seth Warner, was waiting for Carleton. As soon as Carleton’s force reached the river, Warner’s regiment opened fire with muskets and artillery and routed them. Carleton was forced to fall back to Montreal. - The artillery batteries on the north side of the fort were completed on November 1. The Americans bombarded the fort until dusk, when Montgomery sent a prisoner to the fort with a letter demanding that Preston surrender. - Although Carleton wanted Preston to hold out, Preston realized he would not be able to hold the fort. He surrendered it to Montgomery on November 2. 
Facts About Casualties of the Siege - British forces had 43 killed or wounded. - American forces had 11 killed or wounded, although many more died from disease. Facts About Result and Aftermath of the Siege - It was the first American victory of the Canada Campaign. - When the Americans took the fort, Carleton evacuated Montreal and took his forces to Quebec, which reinforced the city. - After Carleton evacuated Montreal, Montgomery took it without any opposition. The residents of Montreal surrendered it to him on November 13. - Although the Americans were able to take Fort St. John and Montreal, it took too long and allowed the British to fortify Quebec against the impending attack by Montgomery and Arnold. Timeline of the Siege of Fort St. John This timeline shows how the Siege of Fort St. John fits into the events of the Canada Campaign of 1775–1776. - May 10, 1775 — Capture of Fort Ticonderoga - June 27, 1775 — Continental Congress Authorized Invasion of Canada - September 5, 1775 — Skirmish at Isle Aux Noix - September 5, 1775 — Skirmish at Fort St. John - September 10, 1775 — Skirmish at Fort St. John - September 13, 1775 — Arnold’s Expedition to Quebec City Begins - September 17, 1775 — Siege of Fort St. John Begins - September 25, 1775 — Battle of Montreal (Longue-Pointe) - October 15, 1775 — Skirmish at Montreal - October 18, 1775 — First Battle of Fort Chambly - November 3, 1775 — Siege of Fort St. 
John Ends - November 13, 1775 — Americans Capture Montreal - November 14, 1775 — Arnold Expedition Arrives at Quebec City - November 15, 1775 — Skirmish at Plains of Abraham - November 19, 1775 — Naval Skirmish at Sorel - December 31, 1775 — Battle of Quebec - May 6, 1776 — Skirmish at the Plains of Abraham - May 15, 1776 — Battle of the Cedars - May 25, 1776 — Battle of Saint-Pierre - June 8, 1776 — Battle of Three Rivers - June 14, 1776 — Occupation of Sorel - June 16, 1776 — Second Battle of Chambly - June 24, 1776 — Skirmish at Isle Aux Noix - July 24, 1776 — Skirmish at Sorel River - October 11, 1776 — Battle of Valcour Island
You already know that diet is an important factor related to many health conditions, such as digestive tract disorders, heart disease, obesity, and cancer. You should also add dental health to that list, because even though we don’t often think about it, what you eat affects your teeth and gums a lot more than you would think! Sugar is one of the most common culprits leading to dental problems. A diet high in sugar encourages acid formation in the mouth, which then increases the risk of tooth decay. This is one of the primary reasons your dentist will advise you to avoid cookies, candy, soft drinks, and so on. In particular, sticky candies are the worst, because it can be difficult to brush away the residue they leave behind. However, added sugar lurks in more foods than you would suspect. In particular, packaged foods often include high amounts of sugar, so make sure to read labels carefully. Words like “fructose” and “sucrose” indicate sugar. Of course, avoiding packaged foods altogether is generally recommended for better health. Preparing fresh foods yourself gives you control over the ingredients, so you can be certain of what you’re actually eating. Sodas are bad for another reason. They often contain phosphoric acid and citric acid, which can erode the enamel on your teeth. Look for these ingredients in other foods, too. Alcohol dries out your mouth, lowering your resistance against gum infections, so limit your intake of alcoholic beverages. If any of your medications cause dry mouth as a side effect, talk to your doctor or dentist. You might need to take additional steps to prevent gum disease. 
We’ve talked a lot about foods you should avoid, or at least limit… Now we’ll steer you toward some good foods that will promote dental health: - Fruits and vegetables, for their fiber content - Dairy products, for their calcium and other minerals - Unsweetened green or black tea – their polyphenols kill or inhibit the bacteria in plaque - Sugarless chewing gum after meals, to promote saliva production - Plenty of water And, of course, remember that no matter what you eat, you still need to brush and floss your teeth twice per day. Visit your dentist for regular check-ups, every six months, and promptly report any concerns or symptoms in the meantime.
NATURE OF ACCOUNTING Accounting is the art of recording, classifying, and summarizing in a significant manner and in terms of money, transactions and events which are, in part at least, of a financial character, and interpreting the results thereof. The Nature of Accounting can be defined in two ways: - Quantitative Attributes of Accounting - Qualitative Attributes of Accounting QUALITATIVE ATTRIBUTES OF ACCOUNTING The fundamental nature of financial statements is to provide a true and fair view of the state of affairs and profit or loss for the period. Qualitative attributes simplify and expand on the financial figures to ensure easy understanding and comparability of results. The Qualitative Attributes that describe the Nature of Accounting are as follows: Reliability implies that the information must be factual and verifiable. Accounting information is said to have verifiability if such information can be verified from source documents such as cash memos, purchase invoices, sales invoices, correspondence, agreements, property deeds and other similar documents. In order to be relied upon, the financial information requires the following attributes: - Substance over form, i.e., accounting should be based on financial reality and not merely on legal form. Accounting information depicted by financial statements must be relevant to the objectives of the enterprise. Unnecessary and irrelevant information should not be included in financial statements. The INTERNATIONAL ACCOUNTING STANDARDS BOARD (IASB) says that information is relevant “when it influences the economic decisions of users by helping them evaluate past, present or future events or confirming or correcting their past evaluations.” The relevance of information is affected by its nature and materiality. If an item or event is material, it is probably relevant to the users of financial statements. 
For example: The information regarding the rate of dividend paid by a company in previous years is relevant information for investors, since it provides a basis for forecasting future dividends. Accounting information should be presented in such a simple and logical manner that it is easily understood by its users, such as investors, lenders, employees, etc. A person who does not have any knowledge of accounting terminology should also be able to understand it without much difficulty. This can be done by giving relevant explanatory notes to explain the information given in the financial statements. General topics which can be included in the explanatory notes are the method of depreciation, the method of valuation of inventory, description of contingent liabilities, explanation of reserves, disclosure of events occurring after the balance sheet date, etc. These explanatory notes make the financial statements more useful and understandable. Comparability is a very useful quality of accounting. The financial statements should contain the figures of the previous year along with the figures of the current year so that current performance can be compared with past performance. Similarly, the financial statements should be prepared in such a way that the profitability and financial position of the concern may be compared with other concerns of a similar type. Comparison reveals the strong and weak points of the business entity. Comparison is possible when the different firms in the same industry adopt the same accounting principles from year to year. For example: If the diminishing balance method of charging depreciation is selected, it should not be changed from year to year. Similarly, the method of valuation of stock should remain consistently the same from year to year. Accounting aims at preparing those financial statements that depict a true and fair view of the profitability, liquidity and solvency position of an enterprise. 
Application of appropriate Accounting Standards normally results in financial statements portraying a true and fair view of the information of an enterprise. QUANTITATIVE ATTRIBUTES OF ACCOUNTING The Quantitative Attributes explaining the Nature of Accounting are as follows: ACCOUNTING IS AN ART AS WELL AS A SCIENCE Accounting is an art of recording, classifying, summarizing, analyzing and interpreting the accounting records with a view to ascertaining the net profit/loss and financial position of the business. Accounting as a science is an organized body of knowledge that contains some underlying principles and rules that are followed while maintaining accounts. However, accounting is not a pure science, as it does not establish a cause-and-effect relationship. RECORDING OF FINANCIAL TRANSACTIONS ONLY Accounting records only those transactions and events that are expressed in monetary terms or in quantitative form. For instance, a transaction like the sale of goods for ₹5,000 will be recorded in the books of accounts. However, there are many events which are very important for the business but cannot be recorded in the books of accounts because they cannot be expressed in quantitative or monetary form. For example: loyalty of employees, resignation by an able and experienced manager, a strike by employees, a quarrel between employee and employer, etc. Yet these events have a large impact and direct bearing on the business of the firm. RECORDING IN TERMS OF MONEY Accounting records only those transactions which can be expressed in terms of money. It implies that a businessman will not record the purchase of 5 chairs and 5 tables; he will record the purchase of 5 chairs costing ₹2,500 and 5 tables costing ₹5,000. Also, the recording is done in the journal, which is the primary book for recording transactions in chronological order. 
In small business houses, the recording of transactions is generally done in the journal, whereas in big business houses the recording of transactions is done in subsidiary books such as: - Cash Book - Purchase Book - Sales Book - Purchase Return Book - Sales Return Book - Bills Receivable Book - Bills Payable Book - Journal Proper The number of subsidiary books to be maintained depends upon the nature, size, and requirements of the business. CLASSIFYING THE TRANSACTIONS One of the features of accounting is that it classifies all the transactions recorded in the journal. Classification refers to grouping transactions of the same nature at one place, in a separate account. Classification of transactions is done in the ‘Ledger’. All the accounts related to creditors, debtors, capital, assets, liabilities, incomes and expenses are separately opened in the Ledger Book. Example: Wages Account, Ram Account, Advertisement Account, Cash Account, Bank Overdraft Account, etc. SUMMARISING THE TRANSACTIONS Summarizing is the art of presenting the classified data in a manner which is understandable and useful to management and other users of such data. It involves: - Balancing of Ledger Accounts - Preparation of Trial Balance - Preparation of Trading and Profit & Loss A/c - Preparation of Balance Sheet The Trial Balance is a summary of all the ledger accounts and is maintained to check the arithmetical accuracy of the accounts. The Trading Account is prepared to find out the Gross Profit or Gross Loss, while the Profit & Loss Account helps in knowing the Net Profit or Net Loss. The Balance Sheet, prepared at the end of the accounting year, helps in knowing the financial position of the concern. It shows the profitability, solvency, and liquidity position of the business. Analyzing is concerned with the establishment of relationships between the various items or groups of items taken from the Income Statement or Balance Sheet, or both. 
The purpose of analysis is to identify the financial strengths and weaknesses of the enterprise. It provides the basis for interpretation. INTERPRETATION OF RESULTS Another feature of accounting is the interpretation of results. Interpretation of results is concerned with explaining the meaning and significance of the relationships so established by the analysis. Interpretation of results requires a high degree of knowledge and skill. The accountant should answer: - What has happened? - Why did it happen? - What is likely to happen under specified conditions? COMMUNICATING THE RESULTS Accounting provides the analyzed and interpreted results to its users, such as management, employees, creditors, research scholars, debtors, financial institutions, competitors, bankers, income tax authorities, etc. The results are communicated by preparing final accounts, ratios, graphs, diagrams, charts, the funds flow statement, the cash flow statement, etc.
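The recording–classifying–summarizing flow described above can be sketched in a few lines of code. This is a minimal illustration with made-up figures (the account names and amounts are hypothetical), not an accounting system:

```python
from collections import defaultdict

# Hypothetical journal entries, recorded in chronological order as
# (account, debit, credit) amounts in rupees.
journal = [
    ("Cash", 5000, 0),       # sale of goods for cash: debit Cash...
    ("Sales", 0, 5000),      # ...credit Sales
    ("Furniture", 7500, 0),  # 5 chairs (2,500) + 5 tables (5,000)
    ("Cash", 0, 7500),       # paid in cash
]

# Classifying: post each journal entry into its ledger account.
ledger = defaultdict(lambda: {"debit": 0, "credit": 0})
for account, debit, credit in journal:
    ledger[account]["debit"] += debit
    ledger[account]["credit"] += credit

# Summarizing: the trial balance checks arithmetical accuracy —
# under double-entry bookkeeping, total debits must equal total credits.
total_debits = sum(a["debit"] for a in ledger.values())
total_credits = sum(a["credit"] for a in ledger.values())
assert total_debits == total_credits  # 12,500 == 12,500
```

The same debit/credit balancing check is what the Trial Balance performs on the full set of ledger accounts before the final accounts are prepared.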
All faces of a regular octahedron are identical equilateral triangles. The edges of the octahedron ABCDEF have a length d = 6 cm. Calculate the surface area and volume of this octahedron. Related math problems and questions: - Triangular pyramid A regular tetrahedron is a triangular pyramid whose base and walls are identical equilateral triangles. Calculate the height of this body if the edge length is a = 8 cm - Equilateral) 7817 The base edge of a regular tetrahedral pyramid is a = 4 cm. Base and walls are equilateral. Calculate the surface of this pyramid. - Triangular prism The regular triangular prism, whose edges are identical, has a surface of 2514 cm². Find the volume of this body in cm³. - Rectangular 5798 The pyramid has a rectangular base with dimensions a = 5 cm, b = 6 cm. The side edges are identical; their length is h = 11 cm. Calculate the surface of the pyramid. - Octahedron - sum On each wall of a regular octahedron is written one of the numbers 1, 2, 3, 4, 5, 6, 7, and 8, wherein on different sides are different numbers. John makes the sum of the numbers written on three adjacent walls for each wall. Thus he got eight sums, which al - Triangular prism Calculate the volume and surface of the triangular prism ABCDEF with the base of an isosceles triangle. The base's height is 16 cm, leg 10 cm, base height vc = 6 cm. The prism height is 9 cm. - The plaster cast The plaster cast has the shape of a regular quadrilateral pyramid. The cover consists of four equilateral triangles with a 5 m side. Calculate its volume and surface area. 
- The cube The surface of the cube is 150 square centimeters. Calculate: a- the area of its walls b - the length of its edges c - its volume - Tetrahedral pyramid Calculate the volume and surface area of a regular tetrahedral pyramid, its height is $b cm, and the length of the edges of the base is 6 cm. - Triangular prism - regular The regular triangular prism is 7 cm high. Its base is an equilateral triangle whose height is 3 cm. Calculate the surface and volume of this prism. - Quadrilateral 5814 Calculate the surface area and volume of a regular quadrilateral truncated pyramid if the base edges are 87 cm and 64 cm and the wall height is 49 cm. - Triangular prism Calculate the surface of a regular triangular prism; the base's edges are 6 cm long, and the height of the prism is 15 cm. - Truncated pyramid The concrete pedestal in a regular quadrilateral truncated pyramid has a height of 12 cm; the pedestal edges have lengths of 2.4 and 1.6 dm. Calculate the surface of the base. - Top of the tower The top of the tower has the shape of a regular hexagonal pyramid. The base edge has a length of 1.2 m. The pyramid height is 1.6 m. How many square meters of sheet metal are needed to cover the top of the tower if 15% extra sheet metal is needed for join - Quadrilateral 7815 The area of the mantle of a regular quadrilateral pyramid is equal to twice the area of its base. Calculate the pyramid's volume if the base edge's length is 20 dm. - Axial section Calculate the volume and surface of a cone whose axial section is an equilateral triangle with side length a = 18cm. - Cube 1-2-3 Calculate the volume and surface area of the cube ABCDEFGH if: a) /AB/ = 4 cm b) perimeter of wall ABCD is 22 cm c) the sum of the lengths of all cube edges is 30 cm.
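As a quick check on the octahedron problem above (edge d = 6 cm), the standard formulas for a regular octahedron can be evaluated directly:

```python
import math

d = 6  # edge length in cm

# Surface: 8 equilateral triangular faces, each of area (sqrt(3)/4) * d^2,
# so S = 8 * (sqrt(3)/4) * d^2 = 2 * sqrt(3) * d^2.
surface = 2 * math.sqrt(3) * d ** 2

# Volume of a regular octahedron: V = (sqrt(2)/3) * d^3.
volume = (math.sqrt(2) / 3) * d ** 3

print(round(surface, 2))  # 124.71 (cm²)
print(round(volume, 2))   # 101.82 (cm³)
```

So the octahedron has a surface area of about 124.71 cm² and a volume of about 101.82 cm³.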
“Our lives begin to end the day we become silent about things that matter.” -Martin Luther King Jr. Martin Luther King Jr. was arguably the most influential figure in the civil rights movement in the US. He played a key role in ending the segregation of African Americans in the South and in the creation of the Civil Rights Act of 1964. MLK was heroic to his followers and helped unite people of all races, black and white. Thank you Mr. King for everything you did to shape the America I know and love today.
We all react to stress in different ways. A sudden loud noise or flash of light can elicit different degrees of response from people, which indicates that some of us are more susceptible to the impact of stress than others. Any event that causes stress is called a ‘stressor.’ Our bodies are equipped to handle acute exposure to stressors, but chronic exposure can result in mental disorders, e.g., anxiety and depression, and even physical changes, e.g., cardiovascular alterations as seen in hypertension or stroke. There has been significant effort to find a way to identify people who would be vulnerable to developing stress-related disorders. The problem is that most of that research has relied on self-reporting and subjective clinical rankings, or on exposing subjects to non-naturalistic environments. Employing wearables and other sensing technologies has made some headway in the elderly and in at-risk individuals, but given how different our lifestyles are, it has been hard to find objective markers of psychogenic disease. Approaching the problem with VR Now, behavioral scientists led by Carmen Sandi at EPFL’s School of Life Sciences have developed a virtual-reality (VR) method that measures a person’s susceptibility to psychogenic stressors. Building from previous animal studies, the new approach captures high-density locomotion information from a person while they explore two virtual environments in order to predict heart-rate variability when exposed to threatening or highly stressful situations. Heart-rate variability is emerging in the field as a strong indicator of vulnerability to physiological stress, and for developing psychopathologies and cardiovascular disorders. VR stress scenarios In the study, 135 participants were immersed in three different VR scenarios. In the first scenario, they explored an empty virtual room, starting from a small red step, facing one of the walls. 
The virtual room itself had the same dimensions as the real one that the participants were in, so that if they touched a virtual wall, they would actually feel it. After 90 seconds of exploration, the participants were told to return to the small red step they’d started from. The VR room would fade to black and then the second scenario would begin. In the second scenario, the participants found themselves on an elevated virtual alley several meters above the ground of a virtual city. They were then asked to explore the alley for 90 seconds, and then to return to the red step. Once they were on it, the step began to descend faster and faster until it reached ground level. Another fade, and then came the final scenario. In the third scenario, the participants were ‘placed’ in a completely dark room. Armed with nothing but a virtual flashlight, they were told to explore a darkened maze corridor, in which four human-like figures were placed in corner areas, while three sudden bursts of white noise came through the participants’ headphones every twenty seconds. Developing a predictive model The researchers measured the heart rates of the participants as they went through each VR scenario, collecting a large body of heart-rate variation data under controlled experimental conditions. Joao Rodrigues, a postdoc at EPFL and the study’s first author, then analyzed the locomotor data from the first two scenarios using machine-learning methods, and developed a model that can predict a person’s stress response—changes in heart rate variability—in the third threatening scenario. The team then tested the model and found that its predictions can work on different groups of participants. They also confirmed that the model can predict stress vulnerability to a different stressful challenge in which participants were put through a final VR test, where they had to quickly perform arithmetic exercises and see their score compared to others’. 
The idea here was to add a timed and social aspect to stress. In addition, when they gave wrong answers, parts of the virtual floor broke down while a distressing noise played. Finally, the researchers also confirmed that their model outperforms other stress-prediction tools, such as anxiety questionnaires. Carmen Sandi says: “The advantage of our study is that we have developed a model in which capturing behavioral parameters of how people explore two novel virtual environments is enough to predict how their heart rate variability would change if they were exposed to highly stressful situations; hence, eliminating the need of testing them in those highly stressful conditions.” Measuring stress vulnerability in the future The research offers a standardized tool for measuring vulnerability to stressors based on objective markers, and paves the way for the further development of such methods.
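The kind of model the study describes — behavioral features predicting a physiological stress response — can be illustrated with a toy example. The feature, the linear relationship, and the data below are all simulated for illustration only; the study's actual features and machine-learning method are not reproduced here:

```python
import random

random.seed(0)

# Hypothetical single locomotion feature (e.g. exploration speed) for 135
# participants; the study's real feature set is far richer than this.
n = 135
x = [random.gauss(0, 1) for _ in range(n)]
# Simulated target: change in heart-rate variability under threat, assumed
# here (purely for illustration) to depend linearly on the feature.
y = [0.5 * xi + random.gauss(0, 0.1) for xi in x]

# Ordinary least-squares fit, a simple stand-in for the study's ML model.
mx = sum(x) / n
my = sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)
intercept = my - slope * mx

# Predicted stress response for each participant.
pred = [intercept + slope * xi for xi in x]

def corr(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = corr(pred, y)  # agreement between predictions and simulated outcomes
```

The point of the sketch is the workflow, not the model: fit on behavioral measurements collected in a safe setting, then predict the physiological response without exposing people to the stressful condition itself.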
Overview of Blood and Blood Components What is blood? Blood is the life-maintaining fluid that circulates through the entire body. What is the function of blood? Blood carries the following to the body tissues: Blood carries the following away from the body tissues: What are the components of blood? The components of human blood are: Where are blood cells made? Blood cells are made in the bone marrow. The bone marrow is the spongy material in the center of the bones that makes all types of blood cells. There are other organs and systems in our bodies that help regulate blood cells. The lymph nodes, spleen, and liver help regulate the production, destruction, and function of cells. The production and development of new cells in the bone marrow is a process called hematopoiesis. Blood cells formed in the bone marrow start out as stem cells. A stem cell (or hematopoietic stem cell) is the first phase of all blood cells. As the stem cell matures, several distinct cells evolve. These include red blood cells, white blood cells, and platelets. Immature blood cells are also called blasts. Some blasts stay in the marrow to mature. Others travel to other parts of the body to develop into mature, functioning blood cells. What are the functions of blood cells? The main job of red blood cells, or erythrocytes, is to carry oxygen from the lungs to the body tissues, and to carry carbon dioxide, a waste product, away from the tissues and back to the lungs. Hemoglobin (Hgb) is an important protein in the red blood cells that carries oxygen from the lungs to all parts of our body. The main job of white blood cells, or leukocytes, is to fight infection. There are several types of white blood cells, and each has its own role in fighting bacterial, viral, fungal, and parasitic infections. 
Types of white blood cells that are most important for helping protect the body from infection and foreign cells include the following. White blood cells: - Help heal wounds, not only by fighting infection but also by ingesting matter such as dead cells, tissue debris, and old red blood cells. - Protect you from foreign bodies that enter the blood stream, such as allergens. - Are involved in the protection against mutated cells, such as cancer. The main job of platelets, or thrombocytes, is blood clotting. Platelets are much smaller in size than the other blood cells. They group together to form clumps, or a plug, in the hole of a vessel to stop bleeding. What is a complete blood cell count (CBC)? A CBC is a measurement of the size, number, and maturity of the different blood cells in a blood sample. A CBC can be used to find problems with either the production or destruction of blood cells. Variations from the normal number, size, or maturity of the blood cells can indicate an infection or disease process. Often with an infection, the number of white blood cells will be elevated. Many forms of cancer can affect the production of blood cells. For instance, an increase in immature white blood cells in a CBC can be associated with leukemia. Blood diseases, such as anemia and sickle cell disease, will cause an abnormally low hemoglobin. Common blood tests CBC, which includes: - White blood cell count (WBC) - Red blood cell count (RBC) - Hematocrit, or red blood cell volume (Hct) - Hemoglobin (Hgb) concentration, the oxygen-carrying pigment in red blood cells - Differential blood count Purpose: to aid in diagnosing anemia, other blood disorders, and certain cancers of the blood; to monitor blood loss and infection; to monitor response to cancer therapy, such as chemotherapy and radiation; and to diagnose and monitor bleeding and clotting disorders. 
Prothrombin time (PT) and partial thromboplastin time (PTT): to evaluate bleeding and clotting disorders and to monitor anticoagulation (anticlotting). Your healthcare provider will explain the purpose and results of any blood tests.
This side-by-side comparison of red and gray squirrels illustrates the obvious size difference between the two North American species. Both are frequent visitors to bird feeders, and often seen chasing one another. Unfortunately, in the UK, red squirrel populations appear to be declining, mainly because gray squirrels outcompete reds thanks to their ability to feed more efficiently in broadleaved woodlands. Other threats to red squirrels include a disease carried by gray squirrels that is fatal to reds, as well as road traffic. Grays can reproduce during times of stress, while reds cannot. Although grays are native to North America, they are considered an invasive species in the UK. According to the Forestry Commission, England, the red squirrel, native to Britain, is becoming extremely rare due to the introduction of the American grey squirrel. "There are estimated to be only 140,000 red squirrels left in Britain, with over 2.5 million grey squirrels. The Forestry Commission is working with partners in projects across Britain to develop a long-term conservation strategy that deters greys and encourages reds." If you are a backyard bird watcher in the United States, have you noticed a decline in the red squirrel population? Many years ago, they used to raise their young in our woods and entertain us daily with their feisty personalities. The above photo is the first red squirrel we've seen in our neck of the woods (New Hampshire) for several years. We spotted one dead on the road last spring, but rarely catch a glimpse of them. Let's hope we aren't faced with the same fate as the UK, and are one day working toward conserving this species.