New research from the University of Leeds and the University of Chicago reveals a model of the Sun's magnetic field. Sparked by a medium-sized (C-class) flare, a long magnetic filament burst out from the Sun, producing one of the best shows that SDO has seen (Aug. 31, 2012). Viewed in the 304 Angstrom wavelength of extreme ultraviolet light, the filament strand gets stretched outwards until it finally breaks and heads off to the left. Some of the particles from this eruption did hit Earth with a glancing blow on Sept. 3, generating some beautiful aurora. The video clip covers four hours of activity.

Researchers at the Universities of Leeds and Chicago have uncovered an important mechanism behind the generation of astrophysical magnetic fields such as that of the Sun. Scientists have known since the 18th Century that the Sun regularly oscillates between periods of high and low solar activity in an 11-year cycle, but have been unable to fully explain how this cycle is generated. In the 'Information Age', it has become increasingly important to understand the Sun's magnetic activity, as changes in its magnetic field are responsible for 'space weather' phenomena, including solar flares and coronal mass ejections. When this weather heads in the direction of Earth it can damage satellites, endanger astronauts on the International Space Station and cause power grid outages on the ground.

The research, published in the journal Nature, explains how the cyclical nature of these large-scale magnetic fields emerges, providing a solution to the mathematical equations governing fluids and electromagnetism for a large astrophysical body. The mechanism, known as a dynamo, builds on a solution to a reduced set of equations first proposed in the 1950s, which could explain the regular oscillation but which appeared to break down when applied to objects with high electrical conductivity. The mechanism takes into account the 'shear' effect of the mass movement of the ionized gas, known as plasma, which makes up the Sun. More importantly, it does so in the extreme parameter regime that is relevant to astrophysical bodies.

"Previously, dynamos for large, highly conducting bodies such as the Sun would be overwhelmed by small-scale fluctuations in the magnetic field. Here, we have demonstrated a new mechanism involving a shear flow, which served to damp these small-scale variations, revealing the dominant large-scale pattern," said Professor Steve Tobias, from the University of Leeds' School of Mathematics, a co-author of the research.

What is more, this mechanism could be used to describe other large, spinning astronomical bodies with large-scale magnetic fields, such as galaxies. The dynamo was developed through simulations using the high-performance computing facilities located at the University of Leeds. "The fact that it took 50 years and huge supercomputers shows how complicated the dynamo process really is," said Prof Fausto Cattaneo, from the University of Chicago's Department of Astronomy and Astrophysics.

The presence of spots on the Sun has been known since antiquity, and they were studied in detail after Galileo turned the telescope on the Sun in the early 17th Century. However, their cyclic nature, with periods of high activity (lots of sunspots) and low activity (few sunspots) following each other, was not identified until the 18th Century. At the start of the 20th Century it was then recognized that these sunspots were the result of the Sun's magnetic field.
Since then much effort has been devoted to understanding what processes lead to the formation of sunspots and the origin of their cyclic behavior. "Shear-driven dynamo waves at high magnetic Reynolds number," by S.M. Tobias and F. Cattaneo, was published in the journal Nature on 23 May 2013. This work was partially supported by the Science and Technology Facilities Council (STFC) and by the National Science Foundation-sponsored Center for Magnetic Self-Organisation at the University of Chicago.

Publication: S. M. Tobias & F. Cattaneo, "Shear-driven dynamo waves at high magnetic Reynolds number," Nature 497, 463–465 (23 May 2013); doi:10.1038/nature12177

Image: NASA Solar Dynamics Observatory
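For readers unfamiliar with the quantity in the paper's title, the magnetic Reynolds number is the standard dimensionless measure of how strongly a flow stretches and amplifies magnetic field relative to how quickly the field diffuses away. The definition below is textbook background rather than something stated in the press release:

\[ \mathrm{Rm} = \frac{U L}{\eta} \]

Here U is a characteristic flow speed, L a characteristic length scale, and η the magnetic diffusivity of the plasma. Highly conducting astrophysical bodies such as the Sun have very small η and therefore very large Rm, which is precisely the regime in which earlier dynamo solutions were swamped by small-scale fluctuations.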
The Era of Good Feeling

"The Era of Good Feeling" was a time of increased nationalism and prosperity for the nation. This, of course, is not completely true; debates over many important issues created cracks in the outward appearance of harmony during President Monroe's two terms. These issues included sectionalism, a foreign policy of isolationism, and the rights of the states versus the rights of the federal government.

During Monroe's two terms, sectionalism, an excessive regard for sectional or local interests, increased greatly. This increase in sectionalism is due to acts like the Tariff of 1816. A telltale sign that the Tariff of 1816 was going to cause sectionalism was that in the U.S. House of Representatives, the bill was passed by representatives in every section of the country except for the South. (In the South, "23 votes in favor, 34 against".) The Tariff of 1816 was a protective tariff made to protect manufacturers from foreign competition. This protective tariff, however, only helped the North, because essentially all of the United States' manufacturing was being done in the Northeast. Since this protective tariff drove up the prices of foreign goods, the South wasn't able to trade cash crops for the manufactured goods of Europe at the same low prices that it had in the past. This caused great tension between the two sections of the country because the South viewed the North as the only ones being helped by the national government.

Another issue that caused sectionalism was the debate over slavery. The authors of the Constitution believed that slavery would eventually die out with the abolition of the slave trade in 1808. This couldn't be farther from what really happened. With the invention of the cotton gin, cotton production became a staple part of the American economy, and with this mass production of cotton came an increased need for slave labor. Debates over slavery and whether it should be legal would cause...
De-Extinction Could Bring Back 24 Different Species: Resurrecting the Woolly Mammoth

Ever wondered what a living dodo would look like? How about a woolly mammoth or the quagga? These species are just some of the ones that could be resurrected if scientists have their way. Last week, in a conference hosted by National Geographic and TEDx, scientists and conservationists discussed the possibility of creating these animals from ancient DNA. Surprisingly, many supported the idea.

Before you think that this will be a real-life rendition of "Jurassic Park," think again. Dinosaur DNA is too old to yield any viable samples. DNA degrades over time, which means that there's just not enough left to work with in order to reconstruct an entire organism. Instead, scientists have to limit themselves to more recently extinct species--only extinct for thousands of years rather than millions of years. De-extinction, as it's being called, could happen for about 24 different animals.

At the conference, scientists discussed the ethics of bringing back these species, and whether or not doing so would be desirable. The main factors that the conference-goers took into account were whether the species had an important ecological function, whether it was beloved by humans, whether it was a practical choice, and whether there would be access to tissue with good-quality DNA or germ cells in order to reproduce the species. In addition, they also assessed whether these species would be able to be reintroduced into the world, and what caused them to go extinct in the first place.

Despite the support for de-extinction, though, there are a few concerns with the process. De-extinction would require a surrogate mother for the recreated species--one that's related closely enough that the offspring is viable. In the case of the woolly mammoth, for example, an African elephant might be used as a surrogate mother. A woolly mammoth embryo would be implanted into the mother, and then the mother would presumably give birth to the lost species.

Yet would it really be a woolly mammoth? It may look like one, but certain behaviors might be different. The mammoth would essentially be raised by elephants, which could mean that it acts like an elephant rather than a mammoth. In an article in The Guardian, a molecular biologist suggested there was a way to test this possible issue: use a black rat as the "extinct" DNA donor and then use its genetic cousin, the brown rat, as the surrogate mother. If the created black rat doesn't look and behave like a black rat and instead behaves like a brown rat, scientists may need to rethink the process.

In addition to behavioral issues, the process would be extremely expensive. Creating enough individuals to make a viable population would be a difficult and time-consuming process that requires quite a bit of funding. That said, bringing back species such as the passenger pigeon and the Tasmanian tiger is a tantalizing possibility. Some of the species could possibly alter ecosystems drastically--and for the better. When wolves were reintroduced into Yellowstone, they caused a trophic cascade: aspens reappeared along rivers, beavers returned and built dams, and beaver ponds supported all kinds of new life.

It's not likely that de-extinction will occur any time soon, though. The process for resurrecting these species is still being worked on, and it could be years before any such process occurs.
In the 1500s French military surgeon Ambroise Paré rediscovered the use of ligatures, using a thread-like or wire material to constrict a patient's blood vessels. This surgical technique, which stops the flow of blood from a severed vein, greatly reduces the patient's chances of losing too much blood. The introduction of the tourniquet in 1674 further advanced the practice of surgical amputation, as the blood flow could be severely restricted and the patient would not die as a result of blood loss. Pain and infection were other complications which impacted on the success of amputations. The last war to see amputation on a vast scale was the First World War, in which 42,000 amputations were carried out. Amputation is still performed to save lives, but it is not practised in anything like the numbers it was in earlier centuries and during the First World War.

Bibliography: R. Porter, The Greatest Benefit to Mankind (London: HarperCollins, 1997)

Ligature: A thread or string for tying the blood vessels, particularly the arteries, to prevent bleeding. The word 'ligature' can also refer to the action or result of binding or tying, e.g. the ligature of an artery.

Tourniquet: An apparatus designed for the compression of the vessels of a limb. A loosely applied tourniquet can reduce venous blood flow out of a limb. A tightly applied tourniquet can lessen arterial blood flow into it.
Speech and language are two very different communication skills that are commonly confused. Speech production is a motor function that involves movement of the muscles of the face and mouth to produce clear sounding speech. In contrast, language is a cognitive process that relates to meaning, not sounds. The ability to understand and use language is necessary for a child to successfully convey or interpret a message that holds meaning. A person’s language skills are assessed by a qualified Speech Pathologist. What is expressive and receptive language? A person may present with differing expressive and receptive language abilities. Comprehensive language assessment performed by a qualified Speech Pathologist can pinpoint the areas of strength and difficulty in your child’s language development. Expressive language describes how a child uses words, sentences and language to express themselves in a clear way. Expressive language allows a child to clearly and effectively communicate about their individual thoughts, feelings, ideas and needs. Receptive language describes how a child understands the words, sentences and language used by others. Receptive language allows a child to interpret and respond appropriately to communication experiences that they encounter in their day to day lives. Why is language important? Language development is so much more than just learning how to talk. Language allows your child to understand and express their thoughts and feelings. Language forms the basis for thinking and behaviour as your child learns to use their internal dialogue, or ‘self talk’, to successfully understand and respond to their experiences. Understanding and managing their own anger and emotions, using predicting and problem solving skills to make good behaviour choices, and using language to influence how other people think and feel in social situations are all examples of how language supports your child to interact with the world around them. Learning to understand, use and enjoy language is also the critical first step in literacy, and the basis for learning to read and write. As your child progresses through school, the oral language demands of the classroom steadily increase. In the high school curriculum, children learn new information and subject content through listening to large chunks of spoken information. During high school, adolescents learn to be independent learners as they prepare for university and vocational training. Conversational language becomes the foundation of friendships as children move away from playing sports and games during lunch breaks, and spend more and more time sitting and chatting with their peers. How does language develop? Expressive and receptive language development occurs most rapidly from birth to 5 years. In order to learn how to talk, children must have opportunities to regularly hear and practice language. Creating a language rich environment through conversation, books, stories, play, songs and nursery rhymes is the best way to support your child’s language development. While all children develop differently, it is important to understand the milestones of language development so that language delays can be identified and treated early. Early intervention is recognised as the most effective way to treat and prevent language disorders in later childhood. How is language assessed? Language assessment is performed by a qualified Speech Pathologist using standardised language assessment tools. 
Language assessment takes between 90 and 120 minutes to administer and may be conducted over two or three assessment sessions. Language assessment results can provide information about how your child performs in the following areas:
- Expressive language
- Receptive language
- Sentence structure
- Following directions
- Verbal memory
Language assessment provides an in-depth, comprehensive picture of a child's language strengths and difficulties. Knowing how a child performs in particular areas allows the Speech Pathologist to pinpoint which areas are impacting most on their ability to communicate successfully in everyday life and academic settings. Learn & Grow Speech Pathologists believe that understanding a child's strengths is as important as understanding their areas of difficulty.

What does language therapy involve? If your child has identified language difficulties that require therapy, the Learn & Grow Speech Pathologist will work with you to develop an individualised treatment plan. Learn & Grow therapists believe in therapy that is achievable and meaningful for a child and their family. The Speech Pathologist will work collaboratively with you and your child to decide which goals, activities and outcomes will be the most meaningful and motivating. Classroom adjustments and teaching strategies are often an important part of successful language intervention. Learn & Grow Speech Pathologists value collaboration with other professionals and will provide assessment feedback directly to educational support staff when requested. The frequency, duration and intensity of language intervention will be determined by the nature and severity of your child's difficulties. Home practice programs are included in all Learn & Grow therapy fees and will be reviewed and updated during each therapy session. School and educational support staff are often included in the intervention planning, delivery and review process. School therapy visits, school-based language programs and educational support staff training can be included in your child's intervention plan.

How do you make therapy fun? Learn & Grow Speech Pathologists believe that therapy should be fun, motivating and achievable for clients and their families. A huge selection of games, toys, and activities that suit all ages and interests are used to keep kids excited about coming to their next Speech Pathology appointment. Goals for sessions are constantly reviewed and adjusted to ensure kids experience success and continue to build confidence with their communication abilities.
A reusable organic liquid that can pull harmful gases such as carbon dioxide or sulphur dioxide out of industrial emissions from power plants has been developed by US researchers. The process, developed at the US Department of Energy's Pacific Northwest National Laboratory (PNNL), could directly replace current methods. The technology, which can be retrofitted to power plants, could capture double the amount of harmful gases in a way that uses no water and less energy, and saves money.

Harmful gases such as carbon dioxide or sulphur dioxide are called 'acid gases'. The scrubbing process uses acid gas-binding organic liquids that contain no water and appear similar to oily compounds. These liquids capture the acid gases at near room temperature. Scientists then heat the liquid to recover and dispose of the acid gases properly. It is claimed these recyclable liquids require much less energy to heat but can hold two times more harmful gases by weight than the current leading liquid absorbent used in power plants. That absorbent is a combination of water and monoethanolamine – a basic organic molecule that grabs the carbon dioxide.

'Current methods used to capture and release carbon dioxide emissions from power plants use a lot of energy because they pump and heat an excess of water during the process,' said David Heldebrant, PNNL's lead research scientist for the project. He added that the monoethanolamine component is too corrosive to be used without the excess water.

In PNNL's process, called 'Reversible Acid Gas Capture', the molecules that grab onto the acid gases are already in liquid form and do not contain water. The acid gas-binding organic liquids require less heat than water does to release the captured gases. Heldebrant and his colleagues demonstrated the process in previous work with a carbon-dioxide-binding organic liquid called CO2BOL. In this process, scientists mix the CO2BOL solution into a holding tank with emissions that contain carbon dioxide. The CO2BOL chemically binds with the carbon dioxide to form a liquid salt solution. In another tank, scientists reheat the salt solution to strip out the carbon dioxide. Non-hazardous gases such as nitrogen would not be captured and are released back into the atmosphere. The toxic compounds are captured separately for storage. At that point, the CO2BOL solution is back in its original state and ready for reuse.

Heldebrant and his colleagues have also developed organic liquid systems that bind sulphur dioxide, carbonyl sulphide and carbon disulphide, which are acid gases that are also found in emissions.
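For context on why heating releases the captured gas, the conventional water-based scrubbing chemistry the article compares against is the reversible carbamate reaction of monoethanolamine (MEA) with carbon dioxide. This is standard textbook chemistry rather than a detail given in the article:

CO2 + 2 HOCH2CH2NH2 (MEA) ⇌ HOCH2CH2NHCOO− (carbamate) + HOCH2CH2NH3+

The forward reaction dominates near room temperature, capturing CO2; heating the solution drives the reaction back to the left, releasing a concentrated CO2 stream and regenerating the amine. CO2BOLs follow the same capture-and-release logic, but without the large volume of water that must be heated along with the amine.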
2 OBJECTIVES
- Define economic growth.
- Analyze measures of economic growth.
- Examine GDP per capita.
- Analyze how GDP is related to a country's standard of living.

3 Economic Growth
- Process by which a nation's wealth increases over time.
- Rate of economic growth affected by: natural resources, human resources/capital, capital resources, technological development (makes workers more productive), and trade.

4 Labor Productivity
- Human capital – skills, education, or training that makes workers more productive, such as technology.
- Most important determinant of long-run economic growth.
- Measured by nominal GDP per worker.

5 Measure Economic Growth
- Gross Domestic Product (GDP)
- National Income per Capita
- Consumption per Capita

6 Gross Domestic Product (GDP)
- Real rate of growth in a country's total output of goods and services produced in a given year.
- Single best measure of the economic well-being of a society.
- Largest category of spending measured – consumer spending.
- Calculated: Price x Quantity

7 Calculating GDP
- Price x Quantity; only count final goods, so there is no double counting.
- Example: In 2005, Country X produced 10 computers at $800. In 2008, Country X produced 14 computers at $900.
- Real GDP (at 2005 prices): 2005: 10 x $800 = $8,000; 2008: 14 x $800 = $11,200.
- Growth rate in real GDP: (11,200 – 8,000) / 8,000 x 100 = 40%.

8 Types of GDP
- Nominal GDP (Current Dollar GDP): uses the current year's prices for goods and services.
- Real GDP (Constant Dollar GDP): uses a base year's prices – adjusted for price changes over time (i.e., inflation or deflation); used to compare the growth of output of a country or countries over time. PRIMARY MEASURE OF ECONOMIC PERFORMANCE OVER TIME.

9 Inflation vs. Deflation
- Inflation – upward price movement of goods and services in an economy.
- Caused by: rise in production costs, excess printed money in circulation, national debt and international lending.
- Impact on consumers: standard of living decreases.
- Difference between inflation and normal price increases: normal price increases are caused by the natural law of supply and demand; inflation is an increase in prices due to more money moving into the system.

10 Inflation vs. Deflation
- Inflation – upward price movement of goods and services in an economy; real GDP is less than nominal GDP.
- Disinflation – decrease in the rate of inflation.
- Unanticipated inflation – benefits borrowers, harms lenders.
- Real interest rate – nominal interest rate minus the rate of inflation.

11 Inflation vs. Deflation Con't
- Deflation – downward price movement of goods and services in an economy.
- Caused by: drop in demand, increase in supply of goods, and decrease in money supply.
- Impact on consumers: spend less, credit harder to come by, can lead to recession.
- Recessions – usually a short-run economic issue.

12 Measure Inflation
- Consumer Price Index (CPI) – weighted average of price changes in consumer goods and services, weighted by the number of units of each good the average household consumes.
- Current CPI – 3.9%
- Calculate the rate of inflation over time using the CPI: (CPI in the later period – CPI in the earlier period) ÷ CPI in the earlier period x 100. The slide's May 2010 – May 2011 example gives 1.14%.

13 Measure Inflation Con't
- Producer Price Indexes (PPI) – measure of price changes from the perspective of the seller; a leading indicator of consumer spending.
- Current PPI – +0.8%

15 Business Cycle
- Describes short-run GDP fluctuations in overall economic activity.
- Contraction – when the economy starts slowing down.
- Trough – when the economy hits bottom, usually in a recession.
- Expansion – when the economy starts growing again.
- Peak – when the economy is in a state of "irrational exuberance."
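A minimal Python sketch of the real-GDP growth calculation worked through in slide 7 above, using the slide's own figures and base-year (2005) prices:

base_year_price = 800                      # 2005 price per computer (base year)
quantities = {2005: 10, 2008: 14}          # computers produced each year

# Real GDP values each year's output at base-year prices
real_gdp = {year: qty * base_year_price for year, qty in quantities.items()}

growth_rate = (real_gdp[2008] - real_gdp[2005]) / real_gdp[2005] * 100
print(real_gdp)                            # {2005: 8000, 2008: 11200}
print(f"Real GDP growth: {growth_rate:.0f}%")   # Real GDP growth: 40%

Nominal GDP for 2008 would instead use the 2008 price of $900 (14 x $900 = $12,600), which is why real GDP is the preferred measure of growth over time.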
17 Unemployment
- Definition: a person does not have a job but is looking for one.
- Natural Rate of Unemployment – the rate that occurs when resources are fully employed.
- Current US Unemployment Rate – 9.1%
- Frictional Unemployment – due to time spent looking for a job.
- Cyclical Unemployment – when unemployment rises during a recession.

18 Standard of Living
- Measure of the goods and services available to each person in a country – a measure of economic well-being.

20 GDP per Capita
- GDP divided by the total population of a country.
- An increase in GDP per capita means the standard of living has increased.
- Why would GDP per capita provide more information about a country's standard of living than total GDP? Consider China.

21 World's Richest Countries (Source: International Monetary Fund, 2011)

22 World's Poorest Countries (Source: International Monetary Fund, 2011)

23 Food for Thought
- Why is there such a disparity between wealth and poverty among some countries?
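A small illustrative sketch of the point raised in slide 20: total GDP alone can mask living standards. The figures below are hypothetical round numbers chosen for illustration, not data from the slides or the IMF:

# Hypothetical figures: a large-population economy vs. a small one.
countries = {
    "Country A": {"gdp": 5_000_000_000_000, "population": 1_300_000_000},
    "Country B": {"gdp": 500_000_000_000, "population": 8_000_000},
}

for name, data in countries.items():
    per_capita = data["gdp"] / data["population"]
    print(f"{name}: total GDP ${data['gdp']:,}, GDP per capita ${per_capita:,.0f}")

# Country A has ten times the total GDP, but its GDP per capita
# (about $3,846) is far below Country B's (about $62,500).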
- Free printable cursive writing worksheets - cursive alphabet, cursive letters, cursive words, cursive sentences. Practice your penmanship with these handwriting worksheets from K5 Learning.
- Cursive sentences - these cursive writing worksheets focus on writing full sentences using cursive writing. Also available are worksheets focused on writing individual cursive words and...
- Reading & math at www.k5learning.com. Cursive writing worksheets: sentences - write the sentences. K5 Learning is an online reading & math program for K-5. 14 day free...
- Worksheet: cursive writing worksheets, sentences; mikyu free worksheet; free printable cursive handwriting worksheets for 3rd grade; countries of Africa cursive practice; printable cursive...
- Cursive writing chart example download; handwriting practice lesson; combination letters; lots of worksheets; write factual sentences; sentence free K-12 education; read, trace, PDF; Pennsylvania pre...
- Speaking cursive at almost 5 & 6 & you know 🤗. Find this pin and more on worksheets for kids by luckyturist. Free printable sheets - lots of topics. Free printables for kids. Practice cursive...
- We have 32 great and free cursive writing worksheets for you to choose from. We have one for each letter, some traceable sentences worksheets and more. Help kids learn to write in...
- Teaching how to form continuous cursive letters, using animations and free handwriting worksheets; the best order of teaching letters and teaching letter groups; handwriting continuous...
- Kids learn how to write cursive with this app, which helps you to write sentences. Tracing completely out of line; I tried writing on a letter and the text appeared elsewhere on screen. Good app.
- In this worksheet kids can practice cursive writing: kids need to trace the given letter and also write the given letter in the appropriate place. Download.
- Cursive handwriting practice (worksheet 5): carefully and neatly copy the following passage. The space shuttle is NASA's space transportation system.
- Some simple booklets I put together for a mixed KS3/4 class of SLD/MLD/ASD students to practice cursive writing and printing. There are 4 similar booklets, two cursive with guide lines, (one...
- Free cursive handwriting worksheets: in this pack, you'll find two levels of handwriting practice. Our school system is not teaching cursive writing since adopting Common Core standards.
- See 13 best images of cursive writing worksheets, sentences. Inspiring cursive writing worksheets sentences worksheet images: cursive handwriting sentences worksheets, practice cursive writing.
- A quality educational site offering 5000+ free printable theme units, word puzzles, writing forms, book report forms, math, ideas, lessons and much more. Great for new teachers, student...
- Find and save ideas about cursive writing practice sheets on Pinterest. See more ideas about free printable handwriting worksheets, cursive writing practice worksheets, sentences...
- Cursive handwriting practice: practice writing words in standard cursive. This page allows you to create a worksheet of text for cursive writing practice. Enter the text you want to be on the...
- These cursive practice sheets are perfect for teaching kids to form cursive letters, extra practice for kids who have messy handwriting, handwriting learning centers, practicing difficult...
- Printable PDF cursive writing worksheets: a carefully crafted set of beginning cursive worksheets in 3 printable PDF packets for each of 3 different cursive handwriting fonts, all for free with...
- Manuscript (print) and cursive worksheets. Any purchase you make on amazon.com after clicking the link below gives help & encouragement to the author of all these handmade worksheets.
- Handwriting for kids: free handwriting lessons to teach kids and adults how to write alphabets, numbers, sentences, Bible school, scriptures, and even their name; interactive math such as...
- There are different types of cursive writing worksheets available for use. They are as follows: how to create the cursive writing template; there are a number of websites that allow...
- Cursive writing workbook for grades 1-3: this cursive writing workbook is a compilation of all of our cursive writing worksheets, suitable for grades 1-3. These worksheets are available free...
- Showing top 8 worksheets in the category - cursive handwriting sentences. Once you find your worksheet: free cursive writing worksheet; open in new window - print; can't see worksheet...
- Worksheet: writing practice for kids; cursive writing sentences worksheets; kindergarten writing worksheets PDF; cursive worksheets PDF; cursive letters practice; cursive letter worksheets.
- Print these cursive handwriting worksheets to use in the classroom or home for extra handwriting. When children are learning to write letters, it can be helpful for them to learn the...
- Cursive alphabet, lower-case letters: [rows of ruled practice lines for the letters a through z].
- Cursive alphabet worksheets: practice handwriting; cursive handwriting letter worksheet screenshot from K5 Learning; cursive writing worksheet; cursive handwriting worksheet on handwriting.
- What different types of cursive writing you would come across: usually the cursive writing templates are developed or designed according to the categorization of this type of handwriting.
- Free printable cursive practice sheets: practice writing cursive letters, sentences, and paragraphs with free cursive worksheets. Once your student has learned to write cursive using the...
- Kidzone grade 3 and up cursive writing worksheets [introduction] [printable worksheets]. Age rating: all children develop as individuals; parents and caregivers should use the age ratings...
Antarctica is the least explored continent on our planet Earth, largely due to today's massive ice cover on the continent, reaching a thickness of 4500 m in places and leaving only 0.3% of the land area uncovered. This ice sheet, however, was not always in place, and its inception ∼34 m.y. ago at the Eocene−Oligocene boundary marked one of the most fundamental climate transitions in recent Earth history: the transition from the greenhouse world of the Cretaceous and early Cenozoic to the icehouse world we are currently living in (e.g., Zachos et al., 2008). The paper by Scher et al. (2011, p. 383 in this issue of Geology) provides a detailed record of pulses in Antarctic continental weathering through this glacial onset.

Global climate in the early Cenozoic seems to have been characterized by low latitudinal temperature gradients and subtropical temperatures at high latitudes (e.g., Bijl et al., 2009). There was no or only very little ice on the poles, and atmospheric CO2 levels were probably well in excess of 1000 ppm. State-of-the-art climate models and palaeoclimatic proxy data suggest that the main triggering mechanism for initial inception and development of the Antarctic ice sheet was the drop of atmospheric CO2 concentrations below a critical threshold (∼750 ppm; DeConto et al., 2008). While it remains a topic of debate whether the tectonic configuration of Southern Ocean gateways influences the sensitivity of Antarctic temperatures to atmospheric CO2 concentrations (e.g., Sijp et al., 2009), changes in silicate weathering are arguably the most important mechanisms for long-term drawdown of atmospheric CO2 (e.g., Kent and Muttoni, 2008).

The Eocene−Oligocene transition, however, reveals a very rapid response of the climate system to initial cooling and ice buildup in Antarctica. The marine geological record documents a two-step increase in deep-sea benthic foraminiferal oxygen isotopes in less than 300 k.y. (Coxall et al., 2005), marked surface and deep ocean cooling of 4–5 °C at high latitudes (Liu et al., 2009), deposition of the first ice-rafted debris layers and a switch from chemically to physically weathered clay minerals in Southern Ocean sediments (e.g., Barker et al., 2007), pronounced deepening of the carbonate compensation depth in the Pacific Ocean (Coxall et al., 2005), increased productivity in the Southern Ocean, and replacement of carbonate-rich facies on passive margins by siliciclastics (see the discussion in Merico et al., 2008). Taken together, these observations are indicative of the close interrelationships between Earth's cryosphere, ocean chemistry, and the carbon cycle.

Scher et al. take an innovative approach to investigate this interplay across the Eocene−Oligocene transition. First, they produced a seawater Nd isotope record, extracted from fossil fish teeth, to trace the flux of continental weathering−derived Nd to the deep waters of the Prydz Bay region of the Southern Ocean. They then combined this with a set of oxygen isotopes from deep-sea benthic foraminifera, which track both the volume of continental ice sheets and deep ocean cooling. Their results show a stunning two-stepped Nd isotope excursion that correlates very well with the global deep-sea oxygen isotope record, and slightly predates the arrival of the first ice-rafted debris, a direct proxy for continental-scale glaciation. Scher et al.
suggest that the seawater Nd isotope record can be interpreted as two surges of weathering, generated by Antarctic ice growth—a novel idea that requires some further explanation. Seawater chemistry (at any point back in time) principally depends on the flux of solutes from the continents, which in turn depends on rates of physical denudation and chemical weathering. Weathering of silicate rocks not only acts as a long-term sink for atmospheric CO2, but also strongly influences global biogeochemical cycles by determining continental runoff. While chemical weathering is vital to the flux of nutrients to the ocean (e.g., Raiswell et al., 2006), this flux is also strongly coupled to the availability of fresh mineral surfaces with high reactivity. Global field studies, encompassing a variety of climate zones and erosional regimes, show a tight coupling between the supply of fresh material and chemical weathering rates (e.g., Millot et al., 2002). High weathering rates are not necessarily linked to tropical areas, as often assumed. In contrast, temperate glaciers that have water available at their bed to facilitate basal sliding and physical erosion are ‘mineral surface factories,’ and studies on mountain glaciers reveal some of the highest mechanical and chemical denudation rates (for a summary, see Anderson, 2007). Glacial grinding does not change the bulk mineralogy of source rocks, but exposes accessory phases and makes them accessible to chemical weathering. Among others, the radiogenic isotope systems of U/Th-Pb and Lu-Hf can monitor such changes in the style of weathering, as the parent/daughter ratios show significant variations between different mineral phases. As a consequence, solute continental runoff for these isotope systems can deviate significantly from the bulk rock signature (e.g., Harlavan et al., 1998). Seawater Pb and Hf isotope records have been used previously for studies of changes in the style of weathering in the Northern Hemisphere during the late Quaternary (e.g., van de Flierdt et al., 2002, Foster and Vance, 2006), and would also be ideally suited to provide insights into the dynamics of Antarctic continental weathering across the Eocene−Oligocene transition. But what about Nd isotopes? While a number of studies indicate that rare earth element mobility during weathering could lead to small Sm/Nd fractionation (measurable Nd isotope effects in weathered glacial tills, boreal river water, and sediment leachates; von Blanckenburg and Nägler, 2001, and references therein), it seems unlikely that this effect is large enough to have an impact on seawater budgets. Overall Nd isotopes are probably not significantly fractionated during weathering, and seawater records still reflect the isotopic fingerprint of the continental source area that has been eroded. This notion is supported by similar proportions of Sm and Nd being incorporated in most common rock-forming minerals (see the compilation in Bayon et al., 2006), and is also in agreement with the observation that dissolved and suspended loads of rivers generally exhibit similar Nd isotopic compositions (e.g., Goldstein and Jacobsen, 1987). Hence, there are strong indications that dissolved Nd isotopes in seawater monitor the flux of weathered Nd from the continents, rather than the actual style of weathering. Consequently, Scher et al. 
interpret the observed two-stepped seawater Nd isotope excursion across the Eocene−Oligocene boundary as a two-stepped change in Antarctic weathering flux, reflecting increased continental runoff created by the interplay of physical and chemical denudation. This interpretation adds to the evidence that the latest Eocene paleoenvironment in Antarctica was characterized by small isolated mountain glaciation, and a fluvial erosion pattern not too dissimilar to today's ice drainage pattern (Jamieson and Sugden, 2008). At the end of the Eocene, a permanent transition took place from an environment dominated by chemical weathering to one dominated by physical weathering featuring an ice-covered Antarctic continent (e.g., Barker et al., 2007). Stepwise advance of temperate glaciers could have provided the grinding and water needed for chemical reactions to create distinct pulses of weathering runoff. Furthermore, the expansion of ice onto areas of the continent that were not previously covered by riverine drainage may have facilitated erosion of Nd-rich iron(hydr)oxides, pre-formed in an Eocene (subtropical) weathering environment (Bayon et al., 2004). If such iron(hydr)oxides were formed in areas of older bedrock geology (e.g., Southern Prince Charles Mountains; see the appendix in Williams et al., 2010) they could have provided a large flux of Nd with a particularly low Nd isotopic composition. The short-lived nature of the spectacular Nd isotope excursion observed by Scher et al., and its coincidence with the benthic deep-sea oxygen isotope record, intrinsically ties it to the major ice expansion in Antarctica. It seems plausible that a significant weathering flux would precede the arrival of ice on the continental margin, and hence ice-rafted debris production (Scher et al., this volume). At this point, one could be tempted to use the Nd isotope information in a more quantitative way to constrain silicate weathering rates and potential effects on atmospheric CO2 draw down during the Eocene−Oligocene transition. While the community should strive to achieve such a quantitative understanding in the future by means of robust modern process studies and modeling of geochemical budgets (e.g., Vance et al., 2009), we have to acknowledge that one record from the Southern Ocean is not sufficient to take this last step yet. It is, however, studies like the one presented by Scher et al. that stimulate the application of novel climate proxies to the field and foster new avenues of research. The material for such research is already in sight: two recent Integrated Ocean Drilling Program (IODP) expeditions have recovered some of the sedimentary archives needed to further refine our understanding of the Eocene−Oligocene transition. Expeditions 320 and 321 sailed in 2009 and retrieved a Pacific Equatorial Age Transect (PEAT) of cores, containing Eocene−Oligocene sections (Pälike et al., 2010). These distal records will provide far-field constraints on Antarctic ice buildup, and complement the first ever proximal record drilled across the Eocene−Oligocene boundary at the Antarctic Wilkes Land margin (IODP Expedition 318; Expedition 318 Scientists, 2010). Exciting times lie ahead for advancing our understanding on the complex interaction of climate, tectonics, and ocean biogeochemical cycles.
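As an aside for readers new to the proxy, seawater Nd isotope records such as Scher et al.'s are conventionally reported in epsilon notation; the definition below is standard background rather than something introduced in this commentary:

\[ \varepsilon_{\mathrm{Nd}} = \left( \frac{(^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{sample}}}{(^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{CHUR}}} - 1 \right) \times 10^{4} \]

where CHUR is the chondritic uniform reservoir. Old continental crust has unradiogenic (strongly negative) εNd, so a pulse of weathering from ancient bedrock such as the Southern Prince Charles Mountains would drive the deep-water record toward lower values, which is the sense of the excursion discussed above.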
Amino acids help maintain our body's optimal health and vitality. Amino acids are the "building blocks" of the body. When protein is broken down through digestion, the result is 22 known amino acids. Eight are essential, meaning they cannot be manufactured by the body. The rest are non-essential (they can be manufactured by the body with proper nutrition).

To understand just how vital amino acids are for our health, we must understand the importance of proteins. Protein substances make up the muscles, ligaments, tendons, organs, glands, nails and hair, and are essential for the growth, repair and healing of bones, tissues and cells. Insufficient levels of the essential amino acids can dramatically interrupt the way our bodies work. For example, deficiencies of tyrosine, tryptophan, phenylalanine, and histidine can cause neurological problems and depression. Low levels of tryptophan also make us anxious and unable to sleep.

Amino acids are most abundant in protein foods, yet all foods contain some. Animal foods such as beef, pork, lamb, chicken, turkey, eggs, milk, and cheese are known as complete proteins and usually contain all eight essential amino acids. Many vegetable proteins contain adequate levels of many of the essential amino acids, but may be low in one or two. Grains and their germ coverings, legumes, nuts and seeds, and some vegetables fit into this category.

The importance of balancing the diet in order to obtain sufficient levels of all the essential amino acids cannot be overstated. A diet containing a variety of wholesome foods is crucial. If the complete proteins (listed above) are eaten daily, there is no need to worry about supplementing the diet or creating optimal food combinations. However, most of us do not eat these foods daily and probably should not, as the overconsumption of protein foods (especially meat and milk) can lead to disease. Those of us who follow a lacto-ovo-vegetarian diet need have less concern about combining foods than those of us who follow a vegan diet. For those eating vegetarian diets, it is fairly easy to obtain a good protein balance from vegetables, grains, nuts, and legumes. Eating beans or seeds with some sort of grain is the simplest way to obtain an adequate balance of proteins. Oftentimes, traditional food cultures have already solved the problem (e.g., South American black beans and rice; Middle Eastern chickpeas and couscous).

According to Gabriel Cousens, M.D., in his book Conscious Eating, "the Max Planck Institute has found that the complete vegetarian proteins, those with all eight essential amino acids, are superior to, or at least equal to, animal proteins. They showed that these complete proteins were found in various concentrations in almonds, sesame seeds, pumpkin seeds, sunflower seeds, soybeans, buckwheat, peanuts, potatoes, all leafy greens, and most fruits."

Paying attention to what we eat and how we combine our foods is the first step in preventing amino acid deficiency. If there is worry that the diet is not giving the body all it needs, there is always supplementation. Supplementing with amino acids has been known to help those suffering from degenerative diseases such as mental or nervous disorders, heart disease, chronic fatigue syndrome, diabetes, epilepsy, anemia and herpes. Amino acid supplements are available singly and in combinations. It is always a good idea to consult with a physician to see which supplements, if any, are suitable for your particular needs.
World History on Ice

From the outside, the storage shed on the University of New Hampshire (UNH) campus in Durham looks inconspicuous enough: a standard white 48-by-12-foot box. It doesn't look too remarkable from the inside, either, housing a few electric jigsaws and racks holding thousands of cylindrical canisters filled with ice. This is not your average ice locker, however. It contains all the pieces of a two-mile strip of ice drilled from a massive ice sheet in Greenland. Moreover, this ice holds vital data about the earth's climate over the past 250,000 years and offers the most detailed record yet of the last 110,000 years of our planet's history.

"In some ways, the ice sheets tell us more about what the environment was like in northern latitudes 100,000 years ago than we can learn about the 1700s and 1800s from human records," says Paul Mayewski, director of glacial research at UNH and chief scientist for the Greenland Ice Sheet Project Two (GISP2). "Those written records consist mainly of temperature readings, but we can use the ice to analyze 45 different variables." Mayewski views the ice sheets as a "time machine" that not only tells us about the earth's history, including the effects of hundreds of volcanic eruptions, but also about human history. This frozen repository is providing a bounty of information to both earth scientists and archaeologists.

How can they extract so much information from ordinary chunks of ice? The Greenland ice sheets are composed of snow that falls to earth carrying compounds from the air, including chemicals, metals, dust, even radioactive fallout. The snow piles up layer by layer, year after year, trapping these substances. Pressure from the accumulating snow eventually creates ice, and bubbles that form in the ice seal off small samples of the atmosphere. In laboratories at UNH and elsewhere, scientists can precisely identify the yearly layers in the ice, like the rings in a tree trunk, to determine the composition of the atmosphere at that time.

Greenland's frozen archives contain remarkable remnants of industrial enterprise over the ages. For instance, the record shows that the earliest large-scale pollution started about 2,500 years ago and continued for the next 800 years: the result of mining and smelting lead and silver during the Greek and Roman eras. In fact, lead pollution in that period rose to four times natural background levels, according to Claude Boutron, a French scientist whose team studied ice chunks from a parallel sampling effort, the European Greenland Ice-Core Project. Other findings indicate that the decline of the Roman Empire was followed by a steady drop in lead pollution: lead concentrations in the ice cores fell during the Middle Ages and did not surpass the Roman levels until the start of the Industrial Revolution. An even sharper rise occurred in the twentieth century, when lead concentrations rose to some 200 times natural (pre-Greek and Roman) levels, presumably owing largely to the introduction of lead additives to gasoline. Other chemicals have also shown a dramatic upsurge. According to the ice core data, atmospheric concentrations of carbon dioxide climbed almost 30 percent, methane concentrations more than doubled, and concentrations of sulfate (a byproduct of coal combustion) have roughly tripled since the onset of the Industrial Revolution.
New pollutants began showing up in Greenland in the late 1950s: radioactive strontium-90 and cesium-137, fallout primarily from U.S., Soviet, and British nuclear testing programs. "This fallout reached a peak in 1963 and then dropped off with the signing of the atmospheric Test Ban Treaty later that year," says Jack Dibb, a UNH scientist in the Glacier Research Group. "We still see little bumps in the 1970s and '80s from tests by the Chinese, French, and perhaps some others we don't know about." More radioactive debris, in the form of cesium-134 and 137, drifted to Greenland in May 1986 courtesy of the Chernobyl nuclear accident in the Ukraine. This radioactive cloud also deposited isotopes in Antarctic ice, suggesting that the entire planet was contaminated by the core meltdown.

But the story the ice tells is not all bad. Concentrations of key pollutants (including lead) reaching Greenland have actually declined since the passage of the U.S. Clean Air Act in 1970 and the subsequent clamp-down on emissions. Still, over the 100,000-plus years these ice cores span, levels of carbon dioxide and methane, both greenhouse gases, have never been higher than they are today, says Martin Wahlen, a physicist at the Scripps Institution of Oceanography, and the magnitude of this human-induced change is truly remarkable. With respect to carbon dioxide and methane concentrations, he says, "humanity has brought about a change of roughly the same magnitude as that which naturally occurs between glacial and interglacial periods." Whereas this natural shift took place over the course of tens of thousands of years, however, the human-induced change occurred within only the past few centuries.

One of the biggest surprises to emerge from the GISP2 project is the discovery of rapid climate shifts that occur within a time frame of decades or less. "We've shown, on at least eight separate occasions, that climate change has occurred abruptly as civilizations were developing in the last several thousand years," Mayewski says. These changes can put people living in extreme environments, either very cold or arid, at risk. "If you live in a marginal area like that, a slight change in temperature or moisture can put you out of business." For example, Mayewski and Yale archaeologist Harvey Weiss have found a surprising correlation between a climatic "event" in 2,200 B.C., which resulted in extreme drought from Europe to India, and the collapse of the Mesopotamian Empire, which was based near a desert region in what is now Iraq. "That doesn't mean climate change was the only factor, but it probably played some role," Mayewski says. Mayewski teamed up with archaeologist Tom McGovern of Hunter College and others to investigate a similar longstanding mystery regarding the disappearance of Norse settlers in Western Greenland beginning in the mid-1300s. "The core records indicate a really cold winter around the year 1350 and a series of progressively colder summers," McGovern says.
"The worst news for these people would have been a series of cold summers, which would have reduced an already short growing season, and that's exactly what happened." The climate, he adds, had always been suspected of playing a role in wiping out the settlement, but "we needed the new ice core data, which has a resolution on the scale of individual years and seasons, to really pin it down." McGovern next hopes to find out whether the widespread die-offs of mastodons, woolly mammoths, and other animals 10,000 years ago at the end of the Pleistocene era were due mainly to climate change or to human predation. "There's been a tremendous debate in archaeology for years, and the Greenland data can finally help us resolve it."

Mayewski expects that future studies will turn up many other associations between the climate events revealed in the ice sheets and major turning points in human history. The next step, he says, is to produce ice cores from other parts of the world, hence a deep-drilling program that began last year in Antarctica. The GISP2 collaborators are also beginning to compare the ice core data with corresponding climate records obtained from tree rings, lake sediments, and coral. The key is not just to pool the data, McGovern says: "You really need to bring people together to form diverse teams," and collaborations of this sort between climatologists, archaeologists, paleontologists, and historians are "opening up a whole new area" with tremendous potential. In terms of exploiting the body of information locked deep in the world's ice sheets, Mayewski adds, "we've only begun to scratch the surface."
Adjectives are words that modify nouns. They describe nouns by telling us the color, age, size, or some other characteristic of that noun. Unlike adjectives in some languages, in English they have a single form - they do not change according to gender, number, or location in the sentence. Many consonant sounds come in pairs. For example, P and B are produced in the same place in the mouth with the tongue in the same position. The only difference is that P is an unvoiced sound while B is a voiced sound.
In nature, all living things are in some way connected. Within each community each species depends on one or more of the others for survival. And at the core of individual ecosystems is a creature, or in some cases a plant, known as a keystone species. This species operates much like a true key stone, which is the stone at the top of an arch that supports the other stones and keeps the whole arch from falling down. When a keystone species is taken out of its environment, the whole system could collapse. In California's Monterey Bay National Marine Sanctuary the sea otter is a keystone species in the kelp forest ecosystem. Kelp forests provide food and shelter for large numbers of fish and shellfish. Kelp also protect coastlines from damaging wave action. One of the sea otter's favorite delicacies is the sea urchin who in turn loves kelp. When present in healthy numbers, sea otters keep sea urchin populations in check. But when sea otters decline, urchin numbers explode and grab onto kelp like flies on honey. The urchins chew off the anchors that keep the kelp in place, causing them to die and float away, setting off a chain reaction that depletes the food supply for other marine animals causing their numbers to decline. By the early 20th century when sea otters were nearly hunted out of existence for their fur, kelp beds disappeared and so did the marine life that depended on kelp. Years later, conservationists moved some remaining otters from Big Sur to Central California. Gradually, their numbers grew, sea urchin numbers declined, and the kelp began to grow again. As the underwater forests grew, other species reappeared. Protecting keystone species, like sea otters, is a priority for conservationists. Often, the extent of the keystone functions of a species aren't known until the species has been removed from its environment and the ecosystem changes. Rather than wait until it may be too late for the system's health and survival, scientists make every effort to keep an ecosystem working as nature had intended.
Why is playground safety important? Fewer than 5 children aged 0-9 die from playground injuries each year. However, three hundred and twenty-nine (329) children aged 0-4 and nine hundred and fifty-four (954) children aged 5-9 were admitted to hospital as a result of a playground injury (2010/11). Playground injuries are the second leading cause of injury hospital admissions, after falls in general. As stated in Lesson 1, Introduction to Child Injury Prevention, these admissions are just the tip of the iceberg, as many of these children are only seen in an emergency room or at a clinic, and are not admitted to hospital. Playground injuries are preventable. The images and messages depicted are the most common ways that children 0-6 are injured in playgrounds. Visit the Images section for each topic to view and download the images with their corresponding messages.

How to use the images? These images can be useful in starting discussion about what caregivers know about how to prevent injury and to problem-solve around the barriers they encounter in keeping their children safe. The images can also be integrated into other resources that you create, such as posters, calendars, displays, etc.

Program examples and evaluation tool: These playground safety examples are based on best practice and share activities that groups have done or could undertake. The following documents are available for download:

Supplementary messages and resources
- Playground surfaces need lots of sand, pea gravel, wood chips or other recommended surfacing to cushion children when they fall.
- Keep your young child off equipment that is higher than 1.5 meters (5 feet). Children are more apt to break a bone if they fall from a higher height, particularly if the surface is packed down or not deep enough.
- Young children need to learn physical skills when playing, and will challenge themselves to learn new skills. Children 5-9 like to take chances and need to feel they are doing so, in order to gain self-confidence. Caregivers need to be ready to step in if the child is in danger, but should not "hover".

For additional messaging and information visit the Playground Safety section of the Parachute website.
- Parachute's Play Safe PSA. It's available in English, French, Chinese and Punjabi.
- Canadian Playground Safety Institute
- Other tips and resources around playground safety
- Playground funding opportunities: Google "playground funding opportunities in Canada". A number of sites are listed, including Let Them Be Kids.

Public Health Agency of Canada analysis of 2009 mortality data from Statistics Canada and 2010/11 hospitalization data from the Canadian Institute for Health Information. (This is the most recent data available.)
NASA Earth Observatory: NASA's Terra satellite captured this natural-color image of phytoplankton blooms on May 4 in France's Bay of Biscay.

By Douglas Main

As weather warms up off the coast of France, blooms of plankton have once again begun to form, creating a beautiful, multicolored swirl visible from space. NASA's Terra and Aqua satellites acquired these images of the colorful blooms on April 20 and May 4, according to the NASA Earth Observatory. On the later date, a noticeably larger bloom occurred, fueled by nutrient runoff from French rivers and warmer temperatures in the Bay of Biscay.

Phytoplankton blooms provide food for a whole host of creatures, from zooplankton (small drifting animals) to whales. Through photosynthesis, the blooms harness the energy of the sun and turn carbon dioxide into sugars. Sometimes, however, they can cause problems: certain species of phytoplankton can form so-called red tides and produce neurotoxins that affect marine mammals as well as humans. And when they get too big, the blooms can create dead zones as the algae sink and decompose, consuming oxygen.

The blooms can also have beneficial environmental effects. According to a study published last summer in the journal Science, phytoplankton blooms absorb about one-third of the carbon dioxide humans emit into the air each year through burning fossil fuels. Various pigments produce the colors of the phytoplankton blooms. For example, a type of algae called coccolithophores makes a calcium-containing shell that creates a milky appearance, according to the Earth Observatory.
Sigler Counselor's Corner Mrs. Blanton's Weekly Insights - February 8th-12th Guidance This Week Idea: During your weekly R-Time activity, focus on the things that make your classroom conducive to building respectful friendships. Here are a few suggestions: - Discuss different situations that could arise between classmates/friends. Have the students act out both appropriate & inappropriate responses to each. - Create a list of friendship qualities and have students rank them from most to least important and defend their responses. - Come up with definitions of friendship in partner groups, share aloud, & compose one inclusive class definition.
Ka-Ka-Ka-r-r-r-et-et-et: Carrot. For parents helping their children learn to read, sounding out words like that is a daily occurrence. Letter-by-letter, syllable-by-syllable, kids make the sounds before thinking about the meaning of the words. As they become proficient readers, they can recognize the words without this painstaking process.

The path that children take to reading proficiency is enabled by changes in the physical structure of the brain – with gray matter waning in some areas as the brain becomes more efficient at particular reading skills, researchers have found. In a new study that followed first graders for two years, neuroscientists have found that the most proficient readers started with increased gray matter for speech processing and, as they became better readers, regions of the brain associated with sounding out words decreased in volume – leading to the possibility that unnecessary neural connections are eliminated to make them more efficient readers.

“Given evidence that the brain adapts to the learning of new skills via plastic structural changes, we were curious to examine which structural changes are related to the acquisition of written language,” says Janosch Linkersdörfer of the German Institute for International Educational Research in Frankfurt. He was fascinated by the fact that people can acquire complex reading and math skills that are recent cultural inventions and, therefore, lack dedicated neural systems. Although scientists have explored the neural processes that support reading, few have determined how the brain itself changes to learn how to read.

So Linkersdörfer and colleagues from Frankfurt University invited children to come into their lab at two different times in their reading lives – in the first and second years of elementary school. Each year, the children participated in a behavioral assessment of their reading skills and a structural MRI session. Using a technique called “tensor-based morphometry,” which can very accurately align 3-D images of the brain, the researchers mapped out differences between gray matter volume at the two different time points and matched those changes to reading proficiency scores.

As published in the Journal of Cognitive Neuroscience, they found two major patterns in their data. First, they found that children with higher volumes of gray matter in a left hemispheric region that has been associated with the perception and production of speech sounds became more reading proficient in second grade than those with correspondingly lower gray matter volumes. They believe the results are evidence that children have significant individual neurostructural differences before they learn how to read.

The other major result they found was a decrease in gray matter volume as children transitioned from being reading ready in first grade to being more reading proficient in second grade. These decreases in gray matter were in regions known to be involved in the manipulation of speech sounds. As children become better readers, they rely less on sounding out every word and are able to use their neural networks more efficiently. “We were surprised to find a negative association between cortical volume changes and reading proficiency,” says Linkersdörfer. 
“Previous studies that examined neurostructural correlates of reading, mostly in adults, usually reported positive associations between structural features of the brain and reading proficiency.” But, he points out, this study is the first to look at such structural changes over more than one time point in a child’s life. This new study, Linkersdörfer says, also examined children much younger than has been typical of such studies – looking at children at the very beginning of reading instruction.

“In children of this age group, neurostructural development is mainly dominated by synaptic pruning processes,” whereby experience guides the strengthening of frequently used neural connections and the weakening or elimination of sparsely used connections. “This phase might mark a shift toward a more accurate and efficient – more adult-like – processing in specialized neural networks,” he says. “Our results might thus indicate the formation of a more mature and fine-tuned cortical network for the processing of written language in the left hemisphere.”

Importantly for parents and children alike, the study supports the idea that the brain is highly malleable and adapts to external demands. Therefore, reading success depends on the amount of effort put in. It also, however, points to structural brain differences that might give some children a leg up in the reading process. “It will be important to investigate whether these differences reflect genetic predispositions or differences in speech and language experience in the first years of life,” Linkersdörfer says.

In future studies, Linkersdörfer’s team will follow children over longer periods of time to further examine how the brain dynamically changes over different stages of reading development. They are also interested in conducting similar studies with other academic skills such as mathematics. “Ultimately, we hope that our work will contribute to a better understanding of how the brain adapts to facilitate these academic skills,” he says.

-Lisa M.P. Munoz

The paper, “The Association between Gray Matter Volume and Reading Proficiency: A Longitudinal Study of Beginning Readers” by Janosch Linkersdörfer, Alina Jurcoane, Sven Lindberg, Jochen Kaiser, Marcus Hasselhorn, Christian J. Fiebach, and Jan Lonnemann, was published in the Journal of Cognitive Neuroscience online on Sept. 9, 2014.
- states that atoms tend to lose, gain, or share electrons in order to have a full set of valence electrons
- What are atoms with a full set of valence electrons called? the noble gases
- What was one way to emphasize an atom's valence electrons? lewis dot diagram
- ion made up of more than one atom
- compound made up of ions
- positively charged ion
- negatively charged ion
- binary ionic compounds: compounds composed of two different elements
- atom that has a positive or negative charge
- to denote the ratio of ions in a compound
- formed when one or more electrons are transferred from one atom to another
- formed by a shared pair of electrons between two atoms
- a group of atoms that are held by covalent bonds
- substance that is made of molecules
- to describe the composition of a molecular compound: molecular formula
- tells how many atoms are in a single molecule of the compound
- specifies which atoms are bonded to each other in a molecule
- double covalent bonds consist of two pairs of shared electrons
- a covalent bond in which electrons are shared equally
- electrons are shared unequally

A worked example of how ion charges determine the ratio of ions in an ionic compound is sketched below.
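As a brief illustration of the "ratio of ions in a compound" idea above, here is a hedged worked example; magnesium chloride is an illustrative compound chosen for this sketch, not one named in the card set.

```latex
% Illustrative sketch (not from the original card set): how ion charges set the
% ratio of ions in a binary ionic compound.
% Mg loses its two valence electrons to form Mg^{2+}; each Cl gains one electron
% to form Cl^{-}. Two chloride ions balance one magnesium ion, so the formula is MgCl2.
\[
  \mathrm{Mg^{2+}} \;+\; 2\,\mathrm{Cl^{-}} \;\longrightarrow\; \mathrm{MgCl_{2}}
\]
```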
As part of the children’s program it’s important to undertake cultural celebrations in acknowledging rights and traditions. This yearly calendar has a variety of monthly cultural celebrations which can be acknowledged within the program and adapted to suit the needs of the children and their families within the service. When selecting festivals or religious celebrations it’s important to choose those that are relevant to the children and families within the setting and that enable children to become aware of each other’s cultures. - 1st New Year’s Day - the time at which a new calendar year begins. - 4th World Braille Day - to commemorate the birthday of Louis Braille. Louis Braille is credited with inventing the Braille language which helps blind people to read as well as write. - 7th Orthodox Christmas - to remember Jesus Christ’s birth, described in the Christian Bible. This date works to the Julian calendar that pre-dates the Gregorian calendar, which is commonly observed. - 26th Australia Day - it marks the anniversary of the 1788 arrival of the First Fleet of British Ships at Port Jackson, New South Wales, and raising of the Flag of Great Britain at that site by Governor Arthur Phillip. - 26th Indian Republic Day - honours the date on which the Constitution of India came into force on 26 January 1950 replacing the Government of India Act (1935) as the governing document of India. - 31st Chinese New Year (dates vary each year) - it is also known as the Spring Festival. - 31st Tet Vietnamese New Year (dates vary each year) - celebrates the arrival of spring. - 6th Waitangi Day (NZ) - celebrates the signing of the Treaty of Waitangi, New Zealand's founding document, on that date in 1840. - 14th Valentine’s Day - a holiday in remembrance of Saint Valentine, celebrated by sending cards or similar tokens of love. - 5th Ash Wednesday (dates vary each year) - a day of fasting and is the first day of Lent in Western Christianity. - 17th St Patricks Day - the day commemorates Saint Patrick and the arrival of Christianity in Ireland, as well as celebrating the heritage and culture of the Irish in general. - 17th Holi, (dates vary each year), India – a spring festival also known as the festival of colours or the festival of love. - 21st Harmony Day - is intended to show cohesion and inclusion in Australia and promote a tolerant and culturally diverse society. - 22nd World Water Day – promote issues such as a billion people being without access to safe water for drinking and the role of gender in family access to safe water. - 1st April Fool’s Day - a day when people play practical jokes and hoaxes on each other. - 8th Rama Navami, (dates vary each year), Hindu - celebrating the birth of the god Rama to King Dasharatha and Queen Kausalya in Ayodhya. - 13th Palm Sunday, (dates vary each year), Christian & Orthodox - is a Christian moveable feast that falls on the Sunday before Easter. - 14th Tamil New Year, India - is the celebration of the first day of the Tamil new year by Tamils in Tamil Nadu and Puducherry in India, in Sri Lanka and by the Tamil population in Malaysia, Singapore, Réunion and Mauritius. - 14th Sinhalese New Year, Sri Lanka - generally known as Aluth Avurudda (Sinhala) in Sri Lanka, is the new year of the Sinhalese people in Sri Lanka. - 18th Good Friday (dates vary each year) - observed primarily by Christians, commemorating the crucifixion of Jesus and his death at Calvary. 
- 20th Easter Sunday (dates vary each year) - celebrating the Resurrection of Jesus Christ from the dead, described in the New Testament as having occurred three days after his crucifixion. - 21st Easter Monday (dates vary each year) - the day after Easter Sunday and is celebrated as a holiday. - 22nd Earth Day - promotes environmental awareness among the masses. - 25th Anzac Day, Australia & New Zealand - broadly commemorates all Australians and New Zealanders "who served and died in all wars, conflicts, and peacekeeping operations" and "the contribution and suffering of all those who have served. - 1st May Day - an ancient Northern Hemisphere spring festival. - 11th Mother's Day, Australia, Mexico, Venezuela - is a celebration honouring one's own mother, as well as motherhood, maternal bonds, and the influence of mothers in society. - 21st World Day for Cultural Diversity - a day to help people learn about the importance of cultural diversity and harmony. - 26th National Sorry Day, Australia - to remember and commemorate the mistreatment of the continent's indigenous population. - 27th National Reconciliation Week, Australia - to celebrate indigenous history and culture in Australia and foster reconciliation discussion and activities. - 1st International Children's Day - to honour children globally. - 3rd Mabo Day, Torres Strait Island - commemorates Eddie Koiki Mabo, a Torres Strait Islander whose campaign for Indigenous land rights led to a landmark decision of the High Court of Australia. - 5th World Environment Day - to raise global awareness to take positive environmental action to protect nature and the planet Earth. - 20th World Refugee Day - dedicated to raising awareness of the situation of refugees throughout the world. - 21st International Day Of Yoga - celebration of Yoga, a physical, mental and spiritual practice, aims to integrate the body and the mind. - 28th Ramadan begins, (dates vary each year), Islamic Faith - this annual observance is regarded as one of the Five Pillars of Islam. - 4th Independence Day, USA – a federal holiday in the United States commemorating the adoption of the Declaration of Independence. - 6th Dalai Lama's Birthday - the Dalai Lama is traditionally thought to be the rebirth in a line of tulkus who are considered to be manifestations of the bodhisattva of compassion. - 5th NAIDOC Week (dates may vary each year) - Aboriginal & Torres Strait Islander's celebration throughout Australia. - 14th Bastille Day, France - commemorates the beginning of the French Revolution with the Storming of the Bastille. - 18th Nelson Mandela's Birthday, South Africa - he was a South African anti-apartheid revolutionary, politician and philanthropist who served as President of South Africa from 1994 to 1999. - 30th July International Day Of Friendship - a United Nations (UN) day that promotes the role that friendship plays in promoting peace in many cultures. - 2nd Friendship Day (dates vary each year), United States of America - a day for celebrating friendship. - 9th International Day of the World's Indigenous People, Australia - to promote and protect the rights of the world’s indigenous population. This event also recognizes the achievements and contributions that indigenous people make to improve world issues such as environmental protection. - 17th Krishna Janmashtami, (dates vary each year), Hindu - an annual commemoration of the birth of the Hindu deity Krishna, the eighth avatar of Vishnu. 
- 29th Ganesh Chaturthi (dates vary each year) Hindu – celebrates the birthday (rebirth) of the lord Ganesha, the son of Shiva and Parvati.
- 6th Father’s Day - a celebration honouring fathers and celebrating fatherhood, paternal bonds, and the influence of fathers in society.
- 5th Teachers Day, India - it is considered a "celebration" day, where teachers and students report to school as usual but the usual activities and classes are replaced by activities of celebration and thanks.
- 20th Oktoberfest begins (dates vary each year), Germany – it’s the world's largest fair, held annually.
- 10th Double 10 day, Republic of China – the national day of the Republic of China (ROC). It commemorates the start of the Wuchang Uprising.
- 23rd Diwali Festival, (dates vary each year), India - the festival spiritually signifies the victory of light over darkness, knowledge over ignorance, good over evil, and hope over despair.
- 31st Halloween, Canada, United States of America, Japan & United Kingdom - dedicated to remembering the dead, including saints (hallows), martyrs, and all the faithful departed believers.
- 1st Day of the Dead Festival, Mexico - the holiday focuses on gatherings of family and friends to pray for and remember friends and family members who have died.
- 2nd All Souls Day, Christian faith - day of prayer for the dead.
- 3rd Culture Day, Japan - for the purpose of promoting culture, the arts, and academic endeavour.
- 4th Melbourne Cup Day (dates vary each year) - marketed as "the race that stops a nation", it is a 3,200 metre horse race for three-year-olds and over.
- 6th World Kindness Day - a celebration of kindness, which aims to increase the value of kindness in society as well as increase the amount of kind acts that take place, making kindness a greater part of day-to-day life.
- 7th Loy Krathong (dates vary each year), Thailand - comes from the tradition of making buoyant decorations which are then floated on a river.
- 11th Veterans Day, United States of America - an official United States holiday that honours people who have served in the U.S. Armed Forces.
- 11th Armistice / Remembrance Day, Australia & France - observed in Commonwealth countries since the end of World War I to remember the members of their armed forces who have died in the line of duty.
- 11th Feast of St Martin, St Martin, Caribbean - a time for feasting celebrations.
- 27th Thanksgiving Day, (dates vary each year) United States of America - President Washington declared a national Thanksgiving "for the civil and religious liberty", for "useful knowledge", and for God’s "kind care" and "His Providence".
- 5th Sinterklaas, The Netherlands - a traditional winter holiday figure based on Saint Nicholas.
- 12th Our Lady of Guadalupe Day, Mexico - a title of the Virgin Mary associated with a celebrated pictorial image housed in the Basilica of Our Lady of Guadalupe in México City.
- 13th St Lucy's Day, Italy, Scandinavia - the church feast day dedicated to Lucia of Syracuse (d. 304), also known as Saint Lucy.
- 23rd Emperor's Birthday, Japan - a public ceremony celebrating the emperor’s birthday takes place at the Imperial Palace, where the gates of the palace are opened to public traffic.
- 24th Christmas Eve – it’s the day before Christmas Day.
- 25th Christmas Day - an annual commemoration of the birth of Jesus Christ.
- 26th Boxing Day - traditionally the day following Christmas Day, when servants and tradesmen would receive gifts, known as a "Christmas box", from their bosses or employers. 
- 26th Family Day, Vanuatu - to enable workers to take a break from their hectic working lives and to spend some quality time with their family and friends.
- 30th Rizal Day, Philippines - commemorating the life and works of José Rizal, one of the Philippines' national heroes.
- 31st New Year’s Eve - the last day of the calendar year, celebrated in anticipation of the new year.

This calendar represents a snapshot of the main cultural events celebrated by Australia's diverse population. It does not include the celebrations of all cultural groups, and please keep in mind some of the dates may vary each year and may not be updated for the current year (correct as of 2014). Honouring cultural diversity and awareness through celebrations and experiences requires commitment and respect for being and belonging in the world.

“Educators honour the histories, cultures, languages, traditions, child rearing practices and lifestyle choices of families”. The Early Years Learning Framework for Australia

Putting Children First (Issue 33, March 2010)
Student-Centered Learning: Meaningful Work Project-based learning that is student-centered works if it is meaningful work. According to the article “Seven Essentials for Project-Based learning” on Education Leadership: A project is meaningful if it fulfills two criteria. First, students must perceive the work as personally meaningful, as a task that matters and that they want to do well. Second, a meaningful project fulfills an educational purpose. Well-designed and well-implemented project-based learning is meaningful in both ways. It doesn’t matter the age of the learner, every learner gets more involved in the process if the task at hand means something to them and there is a purpose for their work. Let’s look at purpose. - Teacher one gives an assignment for their students to write a paper. Usually, the student hands the finished paper in to the teacher who then spends the evening reading and grading the papers. - Teacher two shares a topic or asks students to find a topic that is meaningful to them and write why it is meaningful. Students generate questions about their topic, come up with an opinion piece, and then share their writing with their peers who provided feedback. They use a rubric to grade each other and themselves. Which do you find more meaningful and engaging? Wanting to know more Students come to school curious about the world. They want to know more. If the teacher can let students pursue their interests and what they are curious about, then the classroom changes. How about the teacher bringing in a photo or local topic like a polluted nearby creek and letting students discuss it? Then they could go to the creek, take pictures, do research about the creek, interview water experts, etc. What they could find out is that they can make a difference somehow. They can research the problem, find out how a polluted creek like this one could impact the environment and life in the creek, get the right people involved to clean up the creek, and even pick up trash around the creek themselves. What about the standards? When I work with teachers they are told to meet the standards, follow the pacing guide, and use the textbook. When you are moving to a student-centered classroom, you are slowly changing the way you teach. You can still meet the standards and cover most of the curriculum. Instead of trying to “cover” everything, there may be another way to involve your students as co-designers of their learning. - Show your students the standards — right from the beginning. Explain that they will need to meet these standards with the project. Projects also cover multiple disciplines. If you focus on creeks for 4th grade (CA Science – Earth Science – Water), then you are also meeting Investigation and Experimentation, Language Arts > Writing Strategies > Research and Technology) and probably more. - Tell them that you need their input as co-designers so their learning is more meaningful to them. Mention that you normally teach the lesson like this but would like to have more of a student voice. Have them review the topic, the standards, and come up with questions based on this information. Good driving questions help focus the project We are all born curious. Most children want to learn something by first asking a question. “Where does rain come from?” “Why does a hummingbird flap its wings so fast?” The questions lead to more questions. 
If you think about the creek and pollution, maybe some of the questions might be “how did the creek get polluted?” or “why do people throw their trash in the creek?” or “how does the pollution affect the fish and other life in the creek?” A good driving question gets to the heart of the topic or problem. The creek is polluted. Life in the creek is impacted. The environment is affected by the pollution. Sometimes a good driving question is a call to action. “What can we do to stop the pollution in the creek?” The other questions asked before supported this question. Students working in groups This is the piece that teachers find difficult to manage or coordinate. Do you let students choose their groups or group by topic or do you choose the groups for them? The first time you ever do a project-based learning activity, be kind to yourself. First time, you choose the groups. Each group will have roles for each person but you decide on the roles. Let them choose who will do what. Some students will take on multiple roles and help each other. Some may not. I’m going to go into more detail in later posts about how to set up groups, designing questions, etc. The main thing I wanted to get across in this post was to focus on meaningful work and purposeful projects. If your students, no matter what age, feel they can make a difference, they are more motivated to learn, to share, to write, and to present.
Article by: Dean Spence

About 30 per cent of the world’s carbon dioxide production comes from Canada and the U.S.A. One quarter of this is associated with transportation – mostly single-passenger automobiles. David Suzuki, Ph.D. in zoology, often notes the absurdity of transporting a ninety-kilogram person in 2 tonnes of metal. This is not sustainable, especially in urban areas where public transit is readily available.

Worldwide, the demand for cars is only increasing, especially in countries like China and India. Consider Delhi, where the air often contains six to ten times the internationally accepted level of harmful fine particulate matter (PM2.5). Vehicles are one major culprit. Every day, Delhi adds 1,500 cars to its roads. The Indian government recently announced plans requiring that cars with odd- and even-numbered license plates alternate on Delhi's roads each day for two weeks.

U of T professor Greg Evans, Ph.D., P.Eng., and his team study the sources, chemical transformation and health impacts of air pollutants. Recently they have focused on traffic-related air pollution (TRAP) in large cities like Toronto. According to Evans, TRAP is a key source of air pollution in Canadian cities and affects the health of the 1 in 3 Canadians who live near major roads.

“We are studying the pollutants in individual exhaust plumes as vehicles drive by our lab on College street,” Evans said via email. “We have measured the exhaust of over 100,000 vehicles and found that a small portion of the vehicles are emitting a large portion of the pollution. This represents a great opportunity to improve air quality; removing this small number of vehicles from the road, or cleaning up their emissions, could produce a dramatic reduction in air pollution.”

Automobile technology has improved significantly in recent years, largely in response to stricter fuel-emission standards in countries such as Canada and the U.S.A. However, as Suzuki points out in Everything Under the Sun, hybrid cars still use fossil fuels, and electric car technology does not solve all of the issues because electricity often comes from coal-fired power plants. Evans says there is often a trade-off with new vehicle technologies.

“We have been studying emissions from vehicles equipped with the new gasoline direct injection (GDI) engines. These cars will soon be the dominant types sold in Canada. These GDI-equipped cars are more fuel efficient, which is good news in terms of climate change. However, after taking measurements in our vehicle emissions lab we have discovered they also emit more of a number of pollutants, potentially increasing the burden on health.”

Evans and his team also investigate how aerosols impact human health and the environment. He describes aerosol particles as either microscopic liquid or solid particles which are suspended in the air. Aerosols can be created by such natural sources as oceans, deserts, forest fires and volcanoes. They can also be created by such anthropogenic sources as vehicles, industrial processes, coal-based electricity generation, or even candles or cooking at home.

“They are a complicated soup of chemicals and can travel deep into our lungs,” Evans said. “The smallest ultrafine particles can even travel in blood cells to different parts of our bodies. Every time we breathe we inhale millions of these particles, depending on how polluted the air is. This can have substantial impacts on health. 
Air pollution is the number one environmental burden on health, associated with over three million deaths a year globally. Much of this burden is due to these aerosol particles.”

Much of the research that Evans and his team conduct is carried out through The Southern Ontario Centre for Atmospheric Aerosol Research (SOCAAR). As director of SOCAAR, Evans heads the interdisciplinary centre that brings together medical personnel, atmospheric chemists and environmental engineers in collaborative, state-of-the-art facilities and in partnership with government and industry.

“SOCAAR’s main goal is to produce a broad, trans-disciplinary and actionable understanding of the origins, characteristics, environmental impact, and health consequences of atmospheric aerosols. SOCAAR is also part of the Canadian Aerosol Research Network, along with sister centres at Dalhousie University and the University of British Columbia.”

Evans’ research is not just limited to what he and his team investigate out of their College street lab. They also have a mobile research facility that allows them to study air quality throughout Ontario.

“MAPLE (Mobile Analysis of ParticuLate in the Environment) allows us to get out of the lab and investigate air quality at sites around Ontario. Recently we have been using MAPLE to measure air quality across the GTHA through sampling. We have also measured emissions from vehicles on the highway as we drive near them.”
An intransitive verb is one that does not take a direct object. In other words, it is not done to someone or something. It only involves the subject. The opposite of an intransitive verb is a transitive verb. A transitive verb can have a direct object.

Remember, you can find the direct object of a verb by reading the verb and then asking "what?" (or "whom?"). If this question is not appropriate, then you're probably dealing with an intransitive verb. For example (verbs in bold):
- He laughed. (Laughed is an intransitive verb. It has no direct object. You cannot laugh something.)
- He told a joke. (Told is a transitive verb. The direct object is a joke. You can tell something. You can tell a story, a lie, a joke, etc.)
- He caught the bus after the party. (Q: Caught what? A: the bus. This is a transitive verb. It has a direct object.)
- He disappeared after the party. (Q: Disappeared what? That doesn't make sense. You can't disappear something. This is an intransitive verb. It can't take a direct object.)

Examples of Intransitive Verbs
Here are some more examples of intransitive verbs:
- Every single person voted.
- The jackdaws roost in these trees.
- The crowd demonstrated outside the theatre. (In this example, demonstrated is an intransitive verb. However, to demonstrate can be used transitively too, e.g., He demonstrated a karate chop to the class.)

Examples of Verbs Which Are Transitive and Intransitive
Some verbs can be both transitive and intransitive, depending on the precise meaning. For example:
- Mel walks for miles. (As walks is not being done to anything, this verb is intransitive.)
However, compare it to this:
- Mel walks the dog for miles. (This time, walks does have a direct object (the dog). Therefore, it is transitive.)
Here is another example:
- The apes played in the woods. (Played has no direct object here, so it is intransitive.)
- The apes played hide and seek in the woods. (Q: Played what? A: hide and seek. Here, played is transitive.)

Common Intransitive Verbs
Here is a list of common intransitive verbs (those marked can also be used transitively):
- to agree (can also be transitive, e.g., to agree a point)
- to appear
- to arrive
- to belong
- to collapse
- to collide
- to demonstrate (can also be transitive, e.g., to demonstrate a skill)
- to die
- to disappear
- to eat (can also be transitive, e.g., to eat a cake)
- to emerge
- to exist
- to fall
- to go
- to happen
- to laugh
- to nest
- to occur
- to play (can also be transitive, e.g., to play a tune)
- to remain
- to respond
- to rise
- to roost
- to run (can also be transitive, e.g., to run a mile)
- to sit (can also be transitive, e.g., to sit a child)
- to sleep
- to stand (can also be transitive, e.g., to stand a lamp)
- to vanish
- to walk (can also be transitive, e.g., to walk the dog)

Intransitive Verbs Do Not Have a Passive Form
As an intransitive verb cannot take a direct object, there is no passive form. For example:
- She fell. (The verb fell (from to fall) is intransitive.)
- She was fallen. (There is no passive version of to fall.)
Here is another example:
- The event happened at 6 o'clock. (The verb happened (from to happen) is intransitive.)
- The event was happened at 6 o'clock. (There is no passive version of to happen.)
Compare those two examples to one with a transitive verb:
- The man baked a cake. (The verb baked (from to bake) is transitive.)
- A cake was baked by the man. (You can have a passive version with a transitive verb.)

What is a direct object? What are transitive verbs? 
What is the subject of a verb? What is the passive form (or voice)? Glossary of grammatical terms
Part of an ongoing series highlighting the easy, no-cost ways that you can prepare your child for learning to read, today Christina will be discussing the benefits of reading with your child. Reading to your child is a fun and easy way to help prepare your child to read. Even the act of opening a book is teaching your child how a book works and what you do with it. Reading to your child will increase your child’s vocabulary, their general knowledge, prepare them for what letters and punctuation look like, as well as help create a bond between you and your child. Children enjoy reading because they are spending time with you and children who enjoy being read to are much more likely to be interested in learning to read on their own when they are older. Things to Keep in Mind When Reading to Your Child: Read to your child every day. Don’t worry about how well you read. What is important is the interaction you have with your child. If you create a reading time, this will become a ritual your child will look forward to. While many parents read to their child at bedtime, it can be any time you pick when you are not feeling rushed. Involve your child with the story. Let your child turn the pages. Talk to your child about the book, ask questions as you read and listen to what your child says. Let them point things out. Discuss the meaning of new words to help build their vocabulary. At the end of the story, let your child retell it in their own words to help build their listening comprehension. It’s okay to read the same story over and over. Even though adults get tired reading the same story all the time, your child is learning vocabulary and story patterns by memorizing the story. It is also fun for them to be able to predict what will happen. We all like to know things. This starts at an early age. Ask your librarian for book suggestions appropriate for your child’s age and current interests. If you are concerned about your child tearing pages, ask for board books which have cardboard pages. It is never too soon or too late to start reading. The sooner you begin reading to your child the more they learn and the more fun you have together sharing. Participate in reading programs like the library’s summer reading program which begins June 6th or the national book program 1000 Books Before Kindergarten which begins April 30th.These programs can give you and your child goals to keep you focused on reading regularly and add another element of fun. Ask at the Children’s Desk for details.
Statistical power is the likelihood that a statistical test will detect an effect when one genuinely exists. Power can range between 0 and 1, with higher values indicating a greater likelihood of detecting an effect.

What is statistical power? Statistical power is the probability of correctly rejecting a false H0 (i.e., getting a significant result when there is a real difference in the population).
- Power ≥ .80 is generally considered desirable
- Power ≥ .60 is typical of studies published in major psychology journals

Power will be higher when the sample size (N) is larger, the critical α is more lenient, and the effect size (ES) in the population is larger.

Statistical power can be calculated prospectively and retrospectively. If possible, calculate expected power before conducting a study, based on:
- Estimated N,
- Critical α,
- Expected or minimum ES (e.g., from related research)

Report actual power in the results. A minimal sketch of a prospective power calculation in code is given below.

Try searching using terms such as "statistical power calculator" and maybe also the type of test, and you should turn up links to useful pages such as:
- Statistical power calculators
- One Sample Test Using Average Values
- Post-hoc Statistical Power Calculator for Multiple Regression
- Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.
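For instance, here is a minimal sketch of a prospective power calculation in Python using the statsmodels library, assuming an independent-samples t-test; the effect size, α, and sample size are illustrative values, not figures taken from this page.

```python
# Minimal sketch: prospective power for an independent-samples t-test.
# The numbers (d = 0.5, alpha = .05, n = 64 per group) are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Expected power for a medium effect (Cohen's d = 0.5), two-tailed alpha = .05,
# and 64 participants per group.
power = analysis.power(effect_size=0.5, nobs1=64, alpha=0.05, ratio=1.0)
print(f"Expected power: {power:.2f}")  # roughly 0.80

# Or solve for the sample size needed per group to reach power = .80.
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05, ratio=1.0)
print(f"Required n per group: {n_per_group:.1f}")  # roughly 64
```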
Do young people have any respect for the rights of others? Sir Bernard Crick argued that citizenship education requires young people to learn about moral values and to develop their ability to apply these in practice. As such, citizenship education in this area focuses on developing individuals’ ability to act as moral agents in their choices, intentions and actions. How do we create the notion that everyone should be broadly accountable to their communities and for their actions? Social and Moral Responsibility The Taster: Understanding rights and responsibilities 'Diary on the Beach' is a moral conundrum, would you return a lost possession? Case Study 1: Students' understanding of social and moral responsibility What is a student's responsibility whilst studying? Case study 2: Saving the planet How much do students care about the environment and what can they do? Links to electronic sources on Social and Moral Responsibility
What the college board is asking for: In this section of the course, students come to understand the major theories and approaches to personality: psychoanalytic/psychodynamic, humanistic, cognitive, trait, and behaviorist. In the process, they learn about the background and thought of some of the major contributors to the domain of personality such as Adler, Allport, Bandura, Cattell, Jung, Mischel, and Rogers. Through their study in this area, students recognize that each of the approaches to personality has implications for their understanding of both normal and abnormal personality, the assessment of personality, models of personality development, and the treatment of dysfunctional behavior. Students also learn about research in personality, including the kinds of methods that are employed (such as case studies and surveys), the differences among research orientations, and the strengths and weaknesses of each approach. The course exposes students to the major assessment techniques used in the study of personality, such as personality inventories, projective tests, and behavioral observations. Discussion of these instruments necessarily includes consideration of the reliability and validity of each. In addition, students examine the idea of the self and the related issues of self-concept and self-esteem. They learn how the self develops, how self-concept and self-esteem are assessed, and how both of these constructs are related to other aspects of the individual's functioning.

How we measure personality

There are a number of methods to assess one's personality. The major categories are interviews, observations, objective tests, and projective tests.

Interviews: First ask about the person's lifestyle, including job, family, and hobbies. Interviews are used for diagnosing psychological problems and disclosing personality characteristics.

Observation: It's not just "watching people"; it is actually extremely sophisticated. The psychologists are looking for very specific examples that follow strict guidelines. From a psychologist's observations, they can gather much information about one's personality.

Objective tests: Also known as inventories, these are standardized questionnaires that require written responses, usually true-false or multiple choice. They can be administered to a large group of people and are the most widely used method. The most widely researched and clinically used multitrait test is the Minnesota Multiphasic Personality Inventory.

Projective tests: These use ambiguous, unstructured stimuli, such as inkblots or pictures. Projective tests are supposed to reveal one's unconscious conflicts. The two most common projective tests are the Rorschach Inkblot and the Thematic Apperception Tests.

Trait theory: Traits are stable qualities that a person shows in most situations. The early trait theorists are Allport, Cattell, and Eysenck (not mentioned in Acorn). Describing personality this way is a difficult task, considering that there are nearly 18,000 adjectives to describe someone and 4,500 that trait theorists believed would be good words of description. To help break this down a bit, Raymond Cattell condensed this list to 30 to 35 basic characteristics. Hans Eysenck narrowed this list even further, which is known as the acronym OCEAN. Gordon Allport found three types of traits: cardinal, central, and secondary. Trait theory seems to be the first model to achieve the ability to describe and organize personality characteristics. 
The argument is that the human diversity in personality cannot be accounted for by only five traits. Walter Mischel thought that rather than seeing personality as the consistent, internal traits of an individual, you must measure personality by how people respond to factors and conditions in the external environment. Basically, personality is determined by the situation in which people find themselves.

Psychoanalytic/psychodynamic: This theory attempts to explain personality by examining how unconscious forces interplay with thoughts, feelings, and behavior. The founder of this theory was Sigmund Freud. Among some of his followers and contributors to psychoanalytic theory are Carl Jung, Alfred Adler, and Karen Horney.

So what exactly is the unconscious? Freud believed that the unconscious forms the majority of our mind. It stores our primitive, instinctual motives, plus memories that are laced with anxiety and emotion and unable to make their way into the conscious mind. His personality structure is made of three parts: id, ego, and superego. Each accounts for a different aspect of one's personality. The id is made up of biological instincts and urges. It is immature, irrational, and impulsive. The ego is the part of the mind that can plan, problem solve, reason and control the id. It is responsible for delaying gratification. The superego is our moral censor. It sets the ethical guidelines and rules of behavior. It is our conscience, I suppose. Other Freud theories to be familiar with are psychosexual development and defense mechanisms.

Adler was the first of Freud's followers to leave him. His main difference was that he believed that consciousness was the center of personality, rather than the unconscious. He disagreed with radical determinism and felt that each of us has the capacity to choose. Our main goal in life is to find security and overcome feelings of inferiority. He coined the term inferiority complex, which he believed we all suffer from. He felt these feelings of inadequacy stem from having once been a small, incompetent, and helpless child.

Jung placed less emphasis on sexual and aggressive forces and emphasized the positive and spiritual unconscious. He believed the unconscious is split into two parts: the personal unconscious and the collective unconscious. The personal is created from our own individual experiences, whereas the collective is identical in each person and is inherited. Examples would be darkness, mother, and religion. These images, thoughts, behaviors, and emotions are called archetypes.

HORN-eye was cool in that she rejected a little bit from the previous guys, added her own concepts, and thus came up with a theory all of her own. Her biggest problem with Freud was the biological differences between men and women. Freud believed that penis envy created biological inferiority in women, while Horney disagreed and said women's inferiority stems from cultural factors. She believed a child's relationship with its parents was the most important determinant in personality development.

From the humanistic point of view, people are innately good and they possess a positive drive toward self-fulfillment. The most important figure in humanistic personality development is Carl Rogers. Formation of personality comes from an early development of one's self-concept. Rogers used the term self-concept to refer to all the information and beliefs you have regarding your own nature, unique qualities, and typical behaviors. 
He was most concerned with one's self-concept and one's experiences. The goal to achieving a "good" personality is to trust your internal feelings and allow them to guide you toward a healthy mind. This underlies his belief that humans have freedom of choice.

Unconditional Positive Regard: Rogers' term for how we should behave toward someone to increase his or her self-esteem; positive behavior shown toward a person with no contingencies attached. Parents should accept the child's positive nature and discourage the negative nature.

According to Albert Bandura, each of us has a unique personality because of our individual history of interactions with the environment, and because we think about the world and interpret what happens to us. There is a continuing interaction between our environment, cognitions, and behaviors. This is a theory that branches from the radical determinism proposed by the great behaviorist B.F. Skinner. Radical determinism proposes that personality traits (honesty, kindness, hostility) are nothing more than the sum of one's reinforcement history. This theory was very evident in Skinner's book Walden Two, in which a Utopian society is created with happy and content people because they get to do what they want to do. They are taught to cooperate through extensive operant conditioning. Reciprocal determinism is the idea that people influence their environment, just as the environment influences them.

Julian Rotter has a similar idea about personality development. He emphasizes the expectation of what will happen following a specific behavior and the reinforcement value that is attached to the outcome of this behavior. It depends on the degree to which you prefer one reinforcer to another.
Civil rights

Civil rights are those legal protections granted to citizens under the jurisdiction of the civil law of a state. They are distinguished from human rights in that they may be violated or removed, and they may or may not apply to all individuals living within the borders of that state. Civil rights may include the right to vote, right to property, right to bear arms, right to free speech, right to associate, etc. Civil rights movements have existed in many countries.

United States

In 1964 civil rights workers Michael Schwerner, Andrew Goodman and James Chaney were lynched by the Ku Klux Klan in Mississippi. Their deaths shocked the United States' public and Congress, and the murders are among the events that helped pass the landmark Civil Rights Act of 1964.

Northern Ireland

In Northern Ireland the Civil Rights movement developed in the 1960s among Northern Irish nationalists who demanded an end to what was seen as Unionist discrimination, in the form of the gerrymandering of local electoral districts to ensure the victory of unionist candidates in areas with nationalist majorities, and in discrimination in the awarding of local authority housing. One of the leaders of the Civil Rights movement was future Nobel Peace Prize winner John Hume; another, Austin Currie, was a candidate for President of Ireland in 1990. Hume's co-Nobel laureate, David Trimble, leader of the Ulster Unionist Party in the 1990s and 2000s, called the Northern Ireland of the 1960s a "cold house for Catholics".
inertia (ĭnûrˈshə), in physics, the resistance of a body to any alteration in its state of motion, i.e., the resistance of a body at rest to being set in motion or of a body in motion to any change of speed or change in direction of motion. Inertia is a property common to all matter. This property was first observed by Galileo and restated by Newton as his first law of motion, sometimes called the law of inertia. Newton's second law of motion states that the external force required to accelerate a body is proportional to that acceleration. The constant of proportionality is known as the mass, which is the numerical value of the inertia; the greater the inertia of a body, the less is its acceleration for a given applied force. A brief worked example of this relationship is sketched below.

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
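To make the proportionality concrete, here is a short worked example; the force and masses below are arbitrary illustrative values added here, not figures from the encyclopedia entry.

```latex
% Worked example (illustrative values, not from the encyclopedia entry):
% Newton's second law, F = ma, rearranged to show how greater inertia (mass)
% means less acceleration for the same applied force.
\[
  F = ma \quad\Longrightarrow\quad a = \frac{F}{m}
\]
% For an applied force of F = 10 N:
%   a 2 kg body accelerates at  a = 10 / 2  = 5 m/s^2,
%   a 10 kg body accelerates at a = 10 / 10 = 1 m/s^2.
```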
What is Cowbane Poisoning?

Cowbane (also called northern water hemlock or McKenzie’s water hemlock) is the common name for Cicuta virosa, one of four species of cicuta plants that are highly toxic. It is found in parts of Europe and northern Asia, as well as northwestern North America. The cicuta species are considered the most poisonous plants in North America. Reportedly, Native Americans used to dip their arrows in poison extracted from the root to make them more deadly.

The main toxin in cowbane and other cicuta species is cicutoxin, a yellow colored retinoid found in strongest concentration in the root. Toxic root juices are said to smell like parsnip when cut, accounting for one of the plant's many common names, poison parsnip. The root is toxic all year long. The early shoots in spring are also highly toxic; however, during summer and fall the woody stem and larger leaves contain much less cicutoxin.

Cowbane is the type species for other cicuta; it grows from 3-7 feet (1 to 2 meters) tall with leaves that are about 15 inches (38 cm) long, placed alternately on each side of the stem. Three pairs of pointed oval leaflets branch on either side of the leaf stem (in botanical language this is described as tri-pinnately compound). The tiny white flowers are arranged in umbrella-shaped clusters that are 1-4 inches (2.5-10 cm) across. Each flower has 5 petals and 5 stamens.

Cowbane and other cicuta species grow wild in wetlands, fields or along the banks of streams. They are a hazard for livestock if they get into a field, but they are also dangerous for dogs out for a walk or anyone else who doesn’t recognize the plant. The lethal dose is extremely low. As little as a single bite of root, root stem, or early spring leaves could be enough to kill an animal, depending on weight.

Cicutoxin is a neurotoxin which stimulates the central nervous system almost immediately. Excessive salivation, vomiting and seizures can occur from 15 minutes to 6 hours after ingestion. Death often takes place within a few hours, so treatment may be difficult. Immediate treatment can reduce absorption and control the severity of symptoms, but cowbane poisoning doesn’t have a high rate of recovery.

Cowbane, also called water hemlock, is one of the most dangerously toxic plants in North America. It has been known to cause fatal poisoning in humans, dogs, and livestock animals. Symptoms appear quickly and can cause death in only a few hours.

Symptoms of Cowbane Poisoning in Dogs

These are the symptoms you may see if your dog eats cowbane. Call a veterinarian or a poison helpline immediately.
- Foaming at the mouth
- Muscle twitches
- High pulse
- Dilated pupils
- Wheezing or difficulty breathing
- Death from respiratory failure

There are four species of Cicuta which are currently classified in the Apiaceae family.
- Cicuta virosa – Cowbane, Mackenzie’s water hemlock, northern water hemlock
- Cicuta maculata or Cicuta occidentalis – Beaver poison, spotted water hemlock, spotted cowbane, spotted parsley, musquash root
- Cicuta douglasii – Western water hemlock, Douglas water hemlock
- Cicuta bulbifera – Bulblet-bearing water hemlock, bulbous water hemlock

Causes of Cowbane Poisoning in Dogs

These factors could put your dog at risk for cowbane poisoning. 
- Dogs that like to eat shoots or leaves when out for a walk - Dog digging up a root and biting off the root stem (this is where the strongest concentration of toxin is located) - Frequent walks in areas where cowbane grows - Dogs running free in the fields - More likely in springtime when leaves are most toxic Diagnosis of Cowbane Poisoning in Dogs Cowbane poisoning will be diagnosed based on symptoms and a history of ingestion. Specialized blood tests can determine the presence of cicutoxin in the blood, but given the swift onset of symptoms, this is not usually used as a diagnostic method. Retaining a sample of the plant for identification is the best way to ensure a swift diagnosis, but care should be taken to avoid exposure. Wash your hands after handling and don’t touch your mouth or face. Use gloves if possible. If you think your dog may have eaten cowbane or another cicuta species, you should call a veterinarian or a poison helpline immediately. Be prepared to describe the plant exactly as well as give your dog’s breed and weight and an estimate of how much you think was ingested. Treatment of Cowbane Poisoning in Dogs If poisoning took place within the last hour, the veterinarian will induce vomiting and may also perform gastric lavage under anesthesia. Your dog may need an oxygen tube inserted down the throat during this process to ensure breathing remains constant. Activated charcoal will be given to help reduce absorption in the gastrointestinal tract. There is no antidote to cicutoxin, so treatment will be symptomatic. Various types of anti-seizure medication will be given to control seizures, either a benzodiazepine like diazepam, or a barbiturate such as phenobarbital. Several types of medication may need to be tried since seizures often recur even with treatment. Metabolic and blood pressure abnormalities will be maintained through intravenous fluid and electrolyte treatment, or with medication like dopamine or norepinephrine if low blood pressure persists. Kidney failure is often a problem with cicutoxin toxicity, so medications may be given to support kidney function. Humans with cicutoxin poisoning are often treated with dialysis. If your dog survives, symptoms will usually taper off in 24-48 hours, but seizures have been known to persist as long as 96 hours. Recovery of Cowbane Poisoning in Dogs Dogs with cowbane poisoning have a low chance of recovery. Symptoms are often fatal before treatment is possible, and severe cases will likely not respond to treatment. Humans that have recovered from cicutoxin poisoning have experienced negative long-term side effects including amnesia, restlessness, muscle weakness, twitching and anxiety, so physical and behavioral changes may be possible even if your dog recovers. In some cases, symptoms have been known to persist for days or even months after poisoning. It’s almost impossible to keep dogs from digging and investigating unknown plants, but there are several things you can do to help reduce the risk of exposure. Learn to recognize cowbane and other types of cicuta species found in your area. Keep your dog enclosed in the yard unless you can accompany him on a walk. Use a leash in areas where cowbane is known to grow. Have a plan in place for emergency veterinary treatment, so if your dog is exposed you will be able to get professional treatment as soon as possible.
210 - Literature, Culture, and Media Monster Lit: The overarching goal of this course is to expose students to questions about the nature of literature and other cultural products from within their own culture, and outside of it. We read cultural texts (broadly defined) to understand their impact in shaping the culture that surrounds us. For this version of the course the theme is "Monster Literature". This theme runs through most of literary history right up to the present day. The monsters covered in this course range from those found in ancient epic poetry to those in The Walking Dead. Some question we will ask: what is a monster? How are monsters different from each other and different from other creatures? Is the monster part of our psyche? Do we need monsters in order to define ourselves? Age of Revolutions: This course is a survey of the literature written by British authors during the tumultuous and vibrant period beginning with the onset of the French Revolution in 1789 and ending with the ascension of Queen Victoria in 1837. It was during this period that England, still recovering from the American Revolution, began its transformation from an agrarian society in which the landed aristocrats held most, if not all, of the social and economic power, to an industrial society which became more democratic and egalitarian. These various changes and shifts in society are reflected in the literature of the period, making it one of the richest and most varied in British history. Literature and Technology: Surveying the rise of computing technologies, information theories, and information economies in the last century, this course considers their impact on literature, culture and knowledge-formation. In particular, we will reflect on topics such as the relationship between social and technological transformation, literary print and digital cultures, and electronic literature. American Literature: This section of Literary and Cultural Studies is a survey of American literature from the Nineteenth century to the present day. There will be emphasis on the work of Edgar Allan Poe (his most famous poem "The Raven" and his short stories) and on 19th- and 20th-Century American short stories. Students will also read Mark Twain's Huckleberry Finn and Marilynne Robinson's post-modern novel, Housekeeping . Literature for Pop Culture: We’re going to spend this course looking at the recent cultural trend of popularizing/re-tooling works of literature (in our case, Shakespeare and fairy tales) for mass-market consumption. This creates a new genre of texts “inspired by” Literature from the Canon. Oftentimes this inspired re-working is done through unexpected or nontraditional methods, some of which include the graphic novel, online and/or board games, or extraordinarily self-aware/self-referential television programmes. In this course, we will first try to understand what it means to call something “literature” or to label it as part of the “literary tradition”. Then we’ll question the degree to which these homages comprise what we currently believe to be “literary tradition”. Finally we’ll ponder why any of this matters. What can these discussions teach us about culture? What can they teach us about the future of “Literature”? Our lively discussions will center around texts including (but not limited to) the comic book series Kill Shakespeare and Fables, the television programme Slings & Arrows, and the online game Romeo Wherefore Art Thou? 
The Mystery in the Story: What makes a story, and what makes it a mystery story? In this course, we'll study, analyze, and write about the nature of narratives, taking the classic mystery tale written by authors Arthur Conan Doyle, Agatha Christie, and Dashiell Hammett as typical of intricately plotted stories of suspense and disclosure that have been written and filmed in many genres. We’ll also examine horror tales by Edgar Allan Poe and Shirley Jackson, a psychological thriller by Patricia Highsmith, neo-noir films such as The Usual Suspects and Memento, and postmodern mystery parodies such as those of Paul Auster. Through our lively discussions, we'll look at the way that they hold together, the desire and fear that drive them, and the secrets that they tell -- or try to keep hidden. Screening Desire: This class is about love, desire, happy endings and guilty pleasures. Over the course of the semester we will examine the representation of relationships across popular culture. The class will examine a variety of media texts, including: It Happened One Night, Before Sunrise, Looking, The L Word, and True Blood. This course asks: How do popular media represent gender, sexuality, and partnership? If genre is a space where we work through and rework cultural norms, what conversations are romantic stories having with us? How do happy endings and romantic fantasies intersect with the realities of class, race and sexual orientation? What social conflicts do these stories seek to mediate? Finally, how are relationship stories constructed for different audiences and organized across different media forms? This course takes up these questions by examining the role of genre in our culture and exploring what relationships look like in print, on film, and on the television screen. Participatory Culture: From audiences sitting in the dark of the theater, to impassioned fans at conventions, there are many ways for us to engage with media. Popular culture inspires our passion, our participation, and sparks public debate. This class explores different historical periods, their dominant media forms, and theories of reception associated with them. Then, we will use this historical perspective to help us ask questions about contemporary media and participatory culture. This class looks at a variety of film, television, and digital media texts, including: Gentlemen Prefer Blondes, The Color Purple, Battlestar Galactica, remix projects, and major media franchises. We’ll also check out different YouTube Channels, "play" a digital documentary together, and look at transformative works projects like Wizard People Dear Reader. The class asks students to take an active role in discussions by reflecting on their own experiences as viewers. In addition to writing papers, students will also produce digital/remix projects in response to different media texts.
Hesperian Health Guides
Safe chemicals: Who's responsible?
Thousands of chemicals are created and used each year. But as important as they are to our economies, the laws and practices about chemicals do not protect people enough from their harmful effects. Chemical companies, governments, factory owners, and others who oversee their development, sale, and use are part of a system that has harmed people all over the world.
- Chemical companies should prove a chemical is safe before it can be sold and used. Only a few thousand chemicals have been studied for their effects on people and the environment. Almost none have been studied for how they interact with other chemicals in the body. If testing is carried out, it often does not include all health effects.
- Companies and governments must take responsibility for chemicals in use. The company that sells or uses the chemical should be responsible for making it safe for workers and consumers. If people get sick from a chemical, governments must move quickly to regulate or ban it.
- Chemicals should be safe for people inside and outside the factory. "Safe" exposures for workers are set higher than what is considered safe outside the workplace. We all deserve to be safe from toxic chemicals. Employers should use the same, most protective standards in and out of the factory.
- Use fewer chemicals in the workplace. Many products release some of the toxic chemicals used to make them after they leave the factory, as they are used, discarded, or recycled. Products should be designed to use fewer chemicals in their manufacture so they will cause fewer problems "from cradle to grave."
- A chemical should only be replaced by a safer chemical, not by another toxic one. Many companies want to stop using toxic chemicals. However, they often replace one toxic chemical with another one that has not been well studied for health and environmental effects. The new chemical is often just as dangerous, but because its problems have yet to be discovered, it is considered "safer" or "greener."
Safe chemicals in the workplace
If a chemical is to be used in a workplace, it is the employer's responsibility to choose one that:
- is essential to the product, which could not be made without it.
- is safer than other possible alternatives.
- is used in smaller amounts than other alternative chemicals.
- can be used and disposed of without harming the workers or the community.
It is the boss's responsibility to give you chemical information in a language you understand. Workers who do not read well can learn about chemicals from pictures, videos, demonstrations, explanations, and hands-on practice. When you start a new job or are assigned new work, your supervisor should train you on the safe use of the chemicals you work with, their health effects, and what to do if there is an accident.
Clouds could explain how Snowball Earth thawed out — November 13th, 2012, in Earth / Earth Sciences
Glaciation events during the Neoproterozoic (542 to 1,000 million years ago) and Paleoproterozoic (1,600 to 2,500 million years ago) periods - events that spawned ice ages that persisted for millions of years at a time - may have seen glacier ice encircle the planet in a frosty planetary configuration known as a Snowball Earth. Whether the planet could have existed in such a state, however, is a matter of considerable debate. An elevated planetary albedo, caused by the planet being covered in reflective snow and ice, would mean that a Snowball Earth would reinforce itself. With no known mechanisms able to fully explain how the planet could have thawed out from such a state, some scientists suspect that Snowball Earth never happened. However, using a series of global general circulation models, Abbot et al. find that the greenhouse potential of clouds, which had been overlooked in previous research, could explain how a Snowball Earth may have melted. Previous modeling research found that to thaw out a glacier that covered the planet would require carbon dioxide to account for up to 20 percent of the atmosphere by volume. Paleogeochemical evidence, however, shows that carbon dioxide levels reached only 1 percent to 10 percent. The model used for the earlier research, the authors find, ignored the warming potential of clouds. Clouds not only trap infrared radiation near Earth's surface, warming the planet, but also reflect incoming sunlight, cooling the planet. In the modern climate, both effects are important. However, set against a planet encompassed in ice, clouds' reflectivity becomes less important, and the overall effect of clouds is to warm the planet. By accounting for the heat-trapping effects of clouds, the authors find that the atmospheric carbon dioxide concentration required to drive deglaciation is 10-100 times lower than previous research suggested, a concentration that fits within observed levels.
More information: Clouds and Snowball Earth Deglaciation, Geophysical Research Letters, doi:10.1029/2012GL052861, 2012
Provided by American Geophysical Union
"Clouds could explain how Snowball Earth thawed out." November 13th, 2012. http://phys.org/news/2012-11-clouds-snowball-earth.html
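To see why a modest amount of extra longwave warming from clouds can translate into a much smaller CO2 requirement, here is a rough back-of-the-envelope sketch. It is purely illustrative and not taken from Abbot et al.'s climate-model experiments: the 15 W/m² cloud contribution is an assumed number, and the simplified logarithmic CO2 forcing fit (about 5.35 W/m² per e-folding of concentration) is being stretched well beyond the range for which it was derived.

```python
from math import exp

# Simplified fit for CO2 radiative forcing: dF ~= 5.35 * ln(C / C0) W/m^2.
# Inverting it: a fixed amount of forcing corresponds to a concentration
# ratio of exp(dF / 5.35). So if clouds supply part of the warming needed to
# reach the deglaciation threshold, the CO2 concentration required shrinks
# by a multiplicative factor rather than just a small offset.
ALPHA_CO2 = 5.35        # W/m^2 per e-folding of CO2 concentration (standard fit)
cloud_forcing = 15.0    # assumed cloud greenhouse contribution, W/m^2 (illustrative)

reduction_factor = exp(cloud_forcing / ALPHA_CO2)
print(f"CO2 needed for deglaciation drops by a factor of ~{reduction_factor:.0f}")
# -> a factor of roughly 17, i.e. within the 10-100x range reported above
```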
Choose one pipe and move the magnet that is set on the pipe upward. Drop the magnet and observe its motion. Conduct the experiment with different pipes and compare the results. Turn the frame with pipes to place all magnets up at once. Observe the magnets' race.
How does it work
The falling magnet induces so-called eddy currents in the pipes. These currents create a magnetic field that opposes the field of the magnet. As a result of the interaction between the magnetic fields of the magnet and of the eddy currents, the magnet does not fall freely. The effect of slowing down the fall is proportional to the magnitude of the eddy currents. The magnitude of the eddy currents depends on the electrical conductivity of the particular pipe. The pipes are made of copper, aluminium, brass and stainless steel.
Why is this happening
Eddy currents are created due to electromagnetic induction. This phenomenon, discovered in 1831 by the English physicist Michael Faraday, is the most widely used means of generating electricity.
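As a rough illustration of why the copper pipe slows the magnet the most, here is a minimal sketch assuming the simplest possible model of magnetic braking: a retarding force proportional to the magnet's speed and to the pipe's conductivity, F = c·σ·v. The magnet mass, pipe length and geometry constant c are made-up values chosen only to give plausible-looking numbers; the conductivities are approximate handbook figures.

```python
G = 9.81            # m/s^2
MASS = 0.01         # kg, assumed magnet mass
C_GEOM = 1.0e-8     # assumed geometry/field constant (illustrative)
LENGTH = 1.0        # m, assumed pipe length

CONDUCTIVITIES = {  # approximate room-temperature values, S/m
    "copper":          5.8e7,
    "aluminium":       3.7e7,
    "brass":           1.6e7,
    "stainless steel": 1.4e6,
}

def fall_time(sigma, dt=1e-4):
    """Integrate m*dv/dt = m*g - c*sigma*v until the magnet has dropped LENGTH."""
    v = x = t = 0.0
    while x < LENGTH:
        v += (G - (C_GEOM * sigma / MASS) * v) * dt
        x += v * dt
        t += dt
    return t

for metal, sigma in sorted(CONDUCTIVITIES.items(), key=lambda kv: -kv[1]):
    print(f"{metal:15s} fall time ~ {fall_time(sigma):5.2f} s")
# The better the conductor, the larger the eddy currents and the slower the fall;
# in the stainless steel pipe the braking is so weak the magnet almost free-falls.
```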
For any particular sample, the observed sample mean is a specific number; across repeated sampling, however, the sample mean is a random variable that varies from one random sample to another. Provided the sample size is sufficiently large, the sampling distribution of the sample mean is approximately normal (regardless of the parent population distribution), with mean equal to the mean of the underlying parent population and variance equal to the variance of the underlying parent population divided by the sample size. More formally, if X₁, X₂, ..., Xₙ are a random sample from an infinite population with mean μ and variance σ², then E(X̄) = μ and Var(X̄) = σ²/n; and assuming the moment-generating function of the population exists, the limiting distribution of (X̄ − μ)/(σ/√n) as n approaches infinity is the standard normal distribution (Freund 1992). With "sampling distribution of the sample mean" checked, this Demonstration plots the probability density functions (PDFs) of a random variable X (normal parent population assumed) and of its sample mean X̄, that is, the graphs of the N(μ, σ²) and N(μ, σ²/n) densities respectively. For given values of the other parameters (μ and σ), increase the sample size n to visualize the effect on the standard error and therefore on the sampling distribution of the sample mean. As an estimator of an unknown population mean, the sample mean possesses the properties of unbiasedness and consistency, among others. To visualize the consistency property of the sample mean, set the sample size to 100,000 and observe the variance vanish and (given unbiasedness) the distribution of the sample mean collapse onto the parent population mean. This Demonstration is intended to be an instructional or educational tool for educators and students in introductory statistics and econometrics courses.
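For readers without access to the Demonstration, a minimal simulation (not part of the original Demonstration, which assumes a normal parent population) makes the same point with a deliberately skewed parent distribution: the mean of the sample means stays at the population mean, the observed standard error tracks σ/√n, and it shrinks toward zero as n grows.

```python
import random
import statistics

random.seed(0)
RATE = 0.5                     # exponential parent population: mean 2, variance 4

def sample_mean(n):
    """Draw one sample of size n from the parent population and return its mean."""
    return statistics.fmean(random.expovariate(RATE) for _ in range(n))

for n in (5, 30, 200):
    means = [sample_mean(n) for _ in range(5000)]
    print(f"n={n:3d}  mean of sample means={statistics.fmean(means):.3f}  "
          f"observed SE={statistics.stdev(means):.3f}  "
          f"sigma/sqrt(n)={2 / n ** 0.5:.3f}")
```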
The main question, though, is why the winds are sometimes cold, when the winds are by definition hot and dry. Let's start with the basics of how these winds are formed. The winds are created when there is a high-pressure system east of the local mountains, and a low-pressure system off our coast. Naturally, the winds will blow from high to low pressure, hence the northeasterly or easterly direction. If you would like to test this idea of high- to low-pressure flow, simply let the air out of your car tire. The higher pressure in your tire will be released into the air, causing a wind. Most people think the winds are heated by blowing over the hot desert, but that is incorrect. As the winds rise in elevation over the mountains, the air cannot hold the moisture vapor commonly known as humidity, which dries out the winds. Remember when you went snow skiing and your lips cracked from the dry air? Now comes the very interesting part of this equation. The winds heat up when the air molecules come smashing (for lack of a better word) down this side of the mountain, and the smashing creates friction that generates the heat. Have you seen pictures of a space capsule or the space shuttle reentering the atmosphere with heat shields ablaze? This is from the friction with the air molecules. The winds continue on their path to find the low pressure. But will the winds make it to the low center? Probably not. Now, cold winds can blow over our area when the original winds are very, very cold from a winter cold front moving across the Great Basin. In this situation, there is simply not enough "fall" to smash the molecules together to create enough friction to heat the winds. Plus, the actual location of the high and low pressures can be slightly different. The analogy would be turning on the cold water in a hot bath.
Don't like to read? According to a study completed on July 28th, 2020, a specific enzyme could be the cause of body odor (BO). If proven correct, researchers hope specialized treatments will be produced for severe cases. However, this may take years of research in order to create an effective solution.
Body Odor Causing Enzyme
While conducting a study at the University of York, a team discovered this enzyme. They noted that though people use antiperspirants to eliminate their BO, some still struggle with it. In severe cases, this can lead to self-esteem issues. Sometimes people even opt to have their sweat glands removed. Before this study was released, not showering, excessive sweating, and genetic conditions were thought to be the leading causes of body odor. However, the BO enzyme has allowed scientists to explore another explanation. They were able to use the enzyme to identify the bacteria that create odor molecules. According to researchers, this is a massive development as it will aid in understanding human bodies better. Although this may seem meaningless, this study could be beneficial to many people around the world. In severe cases, as mentioned before, people develop low self-esteem and sometimes even have their glands removed. Considering surgery can be very expensive, this could save someone thousands of dollars. Not to mention, low self-esteem often leads to isolation, so this study could also boost someone's confidence. Due to this, looking into this enzyme sounds like a good idea.
How Can An Enzyme Lead To Body Odor?
Also, scientists believe the enzyme will allow them to develop a targeted inhibitor. As a result, they will be able to halt BO production without interrupting the armpit microbiome. Though armpits house several types of bacteria, researchers found that the BO enzyme existed in one specific type of bacteria. The exciting thing about this specific enzyme is that it existed before Homo sapiens evolved. This suggests to scientists that BO may play a more significant role in life than previously thought. According to Dr. Gordon James, "this research was a real eye-opener." Scientists say that to ward off BO, people should wash their problem areas at least twice a day with soap. Shaving under the armpits, washing clothes, and wearing natural fabrics can also aid in warding off BO. Though people already practice these habits, it is always essential to restate the information. Due to the lack of specificity in the study, it is hard to believe that the data is reliable. However, other reports suggest that this information is, in fact, accurate. Interestingly, the study has not garnered much recognition, considering this could be a scientific breakthrough. Scientific coverage seems to be concentrated on the coronavirus lately, which is understandable. However, there is something to be said about the news focusing all attention on the pandemic. Most people would agree that this seems like a fear-mongering tactic. Undoubtedly, the coronavirus is to be taken seriously. However, the news should showcase both good and bad stories alongside scientific studies.
The Specific Molecules Involved
According to The Irish News, Staphylococcus hominis is the main microbe behind the body odor. After scientists transferred the enzyme to a non-odor-producing bacterium, it began to create a smell. This is the same process that occurs when people start to sweat under their armpits. It seems that over time, as humans began to evolve, so did body odor.
Hopefully, once scientists find a way to eliminate this enzyme, body odor will be a thing of the past. Though it is worth noting the study only tells the public how BO is produced, scientists have not yet found a way to remove the enzyme. Typically, experiments take a while before they are successful. The study could take up anywhere from five months to 10 years. The best thing to do at the moment is to be patient and hope for the best. Although finding time to catch up on this study would not hurt. Especially, considering how chaotic the world is, this study could be a good break from reality. Though as stated before, it should be taken with a grain of salt. Overall, however, the study gives a somewhat detailed description of everything and even manages to tie in a bit of history. Interestingly, one tiny enzyme is the cause for years of some people’s pain. Opinion by Reginae Echols Edited by Cathy Milne-Ware Yahoo! Life: True cause of body odour identified by scientists Shropshire Star: Scientists identify what causes body odour The Irish News: Scientists identify what causes body odour Featured Image Courtesy of sandwich’ Flickr Page- Creative Commons License Inline Image Courtesy of Clare_and_Ben’s Flickr Page- Creative Commons License
Earthquakes rattle the ground while cannon-like explosions shoot burning rock and ash into the air at speeds upwards of 600 miles per hour. The burning rocks crash to the earth, setting the grass and trees ablaze, while the ash rises 100,000 feet into the sky, blocking out the sun. Once the darkness sets in, all that is heard is a rumble. Like a sandstorm, a cloud of scorching gas and rock sweeps across the ground. Temperatures inside the storm reach up to 2,000 degrees. Everything in its path is incinerated. Carried by the wind, high-altitude ash spreads across the country before falling like snow to coat the land. No, this isn’t the opening scene of a new Michael Bay movie. It’s a description, courtesy of UB volcanologist Greg Valentine, of what likely occurred during a prehistoric eruption of Yellowstone Caldera, the supervolcano that lies beneath Yellowstone National Park. The destruction caused by the eruption of Mount St. Helens in 1980 (57 deaths and $1 billion in damages) pales in comparison to what would ensue if a supervolcano erupted today. These behemoths have the power to spew more than 250 cubic miles of ash, magma and rock. Indeed, a supervolcano could launch the planet into a new ice age. What’s more, there are many potentially active supervolcanoes across the Earth’s surface, from Bolivia to New Zealand, including three in the United States alone: Long Valley Caldera in California, Valles Caldera in New Mexico and Wyoming’s Yellowstone Caldera. Now for the good news: The last supervolcano eruption was more than 76,000 years ago. The most recent Yellowstone Caldera eruption—it has blown its top three times—was the roughly 630,000-year-old explosion that formed Lava Creek Tuff (see graphic below). “The bigger the eruption, the less frequent it is,” says Valentine, who also leads the UB Center for Geohazards Studies. “People have heard about supervolcanoes because of things like the Discovery Channel, but such an eruption hasn’t actually happened in human memory.” When most people picture a volcano, they imagine a large, lava-spewing hill, like the kind found throughout the Hawaiian Islands. Known as shield volcanoes, these can reach heights of more than 30,000 feet above the sea floor, but are not particularly explosive. Smaller in size, but much more explosive than shield volcanoes, are stratovolcanoes. These cone-shaped mountains rarely reach higher than 8,000 feet, but what they lack in size, they make up for with power. In contrast to their tamer cousin, stratovolcanoes erupt wildly, hurling molten rock in all directions and billowing clouds of ash into the air. They also spew lava, but it tends to spread slowly, like toothpaste being squeezed from a tube. Their preferred tool of destruction is a dense cloud of hot gas and rock known as a pyroclastic flow. The pyroclastic flows of Mount Vesuvius in A.D. 79 were responsible for the destruction of the ancient Roman cities of Pompeii and Herculaneum. Then there are the supervolcanoes. Like their smaller siblings, supervolcanoes release ash, magma and rock when they erupt; they just do so on a massive scale. To put it in perspective: The eruptions of the lava-oozing shield volcanoes in Hawaii are generally rated either a 0 or 1 on the Volcanic Explosivity Index, a tool used to measure the size of eruptions. The eruptions of Mount Vesuvius and Mount St. Helens (both stratovolcanoes) were each rated a 5. The eruptions of the world’s supervolcanoes typically rate an 8, which is 1,000 times more powerful than a 5. 
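A quick sanity check of that scale comparison, assuming the common rule of thumb that each step on the Volcanic Explosivity Index corresponds to roughly a tenfold increase in erupted volume (the thresholds below are approximate, and the index also weighs plume height and other observations):

```python
def vei_volume_ratio(vei_a, vei_b):
    """Approximate ratio of erupted volumes between two VEI values,
    using the rule of thumb of ~10x more material per VEI step."""
    return 10 ** (vei_a - vei_b)

print(vei_volume_ratio(8, 5))          # -> 1000, matching "1,000 times more powerful"

CUBIC_MILE_IN_KM3 = 1.609344 ** 3      # ~4.17 km^3 per cubic mile
print(f"250 cubic miles ~= {250 * CUBIC_MILE_IN_KM3:.0f} km^3 of ejecta")
# -> roughly 1,000 km^3, the ballpark volume usually quoted for a VEI 8 event
```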
Moreover, supervolcano eruptions can last for days. As such, they’re capable of spreading ash 1,000 miles from the site of the explosion and demolishing all life within 100 miles with pyroclastic flows that travel between 5 to 70 miles per hour. By now you’re probably wondering why you’ve never seen a supervolcano. Shouldn’t they be rising up like giant beasts over the horizon? Well, no, because they don’t rise up at all. Supervolcanoes are actually calderas, or bowl-shaped craters that form after a volcano explodes and then collapses into its own magma chamber. Just to make things more complicated: While all supervolcanoes are calderas, all calderas are not supervolcanoes. Supervolcano calderas are mammoth; they can reach lengths of 55 miles and range in depth from 300 to 5,000 feet. In many ways, the supervolcano is the anti-volcano, says Valentine. If you’re not careful while driving near one, he adds, falling in is entirely plausible. Scientists have scratched their heads for centuries over three simple questions related to volcanoes: 1) what causes them to erupt 2) how does magma—a near solid—move so quickly through the ground, and 3) how does the aftermath of an eruption play out. As for the first question, we know that volcanoes need to build up magma before they explode. That’s why supervolcanoes erupt so infrequently; they must hibernate for tens of thousands of years, hoarding magma, to spur an eruption. But volcanoes don’t simply erupt when their magma chamber is full—and therein lies the mystery. The tipping point has perplexed volcanologists for years. “Why didn’t it erupt when it had 20 cubic miles? Why did it wait until it had 200 cubic miles? What’s the trigger?” asks Valentine. Magma’s ability to flow like water is even more puzzling. Every substance has a viscosity, or resistance to flow. Water, which flows easily, has a low viscosity, while peanut butter rests on the higher end of the scale. Magma, despite being a liquid, behaves like a solid, with a viscosity 1,000 times greater than the crunchiest brand of peanut butter. How magma behaves once it reaches the surface, and (in the case of stratovolcanoes and supervolcanoes) becomes a pyroclastic flow is the greatest mystery of all. How soon after the eruption the dense cloud of gas and rock falls to the ground, how fast it moves and how far it spreads are all topics of active research. Answering these questions when it comes to supervolcanoes is especially difficult, as the best way to study a volcano is to see one in action. Unable to wait thousands of years for the next supervolcano eruption, volcanologists use the next best source of information: the deposits left from previous ones. Valentine was involved in a recent study using this approach to tackle the third mystery, regarding pyroclastic flows. Focusing on the extinct Silver Creek Caldera in Arizona, the research team discovered that the pyroclastic flows from a nearly 19-million-year-old supereruption traveled at the modest speeds of 10 to 20 miles per hour, much slower than what scientists originally believed. Their surprising results back up a theory that the pyroclastic flows that poured from this caldera were a dense, fluid-like cloud of pressurized gas rather than a swift but airy sandstorm. Understanding how quickly pyroclastic flows move can help volcanologists do a better job of forecasting a volcano’s behavior when it erupts, says Valentine—which is a critical component of disaster preparedness. 
With all of the uncertainty surrounding volcanoes, it’s a wonder volcanologists are able to predict an eruption at all. And yet, they do just that; the U.S. Geological Survey keeps a close watch on all 169 active volcanoes in the country, sometimes spotting early signs of an eruption months in advance. Much like weather forecasting, volcano forecasting involves studying past and present behavior and using computers to predict how a volcano may act in the future. As above, the secrets behind how a volcano erupted in the past lie in the deposits left after an eruption. The rock can reveal how often a volcano erupts, when the eruptions occurred, how much magma was released and more. Present behavior is monitored using various gadgets that do everything from tracking changes in pressure of the magma chamber beneath a volcano to measuring whether the ground begins to swell and detecting even the slightest earthquake. These various data points are entered into computer simulations that predict when and how a volcano might wake up. Volcanologists can typically spot red flags well in advance of an explosion. But forecasts aren’t a sure thing and accuracy can depend on the type of volcano, the number of instruments being used to monitor it and how far into the future a forecast is desired. “If you want us to forecast what a volcano is going to do tomorrow, we can do that very accurately,” says Valentine. “If you want to know what it’s going to do in 20 years, we can say something, but there’s a lot of uncertainty associated with it.” At the same time, warning signs aren’t a guarantee of an eruption, and there have been false alarms in the past. This can make a call for evacuation difficult, says Valentine, yet he doesn’t advise against visiting or even living near a supervolcano. Most of them are safe, at least when they aren’t erupting. And Valentine walks the talk—his last home was built near the Valles Caldera. Ask him what does worry him? “Human-caused climate change,” he says. “It’s happening as we speak and could radically change life on Earth.” Marcene Robinson is news content manager for UB’s Division of Communications. The havoc wreaked by the eruption of Mount St. Helens is miniscule when compared to Yellowstone’s most recent catastrophic explosion. Volcano: A rupture in the ground that allows magma to escape to the surface Magma: Rock beneath the Earth’s surface that is so hot it has melted Lava: Magma that has reached the surface Ash: Tiny pieces of solidified magma that erupt from a volcano Pyroclastic Flow: A fast-moving cloud of gas and rock that moves along the ground during an eruption Magma Chamber: An underground pool of molten rock. When full, the magma forces its way to the surface, causing an eruption Vent: The hole from which gasses and magma flow out of a volcano Fissure: A crack in the Earth’s surface. Magma may erupt from here as well Fault: A crack in the Earth’s crust along which the ground shifts, creating volcanoes, earthquakes and mountains Hotspot: An unusually hot part of the Earth’s mantle. Causes large amounts of magma to build up in the ground Active: A volcano that erupts regularly Dormant: A volcano that hasn’t erupted for years, or even centuries, but will in the future Extinct: A volcano that is long gone, a result of the magma chamber solidifying Volcanic Field: A cluster of volcanoes Tuff: A soft rock made of the volcanic ash emitted from an eruption If you find yourself near an exploding volcano, you’re probably doomed. But maybe not! 
We asked Sonja Melander (MS ’12), science education coordinator at Mount St. Helens Institute in Amboy, Wash.—and former student of UB volcanologist Greg Valentine—what to do if it happens to you. 1. Be Prepared with a Plan A disaster kit is useful in any catastrophe, including volcanic eruptions. Make sure to stow food, water, blankets and a radio for listening to emergency messages from public officials. If you are told to evacuate, leave immediately. 2. Look Down and Head Up Lava is slow moving, so as long as you don’t put your foot in it, you should be OK. Traveling faster than lava are lahars, boiling rivers of concrete created when ash and rock mix with water. Lahars can be avoided if you stay above them, so seek high ground. 3. Cover Your Mouth Ash can spread for miles beyond the site of an eruption, and will coat everything in sight, including your lungs. Use a damp cloth to shield your mouth and nose. 4. Break Out a Broom The ash that falls after an eruption will look like snow. It’s not. If not promptly swept up, these bits of rock can collapse roofs, set like concrete over roads and wash into drains, wreaking havoc on sewage systems.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Amanda Lockeridge, State Program Manager for Munch & Move at NSW Health, writes about the importance of healthy eating and physical activity for young children. One in four Australian children are overweight or obese. Causes of obesity in children include unhealthy food choices and lack of physical activity. We know that good nutrition and physical activity for young children are vital to support healthy growth and development, to prevent illness and to provide the energy children need to power through their day. It is also important to lay the foundation for a healthy and active lifestyle from a young age. As many children spend significant amounts of time in early childhood education and care services, these services provide an ideal setting to promote and foster appropriate healthy eating and physical activity habits early in life. So how do we support children to learn about the importance of healthy eating and physical activity? “We can make endless plans, but the true magic of teaching and learning comes from spontaneous, genuine and thoughtful interactions, provisions and relationships with the children,” said Jennifer Wood, Early Childhood Training and Resource Centre (ECTARC) Munch & Move Trainer. “Promoting a play-based, child-centred environment encourages children to create, explore, practice and interact with materials, equipment, peers and adults.” The National Quality Framework acknowledges the importance of children’s nutritional and physical health needs and that learning about healthy lifestyles should underpin services’ everyday routines and experiences. This is supported through Quality Area 2 – Children’s health and safety, Standard 2.2 – Healthy eating and physical activity are embedded in the program for children, and the Early Years Learning Framework and Framework for School Age Care, Learning Outcome 3 -Children have a strong sense of wellbeing. Ideas on implementing Quality Area 2 Element 2.2.1 – Healthy eating is promoted and food and drinks provided by the service are nutritious and appropriate for each child. - Have a nutrition policy (for food provided by the service and/or the family in the lunchbox). Involve children, families and other agencies (such as Munch and Move) in developing the policy. - If the service provides food, display a weekly menu. - If families provide the food, make available some suggestions about healthy food options. - Food and drinks provided by the service should be consistent with the recommended guidelines for education and care services in Australia, e.g. the Get Up & Grow Guidelines and/or the Australian Dietary Guidelines. - Discuss healthy eating and fruit and vegetables with the children at mealtimes, offering a range of foods from different cultures. - Involve children in activities that focus on nutrition throughout the educational program. Some activities include setting up the lunch area as a restaurant, creating a vegetable garden, implementing cooking experiences, creating a healthy lunch book that includes recipes, sharing food photos and children’s conversations, using photos to encourage the drinking of water and promotion of fruit and vegetables. Element 2.2.2 – Physical activity is promoted through planned and spontaneous experiences and is appropriate for each child. - Maintain a balance between spontaneous and planned physical activity, and passive and active experiences. - Encourage each child to participate in physical activities according to their interests, skills, abilities and their level of comfort. 
- Talk to children about how their bodies work and the importance of physical activity for health and wellbeing. - Encourage and participate in children’s physical activity. There are other important links that can be made with: - Standard 3.2 – encourage and support children to participate in new or unfamiliar physical experiences and encourage children to use a range of equipment and resources to engage in energetic experiences. - Element 5.1.1 – provide children with relaxed, unhurried mealtimes during which educators sit and talk with children and role model healthy eating practices. - Element 6.2.2 – communicate with families about healthy eating, by providing information through newsletter snippets, fact sheets, photos, emails and face to face discussions. - Element 7.3.5 – develop a physical activity policy. Lisa Booth, Director at Wallaroo Children’s Centre in NSW, recognises the importance of encouraging healthy eating and physical activity. “We encourage and support children by providing nutritious meals and a water station that the children can access,” Lisa said. “Physical activity and healthy eating are embedded in all areas of the curriculum. Educators understand the importance of promoting children’s health and well-being through both planned and spontaneous experiences. “By using learning experiences such as music and movement, dramatic and creative play, outdoor activities and group games, the educators intentionally provide children with play-based experiences to support their learning.” There are a number of resources that support educators and services to promote and encourage healthy eating and physical activity through relevant learning experiences, resources and interactions. - Get Up & Grow - How to series – Promoting Healthy Eating In Education and Care Services - Munch & Move (New South Wales) - Achievement Program (Victoria) - Move Well Eat Well (Tasmania) - LEAPS – Learning Eating Active Play & Sleep (Queensland) - Right Bite Policy for schools and preschools (South Australia) - Kids at Play (ACT) - Department of Health (Northern Territory) - Contact Child Australia (Western Australia)
Noctilucent clouds are only visible when the sun is shining on them (at about 83 km altitude), and not on the lower atmosphere (when the sun is between 6 and 16 degrees below the horizon). They form in the polar mesopause – the coldest region of the Earth's atmosphere. The polar mesopause reaches temperatures as low as −140°C. Noctilucent clouds were first reported in 1885, when they were independently observed in Germany and Russia. This was two years after the volcanic explosion of Krakatoa in the Straits of Java. It was thought that this initial observation was due to the increased number of people watching the twilight skies. Observers were attracted by spectacular displays created by the globally distributed volcanic debris of Krakatoa. Another theory was that water vapour, injected into the upper atmosphere by the volcano, ultimately reached the cold, dry upper mesosphere, creating the clouds. Subsequent observations have proved that noctilucent clouds are not solely related to volcanic activity. In fact, their volcanic association is now scientifically contentious. It has been alternatively claimed that the appearance of noctilucent clouds is the earliest evidence of anthropogenic climate change. Noctilucent cloud observations from north-west Europe over the last 30 years show an increasing trend in the number of nights on which the clouds are observed each summer season, superimposed on a decadal variability that appears to be solar-cycle related. Competing explanations for this increase focus on excessive greenhouse cooling of the middle atmosphere, or increased water vapour linked to increased methane release associated principally with intensive farming activities. Noctilucent clouds have been observed thousands of times in the northern hemisphere, but fewer than 100 observations have been reported from the southern hemisphere. This could be due to inter-hemispheric differences (temperature and/or water vapour) in the atmosphere at these altitudes. Or the difference could be due to the lack of observers and poorer observing conditions in southern latitudes. This is a subject of Australian Antarctic Division study.
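A rough geometric sketch (my own illustration, not from the article) shows why the visibility window quoted above — sun between about 6 and 16 degrees below the horizon — fits clouds at roughly 83 km. It ignores atmospheric refraction and the screening of sunlight by the dense lower atmosphere, and treats the Earth's shadow boundary as a simple grazing ray.

```python
from math import acos, cos, degrees, radians

R_EARTH = 6371.0   # km
CLOUD_H = 83.0     # km, noctilucent cloud altitude quoted in the text

def shadow_height(depression_deg):
    """Height above an observer reached by a sunray grazing the Earth's limb,
    when the Sun sits depression_deg below that observer's horizon."""
    return R_EARTH * (1.0 / cos(radians(depression_deg)) - 1.0)

for theta in (6, 9, 12, 16):
    print(f"sun {theta:2d} deg below horizon -> sunlit only above ~{shadow_height(theta):3.0f} km")

theta_limit = degrees(acos(R_EARTH / (R_EARTH + CLOUD_H)))
print(f"clouds overhead at {CLOUD_H:.0f} km stay sunlit until ~{theta_limit:.1f} deg depression")
# At 6 deg the shadow boundary is only ~35 km up, so the lower atmosphere is dark
# while 83 km clouds overhead are still lit (until ~9 deg); clouds seen lower down
# toward the sunward horizon stay illuminated later, stretching the window toward 16 deg.
```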
Interleaving is a printing technique in which pages are inserted between other pages in a book. This is done for several reasons, including to prevent the ink from smudging, to prevent the pages from sticking together, and to add strength to the binding. An interleaved book is a book that has had its pages cut or perforated so that they can be easily removed and replaced. This allows the book to be used as a notebook, with the pages being used to write notes on. The interleaving also protects the pages from wear and tear. Interleaved books were first introduced in the early 19th century, and their popularity increased during the Victorian era. Many interleaved books were produced as gifts, with the pages being left blank so that the recipient could write their own notes in it. The practice of interleaving books began to decline in the early 20th century, as books became less expensive and easier to replace. However, there has been a recent resurgence in the popularity of interleaved books, as people have begun to value the ability to write notes in their books without damaging the pages. There are a few different ways to interleave a book. The most common method is to cut or perforate the pages so that they can be easily removed. This can be done by hand or with a machine. Another method is to bind the pages together with a ribbon or string. This method is less common, as it is more time-consuming and can be difficult. Interleaving is an important part of the printing process, and it is essential for ensuring that books are of the highest quality. Without interleaving, books would be more likely to fall apart, and the ink would be more likely to smudge. This printing technique is one of the many ways that printers ensure that books are able to withstand the test of time.
Follow these tips to help reduce the risk of fire in your home! Correct Electrical hazards such as: - Electrical cords under rugs, in walking pathways or pinched behind furniture - Overloaded outlets - laptops and phones charging on beds or sofas - use extension cords properly - unplug appliances by grasping the plug - use light bulbs with the correct wattage Give space heaters space - Keep them at least 3 feet (1 meter) away from anything that can burn— including you. Shut off heaters when you leave or go to bed. Stay in the kitchen when cooking - Never leave cooking unattended. Wear form-fitting or short sleeves when cooking - If a pan of food catches fire, slide a lid over it and turn off the burner. - Don't cook if you are drowsy from alcohol or medication. Stop, drop, and roll if your clothes catch on fire - Don't run. - Drop gently to the ground, and cover your face with your hands. - Roll over and over to put out the fire. If burned, use cool water for 3–5 minutes to cool the burn. Get medical help. Smoke alarms save lives - Have smoke alarms installed on every level of your home, inside each bedroom, and outside each sleeping area. - For the best protection, use interconnected alarms. - Make sure everyone in your home can hear the smoke alarms. - Test the alarms monthly. If you smoke, smoke outside - Provide smokers with large, deep, sturdy ashtrays. - Wet cigarette butts before throwing them out or bury them in sand. - Never smoke in bed or if oxygen is used in the home. Plan and practice your escape from fire and smoke - Have two different ways out of every room. - Make sure you can open all windows and doors in the plan. - In a fire, get outside quickly. If there is smoke, stay low and go. - Once outside, call the fire department. Wait for help outside. Know your local emergency number - Ask if it is 9-1-1 or a different number. - Have a telephone near your bed in case you are trapped by smoke and fire. Plan your escape around your abilities - Determine if anyone in the home will need assistance to get out safely. - Practice the plan twice a year both during the day and night. - Have necessary items near your bed, such as glasses, your walker, or your cane.
Where do rocks come from? Big rocks, little rocks? Professor Alan Collins answers this Curious Kids' question about the origin of rocks. Where do rocks come from?Claire, age 5, Perth, WA Wow, Claire, what a great question. As strange as it sounds, rocks are made from stardust; dust blasted out and made from exploding stars. In fact, our corner of space has many rocks floating around in it. From really fine dust, to pebbles, boulders and house-sized rocks that can burn up in the night sky to make meteors or “shooting stars”. The Moon and our local planets – Mars, Venus and Mercury – are just the largest rocks floating around our part of space. These are all made from space dust stuck together over billions of years. The ‘light’ rocks are on the Earth’s surface Planet Earth is a rock too, but so much has happened since it was formed from dust and small rocks that smashed and stuck together 4.543 billion years ago. As the space dust hit each other to make the earth, it got super hot and melted. The Earth was, at that time, a spinning ball of red-hot lava flying through space. In this melted lava planet, heavy bits of the earth sank and the light frothy bits gathered on the surface. Have you ever looked closely at a glass of milky coffee at a cafe? The dark heavy coffee is at the bottom, whereas the light, frothy milk sits on the top. Well, our planet was a bit like that coffee billions of years ago. We don’t see the really heavy rocks these days because they sank deep in the planet very early on. The rocks we see on the surface are like the frothy milk! They were light and rose to the top. Then, as time moved on, the planet cooled and froze to become the solid earth we have now. I know most rocks are heavy. But in fact some rocks – even really big ones like Uluru – are actually much lighter than the rocks found in the deep Earth. Lava and plates Those rocks on the Earth’s surface actually move around. Large chunks the size of continents (called “plates”) jostle each other and this can cause earthquakes. Some of them get forced under other plates and heat up and eventually melt. This forms more lava. The lava erupts from volcanoes, then cools and forms new rocks. These are some pictures of lava in the melted state and then after it has cooled down. Mountains and gems are also rocks Mountains form where two plates smash into each other. The rocks that get caught between two of the Earth’s plates get squashed under huge pressures and heat up. These can form really beautiful rocks. Sometimes gems form in these rocks and people try to find them to make jewellery. Rain and ice break up the rocks in mountains. These form sand and mud that get washed out to form beaches, rivers and swamps. This sand and mud can get buried, squashed and heated, which eventually turns them into rocks. Rocks contain a record of the history of our planet; what is has been through and what is capable of. We are only just learning how to read it. So, next time you see a rock, just think what an incredible story it contains. About this article - Author: Professor Alan Collins - Main image: Rocks contain a layer-by-layer record of the history of our planet. Fred Moore/flickr, CC BY-NC Curious Kids is a series for children. If you have a question you’d like an expert to answer, send it to [email protected]. You might also like the podcast Imagine This, a co-production between ABC KIDS listen and The Conversation, based on Curious Kids. 
How do we feed the world’s growing population? How do we save our wildlife from extinction? Got an idea that will build a brighter, greener world? Australian high school students are invited to submit a short video about one of Australia’s big science challenges.
Platinum is a chemical element with the chemical symbol Pt. The atomic number of platinum is 78. It is a malleable, dense, highly unreactive, ductile, precious, silvery-white transition metal. Its name is derived from the Spanish term platina, which means "little silver". It is a member of group 10 of the periodic table of elements. Platinum has six naturally occurring isotopes. It is one of the rarest elements found in Earth's crust. It occurs in some copper and nickel ores, along with some native deposits, mainly in South Africa, which accounts for about 80% of world production. It is one of the least reactive metals. Platinum has remarkable resistance to corrosion, even at high temperatures, which is why it is considered a noble metal. Consequently, it is often found chemically uncombined as native platinum, occurring naturally in the alluvial sands of many rivers. Today we will share information about the electron configuration of platinum.
What is the Electron Configuration of Platinum
The ground-state electron configuration of platinum is [Xe] 4f14 5d9 6s1 (written out in full: 1s2 2s2 2p6 3s2 3p6 3d10 4s2 4p6 4d10 4f14 5s2 5p6 5d9 6s1).
How Many Valence Electrons does Platinum have
Platinum has six valence electrons in its outer shell.
Platinum Number of Valence Electrons
There are six valence electrons in the outer shell of platinum.
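As a small illustration of where the configuration [Xe] 4f14 5d9 6s1 comes from, here is a sketch (my own, not from the article) of the textbook Madelung (aufbau) filling order. Platinum is one of the well-known exceptions to the naive rule: simple filling for Z = 78 ends in 6s2 5d8, whereas the measured ground state is [Xe] 4f14 5d9 6s1, with one 6s electron promoted into the 5d subshell.

```python
ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d",
         "5p", "6s", "4f", "5d", "6p", "7s", "5f", "6d", "7p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def naive_aufbau(z):
    """Fill subshells in Madelung order until z electrons have been placed."""
    remaining, filled = z, []
    for subshell in ORDER:
        if remaining == 0:
            break
        electrons = min(CAPACITY[subshell[-1]], remaining)
        filled.append(f"{subshell}{electrons}")
        remaining -= electrons
    return " ".join(filled)

print(naive_aufbau(78))
# -> 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d8
# Real platinum instead adopts [Xe] 4f14 5d9 6s1 (a d9 s1 exception to the rule).
```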
Blood groups were first identified in 1900 by Karl Landsteiner at the University of Vienna, to ascertain why deaths occurred after blood transfusions. The blood groups most widely known are A, B, AB and O. There are two antigens (an antigen is a substance that an antibody fixes to); one type of antigen attaches to one type of antibody, much like a lock and key. These two antigens and their antibodies identify the A, B and O blood groups. The antibodies are called anti-A and anti-B, and the antigens form on the surface of the red blood cells. Type A blood cells have the A antigen attached, and the body does not produce anti-A antibodies. The reason for this is that if anti-A antibodies were present, they would attach to and destroy the person's own blood cells. However, a type A person does carry anti-B antibodies, so should type B blood be transfused, these antibodies attach to the antigens on the B cells and destroy them. The blood cells start to clump together, which can cause a blockage in the blood vessel. This is called "agglutination". Type O blood cells do not carry A or B antigens, which makes this group acceptable to recipients of any blood group; it is therefore known as the "universal donor". Type AB blood has both the A and B antigens but neither anti-A nor anti-B antibodies, and can therefore receive blood from all groups safely. AB blood groups are known as the "universal recipient".
There are many other antigens on the red blood cell. The "Rhesus" antigen is another important factor. It was named after the Rhesus monkey, because the antigen was identified during research in which rabbits were injected with blood from Rhesus monkeys. Not all blood has the antigen. Blood that does have the antigen is defined as Rh+ and blood that does not is defined as Rh−. Rh− blood does not already contain Rh antibodies, but should Rh+ blood come into contact with Rh− blood, the Rh− recipient then starts to produce anti-Rh antibodies. This does not cause too much of a problem in the first instance, as the process of producing the antibodies takes almost a week and the donated blood cells would have died by then. The major problem occurs should the Rh− recipient receive a further dose of Rh+ blood, as this causes the reaction much more quickly due to the presence of the Rh antibodies already in circulation. This causes agglutination and can be fatal. This is especially serious in pregnancy, should a mother who is Rh− carry a foetus that is Rh+. The mother is exposed to Rh+ blood from the foetus and then starts to produce Rh antibodies. These antibodies are then transferred back to the foetus via the placenta and into the foetus's circulation. In the first child, this is not generally a problem, as the antibodies will not have been produced in sufficient numbers to do any damage. The huge issue is any subsequent pregnancy: if a following foetus is Rh+, the anti-Rh antibodies from the mother will transfer across to the unborn foetus, causing mass destruction of blood cells. This condition is known as "haemolytic disease of the newborn".
TASK 7 – (ASSESSMENT CRITERIA 4) Explain the structure of the heart.
Label the diagram of the heart – Worksheet 1 below – and explain its structure.
TASK 8 – (ASSESSMENT CRITERIA 5.1) Explain the function of the heart
Concisely describe the function of the heart.
The heart is the most vital organ in the body. Its function is to "pump" blood through the circulatory system via arteries and veins. It has what is known as a "double circulatory" system.
The first system pumps blood to and from the lungs to expel waste gases (CO2) and other waste products and to collect the vital oxygen needed to sustain life and to support the provision of nutrients that are essential for growth and repair. The second system pumps blood to the whole body. Oxygenated blood is pumped into the left atrium via the pulmonary veins from the lungs. The flow is controlled by the mitral valve as it passes through to the left ventricle, where contraction of the thick-walled heart muscle pushes the oxygenated blood at high pressure through the aorta and into the body. Deoxygenated blood is returned to the heart via the inferior and superior vena cava into the right atrium, where it is passed through the tricuspid valve into the right ventricle. Here, at low pressure, it is pumped through the pulmonary artery back into the lungs to expel the waste and to collect oxygen to be pumped around the body again. This function takes place continuously, and the heart pumps approximately 7,200 litres per 24 hours (based on an average heart rate of 72 bpm and a stroke volume of roughly 70 ml per beat: about 5 litres per minute, or roughly 7,200 litres per day).
TASK 9 – (ASSESSMENT CRITERIA 5.2) Explain coronary circulation
The heart muscle (myocardium) requires a blood supply to enable it to function correctly. This supply provides the heart with the oxygen it requires, along with the removal of waste products (CO2 etc.). Coronary circulation is the supply of this blood to the myocardium. The oxygenated blood is provided by the coronary arteries, which are either epicardial (running along the surface of the heart) or subendocardial (running deep within the myocardium). Epicardial arteries are self-regulating, providing a constant level of supply to the heart muscle. They are very narrow and prone to blockage, which can cause serious heart damage such as angina or a heart attack, because these arteries are the only source of blood to the myocardium. Deoxygenated blood is removed by the cardiac veins.
TASK 10 – (ASSESSMENT CRITERIA 6.1, 6.2 and 6.3) Describe the major arteries and veins of the circulation system. Explain the relationship between the structure and functions of arteries, veins and capillaries. Explain the ways in which the body controls blood vessel size and blood flow.
Complete Blood Vessels sheet – class-based activity.
Draw up a table to show how the structure and functions of arteries, veins and capillaries differ.
In bullet point format, explain the ways in which blood vessel size and blood flow are controlled.
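For the final TASK 10 point on how the body controls blood vessel size and blood flow, a short illustrative sketch (not part of the coursework brief) of Poiseuille's law shows why even small changes in vessel radius, produced by vasoconstriction and vasodilation, have an outsized effect on flow. The values are in arbitrary units and the formula assumes steady, laminar flow.

```python
from math import pi

def poiseuille_flow(radius, delta_p=1.0, viscosity=1.0, length=1.0):
    """Volumetric flow rate Q = pi * r^4 * dP / (8 * mu * L) for laminar flow."""
    return pi * radius ** 4 * delta_p / (8 * viscosity * length)

baseline = poiseuille_flow(radius=1.0)
dilated = poiseuille_flow(radius=1.2)      # vessel dilated by 20%
constricted = poiseuille_flow(radius=0.8)  # vessel constricted by 20%

print(f"20% wider vessel carries {dilated / baseline:.2f}x the flow")        # ~2.07x
print(f"20% narrower vessel carries {constricted / baseline:.2f}x the flow") # ~0.41x
```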
Waves generated through a gravitational field. The prediction that an accelerating mass will radiate gravitational waves (and lose energy) comes from the general theory of relativity. Many attempts have been made to detect waves from space directly using large metal detectors. The theory suggests that a pulse of gravitational radiation (as from a supernova explosion or black hole) causes the detector to vibrate, and the disturbance is detected by a transducer. The interaction is very weak and extreme care is required to avoid external disturbances and the effects of thermal noise in the detecting system. So far, no accepted direct observations have been made. However, indirect evidence of gravitational waves has come from observations of a pulsar in a binary system with another star. We know that gravity bends or distorts space/time and light by virtue of the fact that we're able to see stars which we know should be blocked from our view by the sun. We've used radio and optical telescopes to map stars and other celestial bodies during the course of our yearly orbit around the sun, so we know where these celestial bodies should be. When the sun is between us and a star, many times we can still see the star as though it were in a different position. We know that gravity distorts time by virtue of the fact that if we take two devices which measure minute variations in time, and we keep one at sea level and take the other to a high altitude, when we recompare them, they reflect different times. The difference in this passage of time is caused by the fact that a gravitational field weakens the further you get from the source, and of course in this instance, the source of the gravitational field is the earth. So the one device which was taken to the high altitude was exposed to a less powerful gravitational field than the device which we kept at sea level. One device used to make measurements like this is an atomic clock and the most recent atomic clock is supposed to not vary more than 1 second in every 1 million years. And, up until this point in time, great mass such as a star, planet, or moon was the only source of a discernible gravitational field that we were aware of. So, just as the gravitational field around great mass, such as a planet, distorts space and time, any gravitational field, whether naturally occurring or generated, distorts space and time in a similar manner. Up until this point in time, the term generate has been used to describe the capability of producing a gravitational field, but since there is no known way of creating a gravitational field from nothing, a more accurate term might be to access and amplify a gravitational field. To understand how gravity is generated or accessed and amplified, you must first know what gravity is. Gravity is a wave. Not a particle that acts like a wave, but a real wave. As well as the binder of space-time. The fact that gravity is a wave has caused mainstream scientists to surmise numerous sub-atomic particles which don't actually exist and this has caused great complexity and confusion in the study of particle physics. Gravity is a wave and there are two different types of gravity. Gravity A and gravity B. Gravity A works on a small or micro scale and gravity B works on a larger or macro scale.
We're familiar with gravity B, it is the big gravity wave that holds the earth, as well as the rest of the planets in orbit around the sun and holds the moon, as well as man-made satellites, in orbit around the earth. We're not familiar with gravity A. It is the small gravity wave which is the major contributory force that holds together the mass that makes up all protons and neutrons. You must have at least an atom of a substance for it to be considered matter. You must have at least a proton and an electron and in most cases a neutron to be considered matter. Anything short of an atom such as the upquarks and downquarks which make up protons and neutrons; or protons, neutrons, or electrons, individually are considered to be mass and do not constitute matter until they form an atom. Thats why its said that gravity A holds together the mass or the "stuff" that makes up protons and neutrons. Once an atom is formed, the electromagnetic force is also a substantial factor. Gravity A is what is currently being labeled as the strong nuclear force in mainstream physics and gravity A is the wave that you need to access and amplify to enable you to cause the space/time distortion required for "practical" interstellar travel Locating gravity A is found in the nucleus of every atom of all matter here on earth and the universe. Accessing gravity A with the naturally occurring elements found on earth is a big problem. Remember that gravity A is the major force that holds together the mass that makes up protons and neutrons and other sub-atomic particles. This means the gravity A that we are trying to access is virtually inaccessible because it is located within matter we have here on earth. Our solar system has one star, which is our Sun. But the majority of solar systems in our Milky Way galaxy are binary and multiple star systems which have more than one sun. However, the earth is not representative of all matter within our universe. The two main factors which determine what residual matter remains after the creation of a solar system are the amount of electromagnetic energy and the amount of mass present during the solar system's creation. Many single star solar systems have stars that are so large that our Sun would appear to be a dwarf by comparison. Keeping all this in mind, it should be obvious that a large, single star system, binary star system, or multiple star system would have had more of the prerequisite mass and electromagnetic energy present during their creations. This makes it possible for these systems to possess elements which are not native to the earth. Scientists have long theorized that there are potential combinations of protons and neutrons which should provide stable elements with atomic numbers higher than any which appear on our periodic chart, though none of these superheavy elements occur naturally on earth. A "superheavy" element is any element with an atomic number over 110. Some elements heavier than uranium do occur on earth in trace amounts, but for the most part, we synthesize these heavier elements in laboratories. There are other elements that do not occur naturally on earth that a small group of the American government is experimenting with. It is called element 115 and it has two very unusual properties. SOURCE OF GRAVITY A-WAVE TO BE AMPLIFIED As well as the reaction due to transmutation. 
Technology advancements are often quantified and identified by the terminology "generation." Each time the product development process improves substantially, the result is deemed a new generation. With each new generation of computer, the motherboard and silicon footprint decrease while the speed, power and memory increase. Progression of Computers Computers have come a long way since the first generation, which used vacuum tubes for circuitry and magnetic drums for memory. The first generation of computers relied on machine language, the lowest-level programming language, to execute instructions for the user. These early computers required a lot of electricity to operate and also generated a lot of heat that was difficult to displace. The second generation replaced the vacuum tubes with transistors, which remain a primary component of microprocessors today. Transistors were invented in 1947 at Bell Laboratories. These devices were preferable to vacuum tubes, which emitted a significant amount of heat and slowed processing times. Transistors opened the door to faster processing; the latest microprocessors contain tens of millions of microscopic transistors. Without the transistor, we would not have the same level of computing power that we have today. Although the transistor was invented in 1947, it did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. The third generation of computers involved integrated circuits. These circuits are often referred to as semiconductors, because of the substrate used to fabricate the circuit. Semiconductors dramatically increased the speed and efficiency of the computer and also decreased its overall footprint. As semiconductor packages became smaller, designers produced smaller laptops and desktop computers. Minimalist designers and chiropractors rejoiced at the weight and size reduction. The fourth generation marked the production of computers as we know them today. Microprocessors were introduced in this generation of computers. Computer processing speeds increased exponentially as the "brain" of the computer mastered complex computations. This generation also allowed manufacturers to lower prices and make computers available to the common household, although they were still not as cost-effective as they are today. The fifth generation of computers added artificial intelligence to improve the speed and efficiency of advanced computations and graphic displays. Game playing, expert systems, natural language processing, neural networks and robotics were all capabilities of the fifth generation. Neural networks were particularly important in this generation: the computer could mimic actual neuron synapses in the human body, and these complex mathematical models were handled with ease. However, scientists still needed more computing power to accomplish advanced robotics and other language computations. The Sixth Generation of Computer Not only does the technology improve, but the price decreases as the technology improves. The sixth generation of computers provided consumers with the opportunity to have more power in a smaller footprint. The sixth generation also introduced voice recognition: improved technology allows the computer to take dictation and recognize words.
Computers have the ability to learn via a variety of advanced algorithms. The use of nanotechnology is a characteristic of sixth generation computers; it significantly increases the processing speed of the computer and helps consumers. Computers with multiple CPUs can perform sophisticated calculations and multitask; when a single CPU can perform multiple tasks at once, this is considered multi-tasking. When qubits, or quantum bits, are used to process calculations, the result is typically faster than with conventional computers. This technology works in conjunction with the computer's processor and memory. Complex languages such as English, Chinese, French and Spanish are easily processed with the use of qubits or quantum bits. Computers can now understand and interpret numerous languages with the new advanced technology available. This advancement will allow students and the disabled to speak commands into the computer without touching the physical device. Voice recognition is also helpful in laboratory clean rooms, surgical operating rooms and even customer service. Voice recognition will significantly enhance a scientist's ability to create new technology. Voice-controlled games and typing applications are easy with sixth generation applications. Avid gamers will view video games in incredible detail with life-like motion; parallel processing enables faster speeds for video games. As the semiconductor footprint becomes smaller through the use of nanotechnology, the user has more flexibility in the use of the computer. The sixth generation took advanced computing to a new level with voice recognition. Consumers can only imagine what the seventh generation of computers will bring, and will look forward to these new advancements as they develop.
Distortion is any change to the original signal by a system. In sound reproduction, distortion is considered a flaw since it reduces the accuracy of reproduction by generating frequencies that were not included in the original content. This colors the sound and can make the music sound impure, harsh, or muddy. It is also worth noting that distortion differs from noise in that it is dependent on (related to) the original signal, whereas noise is a random, external signal unrelated to the original and added to it. We perform harmonic and intermodulation distortion tests for TVs at 80 dB SPL and at Max SPL (i.e. with the TV set to maximum volume). It is important to have a TV that produces low amounts of harmonic and intermodulation distortion when clean and pure sound reproduction is desired, for example in critical listening applications or in situations where the sound reproduction is expected to be full-spectrum and at loud volumes, like when watching an action movie in a large room, since TVs generally produce more distortion under heavier loads. However, since moderate, and in some cases even high, amounts of harmonic and intermodulation distortion are not very audible to humans, most TVs should be considered good enough in this regard. Except for extreme cases, their THD and IMD performance shouldn't be a deciding factor. Audible levels of distortion tend to deteriorate the sound reproduction by making it muddy, colored, or harsh. Harmonic distortion is an overtone that is a whole-number multiple of a fundamental frequency. In other words, it is an unintended frequency which is an integer multiple of an intended frequency fed to the system. Total harmonic distortion (THD) is the ratio of the sum of the powers (RMS amplitudes) of all the harmonics to the power (RMS amplitude) of the fundamental frequency. Inharmonic distortion differs from harmonic distortion by being an overtone that is not an integer multiple of the fundamental frequency. For example, if the original signal is a 100 Hz sine wave, a system with second-order harmonic distortion would output a 200 Hz tone in addition to the original 100 Hz signal, and a system with inharmonic distortion would output a non-integer multiple, like 273 Hz, in addition to the input. The test signal for our THD measurements is the same as our frequency response test signal. It is a 16-bit/48 kHz 20-second sine wave swept at -6 dB FS (RMS) between 10 Hz and 22 kHz. The TV is placed on a table, as close as possible to the back wall, and is connected to the test PC through HDMI. The measurements are recorded using a calibrated Dayton Audio EMM-6 microphone, placed at the optimum viewing distance for each TV's size and connected to a Focusrite Scarlett 2i2 audio interface. The signal level is calibrated post-compensation (i.e. after being flattened by applying the target response) using a pink noise band-limited between 500 Hz and 2 kHz. The resulting sound pressure level is measured and calibrated with a GalaxyAudio CM-140 SPL meter, which is set to C-weighting and Slow. The measurements are performed at two different intensity levels: 80 dB SPL and Max SPL. Due to the sample rate of 48 kHz and the test signal being limited to 22 kHz, the THD results are capped at 10 kHz, since harmonics of higher frequencies would fall outside of the test bandwidth. The THD (total harmonic distortion) of the 80 dB SPL pass is calculated by our test software (Room EQ Wizard) as a percentage of the fundamental frequency's power and is exported for further calculations.
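To make the calculation described above concrete, here is a minimal Python sketch of how a THD figure can be derived from a recorded tone: the RMS sum of the harmonic amplitudes is divided by the amplitude of the fundamental. This is an illustration only, not the Room EQ Wizard implementation; the sample rate, windowing, and the tolerance band around each harmonic are assumptions of the example.

```python
import numpy as np

def thd_percent(signal, fs, fundamental_hz, max_harmonic=10):
    """Estimate total harmonic distortion (THD) of a recorded tone.

    THD is computed here as the RMS sum of the harmonic amplitudes divided
    by the amplitude of the fundamental, expressed as a percentage.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def peak_amplitude(target_hz):
        # Take the largest bin within +/-2% of the target frequency.
        band = (freqs > target_hz * 0.98) & (freqs < target_hz * 1.02)
        return spectrum[band].max() if band.any() else 0.0

    fund = peak_amplitude(fundamental_hz)
    harmonics = [peak_amplitude(k * fundamental_hz)
                 for k in range(2, max_harmonic + 1)
                 if k * fundamental_hz < fs / 2]
    return 100.0 * np.sqrt(np.sum(np.square(harmonics))) / fund

# Example: a 100 Hz tone with a little 2nd- and 3rd-harmonic distortion.
fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t) + 0.01 * np.sin(2 * np.pi * 200 * t) \
       + 0.005 * np.sin(2 * np.pi * 300 * t)
print(f"THD = {thd_percent(tone, fs, 100):.2f}%")   # roughly 1.1%
```

In a sweep-based test, the same ratio would be evaluated at each fundamental frequency before any weighting of the kind described next is applied.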
Since speakers tend to produce more distortion at lower frequencies and human hearing is less sensitive to low-frequency harmonic distortion, a perceptual weighting filter is applied to the THD calculations which gives as much as 20x less weight to the lower frequencies compared to the higher frequencies. The final THD value is derived by calculating the variance of the weighted THD response. The THD (total harmonic distortion) of the Max SPL pass is measured in the same way as the 80 dB SPL pass, but at the maximum volume setting of the television. Intermodulation distortion (IMD) differs from THD in that it is produced when the device is excited by two frequencies rather than one. The resulting distortion frequencies occur at the sum and difference frequencies of the original frequencies, and at sums and differences of multiples of those frequencies. For example, if 100 Hz and 105 Hz signals are fed to a system, intermodulation products appear at the 5 Hz difference and the 205 Hz sum, and at higher-order products such as 95 Hz (2 x 100 - 105) and 110 Hz (2 x 105 - 100). Our IMD test is performed with the same procedure as the THD test but with two sets of test signals. The first is a DIN (250 Hz & 8 kHz) dual tone with a 4:1 ratio at -9 dB FS (RMS), and the second is a CCIF (19 kHz & 20 kHz) dual tone with a 1:1 ratio at -12 dB FS (RMS). The intermodulation distortion results for the DIN and CCIF tones at 80 dB SPL are calculated by REW (Room EQ Wizard) as a percentage of the fundamental frequency power and are then averaged to get the final IMD @ 80 value. The intermodulation distortion at Max SPL is measured in the same way as at 80 dB SPL, but at the maximum volume setting of the television. Although THD and IMD have been a staple of audio measurements and specifications, studies have shown that there is only a moderate correlation between the THD/IMD response of a device and its perceived audio fidelity [1, 2]. Depending on the ratios between the different harmonics produced by a device, auditory masking, and other psychoacoustic effects, it is possible for a TV with high levels of THD/IMD to produce very little audible effect. It is also possible for a TV with audible levels of distortion (or compression and pumping) to have a typical THD/IMD performance. This could also be due to the fact that harmonic distortion is measured with a single tone sweep, and IMD with a dual tone, neither of which puts the TV under as much load as a full-spectrum, bass-heavy piece of music or a movie. Additionally, some TVs perform very well in THD at Max SPL simply because they are volume-limited, so they won't get loud enough to be put under heavy load and produce high distortion. Other methods for measuring distortion, such as multi-tone distortion and non-coherent distortion, have been proposed and studied, and among these, non-coherent distortion has significantly outperformed THD. There have also been new methods developed for interpreting THD, such as the GedLee metric, which incorporate auditory masking and other psychoacoustic phenomena into their calculations. We have plans to add such measurements to our distortion tests in the future.
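As a small illustration of the sum-and-difference behaviour described above, the sketch below lists the low-order intermodulation product frequencies for a dual tone such as the DIN pair. The cut-off at third order is an assumption of the example, not part of the test procedure.

```python
def imd_products(f1, f2, max_order=3):
    """List low-order intermodulation product frequencies for a dual tone:
    |m*f1 - n*f2| and m*f1 + n*f2 with m, n >= 1 and m + n <= max_order."""
    products = set()
    for m in range(1, max_order):
        for n in range(1, max_order):
            if m + n > max_order:
                continue
            products.add(abs(m * f1 - n * f2))
            products.add(m * f1 + n * f2)
    return sorted(products)

# DIN dual tone used in the test described above: 250 Hz and 8 kHz.
print(imd_products(250, 8000))
# -> [7500, 7750, 8250, 8500, 15750, 16250]  (Hz), i.e. sidebands around 8 kHz
```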
Presidential Campaigns and Candidates This site provides information on presidential campaigns dating back to 1960, campaign websites, and television ads from the present and the past. This website is geared toward teaching the history of the American presidency, primarily to high school students. The Presidency in History contains detailed biographies of each of the 43 past and present Presidents and First Ladies. The site also contains biographies of Cabinet members, staff, and advisers; timelines detailing significant events during each administration; and multimedia galleries to explore. The Presidency in Action delves into the function and responsibilities of the modern presidency. Here you will find detailed descriptions of the areas of presidential responsibility, updated organization charts, staff listings, and biographies of past and present staff and advisers. Letters to the Next President 2.0 Letters to the Next President 2.0 (L2P 2.0) is a national project hosted by the National Writing Project and KQED that helps give young people a voice on issues that matter to them. Middle and high school students are invited to choose an issue and write a letter about it to the next president. The letter can be created in any format: text, video, audio, multimedia, etc. Students post their letters to the L2P 2.0 platform, where they are available for all students to explore. The L2P 2.0 platform also hosts ideas for teachers as well as election-related curriculum resources. American Experience – The Presidents American presidents hold the most powerful office on earth and are found at the center of national and world events. In this award-winning collection from PBS, AMERICAN EXPERIENCE offers streaming documentaries on 10 American presidents, along with biographies of every U.S. President, plus interviews, articles, photo galleries, timelines and teacher guides. The US Presidency PBS Learning Media provides a rich resource on the U.S. Presidency. Students can learn about the duties and powers of the President and First Lady, read Presidential biographies, and access videos and primary sources. Curriculum: Create a Candidate: Students will research the stances on major policy issues held by both political parties, debate the issues from the point of view of one party, and create a hypothetical political candidate by developing a campaign strategy. Curriculum: Election 1912: Students will research the candidates in the 1912 Presidential election and their stances on major policy issues and compare them to issues that are important to voters today. Students will also choose a nominee and develop a campaign button that will best represent the candidate and his ideas.
John Muir was enraptured by it. Ansel Adams immortalized it. President Teddy Roosevelt fought to protect it. Then, as now, the beauty of Yosemite National Park astounds its visitors. Whether you seek the solitude of its many hiking trails, the fragrant mists of its soaring waterfalls or the colors of sunset reflected in the granite face of Half Dome, Yosemite will leave its enchanting mark on your soul. Yosemite National Park was created in 1890, largely due to the efforts of writer and naturalist John Muir. A native of Scotland, he first came to the valley in 1868 on a vacation from his home in Wisconsin. His concern about the damage livestock was doing to the Yosemite area led him to successfully lobby for the park's creation. In 1903, President Theodore Roosevelt toured Yosemite with Muir and was similarly impressed with its beauty. Muir had no problem convincing the President that Yosemite needed increased protection, and in 1906 the park came under the federal government's control. Yosemite National Park is located in central California in the Sierra Nevada Mountains. The spectacular scenery was largely created by glacial activity: the scraping of ice over the landscape produced the many waterfalls, U-shaped canyons and the iconic mountain, Half Dome. Rock falls, some triggered by earthquakes, others perhaps by water seepage turning into ice, have also helped shape the area, and the winding path of the Merced River has helped to carve out the valley floor. Yosemite National Park is home to several waterfalls. Yosemite Falls is one of the tallest in North America. Bridalveil Fall is aptly named because of the wind's tendency to blow the falling water sideways, making it look as delicate as a bride's veil. Ribbon Fall, the highest single-drop waterfall in the park, is also within park boundaries; it, along with the seasonal Horsetail Fall, flows off El Capitan. One of the most recognizable features of the park is Half Dome, an 8,842-foot peak with a vertical cliff of plutonic granite. It is the youngest peak in Yosemite. Many species of native wildflowers and trees can be found in Yosemite. Trees include the Giant Sequoia, which can live up to 3,000 years and is considered to be the largest living thing on the planet. These giants can be found in three groves: the Mariposa, the Merced and the Tuolumne. The California Black Oak, noted for its edible acorns, and the Ponderosa Pine can also be found throughout the park. Flowers include the Mariposa Lily, the Shooting Star and the dogwood, a seasonal flower (April to July) that blooms on dogwood trees. Yosemite is home to several animal species, some of which have been saved from extinction. Bighorn sheep, golden eagles and peregrine falcons are just some of the creatures that have comfortably increased their numbers. Black bears, which actually come in shades of brown, red or white in addition to black, are plentiful. Mule deer can often be found in meadow areas. Coyotes are more often heard than seen, and mountain lions live in relative secrecy. Smaller creatures, such as squirrels and marmots, are plentiful. Birdwatchers will find, in addition to several nesting pairs of falcons, Red-tailed Hawks, Great Gray Owls, Ravens and Steller's Jays. There are between 300 and 500 black bears within the borders of Yosemite National Park. While there is a vast territory for these animals to seek their natural foods, more and more bears are finding human food a temptation.
The danger arises when they lose their fear of people entirely and brazenly walk into campsites to raid improperly stored food, or break into cars because they smell something interesting. Yosemite has implemented regulations requiring food to be stored in the bear-proof storage lockers provided in camp areas. If you see a bear, the best thing to do is to keep your distance.
Definition - What does Exudation mean? Exudation is a process that occurs when matter that is part of the composition of another material begins to rise or penetrate outward towards the surface of the material it is part of or contained by. Problems can occur with regard to corrosion if coatings are subject to exudation. Corrosionpedia explains Exudation Exudation can be used to describe a wide variety of instances of outward surface penetration. For instance, a cut on someone's hand causes blood to exude from the skin. Some trees exude sap when the conditions are right. Magma can exude from beneath the earth's crust. Exudation can cause plasticizer additives to rise to the surface of a polymer. Exudation also occurs in certain types of paints: if the water-soluble components of paint are subjected to high moisture and humidity, exudation can occur. This could ultimately result in a coating failure and corrosion. Coatings should be carefully selected to avoid exudation in humid environmental conditions.
Melanoma is the most dangerous form of skin cancer. These cancerous growths develop when unrepaired DNA damage to skin cells (most often caused by ultraviolet radiation from sunshine or tanning beds) triggers mutations (genetic defects) that lead the skin cells to multiply rapidly and form malignant tumors. These tumors originate in the pigment-producing melanocytes in the basal layer of the epidermis. Melanomas often resemble moles, and some develop from moles. The majority of melanomas are black or brown, but they can also be skin-colored, pink, red, purple, blue or white. Melanoma is caused mainly by intense, occasional UV exposure (frequently leading to sunburn), especially in those who are genetically predisposed to the disease. Melanoma kills an estimated 10,130 people in the US annually. If melanoma is recognized and treated early, it is almost always curable; if it is not, the cancer can advance and spread to other parts of the body, where it becomes hard to treat and can be fatal. While it is not the most common of the skin cancers, it causes the most deaths. In 2016, an estimated 76,380 new cases will be invasive melanomas, with about 46,870 in men and 29,510 in women. The ABCDEs of Moles Moles, brown spots and growths on the skin are usually harmless — but not always. Anyone who has more than 100 moles is at greater risk for melanoma. The first signs can appear in one or more atypical moles. That's why it's so important to get to know your skin very well and to recognize any changes in the moles on your body. Look for the ABCDE signs of melanoma, and if you see one or more, make an appointment with a provider immediately. Asymmetry: a benign mole is symmetrical; if you draw a line through the middle, the two sides will match. If the two halves of a mole do not match, it is asymmetrical, a warning sign for melanoma. Border: a benign mole has smooth, even borders, unlike melanomas. The borders of an early melanoma tend to be uneven; the edges may be scalloped or notched. Color: most benign moles are all one color — often a single shade of brown. Having a variety of colors is another warning signal. A number of different shades of brown, tan or black could appear, and a melanoma may also become red, white or blue. Diameter: benign moles usually have a smaller diameter than malignant ones. Melanomas usually are larger in diameter than the eraser on a pencil (¼ inch or 6 mm), but they may sometimes be smaller when first detected. Evolving: common, benign moles look the same over time. Be on the alert when a mole starts to evolve or change in any way. When a mole is evolving, see a provider. Any change in size, shape, color, elevation, or another trait, or any new symptom such as bleeding, itching or crusting, can point to danger.
DISCOVERY – Dino drumsticks discovered High-tech imaging is revealing a wealth of extra information about the dinosaur origin of birds. JOHN PICKERELL reports. The majority of dinosaur fossils comprise only preserved bones, but in very rare cases scientists get a much more detailed glimpse at what these creatures might have looked like. China’s Liaoning province is one place that regularly yields incredible fossils, with traces of feathers, internal organs and even gut contents preserved. Recently researchers used a new method to reveal details of the body, skin, feathers and scales of a dinosaur called Anchiornis. Known as laser-stimulated fluorescence (LSF), the technique uses a violet laser to make molecules in remnants of organic tissues embedded in the rock glow in the dark. “We were able to directly observe parts of the body outline of a bird-like dinosaur,” says Hong Kong University palaeontologist Michael Pittman. “We also observed soft tissue details of the wings and feet that are usually extremely difficult to infer from studying fossil skeletons.” Crow-sized Anchiornis is important for understanding the origin of birds and of flight, Pittman says, because it is thought to be closely related to the ancestor of birds. The scans revealed footpads and scales very similar to those seen on chickens, as well as small flaps of skin under the feathers on the leading edge of the wings. These are known as “propatagia”, and are important for flight in birds. “Drumstick-shaped legs” and a thin feathery tail were also exposed. There are about 230 Anchiornis specimens held in Chinese museum collections. This meant the scientists had many fossils from which to select the best soft-tissue preservation. Mike Benton, a vertebrate palaeontologist at the University of Bristol, said LSF was a useful technique for distinguishing between anatomical features and “artefacts of preservation around the bones”, revealing previously invisible details. The Anchiornis study was published in the journal Nature Communications. Anchiornis under laser-stimulated fluorescence, showing feathers like those of modern birds.
Rhinos are the second largest land mammal on earth. At the beginning of the 20th century, 500,000 roamed Africa and Asia, but today very few survive outside national parks and reserves due to persistent poaching and habitat loss. The different species of rhinos There are five living species of rhinos in the world: white, black, Indian, Javan, and Sumatran. There are two subspecies of white rhino, the northern and the southern white rhino, and four subspecies of black rhino: western, eastern, south-western, and south-central. Rhinos can be solitary or social For the most part, rhinoceroses are solitary animals and like to avoid each other. Some species, particularly the white rhino, may live in small groups known as a 'crash'. These crashes are usually made up of a female and her calves, although sometimes adult females can be seen together too. Males, on the other hand, like to be left alone, unless in search of a female to breed with. Rhinos can tell a lot from a call The more social rhino species, the southern and northern white rhinos, use a call to communicate with one another. This is called a contact call pant, and it can be heard over long distances. Researchers have found that these calls say a lot about the caller. In one study, scientists played calls from different individual rhinos to a group of ten wild southern white rhinos. They included calls made by both familiar and unfamiliar southern white rhinos, both males and females, and calls from northern white rhinos. They then monitored the behaviour of the rhinos to see how they responded. The researchers found that the rhinos responded differently depending on who the caller was. They found that the rhinos could tell whether the caller was familiar or unfamiliar to them, whether they were a male or a female, and whether they were a fellow southern white rhino or a northern white rhino. They also analysed the calls and found that they differed in structure depending on whether they were performed by a male or female rhino. They found that the male rhinos tended to respond more strongly to female calls than to other males. This suggests that male-to-male communication is less important for rhinos than male-to-female communication. In another study, researchers found that each call could be linked to an individual, in the same way that our voices are unique to us. Rhinos have very poor eyesight, so being able to communicate vocally with one another over long distances is crucial. This research is important because it helps us to understand more about these amazing animals. Understanding how they communicate with one another may also be critical for improving conservation efforts. Rhinos really are amazing animals The Javan and Sumatran rhinos in Asia are critically endangered. There are thought to be fewer than 80 Sumatran rhinos in the wild. A subspecies of the Javan rhino was declared extinct in Vietnam in 2011, and a small population of around 70 Javan rhinos still clings to survival on the Indonesian island of Java. Successful conservation efforts have helped the third Asian species, the greater one-horned (or Indian) rhino, to increase in number. Its status was changed from Endangered to Vulnerable, and it survives in northern India and southern Nepal. A recent count suggested that there are now 3,500 of these rhinos in India and Nepal, but the species is still threatened due to poaching for their horns.
In Africa, southern white rhinos, once thought to be extinct, are another conservation success, as they have been brought back to sustainable numbers. In fact, there are thought to be just under 19,000 in the wild. Like the Asian rhinos, however, they too are being increasingly poached for their horns, and their numbers are now crashing, once again putting the conservation status of these animals at risk. Black rhinos have doubled in number over the past two decades from their low point of fewer than 2,500 individuals, and there are now thought to be around 5,000 in the wild. This is still a fraction of the estimated 100,000 that existed in the early part of the 20th century. The western black rhino and the northern white rhino have recently gone extinct in the wild, and only two female northern white rhinos remain. They are being kept under 24-hour guard in Ol Pejeta Conservancy in Kenya. The main threat to these beautiful animals is illegal hunting, largely because their horns are used in traditional folk medicine, particularly in Asia. To save the remaining rhinos, countries must work together to protect conservation sites and, crucially, to stop the illegal trade in rhino horns. That means stopping the poachers who kill the rhinos, but it also means tackling a vast network of organised crime that ships the horns to China and other Asian countries. It is also important to end the demand. Rhino horns are status symbols in China, and people pay lots of money for them. If demand could be stopped, then at least some of the rhino species could start to recover. It may well be too late for some of the species and subspecies whose populations are now so small they could never recover, but it is not too late for them all. For more information on how you can help to protect rhinos in the wild, see: Save the Rhino World Wildlife Fund International Rhino Foundation
Key Stage 1 The children begin Key Stage 1 of the National Curriculum. They will have daily Literacy and Mathematics lessons as well as a variety of other subjects, including Computing, Religious Education (R.E.), Music, Personal, Social and Health Education (P.S.H.E.) and Physical Education (P.E.). The children follow a book-banding system and each child has their own reading record book which records their progress. Some subjects are taught as a topic; these are Science, History, Geography, Art, and Design and Technology (DT). In Year 1 we aim to work closely with parents in order to help the children make good progress in their basic literacy and maths skills. We consider parental support to be essential.
Planktivore Feeding Habits Tides and ocean currents deliver a continuous supply of microscopic plants and animals past coral reefs. This abundance of food supports a huge number of sea creatures, from the corals all the way up to enormous animals such as manta rays and the biggest of them all, the whale shark. A large number of more normally sized fishes also feed on the plankton, facing into the currents above the reef to harvest their needs from the mass of tiny copepods, cladocerans, and invertebrate larvae transported across the reef by water currents. What kinds of fish feed on plankton? Just about all tropical marine fishes undergo a planktonic stage early in their development, where they live as larvae among the plankton. Once settled on the reef, plenty of species diversify into different diets, but equally, a large number continue with their planktivorous diet. Almost all major families have planktivorous members; there are plenty of examples among the damselfish, basslets, and butterflyfish that feed during the day, and among nocturnal feeders, such as squirrelfish and cardinalfish. How are they adapted to their diet? Diurnal planktivores feed on relatively small members of the zooplankton, such as copepods. These prey animals are usually transparent; almost all are smaller than 3 mm and many measure less than 1 mm. Spotting them requires excellent eyesight — one characteristic of the fishes that feed on them. The fish also tend to have small mouths and toothless jaws, with tightly packed gill rakers to prevent the escape of captured prey. They have forked tails and, usually, streamlined bodies that help them hold their position in the currents as they feed. These features also help the fish to make a fast getaway if danger threatens. Nocturnal planktivores are very different in appearance; their huge eyes allow them to see their prey in the dark and their mouths are equally large. One reason for this latter feature is the size of their prey, which is considerably larger than that of diurnal fish — almost always greater than 2 mm — and includes bigger animals, such as mysid shrimps. How do they feed on the plankton? Diurnal planktivores face into the current, especially on the outer walls of reefs, picking at their food as it drifts past. The further away the fish venture from the protection of the reef, the more food they can find, but this comes at the cost of a much greater risk of attack from patrolling predatory fish. For this reason, diurnal planktivores tend to feed in groups for safety, and the aggregations are structured by size — the larger, faster-swimming fish can afford to take more risks and feed furthest away from the reef. Sometimes these fish switch from their regular diet to exploit new types of food: when corals, or even large fish such as parrotfish, spawn, sizeable groups of planktivores gather downstream to feast. The nocturnal planktivores feed in a very different way. As they leave their huge resting aggregations, they split up into small, loose groups or even go solo, spreading across the reef to gather their superabundant prey.
History Of Cotton Archaeological findings in Mohenjo-Daro, in modern Pakistan, and in the Tehuacán valley in Mexico, both dating from about 3000 BC, suggest that the cotton plant was already domesticated and being used for making textiles over 5000 years ago. Cotton fabrics from India, of outstanding fineness and quality, were traded in the Mediterranean area from the time of Alexander, who had established the trade routes to the East. Alexandria became the major dispersal point for these goods, and the later rise to power of the city-state of Venice is said to have been built largely on trade in Indian cotton cloth. In the 8th century, cotton growing and fabric manufacture were introduced into Spain by the Moors, where they thrived until the expulsion of the Moors in the 15th century. Thereafter, the opening of the sea route to India made Portugal the prime source of cotton fabrics. During the 17th century, textile manufacturing expertise and sea power began to concentrate in England, which then became the dominant centre of textile manufacture. Meanwhile, cotton growing was expanded in North America and the Caribbean. These trends were reinforced in the late 18th century by the invention of the cotton gin in America, and by the development of spinning and weaving machinery, plus the harnessing of water and steam power, in Britain. Cotton Production & Its Sources By 1930 cotton accounted for 85% of the world consumption of textile fibers but, during the last half of the twentieth century, its market share fell to about 40% due to the introduction of synthetic fibers. Current annual production (2010-12) is about 26 million tonnes. Although cotton production has tripled in recent decades, the amount of land utilized has not increased. This is a result of constant improvements in cotton varieties and farming techniques. Cotton is grown in about 80 different countries worldwide. The Cotton Plant Cotton is a member of the Mallow family. Its height ranges from 25 cm to over 2 m, depending on variety, climate, and agronomy. It is normally grown as an annual shrub but, in parts of South America and the Caribbean, it is cultivated as a perennial shrub (tree cotton). From planting to maturity takes between 175 and 225 days. At planting and during its growth, cotton needs plenty of water; for ripening, it needs heat. Therefore, the world's cotton belt is located mainly in the tropics and sub-tropics. After flowering, the fruit nodes, located in the calyx (bracts), grow into capsules (bolls) which eventually crack open to reveal the seed hairs. In each boll there are about 30 seeds. The number of hairs on each seed ranges from less than 1,000 to more than 10,000, depending on the variety. Like any agricultural product, the way that cotton is grown in different countries varies widely, depending on the level of development: in the USA, Australia, Brazil, Uzbekistan and Israel large machines are utilized; in poorer countries, oxen or buffalo may be used for traction, and manual labor is the rule. Harvesting of Cotton Harvesting is either by hand or by picking machines. Hand picking extends over several weeks. In principle, it has the advantage that only the fully ripened bolls are collected and no leaves are included. A picking machine will usually harvest the whole crop in one pass. It has a tendency to include some unripe bolls, together with various quantities of dead leaves and other plant parts.
Drying of Cotton If the newly-harvested seed cotton is wet, it may have to be dried using warm air before it can be stored in large piles to await ginning. In many countries, drying is an integral part of the ginning process. Ginning of Cotton Ginning is the separation of the fibers from the seeds using special machines. The separated fibers, called lint, have a staple length of between 15 and 50 mm, depending on the cotton variety. On many types of seed there are also some very short fibers, called linters. They are made of cellulose and find many uses, including the production of man-made fibers. The seeds can also be utilized for the production of edible oil and as cattle feed. 100 kg of clean seed cotton yields about 35 kg of fiber, 62 kg of seed and 3 kg of waste. Processing Into Yarn Cotton fibers are made into staple fiber yarns predominantly by ring spinning or OE rotor spinning. Commercially, cotton is usually designated according to its variety and origin. Different varieties are grown in different countries – about 40 in the USA alone. Thus the country of origin is only a partial guide to quality. The high-quality, long-staple cottons, such as the Gizas of Egypt and the Pimas of the USA, Peru, and Israel, account for less than 10% of total production. Sea Island cotton, from the West Indies, is a very high-quality type produced in vanishingly small quantities. The most common type worldwide is American Upland cotton, with about 85%. Naturally coloured cottons, mostly in brown shades, have been adapted for commercial production on a very limited scale. The main quality characteristics are:
Staple length: This is the most important aspect of quality. It generally lies between 20 mm and 40 mm; spinnable fibers have a staple length greater than about 16 mm. Sea Island cotton can be as long as 50 mm, Giza and Pima are about 36 mm, and Upland is about 28 mm.
Fineness and handle: Cotton fibers are fine. Their weight per unit length is between 1 and 4 dtex. Generally, the longer the fiber, the finer it is and the softer its handle.
Contamination: Large amounts of contaminants, such as leaf or seed fragments, or of very short fibers, or of immature and "dead" fibers, are severely detrimental to quality.
Strength: High-quality cotton will have a high strength relative to its fineness.
Color and luster: The color of cotton varies, according to the variety, from white (Upland) through creamy (Giza, Pima) to light yellow or brown. The luster is usually subdued; high-quality types, such as Giza and Pima, have a silky luster.
Construction of Cotton Fibres Cotton is composed of cellulose, the foundation of all plants. Whilst it is growing inside the boll, the fiber is circular (annular) in section. When the boll opens, the fiber begins to dry and it collapses to a kidney-shaped cross-section. At very strong magnification in the electron microscope, a suitably prepared cross-section shows daily growth rings, comparable with the annual rings in wood. These are the result of daily deposits of layer upon layer of fresh cellulose, proceeding from the outside inwards. The first-formed outer layer is composed of an especially tough kind of cellulose. At the end of the growth period, a cavity remains at the centre; this is called the lumen. During drying the fiber twists along its length axis and looks like a flattened, twisted tube. A layer of natural wax coats the surface. Each cellulose layer is formed from fibrillar bundles composed of individual fibrils (fibril = tiny fiber). The fibrils are made of cellulose macromolecules.
The fibrillar bundles of succeeding cellulose layers are inclined at an angle to the length axis of the fiber. Spaces between the ordered lattice of the fibrillar structure, as well as the hollow fiber centre, are easily penetrated by water. Moisture can be stored in the cavities; sweat can be absorbed and will be removed during subsequent washing. Cotton is stronger when it is swollen by water. This is because the presence of water promotes a more uniform distribution of stresses across and along the cellulose layers. The high strength of the cotton fiber is a consequence of its construction from highly organized cellulose chain molecules in the fiber interior (crystalline regions). Its low elasticity is due to slippage between the crystalline regions. Cotton fibers are single cells that extend from the seed coat epidermis. Their dimensions depend on the cotton species and variety. Thus the superfine Sea Island cotton (Gossypium barbadense) has a length of up to 5 cm and a linear density of 1 dtex, while the coarse Asiatic cottons (Gossypium herbaceum, Gossypium arboreum) have a length of about 1.5 cm and a linear density of 3 dtex.
In an attempt to prove the theory that the material universe began as a "big bang", modern science has appealed to Albert Einstein and his theory of relativity, expressed in the formula E=mc2. According to secular science, this theory explains how the "big bang" occurred. The prevailing secular scientific assumption is that our universe once existed as an untold amount of energy and then in an instant converted from energy to mass as expressed in E=mc2. The purpose of this article is not to treat this subject with great depth. Instead, it is to relate the modern scientific perspective on the origins of the universe to Einstein's theory. This article will also uncover flaws in the "big bang" theory from a rational perspective and will look at a fundamental problem with the theory of relativity itself. A Basic Understanding of E=mc2 The Jewish physicist Albert Einstein hypothesized that there is a relationship between mass, energy and the speed of light. This led him to develop the formula E=mc2. In simplest terms, this formula states that energy (E) is equal to the product of mass (m) and the speed of light (c, from the Latin celeritas) squared. To illustrate this equation, imagine a balloon filled with water. Now squeeze the balloon in the middle so that there are two equal water-filled halves with a small funnel at the pinch point so water can travel between the two halves. As water is squeezed from one half, it moves to the other, and vice versa. One half of the balloon will increase by the same amount that the other decreases. Now imagine that one side of the balloon is mass, the other side is energy and the pinch point is the speed of light, a constant that does not change. According to the theory, mass can be changed to energy and energy can be changed to mass, with the conversion occurring equally in either direction. In fact, Einstein penned the formula as an educated guess and then sought to prove it. It has been universally accepted as fact. In order to better understand Einstein's theory, certain known facts must first be understood about physics. First, the speed of light is 186,000 miles per second. We know this because we know the average distance from the sun to the earth is 93 million miles and it takes light from the sun an average of about 8 minutes to reach Earth. The speed of light (c) is the constant in the equation. It is also given that every object has a certain amount of energy and a certain mass, and the product of the two does not change. The individual mass and energy of an object can change, but the product of the two must always remain the same. In similar fashion, the product 12 can be expressed as 12x1, 6x2, or 4x3. These are all different expressions of the same unchanging result. Practically speaking, the formula E=mc2 asserts that as any given object increases in speed, some of the energy used to accelerate the object is converted to mass. This conversion of energy to mass is so small that it is almost non-existent until the object approaches the speed of light. At that speed nearly all the energy being used to accelerate the object is converted to mass, causing the object to become infinitely massive. In theory, an object can never actually reach the speed of light. Inversely, as an object decelerates from near the speed of light, its mass is converted back to energy and the object becomes smaller. Property of Light We must also take a look at one of the most fundamental properties of quantum physics, that is, of the properties of light.
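For readers who want to check the figures quoted above, here is a small Python sketch of the arithmetic: the light travel time implied by the quoted distance and speed, and the energy equivalent of a small mass under E=mc2. The unit choices and the one-gram example mass are assumptions of the illustration, not part of the original argument.

```python
# Illustrative arithmetic for the figures quoted above.

SPEED_OF_LIGHT_MI_S = 186_000          # miles per second, as quoted
EARTH_SUN_DISTANCE_MI = 93_000_000     # average miles, as quoted

travel_time_s = EARTH_SUN_DISTANCE_MI / SPEED_OF_LIGHT_MI_S
print(f"Sunlight travel time: {travel_time_s:.0f} s "
      f"(about {travel_time_s / 60:.1f} minutes)")   # ~500 s, ~8.3 minutes

# E = m * c^2 in SI units: even a tiny mass corresponds to enormous energy.
c_m_s = 299_792_458                    # metres per second
mass_kg = 0.001                        # one gram (example value)
energy_joules = mass_kg * c_m_s ** 2
print(f"Energy equivalent of 1 g: {energy_joules:.2e} J")  # ~9e13 J
```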
Let's assume that a spaceship was somehow able to travel at the speed of light, although the theory of relativity states this is not possible, and that it was moving away from the source of light. It might be assumed that the spacecraft had "caught up" with the light rays. In fact, even if the spaceship were traveling at 186,000 miles per second, light would still be moving away from and toward the spaceship at 186,000 miles per second. In this regard, light is immune from the rational physical nature of the material universe. A second fundamental property of light is that light does not need a known reference point to measure its velocity, as is the case for all other material things. That is why an object, no matter how fast or slow it is traveling, will experience light moving in all directions at 186,000 miles per second. Property of Materials Now let's compare that with material physics. Any tangible object must have a known reference point in order to measure its velocity. For example, we could say that a rocket, 23 seconds after lift-off, is travelling at 741 miles per hour, which is also roughly the speed of sound at sea level, or Mach 1. What we are actually saying is that the rocket is traveling away from the earth at Mach 1; we have the earth as the reference point. If we used the moon as a reference point, and the moon was moving away from the rocket in its orbit around the earth, then the speed of the rocket would be a much greater number. Let's now suppose that same rocket was in orbit around the earth. We would accept that the rocket was traveling at Mach 25, or 18,500 miles per hour, in its orbital path. Once again our point of reference is the surface of the earth. But from the moon, the velocity of the rocket would constantly change, since both objects are in different orbits around the earth. Taking this a step further, our rocket has now left orbit and is traveling away from the earth toward the outer reaches of the solar system. It is now traveling at 43,000 miles per hour with the earth as the reference point. However, after a certain distance, it no longer makes sense to use the earth as a reference point. Assume it approaches very near the planet Neptune as it continues its exit from the solar system. Would it not make more sense to use Neptune as the point of reference? Or should we use the Kuiper belt of icy bodies beyond Neptune? Maybe we should use Pluto as the reference point to determine the rocket's velocity. No matter what reference point we use, we will get a very different velocity. What if, in the void of outer space, the rocket finds itself thousands, or even millions, of light years from the nearest object? After all, this would be most probable. What would we use for a reference point? In fact, we could even say the rocket is the point of reference and is not moving at all, while all other objects are actually moving away from or toward the rocket. The point is this: any material object in our material universe is dependent upon some other object to determine its velocity. With so many objects in the universe, there are billions, even zillions, of possible reference points, and it would be neither right nor wrong to use any of them. Even the object we use as the subject for determining velocity could become the point of reference for determining the velocity of all other objects. With this in mind, we come to a critical conclusion regarding E=mc2: speed or velocity cannot be defined in the void of space for any given object apart from a known reference point.
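The dependence of a measured velocity on the chosen reference point can be illustrated with a trivial sketch. The numbers below are made up purely to show that the same rocket yields a different figure for each reference body; only the 43,000 mph value relative to Earth comes from the text above.

```python
# Made-up illustration of the point above: the same rocket has a different
# "speed" depending on which reference point the observer chooses.
# None of these numbers are measurements; they are placeholders.

rocket_speed_relative_to = {
    "Earth's surface": 43_000,    # mph, the figure used in the text
    "the Moon": 41_500,           # hypothetical: Moon moving with the rocket
    "Neptune": 55_000,            # hypothetical: Neptune moving toward it
    "the rocket itself": 0,       # every object is at rest in its own frame
}

for reference, speed in rocket_speed_relative_to.items():
    print(f"Speed relative to {reference}: {speed} mph")
```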
Einstein's formula assumes an object can approach the speed of light. However, even if we say the object has approached the speed of light, we would be making an incorrect statement, because it may only be traveling at half, or one quarter, the speed of light, depending upon what reference point we use to measure its velocity. We simply cannot use the speed of light as a constant in the equation. E=mc2 makes the incorrect assumption that the speed of light can be applied to the velocities of material objects. This is the fundamental problem with Einstein's theory. A person would not reasonably try to measure distance with a bathroom scale or mete out a bushel of wheat with a stopwatch; such measurements do not belong together. In the same manner, quantum physics and material physics do not follow the same sets of rules. Now, back to the statement that was previously made: the prevailing secular scientific assumption is that our universe once existed as an untold amount of energy and then in an instant converted from energy to mass as expressed in E=mc2. The latest theory suggests that the entire universe once existed as a single point of energy. This untold amount of energy was so massive that it accounts for all material matter in space. In an instant, this energy "exploded", moving outward in all directions from a common center. The explosion, or "big bang", approached the speed of light, causing energy to transform into matter. Due to the fact that quantum physics and material physics are completely separate in substance, as previously discussed, this is impossible. Science has attempted to prove a hypothesis on the basis of an unproven theory. A basic tenet of proper science is to generate a hypothesis, gather data, test the data to a satisfactory degree, then draw a conclusion based on the data gathered and tests performed. However, in an effort to appeal to the masses, the rules have been thrown out the window and an unproven and unprovable hypothesis is presented as fact.
How to Manage Pests UC Pest Management Guidelines Seedcorn maggot larvae are legless, white maggots that cannot be distinguished from cabbage maggots without microscopic examination by a trained taxonomist. However, unlike cabbage maggots, they do not attack plants after the seedling stage, so they are rarely found tunneling in larger roots. The life cycle is similar to that of the cabbage maggot, with adult flies laying eggs singly or in clusters in the soil near plant stems. Larvae feed for 1 to 3 weeks on seeds and germinating seedlings and burrow into the soil to pupate. Numerous generations may occur, although maggots are most prevalent under cool spring conditions, especially after wet winters, and populations may decline in summer. This insect is attracted to soils that have a high organic matter content. Seedcorn maggots kill germinating seed and very small seedlings. Once the stand is established and seedlings have developed a few leaves, they are unlikely to cause economic damage. Prevention is the best management strategy. Seedcorn maggots prefer to lay their eggs in moist, organically rich soil. If you are using manure, let it age and incorporate it well before planting. Disk under cover crops at least 2 weeks before planting. Attach drag chains behind the planter during seeding to reduce egg laying in the seed row. Cool, wet spring weather is favorable to the development of seedcorn maggot populations. UC IPM Pest Management Guidelines: Cole Crops
Sitting just off the coast of California is a chain of eight small islands, five of which make up the Channel Islands National Park. These islands create what is considered one of the richest marine biospheres in the world, and are home to many species found only on these specks of rock in the sea. That includes the Island fox, one of the smallest canids in the world and an endangered species -- one that would have disappeared entirely if it weren't for conservation efforts that not only brought numbers back up, but did so in record time. In 2004, four subspecies of Island fox -- the San Miguel Island fox, Santa Rosa Island fox, Santa Cruz Island fox and Santa Catalina Island fox -- were listed as endangered on the Federally Threatened and Endangered Species list. These foxes are found only on six of the eight Channel Islands. With such a small range, and with nowhere to escape to, problems can hit hard and fast for such species. For the island fox, a significant decline occurred in the 1990s due to a canine distemper outbreak and predation by golden eagles. Golden eagles were once kept in check by the islands' bald eagles, which didn't prey on the foxes as a main food source. But bald eagles all but disappeared from the area due to the impacts of DDT pesticide. Without the bald eagles around, and with a rise in non-native food sources on the islands, the golden eagles could feast as they wanted, and that included the island foxes. The change in the islands tells the story of the fragility and interconnectedness of an ecosystem in bold letters. Yet the foxes may soon be considered for removal from the list thanks to the success of a captive breeding program, removal of golden eagles and their non-native prey, and the return of bald eagles to their historic territories. Two of the subspecies, the San Miguel Island fox and the Santa Rosa Island fox, were down to a mere 15 individuals each. They now number 577 and 894 respectively. The Santa Cruz Island fox and the Santa Catalina Island fox are back up to 1,354 and 1,852 individuals. This represents enough of a population rebound for a review of their endangered status, and a bit of a celebration. "While the island fox still faces a multitude of threats on Catalina Island, we see this as an example of how a well-managed recovery effort can make a tremendous impact on an endangered or threatened species' prospects for long-term survival," said Julie King, the Catalina Island Conservancy's director of conservation and wildlife management, in the press release.
What are the different types of lines? There are two different kinds of lines: straight lines and curved lines. 1. Straight line: a line that keeps the same direction along its whole length. 2. Curved line: a line that changes direction as it is drawn. Straight lines may be drawn in different directions and are given three names. (i) Horizontal lines: the lines drawn horizontally are called horizontal lines. (ii) Vertical lines: the lines drawn vertically are called vertical lines. (iii) Oblique or slanting lines: the lines drawn in a slanting position are called oblique or slanting lines. These explanations of the different types of lines will help kids to understand the difference between straight lines and curved lines and how lines are drawn in different directions in geometry.
A team of scientists have unmasked the intricacies of how sharks hunt prey — from the first whiff to the final chomp — in a new study about shark senses that was supported by the National Science Foundation and published in the peer-reviewed journal PLOS ONE. The study, led by scientists from the University of South Florida, Mote Marine Laboratory and Boston University, is the first to show how vision, touch, smell and other senses combine to guide a detailed series of animal behaviors from start to finish. Results show that sharks with different lifestyles may favor different senses, and they can sometimes switch when their preferred senses are blocked. That's hopeful news for sharks trying to find food in changing and sometimes degraded environments. "This is undoubtedly the most comprehensive multi-sensory study on any shark, skate or ray," said Philip Motta, a USF biology professor and internationally recognized shark expert who co-authored this study. "Perhaps the most revealing thing to me was the startling difference in how these different shark species utilize and switch between the various senses as they hunt and capture their prey. Most references to shark hunting overemphasize and oversimplify the use of one or two senses; this study reveals the complexity and differences that are related to the sharks' ecology and habitats." Understanding how sharks sense and interact with their environment is vital for sustaining populations of these marine predators, which support the health of oceans around the world. Overfishing is the greatest known threat, but pollution and other environmental changes may affect the natural signals that sharks need for hunting and other key behaviors. In addition, understanding the senses of sharks and other marine life could inspire new designs for underwater robotics. However, before shark senses can teach us anything, scientists must gain a basic understanding of how they work. Past studies have suggested that sharks sense the drifting smell of distant prey, swim upstream toward it using their lateral lines — the touch-sensitive systems that feel water movement — and then at closer ranges they seem to aim and strike using vision, the lateral line or electroreception — a special sense that sharks and related fish use to detect electric fields from living prey. However, no study has shown how these senses work together in every step of hunting, until now. "Our findings may surprise a lot of people," said Jayne Gardiner, lead author of the study and a Postdoctoral Fellow at Mote whose thesis at USF included the current study. "The general public often hears that sharks are all about the smell of prey, that they're like big swimming noses. In the scientific community it has been suggested that some sharks, like blacktips, are strongly visual feeders. But in this study, what impressed us most was not one particular sense, but the sharks' ability to switch between multiple senses and the flexibility of their behavior." The researchers placed blacktip, bonnethead and nurse sharks — three species found along Florida's coast that differ in body structure, hunting strategy and habitat — into a large, specially designed tank where the water flowed straight toward them. The researchers dangled a prey fish or shrimp at the opposite end of the tank, released a hungry shark and tracked the shark's movements towards the prey.
Next, they made the hunt more challenging: They temporarily blocked the sharks’ senses one by one using eye coverings, nose plugs to block smell, antibiotics to interfere with their lateral lines that detect water motion and electrically insulating materials to cover the electrosensory pores on their snouts. Then the researchers took high-speed video — lots of it. “We had hundreds of video clips to sort through, and we had to get just the right angle to see when the shark was capturing the prey,” Gardiner said. The effort was worth it. Gardiner and her team reported some striking results, including: Nurse sharks did not recognize their prey if their noses were blocked, but the blacktips and bonnetheads did. Smell may be required for nurse sharks to identify prey because they feed in the dark and often suck hidden prey out of rock crevices. The other two species, which scoop up crustaceans in daytime (bonnetheads) or chase fish especially at dawn and dusk (blacktips), could still recognize prey without their sense of smell — once they got close enough to see it. When the researchers blocked both vision and lateral line, blacktip and bonnethead sharks could not follow the odor trail to locate prey, but nurse sharks could. Nurse sharks tend to touch the bottom with their pectoral fins — likely another way to feel which direction the water is moving, and thus which direction they should proceed. However, hunting this way was slow going. When the sharks’ vision was blocked, removing a key sense for aiming at prey from long distances, they could compensate by lining up their strikes, albeit at closer range, using the lateral line, which can sense water movements from struggling prey. During normal feeding in all three species, the prey’s electric field triggered opening their mouths at very close range. However, electricity alone was not enough: Blocking vision and lateral line prevented sharks from striking, even when they were close enough to sense the prey’s electric field. With electroreception blocked, sharks usually failed to capture prey. However, blacktip and nurse sharks sometimes opened their mouths at the right time if their jaws touched prey, whereas touch did not help bonnetheads. Scientists suspect that bonnetheads rely strongly on electroreception because their wide heads allow them to have the special pores that sense electric fields spread across a wider area. “We sought to discover how sharks use their highly evolved senses to hunt and locate prey, knowing it involved more than just a good sense of smell,” said Bob Hueter, Director of Mote’s Center for Shark Research and co-author of the current study. “What we found was amazing, not only in how the various senses mesh together but also how one shark species can vary from another. Not all sharks behave alike.” In general, the results provide the most detailed play-by-play description of shark hunting behavior to date, from long-range tracking of smells and swimming upstream using the lateral line to orienting and striking using vision, lateral line and finally electroreception. “This is landmark work,” said co-author Jelle Atema, a professor of biology at Boston University and Adjunct Scientist at Woods Hole Oceanographic Institution who worked with Gardiner on pioneering studies of shark senses that were precursors to the current study.
“Back in 1985, world experts in underwater animal senses met at Mote, and at that time we emphasized that sensory studies were focusing on one animal at a time, one sense at a time, and we needed to start combining this information. Now we have.” While the results do not focus on shark-and-human interactions, they do highlight that some shark-safety measures, like specially patterned wetsuits meant to provide visual camouflage or electrical deterrents that target the sharks’ electrosensory system — each focusing on one sense at a time — may not be enough to change the rates of shark incidents, Gardiner said. “This also could help explain why most shark ‘repellents’ may work for a short time but are eventually overcome by persistent sharks,” added Hueter. Regardless, shark-and-human interactions are extremely rare because sharks generally do not seek out humans. The results could also inform future studies with other marine species. According to the paper, “Sharks (…) are not unique in their sensory guidance of hunting: They exploit information fields available to all marine species. Thus, the results may be seen as a general blueprint for underwater hunting, modifiable by habitat and by the behavioral specializations of many different aquatic animals from lobsters to whales.” Understanding the full implications for sharks or any other species in the wild will take much more research, but Gardiner believes the current results bode well. “I think the sharks’ abilities to switch between different senses may make them more resilient in the wild. They may be more flexible and better adapted to deal with environmental changes – but not all human impacts. Overfishing is still overfishing,” Gardiner said. Read the full paper on the PLOS ONE web site: http://dx.plos.org/10.1371/journal.pone.0093036
Portable Cordless Vaccine Storage Device Laura Bowen | June 24, 2014 The Passive Vaccine Storage Device (PVSD) is a highly advanced container that combines ingenuity and insulation technology to empower aid workers delivering vaccines to the toughest-to-reach corners of the globe. Designed as a prototype that improves upon earlier models of vaccine transportation devices, this compact apparatus was developed with all the necessary steps: careful planning, simulation, and testing. The Elements Pose the Greatest Challenge To optimize the Passive Vaccine Storage Device (PVSD), engineers at Intellectual Ventures, as part of the Global Good Program, turned to thermal and vacuum system modeling with COMSOL Multiphysics together with experimentation. In the early development stages, they began with a design similar to a cryogenic dewar — a specialized vacuum container commonly used in the field. Typical dewars are able to store ice for a few days before it melts, which is not nearly enough time for long trips to remote destinations. Traveling from a source point to areas where people need vaccinations could take weeks depending on their locations. Long travel times in combination with extreme climates present major challenges for experts working in the medical community. The PVSDs need high-performance insulation to create the temperature-controlled environment required for vaccine storage. Each layer of the device can impact overall performance and is designed to add to its insulative strength. The Passive Vaccine Storage Device can hold vaccines in a temperature-controlled, easy-to-transport compartment for longer durations than ever before. The shell of the PVSD is made of multilayer insulation, which is similar to the materials used for temperature regulation in spacecraft. This design is especially necessary for areas of the world that get incredibly hot because the vaccines need to stay in a cool and narrow temperature range (between 0°C and 10°C). The multilayer insulation consists of several layers of reflective aluminum, a low conductivity spacer, and nonconducting vacuum space. When modeling the PVSD, the Intellectual Ventures team considered physics phenomena and design variables including heat transfer, outgassing, and hold time. Why Do Vaccines Need to Stay Cool? Vaccines require cold chain storage, which entails proper handling from the moment they are manufactured up until they are administered to a patient. Live virus vaccines can quickly deteriorate as soon as they leave their temperature-controlled space, and inactivated vaccines can lose potency from very short temperature fluctuations. Each year, countless doses of vaccines are thrown away or rendered useless because they were not stored and handled correctly. The Design That Really Keeps the Cold In For experimental tests, the researchers used an environmental chamber to recreate extreme outdoor conditions. In addition to experimental evaluation, multiphysics models were implemented using the Molecular Flow Module and Heat Transfer Module with COMSOL Multiphysics to optimize the PVSD design with regard to thermal performance and hold time. The outside of the device, composed of metal, prevents air inflow and helps maintain the cool temperature within. Added rubber absorbs shock to protect the contents during bumpy travels. Inside the PVSD is a small insulating shell (pictured below) that contains several compartments where the life-saving vaccines are stored. 
The inner shell of the PVSD holds individual vaccine vials that aid workers can easily access without disrupting the vacuum space or controlled environment. By breaking down geographical barriers for aid workers, many lives will be impacted by the PVSDs and their important cargo. Read the full-length article “Innovative Thermal Insulation Techniques Bring Vaccines to the Developing World” in the 2014 edition of COMSOL News.
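For a rough sense of how insulation performance translates into hold time, a simple energy-budget estimate can be sketched: the device stays cold until its ice charge has absorbed as much heat as leaks in through the insulation. The sketch below is only illustrative; the ice mass and heat-leak figures are assumptions, not the PVSD's actual specifications.

```python
# Back-of-envelope hold-time estimate for a passive ice-cooled container.
# All numbers below are illustrative assumptions, not the PVSD's actual specs.

LATENT_HEAT_FUSION = 334e3   # J/kg, latent heat of fusion of water ice
ice_mass_kg = 5.0            # hypothetical ice charge
heat_leak_w = 0.25           # hypothetical steady heat leak through the insulation, in watts

def hold_time_days(ice_mass_kg: float, heat_leak_w: float) -> float:
    """Days until the ice charge has fully melted at a constant heat-leak rate."""
    energy_budget_j = ice_mass_kg * LATENT_HEAT_FUSION
    return energy_budget_j / heat_leak_w / 86_400  # 86,400 seconds per day

print(f"Estimated hold time: {hold_time_days(ice_mass_kg, heat_leak_w):.0f} days")
```

A real design analysis, like the one described above, must also capture radiative transfer across the multilayer insulation and the way outgassing degrades the vacuum over time, which is why the team coupled heat transfer and molecular flow models in COMSOL rather than relying on a constant-leak estimate like this one.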
One of the world’s most enigmatic mammals, the Saola (Pseudoryx nghetinhensis), could be on the brink of extinction, according to a group of experts who held an emergency meeting in Lao PDR to try to save the animal. The Saola, which only became known to science in 1992, resembles the desert antelopes of Arabia, but is more closely related to wild cattle. It lives in the remote valleys of the Annamite Mountains, along the border of Lao PDR and Vietnam. “We are at a point in history when we still have a small but rapidly closing window of opportunity to conserve this extraordinary animal,” says William Robichaud, Coordinator of the Saola Working Group, set up by IUCN’s Asian Wild Cattle Specialist Group. “That window has probably already closed for another species of wild cattle, the Kouprey, and experts at this meeting are determined that the Saola not be next.” Conservation biologists based in four countries met in Vientiane, Lao PDR, last month and agreed that Saola numbers appear to have declined sharply since its discovery in 1992, when it was already rare and restricted to a small range. Today, the Saola’s increasing proximity to extinction is likely paralleled by only two or three other large mammal species in Southeast Asia, such as the Javan Rhinoceros, according to the experts. The situation is compounded by the fact that there are no populations of Saola held in zoos. “The animal’s prominent white facial markings and long tapering horns lend it a singular beauty, and its reclusive habits in the wet forests of the Annamites an air of mystery,” says Barney Long, of the IUCN Asian Wild Cattle Specialist Group. “Saola have rarely been seen or photographed, and have proved difficult to keep alive in captivity. None is held in any zoo, anywhere in the world. Its wild population may number only in the dozens, certainly not more than a few hundred.” The Saola is listed as Critically Endangered on the IUCN Red List of Threatened Species™, which means it faces “an extremely high risk of extinction in the wild”. With none in zoos and almost nothing known about how to maintain them in captivity, for Saola, extinction in the wild would mean its extinction everywhere, with no possibility of recovery and reintroduction. The Saola is threatened primarily by hunting. The Vientiane meeting identified snaring and hunting with dogs, to which the Saola is especially vulnerable, as the main direct threats to the species. Experts at the meeting emphasized that the Saola cannot be saved without intensified removal of poachers’ snares and reduction of hunting with dogs in key areas of the Annamite forests. Improved methods to detect Saola in the wild and radio tracking to understand the animal’s conservation needs are needed, according to the biologists. In addition, there needs to be more awareness in Lao PDR, Vietnam and the world conservation community of the perilous status of this species and markedly increased donor support for Saola conservation, according to the group.
Microscopes are mechanical devices used for viewing materials and objects so small that they cannot be detected by the naked eye. The procedure carried out with such an instrument, called microscopy, uses optical science and the control of light through lenses to study small objects at close quarters. The basic microscope consists of a number of interrelated parts: a cylinder that provides an essential column of air between the ocular lens (eyepiece) at the top and the objective lenses mounted on a rotating turret at the bottom, above a stage with a central hole through which light shines from the solid U-shaped stand below. Magnifying values for the ocular typically range through x5, x10, and x20, while the values for the objective lens span a wider range: x5, x10, x20, x40, x80, and x100. These values provide the observer with a spectrum of possible working distances and degrees of sharpness as needed for viewing and analysis. Several different kinds of microscopes exist, each having particular features: Optical Microscope: the first ever produced. The optical microscope has one or two lenses that work to enlarge and sharpen images placed between the lower-most lens and the light. Simple Optical Microscope -- uses one lens, a convex lens, in the magnifying process. This kind of microscope was used by Anton van Leeuwenhoek in the late seventeenth and early eighteenth centuries. Compound Optical Microscope -- has two lens systems, one forming the eyepiece for the ocular view and one of short focal length serving as the objective. Several lens elements work together to reduce both spherical and chromatic aberrations so that the view is clear and undistorted. Stereo Microscope: also known as the dissecting microscope, it uses two separate optical paths (one for each eye) to produce a three-dimensional picture of the object from two slightly different viewpoints. This kind of microscope is used for microsurgery, dissection, watch-making, small circuit board manufacturing, and so on. Inverted Microscope: this kind of microscope views objects from below, rather than from above as a conventional microscope does. The inverted microscope is used mainly for the study of cell cultures in liquid. Petrographic Microscope: this type of microscope features a polarizing filter, a rotating stage, and a gypsum plate. Petrographic microscopes specialize in the study of inorganic compounds whose optical properties change as the viewing angle shifts. Pocket Microscope: this kind of microscope consists of a single shaft with an eyepiece at one end and an adjustable objective lens at the other. This old-style microscope has a case for easy carrying. Electron Microscopes: this kind of microscope uses a beam of electrons, focused by magnetic lenses, to provide far higher resolution than light microscopes. Two electron microscopes are the Scanning Electron Microscope and the Transmission Electron Microscope. Scanning Probe Microscope: this type of microscope measures the interaction between a physical probe and a sample to form a micrograph. Only surface information can be gathered and examined from the sample.
Kinds of scanning probe microscopes include the Atomic Force Microscope, the Scanning Tunneling Microscope, the Electric Force Microscope, and the Magnetic Force Microscope. Science would not be what it is today without the microscope, as this device is a primary instrument by which the world and all of its aspects are measured and examined. It is with the microscope that we can look inside ourselves to learn and understand who we are and how we work.
Article: “To Beat Back Poverty, Pay the Poor” Main Characteristics of Human Rights Some main characteristics of human rights are that the rights are framed especially with the interests of the people in mind. Human rights are different for each group of individuals. The people are entitled to make the judgments and decisions they feel are necessary for the community. The chapter simply states that for every different geographical culture, the human rights of the people are the same. Politics play a big role in the human rights system. While politics may not shed light on every right, the rights still exist. The protections over these rights are also very minimal, and the chapter describes the rights themselves as minimal as well. It is said that the rights are inalienable, indivisible, and interdependent. With this being said, human rights cannot be applied in a discriminatory way, they cannot be taken away from a person, and each human right a person holds depends on the others that allow him or her to keep a job or participate in daily activities. In order for certain freedoms and beliefs to be recognized, human rights have to be put into action. Human rights are established to help the people, and they are also there to help keep peace, order, and togetherness within a community. Human rights are to be exercised, but they are to be exercised within reason, law, and just cause. Classification of Human Rights Human rights are classified into numerous categories. The two main categories are classic/civil rights and social rights. The subcategories under social rights include economic and cultural rights. Classic/civil rights are reserved to the individual. Political rights deal with the government and the people's input into it. Social rights require positive acts of the government to create the conditions necessary to sustain human life. The classifications of human rights are meant to ensure that everyone has the right to social security, as well as the economic, social, and cultural rights previously listed. These duties may be hard to pin down because there are some things that people are not quite clear on, such as the negative connotation placed on civil and political rights and the positive connotation placed on economic, social, and cultural rights. With that being said, the actual view of these rights is that they all have negative and positive traits, and not just one or the other. History of Human Rights The actual concept of rights began in England in the thirteenth century. This started with the signing of the Magna Carta in 1215 by King John. This established rights such as the right of the church to be free from governmental interference and the right of all free citizens to own property free from excessive taxes. The idea of the Magna Carta was that the laws of the land should not be swayed one way or based on one individual's personal agenda. The English Bill of Rights, established in 1689, was written after the Glorious Revolution. The Bill of Rights dealt with the country's fundamental concerns. These rights made the King an equal person to the people. Excessive bail or fines, cruel and unusual punishment, and unfair trials were protected against by this bill. In 1776, the U.S. Declaration of Independence, written by Thomas Jefferson, was signed, declaring the British colonies' independence from the British Empire. In the beginning stages of this document, the equal rights of women went unrecognized, but since then many more rights have been added.
In addition to the Declaration of Independence, the Declaration of the Rights of Man and of the Citizen was established. Both of these documents were ultimately written to protect the rights of the people. Human rights issues remained a topic of discussion through the nineteenth and twentieth centuries. The people believed that the reason for the acts of violence occurring around that time was because of the…
Decision-Making and Data in Teaching Teaching involves a lot of decision making. When creating and teaching a lesson, decisions have to be made constantly, including during the initial planning phase. These decisions include
- how to start class;
- ways to sequence ideas or activities within the lesson;
- how to assess students’ understanding of the content;
- when to make these assessments;
- how to group students;
- what work to have students produce;
- where to position ourselves at different points in the lesson; and
- how to close the lesson.
While that list is not exhaustive, it highlights how decisions occur throughout all aspects of a class. Teaching also involves a lot of data. Each decision leads to lots of new information that could tell you about the impact of your decision (e.g., How did students respond to the initial warm-up question? Were they ready to discuss the warm-up when you sought their attention? What questions or points of confusion, if any, were raised when the class transitioned from the initial activity to the one that followed it?). In addition to using data to reflect on past decisions, you can use data to inform future decisions, such as how you will begin tomorrow’s lesson or how you will sequence content or activities next year when covering the same unit. Data and decision making go hand in hand. We can use data to inform our decisions, and our decisions can generate more or less useful data for us. But what makes data “useful”? Before explaining what “useful data” is, it is important to first clarify what I mean by data. What Is Data? In the paragraphs above, I’ve been using data to mean any information that can tell me what students know or are able to do. In science, technology, engineering, and mathematics (STEM), there is often a large emphasis placed on numerical, or quantitative, data. These numbers often serve as data in experiments, projects, or problems. Likewise, many conversations about “data” in schools and education today place a great emphasis on such quantitative information. And so, while all these factors point to the view of data being numbers, I want to encourage us to think of data more broadly. Data can involve the responses students share to questions students and teachers raise in class. It can be how students handle equipment during a lab. It can be their discussions with other students about an engineering project. All of this information provides valuable data about what students know, or are able to do, with content or related skills, and it can be difficult to attach numbers to this information. So, in sum, I view data as any information about what students know or are able to do, and it can be quantitative or qualitative. Using Data to Make Decisions With such a broad definition of data, you might realize that there is a nearly infinite amount of data a teacher could focus on. Many people might focus on quantitative test scores, often arising from end-of-unit or end-of-semester assessments, as indicators of what students know or are able to do. Yet there is other, more diverse data that you could (and probably often do!) use each lesson to support you in making decisions that will benefit student learning. And so I return to the idea of more or less useful data. The usefulness of data is contingent on how well it informs us of whatever interests us. As teachers, what interests us is often what students know or are able to do. That is our goal.
And with each particular lesson or activity, we have targeted goals identifying specific things we want students to know or be able to do. Once you know one or more goals for a lesson or activity, it’s important to identify how you will find out how well each goal has been met. To do that, you will use some tools. These tools could be labs students complete, projects they develop, worksheet questions they answer, discussions they engage in during class, or something else. It’s critical that these tools connect to your goal(s). The tools should lead students to generate data that is directly related to your goal. In other words, the data arising from the tool should provide you with information about what students know, or are able to do, around what interests you. Reviewing the data, which can be qualitative or quantitative, you can make inferences, or claims identifying patterns or trends in the data that can tell you how well each goal has been met. That data can support you in making decisions about next steps to take with the class as well as what to possibly do in the future. The cycle described above and seen in Figure 1 provides a framework for data-based decision making that can support you in formatively assessing how well students know and are able to do what is important to you and your course. Instead of waiting until a summative assessment, such as an end-of-unit test, you can use this cycle to support you in making adjustments throughout an individual lesson or series of lessons to better support student learning. That’s a major benefit of formative assessments! This framework, including additional ideas and resources related to formative assessment, is something my coauthors and I call the Feedback Loop in our book The Feedback Loop: Using Formative Assessment Data for Science Teaching and Learning. To help make the above framework more concrete, here’s one example of it that could occur during a portion of one lesson (a small sketch of this cycle in code appears at the end of this section):
- Goal – Students will appropriately apply the law of conservation of momentum to predict what will happen in collisions.
- Tool – Students will watch, and predict the outcome of, simulations of collisions. For each collision, they will be provided multiple-choice options, asked to choose their answer, and allowed to discuss reasons for their decisions.
- Data – There are two pieces of data that will be generated, including (1) how many students voted for each option and (2) the reasons provided for their selections.
- Inferences – Using their votes and associated reasons, a teacher could feel more confident making claims about how well the students could apply the law of conservation of momentum to predict outcomes of collisions or might have a better sense of points of confusion, or possible preconceptions, they exhibit.
Teaching involves a constant flow between data and decision-making. Data can be incredibly valuable in making better decisions that support you and your students in making good progress towards learning goals. As you reflect on this process and your experiences, how has data benefited (or how could it benefit) decisions you’ve made as a teacher? What have you found to be the most valuable tools for specific goals? How can you embed these ideas within your lessons so you’re regularly generating data to support your decision-making process? Furtak, E. M., Glasser, H. M., & Wolfe, Z. M. (2016). The Feedback Loop: Using formative assessment data for science teaching and learning. Arlington, VA: NSTA Press.
Note: The Feedback Loop is available for purchase from the NSTA Science store (www.nsta.org/store)
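To make the collision example above concrete, here is a minimal sketch of the data-to-inference step, assuming a perfectly inelastic collision and a made-up set of class votes; the 70% threshold is likewise an arbitrary illustration, not a recommendation from the book.

```python
# Minimal sketch of the goal -> tool -> data -> inference cycle from the
# collision example above. The vote tallies are made up for illustration.

def inelastic_final_velocity(m1, v1, m2, v2):
    """Momentum conservation for a perfectly inelastic collision:
    m1*v1 + m2*v2 = (m1 + m2)*v_final."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Goal/tool: students predict the outcome of a 2 kg cart at 3 m/s hitting
# a stationary 1 kg cart, with the carts sticking together after impact.
correct = inelastic_final_velocity(2.0, 3.0, 1.0, 0.0)   # 2.0 m/s

# Data: hypothetical class votes on multiple-choice options (m/s).
votes = {1.0: 4, 2.0: 18, 3.0: 6}

# Inference: how many students chose the option consistent with momentum conservation?
total = sum(votes.values())
share_correct = votes.get(correct, 0) / total
print(f"Predicted v_final = {correct} m/s; {share_correct:.0%} of the class agreed")
if share_correct < 0.7:
    print("Inference: revisit momentum conservation before moving on")
```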
Algae helps explain Antarctic ice sheet formation Antarctic ice sheets first began to form some 34 million years ago, during a period of sharply declining atmospheric carbon dioxide levels, a new study of ancient algae suggests. Jefferson Beck/Goddard Space Flight Center/NASA/AP Antarctica's vast ice sheets first grew when carbon dioxide levels in the Earth's atmosphere sharply declined millions of years ago, scientists now find. Carbon dioxide is a greenhouse gas — it traps heat radiating away from the Earth's surface. High levels of it in the atmosphere are linked with global warming, while low levels are linked with global cooling. Many such periods of warming and cooling have occurred in the Earth's history, with repercussions for climate around the planet. But reconstructions of what atmospheric carbon dioxide levels were like back when glaciers began to cover Antarctica nearly 34 million years ago had appeared contradictory. Some research actually suggested carbon dioxide levels rose just before and across this time, a period known as the Eocene-Oligocene climate transition, which is the opposite of what would be expected as prime glacier-growing conditions. Now research suggests that a sharp decline of atmospheric carbon dioxide levels may have played a major role in seeding Antarctica's glaciers. Scientists investigated alkenones — tough organic compounds only produced by certain types of algae — to find the carbon dioxide signatures of this period. These photosynthetic organisms would have used carbon dioxide that entered the water from the air, so looking at the chemical makeup of ancient deposits of alkenones can give an idea of what levels of the gas were like in the past. Paleoclimatologist and geochemist Mark Pagani at Yale University and his team collected alkenones at six deep sea locations across the planet. They sampled spots both near and far from the poles, to get a better sense of what global atmospheric carbon dioxide levels were like during this particular period. The investigators focused on carbon isotopes within these compounds. All isotopes of an element have the same number of protons, but each has a differing number of neutrons — for instance, carbon-12 has six neutrons, while carbon-13 is heavier with seven. The more carbon dioxide there is in the water — and thus air — the more often alkenones are made up of lighter carbon isotopes. This is because the enzyme that helps the algae suck in carbon dioxide prefers such isotopes, and the more of the gas there is overall, the more chances this enzyme has to absorb the carbon it likes. By looking at carbon isotope ratios within the alkenones, the researchers found that carbon dioxide apparently decreased in the atmosphere just prior to and during the onset of glaciations in Antarctica. The contradictory alkenone findings published previously — ones from Pagani and his colleagues — likely came from locales with high levels of carbon dioxide that did not reflect what global levels overall were like, Pagani said. "The research supports a clear correspondence between carbon dioxide and climate change," Pagani told OurAmazingPlanet. "This is not a great surprise to those of us who study the history of Earth's climate, but given the politicization of science these days, connecting the dots between carbon dioxide and climate is increasingly important." "The geologic record is just waiting to reveal the nature of climate sensitivity to carbon dioxide and other greenhouse gases," Pagani added.
"Further carbon dioxide investigations of very warm periods in Earth history and better constraints on global temperatures through time will keep me busy." [How Two Degrees Will Change Earth] The scientists detailed their findings in the Dec. 2 issue of the journal Science.
What is a Neutrino…And Why Do They Matter? Neutrinos are teeny, tiny, nearly massless particles that travel at nearly the speed of light. Born from violent astrophysical events like exploding stars and gamma ray bursts, they are fantastically abundant in the universe, and can move as easily through lead as we move through air. But they are notoriously difficult to pin down. “Neutrinos are really pretty strange particles when you get down to it,” says John Conway, a professor of physics at University of California, Davis. “They’re almost nothing at all, because they have almost no mass and no electric charge…They’re just little wisps of almost nothing.” Ghost particles, they’re often called. But they are one of the universe’s essential ingredients, and they’ve played a role in helping scientists understand some of the most fundamental questions in physics. For example, if you hold your hand toward the sunlight for one second, about a billion neutrinos from the sun will pass through it, says Dan Hooper, a scientist at Fermi National Accelerator Laboratory and an associate professor of astronomy and astrophysics at the University of Chicago. This is because they’re shot out as a byproduct of nuclear fusion from the sun – that’s the same process that produces sunlight. “They’re important to our understanding of the kind of processes that go on in the sun, and also an important building block for the blueprint of nature,” Hooper said. Particle physicists originally believed that neutrinos were massless. But in the 1990s, a team of Japanese scientists discovered that they actually have a smidgen of mass. This tiny bit of mass may explain why the universe is made up of matter, not antimatter. Early in the process of the Big Bang, there were equal amounts of matter and antimatter, according to Conway. “But as the universe expanded and cooled, matter and antimatter were mostly annihilated. And a slight asymmetry favored matter over antimatter. We think neutrinos may have something to do with that process…. And it’s a puzzle, why we’re made out of matter and not antimatter.” Studying neutrinos is difficult. They’re tough to detect since they interact so weakly with other particles. But the newly-completed IceCube Neutrino Observatory will study neutrinos inside a cubic kilometer block of ice in Antarctica. Here’s how: when the neutrinos interact with atoms inside the detectors deep in the Antarctic ice, they sometimes give off puffs of energy. “As neutrinos pass through and interact, they produce charged particles, and the charged particles traveling through the ice give off light,” Conway said. “That’s how they’re detected. It’s like having a telescope for neutrinos underground.” Fermilab has an experiment that hurls a beam of neutrinos about 400 miles underground, from Illinois beneath Wisconsin to northern Minnesota, in about two milliseconds, and the lab is also planning a massive linear accelerator called Project X that will study the subatomic particles by sending them even farther. “If 100 years ago, I told someone that the universe was filled with massless, chargeless particles with no energy, I wonder if they’d have believed you,” Conway said. “Who knows where we’ll be 100 years from now.” If you have a question on science or technology for Just Ask, send an e-mail to [email protected] with “science question” in the subject line or leave it in the comments section below.
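As a quick sanity check on the "about two milliseconds" figure quoted above, one can assume the beam covers roughly 400 miles at essentially the speed of light:

```python
# Quick check of the "about two milliseconds" figure, assuming the neutrinos
# travel essentially at the speed of light over roughly 400 miles.

C = 299_792_458               # speed of light in m/s
distance_m = 400 * 1609.344   # 400 miles in metres

travel_time_ms = distance_m / C * 1000
print(f"Travel time: {travel_time_ms:.2f} ms")   # about 2.1 ms
```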
The second Anglo-Boer war was an alarmingly bloody conflict, with heavy loss of life on both the British and Boer sides. The war's causes are complex, involving a gold rush, the struggle for self-government on the part of the Boers, and England's need to protect British citizens in South Africa. The most striking civil rights issues during the second Boer War surround the concentration camps into which Boers were collected. This original design was first hand-drawn and then rendered in a modern style using graphic shapes of solid colour.
The megakaryocyte is a large cell in the bone marrow that creates platelets by fragmenting into small, odd-shaped pieces. These fragments (platelets) of the megakaryocyte's cytoplasm circulate in the blood as the body's first line of defense against blood loss, a process that is called hemostasis. This article will discuss how megakaryocytes give rise to platelets, as well as how megakaryocytes themselves arise from less differentiated cells, and how they mature. The bone marrow is a compartment inside some of the bones of vertebrates, including humans, that contains important stem cells that go on to make up all the cellular elements of the circulating blood and some of the cells of the immune system that live in the solid organs of the reticuloendothelial system. During thrombopoiesis (stimulated by thrombopoietin), DNA synthesis occurs in the nucleus without cytokinesis, a process known as endoreduplication. Therefore, the nucleus of the megakaryocyte can become very large and lobulated, which, under a light microscope, can give the false impression that there are several nuclei. In some cases, the nucleus may contain up to 64N DNA. Platelets are held within demarcation channels, internal membranes within the cytoplasm of megakaryocytes. Megakaryocytes release their platelets in one of two ways. The cell may release its platelets by rupturing and releasing its contents all at once in the marrow. Alternatively, the cell may form platelet ribbons into blood vessels. The ribbons are formed via pseudopodia and they are able to continuously emit platelets into circulation. Two-thirds of these platelets will remain in circulation while one-third will be sequestered by the spleen.
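Since endoreduplication doubles the DNA content with each round, the 64N figure mentioned above corresponds to five rounds starting from a normal diploid (2N) cell; a tiny sketch of that arithmetic:

```python
# Endoreduplication doubles the DNA content each round without cell division.
# Starting from a normal diploid (2N) cell, count the rounds needed to reach 64N.

ploidy = 2          # diploid starting point (2N)
rounds = 0
while ploidy < 64:
    ploidy *= 2     # one round of DNA synthesis with no cytokinesis
    rounds += 1

print(f"{rounds} rounds of endoreduplication give a {ploidy}N megakaryocyte")  # 5 rounds -> 64N
```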
One of the easiest ways to find out if you're sick with a cold, throat infection or a chronic illness such as mononucleosis is to check for swollen lymph nodes. Just take your fingers and press gently on the neck and throat area in search of pea-sized lumps. If you happen to find them, there's a good chance you are fighting some kind of microbial attack. Lymph nodes are part of a much larger biological network inside each of us known as the lymphatic system. It has a vital importance in keeping us healthy as its primary job is to remove unwanted chemicals such as toxins and waste products. Yet, when an infection takes hold, a different call to action takes place and the nodes become a hub for infection fighting action. When an invader hits, the lymph nodes become the primary staging area to prepare for an effective battle. Several different types of immune cells gather here to learn and share more about the opponent. Those that have visited the area arrive with molecular information of the attacker. They share their information with other cells including the ones responsible for the front line fighting, known as killer T-cells. The primed soldiers venture out to the battlefield where they aim to kill the intruders while others in the node ensure new recruits are properly readied for combat. Since the lymph nodes require far more space to accommodate all the cells, they tend to enlarge. These areas remain this way until the battle has been won, making it easy for you to check their progress. Once victory has been achieved, the area quickly drains and eventually returns to normal size within a few days. For the most part, researchers have focused on the process involved during enlargement of the lymph nodes as this plays an important role in keeping us healthy. The process of returning the node back to its original size — known as contraction — has been given relatively less attention. For the most part, the fluid drains, the unneeded cells die and the system returns to normal. But in 2014, a group of American researchers discovered a rather interesting phenomenon occurring as the lymph nodes contract. Somehow, the immune system also developed a memory of the battle and shared it with killer T-cells in preparation for another attack. This information suggested a new role for the lymph node after the infection was completed. At the time, the team was able to identify the cells performing this function as lymphatic endothelial cells, or LECs. These cells were known to be involved in maintaining the integrity of the lymph node during swelling and contraction. Yet how they managed to retain the information as well as share it with the killer T-cell soldiers remained a mystery. Now the team has come up with an answer. They have revealed how LECs manage to maintain memory and also pass the information on to the troops during this time of node contraction. The results demonstrate the lymph node is far more than a central hub for fighting infection. It also happens to be the place where memory is both stored and shared across the body. The group took a closer look at the LECs to find out how they might be sharing memory within the lymph node. Much as they found in 2015, these cells kept molecular records of the fight in the form of antigens. However, none of this information was going directly to the killer T-cells. This meant some other cell was briefing the troops.
The team explored the different cell types in the node and eventually realized the answer lay in a rather obvious option. It's known as a migratory dendritic cell, or MDC. Its usual role is to bring antigens from the battlefield to the lymph node so the troops can be primed for battle. Yet in this case, the cells acquired the information locally so they could act as a liaison between the LECs and the killer T-cells. With this result in place, the team tried to find out why memory seemed to occur during contraction of the lymph node. The answer turned out to be relatively straightforward. As the lymph node shrank, many of the LECs died off as they were no longer needed. The antigens contained in these cells were released into the environment and picked up by the MDCs. The information was processed in these cells and eventually given to the killer T-cells, who would then retain the memory as they shipped out to other areas of the body. The results of this study reveal the fascinating way your body deals with an infection and prepares itself for any future attacks. As an invader tries to gain a hold inside you, the lymph nodes swell and the LECs end up archiving the antigen information for later use. When the battle is won, the information is shared with the troops to keep them at the ready. This information also has great potential to be used in vaccination research. With this knowledge in hand, scientists may be able to focus on developing stronger memory to reduce or prevent waning. This eventually may lead to improved versions of current vaccine options, such as those against the mumps virus. This study also may open the door to develop new vaccines. By focusing on developing memory over simply priming an attack, we may be able to find ways to keep us safe from pathogenic intruders in the future, including some for which we currently have no vaccine option.
The immune system works with both the lymphatic and circulatory systems. Both of these systems help transport pathogens to immune organs so the immune system can eradicate them. The immune system also works with the integumentary system. The integumentary system is comprised of all skin cells in the body. The immune system works with the skin to help keep foreign pathogens out of the body. The skin is often the first line of defense against foreign pathogens. It acts as a barrier to the inner body and prevents pathogens from entering the body. The immune system works with the circulatory and lymphatic systems for transportation. Antigens and pathogens that enter the body must be transported to lymph nodes or the spleen for processing and eradication. Once they reach the spleen or the lymph nodes, the antigens are presented to lymphocytes and are tagged for destruction. Immune cells such as phagocytes and neutrophils engulf pathogens and destroy them. These immune cells and others inhabit the lymphoid organs and tissues such as the spleen and the lymph nodes. They circulate through the body by using the circulatory system. The circulating immune cells find the antigens that have yet to make it to the lymphoid organs.
Wildfires are a fundamental ecological process. Sophisticated models have been developed to evaluate the influence of daily weather on fire behavior but the role of seasonal or longer term climate is less certain. In ecological terms, a close linkage between fire and climate could diminish the importance of local processes, such as competition and predation, in the long-term dynamics of fire-prone ecosystems. The structure and diversity of such communities, which are regulated by fire frequency, extent, and intensity, may have nonequilibrial properties associated with variations in global climate. Successful prediction of vegetation change hinges on a better understanding of climatically driven disturbance regimes and the relative contributions of regional versus local processes to community dynamics. The southwestern United States is an ideal area for assessment of regional fire-climate patterns. Detailed meteorological records and fire statistics are available for extensive areas, and centuries-long climate and fire history proxies have been obtained from tree rings at many sites. Forests in this region lead the nation in average number of lightning fires and area burned by these fires each year. This vigorous fire regime ensues from an annual cycle of a variably wet cool season, a normally arid foresummer, and isolated lightning storms ushering the onset of the summer monsoonal rains. Lightning fires begin in the spring and peak in late June to early July and decrease significantly as the summer rainy season progresses. Interannual variations in fire activity probably derive from the influence of winter-spring precipitation on the accumulation and moisture content of the fuels. Annual ring growth in southwestern conifers is primarily a function of cool season moisture. Local surface burns are also recorded as fire scars in tree rings. Thus, tree-ring analysis allows simultaneous evaluation of the linkage between fire and climate. During the 1982-1983 El Nino episode, arguably the most severe of this century, National Forests in the United States sustained little fire activity while millions of hectares burned in Indonesia and Australia. Subsequently, a nationwide survey suggested that the relation between wildland fires and the El Nino--Southern Oscillation (ENSO) phenomenon is statistically significant only in the southeastern United States. However, this analysis relied on only 57 years of fire statistics and focused entirely on warm episodes in the tropical Pacific. In this report both warm (El Nino) and cold (La Nina) episodes in a 300-year record of fire activity for the southwestern United States are evaluated. Teleconnections with the tropical Pacific are indicated by correlations between the Southern Oscillation index (SOI) and rainfall over the Line Islands (LIRI) against precipitation, streamflow, and tree growth in the American Southwest. During the high-SO phase (La Nina), when sea surface pressure is higher than normal in the Southeast Pacific, the central Pacific cools anomalously and the Intertropical convergence zone (ITCZ) and South Pacific convergence zone (SPCZ) diverge on either side of the equator, the latter bringing abundant rains to Indonesia and eastern Australia. 
During the low-SO phase, when sea surface pressure is lower than normal over Tahiti, the central Pacific warms, the ITCZ and SPCZ converge on the equator, and the zone of deep convection shifts eastward to the Line Islands in the central Pacific, where tropospheric disturbances then propagate to extratropical regions. Northern winter (December to February, DJF) values of SOI are preferred for studying teleconnections because this is the season when the maximum pressure anomalies occur; precipitation surges or deficiencies over the Line Islands are most persistent from August through February. During the low-SO phase (abundant rainfall over the Line Islands), warm waters in the eastern Pacific provide the necessary energy for development of west coast troughs and weaken the tradewind inversion. This situation enhances interaction between tropical and temperate weather systems, and thus more moist air penetrates into the southwestern United States during fall and spring.
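A minimal sketch of the kind of teleconnection analysis described here is shown below: correlating a winter (DJF) Southern Oscillation index with the area burned in the following fire season. The records are synthetic and exist only to illustrate the expected sign of the relationship for the Southwest (high-SOI La Nina winters are dry, so more area tends to burn); they are not the multi-century reconstruction used in the study.

```python
# Illustration of the teleconnection analysis described above: correlating a
# winter (DJF) Southern Oscillation index with area burned the following fire
# season. All values are synthetic, for illustration only.

def pearson_r(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Synthetic records: positive SOI (La Nina) winters are dry in the Southwest,
# so the area burned tends to be larger in those years.
soi_djf     = [1.2, -0.8, 0.4, -1.5, 0.9, 0.1, -0.3, 1.6, -1.1, 0.7]
area_burned = [95,   40,  70,   25,  85,  60,   55, 110,   35,  80]  # thousands of ha, made up

print(f"r(SOI, area burned) = {pearson_r(soi_djf, area_burned):.2f}")
```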
There are two basic kinds of learners – Verbal and Nonverbal. Intelligence does not play a role in this distinction—it is simply a difference in learning and thinking styles. Verbal learners mainly think in words rather than pictures, with a sort of internal dialogue. Verbal thought is linear and follows the structure of language. Thinking verbally consists of composing mental sentences, one word at a time, at about the same speed as speech. Nonverbal learners mainly think in pictures. They think with 3-dimensional, multi-sensory images that evolve and grow as the thought process adds more information or concepts. They do not experience much, if any, internal dialogue. This thought process happens so much faster than verbal thinking that it is usually subliminal. Words that enable a picture-thinking person to imagine a picture have meaning and are clearly understood. However, such a person is unconsciously challenged when faced with certain words or symbols in the English language. Can you think of a picture for any of the following words? These words, and at least 214 others like them, are at the root of reading difficulties for a picture-thinker. With no picture to process, the material quickly loses meaning – causing confusion, frustration, and fatigue. Consider, for a moment, that up to 60% of any given written paragraph consists of words that DO NOT trigger a visual picture. Imagine, as a person who thinks in pictures, trying to obtain the real meaning of a paragraph when 60% of the words are words with which you cannot think! The ability to think in 3-dimensional, multi-sensory pictures is a talent that all Dyslexics share. It can, however, cause problems and confusion when it comes to 2-dimensional symbols and words. Nonverbal thought is multi-dimensional and object-based; verbal thought is linear and sequential.
Cold Accretion Theory Cold accretion theory is a term that can be used to distinguish modern theories of planet growth from earlier theories. The mechanics of modern cold accretion are reviewed in the article Accretion. Earlier theories visualized planets as forming out of hot blobs of solar plasma, or large, gravitationally unstable 'protoplanet' clouds which were relatively large segments of the solar nebula at any given solar distance. The planets were generally visualized as forming either from very hot matter, or (at least) forming in a molten state because of the high temperatures reached during gravitational collapse. However, Urey (1952) and Shmidt (1958) theorized that the planets formed from innumerable small, solid (i.e. 'cold') bodies. Urey used the argument that planets are extremely deficient in inert gases. This means they could not have formed by gravitational contraction of solar or nebular clouds, because those would have contained solar abundances of inert gases.
One overriding aspect of medieval myths is that every hero is expected to act and display himself in a certain way. But in every myth there are certain differences and certain similarities. The tale of Odysseus, the legend of Beowulf, and the history of Prince Igor all feature basically the same heroic traits; some are expressed in different ways, but if you look deep enough they are all the same, though each hero also has traits of his own. These men have proven themselves worthy of being considered heroes. Although all of these tales come from different times, they are all different in some ways but also the same. Prince Igor was considered a hero because he was noble. He was noble because when he and his men were captured by their enemies, he did not leave his men to die even when he had a chance to escape and save his own life. This conflict also comes up in the legend of Beowulf, when he goes off into a cave to fight a dragon but never asks his men to come inside the cave to help him, because he does not want to put them in danger or let them be harmed in any way. Odysseus was also put in a similar position when he faced a one-eyed beast who had kept him and his men captive in a cave; instead of trying to escape by himself, he devised a plan to get his men out as well. That is why these men are considered heroes. One difference among them is that Odysseus's heroic traits also require a hero to be clever. He proved himself to be clever in all of his adventures...
Though a cornerstone of thermodynamics, entropy remains one of the most vexing concepts to teach budding physicists in the classroom. As a result, many people oversimplify the concept as the amount of disorder in the universe, neglecting its underlying quantitative nature. In The Physics Teacher, co-published by AIP Publishing and the American Association of Physics Teachers, researcher T. Ryan Rogers designed a hand-held model to demonstrate the concept of entropy for students. Using everyday materials, Rogers’ approach allows students to confront the topic with new intuition — one that takes specific aim at the confusion between entropy and disorder. “It’s a huge conceptual roadblock,” Rogers said. “The good news is that we’ve found that it’s something you can correct relatively easily early on. The bad news is that this misunderstanding gets taught so early on.” While many classes opt for the imperfect, qualitative shorthand of calling entropy “disorder,” it’s defined mathematically as the number of ways energy can be distributed in a system. Such a definition merely requires students to understand how particles store energy, formally known as “degrees of freedom.” To tackle the problem, Rogers developed a model in which small objects such as dice and buttons are poured into a box, replicating a simple thermodynamic system. Some particles in the densely filled box are packed in place, meaning they have fewer degrees of freedom, leading to an overall low-entropy system. As students shake the box, they introduce energy into the system, which loosens up locked-in particles. This increases the overall number of ways energy can be distributed within the box. “You essentially zoom in on entropy so students can say, ‘Aha! There is where I saw the entropy increase,'” Rogers said. As students shake further, the particles settle into a configuration that more evenly portions out the energy among them. The catch: at this point of high entropy, the particles fall into an orderly alignment. “Even though it looks more orientationally ordered, there’s actually higher entropy,” Rogers said. All the students who participated in the lesson were able to reason to the correct definition of entropy after the experiment. Next, Rogers plans to extend the reach of the model by starting a conversation about entropy with other educators and creating a broader activity guide for ways to use the kits for kindergarten through college. He hopes his work inspires others to clarify the distinction in their classrooms, even if by DIY means. “Grapes and Cheez-It crackers are very effective, as well,” Rogers said.
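The quantitative definition Rogers appeals to, the number of ways energy can be distributed, can be made concrete with a standard counting exercise. The sketch below uses an Einstein-solid-style count (the ways to share q indistinguishable energy quanta among N particles) together with Boltzmann's S = k ln(omega); it is not a model of the dice-and-buttons box itself, just an illustration of how "more ways to share energy" becomes a number.

```python
# Number of ways q indistinguishable quanta can be shared among N particles
# (an Einstein-solid-style count) and the corresponding Boltzmann entropy
# S = k_B * ln(omega). Shaking the box adds energy: more quanta means many
# more ways to distribute them, hence higher entropy.

from math import comb, log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def microstates(n_particles: int, n_quanta: int) -> int:
    """Ways to distribute n_quanta among n_particles: C(q + N - 1, q)."""
    return comb(n_quanta + n_particles - 1, n_quanta)

def entropy(n_particles: int, n_quanta: int) -> float:
    return K_B * log(microstates(n_particles, n_quanta))

for q in (1, 5, 20):
    print(f"N=10 particles, q={q:>2} quanta: omega={microstates(10, q):>10}, "
          f"S={entropy(10, q):.2e} J/K")
```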
Cause and Effect Diagram... Information | Understanding | Best Practice. We can usually identify many problems throughout a process. The question is how best to permanently and effectively address the particular problems we wish to focus on. The Cause and Effect diagram is a method of identifying potential root causes of the problem. The first stage in the Cause and Effect process is normally the use of Brainstorming. The Brainstorming process is highly effective at extracting ideas and knowledge from participants. Defining the problem and the scope is critical. If the problem is not clearly defined so that all involved can clearly focus on it, the risk arises that suggestions offered during the brainstorming will not aim to address the original problem. This can very quickly result in the team addressing the wrong problem, i.e. not addressing the real (root cause) problem. The scope also needs to be very clearly defined up front. Too wide a scope and the process will become unworkable, as there will be too many potential causes. Too narrow a scope and the problem will not be adequately addressed. Even if a problem intuitively has (say) three potential causal areas, it may be appropriate to focus on just one or two, address them, then move on to other areas. In this regard, too narrow a scope, with a commitment to revisit the problem once the first set of corrective actions has been implemented, may be preferable to too wide a scope. In a typical Cause and Effect Diagram, we identify the problem and arrange the potential causes in a fishbone-type arrangement. If we just focus for this example on “Method”: with a focus on the method used in the process, we start to ask why. Why would the method cause this problem? Why, why, why…? What is the effect or problem to be solved? If we continue asking why, we will build up the Cause and Effect diagram. In a Cause and Effect Diagram, we need to keep asking WHY. Why could material cause the effect? (Possible answer) Poor quality batch received. Why? Did not use preferred vendor. Why? The preferred vendor could not supply in time, due to internal process difficulties. Why? … Normally, the question "why" would go down to the fifth level, but clearly more or fewer levels may be appropriate depending on the complexity involved. Once we get to the root cause, we can consider proposing solutions to address it. We continue this process for each of the spines of the Cause and Effect diagram. The end result will be a list of potential root causes, which we can then list out, review, and prioritize before implementing corrective actions. (A small sketch of such cause chains in code appears after the tools list below.) Quality Management Tools and Techniques …
- Continuous improvement utilizing Analytical Techniques
- 5 why’s analysis
- Process Flow Diagrams/Flowcharts/Process Mapping
- Check sheets/Check Lists
- Run charts
- Scatter Diagrams/Scatter Plot
- Cause and Effect/Fishbone/Ishikawa Diagrams
- Identifying sources & causes of variation
- Control/Shewhart Charts/DPU Charts
- Cpk and Ppk Analysis
- Pareto Analysis
- Bottleneck Analysis
- Etc.
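The cause chains built up by repeatedly asking "why" can be captured in a simple data structure. The sketch below is illustrative only: the spines and causes echo the Material example above, and the deepest entry on each spine is treated as a candidate root cause to review and prioritize.

```python
# A fishbone diagram represented as nested "why?" chains, one list per spine.
# The example content is hypothetical; the last entry in each chain is a
# candidate root cause to review, prioritise, and address.

fishbone = {
    "effect": "Batch failed final inspection",
    "spines": {
        "Method":   ["Wrong torque setting used", "Work instruction out of date",
                     "No review step after last process change"],
        "Material": ["Poor quality batch received", "Preferred vendor not used",
                     "Preferred vendor could not supply in time"],
        "Machine":  ["Fixture alignment drifted", "No scheduled calibration"],
    },
}

def candidate_root_causes(diagram):
    """Return the deepest 'why' on each spine as a candidate root cause."""
    return {spine: chain[-1] for spine, chain in diagram["spines"].items()}

print("Effect:", fishbone["effect"])
for spine, cause in candidate_root_causes(fishbone).items():
    print(f"  {spine}: {cause}")
```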
Differences Between Human and Animal Eyes Our jobs are all about keeping human eyes healthy and making sure our patients see clearly. However, every once in a while, we like taking a look at the fascinating world of animal eyesight. There are some incredible eyes in nature that work very differently from the way human eyes do. Human Eyes Versus Animal Eyes One thing all eyes have in common is that they focus light onto a retina, turn the image into signals, and send them to the brain. Depending on what the animal does to survive, that requires different adaptations. The ones we need are things like depth perception, the ability to see movement, and color vision. You may have noticed that predator animals tend to have their eyes on the front of their skulls (like us), while prey animals (like sheep and rabbits) have their eyes on the sides. Predators use front-facing eyes for binocular vision, which allows them to pinpoint how far away a prey animal is. Prey animals use their side-facing eyes to see predators coming from most angles, making it harder to sneak up on them. Eye position is only the beginning. Eagles, for instance, have much deeper foveae than we do, which is like having built-in telephoto lenses. They can see incredible detail from great distances, and their field of vision is wider. They can also see a wider range of wavelengths, up into the UV spectrum! The Best Eyes in the Animal Kingdom Eagles have the best eyesight in the skies (at least during the day), but they don't win every category. There are so many different amazing eyes in the wild. - The animal with the best color vision (that we know of) is the bluebottle butterfly. While we only have three types of cones to detect different colors, bluebottles have fifteen, some of which see into the UV spectrum. - The animal with the best night vision is the owl. Their eyes are shaped more like tubes and don't move in their sockets (that's why they swivel their heads so dramatically). Their eyes are very large and their retinas have five times as many rods as ours. They also have a biological mirror built into their retinas, the tapetum lucidum, which improves their night vision even more. - In the water, sharks have the best eyes. They are adapted to hunting in murky, dark conditions. Like owls, they have tapeta lucida, and their eyes have a protective layer that shields them from the water. - The most complex eyes we know about are those of the mantis shrimp. They have eyestalks that move independently, and each compound eye is divided into three separate regions. Each segment does different things and communicates with different parts of the brain, and they have twelve types of photoreceptors. Keep Us Updated About Any Changes to Your Vision Human eyes can't do a lot of the things all these animals' eyes can do, but there are things they should be doing very well. If yours aren't, or if you've noticed any changes to your eyesight, don't hesitate to schedule an appointment so we can discover what's causing the problem. It could be as simple as needing a stronger prescription.
We learn new skills by repetition and reinforcement learning. Through trial and error, we repeat actions leading to good outcomes, try to avoid bad outcomes and seek to improve those in between. Researchers are now designing algorithms based on a form of artificial intelligence that uses reinforcement learning. They are applying them to automate chemical synthesis, drug discovery and even play games like chess and Go. Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have developed a reinforcement learning algorithm for yet another application. It is for modeling the properties of materials at the atomic and molecular scale and should greatly speed up materials discovery. Like humans, this algorithm “learns” problem solving from its mistakes and successes. But it does so without human intervention. Historically, Argonne has been a world leader in molecular modeling. This has involved calculating the forces between atoms in a material and using that data to simulate its behavior under different conditions over time. Past such models, however, have relied heavily on human intuition and expertise and have often required years of painstaking efforts. The team’s reinforcement learning algorithm reduces the time to days and hours. It also yields higher quality data than possible with conventional methods. “Our inspiration was AlphaGo,” said Sukriti Manna, a research assistant in Argonne’s Center for Nanoscale Materials (CNM), a DOE Office of Science user facility. “It is the first computer program to defeat a world champion Go player.” The standard Go board has 361 positional squares, much larger than the 64 on a chess board. That translates into a vast number of possible board configurations. Key to AlphaGo becoming a world champion was its ability to improve its skills through reinforcement learning. The automation of molecular modeling is, of course, much different from a Go computer program. “One of the challenges we faced is similar to developing the algorithm required for self-driving cars,” said Subramanian Sankaranarayanan, group leader at Argonne’s CNM and associate professor at the University of Illinois Chicago. Whereas the Go board is static, traffic environments continuously change. The self-driving car has to interact with other cars, varying routes, traffic signs, pedestrians, intersections and so on. The parameters related to decision making constantly change over time. Solving difficult real-world problems in materials discovery and design similarly involves continuous decision making in searching for optimal solutions. Built into the team’s algorithm are decision trees that dole out positive reinforcement based on the degree of success in optimizing model parameters. The outcome is a model that can accurately calculate material properties and their changes over time. The team successfully tested their algorithm with 54 elements in the periodic table. Their algorithm learned how to calculate force fields of thousands of nanosized clusters for each element and made the calculations in record time. These nanoclusters are known for their complex chemistry and the difficulty that traditional methods have in modeling them accurately. “This is something akin to completing the calculations for several Ph.D. theses in a matter of days each, instead of years,” said Rohit Batra, a CNM expert on data-driven and machine learning tools. The team did these calculations not only for nanoclusters of a single element, but also alloys of two elements. 
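The details of the Argonne algorithm are in the team's paper; the toy sketch below is only meant to illustrate the general idea described above, a search that is rewarded when a trial set of force-field parameters reproduces reference data better and that adapts its moves accordingly. The Lennard-Jones form, the reference curve, and every number here are assumptions made up for the example, and in spirit it is closer to an adaptive hill climb than to a full reinforcement-learning agent; the real workflow fits far richer potentials against quantum-mechanical data.

```python
import random

def pair_energy(r: float, epsilon: float, sigma: float) -> float:
    """A Lennard-Jones pair potential, standing in for a force field to be fitted."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Hypothetical "reference" energies the model should reproduce (generated here from
# known parameters so the search has a well-defined target).
reference = [(r / 10, pair_energy(r / 10, 1.0, 1.0)) for r in range(9, 31)]

def reward(params) -> float:
    eps, sig = params
    # Higher reward for a smaller mean squared error against the reference data.
    return -sum((pair_energy(r, eps, sig) - e) ** 2 for r, e in reference) / len(reference)

random.seed(0)
params, steps = [0.5, 1.3], [0.1, 0.1]       # poor initial guess, initial move sizes
best = reward(params)
for _ in range(5000):
    i = random.randrange(2)                  # choose which parameter to perturb
    trial = params.copy()
    trial[i] += random.uniform(-steps[i], steps[i])
    score = reward(trial)
    if score > best:                         # positive outcome: keep the move and
        params, best = trial, score          # explore a little more boldly there
        steps[i] *= 1.1
    else:                                    # negative outcome: damp that move size
        steps[i] *= 0.95

print(f"fitted (epsilon, sigma) ~ ({params[0]:.3f}, {params[1]:.3f}), reward {best:.2e}")
```

Run as written, the search should drift toward the parameters used to generate the reference curve, which is the reward-feedback loop in miniature.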
“Our work represents a major step forward in this sort of model development for materials science,” said Sankaranarayanan. “The quality of our calculations for the 54 elements with the algorithm is much higher than the state of the art.” Executing the team’s algorithm required computations with big data sets on high performance computers. To that end, the team called upon the carbon cluster of computers in CNM and the Theta supercomputer at the Argonne Leadership Computing Facility, a DOE Office of Science user facility. They also drew upon computing resources at the National Energy Research Scientific Computing Center, a DOE Office of Science user facility at Lawrence Berkeley National Laboratory. “The algorithm should greatly speed up the time needed to tackle grand challenges in many areas of materials science,” said Troy Loeffler, a computational and theoretical chemist in CNM. Examples include materials for electronic devices, catalysts for industrial processes and battery components. The team reported their findings in Nature Communications. Aside from Sankaranarayanan, Manna, Batra and Loeffler, contributing authors from Argonne include Suvo Banik, Henry Chan, Bilvin Varughese, Kiran Sasikumar, Michael Sternberg, Tom Peterka, Mathew Cherukara and Stephen Gray. Also contributing was Bobby Sumpter, Oak Ridge National Laboratory. The work was supported by the DOE Office of Basic Energy Sciences. The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science. Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
Language is, to some extent, hard to delimit in scope since it overlaps with many fields and differs in its functions. But in general, language is a unique social phenomenon. That is to say, no natural language can survive on its own without a society. Due to multiculturalism and social mobility, language variation as an issue has attracted many researchers and sociolinguists. Language variation (or linguistic variation) refers to contextual, social, and regional differences in how a specific language is used by individuals or communities. This article expounds on three major points, namely geographical variation, social variation, and finally contextual variation. But before delving into the discussion of the three major points mentioned above, it is necessary to briefly define three main concepts in sociolinguistics so as to avoid ambiguity. These three concepts are: "dialect", "variety", and finally "slang". 1. Key concepts in sociolinguistics At first, "dialect" is seen as a geographical subdivision of language. The Longman Dictionary of Language Teaching and Applied Linguistics (fourth edition) defines a dialect as "a variety of a language, spoken in one part of a country (regional dialect), or by people belonging to a particular social class (social dialect or sociolect), which is different in some words, grammar, and/or pronunciation from other forms of the same language. A dialect is often associated with a particular accent. Sometimes a dialect gains status and becomes the standard variety of a country." (p. 168) A variety is defined as a set of linguistic items with a similar social distribution. With reference to "slang", it is viewed as a restricted set of new words and new meanings of older words, mixed with linguistic items with a much larger social distribution. After defining these terms, it is time to start discussing the three major elements of linguistic variation: 1. Geographical variation When it comes to geographical variation, it is quite interesting that so many languages in the world have a dialect continuum, in which speakers from neighboring dialect areas meet and communicate with each other without problems. Ralph Penny states that geographical variation is a universal characteristic of human language: speakers of the "same" language who live in different parts of a continuous territory do not speak in the same way. In addition, an isogloss, the boundary line between areas that differ in a given linguistic feature, plays an important role in marking off one variety from another. Korean, for instance, is often cited as an "Abstand" language, while the Turkic languages are often cited as "Ausbau" languages. An Abstand language is one whose status as an independent language rests on its linguistic distance from other languages, whereas Ausbau languages are closely related varieties within one geographical area that have been deliberately developed into distinct standard languages. 2. Social Variation As far as social variation is concerned, societies are approached from two different views. The first one is the social network and the second one is social stratification. With reference to social networks, two notions matter: density, as in dense networks in which everyone knows everyone else, and multiplexity, in which two individuals are linked to each other in more than one capacity. This article then draws a distinction between jargon and slang.
The latter is very informal and could include impolite words, whereas jargon is seen as a set of vocabulary items used by members of a particular profession. 3. Contextual variation The last major point in the article is contextual variation. This is variation within the individual; that is to say, we all vary our language between contexts. In clarifying contextual variation, it is worth taking into account language policy and how it leads countries to distinguish three different levels of language, namely the official language, the national language, and the local language. To sum up, it is remarkable to see how humans use different language varieties, and how individuals are able to refine their own way of speaking for regional and social purposes. It is also fascinating that some regions are home to more than three hundred languages and a wealth of linguistic variety.
Encouraging your child to eat a balanced and healthy diet with lots of vegetables, fruit, protein, dairy and carbohydrates, but low in fats, sugar and salt, can help to lower your child's risk of tooth decay. Some foods, such as fruit and milk, naturally contain sugar. These foods do not need to be limited under the guidelines below. Instead, food and drinks that contain "free sugars" (sugars that have been added and do not occur naturally) should be limited to less than 5% of our calorie intake. Food packaging can be confusing because sugar is listed in a lot of different ways. These can include glucose, fructose, sucrose, dextrose, maltose, honey or syrups, to name a few. Even if products are advertised as containing natural sugars, they can still be harmful to teeth. For 4–6-year-olds the recommended maximum intake of free sugars is no more than 19g per day. This is equal to 5 sugar cubes. For 6–10-year-olds the recommended maximum intake of free sugars is no more than 24g per day, which is equal to 6 sugar cubes. For children aged 11 years and over, the recommended maximum intake of free sugars is no more than 30g per day, which is equal to 7 sugar cubes. It is important to avoid giving children sugary food and drink before bedtime and to only give fruit juice or sweet foods at mealtimes. Although fruit juice counts as one of your child's 5 a day, even unsweetened fruit juice is sugary, so it is advised that consumption should be limited to no more than 150 ml a day.
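The age bands above lend themselves to a simple lookup. The following is a minimal sketch, assuming only the figures quoted in this article (19g, 24g and 30g) and treating one sugar cube as roughly 4g; the two quoted bands overlap at age six, so the sketch assigns that age to the lower figure. It is illustrative only and not dietary guidance.

```python
def daily_free_sugar_limit_g(age: int):
    """Daily free-sugar maximum in grams as quoted above; None where no figure is given."""
    if age < 4:
        return None
    if age <= 6:
        return 19   # roughly 5 sugar cubes
    if age <= 10:
        return 24   # roughly 6 sugar cubes
    return 30       # roughly 7 sugar cubes

def as_sugar_cubes(grams: float) -> float:
    """Convert grams of sugar to an approximate cube count (about 4 g per cube)."""
    return grams / 4

limit = daily_free_sugar_limit_g(8)
print(f"8-year-old: {limit} g/day, about {as_sugar_cubes(limit):.0f} cubes")
```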
There are three main parts of a tree: leaves, trunk and branches, and roots. Below we set out detailed anatomy illustrations showing all the parts of a tree, with illustrations for each section of the tree. 1. Big picture tree anatomy Overview parts of a tree explained: - Crown: This is the top area of the tree and includes branches and foliage (leaves). This is where photosynthesis occurs (see diagram above). - Foliage: the leaves of a tree. It refers to a leaf or to the leaves collectively. - Taproot: this is a type of root that grows straight down into the ground, with smaller lateral roots branching off from the main taproot. It is called a taproot because it can be thought of as the main "tap" that supplies water and nutrients to the rest of the tree. - Lateral root: roots that extend horizontally. These are thinner than the vertical taproots. Lateral roots anchor trees and provide support, as well as absorbing water and nutrients. 2. Parts of tree leaves Parts of a leaf explained: - Epidermis layer: it's like the outer skin of a leaf and regulates the exchange of gases and water for the leaf. It's transparent so light can pass through. - Cuticle part of a leaf: this covers the epidermis layer and waterproofs the leaf to prevent water loss. - Palisade mesophyll: this layer has many chloroplasts that take care of photosynthesis (using light to convert carbon dioxide and water into glucose for the tree). - Spongy mesophyll: As you can see from above, this layer is just beneath the palisade mesophyll layer. Just like the illustration shows, it's made up of air spaces, and while it has fewer chloroplasts than the palisade mesophyll, it is also involved in photosynthesis. - Stoma: this is actually a pore on the underside surface of tree (and plant) leaves. Adjacent "guard" cells regulate the opening and closing of the pore. This opening allows gases to be exchanged between the leaf and the air. 3. Parts of tree trunk Parts of a tree trunk explained: - Pith: this is the center of a tree trunk. It's soft and made up of living cells, and it transports water and nutrients to all parts of the tree. - Heartwood: it's a section made up of dead cells, but that doesn't mean it's useless. It helps support the tree and protects it from diseases and pests. - Medullary rays: these are thin layers of tissue in tree trunks. Like the pith, medullary rays transport water and nutrients. - Growth rings: these are concentric circles that each signify one year of growth. These rings are formed by the tree adding a new layer of wood between the bark and the trunk each growing season. They are created as a result of patterns in vascular tissues, and their growth rate is affected by factors such as rainfall, temperature, and competition. - Sapwood: this is the outer, living layer of a tree trunk that transports water and nutrients from the roots to the leaves. It is usually lighter in color than the heartwood, and is the tree's pipeline for moving water up to the leaves. The inner cells of the wood lose their vitality as new sapwood layers grow. - Cambium cell layer: this layer produces the new wood and bark the trunk adds each year; the bark protects the tree and helps it conserve water. - Inner bark: this is a layer of spongy material that helps transport nutrients created in the leaves to other parts of the tree. - Outer bark: this is the outermost layer, made up of dead cells. Its primary purpose is to protect the tree. 4. Parts of tree roots
5. The Life of Trees You've seen trees before, but do you know why they are shaped the way they are? Let's begin with what is going on under the soil. Trees are able to stand so tall thanks to their root systems. Depending on the variety of tree, the roots may grow very deep into the soil to provide the required support for the heavy trunk and branches above, or they may grow shallow in the soil but spread out widely. Root systems are in place not only to provide support; they are also how trees access water and nutrients from the soil. Roots suck up water from the soil through osmosis and then transport it all throughout the body of the tree, all the way up to the leaves in the sky. The trunk of a tree is made up of woody tissue that provides strength, stability, and flex, as well as vascular tissue that helps transport water and nutrients to all parts of the tree. Most trees are covered with a layer of bark that provides a protective shield for the vulnerable and valuable parts underneath. As we move upwards, we find ourselves in the canopy, or the crown, of the tree. This is where branches reach out from the trunk, and sometimes these branches are then divided into smaller shoots. Some tree species only have branches at the very top of their trunk, whereas other species have branches that grow out along the entire length of the trunk. At the end of the branches and shoots is where we find leaves (commonly associated with deciduous trees) or needles (commonly associated with coniferous trees). The leaves are able to capture energy from the sun and photosynthesize, converting water and carbon dioxide into sugar (tree food!). You can think of a forest of trees as the capillaries in your lungs. Trees are a big reason why humans and animals are able to breathe. They take carbon dioxide out of the air and turn it into oxygen. The fewer trees there are on the planet, the more carbon dioxide there will be in the atmosphere. The deforestation of the planet is a major contributor to climate change. Trees also help keep the ground in its place. Deep and ancient root systems prevent the earth from eroding or washing away during severe storms. Trees provide shelter and nutrients for many animals and insects. For humans they provide food, fuel, shade, construction materials, and much more. Trees as Individuals All of this is well and good, but it is also important to view trees as entities of their own, whose primary function is not to serve humans. Trees exist in communities. They support each other, they communicate, they learn, they adapt, they thrive, and they perish. Our survival is inextricably tied to theirs, but their survival is entirely separate from ours. So in an effort to celebrate the life of trees, we've compiled a list of 101 varieties of tree (out of the tens of thousands that exist).
On May 23, 1788, South Carolina became the eighth state to ratify the Federal Constitution. Although there was considerable opposition from the backcountry region, representatives from the capital, Charleston, and the surrounding lowcountry regions prevailed. This division in state politics would continue until a series of compromises was completed in 1808, balancing the representation of the two regions. A new state constitution was adopted by the South Carolina General Assembly in 1790. This document preserved the weak executive structure that dated back to before the American Revolution. For example, the governor did not possess veto power after 1790. The governor and lieutenant governor were each elected to a two-year term and were then required to stay out of the office for four years before becoming eligible for election again. The General Assembly comprised two branches, the House of Representatives and the Senate. Both bodies were elected by popular vote. Members of the House of Representatives served two-year terms. There were a total of 124 members, whose districts were determined by a combination of population and the amount of taxes generated. It was through electoral innovations like this that the lowcountry region maintained its political dominance even though it possessed a minority of the state's white population. Senators were elected to four-year terms. The most significant political issue in the state during this period was balancing the interests of the lowcountry and the backcountry. Under the Constitution of 1790, the state capital was moved from Charleston, on the coast, to Columbia in the interior. Eventually, the lowcountry representatives agreed to other constitutional amendments which increased the number of electoral districts in the backcountry region and led to a greater balance of political power. The Federalist Party dominated South Carolina in the 1790s, as it could count a number of prominent lowcountry planters among its ranks. Many South Carolinians played important roles for the Federalist Party at the national level. The Jeffersonian-Republicans, however, were rising in prominence, especially as Charles Pinckney and Pierce Butler, both of whom signed the Constitution for South Carolina, joined the rival party. Although the Federalists dominated the state until 1800, by 1804 there were no Federalists in power. South Carolina would remain a one-party state until the start of the Civil War. The Constitution of 1790 eliminated the religious qualification for voting and holding political office in South Carolina. All free white men who were 21 years of age, had lived in the state for two years, were residents of the district in which they were voting, owned fifty acres of land or a town lot, and paid taxes were eligible to vote. In 1810 an amendment to the state constitution eliminated the property qualification for voting, extending suffrage to all white men who had lived in the state for six months. Thus, South Carolina was among the very first states to allow universal white male suffrage.
Active Kinetic energy harvesting technology is considered green for several reasons: - Utilizes a Renewable Energy Source: Active Kinetic energy harvesting taps into naturally occurring and renewable sources of energy, such as vehicle movement, wind, and ocean waves. Unlike fossil fuels, which are finite and contribute to environmental pollution, kinetic energy is constantly replenished in the environment, making it a sustainable and renewable energy source. - Reduced Carbon Emissions: By utilizing kinetic energy instead of relying solely on traditional energy sources like fossil fuels, Active Kinetic energy harvesting helps reduce carbon emissions and mitigate climate change. It enables the generation of electricity without burning fossil fuels, which are a significant source of greenhouse gas emissions. This contributes to cleaner air quality and a lower carbon footprint. - Energy Efficiency: Active Kinetic generators allow for the capture, conservation and conversion of energy that would otherwise go unused or be wasted. By utilizing the energy from moving vehicles, wind, or ocean waves, they improve overall energy efficiency. This reduces the need for additional energy production and helps optimize resource utilization, resulting in a greener and more sustainable energy system. - Minimal Environmental Impact: Compared to some other forms of energy generation, such as large-scale hydropower or fossil fuel extraction, Active Kinetic energy harvesting technologies typically have a much lower environmental impact. Wind and ocean energy systems, for example, have a smaller footprint and fewer direct ecological consequences compared to damming rivers or extracting fossil fuels. Active Kinetic technology can be designed and implemented in ways that minimize disruption to ecosystems and wildlife habitats. - Localized Energy Generation: Active Kinetic energy harvesting often enables localized energy generation. For example, energy generated from vehicle movement can be used to power nearby streetlights or traffic signals, reducing the need for long-distance energy transmission. This localized generation reduces transmission losses and improves the overall efficiency of the energy system. - Diversification of Energy Sources: Incorporating kinetic energy harvesting technologies diversifies the energy mix, reducing reliance on a single source and enhancing energy security. By adding renewable sources like wind and ocean energy to the energy grid, it helps create a more balanced and resilient energy system that is less vulnerable to supply disruptions and price fluctuations associated with fossil fuels. Overall, Active Kinetic energy harvesting is considered green due to its utilization of renewable energy sources, reduction in carbon emissions, energy efficiency, minimal environmental impact, localized generation, and contribution to diversifying the energy mix. By harnessing the power of natural motion, it offers a sustainable and environmentally friendly approach to generating electricity.
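To put a rough number on the energy-capture idea above, here is a back-of-envelope sketch. The figures (a 1,500 kg vehicle at about 50 km/h and a 5% capture efficiency) are hypothetical assumptions chosen only for illustration, not measured values for any particular Active Kinetic device.

```python
def harvested_energy_joules(mass_kg: float, speed_m_s: float, efficiency: float) -> float:
    """Kinetic energy of a moving mass (0.5 * m * v^2) scaled by a conversion efficiency."""
    return 0.5 * mass_kg * speed_m_s ** 2 * efficiency

# Hypothetical example: a 1500 kg car passing at 14 m/s (~50 km/h) over a harvester
# that captures 5% of the car's kinetic energy.
energy_j = harvested_energy_joules(1500, 14, 0.05)
print(f"{energy_j:.0f} J per pass, about {energy_j / 3600:.2f} Wh")
```

Even with these generous assumptions the yield per pass is on the order of a couple of watt-hours, which is one reason such harvesters are usually discussed for localized loads such as streetlights and traffic signals rather than for grid-scale supply.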
Foraging Biology of Neglected Bee Pollinators (Special Issue) Foraging Activity in Plebeia remota, a Stingless Bee Species, Is Influenced by the Reproductive State of a Colony Colonies of the Brazilian stingless bee Plebeia remota show a reproductive diapause in autumn and winter. Therefore, they present two distinct reproductive states, during which colony needs are putatively different. Consequently, foraging should be adapted to the different needs. We recorded the foraging activity of two colonies for 30 days in both phases. Indeed, it presented different patterns during the two phases. In the reproductive diapause, the resource predominantly collected by the foragers was nectar. The majority of the bees were nectar foragers, and the peak of collecting activity occurred around noon. In contrast, in the reproductive phase, the predominantly collected resource was pollen, and the peak of activity occurred around 10:00 am. Although the majority of the foragers were not specialized in this phase, there was a larger number of pollen foragers compared to the phase of reproductive diapause. The temperature and relative humidity also influenced the foraging activity. Stingless bees collect several types of material on their foraging flights. Most of these materials are of plant origin, such as pollen, nectar, resin, latex, leaves, trichomes, fragrances, oils, seeds, and so forth. In addition, stingless bees also collect materials of other origins, such as animal feces, clay, water, and fungal spores, for example [1–3]. Among all these resources, pollen and nectar are the ones used as food. In some bee species oil is also used to provision brood cells, as in Centris (Hemisiella) tarsata and C. (H.) trigonoides. The other materials can be used for several purposes, especially construction and protection [2, 4]. Besides foraging, flight activity also includes waste removal, that is, removing garbage (detritus) from the colony. The detritus comprises feces, old combs, dead bees, and larval and pupal exuviae, among other things [1, 2, 7]. The foraging behavior varies seasonally throughout the year, especially in relation to the amount of pollen collected by the colonies. Climatic factors such as temperature, light intensity, wind, rain, and relative humidity, as well as plant resource availability, influence foraging. Colony internal factors, such as population size and amount of stored food, also influence the foraging behavior of the individual bees and of the colony [2, 4, 8–12]. Several aspects of the flight and foraging activity of some stingless bee species have already been studied: (i) the influence of external and internal factors, (ii) the size and the physiology of the bees, and (iii) the effect of daily and seasonal patterns of availability of floral resources on foraging. However, among the species that present reproductive diapause, the foraging pattern and the flight activity in relation to the phase of diapause and the phase of oviposition by the queen have been studied comparatively only in Plebeia saiqui [13, 14]. Reproductive diapause is characterized by an interruption of the cell provisioning and oviposition process (POP) in autumn and winter [15–17]. In stingless bees the provisioning and oviposition process (POP) comprises (i) the construction of brood cells one by one, (ii) the provisioning of these cells by the workers, (iii) oviposition by the queen, and (iv) the sealing of the cells by the workers, after which new cells are started.
Diapause also occurs in other stingless bee species, especially in the genus Plebeia: P. remota [15, 17], P. droryana , P. julianii , and P. wittmanni and has also been observed in some colonies of Melipona marginata obscurior . In this phase many changes in the architecture (i.e., construction of storage pots on the top of the pile of old combs) of the nest and in the behavior of the queens and workers occur, at least in P. remota . In this species even the defensive behavior of the bees is modified during the reproductive diapause . Hilário et al. [23–25] studied the influence of climatic factors on the flight activity of P. remota, but he did not present observations on the influence of diapause in the foraging and waste removal behavior for this species. The main aim of this study was to test whether the foraging behavior of this species varies according to the reproductive state of the colony. More specifically we examined whether there are differences on the type and amount of resource collected by the bees in the different phases of reproduction and in the removal of detritus. The influence of the temperature and the relative humidity on this activity and the daily rhythm of the foraging of the colonies and individual foragers were also investigated, as the individual activities of the foragers. 2. Material and Methods The study was carried out in the Bee Laboratory (Bioscience Institute, University of São Paulo; 23°S, 46°W) in two periods: from May 8th to July 7th 2006 (reproductive diapause of colonies) and from November 13th 2006 to January 24th 2007 (reproductive phase). We used two colonies of P. remota from Cunha (23°S, 44°O, São Paulo State). These colonies were hived in wooden boxes covered with glass lids and connected to the exterior of the laboratory by a plastic tube. Outside the building the tube was 15 cm long, so as to allow better observation of the activity of individual bees. Four hundred newly emerged bees were individually marked in each colony using a color code made with paint. This color code is based on five colors, each color representing a number, and on the position of the dot on the thorax (Figure 1). Dots in the center of the thorax mean 100 (white) to 500 (green). This system allows the researcher to mark 599 bees individually by combining the dots. For example, bee number 456 is marked with a blue dot on the center of the thorax, a green dot on the inner left side of the thorax, and a white dot on the upper right side. This is a marking system modified from Sakagami’s system . The observations were made between 8:00 and 18:00 (local time), for 20 minutes per hour in each colony, for 30 days distributed throughout each phase (a minimum of 3 times a week and maximum of 5 times a week). In these observations we counted the number of bees entering the colony and the number of bees taking out garbage, and registered the type of material carried by them (nectar, pollen, or resin). We also recorded the time and what resource individually marked bees foraged for. It was not possible to distinguish among bees bringing nectar, water, or nothing. Bees entering the hive without pollen or resin on the corbiculae were considered to bring nectar. To avoid an over estimation of bees collecting nectar, the number of bees removing garbage was subtracted from the number of bees collecting nectar, since these bees do not collect resources and come back rapidly without resources on the corbiculae, and they had been previously counted as bees collecting nectar. 
We calculated the mean and standard deviations of the numbers of bees collecting nectar, pollen, and resin and removing garbage per hour, as well as the minimum and maximum numbers registered per colony. Since the datasets did not follow a normal distribution (Shapiro-Wilk test, ), we used Mann-Whitney test to compare two groups of data and Kruskall-Wallis test or Wilcoxon signed-rank test to compare more than two groups . We calculated the partial correlation indexes between air temperature (°C) and relative humidity (%) and between these two environmental factors and nectar, pollen, and resin collection, total number of incoming trips in the colony and garbage removal. The controlled factors were the relative humidity and the air temperature. We also calculated Spearman correlation indexes between air temperature and relative humidity, and nectar, pollen, and resin collection and the total number of incoming trips in the colony. Weather data were provided by the Climatology and Biogeography Laboratory (Geography Department, Faculty of Philosophy, Letters, Science and History, University of São Paulo) from their experimental meteorological station at the University of São Paulo campus. The data were provided as means of five minutes of the temperature recordings. We used the mean of the four mean temperatures that corresponded to the 20 minutes of observation. To compare the air temperature and relative humidity between the two phases studied we used the Mann-Whitney test. The analysis of the individual behavior of foragers was based on the activity performed (type of material collected or garbage removal) and on the frequency they performed it. We considered that bees that collected only one type of resource (nectar, pollen, and resin) in 80% or more of their flights were specialists in the collection of that resource, as Biesmeijer and Tóth did. We also observed for how many days the marked bees foraged and their age. The foraging behavior of the colony and the individual foragers was also analyzed using rhythm tests. We tested whether the foraging of individual bees and of the colonies showed an acrophase (hr:min; local time) with the Rayleigh test () . We calculated the value of the acrophase of the colony and of individual marked bees (when the bee made six or more activities), the angular deviation of the acrophases and the mean vector r, which indicates the dispersion of the data around the acrophase; the greater the value of r, the less dispersion of the data around the acrophase . 3.1. Foraging Patterns of Nectar, Pollen, Resin Collection, and Garbage Removal There were differences in the foraging patterns between the reproductive and diapause phases. There was a statistically significant difference between the total number of bees collecting resources in the reproductive phase and during diapause (colonies 1 and 2, Mann-Whitney test, ). In both colonies the total number of bees collecting resources in the reproductive phase increased until 9:00. A peak of activity was found between 9:00 and 11:00. After 11:00, the income of resources decreased until 13:00, remaining constant for the rest of the day (Figure 2(a)). In the diapause, the total number of foragers increased from 8:00 to 11:00. The resource income remained constant from 11:00 to 13:00, when it started decreasing until 18:00 (Figure 2(b)). The nectar collection pattern was similar to the pattern of the total number of bees bringing resources to the colony. 
During the reproductive phase it was nearly constant along the day in both colonies (Figure 3(a)). From 8:00 to 11:00 it increased and remained constant until 16:00, when it decreased slightly. In the diapause (Figure 3(b)), the collection of this resource increased from 8:00 to 11:00 and showed a peak between 11:00 and 12:00 in colony 1, but in colony 2 this peak lasted until 13:00. In general, the collection of nectar started to decrease after this peak until cessation. Although distinct patterns in nectar collection in the two phases were found along the day, there was no difference between the total number of bees collecting nectar in the reproductive phase and in the diapause (Wilcoxon sign-rank test, colony 1: ; colony 2: ). The pollen collection pattern also showed differences between the two phases (Figure 4). In the reproductive phase the pollen collection showed a peak at the beginning of the morning, between 8:00 and 10:00. After that, pollen collection decreased along the day. The diapause was characterized by a low number of bees bringing pollen to the nest (Figure 4(b)). The number of bees collecting pollen increased between 8:00 and 11:00, remaining nearly constant for the rest of the day. There was a statistically significant difference between the total number of bees collecting pollen in the reproductive phase and in the diapause (Wilcoxon sign-rank test, colony 1: ; colony 2: ). The total number of bees bringing pollen to the nest in the reproductive phase was higher than in the diapause. Resin foraging in the reproductive phase was constant along the day (Figure 5(a)). In the diapause, this activity was nearly constant along the day, with exception of few periods of time (Figure 5(b)). There was a significant difference between the total number of bees collecting resin in the reproductive phase and in the diapause (Mann-Whitney test, colony 1: ; colony 2: ). However, different situations occurred in each colony. In colony 1, the total number of bees bringing resin to the nest in the reproductive phase was higher than in the diapause. In colony 2, the opposite was found (Figure 5). The garbage removal in the reproductive phase and in the diapause was concentrated at the end of the day, after 15:00 (Figure 6). In general, it increased along the day. The total number of foragers removing garbage during the reproductive phase was smaller than during diapause in colony 1 (Mann-Whitney test, ), but in colony 2 the opposite was found (Mann-Whitney test, ; Figure 6). There was a difference in the number of foraging trips for the different resources in the two phases in the two colonies (Kruskall-Wallis test, ). Nectar was always the most collected resource (Figure 7). In the reproductive phase, pollen was the second most collected resource (Figures 7(a) and 7(b)). In the diapause, different situations were found regarding pollen and resin collection. In colony 1, pollen collection was greater than resin collection (Mann-Whitney test, and ; Figures 7(a) and 7(c)), however, colony 2 had the opposite behavior (Mann-Whitney test, and ; Figures 7(b) and 7(d)). Garbage removal was more frequent when the resource (nectar, pollen, and resin) collection decreased at the end of the day (between 16:00 and 17:00) both in the reproductive phase and in the diapause (Figure 7). 3.2. Flight Activity and Climatic Factors The air temperature and the relative humidity in the reproductive phase were different from the diapause (Mann-Whitney test, ; Table 1). 
In the diapause, for colony 1 the minimum temperature for foraging was 14.7°C (only one entry) and for colony 2 it was 14.3°C. No bees were observed foraging below these temperatures. In the reproductive phase, none of the temperatures restricted the foraging behavior. As expected, air temperature and relative humidity were always correlated (Table 1). In the reproductive phase and in the diapause, the total number of incoming workers depended on the air temperature and relative humidity (Table 2). The flight activity depended more on the temperature in the diapause than in the reproductive phase (higher values). The partial correlation between the total number of incoming trips and the relative humidity was not statistically significant for colony 1 in the reproductive phase and for colony 2 in the diapause (Table 2). Nectar collection also depended on the air temperature. However, only the partial correlation between nectar collection and relative humidity in colony 1 in the diapause were statistically significant. Nectar collection was highly correlated with the total incoming trips in the colony (Table 2). Pollen collection depended on the air temperature in the diapause, but not in the reproductive phase. In contrast, it depended on the air relative humidity in both phases. This activity was correlated with the total number of entrances in the colony only in the diapause (Table 2). In colony 1, resin collection showed a relationship with the temperature only in the diapause. In colony 2, on the other hand, this activity depended to a minor extend on the temperature in both phases. The partial correlation between resin collection and relative humidity was opposite in the two colonies. In colony 1, this activity was not correlated with relative humidity in the reproductive phase, but it depended to a minor extent on this climatic factor in the diapause. In colony 2, resin collection depended on the relative humidity in the reproductive phase, but not in the diapause. Generally, this activity was not significantly correlated with the total number of incoming trips in the hive, with the exception in colony 1 during the diapause (Table 2). The partial correlation between garbage removal and temperature was not significant only in colony 2 in the reproductive phase. In colony 1, these two parameters showed a negative partial correlation in the reproductive phase. The partial correlation between garbage removal and relative humidity was not significant only in colony 1 in the reproductive phase. None of the partial correlations between this activity and the total number of incoming trips in the colony was statistically significant (Table 2). 3.3. Individual Activity of Foragers and Colony Rhythm In the reproductive phase only eight (2%) marked workers (from the ones we observed) in colony 1 and six (1.5%) in colony 2 were observed while foraging (entering and exiting the colony). In the reproductive diapause, 131 (32.75%) and 32 (8%) marked bees were observed in colonies 1 and 2, respectively. Additional marked bees were observed exiting the colony; however they were not considered in the analyses because we do not know what foraging activity they performed. Marked bees contributed little to nectar, pollen, and resin collecting during the reproductive phase and reproductive diapause (Table 3). We observed no marked bees removing garbage from colonies 1 and 2 in the reproductive phase. In the reproductive diapause, we observed only one marked bee in each colony performing this activity. 
In colony 1, this bee was responsible for only one removal flight (0.04%), but in colony 2, the observed bee was responsible for 7.21% of the removal flights. We also verified if the marked bees were specialized (nectar, pollen, or resin foragers) or not. In the reproductive phase in colony 1, 50% of the marked bees were nectar foragers (37.5% collected only nectar and 12.5% collected also pollen), 25% pollen foragers, and 25% were non-specialized (collected nectar and pollen). In colony 2, 66.7% of the marked bees were pollen foragers and 33.3% were non-specialized (collected nectar and pollen). In the reproductive phase, 94.6% of the marked bees were nectar foragers in colony 1 (86.2% collected only nectar and 8.4% collected also pollen or resin) and 96.9% in colony 2 (87.5% collected only nectar and 9.4% collected also pollen), respectively. In colony 1, 1.5% of the marked bees were pollen foragers in this phase and the rest was not specialized. In colony 2, 3.1% of the marked bees were specialized in garbage removal. The period of time that a marked bee foraged for was variable. In the reproductive phase foragers of colony 1 collected nectar for an average of 2.2 days (standard deviation: 1.5 days; ; maximum: four days) and pollen for 1.4 days (standard deviation: 0.9 days; ; maximum: three days). The mean number of days that the bees foraged (collected any resource or removed garbage) was 2.3 (standard deviation: 2.1 days; ; maximum: six days). In colony 2 we observed only two nectar foragers; one foraged for two days and the other for three. The bees foraged for pollen for an average of 2.7 days (standard deviation: 1.6 days; ; maximum: five days). The mean number of days that the bees foraged (collected any resource or removed garbage) was 3.3 (standard deviation: 2.3 days; ; maximum: six days). In the reproductive diapause, foragers of colony 1 collected nectar for an average of 3.3 days (standard deviation: 3.5 days; ; maximum: 16 days), pollen for 1.9 days (standard deviation: 1.1 days; ; maximum: 4 days) and resin for 1.3 days (standard deviation: 0.5 days; ; maximum: 2 days). The mean number of days that the bees foraged (collected any resource or removed garbage) was 3.4 (standard deviation: 3.6 days; ; maximum: 16 days). In colony 2, the foragers collected nectar for an average of 4.4 days (standard deviation: 3.6 days; ; maximum: 13 days). We observed only two pollen foragers; one foraged for one day and the other for two. Only one bee collected resin and did it for one day. Another marked bee removed garbage from the colony for five days. The mean number of days that the bees foraged (collected any resource or removed garbage) was 4.5 (standard deviation: 3.5 days; ; maximum: 13 days). Although a great variability was observed as to age of marked bees, the foragers of the reproductive diapause were older than the foragers from the reproductive phase (Figures 8 and 9, Table 4) in both colonies. Acrophases were detected in nectar, pollen, and exit of marked bees (Table 4). These acrophases occurred in different times in the reproductive phase and diapause, with the exception of nectar collection (Figures 8 and 9, Table 4). The acrophase of pollen collection occurred earlier in the reproductive phase (Table 4), but the variation around it (angular deviation) was similar between the two phases (Table 4). We compared the foraging of marked bees with the colony foraging (marked and nonmarked bees observed). 
The acrophase of the marked foragers was within the interval of the acrophases of the colony foraging activity (Table 5). This indicated that the foraging activity of the marked bees is representative of the foraging activity of the colony. 3.4. Other Behavioral Observations We observed the occurrence of nectar transfer between a forager and other bee that was in the tube. This behavior was not quantified, because it was not the aim of the observations, but tropholaxis was seen many times in the entrance tube. Sometimes the forager did tropholaxis with other bee in the tube as soon as it arrived and left the tube immediately. Hence, there is task partitioning in nectar collection in P. remota. The garbage removal is also a task that is partitioned among workers of P. remota. We observed that one or two workers remained in the tube carrying a pellet of garbage with their mandibles. These workers passed the pellet to other workers that were in the entrance tube. Those caught the pellet with their first pair of legs, hold it with their mandibles and then flew out of the colony. Many times an incoming forager caught the pellet from a worker and flew out of the colony immediately. Sometimes the worker that holds the pellet could resist and not give the pellet to the other bee. In this case, they pulled the pellet like in “tug of war” (rope pulling) or the worker ran away from the other bee that tried to catch the pellet while the other chased her. The foraging pattern of the reproductive phase was different from the one found in the diapause. Although we found some differences between times of the day, the foraging activity of the bees, during the reproductive phase, was nearly constant. Nevertheless, this activity in the diapause was more concentrated in the middle of the day, as already found by Imperatriz-Fonseca et al. and Hilário . The foraging pattern of P. saiqui, another species that presents diapause, was also different in the two phases (Table 4). These differences may be caused by environmental factors, probably temperature, as discussed later. Another factor that can influence the foraging activity of bees is the variation in the quantity and quality of food resources between days or seasons . In the reproductive phase, the nectar collection was nearly constant along the day, as well as the general foraging pattern of the colonies. This is expected because most of the foraging activity of the colonies was of nectar bringing foragers. Other stingless bee species also present this pattern (Table 6). In the reproductive diapause, nectar was collected more in the middle of the day (11:00–13:00). Another bee species also showed this peak of nectar collection (Table 6). Removal of detritus from the colonies occurred mainly at the end of the afternoon. We did not find a pattern in relation to the number of foragers performing this activity in the different colony phases (reproductive and diapause), because each colony behaved in a different way. This activity might be influenced more by the internal conditions than by external factors, like season of the year and climatic factors. Souza et al. found a positive correlation between nectar and pollen income and garbage removal, and suggested that the growth of the colony influences directly the amount of garbage produced by the colonies. The peak of this activity occurs at different times of the day in different species (Table 6). The resin foraging is also different in distinct species (Table 6). In P. 
remota this activity occurred along the day, in both phases. Besides the distinct daily foraging patterns between the reproductive period and diapause, we observed differential foraging efforts according to the resource, a new finding for the species studied. A similar number of bees bringing nectar to the colony was observed in the two phases, but the percentage of nectar in relation to the other resources collected was higher in the diapause. This might be a reflection of the differential allocation of foragers among different tasks. In the diapause the number of bees collecting pollen was lower compared to the reproductive phase, so the foragers concentrate their activity in nectar collection. On the other hand, nectar is the sugar source that provides energy for the bees. During diapause bees stay still inside the colony , but a large quantity of this resource may be needed when all colonies’ activities restart. Although we did not quantify, we observed an increase in the number of storage pots with honey in the diapause and we did not observe pollen stored in the colonies. However, this increase in nectar storage may occur due to the decrease in consumption by the bees. In this phase there is a decrease in the colonies’ population over time, since after the emergency of all the remaining brood, no bees will emerge until the end of the reproductive diapause and the development of the first brood when the reproductive phase begins again. Nectar could also be needed in greater quantity during diapause because of thermoregulation. Apis mellifera individuals can maintain their corporal temperature when they have sugar in their crop . We do not know whether this is true for P. remota or not, but if it is, honey is more needed during diapause, when it is colder (autumn and winter). These hypotheses need to be tested. Also it is possible that bees need nectar as a source of energy to forage . We observed quantitative differences in pollen collection of the two phases. A greater number of incomes of this resource was observed in the reproductive phase and the percentage of bees doing this activity increased in relation to the other resources, although nectar incomes made the greatest percentage of the incomes in both phases. This difference in the number of bees foraging for pollen was also observed in P. saiqui . Most bees rely on pollen as the main source of nitrogen and it is collected mainly to feed the larvae . In A. mellifera, the quantity of brood and stored pollen influences the pollen foraging . This might be the case of P. remota. In the diapause there is no cell construction and provisioning of these cells for queen oviposition. This might influence the decision of the foragers, as a higher percentage of pollen foragers was found in the reproductive phase. Furthermore, there is the possibility that, as in A. mellifera , winter bees eat less pollen than summer bees. Hrassnigg and Crailsheim state that A. mellifera workers respond to different quantities of brood adjusting their behavior and physiology, eating more or less pollen and altering their flight activity. The authors state that this allows that the winter bees live longer and work when there is brood in the colony again. This reduced need for pollen in P. 
remota during diapause may influence the pollen foraging in this species in a similar way that happens in honey bees and maybe this mechanism of regulating the life span is also present in this species, whereas the bees lived longer in the reproductive diapause, which occurs in winter. Another similarity between P. remota and P. saiqui pollen foraging was the daily peak of this activity in the reproductive phase. In both species it occurred in the begining of the morning until 10 am, when it started to decrease until the end of the day . In other stingless bee species a peak of pollen collection also occurred in the beginning of the morning (examples: M. scutellaris ; M. bicolor bicolor ; P. pugnax ). Climatic factors such as air temperature and relative humidity influence the flight activity of bees, along internal conditions of the colony. The differences found in air temperature and relative humidity between the two phases may be one of the reasons of the distinct patterns observed in the colonies of P. remota. Generally, the flight activity of stingless bees is positively correlated with the air temperature and negatively correlated with relative humidity (P. saiqui ; M. marginata obscurior [33, 40]; M. asilvai ; Meliponula ferruginea and Meliponula nebulata ; Tetragonisca angustula ; M. marginata marginata ). The same relationship was found by Hilário for P. remota. However, we found weak relationships between these climatic factors and the different foraging activities of P. remota, when they were significant. This might be due to the type of analyses we made. We used partial correlation to describe these relationships. Partial correlation statistically corrects for the effect of a third variable which influences the variables involved in the original correlation . We wanted to evaluate the effect of air temperature and relative humidity in the flight activity of P. remota, but these two climatic factors are highly correlated and with simple correlation it is not possible separate the effects of air temperature and relative humidity on this activity. As Hilário , we found weaker relationships in summer (reproductive phase) than in winter (diapause). This might be because in the summer bees forage along the day (all temperatures registered), but in winter this activity occurs mainly in the middle of the day, when the temperatures are higher. Other climatic factors as wind and rain are also responsible for shaping the flight activity of the colonies. The air temperature is a constraining factor. Bees must warm up before going out for foraging and waste removing. The minimum temperature that occurred during observations was in the diapause: 11.3°C for colony 1 and 11.6°C for colony 2. Flight activity was observed under 14.7°C for colony 1 and under 14.3°C for colony 2, indicating that temperatures under 14°C limits the flight activity of P. remota. This temperature is lower than the temperature found by Imperatriz-Fonseca et al. (16°C, but not in winter). However, Hilário observed flight activity under 10.2°C, indicating that the restraining temperature for the flight activity of this species is around 10°C. Other species from these genera presented similar low temperatures for foraging (P. pugnax: 14°C ; P. saiqui: 11°C ). Kleinert-Giovannini and Imperatriz-Fonseca observed that even under optimal climatic conditions for the flight activity of M. marginata marginata and M. 
marginata obscurior, a decrease in this activity can occur, which indicates that there is a daily rhythm in it even under favorable environmental conditions. P. remota, like other eusocial bee species, presents an age-dependent division of labor (age polyethism). Foraging is the final stage of the life of a worker and it begins at around 30 days of age. Van Benthem et al. observed 46% of the marked workers performing foraging in autumn and winter. In this study, a similar percentage of bees (32.75%) in colony 1 was observed performing this activity, though in colony 2 a lower percentage (8%) was observed. The percentage of observed marked bees varies among studies (M. bicolor bicolor, 60%; Scaptotrigona postica, 40%; Friesella sp., 34%; M. compressipes fasciculata, 88.3%). The age at which the workers of P. remota became foragers varied from 43 to 90 days in the reproductive phase and from 42 to 107 days in the reproductive diapause. Van Benthem et al. observed foragers of 30 to 87 days of age in the reproductive diapause, similar to our observations. The difference in the age of foragers between the two phases might be a reflection of the differential longevity of the bees in these phases: as they live longer, they start foraging later. P. remota winter bees live 25 to 100% longer than summer bees. P. droryana workers were also observed for more than 100 days in the reproductive diapause and began to forage after 35 days of age. In general, Melipona species start to forage earlier than P. remota (M. compressipes fasciculata: 15 to 85 days of age; M. beecheii: 16 to over 60 days of age). S. postica workers became foragers at 20 to 60 days of age and Friesella workers at 17 to 41 days of age. M. beecheii nectar foragers foraged for three days (from two to four days). Generally, P. remota foragers collected nectar for a similar number of days in the reproductive phase and diapause, but the variation (up to 13 days, colony 2) in the reproductive diapause was greater than in this Melipona species. Pollen and resin foraging were performed over a shorter period compared to M. beecheii. Based on previous studies and our results (M. bicolor bicolor: around 6 days; M. compressipes fasciculata: mean of 10 days; S. postica: 3 to 7 days), in general stingless bee foragers perform their activities for less than 10 days. In stingless bees, the foraging of the colony is based on the individual decisions of the workers. Each worker has to decide when to start or stop foraging. These decisions are made using intrinsic information, such as genetic information, memory, development and hormones, and extrinsic information, which comes from inside the colony (stored resources, information from other foragers, odors, among others) or from outside (flower availability and competition, for example) [9–11]. Although there are no studies on the influence of these factors on the foraging of P. remota, they might influence this activity in this species as well. Moreover, the absence of the provisioning and oviposition process, one of the extrinsic sources of information from inside the colony, may be a key factor in the organization of this behavior in the reproductive diapause, as in this phase there are changes in the behavior of workers [17, 22] and, as we observed, of the foragers, reflected in the different proportions of nectar and pollen foragers between the phases. Diel rhythms were found in the foraging behavior of the P.
remota colonies, and when we compared the acrophase of nectar collection detected in this study with the acrophase of flight activity detected by Hilário, they were very similar. M. bicolor also presents daily rhythms in flight activity. Scaptotrigona aff. depilis and Apis mellifera also present circadian rhythms, as those studies were done under controlled environmental conditions. Nectar foraging is a partitioned task in P. remota, as in other stingless bee species (Melipona beecheii [29, 48], M. fasciata, M. favosa, Tetragonisca angustula, Trigona nigra, Plebeia frontalis, Scaptotrigona pectoralis and Nannotrigona perilampoides). We also observed task partitioning in garbage removal, but this behavior remains almost undocumented in stingless bees. The authors thank CNPq (135074/2005-3 and 140169/2000-8) for financial support.
S. F. Sakagami, “Stingless bees,” in Social Insects Vol. III, H. R. Hermann, Ed., pp. 361–423, Academic Press, New York, NY, USA, 1982.
D. W. Roubik, Ecology and Natural History of Tropical Bees, Cambridge University Press, New York, NY, USA, 1989.
C. D. Michener, The Social Behavior of the Bees: A Comparative Study, Belknap Press of Harvard University Press, Cambridge, Mass, USA, 1974.
C. M. L. Aguiar and C. A. Garófalo, “Nesting biology of Centris (Hemisiella) tarsata Smith (Hymenoptera, Apidae, Centridini),” Revista Brasileira de Entomologia, vol. 21, no. 3, pp. 477–486, 2004.
C. M. L. Aguiar, C. A. Garófalo, and G. F. Almeida, “Biologia de nidificação de Centris (Hemisiella) trigonoides Lepeletier (Hymenoptera, Apidae, Centridini),” Revista Brasileira de Zoologia, vol. 23, no. 2, pp. 323–330, 2006.
S. D. Hilário, V. L. Imperatriz-Fonseca, and A. M. Kleinert, “Responses to climatic factors by foragers of Plebeia pugnax Moure (in litt.) (Apidae, Meliponinae),” Revista Brasileira de Biologia, vol. 61, no. 2, pp. 191–196, 2001.
A. Kleinert-Giovannini, “The influence of climatic factors on flight activity of Plebeia emerina Friese (Hymenoptera, Apidae, Meliponinae) in winter,” Revista Brasileira de Entomologia, vol. 26, no. 1, pp. 1–13, 1982.
J. C. Biesmeijer, M. G. L. van Nieuwstadt, S. Lukács, and M. J. Sommeijer, “The role of internal and external information in foraging decisions of Melipona workers (Hymenoptera: Meliponinae),” Behavioral Ecology and Sociobiology, vol. 42, no. 2, pp. 107–116, 1998.
S. D. Hilário, V. L. Imperatriz-Fonseca, and A. M. P. Kleinert, “Flight activity and colony strength in the stingless bee Melipona bicolor bicolor (Apidae, Meliponinae),” Revista Brasileira de Biologia, vol. 60, no. 2, pp. 299–306, 2000.
R. A. Pick and B. Blochtein, “Atividades de coleta e origem floral do pólen armazenado em colônias de Plebeia saiqui (Holmberg) (Hymenoptera, Apidae, Meliponinae) no sul do Brasil,” Revista Brasileira de Zoologia, vol. 19, no. 1, pp. 289–300, 2002.
R. A. Pick and B. Blochtein, “Atividades de vôo de Plebeia saiqui (Holmberg) (Hymenoptera, Apidae, Meliponinae) durante o período de postura da rainha e em diapausa,” Revista Brasileira de Zoologia, vol. 19, no. 3, pp. 827–839, 2002.
F. D. van Benthem, V. L. Imperatriz-Fonseca, and H. H. Velthuis, “Biology of the stingless bee Plebeia remota (Holmberg): observations and evolutionary implications,” Insectes Sociaux, vol. 42, no. 1, pp. 71–87, 1995.
M. F. Ribeiro, V. L. Imperatriz-Fonseca, and P. S. Santos Filho, “A interrupção da construção de células de cria e postura em Plebeia remota (Holmberg) (Hymenoptera, Apidae, Meliponini),” in Apoidea Neotropica: Homenagem aos 90 Anos de Jesus Santiago Moure, G. A. R. Melo and I. Alves dos Santos, Eds., pp. 177–188, Editora UNESC, Criciúma, Brazil, 2003.
Y. Terada, C. A. Garófalo, and S. F. Sakagami, “Age-survival curves for workers of two eusocial bees (Apis mellifera and Plebeia droryana) in a subtropical climate, with notes on worker polyethism in P. droryana,” Journal of Apicultural Research, vol. 14, no. 3-4, pp. 161–170, 1975.
L. A. Juliani, “Descrição do ninho e alguns dados biológicos sobre a abelha Plebeia juliani Moure, 1962 (Hymenoptera, Apidae, Meliponinae),” Revista Brasileira de Entomologia, vol. 12, pp. 31–58, 1967.
D. Wittmann, “Nest architecture, nest site preferences and distribution of Plebeia wittmanni (Moure & Camargo, 1989) in Rio Grande do Sul, Brazil (Apidae: Meliponinae),” Studies on Neotropical Fauna & Environment, vol. 24, no. 1, pp. 17–23, 1989.
F. V. B. Borges and B. Blochtein, “Variação sazonal das condições internas de colônias de Melipona marginata obscurior Moure, no Rio Grande do Sul, Brasil,” Revista Brasileira de Zoologia, vol. 23, no. 3, pp. 711–715, 2006.
E. Périco, Comunicação química em cinco espécies do gênero Plebeia com ênfase nos mecanismos de defesa contra a abelha cleptobiótica Lestrimelitta limao (Hymenoptera: Apidae: Meliponinae), thesis, University of São Paulo, São Paulo, Brazil, 1997.
S. D. Hilário, Atividade de vôo e termorregulação de Plebeia remota (Holmberg, 1903) (Hymenoptera, Apidae, Meliponini), thesis, University of São Paulo, São Paulo, Brazil, 2005.
S. D. Hilário, M. F. Ribeiro, and V. L. Imperatriz-Fonseca, “Efeito do vento sobre a atividade de vôo de Plebeia remota (Holmberg, 1903) (Apidae, Meliponini),” Biota Neotropica, vol. 7, no. 3, pp. 225–232, 2007.
S. D. Hilário, M. F. Ribeiro, and V. L. Imperatriz-Fonseca, “Impacto da precipitação pluviométrica sobre a atividade de vôo de Plebeia remota (Holmberg, 1903) (Apidae, Meliponini),” Biota Neotropica, vol. 7, no. 3, pp. 135–143, 2007.
S. F. Sakagami, “Techniques for the observation of behaviour and social organization of stingless bees by using a special hive,” Papéis Avulsos do Departamento de Zoologia, vol. 18, no. 12, pp. 151–162, 1966.
J. H. Zar, Biostatistical Analysis, Prentice Hall, New Jersey, USA, 1999.
V. L. Imperatriz-Fonseca, A. Kleinert-Giovannini, and J. M. Pires, “Climatic variations on the flight activity of Plebeia remota Holmberg (Hymenoptera, Apidae, Meliponinae),” Revista Brasileira de Entomologia, vol. 29, no. 3-4, pp. 427–434, 1985.
L. L. M. de Bruijn and M. J. Sommeijer, “Colony foraging in different species of stingless bees (Apidae, Meliponinae) and the regulation of individual nectar foraging,” Insectes Sociaux, vol. 44, no. 1, pp. 35–47, 1997.
S. D. Hilário, M. Gimenes, and V. L. Imperatriz-Fonseca, “The influence of colony size in diel rhythms of flight activity of Melipona bicolor Lepeletier (Hymenoptera, Apidae, Meliponinae),” in Apoidea Neotropica: Homenagem aos 90 Anos de Jesus Santiago Moure, G. A. R. Melo and I. Alves dos Santos, Eds., pp. 191–197, Editora UNESC, Criciúma, Brazil, 2003.
L. M. Pierrot and C. Schlindwein, “Variation in daily flight activity and foraging patterns in colonies of uruçu—Melipona scutellaris Latreille (Apidae, Meliponini),” Revista Brasileira de Zoologia, vol. 20, no. 4, pp. 565–571, 2003.
F. B. Borges and B. Blochtein, “Atividades externas de Melipona marginata obscurior Moure (Hymenoptera, Apidae), em distintas épocas do ano, em São Francisco de Paula, Rio Grande do Sul, Brasil,” Revista Brasileira de Zoologia, vol. 22, no. 3, pp. 680–686, 2005.
B. A. Souza, C. A. L. Carvalho, and R. M. O. Alves, “Flight activity of Melipona asilvai Moure (Hymenoptera: Apidae),” Brazilian Journal of Biology, vol. 66, no. 2B, pp. 731–737, 2006.
B. Heinrich, The Hot-Blooded Insects: Strategies and Mechanisms of Thermoregulation, Springer, Berlin, Germany, 1993.
S. D. Leonhardt, K. Dworschak, T. Eltz, and N. Blüthgen, “Foraging loads of stingless bees and utilization of stored nectar for pollen harvesting,” Apidologie, vol. 38, pp. 125–135, 2007.
K. Crailsheim, N. Hrassnigg, R. Gmeinbauer, M. J. Szolderits, L. H. W. Schneider, and U. Brosch, “Pollen utilization in non-breeding honeybees in winter,” Journal of Insect Physiology, vol. 39, no. 5, pp. 369–373, 1993.
A. Kleinert-Giovannini and V. L. Imperatriz-Fonseca, “Flight activity and responses to climatic conditions of two subspecies of Melipona marginata Lepeletier (Apidae, Meliponinae),” Journal of Apicultural Research, vol. 25, no. 1, pp. 3–8, 1986.
S. Iwama, “A influência dos fatores climáticos na atividade externa de Tetragonisca angustula (Apidae, Meliponinae),” Boletim do Museu de Zoologia da Universidade de São Paulo, vol. 2, pp. 189–201, 1977.
L. R. Bego, “On some aspects of bionomics in Melipona bicolor bicolor Lepeletier (Hymenoptera, Apidae, Meliponinae),” Revista Brasileira de Entomologia, vol. 27, no. 3-4, pp. 211–224, 1983.
D. Simões, Estudos sobre a regulação social em Nannotrigona (Scaptotrigona) postica Latreille, com especial referência a aspectos comportamentais (Hymenoptera, Apidae, Meliponinae), dissertation, University of São Paulo, Ribeirão Preto, Brazil, 1974.
C. Camillo-Atique, Estudo da variabilidade etológica de Friesella incluindo a caracterização de espécies crípticas (Hym. Meliponinae), thesis, University of São Paulo, Ribeirão Preto, Brazil, 1977.
K. M. Giannini, “Labor division in Melipona compressipes fasciculata Smith (Hymenoptera: Apidae: Meliponinae),” Anais da Sociedade Entomológica do Brasil, vol. 26, no. 1, pp. 153–162, 1997.
H. G. Spangler, “Daily activity rhythms of individual worker and drone honey bees,” Annals of the Entomological Society of America, vol. 65, no. 5, pp. 1073–1076, 1972.
Swift Characters: In this tutorial, you will learn about Swift characters and strings. You will also learn about various operations that can be performed on strings and characters. In every programming language, characters and their combinations, i.e., strings, play a huge part and bring with them some notable features that help programmers shape messages and statements inside a program. A string is a sequence of characters. Strings and characters are used to show a message with the print statement in Swift, as you have seen earlier. Character is a data type that represents a single-character string ("a", "@", "5", and so on). We use the Character keyword to create character-type variables in Swift. For instance,
var letter: Character
Here, the letter variable can only store single-character data.
// create character variables
var letter: Character = "H"
print(letter) // H
var symbol: Character = "@"
print(symbol) // @
In the above example, we have created two character variables: letter and symbol. Here, we have assigned "H" to letter and "@" to symbol. Note: If we attempt to assign more than one character to a Character variable, we will get an error.
// create character variable
let test: Character = "H@"
print(test)
// Error: cannot convert value of type 'String' to specified type 'Character'
In Swift, a string is used to store textual data ("Hey There!", "Swift is awesome.", and so forth). We use the String keyword to create string-type variables. For instance,
let name: String
Here, the name variable can only store textual data. Note: Since a string contains multiple characters, it is known as a sequence of characters.
// create string-type variables
let name: String = "Swift"
print(name)
let message = "I love Swift."
print(message)
Swift
I love Swift.
In the above example, we have created the string-type variables name and message with the values "Swift" and "I love Swift." respectively. Notice the statement
let message = "I love Swift."
Here, we haven't used the String keyword while creating the variable. This is because Swift can infer the type from the value. Note: In Swift, we use double quotes to represent strings and characters. The String type in Swift provides various built-in functions that allow us to perform different operations on strings.
1. Compare Two Strings
We use the == operator to compare two strings. If two strings are equal, the operator returns true. Otherwise, it returns false. For instance,
let str1 = "Hello, world!"
let str2 = "I love Swift."
let str3 = "Hello, world!"
// compare str1 and str2
print(str1 == str2)
// compare str1 and str3
print(str1 == str3)
In the above example,
- str1 and str2 are not equal. Thus, the result is false.
- str1 and str3 are equal. Thus, the result is true.
2. Join Two Strings
We use the append() method to join two strings in Swift. For instance,
var greet = "Hello "
var name = "Salman"
// using the append() method
greet.append(name)
print(greet)
In the above example, we have used the append() method to join name and greet.
Concatenate Using + and +=
We can also use the + and += operators to concatenate two strings.
var greet = "Hello, "
let name = "Salman"
// using the + operator
var result = greet + name
print(result)
// using the += operator
greet += name
print(greet)
Hello, Salman
Hello, Salman
In the above example, we have used the + and += operators to join the two strings greet and name. Note: We can't create greet using let.
This is because the += operator joins two strings and assigns the new value back to greet.
3. Find the Length of a String
We use the count property to find the length of a string. For instance,
let message = "Hello, World!"
// count the length of a string
print(message.count) // 13
Note: The count property counts the total number of characters in a string, including whitespace.
Other Built-in Functions
|isEmpty||determines if a string is empty or not|
|capitalized||capitalizes the first letter of every word in a string|
|uppercased()||converts a string to uppercase|
|lowercased()||converts a string to lowercase|
|hasPrefix()||determines if a string starts with certain characters or not|
|hasSuffix()||determines if a string ends with certain characters or not|
The escape sequence is used to escape some of the characters present inside a string. Suppose we need to include double quotes inside a string.
// include double quotes
var example = "This is "String" class"
print(example) // throws an error
Since strings are delimited by double quotes, the compiler will treat "This is " as the whole string. Consequently, the above code will cause an error. To solve this issue, we use the escape character \ in Swift.
// use the escape character
var example = "This is \"String\" class"
print(example)
// Output: This is "String" class
Now the program runs without any error. Here, the escape character tells the compiler that the character after \ belongs inside the string rather than ending it. Here is a list of common escape sequences supported by Swift.
|\0||a null character|
|\\||a backslash|
|\t||a horizontal tab|
|\n||a line feed (newline)|
|\r||a carriage return|
|\"||a double quotation mark|
|\'||a single quotation mark|
We can also use the backslash character \ to use variables and constants inside a string. For instance,
let name = "Swift"
var message = "This is \(name) programming."
print(message)
This is Swift programming.
In the above example, notice the line
var message = "This is \(name) programming."
Here, we are using the name variable inside the string message. This process is called string interpolation in Swift.
Swift Multiline String
We can also create a multiline string in Swift. For this, we use triple double quotes """. For instance,
// multiline string
var str: String = """
Swift is awesome
I love Swift
"""
print(str)
Swift is awesome
I love Swift
In the above example, anything inside the enclosing triple quotes is one multiline string. Note: A multiline string must always start on a new line. Otherwise, it will cause an error.
// error code
var str = """Swift
I love Swift
"""
Create a String Instance
We can also create a string using the initializer syntax. For instance,
var str = String()
Here, the initializer syntax String() creates an empty string. Thanks for reading! We hope you found this tutorial helpful.
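As a short recap of the operations covered above, the minimal sketch below combines comparison, concatenation, the count property, the other built-in helpers from the table, escaping, interpolation, and a multiline string. The specific values are arbitrary placeholders.

let language = "Swift"
var greeting = "Hello, "

// Concatenation and string interpolation
greeting += "world!"
let message = "\(greeting) Welcome to \(language)."
print(message)                     // Hello, world! Welcome to Swift.

// Comparison, length, and other built-in helpers
print(message == greeting)         // false
print(message.count)               // total number of characters, including spaces
print(message.isEmpty)             // false
print(message.hasPrefix("Hello"))  // true
print(message.hasSuffix("Swift.")) // true
print(language.uppercased())       // SWIFT
print(language.lowercased())       // swift

// Escape sequence and a multiline string
print("She said, \"Swift is fun.\"")
let note = """
Strings can
span multiple lines.
"""
print(note)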
The push for ever-smaller chips has been running into more and more obstacles, including the very limits of silicon — while optical connections would allow for denser, speedier processors by eliminating heat and energy issues, silicon is lousy at emitting light. Or rather, it was. Eindhoven University of Technology researchers have developed what they say is the first silicon alloy that can emit light. The breakthrough is a mix of silicon and germanium grown in a hexagonal structure, which gives the alloy a direct band gap (and thus the ability to emit light). Scientists have been pursuing this for about 50 years, the university said, and a team created hexagonal silicon back in 2015. However, they couldn't get the result to emit light until now, when they reduced the number of impurities and defects. The team still needs to produce a laser before they have technology that could be used in chips, and there would still be plenty of refinement left before you would see this in shipping electronics. That laser is expected in 2020, though. And this latest development was arguably the largest hurdle. From now on, the main challenge is making the technology practical.
This article was written by Sondos Atwi. What is Cross-Validation? In Machine Learning, cross-validation is a resampling method used for model evaluation, to avoid testing a model on the same dataset on which it was trained. Testing on the training data is a common mistake, especially since a separate testing dataset is not always available, but it usually leads to inaccurate performance measures (the model will score almost perfectly because it is being tested on the same data it was trained on). To avoid this kind of mistake, cross-validation is usually preferred. The concept of cross-validation is actually simple: instead of using the whole dataset to train and then test on the same data, we randomly divide our data into training and testing datasets. There are several types of cross-validation methods (LOOCV – leave-one-out cross-validation, the holdout method, k-fold cross-validation). Here, I'm going to discuss the K-Fold cross-validation method. K-Fold basically consists of the following steps: randomly split the dataset into k folds of roughly equal size; for each fold, train the model on the remaining k-1 folds and evaluate it on the held-out fold; finally, average the performance measures over the k folds. Below is a simple illustration of the procedure taken from Wikipedia. How can it be done with R? In the below exercise, I am using logistic regression to predict whether a passenger in the famous Titanic dataset has survived or not. The purpose is to find an optimal threshold on the predictions to know whether to classify the result as 1 or 0. Consider that the model has predicted the following values for two passengers: p1 = 0.7 and p2 = 0.4. If the threshold is 0.5, then p1 > threshold and passenger 1 is in the survived category, whereas p2 < threshold, so passenger 2 is in the not-survived category. However, depending on our data, the 0.5 'default' threshold will not always ensure the maximum number of correct classifications. In this context, we could use cross-validation to determine the best threshold for each fold based on the results of running the model on the validation set. In my implementation, I followed the below steps:
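The author's own step list and R code do not appear in this excerpt. Purely as a hedged illustration of the general idea described above, and not the author's implementation (it is written in Swift rather than R to match the other code in this document), the sketch below splits rows into k folds and picks, for each validation fold, the classification threshold with the best accuracy. The score and label arrays and the 0.05 threshold grid are made-up placeholders; the scores stand in for predictions from a model already fitted on the other folds.

import Foundation

// Hypothetical example data: model scores in [0, 1] and true labels (1 = survived).
let scores: [Double] = [0.7, 0.4, 0.9, 0.2, 0.6, 0.8, 0.1, 0.55, 0.35, 0.75]
let labels: [Int]    = [1,   0,   1,   0,   1,   1,   0,   1,    0,    1]

// Shuffle the row indices and deal them into k folds of nearly equal size.
func kFoldIndices(count: Int, k: Int) -> [[Int]] {
    let shuffled = Array(0..<count).shuffled()
    var folds = Array(repeating: [Int](), count: k)
    for (position, index) in shuffled.enumerated() {
        folds[position % k].append(index)
    }
    return folds
}

// Fraction of rows classified correctly at a given threshold.
func accuracy(scores: [Double], labels: [Int], threshold: Double) -> Double {
    var correct = 0
    for (score, label) in zip(scores, labels) where (score >= threshold ? 1 : 0) == label {
        correct += 1
    }
    return Double(correct) / Double(labels.count)
}

// For each validation fold, pick the threshold (from a coarse grid) with the best accuracy.
let thresholds = stride(from: 0.1, through: 0.9, by: 0.05).map { $0 }
for (i, fold) in kFoldIndices(count: labels.count, k: 5).enumerated() {
    let foldScores = fold.map { scores[$0] }
    let foldLabels = fold.map { labels[$0] }
    let best = thresholds.max { a, b in
        accuracy(scores: foldScores, labels: foldLabels, threshold: a) <
        accuracy(scores: foldScores, labels: foldLabels, threshold: b)
    } ?? 0.5
    print("Fold \(i): best threshold = \(best)")
}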
Symptoms, spread and other essential information about the new coronavirus and COVID-19 As we continually learn more about coronavirus and COVID-19, it can help to reacquaint yourself with some basic information. For example, understanding how the virus spreads reinforces the importance of social distancing and other health-promoting behaviors. Knowing how long the virus survives on surfaces can guide how you clean your home and handle deliveries. And reviewing the common symptoms of COVID-19 can help you know if it's time to self-isolate. What is coronavirus? Coronaviruses are an extremely common cause of colds and other upper respiratory infections. What is COVID-19? COVID-19, short for "coronavirus disease 2019," is the official name given by the World Health Organization to the disease caused by this newly identified coronavirus. How many people have COVID-19? The numbers are changing rapidly. It has spread so rapidly and to so many countries that the World Health Organization has declared it a pandemic (a term indicating that it has affected a large population, region, country, or continent). Do adults younger than 65 who are otherwise healthy need to worry about COVID-19? Yes, they do. Though people younger than 65 are much less likely to die from COVID-19, they can get sick enough from the disease to require hospitalization. According to a report published in the CDC's Morbidity and Mortality Weekly Report (MMWR) in late March, nearly 40% of people hospitalized for COVID-19 between mid-February and mid-March were between the ages of 20 and 54. Drilling further down by age, MMWR reported that 20% of hospitalized patients and 12% of COVID-19 patients in ICUs were between the ages of 20 and 44. People of any age should take preventive health measures like frequent hand washing, physical distancing, and wearing a mask when going out in public, to help protect themselves and to reduce the chances of spreading the infection to others. What are the symptoms of COVID-19? Some people infected with the virus have no symptoms. When the virus does cause symptoms, common ones include fever, dry cough, fatigue, loss of appetite, loss of smell, and body ache. In some people, COVID-19 causes more severe symptoms like high fever, severe cough, and shortness of breath, which often indicates pneumonia. People with COVID-19 are also experiencing neurological symptoms, gastrointestinal (GI) symptoms, or both. These may occur with or without respiratory symptoms. For example, COVID-19 affects brain function in some people. Specific neurological symptoms seen in people with COVID-19 include loss of smell, inability to taste, muscle weakness, tingling or numbness in the hands and feet, dizziness, confusion, delirium, seizures, and stroke. In addition, some people have gastrointestinal (GI) symptoms, such as loss of appetite, nausea, vomiting, diarrhea, and abdominal pain or discomfort associated with COVID-19. These symptoms might start before other symptoms such as fever, body ache, and cough. The virus that causes COVID-19 has also been detected in stool, which reinforces the importance of hand washing after every visit to the bathroom and regularly disinfecting bathroom fixtures. Can COVID-19 symptoms worsen rapidly after several days of illness? Common symptoms of COVID-19 include fever, dry cough, fatigue, loss of appetite, loss of smell, and body ache. In some people, COVID-19 causes more severe symptoms like high fever, severe cough, and shortness of breath, which often indicates pneumonia. 
A person may have mild symptoms for about one week, then worsen rapidly. Let your doctor know if your symptoms quickly worsen over a short period of time. Also call the doctor right away if you or a loved one with COVID-19 experience any of the following emergency symptoms: trouble breathing, persistent pain or pressure in the chest, confusion or inability to arouse the person, or bluish lips or face. One of the symptoms of COVID-19 is shortness of breath. What does that mean? Shortness of breath refers to unexpectedly feeling out of breath, or winded. But when should you worry about shortness of breath? There are many examples of temporary shortness of breath that are not worrisome. For example, if you feel very anxious, it's common to get short of breath and then it goes away when you calm down. However, if you find that you are ever breathing harder or having trouble getting air each time you exert yourself, you always need to call your doctor. That was true before we had the recent outbreak of COVID-19, and it will still be true after it is over. Meanwhile, it's important to remember that if shortness of breath is your only symptom, without a cough or fever, something other than COVID-19 is the likely problem. Can COVID-19 affect brain function? COVID-19 does appear to affect brain function in some people. Specific neurological symptoms seen in people with COVID-19 include loss of smell, inability to taste, muscle weakness, tingling or numbness in the hands and feet, dizziness, confusion, delirium, seizures, and stroke. One study that looked at 214 people with moderate to severe COVID-19 in Wuhan, China found that about one-third of those patients had one or more neurological symptoms. Neurological symptoms were more common in people with more severe disease. Neurological symptoms have also been seen in COVID-19 patients in the US and around the world. Some people with neurological symptoms tested positive for COVID-19 but did not have any respiratory symptoms like coughing or difficulty breathing; others experienced both neurological and respiratory symptoms. Experts do not know how the coronavirus causes neurological symptoms. They may be a direct result of infection or an indirect consequence of inflammation or altered oxygen and carbon dioxide levels caused by the virus. The CDC has added "new confusion or inability to rouse" to its list of emergency warning signs that should prompt you to get immediate medical attention. Is a lost sense of smell a symptom of COVID-19? What should I do if I lose my sense of smell? Increasing evidence suggests that a lost sense of smell, known medically as anosmia, may be a symptom of COVID-19. This is not surprising, because viral infections are a leading cause of loss of sense of smell, and COVID-19 is a caused by a virus. Still, loss of smell might help doctors identify people who do not have other symptoms, but who might be infected with the COVID-19 virus — and who might be unwittingly infecting others. A statement written by a group of ear, nose and throat specialists (otolaryngologists) in the United Kingdom reported that in Germany, two out of three confirmed COVID-19 cases had a loss of sense of smell; in South Korea, 30% of people with mild symptoms who tested positive for COVID-19 reported anosmia as their main symptom. On March 22nd, the American Academy of Otolaryngology–Head and Neck Surgery recommended that anosmia be added to the list of COVID-19 symptoms used to screen people for possible testing or self-isolation. 
In addition to COVID-19, loss of smell can also result from allergies as well as other viruses, including rhinoviruses that cause the common cold. So anosmia alone does not mean you have COVID-19. Studies are being done to get more definitive answers about how common anosmia is in people with COVID-19, at what point after infection loss of smell occurs, and how to distinguish loss of smell caused by COVID-19 from loss of smell caused by allergies, other viruses, or other causes altogether. Until we know more, tell your doctor right away if you find yourself newly unable to smell. He or she may prompt you to get tested and to self-isolate. How long is it between when a person is exposed to the virus and when they start showing symptoms? Recently published research found that on average, the time from exposure to symptom onset (known as the incubation period) is about five to six days. However, studies have shown that symptoms could appear as soon as three days after exposure to as long as 13 days later. These findings continue to support the CDC recommendation of self-quarantine and monitoring of symptoms for 14 days post exposure. How does coronavirus spread? The coronavirus is thought to spread mainly from person to person. This can happen between people who are in close contact with one another. Droplets that are produced when an infected person coughs or sneezes may land in the mouths or noses of people who are nearby, or possibly be inhaled into their lungs. A person infected with coronavirus — even one with no symptoms — may emit aerosols when they talk or breathe. Aerosols are infectious viral particles that can float or drift around in the air for up to three hours. Another person can breathe in these aerosols and become infected with the coronavirus. This is why everyone should cover their nose and mouth when they go out in public. Coronavirus can also spread from contact with infected surfaces or objects. For example, a person can get COVID-19 by touching a surface or object that has the virus on it and then touching their own mouth, nose, or possibly their eyes. How could contact tracing help slow the spread of COVID-19? Anyone who comes into close contact with someone who has COVID-19 is at increased risk of becoming infected themselves, and of potentially infecting others. Contact tracing can help prevent further transmission of the virus by quickly identifying and informing people who may be infected and contagious, so they can take steps to not infect others. Contact tracing begins with identifying everyone that a person recently diagnosed with COVID-19 has been in contact with since they became contagious. In the case of COVID-19, a person may be contagious 48 to 72 hours before they started to experience symptoms. The contacts are notified about their exposure. They may be told what symptoms to look out for, advised to isolate themselves for a period of time, and to seek medical attention as needed if they start to experience symptoms. How deadly is COVID-19? The answer depends on whether you're looking at the fatality rate (the risk of death among those who are infected) or the total number of deaths. So far, influenza has caused far more total deaths this flu season, both in the US and worldwide, than COVID-19. This is why you may have heard it said that the flu is a bigger threat. 
Regarding the fatality rate, it appears that the risk of death with the pandemic coronavirus infection (commonly estimated at about 1%) is far less than it was for SARS (approximately 11%) and MERS (about 35%), but will likely be higher than the risk from seasonal flu (which averages about 0.1%). We will have a more accurate estimate of fatality rate for this coronavirus infection once testing becomes more available. What we do know so far is the risk of death very much depends on your age and your overall health. Children appear to be at very low risk of severe disease and death. Older adults and those who smoke or have chronic diseases such as diabetes, heart disease, or lung disease have a higher chance of developing complications like pneumonia, which could be deadly. Will warm weather slow or stop the spread of COVID-19? Some viruses, like the common cold and flu, spread more when the weather is colder. But it is still possible to become sick with these viruses during warmer months. At this time, we do not know for certain whether the spread of COVID-19 will decrease when the weather warms up. But a new report suggests that warmer weather may not have much of an impact. The report, published in early April by the National Academies of Sciences, Engineering and Medicine, summarized research that looked at how well the COVID-19 coronavirus survives in varying temperatures and humidity levels, and whether the spread of this coronavirus may slow in warmer and more humid weather. The report found that in laboratory settings, higher temperatures and higher levels of humidity decreased survival of the COVID-19 coronavirus. However, studies looking at viral spread in varying climate conditions in the natural environment had inconsistent results. The researchers concluded that conditions of increased heat and humidity alone may not significantly slow the spread of the COVID-19 virus. How long can the coronavirus stay airborne? I have read different estimates. A study done by National Institute of Allergy and Infectious Diseases' Laboratory of Virology in the Division of Intramural Research in Hamilton, Montana helps to answer this question. The researchers used a nebulizer to blow coronaviruses into the air. They found that infectious viruses could remain in the air for up to three hours. The results of the study were published in the New England Journal of Medicine on March 17, 2020. How long can the coronavirus that causes COVID-19 survive on surfaces? A recent study found that the COVID-19 coronavirus can survive up to four hours on copper, up to 24 hours on cardboard, and up to two to three days on plastic and stainless steel. The researchers also found that this virus can hang out as droplets in the air for up to three hours before they fall. But most often they will fall more quickly. There's a lot we still don't know, such as how different conditions, such as exposure to sunlight, heat, or cold, can affect these survival times. As we learn more, continue to follow the CDC's recommendations for cleaning frequently touched surfaces and objects every day. These include counters, tabletops, doorknobs, bathroom fixtures, toilets, phones, keyboards, tablets, and bedside tables. If surfaces are dirty, first clean them using a detergent and water, then disinfect them. A list of products suitable for use against COVID-19 is available here. This list has been pre-approved by the U.S. Environmental Protection Agency (EPA) for use during the COVID-19 outbreak. 
In addition, wash your hands for 20 seconds with soap and water after bringing in packages, or after trips to the grocery store or other places where you may have come into contact with infected surfaces. Should I accept packages from China? There is no reason to suspect that packages from China harbor coronavirus. Remember, this is a respiratory virus similar to the flu. We don't stop receiving packages from China during their flu season. We should follow that same logic for the virus that causes COVID-19. Can I catch the coronavirus by eating food handled or prepared by others? We are still learning about transmission of the new coronavirus. It's not clear if it can be spread by an infected person through food they have handled or prepared, but if so it would more likely be the exception than the rule. That said, the new coronavirus is a respiratory virus known to spread by upper respiratory secretions, including airborne droplets after coughing or sneezing. The virus that causes COVID-19 has also been detected in the stool of certain people. So we currently cannot rule out the possibility of the infection being transmitted through food by an infected person who has not thoroughly washed their hands. In the case of hot food, the virus would likely be killed by cooking. This may not be the case with uncooked foods like salads or sandwiches. The flu kills more people than COVID-19, at least so far. Why are we so worried about COVID-19? Shouldn't we be more focused on preventing deaths from the flu? You're right to be concerned about the flu. Fortunately, the same measures that help prevent the spread of the COVID-19 virus — frequent and thorough handwashing, not touching your face, coughing and sneezing into a tissue or your elbow, avoiding people who are sick, and staying away from people if you're sick — also help to protect against spread of the flu. If you do get sick with the flu, your doctor can prescribe an antiviral drug that can reduce the severity of your illness and shorten its duration. There are currently no antiviral drugs available to treat COVID-19. Should I get a flu shot? While the flu shot won't protect you from developing COVID-19, it's still a good idea. Most people older than six months can and should get the flu vaccine. Doing so reduces the chances of getting seasonal flu. Even if the vaccine doesn't prevent you from getting the flu, it can decrease the chance of severe symptoms. But again, the flu vaccine will not protect you against this coronavirus. - Get your affairs in order, COVID-19 won't wait - Be careful where you get your news about coronavirus - Is there any good news about the coronavirus pandemic? - Allergies? Common cold? Flu? Or COVID-19? A Harvard infectious diseases doctor looks at COVID-19 (recorded 3/3/20) Dr. Todd Ellerin is on the front lines of infectious disease containment and mitigation as the director of infectious diseases at South Shore Health in Weymouth, Massachusetts. He's an instructor at Harvard-affiliated Brigham and Women's Hospital. We spoke to him this week to get an update on the rapidly developing story surrounding the coronavirus Covid-19. Coronavirus status report: Harvard public health expert Dr. Ashish K. Jha fills us in on where we are headed (recorded 3/19/20) The COVID-19 outbreak has caused markets to collapse and worldwide health systems to become overwhelmed. When there's a global pandemic, it's nice to hear from the steady, transparent and yes even reassuring voice of experts on the front lines. We spoke to Dr. 
Ashish K. Jha, faculty director of the Harvard Global Health Institute. Dr. Jha's recent appearance on the PBS Newshour caused reverberations throughout the federal and state response system. Here's his update. For more information on coronavirus and COVID-19, see the Harvard Health Publishing Coronavirus Resource Center. Image: gemphotography/Getty Images
Malaysia, a tropical country in Southeast Asia, is home to a high level of biodiversity, including many unique and endemic species of reptiles. These reptiles are under constant threat, with some already extinct in their natural ranges. The reptiles form a major tourist attraction, while some are exploited for food and others are bred as pets. For some of these reptiles, conservation measures have been put in place, while for others no effort has been made.
White-Fronted Water Snake (Amphiesma flavifrons)
The white-fronted water snake is a nonvenomous colubrid endemic to Borneo in Sabah and Sarawak. The snake is mainly seen swimming in rivers with its head above the water. Its total body length is about 21 inches, with the tail making up about 7 inches of that length. The snake has a slender body with around 19 mid-body scales, about 19 ventrals, and subcaudals numbering between 92 and 101. The dorsal body has an olive-gray coloration with a cream-yellow spot on the snout. The snake feeds on frogs, frog eggs, and tadpoles.
Alfred's Blind Skink (Dibamus alfredi)
Alfred's blind skink is a species of blind lizard which occupies tropical and subtropical forests occurring at high altitudes of above 1,000 meters. In Malaysia, the lizard is found in the Malayan rain forests, Bukit Besar, Na Prado, and Palau Tioman. The lizard is limbless, but males have short hind legs used for mating. The lizard lacks exposed ears. Its body is tiny, and it lives mainly underground, making it look like a worm.
Malayan Snail-Eating Turtle (Malayemys macrocephala)
The turtle is a carnivorous reptile that feeds mainly on snails, and sometimes dines on earthworms, aquatic insects, crustaceans, and small fish as well. In Malaysia, the turtle occupies the extreme northern Peninsula. The turtle lays a clutch of 4 to 6 eggs which are incubated for about 167 days. Males take about three years to reach maturity and females about five years. The habitats of this turtle include the muddy bottoms of freshwater sources with plenty of vegetation and very little current, such as streams, canals, marshes, and rice paddies. The snail-eating turtle has been ranked as vulnerable due to over-exploitation for food and habitat destruction caused by pollution. The export of the turtle has been regulated in Malaysia to conserve it.
Twin-Barred Tree Snake (Chrysopelea pelias)
This tree snake is a rarely seen, oviparous snake with beautiful patterns on its reddish upper body: black-edged white bars, white-speckled light brown flanks, and a yellow-white ventral surface. The snake has a quiet temperament and is mildly venomous. The snake glides by stretching its body into a flattened strip using its ribs and can cover a horizontal distance of 100 meters in a single glide. In Malaysia, the snake is found in Malaya, Penang Island, Palau Tioman, and East Malaysia. Though the species is rarely seen, it is considered to be of Least Concern due to its wide distribution across the Malay Peninsula and its tolerance of altered habitats. Threats to its population include habitat loss and degradation, and no conservation measures have been put in place.
Other Notable Reptiles of Malaysia
Peninsular and archipelagic Malaysia alike house many species of snakes, lizards, turtles, crocodiles, and other reptiles. Agricultural activities, hunting, and over-exploitation are the major threats facing the reptiles in Malaysia.
Other reptiles native to Malaysia include the Siamese crocodile (which is a critically endangered species), the checkered keelback, the reticulated python, Dumeril’s monitor, the Malayan forest gecko, and the false gharial. Native Reptiles Of Malaysia |Native Reptiles of Malaysia||Scientific Name| |White-Fronted Water Snake||Amphiesma flavifrons| |Alfred's Blind Skink||Dibamus alfredi| |Malayan Snail-Eating Turtle||Malayemys macrocephala| |Twin-Barred Tree Snake||Chrysopelea pelias| |Siamese Crocodile||Crocodylus siamensis| |Checkered Keelback||Xenochrophis piscator| |Reticulated Python||Python reticulatus| |Dumeril's Monitor||Varanus dumerilii| |Malayan Forest Gecko||Cyrtodactylus pulchellus| |False Gharial||Tomistoma schlegelii|
What is drought? Drought is the result of a reduction in natural precipitation below expected levels over an extended period of time, usually a season or more in length. It is regarded as a normal phenomenon that occurs in all climatic regions, including regions with high average rainfall, and it can have drastic and long-term effects. Droughts are among the most complex of all natural hazards, as it is difficult to determine when one begins and when it ends. Drought has a large impact on farming; it can affect cropping, grazing land, edible plants and even trees. One of the major hazards associated with drought is wildfire, which arises when dry grass is exposed to intense heat, causing ignition of this biomass (Wayne, 1965). How to prepare for and survive even the worst drought?
1. Observing early warnings by meteorologists
Farmers can approach meteorologists (scientists who deal with climate and weather predictions) to better understand local and global weather patterns and forecasts. Farmers can also set up instruments on the farm to monitor rainfall and temperature changes, which will assist in predicting and avoiding the effects of drought (Wayne, 1965).
2. Reduce livestock
Farmers should reduce the number of livestock on their farm. This will assist in reducing grazing pressure on the rangeland and will also promote an available and abundant feed supply, giving farmers an opportunity to prepare other drought-management measures. During this process, it is ideal to get rid of young stock, animals close to marketable condition, castrated animals and old, aging animals (also known as fillers) (Rothauge, 2001).
3. Accumulating financial reserves
At this stage, income is obtained from the emergency sale of "filler" animals. In anticipation of a drought, this emergency money can be used for other drought-management measures such as buying feed for the drought.
4. Building a fodder bank
A fodder bank is an accumulation of feed that will be used as emergency feed in times when natural grazing is scarce. The idea is to preserve feed for use during harsh times.
5. Drought-resistant fodder crops
This approach looks at growing cheap and drought-resistant fodder crops that can be harvested and stored for later use.
6. Store water
Use water conservation practices that help you lose less water and encourage infiltration of water into the soil.
7. Supplemental feedstocks
If the drought persists, consider the prices of supplemental feedstocks that could stretch the available forage in the pasture. In extreme drought conditions, by-products not usually fed to livestock and failed crops that were intended to be harvested can be used as feed. It is crucial that farmers and producers understand the use of the feed and whether it may have been exposed to chemicals.
Drought is a serious concern for farmers, as it affects the production process. There are ways of surviving through drought events and ensuring less loss of livestock and less damage to grazing lands.
Rothauge, A. (2001). Drought Management Strategies for Namibian Ranchers. AGRICOLA, Windhoek, 91-105.
Wayne, C. P. (1965). Meteorological drought. Res. Pap, 45, 58.
Laterite (from the Latin word later, meaning "brick" or "tile") is a surface formation that is enriched in iron and aluminum. Found mainly in hot, wet tropical areas, it develops by intensive and long-lasting weathering of the underlying parent rock. Laterite formations in non-tropical areas are products of former geological epochs. Some laterites are valuable for their ore content. Some hardened varieties have been used to build houses, roads, and other structures. In addition, solid lateritic gravel may be found in aquaria where it favors the growth of tropical plants. Nearly all kinds of rocks can be deeply decomposed by the action of high rainfall and elevated temperatures. The percolating rainwater causes dissolution of primary rock minerals and a decrease of easily soluble elements such as sodium, potassium, calcium, magnesium, and silicon. As a result, there remains a residual concentration of more insoluble elements—predominantly iron and aluminum. In the geosciences, only those weathering products that are most strongly altered geochemically and mineralogically are defined as laterites. They are distinguished from the less altered saprolite, which often has a similar appearance and is also widespread in tropical areas. Both types of formation can be classified as residual rocks. The process of laterite formation has produced some valuable ore deposits. For example, bauxite, an aluminum-rich laterite variety, can form from various parent rocks if the drainage is most intensive, thus leading to a very strong leaching of silica and equivalent enrichment of aluminum hydroxides, chiefly gibbsite. Laterites consist mainly of the minerals kaolinite, goethite, hematite, and gibbsite, which form in the course of weathering. Moreover, many laterites contain quartz as a relatively stable, relic mineral from the parent rock. The iron oxides goethite and hematite cause the red-brown color of laterites. Laterites can be soft and friable as well as firm and physically resistant. Laterite covers usually have a thickness of a few meters, but occasionally they can be much thicker. Their formation is favored by a slight relief that prevents erosion of the surface cover. Lateritic soils form the uppermost part of the laterite cover. In soil science, they have been given specific names, such as oxisol, latosol, and ferallitic soil. Lateritization of ultramafic igneous rocks (serpentinite, dunite, or peridotite containing about 0.2-0.3 percent nickel) often results in a considerable nickel concentration. Two kinds of lateritic nickel ore need to be distinguished: In pockets and fissures of the serpentinite rock, green garnierite can be present in minor quantities, but with high nickel content—mostly 20-40 percent. It is bound in newly formed phyllosilicate minerals. All the nickel in the silicate zone is leached downward from the overlying goethite zone. Absence of this zone is due to erosion. Laterites are economically most important for ore deposits, such as bauxite. In addition, strong, hardened varieties of laterite are sometimes cut into blocks and used as brickstones for building houses. Khmer temples in Cambodia were often constructed of laterite, but by the twelfth century, Khmer architects had become skilled and confident in the use of sandstone as the main building material. Most of the visible areas at Angkor Wat are of sandstone blocks, with laterite used for the outer wall and for hidden structural parts that have survived for over 1,000 years. 
Hardened laterite varieties are also used for the construction of simple roads (laterite pistes). Nowadays, solid lateritic gravel is readily placed in aquaria, where it favors the growth of tropical plants.
What is Hadoop? Apache Hadoop is an open source software framework used to develop data processing applications which are executed in a distributed computing environment. Applications built using HADOOP are run on large data sets distributed across clusters of commodity computers. Commodity computers are cheap and widely available, and they are mainly useful for achieving greater computational power at low cost. Similar to data residing in a local file system of a personal computer, in Hadoop data resides in a distributed file system, which is called the Hadoop Distributed File System. The processing model is based on the 'Data Locality' concept, wherein computational logic is sent to the cluster nodes (servers) containing the data. This computational logic is nothing but a compiled version of a program written in a high-level language such as Java. Such a program processes data stored in Hadoop HDFS.
Hadoop EcoSystem and Components
Below diagram shows various components in the Hadoop ecosystem-
Apache Hadoop consists of two sub-projects –
Hadoop MapReduce: MapReduce is a computational model and software framework for writing applications which are run on Hadoop. These MapReduce programs are capable of processing enormous data in parallel on large clusters of computation nodes.
HDFS (Hadoop Distributed File System): HDFS takes care of the storage part of Hadoop applications. MapReduce applications consume data from HDFS. HDFS creates multiple replicas of data blocks and distributes them on compute nodes in a cluster. This distribution enables reliable and extremely rapid computations.
Although Hadoop is best known for MapReduce and its distributed file system (HDFS), the term is also used for a family of related projects that fall under the umbrella of distributed computing and large-scale data processing. Other Hadoop-related projects at Apache include Hive, HBase, Mahout, Sqoop, Flume, and ZooKeeper.
Hadoop has a master-slave architecture for data storage and distributed data processing using the MapReduce and HDFS methods.
The NameNode represents every file and directory that is used in the namespace.
A DataNode helps you manage the state of an HDFS node and allows you to interact with its blocks.
The master node allows you to conduct parallel processing of data using Hadoop MapReduce. The slave nodes are the additional machines in the Hadoop cluster which allow you to store data and conduct complex calculations. Moreover, every slave node comes with a Task Tracker and a DataNode. This allows the processes to be synchronized with the NameNode and Job Tracker respectively. In Hadoop, the master or slave system can be set up in the cloud or on-premises.
• Suitable for Big Data Analysis
As Big Data tends to be distributed and unstructured in nature, HADOOP clusters are best suited for analysis of Big Data. Since it is processing logic (not the actual data) that flows to the computing nodes, less network bandwidth is consumed. This concept is called the data locality concept, and it helps increase the efficiency of Hadoop-based applications.
• Scalability
HADOOP clusters can easily be scaled to any extent by adding additional cluster nodes and thus allow for the growth of Big Data. Also, scaling does not require modifications to application logic.
• Fault Tolerance
The HADOOP ecosystem has a provision to replicate the input data onto other cluster nodes. That way, in the event of a cluster node failure, data processing can still proceed by using data stored on another cluster node.
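To make the MapReduce model described above more concrete, here is a toy, single-process word-count sketch. It is written in Swift only to match the other code in this document and is purely illustrative: real Hadoop MapReduce jobs are written against Hadoop's Java APIs and run distributed over HDFS data blocks, and the sample documents below are made up.

// Map phase: each input record is turned into (word, 1) pairs.
let documents = [
    "hadoop stores data in hdfs",
    "mapreduce processes data stored in hdfs"
]
let mapped: [(String, Int)] = documents.flatMap { line in
    line.split(separator: " ").map { (String($0), 1) }
}

// Shuffle phase: group the intermediate pairs by key (the word).
let grouped = Dictionary(grouping: mapped, by: { $0.0 })

// Reduce phase: sum the values for each key.
let counts = grouped.mapValues { pairs in pairs.reduce(0) { $0 + $1.1 } }

print(counts) // e.g. ["data": 2, "hdfs": 2, "hadoop": 1, ...]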
The topology (arrangement) of the network affects the performance of a Hadoop cluster as its size grows. In addition to performance, one also needs to care about high availability and the handling of failures. In order to achieve this, Hadoop cluster formation makes use of network topology. Typically, network bandwidth is an important factor to consider while forming any network. However, as measuring bandwidth can be difficult, in Hadoop a network is represented as a tree, and the distance between nodes of this tree (the number of hops) is considered an important factor in the formation of a Hadoop cluster. Here, the distance between two nodes is equal to the sum of their distances to their closest common ancestor. A Hadoop cluster consists of data centers, racks, and the nodes that actually execute jobs. Here, a data center consists of racks and a rack consists of nodes. The network bandwidth available to processes varies depending upon their location. That is, the available bandwidth becomes smaller as we move away from-
- Processes on the same node
- Different nodes on the same rack
- Nodes on different racks of the same data center
- Nodes in different data centers
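As a hedged illustration of the distance rule just described (not Hadoop's actual Java API), the small Swift sketch below treats a node location as a path of the form /datacenter/rack/node, which is an assumed naming scheme, and counts hops from each node up to their closest common ancestor.

// Distance between two nodes = hops from each node up to their closest common ancestor, summed.
func networkDistance(_ a: String, _ b: String) -> Int {
    let pathA = a.split(separator: "/")
    let pathB = b.split(separator: "/")
    // Length of the shared prefix = depth of the closest common ancestor.
    var common = 0
    while common < min(pathA.count, pathB.count), pathA[common] == pathB[common] {
        common += 1
    }
    return (pathA.count - common) + (pathB.count - common)
}

// Same node: 0, same rack: 2, same data center: 4, different data centers: 6.
print(networkDistance("/d1/r1/n1", "/d1/r1/n1")) // 0
print(networkDistance("/d1/r1/n1", "/d1/r1/n2")) // 2
print(networkDistance("/d1/r1/n1", "/d1/r2/n3")) // 4
print(networkDistance("/d1/r1/n1", "/d2/r3/n4")) // 6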
By identifying a protein that restricts the release of HIV-1 virus from human cells, scientists believe they may be closer to identifying new approaches to treatment. The research is published in the advance online edition of Nature Medicine.

Scientists have known that most human cells contain a factor that regulates the release of virus particles, but until now they have been uncertain about the factor’s identity. Now a research team from Emory University School of Medicine, Vanderbilt University School of Medicine, and Mayo Medical School has identified CAML (calcium-modulating cyclophilin ligand) as the cellular protein that inhibits the release of HIV particles. CAML works by inhibiting a very late step in the virus lifecycle, leading to the retention of HIV particles on the membrane of the cell. The virus has developed a means of counteracting CAML through the action of the viral Vpu protein. When Vpu is absent, HIV particles do not detach from the plasma membrane and instead accumulate, held by a protein tether at the cell surface.

When the research team depleted CAML in human cells in the laboratory, they found that Vpu was no longer required for the efficient exit of HIV-1 particles from the cell. When they expressed CAML in cell types that normally allow particles to exit freely, the particles remained attached to the cell surface.

“This research is important because it identifies CAML as an innate defense mechanism against HIV,” says senior author Paul Spearman, professor of pediatrics (infectious diseases) at Emory University School of Medicine. “We are continuing to work on the mechanism that Vpu uses to counteract CAML and on defining exactly how CAML leads to virus particle retention on the infected cell membrane. We hope this will lead us to new treatments.”
Most of the vegetation types on the reserve are adapted to periodic veld fires; the exceptions are the indigenous forest patches in the kloof areas. The probability of fire spread is expected to be greatest during the hottest and driest four-month period of the year, from December to the end of March. The summer period from October to March is also the period with the strongest winds, when westerlies, easterlies and south-westerlies predominate.

Khoi pastoralists used patch burning from about 2,000 years ago, and before them the San, who lived in the vicinity, had also used fire, although the extent to which they changed the pattern and frequency of lightning-caused fires is unknown. Intensive patch burning was also practised by most farmers in the area, to encourage the growth of grass, before the reserve was proclaimed. This practice still occurs today on the land of our northern neighbours, to encourage the proliferation of the “sewejaartjies” for commercial gain. This can have detrimental consequences if uncontrolled fires escape the neighbouring farms and sweep into Vogelgat.

The maintenance or restoration of a “natural” fire regime is the objective on the reserve. Although uncertainty and disagreement still exist regarding the details, it is now widely recognised that a “natural” fire regime must incorporate variability in seasonality, frequency and intensity in order to maintain biodiversity and resilience, including genetic variation, in the long term. The “natural” fire regime will be determined primarily by the inherent rate of recovery of the vegetation after a previous fire, as well as by climatic conditions, which together determine the flammability of the vegetation and thus the potential for the spread of fire. Thus a range of fire intervals (including occasional burning four to five years after a previous fire), as well as occasional fires outside the main “fire season”, must be considered a prerequisite for maintaining the natural biodiversity and resilience of the fynbos ecosystem.

History of fires
Online volunteers are helping to track slavery from space. A new crowdsourcing project aims to identify South Asian brick kilns – frequently the site of forced labour – in satellite images. This data will then be used to train machine learning algorithms to automatically recognise brick kilns in satellite imagery. If computers can pinpoint the location of possible slavery sites, then the coordinates could be passed to local non-governmental organisations to investigate, says Kevin Bales, who is leading the project at the University of Nottingham in the UK.

South Asian brick kilns are notorious sites of modern-day slavery. Nearly 70 per cent of the estimated 5 million brick kiln workers in South Asia are thought to be working there under force, often to pay off debts. But no one is quite sure how many kilns there are in the so-called “Brick Belt” that stretches across parts of Pakistan, India and Nepal. Some estimates put the figure at 20,000, but it may be as high as 50,000. Bales is hoping that his machine learning approach will produce a more accurate number and help organisations on the ground know where to direct their anti-slavery efforts.

It’s great to have an objective tool to identify possible slavery sites, says Sasha Jesperson at St Mary’s University in London. But it is just a start – to really find out how many people are being enslaved in the brick kiln industry you still need to visit every site and work out exactly what’s going on there, she says.

So far, over 4000 potential slavery sites have been identified by volunteers taking part in the project. The volunteers are presented with a series of satellite images taken from Google Earth, and they have to click on the parts of the images that contain brick kilns. Once 15 volunteers have examined each of the nearly 400 images in the data set, Bales plans to teach the machine learning algorithm to recognise the kilns automatically.

He’s already working on the next stage of the project, which will use a similar approach to help identify open-pit mines in countries such as the Democratic Republic of the Congo, which are also often sites of forced labour. But Bales thinks that his machine learning algorithms might have a trickier time categorising open-pit mines than brick kilns. The kilns are usually a distinctive shape and colour, but the mines, which often look like big holes in the ground, can be harder to spot.

“A lot of slavery is visible from space,” says Bales, but image recognition could also be a useful tool for tracking slavery where satellites can’t reach. TraffickCam, a project set up by the social action group Exchange Initiative, uses image recognition to identify sex trafficking in hotel rooms. Visitors to hotels can use TraffickCam to upload an image of the inside of their hotel room to the website’s database. These photographs can then be compared with photos of sex workers that traffickers often post online. Because those photos are often taken in hotel rooms, investigators may be able to use the TraffickCam database to pinpoint the location of a particular photograph. More than 150,000 hotel rooms have been documented in this way.
Using antibiotics on the farm to raise animals contributes to the production of antibiotic-resistant germs, or “superbugs.” All animals carry bacteria in their intestines and on their bodies. Giving antibiotics to animals kills large amounts of bacteria, changing their microbiome and wiping out the regular “good” bacteria too. Because 60% of the antibiotics used in animals are also used to treat human diseases, routine antibiotic use means that, over time, the bacteria become resistant, survive and multiply. If those resistant bacteria are transmitted to people, we have fewer medicines with which to eradicate them. Risks therefore develop for humans when these “superbugs” thrive in animals and are transmitted through our food supply. Over time, more and more infections carried in the food we eat will lack proper treatments. What we choose to eat will shape our risk.

Susceptible and resistant animal pathogens can reach humans through the food supply, by direct contact with animals, or through environmental contamination. ~American Academy of Pediatrics Technical Report

Using antibiotics to treat infections in animals should be encouraged, but using antibiotics to promote rapid growth and weight gain likely should not, as this constitutes overuse. The majority of antibiotics used in raising animals, measured by tonnage, are used for growth promotion and efficiency, meaning they are used to keep meat cheaper, not necessarily safer. Clear data on exactly what percentage is used for disease treatment and what percentage for growth are hard to find.

Antibiotic resistance is considered one of the major threats to the world’s health. ~American Academy of Pediatrics Technical Report

How Bacteria in Animals Get to Our Children
When animals are slaughtered and processed for human consumption, the bacteria they carry can contaminate the meat or other animal products. These bacteria can also get into the environment and may spread to fruits, vegetables or other produce that is irrigated with contaminated water. People can be exposed to resistant bacteria from animals when they:
- Handle or eat meat or produce contaminated with resistant bacteria
- Come into contact with animal poop

To be clear, the problem is not antibiotic residues in the meat (which are restricted by federal law) so much as it is about (1) antibiotic residues entering the water supply and food chain through agricultural run-off or as manure sprayed on crops, and (2) antibiotic-resistant bacteria contaminating the meat and produce that we bring home to our kitchens from the market. ~Dr. Scott Weissman, Infectious Disease Expert

What You Can Do To Improve Food Safety:
- Buy meat raised without antibiotics
- Review the CDC’s food safety recommendations on handling meat in your home
- Look for the following labels when purchasing healthy and antibiotic-free foods
What Makes up a Person's Character?
Character consists of a person's mental and moral dispositions, manifested in his interactions with his environment and with other people. Character is the result of deeply held convictions, many of which form during childhood. External factors, especially trauma, have a major influence on character growth.

Character is made up of a number of inter-related concepts, including morals, values and prejudices. A person's character is a combination of such mental tendencies and the way in which he channels them in his daily interactions. For example, "anal" characters are compulsive and perfectionist, "passive-aggressive" characters bottle up their anger, and "narcissistic" characters display excessive self-centered behavior. Character influences relationships, career choices and interests.

Character is in large part a product of one's childhood environment and relationship with his caregiver. Adverse or hurtful circumstances while growing up lead to negative character traits. Neglect, condescension and spiteful language provoke low self-esteem. Children come to believe that their personal qualities and ways of being are undesirable and worthy of reproach. Thus, they repress these traits and develop feelings of fear, remorse and insecurity. These feelings continue into adulthood, and the individual is often unaware of the cause of his self-harming attitudes.

A healthy character includes such traits as self-discipline and confidence. Fairness and honesty are also healthy attributes, especially when applied both to oneself and to others.