In this piece, we'll explore money and how it has shaped our understanding of value and trust. We'll outline the definition and functions of money; how money is created; money as a social construct; money as a system of trust; and money as we view it today. Money is one of the oldest technologies of human civilisation and is prevalent throughout our lives. We use it on a day-to-day basis for transacting in commerce, so what is it? Money is, and functions as, a: Medium of exchange – used as a medium to buy goods and services; Unit of account – a common standard for measuring relative worth; and a Store of value – it holds value over time. Anything that fulfils these three functions can be considered money. In today's world, we typically transact in fiat money and bank money. Fiat money is the physical currency that is issued and backed by the government and is what you have in your wallets and pockets. Bank money is what you have in your bank account; it is a form of privatised money, commonly referred to as deposits or credits. Historically, we operated on a monetary system backed by gold (the gold standard). The gold standard limited the issuance and quantity of commodity money to the amount of gold held in reserves. Commodity money is paper money backed by something scarce and of value, such as gold or silver.
Sound Money: Although the world has evolved immensely over the last two thousand years, what we consider to be sound money remains in line with the Greeks and Aristotle, who listed four attributes that have essentially remained the same to the current day: It is Durable – not easily destroyed and able to stand the test of time; It is Divisible – when divided into its parts, those parts are interchangeable; It is Portable – can be easily transported and carried around; and It is Intrinsically Valuable & Scarce – holding value within itself, independent of any other object, and scarce. Now, 'intrinsically valuable' is a debatable point when evaluating money throughout history. Fiat currency cannot be said to have intrinsic value, considering that we moved from monetary systems backed by gold to systems backed by the full faith and credit of the government that issues the currency. Gold has traditionally been labelled intrinsically valuable due to its scarcity, its geological composition and its 5,000-year history of accepted value par excellence. Yet we could also say that it is historically valuable, for value, whether intrinsic or not, is dependent on consensus, or in the words of Bettina Greaves, "gold's physical properties are the product of nature, its value is the product of acting men". If we value something, it has value; if we don't, then it does not. Value comes down to how society constructs it.
Money as a Social Construct: Money is an individual's expression of value that is accepted by a collective. We collectively agree that 'something' is of value or not, and without this agreement the individual is left either to find something of collective value with which to participate with others, or to convince the others that their 'something' is valuable and demands participation. Governments are experts at influencing public participation in monetary systems of value.
Marbles as Money: Outside of external intervention or governing decree, our society naturally orientates itself around the transacting of value as defined by the free market's decisions. As children (in a pre-internet era), even though the principles of money were cognitively beyond our understanding, marbles were one of the primary objects of value and repute (in a time before Pokemon cards). For some reason, children would naturally shape their economic desires towards the pursuit of marble empires through rudimentary wealth-accumulation strategies, ranging from barter to gambling through to payments for services and other goods, for example, trading marbles for lunch. As children, there were even loose valuation systems for the marbles: rare marbles were ascribed a higher value, and the common ones a lower value. Without understanding the monetary policies of the society they lived in, the children created their own, as will any other group of humans who participate in a shared economy. Shared economies provide the foundations for collective belief systems that then define the expression of value shared by other market participants in an economy. One of the fascinating considerations of money is that the physical fiat money we use costs either far less or far more to produce than its face value. Due to economies of scale, printing a $100 bill costs a fraction of a dollar, yet minting coins, particularly the lower denominations, can cost more than their face value. The value of the $100 note is established by legal decree, collective agreement and societal acceptance that regardless of its inherent worth and cost of production, $100 will always be $100 (in a healthy economy). This acceptance is granted by legal tender laws, whereby transacting entities within society are required by law to accept it as payment. Money is not inherently valuable until we allocate it value as a social construct.
How Money is Created: What we view as traditional money today can basically be broken down into physical currency – in the form of fiat (government-issued legal tender) notes and coins – and bank deposits created through double-entry accounting, i.e. credits and debits. Central banks and mints are responsible for the creation of the physical currency that we call cash. Cash is private and permissionless and can be transacted outside the purview of the government. For this reason, this type of money has been slowly removed from economies as governments transition their populations into a digital monetary system, i.e. a cashless economy. Beyond the issuance of the nation's physical currency, the Central Bank is also responsible for the nation's monetary policy; price stability and economic growth; regulation and control of liquidity in the economy through the expansion and contraction of the money supply; and for setting the reserve ratio requirements, holding the reserves, and acting as a clearing house for commercial banks. As a final note, the Central Bank also acts as the lender of last resort to financial institutions and banks experiencing liquidity issues, as we saw in the 2008 global financial crisis. The banking system has privatised the creation of money (commonly referred to as deposits, credit or bank money) by increasing the money supply through lending excess reserves. Banks are held to reserve requirements or capital adequacy requirements, where liquid deposits (cash) are held in reserve as dictated by the Central Bank – for example, a 10% reserve requirement to ensure that a bank can meet liabilities in the case of customer withdrawals. Any excess reserves may be lent out. Through double-entry accounting, each new loan recorded as an asset creates a corresponding deposit in the bank's liabilities.
With each loan that is lent out, new money is created. Although banks seemingly create money out of thin air, they are not able to create unlimited money. The money they can create comes down to the money multiplier, as measured against the reserve requirement. If the reserve requirement is 10%, then the money multiplier will be 10, and the money created will in theory be 10 times the original amount deposited. Simplified example: Tom deposits $100 in Bank A | Bank A keeps 10% = $10 in reserves | Bank A lends out $90 to Masha | Masha deposits $90 into Bank B | Bank B keeps 10% = $9 in reserves | Bank B lends out $81 to Polly | and so on. From the original deposit of $100, theoretically, the total money created in the banking system through deposits would equal $1,000. *NOTE: if customers hold physical cash and don't deposit it back into the banking system, or the banks don't lend out all the excess reserves, then the money obviously does not multiply 10 times. Bank lending increases the money supply; customer loan repayments decrease the money supply. As money is created and assigned a value for us to trust and transact with in the traditional sense, other forms of money are also being created on a daily basis that serve as an alternative to the modern monetary system. These forms are always completely dependent on your personal situation, geographical and political location, and the community willing to validate their function as money. One of the most notable monetary innovations of the last millennium came in 2008 through a pseudonymous entity called Satoshi Nakamoto, who introduced Bitcoin to the world through the Bitcoin white paper and then, in January of 2009, the genesis block that gave life to the world's first trustless, permissionless, decentralised, open-source, peer-to-peer digital currency. Please refer to our Bitcoin Primer for more information.
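The multiplier arithmetic above can be sketched in a few lines of Python (a hypothetical illustration of the textbook model; the function and parameter names are our own, and it assumes every dollar of excess reserves is lent out and redeposited):

```python
def total_money_created(initial_deposit, reserve_ratio, rounds=1000):
    """Sum the deposits created as each bank in the chain lends out its excess reserves."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit                 # this deposit now sits on some bank's books
        deposit *= (1 - reserve_ratio)   # the excess is lent out and redeposited
    return total

# A 10% reserve requirement gives a money multiplier of 1 / 0.10 = 10.
print(round(total_money_created(100, 0.10)))  # → 1000
```

In the limit, this geometric series converges to initial_deposit / reserve_ratio, which is exactly where the "multiplier of 10" in the Tom/Masha/Polly example comes from.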
Bitcoin was the first of many decentralised digital/crypto currencies that have since emerged and that are changing the way we understand money, value and trust.
Money as a System of Trust: Trust underpins much of how we walk through the world on a day-to-day basis. We trust that when riding a bike to work, a car is not going to hit us and that drivers will keep to their lanes; we trust that the food we eat is not going to poison us; we trust that our friends and loved ones will stand by our sides through thick and thin; we trust that people tell us the truth when we listen to them; we trust our employers to pay us for our services. Commerce is no different. The commerce lifecycle is based on systems of trust – trust that the currency transacted will have value tomorrow; trust that the goods and services received or delivered will have and continue to hold their value; trust that your money in the bank will still be there tomorrow (well, not in the case of Cyprus); trust that your bank will still be around tomorrow; trust that your government will continue to maintain the relative value of the national currency; and trust that you are able to spend your money on what you want, which is not always the case. In the modern world we trust the government and we trust the fiat system of money. Fiat dollars that have flooded the markets through inflation, exuberance and unchecked bank lending are considered to be money and of value not because they have intrinsic value, but because of the trust in the issuing government – In State we Trust. This is a devolution from what has been the money par excellence over the ages, that is, the value and trust in the precious metals of gold and silver. Gold is created in supernovas and in the collisions of neutron stars, which release gold particles into space that eventually find their way to Earth.
The rarity of its geological composition and the 5,000-year history of trust in its value ensure that it earns its place at the table of intrinsic value – which is why, throughout the ages – In Gold we Trusted. With the proliferation of the internet and information, the world is undergoing an awakening of value. Trust in the State-run fiat monetary system is becoming increasingly misplaced, led by uncertain supply dynamics (quantitative easing) at best, and at worst, abuse of State trust by governments in Zimbabwe and Venezuela, leading to hyperinflation and the decimation of the people's wealth. There is now an increased focus for individuals and families on building and preserving their wealth outside of traditional financial systems in order to live, survive, and thrive. There has been a continued erosion of trust in financial institutions and the governments that were, and are, supposed to be regulating them – 2008 taught us a lot about trust. A $700 billion US government bailout was deemed necessary to stop the collapse of the global financial system, which essentially creates a moral hazard that is ultimately supported by the taxpayer. Even trust in the physical cash in our pockets is currently in question, as some nations are going, or seeking to go, cashless – for example, Sweden (currently) and Australia (in the near future). From the removal of high denominations of cash through to the outright removal of all cash payments, there will be a digital transformation in how we live in a cashless society.
What is Money Today? When we think of money we think of the 'currency' / 'fiat' that we use on a day-to-day basis, yet this is merely one form of money. Notably, it is the form of money that we as citizens of a nation are legally required to use for goods and services.
The US dollar has value because the US and the rest of the world each subscribe to and put faith in the belief that the dollar today will be worth a dollar tomorrow. It also holds value due to the 'full faith and credit' backing of the US government, and the expectation that the government, through trade or war, will continue to ensure it retains relative value. The US dollar and the Federal Reserve have redefined our understanding of money over the last century. The US dollar went Alpha in 1944 when representatives of 44 countries met in Bretton Woods, New Hampshire, to establish a new global monetary system – the Bretton Woods Agreement. It essentially replaced the gold standard and pegged the participating nations' currencies to the US dollar, which was itself pegged to gold at an exchange rate of $35 USD per ounce. Pre-Bretton Woods, most countries followed the gold standard, which by its nature (currency supply can only increase if gold supplies increase) held inflation and government over-spending in check. However, due to the ubiquity of military wars in the 20th century, governments abandoned the gold standard in order to print enough to pay for their military spending, thereby blowing out their debt and creating increased levels of inflation. The Bretton Woods Agreement collapsed in 1971 when Nixon and the US abandoned the gold standard. Europe and the rest of the world had lost faith in the US being able to honour the convertibility of US dollars into gold. The charge was led by France, where De Gaulle, in favour of international monetary reform and a return to the gold standard, exchanged France's US dollars for US gold reserves, thereby reclaiming the monetary rights affirmed by the Agreement.
Continuing Thoughts on Money: Money is what allows us to express our needs and our desires. It is a financial representation of where we have chosen to spend our time, energy and productivity. It is the means by which we buy and sell goods and services and attain wealth.
It is a communication of value and a system of trust. There are many forms of money and financial instruments in which to accumulate wealth, and we believe in having the freedom, the education and the access to create our own individual financial expressions. We believe that alternative forms of money have a place in the modern world, complementing the existing system. Traditional money and financial systems will continue to operate as they have, aiming to keep pace with the digital age that is breaking down borders and opening access to billions of people globally who, until now, were excluded from the financial club. This digital age has ushered in the convenience of having not only the world's information at your fingertips, but also a global system of digital finance that everyone on a network can participate in. The digital age is providing new mechanisms and perspectives on value and trust. Money is a social construct and a consensus of what constitutes value. We can choose to continue buying into the existing construct of financial reality, we can choose to transform or change it, or we can use one of the greatest weapons in our personal arsenal: we can choose to exit it. One of the most liberating truths in life is that we have choice. And so it is with this choice that we can choose among a multitude of financial realities to buy into, be that fiat money, physical cash, bank money, Bitcoin and digital currencies, gold, barter systems and so on. We believe in alternatives and choices.
Most ESL students step into the classroom for the first time brimming with enthusiasm and ready to improve their English communication skills. And by "communication" they're thinking of speaking skills. Which is great! But what happens when they're faced with the challenge of communicating in writing? In today's world, where so much of our communication and interaction is digital, ESL students need to learn to communicate well in writing as well as orally. Once you've gained some experience as an ESL teacher, you start seeing writing mistakes that pop up again and again – mistakes which are typical of ESL students in particular, and which are connected to the fact that English is their second, not native, language. As teachers, it is not enough to identify these mistakes; we must deploy all of the strategies and tools we have in our arsenal to bust these mistakes once and for all. The 10 Writing Mistakes ESL Students Make Most Often: Native English speakers who speak nothing but English often make the homophone mistake – and it happens just as often with ESL students. Homophones are words that are pronounced the same way but have different meanings. Classic examples are: their, they're, there; new and knew; here and hear; its and it's, etc. Punctuation can be a problem for those honing their writing skills in any language, and ESL learners are no different. The most common problem is the use of the comma (,). Students either don't use it at all or insert it everywhere. Semicolons (;) are also often misused. Different languages have different rules for capitalization. In Spanish, for example, the names of languages and the adjectives for nationalities are not capitalized, which is why students often write english instead of English. The use of definite and indefinite articles is also no man's land. ESL learners typically omit them entirely when they should be used, or use them when they're not necessary.
This is when students write something like: The fruits and vegetables are good for you. Word order is tricky, particularly when there are several adjectives involved. Consider this typical mistake: I have blond long hair. Students forget that the length of the hair has to be mentioned before the color. Even very advanced ESL learners, who make very few grammar mistakes, will on occasion choose the wrong word, or one that is not entirely wrong but may not be the best choice. Consider this example: I am looking for an economic hotel. The word that is misused is economic; it should be an economical hotel. I drove quick to my house. What's wrong with this sentence? The student should have driven quickly or fast. Quite often students forget to use the correct adverb. Comparatives and Superlatives: Raise your hand if you're tired of correcting writing assignments that are full of "more better", "more bad" or "expensiver". Yeah. I thought so. The dog was sleeping on the cat's bed. You might think this sentence is correct. Except the writer is referring to two cats who share a bed, not just one. Students have trouble with possessives of plural nouns (the cats' bed), as well as nouns that end in s (Socrates' ideas). They also use apostrophes when they shouldn't (CDs is the plural of CD). This is one of the mistakes that crops up again and again in ESL students' writing assignments. I'm talking about sentences like: - People is excited about the World Cup. - She have two dogs and one cat. - He speak English fluently. How to Bust These Mistakes: You carefully correct each and every mistake and hand the writing assignment back to your students. They look over all of your corrections. They see how many mistakes they've made. But this is not enough. If this is all they do, your students are doomed to keep repeating these mistakes again and again.
In order to bust these writing mistakes once and for all, your students must go from being passive receivers of your corrections to actively recognizing and correcting their own mistakes. How do you get students to correct their own mistakes (engage in self-correction)? You can go about this in a number of ways: - If the writing assignment is short, simply write down and circle the number of mistakes they've made, which they must look for and correct. - For longer assignments, you can break it down by paragraph or by type of mistake, for example preposition mistakes, verb tense mistakes, vocabulary mistakes, spelling mistakes, etc. - You can choose to correct some of the mistakes, then single out a particular type for them to correct. For example, you can correct all of the spelling mistakes but make them correct the grammar mistakes. - Underline entire problem sentences and have them change or rephrase them. - Do what makes sense for your class and for the type of writing assignment. Just make sure they are self-correcting something. By having students correct their own writing mistakes, you're forcing them to take a closer look at their writing, to dig deeper. Self-correction increases awareness, and it is precisely the kind of awareness that will help them stop making these mistakes. It also boosts confidence. And we all know that confident ESL learners are happy learners. Which other typical writing mistakes would you add to this list? Share them below!
Giuseppe Zaccai from the Institut Laue-Langevin (ILL) in Grenoble, France, describes how he and his co-workers have uncovered a way to explore water dynamics in the cell interior using neutron scattering and isotope labelling. Compared to other liquids, water has extraordinary properties. As water is essential for all living organisms, its properties also play a truly vital role at the level of molecular biology – a discipline which seeks to understand life processes at the levels of atoms, molecules and their interactions. The hydrophobic effect is one such property. It describes the observation that in a liquid solution, water and oil do not mix. The reason is that water molecules can form hydrogen bonds with each other and other molecules (which are called hydrophilic), but not with oil-like molecules (which are called hydrophobic) (for more on this topic, see Cicognani, 2006). This has fundamental consequences in molecular biology. The hydrophobic effect leads to the spontaneous organisation of lipid molecules to form the membranes that surround cells. It also contributes to the formation of three-dimensional structures in proteins, RNA and DNA, favouring their folding in such a way as to hide the hydrophobic parts of their structures from contact with water and to expose the hydrophilic parts. The hydrophobic effect, as it is understood, depends critically on the special dynamic molecular properties of liquid water. The implication of this effect in membrane formation and macromolecular folding was deduced from test-tube experiments on solutions in which water is clearly in the liquid state. There have been suggestions, however, that water in cells is not in its normal liquid state but is somehow ‘tamed’ and cannot move about as freely inside the viscous intracellular environment, a thick soup of proteins and other molecules. It was therefore very important to measure the dynamic state of water directly in living cells. 
This was not an easy task, but the special properties of the neutron helped scientists from my research group at the ILL, as well as researchers at the Institut de Biologie Structurale CEA-CNRS-UJF in Grenoble, France, to tackle it successfully. The first experiments on water dynamics in living cells were performed at ILL on cells from organisms that live in the extremely salty conditions of the Dead Sea (Tehei et al., 2007). Salt is used as a preservative because at high concentration it usually kills micro-organisms. The Dead Sea halophilic (salt-loving) organisms evolved to cope with the very high salt concentration by having macromolecules with markedly increased hydrophilic surfaces. These surfaces affect water dynamics inside the cell, leading to the observation of a major 'slow water' component in the Dead Sea cells. Clearly, if this were true for all organisms, it would lead to a complete reassessment not only of the hydrophobic effect, but also of the role of water in biology in general. It was therefore essential to test whether this behaviour was special to the halophilic organisms or could be generalised (Jasnin et al., 2008). At ILL, scientists use neutron beams to investigate a variety of solid and liquid materials. In neutron spectrometry experiments to measure dynamics (how atoms move in a substance), the neutrons in the beam collide with the atoms to be studied, like billiard balls bouncing off each other. Neutrons and atoms exchange energy and momentum – the neutrons are scattered. Thus, measuring how these values change for the neutrons after the collision gives us an indication of the energy and momentum of the atoms they encountered, and therefore of how these atoms move. But how can we distinguish between the motions of different atoms in a complex sample, such as a cell that contains not only water but also many other molecules whose atoms move in different ways? Neutrons are scattered with different power by different atoms.
To study complex systems, scientists use a trick to reduce the scattering power of everything they do not want to measure. Hydrogen scatters neutrons much more strongly than all other atom types (about 10-100 times, depending on which atom type you compare it with). In contrast, deuterium, a heavy isotope of hydrogen (its nucleus contains one neutron in addition to one proton), scatters neutrons about 40 times more weakly than hydrogen. Exploiting this property, scientists replace hydrogen with deuterium in the components of a complex system they are not interested in and render them practically 'invisible'. The contributions to the scattering signal by the molecules that contain deuterium are negligible; we 'see' only the motions of the molecules that contain hydrogen. Marion Jasnin and her co-workers used this trick to analyse water dynamics in vivo in the cytoplasm of Escherichia coli bacteria, taking advantage of the neutron sources at ILL and ISIS, UK. Studying physics with biological samples is always a difficult task, and human cells are very delicate and complicated to work with. E. coli were a good alternative as they are easier to handle, yet live in the human gut under similar physiological conditions of temperature and salinity as our own cells; and remember, the adaptation of the cytoplasm to high salinity was thought to be the cause of the 'slowed down' water in the halophiles. To replace the hydrogen atoms in the proteins and other cellular macromolecules by deuterium, E. coli cells were grown on deuterated nutrients and deuterated (heavy) water. For the measurements, they were then centrifuged gently and the heavy water was replaced with normal (hydrogen-containing) water, diluting out the deuterium-containing intracellular water but not the deuterium in the macromolecules. In such a sample, after diluting out, the neutron scattering signal comes mainly from the intracellular water.
The pellet of living cells was placed in an aluminium sample holder. Aluminium, like many metals, is largely transparent to neutrons – though obviously not to light or X-rays. Neutron energy and momentum are determined before and after scattering by measuring their wavelength (in the Ångström range). The two main methods used to do this (depending on the spectrometer) are 'time of flight', in which the neutron velocity (inversely proportional to wavelength; velocity is in the km/s range for Ångström wavelengths) is measured over a determined path; and diffraction from crystals (according to Bragg's law, only a certain wavelength is diffracted for a given crystal periodicity and angular setting – read more about this law in Hughes, 2007 and Cornuéjols, 2009). Find out more about these methods online. Heat is motion: the speed at which atoms in a material move depends on the temperature. However, atoms in one material can also move at different speeds at the same temperature, depending on how they are bound to other atoms around them: water molecules are known to be slowed down by direct contact with macromolecules such as proteins or DNA. The question the scientists asked was: do cellular water molecules that are not in direct contact with macromolecules move as they would normally in liquid water, or are they, too, significantly slowed down? Each neutron spectrometer is specialised for measuring atomic motions occurring within a given length-scale and time-scale window.
Basically, there are three types: those measuring motions of about 1 Ångström amplitude occurring in about 1 picosecond (10⁻¹² s), which corresponds to the thermal motion of hydrogen atoms in liquid water at room temperature (note that this corresponds to speeds of about 100 m/s); those measuring 1-10 Ångström amplitudes in a nanosecond (10⁻⁹ s), which would pick up 'slowed down' water; and an intermediate type for 1-10 Ångström amplitudes in 100 picoseconds. By using a picosecond and a nanosecond spectrometer, Marion Jasnin and her co-workers established that water dynamics within a bacterial cell are similar to those in pure water. Water molecules rotate as well as diffuse linearly in the liquid, and a slightly slowed-down rotational diffusion was measured. From the fraction of hydrogen atoms that moved more slowly and the average surface of macromolecules inside an E. coli cell, the scientists calculated that this fraction corresponds to a single layer of water molecules next to the macromolecules that is slowed down, while the rest flows as freely as in liquid water. What happens inside the cell, then, is similar to what is found around the islands of the Venetian lagoon in Italy. The water close to the macromolecules (islands) is held up, whereas in between – as little as one layer of water molecules away from the macromolecules – the water regains its fluidity. This is in contrast to the 'taming' hypothesis, which claimed that all the water in the cell would be slowed down. Following up on the E. coli experiments, the group has now also managed to explore water dynamics in human red blood cells at neutron sources in Germany (FRM II) and Switzerland (PSI). The same behaviour as in E. coli was confirmed, with liquid water flowing freely beyond the first layer in contact with haemoglobin, the main protein contained in these cells (Stadler et al., 2009).
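The "velocity inversely proportional to wavelength" relation used in time-of-flight spectrometry is the de Broglie relation, v = h/(mλ). As a quick sanity check (a hypothetical snippet, not from the article; the constants are standard CODATA values), a 1 Ångström neutron travels at roughly 4 km/s:

```python
H = 6.62607015e-34           # Planck constant, J*s
M_NEUTRON = 1.67492749e-27   # neutron mass, kg

def neutron_velocity(wavelength_m):
    """de Broglie relation v = h / (m * lambda): shorter wavelength -> faster neutron."""
    return H / (M_NEUTRON * wavelength_m)

v = neutron_velocity(1e-10)   # 1 Angstrom, expressed in metres
print(f"{v / 1000:.2f} km/s")
```

This lands at about 3.96 km/s, which is the "km/s range for Ångström wavelengths" quoted above, and explains why timing a neutron over a known flight path is enough to determine its wavelength.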
Scientists can heave a sigh of relief – and continue to do their experiments in liquid water solutions, thanks to this confirmation that such experiments are a valid model for what happens in cells.
Why are equations used to represent many similar situations? a. because an equation is the only way to summarize data from an experiment b. to simplify work without needing to memorize specific results c. because scientists disguise their work in mathematical terms to keep others from stealing their work d. because many situations are very similar; their answers become the same
None of the four answers is clearly correct (and options b. and c. are simply wrong). The human brain assimilates data, makes "sense" of outward stimuli, by comparing unknowns to knowns. The equation is a language, usually mathematical but sometimes simply logical, that says "this piece of evidence is equal to that piece of evidence, and therefore can be known by substituting one for the other" ("A difference that makes no difference is no difference."). Mathematics deals with measurable quantities, "quantifiable information" (distances, weights, numbers per minute, etc.), while logic deals with reasoning, inductive and deductive, whose information comes in language code (although there is also a formal, symbol-driven expression of logic). So answer choice a. is fine except for the word "only"; answer choice d. is closer but awkwardly worded. A better answer would be: "Equations are used because they demonstrate equalities that can then be exchanged for each other." Posted by wordprof on June 6, 2012 at 7:52 PM (Answer #1)
Organic Compounds Almost all the molecules a living cell makes are composed of carbon atoms. Carbon is unparalleled in its ability to form large, diverse molecules. Next to water, carbon-containing compounds are the most common substances in living organisms. Organic compounds are compounds containing carbon, often synthesized by cells. Carbon A carbon atom has 4 outer electrons in a shell that holds 8. It has a strong tendency to complete its outer shell by sharing electrons with other atoms, forming covalent bonds. – This allows carbon to form up to 4 bonds, or bond with 4 different atoms. Carbon is unique in that it likes to bond with itself, capable of forming long chains with a “carbon backbone” (structure). – This diversity in bonding capability allows for an enormous variety of compounds, all with a variety of functions. Carbon skeletons vary by length; they can be branched or unbranched, they may have double bonds, or they can be arranged in rings. Structure will dictate molecular function. Methane and other compounds composed of only carbon and hydrogen are called hydrocarbons. The chain of carbon atoms in organic molecules is called a carbon skeleton. When two organic compounds have the same molecular formula but differ in the position of their double bond, they are called isomers. Each isomer has unique properties (function). Functional groups (structure) also help determine the properties (function) of organic compounds. – A group of atoms that participate in a chemical reaction is called a functional group. Cells synthesize larger molecules (polymers) from a small list of smaller molecules (monomers). – Anabolism Dehydration synthesis – the process by which cells link monomers to form polymers. The process is the same regardless of the specific monomers (macromolecules). All unlinked monomers have hydrogen atoms (H) and hydroxyl groups (-OH). For each monomer added to a chain, a water molecule (H2O) is removed.
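The water removed in each linkage can be checked by counting atoms; for example, joining two hexose monomers (each C6H12O6, as in forming a disaccharide) releases exactly one water molecule:

C6H12O6 + C6H12O6 → C12H22O11 + H2O

Both sides balance at C12, H24, O12, which is also why a disaccharide ends up one H2O short of the simple 1:2:1 CH2O ratio of its monomers.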
Two monomers contribute to the H2O molecule. As this occurs, a new covalent bond forms, linking the two monomers. Polymers are also broken down into their monomers by the reverse process, hydrolysis (digestion). – Catabolism The process is the same regardless of the specific monomers (macromolecules). Hydrolysis is the reverse of dehydration synthesis. Hydrolysis means to break (lyse) with water (hydro-), and cells break bonds between monomers by adding water to them. In the process, a hydroxyl group from a water molecule joins to one monomer, and a hydrogen joins to the adjacent monomer. Anabolism + Catabolism = Metabolism Life has a simple yet elegant molecular logic: small molecules common to all organisms are ordered into large molecules, or macromolecules. Macromolecules are very large molecules assembled by living organisms through dehydration synthesis: 1. Carbohydrates: polymer made of monosaccharides 2. Lipids: not true polymers, but formed by dehydration synthesis from several smaller molecules 3. Proteins: polymer made from amino acids 4. Nucleic Acids: polymer made of nucleotides 1. Carbohydrates are a class of molecule ranging from small sugar molecules (monosaccharides & disaccharides) to large sugar molecules (polysaccharides). Carbohydrates consist of carbon (C), hydrogen (H), and oxygen (O) in a ratio of CH2O. Carbohydrates are a major source of energy. Glucose: C6H12O6. Sucrose: C12H22O11. Why is the ratio of CH2O violated in sucrose? Polysaccharides are polymers made of hundreds to thousands of monosaccharides which are linked through dehydration synthesis. Starch, Glycogen and Cellulose are three functionally different polysaccharides which are made from the same monosaccharide, glucose. Polysaccharide Structure and Function * Some animals, such as cows and termites, can derive nutrition from cellulose because they have cellulose-hydrolyzing microorganisms inhabiting their digestive tracts. 2.
Lipids are not true polymers, but are grouped together because they are hydrophobic (water-fearing). Lipids consist mainly of carbon (C) and hydrogen (H) which are linked by nonpolar covalent bonds. Lipids store energy (triglycerides) and provide structure (phospholipids). Fatty Acid Glycerol Triglyceride Dehydration synthesis links fatty acids to a glycerol molecule, forming a triglyceride, a fat molecule. The fatty acids of unsaturated fats (plant oils) contain double bonds which prevent these fats from solidifying at room temperature (liquid). Saturated fats (animal fats) lack double bonds; therefore these fats are solid at room temperature. Phospholipids are a major component of cell membranes and a very important biological molecule. They are structurally similar to fats, but contain phosphorus (P) and only two fatty acid tails. Waxes consist of one fatty acid linked to an alcohol and are very hydrophobic. Steroids are lipids whose carbon skeleton is bent to form four fused rings. How does a lipid’s structure affect its functions? 3. Proteins are essential to the structure and activities of all life. Proteins consist mainly of carbon (C), hydrogen (H), oxygen (O) and nitrogen (N). The diversity of proteins is based on the specific and unique arrangement of a universal set of 20 amino acids. Cells link amino acids together by dehydration synthesis, forming peptide bonds. Many amino acids linked together form a polypeptide chain, which is the primary structure of a protein. Secondary structure is the coiling (alpha helix) or folding (pleated sheet) of the polypeptide. Tertiary structure is the overall three-dimensional shape of the protein, which can be described as globular or fibrous. Quaternary structure results from bonding interactions between multiple polypeptide chains. If a protein loses its shape, then it will not be able to function properly (denatured). 4.
Nucleic acids are polymers that serve as the blueprints for proteins. Nucleic acids consist mainly of carbon (C), hydrogen (H), oxygen (O), nitrogen (N) and phosphorus (P). There are two types of nucleic acids: DNA (deoxyribonucleic acid), which is the genetic material (genes), and RNA (ribonucleic acid), an intermediary molecule that makes proteins. The monomers that make up nucleic acids are called nucleotides. Each nucleotide is composed of three subunits: sugar, phosphate, and a nitrogenous base. Both DNA and RNA polynucleotides form through dehydration synthesis. DNA’s molecular structure is a double helix (twisted ladder). Two DNA polynucleotides wrap around each other and are held together by hydrogen bonds between their paired bases. DNA base pairing: “A” always pairs with “T” and “C” always pairs with “G”. RNA’s molecular structure is a single strand.
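The base-pairing rule above is simple enough to sketch in code; this illustrative snippet (the names are our own, not from the original notes) derives one strand of the double helix from the other:

```python
# Watson-Crick base pairing, as described in the notes: A-T and C-G
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the strand that would hydrogen-bond to `strand` in a DNA double helix."""
    return "".join(PAIRS[base] for base in strand)

print(complement("ATCG"))  # TAGC
```

Applying `complement` twice returns the original strand, reflecting the symmetry of the pairing rule.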
MIT Media Lab researchers have taken a step toward a system that allows underwater and airborne sensors to share data directly. This is achieved with an underwater transmitter that projects a sonar signal at the water surface, producing tiny vibrations that encode the 1s and 0s being transmitted; a highly sensitive receiver above the surface reads these minute disturbances and decodes the signal. Fadel Adib and Francesco Tonolini of the MIT Media Lab developed this way of connecting these seemingly dissonant mediums, called translational acoustic-RF communication (TARF): sound waves generated underwater create faint ripples on the water’s surface, which radar devices aboard aircraft read to receive messages. Adib states that although TARF is still in its early stages, it is a milestone that opens new capabilities in air-water communications. TARF relies on an underwater acoustic transmitter that sends sonar signals using a standard acoustic speaker. The signals travel as pressure waves of different frequencies corresponding to different data bits, and when a signal hits the surface it causes tiny ripples in the water. The system transmits on multiple frequencies at the same time to achieve higher data rates, so researchers can send a large amount of data at once. For the radar to detect the incoming sonar signals, the researchers employed a technology that detects reflections in an environment and organizes them by distance and power. TARF accurately decodes data at hundreds of bits per second, similar to standard data rates for underwater communications, even in the presence of some disturbances. Adib also notes that while the system cannot decode signals when waves are higher than 16 centimetres, it works efficiently on calm days and copes with smaller water disturbances.
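The mapping of data bits to sonar frequencies described here resembles frequency-shift keying. The sketch below is a loose illustration of that idea, not MIT’s actual implementation; the sample rate, bit duration and frequencies are invented for the example:

```python
import numpy as np

FS = 48_000            # samples per second (assumed)
BIT_TIME = 0.01        # seconds of tone per bit (assumed)
F0, F1 = 100.0, 200.0  # acoustic frequencies standing in for bit 0 / bit 1 (assumed)

def modulate(bits):
    """Emit one sine tone per bit: F0 for a 0, F1 for a 1."""
    t = np.arange(int(FS * BIT_TIME)) / FS
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

wave = modulate([1, 0, 1, 1])
print(wave.shape)  # (1920,) -- 4 bits x 480 samples per bit
```

In TARF the analogous pressure waves travel to the surface, where each frequency produces a distinguishable ripple pattern for the airborne radar to decode.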
Aaron Schulman says that TARF is the first system able to efficiently receive underwater acoustic transmissions from the air using radar, and he expects this new radar-acoustic technology to benefit researchers in fields that depend on underwater acoustics and to inspire the scientific community to investigate the technology further. He also hopes that the researchers will further develop the system so that an airborne drone or plane flying across a water’s surface could constantly receive incoming sonar signals and decode them in a very short time.
Creating color RGB images

RGB images can be produced using matplotlib’s ability to make three-color images. In general, an RGB image is an MxNx3 array, where M is the y-dimension, N is the x-dimension, and the length-3 layer represents red, green, and blue, respectively. A fourth layer representing the alpha (opacity) value can be specified. Matplotlib has several tools for manipulating these colors. Astropy’s visualization tools can be used to change the stretch and scaling of the individual layers of the RGB image. Each layer must be on a scale of 0-1 for floats (or 0-255 for integers); values outside that range will be clipped.

Creating color RGB images using the Lupton et al. (2004) scheme

Lupton et al. (2004) describe an “optimal” algorithm for producing red-green-blue composite images from three separate high-dynamic-range arrays. This method is implemented in make_lupton_rgb as a convenience wrapper function and an associated set of classes to provide alternate scalings. The SDSS SkyServer color images were made using a variation on this technique. To generate a color PNG file with the default (arcsinh) scaling:

    import numpy as np
    import matplotlib.pyplot as plt
    from astropy.visualization import make_lupton_rgb

    image_r = np.random.random((100, 100))
    image_g = np.random.random((100, 100))
    image_b = np.random.random((100, 100))
    image = make_lupton_rgb(image_r, image_g, image_b, stretch=0.5)
    plt.imshow(image)

This method requires that the three images be aligned and have the same pixel scale and size. Changing minimum will change the black level, while Q will change how the values between black and white are scaled. For a more in-depth example, download the g, r, and i SDSS frames (they will serve as the blue, green and red channels respectively) of the area around the Hickson 88 group, try the example below, and compare it with Figure 1 of Lupton et al.
(2004):

    import matplotlib.pyplot as plt
    from astropy.visualization import make_lupton_rgb
    from astropy.io import fits
    from astropy.utils.data import get_pkg_data_filename

    # Read in the three images downloaded from here:
    g_name = get_pkg_data_filename('visualization/reprojected_sdss_g.fits.bz2')
    r_name = get_pkg_data_filename('visualization/reprojected_sdss_r.fits.bz2')
    i_name = get_pkg_data_filename('visualization/reprojected_sdss_i.fits.bz2')
    g = fits.open(g_name)[0].data
    r = fits.open(r_name)[0].data
    i = fits.open(i_name)[0].data

    rgb_default = make_lupton_rgb(i, r, g, filename="ngc6976-default.jpeg")
    plt.imshow(rgb_default, origin='lower')

The image above was generated with the default parameters. However, using a different scaling, e.g. Q=10, stretch=0.5, faint features of the galaxies show up. Compare with Fig. 1 of Lupton et al. (2004) or the SDSS SkyServer image.

    rgb = make_lupton_rgb(i, r, g, Q=10, stretch=0.5, filename="ngc6976.jpeg")
    plt.imshow(rgb, origin='lower')
Our S.O.S series provides help, tips, and tricks for integrating DE media into your curriculum. Leave a comment and let us know how you’ll use this strategy in your class. Have an idea for a strategy? Share it with us by completing this form and we’ll feature you! Save the Last Word for Me Save the Last Word for Me is a discussion strategy that requires all students to participate as active speakers and listeners. Its clearly defined structure helps shy students share their ideas and ensures that frequent speakers practice being quiet. It can be used as a way to help students debrief a video or reading passage. Materials: Discovery Education media or text, index cards - Identify a Discovery Education video or reading passage excerpt that will serve as the catalyst for this activity. - Have students view or read the selected text. - Ask students to highlight three sentences that stood out for them and write each sentence on the front of an index card. - On the back of the index card, they should write a few sentences explaining why they chose that quote: what it meant to them, reminded them of, etc. They may connect it to something that happened to them in their own life or in history. - Divide the students into groups of three, and then have them take A, B, and C roles within the group. - Invite the As to read one of their chosen quotations (front of card only). Ask students B and C to discuss the quote. What do they think it means? Why do they think these words might be important? To whom would they be important? - After 2-3 minutes, have the A students read the back of their cards (or explain why they picked the quotation), thus having the last word. - Repeat the process with the Bs sharing, and then the Cs. This activity gives each student an opportunity to discuss his or her viewpoint in a small and safe group.
It is a good exercise in learning how to politely disagree with partners (if viewpoints differ) and to be able to voice an opinion after a discussion has happened. All members of the group feel validated and multiple viewpoints are shared. - This same process can be used with images instead of quotations. You could give students a collection of Discovery Education posters, paintings, and photographs from the time period you are studying, and then ask students to select three images that stand out to them. On the back of an index card, students explain why they selected the image and what they think it represents or why it is important. - Ask students to think about three probing questions the text raises for them. (A probing question is interpretive and evaluative. It can be discussed and has no clearly defined right answer, as opposed to clarifying questions which are typically factual in nature.) Students answer the questions on the back of their cards. In small groups, students select one of their questions for the other two students to discuss.
Making up one-fifth of the population, 15-24 year-olds carry with them India’s legacy as they drive the fruit of its political, economic, social and business decisions sanctioned by the authoritative heads at the centre. Bearing the burden of a densely populated country like India is no small task, and drug abuse does nothing to lighten the load for India’s youth. The brain is intricately involved in any addiction, and for many teenagers, a susceptibility to addiction is present before they ever begin using substances. For others, repeated drug abuse creates significant changes in the brain, making them dependent on a rigid reinforcement system of abusing drugs every time they crave feeling better. For most, it is the complex interaction between genetic and environmental factors with the abuse of addictive substances that paves the downward spiral of physical and emotional dependency. The susceptibility of young people to developing addictions more rapidly has to do with the fact that the brain is immature and not fully developed until around age 25. Just recently, researchers have been able to determine which specific areas of an adolescent’s underdeveloped brain are implicated in their vulnerability to addiction. Scientists have recently pinpointed a specific protein in the brain called eIF2 that accounts for adolescents’ hypersensitivity to addictive drugs. Research support involved two studies with mice, along with evidence of generalization from brain imaging in human addicts. As youngsters become more independent, parents’ influence often diminishes, and as part of life’s natural progression, teenagers are influenced more and more by their peers. As might be guessed, one of the most powerful tools used to sway young people towards drug addiction is peer pressure, and peer influences in the area of drug abuse can begin as early as middle school.
Teens who abuse drugs are more likely to struggle with addiction later in life and risk permanent and irreversible brain damage. Some symptoms include a change in peer group, carelessness with grooming, decline in academic performance, missing classes or skipping school, loss of interest in favourite activities, trouble in school or with the law, changes in eating or sleeping habits, and deteriorating relationships with family members and friends. Creating healthy and attractive alternatives to drug abuse can curb the number of first-time users. The United Nations Office for Drug Control and Crime Prevention has come out with a handbook of basic prevention ideas to help communities prevent drug abuse. With this knowledge in mind, we can lessen the dependency that our society has on drugs and establish positive mental health environments in order to make changes that will lead to healthier and safer lives for the youth of India. About the Author: Didhiti Ghosh is a psychologist, journalist, script-writer, professor and a certified translator-interpreter of the Spanish language. She has been involved in organizing youth mental health & anti-drug abuse campaigns in and around Kolkata in collaboration with educational institutions in Bengal and The National Institute of Mental Health and Neurosciences, Bangalore.
It has been suggested that our ‘linguistic competence’ (Chomsky, 1965) consists simply of the ability to construct ‘well-formed sentences’. The sociolinguist Dell Hymes (1979) considered this notion to be far too narrow, and proposed the term ‘communicative competence’ to account for speakers’ ability to use language appropriately. Communicative competence lets us know when to speak and when not to speak, how to take turns in conversations and how to start and end them, and how to involve and exclude people. We also know how to listen. In some conversations, the information content may seem very slight because the speaker’s main purpose is to convey a message such as ‘I want to be sociable with you’. We are sure that at times you have noticed that speakers give ‘hidden messages’ such as ‘I find you irritating’ and ‘I have more important things to do’ without ever actually using those words. These interpersonal (Halliday, 1985) elements of the dialogue may help to reinforce the sense that, in constructing a dialogue, speakers are working together. Followers of Halliday’s approach (Thompson, 1996) hold the view that language can only sensibly be studied as a way of making meaning, and meaning depends on the context (including who is speaking and who is there to listen) in which the words are written or spoken. All speech is accompanied by additional features, which may include: - vocal features such as pitch, loudness, voice quality (e.g. whispering, groaning), pace and rhythm; - gestures (including pointing to elements of the physical environment), eye contact and what has become known as ‘body language’; - ‘non-linguistic’ sounds such as sighing, ‘tutting’, exclamations like ‘Oh!’ and ‘Ah!’, and even screams. These paralinguistic features are all ways of adding to or intensifying meaning, especially the emotional, or affective, content of what is being said. To avoid misunderstandings (or worse), all these behaviours have to be learnt and used appropriately.
Activity: A Mouthful of Sky Watch the video sequence from the Indian soap opera, A Mouthful of Sky below. As you watch, list some of the skills that the speakers use in order to make the conversation succeed. Note the ways in which the characters use voice, gestures and other paralinguistic strategies to develop their meanings (including interpersonal meanings). Are there any ways in which this scripted scene might differ from a real-life discussion? Transcript: view document Among (many) other things, you probably noted that the characters use eye contact (or lack of it), physical contact and distance, smiles and other gestures to relate to each other. They use voice quality to convey moods and personality types: for example, a deep voice and slow delivery denote seriousness, while Shama’s voice, with its wild leaps of pitch and its laughing quality, implies frivolousness. The way that characters use their voice appears to be related to their gender (this raises issues around the image of women projected by this program). However, the difference between male and female voices depends on much more than pitch. Because they are following a script, the characters do not have to use strategies (eye contact, gesture or intonation) to determine whose turn it is to speak. There are no interruptions, points where two people are speaking at once or uncomfortable silences, although silences (as short pauses) are used to increase the dramatic or humorous effect. We understand smiles, gestures and other paralinguistic behaviour because they are used consistently. Some of the rules relating to these features have much in common with the rules of language use. Just as we know when we can politely enter a dialogue, so we know (within the rules of the culture we belong to) how close to somebody we can politely stand: in fact in both situations we may talk about ‘not stepping on the other person’s toes’.
In the video sequence above, the physical context of the building is constantly referred to, but the characters’ knowledge of each other, of their shared history and of what they have planned and agreed are just as much a part of the context: they must take these factors into account before they can reach complete understanding. Chomsky, N. (1965) Aspects of the Theory of Syntax, Cambridge, MIT Press. Halliday, M.A.K. (1985) An Introduction to Functional Grammar, London, Arnold. Hymes, D. (1979) ‘On communicative competence’ in Brumfit, C. and Johnson, K. (eds) The Communicative Approach to Language Teaching, Oxford, Oxford University Press. Thompson, G. (1996) Introducing Functional Grammar, London, Arnold. [Information last accessed: 27 July 2017] This article is adapted from ‘Knowledge in everyday life’. An OpenLearn (http://www.open.edu/openlearn/) chunk reworked by permission of The Open University copyright © 2016 – made available under the terms of the Creative Commons Licence v4.0 http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_GB. As such, it is also made available under the same licence agreement.
Scientists have demonstrated how new satellite technology can be used to count whales, and ultimately estimate their population size. Using Very High Resolution (VHR) satellite imagery, alongside image processing software, they were able to automatically detect and count whales breeding in part of the Golfo Nuevo, Peninsula Valdes in Argentina. The new method, published this week in the journal PLoS ONE, could revolutionise how whale population size is estimated. Marine mammals are extremely difficult to count on a large scale and traditional methods, such as counting from platforms or land, can be costly and inefficient. Lead author Peter Fretwell from the British Antarctic Survey (BAS), which is funded by the UK's Natural Environment Research Council (NERC), explains: "This is a proof of concept study that proves whales can be identified and counted by satellite. Whale populations have always been difficult to assess; traditional means of counting them are localized, expensive and lack accuracy. The ability to count whales automatically, over large areas in a cost effective way will be of great benefit to conservation efforts for this and potentially other whale species." Previously, satellites have provided limited success in counting whales but their accuracy has improved in recent years. The BAS team used a single WorldView2 satellite image of a bay where southern right whales gather to calve and mate. Driven to near extinction, these whales have made a limited recovery following the end of whaling. In recent years, however, many deaths have been seen on their nursery grounds at Peninsula Valdes. Their population size is now unknown but with this sharp increase in calf mortality, estimates are needed. The enclosed bays in this region contain calm, shallow waters which increase the chance of spotting the whales from space.
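The study’s semi-automated detection rests on brightness thresholding of a single spectral band. As a rough, hypothetical sketch of that idea (synthetic data and an invented threshold, not the study’s actual values):

```python
import numpy as np

rng = np.random.default_rng(0)
band = rng.random((50, 50)) * 0.5   # stand-in for a single-band image (dim background)
band[10:12, 20:23] = 0.95           # a bright patch standing in for a whale

threshold = 0.9                     # user-chosen cutoff (the semi-automated step)
mask = band > threshold             # candidate whale pixels
print(int(mask.sum()))              # 6 pixels flagged
```

In the real workflow the analyst tunes the threshold, and clusters of flagged pixels of the right size and shape, not single pixels, are counted as candidate whales.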
Three main criteria were used to identify whales: objects visible in the image should be the right size and shape; they should be in the right place (where whales would be expected to be) and there should be no (or few) other types of objects that could be mistaken as whales. Whales in the image were manually identified and counted, finding 55 probable whales, 23 possible whales and 13 sub-surface features. Several automated methods were then tested against these numbers. A 'thresholding' of the Coastal Band of the WorldView2 image gave the greatest accuracy. This part of the image uses light from the far blue end of the spectrum which penetrates the water column deeper and allows us to see more whales. This technique found 89% of probable whales identified in the manual count. This is a semi-automated technique that needs some user input to identify the best threshold. Future satellite platforms will provide even higher quality imagery, and WorldView3 is planned to be launched this year. This will allow for greater confidence in identifying whales and differentiating mother and calf pairs. Such technological advancements may also allow scientists to apply this method to other whale species. Issued by the British Antarctic Survey Press Office. Rachel Law, Tel: +44 (0)1223 221437; email: [email protected] Satellite images and photos of whales are available from the BAS Press Office. Peter Fretwell, British Antarctic Survey, Tel: +44 (0) 1223 22145; mobile: +44 (0) 7903 208132 email: [email protected] Notes for editors The paper: Whales from space: counting southern right whales by satellite by Peter T Fretwell, Iain J Staniland and Jaume Forcada is published in PLOS ONE on Wednesday 12 February 2014. View the paper at http://dx. Images are available on request. Southern right whales The southern right whale (Eubalaena australis) is a baleen whale with a circumpolar distribution in the Southern Hemisphere.
An adult female can reach a maximum size of 15m and can weigh up to 47 tonnes. Southern right whales were hunted extensively from the 17th through to the 20th century, causing their numbers to drop from an estimated 55,000-70,000 to around 300 by the 1920s. The population appears to have grown strongly since the cessation of whaling but is still below 15% of historical estimates. British Antarctic Survey (BAS), an institute of the Natural Environment Research Council (NERC), delivers and enables world-leading interdisciplinary research in the Polar Regions. Its skilled science and support staff based in Cambridge, Antarctica and the Arctic, work together to deliver research that uses the Polar Regions to advance our understanding of Earth as a sustainable planet. Through its extensive logistic capability and know-how BAS facilitates access for the British and international science community to the UK polar research operation. Numerous national and international collaborations, combined with an excellent infrastructure help sustain a world leading position for the UK in Antarctic affairs. For more information visit http://www.
There is growing evidence that physical activity enhances brain function and improves thinking and reasoning skills for children – and adults. Some studies have also suggested that children perform better in school when they have planned periods of physical activity. This idea stands in contrast to the way pressure to provide more time for academics has eroded opportunities for physical exercise during the school day. At a time when gym and recess time have been eliminated from many school programs and after-school sports and playtime have given way to academic support and enrichment, more evidence has been needed to shape school policy. The researchers point out that when children participate in sports, they often have better behavior within the classroom and are better able to pay attention to academics. Past studies have suggested that as physical activity increases, school performance and performance on the job improve; but some studies have been inconclusive. With this in mind, a team of researchers recently reviewed 14 studies, 12 from the US, one from Canada and one from South Africa. All looked at the relationship between school performance and physical activity. The researchers found evidence that physical activity improves academics. They noted, "Evidence from the studies included in the present systematic review ... suggests that there is a significant positive relationship between physical activity and academic performance..." The researchers offered several possible explanations for the positive effect. It may be that activity increases blood flow and oxygen delivery to the brain. It could be that it increases the levels of norepinephrine and endorphins, which decrease stress and improve mood, and that the increase in growth factors caused by exercise helps create new nerve cells and supports neurologic development.
They also point out that when children participate in sports, they often have better behavior within the classroom and are better able to pay attention to academics. This report adds to the growing body of literature that supports the need for an appropriate balance of physical activity and study in the school days of our children. It can be found in the Archives of Pediatrics and Adolescent Medicine.
Keratoconus occurs in about one out of every 1,000 individuals. In fact, as we develop better screening tools, it is likely that many more are affected. KC is caused by weakening of the cornea, the clear lens that is the front of your eye (like the crystal on a watch). As a result, the cornea bulges out of its smooth, dome-like structure, and assumes a more conical and irregular configuration. Because of this change in shape, the cornea loses its ability to form a clear image in the eye. Furthermore, this irregularity of the corneal optics and the resulting visual perturbations progress over time. Why is this? Optically, in the keratoconic cornea, light is not completely focused because of the corneal distortion. This causes scattering of light rays and the formation of “visual static”, much like the static that you may find on a TV. This distortion, and consequent visual static, can increase over time, with a decrease in vision and visual symptoms such as light glare and halo as well as double or triple vision. Like static on a TV, non-focused light in KC causes “visual static” which causes glare and multiple vision. The presentation and impact of keratoconus can vary widely from person to person. Usually, it is first detected in a patient’s teens or twenties. In its earliest stages, keratoconus often masquerades as astigmatism or nearsightedness, two of the more common eye conditions. Often, it is only after numerous unsuccessful attempts at vision correction with glasses or soft contact lenses that your doctor may look elsewhere for a diagnosis. What Causes Keratoconus? The actual cause of keratoconus is unclear. It may have a genetic, inheritable component. However, in many patients there are no family members with the disease. Similarly, most children of KC patients do not have keratoconus, but they should be checked in early adolescence for signs of KC because early treatment can prevent progression of the condition over time.
Keratoconus is typified by corneal thinning and biomechanical instability. This may be caused by abnormalities in the normal collagen structure of the cornea. Collagen is the main structural component of the cornea. Collagen is a molecule that is typically very strong; for example, it makes up most of the structure of the tendons and ligaments of your muscles and bones. Weakness of the corneal structure causes keratoconus and its progression over time. Collagen lamellae (“pancakes”) create the corneal superstructure. The normal cornea is made of pancakes (or lamellae) of collagen tissue in a complex array. In keratoconus, the collagen lamellar architecture may be abnormal. A complex arrangement of these pancakes and the extracellular matrix of biologic sugars maintains optical shape and structural integrity in the normal cornea. Interweaving of the collagen lamellae and linkages between molecules give the cornea its strength. Around the edge of the cornea, the collagen bands change to a circular belt, providing additional support to the round corneal architecture. Finally, transverse-oriented lamellae insert into the front layers of the cornea (Bowman’s layer), acting as roots to further support corneal structure. This complex micro-organization is altered in keratoconic corneas. In KC, the collagen fibrils are unevenly distributed, with rearrangement of their normal conformation. The keratoconic cone itself is most affected, with loss and distortion of collagen fibers. In addition, KC corneas show less interweaving of the collagen pancakes and a decrease in the collagen anchors supporting the corneal structural shape. These changes may allow the collagen pancakes to split and slide on one another and exacerbate KC progression. Because of this, it is important that you do not rub your eyes, in order to avoid actual mechanical shearing of the collagen pancakes. What causes the changes that we see in the corneal structure in keratoconus?
There may be a primary biochemical event that triggers these changes, and, in some cases, these in turn may have a genetic predisposition. There are enzymatic changes associated with KC. In particular, there may be an increase in collagen and extracellular matrix breakdown caused by enzymes such as matrix metalloproteinases (MMP) and others. In addition, enzymes such as lysyl oxidase (LOX), which help the formation of mature collagen by creating natural crosslinks, may be low in keratoconus. Assessing Your Keratoconus In order to fully assess your keratoconus, understand the likely future course of the disease, and make appropriate treatment recommendations, an extensive eye examination is performed along with several specialized tests to fully analyze your problem. These tests also give you a complete baseline for your ongoing care in the future. There are a number of goals of the comprehensive keratoconus evaluation at the CLEI Center for Keratoconus. First, we want to fully assess and define your keratoconus in order to monitor progression over time. Second, this testing will allow us to best recommend a course of treatment to optimize your visual function. The CLEI Center for Keratoconus incorporates all of the latest diagnostic technologies to assess your KC and determine the proper course of treatment. Some of these diagnostic tools include: Computerized Corneal Topography Analysis: Corneal topography instruments assess your cornea’s optical surface. These are corneal maps that can assess many indices of your individual corneal shape and structure. We use a number of instruments, each of which may give different clues to the corneal shape, including the Pentacam, Topolyzer, and EyeSys units. Corneal topography is analogous to looking at a mountain range from a satellite. A normal cornea is green (like a gentle slope). Red is a higher point (like a mountain), and can indicate keratoconus. Blue is a lower point (like a lake).
In some cases of keratoconus, your corneal topography map can be used to help program a laser for topography-guided PRK treatments. The keratoconic “cone” is seen as the red elevation on the topography map. Corneal Optical Coherence Tomography (OCT): Optical Coherence Tomography is analogous to an MRI of your cornea. It gives cross-sectional, magnified pictures of your cornea from which we can study all of the corneal layers. OCT allows us to map your corneal thickness in detail. Wavefront Analysis: Wavefront analysis assesses the eye’s optical system and aberration profile. Because of the optical irregularities of the keratoconic cornea, light is not completely focused. This causes scattering of light rays and the formation of “visual static”, much like the static that you may find on a TV. Wavefront analysis defines the particular types of static that are present in the keratoconic cornea. It is analogous to using a computer to check for any static on your TV. Corneal Biomechanics Measurements: The Ocular Response Analyzer (ORA) is a new instrument which measures the elasticity and flexibility of the cornea and is the first true clinical measurement of corneal biomechanics in KC. This may allow for better diagnosis of early keratoconus, help to predict its possible progression, and allow for monitoring of changes in the keratoconic cornea. Corneal thickness (ultrasonic and optical pachymetry) measurements detect the degree to which a keratoconic cornea is thinned. In KC, the cornea is thinner and weaker than normal. Changes over time can be monitored by periodic assessment of the corneal thickness, measured both by ultrasound and by optical imaging on the Pentacam unit. Living with Keratoconus There are some general precautions that a patient who has keratoconus can take to help decrease the chance of disease progression. 1) Don’t rub your eyes. This is probably the most important suggestion. Remember that KC is a problem of corneal mechanics and strength.
The cornea gets its strength from the linkages of the collagen pancakes to one another. Eye rubbing may exacerbate slipping of the collagen pancakes of the cornea and possibly cause further destabilization of the corneal structure. It can also irritate the eyes, causing inflammation that is not good for the keratoconic cornea. 2) Control eye allergies. Ocular allergy can cause inflammation and also encourage eye rubbing. Therefore, use medications and drops as prescribed by your doctor to minimize symptoms of eye allergy. 3) Optimize your contact lens fit. The impact of contact lens wear on the progression of keratoconus is unclear. Contact lenses are the mainstay of keratoconus treatment in many cases. Making sure that the contact lens fit is the best possible will avoid problems secondary to irritation, inflammation, or mechanical trauma to the cornea. 4) Wear sunglasses in bright sun. Ultraviolet light may increase the formation of inflammatory molecules which can further damage the corneal structure. So, wear UV-protecting sunglasses when you are going to get a lot of sun exposure. 5) Eat a good diet. Diets high in antioxidants (found in green, leafy vegetables and colored vegetables such as tomato and pepper) may combat some of the inflammatory mediators that can exacerbate KC progression. Antioxidant vitamin and omega-3 supplements may also be helpful.
A Look at Formative and Summative Assessments in Montessori Student assessments in Montessori schools are different from the assessments conducted in most traditional classrooms. However, contrary to a common myth about Montessori education, students do undergo assessments. The difference is that assessments are viewed as a tool for understanding where students are in their learning and where they should be going, rather than the strict, pass-or-fail process that is typical in traditional classrooms. Like other types of educational approaches, Montessori schools use both formative and summative assessments. Here is what you need to know about both of these approaches. Formative assessments are the most frequently used kinds of assessments. They are used to evaluate a student’s progress at various stages of learning as they move towards a larger curricular goal. Through formative assessments, teachers can determine how well students are learning various concepts and skills through low-stakes assignments. In Montessori schools, formative assessments are constantly ongoing, and teachers perform them daily to determine each individual student’s mastery of ideas and skills. These assessments let teachers know when a student is ready to move forward with new material or if the current material needs to be presented in a different way to increase competency. Summative assessments are designed to evaluate a student’s success at reaching a specific educational milestone, such as completing a unit of study. In traditional classrooms, these assessments typically include things like tests, quizzes, and papers that determine if students can move on to a new grade. In Montessori classrooms, summative assessments are simply another tool teachers have to determine where a student is in his or her education. These assessments don’t determine if students will pass or fail but rather help teachers understand what their students need. 
Montessori education includes both accountability for students as well as permission for students to learn at their own paces without the stigmas associated with traditional classrooms. At The Montessori School, our classrooms empower students to embrace curiosity and be active learners. Are you interested in learning more? To reach our school in Allen, please call (972) 908-5055. To contact our North Dallas location, dial (469) 685-1732.
Astronomers have produced the largest, most comprehensive ‘history book’ of galaxies in the Universe, using 16 years’ worth of observations from the NASA/ESA Hubble Space Telescope. The endeavor is called the Hubble Legacy Field. The image, a combination of nearly 7,500 separate Hubble exposures, contains roughly 265,000 galaxies and stretches back through 13.3 billion years of time to just 500 million years after the Universe’s birth in the Big Bang. The Hubble Legacy Field combines observations taken by several Hubble deep-field surveys. In 1995, the Hubble Deep Field captured several thousand previously unseen galaxies. The subsequent Hubble Ultra Deep Field from 2004 revealed nearly 10,000 galaxies in a single image. The 2012 Hubble eXtreme Deep Field was assembled by combining ten years of Hubble observations taken of a patch of sky within the original Hubble Ultra Deep Field. The new set of Hubble images, created from nearly 7,500 individual exposures, is the first in a series of Hubble Legacy Field images.
The Yin and Yang of Korean Vowel Harmony In my first post about the Hangul writing system, I touched on the history of Hangul and the linguistic motivation for the characters. For example, the Hangul consonant characters were designed to indicate the point of articulation, as shown in the image below (from The World’s Writing Systems): While I certainly appreciate the linguistic basis for the consonant characters, I find the vowel characters and the mystic symbology of the Korean sound system downright fascinating. The Korean vowel characters combine the core elements for the Earth (ㅡ), the Sun in the Sky (ㆍ), and Man/Human (ㅣ) into sets of dark (or “yin”) vowels and bright (or “yang”) vowels. And the yin and yang must remain in harmony. In bright/yang vowel characters (ㅗ, ㅏ), the Sun in the Sky is above the Earth and to the right of Man (shining in his face). In dark/yin vowel characters (ㅜ, ㅓ), the sun is below the Earth’s horizon or behind Man. Interestingly, it was pointed out to me (thanks Dan!) that in the consonant characters, Man (with his tongue) is facing to the left but in the vowel characters, Man is facing to the right; I am curious about the historical reason for this difference in orientation. The bright/yang and dark/yin symbology is used consistently in writing the characters for other Korean vowels and diphthongs. For example, yang vowels ㅗ [o] and ㅏ [a] combine to form the diphthong ㅘ [wa], and yin vowels ㅜ [u] and ㅓ [ʌ] combine to form ㅝ [wʌ]. In addition, the vowel [i] represented by the Man symbol ㅣ is considered neutral (or “mediating”) and can be present with either yin or yang vowels, such as in ㅚ [we] and ㅟ [wi]. But no yin vowel appears in a diphthong character with a yang vowel.
(Diphthong, meaning “two sounds,” is a sound composed of two basic vowels; possibly because it is a mouthful to say, diphthong may be the only common modern English word derived from the Ancient Greek root φθόγγος / phthóngos, meaning “sound.”) The yin and yang harmony can also be seen in the order of vowels in the original Hunminjeongeum document by King Sejong. The vowels are listed first with the three primary elements (Sun/Sky, Earth, Man), then two yang vowels, then two yin, two yang, two yin (the final four are simply the initial 2 yang and 2 yin vowels with an extra Sun/Sky stroke representing an initial iotized [y] sound before the vowel): ㆍ ㅡ ㅣ ㅗ ㅏ ㅜ ㅓ ㅛ ㅑ ㅠ ㅕ The interesting yin and yang system doesn’t stop with the characters: it also extends to the phonology and morphology of Korean. At the time of the creation of the Hangul system, the Korean language had strong vowel harmony, which favored the construction of words and phrases consisting of vowels that “harmonized” with each other. If a word root had bright/yang vowels, then it would take suffixes containing bright/yang (or neutral) vowels; if the root had yin vowels, it would take yin or neutral suffixes. Vowel harmony also plays a role in Korean vocabulary that conveys sound symbolism, such as onomatopoeia (words imitating actual sounds), phenomimes (words that describe external phenomena), and psychomimes (words that describe psychological states). There are pairs of Korean words, one with yin vowels and one with yang, that have the same (dictionary) definition but that have different connotations. The yang/bright vowel word in the pair typically conveys meanings with smallness or brightness or shallowness. The yin/dark vowel word in contrast conveys depth or size or darkness. For example, there are two very similar words for the adjective “red” with very subtle differences in usage. 붉은 (pulgeun) uses the yin/dark vowel ㅜ and means a natural red, such as red lips.
빨간 (ppalgan) has the yang/bright vowel ㅏ and means a brighter, artificial red, such as lips with red lipstick. (For more on this word pair, this video by Ask Hyojin is interesting.) Another example, from the Organic Korean site, is the word pair for the sound of falling into water. 퐁당 (pongdang) with the yang/bright vowels is used for small objects (such as stones) falling in water, and 풍덩 (pungdeong) with the yin/dark vowels is used when large objects (such as people) fall into water. Vowel harmony is weaker in Modern Korean than when Hangul was created, but many remnants of the early harmony live on. The inevitable vowel shift over 650 years has also weakened the connection between King Sejong’s original written vowel system and modern spoken Korean; for example, the original Sun/Sky character ㆍ stood for a vowel [ə] that is no longer present in Korean, so the character is no longer used. (The Wikipedia Hangul article contains a lot of interesting details about changes in Hangul since its creation.) Some original diphthongs have also shifted to simple vowels, including ㅐ [ae] from ㅏ + ㅣ, and ㅔ [e] from ㅓ + ㅣ. In addition, the initial Hangul orthography included pitch accents and vowel length markers, both of which have been dropped from Hangul. Nevertheless, despite the underlying language changes over 650 years, the logical design of (and subsequent sensible updates to) the Hangul writing system make it a pleasure to work with as a Korean learner (especially compared to the mess of ambiguity in the English writing system).
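The diphthong-formation rule described above (yang combines with yang or neutral, yin with yin or neutral, never yang with yin) is mechanical enough to sketch in code. The vowel groupings below follow the sets discussed in this post, but the function name and the exact set membership are my own illustration, not any standard library:

```python
# Hand-made yang/yin/neutral grouping of basic Hangul vowel jamo,
# based on the bright/dark symbology described above.
YANG = set("ㅗㅏㅛㅑ")   # bright vowels: Sun above the Earth / to the right of Man
YIN = set("ㅜㅓㅠㅕ")    # dark vowels: Sun below the horizon / behind Man
NEUTRAL = set("ㅣㅡ")    # mediating vowels, free to combine with either group

def harmonize(vowels):
    """Return True if these vowels could co-occur under strict harmony:
    yang may mix with yang or neutral, yin with yin or neutral,
    but yang and yin never mix."""
    has_yang = any(v in YANG for v in vowels)
    has_yin = any(v in YIN for v in vowels)
    return not (has_yang and has_yin)

print(harmonize("ㅗㅏ"))  # yang + yang, as in the diphthong ㅘ → True
print(harmonize("ㅗㅓ"))  # yang + yin never combine → False
print(harmonize("ㅗㅣ"))  # yang + neutral, as in ㅚ → True
```

Strict Middle Korean harmony was richer than this toy model (suffix alternation, subtleties of the neutral vowels), so treat it only as a sketch of the diphthong-character rule.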
By Patricia Reaney LONDON — Sea levels would have risen higher and ocean temperatures would have been warmer in the 20th century if the Krakatoa volcano in Indonesia had not erupted in 1883, scientists said on Wednesday. The impact of the eruption that spewed molten rock and sulfate aerosols into the atmosphere was felt for decades — much longer than previously thought. “It appears as though with a very large eruption the effect can last for many decades and possibly as long as a century,” said Peter Gleckler, a climatologist at the Lawrence Livermore National Laboratory in California. Sea levels rise when ocean temperatures are warmer and recede when they cool. Volcanoes release aerosols and dust that block sunlight and cause the ocean surface to cool, which can offset, at least temporarily, sea level rises caused by increased greenhouse gases in the atmosphere. In recent decades, the average ocean temperature has warmed by about 0.037 degrees Celsius, according to the scientists. Gleckler and researchers in the United States and Britain were studying models of climate simulations when they noticed the impact of volcanic eruptions. Some of the climate models included the impact of such eruptions while others did not. “As we looked at the first picture of all these models together, we saw that just at the time of Krakatoa there was this very clean separation of those that included the eruption and those that did not,” Gleckler told Reuters. “Volcanoes have a big impact. The ocean warming and sea level would have risen much more if it weren’t for volcanoes,” said Gleckler, who reported the findings in the journal Nature. The study also included more recent eruptions including Pinatubo in the Philippines in 1991, which was on a similar scale to Krakatoa. But the effect of Pinatubo on ocean temperatures was much smaller because of the impact of greenhouse gases, which were much higher in 1991 than in 1883.
“The Pinatubo eruption influence on sea level and heat content was dampened by this background warming,” said Gleckler. He added that scientists must think more carefully about how they include the effects of volcanic eruptions such as Krakatoa and even earlier ones, in climate modeling. “We can’t rely on future volcanic eruptions slowing ocean warming and sea level rises,” Gleckler added.
Maternal health encompasses the health of women during and just after pregnancy, a time when women are at risk of complications and even death. Global Burden of Disease (GBD) research has found that maternal deaths have decreased significantly since 1990, although 293,000 women still died in 2013 from pregnancy-related causes. Most maternal deaths are related to complications of childbirth and the period post-delivery. Approximately 25% of deaths occur during delivery and the 24 hours following; another 25% happen during pregnancy, and the rest occur up to one year after delivery. The vast majority of maternal deaths occur in developing countries. Leading causes include hemorrhage, infections, high blood pressure during pregnancy (pre-eclampsia and eclampsia), delivery complications such as obstructed labor, and unsafe abortion. Many maternal deaths are preventable if women have access to medical care and adequate nutrition during pregnancy, including iron and calcium supplementation. Prenatal care, prevention of malaria during pregnancy, and giving birth in a health facility or with a skilled birth attendant also increase a mother’s chances of survival.
A team of scientists has discovered a single-site, visible-light-activated catalyst that converts carbon dioxide (CO2) into “building block” molecules that could be used for creating useful chemicals. The discovery opens the possibility of using sunlight to turn a greenhouse gas into hydrocarbon fuels. The scientists used the National Synchrotron Light Source II, a U.S. Department of Energy (DOE) Office of Science user facility at Brookhaven National Laboratory, to uncover details of the efficient reaction, which used a single ion of cobalt to help lower the energy barrier for breaking down CO2. The team describes this single-site catalyst in a paper just published in the Journal of the American Chemical Society. Converting CO2 into simpler parts—carbon monoxide (CO) and oxygen—has valuable real-world applications. “By breaking CO2, we can kill two birds with one stone—remove CO2 from the atmosphere and make building blocks for making fuel,” said Anatoly Frenkel, a chemist with a joint appointment at Brookhaven Lab and Stony Brook University. Frenkel led the effort to understand the activity of the catalyst, which was made by Gonghu Li, a physical chemist at the University of New Hampshire. “We now have evidence that we have made a single-site catalyst. No previous work has reported solar CO2 reduction using a single ion,” said Frenkel. Read more at DOE/Brookhaven National Laboratory Image: National Synchrotron Light Source II (NSLS-II) QAS beamline scientist Steven Ehrlich, Stony Brook University (SBU) graduate student Jiahao Huang, and Brookhaven Lab-SBU joint appointee Anatoly Frenkel at the QAS beamline at NSLS-II. (Credit: Brookhaven National Laboratory)
Prior to the publication of this volume in 1987, scholars interested in Old English alliterative meter had discovered a number of intriguing restrictions on verse form, and their discoveries proved useful in the editing of texts and in research on the early history of the English language. Up to this point, however, it had proved impossible to capture these restrictions in a plausible system of rules. In this book Professor Russom obtained a coherent and comprehensive rule system using the insights of linguistic theory. The rules of this system apply not just to stress and syllable count but to other features of word structure as well. Russom claims, in particular, that the concept of 'metrical foot' appropriate for the analysis of Old English poetry corresponds to the concept of 'word pattern' used in linguistic analysis. In Old English Meter and Linguistic Theory the author explains these rules carefully, justifies them from a linguistic point of view, and goes on to apply them to a wide variety of problems. The results should interest not only those who deal with Old English texts, but also metrists and linguists generally. Preface; Introduction; 1. The foot; 2. The verse; 3. Light feet and extrametrical words; 4. Interpretation of ambiguous linguistic material; 5. Relative frequency and metrical complexity; 6. Hypermetrical verses; 7. Alliteration; 8. Metrical subordination within the foot; 9. Words of classes B and C; 10. Rules and exceptions; 11. Overview; Appendix: rule summary; Notes; Works cited; Beowulf verses of special interest; Index.
Chemical kinetics, also called reaction kinetics, is the study of how fast chemical reactions go. This includes studying how different conditions such as temperature, pressure or the solvent used affect the speed of a reaction. Chemical kinetics can also be used to find out about reaction mechanisms and transition states. The basic idea of chemical kinetics is called collision theory. This states that for a reaction to happen, the molecules must hit each other. Ways of increasing the speed of the reaction must therefore increase the number of hits. This can be done in many ways. With experiments it is possible to measure reaction rates, from which you can get rate laws and rate constants. A rate law is a mathematical expression with which you can calculate the speed of a reaction given the concentrations of the reagents. Order of a reaction There are many types of rate laws, but the most common are: - zero-order reaction: the speed does not depend on the concentration - first-order reaction: the speed depends on the concentration of only one reactant - second-order reaction: the speed depends on the concentration of two reactants, or on the concentration of one reactant squared. From this data, it is possible to think about the mechanism of the reaction. If it is second-order, for example, then it is likely that both molecules in the reaction are coming together during the rate-determining step. This is the most difficult step in the mechanism to go through, because it has the highest activation energy.
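The three rate laws listed above can be illustrated numerically. This is a minimal sketch assuming the simple power-law form rate = k·[A]^n for a single reactant; the rate constant and concentration values are made up for the example:

```python
# Minimal sketch of power-law rate laws: rate = k * [A]**n.
def rate(k, conc, order):
    """Speed of a reaction with a single reactant at the given order."""
    return k * conc ** order

k = 0.5   # rate constant (its units depend on the order)
a = 2.0   # concentration of reactant A, in mol/L

print(rate(k, a, 0))  # zero order: speed ignores concentration → 0.5
print(rate(k, a, 1))  # first order: proportional to [A] → 1.0
print(rate(k, a, 2))  # second order: proportional to [A] squared → 2.0
```

Doubling the concentration leaves a zero-order rate unchanged, doubles a first-order rate, and quadruples a second-order rate — which is exactly how experimenters deduce the order from measured data.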
In mathematics, the mean, median, mode and range are common statistical measurements of a simple set of data. The range measures the spread of the data: it is the difference between the highest and lowest values in the set. This calculation can be made for any set of real numbers, including temperatures. The range is a fairly easy calculation to make, and calculating it can tell you a lot about the set of numbers in question. List the numbers in the data set of temperatures. Put them in order from lowest to highest. Identify the lowest number in the data set, as well as the highest number. Subtract the lowest number in the set from the highest number. The resulting value is the range of the set of temperature values.
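The steps above can be sketched in code with a made-up list of temperature readings (the values are purely illustrative):

```python
# Hypothetical daily high temperatures, in °F.
temps = [68, 71, 59, 75, 62, 80, 66]

ordered = sorted(temps)                    # step 1: order from lowest to highest
lowest, highest = ordered[0], ordered[-1]  # step 2: identify the extremes
temp_range = highest - lowest              # step 3: subtract lowest from highest

print(temp_range)  # 80 - 59 → 21
```

Sorting is not strictly necessary — `max(temps) - min(temps)` gives the same answer in one line — but following the listed steps makes the procedure explicit.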
An 'absconding swarm' refers to a swarm of bees that are completely abandoning their hive. Normal swarms are simply a part of the reproductive cycle of bee societies. A queen bee will lead a swarm from the old hive, taking with her the older generation of worker bees and drones. They will leave the younger bees behind to continue the colony in the original hive; this will usually include a queen or at least a developing queen, the entire brood (the larvae and pupae), and enough bees to take care of everything until the brood starts hatching and replenishes the hive. In an absconding swarm, all the bees that can get up and go, do so. This is a drastic measure, and is not a form of reproduction, as the entire brood is left to die. This may be done as the result of attacks on the hive (from humans, dragonflies, ants, snakes, mice, etc.), starvation, or disease (such as foulbrood). Absconding swarms are very rare; a colony will usually die out before vacating the hive. Africanized honeybees are more likely to form an absconding swarm when resources are scarce than are European honeybees, sometimes descending on farms and other human habitations where there are steady water supplies or food sources.
Between 1945 and 1954, the Vietnamese waged an anti-colonial war against France, which received $2.6 billion in financial support from the United States. The French defeat at Dien Bien Phu was followed by a peace conference in Geneva, at which Laos, Cambodia, and Vietnam received their independence and Vietnam was temporarily divided between an anti-Communist South and a Communist North. In 1956, South Vietnam, with American backing, refused to hold the unification elections. By 1958, Communist-led guerrillas known as the Viet Cong had begun to battle the South Vietnamese government. To support the South's government, the United States sent in 2,000 military advisors, a number that grew to 16,300 in 1963. The military situation deteriorated, and by 1963 South Vietnam had lost the fertile Mekong Delta to the Viet Cong. In 1965, Johnson escalated the war, commencing air strikes on North Vietnam and committing ground forces, which numbered 536,000 in 1968. The 1968 Tet Offensive by the North Vietnamese turned many Americans against the war. The next president, Richard Nixon, advocated Vietnamization: withdrawing American troops and giving South Vietnam greater responsibility for fighting the war. His attempt to slow the flow of North Vietnamese soldiers and supplies into South Vietnam by sending American forces to destroy Communist supply bases in Cambodia in 1970, in violation of Cambodian neutrality, provoked antiwar protests on the nation's college campuses. From 1968 to 1973, efforts were made to end the conflict through diplomacy. In January 1973, an agreement was reached; U.S. forces were withdrawn from Vietnam, and U.S. prisoners of war were released. In April 1975, South Vietnam surrendered to the North, and Vietnam was reunited. The Vietnam War cost the United States 58,000 lives and 350,000 casualties. It also resulted in between one and two million Vietnamese deaths.
Congress enacted the War Powers Act in 1973, requiring the president to receive explicit Congressional approval before committing American forces overseas. Vietnam was the longest war in American history and the most unpopular American war of the twentieth century. It resulted in nearly 60,000 American deaths and an estimated 2 million Vietnamese deaths. Even today, many Americans still ask whether the American effort in Vietnam was a sin, a blunder, a necessary war, a noble cause, or an idealistic, if failed, effort to protect the South Vietnamese from totalitarian government. Resources on the history of the Vietnam War: Declaration of Independence, Democratic Republic of Vietnam, September 2, 1945. Handouts and fact sheets: Vietnam Archive at Texas Tech University. George Herring, America's Longest War: The United States and Vietnam 1950-1975, a balanced account of American involvement in Vietnam. A film based on director Oliver Stone's experiences as an infantryman in Vietnam offers a harrowing and heartbreaking glimpse into what it was like to be a soldier during the Vietnam war. The Wars for Vietnam site provides an informative overview of the history of the war supplemented with primary source documents. A collection of documents relating to U.S. involvement in Vietnam and the United States from 1941 to the fall of Saigon.
“To know that we know what we know, and to know that we do not know what we do not know, that is true knowledge.” -Nicolaus Copernicus As we peel back the layers of information deeper and deeper into the Universe’s history, we uncover progressively more knowledge about how everything we know today came to be. The discovery of distant galaxies and their redshifts led to the expanding Universe, which led to the Big Bang and the discovery of very early phases like the cosmic microwave background and Big Bang nucleosynthesis. The history of the Universe, as far back as we can see using a variety of tools and telescopes. Image credit: Sloan Digital Sky Survey (SDSS), including the current depth of the survey. But before that, there was a period of cosmic inflation that left its mark on the Universe. What came before inflation, then? Did it always exist? Did it have a beginning? Or did it mark the rebirth of a cosmic cycle? Maddeningly, this information may forever be inaccessible to us, as the nature of inflation wipes all this information clean from our visible Universe. How cosmic inflation gave rise to our observable Universe, which has evolved into stars and galaxies and other complex structure by the present. Image credit: E. Siegel, with images derived from ESA/Planck and the DoE/NASA/ NSF interagency task force on CMB research. From his book, Beyond The Galaxy.
Yale-New Haven Teachers Institute
Issues of war and peace in the context of children's lives are the focus of this unit. I begin by looking at the role of the historians and authors who pass from generation to generation the 'truths' of what is often taught. In order to make this unit meaningful to younger students, the role of children in times of war is explored. Children's experiences as refugees and child soldiers, and the impact of famine, are described and studied. Three stories by Dr. Seuss frame the unit as we explore nonviolent solutions to conflict resolution, and they are presented as a way to contextualize further discussions on war and peace. The setting of this unit is an integrated social studies unit that takes into consideration the history of the peoples that make up the classroom. This unit is geared towards elementary school children in the second to fourth grade. The ability to integrate this unit with language arts provides many opportunities for extensions into other curricular areas. A list of student and teacher electronic and other resources is provided for the implementation of the unit. (Recommended for Integrated Social Studies and Language Arts, grades 2-4.)
Neutron generators provide materials analysis and non-destructive testing tools to many industries, including oilfield operations, heavy mechanical production, art conservancy, detective work, and medicine. Many of these applications have been limited by the rather large size of current industrial and medical neutron sources. Now Sandia National Laboratories (SNL), whose main job is to develop and support the non-nuclear parts (including neutron generators) of nuclear weapons, has invented a new approach toward building tiny neutron generators called neutristors. The neutron was discovered as the product of an early radiochemical fusion reaction in 1932. Following a decade of mainly scientific use, the WWII nuclear bombs exploded over Japan each included a neutron generator to ignite the critical mass of fissionable material at the correct time. This event neatly split the development of neutron generators between the secret and the open worlds. The neutron sources available to science and industry included particle accelerators (at that time these filled large rooms), nuclear reactors (filling large buildings), and radioactive materials the size of your little finger. As most researchers and manufacturing companies did not have easy access to reactors and accelerators, a good deal of work toward developing practical applications for neutron sources was carried out with radioactive neutron generators. There are three main approaches toward using radioactive isotopes to generate neutrons: - Radioactively-induced fusion neutrons Radioactive neutron generators usually emit fewer than a billion neutrons per second with a kinetic energy of a few MeV. The power of the emitted neutrons is only about a milliwatt, but the yield is sufficient for many applications. The problem with radioactive sources is that they are dangerous, cannot be turned off, and may not always be handled by people who understand the danger.
In many cases the shielding required is very large compared to the size of the source. Although such sources are still used for certain tasks, in the end, miniaturized particle accelerators that drive low-level fusion reactions won out, and accelerator-based neutron generators about the size of a mailing tube tied to a suitcase-sized electronics package became available. The miniaturized neutron generators accelerate deuterium (D) or tritium (T) ions to energies of 100 keV (kiloelectron volts) or less, corresponding roughly to a temperature of about a billion kelvin. These ions are then directed into a beam that impacts onto a target containing deuterium. When deuterium is used in the ion beam, two deuterium ions fuse (D-D fusion), while if tritium is used, a deuterium and a tritium ion fuse (D-T fusion). In both cases, neutrons are by-products of the fusion reaction. There are two main problems with accelerator-based neutron generators: their size and their cost. There are applications for which a three-inch (7.5 cm) cylinder is too large, either physically (implanted neutron cancer therapy), or when a point source of neutrons is desired (e.g., for neutron inspection of weld flaws). Also, accelerator-based generators start at about a hundred thousand U.S. dollars, which is too large a price for some uses. For example, a neutron generator is needed for neutron activation analysis, a technique for rapidly identifying the composition of a sample. This is the sort of technique that would be amazing to incorporate in a Star Trek-style tricorder, but has so far been far too large and expensive.

SNL's compact neutron generator

Now SNL has announced its development of a new type of neutron generator that solves many of these problems by putting a particle accelerator on a chip. As seen in the figure above, the neutristor is layered in ceramic insulation because of the large voltages being used. The unit shown here produces neutrons through D-D fusion.
The D-T reaction is easier to initiate, but the decision was made to require no radioactive materials in the design of the generator. A voltage is applied between the ion source and the deuterium target so that the deuterium ions from the source are attracted to the deuterium target. The ions accelerate in the drift region between the source and the target. The drift region must be in vacuum so the ions do not scatter off air molecules. When the energetic ions hit the target, a small fraction of them will cause D-D fusion, thereby generating a neutron. Sandia did not announce the typical acceleration voltages used with the neutristor; commercial neutron generators use around 100 kV, but significant neutron yields can be obtained at voltages under ten kV. The ion lens modifies the electric field between the ion source and the target so that the accelerated ions are concentrated on the region of the target loaded with deuterium. The SNL disclosure does not mention how the deuterium gas is stored, but one common approach is to coat the ion source and/or the target with palladium or some other metal that readily forms hydrides, or in this case, deuterides. For example, a palladium coating can store nearly one deuterium atom for each palladium atom. The ion current is sufficiently low that even these small amounts of deuterium will last a very long time in the completed neutristor. Neutristors can be operated in continuous or pulsed mode as required. Current neutristors have a drift region a few millimeters across, forming a sufficiently small package for many new applications. The estimated production cost for neutristors is in the neighborhood of US$2,000, about a fiftieth of the cost of current accelerator-based neutron generators. The next generation of entirely solid-state neutristors will not require a vacuum for operation, thereby reducing the cost and increasing the durability of the device.
In addition, SNL is working on neutristors two to three orders of magnitude smaller, which would be fabricated using MEMS (microelectromechanical systems) technology. The following movie is an excellent introduction to how the development of neutristors came about, and a good account of the underlying technology. Source: Sandia National Laboratories
Wind and Clouds (submitted by Susan): After discussing how the wind moves the clouds, let your children have cloud races by blowing through straws (the wind) to move cotton balls or pompoms (the clouds).

What Does the Cloud Look Like? (submitted by Amy): Talk to your children about how people often see different things in clouds. Then fold a colored piece of paper in half, put a few drops of white paint in the crease, and have your children press down on the paper. Ask them what they think the "cloud" looks like. This activity goes well with the book "It Looked Like Spilt Milk."

These preschool ideas found at: Everything Preschool >> Themes >> Clouds >> Games
To convert sunlight into electricity is not necessarily hard to do, but doing it efficiently can be very difficult. This is why researchers have been working to understand all of the mechanics involved in photovoltaics, especially those of organic semiconductors. Researchers at the University of Houston and the University of Montreal have recently devised a new theory for what happens within organic solar cells, and it could potentially lead to breaking the Shockley-Queisser (SQ) limit, the theoretical limit on the efficiency of semiconductor-based solar cells. The new theory considers the quantum mechanical effects associated with the vibrational motion of molecule chains in a polymer and the electronic structure of the material. This vibrational-electronic coupling could lead to some interesting effects, and if properly understood could even be exploited to optimize a solar cell's efficiency. It may even enable the SQ limit to be broken. The researchers next plan to work with those more familiar with producing polymers and solar cells to put the theory to the test. Source: University of Houston
You have probably heard that wild fish are in peril around the world, and that in some places their populations are in precipitous decline. That is particularly true on the high seas, or international waters. Operating as a massive unregulated global commons, where any nation can take as much as it wants, the high seas are experiencing a latter-day “tragedy of the commons,” with the race for fish depleting stocks of tuna, billfish and other high-value migratory species. A new paper, written by Christopher Costello, a professor of resource economics at UC Santa Barbara’s Bren School of Environmental Science & Management, and Crow White, an assistant professor in the biological sciences department at Cal Poly San Luis Obispo and a former Bren School postdoctoral researcher, suggests a bold approach to reversing this decline: close the high seas to fishing. The paper appears today in the open access journal PLOS Biology. Sound like a radical notion? Not according to White and Costello, who found that such a policy could actually provide a triple-bottom-line benefit, increasing not only global stocks of high-value species, but also fisheries harvests and profits from them. The idea is that closing the high seas to fishing would allow fish populations to rebuild, and because the fish migrate, it would also generate a “spillover effect” as some fish from protected international waters find their way into the exclusive economic zones (EEZs) of each nation, where they could be harvested. Currently, the world’s oceans are governed as a system of more than 150 EEZs occupying about 42% of the ocean, and one large high-seas commons comprising the remaining 58%, which is essentially open-access to all nations. Many high-value fish species migrate across these large oceanic regions.
Some nations have catch limits on such fish in their own EEZs, but with essentially no catch limits on the high seas, migratory fish are systematically overfished, with the result that their numbers continue to decline. Over the decades, hundreds of attempts have been made to create international agreements to coordinate fishing across EEZs and the high seas. Nearly all have failed, and today, migratory species on the high seas pose perhaps the greatest global challenge to sustainable fisheries management. The researchers addressed the problem by developing a computer simulation model of global ocean fisheries and using it to examine a number of management scenarios, including a complete closure of fishing on the high seas. The model has a bio-geographic component that tracks the migration and reproduction of fish stocks in different areas, and a socio-economic component that quantifies the fishing pressure or activity, catch, and profits of each fishing nation under various policies. They found that closing the high seas could more than double both populations of key species and fisheries profit levels, while increasing fisheries yields by more than 30%. As an alternative to a high seas closure, the authors also examined a policy that would extend the world’s EEZs beyond the current 200-nautical-mile limit. They found that while that solution could benefit some fisheries, a complete closure of the high seas would provide superior fishery and conservation outcomes. “From a policy perspective, the results are incredibly important because they indicate a win-win-win (food, profit, conservation) scenario from closing the high seas,” said lead author White. “Further, even though our main focus was on the profitability of fisheries, this policy would represent possibly the largest conservation benefit ever enacted in the world’s oceans,” said co-author Costello, a resource and environmental economist at the Bren School. “We were pretty shocked.
We definitely did not set out thinking a complete closure could be such an all-around beneficial policy.” The study makes a significant scientific contribution to an important, timely, and highly policy-relevant debate. To date, marine protected areas (MPAs) that ban or regulate fishing are largely located within EEZs. Only a few are located on the high seas, and they are too small to protect most migratory stocks. A complete closure of the high seas had not been proposed previously. Still, further research is needed before such a bold proposal could be put into practice. “We hope this can be a starting point for further analysis and debate about the ecological and economic implications and political feasibility of a high seas closure,” said White. Issues to be addressed in more detail include the political acceptance of what will likely be highly varied impacts across different fisheries and nations. Also, the current legal instrument for high seas governance, the United Nations Convention on the Law of the Sea, would need to be reconfigured to include a high seas closure, and the methods and cost of enforcement would need to be determined and integrated into the closure’s logistical operation and economic performance. “This is a bold idea, and perhaps the only way to eliminate the tragedy of the commons that is unfolding in much of the ocean,” said Boris Worm, a marine research ecologist and associate professor at Dalhousie University in Nova Scotia. “The careful analysis presented in this paper supports this view, and is bolstered by more local ‘experiments’ where open access was eliminated and benefits arose quickly, both for fish and fishermen.”
GCC Inline ASM

GCC has an extremely powerful feature where it allows inline assembly within C (or C++) code. Other assemblers allow verbatim assembly constructs to be inserted into object code. The assembly code then interfaces with the outside world through the standard ABI. GCC is different. It exposes an interface into its "Register Transfer Language" (RTL). This means that gcc understands the meaning of the inputs and outputs of the fragment of assembly code. The extra information gcc has allows it to carefully choose the registers (or other operands) that define the interface. The ones chosen can vary depending on the surrounding code. In addition, gcc can be told which registers will be "clobbered" by the assembly code. It will then automatically save and restore them if required. This contrasts strongly with other methods, where inlined assembly code needs to do this saving and restoring manually. (Even when the surrounding code is such that it isn't needed.) The result is that commonly a piece of gcc inline assembly will compile into a single asm instruction in the executable or library. (Often you just want access to a single instruction not exposed by C.) However, to do this, you need to understand how to craft the constraints told to the compiler. If they are incorrect, then subtle bugs can result. A simple function using inline-assembly might look like: The above shows several features of gcc's interface. Firstly, the asm code is a compile-time C constant string. You can put anything you like within that string. GCC doesn't parse the assembly language itself. What it does do is use escape sequences (i.e. %0 in the above) to reference the interface described by the programmer. In this case %0 corresponds to the zeroth constraint, which in turn is described after the colon. That constraint "=r" is an output constraint (due to the use of the '=' symbol), and consists of a general-purpose register (due to the use of the 'r' symbol).
The resulting output is then stored into the variable within the parentheses, 'out'. The result is a magic bit of code that somehow materializes a value, and then stores it into the variable 'out'. GCC doesn't understand where the value comes from. So in turn, it doesn't know that the variable 'var1' is actually used unless you tell it explicitly via the 'used' attribute. (An unused variable can be elided from the executable object as a simple optimization.) When the above is put inside a .c file called gcc_asm.c, and then compiled, the result is: The standard ABI on 64-bit x86 machines is to return integers in the %eax register. GCC picks this for the register chosen to contain the variable 'out'. Thus the resulting function actually only consists of two instructions: (The above has a whole lot of asm directives describing unwinding and debug information in addition, but that doesn't appear in the straight-line code.) See how gcc has replaced the '%0' in the asm string with the register it picked for the zeroth constraint. If there were more constraints, we could use '%1', '%2' etc. for them in the asm string. Values up to '%9' are available. The above describes how to get information out of a fragment of inline assembly code. So what about the reverse, getting information in? An example function that does that looks like: The above looks very similar to the first function. However, it has two more colon-delimited parts to the asm intrinsic. The first of these is again the asm string. The second, for the outputs, is blank in this case. This function has no outputs. The third section is an input constraint. Notice that the '=' symbol is missing. (It's an input, not an output.) What remains is the 'r', describing that this asm code wants that input stored in some general register. Finally, the asm code ends with a 'memory' clobber. This tells gcc that it writes to arbitrary memory.
One other difference from the other function is that the asm fragment has an extra 'volatile' keyword. This is necessary because the code has no outputs. GCC needs to know if it is allowed to elide the perhaps useless asm which may not interact with anything else. The 'volatile' tells gcc that it shouldn't be removed. The 'memory' clobber tells gcc that it shouldn't move this call across other memory references. (Otherwise our read of 'var2' might cross writes to it.) It is possible to have output-less inline asm that doesn't have the above annotations. However, be aware that gcc can optimize your asm away, or move it around, if they are missing. If done when you don't expect it, the result will again be subtle bugs. The above when compiled yields: Which again is as small as possible. GCC picks the %edi register, corresponding to the ABI register for the first parameter on x86_64. (If you want to find the exact code generated by the asm fragment, look for the areas surrounded by #APP, #NO_APP comments.) It is easy to create an inline asm with both input and output parameters: Here, the input parameter is %1, and the output is %0. Note the AT&T syntax used by default, which has outputs on the right of the asm instructions. Intel format can be used, which swaps things around. However, most gcc inline asm you will see will stick to AT&T format, so you should get used to seeing it. The above compiles into: GCC has picked both input and output registers so that again the result is a single instruction. A slightly more complex example is when you want something to be both an input and an output at the same time. For that, use the position of an output, and use a '+' symbol instead of an '=': The above also shows how you should prefix immediates with a dollar symbol in AT&T syntax. It also has the 'cc' clobber. This stands for "condition codes". Since the add instruction will affect the carry flag amongst other things, we need to tell gcc about it.
Otherwise it might want to split a test-and-branch around our code. If it did so, the branch might go the wrong way due to the condition codes being corrupted. Basically, any inline asm that does arithmetic should explicitly clobber the flags like this. When compiled, we get: So now the input and outputs are one and the same register, %eax. However, since the parameter passed to the function is in %edi, gcc helpfully copies it into %eax for us. Only when the copying was really needed did gcc insert it. Looking at a slightly more complex example: Functions 5 and 6 attempt to do something similar to function 4. However, instead of returning a value, they call some other function called foo. This means that the output should be in the %edi register. However, the input will also be in that register. The result shows how gcc will assume that output and input registers are allowed to overlap unless you tell it otherwise. func6() will not work correctly. gcc will pick %edi for both 'out' and 'parm'. This will compile into: Which isn't what we want. The register is corrupted, and then added to itself. To fix this, use the '=&' constraint. That tells gcc that the output constraint register shouldn't overlap an input register. Using that instead gives us function 5: Which uses two registers, as required. It picks %eax for this, and inserts the extra copy needed. You may have noticed that the multi-line asm used '\n\t' control codes. This simply makes the generated assembly nicely formatted. You just need a newline '\n' to go to the next line. The tab character indents things to line up with the code generated by gcc for the rest of the program. (Remember that the inline asm string is basically inserted verbatim into the output sent to the assembler, modulo simple replacements.) To have multiple inputs, just separate them with commas: Which will compile into: Another possibility is that you might want some inputs and outputs to share a register.
As described above, one way to do that is to use the '+' constraint. However, there is another way. You can use the number corresponding to another constraint within a second constraint. If you do this, then gcc will know that the two are linked, and must be the same. An example of using this is: Which compiles into: This may or may not be a more readable technique than using a '+' constraint. '+' used to be buggy in old versions of gcc, so old code tends to use this method. Newer code might want to use the more concise '+' descriptor. In addition to passing information in registers, gcc can understand references to raw memory. This will expand to some more complex addressing mode within the asm string. Note that not all instructions can handle arbitrary memory references. Thus sometimes you need gcc to create a register with the required information. However, if you can get away with it, it is more efficient to use memory directly. Some code that does this looks like: Which compiles into: Notice how in the above, gcc has generated a %rip-relative addressing mode for us. Sometimes you really want a constraint to be satisfied by a certain register. Fortunately, gcc has specialized constraints for many (but not all) of the general purpose registers used on x86_64. The above code shows how you can explicitly use the 'a' register (which corresponds to %al, %ax, %eax, or %rax, depending on size). Note how we need to use a double-percent sign within the asm string. This is similar to a normal printf format string, where to print a single percent sign you need two of them. (This is due to the percent symbol being an escape character.) Compiling, we get: GCC has copied from %edi into the constraint register defined by 'a', %eax, for us. Note that different machines will have differing names, and differing constraint symbols, for their registers. You will need to look at the gcc documentation for your particular machine to find out what they are.
This article will concentrate on the x86_64 case. Another commonly used register is the 'd' (%dl, %dx, %edx, %rdx) register: The above is a little tricky. p3 is passed in %edx as specified by the function ABI. This means that gcc needs to copy it into another register so that p1 can go there. Fortunately, gcc handles all of the marshalling for us: Note the extra moves before the add instruction, and afterwards, in order to get things where they need to be. This is the reason why you really shouldn't use explicitly named registers if you can avoid them. The only time they are unavoidable is if you want to match some kind of ABI, or have to interface with an instruction with fixed inputs or outputs. An example of this on x86 is the mul instruction. That will put its output in the 'a' and 'd' registers, and always takes one of its inputs from the 'a' register. So to describe its use you might do something like: The above uses another feature of gcc asm. Sometimes inputs commute, and we don't really care which of them uses a particular register. In this case p1*p2 = p2*p1, and we don't mind which of them goes in %eax. To tell gcc this, we can use the '%' constraint flag, which means that that constraint and the following one commute. In this case, gcc decides not to swap the order of the two inputs because it doesn't matter. We can try something slightly different, where we use the 'D' constraint to force the use of %edi as the multiplicand. This compiles into: Unfortunately, gcc fails to make the swap in this case as well, even though it would be very profitable to do so. It looks like you can't really count on the '%' constraint specifier, which is a shame. There is another way to get more flexibility within the constraints. You can simply list more than one constraint symbol. GCC will choose the best one.
An example of using either a register, or a direct memory reference is: Which will use the better direct-memory operand: Another way of gaining flexibility is using a more general constraint. 'g' allows a register, memory, or immediate operand. Using it: GCC will again pick the best option, which in this case is a direct memory addressing mode. Of course, if you want an immediate, there is a symbol for that as well, 'i'. The limitation is that an immediate must be a compile- or link-time constant. Which compiles into: Notice how gcc automatically converts into the AT&T syntax for us, with the dollar symbol preceding the constant. There are other constraint modifiers. One of these is the '#' symbol, which acts like a comment character. The above compiles into: Everything after the hash symbol is ignored. Unfortunately, you can't include spaces or punctuation symbols within the comment. The other thing that ends the 'comment' is a comma. This is because you can use commas to allow multiple alternatives in an inline asm. The alternatives are linked together (all first option, all second option, etc.) rather than being unlinked like in the 'rm' case. Some example code is: The above shows the power of the technique. In x86 assembly language, there can only be a single reference to memory within an instruction. Thus if we use two 'g' constraints, we can sometimes generate invalid code. One fix for this is to use register-only 'r' constraints. However, they can lead to inefficiency. What we want to do is ban only the invalid option. By using alternative constraints, we select the valid 'm + r', 'r + m', and 'r + r' options. Note that this feature isn't used very often within inline asm code, so it is a little buggy. The final inline asm, which is #defined out in the above function, should work. However, gcc gets confused by it. The fix is to add the 'r + r' option, like in the other cases.
When compiled, the above yields: Another possibility is when you want a constraint, but you don't want the compiler to worry too much about the cost of that constraint. This doesn't really come into play very often. In fact, with orthogonal architectures like x86, it may not happen at all. This is really a case of API leakage, where gcc offers to all machines a feature that may only be useful on some. The '*' constraint specifier causes the following character to not count in terms of register pressure. The canonical example is the following: In the above we have an instruction (an add, in this case), which will either take two references to the same register, or a memory-register combination. The same-reg, same-reg case is more strict, and we would like gcc to use the memory-addressing version if possible. The '*' accomplishes this. However, this trick is rather subtle... and probably shouldn't be used with inline asm. The above compiles into: Note how the differing form of the instruction is chosen. A much better technique is to use constraint modifiers that explicitly penalize some alternatives over others. By using the right amount of penalization, you can create patterns that match the machine's costs. GCC will then be able to make intelligent choices about which is best. The simple way to do this is to add a '?' character to the more costly alternative. The above shows how you can tell the compiler that (for example) %eax can be more or less expensive to use than %edx. It compiles into: Of course, a single level of penalization might not be enough. You can add more '?' symbols. Two question marks is even more penalized than one. For even greater penalization, you can use the '!' symbol. It is equivalent to 100 '?' symbols. This should be very rarely needed. Up until now, we have only used the clobber part of the asm intrinsic for 'memory' and 'cc' (condition codes). However, there are other things you can put in there. The most often used are the names of registers.
This tells gcc that that register is somehow used in the asm string. It will not use that register for inputs or outputs, and will helpfully save that register before the asm is called, and then will automatically restore it afterwards. An example of this where we clobber the %rdx register is: The mul instruction will write to %rax and %rdx. We don't care about the upper part, so it isn't an output. To tell gcc about the register write, the clobber does the job. (Yes, there are other versions of the x86 multiply instruction that don't clobber %rdx unnecessarily, but this is just an example of how clobbers might be useful.) This compiles into: In this case, %rdx is 'dead' because it is a parameter-register in the ABI. GCC doesn't need to save or restore it, so it doesn't. Without the clobber, we would need to save and restore the register manually. That would be inefficient in cases like the above, where such saves and restores are not needed. Of course, you can clobber more than one register: The above is bad coding style. You really shouldn't use control-flow altering instructions inside inline asm. GCC doesn't know about them, and can do optimizations that invalidate what you are trying to do. (If 'foo' is inlined everywhere, it may not even exist to call.) Also, there have been many bugs when the number of clobbered registers gets too large. If gcc can't find a way to save and restore everything, it may simply give up and crash. In the above case, we are lucky, and it compiles without issue. The trick is to notice that the clobbered registers are all dead (except %rdi) due to the x86_64 SYSV ABI. A much better technique is to use explicit temporaries. GCC can then allocate them wherever it has space. It can also move things around for more efficiency, based on the needs of surrounding code. An example of doing this is: In the above, we use two temporary registers.
Since we don't want them to overlap the other inputs or outputs, they need to be defined by '=&r' constraints. The only thing left on the clobber list is the 'cc', due to the arithmetic and logic instructions altering the condition codes. Finally, there is another way to name registers within the asm string itself. Depending on your point of view, the numerical '%0-%9' scheme may be more or less readable than the following: By putting a name within square brackets in the constraints, we can then use those names in the asm string. Note that the asm operand name does not have to be the same as the C variable it comes from. However, for readability, it may be better to keep the two the same if possible. The main disadvantage of the technique is that it can make the asm string a little longer, and can make it harder to see what addressing modes are used.

Less Common Constraint Types

There are a few standard constraints beyond those discussed above. One of these, 'o', is for "offsetable memory", which is any memory reference that can take an offset to it. On the orthogonal x86 architecture, this is anything that 'm' could reference, so this constraint class isn't too useful there. Other machines may be different though. An example of its usage is: Which compiles into: The linker and assembler understand the more complex addressing within "out.2398+4(%rip)", and will generate the appropriate fix-up for us. Since some machines have offsetable memory as a separate class from normal memory constraints, there is some memory which is not offsetable. If you want to have a constraint that references such memory, you can use the 'V' constraint flag. However, since x86 doesn't have such a beast, we don't provide an example of its use. Some machines provide memory that automatically increments or decrements things stored within it. Such memory can be described by the '<' and '>' constraints.
Again, x86 doesn't have anything like that, so those constraints are not supported, and no example is provided. Another constraint that isn't so useful on x86 is 'n'. That refers to a constant integer that is known at assembly time. Some machines have less capable assemblers and linkers, and cannot use the more general 'i' constraint. 'i' is an integer constant known at link time. Since 'n' defines a sub-category of 'i', you can also use it on x86: The above acts just like 'i' would do, and uses the 5 as an immediate: Another integer immediate constraint type is 's'. This describes an integer that is known at link time, but not compile or assembly time. This isn't particularly useful on x86, but on other machines can lead to optimizations. Not all immediates are integers. Some machines allow immediate floating point numbers. The 'E' constraint is for floating point immediates that are defined on the compiling machine. If the target machine is different, then the bit-values may be incorrect. Thus, this constraint shouldn't be used if you are cross-compiling. The x86 architecture really doesn't allow floating point immediates. You should get constants into SSE registers and the legacy floating point stack from memory instead. However, there are a couple of special cases that still work: The above uses the bit-pattern for the double '2.0', and indirectly moves it into an SSE register (defined by the 'x' constraint). It would be more efficient to do a direct memory load, but the above does work: The code for float-sized immediates is similar: In addition to the 'E' constraint is the 'F' constraint. This is cross-compiling friendly, and should probably be used instead. Otherwise, it has the same meaning as its 'E' cousin. Which produces identical code to the 'E' version: Another rarely used constraint is 'p'. It describes a valid memory address. On x86, it behaves just like 'm' does. You should use the more standard 'm' instead. 
Which compiles into: There is one final constraint common to all machines, 'X'. This constraint matches absolutely everything. This catch-all doesn't give gcc any information about how to pass the information to the inline asm, so gcc picks the form most convenient for it. Since the exact output will be highly variable, it is difficult to use in normal asm instructions. However, it may be helpful in asm directives: The above compiles into: This creates a zero-terminated ASCII string containing the operand used by gcc. With a bit of section magic, it obtains a pointer to it, which is then returned in the output. X86 Register Constraints Most of the previous constraint types will work on all machines. Some have been x86-only though. For example, 'a', which will expand to '%al', '%ax', '%eax' or '%rax', will obviously not work the same way on another architecture. We have seen a few of these x86-only constraints already, but there are many more. A simple register constraint is 'R'. This selects any legacy register for use. i.e. one of the a,b,c,d,si,di,bp, or sp registers. This may be useful when interfacing with old code unable to use any of the new 64 bit registers. Otherwise, the constraint acts just like 'r' would do: The above cannot use p5 as is because it is passed in %r8 by the ABI. Thus gcc will insert a move instruction into a legacy register as requested. This copy wouldn't happen if 'r' were used instead. Another constraint that picks a subset of the available registers is 'q'. This picks a register with an addressable lower 8-bit part. The list of available registers differs between 64-bit mode and 32-bit mode. In 32-bit mode, some of the registers don't exist. i.e. you can't access %dil or %sil. Otherwise the use looks exactly like 'r' would have. A variant of the above is the 'Q' constraint, which picks a register with a 'high' 8-bit sub-register. i.e. 
any of the a, b, c or d registers: Which compiles into: Notice how the compiler was not allowed to use the %edi register as the operand any more. Instead, it picked %edx. As we have seen in the earlier sections, some of the x86 registers have constraints of their very own. We have seen 'a' and 'd'. Similarly, 'b' and 'c' do what you might expect, and refer to the '%bl', '%bx', '%ebx', and '%rbx' registers, and the '%cl', '%cx', '%ecx', and '%rcx' registers respectively. An example of this might be: Where every input has had its register manually defined by an explicit constraint. GCC needs to do a little bit of copying to get everything into the right spot. There are also special constraints for the si and di registers, 'S' and 'D' respectively. (We have used 'D' before in func13().) Something using them looks like: Which compiles into: There is one final way to access the general purpose registers, which is via the 'A' constraint. This is the two-register pair defined by the a and d registers. This is useful when you want to deal with 128-bit quantities in 64-bit mode, or 64-bit quantities in 32-bit mode. The low bits are stored in the a register, and the high bits in the d register, just like the multiply and division asm instructions expect. Its use looks like: Which compiles into: Since the ABI requires a function returning a 128-bit integer to do so in %rax and %rdx, the above has no extra register to register copies. (Other than that required to get the multiply instruction initialized.) X86 Floating Point Constraints The x86 has a strange floating-point coprocessor which uses an internal stack of registers. Dealing with this is difficult with gcc. You need to make sure that the right number of values are added and removed from this stack. GCC assumes that all output constraints are under its purview, and are popped by it. Input constraints are more complex, and can either be popped by gcc afterwards or not. 
The least complex method is to tie an input constraint to an output. That makes it popped afterwards with the output that replaces it. You can also clobber an input to make it assumed to have been implicitly popped. Otherwise, gcc will assume it can use the input later for other calculations, and will handle the popping of that register itself. One critical detail is that the floating point processor acts on a stack. That means that the used (popped or not) registers must be contiguous. It's not possible for gcc to re-arrange the stack by popping something in the middle. You need to make sure the outputs are first in the stack, followed by all registers you pop, and finally followed by the ones gcc will pop from that stack. The constraint for the top of the floating point stack is 't'. We can add things to the stack without a floating point register input by using memory instead: The above converts an integer into a long double float: The ABI mandates that long doubles are returned in st(0), so the above routine doesn't need to alter the stack. The next-from-top floating point register, st(1), also has a special constraint: 'u'. An example of its use might be: Note how in the above we link the first input to the output, so it is stored in st(0), and popped by gcc afterwards. The other input is in st(1), and since it is not clobbered, will also be popped by gcc afterwards. You can see how gcc sets up the floating point stack (in a not particularly efficient way). You can also see how the st(1) input is cleaned up afterwards by the fstp instruction. st(0) is still live at the end of the function, and is used for the long double output. Finally, you can create an input in an arbitrary floating point slot by using the 'f' constraint. (This doesn't work as an output constraint.) An example of this is: Where just to be different from the previous function, we use an in-out parameter on the top of the stack. Again the code generated has an extra fxch beyond what is needed. 
You really shouldn't use the legacy floating point instructions. Instead, modern code should use SSE instructions for their floating point work. Another legacy part of the x86 instruction set is the mmx registers. These are aliases of the legacy floating point stack. This means that they are difficult to use because you need to use the 'emms' instruction afterwards to avoid floating point exceptions. However, some older vectorized code does use them. The constraint for their use is 'y': Which compiles into: The above is obviously very inefficient, as gcc goes through the better SSE registers as mandated by the vector ABI. Another thing missing is the emms instruction. You'll need to use yet another inline asm in order to add it where needed. A better option is to avoid these registers if possible. Instead, most modern code should be using the 16-byte SSE registers. The constraint for accessing those is 'x'. (This was also used in func29.) Since the ABI is much more compatible, the overhead is lower: Which when compiled, produces: Many fewer instructions are used in the above, with the bulk of the function just a single SSE instruction. The final register constraint type is defined by the two-character string 'Yz'. This constrains to the first SSE register, %xmm0. This is useful because that register is often mentioned by the ABI. It is the first floating point or vector parameter passed to a function, and also the register used for floating point or vectorized output. Using it is easy: Here we deliberately cause gcc to have to swap the SSE registers around in order to get p2 into %xmm0: X86 Integer Constraints In addition to the machine-specific register constraints, the x86 inline asm in gcc also supports special integer constraints. Most of these are actually not useful for inline asm - being 'leakage' from the RTL pattern-matching used by the optimizer. They still can be used, although this is not recommended as these are not really documented. 
The first of these is relatively useful. The 'I' constraint specifies a constant integer in the range 0-31. It is useful for 32 bit shift instructions: This compiles as you might expect: Similarly, there is the 'J' constraint which specifies a constant integer in the range 0-63 for 64 bit shift instructions: Which compiles into: The above two constraints are helpful in that gcc will error out if the constants are the wrong size. This extra error-checking can prevent bugs. Perhaps less useful is the 'K' constraint. This specifies a signed 8-bit integer constant. On the other hand, the 'L' constraint is obviously something that has escaped from RTL-land. It only allows the two integers 0xFF and 0xFFFF. It basically is a method of pattern-matching certain zero-extending constructs. Since you can't alter the asm string based on register matches, this constraint is barely useful. Of course, it still can be used: Where the above shows how the and instruction may be used for zero-extension: Another not so useful constraint is 'M'. This specifies integer constants from 0-3. This is useful for RTL pattern-matching shifts that may otherwise be better done with an lea instruction. However, again the result is something not so useful for inline asm. You probably shouldn't use it. However, if you do, it may look something like: Which compiles into: The next integer constraint is 'N'. This one specifies an unsigned 8-bit integer constant. It is useful for the I/O instructions 'in' and 'out': The addition of 64-bit support to gcc meant that constraints needed to be added to support it. Since most instructions do not support 64 bit immediates, we need to differentiate from 'i' which will allow such large integers. 
Instead, you can use the 'e' constraint, for a constant 32-bit signed integer: Which when compiled gives: Similarly, there now is also a constraint for 32-bit unsigned integer constants, 'Z': Which we can compile to give: Finally there are two floating point constant constraints that you probably shouldn't use at all. These are used by gcc for optimizations. The first of these, 'G', will match a constant that can be easily generated by the i387 by a single instruction. However, since the resulting operand cannot actually be used by floating point instructions, there is very little point in using it in inline asm: Where in the above we use the same trick as used with the 'X' constraint, and simply convert the operand into a string. The resulting code after compilation is: The other floating point constraint is the equivalent for SSE registers, 'C'. Since there are fewer constants constructible from a single instruction, this is even less useful: Which when compiled produces: X86 Operand Modifiers The use of constraints doesn't fulfil all the possible things you might want to do in an inline assembly statement. The problem is that the operand %0 might not be in quite the form you want. For example, you may want to access a sub-register of %0, or use a different addressing mode that perhaps requires some slightly different formatting than the default. Fortunately, gcc offers operand modifiers that allow making these changes. Operand modifiers work by inserting a symbol between the percent sign and the number for the operand (or its square-bracketed operand name). By using different modifiers, you can get different effects. However, many of the modifiers are really designed for RTL usage, so aren't helpful in inline asm mode. The simplest modifier is one that just outputs the character 'b' (for byte-sized accesses) if the compiler is in AT&T mode. This helps in writing asm strings that can also be parsed in Intel mode, which requires unadorned instructions. 
Use the 'B' symbol to do this: Which compiles into: Note how 'mov' gets changed into 'movb'. This particular operand modifier doesn't really depend on the operand itself. There are other versions of this for the 16-bit and 32-bit cases. 'W' will generate a 'w', and 'L' will create an 'l': Which compile into: Unfortunately, this pattern does not continue into 64 bits. The 'Q' modifier outputs an 'l', rather than the 'q' you might expect. Perhaps this is due to the fact that most instructions cannot take a 64-bit immediate. An example of using it is: Finally, there are two other character-printing modifiers. 'S' creates an 's', and 'T' makes a 't'. These are less useful, corresponding to legacy floating-point use. Of course, since the output is a raw string, you don't actually have to use them for that... and other sillier usages are possible, as is shown below. Giving when compiled: Of course it goes without saying that such tricks should be avoided in real code. Another operand modifier tells gcc that the operand is a label. This is used in the "asm goto" extension. Labels are listed after the clobber list, and can be referred to inside the asm string. Such asms should not have any outputs. They are designed for control flow usage. The problem is that there is no real way to get condition code information into and out of an inline asm statement. The asm goto method avoids this problem by letting the user do the branching inside, and thus all condition usage is encapsulated. Other gcc optimizers can then deal with the jump labels and move them around as needed. The result can be very efficient code. An example using it is: Which compiles into: If this function gets inlined inside an if statement, then the extra statements that set the output will be removed by optimizers. The above modifiers didn't really change the output of the operands. However the following do. The 'a' and 'A' modifiers deal with addresses. They are helpful when allowing compilation in Intel mode. 
They modify the operands in the correct way so that dereferencing is written in the right syntax. An example of their use is: Note how 'a' added brackets around the register name, and 'A' added an asterisk in front. The 'p' modifier is similar. It modifies an operand to be a raw symbol name. For constants, it removes the leading dollar symbol. This is useful because in some contexts a dollar symbol is incorrect syntax. For example, in segment-offset addressing: Notice how %gs:$40 would be wrong. The 'P' modifier does a little more work: it removes things like '@PLT'. This is helpful if you are creating something like a dynamic linker, where you need to do inline asm before relocations have been calculated: Notice how the raw unadorned 'func65' is used. The 'X' modifier is similar to 'P'. It outputs a symbol name with a prefixed dollar symbol. It is useful for symbolic immediates: Which compiles into: Compare with the output from the 'P' modifier. Basically, these symbol modifiers are only useful if you are playing with linker tricks. Usually, the default behavior from the 'm' or 'g' constraint is what you want. Only when you absolutely need some other form of linkage are they needed. Occasionally, you may want to use a differently sized sub-register based on a given constraint. Without operand modifiers there is no way to do this. The given asm string for a register name may be completely different. Compare %rax to %eax, versus %r8 to %r8d. Fortunately, gcc provides ways of accessing all possible registers based on a given constraint. The 'b' operand modifier gives you the 8-bit register related to a given register operand. (For those registers that have two 8-bit sub-registers, it picks the low one, i.e. %al, not %ah, from %eax.) Code using it looks like: The above takes the bottom 8 bits of the 32-bit integer parameter, and sets the corresponding bits of the 64 bit output: There are, of course, other sized sub-registers. 
The 16-bit operand modifier is 'w': Which does a similar thing as the previous function, but to the bottom 16 bits: The operand modifier for 32-bits is 'k': Where the above uses the 64-bit x86 feature that using a 32-bit instruction on a 64-bit register will clear the upper 32 bits. The asm looks like: Finally, if you want the 64-bit version of a register, use the 'q' modifier: Which compiles into: Of course, we still may want to access the other 8-bit "high" sub-register. The 'h' operand modifier allows this: Note how we had to use the 'Q' constraint to make sure that the high sub-reg existed. The resulting code chooses the %edx and %dh registers for this: Somewhat related is the 'H' operand modifier. This allows you to access the high 8-byte part of a 16 byte SSE variable in memory. It adds 8 bytes to the offset in the memory access. This effect can of course be simulated manually. Which compiles into: Another useful feature is that there are operand modifiers that help inline asm statements that deal with constants. The main issue is that in AT&T syntax, you may need to add a suffix to an instruction to tell the assembler what size of instruction to use. In Intel syntax, this suffix should not be there. The other problem is that flexible code may need to accept many possible instruction sizes. The 'z' and 'Z' modifiers help here. They print the correct suffix for a given register size: Notice the 'l' in the 'movl' instruction has been added for us. The 'Z' variant is similar: And in this case compiles identically. The difference between 'z' and 'Z' is that 'Z' is more flexible. It works with floating-point registers as well as the integer ones. Unfortunately, neither modifier will work with constant asm constraints, just register constraints. Sometimes you may want to write accesses to the top of the legacy floating point stack slightly differently. 
The 'y' modifier converts 'st' into 'st(0)': Compare the result with the output from func40(). 'n' is a weird operand modifier. It negates the value of an integer constant. It also suppresses the leading dollar sign: Another strange one is the 's' modifier. It prints out an integer constant, followed by a comma. It does not suppress the leading dollar sign: The next set of modifiers help asm using AVX instructions. The 't' modifier converts a SSE register name into its AVX equivalent: Where if you compile with the -mavx compile-time flag, you get: The reverse is implemented by the 'x' modifier, which converts an AVX name into the SSE version: Also potentially useful for AVX code is the 'd' operand modifier. This is documented to duplicate an operand. Since the fused multiply-add instructions come in three and four operand variants, it would be convenient to be able to support both from the same code-base. Using duplicated operands would help somewhat. Unfortunately, simple usage of 'd' with AVX registers leads to internal compiler errors with the current version of gcc (4.7.1), so this modifier should be avoided for now. Other modifiers to be avoided are those dealing with condition codes. There is no way for inline asm to input a condition code operand type. (They are generated from RTL, however.) So you shouldn't use the 'c', 'C', 'f', 'F', 'D' and 'Y' modifiers. The one remaining modifier is 'O'. It isn't particularly useful. It prints nothing if sun syntax is off (the default). Otherwise it prints 'w', 'l' or 'q', helpful for cmov instructions, which are slightly different in that asm dialect. In addition to operands specified by the constraints, there are a few others. The first of these we have seen before. '%%' will print a single percent sign. This is helpful for writing asm registers explicitly within the output string. The '%%' behavior is the same as that for the printf() function, so it is easy to remember. func10(), above, shows its use. 
The '%*' operand prints an asterisk if you are using AT&T assembly output. Otherwise, nothing is printed. This is helpful for portability: Which compiles into: Again, you probably shouldn't use control flow instructions like that in inline asm, since gcc will not understand them. However... sometimes you might just need to, and tricks like that often help. The '%=' operand prints a unique numeric identifier within the compilation region. This is helpful for constructing a unique symbol from within an inline asm. Perhaps __LINE__, or local symbols should be used instead though. For example: Which compiles to give: Where in this particular case, it expanded to "820". Note that since you can construct a symbol name with a given pattern, this trick may be helpful for debugging. The '%@' operand expands to the thread TLS segment register. In 32-bit mode, this is %gs. In 64-bit mode, %fs. If you are writing low-level thread library code, this may be helpful for portability. Which compiles to give: The '%~' operand expands to 'i' if avx2 is available. Otherwise it expands to 'f'. I don't know why this could be useful. The '%;' operand expands to ':' if gcc has compiled-in support for certain buggy versions of the gnu assembler. Otherwise, it expands to nothing. This apparently is useful for getting segment overrides to work. However, these days, binutils is most likely modern, so you don't have to worry about this. Finally, there are two more operands that are not useful from inline asm. The '%+' operand is designed to add branch-prediction prefixes. However, inline asm can't give the information it needs. The '%&' operand expands to the name of a dynamic tls variable used within the function the inline asm is invoked in. This is used internally within gcc to get thread local variables to work correctly. You shouldn't need to use it in inline asm code. Another interface with assembly language within gcc is register variables. 
GCC has an extension that lets you assign which particular register a variable may use. An example of this is: Where we would like the input parameter p1 to be stored in %r10, before being copied into %eax for output. Unfortunately, reality isn't so kind: GCC ignores our request, and instead optimizes the extra moves away. You might think you could use a volatile specifier on the variable to make loads and stores to it more explicit. This doesn't work either. In fact, there is a warning "-Wvolatile-register-var" for this broken usage. In light of the fact that asm register variables are held captive to the whims of the optimizer, they should perhaps not be used. It is difficult to make sure they will have the behavior you might need. A final trick is that it is possible to insert asm at top-level within a C source code file. Normally, you would need to be inside a function to use inline assembly language. However, we can use the fact that the section attribute is inserted verbatim into the output. Since we can embed newlines, we can put anything we like there. The only constraint is that the input must be a constant C string: The above creates a function called func85() within the section attribute. The 'used' attribute is there to make sure that the variable func85a is not removed. The result is that func85 is inserted into the object code manually: A similar version of this trick allows variables to be put into elf sections that are not '@progbits'. Simply add the section details you want, and then end them with a comment '#' character. The comment will remove the unwanted details gcc adds as a suffix. 
Analyzing Electrical and Thermal Conductance in a Contact Switch Alexandra Foley | July 29, 2013 A contact switch is used to regulate whether or not an electrical current is passing from a power source and into an electrical device. These switches are found in many types of equipment and they are used to control, for example, the power output from a wall socket into a device when it is plugged in; the currents passing across the circuit board of a computer; or the electricity powering a light bulb when the switch is flipped on. Because of their prevalence, simulating contact switches is a fundamental step in designing electronic applications. Since the concept used in their design remains much the same even as more complex components are implemented, a simple model can be used to provide a basic understanding of how a contact switch works. We can find such a simulation in our Model Gallery, and can use this model to explore the mechanical, electrical, and thermal behavior of the two contacting parts of the switch. Contact switch showing temperature distribution and current density within the switch. Contact Switch Concepts: Mechanical Contact and Electro-Thermal Contact The working principle behind a contact switch is simple — two conductive pieces of metal with an electrical voltage difference across them are brought into contact, allowing a current to flow between them. The metallic surfaces of the two components that touch one another are called contacts, and when the connection between the two contacts is broken, the current stops flowing. The current flow between the two contacts contributes to an increase in temperature in the switch due to the Joule heating effect. Anyone who has felt a warm power plug after running a vacuum cleaner, for instance, has experienced this effect. 
The heating of the contact switch can change the material properties of the metal as well as the surface area of contact, and therefore is an important effect to consider when modeling the switch. Letting the temperature become too high can even cause the switch to burn out, meaning the switch is no longer functional. Therefore, it is important to analyze its current-carrying capability in order to prevent this from happening. It is also important to consider that when the two metallic pieces come into contact, the surfaces touching each other experience a mechanical pressure or contact pressure. This mechanical pressure on the contacts can alter the electrical and thermal properties of the material locally in the region surrounding the contacts. Therefore, in order to accurately simulate the current-carrying capability and temperature rise in the switch, it is important to take a more comprehensive approach in the simulation and incorporate the effect of contact pressure to compute the electrical and thermal conductance of the contact surfaces. Let’s find out how you can combine all these concepts together to model a contact switch. Modeling Electrical, Mechanical, and Thermal Conductance in a Contact Switch First, let’s look at the geometry and materials used to build the contact switch. The switch is made of copper, with two fixed cylindrical elements and a central region where the contacts are located. On the end of each contact are plate hooks that enable contact between the two pieces. In the simulation, an electric potential of 1 mV is applied to the left side of the switch, while the right side is grounded. The geometry of the contact switch is shown below: Contact switch geometry, showing two fixed cylindrical bodies and the contacts at the center. The exposed surfaces of the switch lose heat due to their interaction with air via natural convection. 
In the simulation, this is modeled by specifying a heat transfer coefficient and the ambient temperature of the surrounding air (a more ambitious simulation might also include the fluid flow of the air). The model first solves for structural contact to obtain the contact pressure on the contact surfaces. These results are then used to compute the electrical and thermal conductance of the contact’s surfaces in a Joule heating simulation. We can use the simulation to analyze the electric potential distribution in the switch as shown below: Profile of the electric potential distribution in the contact switch. As would be expected, the electrical potential ranges from 0 V (ground) on the right, to the applied 1 mV on the left. Initially, the contact switch is assumed to be at room temperature (293.15 K, 68°F, or 20°C). A potential difference across the two components in the switch creates a current flow, which in turn leads to Joule heating. This causes a rise in temperature in the switch. If you leave the switch on for a while, the temperature distribution in the switch reaches an equilibrium as shown in the figure below: Temperature distribution in the contact switch. In this model, Joule heating causes the temperature in the switch to rise about 5 K above room temperature, although only a small temperature variation is seen within the switch itself. Introducing the effect of electrical and thermal conductance allows us to predict the temperature rise more accurately. The simulation also shows that the switch gets slightly hotter at the contact region. If we look at both the temperature distribution at the contact region, and current density within the switch, we can see the results shown below: Temperature distribution (surface plot) and current density (streamlines) at the contact region. The simulation confirms that where the two hooks are touching at the center of the switch, the electrical current (shown as streamlines) passes from one hook into the other. 
Extending the Simulation Using COMSOL Multiphysics In this simulation, we showed how the contact pressure occurring between the two contacts can affect the electrical and heat conduction behavior in a contact switch. In some cases, it may be beneficial to conduct a more in-depth analysis by taking more physics effects into account. This can be accomplished using the electrical and thermal contact boundary condition that was added to COMSOL version 4.3b. With these new features, we can include additional analysis, such as: - Measuring the degree to which the electrical or thermal conductance varies in relation to a change in pressure on the contact surface, or change in surface roughness and hardness, with the Constriction Conductance feature - Testing how a thin layer of air, dirt, or fluid between the contact surfaces would change the electrical and thermal conductance, using the Gap Conductance feature - Measuring the radiation that could occur across the microscopic gap between the two contacts depending on how hot the temperature of the system became, using the Radiative Conductance feature It would also be possible to add more features to this simulation, such as the temperature-dependency of material properties and thermal expansion as a result of Joule heating. Each of these new features can be added to the model to construct a simulation that is more accurate for the particular environment and application where it will be used. To learn more about the implementation of these features, you can attend the upcoming Simulation of Thermal-Structure Interaction webinar that will be given on August 15th by my co-worker Supratik Datta and Kyle Koppenhoefer of AltaSim Technologies. Before attending the webinar, you can download the contact switch model, Simulation of Multiphysics Contact in a Power Conductor from the Model Gallery to explore how the mechanical, electrical, and thermal interactions in the contact switch were simulated. 
Food groups are created by dividing foods into fundamental categories. The group designations align foods according to their composition and nutritional properties, based on the science of nutrition. The United States Department of Agriculture classifies the food groups; according to the USDA, the basic five are grains, fruits, vegetables, milk and meat. Carbohydrate foods contain sugars and starches that provide energy in the form of glucose, the fuel the body prefers for the brain, central nervous system and red blood cells. The Institute of Medicine’s Dietary Reference Intakes for Macronutrients list the grain, vegetable and fruit food groups as sources of carbohydrates. Grains include foods such as whole wheat, rolled oats, barley, rye and brown rice. Corn and potatoes are vegetables that contain carbohydrate in the form of starch, while pasta and breads are starchy grain products. Fruit sources of carbohydrate include apples, grapefruit, grapes, peaches and oranges. The Institute of Medicine advises that about 55 percent of the daily diet should consist of carbohydrates. Plain sugar, candy bars and carbonated sodas constitute another source of carbohydrates under discretionary calories, but the publication “Health” counsels against using these sources due to the lack of other nutrients in the foods. Proteins supply energy, but the primary role of protein in the diet is healing injured tissue and supporting growth and development in the body. “Alive: The Canadian Journal of Health and Nutrition” describes the other essential functions of protein in maintaining the immune system and hormonal balance. Protein foods fall under the meat, milk and vegetable food groups. The meat food group contains both animal proteins such as poultry, meat, eggs and fish, and plant proteins such as nuts, seeds, beans and legumes. The milk food group contains the protein foods milk, cheese and yogurt.
The vegetable foods with protein content include such items as peas, tofu, soybeans and lentils. The Institute of Medicine recommends that 20 percent of the diet consist of protein. Three forms of fat -- saturated, monounsaturated and polyunsaturated -- supply fatty acids such as omega-3 fatty acid and omega-6 fatty acid, which are required by over half the cells in the body, according to "Consumer Medical Journal." Omega-3 fatty acids are needed for neurological growth and development. Omega-6 fatty acids form the structural membranes in cells and are required for normal skin function. Olive, avocado, canola and peanut oils contain monounsaturated fat; fish, walnut, safflower and corn oils contain polyunsaturated fats. The Institute of Medicine advises favoring foods with monounsaturated and polyunsaturated fats. The modern diet should contain 30 percent fat of the total caloric intake.
Vitamins and Minerals
The USDA’s Dietary Guidelines for Americans 2010 explains the sources of the micronutrients in the food groups. Some micronutrients, such as vitamin E, exist in all but the milk food group. Sources of vitamin E include fortified cereals in the grain group, avocados in the fruit group, carrot juice in the vegetable group and sardines in the meat group. The other vitamins and minerals range across the food groups. The complete classification of all the vitamins and minerals can be found in the USDA’s Dietary Guidelines for Americans 2010.
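The percentage recommendations above (about 55 percent carbohydrate, 20 percent protein, 30 percent fat) can be turned into daily gram targets using the standard energy densities of 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat. A sketch for a hypothetical 2,000-calorie day follows; note the cited percentages come from different recommendations and sum to slightly over 100.

```python
# Convert macronutrient percentage targets into daily gram amounts.
# Energy densities: carbohydrate 4 kcal/g, protein 4 kcal/g, fat 9 kcal/g.
daily_kcal = 2000  # hypothetical daily caloric intake

targets = {          # name: (share of calories, kcal per gram)
    "carbohydrate": (0.55, 4),
    "protein":      (0.20, 4),
    "fat":          (0.30, 9),
}

for name, (share, kcal_per_g) in targets.items():
    grams = daily_kcal * share / kcal_per_g
    print(f"{name}: {grams:.0f} g/day")
```

At 2,000 kcal this works out to roughly 275 g of carbohydrate, 100 g of protein and about 67 g of fat per day.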
Panasonic said Monday it has developed a new system for artificial photosynthesis that can remove carbon dioxide from the air almost as well as plants do, as part of the company's push to join an industry-wide trend toward greener tech. The company said its system uses nitride semiconductors, which are widely used in LEDs (light-emitting diodes), and a metal catalyst to convert carbon dioxide and water to formic acid, which is widely used in dyes, leather production and as a preservative. Carbon dioxide is a major pollutant and considered to be a main cause of the "greenhouse effect," which most climate scientists believe causes global warming. Panasonic has struggled with its traditional electronics business and has made eco-friendly products and practices the key element in its turnaround plan. The company is hoping to leverage its large rechargeable battery and solar businesses, while joining the industry in embracing technologies that are friendlier to the environment. The issue is an important one with customers, as demonstrated by the outcry earlier this month when Apple was forced to rejoin a green standards program after clients complained about its earlier withdrawal. As the name implies, artificial photosynthesis seeks to imitate the chemical conversion performed in green plants, which use sunlight to power a chemical reaction that converts water and carbon dioxide into carbohydrates like sugar. Theoretically the process is superior to current solar applications, which produce electricity that is inefficient to store, but current implementations are costly and degrade quickly during use. Panasonic said the system can convert carbon dioxide and water to formic acid with an efficiency of 0.2 percent in laboratory conditions, which is similar to the conversion rate for green plants. The efficiency refers to the portion of the incoming light energy stored in materials produced during the process.
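The 0.2 percent figure can be put in perspective with a rough energy calculation. Only the efficiency comes from the article; the irradiance, panel area and sun-hours below are assumed, typical-ballpark numbers.

```python
# Rough estimate of chemical energy stored by an artificial-photosynthesis
# panel at 0.2% light-to-chemical efficiency. Inputs other than the
# efficiency are hypothetical.
efficiency = 0.002          # 0.2% conversion efficiency (from the article)
irradiance = 1000.0         # incident sunlight, W/m^2 (typical peak, assumed)
area = 1.0                  # panel area, m^2 (assumed)
hours = 5.0                 # equivalent full-sun hours per day (assumed)

stored_wh = efficiency * irradiance * area * hours   # watt-hours per day
print(f"~{stored_wh:.1f} Wh of chemical energy stored per m^2 per day")
```

Under these assumptions a square metre stores only about 10 Wh of chemical energy per day, which illustrates why the article calls current implementations costly relative to conventional solar cells.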
The company aims to eventually employ the system in industrial applications that produce high quantities of carbon dioxide, such as power plants and incinerators. Nitride semiconductors have long been used in LEDs and lasers, but in recent years have also been used in photovoltaic applications that convert light to electricity. Panasonic said it uses a thin-film version of the chips for the photosynthesis application. The Osaka-based company said it holds 18 patents in Japan and 11 overseas relating to the system, which it will present at a solar conference in Pasadena, California.
Herbivores and Pathogens: Animals and microorganisms that feed on duckweeds. Duckweeds serve as a food source for a wide variety of animals and microorganisms. These range from familiar birds and fish that may eat duckweeds as part of their diet to little-known insects and microorganisms with a specialized dependency on these plants. Aquaculture systems have been designed in which domesticated fowl or fish are raised on duckweed grown in managed ponds. In other cases, herbivorous fish or insects are used to control unwanted growth of duckweeds. This page provides information on just a few of the organisms that feed on duckweeds. Right: Photo of ducks at the Phoenix Zoo, courtesy of Gayla Chandler. Insects That Feed on Duckweeds Two small insects are so commonly associated with Lemnaceae that their names reflect this fact: Lemnaphila scotlandae Cresson, the duckweed fly, grows primarily on duckweeds and is one of the few insects known to attack an aquatic plant. The eggs are usually yellowish (0.3 mm long by 0.08 mm wide), with parallel ridges running lengthwise, and are usually laid singly on the edges of the fronds. The incubation period is about 2 days, and the white larva digs down and feeds on the mesophyll tissue, tearing it apart with mouth hooks and then ingesting the macerated tissue. After completely clearing out one frond, the larva transfers to an adjacent frond to continue feeding. The larvae can also swim to other duckweed plants separated by open water. The larval stage comprises three instars and requires about 10 days. Two black-tipped, cone-shaped structures on the posterior end of the abdomen are thrust into the lower epidermis of the frond prior to pupation. The pupae are amber in color and about 1.5 mm long. The pupal stage lasts about 4 days. The adult emerges by inflating a specialized bladder-like structure that ruptures the anterior end of the puparium. The emerging adult crawls through this opening and forces apart the epidermal layers of the frond to exit.
Feeding begins very soon after, but mating and egg laying are delayed until the second day. Although these insects can fly, their flights are usually low hops of a few inches. Adults probably live only about 3 days. This fly has since been reported in Illinois, Michigan, Ohio, and Florida, but likely occurs throughout the eastern US. The weevil Tanysphyrus lemnae Paykull is one of the most common and widespread duckweed herbivores. The female lays her eggs one by one directly into the frond through a hole she chews into it. The eggs are inserted through the top surface of the frond and generally fill the space between the upper and lower surfaces. The female then plugs the hole, probably using feces. Eggs hatch in about a week into nearly transparent larvae about 0.5 mm in length. The newly hatched larvae immediately begin to feed. Each larva eats most of the frond that contained the egg within the first 12 hours. If other fronds are connected to the first, the larva will burrow directly from one to the next, and if not, will swim from one to the next. The larvae consume the green contents of the fronds, leaving most of the epidermis intact. As the larva grows, it takes on a translucent beige color with a yellow-brown head and lengthens to about 3 mm. Pupation occurs along the shoreline in the soil or under stranded duckweed. The total generation time is about 16–20 days. Adults feed by chewing on the surfaces of the fronds, causing obvious round perforations. Aphid, Rhopalosiphum nymphaeae (Linnaeus) In addition to water lilies, this aphid feeds on many other species. It is also known as the “reddish-brown plum aphid”, a name derived from its association with fruit trees, particularly during winter. This insect is widely distributed (cosmopolitan) and has long been known as a pest of cultivated aquatic plants. Aphids suck sap from plant leaves, but can also cause damage by transmitting plant viruses.
The waterlily aphid is extremely destructive in aquatic gardens and nurseries and is known to transmit at least five plant viruses. Winged adult females migrate from aquatic habitats to trees in late fall and lay their eggs on the trees. The eggs overwinter, and, after hatching, subsequent generations develop during spring and early summer on the fruit trees. After colonizing aquatic sites, the aphids reproduce rapidly: the developmental period from the birth of the first instar to the adult stage ranges from 7 to 10 days, depending upon temperature (21–27°C). Each female produces up to 50 nymphs, at a rate of two to four nymphs per day. The nymphs normally pass through five instars during the course of their development, although they will occasionally produce a sixth instar. Photos above: A, Adult female and nymphs of the waterlily aphid. B, Waterlily aphid colony on the underside of a waterlettuce leaf. Pythium myriotylum, a root and stem rot fungus. Rejmankova et al. isolated this fungus from dying duckweeds growing in affected Louisiana lagoons and demonstrated its pathogenicity towards duckweeds in cultivation tanks and under conditions in lagoons. The quantity of duckweeds killed by the fungus increased exponentially, and a whole stand would die in several days. Of six duckweed species tested, Lemna gibba, L. minor, and Spirodela polyrrhiza were the most susceptible to the fungal infection; another Lemna species was more resistant, while L. aequinoctialis and punctata never showed symptoms. Infection was favored at an optimum temperature. Pythium myriotylum is one of the most common species of Pythium found in the soil in damp climates, often causing damping-off of seedlings and root rot, so this species cannot be said to have any special affinity for duckweeds. These fungi produce masses of microscopic, motile zoospores (see drawing) that can swim short distances to attack wet surfaces of plants. The fungi produce enzymes that break down the pectin in plant cell walls. Pectin breakdown results in a soft, watery rot.
Pythium can survive indefinitely in the soil as a saprophyte, feeding on soil organic matter. In the soil it can also form thick-walled sexual oospores (see drawing). Oospores are the primary overwintering form. Pythium species are not vigorous competitors with other microorganisms in the soil. The fungi are disseminated in surface-drainage water and in infested soil on farm equipment, tools, and the feet of humans and animals. Above: Drawings of Pythium, (a) oogonia fertilized with monoclinous antheridia; (b) inflated sporangium (vesicle) containing immature zoospores; (c) typical sporangium; (d) two zoospores. Drawing by L. Gray. Reference, see Univ. of Ill. RPD No. 922, 1989. USDA/ARS (2002) Insects and Other Arthropods That Feed on Aquatic and Wetland Plants. Technical Bulletin 1870, October 2002. http://www.ars.usda.gov/is/np/aquaticweeds/aquaticweeds.pdf Mansor, M. and Buckingham, G.R. (1989) Laboratory host range studies with a leaf-mining duckweed shore fly. Journal of Aquatic Plant Management 27: 115-118. Rejmankova, E., Blackwell, M. and Culley, D.D. (1986) Dynamics of fungal infection in duckweeds (Lemnaceae). Veroeffentlichungen des Geobotanischen Institutes der Eidgenoessische Technische Hochschule Stiftung Ruebel in Zuerich 0(87): 178-189. Scotland, M.B. (1940) Review and summary of insects associated with Lemna minor. Journal of the New York Entomological Society 48: 319-333. Univ. of Illinois, Dept. of Crop Sci. (1989) "Root and stem rots of garden beans." Report on Plant Disease, RPD No. 922, May 1989. http://web.aces.uiuc.edu/vista/pdf_pubs/922.PDF Wagner, D.T. (1969) Monocentric holocarpic fungus in Lemna minor L. [Ressia amoeboides] Nova Hedwigia 8(1): 203-208. Revised: December 10, 2005
Primary data and secondary data are two types of data, each with pros and cons, and each requiring different kinds of skills and resources. What does each and every research project need to get results? Data – or information – to help answer questions, understand a specific issue or test a hypothesis. Researchers in the health and social sciences can obtain their data by getting it directly from the subjects they’re interested in. The data they collect is called primary data. Another type of data that may help researchers is data that has already been gathered by someone else. This is called secondary data. What are the advantages of using these two types of data? Which tends to take longer to process, and which is more expensive? This column will help to explain the differences between primary and secondary data. An advantage of using primary data is that researchers are collecting information for the specific purposes of their study. In essence, the questions the researchers ask are tailored to elicit the data that will help them with their study. Researchers collect the data themselves, using surveys, interviews and direct observations. In the field of workplace health research, for example, direct observations may involve a researcher watching people at work. The researcher could count and code the number of times she sees practices or behaviours relevant to her interest – e.g. instances of improper lifting posture or the number of hostile or disrespectful interactions workers engage in with clients and customers over a period of time. To take another example, let’s say a research team wants to find out about workers’ experiences in return to work after a work-related injury. Part of the research may involve interviewing workers by telephone about how long they were off work and about their experiences with the return-to-work process.
The workers’ answers–considered primary data–will provide the researchers with specific information about the return-to-work process; e.g. they may learn about the frequency of work accommodation offers, and the reasons some workers refused such offers. There are several types of secondary data. They can include information from the national population census and other government information collected by Statistics Canada. One type of secondary data that’s used increasingly is administrative data. This term refers to data that is collected routinely as part of the day-to-day operations of an organization, institution or agency. There are any number of examples: motor vehicle registrations, hospital intake and discharge records, workers’ compensation claims records, and more. Compared to primary data, secondary data tends to be readily available and inexpensive to obtain. In addition, administrative data tends to have large samples, because the data collection is comprehensive and routine. What’s more, administrative data (and many types of secondary data) are collected over a long period. That allows researchers to detect change over time. Going back to the return-to-work study mentioned above, the researchers could also examine secondary data in addition to the information provided by their primary data (i.e. survey results). They could look at workers’ compensation lost-time claims data to determine the amount of time workers were receiving wage replacement benefits. With a combination of these two data sources, the researchers may be able to determine which factors predict a shorter work absence among injured workers. This information could then help improve return to work for other injured workers. The type of data researchers choose can depend on many things including the research question, their budget, their skills and available resources. Based on these and other factors, they may choose to use primary data, secondary data–or both. 
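Combining the two sources as described above amounts to joining the primary (survey) records with the secondary (claims) records on a shared worker identifier. A minimal sketch with made-up records follows; all field names and values are hypothetical, not drawn from any actual IWH dataset.

```python
# Join hypothetical primary (survey) and secondary (claims) records on a
# shared worker ID, to relate reported experience to benefit duration.
survey = [   # primary data: collected by the researchers (made up)
    {"worker_id": 1, "accommodation_offered": True},
    {"worker_id": 2, "accommodation_offered": False},
]
claims = [   # secondary/administrative data (made up)
    {"worker_id": 1, "benefit_days": 12},
    {"worker_id": 2, "benefit_days": 40},
]

# Index the administrative records by ID, then merge field-by-field.
claims_by_id = {rec["worker_id"]: rec for rec in claims}
combined = [
    {**s, "benefit_days": claims_by_id[s["worker_id"]]["benefit_days"]}
    for s in survey
    if s["worker_id"] in claims_by_id
]
for row in combined:
    print(row)
```

The merged rows pair each worker's self-reported experience with the objectively recorded benefit duration, which is exactly the kind of linked record a researcher would analyze to find predictors of shorter work absence.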
Source: At Work, Issue 82, Fall 2015: Institute for Work & Health, Toronto This column updates a previous column describing the same term, originally published in 2008. Use of primary data in Institute for Work & Health research: - Show and tell: Visual symbols inform vulnerable workers about MSDs - Workers with arthritis struggle to incorporate physical activity: study Use of secondary data in IWH research: - Economic crisis taking toll on worker health - Premium rates, work demands play role in whether injuries involve time loss Use of both types of data in IWH research:
A group of wading birds in the family Jacanidae, usually having long toes and claws and found throughout the world. Origin: Brazilian jaçanã, from any of several wading birds belonging to the genus Jacana and several allied genera, all of which have spurs on the wings. They are able to run about over floating water weeds by means of their very long, spreading toes. Called also surgeon bird Origin: [Cf. Sp. jacania.] The jaçanas are a group of tropical waders in the family Jacanidae. They are found worldwide within the tropical zone. See Etymology below for pronunciation. Eight species of jaçana are known from six genera. The fossil record of this family is restricted to a recent fossil of the Wattled Jaçana from Brazil and a Pliocene fossil of an extinct species, Jacana farrandi, from Florida. A fossil from Miocene rocks in the Czech Republic was assigned to this family, but more recent analysis disputes the placement and moves the species to the Coraciidae. They are identifiable by their huge feet and claws which enable them to walk on floating vegetation in the shallow lakes that are their preferred habitat. They have sharp bills and rounded wings, and many species also have wattles on their foreheads. The females are larger than the males; the latter, as in some other wader families like the phalaropes, take responsibility for incubation, and some species are polyandrous. However, adults of both sexes look identical, as with most shorebirds. They construct relatively flimsy nests on floating vegetation, and lay eggs with dark irregular lines on their shells, providing camouflage amongst water weeds. Chambers 20th Century Dictionary ja-kā′na, n. a tropical bird, allied to the rails, and frequenting swamps. [Brazilian.]
A Functional Biogeography of the Antarctic Biogeography, Function and a Basis for Securing Antarctic Biodiversity Antarctica and its surrounding, sub-Antarctic, islands are among the world’s most spectacular and least disturbed environments. The sub-Antarctic islands are staggeringly beautiful. Their habitats range from lush tussock grasslands in the lowlands to polar desert and glaciers in the uplands. Some have little vegetation at all. Others, in the more northerly reaches, may have lowland woody habitats. All of the islands are home to huge populations of seabirds and seals. They include albatrosses, penguins, elephant seals, and several species of fur seals. The islands’ vegetation includes extraordinary flowering plant species such as Ross Lillies and the Kerguelen Cabbage, but is dominated by groups such as the mosses and lichens. Terrestrial life includes just a few endemic birds, such as sheathbills, ducks and parrots, while insect life abounds. Many of these insects are curious, having reduced wings or no wings at all.
The nerve plant (Fittonia verschaffeltii), also known as the mosaic plant for the unique veined appearance of its leaves, is a tropical evergreen native to rainforests throughout Central and South America. Elsewhere, nerve plants are commonly used in many different types of plant arrangements for the beauty of their two-inch-long olive green leaves, which are veined with red, white or silver. Nerve plants also make attractive houseplants if they are cared for properly. Although nerve plants require more attention than other plants, caring for a nerve plant is easy if you remember that it is indigenous to a tropical location. Rainforests are very humid, and nerve plants grow best in high humidity at temperatures above 70 degrees. For this reason some people choose to grow nerve plants in terrariums. A terrarium that is misted regularly is a superb growing environment for the nerve plant. In its native environment the nerve plant is a ground-hugging plant that grows in the rich soil of the rainforest, spreading rapidly to cover the rainforest floor. If the nerve plant is to be grown successfully as a houseplant, it needs to be planted in rich soil with a 10-10-10 fertilizer applied monthly. The soil needs to be kept evenly moist but not soaked. If nerve plants receive too much water their leaves will start to yellow and eventually drop off; most tropical houseplants will yellow if they are overwatered. The nerve plant prefers medium or filtered light. Direct sunlight will burn the leaves of the plant and dry out its soil. Nerve plants that are properly cared for are rapid growers and will occasionally produce insignificant white or red flowers. Some gardeners pinch off these flowers and prune the leaves of the nerve plant to keep the plant bushy and thriving. Although heavy pruning isn't necessary, light pruning is good for the nerve plant.
Pests aren't normally a problem for the nerve plant, although ill-cared-for plants will sometimes become hosts for mealybugs or spider mites. These insect pests can kill houseplants if they aren't exterminated. To exterminate both mealybugs and spider mites, spray the tops and undersides of leaves with insecticidal soap. If mealybugs and spider mites are left untreated, they will not only be detrimental to the nerve plant but will travel to other houseplants. Nerve plants can be propagated by cuttings, much like other tropical plants. They can also be propagated by division in the hands of an experienced gardener.
A fundamental question for neuroscientists is how the activity in neuronal circuits generates behaviour. The nematode worm Caenorhabditis elegans is an excellent model organism for studying the neural basis of behaviour, because it is small, transparent, and has a simple nervous system consisting of only 302 neurons. Typically, an organic glue is used to permanently immobilize the worm on an agar plate, and specific cells of the nervous system are stimulated with microelectrodes. This method has its limitations, however. Because the worm is restrained, its muscles and nervous system cannot function properly, and the organism can therefore generate only a very limited number of behaviours. Furthermore, it is unclear whether the glue is toxic, or whether it interferes with the function of nerve cells. Researchers from the Howard Hughes Medical Institute now appear to have overcome some of these difficulties. In the journal Nature Methods, Nikos Chronis and his colleagues report that they have developed microfluidics chips for investigating the relationship between neuronal activity and behaviour of the nematode worm. The movements of nematodes are generated by sequential muscular contractions that produce sinusoidal waves travelling along the length of the body. Forward movements occur as a result of waves travelling from the front to the back of the worm’s body, while backward movements are the result of forward-moving waves. The microfluidics devices developed by Chronis et al are made of silicone elastomer attached to a glass coverslip, and are microfabricated using a technique called soft lithography. Two types of chip were made, each for a different set of experiments. Each chip contains a worm trap 1.2 mm long and 70 microns (thousandths of a millimeter) wide. The chips are only slightly larger than the worm itself – a young adult nematode is 1 mm long and 40 microns wide.
The chips therefore constitute a well-controlled microscopic environment within which the nematode can be manipulated – the worms are trapped, but at the same time can move freely in all directions. The researchers created transgenic worms for their experiments, which synthesize, in two specific cell types, a protein that fluoresces when it binds calcium. Because an increase in calcium ion concentration is an indicator of neuronal activity, the responses of those cells, either in response to a stimulus or correlated to a behaviour, can be visualized using fluorescence microscopy. For one set of experiments (using the “behaviour chip”), the worms were loaded into the trap through a hole at one end. In the trap, the worms were slightly compressed at the thickest region of their body, so that their vertical movements were restricted. This compression kept the cell bodies of the neurons being investigated within the focal plane of the microscope. First, the researchers imaged the changes in calcium ion concentration in cells called AVA interneurons. These cells regulate the backward movements of the worm, and are believed to elicit an escape response. They receive inputs from sensory neurons that are sensitive to mechanical pressure and chemicals in the surroundings, and send outputs to motor neurons that activate muscles in the worm’s body wall. It was found that when the worms switched from a backward- to a forward-moving wave (that is, from moving forwards to moving backwards) there was an increase in calcium ion concentration in the AVA interneurons. In all cases, the timing of interneuron activation corresponded exactly to the initiation and duration of the forward-travelling body wave. In the second set of experiments, the “olfactory chip” was used. This chip integrates a worm trap with a microfluidics system that can deliver streams of solutions.
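Calcium-indicator signals like those recorded from the AVA interneurons are conventionally reported as a relative fluorescence change, ΔF/F0, where F0 is a pre-stimulus baseline. A minimal sketch on a synthetic trace follows; the numbers are invented, not data from the study.

```python
# Compute dF/F0 for a fluorescence trace: the fractional change relative
# to a baseline F0, the standard readout in calcium-indicator imaging.
def delta_f_over_f(trace, baseline_frames=3):
    # Average the first few frames as the pre-stimulus baseline F0.
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# Synthetic trace: flat baseline, a transient rise, return toward baseline.
trace = [100.0, 101.0, 99.0, 140.0, 130.0, 110.0, 102.0]
dff = delta_f_over_f(trace)
print([round(x, 3) for x in dff])
```

Here the baseline averages to 100, so the peak frame of 140 reads out as a 40 percent (0.4) fluorescence increase, the kind of transient that would be aligned against the timing of the worm's body wave.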
The end of the trap was designed to match precisely the shape and size of the worm’s head, so that the end of the nose protrudes into a microchannel through which the solutions flowed (above). The olfactory chip was used to investigate the responses of ASH sensory neurons. These cells are polymodal, i.e. they are sensitive to different kinds of stimuli – chemical, mechanical and osmotic. (An osmotic stimulus is the pressure produced by the different concentrations, on either side of the membrane, of something dissolved in water.) Using the olfactory chip, the researchers exposed the worms’ noses to streams of highly osmotic solutions (solutions containing high concentrations of chemicals). It had previously been shown that ASH neurons are activated in response to the onset of an osmotic stimulus. The authors found that the cells also respond to the offset of an osmotic stimulus, with a transient increase in calcium ion concentration. The authors suggest that the response of ASH neurons to the offset of osmotic stimuli had previously been masked because of the way the worms are immobilized in agar. They also say that the chips can easily be modified to investigate other behaviours. For example, chips containing moveable parts could be used to look at worms’ responses to mechanical stimuli; chips with heated elements could be used to investigate thermoreception; and a combination of the behavioural and olfactory chips could be developed to investigate more complex stimuli. This study demonstrates the usefulness of microfluidics devices for the manipulation of small organisms like the nematode worm. The anatomy of the nematode nervous system is very well characterized; using such devices, researchers will be able to superimpose a functional map onto the anatomical one. Chronis, N., et al. (2007). Microfluidics for in vivo imaging of neuronal and behavioral activity in Caenorhabditis elegans. Nat. Methods, doi: 10.1038/nmeth1075.
Advanced digital electric meters, commonly called “smart meters,” gather data about electricity usage and periodically transmit that data across an advanced metering wireless network. During the periods when they broadcast, smart meters emit a type of radiation known as “non-ionizing.” Ionizing and non-ionizing radiation differ in fundamental ways. Ionizing radiation – Ionizing radiation reaches us through medical scans and x-rays, through cosmic rays when we travel on airplanes and through radon in the soil. Ionizing radiation is also released by radioactive fuel used in nuclear power plants. Scientific studies have shown that exposure to ionizing radiation causes short-term and long-term health problems in people. Ionizing radiation has a short wavelength and high frequency. Smart electric meters do NOT produce ionizing radiation. Non-ionizing radiation – Non-ionizing radiation includes extremely low frequency (ELF) waves produced by electrical equipment, ultraviolet light from lasers, and other types of electromagnetic fields. Compared to ionizing radiation, non-ionizing radiation has a longer wavelength and its health effects are less well understood. Radiofrequency (RF) radiation – "Radiofrequency" (RF) is electromagnetic energy that falls within the frequency range of 3 kHz–300 GHz. Smart meters emit radiofrequency radiation, a type of non-ionizing radiation, when they transmit information to a base station at an electric utility. According to the Occupational Safety & Health Administration (OSHA) of the U.S. Department of Labor, RF radiation is absorbed throughout the human body and can cause damage by overheating cells. Research is still pending to determine whether exposure to RF radiation causes other types of health effects beyond heat (thermal) impacts. Cell phones also emit RF radiation, and because they are widely used and held close to the brain, public health research often focuses on them.
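Exposure from an intermittent transmitter like a smart meter falls off rapidly with distance. A common far-field estimate treats the antenna as a point source radiating uniformly, so the power density is S = P/(4πr²). The sketch below uses a hypothetical 1 W transmit power purely to show the inverse-square falloff; it ignores antenna gain, duty cycle and near-field effects.

```python
import math

def power_density(p_watts, r_meters):
    """Far-field point-source estimate of RF power density, W/m^2:
    S = P / (4 * pi * r^2)."""
    return p_watts / (4.0 * math.pi * r_meters**2)

# A transmitter at 1 W (hypothetical), evaluated at several distances:
for r in (0.3, 1.0, 3.0, 10.0):
    s = power_density(1.0, r)
    print(f"r = {r:4.1f} m -> S = {s:.5f} W/m^2")
```

Doubling the distance cuts the power density to a quarter, which is why time spent very close to a broadcasting meter dominates any exposure estimate.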
Exposure guidelines for radio waves To protect against health hazards from exposure to RF electromagnetic fields, the Federal Communications Commission (FCC) has adopted RF limits. FCC RF limits are based on input from organizations that include the Institute of Electrical and Electronics Engineers, Inc. (IEEE), which issued recommended guidelines for RF exposure in its standard IEEE C95.1. (The methodology used to establish this standard is explained in that document.) Two other professional organizations, the National Council on Radiation Protection and Measurements (NCRP) and the American National Standards Institute (ANSI), have also recommended maximum RF exposure levels. What is the health risk from smart meters? Based on studies of cell phone usage (which delivers radio waves close to the head), the World Health Organization’s International Agency for Research on Cancer (IARC) classified RF electromagnetic fields as “possibly carcinogenic to humans based on an increased risk for glioma, a malignant type of brain cancer, associated with wireless phone use.” However, the U.S. Food and Drug Administration (FDA) concluded that the evidence to date shows no increased health risk. The non-profit California Council on Science and Technology (CCST) funded a study that looked specifically at smart meter RF emissions. The CCST was commissioned by the California Assembly to perform an “independent, science-based study … [that] would help policy makers and the general public resolve the debate over whether smart meters present a significant risk of adverse health effects.” The CCST study, published in 2011, found “no clear evidence” of harmful effects from smart meters’ RF emissions. It concluded that “no additional standards are needed to protect the public from smart meters or other common household electronic devices.” As with other studies focused on RF emissions, the CCST recommended additional research.
Engineers generally believe that if smart meters are manufactured, installed, and operated in compliance with guidelines from the Federal Communications Commission (FCC), they are safe. Risk increases in the following situations:
- If someone spends frequent, extended periods of time very close to a smart meter, causing prolonged and heightened exposure. Children are likely to be especially vulnerable.
- If a smart meter malfunctions in such a way that increases the frequency or duration of its duty (transmission) cycle.
- If a smart meter is not manufactured or installed properly to conform to FCC guidelines.

Individuals who are concerned about excessive exposure to broadcasting smart meters may consider shielding the device to reduce radiation. However, partial shielding is likely to reflect the radio waves back in unanticipated directions. Shielding only the street side of a meter mounted on a building does not prevent radio waves from entering the building.
A Math Lesson on Probability
Mary Ann Polowy
Grade Level: 1 – 2

Objective: The student will explore probability, practice addition, develop number sense, and use probability terms while playing a game with color tiles.

Time: This activity is designed to take approximately one class period, 45 minutes.

Materials:
- Small bowls (or paper plates) large enough to hold 12 color tiles, one bowl for each pair of students.
- Color tiles, 12 per pair of students.
- Dice, one die per pair of students.
- One sheet of chart paper titled “Rolls to Empty the Bowl” and listing the numbers 2 – 12 down the left side.

Anticipatory Set: Read a story from the book Math Fun: Test Your Luck. Talk about probability and the probability words used in the story. Write a list of probability words on the chalkboard, such as chance, likely, unlikely, probably, possible, perhaps, maybe, and could be. Tell the students they get to play a probability game today, and that after they play a few times, you’ll look at what’s likely or probable to happen.

Play an example round with one of the students to demonstrate the game. Ask the student to predict the number of rolls it will take to empty the bowl. Post the class chart paper with the heading “Rolls to Empty the Bowl” in a location where students can add to it. Divide the students into pairs. Each pair gets 1 die, 12 color tiles, a bowl, and a paper and pencil for recording. To play, they will roll the die, note the number that comes up, and take out that many color tiles. While one person rolls to see how many rolls it takes to empty the bowl, the other person will record each of the rolls on a sheet of paper. They will continue with one rolling and the other recording until the bowl is empty. For example, one student rolls a 3, then a 4, then a 5. The other student writes 3 + 4 + 5 = 12. Rolls to empty the bowl = 3. Explain that it is not necessary to go out exactly in this game.
For example, if there are two tiles in the bowl and a five is rolled, you may remove both tiles. Have each pair play the game five times. After each game, have the pair record the number of rolls it took to empty the bowl on the class chart paper entitled “Rolls to Empty the Bowl.” As a class, examine the chart to identify the most likely number of rolls it took to empty the bowl. Also identify the most and fewest rolls it took for anyone to empty the bowl. Ask them if they notice anything else. Discuss the fewest and the greatest number of rolls it could take to empty the bowl and how likely or unlikely that would be. What number of rolls could never empty the bowl? Why? Point to words on the probability words chart as they are used in discussion.

Extension: Play the game using 20 tiles. Have the students use subtraction skills instead of addition by subtracting the numbers rolled on the die from the 12 tiles.

Tank, Bonnie. Math By All Means: Probability, Grades 1 – 2 (A Marilyn Burns Replacement Unit). Math Solutions Publications, New York, N.Y., 1996.
Judy McCray, Ridgeway Elementary, Columbia, MO
Elting, Mary and Wyler, Rose. Math Fun: Test Your Luck. Simon and Schuster, New York, N.Y., 1992.
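For teachers who want to preview what the class chart is likely to show, the game is easy to simulate. The sketch below is an illustration only (it is not part of the lesson plan); it assumes a fair six-sided die and a 12-tile bowl:

```python
import random
from collections import Counter

def rolls_to_empty(tiles: int = 12) -> int:
    """Play one game: roll a die and remove that many tiles until the bowl is empty."""
    rolls = 0
    while tiles > 0:
        # Going below zero is fine: "it is not necessary to go out exactly"
        tiles -= random.randint(1, 6)
        rolls += 1
    return rolls

random.seed(1)
games = Counter(rolls_to_empty() for _ in range(10_000))
for n in sorted(games):
    print(f"{n} rolls: {games[n] / 10_000:.1%}")
```

Across many simulated games, 3 and 4 rolls dominate the chart; one roll can never empty the bowl (the largest roll is only 6), and more than 12 rolls is impossible. These are exactly the likely and impossible outcomes the class discussion asks students to notice.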
Did you know?
1. Antibiotic resistance is one of the world’s most pressing public health threats.
2. Antibiotics are the most important tool we have to combat life-threatening bacterial diseases, but antibiotics can have side effects.
3. Antibiotic overuse increases the development of drug-resistant germs.
4. Patients, healthcare providers, hospital administrators, and policy makers must work together to employ effective strategies for improving antibiotic use – ultimately improving medical care and saving lives.

Antibiotic stewardship helps improve patient care and shorten hospital stays, thus benefiting patients as well as hospitals. In a study conducted at The Johns Hopkins Hospital, it was demonstrated that guidelines for management of community-acquired pneumonia could promote the use of shorter courses of therapy, saving money and promoting patient safety. According to a University of Maryland study, implementation of one antibiotic stewardship program saved a total of $17 million over 8 years at one institution. After the program was discontinued, antibiotic costs increased over $1 million in the first year (an increase of 23 percent) and continued to increase the following year.

The way we use antibiotics today, or in one patient, directly impacts how effective they will be tomorrow, or in another patient; they are a shared resource. Antibiotic resistance is not just a problem for the person with the infection. Some resistant bacteria have the potential to spread to others – promoting antibiotic-resistant infections. Targeting certain infections may decrease antibiotic use. For example, determining when and how to treat patients for urinary tract infections, the second most common bacterial infection leading to hospitalization, can lead to improved patient outcomes and cost savings. Since it will be many years before new antibiotics are available to treat some resistant infections, we need to improve the use of antibiotics that are currently available by:
1. Ensuring all orders have dose, duration, and indications;
2. Getting cultures before starting antibiotics; and
3. Taking an “antibiotic timeout,” reassessing antibiotics after 48-72 hours.

Make appropriate antibiotic use a quality improvement and patient safety priority. Focus on reducing unnecessary antibiotic use, which can reduce antibiotic-resistant infections, Clostridium difficile infections, and costs, while improving patient outcomes. Emphasize and implement antibiotic stewardship programs and interventions for every facility – regardless of facility setting and size. Monitor Healthcare Effectiveness Data and Information Set (HEDIS®) performance measures on pharyngitis, upper respiratory infections, acute bronchitis, and antibiotic utilization. Visit www.cdc.gov/getsmart/healthcare/ to learn more.
Next year, scientists expect to change the way we define the basic units with which we measure our universe. An article by scientists at the National Institute of Standards and Technology (NIST) written for teachers will help ensure high school physics students are hip to the news. The brief, six-page article, which appears in this month’s issue of The Physics Teacher, is designed to be a resource for teachers who are introducing the International System of Units (SI) into their classrooms. The SI, as the modern form of the metric system, has seven fundamental units, including the meter and the second. It is expected that in 2018, for the first time in history, all seven of these units will be defined in terms of fundamental constants of the universe such as the speed of light or the charge of a single electron. Only recently were all the relevant fundamental constants known with sufficient certainty to make such a redefinition possible, and the authors are eager to help students realize the change’s importance. “It’s a historic moment,” said NIST physicist Peter Mohr, one of the article’s authors. “Back in the 19th century, James Clerk Maxwell—one of history’s great scientific visionaries—dreamed of a measurement system based on universal constants. Now that we are on the verge of realizing his dream, we want to explain why these constants have a relationship to SI units in a way high school students can understand.” The article, written in everyday English, begins with a brief history of measurement units and shows how their limitations over past centuries have led to the need to redefine them. The kilogram, for example, is currently defined by a metal artifact, which has a mass that has apparently been changing over time. This prompted an international effort to redefine it in terms of electrical energy. 
Most of the article comprises brief summaries of each unit’s relation to universal constants, allowing teachers to show physics students these relationships right from the beginning of a course, when units are generally taught. “One of the things that makes good sense for me is the unit of electrical charge,” said co-author Sandy Knotts, who recently retired after a career of teaching physics at Perkiomen Valley High School in Collegeville, Pennsylvania. “Now we can start out using the measurement of the electron rather than the ampere.” At present, the ampere is defined in relation to the force between two parallel electrical wires of infinite length. “That’s something that doesn’t exist in nature,” Knotts said. “Whereas an electron is an electron.” Teaching the relationships this way is intended to help eliminate potential confusion in the early weeks of a physics course. The redefinition also has the advantage of allowing students (and scientists) to do work with a clear relationship to the universal constants. “Defining the constants precisely provides a practical way to establish SI units,” Mohr said. “That allows you to do experiments and get answers in terms of these constants.” This changeover in unit definitions may be the last one physics students will need to absorb for a long time, Mohr added. “The definitions are such that they’re not dependent on technology,” he said. “We may come up with better ways to measure, but the definitions themselves won’t have to change.” Paper: S. Knotts, P.J. Mohr and W.D. Phillips. An Introduction to the New SI. January 2017. The Physics Teacher, DOI: 10.1119/1.4972491
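The electron-based definition Knotts describes can be made concrete with one line of arithmetic. In the revised SI, the elementary charge is fixed at exactly e = 1.602176634 × 10⁻¹⁹ C, so a one-ampere current (one coulomb per second) corresponds to a definite number of elementary charges flowing per second. A brief sketch (an illustration, not from the article itself):

```python
# In the revised SI, the elementary charge is an exact defined constant.
E_CHARGE = 1.602176634e-19  # coulombs per elementary charge (exact by definition)

# One ampere is one coulomb per second, so a one-ampere current carries
# this many elementary charges per second:
electrons_per_second = 1 / E_CHARGE  # about 6.24e18
print(f"{electrons_per_second:.6e} elementary charges per second")
```

This is the sense in which a course can, as Knotts puts it, "start out using the measurement of the electron rather than the ampere": the unit of current falls out of counting charges per second.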
Romania

In the fourteenth century, the Ottoman Turks expanded their empire from Anatolia to the Balkans. They crossed the Bosporus in 1352 and crushed the Serbs at Kosovo Polje, in the south of modern-day Yugoslavia, in 1389. Tradition holds that Walachia's Prince Mircea the Old (1386-1418) sent his forces to Kosovo to fight beside the Serbs; soon after the battle Sultan Bayezid marched on Walachia and imprisoned Mircea until he pledged to pay tribute. After a failed attempt to break the sultan's grip, Mircea fled to Transylvania and enlisted his forces in a crusade called by Hungary's King Sigismund. The campaign ended miserably: the Turks routed Sigismund's forces in 1396 at Nicopolis in present-day Bulgaria, and Mircea and his men were lucky to escape across the Danube. In 1402 Walachia gained a respite from Ottoman pressure as the Mongol leader Tamerlane attacked the Ottomans from the east, killed the sultan, and sparked a civil war. When peace returned, the Ottomans renewed their assault on the Balkans. In 1417 Mircea capitulated to Sultan Mehmed I and agreed to pay an annual tribute and surrender territory; in return the sultan allowed Walachia to remain a principality and to retain the Eastern Orthodox faith. After Mircea's death in 1418, Walachia and Moldavia slid into decline. Succession struggles, Polish and Hungarian intrigues, and corruption produced a parade of eleven princes in twenty-five years and weakened the principalities as the Ottoman threat waxed. In 1444 the Ottomans routed European forces at Varna in contemporary Bulgaria. When Constantinople succumbed in 1453, the Ottomans cut off Genoese and Venetian galleys from Black Sea ports, trade ceased, and the Romanian principalities' isolation deepened. At this time of near desperation, a Magyarized Romanian from Transylvania, János Hunyadi, became regent of Hungary.
Hunyadi, a hero of the Ottoman wars, mobilized Hungary against the Turks, equipping a mercenary army funded by the first tax ever levied on Hungary's nobles. He scored a resounding victory over the Turks before Belgrade in 1456, but died of plague soon after the battle. In one of his final acts, Hunyadi installed Vlad Tepes (1456-62) on Walachia's throne. Vlad took abnormal pleasure in inflicting torture and watching his victims writhe in agony. He also hated the Turks and defied the sultan by refusing to pay tribute. In 1461 Hamsa Pasha tried to lure Vlad into a trap, but the Walachian prince discovered the deception, captured Hamsa and his men, impaled them on wooden stakes, and abandoned them. Sultan Mohammed later invaded Walachia and drove Vlad into exile in Hungary. Although Vlad eventually returned to Walachia, he died shortly thereafter, and Walachia's resistance to the Ottomans softened. Moldavia and its prince, Stephen the Great (1457-1504), were the principalities' last hope of repelling the Ottoman threat. Stephen drew on Moldavia's peasantry to raise a 55,000-man army and repelled the invading forces of Hungary's King Mátyás Corvinus in a daring night attack. Stephen's army invaded Walachia in 1471 and defeated the Turks when they retaliated in 1473 and 1474. After these victories, Stephen implored Pope Sixtus IV to forge a Christian alliance against the Turks. The pope replied with a letter naming Stephen an "Athlete of Christ," but he did not heed Stephen's calls for Christian unity. During the last decades of Stephen's reign, the Turks increased the pressure on Moldavia. They captured key Black Sea ports in 1484 and burned Moldavia's capital, Suceava, in 1485. Stephen rebounded with a victory in 1486 but thereafter confined his efforts to secure Moldavia's independence to the diplomatic arena. 
Frustrated by vain attempts to unite the West against the Turks, Stephen, on his deathbed, reportedly told his son to submit to the Turks if they offered an honorable suzerainty. Succession struggles weakened Moldavia after his death. In 1514 greedy nobles and an ill-planned crusade sparked a widespread peasant revolt in Hungary and Transylvania. Well-armed peasants under György Dózsa sacked estates across the country. Despite strength of numbers, however, the peasants were disorganized and suffered a decisive defeat at Timisoara. Dózsa and the other rebel leaders were tortured and executed. After the revolt, the Hungarian nobles enacted laws that condemned the serfs to eternal bondage and increased their work obligations. With the serfs and nobles deeply alienated from each other and jealous magnates challenging the king's power, Hungary was vulnerable to outside aggression. The Ottomans stormed Belgrade in 1521, routed a feeble Hungarian army at Mohács in 1526, and conquered Buda in 1541. They installed a pasha to rule over central Hungary; Transylvania became an autonomous principality under Ottoman suzerainty; and the Habsburgs assumed control over fragments of northern and western Hungary. After Buda's fall, Transylvania, though a vassal state of the Sublime Porte (as the Ottoman government was called), entered a period of broad autonomy. As a vassal, Transylvania paid the Porte an annual tribute and provided military assistance; in return, the Ottomans pledged to protect Transylvania from external threat. Native princes governed Transylvania from 1540 to 1690. Transylvania's powerful, mostly Hungarian, ruling families, whose position ironically strengthened with Hungary's fall, normally chose the prince, subject to the Porte's confirmation; in some cases, however, the Turks appointed the prince outright. The Transylvanian Diet became a parliament, and the nobles revived the Union of Three Nations, which still excluded the Romanians from political power. 
Princes took pains to separate Transylvania's Romanians from those in Walachia and Moldavia and forbade Eastern Orthodox priests to enter Transylvania from Walachia. The Protestant Reformation spread rapidly in Transylvania after Hungary's collapse, and the region became one of Europe's Protestant strongholds. Transylvania's Germans adopted Lutheranism, and many Hungarians converted to Calvinism. However, the Protestants, who printed and distributed catechisms in the Romanian language, failed to lure many Romanians from Orthodoxy. In 1571 the Transylvanian Diet approved a law guaranteeing freedom of worship and equal rights for Transylvania's four "received" religions: Roman Catholic, Lutheran, Calvinist, and Unitarian. The law was one of the first of its kind in Europe, but the religious equality it proclaimed was limited. Orthodox Romanians, for example, were free to worship, but their church was not recognized as a received religion. Once the Ottomans conquered Buda, Walachia and Moldavia lost all but the veneer of independence and the Porte exacted heavy tribute. The Turks chose Walachian and Moldavian princes from among the sons of noble hostages or refugees at Constantinople. Few princes died a natural death, but they lived enthroned amid great luxury. Although the Porte forbade Turks to own land or build mosques in the principalities, the princes allowed Greek and Turkish merchants and usurers to exploit the principalities' riches. The Greeks, jealously protecting their privileges, smothered the developing Romanian middle class. The Romanians' final hero before the Turks and Greeks closed their stranglehold on the principalities was Walachia's Michael the Brave (1593-1601). Michael bribed his way at the Porte to become prince. Once enthroned, however, he rounded up extortionist Turkish lenders, locked them in a building, and burned it to the ground. His forces then overran several key Turkish fortresses. 
Michael's ultimate goal was complete independence, but in 1598 he pledged fealty to Holy Roman Emperor Rudolf II. A year later, Michael captured Transylvania, and his victory incited Transylvania's Romanian peasants to rebel. Michael, however, more interested in endearing himself to Transylvania's nobles than in supporting defiant serfs, suppressed the rebels and swore to uphold the Union of Three Nations. Despite the prince's pledge, the nobles still distrusted him. Then in 1600 Michael conquered Moldavia. For the first time a single Romanian prince ruled over all Romanians, and the Romanian people sensed the first stirring of a national identity. Michael's success startled Rudolf. The emperor incited Transylvania's nobles to revolt against the prince, and Poland simultaneously overran Moldavia. Michael consolidated his forces in Walachia, apologized to Rudolf, and agreed to join Rudolf's general, Giörgio Basta, in a campaign to regain Transylvania from recalcitrant Hungarian nobles. After their victory, however, Basta executed Michael for alleged treachery. Michael the Brave grew more impressive in legend than in life, and his short-lived unification of the Romanian lands later inspired the Romanians to struggle for cultural and political unity. In Transylvania Basta's army persecuted Protestants and illegally expropriated their estates until Stephen Bocskay (1605-07), a former Habsburg supporter, mustered an army that expelled the imperial forces. In 1606 Bocskay concluded treaties with the Habsburgs and the Turks that secured his position as prince of Transylvania, guaranteed religious freedom, and broadened Transylvania's independence. After Bocskay's death and the reign of the tyrant Gabriel Báthory (1607-13), the Porte compelled the Transylvanians to accept Gábor Bethlen (1613-29) as prince. Transylvania experienced a golden age under Bethlen's enlightened despotism. 
He promoted agriculture, trade, and industry, sank new mines, sent students abroad to Protestant universities, and prohibited landlords from denying an education to children of serfs. After Bethlen died, however, the Transylvanian Diet abolished most of his reforms. Soon György Rákóczi I (1630-48) became prince. Rákóczi, like Bethlen, sent Transylvanian forces to fight with the Protestants in the Thirty Years' War, and Transylvania gained mention as a sovereign state in the Peace of Westphalia. Transylvania's golden age ended after György Rákóczi II (1648-60) launched an ill-fated attack on Poland without the prior approval of the Porte or Transylvania's Diet. A Turkish and Tatar army routed Rákóczi's forces and seized Transylvania. For the remainder of its independence, Transylvania suffered a series of feckless and distracted leaders, and throughout the seventeenth century Transylvania's Romanian peasants lingered in poverty and ignorance. During Michael the Brave's brief tenure and the early years of Turkish suzerainty, the distribution of land in Walachia and Moldavia changed dramatically. Over the years, Walachian and Moldavian princes made land grants to loyal boyars in exchange for military service, so that by the seventeenth century hardly any land was left. Boyars in search of wealth began encroaching on peasant land, and their military allegiance to the prince weakened. As a result, serfdom spread, successful boyars became more courtiers than warriors, and an intermediary class of impoverished lesser nobles developed. Would-be princes were forced to raise enormous sums to bribe their way to power, and peasant life grew more miserable as taxes and exactions increased. Any prince wishing to improve the peasants' lot risked a financial shortfall that could enable rivals to out-bribe him at the Porte and usurp his position.
In 1632 Matei Basarab (1632-54) became the last of Walachia's predominant family to take the throne; two years later, Vasile Lupu (1634-53), a man of Albanian descent, became prince of Moldavia. The jealousies and ambitions of Matei and Vasile sapped the strength of both principalities at a time when the Porte's power began to wane. Coveting the richer Walachian throne, Vasile attacked Matei, but the latter's forces routed the Moldavians, and a group of Moldavian boyars ousted Vasile. Both Matei and Vasile were enlightened rulers, who provided liberal endowments to religion and the arts, established printing presses, and published religious books and legal codes. Source: U.S. Library of Congress
Mysterious Mercury: 5 groundbreaking images shedding light on the mini-planet (PHOTOS)

Mercury is the closest planet to the Sun and, though bigger than our moon, it is also the smallest planet in our solar system. It’s close enough to Earth that the tiny planet can at times be seen by the naked eye, but its proximity to the sun makes any exploration of Mercury very tricky. Exploring Mercury, and even just getting close enough to capture images, is a challenge because of both its closeness to the sun and its lack of atmosphere. However, NASA’s Messenger spacecraft, the first to go into orbit around Mercury in 2011, captured incredible images and delivered groundbreaking data.

This extraordinary close-up image, taken “about 58 minutes” before Messenger’s closest approach to Mercury on October 6, 2008, gives a detailed look at the planet’s deeply marked, crater-ridden surface. Mercury’s unique chemical, mineralogical and physical surface features were enhanced to produce this colorful map on top of an image captured by Messenger. A probe into Mercury’s mineralogy by the Mercury Atmospheric and Surface Composition Spectrometer (MASCS) in 2012 delivered a kaleidoscope of wavelengths, segregated into red, green and blue. The result was a visual feast that captured Mercury’s complex chemical surface. Messenger delivered the first topographical map of the entire planet, giving scientists an unprecedented insight into Mercury’s geological history. Pictured here is a view of Mercury’s volcanic plains, enhanced to show the different types of rock. In 2015, NASA used data from the more than 250,000 images captured by Messenger to compile a detailed map of Mercury’s entire surface, its processes and minerals, including its distinct craters and pyroclastic vents. Mercury is still the least explored inner planet in the solar system, so, while we know much more than we did a decade ago, there’s a lot still to be learned about this mysterious mini-planet.
Representing a Tree With an Array

You've seen two approaches to implementing a sequence data structure: either using an array, or using linked nodes. With BSTs, we extended our idea of linked nodes to implement a tree data structure. As discussed in the heaps lecture, we can also use an array to represent a complete tree. Here's how we implement a complete binary tree:
- The root of the tree will be in position 1 of the array (nothing is at position 0). We can define the position of every other node in the tree recursively:
- The left child of a node at position n is at position 2n.
- The right child of a node at position n is at position 2n + 1.
- The parent of a node at position n is at position n / 2 (rounded down).

Working With Binary Heaps

Binary Heaps Defined

In this lab, you will be making a priority queue using a binary min-heap (where smaller values correspond to higher priorities). Recall from lecture: binary min-heaps are basically just binary trees (but not binary search trees) -- they have all of the same invariants of binary trees, with two extra invariants:
- Invariant 1: the tree must be complete (more on this later)
- Invariant 2: every node is smaller than its descendants (there is another variation called a binary max-heap where every node is greater than its descendants)

Invariant 2 guarantees that the min element will always be at the root of the tree. This helps us access that item quickly, which is what we need for a priority queue. We need to make sure binary min-heap methods maintain the above two invariants. Here's how we do it:

Add an item
- Put the item you're adding in the left-most open spot in the bottom level of the tree.
- Swap the item you just added with its parent until it is larger than its parent, or until it is the new root. This is called bubbling up or swimming.

Remove the min item
- Swap the item at the root with the item of the right-most leaf node.
- Remove the right-most leaf node, which now contains the min item.
- Bubble down the new root until it is smaller than both its children. If you reach a point where you can bubble down through either the left or the right child, you must choose the smaller of the two. This process is also called sinking.

There are a couple of different notions of what it means for a tree to be well balanced. A binary heap must always be what is called complete (also sometimes called maximally balanced). A complete tree has all available positions for nodes filled, except for possibly the last row, which must be filled from left to right.

Writing Heap Methods

ArrayHeap implements a binary min-heap using an array. Fill in the missing methods in ArrayHeap.java. Specifically, you should implement the following methods, ideally in the order shown. JUnit tests are provided inside ArrayHeap that test these methods (with the exception of peek and changePriority). Try out the tests as soon as you write the corresponding methods. You may find the Princeton implementation of a heap useful. Unlike the Princeton implementation, we store items in the heap as an array of Nodes, instead of an array of Key, because we want to leave open the possibility of priority-changing operations. To submit, push your ArrayHeap.java to Gradescope and submit.

The toString method is causing a stack overflow and/or the debugger seems super slow. The debugger wants to print everything out nicely as it runs, which means it is constantly calling the toString method. If something about your code causes an infinite recursion, this will cause a stack overflow, which will also make the debugger really slow.
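The add and remove-min procedures described above can be sketched compactly. The following is an illustration of the invariant-maintaining logic only, written in Python for brevity; it is not the lab's Java ArrayHeap, which stores Node objects and tracks priorities separately:

```python
class MinHeap:
    """Array-backed binary min-heap; index 0 is unused so the root sits at index 1."""

    def __init__(self):
        self.items = [None]

    def add(self, x):
        self.items.append(x)              # left-most open spot in the bottom level
        self._swim(len(self.items) - 1)   # bubble up until the invariant holds

    def remove_min(self):
        items = self.items
        items[1], items[-1] = items[-1], items[1]   # swap root with right-most leaf
        smallest = items.pop()                      # remove the leaf now holding the min
        if len(items) > 1:
            self._sink(1)                           # bubble the new root down
        return smallest

    def _swim(self, n):
        # Swap with the parent (n // 2) while smaller than it.
        while n > 1 and self.items[n] < self.items[n // 2]:
            self.items[n], self.items[n // 2] = self.items[n // 2], self.items[n]
            n //= 2

    def _sink(self, n):
        size = len(self.items) - 1
        while 2 * n <= size:              # while a left child (2n) exists
            child = 2 * n
            # When both children exist, sink through the smaller one.
            if child < size and self.items[child + 1] < self.items[child]:
                child += 1
            if self.items[n] <= self.items[child]:
                break
            self.items[n], self.items[child] = self.items[child], self.items[n]
            n = child
```

For example, adding 5, 3, 8, 1 and then calling remove_min repeatedly returns the items in sorted order (1, 3, 5, 8). Note that the index arithmetic (2n, 2n + 1, n // 2) is exactly the array representation described above, with index 0 unused.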
On this day in 1962, an avalanche on the slopes of an extinct volcano kills more than 4,000 people in Peru. Nine towns and seven smaller villages were destroyed. Mount Huascaran rises 22,000 feet above sea level in the Andes Mountains. Beneath it lay many small Peruvian communities, the inhabitants of which farmed in the Rio Santa Valley. On the evening of January 10, as most of the region’s people gathered in their homes for dinner, the edge of a giant glacier suddenly broke apart and thundered down the mountain. The block of ice was the size of two skyscrapers and weighed approximately 6 million tons, and it made a loud noise as it fell, which was heard in the towns below. As avalanches were not unusual in the area, it was common knowledge that there was usually a 20 to 30 minute gap between the sound of the ice cracking off and an avalanche, which gave people time to seek higher ground. However, this time, the avalanche traveled nine-and-a-half miles in only seven minutes, wiping away several communities. The towns of Ranrahirca and Huarascucho were buried under 40 feet of ice, mud, trees, boulders and other debris. Only a handful of people in each town survived. The avalanche finally ended at the Santa River, where it stopped the water flow, causing flooding in nearby areas. Overall, approximately 4,000 people lost their lives in the avalanche. Some bodies were carried all the way to the Pacific Ocean near Chimbote, 100 miles away. Others were buried under so much debris that their bodies were never recovered. An additional 10,000 farm animals were killed and millions of dollars in crops were destroyed. Eight years later, an earthquake set off another terrible avalanche in the same area.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speaking is usually delayed in children with ASD, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are known to also be challenging to children with ASD.7,12,13,16 We present a method which can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes.
This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.19

Making Sense of Listening: The IMAP Test Battery
Institutions: MRC Institute of Hearing Research, National Biomedical Research Unit in Hearing.

The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays is hotly debated. When a child is referred to a health professional with normal hearing and unexplained difficulties in listening, or associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests to measure attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population.
They have little or no reference to the presenting symptoms of the child, and typically include a linguistic component. Poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized, child-appropriate test battery based on the use of language-free stimuli. We present the IMAP test battery, which was developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized in 1500 normally hearing children from across the UK, aged 6-11 years. Since its development, it has been successfully used in a number of large-scale studies in both the UK and the USA. IMAP provides measures for separating out sensory from cognitive contributions to hearing. It further limits confounds due to procedural effects by presenting tests in a child-friendly game format. Stimulus generation, management of test protocols, and control of test presentation are mediated by the IHR-STAR software platform. This provides a standardized methodology for a range of applications and ensures replicable procedures across testers. IHR-STAR provides a flexible, user-programmable environment that currently has additional applications for hearing screening, mapping cochlear implant electrodes, and academic research or teaching. Neuroscience, Issue 44, Listening skills, auditory processing, auditory psychophysics, clinical assessment, child-friendly testing

A Research Method For Detecting Transient Myocardial Ischemia In Patients With Suspected Acute Coronary Syndrome Using Continuous ST-segment Analysis Institutions: University of Nevada, Reno, St. Joseph's Medical Center, University of Rochester Medical Center.
Each year, an estimated 785,000 Americans will have a new coronary attack, or acute coronary syndrome (ACS). The pathophysiology of ACS involves rupture of an atherosclerotic plaque; hence, treatment is aimed at plaque stabilization in order to prevent cellular death. However, there is considerable debate among clinicians about which treatment pathway is best: early invasive using percutaneous coronary intervention (PCI/stent) when indicated, or a conservative approach (i.e., medication only with PCI/stent if recurrent symptoms occur). There are three types of ACS: ST elevation myocardial infarction (STEMI), non-ST elevation MI (NSTEMI), and unstable angina (UA). Among the three types, NSTEMI/UA is nearly four times as common as STEMI. Treatment decisions for NSTEMI/UA are based largely on symptoms and resting or exercise electrocardiograms (ECG). However, because of the dynamic and unpredictable nature of the atherosclerotic plaque, these methods often underdetect myocardial ischemia, because symptoms are unreliable and/or continuous ECG monitoring was not utilized. Continuous 12-lead ECG monitoring, which is both inexpensive and non-invasive, can identify transient episodes of myocardial ischemia, a precursor to MI, even when asymptomatic. However, continuous 12-lead ECG monitoring is not usual hospital practice; rather, only two leads are typically monitored. Information obtained with 12-lead ECG monitoring might provide useful information for deciding the best ACS treatment. Therefore, using 12-lead ECG monitoring, the COMPARE Study was designed to assess the frequency and clinical consequences of transient myocardial ischemia in patients with NSTEMI/UA treated with either early invasive PCI/stent or conservative management (medications or PCI/stent following recurrent symptoms). The purpose of this manuscript is to describe the methodology used in the COMPARE Study.
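As a rough illustration of how continuous ST-segment data can be screened for transient ischemia, the sketch below flags runs where the ST deviation stays at or above a threshold for a minimum duration. The 0.1 mV (1 mm) threshold, the 60 s minimum duration, and the 10 s sampling interval are illustrative assumptions for this sketch, not the COMPARE Study's actual analysis software or criteria.

```python
# Hypothetical single-lead episode detector: flag runs where the
# absolute ST deviation is >= threshold_mv for at least min_duration_s.
def ischemic_episodes(st_mv, sample_period_s=10, min_duration_s=60, threshold_mv=0.1):
    """Return (start_index, end_index) pairs of qualifying runs in a
    series of ST-segment measurements (mV), one per sample_period_s."""
    episodes, start = [], None
    for i, v in enumerate(st_mv):
        if abs(v) >= threshold_mv:
            if start is None:
                start = i  # run of deviated samples begins here
        else:
            if start is not None:
                if (i - start) * sample_period_s >= min_duration_s:
                    episodes.append((start, i - 1))
                start = None
    # close a run that extends to the end of the recording
    if start is not None and (len(st_mv) - start) * sample_period_s >= min_duration_s:
        episodes.append((start, len(st_mv) - 1))
    return episodes

# Six consecutive deviated samples at 10 s each = one 60 s episode.
st = [0.02, 0.05, 0.12, 0.15, 0.14, 0.13, 0.11, 0.12, 0.04, 0.03]
print(ischemic_episodes(st))
```

A real analysis would of course work per lead across all 12 leads and account for baseline wander and artifact, which this sketch omits.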
Permission to proceed with this study was obtained from the Institutional Review Board of the hospital and the university. Research nurses identify hospitalized patients from the emergency department and telemetry unit with suspected ACS. Once consented, a 12-lead ECG Holter monitor is applied and remains in place during the patient's entire hospital stay. Patients are also maintained on the routine bedside ECG monitoring system per hospital protocol. Off-line ECG analysis is done using sophisticated software and careful human oversight. Medicine, Issue 70, Anatomy, Physiology, Cardiology, Myocardial Ischemia, Cardiovascular Diseases, Health Occupations, Health Care, transient myocardial ischemia, Acute Coronary Syndrome, electrocardiogram, ST-segment monitoring, Holter monitoring, research methodology

Eye Tracking Young Children with Autism Institutions: University of Texas at Dallas, University of North Carolina at Chapel Hill. The rise of accessible commercial eye-tracking systems has fueled a rapid increase in their use in psychological and psychiatric research. By providing a direct, detailed, and objective measure of gaze behavior, eye-tracking has become a valuable tool for examining abnormal perceptual strategies in clinical populations and has been used to identify disorder-specific characteristics1, promote early identification2, and inform treatment3. In particular, investigators of autism spectrum disorders (ASD) have benefited from integrating eye-tracking into their research paradigms4-7. Eye-tracking has largely been used in these studies to reveal mechanisms underlying impaired task performance8 and abnormal brain functioning9, particularly during the processing of social information1,10-11.
While older children and adults with ASD comprise the preponderance of research in this area, eye-tracking may be especially useful for studying young children with the disorder, as it offers a non-invasive tool for assessing and quantifying early-emerging developmental abnormalities2,12-13. Implementing eye-tracking with young children with ASD, however, is associated with a number of unique challenges, including issues with compliant behavior resulting from specific task demands and disorder-related psychosocial considerations. In this protocol, we detail methodological considerations for optimizing research design, data acquisition, and psychometric analysis while eye-tracking young children with ASD. The provided recommendations are also designed to be more broadly applicable for eye-tracking children with other developmental disabilities. By offering guidelines for best practices in these areas based upon lessons derived from our own work, we hope to help other investigators make sound research design and analysis choices while avoiding common pitfalls that can compromise data acquisition while eye-tracking young children with ASD or other developmental difficulties. Medicine, Issue 61, eye tracking, autism, neurodevelopmental disorders, toddlers, perception, attention, social cognition

A Novel Rescue Technique for Difficult Intubation and Difficult Ventilation Institutions: Children's Hospital of Michigan, St. Jude Children's Research Hospital. We describe a novel non-surgical technique to maintain oxygenation and ventilation in a case of difficult intubation and difficult ventilation, which works especially well with a poor mask fit. "Cannot intubate, cannot ventilate" (CICV) is a potentially life-threatening situation.
In this video we present a simulation of the technique we used in a case of CICV where oxygenation and ventilation were maintained by inserting an endotracheal tube (ETT) nasally down to the level of the nasopharynx while sealing the mouth and nares for successful positive pressure ventilation. A 13-year-old patient was taken to the operating room for incision and drainage of a neck abscess and direct laryngobronchoscopy. After preoxygenation, anesthesia was induced intravenously. Mask ventilation was found to be extremely difficult because of the swelling of the soft tissue. The face mask could not fit properly on the face due to significant facial swelling as well. A direct laryngoscopy was attempted with no visualization of the larynx. Oxygen saturation was difficult to maintain, with saturations falling to 80%. In order to oxygenate and ventilate the patient, an endotracheal tube was then inserted nasally after nasal spray with nasal decongestant and lubricant. The tube was pushed gently and blindly into the hypopharynx. The mouth and nose of the patient were sealed by hand, and positive pressure ventilation was possible with 100% O2 with good oxygen saturation during that period of time. Once the patient was stable and well sedated, a rigid bronchoscope was introduced by the otolaryngologist, showing extensive subglottic and epiglottic edema and a mass effect from the abscess, contributing to the airway compromise. The airway was then secured with an ETT by the otolaryngologist. This video will show a simulation of the technique on a patient undergoing general anesthesia for dental restorations. Medicine, Issue 47, difficult ventilation, difficult intubation, nasal, saturation

Assessment of Cerebral Lateralization in Children using Functional Transcranial Doppler Ultrasound (fTCD) Institutions: University of Oxford. There are many unanswered questions about cerebral lateralization.
In particular, it remains unclear which aspects of language and nonverbal ability are lateralized, whether there are any disadvantages associated with atypical patterns of cerebral lateralization, and whether cerebral lateralization develops with age. In the past, researchers interested in these questions tended to use handedness as a proxy measure for cerebral lateralization, but this is unsatisfactory because handedness is only a weak and indirect indicator of laterality of cognitive functions1. Other methods, such as fMRI, are expensive for large-scale studies, and not always feasible with children2. Here we describe the use of functional transcranial Doppler ultrasound (fTCD) as a cost-effective, non-invasive and reliable method for assessing cerebral lateralization. The procedure involves measuring blood flow in the middle cerebral artery via an ultrasound probe placed just in front of the ear. Our work builds on that of Rune Aaslid, who co-introduced TCD in 1982, and of Stefan Knecht, Michael Deppe and their colleagues at the University of Münster, who pioneered the use of simultaneous measurements of left and right middle cerebral artery blood flow and devised a method of correcting for heartbeat activity. This made it possible to see a clear increase in left-sided blood flow during language generation, with lateralization agreeing well with that obtained using other methods3. The middle cerebral artery has a very wide vascular territory (see Figure 1), and the method does not provide useful information about localization within a hemisphere. Our experience suggests it is particularly sensitive to tasks that involve explicit or implicit speech production. The 'gold standard' task is a word generation task (e.g., think of as many words as you can that begin with the letter 'B')4, but this is not suitable for young children and others with limited literacy skills.
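The left-minus-right comparison described above is usually summarized as a laterality index. The sketch below computes a simple version: percent velocity change in each middle cerebral artery relative to a pre-task baseline, with the left-minus-right difference as the index (positive = left-lateralized). It follows the Knecht/Deppe approach only in outline; the real procedure's epoching, averaging across trials, and heart-cycle correction are omitted, and all velocity values are made up for illustration.

```python
# Hedged sketch of an fTCD laterality index from bilateral MCA
# blood-flow velocity samples (cm/s).
def mean(xs):
    return sum(xs) / len(xs)

def laterality_index(left, right, baseline_left, baseline_right):
    """Percent signal change during the task, left minus right."""
    d_left = 100 * (mean(left) - mean(baseline_left)) / mean(baseline_left)
    d_right = 100 * (mean(right) - mean(baseline_right)) / mean(baseline_right)
    return d_left - d_right

# Illustrative rest vs word-generation velocities; a larger left-sided
# increase yields a positive (left-lateralized) index.
li = laterality_index(left=[62, 63, 64], right=[59, 59, 60],
                      baseline_left=[58, 58, 58], baseline_right=[57, 57, 57])
print(round(li, 2))
```

Using percent change rather than raw velocity differences keeps the index comparable across subjects, since absolute insonation angle and vessel diameter shift the raw values.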
Compared with other brain imaging methods, fTCD is relatively unaffected by movement artefacts from speaking, and so we are able to get a reliable result from tasks that involve describing pictures aloud5,6. Accordingly, we have developed a child-friendly task that involves looking at video-clips that tell a story, and then describing what was seen. Neuroscience, Issue 43, functional transcranial Doppler ultrasound, cerebral lateralization, language, child

Prehospital Thrombolysis: A Manual from Berlin Institutions: Charité - Universitätsmedizin Berlin, Charité - Universitätsmedizin Berlin, Universitätsklinikum Hamburg - Eppendorf, Berliner Feuerwehr, STEMO-Consortium. In acute ischemic stroke, time from symptom onset to intervention is a decisive prognostic factor. In order to reduce this time, prehospital thrombolysis at the emergency site would be preferable. However, apart from neurological expertise and laboratory investigations, a computed tomography (CT) scan is necessary to exclude hemorrhagic stroke prior to thrombolysis. Therefore, a specialized ambulance equipped with a CT scanner and point-of-care laboratory was designed and constructed. Further, a new stroke-identifying interview algorithm was developed and implemented in the Berlin emergency medical services. Since February 2011, the identification of suspected stroke in the dispatch center of the Berlin Fire Brigade prompts the deployment of this ambulance, a stroke emergency mobile (STEMO). On arrival, a neurologist experienced in stroke care and with additional training in emergency medicine performs a neurological examination. If stroke is suspected, a CT scan is performed to exclude intracranial hemorrhage. The CT scans are telemetrically transmitted to the neuroradiologist on-call.
If the coagulation status of the patient is normal and the patient's medical history reveals no contraindication, prehospital thrombolysis is applied according to current guidelines (intravenous recombinant tissue plasminogen activator, iv rtPA, alteplase, Actilyse). Thereafter, patients are transported to the nearest hospital with a certified stroke unit for further treatment and assessment of stroke aetiology. After a pilot phase, weeks were randomized into blocks either with or without STEMO care. The primary end-point of this study is time from alarm to the initiation of thrombolysis. We hypothesized that alarm-to-treatment time can be reduced by at least 20 min compared to regular care. Medicine, Issue 81, Telemedicine, Emergency Medical Services, Stroke, Tomography, X-Ray Computed, Emergency Treatment, stroke, thrombolysis, prehospital, emergency medical services, ambulance

Measuring Attentional Biases for Threat in Children and Adults Institutions: Rutgers University. Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs and ask them to find and touch a target on the screen. Latency to touch the target is recorded.
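Touch latencies of the kind just described are typically reduced to a per-category mean and a bias score. The sketch below assumes a hypothetical trial format (one dict per trial with a target category and a latency in ms); it is an illustration of the scoring logic, not the authors' actual analysis code.

```python
# Minimal scoring sketch: mean touch latency per stimulus category,
# and a bias score defined as neutral minus threat, so a positive
# value means threatening targets were found faster.
def mean_latency(trials, category):
    ms = [t["latency_ms"] for t in trials if t["target"] == category]
    return sum(ms) / len(ms)

def threat_bias(trials):
    return mean_latency(trials, "neutral") - mean_latency(trials, "threat")

# Made-up trials: threat targets found ~200 ms faster than neutral ones.
trials = [
    {"target": "threat", "latency_ms": 980},
    {"target": "threat", "latency_ms": 1020},
    {"target": "neutral", "latency_ms": 1150},
    {"target": "neutral", "latency_ms": 1250},
]
print(threat_bias(trials))
```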
Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat. Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search

Assaying Locomotor Activity to Study Circadian Rhythms and Sleep Parameters in Drosophila Institutions: Rutgers University, University of California, Davis, Rutgers University. Most life forms exhibit daily rhythms in cellular, physiological and behavioral phenomena that are driven by endogenous circadian (~24 hr) pacemakers or clocks. Malfunctions in the human circadian system are associated with numerous diseases or disorders. Much progress towards our understanding of the mechanisms underlying circadian rhythms has emerged from genetic screens whereby an easily measured behavioral rhythm is used as a read-out of clock function. Studies using Drosophila have made seminal contributions to our understanding of the cellular and biochemical bases underlying circadian rhythms. The standard circadian behavioral read-out measured in Drosophila is locomotor activity. In general, the monitoring system involves specially designed devices that can measure the locomotor movement of Drosophila. These devices are housed in environmentally controlled incubators located in a darkroom and are based on using the interruption of a beam of infrared light to record the locomotor activity of individual flies contained inside small tubes.
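Beam-crossing counts like those just described are commonly folded across days into an average 24-hr activity profile. The sketch below assumes Trikinetics-style input (one beam-crossing count per 1-min bin per fly) and is a simplified illustration of the binning step, not the analysis software mentioned in the abstract.

```python
# Fold per-minute beam-crossing counts from several days into an
# average daily profile in 30-min bins.
BIN_MIN = 30
BINS_PER_DAY = 24 * 60 // BIN_MIN  # 48 bins per day

def daily_profile(counts_per_min):
    """Mean beam crossings per minute within each 30-min bin of the day,
    averaged across all recorded days."""
    totals = [0] * BINS_PER_DAY
    samples = [0] * BINS_PER_DAY
    for minute, count in enumerate(counts_per_min):
        b = (minute % (24 * 60)) // BIN_MIN  # bin within the 24-hr day
        totals[b] += count
        samples[b] += 1
    return [t / n if n else 0.0 for t, n in zip(totals, samples)]

# Two days of fake data: one crossing/min for the first 12 hr, then rest.
one_day = [1] * (12 * 60) + [0] * (12 * 60)
profile = daily_profile(one_day * 2)
print(profile[0], profile[-1])
```

A rhythmic fly shows a structured profile (activity concentrated in particular bins), while an arrhythmic clock mutant's profile is comparatively flat; period estimation itself would use a periodogram method on top of binned data like this.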
When measured over many days, Drosophila exhibit daily cycles of activity and inactivity, a behavioral rhythm that is governed by the animal's endogenous circadian system. The overall procedure has been simplified with the advent of commercially available locomotor activity monitoring devices and the development of software programs for data analysis. We use the system from Trikinetics Inc., which is described here and is currently the most popular system used worldwide. More recently, the same monitoring devices have been used to study sleep behavior in Drosophila. Because the daily wake-sleep cycles of many flies can be measured simultaneously and only 1 to 2 weeks' worth of continuous locomotor activity data is usually sufficient, this system is ideal for large-scale screens to identify Drosophila manifesting altered circadian or sleep properties. Neuroscience, Issue 43, circadian rhythm, locomotor activity, Drosophila, period, sleep, Trikinetics

Surgical Management of Meatal Stenosis with Meatoplasty Institutions: Johns Hopkins School of Medicine. Meatal stenosis is a common urologic complication after circumcision. Children present to their primary care physicians with complaints of a deviated urinary stream that is difficult to aim, painful urination, and urinary frequency. Clinical exam reveals a pinpoint meatus, and if the child is asked to urinate, he will usually have an upward, thin, occasionally forceful urinary stream with incomplete bladder emptying. The mainstay of management is meatoplasty (reconstruction of the distal urethra/meatus). This educational video will demonstrate how this is performed. Medicine, Issue 45, Urinary obstruction, pediatric urology, deviated urinary stream, meatal stenosis, operative repair, meatotomy, meatoplasty

Aseptic Laboratory Techniques: Plating Methods Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to: ● Perform plating procedures without contaminating media. 
● Isolate single bacterial colonies by the streak-plating method. ● Use pour-plating and spread-plating methods to determine the concentration of bacteria. ● Perform soft agar overlays when working with phage. ● Transfer bacterial cells from one plate to another using the replica-plating procedure. ● Given an experimental task, select the appropriate plating method. Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions

Membrane Potentials, Synaptic Responses, Neuronal Circuitry, Neuromodulation and Muscle Histology Using the Crayfish: Student Laboratory Exercises Institutions: University of Kentucky, University of Toronto. The purpose of this report is to help develop an understanding of the effects caused by ion gradients across a biological membrane. Two aspects that influence a cell's membrane potential, and which we address in these experiments, are: (1) the ion concentration of K+ on the outside of the membrane, and (2) the permeability of the membrane to specific ions. The crayfish abdominal extensor muscles are in groupings with some being tonic (slow) and others phasic (fast) in their biochemical and physiological phenotypes, as well as in their structure; the motor neurons that innervate these muscles are correspondingly different in functional characteristics. We use these muscles as well as the superficial, tonic abdominal flexor muscle to demonstrate properties in synaptic transmission. In addition, we introduce a sensory-CNS-motor neuron-muscle circuit to demonstrate the effect of cuticular sensory stimulation as well as the influence of neuromodulators on certain aspects of the circuit. With the techniques learned in this exercise, one can begin to answer many questions remaining in other experimental preparations as well as in physiological applications related to medicine and health.
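The effect of external K+ concentration on membrane potential described above can be made concrete with the Nernst equation, which gives the equilibrium potential a membrane permeable only to K+ would reach. The concentration values below are illustrative textbook-style numbers, not measured crayfish values.

```python
# Nernst equation: E = (RT / zF) * ln([ion]_out / [ion]_in)
import math

R = 8.314    # gas constant, J / (mol K)
F = 96485.0  # Faraday constant, C / mol

def nernst_mv(conc_out_mm, conc_in_mm, z=1, temp_c=20.0):
    """Equilibrium potential in mV for an ion with valence z."""
    t_kelvin = temp_c + 273.15
    return 1000 * (R * t_kelvin) / (z * F) * math.log(conc_out_mm / conc_in_mm)

# Raising external K+ depolarizes the predicted potential, which is
# exactly the manipulation the crayfish exercise explores:
print(round(nernst_mv(5.4, 140.0), 1))   # low external K+ (mV)
print(round(nernst_mv(20.0, 140.0), 1))  # elevated external K+ (mV)
```

Because real membranes are also somewhat permeable to Na+ and Cl-, measured resting potentials deviate from the pure K+ prediction; the Goldman equation extends the same idea to multiple ions.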
We have demonstrated the usefulness of model invertebrate preparations to address fundamental questions pertinent to all animals. Neuroscience, Issue 47, Invertebrate, Crayfish, neurophysiology, muscle, anatomy, electrophysiology

Assessment and Evaluation of the High Risk Neonate: The NICU Network Neurobehavioral Scale Institutions: Brown University, Women & Infants Hospital of Rhode Island, University of Massachusetts, Boston. There has been a long-standing interest in the assessment of the neurobehavioral integrity of the newborn infant. The NICU Network Neurobehavioral Scale (NNNS) was developed as an assessment for the at-risk infant. These are infants who are at increased risk for poor developmental outcome because of insults during prenatal development, such as substance exposure or prematurity, or factors such as poverty, poor nutrition, or lack of prenatal care that can have adverse effects on the intrauterine environment and affect the developing fetus. The NNNS assesses the full range of infant neurobehavioral performance, including neurological integrity, behavioral functioning, and signs of stress/abstinence. The NNNS is a noninvasive neonatal assessment tool with demonstrated validity as a predictor, not only of medical outcomes such as cerebral palsy diagnosis, neurological abnormalities, and diseases with risks to the brain, but also of developmental outcomes such as mental and motor functioning, behavior problems, school readiness, and IQ. The NNNS can identify infants at high risk for abnormal developmental outcome and is an important clinical tool that enables medical researchers and health practitioners to identify these infants and develop intervention programs to optimize their development as early as possible. The video shows the NNNS procedures, examples of normal and abnormal performance, and the various clinical populations in which the exam can be used.
Behavior, Issue 90, NICU Network Neurobehavioral Scale, NNNS, High risk infant, Assessment, Evaluation, Prediction, Long term outcome

An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School. HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance, to track transmission of viral strains already resistant to ART, is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against the Viroseq genotyping method, a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA). Limitations of the method described here include the fact that it is not automated and that it failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable

Community-based Adapted Tango Dancing for Individuals with Parkinson's Disease and Older Adults Institutions: Emory University School of Medicine, Brigham and Women's Hospital and Massachusetts General Hospital. Adapted tango dancing improves mobility and balance in older adults and additional populations with balance impairments. It is composed of very simple step elements. Adapted tango involves movement initiation and cessation, multi-directional perturbations, and varied speeds and rhythms. Focus on foot placement, whole body coordination, and attention to partner, path of movement, and aesthetics likely underlie adapted tango's demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology to disseminate the adapted tango teaching methods to dance instructor trainees and to implement the adapted tango by the trainees in the community for older adults and individuals with Parkinson's Disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, Tandem stance, Berg Balance Scale, Gait Speed and 30 sec chair stand), safety and fidelity of the program is maximized through targeted instructor and volunteer training and a structured, detailed syllabus outlining class practices and progression. Behavior, Issue 94, Dance, tango, balance, pedagogy, dissemination, exercise, older adults, Parkinson's Disease, mobility impairments, falls

Measurement Of Neuromagnetic Brain Function In Pre-school Children With Custom Sized MEG Institutions: Macquarie University. Magnetoencephalography (MEG) is a technique that detects magnetic fields associated with cortical activity. The electrophysiological activity of the brain generates electric fields, which can be recorded using electroencephalography (EEG), and concomitant magnetic fields, which are detected by MEG.
MEG signals are detected by specialized sensors known as superconducting quantum interference devices (SQUIDs). Superconducting sensors require cooling with liquid helium at -270 °C. They are contained inside a vacuum-insulated helmet called a dewar, which is filled with liquid helium. SQUIDs are placed in fixed positions inside the helmet dewar in the helium coolant, and a subject's head is placed inside the helmet dewar for MEG measurements. The helmet dewar must be sized to satisfy opposing constraints. Clearly, it must be large enough to fit most or all of the heads in the population that will be studied. However, the helmet must also be small enough to keep most of the SQUID sensors within range of the tiny cerebral fields that they are to measure. Conventional whole-head MEG systems are designed to accommodate more than 90% of adult heads. However, adult systems are not well suited for measuring brain function in pre-school children, whose heads have a radius several cm smaller than adults'. The KIT-Macquarie Brain Research Laboratory at Macquarie University uses a MEG system custom sized to fit the heads of pre-school children. This child system has 64 first-order axial gradiometers with a 50 mm baseline and is contained inside a magnetically-shielded room (MSR) together with a conventional adult-sized MEG system [3,4]. There are three main advantages of the customized helmet dewar for studying children. First, the smaller radius of the sensor configuration brings the SQUID sensors into range of the neuromagnetic signals of children's heads. Second, the smaller helmet allows full insertion of a child's head into the dewar. Full insertion is prevented in adult dewar helmets because of the smaller crown-to-shoulder distance in children. These two factors are fundamental in recording brain activity using MEG because neuromagnetic signals attenuate rapidly with distance.
Third, the customized child helmet aids in the symmetric positioning of the head and limits the freedom of movement of the child's head within the dewar. When used with a protocol that aligns the requirements of data collection with the motivational and behavioral capacities of children, these features significantly facilitate setup, positioning, and measurement of MEG signals. Neuroscience, Issue 36, Magnetoencephalography, Pediatrics, Brain Mapping, Language, Brain Development, Cognitive Neuroscience, Language Acquisition, Linguistics

Cortical Source Analysis of High-Density EEG Recordings in Children Institutions: UCL Institute of Child Health, University College London. EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG.
This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials Quantitative Assessment of Cortical Auditory-tactile Processing in Children with Disabilities Institutions: Vanderbilt University, Vanderbilt University, Vanderbilt University. Objective and easy measurement of sensory processing is extremely difficult in nonverbal or vulnerable pediatric patients. We developed a new methodology to quantitatively assess children's cortical processing of light touch, speech sounds, and the multisensory processing of the two stimuli, without requiring active subject participation or causing children discomfort. To accomplish this we developed a dual-channel, time- and strength-calibrated air-puff stimulator that allows both tactile stimulation and sham control. We combined this with the use of event-related potential methodology to allow for high temporal resolution of signals from the primary and secondary somatosensory cortices as well as higher-order processing. This methodology also allowed us to measure a multisensory response to auditory-tactile stimulation. Behavior, Issue 83, somatosensory, event related potential, auditory-tactile, multisensory, cortical response, child Using Visual and Narrative Methods to Achieve Fair Process in Clinical Care Institutions: Brandeis University, Brandeis University. The Institute of Medicine has targeted patient-centeredness as an important area of quality improvement. A major dimension of patient-centeredness is respect for patients' values, preferences, and expressed needs.
Yet specific approaches to gaining this understanding and translating it to quality care in the clinical setting are lacking. From a patient perspective, quality is not a simple concept but is best understood in terms of five dimensions: technical outcomes; decision-making efficiency; amenities and convenience; information and emotional support; and overall patient satisfaction. Failure to consider quality from this five-pronged perspective results in a focus on medical outcomes, without considering the processes central to quality from the patient's perspective and vital to achieving good outcomes. In this paper, we argue for applying the concept of fair process in clinical settings. Fair process involves using a collaborative approach to exploring diagnostic issues and treatments with patients, explaining the rationale for decisions, setting expectations about roles and responsibilities, and implementing a care plan and ongoing evaluation. Fair process opens the door to bringing patient expertise into the clinical setting and the work of developing health care goals and strategies. This paper provides a step-by-step illustration of an innovative visual approach, called photovoice or photo-elicitation, to achieve fair process in clinical work with acquired brain injury survivors and others living with chronic health conditions. Applying this visual tool and methodology in the clinical setting will enhance patient-provider communication; engage patients as partners in identifying challenges, strengths, goals, and strategies; and support evaluation of progress over time. Asking patients to bring visuals of their lives into the clinical interaction can help to illuminate gaps in clinical knowledge, forge better therapeutic relationships with patients living with chronic conditions such as brain injury, and identify patient-centered goals and possibilities for healing.
The process illustrated here can be used by clinicians (primary care physicians, rehabilitation therapists, neurologists, neuropsychologists, psychologists, and others) working with people living with chronic conditions such as acquired brain injury, mental illness, physical disabilities, HIV/AIDS, substance abuse, or post-traumatic stress, and by leaders of support groups for the types of patients described above and their family members or caregivers. Medicine, Issue 48, person-centered care, participatory visual methods, photovoice, photo-elicitation, narrative medicine, acquired brain injury, disability, rehabilitation, palliative care Making MR Imaging Child's Play - Pediatric Neuroimaging Protocol, Guidelines and Procedure Institutions: Children’s Hospital Boston, University of Zurich, Harvard, Harvard Medical School. Within the last decade there has been an increase in the use of structural and functional magnetic resonance imaging (fMRI) to investigate the neural basis of human perception, cognition and behavior 1,2. Moreover, this non-invasive imaging method has grown into a tool for clinicians and researchers to explore typical and atypical brain development. Although advances in neuroimaging tools and techniques are apparent, (f)MRI in young pediatric populations remains relatively infrequent 2. Practical as well as technical challenges when imaging children present clinicians and research teams with a unique set of problems 2,3. To name just a few, the child participants are challenged by a need for motivation, alertness and cooperation. Anxiety may be an additional factor to be addressed. Researchers or clinicians need to consider time constraints, movement restriction, scanner background noise and unfamiliarity with the MR scanner environment 2,4-10. A progressive use of functional and structural neuroimaging in younger age groups, however, could further add to our understanding of brain development.
As an example, several research groups are currently working towards early detection of developmental disorders, potentially even before children present associated behavioral characteristics, e.g. 11. Various strategies and techniques have been reported as a means to ensure comfort and cooperation of young children during neuroimaging sessions. Play therapy 12, behavioral approaches 13-18, simulation 19, the use of mock scanner areas 20,21, basic relaxation 22 and a combination of these techniques 23 have all been shown to improve the participant's compliance and thus MRI data quality. Even more importantly, these strategies have proven to increase the comfort of families and children involved 12. One of the main advantages of such techniques for clinical practice is the possibility of avoiding sedation or general anesthesia (GA) as a way to manage children's compliance during MR imaging sessions 19,20. In the current video report, we present a pediatric neuroimaging protocol with guidelines and procedures that have proven to be successful to date in young children. Neuroscience, Issue 29, fMRI, imaging, development, children, pediatric neuroimaging, cognitive development, magnetic resonance imaging, pediatric imaging protocol, patient preparation, mock scanner
Grade Levels: Grades 4-12 - appreciation of cultural diversity - understanding the difficulties of immigration - introduction to lost cultures - creating a context for the study of historical events, such as the Holocaust Sunshine State Standards: - Grades 3-5 - S.S.A.2.2.3, 5.2.1, 5.2.6, 5.2.8, 6.2.1, 6.2.4, 6.2.5, 6.3.1 - Grades 6-8 - S.S.A.2.3.1, 2.3.4, 2.3.6, 3.3.1, 3.3.2, 3.3.3, 5.3.1, 5.3.3 - Grades 9-12 - S.S.A.3.4.9, 5.4.2, 5.4.4, 5.4.5 (Although this activity deals with creating oral histories in general, many of the suggestions here would be appropriate for an interview with a Holocaust survivor, liberator, or rescuer.) Older people, with their first-hand knowledge of the past and their lifetime accumulation of skills, play a vital role in creating, preserving, and passing down cultural traditions from generation to generation. Through their stories, art, memories and keepsakes, older Americans embody and express the values and history of their families and communities. They provide an invaluable link to our past and they give meaning and direction to our future. You will find here some general guidelines for collecting oral history and conducting interviews, as well as a sample list of questions. Please adapt these to your own needs and circumstances. Be tactful when asking a person for an interview; for many, this is a difficult task. The first step in conducting an interview is to consider the equipment you will need. Tape recording and note-taking are the most common means of recording oral history. Tape recording is preferable. It allows you to capture your narrator's stories and experiences completely and accurately, as well as make a lasting record of his or her voice. At first the people you interview might feel a little uncomfortable with a tape recorder, but after the interview gets going they'll forget that it is even there.
Always keep a pen and paper with you during a tape-recorded interview so you can note important points or jot down follow-up questions that come to mind. Practice using the tape recorder before your interview so that you are familiar with how it works. If you are at ease with your equipment, it will help to put your informant at ease also. Another useful piece of equipment is a camera. It allows you to capture a visual record of the interview and is especially valuable if you are documenting an action. Sometimes a video camera works well. Procedures: The Interview Creative expressions of the elderly--their stories, memories, and keepsakes--are rooted in a lifetime of experience. When interviewing older relatives or neighbors, be sure to seek out not only what they can tell you about the past, but what they can tell you about life in the present. How have certain family traditions evolved? What holiday customs are practiced today that were not a generation ago? What can they tell you about the ecology of an area? The seasonal cycles of life? What are some of the skills they have acquired from their years of experience that can be taught to future generations? Remember that the anecdotes and stories you collect are valuable not necessarily because they represent the historical truth, but because they represent a truth--a particular way of looking at the world. Every interview is unique. Conduct your first interview with someone you feel very comfortable with, such as an older neighbor that you know well or a favorite relative. The interview should take place in a relaxed and comfortable atmosphere. The home of your narrator is usually the best place, but there may also be other settings that would be appropriate, such as a workplace, store, or park. Prior to Interview: Get permission for the interview in advance, and schedule a time that he or she is comfortable with. Make it clear if you plan to use a tape recorder or camera.
Make clear the purpose of the interview and what will happen to the tapes and/or notes. Is this an assignment? Are you planning to write a family history? Publish an article? Who will keep the tapes and photos? Do your homework--prepare a list of questions ahead of time. Make sure they are clear, concise and evocative. Avoid questions that elicit simple yes or no answers. During the interview, know which questions are key, but don't be tied to your list. The questions are meant simply as a framework. During the Interview: If you are using a recorder, tape a short introduction stating the place and date of the interview and the names of the persons involved. Begin with a question or a topic that you know will elicit a full reply from your narrator. You might ask about a story you once heard him or her tell. You may want to start with some basic biographical questions, such as "Where and when were you born?" These questions are easy to answer and can help break the ice. Show interest and listen carefully to what your informant is saying. Encourage him or her with nods and smiles. Take an active part in the conversation without dominating it. Be alert to what your narrator wants to talk about--don't be afraid to detour from your list of questions if he or she takes up a rich subject you hadn't even thought of. Bring props into play. Old photographs, family photo albums, scrapbooks, letters, heirlooms, and mementos help stimulate memories and trigger stories. Don't turn the tape recorder on and off while the interview is in progress. Not only are you likely to miss important information, but you will give your informant the impression that you think some of what he or she is saying isn't worth recording. Never run the recorder without your narrator's knowledge. Be sensitive to the needs of your informant. If he or she is getting tired, stop the interview and schedule another session.
Some possible Questions: Assessment: Presenting the Findings - How did your family come to where it is today? Are there migration stories? Stories about establishing the family business or farm, or moving to an urban neighborhood? Are there stories about how family members acquired their first plot of land or their first store? Did the family stay in one place or move around? - If your informant is a second- or third-generation immigrant, he or she might be asked: Do you know any stories about how your parents or grandparents came to America? Where did they first settle? How did they make a living? What language(s) did they speak and what do they speak now? - If the informant is a first-generation immigrant, you might ask him or her: Where were you born? Where did you grow up? What did you do for a living? Why did you leave your homeland? What possessions did you bring with you and why? What was the journey like? Which family members came along or stayed behind? Why? - What were some of your first impressions and early experiences in this country? Are there certain traditions or customs that you have made an effort to preserve? Why? Are there traditions that you have been forced to give up or adapt? - Are there stories about the history or origins of your family name? Has it undergone any changes? Are there any traditional first names or nicknames in your family? How did they come about? - What stories do you remember hearing from your parents and grandparents? What were some of their favorites? What stories do you enjoy telling most? Why? Are there stories about notorious characters in your family or town? Lost fortunes? Heroes and mischief makers? If you are interviewing your grandparents, ask them to tell you stories about what your parents were like when they were young. - How did your parents, grandparents, and other relatives come to meet and marry? Does your family have any special sayings or expressions? What are they? How did they originate?
- What languages do you speak? Did you speak a different language at home than at work or school? Are there any expressions, jokes, stories, celebrations where a certain language is always used? - Have any recipes been preserved and passed down in your family from generation to generation? What are their origins? Have they changed over the years? Do they hold any memories for you? Are there certain foods that are traditionally prepared for holidays and celebrations? Who makes them? - How and where are holidays traditionally celebrated in your family? What holidays are the most important? Are there special family traditions, customs, songs, foods? Has your family created its own celebrations? What are they? How did they come about? - Does your family hold reunions? When? Where? Who attends? How long have the reunions been going on? What activities take place? Are awards given out? Is there a central figure who is honored? Why? What sorts of stories are told at these events? - What family heirlooms or keepsakes and mementos do you possess? Why are they valuable to you? What is their history? How were they handed down? Are there any memories or stories connected with them? - Do you have any photo albums, scrapbooks, home movies? Who made them? When? Can you explain their contents? Who is pictured? What were these people like? What activities and events are documented? - Social History: - What were some of your experiences during the Great Depression, World War I, World War II, or the Vietnam War? How did these events or others affect you and your community? - What are some of your earliest childhood memories? What games did you play when you were a child? Did you sing any verses when you played games? What were they? What kinds of toys did you play with? Who made them? - What slang expressions did you use? Who were your sports and comic book heroes and why? Can you remember your favorite songs and music? What was school like? What chores did you have to do? 
Do you remember your first job? - What did you do in the evenings before there was radio and television? What kind of home entertainment was there? Was there storytelling? Were there games? Music and songs? - Describe some of the technological changes you have witnessed over the years. Have there been any changes in the tools and equipment of your trade or profession? What was it like in the days before refrigeration? Do you remember the first cars, tractors, airplanes, or electric lights? What were some of your or your family's experiences with these new inventions? - How have cultural traditions and customs changed or stayed the same in your ethnic, regional, and/or occupational community? - Local History: - Describe the farm community, the small town, or the urban neighborhood where you grew up. How has it changed over the years? What brought about these changes? What did people do for a living? What do they do now? Were there any community traditions or celebrations like church suppers, rodeos, parades? What were they like? How are they different or the same today? - Can you draw a map of the family home? Of your old neighborhood? What places stand out most in your mind and why? What were your neighbors like? What kinds of gatherings were there? Now that the interview is complete, what do you do with the information you have gathered? There are a number of ways to preserve and present your findings. You may simply want to index and/or transcribe your materials and store them where you and other members of your family or community can have easy access to them, such as with a family member, in a scrapbook, or at a local archive. If you interviewed your grandmother about traditional foods and recipes that have been passed down through the generations, you may want to put together a family recipe book illustrated with snapshots of grandmother cooking in the kitchen at holiday gatherings and family meals.
Or you may want to write a family history, compile an annotated family photo album, or make a scrapbook filled with keepsakes, mementos, old photos, reminiscences, and other items that embody and preserve your family heritage. If you have interviewed older people in your community about local traditions, customs, and history, you may want to write and produce a newsletter or magazine featuring the folkways of your local area. Making a grandparents book--a scrapbook or album that will reflect a family's own history as far back as the oldest member can recall. The whole family can join in gathering the material and the books as they take form will be full of surprises and discoveries for everyone. Allan Lichtman. Your Family History. New York: Random House, 1978 Comprehensive guide to conducting family history research. David Weitzman. My Backyard History Book. Boston: Little, Brown and Co., 1975. A how-to family and local history book specifically aimed at children. A Teacher's Guide to the Holocaust Produced by the Florida Center for Instructional Technology, College of Education, University of South Florida © 1997-2013.
NGSS are a set of standards designed to bring K-12 science education into the 21st century. They were developed by 26 states along with partners across the nation like the National Science Teachers Association (NSTA), the Carnegie Foundation for the Advancement of Teaching, and the American Association for the Advancement of Science (AAAS), and were released in 2013. Since the release, many states have voted to implement the standards in their school districts, including the recent addition of Michigan and Connecticut earlier this month (Michigan and Connecticut Adopt NGSS – Why It Matters So Much and Why Science Teachers Are Rejoicing). In this post we’ll talk about what they are, how they’ll affect your child’s education at school and at home, and where you can find more information about them. The big idea of NGSS is this: focus on a limited number of core science ideas over a long time period (kindergarten through 12th grade) to create a foundation for students to better understand the world we live in. By focusing on a limited number of concepts, these standards should help simplify what has become a convoluted, disparate, plug-and-chug style of science education sure to make any talented educator wince. The goal is for science classrooms across the country to be more project-focused, hands-on, and student-directed. Ultimately, classrooms should more closely mimic how scientists and engineers work in real life. These goals appear to be scientifically substantiated. In a 2014 randomized controlled trial funded by the NSF and published by SRI International (How Curriculum Materials Make a Difference for Next Generation Science Learning), students who were taught using NGSS-aligned Project-Based Inquiry Science™ (PBIS) scored 8% higher on end-of-year tests than students taught using traditional instruction and were significantly more engaged in classroom activities.
The study also found that, unlike with the traditional scientific method instruction used in most schools today, students taught using PBIS did well regardless of their gender or ethnicity. The results of the study went viral in education media online as many teachers excitedly shared the good news that there is, indeed, a better way to reach students in STEM (Can Project-Based Learning Close Gaps in Science Education?). What’s “The Big Challenge” and What’s NGSS Got To Do With It? To demonstrate how students work using project-based, NGSS-aligned curriculum, It’s About Time®’s (IAT) production team put together two short videos featuring PBIS teacher Sharon Hushek and her students at Ben Franklin Elementary School. In the first video, Hushek explains how students are introduced to “The Big Challenge” that they will be investigating (the way scientists do in real life). Take a look: The next two-minute video delves deeper into how Hushek’s class investigates factors that affect soil erosion. In this sample activity, students are expected to answer the following questions: — What is the relationship between particle size and erosion? — What is the relationship between slope of the land and erosion? How does this relate to the new NGSS standards? For starters, the video displays the hands-on nature of the new curriculum. Students are asked to investigate a scientific idea and then challenged to measure, quantify, and describe factors influencing the phenomenon. They are encouraged to think critically and deduce reasonable explanations themselves. Let’s look at how this lesson aligns with the new NGSS. The curriculum is divided into three dimensions: 1. Disciplinary core ideas – These are the tools students use to make sense of the world. They can be applied to describe the behavior of multiple phenomena. In the sample lesson above, the disciplinary core idea is: how do macroscopic particles interact with one another under different conditions? 2.
Science and engineering practices – This is related to real-world science and engineering. How do scientists approach erosion? What questions do they ask and what tools would they use to investigate? In the video above, students are asked to show how particle size impacts travel velocity and displacement. 3. Crosscutting concepts – Broadly speaking, crosscutting concepts are ideas that apply across scientific disciplines. There are seven in total: patterns; cause and effect; scale, proportion, and quantity; systems and system models; energy and matter; structure and function; and stability and change. In the sample lesson above, crosscutting concepts include stability and change (how does a landscape change with weather patterns?), cause and effect (what’s causing soil to move?), and systems (how does the composition of the soil, rain, and slope affect what’s happening as a whole?). How is this different from science taught in most classrooms today? Most teachers use the scientific method as the teaching model. Here’s what students are taught about it: 1st – Make an observation 2nd – Ask questions 3rd – Formulate a hypothesis 4th – Conduct an experiment 5th – Analyze data and draw conclusions After being presented with the method, students are often tested to identify the dependent and independent variables, analyze graphs and charts, or recite the steps verbatim. Everything exciting about science — the investigation, the hands-on problem solving, the real-world application — is removed. What’s left is boring. It’s neither theoretically compelling, nor practically applicable. From what I’ve seen working as a private science tutor in New York City, when science is learned in this way, students are turned away from the subject completely. Unlike the scientific method, the NGSS aim to excite students about science. Instead of presenting ideas abstractly, the new model adds context by emphasizing real-world application.
K-12 Science Education Continuity Another landmark feature of NGSS is their emphasis on the continuity of education. NGSS clearly outline goals from year to year. For example, in A Framework for K-12 Science Education, the manual from which the new NGSS standards were created, a second grade student studying physical sciences is expected to know: “Different kinds of matter exist (e.g., wood, metal, water), and many of them can be either solid or liquid, depending on temperature. Matter can be described and classified by its observable properties (e.g., visual, aural, textural), by its uses, and by whether it occurs naturally or is manufactured. Different properties are suited to different purposes. A great variety of objects can be built up from a small set of pieces (e.g., blocks, construction sets). Objects or samples of a substance can be weighed, and their size can be described and measured. (Boundary: volume is introduced only for liquid measure.)” And a fifth grade student is expected to know similar concepts but at a deeper level: “Matter of any type can be subdivided into particles that are too small to see, but even then the matter still exists and can be detected by other means (e.g., by weighing or by its effects on other objects). For example, a model showing that gases are made from matter particles that are too small to see and are moving freely around in space can explain many observations, including the inflation and shape of a balloon; the effects of air on larger particles or objects (e.g., leaves in wind, dust suspended in air); and the appearance of visible scale water droplets in condensation, fog, and, by extension, also in clouds or the contrails of a jet. The amount (weight) of matter is conserved when it changes form, even in transitions in which it seems to vanish (e.g., sugar in solution, evaporation in a closed container). Measurements of a variety of properties (e.g., hardness, reflectivity) can be used to identify particular materials.
(Boundary: At this grade level, mass and weight are not distinguished, and no attempt is made to define the unseen particles or explain the atomic-scale mechanism of evaporation and condensation.)” One of the most intriguing aspects is the continuity from kindergarten through 12th grade. Each year, the complexity of the ideas increases. Joe Krajcik (Director of the CREATE for STEM Institute, and one of the lead writers for the NGSS) says that “if [the students are] really going to develop understanding at a deep level, you have to develop it across time… At each step it’s getting a little bit richer, a little bit richer.” Instead of learning discrete new ideas, students are taught one core set of principles which is explored progressively deeper. Importantly, this applies as much from lesson to lesson as it does from year to year. For example, in the sample IAT lesson, students are challenged to work together to solve a problem. In lesson (1.1), they design a boat from aluminum foil to hold 6 keys and float for 20 seconds. In lesson (1.2), they discuss their boat design and iterate, collaborating with their fellow students to improve their boats' performance. In lesson (1.3), they read about the science of boat design and use what they learn to further improve their design. In the final lesson (1.4), they present their findings to the class, and questions and discussion are encouraged. Overall, these lessons center around the theme: How do scientists work together to solve problems? More than an arbitrary progression, they mimic real work. Students design, discuss, iterate, and then present their findings in much the same way that scientists conduct experiments, discuss their findings with their research group, improve their experiments, and then present their findings in a journal or to their peers. Reinforcing NGSS with Students At Home “Studies show that family involvement is one of the biggest predictors of success in school.
That’s why parental involvement is so important. Seek opportunities to explore science at home and in the community with your child. Encourage them to keep asking questions, just like scientists. Let them know you don’t have all the answers, and together try to find them.” In the sample boat building exercise, parents can discuss disciplinary core ideas like forces, gravity, buoyancy, and density (if unsure about these concepts, even a simple Google or YouTube search can lead to a fruitful discussion). They can engage their kids about science and engineering practices such as how scientists collaborate, iterate, and discuss the progression of the project. And they can talk about crosscutting concepts like stability and change, cause and effect, and systems. The integration of the three dimensions should help educators and parents collaborate more effectively. The increased attention to real-world application and the focus on deeper knowledge is a welcome change from the scattered and often diffuse practices which persist now. Learning science should be fun and engaging for students and parents. The health of the science education system depends on how well teachers, parents, administrators, and students work together. Parents can do family science activities like participating in citizen science adventures and incorporating fun activities during family vacations like geocaching. Try a few holiday science activities while students are off from school over the next couple of weeks (Holiday Science Projects). IAT often shares great activities for students to try at home (like STEM Wars: The F=m(a) Awakens). And sites like Science Buddies offer investigative, NGSS-aligned science activities for students of all ages. Additionally, the NSTA offers parents a guide to understanding and implementing NGSS with students at home (NGSS Parent Guide Q&A) as well as a great Resource Center with tools, books and tons of information.
When it comes to support for parents in understanding and implementing NGSS, there’s no shortage of resources and tools! Offering students real-world, inquiry-based science learning that they’ll enjoy can be done at school and at home! Latest posts by Joshua Manley: 3-Dimensional Learning in the NGSS: What Parents Need to Know - December 18, 2015; 10 Must-See Sessions at NSTA 2014 - April 2, 2014
A planet eleven times as big as Jupiter has been discovered orbiting a star at a distance of 650 astronomical units. That's 650 times as far from its star as Earth is from our own star. Never before have we discovered a planet orbiting a star at so great a distance. The planet is called HD 106906 b, and it's making astronomers scratch their heads. It doesn't conform to planetary formation theory, the usual run of which is that planets are mainly just asteroids that got caught in a nascent star's disc of gas and dust. In order for that to happen, they have to be close. 650 AU is not close enough. Planet HD 106906 b is exemplary of planets that defy this model. Yet it doesn't conform to binary star formation theory, either. In binary star formation, two masses of gas and dust collapse independently, develop gravity, and eventually form a mutual orbit around each other. It has been suggested that planet HD 106906 b could actually have started out that way. The problem with the hypothesis is that it lacks the necessary mass to borrow the rules, as it were, from binary star formation theory. Typically, binary stars differ in mass by no more than a ratio of 10 to 1. Planet HD 106906 b is about 100 times smaller than its star -- a very respectable proportion, but not star-like. It's a strapping young planet, about 13 million years old. Compare that to Earth's 4.5 billion years. The debris disc can even still be detected (as seen in the artist's rendering above). Astronomers will use this discovery to aid in reformulating theories about how stars and planets form in relationship to one another. The images were captured by the Earth-bound Magellan telescope -- in the Atacama Desert in Chile -- and its position was confirmed by the Hubble telescope.
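The mass ratios mentioned above can be checked with back-of-the-envelope arithmetic. This is an illustrative sketch only: the Jupiter-to-solar mass conversion is a standard approximate value, and the roughly solar-mass host star is an assumption for illustration, not a figure from the article.

```python
# Rough check of the star-to-planet mass ratio for HD 106906 b.
# Assumptions (not from the article): 1 Jupiter mass ~ 9.54e-4 solar masses,
# and a roughly solar-mass host star used purely for illustration.

JUPITER_IN_SOLAR_MASSES = 9.54e-4

planet_mass = 11 * JUPITER_IN_SOLAR_MASSES  # ~11 Jupiter masses
star_mass = 1.0                             # illustrative host-star mass

ratio = star_mass / planet_mass
print(f"Star-to-planet mass ratio: roughly {ratio:.0f} to 1")
```

The result lands near the article's "about 100 times smaller" figure, an order of magnitude beyond the roughly 10-to-1 mass ratios typical of binary star systems.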
The inner planets are the four planets closest to the Sun: Mercury, Venus, Earth and Mars. They are small, dense, and similar in size, made up mostly of rock and metal, each with a distinct internal structure. The composition and variable density of these bodies (higher on Mercury, lower on Mars) provide important clues to solar system formation. All four have solid surfaces, and three also have an atmosphere. The study of these four planets gives us information about geology beyond the Earth.
In the third article in this series on astronomy and the electromagnetic spectrum, learn about the exotic and powerful cosmic phenomena that astronomers investigate with X-ray and gamma-ray observatories, including the European Space Agency’s XMM-Newton and INTEGRAL missions. In the 1960s, the advent of the space age initiated the era of high-energy astronomy. For the first time, astronomers could see the Universe with X-ray and gamma-ray eyes. Electromagnetic (EM) radiation at these wavelengths is emitted by cosmic sources with extreme properties such as exceptionally high temperatures, extraordinarily high densities or remarkably strong magnetic fields. Ground-based observatories, however, had been unable to register these rays, which have wavelengths too short to penetrate Earth’s atmosphere (figure 1). It took the first space observatories to unveil this turbulent and ever-changing Universe. In just half a century, observations made at the highest energies have significantly changed our view of the cosmos. By studying the X-ray and gamma-ray sky, astronomers have discovered several new types of astronomical sources and have enhanced their knowledge of many other types of objects. To examine the Universe in the X- and gamma-ray range of the EM spectrumw1, the European Space Agency (ESA; see box) operates two missions: the XMM-Newton (X-rays) and INTEGRAL (X-rays and gamma rays) space observatories. The techniques used in X-ray and gamma-ray astronomy and by these two missions were introduced in the second article in this series (Mignone & Barnes, 2011b); this article provides an overview of what these missions have taught us, from the life of stars to the structure of the Universe. For an overview of the EM spectrum and its role in astronomy, see the first article in this series (Mignone & Barnes, 2011a). Stars are born when gravity causes huge clouds of gas and dust to collapse, fragment and form protostars. 
These protostars later grow into fully fledged stars when nuclear fusion ignites in their cores. How a star then continues to evolve depends on its mass, with massive stars destined to a shorter life and a more spectacular demise than their lower-mass counterparts (figure 2). It is the early and late stages of a star’s life cycle that are the most interesting for X-ray and gamma-ray astronomers. Because some very young stars shine brightly under X-rays, astronomers can detect many of them by looking at star-forming regions with X-ray telescopes such as XMM-Newton (figure 3). The most massive young stars release highly energetic radiation and extremely hot gas, which are observed at X-ray wavelengths and influence how other stars form in the surrounding area. Astronomers using XMM-Newton have detected bubbles of hot gas from young massive stars in many regions of the skyw2, including the Orion Nebula and the star-forming region NGC 346. This research feeds into our understanding of how young massive stars affect star formation around them – a hot topic in modern astrophysics. At the ends of their lives, massive stars explode as supernovae (as described in Székely & Benedekfi, 2007), heating the surrounding gas to extremely high temperatures and accelerating particles, such as electrons, to very high speeds. As a result, an abundance of X-rays and gamma rays are released (figure 4). Furthermore, many elements heavier than iron, such as lead, nickel and gold, are synthesised during supernova explosions (to learn more, see Rebusco et al., 2007). Some of these elements are radioactive and eventually decay into stable isotopes, producing gamma rays in the process. Astronomers using INTEGRAL have surveyed the Milky Way and found traces of the radioactive isotope aluminium-26. Just like archaeologists, they have delved into the history of our galaxy and performed a census of past supernovae. 
The results demonstrate that, in the Milky Way, supernovae occur on average once every 50 yearsw3. After a supernova explosion, all that remains of the massive star is an extremely compact and dense object – either a neutron star or a black hole. With such a huge mass squeezed into a restricted space, these remnants have exceptionally strong gravitational fields and exert an intense pull on nearby matter, but they are fairly difficult to detect. However, if the neutron star or black hole is part of a binary stellar system (two stars orbiting around a common centre of mass), it may start devouring matter from its companion star; the accreting matter then heats up to millions of degrees, emitting X-rays and gamma rays. This high-energy emission can be used to reveal the presence of a neutron star or black hole. These systems are called X-ray binaries (figure 5) and were discovered in the late 1960s via X-ray observations. Back then, neutron stars and black holes had only been predicted by theory, so these observations provided the first proof of their existence. Since then, several generations of space-based observatories have helped astronomers to learn more. XMM-Newton and INTEGRAL have studied many X-ray binaries (which may also release gamma rays), revealing important details about the physics of black holes and neutron stars. For example, gamma rays from Cygnus X-1, observed using INTEGRALw4, helped astronomers to better understand how matter is accreted via a disc onto this black hole and partly expelled in two symmetric jets. High-energy astronomers not only observe the birth and death of stars within the Milky Way and nearby galaxies, but also use X-rays and gamma rays to investigate the much more distant Universe – including super-massive black holes and clusters of galaxies. All large galaxies harbour super-massive black holes at their cores, with masses a few million to a few billion times that of the Sun.
Some galaxies, known as active galaxies, contain super-massive black holes that, unlike the one in the centre of the Milky Way, are active. Devouring matter from their surroundings, these black holes release high-energy radiation as well as powerful jets of highly energetic particles (figure 6). ESA’s XMM-Newton and INTEGRAL are thus ideal tools to hunt for active galaxies and to investigate the mechanisms that power them. Astronomers cannot see all the necessary details in more distant high-energy sources, so they also collect data from as many nearby active galaxies as possible. By combining data from close and distant galaxies, astronomers have figured out how super-massive black holes accrete matter via a disc, and how these discs may be surrounded by absorbing clouds of gasw5. On a still larger scale, galaxies tend to assemble in clusters of up to several thousand galaxies. These clusters are the largest structures in the Universe to be held together by gravity, and release a diffuse X-ray glow. This glow, first observed in the 1970s, revealed that the intergalactic space in a cluster contains an enormous amount of hot gas. Together with other observatories that probe the sky across the EM spectrum, XMM-Newton has observed hundreds of galaxy clusters (figure 7). These include a very distant cluster that is one of the earliest structures to have formed in the Universew6, just 3 billion years after the Big Bang. This may sound like a very long time, but it is less than one quarter of the Universe’s present age. Galaxy clusters are located in the densest knots of the cosmic web, the gigantic network of structure that makes up the Universe and consists mostly of invisible dark matterw7. Using XMM-Newton, astronomers have spotted matter where it is most densely concentrated, thus tracing the distribution of cosmic structure across the Universe (figure 8). From the birth of a star to the structure of the Universe – what next? 
X-ray and gamma-ray observatories, including ESA’s XMM-Newton and INTEGRAL, continue to keep a close watch on the ever-changing, high-energy sky, recording sudden violent outbursts of X-rays and gamma-rays. By continuing to unveil celestial wonders to astronomers, these remarkable space observatories are helping to solve the mysteries of our Universe. The European Space Agency (ESA)w8 is Europe’s gateway to space, organising programmes to find out more about Earth, its immediate space environment, our Solar System and the Universe, as well as to co-operate in the human exploration of space, to develop satellite-based technologies and services, and to promote European industries. The Directorate of Science and Robotic Exploration is devoted to ESA’s space science programme and to the robotic exploration of the Solar System. In the quest to understand the Universe, the stars and planets and the origins of life itself, ESA space science satellites peer into the depths of the cosmos and look at the furthest galaxies, study the Sun in unprecedented detail, and explore our planetary neighbours. ESA is a member of EIROforumw9, the publisher of Science in School. Find out how gamma rays from the Cygnus X-1 jets were observed with INTEGRAL ('INTEGRAL spots matter a millisecond from doom’). Learn about how Supergiant Fast X-ray Transients were observed by XMM-Newton (‘Neutron star caught feasting on clump of stellar matter’). All education materials produced by ESA are freely available to teachers in the 18 ESA member states. Many are translated into several European languages.
What is electricity? Electricity is a form of energy that makes heat and light. Electricity may also be referred to as “electrical energy.” Where does electricity begin? Electricity begins with the atom. Atoms are made up of protons, neutrons, and electrons. Electricity is created when an outside force causes electrons to move from atom to atom. The flow of electrons is called an “electrical current.” What causes the electrons to move? Voltage is the “outside force” that causes electrons to move. Voltage is potential energy. Potential energy has the ability to perform work. An example of potential energy is an axe being held above a piece of wood. If the axe is allowed to drop onto a piece of wood, then the wood would split. Notice the word “if” appears. Potential energy does work ONLY if it is allowed to do so. What is voltage? Voltage is the "outside force" that causes the electrons to move. Voltage is potential energy. Some characteristics of voltage are: - Voltage cannot be seen or heard. - Voltage is a push or force. - Voltage does nothing by itself. - Voltage has the potential to do work. - Voltage appears between two points. - Voltage is always there. What are the two kinds of electricity? Static electricity occurs when there is an imbalance of positively and negatively charged atoms. Electrons then jump from atom to atom, releasing energy. Two examples of static electricity are lightning and rubbing your feet on the carpet and then touching a doorknob. Current electricity is a constant flow of electrons. There are two kinds of current electricity: direct current (DC) and alternating current (AC). With direct current, electrons move in one direction. Batteries produce direct current. In alternating current, electrons flow in both directions. Power plants produce AC electric current. Alternating current (AC) is the type of electricity that BrightRidge distributes to you for use. What are conductors and insulators? 
Conductors are materials through which electricity flows easily. Examples of electrical conductors are copper, aluminum, and water. Insulators are materials that do not allow electricity to flow through easily. Some examples of insulators are rubber, glass, and plastic.
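The difference between direct and alternating current described above can be sketched numerically: DC keeps one sign (one direction of flow), while AC reverses sign many times per second. The 60 Hz mains frequency used in North America and the 1-amp peak are illustrative values, not from the text.

```python
import math

def ac_current(t, peak=1.0, freq=60):
    """Instantaneous AC current: a sine wave that repeatedly reverses direction."""
    return peak * math.sin(2 * math.pi * freq * t)

def dc_current(t, value=1.0):
    """DC current: a constant flow in one direction."""
    return value

# Sample one 60 Hz cycle (1/60 s): AC goes negative for half of it, DC never does.
samples = [ac_current(n / 600) for n in range(10)]
print(any(s < 0 for s in samples))                    # True: AC reverses direction
print(all(dc_current(n) > 0 for n in range(10)))      # True: DC flows one way
```

This is only a numeric illustration of "electrons flow in both directions" versus "electrons move in one direction"; real mains voltage and battery chemistry are more involved.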
Researchers at the University of North Carolina School of Medicine recently discovered what reactivates the herpes virus. They also found how brain cells are deceived into allowing the herpes virus to escape from the repressive ecosystem in neurons. About 90 percent of the population in the United States of America is living with HSV inside their brain cells. Under acute stress, the virus tends to leave the neurons and cause cold sores, eye infections, and in some cases, encephalitis. How can we reduce stress and prevent cold sores? Stress is said to be the most common trigger for cold sores. Studies have shown that when the neurons where HSV resides are under stress, cold sores eventually develop on the body. Herpes simplex virus (HSV) is found in about 90 percent of the American population and often shows its presence in cold sores, genital lesions, and eye infections. In rare cases, the virus may also cause encephalitis (inflammation of the brain), which has a 70 – 80 percent mortality rate if left untreated. Anna Cliffe, first and co-corresponding author of the study at the Department of Cell Biology and Physiology, said: “The proteins we’ve shown to be important for viral reactivation are almost exclusively found in neurons. So they do represent a good therapeutic target. We’ve known that stress triggers viral reactivation. We’ve now found how stress at the cellular level allows for viral reactivation.” The study’s results, published in the journal Cell Host and Microbe, were obtained using primary neurons from mice. Mohanish Deshmukh, the paper’s senior author and Professor of Cell Biology and Physiology at the University of North Carolina, said that he and his team were excited to discover the possibility that this stress–activation pathway also exists in humans. During the initial stages of the study, Cliffe and Deshmukh created an experimental system to force the herpes virus to enter the latent phase in primary mouse neurons in a dish before reactivating it. This allowed them to assess specific cellular protein pathways that might play a role in the reactivation of the virus. The researchers observed that the JNK protein pathway showed activity just before the HSV began to leave neurons. If the JNK pathway proves to be crucial for the reactivation of the virus in humans, it could be possible to create an effective treatment for herpes and other ailments closely related to this virus.
Soothing a Sensitive Tooth If every bite of ice cream or every sip of coffee gives your teeth a nasty jolt, then you know what it’s like to live with tooth sensitivity. At least one in every eight Americans (including kids) has sensitive teeth. Why does this happen to so many of us and what can we do about it? The Basics of Dental Anatomy It’s important to understand a little about dental anatomy when thinking about how tooth sensitivity works. The visible portion of the tooth (the crown) is made up of three layers: the outer tooth enamel layer (the hardest substance in the human body), the dentin layer (more like normal bone) and the dental pulp layer at the center (nerves and blood vessels). Sensitive Exposed Nerves The nerves at the center of each tooth sense what’s going on at the surface through thousands of microscopic tubules running through the dentin layer. If the enamel wears too thin, the tubules become exposed and the nerves in the teeth start feeling way more input than they’re supposed to, making temperature changes or even a sudden sweet or sour taste too much to handle. What Causes Sensitivity? Aside from enamel erosion, there are other things that cause sensitivity. Root exposure is one. Unlike the crown of the tooth, the root lacks the protective enamel layer. It relies mainly on gum tissue. Gum recession (often caused by teeth grinding or overbrushing) leaves the roots unprotected. Cavities or damage to a tooth like chips or fractures can also cause sensitivity, especially to hot or sweet things. Protecting Teeth From Sensitivity There are a few things we can do about sensitive teeth. Step one is to get rid of a hard-bristled toothbrush and buy a soft-bristled one instead. Soft bristles are enough to effectively clean away plaque, while hard bristles can damage the enamel and gum tissue even more. It’s also a good idea to switch to a toothpaste formulated for sensitive teeth. 
Cutting down on sugar intake and avoiding very acidic foods and drinks (soda is a major offender on both counts) will help as well. The Dentist Can Help If you’ve been dealing with tooth sensitivity, schedule an appointment so the dentist can discover the cause. Beyond what you can do to reduce the symptoms and strengthen your teeth and gums at home, the dentist can apply a fluoride varnish, prescribe a stronger desensitizing toothpaste if needed, or recommend a gum graft or dental restoration to repair any significant damage.
If the net force on an object is zero, can the object be moving? Acceleration is the change of velocity per unit time, so if there is no force, all we know is that the acceleration is zero. Therefore, the velocity is not changing. If the object was already moving, then it will just keep moving. So, yes, the object can be moving when there is no force applied to it. Note: "force" in this discussion is to be interpreted as net force. Net force is the vector sum of all forces acting on the object. Here, we have used Newton's 2nd law to show how it relates to his 1st law: Newton's First Law of Motion: I. Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it.
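Newton's second law makes the answer above concrete: with zero net force, the acceleration a = F/m is zero, so the velocity never changes. A minimal simulation (the 5 m/s initial speed, 2 kg mass, and time step are arbitrary illustrative values) shows the object simply coasting:

```python
def step(velocity, net_force, mass, dt):
    """Advance one time step using Newton's 2nd law: a = F / m."""
    acceleration = net_force / mass
    return velocity + acceleration * dt

v = 5.0  # the object is already moving at 5 m/s
for _ in range(100):
    v = step(v, net_force=0.0, mass=2.0, dt=0.1)
print(v)  # still 5.0: zero net force leaves the motion unchanged
```

Feeding in a nonzero net force instead would change the velocity on every step, which is exactly the content of the second law and the "unless an external force is applied" clause of the first.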
Graphene has been hailed as a wonder material since it was first isolated from graphite in 2004. Graphene is just a single atom thick but it is flexible, stronger than steel, and capable of efficiently conducting heat and electricity. However, widespread industrial adoption of graphene has so far been limited by the expense of producing it. Affordable graphene production could lead to a wide range of new technologies reaching the market, including synthetic skin capable of providing sensory feedback to people with limb prostheses. That may be set to change now that researchers at the University of Glasgow have found a way to produce large sheets of graphene, using the same cheap type of copper used to manufacture lithium-ion batteries found in many household devices. In a new paper published in the journal Scientific Reports, a team led by Dr Ravinder Dahiya explain how they have been able to produce large-area graphene around 100 times cheaper than ever before. Graphene is often produced by a process known as chemical vapour deposition, or CVD, which turns gaseous reactants into a film of graphene on a special surface known as a substrate. The research team used a similar process to create high-quality graphene across the surface of commercially-available copper foils of the type often used as the negative electrodes in lithium-ion batteries. The ultra-smooth surface of the copper provided an excellent bed for the graphene to form upon. They found that the graphene they produced offered a stark improvement in the electrical and optical performance of transistors which they made compared to similar materials produced from the older process. Dr Dahiya, of the University of Glasgow's School of Engineering, said: "The commercially-available copper we used in our process retails for around one dollar per square metre, compared to around $115 for a similar amount of the copper currently used in graphene production. 
This more expensive form of copper often requires preparation before it can be used, adding further to the cost of the process. Our process produces high-quality graphene at low cost, taking us one step closer to creating affordable new electronic devices with a wide range of applications, from the smart cities of the future to mobile healthcare. Much of my own research is in the field of synthetic skin. Graphene could help provide an ultraflexible, conductive surface which could provide people with prosthetics capable of providing sensation in a way that is impossible for even the most advanced prosthetics today. It's a very exciting discovery and we're keen to continue our research." The research was conducted by the University of Glasgow in partnership with colleagues at Bilkent University in Turkey. Source and top image: University of Glasgow
Entity Relationship Diagram (ER – Diagram) The entity-relationship diagram is a set of entities that describes the database through a diagram. It is also known as the ER diagram. The ER model defines the conceptual view of a database. It works around real-world entities and the associations among them. At the view level, the ER model is considered a good option for designing the database. An entity-relationship diagram consists of entities, attributes, and relationships. Short explanations of its components are as follows: An entity is a real-world object, either animate or inanimate, that can be easily identified. For e.g. in a school database, students, teachers, classes, and courses offered can be considered entities. An entity set is a collection of similar types of entities. An entity set may contain entities with attributes sharing similar values. Entities are represented by means of their properties, called attributes. All attributes have values. For e.g. a student entity may have name, class, and age as attributes. There exists a domain or range of values that can be assigned to each attribute. Another e.g. a student’s name cannot be a numeric value; it has to be alphabetic. A student’s age cannot be negative, etc. An association among entities is a relationship. For e.g. an employee works at a department, a student enrolls in a course. Here, works and enrolls are called relationships. Symbols used in Entity Relationship Diagram (ER – Diagram) PU 2016 Spring Q. No. 17 What is an ER Diagram? Draw. Ans: The solution to this question is shown in the image.
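The entity/attribute/relationship distinction above can be sketched in code. Python dataclasses are just one illustrative choice (not part of the ER model itself); the Student, Course, and Enrollment names mirror the examples in the text, and the age check illustrates the domain constraint mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Student:            # entity: an easily identified real-world object
    name: str             # attributes: properties with values
    age: int

    def __post_init__(self):
        # domain constraint from the text: an age cannot be negative
        if self.age < 0:
            raise ValueError("age cannot be negative")

@dataclass
class Course:             # another entity
    title: str

@dataclass
class Enrollment:         # relationship: an association among entities
    student: Student
    course: Course

s = Student("Asha", 16)   # hypothetical data for illustration
c = Course("Biology")
e = Enrollment(s, c)      # "a student enrolls in a course"
print(e.student.name, "enrolls in", e.course.title)
```

In a relational database the same design would typically become three tables, with the Enrollment table holding foreign keys to the other two.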
A domain name is the portion of a URL that identifies a specific website. It is part of the address used by the internet protocol. A DNS server is responsible for maintaining the DNS records of various domain names, and a zone file contains information about these names. The DNS servers are usually the same on all machines that connect to the internet, but there is flexibility in this, and different types of nameservers are available. Nameservers are also known as domain name servers or, more commonly, as DNS servers. The nameserver should be able to provide logical addresses for the client machines. These are needed in order to configure the DNS settings on the client machines, and a different nameserver can be used when the client machines are connected to a different network. A DNS server is programmed to accept requests from clients and respond to them with an answer message. To do so, the DNS server consults a list, or zone file, of names associated with the domain name that was requested. Each name in the zone file corresponds to an IP address. The DNS server and the zone files are usually updated periodically, through communication between the client machine and the DNS server. Updates can be manually initiated by the owner of the domain name or by an administrator who works on the DNS system. Some DNS providers provide automated updates to the zone files; however, the client machines must periodically read and update the zone files themselves. There are several types of name servers, and their purposes are diverse. For example, some name servers are configured to provide dynamic DNS services to a user, meaning that the name of a domain is resolved whenever a user types an address into an Internet browser. Another type of name server is a static name server; it can only be queried or registered by an IP address.
A third type of name server is an IP-based name server, which registers or queries only specific IP addresses for domain name registration purposes. Many businesses use domain name servers to facilitate smooth customer access to the company website. For example, a business may have several different websites that point to the same main website. If each of these websites were operated on its own domain name, then each would have its own individual name server, and customers would need to contact all of these servers for information regarding a particular domain name. One way to overcome the problem of too many server names is to implement a central DNS service that acts as a reverse DNS service. Instead of requesting information about a domain name, the DNS service returns a predetermined list of domains matching the IP address entered. In other words, when someone types in the domain name, the server returns a list of all the DNS servers that currently have information on that domain name. The DNS server then determines which name servers should contain the requested information. If there are no servers matching the domain name, the DNS server returns a DNS error message. If the domain name is not available, the DNS server returns an error code indicating that there is an error in the operation. Another way to avoid too many domain name servers is to use name shadowing. When you register a domain name with your domain name provider, you are given the option to register as many names as you want. In theory, you could use all of your domain name servers to register all of the names you desire. However, most providers do not do this, because they would have to add a fee to cover the extra services, and it would take them too long to add and remove names as needed. Instead, they allow you to register up to N name servers with them, and since each name has its own IP address, they can always return a match for the domain name in question.
Of course, the most important part of the name server is the IP address. This address, which is mapped to the relevant names within the network, is what the visitor actually reaches when they connect to your site. The domain name, of course, is only registered in your name server, while the IP address is visible to anyone on the internet; so even if many of the names you registered eventually fall into disuse, the IP address still remains the same, making it vulnerable to misuse. It is for this reason that you should use caution when deciding how many name servers you need for your website.
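The core idea described above, a zone file listing names that each correspond to an IP address, can be sketched as a simple lookup table. The hostnames and addresses below are made up for illustration, and real zone files also carry record types, TTLs, and other fields this sketch omits.

```python
# A toy zone file: each hostname maps to one IP address.
zone = {
    "www.example.com": "93.184.216.34",
    "mail.example.com": "203.0.113.7",
}

def resolve(name):
    """Answer a query the way a name server would: return the IP,
    or the standard error for a nonexistent domain."""
    try:
        return zone[name]
    except KeyError:
        return "NXDOMAIN"  # DNS's "no such domain" error response

print(resolve("www.example.com"))   # 93.184.216.34
print(resolve("ftp.example.com"))   # NXDOMAIN
```

The NXDOMAIN branch corresponds to the "DNS error message" the text mentions when no server holds information for the requested name.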
There is a difference between telling a story and writing a story. Help students learn and practice skills to gain experience and confidence in writing stories. Storytellers will become great story writers as they practice the step-by-step techniques of the writing process. Clearly stated examples and clever illustrations introduce each essential element of creative writing, gradually leading to the composition of an original, interesting, and properly developed story. Lessons include writing descriptive sentences, combining short choppy sentences, avoiding rambling sentences, punctuation, paragraphs and topic sentences, logical order, writing dialog, choosing a topic, selecting a title, organizing ideas, writing the story, and proofreading. Grade level: 3 – 8.
The discovery of a new Jurassic dinosaur in South Africa shows that the transition from small, two-legged creatures to the thunderously huge long-necked dinosaurs wasn’t a straightforward process. Introducing Ledumahadi mafube, an early Jurassic dinosaur whose name means “giant thunderclap at dawn” in the African Sesotho language. The partial skeleton of this quadrupedal prosauropod, a distant relative of the giant long-necked sauropods like Brontosaurus and Diplodocus, was found sticking out of a cliff near Clarens, a town that’s close to the border of South Africa and Lesotho. This beast thundered across the early Jurassic landscape around 200 million years ago, which is about 40 million to 50 million years before the giant sauropods made their appearance. The lead authors of the new study, Jonah Choiniere and Blair McPhee from the University of the Witwatersrand in South Africa, say the discovery of such a large early Jurassic creature came as a big surprise. “When Blair and I were doing the fieldwork, we had to jackhammer into the side of a cliff to dig out the femur,” Choiniere told Gizmodo. “We didn’t know what was in that cliff until we started digging, and as we began to realise just how big the thigh bone was, it dawned on us that this was truly a gigantic animal.” Choiniere and his colleagues estimate that the adult specimen was around 14 years old when it died, and that it had reached its full size of 10.89t. That’s a far cry from the 63.5t sauropods that would come later, but Ledumahadi was bigger than anything the preceding Triassic Period had to offer, like the recently discovered Ingentia prima, which weighed between 6.3t to 9t. For comparison, the largest African elephants weigh around 5.4t to 6.3t, so if Ledumahadi were around today it would easily be the largest terrestrial animal on Earth. 
This research team, a collaborative project involving scientists from South Africa, the UK, and Brazil, was also able to show that Ledumahadi was a quadruped, but it lacked the column-like legs that would appear later among the giant sauropods. The discovery of the four-legged Ledumahadi means sauropod evolution didn’t follow a straight, simple path, and that sauropods evolved four-legged postures at least twice. Steven Brusatte, a palaeontologist at the University of Edinburgh who wasn’t involved in the new study, said the discovery of Ledumahadi is providing new insights into how dinosaurs evolved their gigantic body size. As a relative of Brontosaurus and Diplodocus, he said it’s not surprising to learn that Ledumahadi was a big animal, but because it’s a distant relative of the sauropods, and based on its position within the dinosaur family tree, Ledumahadi must have evolved its huge size independently of the sauropods. “Not only that, but it has a very different limb posture from the sauropods. The sauropods were like Greek temples — they had columnar limbs, held straight and sturdy under the bodies to hold up their massive bulk,” Brusatte told Gizmodo. “But Ledumahadi did not have this posture. It had flexed limbs, like more primitive dinosaurs.” This means different groups of early dinosaurs were “experimenting” with different ways of becoming big during the first few tens of millions of years of their evolution, said Brusatte. Eventually, true sauropods stumbled upon their column-limbed design that was perfectly suited for supporting their large size, and “this is what enabled them to grow to sizes larger than Boeing 737s,” he said. “I thought it was pretty cool that Ledumahadi was larger than any contemporaneous sauropods,” Eric Gorscak, a postdoctoral research scientist at The Field Museum who wasn’t involved with the new study, told Gizmodo.
“However, it does hint that sauropods had to be doing something different than what these close non-sauropod relatives were doing, in order to push through these size limitations and evolve forms that would become much bigger.” In other words, it wasn’t sheer size alone that helped sauropods to become one of the most dominant dinosaur groups during the Jurassic. What those fortuitous adaptations might have been will have to be the subject of future work. “This new dinosaur suggests that the evolutionary transition from a small, bipedal [sauropod-like creature] to a large, quadrupedal sauropod was a bit more complex than previously thought,” said Gorscak. “Interestingly, these not-quite-sauropods went extinct nearly the same time as the first sauropods, so why did they go extinct but not sauropods, and how is that related to certain adaptations but not others?” In summary, this new study shows that sauropod-like dinosaurs evolved the ability to walk on four legs more than once, that the evolution of four-legged postures came before gigantic body size, that by 200 million years ago there were 10.89t dinosaurs walking the Earth, and that it wasn’t until the evolution of an elephant-like limb posture that true sauropods evolved and became so successful. Not bad for the discovery of a few bones, but the study is limited by virtue of this very fact. “We don’t yet have a complete skeleton, and we only have one representative for the whole species,” he told Gizmodo. “Many surprising things could come to light if Ledumahadi‘s fossils are found in other places.” The never-ending search for dinosaur fossils continues.
Inner Eurasia includes the lands dominated by the former Soviet Union, as well as Mongolia and parts of Xinjiang. These make up the heartland of the Eurasian continent. Inner Eurasia is a coherent unit of world history, for its societies faced ecological and military problems different from those of the rest of Eurasia and responded by evolving distinctive lifeways. Five dominant lifeways are described here, which have shaped the history of the entire region from prehistory to the present. Inner Eurasia is losing its distinctive features in the contemporary era. What makes Inner Eurasia so distinctive? For one, the absence of major barriers to military expansion makes it a natural unit of military and political history. Two of the three largest empires ever created, the Mongol empire and the Russian empire, emerged in Inner Eurasia. Furthermore, the region's low ecological and demographic productivity sharply distinguishes it from Outer Eurasia: Western and Southeast Europe, and Southwest, South, Southeast, and East Asia. Christian outlines five dominant adaptations that have shaped the region's history: (1) hunting during the Paleolithic, (2) the rise of relatively sedentary but increasingly militarized pastoralism during the Neolithic, followed by (3) the emergence of pastoral nomadism and pastoral nomadic states like that of the Mongols, (4) the growth of agrarian autocracies like those of Kievan Rus and Muscovy, and (5) the Soviet command economy. By the end of the twentieth century, however, Inner Eurasia may have lost its distinctiveness. Changes in industrial technology have begun to erase its ecological disadvantages, as abundant mineral and energy supplies compensate for low agricultural productivity. And changes in military technology have turned much of the world into the equivalent of the single, vast plain that used to distinguish Inner Eurasia. SOURCE: David Christian, "Inner Eurasia as a Unit in World History," Journal of World History 5:173-211. 
UPDATE: In the comments, Randy McDonald notes his review of The Siberian Curse on his LiveJournal blog. It suggests that environmental conditions in Inner Eurasia--at least in the more inhospitable areas--might marginalize the area in a global economy where capital is attracted to more easily exploitable areas. There certainly seems to have been a net population outflow from the less hospitable reaches of Inner Eurasia now that the gulag and deportations aren't supporting an artificial economy there.
Experiencing anxiety and worry throughout life is part of being human. But natural amounts of anxiety can easily give way to an anxiety disorder. It can be hard to distinguish between what is normal and what deserves attention as a mental health issue. One of the most common anxiety disorders is generalized anxiety disorder (GAD). Other common anxiety disorders are obsessive-compulsive disorder, panic disorder, post-traumatic stress disorder, phobias, and social anxiety disorder. Watching for the signs of anxiety can help you decide when it is the right time to seek professional care or support for dealing with your mental health. Contact AssuraSource online or at 844.821.4163 to learn about the benefits of an anxiety therapy program. 5 Signs of Anxiety Disorders 1. Pervasive Worry Pervasive worry is more or less a unifying thread across all anxiety disorders. It is a particular hallmark of GAD. The worry people experience with an anxiety disorder is usually disconnected from reality or else blown out of proportion. This type of pervasive worry is uncontrollable and not self-inflicted. In addition, the worry progresses to the point that it interferes with a person’s daily life and activities, making them less productive or even cognitively less effective. 2. Irritability Research indicates that anxiety and irritability are linked. In fact, adolescents diagnosed with GAD reported irritability levels twice those of the general population. 3. Sleep Problems Mental health practitioners are not yet sure if sleep problems lead to anxiety or if the reverse is true. Regardless, the issues often run together. What is clear is that treating an anxiety disorder typically improves sleep as well. 4. Panic Attacks Panic attacks are a sign of anxiety specific to panic disorder. It is possible to experience a panic attack with GAD, though the chances are lower than for those diagnosed with an outright panic disorder. 
The symptoms experienced during a panic attack include: - Elevated heartbeat - Restricted breathing - Chest tightness Panic attacks can be frightening experiences that happen randomly or are directly triggered by certain stimuli. 5. Social Avoidance This sign is largely coupled with social anxiety disorder. However, overlapping symptomatology across anxiety disorders means that social avoidance is a common sign of anxiety generally. For instance, someone with a panic disorder may self-isolate because they worry about suffering a panic attack in public. More specifically, social anxiety is related to: - Anxiety or fear in relation to social events or parties - Excessive worry about being judged by others - Crippling fear that you will embarrass yourself in a social situation - Total avoidance of social scenarios High-Quality Anxiety Therapy Options Therapy is one of the best treatment options for dealing with an anxiety disorder. Mental health practitioners help people make sense of their symptoms, officially diagnose the anxiety disorder in question, and work at addressing the underlying causes of the anxiety disorder. One of the foremost anxiety therapies is cognitive-behavioral therapy (CBT). CBT challenges negative thinking patterns that have ingrained themselves in a person’s brain and replaces them with healthier patterns. By changing these foundational thoughts that drive an anxiety disorder, CBT also aims to alter people’s behavior in ways that support improved mental health. Another anxiety therapy, employed in cases where a phobia is present, is exposure therapy. Exposure therapy uses systematic desensitization, the process of gradual exposure that begins with mild stimuli and builds toward stronger examples of the feared object or situation. Building up slowly over time is intended to increase tolerance for the stimuli until the phobia’s hold over someone’s mind is decreased. 
Explore Anxiety Therapy at AssuraSource Today At AssuraSource, our team is ready to help you or a loved one overcome signs of anxiety with our comprehensive therapy options. Learn about additional anxiety therapies by calling 844.821.4163 or visiting us online.
WHAT WAS THE HOLOCAUST? In 1933, the Jewish population of Europe stood at over nine million. Most European Jews lived in countries that Nazi Germany would occupy or influence during World War II. By 1945, the Germans and their collaborators killed nearly two out of every three European Jews as part of the “Final Solution,” the Nazi policy to murder the Jews of Europe. Although Jews, whom the Nazis deemed a priority danger to Germany, were the primary victims of Nazi racism, other victims included some 200,000 Roma (Gypsies). At least 200,000 mentally or physically disabled patients, mainly Germans, living in institutional settings, were murdered in the so-called Euthanasia Program. As Nazi tyranny spread across Europe, the Germans and their collaborators persecuted and murdered millions of other people. Between two and three million Soviet prisoners of war were murdered or died of starvation, disease, neglect, or maltreatment. The Germans targeted the non-Jewish Polish intelligentsia for killing, and deported millions of Polish and Soviet civilians for forced labour in Germany or in occupied Poland, where these individuals worked and often died under deplorable conditions. From the earliest years of the Nazi regime, German authorities persecuted homosexuals and others whose behaviour did not match prescribed social norms. German police officials targeted thousands of political opponents (including Communists, Socialists, and trade unionists) and religious dissidents (such as Jehovah’s Witnesses). Many of these individuals died as a result of incarceration and maltreatment. ADMINISTRATION OF THE “FINAL SOLUTION” In the early years of the Nazi regime, the National Socialist government established concentration camps to detain real and imagined political and ideological opponents. Increasingly in the years before the outbreak of war, SS and police officials incarcerated Jews, Roma, and other victims of ethnic and racial hatred in these camps. 
To concentrate and monitor the Jewish population as well as to facilitate later deportation of the Jews, the Germans and their collaborators created ghettos, transit camps, and forced-labour camps for Jews during the war years. The German authorities also established numerous forced-labour camps, both in the so-called Greater German Reich and in German-occupied territory, for non-Jews whose labour the Germans sought to exploit. Following the invasion of the Soviet Union in June 1941, Einsatzgruppen (mobile killing units) and, later, militarized battalions of Order Police officials, moved behind German lines to carry out mass-murder operations against Jews, Roma, and Soviet state and Communist Party officials. German SS and police units, supported by units of the Wehrmacht and the Waffen SS, murdered more than a million Jewish men, women, and children, and hundreds of thousands of others. Between 1941 and 1944, Nazi German authorities deported millions of Jews from Germany, from occupied territories, and from the countries of many of its Axis allies to ghettos and to killing centres, often called extermination camps, where they were murdered in specially developed gassing facilities. THE END OF THE HOLOCAUST In the final months of the war, SS guards moved camp inmates by train or on forced marches, often called “death marches,” in an attempt to prevent the Allied liberation of large numbers of prisoners. As Allied forces moved across Europe in a series of offensives against Germany, they began to encounter and liberate concentration camp prisoners, as well as prisoners en route by forced march from one camp to another. The marches continued until May 7, 1945, the day the German armed forces surrendered unconditionally to the Allies. For the western Allies, World War II officially ended in Europe on the next day, May 8 (V-E Day), while Soviet forces announced their “Victory Day” on May 9, 1945. 
In the aftermath of the Holocaust, many of the survivors found shelter in displaced persons (DP) camps administered by the Allied powers. Between 1948 and 1951, almost 700,000 Jews emigrated to Israel, including 136,000 Jewish displaced persons from Europe. Other Jewish DPs emigrated to the United States and other nations. The last DP camp closed in 1957. The crimes committed during the Holocaust devastated most European Jewish communities and eliminated hundreds of Jewish communities in occupied eastern Europe entirely.
Updated: Oct 30, 2020 According to a recent study by The Brookings Institution, 14 million children in the US alone are not getting enough to eat. This number is much higher than in 2018, or even during the recession of 2008. For those of us who have faced food insecurity ourselves, it may be easy to empathize with how these children are feeling. There are many short and long-term physical and mental effects of food insecurity in children that demonstrate how hunger is much more than the discomfort of an empty belly. In part 1 of this series, we’ll look at five of the primary short-term impacts. 1. Malnourishment is one common consequence of not being able to eat enough. Not getting enough nutrients means kids’ growing brains and bodies do not get what they need to make it through the day. This makes everything hard - learning, interacting with others, and performing normal daily activities, like playing or chores. Malnourishment also puts kids at higher risk of getting sick. 2. Fatigue is another result of lack of nutrition. For a hungry child, the exhaustion that results from simply keeping your body active when it’s not getting the nutrients it needs makes it hard to concentrate on school or have the energy to play with friends. Being tired also affects their mood, making them feel sick and irritable. For older children, fatigue also means they are less likely to participate in extracurricular activities that can positively impact their future, such as playing sports, joining clubs, and volunteering. 3. Behavioral issues are a common product of hunger, including depression, anxiety, aggression, and attention disorders. The American Psychological Association notes that “hungry children exhibited 7 to 12 times as many symptoms of conduct disorder (such as fighting, blaming others for problems, having trouble with a teacher, not listening to rules, stealing) than their at-risk or not-hungry peers... 
children classified as “hungry” show increased anxious, irritable, aggressive and oppositional behavior in comparison to peers.” 4. Embarrassment and shame happen when people of any age are stigmatized for asking for help or receiving social services. Children facing food insecurity are often embarrassed about their current situation, causing them to hide their needs or feel a sense of shame throughout their lives. Some children face teasing or discrimination by their peers for being poor, including embarrassment about their clothing and lunches. The effects of shame have immediate and lasting impacts on mental health and sense of self-worth. 5. Poor performance in school is one of the end results of all the negative effects above (American Psychological Association). Hunger is one of the most basic human needs. If that need is not met, the body focuses all of its attention on fixing that problem; education is simply less important for survival. The ability to concentrate and learn is even more difficult during the era of remote school, as parents are multi-tasking their own jobs while ensuring their children are attending online school. These are just a few of the short-term impacts that food insecurity and hunger can have on children. In part 2, we’ll look at how these can have long-term impacts on a child, which can last well into adulthood. The Snack Sack was founded to respond to the increased food insecurity due to the COVID-19 pandemic, and to better support children learning at home through healthy and fun snacks. Donate now. Brenna Kutch (they/them or she/her) is a bureaucratic activist who spends their time writing strongly worded opinions, joining human rights causes, and over-committing. You can read more or get in touch at www.brennakutch.com.
By Sudakshina Kundu Mookerjee Introduction: Science is a study of nature and the behaviour of natural phenomena. It is knowledge based on facts learnt through observation and experiment. The quest starts with systematically gathering information and amassing evidence in support of hypotheses that can be tested and finally made into laws that govern the natural phenomena. Therefore, science is an objective search for truth based on logic. Science promotes rationality, which helps in eradicating the curse of superstition and dogma from human society. There is no doubt that the scientific spirit needs to be cultivated in every member of any civilised society in order to make the world a better place to live in. Although science knows no boundaries of caste, creed, religion or gender, from time immemorial there has been little participation of women in science, not only in our country but worldwide. There are many reasons for this: social norms, societal structures, organizational patterns, and the relationship between workplace and home front have all contributed to the exclusion of women from higher education in general and the science disciplines in particular. Gender bias in higher education has been a great deterrent. Women were allowed much later into university education. The age-old universities in the West had kept their gates shut to female students, even after the Renaissance. In Imperial India the Universities of Calcutta and Bombay started admitting women from 1877-78 and 1883 respectively. Hence changes in societal mores and institutional structures were needed in order to promote the participation of women in education and science. This article will try to trace the history of science as a discipline and review the participation of women in scientific studies, against the backdrop of the world scenario as well as in India. 
Although the men who have laid down milestones in scientific discovery far outnumber their female counterparts, there are a number of women who have inspired future generations. This is also a tribute to these trailblazers. History of Science: India perhaps has the longest tradition of philosophical studies. Other than these philosophical works, there are other religious compendiums, works of statecraft and economic theory by Kautilya, literature, dramatics, and volumes on medicine and astronomy. However, there are very few that can be called purely scientific works laying down the laws of nature based on objective principles and logic. Since ancient times, education has been patronised by religious bodies. In India there were the “ashrams” of the sages and the “sanghas” or monasteries of the Buddhist monks. Buddha’s teachings were perhaps more secular in the sense that they focused more on living a life in this world, inspired by equality, justice and compassion. But still there is very little evidence of scientific enquiry in its truest sense. Although Ayurveda, the medical science, was well developed in ancient India, after the caste system became rigid during the middle of the first millennium of the common era, hands-on experimentation became limited, thus rendering scientific studies in the truest sense impossible. There is evidence of women in ancient India enjoying freedom of education irrespective of caste. But the training imparted was as per one’s capability, and for the majority of women this was limited to primary education or training at the basic level. Women scholars, though limited in number, were not uncommon. Philosophers like Gargi, Maitryee, Atreyee and Sulabha left behind their unparalleled scholarship. Lilavati, also known as Khana, was a legendary astronomer and Bhanumati was an ace mathematician. The Buddhists and Jainas encouraged women’s education. However, the study of science as a secular discipline had not yet been developed. 
One can find religious texts, literature and liturgy in various forms and languages the world over. In all other ancient civilisations, the priestly classes were the sole custodians of education. It was not much different in the classical or the medieval world. In ancient Greece, several scientific minds tried to kindle the fire of scientific enquiry, of whom Pythagoras and Archimedes need special mention for their secular inspirations. In India, Arya Bhatta (476-550 AD), the great mathematician of the Gupta era, and a few others excelled in scientific studies of the planets, or astronomy. However, the participation of women remained limited, although not completely absent. Merit Ptah practiced medicine in ancient Egypt around 2700-2500 BC. Hypatia of Alexandria (370-415 CE) was an eminent mathematician but unfortunately died at the hands of an irate Christian mob.[1,2] Perhaps the Arab world in the early Islamic period practiced some form of secular learning, which was soon lost during the Crusades. However, such higher levels of study were restricted to very few, and the participation of women was even less, barring a few exceptions like the cosmologist Abbess Hildegard von Bingen, who wrote on the natural world as well as the causes and cures of illness [1,2]. Common men led a humbler existence, learning their trades. There were guilds of tradesmen who trained their apprentices in their trades through hands-on training rather than scientific experimentation. They perfected their products through trial and error, not through technologies developed by scientific study. After Christianity spread across Europe and parts of Asia, the Church remained the sole custodian of education. Any secular study that contradicted established beliefs was severely condemned. Galileo Galilei (1564-1642), the father of modern physics, was regarded as a heretic by the Catholic Church and kept under house arrest, where he died from fever and heart palpitations. 
Women’s Role in Scientific Studies: Women were denied formal education even in Western civilisation, and what instruction they received was limited. History records a few accomplished women in the universities, but they were exceptions rather than the rule. Bettisia Gozzadini was a law graduate from Bologna University, Italy, in 1237. At the beginning of the fervent period of the Italian Renaissance in the fourteenth century, several women were admitted for higher education. Dorotea Bucca and Novella d’Andrea were both law graduates of the Bologna University in the fourteenth century. Luisa de Medrano (Philosophy), Isabella Losa (Theology), Francisca de Lebrija (Rhetoric) and Beatriz Galindo (Latin, and tutor to the Queen), all of Spain, were products of sixteenth-century Renaissance Europe. However, none studied science, which was yet to emerge as a discipline of study distinct from philosophy or natural philosophy [2-4]. Few men practiced science before Galileo; Leonardo da Vinci (1452-1519) was a notable exception. Artists like Leonardo and Michelangelo practiced anatomy, a scientific study, but it was done in secret. Galileo was the first to promote modern scientific studies by working in observational astronomy, applied science and technology. India by this time had lost much of its scientific temper of the ancient era [5-7]. The old Vedic schools of study had stagnated under the sole proprietorship of a few. The technologies in use were jealously guarded by the guilds and artisans. The scientific applications brought to India in medieval times became closely guarded secrets of their custodians. Knowledge could not percolate to the masses, nor could it expand through regular practice or discourse. Women’s education in medieval India was restricted to a few members of elite and aristocratic families, who were taught at home. 
There were a few schools for women in different parts of Muslim India, and royal ladies like Noor Jehan, Mumtaz Mahal, Jehanara, Zebunnissa and Zeenat-un-Nissa were highly accomplished. Bengal in the sixteenth to eighteenth centuries saw a number of women scholars like Hati Vidyalankar, Hatu Vidyalankar, Madhabi, Chandrabati, Priyambada and Anandamayee. There were Rava, Roha, Madhabi, Anulakshmi and Sasiprava in the South. However, the majority of women were uneducated, as early marriage, “purdah” and various social taboos came in the way of female education. So women’s participation in science was absent. However, health care was an area where women had some presence, not as qualified professionals but as quacks and, especially, midwives. India became exposed to modern science with the coming of the British, when a few universities were established for imparting higher education to the natives from the middle of the nineteenth century. The first medical college was started in Calcutta, Bengal, in the year 1835, followed by Madras Medical College. Gradually other medical colleges were started in other provinces, such as the medical college in Bombay named after the Governor Sir Robert Grant, in 1845. In 1854 the Agra Medical School was opened, which was preceded by the Ecole de Medicine de Pondicherry, established by the French in their colony in 1823. But all these medical schools and colleges catered exclusively to male students. Madras Medical College was the first to open its doors to women. In India the earliest participation of women in modern science was in the field of medicine. It took more than three decades for women to leave their mark in other branches of scientific study. This article will narrate the stories of these remarkable women, both in the world and in India, who have charted a course for their posterity. 
- Women in Science by Georgina Ferry, www.britannica.com - Women in Science: Historical Perspectives by Londa Schiebinger, www.stsci.edu - History and Philosophy of Women in Science: A Review Essay by Londa Schiebinger, Vol. 12, No. 2, Reconstructing the Academy (Winter, 1987), pp. 305-332, Published by: The University of Chicago Press, https://www.jstor.org/stable/3173988 - Ten Amazing Women in Science History You Really Should Know About by Alexander McNamara, 17th May 2019, www.sciencefocus.com - History of Science and Technology in the Indian Subcontinent, Wikipedia, the Free Encyclopaedia - History of Sciences in Ancient and Medieval India by S. N. S., Nature 192, 27 (1961) - Glimpses of science and technology in ancient India by B.V. Subbarayappa, Endeavour, Vol 6, Issue 4, 1982, pp. 177-182
Preschool activities worksheets provide parents with a solution to an overloaded schedule. Often the problem can be solved by using a curriculum that is already designed to prepare children for kindergarten. The real key to success with these preschool activities is to find activities that are specifically geared toward developing children's sense of awareness and reasoning. This will not only develop the skills necessary for kindergarten, but will also provide years of learning for children who otherwise have very little structured playtime. Because most children will be spending more time outside than in, it is important to find a way to introduce them to this type of activity. While it is a good idea to provide some indoor activities, there are many preschool activities that focus on outdoor exploration. While some activities may be specific to the weather, there are plenty of ways to encourage children to explore various aspects of nature. In addition to allowing children to explore the natural world, these activities can help prepare them for entering kindergarten without any formal teaching at all. It is entirely realistic for children who have spent many years mostly outdoors to still be fully literate in reading, writing, and basic arithmetic. And it is common for kids to spend as much as two or three hours a day outside. One of the most important things to remember about preschool activities is to spend time each day engaging with your children. Even if you do not feel like interacting with your child, doing so each day will allow you to get to know him or her and will prepare your child for kindergarten. Allowing children to engage in activities that are fun and memorable is one of the most important things you can do. 
If your child feels bored, this can cause problems later in the school year. Most of the preschool activities you will come across as you begin your search will be themed around building small things and seeing how they function. You can use a variety of woodworking projects to help your child build towers, towers that have up and down levels, things that hang from the ceiling, and simple sandcastles. Some preschool worksheets even encourage children to see how simple craft projects like these can become large and elaborate projects once they start building them out of larger items. Because children love to do crafts but cannot seem to spend enough time building and inventing with their hands, some preschool activities incorporate the use of paper products. Children can make sticks, small rocks, balls, and dolls out of paper, as well as paper boats, bicycles, and soccer balls. They can also learn how to use cardboard to make various shapes. Most preschool activities that involve working with materials require some sort of storage. In order to keep things organized, these worksheets provide labels and storage bins so that the children can keep all the supplies they need ready and on hand. This keeps things more organized and will save parents a great deal of time and money in the long run. In addition to finding great times for spending time together, it is important to make sure that the children are learning and developing at the same time. There are many creative and fun preschool activities that promote social and intellectual development. By providing activities that encourage interaction and learning, children will grow at an accelerated rate.
An Introduction to Ground-Penetrating Radar (GPR) In the world of geophysics, ground-penetrating radar is a special technology that is used to locate objects that are buried underground without having to dig down into the soil or dirt. It is commonly used for locating underground utilities like gas lines, water or sewage pipes, telecom or fiber-optic cables, and electrical cables. This method works by sending radar pulses of electromagnetic energy into the ground. A special device monitors the signal that is reflected back, using the information it gathers to map the location of objects or structures underground. GPR technology is currently being used throughout the world by cities, civil engineers, architects, professional surveyors, and others. Its popularity can be attributed to the benefits that it provides. A Non-Destructive Solution Ground-penetrating radar eliminates the need to dig down into the ground. That means that the surrounding area doesn’t need to be damaged to determine where utilities are buried. Mapping buried utilities using a GPR survey provides incredibly accurate results while at the same time reducing a lot of the problems associated with traditional utility location methods. This technology not only eliminates the need to damage the surrounding area by digging up the dirt but also keeps disruptions to a minimum. The information that is gathered can be used when planning projects involving utilities since it provides details about both the depth and location of existing underground services. Capable Of Detecting Non-Metallic And Metallic Items Unlike metal detectors, which can only detect metallic objects, GPR can also be used to locate non-metallic underground utilities. It is capable of locating objects that are made out of ceramic, concrete, plastic, fiber-optic, or other non-metallic materials. It can also be used to identify areas where the soil has been disturbed or where there are empty spaces or culverts. 
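As a rough illustration of how a reflected pulse is turned into a depth estimate (a sketch added here, not part of the original text): the radar wave travels at the speed of light divided by the square root of the soil's relative permittivity, and the pulse covers the distance to the target twice, down and back. The soil permittivity and travel time below are hypothetical example values.

```python
# Estimating reflector depth from a GPR two-way travel time.
# The relative permittivity of the soil must be known or assumed.
C = 299_792_458.0  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_s: float, rel_permittivity: float) -> float:
    """Depth of a reflector given two-way travel time and soil permittivity."""
    velocity = C / rel_permittivity ** 0.5  # wave speed in the soil medium
    return velocity * two_way_time_s / 2    # halve: pulse travels down and back

# Example: a reflection arriving 40 ns after transmission, in soil with
# relative permittivity 9 (wave speed ~1e8 m/s), puts the target near 2 m deep.
print(round(gpr_depth(40e-9, 9.0), 2))
```

In practice the permittivity varies with soil type and moisture, which is why GPR surveys are calibrated against targets of known depth.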
A Low-Cost Solution That Saves Time Locating buried utilities using ground-penetrating radar decreases the likelihood of workers getting injured. Accidents involving buried utilities are far less likely to occur. This can help keep projects on track, saving both money and time. Easy To Use Today’s GPR machines are intuitive to operate, allowing anyone who is properly trained to use them for locating buried utilities. This makes ground-penetrating radar one of the best choices when it comes to utility mapping.
When the first dinosaur bone was described in 1676, it was thought to come from an elephant or perhaps a giant. Over a century later, scientists realised such fossils came from a creature they named Megalosaurus, portrayed as a sort of stocky, overgrown lizard. Then, in 1842, leading anatomist Richard Owen recognised Megalosaurus as part of a whole new group of animals, which he named Dinosauria, or “Terrible Lizards”. Since then, around 700 different dinosaur species have been described, with more found every month. Our ideas about dinosaurs have also changed radically. The dinosaurs we know today are very different from the ones in the books you may have read as a child. [Image: 3D artist's rendering of Megalosaurus. Elenarts/Shutterstock] Myth 1: Dinosaurs Were All Big The name dinosaur tends to evoke images of giants – and certainly many were very large. Tyrannosaurus rex was around 12 metres long and weighed more than five tonnes, the size of an elephant, and it probably wasn’t even the biggest carnivore. Long-necked, plant-eating sauropods grew to titanic proportions. The enormous Argentinosaurus is known from just a few bones, but its size has been estimated at 30 metres in length and 80 tonnes in weight. That’s larger than any living land mammal and all but the largest whales. And dinosaurs are unique here. No other group of land animals before or since was able to grow as large. But not all dinosaurs were giants. The horned dinosaur Protoceratops was the size of a sheep. Velociraptor was the size of a golden retriever and had to be scaled up for Jurassic Park to make it more terrifying. Recent years have seen an explosion in the number of small species discovered, such as the cat-sized raptor Hesperonychus, the rabbit-sized plant-eater Tianyulong, and the quail-sized insect-eater Parvicursor. The smaller species were probably more common than their giant cousins. It’s just that the massive bones of a T. 
rex are more likely to have been preserved and a lot easier to spot in the field.
Where does a dividend that has been paid appear in the Balance Sheet?

When a dividend is paid it is subtracted from the profits. In other words, it shows up in the balance sheet as a subtraction from the retained earnings (or reserves) of the company. (Strictly speaking, dividends are subtracted first, so they never appear as reserves as such.) If a dividend is liable to be paid, then it must appear as a current liability. When the dividend is paid, the cash of the company is reduced by the amount of the dividend.
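The movement described above can be sketched in a few lines of code. This is an illustrative sketch only: the account names and figures below are invented, not taken from the text.

```python
# Hypothetical sketch of how a dividend moves through the balance sheet:
# declared -> subtracted from retained earnings, shown as a current liability;
# paid -> cash and the liability are both reduced.
balance_sheet = {
    "cash": 10_000,              # current asset
    "retained_earnings": 4_000,  # reserves
    "dividend_payable": 0,       # current liability
}

def declare_dividend(bs, amount):
    """Declaring the dividend reduces reserves and creates a liability."""
    bs["retained_earnings"] -= amount
    bs["dividend_payable"] += amount

def pay_dividend(bs, amount):
    """Paying the dividend reduces cash and extinguishes the liability."""
    bs["cash"] -= amount
    bs["dividend_payable"] -= amount

declare_dividend(balance_sheet, 1_000)
pay_dividend(balance_sheet, 1_000)
# Net effect: retained earnings and cash are each lower by the dividend.
```

Note how the liability exists only between declaration and payment, which is why an unpaid dividend appears under current liabilities.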
What is the Alamo?

“The Alamo” is the popular name of an old Spanish mission in San Antonio, Texas. The Alamo is historically important because it was the site of a major battle in the Texan war for independence from Mexico. In 1835, American settlers and others in Texas (which was then part of a state of Mexico) rebelled against the central Mexican government. Some of the Texan rebels besieged the Alamo and the mission was eventually surrendered to them by the Mexican forces in December of 1835. In March of the next year, Mexican forces attacked the Alamo. The Texans defending the mission were eventually killed to the last man. The battle at the Alamo became an important symbol of the determination of the Texan rebels and their desire for independence from Mexico.

The Alamo is a Spanish mission, built in 1718 in what became the city of San Antonio. In 1836, the citizens of the then Mexican state of Coahuila y Tejas, tired of the dictatorial rule of Mexican president Antonio Lopez de Santa Anna, began a fight for independence, known as the Texas Revolution. In March of that year, a battle between an estimated 180-200 members of a ragtag militia (known as "Texians") and 4,000 Mexican troops took place at the Alamo site. The result of the battle was a Mexican rout; all Texian defenders were killed. However, it is considered a turning point in the Texas Revolution in that it tied up Mexican forces long enough to allow the supreme commander of the Texian army, General Sam Houston, sufficient time to organize his forces; Santa Anna was defeated the following month and Texas gained its independence.
TPRS – Teaching Proficiency Through Reading and Storytelling

TPRS was developed by Blaine Ray et al. as a take-off from Asher’s TPR, adding many of the components that are central to all CI work. The bare bones of TPRS are:
- telling a story - or, asking a story
- focused on 4 new “structures” which are placed visibly on the board with English meanings. The teacher points them out and goes over them before starting.
- Structures may be new vocabulary or a grammatical structure, e.g. the infinitive
- In asking/telling a story, the teacher circles encounters of new words by using a basic pattern where X represents the new structure.
○ Students, the character did X. (ohhhhhhhhh!)
○ Students, did the character do X? (yes!)
○ Students, did the character do X or Y? (X!)
○ Students, did the character do Y? (no!)
○ Of course not! How ridiculous! The character didn’t do Y. Everyone knows that the character didn’t do Y. The character did X!
○ Who did X?
○ Where did the character do X?
○ With whom did the character do X?
○ Why did the character do X?
○ Did anyone else do X?
○ Any additional questions about the story so far that allow additional repetitions of X.
○ Note: by circling a new structure, the teacher has just obtained 10 or more repetitions of this word/structure, and students will continue to hear it more in the unfolding story.
○ Note: circling as described here can and should be used in almost every form of CI described here. Circling is basic practice in CI because it so easily allows us to give understandable messages in the target language.

TPRS was originally developed by Blaine Ray and Contee Seely. Their book and other materials can be found here.
- It is a type of machine learning in which one guides the system by tagging the output.
- For example, a supervised machine learning system that can learn which emails are ‘spam’ and which are ‘not spam’ will have its input data tagged with this classification, to help the system learn the characteristics or parameters of ‘spam’ emails and distinguish them from ‘not spam’ emails.
- Just as the three-year-old learns the difference between a ‘block’ and a ‘soft toy’, the supervised machine learning system learns which email is ‘spam’ and which is ‘not spam’.
- Now, instead of telling the child which toy to put in which box, you reward the child with a ‘big hug’ when it makes the right choice and make a ‘sad face’ when it makes the wrong one (e.g., block in the soft toy box or soft toy in the block box). This is the essence of reinforcement learning.
- Based on your problem domain and the availability of data, do you know which type of machine learning system you want to build?

Continue reading “Demystifying machine learning part 2: Supervised, unsupervised, and reinforcement learning”
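The spam example above can be sketched as a toy supervised learner. The tiny training set and the word-count scoring rule below are invented for illustration; a real system would use a proper statistical model, but the principle is the same: tagged examples teach the parameters.

```python
# Toy supervised learning sketch: labeled ("tagged") emails teach a
# keyword-frequency model to separate 'spam' from 'not spam'.
from collections import Counter

training_data = [
    ("win a free prize now", "spam"),
    ("free money claim your prize", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("lunch with the project team", "not spam"),
]

# Learn word counts per label -- the "parameters" the text refers to.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    """Score a new email by how often its words appeared under each tag."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)
```

For instance, `classify("claim your free prize")` lands on `'spam'` because those words were seen only in tagged spam examples.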
Hello ICTM members! My name is Angie Shindelar and I serve on the ICTM Board as the Vice President for Elementary. I am currently a Math Consultant for Green Hills AEA in southwest Iowa. I taught elementary and middle school math at Nodaway Valley CSD for many years. While I am a huge Iowa State fan, I am a UNI alum with a BA in Elementary Education and a MA in Middle Grade Mathematics Teaching. On game day I just give in and cheer for both schools when they play one another. That always gets me some interesting looks. Previously, in the spring newsletter, I wrote about basic fact fluency. You can read that article here if you missed it. In this article, I continue with the basic fact fluency theme by asking readers to consider the difference between memorization and automaticity and why the distinction is important. The Iowa Core Math Standards specifically address basic fact fluency with one standard for each grade level, K-3. Examining these standards across K-3 reveals how the learning progression develops. It is important to note there is specific language indicating students should learn strategies for basic facts and work incrementally toward the fluency expectations. In these standards, the language “know from memory” is used. Often this is interpreted as an expectation to memorize. With this interpretation the instructional focus may emphasize time as a measure: the number of facts a student can retrieve in a short amount of time becomes the measure of fluency. Sadly, for many students, fluency is short-lived. The memorized facts are not retained over time, and students either have to re-memorize them or fall back on less efficient strategies like counting. “So, are you saying students don’t need to learn basic facts?” Not at all. Anyone that has taught in the later elementary and middle school grades can describe what a nightmare it is for students that do not know their basic facts.
My previous article and this one have been written to consider more effective basic fact instruction and practice. The end goal has not changed. What might instruction look like if the expectation for learning basic facts were to develop solid mental strategies, reaching automaticity over time? Automaticity, in this case, means students have learned and practiced a mental strategy for a fact enough times that it is automatically known. A critical point here is that even if, over time, the basic fact cannot be recalled automatically, the student still has a solid mental strategy that is quick and efficient. The tendency to guess or use a less efficient strategy like counting is rarely seen. My work as a math consultant has provided opportunities to focus on basic fact instruction with many elementary teachers and students. These teachers have numerous anecdotes describing how students’ number sense has developed and grown by focusing basic fact instruction on building automaticity through mental strategies. “So, I shouldn’t just expect a student to tell me 7 + 8 = 15?” Yes, we want students to be able to do that! Of course!! However, it’s how we get them there that makes the difference in the long term. Consider five scenarios for a student that is solving 7 + 8:
1) Has it memorized now, but can’t recall consistently over time;
2) Not memorized, so counts on from either addend;
3) Knows automatically from lots of practice with a mental strategy;
a) 7 + 7 = 14, so 1 more = 15, or
b) 8 + 2 = 10, so 5 more = 15, or
c) 7 + 3 = 10, so 5 more = 15
4) Is getting close to automaticity and only hesitates briefly to think through one of the strategies listed above.
5) Has learned a couple different ways to solve mentally and is practicing regularly with games and activities to become more automatic.
There might not seem to be a big difference between memorization (#1) and automaticity (#3). After all, if you’ve reached automaticity isn’t that the same thing as memorized?
In both cases, the student knows the fact. The difference is the process. Putting the emphasis on learning a strategy and practicing it until automaticity is reached is a process that will develop a strong neural pathway and move the fact into long-term memory. Memorization, for many students, does not provide enough experience and development of number sense to move the fact into long-term memory. Basic fact fluency has always been important for elementary students to achieve. The Iowa Core Math Standards have given us, as teachers, a lot to think about. As I wrap up this article I am still thinking about another word that is used in the basic facts standards, “fluency.” It is often considered to be synonymous with memorization. I have a feeling there’s a third article on basic fact fluency brewing. Stay tuned. I would love to hear any of your thoughts around basic fact fluency and any other topics of interest for elementary math. My email is [email protected].
M106 at a glance: in the constellation Canes Venatici; distance 24 million light-years; black hole mass 24 million to 38 million times the mass of the Sun; size greater than the distance from Earth to the Sun.

In the early 1940s, astronomer Carl Seyfert discovered that some spiral galaxies are different from the others. Their cores are much brighter, and they are populated by hot, glowing clouds of hydrogen, helium, and other elements. A great power source inhabited the cores of these galaxies, but with the technology of the day, Seyfert and other astronomers couldn't determine what it was. Today, astronomers know that the power source is a disk of hot gas around a supermassive black hole. And one of the nearest of these "Seyfert galaxies" is M106, at a distance of just 24 million light-years. Astronomers have used radio telescopes to draw a detailed map of the galaxy's accretion disk. Water molecules at the edge of the disk are pumped up by the disk's energy, creating bright spots known as masers. The masers trace the disk's size (about two light-years in diameter) and its motion around the central black hole (speeds of about one million miles per hour at the outer edge of the disk). The masers also show that the disk is warped like the brim of a hat, with one side turned up a little, and the other turned down. From the masers and the motions of stars near the core, astronomers have measured the mass of the black hole at roughly 24 million to 38 million times the mass of the Sun. Magnetic fields generated by the rapidly spinning disk accelerate some of its hot gas to almost the speed of light and shoot it back into space in the form of two jets, which produce radio waves and other forms of energy. The jets shoot out into space from the black hole's poles, so they are perpendicular to the plane of the disk. This document was last modified: January 10, 2011.
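As a rough check on the figures quoted above, a Keplerian back-of-the-envelope estimate (assuming circular orbits and using only the article's round numbers) recovers the right order of magnitude for the black hole's mass:

```python
# Rough Keplerian estimate: maser speed ~1 million mph at the edge of a
# disk ~2 light-years across implies a central mass M ~ v^2 * r / G.
# This is a sketch from round numbers, not the measured maser-mapping value.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
LIGHT_YEAR = 9.461e15    # metres

v = 1e6 * 0.44704        # 1 million mph converted to m/s
r = 1.0 * LIGHT_YEAR     # radius = half of the 2 ly disk diameter

mass_kg = v**2 * r / G
mass_solar = mass_kg / M_SUN   # comes out around 10^7 solar masses
```

The estimate lands around ten million solar masses, the same order of magnitude as the 24-38 million measured from the masers and stellar motions; the gap is unsurprising given the rounded speed and the disk's warp.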
Carry out the following activities (in this blog post) - you should be able to understand what's in each one.
- Activity 1 - read the examples and try to understand how the scale is represented on the graph paper
- Activity 2 - to attempt; it will be used for discussion in the next lesson

(A) Knowing the Graph Paper
Have a piece of graph paper and a ruler with you. Try to figure out the explanation in the video clip on your graph paper and ruler. Use your ruler to measure the size of each "large" square.

(B) Reading the Scale
Watch the video clip before going through the slides.

(C) Marking the Scale on the Paper
The following video clips show how to mark the axes when given the scale. Watch the clip before completing Activity 2 Handout.

(D) Checking My Understanding
We will attempt this activity in class to check how much you know...
Debit is an accounting and bookkeeping term that comes from the Latin word debere which means “to owe.” The opposite of a debit is a credit. Debit is abbreviated Dr while credit is abbreviated Cr. A debit can be either a positive or negative entry to an account depending on what type of account is being debited. Asset and expense accounts increase in value when debited, whereas liability, capital, and revenue accounts decrease in value when debited. Debt is that which is owed. People or organisations often enter into agreements to borrow something. Both parties must agree on some standard of deferred payment, most usually a sum of money denominated as units of a currency, but sometimes a like good. For instance, one may borrow shares, in which case, one may pay for them later with the shares, plus a premium for the borrowing privilege, or the sum of money required to buy them in the market at that time. There are numerous types of debt obligations. They include loans, bonds, mortgages, promissory notes, and debentures. A budget deficit occurs when an entity (often a government) spends more money than it takes in. The opposite is a budget surplus. Depreciation is a decrease in the value of an asset, caused by wear and tear or by obsolescence. In accounting, the act of depreciating an asset is also supposed to create a reserve for the replacement of the asset. The use of depreciation affects a company’s (or an individual’s) financial statements, and, more importantly to them, their taxes. A dividend is the distribution of profits to a company’s shareholders. The primary purpose of any business is to create profit for its owners, and the dividend is the most important way the business fulfills this mission. When a company earns a profit, some of this money is typically reinvested in the business and called retained earnings, and some of it can be paid to its shareholders as a dividend.
Paying dividends reduces the amount of cash available to the business, but the distribution of profit to the owners is, after all, the purpose of the business. Double-entry book-keeping is the standard accounting practice for recording financial transactions. It was first codified in print by Luca Pacioli, a close friend of Leonardo da Vinci, in his 1494 treatise Summa de Arithmetica; the practice itself was already in use among Italian merchants. The system is based on the concept that a business can be described by a number of different variables or accounts, each describing an aspect of the business in monetary terms. Every transaction has a ‘dual effect’—increasing one aspect and decreasing another, in such a way that all of the different variables always sum to zero.
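The ‘dual effect’ can be sketched in a few lines. The account names here are illustrative, with debits recorded as positive entries and credits as negative ones, matching the sign convention described under ‘Debit’ above.

```python
# Sketch of double-entry book-keeping: every transaction posts equal and
# opposite amounts, so the ledger's entries always sum to zero.
ledger = {"cash": 0, "sales": 0, "equipment": 0}

def post(debit_account, credit_account, amount):
    """Record one transaction as a matched debit/credit pair."""
    ledger[debit_account] += amount   # debit: positive entry
    ledger[credit_account] -= amount  # credit: negative entry

post("cash", "sales", 500)        # a cash sale
post("equipment", "cash", 200)    # buying equipment with cash

# The defining invariant of double-entry book-keeping:
assert sum(ledger.values()) == 0
```

Because every posting is paired, the zero-sum invariant holds after any sequence of transactions; a nonzero sum immediately signals a recording error.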
Guided Learning Projects / Tutorials

A guided learning project, also known as a guided project or just a tutorial, is focused on a specific, achievable skill or objective. Project-based learning is often another way to refer to learning with tutorials. 💎 A good guided learning project will result in something you can actually show to someone else and call your own. You can then use the results to expand on your own learning projects. [This is specifically why tutorials on Codecademy are so horribly bad — even if they are popular and free. freeCodeCamp, on the other hand, has you do almost everything on your own projects, which you have to use and show after completion.]

Similarity to Tutorials

A tutorial or tute is a more colloquial way to refer to guided projects but doesn’t capture the true sense of the thing. Originally the word tutorial meant a learning session held with a small group or individual by assistants rather than a professor.
A study by scientists affiliated with the University of California at Los Angeles indicates that although human settlement on the surface of Mars may be feasible, there will be serious health issues for the settlers, including some that aren’t usually considered in discussions of that possibility. Human space travel has been much in the news of late. Three private-enterprise flight projects, led by three equally flamboyant billionaires, have helped keep it so. The general assumption is that humans, under private or public auspices, can get back to the moon readily, and that the next step after that will be the surface of Mars. The UCLA study is a “what if?” exercise working from that assumption. The paper appears in the journal Space Weather.

Strange New Worlds: The UCLA team sees human habitation of Mars for brief periods as feasible, provided that the spacecraft has sufficient shielding and the round trip is shorter than approximately four years. A trip that took four years or longer would entail a dangerous amount of exposure to energetic particles. (Particles from outside our solar system are seen by these scientists as a greater danger than anything from our sun.)
Language Arts is integrated across the curriculum throughout the school day. Teachers promote the use of language as an integral accompaniment to play, routine chores, as well as in special learning experiences. Teachers employ verbal strategies that create optimum communication opportunities and facilitate the child’s participation in interactive verbal expression. During group meeting times, children learn to express ideas and to listen to the ideas of others. Puppets are sometimes used to dramatize situations. As children play during activity time or outdoors, teachers are on hand to help children communicate their thoughts, needs and feelings (e.g. who will play what roles, what to do next, or settling disputes). Teachers provide print-rich environments in their classrooms through experience charts, graphs, posters and book centers with a wide selection of fiction and non-fiction books. The book centers may also include books written by the class as well as books created by individual students and teacher made books. Word recognition skills develop as children learn to identify their own names and those of classmates on such things as attendance and job charts, cubbies and ‘mailboxes,’ place mats, and displays of their artwork. Story time is an important part of every school day. Students are encouraged to listen and recall the main idea and parts of the story, predict logical story conclusions, and sequence the events of the story. Children visit the School’s library once a week and select two books to take home. The library is run by parents. Storytelling is provided on a regular basis to each class by its teachers and also by the Storytelling Specialist. Children are encouraged to act out the story and to create their own stories, which can be dramatized. Flannel boards and puppets, costumes and props are sometimes used to help children in these efforts. 
Writing is encouraged through the creation of class books and the dictating of children’s own stories, which they can then illustrate. Centers include a variety of materials to provide children additional opportunities to write labels, signs, lists, notes and/or correspondence to friends and family. The dramatic play area may include paper pads and pencils for pretend phone messages, shopping lists, doctor’s prescriptions, etc. The teaching of beginning letter sounds through games, manipulatives and other classroom activities is provided for four- and five-year-olds. Visual and auditory discrimination activities are a part of every class’s language arts curriculum. These include games and materials for free choice and teacher directed activities. For teaching letter sounds, we have been using aspects of a curriculum called Fletcher’s Place which includes hand motions to go with letter sounds. The goal of our curriculum in this area, as in all areas, is to provide children with the support and opportunities they need to develop skills and progress to the next level. The wide range of development in young children means that some of our students work at mastering the ability to hear and produce rhymes, while others begin to decode and encode CVC words or beyond. While most are successful in developing some understanding of the relationship between letters and their sounds in the fours’ and fives’ classes, children who are not ready are not pushed.
People have a food intolerance when they have difficulty digesting certain foods and have an unpleasant physical reaction when they eat those foods. Food intolerances and food allergies are caused by different biological processes and are diagnosed and treated differently. The problem in food intolerance lies in the digestive system, not in the immune system as in food allergies, though the symptoms of the two types of food reactions can be similar. Food intolerance may be caused by: - Lack of an enzyme needed to digest a certain food, or insufficient enzymes to digest a food fully. Examples include: - Lactose intolerance, which is caused by the absence of an enzyme needed to digest milk and other dairy products - The gas buildup experienced after eating foods like beans, lentils, cabbage or apples - Gluten intolerance that is not celiac disease, which can cause uncomfortable reactions after eating food made with wheat, rye or barley - Fructose intolerance, an inability to fully absorb this sugar compound, which is found naturally in fruits and vegetables and is added to foods and drinks as a sweetener - Sensitivity to certain chemicals in food. This can include sensitivity to chemicals that occur naturally in food or to chemical food additives. Examples include: - Monosodium glutamate (MSG), which is sometimes added to food to enhance flavor but also occurs naturally in such foods as parmesan cheese, tomatoes and soy sauce. Whether added or naturally occurring, MSG can cause unpleasant reactions in some people. - Salicylates are natural chemicals that are produced by plants as a defense against insects and disease, and are found in fruits, vegetables, teas, coffee, nuts, spices and honey. They can cause reactions ranging from a stuffy nose to asthma and hives in people who are intolerant. - Amines, which are produced by bacteria in food during storage or fermentation. 
Amines are found in bananas, pineapples, avocados, citrus fruits, chocolate, cured meat, smoked fish, aged cheese and wine. In people who are amine intolerant, eating these foods can cause flushed skin, migraines, stuffy nose, diarrhea and other reactions. - Caffeine, which is found in coffee, tea, soda and energy drinks. Caffeine can cause anxiety, rapid heartbeat, restlessness or insomnia in people who are hypersensitive to the chemical, even after consuming only small amounts. - Sulfites, chemicals added to foods and drinks as preservatives. People who are sensitive to sulfites can react with flushed skin, hives, stuffy nose, diarrhea, coughing or wheezing. Sulfite sensitivity is sometimes seen in people with asthma. - Irritable bowel syndrome (IBS). This chronic condition causes annoying and often painful abdominal and bowel symptoms, sometimes in reaction to eating particular foods. People with IBS are more likely to have digestive problems after eating and may be more sensitive to discomfort caused by gas and the movement of food through the digestive system. - Psychological factors. For some people, certain foods can cause nausea and other digestive problems for psychological reasons. Even the thought of the food can make a person sick if this is the cause. Symptoms of food intolerance vary by individual, the type of food that causes the reaction, and the amount of the food eaten. Problematic food eaten in small amounts may cause no symptoms, while larger portions may cause great discomfort. 
Symptoms may include: - Stomach pain or cramps - Gas or bloating - Heartburn or acid reflux - Headaches, migraines - Skin rash or flushed skin - Irritability or nervousness (from caffeine intolerance) Unlike food allergies, which can be diagnosed relatively quickly by an allergist with a patient history combined with blood and skin-prick tests, food intolerances are identified through trial and error or by using a food elimination diet with breath tests to look for carbohydrate malabsorption. If your child has a reaction to a particular food, especially if it is an immediate reaction, they should be seen by an allergist to understand whether the cause is a food allergy. If a food allergy is ruled out as the cause of the reaction, a GI specialist or an integrative medical team with a nutritionist will use the trial-and-error method or an elimination diet to understand if the problem is caused by food intolerance. In the trial-and-error method, your child may be asked to keep a food diary, recording what they eat and the timing of any symptoms. By looking back at what your child ate in the hours before symptoms are noted in the log, the medical team can help find the foods that are causing problems. In the elimination diet, your child will completely eliminate all of the foods suspected of causing problems, then add them back into their diet slowly, one at a time. It is important to work with your child’s healthcare provider or a registered dietitian when starting an elimination diet to make sure your child is completely avoiding the appropriate food components, and your child's nutritional needs are being met. There is no treatment for food intolerance, but uncomfortable symptoms can be avoided by eliminating problem-causing foods from your child’s diet. That requires care in preparing meals, careful reading of labels, and diligence in asking how food is prepared when eating out. Some uncomfortable symptoms can be treated if your child does eat a problem food. 
Antacid medication, for example, can be taken for heartburn and acid reflux. Over-the-counter lactase enzyme preparations can relieve discomfort from ingesting dairy products for people with lactose intolerance. Make sure to read the medication instruction label or consult your child’s doctor for appropriate dosage and frequency. Children's Hospital of Philadelphia (CHOP) provides multidisciplinary care, bringing together the expertise of GI specialists, registered dietitians, clinical psychologists and feeding therapists. Members from the Food Allergy Center, Division of Gastroenterology, Hepatology and Nutrition (GI), Clinical Nutrition, and Integrative Health provide testing and support in the Food Reactions Clinic. Through this collaborative approach, we are able to provide state-of-the-art treatment to young patients with food intolerances. When symptoms of food intolerance lead to diagnosis with other conditions, we have the expertise to address a wide range of GI problems. We offer more than 20 specialty clinical programs focused on specific disorders, ranging from common to complex. Our commitment to family-centered care means that you and your family are considered members of your child's healthcare team. You will always be included in discussions related to your child's diagnosis, treatment plan and progress.
LECTURE 4. Classification and Genesis of Biological Rhythms

Chronobiology is a branch of science that objectively explores and quantifies mechanisms of biological time structure, including the important rhythmic manifestations of life at every level, from the molecular upward, and from unicellular organisms to complex organisms such as human beings.

Classification of biological rhythms

Biological rhythms can be classified according to numerous criteria. One classification is based on the length of the period of oscillation. Table 1 details this repartition.

Table 1. Classification of rhythms by period length:
- < 20 h
- 24 ± 4 h
- 24 ± 0.2 h
- > 28 h
- 7 ± 3 d
- 14 ± 3 d
- 21 ± 3 d
- 30 ± 5 d
- 1 y ± 2 m
(h = hours; d = days; m = months; y = year)

The rhythms whose period of oscillation is 24 ± 4 h are defined as «circadian» (from circa dies, i.e., approximately one day). The cyclic events with a period of less than 20 h and more than 28 h are defined respectively as «ultradian» and «infradian». Besides the physical classification there exists a subdivision based on functional concepts that recognizes four varieties of biological rhythms, i.e., alpha, beta, gamma and delta. The alpha rhythms coincide with the spontaneous oscillation of biological functions. Alpha rhythms are subdivided into alpha(s) and alpha(f) according to whether they are produced in conditions of «synchronization» or «free-running» (see below). The beta rhythms correspond to the periodicity of the response of biological functions toward stimulations or inhibitions applied at different times. The beta rhythms as well exist in the varieties beta(s) and beta(f) in relation to the presence of either synchronization or free-running conditions. These two varieties are further subdivided into beta(s1) or beta(f1) if the perturbance is physiological, and beta(s2) or beta(f2) if the perturbance is not due to a physiological event.
Gamma rhythms regard the periodic oscillation of biological functions being modulated, perturbed, or influenced by deterministic factors, either physiological, i.e., gamma(s1) or gamma(f1), or non-physiological, i.e., gamma(s2) or gamma(f2). Here again, the differentiation into gamma(s) and gamma(f) varieties depends on the presence of either synchronization or free-running conditions. Lastly, delta rhythms, which are also subdivided into (s) and (f) varieties, correspond to the modification in the periodic oscillation of a given biological function secondary to manipulation of an alpha, beta, or gamma rhythm. The examination of rhythmic phenomena in organic matter reveals that there exist events which repeat themselves after a certain lapse of time as isolated occurrences. These are the «qualitative, punctual, discrete, or episodic rhythms» expressed by a binary condition, i.e., present/absent, event/non-event. An example is the menstrual cycle. Qualitative rhythms are mathematically describable in terms of finite quantities (0 or 1) and counted as numerical frequencies. Therefore, qualitative rhythms could also be called «frequential rhythms». In living organisms it can be noted that several phenomena repeat themselves as entities which vary in a «continuum». In other words, the phenomenon is always present and measurable, even though changing as a function of time. Its magnitude reaches the same level following a given period of time. Therefore, the period of these phenomena is given by the space of time (duration) in which the curve reaches the identical level after a complete oscillation. These periodic events are thus a quantitative expression of their variability and can be identified as «analogic or continuous or quantitative rhythms». These rhythms are mathematically expressed by numerical values of a potentially infinite order.
From a classification point of view, there exists a third type of biological rhythm consisting of isolated peaks inscribed on the curve of a quantitative oscillation. When these spurts show a cadence in time, they can be defined «episodic rhythms». This classification is used mainly for the description of episodic rhythms or when it is necessary to describe a continuous periodic event in relation to its peak. The rhythms included under this heading are diurnal, nocturnal, serotine, vesperal, morning, daily, weekly, monthly, seasonal, yearly, etc. Note, however, that these terms define the periodicity only descriptively and do not allow any inference on the effective duration of the period of the recurring phenomenon. Therefore, a diurnal rhythm is not implicitly circadian; it could be ultradian. Biological rhythms, like all biological phenomena, undergo an evolutive process that tends to modify the periodic properties as a function of chronological age. As shown in Fig. 1, every periodic function is defined by its mean level, extent of oscillation, and timing of the oscillatory crest, these parameters being called, respectively, mesor (M), amplitude (A) and acrophase (φ or phi). Fig. 1. - The rhythmometric properties of a biological oscillating function. Utilizing clinospectral analysis (see below), both positive and negative trends («clines») have been identified for mesor and amplitude as an effect of age (Fig. 2). Therefore, there are «dianaclinous» or «dikataclinous» rhythms if both properties have a positive or negative trend during the course of life. Rhythms can also be defined as «mesor-anaclinous» or «mesor-kataclinous», and «amplitude-anaclinous» or «amplitude-kataclinous», if the evolution through chronological time involves only one of the parameters in either a positive or negative sense. There is also the possibility of an opposite trend for the two rhythmic properties, defined as an «amphiclinous» rhythm.
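The three rhythmometric parameters can be made concrete with a small numerical sketch. Assuming the standard single-component cosinor form y(t) = M + A·cos(2πt/τ + φ) — consistent with Fig. 1, but an assumption on my part rather than an equation stated in the lecture — the mesor, amplitude and acrophase of an evenly sampled rhythm with known period τ can be recovered from Fourier sums. The function name and the synthetic data are hypothetical:

```python
import math

# Minimal single-component cosinor sketch (assumed model, not the lecture's
# own algorithm): y(t) = M + A*cos(2*pi*t/tau + phi), where M is the mesor,
# A the amplitude and phi the acrophase.
def cosinor_fit(times, values, tau):
    """Estimate mesor, amplitude and acrophase at a known period tau."""
    n = len(values)
    omega = 2 * math.pi / tau
    mesor = sum(values) / n
    # Fourier sums: a ~ A*cos(phi), b ~ -A*sin(phi) for even sampling
    a = 2 / n * sum(y * math.cos(omega * t) for t, y in zip(times, values))
    b = 2 / n * sum(y * math.sin(omega * t) for t, y in zip(times, values))
    amplitude = math.hypot(a, b)
    acrophase = math.atan2(-b, a)
    return mesor, amplitude, acrophase

# Synthetic body-temperature-like rhythm: mesor 37.0, amplitude 0.4,
# acrophase -1.0 rad (arbitrary), sampled hourly over one 24 h cycle.
tau = 24.0
times = list(range(24))
values = [37.0 + 0.4 * math.cos(2 * math.pi * t / tau - 1.0) for t in times]
M, A, phi = cosinor_fit(times, values, tau)
print(round(M, 3), round(A, 3), round(phi, 3))  # -> 37.0 0.4 -1.0
```

With even sampling over a whole number of cycles the estimates are exact; real data would need least-squares fitting and noise handling.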
Finally, there can be an «aclinous» rhythm, which is a rhythm that shows itself to be stable even as age increases. Observing rhythmic biophenomena, it can be ascertained that some of these are «permanent or long-lasting» while others are «transitory or temporary». The ovarian cycle is a typical «transient» rhythm because it disappears with menopause. The rhythm of body temperature is instead a permanent rhythm that is found even in the cadaver in the first 24 h following death. Fig. 2. - Age-related trends in rhythmometric properties of biological rhythms (clinospectroscopy). In considering biological rhythms we cannot neglect the important role they play in the economy of vital functions. In this regard it must be kept in mind that there are rhythms which are «essential» or «vital», and rhythms which are «non-essential». The essential rhythms are the pulsatile activity of the heart, respiration, and cerebral electrical activity. The suppression of one of the first two coincides with «physical death». The lack of the electroencephalographic rhythm, as is seen in the flat electroencephalogram, defines so-called «clinical death». It can be derived from the aforesaid that the essential rhythms represent biological life itself. Death coincides with the abrogation of these fundamental rhythms. From this perspective it is seen that life and biological rhythmicity are a co-univocal expression, i.e., one and the same. Non-essential rhythms are those rhythms whose abolition or desynchronization has no repercussion on vital functions. Their lack can, however, contribute to the development of a primary pathology (protochronopathology). The abolition of non-essential rhythms can also be secondary to a given disease (deuterochronopathology). Examined from the standpoint of their meaning, biological rhythms are found in two very important aspects of life, i.e., the conservative and reproductive functions. The rhythms of the conservative sphere are, in turn, mental and physical.
These two categories can be further identified as intellective, affective, endocrine, cardiovascular, metabolic, respiratory, digestive, etc. Reproductive rhythms are, on the other hand, related to sexuality and fertility. In biology, the rhythmic manifestations of life have different values with respect to their robustness. There exist, in fact, «resistant» or «permanent» rhythms as well as bioperiodic events that are «weak» or «labile». The resistance of a rhythm depends on its role and on the system to which it belongs. Basically, resistance is inversely proportional to the susceptibility to being desynchronized following acute perturbations. The lability of a rhythm is mostly dependent on its spontaneous or forced passage to a different order of periodicity (multiplication or demultiplication of frequency). The rhythm of the heart is said to be labile because it can easily vary in frequency over the 24 h span. Only rarely is the lability of a rhythm due to its abrogation. The abolition of a biological rhythm is an extremely unnatural event which is very improbable. Therefore, the disappearance of a given rhythm must be carefully evaluated, and it cannot be established without having verified that the rhythm has not simply changed its period. Biological rhythms are part of the genetic patrimony of living matter. The oscillators are located in each cell, at every level of the biological organization. The rhythms begin to act at the birth of the cell. In metazoans the cellular rhythms take part in a more complex and general rhythmicity whose expression requires coordination and maturation. Some rhythms of highly organized activities thus take a certain time for their postnatal ontogeny. Rhythms in the formative stages are called «immature» rhythms, while those already operant at birth are defined as «mature» rhythms. Biological rhythms are natural events which recur spontaneously, their periodic component being endogenous.
The endogenous contribution to periodicity can manifest itself freely (free-running rhythms) or it can be conditioned by environmental factors that act cyclically as synchronizers (synchronized or masked rhythms). The free-running rhythms, therefore, may be transformed into synchronized rhythms, and the endpoint of this interplay is a «masking effect» exerted by the exogenous component on the endogenous bioperiodicity. In nature, the overt manifestation of most biological rhythms is the combination of the endogenous component plus the exogenous entrainment. In this case, the masking effect results in a synchronized rhythm, and the external factors of masking can be defined «entraining agents» or «zeitgebers» or «synchronizers». Importantly, the manipulation of an environmental synchronizer may cause a disturbance of the endogenous periodicity which results in a phenomenon of «external desynchronization». This dyschronic effect must be kept in mind when dealing with a biological rhythm lacking periodicity: the abrogation of a given periodicity may be attributed to an exogenous mechanism and not be primarily dependent on an intrinsic defect of internal rhythmicity. Interestingly, the masking effect may not only be exogenous but also endogenous. The endogenous masking effect can be used to explain the complex conditions of rhythm loss not explainable in terms of cause and effect. For example, the loss of the sleep-waking rhythm produces an endogenous masking effect on numerous other rhythms, causing their periodicity to be abrogated (see below). There are rhythms that regard one concrete entity, i.e., «real rhythms». Other rhythms are instead the mere expression of a computational parameter, i.e., «virtual rhythms». The nyctohemeral profile of the cortisol rhythm, when studied in blood, coincides with the within-day variations of its concentration in plasma or serum.
By contrast, the circadian rhythm of pH is caused by the interplay of numerous factors, each one characterized by its own rhythm. There are biological rhythms which refer to a single variable (something that is definable by its characteristics), i.e., «elementary rhythms», and others which pertain to complex functions, i.e., «composite or factorial rhythms». Examples may be the circadian rhythm of prolactin, on one side, and the circadian rhythm of mood, on the other. If a rhythm is found to be complex, its eventual abolition could be dependent on an internal desynchronization among the constituent cyclic factors or mechanisms. Sometimes, the aperiodicity is merely due to changes in phase resulting in an antiphasic oscillation of the cycles which contribute to the complex rhythm. In the magnificent organization of bioperiodic phenomena, it has been found that some rhythms play a prominent role in conditioning other biological cycles. These rhythms are called «guide or primary or independent rhythms», while the driven rhythms are called «guided or secondary or dependent rhythms». Guide rhythms have a strategic importance in the sense that their presence is essential for the dependent periodicity. The lack of a guide rhythm usually produces desynchronizing effects, mostly due to the abrogation of the rhythmic interplay. The interruption of the relationship between primary and secondary rhythms causes a phenomenon called «internal desynchronization». The guided rhythms will be absent due to an induced effect. Such a dramatic repercussion is called «endogenous masking effect». The endogenous masking may help us in understanding and interpreting the chronopathology of some biological rhythms.

Genesis of biological rhythms

The capacity to undergo rhythmic oscillations is a characteristic intrinsic to living matter. A fundamental statement of chronobiology holds that «many rhythms persist even in complete isolation from the major known environmental cycles».
This affirmation clarifies that the natural rhythms can be considered to lie outside the period of the geophysical cycles. This means that living matter has its own time, i.e., the «biological time». Assuming time as a fourth dimension of biology, one can conceptually and syllogistically argue that a chronome exists within the genome. Besides the physical (physemes) and chemical (chememes) signals, one can assume that the genes provide information also in the form of «chronemes», i.e., signals of a periodic type. In such a way the process of clonation is timed by determined periods, and results in a combination of quantal and temporal messages which cause the biological functions to quantitatively change according to a programmed spectrum of periodicities. Speculatively, one could presume that the temporal signals find their periodic genesis within the helicoidal spirals of DNA, where the chronome should reside. The DNA double helix could act as a metronome generating a vibration whose length is the period of clonation. It has been suggested that the gene inherits not only the capacity to clone (ergon) but also the capacity to endure (chronon). The concept of chronon refers to the expression of genes as a function of the chronological time, which is linear, irreversible and progressive. The concept of chronome relates to the expression of genes according to the chronobiological time, which is cyclical, irreversible but recursive. Accordingly, the chronological time could be seen as the summation of the iterated periods which constitute the time base of biological rhythms.

Biological clocks and control of bioperiodic phenomena

Biological periodicities are driven by a genetic program to run according to a temporal duration (biotemporality) which causes a recursivity in a spectrum of frequencies ranging from milliseconds to years.
The temporal effect of genetic programming, the chronome, is the endogenous component from which the biological rhythms originate as «free-running» bioevents. The free-running rhythms reflect the «time of the body», which is independent from the environmental time measured by the clock, the «physical time». The free-running rhythms reflect the endogenous mechanisms of cyclic temporization whose expression is morphologically seen as an internal clock, a «biological or body clock». Observing animals integrated into their environment, it can be noted that the endogenous rhythms are usually not «free-running». The «time of the body» is masked, and the spontaneous biological rhythms are obliged by the exogenous cycles to adjust their period in accordance. This means that the biological time innately has the capacity to conform itself to the physical time. Therefore, events that perturb the environment can modulate the periodic cadence of the genetically determined endogenous rhythms. The strongest interferences are those provided by systematic events having a cyclical character in their manifestation. The light-dark alternation, the meal timing schedule, social routines, including work shifts, etc. (see below) are deterministic as entraining agents. In the entrainment of endogenous rhythms many structural entities intervene with a role of mediation (Table 2).

Table 2. - Central nervous structures involved in the chronoregulation of biological functions
- Midbrain raphe nuclei
- Autonomic nervous system
- Superior cervical ganglia

The most important determinants of biological timing are the endogenous oscillators, structures of the organism that function as rhythmic «pacemakers». Other machineries of synchronization are the «pace-resetters», elements of the organism that regulate the temporal structure of one or more rhythms in response to one or more environmental synchronizers.
The informative relationships between pacemakers and pace-resetters are determined by special connecting structures called «transducers» that translate the exogenous stimuli to the internal clocks. Transducers may have either negative or positive effects on the oscillators. The series can be integrated by the «modifiers» and the «logic controllers», which act, respectively, in modifying and controlling the exogenous and endogenous stimuli. With regard to biological clocks there exists an eternal diatribe between positivists and negativists. The prevalent opinion is positivistic in the sense that the biological clocks are accepted as identifiable entities which reside inside tissues and organs. Those who believe in the existence of biological clocks assert that these structures of self-sustaining timing play a primary role in coordinating the myriad of peripheral biological rhythms. Such a coordinative capacity presupposes a leadership with which the biological clocks drive the phase of the rhythms provided by each cell of the organism. This implies that the biological clocks are formally equipped to ubiquitously interact with all the cells by means of neural, physical and chemical messages. For this reason they are prominently located inside the non-mitotic structures of the nervous system, both encephalic and spinal. The structural organization of biological clocks is difficult to decipher. An attempt will be made here by presenting the principal models. Model I, the simplest, consists of an oscillator that times a second oscillator, and so forth. This primordial model proposes a linear cybernetic control. Model II describes a primary oscillator followed by a series of oscillators in succession. Model III proposes an interaction between various oscillators of equal hierarchical importance arranged in a cybernetic network.
The control by nodal clocks explains the occurrence of collateral interactions conditioning the mechanisms of positive or negative feedback. This interactive mechanism of chronoregulation has been called «feed-sideward». Chronoanatomic research has brought to light a series of structures responsible for rhythmic programming and chronobiological integration of the organism with the environment. Information on the neuroanatomical structures involved in the central regulation of biological rhythms derives essentially from animal studies. Table 3 lists the structures which are presently recognized to play a rhythmogenic role as oscillators. Table 3. - Central structures involved in the coordination of oscillating biological functions Chronobiological studies have provided evidence that various environmental factors act hierarchically as synchronizers of biological rhythms. The most powerful synchronizer is the light-dark alternation. Isolated from geophysical temporality, human beings progressively tend to delay their resting time. This phenomenon occurs even in conditions of perennial light or darkness. A rapid change of time zones (passing through three or more time zones), as occurs in transmeridian flights, gives rise to a psychophysical disturbance commonly known as the «jet lag syndrome», prominently due to the dyschronism between biological time and physical time. The resynchronization following geographical dyschronism occurs with a phase shift of about 90 min every 24 h. It is, however, necessary to keep in mind that the direction of the time zone transition is crucial. In east-west bound flights, travellers must recuperate a time span equal to the temporal difference between the time zones. In west-east bound flights, subjects must recuperate 24 h minus the difference in time zones, i.e., the physical time already passed in that zone, which was not biologically «lived» by the travellers. This implies that the resynchronization takes much more time.
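This arithmetic (about 90 min of phase shift recovered per 24 h, with the east/west asymmetry) can be sketched as a rough back-of-the-envelope calculation; the function name and the "1 time zone = 1 h" simplification are my own:

```python
# Rough estimate of resynchronization time after a transmeridian flight,
# following the rule of thumb above: ~90 min of phase shift recovered per
# day. Westward travellers recover the zone difference itself; eastward
# travellers recover 24 h minus that difference.
def resync_days(zones_crossed: int, direction: str) -> float:
    shift_per_day_h = 1.5                     # ~90 min per 24 h
    if direction == "west":
        hours_to_recover = zones_crossed      # 1 time zone ~ 1 h
    elif direction == "east":
        hours_to_recover = 24 - zones_crossed
    else:
        raise ValueError("direction must be 'east' or 'west'")
    return hours_to_recover / shift_per_day_h

print(resync_days(6, "west"))  # 6 h  / 1.5 -> 4.0 days
print(resync_days(6, "east"))  # 18 h / 1.5 -> 12.0 days
```

On this estimate, a six-zone westward flight costs about four days of resynchronization, while the same flight eastward costs about twelve.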
The resynchronization can be, however, accelerated or delayed by numerous factors (Table 4).

Table 4. - Factors that accelerate or delay resynchronization
- Accelerating: strong temporal pressures; higher performance task; low pulse/respiration ratio
- Delaying: weak temporal pressures; lower performance task; high pulse/respiration ratio

The meal schedule is also a robust synchronizer. Subjects eating a complete meal only once a day will show a phase shift for many biological rhythms toward the hour at which the meal is given. Social routines (sociotemporality) are also important, especially shift work. A random shift can produce desynchronizing effects for many periodic functions, especially those related to physical and mental performance. Other environmental agents causing dyschronism are stress, fasting, fatigue, etc., if abnormally prolonged in time and/or cyclically repeated. Several drugs can induce desynchronization as well. The list of these drugs should compose a new chapter of pharmacology to be used in pharmacological surveillance. Interestingly, some drugs may be used for resynchronizing the biological rhythms disturbed by exogenous interferences. These drugs are called «chronizing agents or chronizers».
The composition of Earth’s core remains a mystery. Scientists know that the liquid outer core consists mainly of iron, but it is believed that small amounts of some other elements are present as well. Oxygen is the most abundant element in the planet, so it is not unreasonable to expect oxygen might be one of the dominant “light elements” in the core. However, new research from a team including Carnegie’s Yingwei Fei shows that oxygen does not have a major presence in the outer core. This has major implications for our understanding of the period when Earth formed through the accretion of dust and clumps of matter. Their work is published Nov. 24 in Nature. According to current models, in addition to large amounts of iron, Earth’s liquid outer core contains small amounts of so-called light elements, possibly sulfur, oxygen, silicon, carbon, or hydrogen. In this research, Fei, from Carnegie’s Geophysical Laboratory, worked with Chinese colleagues, including lead author Haijun Huang from China’s Wuhan University of Technology, now a visiting scientist at Carnegie. The team provides new experimental data that narrow down the identity of the light elements present in Earth’s outer core. With increasing depth inside Earth, the pressure and heat also increase. As a result, materials act differently than they do on the surface. At Earth’s center are a liquid outer core and a solid inner core. The light elements are thought to play an important role in driving the convection of the liquid outer core, which generates Earth’s magnetic field. Scientists know the variations in density and speed of sound as a function of depth in the core from seismic observations, but to date it has been difficult to measure these properties in proposed iron alloys at core pressures and temperatures in the laboratory. “We can’t sample the core directly, so we have to learn about it through improved laboratory experiments combined with modeling and seismic data,” Fei said. 
High-speed impacts can generate shock waves that raise the temperature and pressure of materials simultaneously, leading to melting of materials at pressures corresponding to those in the outer core. The team carried out shock-wave experiments on core materials, mixtures of iron, sulfur, and oxygen. They shocked these materials to the liquid state and measured their density and the speed of sound traveling through them under conditions directly comparable to those of the liquid outer core. By comparing their data with observations, they conclude that oxygen cannot be a major light element component of Earth’s outer core, because experiments on oxygen-rich materials do not align with geophysical observations. This supports recent models of core differentiation in early Earth under more ‘reduced’ (less oxidized) environments, leading to a core that is poor in oxygen. “The research revealed a powerful way to decipher the identity of the light elements in the core. Further research should focus on the potential presence of elements such as silicon in the outer core,” Fei said. Portions of this work were supported by grants from the National Natural Science Foundation of China, the Fundamental Research Funds for the Central Universities, and the National Basic Research of China, as well as the National Science Foundation and the Carnegie Institution for Science. Note: The above story is reprinted from materials provided by Carnegie Institution.
Indigestion, also known as dyspepsia, is a condition that causes gastric discomfort. Indigestion is caused by peptic ulcer, smoking, drinking alcohol, and overeating. The patient may feel abdominal pain, nausea, and flatulence. Diagnosis can be done through physical evaluation, endoscopy, CT scan or X-ray, and lab tests. Indigestion or dyspepsia is a condition which causes abdominal pain, bloating and belching in the patient. Following are the various types of dyspepsia:
- Organic dyspepsia: Organic diseases are diseases where there is measurable damage to the tissues or there is an inflammation. These conditions can be diagnosed by evaluating the presence of biomarkers or any biochemical change in the organ, tissue or cell. The causes of organic dyspepsia include peptic ulcer, GERD, gastric cancer, esophageal cancer, other GI or systemic disorders, and intolerance to food or drugs.
- Functional dyspepsia: Functional diseases are characterized by the absence of any identifiable or measurable diagnosis. The mechanisms relevant to functional indigestion are ANS-CNS dysregulation, delayed gastric emptying, impaired gastric accommodation, altered sensitivity to fats and lipids, hypersensitivity to gastric distension, and altered gastric electrical rhythm.
Causes of indigestion or dyspepsia are:
- Peptic ulcer
- Delayed gastric emptying
- Gastric distension
- Anxiety and depression
- Medications or underlying diseases
- Fatty and spicy foods
- Carbonated and caffeinated beverages
Patients suffering from indigestion may encounter the following symptoms:
- Abdominal pain
- Flatulence, bloating and belching
- Mild to moderate fever (in the advanced stage)
- Fullness after a meal
- Reduced quality of life
Ways to diagnose
Following are the diagnostic methods available for diagnosing indigestion:
- Physical examination: The physician may preliminarily diagnose the condition by doing a physical examination on the basis of the symptoms presented by the patient.
- Medical history and food habits: The doctor may ask about the medical history of the patient in order to analyze the chance of an underlying disease, and also ask about eating habits. Eating habits are important in evaluating indigestion as they have a significant impact on this condition.
- Endoscopy: In order to evaluate the condition in situ, the doctor may use endoscopy. This is done to evaluate tissue damage in the upper gastric tract.
- Imaging techniques: X-ray or CT scan is done to analyze an underlying disease or any obstruction that limits the capacity of digestion.
- Blood test: Blood tests and other lab tests can be advised to diagnose reasons for the symptoms presented. They also help in identifying anemia or other medical conditions.
- Breath and stool tests: Breath and stool tests are done to test for the presence of H. pylori.
Risks if neglected
Although indigestion is mild and can be managed effectively, sometimes severe indigestion may lead to the following complications:
- Pyloric stenosis: Pyloric stenosis is caused by the continuous irritation of the wall of the stomach lining by acid. In this condition, the passage between the stomach and the intestine gets narrowed and the food is not able to move freely. This leads to improper digestion. The treatment of this condition is through surgical intervention.
- Peritonitis: Indigestion, if not managed at the initial stage, may lead to damage of the lining of the intestinal tract. Although the condition is usually caused by bacterial or fungal infection, repeated effects of acid on the peritoneum cause peritonitis. Surgical intervention and medications are the options for treating peritonitis.
- Esophageal stricture: In conditions such as GERD, the acid bounces back from the stomach to the esophagus, leading to esophageal stricture. If GERD persists for a long period of time, it may damage the esophageal wall and result in constriction of the esophagus.
Surgery is required to treat the condition.
- Reduction in quality of life: Chronic indigestion severely reduces the quality of life and makes the patient uncomfortable. It leads to absence from school and the office.
Stages of indigestion are divided on the basis of the severity of the symptoms. Following are the stages of indigestion:
- Stage I or early stage: The symptoms of early-stage indigestion are generally caused by the food itself. The symptoms presented in this stage are bloating, flatulence, distension of the stomach and acidic or oily belching. Other symptoms of this stage are nausea and loss of appetite.
- Stage II or advanced stage: The symptoms of this stage occur due to tissue damage. Inflammation occurs at this stage. The patient may also experience mild to moderate fever.
Foods to eat and avoid
Foods to eat:
- Fresh orange or grapefruit juice
- Ginger tea
Foods to avoid:
- Carbonated drinks
- Fatty food
- Spicy food
- Processed food
- Artificial sweeteners
- Do not overeat.
- Avoid spicy food.
- Eat slowly with proper chewing.
- Stay away from stress and anxiety.
- Avoid smoking and limit the intake of alcohol.
- Avoid carbonated as well as caffeinated beverages.
- Avoid exercising on a full stomach.
- Eat your dinner three hours prior to going to bed.
- Walk for a few minutes after eating food.
When to see a doctor
Book an appointment with your doctor if:
- You have bloating, belching or stomach pain.
- You have heartburn.
- You feel uncomfortable after eating food.
- You have unexplained weight loss.
- You have blood in your vomit.
- You have bloody stools.
- You have severe stomach pain.
- You have shortness of breath.
- You have any other symptoms that concern you.
Do’s & Don’ts
- Take a healthy diet.
- Incorporate fresh juices into your diet.
- Take a walk after eating.
- Eat slowly and with proper chewing.
- Use a pillow while sleeping.
- Do meditation to remain stress-free.
- Do not drink alcohol.
- Do not smoke.
- Do not go directly to bed after having dinner.
- Do not eat very spicy food.
- Do not exercise just after eating food.
Risks for specific people
Indigestion is generally found in people who drink alcohol in excess. Patients on certain medications such as aspirin also have an increased risk of this condition. People with emotional stress and those suffering from anxiety and depression have an increased incidence of indigestion. People on hormone replacement therapy also develop indigestion.
Home remedies for indigestion
- Apple cider vinegar and honey: 1 teaspoon of apple cider vinegar and 1 teaspoon of honey are mixed in half to three-quarters of a glass of water and drunk 15 minutes prior to eating a meal. This will help in reducing indigestion.
- Chamomile tea: Make chamomile tea by putting the teabag in a cup of boiling water. Allow it to steep for a few minutes to bring the extract into the solution. Drink the solution by sipping.
- Baking soda solution: Half a teaspoon of baking soda is added to half a glass of lukewarm water. Drink the solution twice a day.
- Essential oil: 1-2 drops of an essential oil such as lemon oil or ginger oil are added to a glass of warm water and consumed. The essential oil has a soothing property that helps reduce indigestion.
- Aloe vera juice: 2-3 teaspoons of concentrated aloe vera juice are mixed in half a glass of water; consume it by sipping.
- Fennel extract: Add 1 teaspoon of fennel seeds to hot water. Leave it for 10 minutes. Strain the solution. Consume the solution by sipping.
- Ajwain seeds: ½ teaspoon of ajwain seeds is mixed with a pinch of rock salt. Eat the mixture along with half a glass of water.
- Massage therapy: Equal quantities of garlic and soy oil are mixed, and the mixture is massaged over the stomach. This will provide relief from stomach pain and indigestion.
Before them was a sheer cliff made of whitish limestone. Tearing across the cliff was the Bonarelli, a stark black layer of rock over a meter thick, highlighted below by flares of rusty orange. The question for these students was: How did this rock get here, and what can it tell us about Earth history? History is written in stone. At least, it is for the geologist. One group of students in the Earth and Planetary Sciences Department journeyed to the Umbria-Marche region of Italy over spring break to study the most important book of their required reading. That book, of course, was the Earth itself, and the pages were layer upon layer of rock, each revealing part of the tale. “This area is a really classic area for looking at Earth history,” said Associate Professor Francis Macdonald, a field geologist and the trip leader. The Umbria-Marche Apennines of Italy are world renowned for their geological outcrops. One of the best recorders of Earth history is marine sediment, which is preserved layer upon layer, era to era, on the ocean bottom. The Apennine Mountains are formed from these marine sediment layers, which have been thrust upward from the depths of the sea by plate tectonic motions. Clearly exposed in Italy, like almost nowhere else on the planet, is an account of history from about 220 million years ago to 2.6 million years ago. Close to sunset, below a meandering ancient Roman aqueduct, Steven Jaret looked through his magnifying hand lens at an unremarkable bit of red clay. “Oh, wow! This is really cool. Look at that spherule: big, green, perfectly round,” exclaimed Jaret, a first-year graduate student, as he spotted vapor condensates that must have rained from the heavens shortly after an Earth-shattering asteroid impact. He had identified the thin layer in the rock record that divided two great eras. Below the clay layer, other students corroborated, it was evident the oceans were teeming with life.
They could see the micro-organisms eternally fossilized in the rock. Above the clay, only the smallest and simplest life forms persisted. The rocks suggest the ocean underwent a catastrophic extinction event — one that correlates perfectly with the extinction of the dinosaurs. Few driving along this lonely road would suspect that it had been these rocks that spawned the asteroid impact hypothesis for the demise of the dinosaurs. About 30 years ago, this clay was discovered to have a spike in the rare element iridium. The iridium could only have come from an extraterrestrial source: an asteroid. Students put their fingers on the thin clay layer that divides the eras, sampling the evidence that sparked one of the most fruitful geological discoveries of our time. Not only did it provide evidence for the asteroid impact, it also catalyzed a paradigm shift in geological discourse: No longer was the Earth viewed simply as a gradually evolving system. “It is very important to have students really appreciate how geologists think,” said Macdonald, who led the trip to Italy. “You have a problem, and you have to be able to figure out a way to use this data in the field to come up with a story. … Geology is really a question-driven science, and here there are just some really big questions exposed in the rocks.” The presence of the Bonarelli is one of the mysteries that challenge modern geologists. Biogeochemistry Professor Ann Pearson joined the trip to survey the rock layer. The layer is composed of organic material that must have been deposited when there was no oxygen in the oceans. But why was there no oxygen in the oceans, and could that happen again? While Pearson collected samples to study the isotope signature of the material, students measured the distance between the various rock layers below the Bonarelli to determine if natural ocean cycles predict such a massive ocean event, or if this one was something truly extraordinary and perhaps cataclysmic.
Grappling with such problems may be advanced for beginners. But all the students agreed exposure to the "real questions" was far more illustrative than "book-learning." Emily Howell, a freshman who challenged herself to participate in this upper-level course, said, "It's definitely been tricky, but … I think that just going right out into the field is one of the best things because you actually get to see all these words that you keep hearing, and you get the big picture." The more experienced first-year graduate student, too, found the field experience indispensable. Jaret said, "You look at it in the fields and then you look at it in the textbook, and it's like, 'Oh, wow. That is not what it looks like in the textbook at all.' It is really nice to actually see what geology is actually like." Before they left, the students did take time to experience the local culture. On the final day of the trip, after having been blessed with a week of good weather (and therefore productive fieldwork), students had an opportunity to roam the streets of the ancient city of Assisi, a fortified hilltop town famous for being the home of St. Francis. The walls of the buildings and fortifications of the city were constructed from the very same stones that the students had been studying for a week. Well, most of the stones, at least. While savoring gelatos, a few students noticed that some of the construction materials weren't local; these randomly placed pieces must have been carted in and integrated into the town's architecture far more recently. The geologist observes his or her environment, and then makes inferences about a place's history. But perhaps more than just Earth history is written in stone.
When graphic organizers are used effectively, both the teacher and the students expand their roles in the basal reader. The basal reader has its supporters as well as its opponents. The teacher should always have access to an anthology of children's literature on his/her desk so that suitable poems may be read and discussed with learners, along with those poems contained in the basal reader. Holiday poems, such as Halloween poetry, generally are very much liked by pupils. The paper also discusses a more open-ended method, Robert Gagne's eight steps of sequential learning, and it advances teachers' use of the basal reader in the reading curriculum as a flexible form of teaching and learning. It also includes basal reader use in that category. Finally, "structural analysis" depends mainly on the ability of the reader to recognize prefixes and suffixes.
Word-attack skills vs. comprehension skills
In "The Basal Reader Approach to Reading," Robert Aukerman, a dean at an American teachers college, wrote, "Learning word-attack skills alone is not enough, since reading involves more than just identifying the words that the graphic symbols represent." The teacher uses a basal reader that is contrived to teach, among other things, knowledge about letter-sound relationships (Figure 3). One group of students read the first chapter of the book and another group read a modified basal reader version that had deleted cultural information. "'WHO'S THAT tripping over my bridge?' roared the troll" -- provided that an authentic version is used and not an emasculated, sanitised basal reader. There also are feelings of reward when a play is used directly from a basal reader. The result has been a tendency to base reform on changes in basal reader programs, including anthologies, workbooks, and teacher's manuals. The intervention studied was the basal reader "Sam and Pat, Volume I," published by Thomson-Heinle (2006).
Caribbean countries are on high alert for power failures. Puerto Rico's fragile grid, severely damaged during the 2017 hurricane season, continues to fail; some island residents have yet to regain power in the seven months since Hurricane Maria. This phenomenon is part of a larger problem: electric grids across the region are dated, ailing, and overburdened. Powerful passing storms can leave thousands without power for months on end. The solution? Localized, renewable energy sources. Caribbean nations rely heavily on oil and diesel imports. Governments are attempting to integrate renewable energy sources (wind and solar) into their existing grids, but the task is more urgent now than ever before. By transitioning to newer, greener sources of power, electric grids will become more resilient to weather extremes; they will be decentralized and draw from an array of power sources. With strategically planned renewable energy, there is always a back-up. Unfortunately, climate change will likely complicate the Caribbean's transition to renewable energy. Caribbean islands are among the most vulnerable when it comes to rising water levels, changing weather patterns, and other effects of global warming. The region has already experienced these extremes; research suggests that northern Caribbean countries, such as Cuba, Jamaica, and the Bahamas, have become rainier over the past three decades. The uptick in severe weather is costly, as it both damages existing systems and puts these countries further in debt. Additionally, with increasing weather extremes, green energy systems will, in turn, become vulnerable. For example: modern wind turbines can be torn apart in 165 mph winds. Changing regional temperatures will dramatically alter the availability of hydro and solar power. Climate change makes it nearly impossible to predict future weather scenarios, so building a system to anticipate a changing climate is difficult. 
The Caribbean, however, is doing what it can to shift toward renewable energy sources. Jamaica is aiming to install automated weather stations to collect data, which can be used to build better electric systems. Urban wastewater hydropower plants are being developed for use on Caribbean islands. The future of the islands is uncertain, but changing technologies may eventually help these countries navigate their way through climate change.
The art of making maps. In the development of this art, during the Middle Ages, an epoch is made by the Catalan "portulani"—seamen's charts showing the directions and distances of sailing between different ports, chiefly of the Mediterranean. These differ from the medieval mappæ mundi by having tolerably accurate outlines of the Mediterranean littoral, and are thus, in some measure, the predecessors of modern maps. Baron Nordenskjöld has proved that these are derived from what he calls the normal portulano, compiled in Barcelona about 1280. The best known of the portulani are those drawn up in the island of Majorca, where a school of Jewish chartographers seems to have drawn up sea-charts for the use of seamen. In 1339 Angelico Dulcert drew up a portulano which still exists; and in 1375 this was greatly improved by Cresques lo Juheu, who added to Dulcert's outline the discoveries of Marco Polo in the east of Asia. He thus made the voyage to the Indies westward appear less than it really was, and so helped toward the voyage of Columbus. This map, known as the "Catalan Portulano," was sent by the king of Aragon to the king of France, and is still retained in the Louvre. It formed a model for many globes and later maps, including those which most influenced Columbus, and is perhaps the best known of the portulani. See Cresques; Geographers, Jews as. - Jacobs, Story of Geographical Discovery, pp. 60-62; - Nordenskjöld, Periplus, 1897; - Kayserling, Christopher Columbus, pp. 6-8.
Scientists at the Los Alamos National Laboratory in New Mexico have figured out a way to use what are known as quantum dots to store and release energy from simple panes of glass in windows. The quantum dots, first developed in 2012, are nanometer-sized semiconductors that are applied to the glass in a thin film. The dots can be programmed to store certain kinds of light while rejecting others. The Los Alamos team discovered that when they applied this thin layer of quantum dots to a piece of glass, the dots could last up to 14 years or so and achieve an efficiency just under 2%. The scientists explained that they would have to get to at least a 6% efficiency for the quantum dots to actually be practical in residential or commercial use. What they have done, in essence, is to simplify solar cells so that they are small and could be more efficient, especially in cities where there is not a lot of rooftop space. The new application captures solar energy all across the surface of the glass and then transfers it to storage using only a single solar cell. The scientists believe that the technology is ready to be deployed on glass, and they mentioned that, after the lifetime period, the layer of quantum dots could be simply scraped off and replaced with a new layer. The quantum dot technology will actually prove cheaper and more cost-effective in the long run for commercial and residential applications. The engineers first tried to use certain dyes for the application, but they soon discovered that the organic dyes they were using were actually absorbing the energy rather than storing and transferring it. It was then that they decided to go with the quantum dot technology as a more efficient way to store and transfer the sun's energy. PHOTO CREDIT: Los Alamos National Laboratory
Watching the Brain Create Memories The brain stores information in a cohesive way so that we can recall the myriad of details about an event: who was there, what music was playing, what the food smelled and tasted like, and how the occasion made us feel. These aspects of an event are bound into an integrated "memory trace" so that the whole event can be retrieved instantly. In order to store and retrieve information, the brain has to perform three functions. First, information that has been learned only moments earlier is captured in short-term memory. Second, the brain moves the information into long-term storage. Third, the brain has to be able to find and retrieve the memory on demand. One of the brain regions involved in moving information from short-term memory to long-term storage is the hippocampal formation. Scientists have studied individuals who have had extensive damage to their hippocampal formation; the moment their attention is diverted, they have no memory of a conversation they've been having. A person with a malfunctioning hippocampal formation would still have lots of thoughts and would be able to perform activities, but wouldn't be able to form new memories or learn new facts. Using functional MRI scans, Dr. Sperling has focused on a network of brain regions that are involved when a person is engaged in forming a memory. The scans clearly show that these regions engage in a finely tuned process in which one area turns on very rapidly when a person is learning something new. That region, the hippocampal formation, is responsible for binding new information together into a short-term memory, which can later be transferred into a long-term memory for storage in other brain areas. At the same time, other areas of the brain, in particular the parietal regions, must turn off so as not to interfere with the memory formation process. Later, when the person recalls a memory, the process is somewhat reversed. 
Many of the regions in the brain that were turned off during memory formation now turn on to retrieve the memory. This cycle of turning on and turning off, or activation and deactivation, as Dr. Sperling called it, happens all the time. "When we look at the fMRI of a healthy person, we see that the memory network is carefully synchronized. The parietal regions are constantly turning on and off and working in coordination with the hippocampal formation. This network activity correlates with how well the person is able to perform the memory task." In a person with Alzheimer's disease, the fMRI shows little or no activation of the hippocampal formation when the person is asked to learn something new. This correlates with pathological changes in the brain, as this area is among the first to be damaged in AD. Dr. Sperling explains that MRI scans of people with Alzheimer's disease show shrinkage, or atrophy, of the hippocampal formation as well as a thinning of the brain or loss of neurons in the parietal regions. The parietal regions are also especially vulnerable to the formation of beta-amyloid plaques in AD. Dr. Sperling is now conducting fMRI studies that use face-name memory tasks to see whether beta-amyloid is causing these disruptions in the activation and deactivation process in people with early-onset AD, as well as to study those with MCI and normal memory function. While conducting an fMRI study, Dr. Sperling asks research volunteers to look at photographs of faces and learn to associate each face with a name. The name-face recognition task is particularly difficult for people with AD, which is why she chose it for the test. "If you want to determine whether someone is at risk for a heart attack, you put them on a treadmill and stress the system. We stress the brain with memory tests to see how it performs." While lying in an fMRI machine, research volunteers are shown many pairs of face photos labeled with names. 
Thirty minutes later, they are shown the same faces with two name choices and asked to recall the name that goes with each face. This fMRI technique allows Dr. Sperling to see which parts of the brain are activated and deactivated during the exercise. A healthy person would quickly form the association between the photograph of a face with the name Derek, and the fMRI would indicate that normal hippocampal activation had occurred during that formation of Derek's face-name association. On the other hand, when a person with AD tries to learn the Derek face-name association, the fMRI scan shows little or no activity in the hippocampal formation and they might look at Derek's picture and say, "It could be Derek or Ian. I have no idea." Excerpted from THE ALZHEIMER'S PROJECT: MOMENTUM IN SCIENCE, published by Public Affairs, www.publicaffairsbooks.com. 
The primary purpose of DNA is to store hereditary information within the cells of all living things. It is a molecule that encodes the genetic instructions used in the development and functioning of all known living organisms. DNA also facilitates biological synthesis, specifically in the creation of RNA molecules and cellular proteins. Information stored in DNA is in the form of a code consisting of four chemical bases: adenine, guanine, cytosine and thymine. These bases pair up with each other to form base pairs. This process is known as base pairing, and it occurs when the bases attach to one another through hydrogen bonds. Each base is attached to a sugar molecule and a phosphate molecule. Collectively, a base, sugar and phosphate form a nucleotide. Nucleotides are arranged in two biopolymer strands called polynucleotides. Polynucleotides coil around each other in a double helix, which takes a form similar to a twisted ladder's. Nucleotides are connected to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next. The sequence of the bases along the two biopolymer strands determines the natural characteristics of the living thing in which that specific DNA exists.
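Because adenine always pairs with thymine and guanine with cytosine, either strand of the double helix fully determines its partner. A minimal Python sketch of that rule (the function name is illustrative, not from any particular library; the strands run antiparallel, hence the reversal):

```python
# Watson-Crick base pairing: adenine-thymine, guanine-cytosine.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(strand: str) -> str:
    """Return the base-paired partner of a DNA strand.

    The two strands of the double helix run antiparallel, so the
    complement is conventionally read in the reverse direction.
    """
    return "".join(PAIRS[base] for base in reversed(strand))

print(complement_strand("ATGC"))  # -> GCAT
```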
Classrooms today are becoming increasingly diverse. Children from various cultural and linguistic backgrounds bring so much color and depth to the classroom. But a number of challenges can also emerge when working with students who speak different languages. How do you address the needs of multilingual learners when they are developing emergent literacy skills in not one, but sometimes two or three languages? How can you use multiple languages to help language learners make sense of what they're reading? One effective strategy is called translanguaging. Translanguaging bridges the languages spoken at home with the linguistic demands of schools. More specifically, it allows students to use words from two linguistic repertoires to communicate effectively. One manifestation of this might be a student who uses Spanish in one sentence and English in another, or one who blends the two languages together within a single sentence. Blending languages together is, after all, what people from naturally bilingual households do. Thus, allowing children to draw from these natural linguistic resources can engage students in the learning process as they wrestle with new knowledge. So what does this look like in the classroom? This article highlights three critical considerations drawn from the "Translanguaging: Guide for Educators" published by the City University of New York. These recommendations, used in part or in whole, are sure to benefit bilingual learners by making them feel both valued and included.
1) Use Multilingual Texts
An essential component of translanguaging is that students from linguistically diverse backgrounds have access to multilingual texts. These texts could be stories that come in multiple translations, texts with both languages on the same page, or books that are written by authors from culturally diverse backgrounds. 
Research shows that using multilingual texts helps build language learners' background knowledge while simultaneously supporting home language literacy development. In addition, children feel more confident because these books validate their linguistic and cultural identity. Putting theory into practice, here are a few suggestions on how to use multilingual texts in your classroom.
- Include multilingual texts in your classroom library
- Translate one of your students' favorite stories and read it aloud on a special day
- Have students read books in multiple languages side-by-side
- Have a group of students translate an English story that they love, and place it in the library for others to learn about that language
- Supplement class content with readings in students' home languages about the same topic or theme, which can be read at home with parents
- Make sure students have bilingual dictionaries (picture dictionaries help!) to enable problem-solving when they face linguistic challenges
2) Think About the Language Process
Break down the reading process for children. Take into account the linguistic demands of a reading task, and have students first read the text in their home language so they can become familiar with key vocabulary words and content. Then, ask students to read the passage in English at school to reinforce concepts and facilitate language transfer. After reading, have students engage in conversations about the text, allowing them to speak in any language they choose. This allows students to share precisely about their favorite part of the story, about how the story relates to their own experience, or about what they would do differently if they were the main character. The purpose is for students to interact with and make meaning of the text. Finally, when students share ideas with the class, ask them to speak in English (telling them beforehand, of course!). 
Now that they have formulated their arguments, they will be able to focus on how to accurately express themselves in their second language. 3) Actively Promote Multilingualism Allowing students to use the (often blended) languages that they speak at home in the classroom will, at first, feel strange to them. Shouldn’t the language of school be English only? While many teachers do not discourage students from using their home language in class, they also do not explicitly encourage it. Make the classroom an environment that embraces multilingualism. Let students know that their languages are welcome, and that they are important for learning a second language. To cultivate this, use multilingual posters, books, signs and student work on the walls. React positively to students who ask for help in a different language. Allow students to translate for each other. Discuss with the class why being multilingual is an asset in today’s society. Here are a few questions you can ask the class to help get the conversation started: - Where do you hear people speaking different languages? Do you think it is useful to speak more than one language? - Do you know of any famous people who can speak more than one language? When do you think they use their languages? Why do you think being bilingual helps them? - In class, do you think it’s ok for your classmates to use the language they speak at home? Why or why not? Engaging children in conversations that question the power and privilege ascribed to particular languages stops them from defining themselves (and their languages) as second class, and shows them what is truly powerful and truly a privilege — being bilingual.
Learning is classically defined as: "a relatively permanent change in behavior that marks an increase in knowledge, skills, or understanding thanks to recorded memories." It's that change of behavior state that's the important point of the definition, but the neurological memory processes sit at its core. So let's look at what lets that occur.
The "lego bricks" of memory
Your brain is a hugely complex organ with over 100 billion neurons. Each neuron has an associated 4,000-10,000 dendritic links to other neurons through synaptic connections. Each synapse contains a huge number of chemical neurotransmitters, which can change the chemical message sent from one neuron to the next through the dendritic connection, and so on to the next formed connection. So that is:
100,000,000,000 neurons x 7,000 dendrites/synapses x ∞ neurotransmitter mixes = infinite patterns (memories)
When a memory is formed of the color 'green', a pattern of neurons fires with particular neurotransmitter settings in each synapse involved. Every time the concept of 'green' is recalled (say we see something green or someone mentions the word 'green'), that same pattern with the same settings will fire, causing us to remember the construct 'green'. Now, if we add a second construct, the idea and memory of what a 'door' is, a different pattern of neurons will fire. Thinking of a 'green door' will cause both patterns to fire, plus a third pathway linking 'green' and 'door' to create a 'green-door'. This third pattern is called association, as we create a new memory linking two of these building blocks of memory together.
The study of memory
As self-aware creatures capable of 'meta-cognition' (thinking about thinking), we have studied the process of memory and its support of learning....a LOT! Accordingly, there is an enormous body of scientific evidence about what we believe is going on when we create memories. 
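The back-of-the-envelope arithmetic above can be checked in a few lines of Python. This is a toy illustration using the article's own figures; treating each synapse as merely "on" or "off" grossly understates the real chemistry, yet the state count is already astronomically large:

```python
import math

neurons = 100_000_000_000      # ~10^11 neurons (the article's figure)
synapses_per_neuron = 7_000    # midpoint of the 4,000-10,000 range
connections = neurons * synapses_per_neuron

print(f"{connections:.1e} synaptic connections")  # -> 7.0e+14

# Even if each synapse could only be 'on' or 'off', the number of
# distinct network states would be 2**connections -- a number with
# roughly 2 x 10**14 digits, effectively infinite for our purposes.
digits = connections * math.log10(2)
print(f"2**connections has about {digits:.1e} digits")
```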
Essentially there is a three-level hierarchy of memory that drives how we operate as mammals, and then how we apply memories to our higher-level cognitive functions.
- Sensory memory – this is the sensory process of recognizing things, e.g. "that is food, <check if I am hungry?> <Yes I am, I can eat that>" OR "that is a snake, <snakes are scary and dangerous> <I should run away, fast!>"
- Short term memory – this is the area where we 'momentarily' store stuff while we absorb and apply it or use it in some task. Short term memory is surprisingly fleeting and limited in the number of concepts we can hold in it. Try remembering a phone number without writing it down when it is spoken to you and you'll know how short-term that can be!
- Long term memory – short term memories become long term memories through several cognitive mechanisms:
- Repetition is a strong factor; repeatedly firing a neural pattern indicates its importance and deepens the memory;
- Association with another memory or concept, the stronger the better (an extremely powerful memory/learning strategy);
- Emotional state; the greater the emotional arousal, typically the stronger and deeper the memory.
After that, long term memories get stronger the more each of these three factors applies when the memory is reused. Having an understanding of how memories are formed allows us to apply that to learning situations when we come to absorb new information and convert it to knowledge. Applying that new knowledge along with our accumulated knowledge is what allows us to then think, create and innovate.
Distraction and interruption
All the above is great ideal-situation stuff, of course. If you've ever tried to force yourself to learn something new, you know that our brain's nicely well-ordered memory processes are anything but. The progression from sensory memory to short term memory to long term memory is incredibly fragile. 
Imagine this scenario: You're studying a new competitive intelligence report, trying to understand why your biggest competitor is crushing you in sales. As you get to the third paragraph, in comes a text message. You force yourself to ignore it, but nope, you have to start the paragraph again. You get an email from one of your VIPs. Back you go to the beginning of that paragraph again.... You hear the garage door; your partner is home with lunch...man, I'm hungry! Aw shoot, back to the beginning again. Multiply that by all the other distractions occurring around you and you can see why we find it so hard to absorb new information. In reality, the process of memory is infinitely more complex than that simple scenario suggests. By understanding the fundamentals of memory and learning, you can now see the root of the dilemma we all face: contemporary life makes engaging in real deep learning and focusing on deep cognitive work almost impossible.
Mind Maps for learning and memory
There are, of course, good strategies for reinforcing and structuring learning. Mind Maps provide a wonderful whole-brain methodology for studying and absorbing new memories, associating them with stored memories and reinforcing them into long term memory. There are also a great many 'focus techniques' for eliminating distraction. Using them allows you to bring focus in a controlled manner to your workflow in such a way that you can train your brain into stronger patterns of behavior. MindMapUSA provides consulting and programs of training and coaching in Mind Mapping and strategies for more effective performance. The courses teach globally applicable skills, then reinforce them with workshops to bring the skills and knowledge to functional teams in your organization. Please contact us to explore how we can help you: think differently | plan efficiently | perform effectively
Toddlers 19 months – 2 years: Playing
Play is one of the best ways for children to learn language and literacy skills. Play helps children to think symbolically; a ruler becomes a magic wand, today becomes a time when dinosaurs were alive, a playmate becomes an astronaut exploring space. Play helps children understand that written words stand for real objects and experiences. Dramatic play helps develop narrative skills as children make up a story about what they're doing. This helps them understand that stories happen in an order: first, next, last.
Offer a variety.
- Provide a variety of table toys and materials that encourage toddlers to use their hands and fingers, such as pegs to place in pegboards and blocks to stack and then knock down. These are great for developing fine motor skills.
Hide and seek.
- Just as she loved peek-a-boo as a baby, your toddler will love to play simple games of hide-and-seek. First thing in the morning take turns hiding under the bed sheets; at bath time, use a big towel.
Blow up a balloon.
- Balloons are great for indoor play: they move slowly enough to be chased and are relatively easy to catch. Blow one up and tap it gently into the air. Count how long it takes to float to the ground or let your toddler try to catch it. This is a good game for counting skills and hand-eye coordination.
- Provide dress-up clothes and props: hats, scarves, shoes, keys, tote bags, and pocketbooks. Most toddlers like to dress up, pack a bag, and pretend to leave and come back. They play house with pots, pans, dishes, and other household items.
P, S waves and Tsunamis
Country: United States
Date: April 11, 2005
I have noticed that tsunami waves all seem to travel at the same speed. I do not understand why this happens, since the earthquake or plate-shift event that sets them off generates "S" and "P" waves that are different for each event. Is there some "normalizing" factor that I am not considering?
The speed of a tsunami wave varies with the depth of the water it is traveling through. The deeper the water, the faster it travels. As the wave approaches shallow water, it slows down. But as it slows down it simultaneously gets taller, which makes the wave more destructive when it reaches the shore. According to the University of Washington's tsunami web site, the speed of a tsunami wave is equal to the square root of the acceleration of gravity (32 feet per second per second) times the depth of the water. In the open ocean, if the water depth is 13,000 feet, the speed of the wave would be 645 feet per second, or about 440 miles per hour. If the depth of the water is decreased to 100 feet, the speed is decreased to 57 feet per second, or 39 miles per hour.
The laws of thermodynamics require that energy be conserved. So, if the wave is moving at hundreds of miles per hour, it has enormous kinetic energy. When it suddenly slows down, the wave piles up, and that kinetic energy is transferred into gravitational potential energy that shows up as a much greater wave height. The speed of the wave is constant at a uniform water depth, but the height and length of the wave are dependent on the severity of the earthquake that generates it. So, different tsunami waves may show up at different heights, and smaller ones may hardly be noticeable, but they arrive at the same time. As an analogy, think of sound: the speed of a sound wave does not vary with the volume or frequency.
Update: June 2012
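The square-root relationship above is easy to check for yourself. A short Python sketch using the answer's own numbers (32 ft/s² for gravity; function names are just illustrative):

```python
import math

G = 32.0  # acceleration of gravity in ft/s^2, as used in the answer above

def tsunami_speed(depth_ft: float) -> float:
    """Shallow-water wave speed: sqrt(g * depth), in ft/s."""
    return math.sqrt(G * depth_ft)

def fps_to_mph(fps: float) -> float:
    """Convert feet per second to miles per hour (5,280 ft per mile)."""
    return fps * 3600 / 5280

for depth in (13_000, 100):
    v = tsunami_speed(depth)
    print(f"depth {depth:>6} ft -> {v:5.0f} ft/s ({fps_to_mph(v):3.0f} mph)")
```

Running this reproduces the figures quoted: about 645 ft/s (≈440 mph) in 13,000 feet of water, and about 57 ft/s (≈39 mph) in 100 feet.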
Sexuality is a core characteristic and formative factor for human beings. It is a state of mind, representing our feelings about ourselves, what it's like to be male or female, how we relate to people of our own gender and those of the opposite gender, how we establish relationships, and how we express ourselves. It is basic to our sense of self. As such, it is an important part of development and growth. It is the ability to be intimate with another in mutually satisfying ways. Sexual feelings and actions can cover a gamut of expressions. Holding hands, flirting, touching, kissing, masturbating, and having sexual intercourse are just some of the ways in which sexuality can be expressed (MacRae, 2010). Religion, culture, ethnicity, and education can also affect how sexuality develops and is expressed (e.g., how sexuality was handled within one's family can affect how one's own sexuality develops). The Occupational Therapy Practice Framework: Domain and Process, 2nd Edition (AOTA, 2008) lists sexual activity as an activity of daily living (ADL). As such, occupational therapists include sexuality as part of a routine evaluation of clients, and occupational therapists and occupational therapy assistants address this area in occupational therapy interventions. Following an acute health crisis or as part of a chronic condition, clients may worry about how their health issues will affect their sense of self, their ability to function physically, and their opportunities to engage in sexual activity. Concerns may also relate to misconceptions or expectations of others, including partners, caregivers, and health care providers.
Occupational Therapy Interventions As a basic part of the human condition, sexuality is an ADL addressed with older adults; clients who are lesbian, gay, bisexual, and transgendered; clients with physical disabilities; clients with developmental disabilities or delays; and other recipients of occupational therapy services as part of a holistic approach to treating the whole person. Occupational therapy is a safe place for addressing sexuality, allowing the client to express fears and concerns, and offering assistance with problem solving. Empathy, sensitivity, and openness are necessary aspects of the therapeutic relationship, the foundation of occupational therapy, and are used in addressing sexuality. Partners are often included in occupational therapy interventions to achieve goals of mutual concern, such as sexual expression and satisfaction. Sexuality can be addressed by practitioners in any setting. Intervention can occur in homes, group homes, nursing homes, rehabilitation centers, community mental health centers, pain centers, senior centers, hospitals, retirement communities, and other venues. The following are types of interventions offered by occupational therapy practitioners. Health promotion: This approach consists of support groups, educational programs, and stress-relieving activities. For example, an occupational therapy practitioner could offer an educational program about safe sex for teenagers with developmental delays. Occupational therapy practitioners may also provide in-service training to assist caregivers in institutions such as skilled nursing facilities to understand the sexual needs of older adults and those with diverse sexual orientations. Such in-services might include introducing ways of ensuring privacy when partners are visiting. Remediation: This approach consists of restoring skills, such as range of motion, strength, endurance, effective communication, and social engagement, as part of meeting sexual needs.
An example is rehabilitation for clients following a hip replacement and addressing their concerns about physically being able to have sexual intercourse during the recovery process. Another example is developing leisure interests to help meet potential romantic partners when working with clients who report social isolation. Modification: This approach consists of changing the environment or routine to allow for sexual activity. Examples include resting prior to sexual activity for those with poor endurance; placing pillows under stiff or painful joints or preceding sexual activity with a warm bath; learning new positions to compensate for amputated limbs; and using positions that incorporate weight bearing to compensate for tremors. Enhancing an individual's ability to participate in sexual activities can have a profound effect on that person's life. By acknowledging the importance sexuality plays in all of our lives and displaying sensitivity to the personal nature of this ADL, occupational therapy practitioners help ensure that all aspects of their clients' lives are addressed in therapy. Providing empathy and appropriate information, devising adaptations, and encouraging experimentation to find resolutions can be invaluable services to clients. When practitioners routinely discuss sexuality as an ADL, clients can talk about and address any issues in this area. Collaborative problem solving can empower clients to gain control over this most intimate of areas. It can be self-validating, allow personal expression of sexuality in ways that are meaningful, strengthen self-esteem, and allow that person to become whole again. American Occupational Therapy Association. (2008). Occupational therapy practice framework: Domain and process (2nd ed.). American Journal of Occupational Therapy, 62, 625–683. doi:10.5014/ajot.62.6.625 MacRae, N. (2010). Sexuality and aging. In R. H. Robnett & W. C. Chop (Eds.), Gerontology for the Health Care Professional (pp. 235–258).
Sudbury, MA: Jones and Bartlett. By Nancy MacRae, MS, OTR/L, FAOTA, for the American Occupational Therapy Association. Copyright © 2013 by the American Occupational Therapy Association. This material may be copied and distributed for personal or educational uses without written consent. For all other uses, contact [email protected].
A number of the larger tribes had adopted republican forms of government, modeled after ours in their leading features. On the first day of July, 1839, the wise men of the Cherokee nation assembled in convention, or council, to frame an organic law, or constitution, for the government of the nation. After patient and mature deliberation, they adopted a constitution essentially republican, which has now been in force for a score of years. Their government consists of the executive, legislative, and judicial departments. The executive power is lodged in a chief, an assistant-chief, and a council of five, all of whom are chosen by the people for a term of four years. The chief, under certain restrictions, may exercise a veto power. The legislature consists of a senate, composed of at least sixteen members, and an assembly of not less than twenty-four members, all to be chosen by ballot, from districts the boundaries of which are defined by law. The sessions of the legislature open annually on the first Monday of October, when each house is organized by the election of presiding officers, the necessary number of clerks and under officers. Bills are introduced and passed through both branches in parliamentary form. The judiciary consists of the supreme and circuit courts, and the ordinary justices of the peace. The common law of England is recognized as in the States, and the right of trial by jury is secured to every citizen. Religious toleration is established, but no man is competent to testify as a witness in a court, or to hold a civil office, who denies the existence of God or a future state of rewards and punishments. After the adoption of the constitution the several officers were elected by the people, and the new government went into immediate and successful operation.
It is proper to state, however, that a government embracing the leading features of the present, but less perfect in its details, had been adopted by the Cherokees in the old nation east of the Mississippi. The Choctaws soon adopted a constitution quite similar to that of the Cherokees, except that the executive power was vested in a council of chiefs, one being chosen by each district. The chiefs were equal in power, and during the sessions of the legislature, they jointly performed the duties devolving upon the executive, and, under proper restrictions, they exercised the veto power. The Choctaw capital, or grand council, was located on the Kiemichi river, above Fort Towson, south and west of the geographical center of their territory. Each district had, also, a council ground, at which elections and courts were held and other local business was transacted. The sheriffs or marshals were styled "light-horse-men," and to them were committed numerous and responsible duties. They are the acting police, whose duty it is to execute the laws, arrest offenders, and execute the decrees of the courts. The chiefs serve for a term of four years; all are elected by the people viva voce. The constitution and laws of the Choctaws are printed in both the English and native languages. In 1845 they were all contained in a single duodecimo volume of less than three hundred pages. The government is by no means complete and perfect, yet it is quite efficient in its operations. The laws are executed with a good degree of promptness. The punishments, at the time of which we write, consisted of fines, whipping, and death; and, as there were no prisons in which to confine culprits, it was a matter of honor with accused persons to appear in court and answer to charges. If a man were charged with crime, and failed to come to court, he was stigmatized as a coward. To the high-minded Indian cowardice is worse than death.
It is affirmed that a full-blooded Choctaw was never known to abscond or secrete himself to evade the sentence of the law. Even when the sentence is death he will not flee, but will stand forth and present his breast to receive the fatal balls from the rifles of the light-horsemen. A circumstance was related to us which will serve to illustrate this trait of character. Two brothers were living together, one of whom had been charged with crime, convicted, and sentenced to be executed. When the morning came on which the sentence should be carried into effect, the condemned man manifested some reluctance in meeting the light-horsemen. The brother was both surprised and indignant. "My brother," said he, "you 'fraid to die; you no good Indian; you coward; you no plenty brave. You live, take care my woman and child; I die; I no 'fraid die; much brave!" The exchange was accordingly made; the innocent brother died while the guilty was permitted to live. This was said to have occurred before they emigrated west. In an earlier period of their history substitutes were frequently accepted, and when the guilty was not found any member of his family was liable to be arrested and made to suffer the penalty which should have been inflicted upon the criminal. The law required "an eye for an eye, a tooth for a tooth, blood for blood," but they would not execute two men for the murder of one. Two or more might be implicated, yet the death of one malefactor satisfied the demands of justice. Before the adoption of their present constitution, the injured or aggrieved party was permitted to take the case into his own hands, and to administer justice in the most summary manner; but since the organization of the new government every charge must take the form of a regular indictment, be carefully investigated, and decided in legal form.
Fish do have blood, and it is red. Other cold-blooded animals, like amphibians and reptiles, also have red blood. Fish have a circulatory system with blood and a heart as the pump, just like that of humans, and, just as in humans, fish blood is red because it contains hemoglobin, the iron compound that carries oxygen. If a fish is fresh and is cut near major vessels, you will see blood. In muscle tissue, which becomes fillets, the vessels are so small that the blood may not be evident; or, if the fish is not fresh, the blood may have coagulated or collected in one part of the body. Fish from the fish market may have already been gutted and beheaded and so may be drained of most blood. Even so, you might see blood around the spine, because a major blood vessel runs right under the arches of the vertebral column.