1. Why does the book choose to begin the study of the evolution of economic thought around the year 1500?
2. What factors were important in the transition from feudalism to capitalism? Discuss each factor.
1. Discuss the main features and policies associated with mercantilism.
2. Why did the mercantilists focus on exchange as the source of wealth? What was the role of the government under mercantilism?
3. Why did mercantilists try to achieve full employment?
4. What was Thomas Mun's major contribution to mercantilist ideas?
5. Discuss the ideas and contributions of Jean-Baptiste Colbert.
6. Discuss the ideas and contributions of Sir William Petty.
1. Who were the Physiocrats? What policies did they advocate, and what problems did their policies address?
2. Discuss the ideas and contributions of Francois Quesnay. Explain Quesnay's Tableau Economique. Why is the table important in the history of economic thought?
3. Discuss the ideas and contributions of Anne Robert Turgot. Why was Turgot dismissed from his duties as finance minister in 1776?
1. Discuss the ideas and contributions of Richard Cantillon.
2. Discuss the ideas and contributions of David Hume.
3. What is the price-specie flow mechanism and why is it important?
1. What is the main message of Adam Smith's book The Theory of Moral Sentiments?
2. According to Adam Smith, what are the four stages of economic and social development? What are the characteristics of each stage?
3. What is the role of the state according to Adam Smith? What makes for good taxes?
4. What reasons did Adam Smith give for increases in productivity brought about by the division of labor? What is the most fundamental division of labor? How does productive labor differ from unproductive labor, and why did he make this distinction? Ultimately, what is responsible for the wealth of nations?
5. Why does Adam Smith believe in free international trade? Are tariffs ever acceptable?
6. According to Adam Smith, what is the difference between the market price and the natural price? Why is this distinction important?
7. Why did Adam Smith adopt an "adding up" theory of value for economies that had advanced past the primitive stage?
8. According to Smith, why do wages differ across occupations? What was he trying to explain by making this distinction?
9. What is the wages fund doctrine?
10. What did Smith think would happen to the rate of profit over time? What reasons did he give for his conclusion?
1. Explain Malthus' population theory. What policy implications did Malthus draw from this analysis?
2. Explain Malthus's theory of gluts. What policy conclusions did he draw from the analysis?
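The contrast at the heart of Malthus's population theory — population growing geometrically while subsistence grows arithmetically — can be illustrated with a short numerical sketch. The numbers below are purely illustrative, not drawn from Malthus himself.

```python
# Illustrative sketch of Malthus's claim: population grows geometrically
# (doubling each generation) while the food supply grows arithmetically
# (a fixed increment each generation). All numbers are hypothetical.

def malthus_projection(generations, pop0=1.0, food0=1.0, food_step=1.0):
    """Return parallel lists of population and food supply per generation."""
    population = [pop0 * 2**g for g in range(generations)]
    food = [food0 + food_step * g for g in range(generations)]
    return population, food

pop, food = malthus_projection(6)
for g, (p, f) in enumerate(zip(pop, food)):
    print(f"gen {g}: population={p:.0f}, food={f:.0f}")
# Population runs 1, 2, 4, 8, 16, 32 while food runs only 1, 2, 3, 4, 5, 6,
# so population outstrips subsistence after just a few generations.
```

However small the initial gap, the geometric series eventually overwhelms the arithmetic one — which is why Malthus concluded that "checks" on population were inevitable.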
1. Explain Ricardo's theory of rent and the law of diminishing returns. What policy conclusions did he draw from this analysis?
2. Explain Ricardo's theory of comparative advantage.
3. Why did Ricardo focus on distribution rather than production?
4. Explain Ricardo's theory of distribution. Also explain why Ricardo believed that rent and profit are inversely related, and that profit and wages are inversely related. That is, explain and show graphically that, as population grows and the demand for corn rises, rents and wages go up, while profit falls.
5. Why does Ricardo believe the economy will end up in a stationary (no growth) long-run steady state?
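Ricardo's theory of comparative advantage (question 2 above) is easiest to see with his own two-country, two-good setup. The sketch below uses Ricardo's classic illustrative labour-cost figures for England and Portugal; the helper function is my own construction, not Ricardo's notation.

```python
# Ricardo's classic two-country, two-good example (labour hours per unit).
# Portugal is absolutely more productive in both goods, yet each country
# gains by specialising where its opportunity cost is lowest.
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

def opportunity_cost(country, good, other):
    # Units of `other` forgone per unit of `good` produced.
    return hours[country][good] / hours[country][other]

for country in hours:
    oc = opportunity_cost(country, "cloth", "wine")
    print(f"{country}: 1 cloth costs {oc:.2f} wine")

# England's cloth is cheaper in wine terms (0.83 < 1.12), so England
# should specialise in cloth and Portugal in wine, and both gain from trade.
assert opportunity_cost("England", "cloth", "wine") < \
       opportunity_cost("Portugal", "cloth", "wine")
```

The point of the example is that absolute productivity is irrelevant: only the ratio of labour costs within each country determines the pattern of specialisation.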
1. How does hedonism differ from utilitarianism?
2. How did Bentham's utilitarianism lead him to egalitarian reform proposals? Why didn't he advocate complete equality of income?
1. According to J.B. Say, what was the fourth factor of production? How did the addition of this fourth factor of production remove potential sources of class conflict from classical theory?
2. What is Say's law? What are the implications of Say's law?
Nassau William Senior
1. Discuss Nassau William Senior's beliefs concerning positive and normative economics.
2. Discuss Nassau William Senior's views on the poor laws. What policies did he help to enact as a member of the Poor Law Inquiry Commission?
John Stuart Mill
1. How do Mill's views on utility differ from Bentham's?
2. According to Mill, what are the three types of goods? Which is the most common?
3. What are Mill's views on government intervention?
1. Define and explain the following modes of production: Capitalism, State Capitalism, State Socialism, Utopian Socialism, Anarchism, Marxian Socialism (including the six stages of production), and Syndicalism.
2. According to Marx, in the transition from slavery to feudalism to capitalism, how does each new system fool workers into working harder even though, in the end, they are no better off (i.e. still at subsistence)?
3. What were Marx's views on the writings of Smith, Ricardo, Mill, Bentham, Say, and Senior? Why was he critical of much of what they wrote?
4. Explain Marx's theory of exploitation.
5. What are the reasons, according to Marx, for the falling rate of profit over time, capital accumulation, and crises? What are the consequences of the falling rate of profit, capital accumulation, and crises?
William Stanley Jevons and Carl Menger
1. What was Jevons' main contribution to the theory of exchange?
2. Explain Jevons' determination of the length of the working day.
3. What is the water-diamond paradox? How does Jevons solve it?
4. Discuss and illustrate (using a table) Menger's ideas on total and marginal utility.
5. Compare and contrast Menger's and Jevons' views on total and marginal utility.
6. What are Menger's views on factor price determination? How can they be used to refute the labor theory of value?
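Question 4 above asks for a table of Menger's total and marginal utility. A minimal sketch follows, with hypothetical utility numbers: marginal utility is simply the difference between successive totals, and it diminishes with each additional unit — which is also the key to the water-diamond paradox in question 3.

```python
# A Menger-style utility table (hypothetical numbers): total utility from
# successive units of a good, with marginal utility computed as the
# difference between successive totals.
total_utility = [0, 10, 18, 24, 28, 30]  # utility after consuming 0..5 units

marginal_utility = [total_utility[i] - total_utility[i - 1]
                    for i in range(1, len(total_utility))]

print("unit  TU  MU")
for unit, mu in enumerate(marginal_utility, start=1):
    print(f"{unit:>4}  {total_utility[unit]:>2}  {mu:>2}")
# Marginal utility falls 10, 8, 6, 4, 2: each extra unit adds less than the last.
```

Because water is abundant, we consume it far down this schedule where marginal utility is low; diamonds are scarce, so their marginal utility (and hence price) stays high, even though water's total utility is enormous.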
John Bates Clark
1. Explain Clark's marginal productivity theory and how it was used to counter Marx's claim that labor is exploited under capitalism.
2. Discuss the ethical implications of marginal productivity theory.
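Clark's marginal productivity theory can be put in numbers. The sketch below uses hypothetical output figures: a competitive firm hires workers until the marginal product of the last worker equals the wage, so labour as a whole is paid what it contributes — Clark's answer to Marx's exploitation claim.

```python
# Toy sketch (hypothetical numbers) of Clark's marginal productivity theory:
# the firm keeps hiring as long as the marginal product covers the wage.
output = [0, 20, 36, 48, 56, 60]   # total output with 0..5 workers

def marginal_products(total):
    """Output added by each successive worker."""
    return [total[i] - total[i - 1] for i in range(1, len(total))]

mp = marginal_products(output)      # [20, 16, 12, 8, 4]
wage = 8
hired = sum(1 for m in mp if m >= wage)
print(f"marginal products: {mp}; at wage {wage}, the firm hires {hired} workers")
# The 4th worker adds exactly 8 units, equal to the wage. In Clark's view
# there is no exploitation: each factor is paid its marginal product.
```

The ethical reading (question 2) turns on whether being paid one's marginal product really counts as a "just" distribution, which is where critics of Clark press hardest.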
Francis Ysidro Edgeworth
1. What was Edgeworth's main contribution to utility theory? Explain.
2. Explain the contributions made by Edgeworth to production theory.
1. Explain why Marshall felt that economics is the most precise of all the social sciences.
2. Explain Marshall's views on consumer and producer surplus.
3. According to Marshall, what determines prices, supply or demand? Does the time period matter?
4. What factors, according to Marshall, cause firms to become more efficient as they grow? What determines whether they are increasing or decreasing cost industries? Why don't decreasing cost industries eventually become monopolized?
5. Explain why Marshall believed that taxing increasing cost industries and using the proceeds to subsidize decreasing cost industries is a desirable thing to do.
6. Why do modern supply and demand diagrams have the independent variable on the vertical axis rather than, as is more usual, the dependent variable?
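Marshall's answer to question 3 above is that supply and demand jointly determine price, like the two blades of a pair of scissors. A minimal sketch with hypothetical linear schedules makes the point: neither curve alone pins down the price.

```python
# Marshall's "scissors": with linear demand Qd = a - b*P and linear supply
# Qs = c + d*P (hypothetical coefficients), equilibrium requires both blades.
def equilibrium(a, b, c, d):
    """Solve a - b*P = c + d*P for the market-clearing price and quantity."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

p, q = equilibrium(a=100, b=2, c=10, d=1)
print(f"equilibrium price = {p:.1f}, quantity = {q:.1f}")
# Changing any one of a, b, c, d shifts the equilibrium: price depends on
# both schedules, not on demand or supply alone.
```

Marshall's time-period qualification can be read off the same setup: in the short run supply is inelastic (small `d`), so demand shifts mostly move price; in the long run supply is elastic, so cost conditions dominate.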
1. How does general equilibrium analysis differ from partial equilibrium analysis? What does Walras' general equilibrium analysis have to say about the determination of input and output prices, i.e. the debate over whether input prices cause output prices or vice-versa?
John Gustav Knut Wicksell
1. According to Wicksell, how does inflation or deflation come about? Can inflation/deflation be controlled, i.e. can prices be stabilized? If so, how?
2. What is forced saving?
1. According to Irving Fisher, how is the interest rate determined? What competing forces are in balance when the interest rate is at its equilibrium value?
2. What is the Fisher equation? What is the Fisher hypothesis?
3. Explain Fisher's theory of debt deflation.
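The Fisher equation in question 2 links the nominal interest rate, the real rate, and expected inflation. The exact relation is (1 + i) = (1 + r)(1 + π); the familiar textbook form i ≈ r + π is an approximation that is good when rates are small. The numbers below are illustrative.

```python
# The Fisher equation: exactly, (1 + i) = (1 + r)(1 + pi);
# approximately, i = r + pi. Illustrative numbers only.
def nominal_rate(real, inflation):
    """Exact Fisher relation solved for the nominal rate."""
    return (1 + real) * (1 + inflation) - 1

r, pi = 0.03, 0.02            # 3% real rate, 2% expected inflation
i_exact = nominal_rate(r, pi)
i_approx = r + pi
print(f"exact i = {i_exact:.4f}, approximation = {i_approx:.4f}")
# The exact value 0.0506 differs from the 0.0500 approximation only by the
# cross-term r*pi, which is negligible at low rates.
```

The Fisher hypothesis then asserts that the real rate is roughly stable, so movements in expected inflation show up one-for-one in the nominal rate.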
John Maynard Keynes
1. What economic conditions set the stage for the emergence of Keynesian economics? Prior to Keynesian economics, what was the prevailing view regarding government intervention to cure recessions? What was the basis for this view? What is the Keynesian view on government intervention?
2. How is income determined in the Keynesian model? What are the key forces that cause fluctuations in economic activity, i.e. how do the MPC, MEC, and interest rates interact to produce economic fluctuations? Where do expectations (animal spirits) fit into this explanation? What role can government play in stabilizing the economy?
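Income determination in the simplest Keynesian model reduces to the multiplier: with consumption C = a + MPC·Y, equilibrium income solves Y = C + I + G, giving Y = (a + I + G)/(1 − MPC). The sketch below uses hypothetical numbers to show how a change in investment is multiplied.

```python
# Minimal Keynesian income-determination sketch (illustrative numbers).
# Consumption is C = a + MPC*Y, so equilibrium Y = C + I + G implies
# Y = (a + I + G) / (1 - MPC), with multiplier 1 / (1 - MPC).
def equilibrium_income(a, mpc, investment, government):
    return (a + investment + government) / (1 - mpc)

y0 = equilibrium_income(a=50, mpc=0.8, investment=100, government=50)
y1 = equilibrium_income(a=50, mpc=0.8, investment=110, government=50)
print(f"Y = {y0:.0f}; after +10 investment, Y = {y1:.0f} "
      f"(multiplier = {(y1 - y0) / 10:.0f})")
# With MPC = 0.8 the multiplier is 5, so a 10-unit rise in investment
# raises equilibrium income by 50 units.
```

This is why a collapse of investment driven by pessimistic "animal spirits" can depress income by a multiple of the initial shock, and why a fiscal stimulus of G works through the same channel in reverse.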
|
The essential functions of the digestive glands are:
In man there are three pairs of salivary glands (submaxillary, sublingual, and parotid) which secrete saliva. Saliva contains an enzyme called salivary amylase which breaks down starch (a complex substance) in the food into maltose (a simpler sugar). Thus, in the mouth cavity saliva moistens the masticated food and starts the digestion of carbohydrates.
The gastric glands secrete hydrochloric acid and gastric juice which help in digestion of food. The enzyme pepsin present in the gastric juice acts on the proteins of the food and breaks them into smaller units called peptones and proteoses. The food then passes into the small intestine.
It is the largest gland of the body. It weighs 1.5 kg in man. Liver performs many functions. As far as digestion is concerned, it secretes a fluid called bile.
Bile juice produced by the liver is stored in the gall bladder. Gall stones, which are found in the gall bladders of about 8% of people, are chiefly concretions (depositions) of cholesterol, bile pigments and calcium salts. Bile is a yellowish-green, alkaline fluid. Bile emulsifies fats, breaking them into small globules. In this way, fat globules are easily exposed to the action of fat-hydrolyzing enzymes. The acidic food coming from the stomach becomes alkaline when it is mixed with the bile. This is an extremely important step which ensures further digestion of the food. The digestive enzymes that are brought into the duodenum and ileum can catalyse the breakdown of food only in an alkaline medium.
It is the second largest gland of the body. It lies in the fold of duodenum. It is yellowish. It secretes pancreatic juice.
The pancreatic duct pours pancreatic juice into the duodenum. Pancreatic juice contains a number of digestive enzymes. These include trypsin and chymotrypsin for the breakdown of proteins; amylase for the splitting of polysaccharides; lipase for the breakdown of fats; and nuclease for the breakdown of nucleic acids. These enzymes catalyse the breakdown of the different constituents of food, but not completely enough to break all of them down into their basic units.
The final step of digestion takes place in the ileum. There are numerous smaller glands occupying the walls of the small intestinal tract. These glands secrete what is termed intestinal juice or succus entericus. The intestinal glands are in the form of sunken pits or crypts which are interspersed among the finger-like villi. The digestive enzymes in the intestinal juice include carboxypeptidase and aminopeptidase, which break small peptides into amino acids; sucrase, maltase and lactase, which break disaccharides into their respective monosaccharides; lipase, which breaks lipids into fatty acids and glycerol; and nuclease, which breaks nucleic acids into nucleotides.
Absorption of Digested Food
Absorption of completely digested food takes place in the ileum. There are absorptive cells lining the finger-like projections, the villi, of the ileum. These absorptive cells of the villi absorb the units of food by a process involving the expenditure of energy, known as active transport. The absorbed food is then brought into the blood vessels. The products of lipid digestion are brought into the lymphatic vessels. From here the digested food materials are transported to different parts of the body through the circulation.
Assimilation of Digested Food
The process whereby digested food is absorbed and utilized is termed assimilation. One of the ways by which digested food can be utilized is to obtain energy from it by the process of respiration. The excess of monosaccharides is joined to form glycogen by the enzymes of the liver and stored as such. The amino acids may be used in the synthesis of a variety of structural and functional proteins. Ammonia is produced by the removal of the amino group of amino acids and is converted into less toxic urea (a nitrogenous waste) in the liver. Urea is removed from the blood through the kidneys. The glycerol and fatty acids either provide energy or get reconverted into fats. These fats accumulate in different organs and below the skin. The absorbed food is also utilized for the formation of new cells and tissues, leading to growth and development of the body.
Metabolism and Release of Energy
The sum of all biochemical reactions occurring within the living organisms is called metabolism. They are of two general types:
Catabolism involves the breakdown of complex molecules into simpler ones. These reactions release energy, mainly in the form of heat, and are known as exergonic reactions. Examples are the processes of respiration, digestion, etc.
This involves the biochemical reactions which lead to the formation or synthesis of complex molecules from simpler ones. In this constructive process, energy is required and, therefore, the process is called endergonic reaction. Photosynthesis, protein synthesis are anabolic processes.
Living organisms grow if the anabolic rate is higher than the catabolic rate. As an organism approaches old age, the catabolic rate becomes higher than the anabolic rate.
Two stages are involved in the liberation of energy from food. The first stage involves the breaking down of complex molecules into simpler forms. In the second stage, oxygen is required for the oxidation of these simpler molecules. Along with the liberation of energy, CO2 and water are released. For the expulsion of CO2 and intake of oxygen, animals breathe or respire. The chemical reactions taking place in this process remain the same in every organism, whether it is a frog, a bacterium, a bird or a man. This suggests a common ancestry of all organisms.
|
How to Express Adjectives and Adverbs in American Sign Language
In English, a modifier can come before or after the word it’s modifying. However, in American Sign Language (ASL), you typically place the adjective or adverb — the modifier — after the word that it modifies. But sometimes in Sign, you may find yourself expressing the modifier at the same time you sign the word it modifies — just by using your face.
Your facial expressions can describe things and actions in ASL. For instance, if something is small or big, you can show the extent of it while you sign without actually signing small or big. Instead, use facial expressions.
For example, you can describe a small piece of thread by pursing your lips, blowing out a little air and closing your eyes halfway. If something is very thick, puff out your cheeks. You can convey that it’s raining hard or that a car is moving fast by moving your eyebrows or shaping your mouth a certain way.
The following examples give you a good idea of some of the different facial expressions you may use to get your point across when describing things in Sign:
Some adverbs used in English aren't usually used in Sign, such as the words very and really. You have to incorporate them into the verb by using facial expressions.
|
How we change what others think, feel, believe and do
Speech Act Theory
Getting a glass of water is an action. Asking someone else to get you one is also an act.
When we speak, our words do not have meaning in and of themselves. They are very much affected by the situation, the speaker and the listener. Thus words alone do not have a simple fixed meaning.
Two types of locutionary act are utterance acts, where something is said (or a sound is made) and which may not have any meaning, and propositional acts, where a particular reference is made. (Note: acts are sometimes also called utterances - thus a perlocutionary act is the same as a perlocutionary utterance.)
Searle (1969) identified five illocutionary/perlocutionary points:
Thus pretty much all we do when we are talking is assert, direct, commiserate, express and declare. In fact we follow two types of rules:
The meaning of an utterance is thus defined more by convention than the initiative of the reader. When we speak, we are following learned rules.
Performativity occurs where the utterance of a word also enacts it ('I name this ship...'). It is a form of illocutionary act. This has been taken up by theorists such as Judith Butler in feminism and has been used to argue that pornography is less a form of speech than a performative act of sexual degradation. It is related to suture and interpellation in the way it forces a situation.
Ludwig Wittgenstein's 'ordinary language philosophy' held that the meaning of language depends on its actual use, rather than on any inherent meaning.
Speech-act theory was originated by Austin (1962) and developed further by Searle (1969).
Oh! - is an utterance (note that communication is not intended - it is just a sound caused by surprise).
The black cat - is a propositional act (something is referenced, but no communication may be intended)
The black cat is stupid - is an assertive illocutionary act (it intends to communicate).
Please find the black cat - is a directive perlocutionary act (it seeks to change behaviour).
By understanding the detail of what is being said, you can hence understand and communicate better with others.
|
Object-Oriented Design (OOD)
Definition - What does Object-Oriented Design (OOD) mean?
Object-oriented design (OOD) is the process of using an object-oriented methodology to design a computing system or application. This technique enables the implementation of a software solution based on the concepts of objects.
OOD serves as part of the object-oriented programming (OOP) process or lifecycle.
Techopedia explains Object-Oriented Design (OOD)
In object-oriented system design and development, OOD helps in designing the system architecture or layout - usually after completion of an object-oriented analysis (OOA). The designed system is later created or programmed using object-oriented techniques and/or an object-oriented programming language (OOPL).
The OOD process takes the conceptual systems model, use cases, system relational model, user interface (UI) and other analysis data as input from the OOA phase. This is used in OOD to identify, define and design systems classes and objects, as well as their relationship, interface and implementation.
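To make the analysis-to-design step concrete, here is a toy sketch in a hypothetical library-lending domain (the domain, class names and methods are my own illustration, not from the source). OOA would identify the things in the problem ("a library lends books"); OOD turns them into classes with attributes, a relationship, and an interface the implementation will realise.

```python
# Toy OOD sketch (hypothetical domain): classes, an aggregation
# relationship, and a small public interface derived from analysis.
class Book:
    def __init__(self, title):
        self.title = title
        self.on_loan = False

class Library:
    def __init__(self):
        self._books = []          # a Library *has* Books (aggregation)

    def add(self, book):
        self._books.append(book)

    def lend(self, title):
        """Mark the first available copy as loaned; None if unavailable."""
        for book in self._books:
            if book.title == title and not book.on_loan:
                book.on_loan = True
                return book
        return None

library = Library()
library.add(Book("OOD in Practice"))
loaned = library.lend("OOD in Practice")
print(loaned.on_loan)  # True
```

The design decisions (which classes exist, that `Library` aggregates `Book`, that lending is exposed through `lend`) are exactly the OOD outputs the paragraph above describes; the method bodies belong to the later implementation phase.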
|
Terms & Definitions
Soil with a pH below 7.0. [Definition of pH is below]
Soil with a pH above 7.0. [Definition of pH is below]
Any substance such as lime, gypsum, or sulfur used to alter the properties of a soil, generally to improve its physical properties.
Plants living one year or less. During this time the plant grows, flowers, produces seeds, and dies.
A plant that completes its life cycle within two seasons.
Biosolids (sewage sludge)
A byproduct produced by wastewater treatment processes that has plant nutrient content. [Espoma does NOT use any sewage sludge in its products because it may be harmful to human health and the environment.]
A condition in which a plant or a part of a plant is light green or greenish yellow because of poor chlorophyll development.
Soil material containing more than 40% clay, less than 45% sand, and less than 40% silt. [Clay soils are hard and compact, and do not allow water, air and roots to penetrate. See gypsum for possible solution.]
A mixture of organic residues and soil that has been piled, moistened, and allowed to decompose biologically.
Refers to trees and shrubs that lose their leaves every fall.
Refers to trees and shrubs that retain their leaves throughout the year.
Any substance containing one or more recognized plant nutrients that is used for promoting plant growth.
A substance added to fertilizers to provide bulk, prevent caking or serve some purpose other than providing essential plant nutrients.
The stable fraction of the soil organic matter remaining after the major portion of plant and animal residues have decomposed. [Helps improve soil structure.]
To remove soluble materials from soil with water.
Any material such as straw, leaves, and loose soil that is spread upon the surface of the soil to protect plant roots from the effects of rain, soil crusting, freezing or evaporation.
Natural Organic Fertilizer
Materials derived from either plant or animal products containing one or more elements (other than carbon, hydrogen and oxygen) that are essential for plant growth. [Espoma broadens this definition to include natural minerals as well.]
A material containing carbon and one or more elements other than hydrogen and oxygen essential for plant growth. [This is the chemist approach used by state regulators.]
A plant that grows indefinitely from year to year and usually produces seed each year.
A measure of acidity or alkalinity of a soil. When soil pH is not in the proper range, nutrient uptake can be hindered.
See biosolids above
Applying plant food on the soil close enough to the plant so that cultivating or watering carries the plant food to the plant’s roots.
Layer of tough, brown, fibrous material on top of the soil composed of roots, stems, and stolons (above-ground horizontal stems) that haven’t broken down yet. When excessive, it can cause shallow root development due to poor water infiltration.
|
Originally, the Turks were nomads who lived in dome-like tents appropriate to their natural surroundings, the deserts. These tents later influenced Turkish architecture and ornamental arts.
Turkish Architecture and the Seljuk Turks
At the time when the Seljuk Turks first came to Iran, they encountered an architecture based on old traditions. Integrating this with elements from their own traditions, the Seljuks produced new types of structures. The most important type of structure they formulated was the "medrese". The first medreses were constructed in the 11th century, marking the beginning of Turkish architecture.
Another area in which the Seljuks contributed to architecture is that of tomb monuments. These can be divided into two types: vaults and big dome-like mausoleums. Turkish architecture reached its peak during the Ottoman period.
The Ottomans and the Architecture in Turkey
The Ottoman period began in the 1300s, when Ottoman art was in search of new ideas. During this period we encounter three types of mosque: tiered, single-domed and subline-angled mosques. The Haci Özbek Mosque in Iznik is the first example of Ottoman single-domed mosques.
The City and the Architecture in Turkey
In Ottoman times the mosque did not exist by itself. It was looked on by society as being very much interconnected with city planning and communal life. Beside the mosque, there were soup kitchens, theological schools, hospitals, Turkish baths and tombs.
During the years 1720-1890, Ottoman art deviated from the principles of classical times. In the 18th century, during the Lale period, Ottoman art came under the influence of the excessive decorations of the west; Baroque, Rococo, Ampir and other styles intermingled with Ottoman art.
Neoclassical Period Architecture in Turkey
In Turkish architecture, the years 1890-1930 are looked upon as the neoclassical period. In this period, Turkish architects looked to the religious and classical buildings of former times for inspiration in their attempts to construct a national architecture. Nationalism, developing strongly after the second Ottoman constitutional period, freed Ottoman architecture from the influence of western art, and thereby brought about a new style based on classic Ottoman architecture.
Contemporary Architecture in Turkey
These notable works were followed by a new approach directed towards contemporary architecture. The Ismet Pasa Girls’ Institute, the Ankara Faculty of Letters, the Saracoglu district, the Grand Theatre and the Istanbul Hilton paved the way for recognition of contemporary architecture.
|
Studying imitation is a useful way to measure the memory of babies before they are old enough to talk. Once children can talk, they can just tell us what they remember! Imitation is called deferred imitation when there is a delay between the time a model demonstrates a behavior and when the child has an opportunity to imitate the action. This delay can range from minutes to months. During the delay, children do not have the opportunity to play with the objects that the model used to demonstrate the new behaviors.
Children’s ability to imitate actions increases with age. The graph shows that as children get older, they can remember a simple one-step action for much longer periods of time. For example, one research study found that 9-month-olds show deferred imitation of a simple action, such as pushing a button to make a sound, after 24 hours. They can watch a model perform an action on an object one day, store it in their memory, then recall and perform the same action 24 hours later. Another study found that 12-month-olds imitate a simple one-step action after a 4-week delay. Amazingly, 16-month-olds can remember a similar, simple action for 4 months!
Children’s ability to remember actions decreases when the actions are more complex. If 18-month-olds view a 3-step action, like putting a ball in a jar, placing a stick on the jar, and shaking it to make a rattle, they are likely to remember it, but for a shorter amount of time. Eighteen-month-olds remember this 3-step sequence for about 2 weeks.
Deferred imitation supports the teaching and learning of non-verbal behaviors and traditions. This occurs when children and adults remember and reproduce behaviors at different time points or in a new place from where the behavior was first modeled. For example, when you see children rocking a baby doll in their arms saying “shhh,” it’s likely deferred imitation in action.
|
The midwater, or mesopelagic, zone is located between the ocean's photosynthetic surface waters and the sea's deep, dark benthic layer. Even though this zone accounts for only one quarter of the entire ocean, it contains the majority of the ocean's life.
Scientists have divided the ocean into 5 main layers. These layers, known as "zones", extend from the surface to the most extreme depths where light can no longer penetrate. These zones are where some of the most bizarre and fascinating creatures can be found. As we dive deeper into these vast and unexplored places, the temperature drops, light disappears and the pressure increases at a tremendous rate.
In the past, researchers used nets towed by ships to collect and study organisms drawn from the midwater layers. These nets were capable of catching organisms within certain zones but many deep-sea organisms have bodies so fragile, with water making up the bulk of their mass, that their bodies collapsed while they were being brought to the surface. Many of them were so badly damaged that their original shape was hardly recognizable.
Today, with the advent of blue water diving techniques, submersibles, and deep sea remotely operated vehicles (ROVs), it has become possible to study and observe these organisms in their natural habitats.
Scientists have been surprised at the number and variety of species discovered in the deep ocean's midwater zone. Many of these creatures are soft and gelatinous, from transparent squid, octopuses and jellyfish to large colonial animals called siphonophores.
One such group of deep-sea creatures is a very diverse family of organisms called Cnidarians. These organisms include the familiar anemones, corals and jellyfish, and live as either a stationary polyp or a free-swimming medusa, or in some cases both. All Cnidarians share a similar body plan: they can be simply described as a sac within a sac. These organisms possess no distinct head, digestive or structural organs, but all possess cnidae, specially modified stinging cells.
Many coastal Cnidarians, such as corals, harbor packets of light-absorbing algae called zooxanthellae. The midwater region, however, is where sunlight is too weak for photosynthetic organisms. The only light found at these depths is produced by bioluminescence from the living inhabitants. It has been estimated that 90% of deep-sea marine life can produce bioluminescence in one form or another.
The most commonly encountered jellyfish belong to the class Scyphozoa. They are disc-shaped animals seen floating near, or beached on, the shore. Within Scyphozoa is the order Semaeostomeae, which contains 50 described species of mainly coastal-water jellyfish, including the popular Moon Jellyfish (Aurelia) and Sea Nettle (Chrysaora). The order Coronatae includes 30 (and growing) described species of deep-sea jellyfish. The order Rhizostomeae contains 80 described species, including the Cassiopea jellyfish, which swim upside down to expose their photosynthetic symbiotic algae to sunlight. The order Stauromedusae consists of 30 described species, the non-swimming stalked jellies generally found in cooler waters. They are vase-shaped and fixed by a basal stalk, with the mouth pointed upwards.
Jellyfish, or Sea Jellies, are made up of 99% water, and are found all over the world and in all the seas, even in some freshwater locations. A jellyfish's body is made up of two layers with a jelly-like substance in between, which is how they get their name. Their bodies range in size from less than 1 inch to 16 inches across, with a few species growing up to 6 feet across and 15 feet in length.
Most wild jellyfish have a very short lifespan, living only a few weeks or a couple of years. They feed on small plankton-type animals that they capture with their tentacles, which bear stinging cells called nematocysts. It is these stinging cells that the adventurous swimmer may unfortunately swim into, with painful results.
The life cycle of Scyphozoan jellyfish consists of three stages. Adult male and female jellyfish sexually produce eggs and sperm, and the fertilized larva is either 'brooded' within the gut of the adult or develops as a free-swimming planula. The planula settles onto a substrate (the hydratuba stage) and develops into a polyp, or scyphistoma. The scyphistoma, a sessile or stationary polyp, asexually buds off a small medusa called an ephyra.
These ephyrae resemble small, pulsing 1/8-inch-diameter snowflakes. Within about two weeks their dome-shaped bell begins to become apparent. After about a month the four trailing oral arms begin to develop. From this point it enters a growth phase, and within four to six months a saleable tank-raised jellyfish is ready.
Midwater Systems originally planned to develop a free-standing acrylic jellyfish tank, but quickly realized that the available livestock for a jellyfish tank was nonexistent. The local public aquariums were prohibited from exchanging livestock and were guarded with their information. Fees for collection permits are expensive and many collectors are hesitant. It is hard to sell a jellyfish tank without jellyfish! The project then took on a second task: to learn how to produce livestock.
Having acquired an initial batch of tank-raised adult Moon jellyfish from a laboratory in the Midwest, which produced larvae and the resulting polyps, Midwater Systems was in the tank-raised jellyfish livestock business. Recognizing a need and then developing a series of specialty tanks, the product line grew to include tanks for holding budding polyps, tanks that gently rotate newborn ephyrae, and larger grow-out tanks that can double as retail holding systems for jellyfish sales.
Along with tank-raised jellyfish, Midwater Systems has developed a series of patent-pending, free-standing complete systems designed to maintain jellyfish, called the Jelliquarium.
The Jelliquarium, known in the scientific community as a plankton kreisel, is uniquely designed. Water is introduced as a laminar flow, creating a gentle current that keeps the jellyfish in suspension. This flat stream of water acts as a boundary along the edges of the tank, producing a flow that separates out debris without drawing the jellies into the filter system, while maintaining the gelatinous animals in suspension.
It is hoped that the Jelliquarium systems and the information presented within this web site will spawn a new facet of the marine aquarium hobby, and a new fascination with keeping jellyfish.
Part 2: Food & Shelter
by Jim Theler
|Settlement of the Effigy Mound People|
The Effigy Mound people lived by hunting and gathering from the wild. They moved seasonally to take advantage of different animals, plants and resources. In the summer months the Effigy Mound people in western Wisconsin moved to the Mississippi or Wisconsin River valleys. In late summer or early fall, they would form smaller groups and move to one of the interior valleys of western Wisconsin’s un-glaciated “Driftless Area.” Rather than returning to the location where they had spent the previous winter, they would select another valley for their winter camp. Selection was important, as people needed to be spaced out on the landscape so as not to overuse resources, namely deer and firewood, which were vital for winter survival.
Frequent moving necessitates having simple, portable houses. In early historic times, Native peoples in this area used simple pole structures covered with cattail mats or sheets of bark that could be tied to the poles, making a secure dwelling for most seasons. In western Wisconsin archaeologists do not find evidence of year-round houses of the type we see in some agricultural societies. Rock shelters were popular winter living sites, especially if they were on hillsides or cliff faces that faced south or east and were located near a water source. Archaeologists have also found indications during the Effigy Mound period of circular, semi-subterranean houses, some with a long entrance; these were designed for temporary refuge in the coldest winter weather. These houses were apparently heated with hot rocks brought in from fireplaces outside.
During the fall and winter months when the Effigy Mound people lived in the interior valleys, they would primarily hunt deer, as well as elk and smaller game. Bison were absent or very rare in this area, and black bears were rarely taken. Archaeological excavations at winter sites have uncovered tens of thousands of animal bones, many of which can be identified. By analyzing the animal remains and counting the number of right and left bones, it is possible to tell not only what species of animals were harvested, but the number of animals, the amount of meat represented, and which animals were most important in the diet. The answer to that last question is deer. Typically, deer, with an occasional elk, made up 85% to 95% of their winter diet.
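The right-and-left bone tally described above corresponds to a standard zooarchaeological count, the minimum number of individuals (MNI): since each animal contributes one left and one right of a given skeletal element, the larger of the two tallies is the smallest herd that could have produced the bones. A minimal sketch (the function name and the example figures are illustrative assumptions, not counts from this article):

```python
def mni(left_count: int, right_count: int) -> int:
    """Minimum number of individuals implied by tallies of a paired
    skeletal element (e.g., left vs. right deer femurs)."""
    return max(left_count, right_count)

# Hypothetical example: 37 left and 41 right deer femurs
# imply at least 41 deer at the site.
deer_minimum = mni(37, 41)
```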
The Effigy Mound hunters used the bow and arrow for hunting. Small, lightweight arrow points found at their living sites are very different from the larger, heavier spear points of earlier times. The bow and arrow replaced the spear about A.D. 500 or 600. This was an important innovation in firepower: with a quiver of arrows, a good bowman can get off several shots in a minute, increasing hunting efficiency. While a lone hunter would be able to harvest game, small groups doing drives, with the most skilled archers stationed at ‘nick points’ where deer would flow through, were undoubtedly the most effective strategy. Based on our knowledge of hunters and gatherers, everyone shared in the harvest, and an animal didn’t belong to just one person.
So just how abundant were deer? In our oak savanna-tall grass prairie landscape, Effigy Mound hunters were the apex predators, and it is believed that deer were much less common than at present. Today, in good habitat, wildlife managers often find 20 to as many as 50 deer per square mile. During Effigy Mound times, that number was probably closer to 2 to 5 deer per square mile. Overhunting could have exceeded the cull rate the herd could sustain, and this scenario would place the humans in jeopardy during the lean, late winter and early spring months.
Deer and elk bones from winter sites are often found broken open, with vertebrae and ribs pounded into small fragments. This was probably done to remove the marrow, a rich source of fat and other nutrients. Smaller crushed bone was boiled to render “bone grease” that could be scooped off the top of the pot. Larger long bones were split open and the tubes of marrow removed. Native Americans also made “pemmican,” a sausage-like product made of fat, marrow, dried venison and sometimes berries. Pemmican could be kept for long periods during the cold season and consumed as needed. There is little doubt that it was made in Effigy Mound times.
|Richard's Arrowhead Collection|
|Gourds from the mound builders' time|
When winter broke, the small groups of people would move to their summer camps along the river valleys. With the stress of surviving winter behind them, living was much easier. Fish, mussels and small game were readily available and there was not much need for large quantities of firewood. Excavations at summer sites along the shores of the Mississippi and its backwaters have uncovered vast refuse deposits with the remains of freshwater mussels, fish, small mammals and nesting waterfowl mingled with broken pots, arrow points and charcoal from camp fires.
In our next newsletter article we’ll look further into some of the social aspects of the Effigy Mound building society. These people had an interesting way of organizing their community and people. We’ll discuss more of these aspects as well as how their burial mounds fit into the big picture.
Drawings & Gourd Photo borrowed from the Mississippi Valley Archaeology Center.
Final Examination / Fall Semester 2003
Humanities 240: Converging Hemispheres
Question 1. “Discuss the two selections from John Locke in the Converging Hemispheres Reader. Include in your discussion who he was, the intellectual movement he inspired, the specific historical conditions he was responding to and the main arguments he makes.”
Who He Was
Born in 1632 in Wrington, England, John Locke was the philosopher often called the Father of the Enlightenment. A teacher of Greek and Latin at Oxford University for some time, Locke later met Anthony Ashley Cooper, Earl of Shaftesbury, and served as his physician. Locke left for France and stayed there for four years (1675-9) to study philosophy. He came back to England afterwards, only to be suspected of radicalism by the government “because of his close association with Shaftesbury” (Reader 35). Because of this, he left for Holland in 1683, not to return to England until six years later. Not long after his return he published what are generally considered his two most significant works, “An Essay Concerning Human Understanding” and “Two Treatises of Government.” Locke came to be thought of as the foremost philosopher on freedom; he continued writing until his death in 1704 in Oates (35). Locke remains world-renowned as the founder of British empiricism and for his avant-garde vision of democratic government, his belief in social (and fundamental) equality, and his all-encompassing writings that epitomized the ideals of the Enlightenment.
The Intellectual Movement He Inspired
John Locke was one of many philosophers of the Age of Enlightenment; perhaps one factor that brought about his great fame, however, was his logical, straightforward style—he wrote about extraordinary concepts in ordinary language. He was also one of the first to state some of the fundamental truths about human nature, and he related such truths to all men, rather than just those of a particular class or race. These are the principles of the Enlightenment doctrine, which Locke promoted: shift in focal point from the group to the individual; free labor; rationality as the distinguishing feature of humanity; a general rejection of traditional social, religious and political ideas; a government based on consent of the governed; equality of all people; opposition to royal absolutism; “tabula rasa”; religious freedom; and the emphasis of empirical knowledge rather than theoretical.
Certainly the coming of the Enlightenment was influenced by other factors as well. From a historical standpoint, the Enlightenment stems from (or arose as a reaction to) monarchy and the European encounter. The textbook Traditions & Encounters states that the 18th-century philosophical movement “began in France” and “spread concepts from the Scientific Revolution” (G-2). John Locke, however, was one of the chief proponents of this progressive movement and its ideology. He also set down three core components for the modern world: free labor, rights of property and the abolition of monarchy.
Historical Conditions to Which He Was Responding
Locke was opposed to some of the ideas of English philosopher Thomas Hobbes; in particular, he disagreed with the notion of absolute monarchy. This supposedly self-justified concept held that absolute monarchy, or royal absolutism, came from divine right. Although Hobbes did not buy into the idea of divinely chosen sovereigns (the sovereign’s power really came from the people, he said), he did believe that “the sovereign’s power is absolute and not subject to the law” (“Hobbes, Thomas,” Columbia). This absolutism was also characterized by kings’ belief that they could make decisions completely without counsel.
Another one of the topics Locke argued against was the supposed “divine right” of kings (above-mentioned in part), a doctrine supported by English royalist political author Sir Robert Filmer. The dictionary defines it as “the right of a sovereign to rule as set forth by the theory of government that holds that a monarch receives the right to rule from God and not from the people” (“divine right,” Webster’s). Locke, by contrast, used his belief in equality to argue against this. Based on his writings, his feeling was that—under divine right—everyone is born into subjection, and so no one is equal in that system. Locke was against any form of subjection, and countered the “divine right” in the first part of “Two Treatises of Government.”
John Locke disputed Thomas Hobbes’ thoughts again when it came to the matter of human nature. In “An Essay Concerning Human Understanding,” Locke describes the state of man when he first is born: He says every person is born a “tabula rasa,” or blank slate, and experience imprints knowledge onto it. He says that sensory perception, rather than innate knowledge, is how we come to learn and understand. He specifically contrasts with Hobbes in “Second Treatise,” however; both men believe that there is a “state of nature,” but Hobbes thinks of it as being characterized by constant war, violence and that “man is by nature a selfishly individualistic animal at constant war with all other men” (“Hobbes”). He says that men are equal—in their self-seeking, anyway. In Chapter II (“Of the State of Nature”), Locke contradicts this with his own standpoint that characterizes the state of nature as one of reason, tolerance, equality and freedom (Reader 35-6). He says that “all the power and jurisdiction is reciprocal, no one having more than another … [people] should be equal among one another without subordination or subjection … men being all the workmanship of one omnipotent and infinitely wise maker” (36).
Locke’s fundamental argument is that government is a responsibility of the people. He uses the above “state of nature,” which he says is regulated by the law of nature. Natural law, he explains, dictates that nobody has the right to harm himself or anyone else (nor take their property), “unless it be to do justice on an offender” (Reader 36).
Along the same lines, Locke expounds the discussion of property in Chapter V (“Of Property”), saying that property rights can only be claimed by someone who has put their own labor into their property (Reader 39). The reason that one has to toil to claim property is twofold: first, because “God … has given the world to men in common” (Reader 36). The other reason is that a man’s labor is part of his property, and so he has ownership over the work he puts forth. In an excerpt from the Reader, Locke says: “…yet every man has a property in his own person: this nobody has any right to but himself. The labour of his body and the work of his hands, we may say, are properly his” (39).
Locke says that people remain in the state of nature until “by their own consents they make themselves members of some politic society” (38). In Chapter VII (“Of Political or Civil Society”), he begins to talk about what he had (in previous works) called “social contract.” The idea goes back to Thomas Hobbes; both talk about why people might give up some of their freedoms. Hobbes felt that they would do so because, given the state of nature, they feared being violently killed by another (“Hobbes”). Hobbes used the idea of a social contract to support absolute monarchy—saying that men would give up their freedom for the sake of their own safety within a state—while Locke used the “social contract” notion to advocate the belief that the government has to reflect the will of the people. The concept as advocated by Locke is known as “popular sovereignty” and is defined as “a doctrine in political theory that government is created by and subject to the will of the people” (“popular sovereignty,” Webster’s). Locke says that the main reason people form a political society is to have an authority that makes laws, primarily to regulate and preserve property. He does not say that the political society should actually supersede the law of nature, but interpret and augment it (Reader 35).
In Chapter VIII (“On the Beginning of Political Societies”) and Chapter IX (“Of the Ends of Political Society and Government”), Locke asserts that man’s freedom, equality and independence is his natural right (41), and that man only gives up such liberties by voluntary consent. In Chapter IX, Locke addresses again the main reasons that people would give up certain liberties. He says that there are three things that the state of nature lacks: established, or positive, law that sets the standards of distinguishing right and wrong; an unbiased judge to decide cases based on said laws; and the power to implement punishments, also according to the laws (Reader 42-3). On Page 42—having explained that by agreeing to a common superior, men are bound to the decisions of majority rule—Locke draws the distinction between tacit and express consent. He reiterates that man enters into civil society, and subsequently its government, on his own free will. On consent, however, he says that express consent is an oath of allegiance, while tacit consent is in owning property (41-2).
As previously stated, Locke explained three reasons for having an actual government; Thomas Hobbes also believed in the idea of a common superior, but remember the vast disparity between both men’s idea of the state of nature. Given what Hobbes believes about the state of nature, he feels that an absolutist government is the lesser of the two evils, and so people turn over their rights to the common superior. Locke, by comparison, clearly is implying that people commit their rights to the government, but can restrict its authority.
In Chapter XIX (“Of the Dissolution of the Government”), Locke begins by saying, “He that will with any clearness speak of the dissolution of government ought in the first place to distinguish between the dissolution of the society and the dissolution of the government” (44). Throughout the piece, Locke has given the reader various aspects of a government that resembles the one drafted in the Constitution: the three main branches (executive, judicial and legislative, though he didn’t use all of those exact terms) and the system of checks and balances. Here in this chapter, he talks about how the government may cease to exist. From the above quotation, he makes it clear that the politic society is “the agreement which everyone has with the rest to incorporate and act as one body, and so be one distinct commonwealth” (44). He says that one thing that could happen is an overpowering attack from outside forces; the government will cease to exist upon the dissolution of the society. He also says that governments are dissolved internally, through a number of ways, such as “when the legislative or the prince (either of them) act contrary to their trust” (45). Locke feels rebellion is a right, and that the government can be overthrown or replaced by the people if it does not comply with its obligations. After all, he says, the purpose of entering civil society and having a government is “to be directed to no other end but the peace, safety, and public good of the people” (44).
Chapter IV: “Of Slavery”
Briefly, Chapter IV—though separated in the Reader from the rest of the extract—is similar to the rest of Locke’s writing. Although it is somewhat of a rehash of what he has previously discussed, this chapter is somewhat more specific. Because Locke believes that all men are made equal by the Creator, he takes a firm anti-slavery stance. Recall that he has argued against the subjection of people (born into subjection or otherwise); further, one major point of his is that a man’s property includes his own labor. He says that the only acceptable “condition” or form of slavery should be “between a lawful conqueror and a captive” during a state of war. The sentence that summarizes his thoughts best reads: “A liberty to follow my own will in all things where the rule prescribes not, not to be subject to the inconstant, uncertain, unknown, arbitrary will of another man, as freedom of nature is to be under no other restraint but the law of Nature” (47).
Question 2. “Discuss the selections from Paine and Jefferson that we covered in class. Include in your discussion who they were, their relationship to the independence of the thirteen colonies, the foundation of the United States and the modern world.”
Who Were Paine and Jefferson?
Born in 1737 in Norfolk, England, Thomas Paine was an author and a political-minded theorist. His father was a Quaker. In the early 1770s he lost his job as an excise officer after agitating for higher pay. Not long after, he met Benjamin Franklin, who was visiting England; impressed with him, Franklin gave him letters of recommendation and suggested to Paine that he move to America. Paine worked as a publicist in Philadelphia for a while, and gradually came to take an interest in the conflict between ever-at-odds America and England. Believing that the colonists had every right to separate from England—and fed up with the colonists’ reluctance to take action—he anonymously published “Common Sense” in January 1776.
Born in 1743 in Virginia, Thomas Jefferson is best known as the third President of the United States; he is also credited, however, as being the sole author of the Declaration of Independence. Borrowing from John Locke and Thomas Paine, it was written to explicitly describe the grievances against the English government and officially announce the separation of the colonies from Great Britain.
Relationship to the Independence of the 13 Colonies and the Foundation of the United States & Modern World
Paine and Jefferson shared a similar fervor when writing about why the colonies ought to divorce from the Crown. Paine knew his writing had to be simple, since not everyone was highly educated. Paine says that England has “[declared] war against the natural rights of all mankind” and that “the cause of America is, in a great measure, the cause of all mankind” (69). He doesn’t explain “natural rights,” knowing that even the average person can understand what Locke proposes: their basic rights. This is explained further throughout “Common Sense”; like Locke, Paine believes that all men are equals (73); he also supports man’s rights to liberty, life, property and rebellion against rulers who do not respect those rights (81). In Chapter 1, Paine’s discontent with the government is quickly made known; he says that “government even in its best state is but a necessary evil [and] in its worst state an intolerable one” (69). In the rest of the chapter, he says that one of the problems with the English government is its constitution. Two branches of the government, based on “ancient tyrannies,” do nothing for the state’s freedom and are so lopsidedly powerful that the third branch (the commons) can be overruled by either of them (70-1). Paine goes on to explain that America has to have a government, but England is so far from being a paragon of one that, as long as America is under their control, it can’t discern a superior one.
In Chapter 2 of “Common Sense,” Paine continues attacking monarchy, and saying that it goes against freedom because it divides people into kings and subjects. To further his point, he brings Scripture into play, in particular the Jews’ coming to Samuel, wanting a king. Samuel warns them against it, saying God should be the only sovereign; however, the Jews ignore him, get a king, and subsequently are punished by God (73). Paine’s selective citations take the Bible out of context—in fact, Paine was a Deist and opposed to organized religion—but he knew the colonists wouldn’t argue in the face of a good, Scripture-backed line of reasoning. In the remainder of the chapter he argues against hereditary succession and compares it to original sin (74-5).
In Chapter 3, Paine says, “I offer nothing more than simple facts, plain arguments, and common sense” (76). Here he confronts those reluctant to fight against England: “I have heard it asserted by some, that as America hath flourished under her former connection with Great Britain, that the same connection is necessary towards her future happiness … America would have flourished as much, and probably much more, had no European power anything to do with her” (76). He agrees that England sometimes does help or defend the colonies, but adds that it is only when it’s in their (England’s) best interest (77). Refuting the concept of a “parent country,” he replies that even wild animals don’t treat their children so cruelly. He also argues that being under Britain’s rule would cause America to get involved in European wars, hurting the trade and commerce. Paine talks about the “new world [as] an asylum” for those persecuted (77); on the next page he says the fact that the discovery of America preceded the Reformation was evidence of a divine plan to “open a sanctuary to the persecuted in future years” (78). He attributes the will of God: “…strong and natural proof that the authority of the one, over the other, was never the design of Heaven” (78).
Paine says that complete independence is the only route for the colonies, that they had outgrown any need for English domination and ought to have independence. He talks more about it as he describes how England and the Crown have mistreated people; becoming more ardent and intense, Paine says, “…then tell me, whether you can hereafter love, honor, and faithfully serve the power that hath carried fire and sword into your land” (78)? He challenges the hesitant, exclaiming that anyone who has lost a loved one or had property destroyed because of British cruelty—yet who would still shake an English hand and offer truce—is a “coward” and “sycophant” (78). England can’t conquer America, he declares; only by “delay and timidity” will the colonies be overpowered. He states that it is ridiculous for an island to rule a continent; it is simply impractical for Britain to rule a country so far away; and the satellite is bigger than the planet (79). He renews his call to arms and says, “for God’s sake, let us come to a final separation.” He continues to justify it, and sketches a possible new structure of government to ease people’s fears about the uncertain future without English rule: a congress of delegates, a president, charters and an assembly (81). He says that the only king in America will be God—and the highest law will be His law. “The law is king,” he says decisively. In the last section, Paine says: “O ye that love mankind! Ye that dare oppose, not only the tyranny, but the tyrant, stand forth! … Freedom hath been hunted round the globe. … Europe regards her like a stranger, and England hath given her warning to depart” (82).
In Chapter 4, Paine’s biggest point is using figures to prove that a navy can be constructed easily; he says they can build a large fleet and sell the ships later. He says trade is suffering and men are out of work, which justifies creating an army. The continent isn’t crowded, he argues, and there aren’t many spots to have to defend. Besides, he says, the British navy is overrated; they have a long list of ships, but a lot are either in disrepair or not readily available (83-5). Throughout the last few pages he encourages people, saying that we have enough cannons, gunpowder, iron, et cetera to fight, and more importantly, resolve and courage (85). In his last few paragraphs he advocates religious diversity and equal representation. Concluding, he states that other countries have to see America as an independent nation—not a rebel—so we ought to send a manifesto to the foreign courts telling them the necessity of what we’re doing, he says. It has to happen, he urges.
Paine’s “Common Sense” reached as many as 500,000 people, and was highly esteemed for the message it brought. It successfully promoted the mindset that complete independence was the only way. It proved to be instrumental in bringing about the Declaration of Independence and later, the American Revolution. Moving to the Declaration, this short document echoes much of John Locke’s philosophy. It talks about the “laws of Nature” and says that “all men are created equal” (95). People have the fundamental rights to “life, liberty and the pursuit of happiness.” It combines a common moral code, conceptual theory of government and account of all the ill-treatment wrought by England’s rule. Talking about how the monarchy has acted as an “absolute despotism,” it says that government by consent (of the governed) is more just. Backed by “divine providence,” it says, we dissolve all connections and sever all ties with Britain. Both it and Paine—with Enlightenment guiding them—have set the stage for the United States’ foundation. The ideals of individual rights, equality, religious liberty and dissolution of monarchy are epitomized through two writers who had words for the common man.
Question 3. “Select and discuss an article from the Converging Hemispheres Reader that we read after the first examination. Explain in your discussion the main ideas of the author, compare and contrast her/his ideas to those of the other authors we read after the first examination, how her/his ideas relate to the modern world, the moral/ethical questions her/his work raises and why you see that work as the most meaningful.”
I have decided to talk about (the excerpts from) “An Inquiry Into the Nature and Causes of the Wealth of Nations” by Adam Smith. Born in 1723 in Scotland, Adam Smith was a popular, revolutionary economist and philosopher. After completing his education, he was a professor of moral philosophy at the University of Glasgow for several years. He published “Wealth of Nations” in 1776.
Main Ideas of Author
In Chapter I of Book One, Smith makes one of his biggest points right off the bat: why the division of labor is so important. Economically, division of labor (and separation of duties) is concerned with the functions and roles involved in production; Smith sees it as the basic principle of free trade, which he advocates. His argument for the division of labor is illustrated in his example of 10 men, each of whom performs two or three of the 18 functions necessary to make a pin. By working together, they produce over 48,000 pins daily; if they had each worked alone, with the labor not distributed, they would have made about 200 (Reader 99-100). He thinks that many people are inclined to attribute the advances in productivity to machinery and new technology; however, he says, this is not the case. It is the specialization of labor that’s responsible for the progress; the division of labor—which is more basic than technology—is what enables that technology to come into being (102). For that reason, he argues, specialization is the key to material welfare. Necessity is the mother of invention, he insinuates, stressing the importance of inventors (102). In Chapter II, Smith explains that man’s propensity and “disposition to truck, barter and trade” (104) is what gives rise to the division of labor.
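The pin-factory arithmetic can be spelled out in a few lines. The figures below are the ones relayed in the essay (10 workers, 48,000 pins per day with divided labor, roughly 20 pins per day for a solitary worker); the variable names are illustrative, not anything from Smith:

```python
workers = 10
output_with_division = 48_000   # pins per day, labor divided among 18 operations
output_alone_each = 20          # pins per day for one worker doing every step

# Total output if each of the 10 worked alone: about 200 pins per day.
output_without_division = workers * output_alone_each

# With division of labor, each worker effectively accounts for 4,800 pins
# per day, a roughly 240-fold gain over working alone.
per_worker_divided = output_with_division / workers
productivity_gain = output_with_division / output_without_division
```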
In Book One, Chapter VII, Smith explains natural versus market price. The natural price of something is essentially what it costs to make or produce it: the cost of materials, the labor of the people making it and the time it takes to make (105). Market price, on the other hand, is different; it is based on the amount of similar items available to purchase and how badly people in the marketplace need or want them. He says, “The market price of every particular commodity is regulated by the proportion between the quantity that is actually brought to market, and the demand of those who are willing to pay the natural price” (105). If supply and demand are the same, the seller gets about the natural price (and the market price will be equal to it). If demand exceeds supply, the people most able to afford the products will be the ones who get them, driving the market price above the natural price (105-6). However, if supply exceeds demand (or if two people offer the same items for the same price), Smith adds, the merchant will lower his price to make his product more attractive to the customer. Because supply exceeded demand, he reduces his price, sometimes selling for less than it cost to make the goods; the market price falls below the natural price (106).
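Smith's three cases—market price equal to, above, or below the natural price—can be sketched as a toy model. The function and the demand/supply ratio are illustrative assumptions for the sake of the example, not a formula Smith gives:

```python
def market_price(natural_price: float, supply: int, demand: int) -> float:
    """Toy model of Smith's rule: scale the natural price by the
    ratio of demand to the quantity actually brought to market."""
    if supply <= 0:
        raise ValueError("no goods brought to market")
    return natural_price * (demand / supply)

# Equal supply and demand -> market price equals the natural price.
# Excess demand -> market price rises above it.
# Excess supply -> market price falls below it.
balanced = market_price(10.0, supply=100, demand=100)
scarce = market_price(10.0, supply=50, demand=100)
glutted = market_price(10.0, supply=100, demand=50)
```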
A recurring theme that Smith talks about, especially in Book Four, is the true wealth of nations. This is part of the case he makes against mercantilism—the primary topic he’s arguing against. Mercantilism was an economic system that was characteristic of the major trading countries from the 1500s to the 1700s. It was the predominant form in the European market at the time; part of its philosophy was that the wealth of a nation was based on how much gold and silver it had in its treasury. Mercantilism, rooted in monopolization, disallowed free trade and forced colonies to trade only with the mother country. How much gold and silver you have is not the real measure of wealth, he says. “It is not by the importation of gold and silver that the discovery of America has enriched Europe” (107); rather, it is labor (the production of and having the best possible workforce) that is the true wealth of a nation. Regarding the discovery of America, he says, “By opening a new and inexhaustible market to all the commodities of Europe, it gave occasion to new divisions of labor … the productive powers of labor were improved, and its produce increased in all the different countries in Europe, and together with it the real revenue and wealth of the inhabitants” (108).
The reason Smith was opposed to mercantilism was because of its monopolistic nature; it stifles the economy and restrains free trade. Smith advocates a sort of “laissez-faire capitalism” instead, saying that the impulse of self-interest would bring about the public wellbeing. He says, “Every individual is continually exerting himself to find out the most advantageous employment for whatever capital he can command. It is his own advantage, indeed, and not that of the society, which he has in view. But the study of his own advantage naturally, or rather necessarily, leads him to prefer that employment that is most advantageous to the society” (109). Smith is saying here that individual profit benefits society; those who can choose what they want to do in free labor help themselves—and in turn, society as well. This is what he calls the guidance of an “invisible hand” (110); those who work in their own best interest are serving the interests of the economy as well.
In Chapter IX of Book Four, Smith advises: don’t restrain any aspect of what people produce; let them produce as efficiently as they can. Smith says that some countries restrict manufacturing, thinking that they are doing themselves a favor by promoting agriculture, but what they are actually doing is the exact opposite of what they intend, because they are reducing the market for their own products.
Smith is referring to a group of French thinkers known as “physiocrats,” who chiefly believed in a hands-off, unautocratic governmental policy so that it didn’t hamper the operation of natural laws of economy. They felt that land was “the source of all wealth” (“Physiocrat,” Webster’s). Smith agrees on the point that the government shouldn’t interfere but not with the premise that only agriculture produces real wealth (112-3).
Comparison/Contrast to Some Other Authors
Adam Smith, also an Enlightenment thinker, is similar to John Locke in a couple of ways. Although Smith was much more involved in economic, trade-and-industry matters, some of their main points are comparable. Both men are heavily against the idea of a government taking too much involvement or control: while Locke is more concerned with maintaining equilibrium within the branches of government, Smith addresses why the government ought to keep its nose out of affairs relating to free trade and some areas of economics. While Smith doesn’t think much of aristocrats or of the monopolistic mercantile system, both he and Locke oppose monarchy. They agree in condemning the mistreatment and exploitation of the Americas. Both are also very heavy on individualism; Smith puts an economic spin on it by describing how a person helping himself helps society. Neither author has any qualms about free labor: a person has to have that right, and only he has the claim to his own labor. One person’s labor cannot be sold by another; everyone controls his own. The biggest disparity between the two arises over human nature: Locke believes in the “tabula rasa,” saying that a person comes into the world with no knowledge and gains it only from sense perceptions, while Smith clearly believes that the distinguishing feature of humans, compared to other animals, is an inherent instinct to trade and barter, which he regards as a sort of survival mechanism.
Alexis de Tocqueville and Adam Smith concur and clash at different points. To illustrate a couple: de Tocqueville questions, or at least has reservations about, liberty and capitalism, and leans against them. Smith’s frame of mind is the exact opposite; he is in favor of both (although not the monopolistic capitalism of today, which he would hardly recognize). However, the two men see eye-to-eye on church and state, favoring their separation, and on slavery, favoring its abolition.
Relation to Modern World; Moral/Ethical Questions Raised; Significance of Work
This work relates to the modern world in a few ways. Obviously this is the foundation of the capitalist system, although some say Smith would never recognize it in its current state. Written before (or at least near the onset of) the Industrial Revolution, Smith’s “Wealth of Nations” brought the doctrine of laissez-faire into the picture. Smith’s slant on gauging a nation’s wealth was a relatively new one, especially in a time when mercantilism was widespread. (In effect it was a kind of slavery; subordinate “slave” countries could only make what their “slave master” superior countries told them to.) Although some of his theories were rejected with the coming of the Industrial Revolution, a number of them are still seen in today’s world—such as supply and demand.
Although they’re much less frequent than back then, there are still monarchies in today’s world, and some countries’ kings still exert the same sovereign power as those of yesteryear did. Some governments wield such extreme power even if they are not monarchical in structure. Similarly, there have been some presidents and congresses in the U.S. who believed that they could improve the economy by controlling prices; some people think that a government that regulates prices is more moral because it tries to make things affordable for everyone. But it hardly produces the desired effect; look what happened when, years ago, the Soviet Union tried to regulate prices on basic commodities: it brought about scarcity and long lines instead. For example, if someone builds a car whose natural price is $10,000, and the government says “Sell it for $1,500,” the person is likely to refuse to do so. They won’t make cars at all, or they will eventually go broke. On the other hand, if you let prices move according to market demand, supply meets demand and things come out much more even. So what’s more moral: an artificially cheapened car for the sake of affordability (equal accessibility), or selling something for what it’s worth to promote the overall economy? The former seems like false morality. This is part of Smith’s argument, and it is even more meaningful because all of this fundamentally affects everyone: the job they can have, how much something costs, and how much of it there is. At the crux of the war for independence was an all-encompassing whirlwind of individualism and the convergence of man and machine. The war removed the sword of Damocles that was British despotism.
Bentley, Jerry H., and Herbert F. Ziegler. Traditions & Encounters: A Global Perspective on the Past. Boston: The McGraw-Hill Companies, 2003.
“Divine right.” Webster’s Ninth New Collegiate Dictionary. 9th ed. 1989.
“Hobbes, Thomas.” Columbia Encyclopedia. 6th ed. 2003.
Jefferson, Thomas. “Declaration of Independence.” 2001 Converging Hemispheres Reader. No ed. Boston: The McGraw-Hill Companies, Inc., 2001.
“Locke, John.” Columbia Encyclopedia. 6th ed. 2003.
Locke, John. “The Second Treatise of Government.” 2001 Converging Hemispheres Reader. No ed. Boston: The McGraw-Hill Companies, Inc., 2001.
Paine, Thomas. “Common Sense.” 2001 Converging Hemispheres Reader. No ed. Boston: The McGraw-Hill Companies, Inc., 2001.
“Physiocrat.” Webster’s Ninth New Collegiate Dictionary. 9th ed. 1989.
“Popular sovereignty.” Webster’s Ninth New Collegiate Dictionary. 9th ed. 1989.
Rushing, Dr. Fannie T. Class lecture. Humanities 240. Benedictine University, Lisle. 11 Nov. 2003 – 12 Dec. 2003.
Smith, Adam. “An Inquiry Into the Nature and Causes of the Wealth of Nations.” 2001 Converging Hemispheres Reader. No ed. Boston: The McGraw-Hill Companies, Inc., 2001.
“Smith, Adam.” Columbia Encyclopedia. 6th ed. 2003.
|
Rocky formations like these, called stromatolites, dominated coastal areas billions of years ago. Now they exist in only a few locations. The ones pictured here are near Shark Bay, Australia. (Photo by Virginia Edgcomb, Woods Hole Oceanographic Institution)
About a billion years before the dinosaurs became extinct, stromatolites roamed the Earth until they mysteriously disappeared. Well, not roamed exactly.
Stromatolites (“layered rocks”) are rocky structures made by photosynthetic cyanobacteria. The microbes secrete sticky compounds that bind together sediment grains, creating a mineral “microfabric” that accumulates in fine layers. Massive formations of stromatolites showed up along shorelines all over the world about 3.5 billion years ago. They were the earliest visible manifestation of life on Earth and dominated the scene for more than two billion years.
“They were one of the earliest examples of the intimate connection between biology—living things—and geology—the structure of the Earth itself,” said Joan Bernhard, a geobiologist at Woods Hole Oceanographic Institution (WHOI). “Then, around one billion years ago, their diversity and abundance began to take a nosedive.”
Why they declined has remained an unsolved puzzle. But in a study published May 28, 2013, in Proceedings of the National Academy of Sciences, WHOI researchers identified a possible reason for the crash by re-creating the evolutionary event in the laboratory.
One clue to the mystery was that the disappearance of stromatolites coincided with the sudden appearance in the fossil record of different formations called thrombolites (“clotted stones”). Thrombolites are also produced by microbes, but they are clumpy, rather than finely layered. Were the rise and fall of the two kinds of formations related? Did stromatolites become thrombolites, and if so, why?
“It’s one of the major questions in Earth history,” said WHOI microbial ecologist Virginia Edgcomb.
Bernhard and Edgcomb suspected that another organism may have played a decisive role: foraminifera, or “forams” as scientists often call them. Forams are protists, the kingdom that includes amoeba, ciliates, and other single-celled organisms. Forams are abundant in present-day ocean sediments, where they use fingerlike extensions called pseudopods to engulf prey and to explore their surroundings. In the process, their pseudopods churn the sediments on a microscopic scale.
Living stromatolites can still be found today, in limited and widely scattered locales, as if a few velociraptors still roamed in remote valleys. Bernhard, Edgcomb, and colleagues looked for foraminifera in living stromatolite and thrombolite formations from Highborne Cay in the Bahamas. Using microscopy and RNA-sequencing techniques, they found forams in both—and thrombolites were especially rich in the kinds of forams that were probably the first foraminifera to evolve on Earth.
“The timing of their appearance corresponds with the decline of layered stromatolites and the appearance of thrombolites in the fossil record,” said Bernhard. “That lends support to the idea that it could have been forams that drove their evolution.”
Next, Bernhard, Edgcomb, and WHOI postdoctoral investigator Anna McIntyre-Wressnig created a lab experiment that mimicked the conditions that might have prevailed a billion years ago.
They started with modern stromatolites and added the kinds of forams that are found in thrombolites. Some samples were also treated with a drug that prevented the forams from sending out pseudopods.
After about six months, they examined the formations. In drug-free samples, where forams could use pseudopods, the fine layers had changed to jumbled arrangements similar to those seen in thrombolites. In drugged samples, where the forams were immobilized, the fine layers remained intact.
Bernhard and her colleagues concluded that active foraminifera can profoundly alter the structure of stromatolites—perhaps enough to have spurred a major evolutionary change.
This research was funded by the National Science Foundation.
|
They have intriguing names such as hairy vetch, pearl millet and bird’s-foot trefoil. Collectively known as cover crops or green manure, they've been used for years to increase soil productivity by fixing atmospheric nitrogen into soil, making it available for cash crops, such as corn, and saving farmers money on input costs.
But fixing nitrogen is just one benefit of cover crops. They improve soil health because they add diversity to the microbial community, foster natural biological processes, boost organic matter, and increase soil porosity, which improves soil’s water holding capacity. What they capture is just as critical as what they add. Cover crops reduce soil erosion, sequester carbon and significantly mitigate nitrogen, phosphorous, and herbicide and pesticide losses to surface water.
Despite their myriad benefits, how to integrate them successfully into a production system still raises lots of questions for Missouri producers. Ranjith Udawatta, associate research professor in CAFNR’s soil, environmental and atmospheric sciences department and Center for Agroforestry, and his collaborators aim to answer those questions and demonstrate cover crops’ benefits to soil health, water quality, ecosystem services and farm profitability.
Udawatta’s team was recently awarded a $500,000 Conservation Innovation Grant from the USDA-NRCS to examine cover crop management practices in 12 watersheds in central and north central Missouri. Several partners are contributing to the project, bringing the total budget to $1.1 million. Associated Electric Cooperative Inc. provided the principal research site in Chariton County.
According to EPA water quality studies, 44 percent of rivers, 30 percent of estuaries and 64 percent of lakes in the Mississippi River Basin are impacted by agricultural pollution and contribute to hypoxia in the Gulf of Mexico. Udawatta’s previous watershed research at Greenley Memorial Research Center has shown that the bulk of the loss of nitrogen and phosphorus occurs during the fallow period, when the ground is bare.
Udawatta’s project will demonstrate the environmental benefits of adopting a production system focused on soil health and conservation practices. Researchers will measure reductions in soil erosion, and nitrogen, phosphorous, and herbicide and pesticide losses to surface water. They'll measure improvements in soil and water quality and biomass production, and aim to enhance wildlife diversity as a result of planting field edge buffers and cover crops, which provide potential habitat.
What’s the bottom line? They’ll calculate that as well, measuring changes in productivity and decreased input costs from adopting an integrated cover cropping system. Finally, the project will develop and implement a user-friendly tool to recommend best management for cover crop selection, nitrogen application and economic return.
Not much cover crop data exists for Missouri, and this study, along with several cover crop projects at CAFNR’s out-state research centers, will bridge those knowledge gaps, providing technical information for farmers about which cover crops perform best for their area and cropping system and what machinery is needed to get the job done. Through the data they collect, they'll be able to add Missouri to the Midwest Cover Crop Council's database, a decision tool that helps farmers choose the right cover crops for their production needs. They'll share their results at field days across the state.
The project involves several collaborators including Associated Electric Cooperative Inc., Syngenta, Pioneer, Cover Crop Solutions, Chariton County Soil and Water Conservation District, Missouri Department of Natural Resources (MDNR), Missouri Department of Conservation (MDC), USDA-ARS, Natural Resources Conservation Service (NRCS) and several participating farmers.
|
Ancient History, A Literature Approach has been designed for junior and senior high students but includes literature recommendations for younger students so they can be studying the same time period. Using a literature-based approach, Rea Berg covers ancient history, focusing primarily on Egypt, Greece, and Rome.
Unlike most Beautiful Feet guides, this one relies also on a textbook for more comprehensive coverage. Streams of Civilization, Volume I (Christian Liberty Press) serves as this background text. It provides a broad scope, encompassing time from creation up through the Middle Ages. This ensures that students at the higher grade levels are covering all they should for a solid historical foundation. The text itself comes with chapter tests, and there are also tests with answer keys on each major period within the guide.
You will need to supply a Bible and a Bible atlas for use with the course. The basic literature pack includes the following literature and resource selections for the course: Pyramid by Macaulay, Augustus Caesar's World by Foster, City by Macaulay, Tales of Ancient Egypt by Green, Pharaohs of Ancient Egypt by Payne, D'Aulaires Book of Greek Myths, and The Children's Homer by Colum.
The Jumbo Pack includes books needed for senior high level: all of the above plus Streams of Civilization volume 1, test booklet for Streams of Civilization, The Golden Goblet by McGraw, The Bronze Bow by Speare, Shakespeare's Julius Caesar (with reader's guide), Shakespeare's Antony and Cleopatra, The Ancient Greeks by Nardo, Caesar's Gallic Wars, and Quo Vadis? by Sienkiewicz. Other recommended literature is also listed in the guide.
Daily lesson plans tell you what parts of each book are to be read each day; provide study and discussion questions (not for all lessons); and describe writing, research, and notebook activities. Students maintain a notebook where they record results of research and reading, and where they paste and draw maps and pictures, illustrations, newspaper articles, and other information related to each topic they study. While this study is not as reflective of the Principle Approach as is the study done in Berg's Literature Approach to Medieval History, it does utilize that approach to learning to some extent. However, the study does not require a prior familiarity with the Principle Approach.
The study can easily be stretched to include younger children as you read through some of the books together.
|
Not much has changed in the tracking of brain waves since the invention of electroencephalography. Electrodes distributed over the scalp record only a crude picture. Anyone looking for something finer finds little alternative but to go under the skull and detect electrical activity there with thin needles. And if a single electrode covers 1 cm² of the surface, it is averaging the activity of some twelve million nerve cells.
A “technical report” in Nature Neuroscience from November 2011 now points to a way of getting much better results and thus creating a much more accurate electrical map of the brain. Brian Litt of the University of Pennsylvania, lead author of the publication, describes the benefits: the measurements “allow us to see large parts of the brain simultaneously. Such resolution has not been around until now.”
Only if the distance between measuring points is less than one millimetre can sharp images of the brain be derived, capturing space and time as the subject, for instance, speaks or gives commands to the muscles. The subject may happen to be epileptic, in which case the images record the source of electrical discharges. The new development is a flexible mat a few square centimetres in size, on which hundreds of flexible microelectrodes sit together with their corresponding circuits.
Cuddling the cortex
The implantable monitoring station emerged from a collaboration between neurologists and electrical engineers from the University of Illinois who no longer wanted to deal with rigid, thick boards. The extremely thin membrane, about 0.3 millimetres thick, has the advantage that it conforms to any surface irregularities of the tissue. It therefore fits well into the fissures and sulci of the cerebral cortex, where it can tap into the current flow. It also eliminates the risk of injury, since the electrodes only lie on the surface instead of protruding into the tissue as the needles used until now did.
The flexible circuit board on a polyimide base had already passed its first medical tests before this point: scientists used it to record the current flow in the heart and other muscles. The new technology can now, at the very least, be considered successful in animal brain studies. Placed on the surface of the visual cortex or in the fissure between the two hemispheres, it has made measurements in the cat brain that were previously not possible.
Epilepsy resembles cardiac arrhythmias
Sleep spindles, transient waves that occur during sleep, are apparently distributed over the surface of the brain and probably have something to do with the processing and consolidation of memories. The measurement of this type of activity in animals under anesthesia clearly shows that these outbreaks are, contrary to previous assumptions, confined to a small area and synchronised.
Picrotoxin triggers seizures in animals that resemble those of epileptics. What’s more, Brian Litt and his colleagues have also seen something amazing in this induced version: the waves of electrical discharges propagated in a spiral form across the surface of the brain. In vivo, such a thing had not previously been observed. The fine resolution of the sensors also revealed in this “pseudo epilepsy” that the cause of the seizures is not spread evenly but emerges from very small spots. The researchers were also surprised by how similar these measurement patterns were to those of patients with heart rhythm disturbances.
Neuro Care: Carbon in the head
The device “is not only a tool for researchers, but clearly intended for clinical application”, says developer John Rogers. The sensor mat is likely to make matters noticeably easier for neurosurgeons in the clinic attempting to locate affected areas in the brains of epilepsy patients. Perhaps, the team speculates, such unwanted electrical discharges could be stopped in the brain as well as in the heart. The implant would then send appropriate counter-rotating waves into the tissue in order to wipe out the eruptions. The next generation of devices should therefore not only record potential changes, but also induce them. Perhaps – such is the more distant vision for the future – problems with perception could be traced within the brain and then similarly corrected, using electronically controlled current pulses.
However, Rogers and Litt are not the only ones developing flexible circuit boards with ultra-high resolution for use in neurology. Very recently the Jülich Research Center announced the start of the European project Neuro Care. The partners involved here are focusing on the material carbon. It can be produced economically and is biologically inert. “Fewer problems with biofouling – that is to say, contamination – occur on the bio-interfaces,” Andreas Offenhäusser from FZ Jülich says about the material properties. According to the project plan, prototypes for electronic implants in the eye, ear and brain should emerge over the next three years. In about ten years, patients could then also benefit from them.
|
How Teachers and Counselors Can Help Students with Mental Health Disorders
School students today face so many challenges as they navigate their way through adolescence. For middle school and high school students suffering from mental and behavioral health conditions as well as substance abuse issues, a day at school can often seem unbearable.
But with the help of teachers and counselors working together, students struggling with these conditions can survive and thrive at school.
Mental Health and Behavioral Health Conditions
According to the Association for Children’s Mental Health (ACMH), one in five children and youth have a diagnosable serious emotional disturbance (SED), including a behavioral, a mental, or an emotional health disorder. One in ten suffer from a mental health condition so severe that it impacts how they function at school and home. More than half of students 14 years and older with emotional and behavioral conditions drop out of high school.
The following are mental and behavioral conditions that students often suffer from, as cited by the Mayo Clinic:
- Mood disorders (including depression and bipolar disorder)
- Anxiety (including generalized anxiety disorder, obsessive-compulsive disorder, and post-traumatic stress disorder)
- ADD and ADHD
- Eating disorders (anorexia, bulimia, binge-eating)
- Autism Spectrum Disorder (ASD)
Mental health issues affect kids socially and academically, making it difficult for them to endure a regular school day. Teachers and counselors play a key role in students’ academic success when they’re battling these health conditions.
Substance Abuse Among Students
Mental and behavioral health issues often go hand-in-hand with substance abuse. Students become depressed, anxious, or ill and turn to alcohol and drugs for relief, particularly if they don’t feel as though they have the support from their family and school staff. Approaching a teen who has turned to substance use to cope can be difficult, especially during this vulnerable time in their life.
When students have a mental health condition and drug or alcohol addiction at the same time, it is called a dual diagnosis or co-occurring condition. Teens suffering from dual diagnosis should receive integrated intervention, a method of care that provides therapy for the student’s mental illness as well as treatment for their addiction.
How Counseling Can Help Teens
Mental and behavioral health counselors and substance abuse counselors can help students with diagnosing conditions and addressing the challenges of being successful at school while dealing with issues like depression, anxiety, or addiction. They can serve as a support system, helping teens overcome these challenges.
How Counselors and Teachers Can Work Together
When teachers and counselors work together with families to implement strategies and coping mechanisms students can use in school, they’re more likely to make it successfully through the school day. Counselors and teachers can collaborate using these guidelines to help students with behavioral health and mental health conditions.
- They can provide input for students using an individualized education plan (IEP) or 504 plan.
- Effective plans are tailored to meet students’ individual needs, rather than being cookie-cutter templates designed for all students with behavioral health issues.
- Identify triggers at school and in the classroom. For example, a child with anxiety may escalate during lunch when they don’t have anyone to sit with.
- Counselors can work with teachers to help identify when a student is showing symptoms in the classroom.
- Counselors can train teachers to intervene when they recognize symptoms and help the student implement coping strategies.
- Counselors and teachers may offer students built-in breaks to avoid symptoms growing out of control.
It’s not easy for kids dealing with mental or behavioral health conditions to stay on top of homework, sports, and social activities. With the help of counselors and teachers working together to support students in school, they can be successful!
If you are concerned for a child you know who may be suffering from mental illness and/or addiction, contact Mazzitti and Sullivan to learn about our adolescent mental health and substance use services.
|
Alternating Current (AC) Electricity
by Ron Kurtus (revised 2 June 2009)
Alternating current (AC) electricity is the type of electricity commonly used in homes and businesses throughout the world. While direct current (DC) electricity flows in one direction through a wire, AC electricity alternates its direction in a back-and-forth motion. The direction alternates 50 or 60 times per second, depending on the electrical system of the country.
AC electricity is created by an AC electric generator, which determines the frequency. What is special about AC electricity is that the voltage can be readily changed, thus making it more suitable for long-distance transmission than DC electricity. But also, AC can employ capacitors and inductors in electronic circuitry, allowing for a wide range of applications.
Note: We usually say AC electricity instead of simply saying AC, since that is also the abbreviation for air conditioning. You need to be exact in science to avoid any misunderstandings.
Questions you may have include:
- What is the difference between AC and DC electricity?
- Why do we use AC instead of DC?
- What are advantages of AC electricity?
This lesson will answer those questions.
Difference between AC and DC electricity
Electrons have negative (−) electrical charges. Since opposite charges attract, they will move toward an area consisting of positive (+) charges. This movement is made easier in an electrical conductor, such as a metal wire.
Electrons move in one direction with DC electricity
With DC electricity, connecting a wire from the negative (−) terminal of a battery to the positive (+) terminal will cause the negatively charged electrons to rush through the wire toward the positively charged side. The same thing happens with a DC generator, where the motion of coiled wire through a magnetic field pushes electrons out of one terminal and attracts electrons to the other terminal.
Electrons alternate directions in AC electricity
With an AC generator, a slightly different configuration alternates the push and pull of each generator terminal. Thus the electricity in the wire moves in one direction for a short while and then reverses its direction when the generator armature is in a different position.
This illustration gives an idea of how the electrons move through a wire in AC electricity. Of course, both ends of the wire extend to the AC generator or source of power.
AC movement of electrons in wire
The charge at the ends of the wire alternates between negative (−) and positive (+). If the charge is negative (−), that pushes the negatively charged electrons away from that terminal. If the charge is positive (+), the electrons are attracted in that direction.
Rate of change
AC electricity alternates back-and-forth in direction 50 or 60 times per second, according to the electrical system in the country. This is called the frequency and is designated as either 50 hertz (50 Hz) or 60 hertz (60 Hz).
(See Worldwide AC Voltages and Frequencies for more information.)
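Since the frequency counts full back-and-forth cycles per second, the length of one cycle is simply its reciprocal. A minimal sketch in Python, using the 50 Hz and 60 Hz values mentioned above:

```python
def period_seconds(frequency_hz):
    """Time in seconds for one complete AC cycle at the given frequency."""
    return 1.0 / frequency_hz

# 50 Hz mains: one cycle every 0.02 s.
# 60 Hz mains: one cycle roughly every 0.0167 s.
print(period_seconds(50))
print(period_seconds(60))
```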
Many electrical devices—like light bulbs—only require that the electrons move. They don't care if the electrons flow through the wire or simply move back-and-forth. Thus a light bulb can be used with either AC or DC electricity.
AC is periodic motion
The regular back-and-forth motion of the electrons in a wire when powered by AC electricity is periodic motion, similar to that of a pendulum.
Because of this periodic motion of the electrons, the voltage and current follow a sine waveform, alternating between positive (+) and negative (−), as measured with a voltmeter or multimeter.
Waveform varies between positive and negative as it travels in time
The rate that the voltage or current peaks pass a given point is the frequency of the AC electricity.
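The sine waveform described above can be sketched numerically. This is an illustrative model rather than measured data; the 170 V peak is an assumed value (roughly the peak of a 120 V RMS household supply), and the 60 Hz frequency is one of the two standard values from this lesson:

```python
import math

def ac_voltage(t, v_peak=170.0, freq=60.0):
    """Instantaneous AC voltage at time t (seconds), following a sine waveform.

    v_peak and freq are illustrative assumed values, not values from a
    specific measurement.
    """
    return v_peak * math.sin(2 * math.pi * freq * t)

# One full cycle at 60 Hz lasts 1/60 s. A quarter-cycle in, the voltage is
# at its positive peak; at three quarters, it is at its negative peak.
period = 1 / 60.0
print(ac_voltage(0))                  # starts at zero
print(ac_voltage(period / 4))         # near +170 (positive peak)
print(ac_voltage(3 * period / 4))     # near -170 (negative peak)
```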
Advantages of AC electricity
There are distinct advantages of AC over DC electricity. The ability to readily transform voltages is the main reason we use AC instead of DC in our homes.
The major advantage that AC electricity has over DC electricity is that AC voltages can be readily transformed to higher or lower voltage levels, while it is difficult to do that with DC voltages.
Since high voltages are more efficient for sending electricity over great distances, AC electricity has an advantage over DC: electricity can be transmitted at high voltage from the power station and then easily reduced to a safer voltage for use in the house.
Changing voltages is done by the use of a transformer. This device uses properties of AC electromagnets to change the voltages.
(See AC Transformers for more information.)
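As a rough sketch of why "transforming" a voltage is so easy with AC: an ideal transformer scales the voltage by the ratio of turns in its secondary and primary coils. The 7,200 V line and the turns counts below are made-up illustrative numbers, not values from this lesson:

```python
def ideal_transformer_voltage(v_primary, n_primary, n_secondary):
    """Secondary voltage of an ideal (lossless) transformer:
    Vs = Vp * (Ns / Np). Real transformers lose a few percent to heat."""
    return v_primary * (n_secondary / n_primary)

# Step a hypothetical 7,200 V distribution line down to household level
# with a 600-turn primary and a 10-turn secondary:
print(ideal_transformer_voltage(7200, 600, 10))  # about 120 V
```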
AC electricity also allows for the use of a capacitor and inductor within an electrical or electronic circuit. These devices can affect the way the alternating current passes through a circuit. They are only effective with AC electricity.
A combination of a capacitor, inductor and resistor is used as a tuner in radios and televisions. Without those devices, tuning to different stations would be very difficult.
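The tuner mentioned above works because an inductor-capacitor pair has a resonant frequency, f = 1 / (2π√(LC)); adjusting the capacitor moves the resonance onto the station you want. The component values below are hypothetical, chosen only to land in the AM broadcast range:

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency in Hz of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical tuner values: 240 microhenries with 440 picofarads
# resonates near 490 kHz, in the AM broadcast range.
f = resonant_frequency(240e-6, 440e-12)
print(round(f / 1000), "kHz")
```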
We commonly use AC electricity to power our television, lights and computers. In AC electricity, the current alternates in direction. AC electricity was proven to be better for supplying electricity than DC, primarily because the voltages can be transformed. AC also allows for other devices to be used, opening a wide range of applications.
Resources and references
Elements of AC Electricity - Basic electronics tutorial site
Alternating Current - Overview of AC
This lesson selected by the SciLinks program, a service of National Science Teachers Association.
|
Kids use their cutting skills to build a cool snowman on this prekindergarten worksheet. This worksheet helps build fine motor skills and shape recognition.
Here's some colorful writing practice: kids trace each lowercase letter multiple times in different colors to create rainbow letters!
These chickens need your child's help to cross the road. Kids will use their fine motor skills to trace lines and finish the crosswalks for the hens.
By tracing the horizontal lines on this prekindergarten writing worksheet, kids strengthen the fine motor skills needed to form letters such as "E" and "H."
Help your preschooler practice her color recognition along with her fine motor skills by learning all about the color orange!
Kids use their cutting skills to build a funky robot on this prekindergarten worksheet. This worksheet helps build fine motor skills and recognition of shapes.
Looking for a worksheet to help your child with uppercase and lowercase letters? This printable will help him with the letter R.
What do rain, rainbows and rabbits have to do with each other? They are all words that start with R!
Rainbow begins with the letter R! Color in the flashcards featuring the letter R then cut them out and use them to play spelling and memory games.
R is for rainbow! Color in the letter R and its matching flashcard featuring a picture of a rainbow and use them to practice memorizing the alphabet.
|
Scientists are busy characterizing and finding applications for graphene, a young class of carbon materials, first discovered in 1859, with research ramping up in the 1950s. Simply put, graphene is a two-dimensional crystal best depicted as a honeycomb that is only one atom thick! Researchers at the Fluid Interface Reactions, Structures and Transport (FIRST) Center have discovered how protons selectively cross this material. This new understanding lays the foundation for innovative selective membranes for batteries, fuel cells, and other applications.
An atomic layer of graphene can be produced in large yard-sized sheets using chemical vapor deposition–a process in which a source of carbon is supplied in gaseous form over a heated piece of metal and graphene is formed by the subsequent chemical reactions. Graphene is of great importance because of its unusual electronic properties made possible by its unique geometry. It can conduct heat and electricity very efficiently. Moreover, it’s stretchable and impermeable. Flexible, mechanically stable graphene membranes can be made with tunable or adjustable nanosized pores. The use of single-layer graphene could potentially be better than the state-of-the-art polymer-based filtration membranes used for water desalination, fuel cells, and other applications.
But what can and cannot go through the single-layer of graphene before any modifications have been made? Can protons pass? As part of her FIRST-funded Ph.D. studies at Northwestern University, Jennifer Achtyl, her adviser Franz Geiger, and their collaborators asked these exact questions. And to produce a definitive answer, they used every sophisticated experimental and computational tool at their disposal.
They placed a well-characterized layer of graphene on a surface, immersed the composite in liquid, and cycled the pH, effectively increasing and decreasing the concentration of protons with each cycle. In theory, if graphene was impermeable to protons, these changes in pH would not affect the surface’s charge. But this was not the case. In fact, the charge response was nearly identical with and without the graphene “shield.”
“This is where the fun started,” said Achtyl. “We had this unexpected result–the graphene appeared to be permeable to protons–but now we needed to figure out why and how this was possible. Thankfully, because of the FIRST Center, we had the expertise needed to tackle these questions right at our fingertips. It was the comprehensive and cohesive team effort across four FIRST partner institutions that developed this initial finding into a truly awesome story.”
Achtyl and her team investigated pinhole defects that dot the surface of graphene sheets to see if the protons were leaking through. Using scanning electron microscopy followed by rigorous statistical calculations, they concluded that the number of these larger gaps, known as pinholes (wide enough for a single strand of human hair to fit through), was quite small and could not have accounted for that much proton transport.
In addition to the experimental work, the team performed complex calculations to quantify energy barriers for proton transport through the ideal graphene surface, as well as through various surface defect sites. Pathways that have too high of a calculated energy barrier are unlikely. They determined that a plausible pathway is through much smaller atomic-scale defect sites, where carbon atoms are missing. Calculating energy barriers for proton transfer through a defect site with one to four missing carbons indicate that a site with four vacancies would provide a favorable path (see figure). The existence of these rare and extremely small defects was confirmed by scanning transmission electron microscopy.
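The reasoning about energy barriers can be illustrated with an Arrhenius-type rate estimate: transport rates fall off exponentially with barrier height, so even a modest difference between barriers makes one pathway dominate. The barrier values below are hypothetical placeholders, not the ones computed in the study.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def relative_rate(barrier_ev, temp_k=300.0):
    """Relative Arrhenius rate exp(-Ea / kT) for crossing a barrier (eV)."""
    return math.exp(-barrier_ev / (K_B * temp_k))

# Hypothetical barriers: a favorable defect site vs. a less favorable one.
low, high = relative_rate(0.7), relative_rate(1.2)
```

At room temperature, a barrier only half an electron-volt lower translates into a rate many orders of magnitude higher, which is why the calculations could single out a plausible pathway.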
By combining sensitive surface-probing techniques with computer simulations the scientists provided a mechanistic answer as to how a proton may pass through single-layer graphene. They identified low-energy barriers for water-assisted proton transfer through one type of atomic defect, known as hydroxyl-terminated atomic defects, and high barriers for a different defect, known as an oxygen-terminated defect. Understanding the mechanisms at play is an important step in preparing zero-crossover proton-selective membranes. Further research will continue to lead to a deeper understanding of the physical and chemical phenomena associated with the 21st century superstar material–graphene.
This work was supported as part of the Fluid Interface Reactions, Structures and Transport (FIRST) Center, an Energy Frontier Research Center, funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences.
JL Achtyl, RR Unocic, L Xu, Y Cai, M Raju, W Zhang, RL Sacci, IV Vlassiouk, PF Fulvio, P Ganesh, DJ Wesolowski, S Dai, ACT van Duin, M Neurock, and FM Geiger. 2015. “Aqueous Proton Transfer across Single-Layer Graphene.” Nature Communications 6:6539. DOI: 10.1038/ncomms7539
|
How did the universe and the solar system evolve? What happens inside stars? How long will our sun and the Earth still exist?
The exhibition takes visitors on a voyage through space and time to the origins of the solar system. It allows insights into how, where and when matter emerged, which, ultimately, was the prerequisite for the evolution of the Earth and life on it.
A gigantic explosion – the Big Bang – 13.7 billion years ago led to the formation of elementary particles and then atoms, the basic components of matter. Ever since, the universe has been expanding. The first stars and galaxies emerged as matter clumped together.
Meteorites are ancient messengers of the solar system, 4.5 to 4.6 billion years old. They witnessed the formation of our planets. Their composition allows researchers to draw conclusions on the history of our planetary system. The Museum für Naturkunde owns the largest collection of meteorites in Germany. It is the focus of major research activities at the Museum. The exhibition shows a selection of these valuable pieces.
Adults 8,00 EUR / Children, students & reduced: 5,00 EUR;
Groups (+10 people): Adults 5,00 EUR p.p., reduced 2,00 EUR p.p.
The exhibition venue on google maps:
|
The following is an explanation of spring and neap tides in relation to lunar and solar cycles.
Since antiquity, people have noticed that oceans exhibit a much greater tidal range around the time of the full Moon and new Moon. This is when the Moon and Sun are either together in the sky or are on opposite sides of the heavens. Higher tides occur during these Moon phases because the Sun also exerts a gravitational pull on our oceans, although it is only 46 percent as strong as the Moon's.
When the gravitational effects of the Sun and the Moon combine, we get spring tides, which have nothing to do with the season of spring. The term refers to the action of the seas springing out and then springing back. These are times of high high tides and low low tides.
A week later, during either of the two quarter Moon phases, when the Sun and Moon are at right angles to each other and their tidal influences partially cancel each other out, neap tides occur, and the tidal range is minimal. In fact, because the oceans take a bit of time to catch up to the geometry of the Moon, spring and neap tides usually occur about a day after the respective lunar cycles.
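The "46 percent" figure above follows from the fact that tide-raising force scales with a body's mass divided by the cube of its distance. A quick check, using standard mass and mean-distance values:

```python
# Tidal (tide-raising) force scales as M / d**3.
M_SUN, D_SUN = 1.989e30, 1.496e11    # kg, mean Earth-Sun distance in m
M_MOON, D_MOON = 7.342e22, 3.844e8   # kg, mean Earth-Moon distance in m

def solar_to_lunar_tidal_ratio():
    """Ratio of the Sun's tide-raising force on Earth to the Moon's."""
    return (M_SUN / D_SUN**3) / (M_MOON / D_MOON**3)
```

The ratio works out to roughly 0.46: despite its enormous mass, the Sun is so much farther away that its tidal pull is less than half the Moon's.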
Now morn has come,
And with the morn the punctual tide again.
–Susan Coolidge, American writer (1835-1905)
|
How They Do It...
1. Idea Sketch
Employed at a personal level to quickly externalize thoughts using simple line-work. Also known as Thumbnail, Thinking or Napkin Sketch.
2. Study Sketch
Used to investigate appearance, proportion and scale in greater detail than an Idea Sketch. Often supported by the loose application of tone/color.
3. Referential Sketch
Used to record images of products, objects, living creatures or any relevant observations for future reference or as a metaphor.
4. Memory Sketch
Helps expand thoughts during the design process using mind maps, notes and annotations.
5. Coded Sketch
Informal coded representation that categorises information to demonstrate an underlying principle or scheme.
6. Information Sketch
Quickly and effectively communicates features through the use of annotation and supporting graphics. Also known as Explanatory or Talking Sketch.
7. Sketch rendering
Clearly defined proposal produced by controlled sketching and use of color/tone to enhance detail and realism. Also known as First Concept.
8. Prescriptive Sketch
Informal sketch for the exploration of technical details such as mechanisms, manufacturing, materials and dimensions.
9. Scenario & Storyboard
Describes interactions between user and product, sometimes in an appropriate context.
10. Layout Rendering
Defines the product proposal as a third-angle orthographic projection with precise line and color.
11. Presentation rendering
Contains a high level of realism to fully define product appearance as a perspective view. Particularly useful for decision making by non-designers.
12. Diagram
Schematic representation of the operating principle or relationship between components. Also known as a Schematic or Diagrammatic Drawing.
13. Perspective drawing
Descriptive three-quarter view produced using a perspective drawing technique. Created using line only without the application of color or tone.
14. Gen. Arrangement Drawing
Exterior view of all components using line only, with sufficient detail to produce an Appearance Model if required. Usually drawn in third-angle projection.
15. Detail Drawing
Contains detail of components for the manufacturing product. Also known as Technical, Production or Construction Drawing.
16. Technical Illustration
Communicates technical detail with a high degree of realism that is sometimes supported with symbols. Includes Exploded views.
17. Sketch Model
Informal, relatively low-definition 3D model that captures the key characteristics of form. Also known as a Foam Model or 3D Sketch.
18. Design Development Model
Simple mock-up used to explore and visualize the relationships between components, cavities, interfaces, and structures. Usually produced using CAD.
19. Functional Model
Captures the key functional features and underlying operating principles. Has limited or no association with the product's final appearance.
20. Operational Model
Communicates how the product is used, with the potential for ergonomic evaluation.
21. Appearance Model
Accurate physical representation of product appearance. Also known as a Block Model as it tends not to contain any working parts.
22. Assembly Model
Enables the evaluation and development of the methods and tools required to assemble products components.
23. Production Model
Used to evaluate and develop the location and fit of individual components and sub-assemblies.
24. Service Model
Supports the development and demonstration of how a product is serviced and maintained.
25. Experimental Prototype
Refined prototype that accurately models physical components to enable the collection of performance data for further development.
26. Alpha Prototype
Brings together key elements of appearance and function for the first time. Uses or simulates production materials.
27. Beta Prototype
A refined evolution of an Alpha Prototype used to evaluate ongoing design changes in preparation for the final specification of all components.
28. Systems Prototype
Integrates components specified for the production item without consideration of appearance. Used to evaluate electronic and mechanical performance.
29. Final Hardware Prototype
Developed from the Systems Prototype as a final representation of the product's functional elements.
30. Off-Tool Component
Component produced using the tooling and materials intended for production, enabling the evaluation of the material properties and appearance of components.
31. Appearance Prototype
Highly detailed representation that combines functionality with exact product appearance. Uses or simulates production materials.
32. Pre-Production Prototype
Final prototype produced using production components. Manufactured in small volumes for testing prior to full-scale production.
Dr. Mark Evans, Loughborough Design School, UK, with support from IDSA
|
河口域塩性湿地に生息する稀少カニ類シオマネキの生息場所利用 Habitat Use by the Rare Fiddler Crab Uca arcuata Living in an Estuarine Salt Marsh
A field study was conducted on the spatial distribution and burrow-location movements of the fiddler crab Uca arcuata in the estuary of the Katsuura River, Tokushima Prefecture, western Japan. The crabs inhabited the upper intertidal zone, from reed (Phragmites australis) marsh to bare mud flat. Large crabs were found widely throughout the distributional range, whereas small crabs and ovigerous females tended to occur in the upper area of the habitat, near the lower edge of the reed marsh. In the non-breeding season, both males and females foraged at similar frequency in all areas of the habitat. In the breeding season, however, males foraged less frequently than females and exhibited waving displays more frequently in the upper non-vegetated area than in other areas. A 28-day tracking survey of marked crabs revealed that males held the same burrows for 4.7 days on average (maximum 16 days) and females for 5.5 days on average (maximum 14 days). The burrow-location movement distances of each crab were less than 4 m.
日本ベントス学会誌 = Japanese journal of benthology 61, 8-15, 2006-07-28
JAPANESE ASSOCIATION OF BENTHOLOGY
|
We differentiate between peer-to-peer (p2p) techniques and p2p systems. The former refers to a set of techniques for building self-organizing distributed systems. These techniques are often useful in building datacenter-scale applications, including datacenter-scale applications that are hosted in the cloud. For instance, Amazon's Dynamo datastore relies on a structured peer-to-peer overlay, as do several other key-value stores. People often use "P2P" to refer to systems that use these techniques to organize large numbers of cooperating end hosts (peers) such as personal computers and set-top boxes. In these systems, most peers necessarily communicate using the Internet, rather than a local area network (LAN). To date, the most successful peer-to-peer applications have been file sharing (e.g., Napster, BitTorrent, eDonkey), communication (e.g., Skype), and embarrassingly parallel computations, such as the SETI@home and BOINC projects.

Limitations
The main appeal of p2p systems is that their resources are often "free", coming from individuals who volunteer their machines' CPUs, storage, and bandwidth. Offsetting this, we see two key limitations of p2p systems. First, p2p systems lack a centralized administrative entity that owns and controls the peer resources. This makes it hard to ensure high levels of availability and performance. Users are free to disable the peer-to-peer application or reboot their machine, so a great degree of redundancy is required. This makes p2p systems a poor fit for applications requiring reliability, such as web hosting, or other sorts of server applications. This decentralized control also limits trust. Users can inspect the memory and storage of a running application, meaning that applications cannot safely store confidential information unencrypted on peers. Nor can the application developer count on any particular quantity of resources being dedicated on a machine, or on any particular reliability of storage. These obstacles have made it difficult to monetize p2p services. It should come as no surprise that, so far, the most successful p2p applications have been free, with Skype being a notable exception. Second, the connectivity between any two peers in the wide area is two or three orders of magnitude lower than between two nodes in a datacenter. Residential connectivity in the US is typically 1 Mbps or less, while in a datacenter a node can often push up to 1 Gbps. This makes p2p systems inappropriate for data-intensive applications (e.g., data mining, indexing, search), which account for a large chunk of the workload in today's datacenters.

Opportunities
Recently, there have been promising efforts to address some of the limitations of p2p systems by building hybrid systems. The most popular examples are data delivery systems, such as Pando and Abcast, where p2p systems are complemented by traditional Content Distribution Networks (CDNs). CDNs are used to ensure availability and performance when the data is not found at peers, and/or when peers do not have enough aggregate bandwidth to sustain the demand. In another development, cable operators and video distributors have started to experiment with turning set-top boxes into peers. The advantage of set-top boxes is that, unlike personal computers, they are always on, and they can be managed remotely much more easily. Examples in this category are Vudu and the European NanoDataCenter effort. However, to date, the applications of choice in the context of these efforts have still remained file sharing and video delivery. Datacenter clouds and p2p systems are not substitutes for each other. Widely distributed peers may have more aggregate resources, but they lack the reliability and high interconnection bandwidth offered by datacenters. As a result, cloud hosting and p2p systems complement each other. We expect that in the future more and more applications will span both the cloud and the edge. Examples of such applications are:
- Data and video delivery. For highly popular content, p2p distribution can eliminate the network bottlenecks by pushing the distribution at the edge. As an example, consider a live event such as the presidential inauguration. With traditional CDNs, every viewer on a local area network would receive an independent stream, which could lead to choking the incoming link. With p2p, only one viewer on the network needs to receive the stream; the stream can be then redistributed to other viewers using p2p techniques.
- Distributed applications that require a high level of interactivity, such as massively multiplayer games, video conferences, and IP telephony. To minimize latency, in these applications peers communicate with each other directly, rather than through a central server.
- Applications that require massive computation per user, such as video editing and real-time translation. Such applications may take advantage of the vast computational resources of the user's machine. Today, virtually every notebook and personal computer has a multi-core processor that is mostly unused. Proposals such as Google's Native Client aim to tap into these resources.
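The structured-overlay techniques mentioned at the start of this section (the kind of consistent hashing used by Dynamo-style key-value stores) can be sketched briefly. This is a minimal illustration, not Dynamo's actual implementation; the node and key names are made up.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Map a string to a point on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Minimal consistent-hash ring: each key is owned by the first node
    clockwise from the key's hash, so adding a node only moves a small
    fraction of the keys."""

    def __init__(self, nodes=()):
        self._ring = sorted((_hash(n), n) for n in nodes)

    def add(self, node: str) -> None:
        bisect.insort(self._ring, (_hash(node), node))

    def lookup(self, key: str) -> str:
        if not self._ring:
            raise KeyError("empty ring")
        i = bisect.bisect(self._ring, (_hash(key),)) % len(self._ring)
        return self._ring[i][1]
```

When a node joins, the only keys that change owner are those whose hashes fall in the new node's arc of the ring, which is what lets these overlays self-organize as peers come and go.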
|
Autism affects around one in 68 children in the United States, so autism spectrum disorder (ASD) has become a major concern. Various government and non-profit organizations have put a lot of effort into spreading awareness and informing the public about these conditions. What is often misunderstood, however, is that autism is more than just one disorder. There are many variants of symptoms and causes that can appear in different combinations. In addition to the countless fact sheets, several videos have cropped up on YouTube from the perspective of an autistic person, giving a first-hand glimpse into their everyday struggles.
What is important to understand about autism is that it is not a single disorder, but a spectrum, with varying degrees of symptoms. Within it lies classic autistic disorder, which causes problems with social interaction, communication, and interests in children younger than three years old. However, the umbrella also includes Asperger's syndrome, where language is not a problem (in fact, it can often be above average) but social problems and a limited scope of interests prevail. There are also several other lesser-known variants, such as childhood disintegrative disorder or Rett syndrome, where children begin developing normally but start losing their communication or social abilities later on. Everything else on the spectrum falls under pervasive developmental disorder (PDD), or atypical autism.
The symptoms generally fall into three categories. Firstly, social interactions and relationships are affected, with difficulty maintaining eye contact or adequate facial expressions. This can also include a lack of empathy or a failure to establish strong friendships with others of the same age. Secondly, communication skills can be impaired as well, ranging from struggles to initiate and maintain conversation all the way to an inability to speak. Finally, the last category deals with limited interests or activities. Autistic children and adults often fall into a repetitive routine and a preoccupation with the same topic.
These three categories already paint a clear picture of autism as more than just one disorder. However, there are also a number of secondary issues that can crop up. For example, many sufferers also have unusual sensory perceptions; a light touch could be felt as painful, while deep pressure could seem calming. Colors and sounds, too, can seem very jarring. In some cases, a combination of these factors can lead to sensory overload, where the person becomes too overwhelmed to focus or think clearly.
The causes of autism are not yet fully understood. A lot of research seems to indicate the disorder is caused by genetic predisposition, and susceptibility could be passed from parents to children. Environmental factors are also a potential culprit. The anti-vaccination movement has claimed a link between vaccination and the disorder, though this theory has been heavily contested. The latest findings also suggest city pollution could play a role. In either case, external factors could be negatively affecting the early development of the brain, resulting in the disorder.
Merely reading medical information might make it difficult to really imagine what living with this condition is like. Carly’s Café, a video released in May 2012, puts the viewer in the shoes of an autistic person thanks to the erratic filming style and audio-visual effects. However, since there is more than one disorder that falls into the autistic spectrum, it is worthwhile to look a bit deeper. Digging through YouTube reveals more first-hand videos, giving a glimpse into many examples and everyday struggles experienced by people who have autism.
By Jakub Kasztalski
|
This Demonstration shows Newton's method of drawing the cissoid of Diocles. The length of and the point are fixed. As you drag on the straight line , is kept perpendicular to . The midpoint of traces out the cissoid.
According to , the method uses two line segments of equal length at right angles. If they are moved so that one line always passes through a fixed point and the end of the other line segment slides along a straight line, then the mid-point of the sliding line segment traces out a cissoid of Diocles.
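The labels of the Demonstration's points were lost in extraction, so as a stand-in, here is a short numerical check of the curve the midpoint traces. It assumes the standard cissoid of Diocles with equation x^3 = y^2 (2a - x) and its usual rational parametrization; the construction itself is not reproduced here.

```python
def cissoid_point(t, a=1.0):
    """Point on the cissoid of Diocles x^3 = y^2 (2a - x),
    using the standard rational parametrization."""
    x = 2 * a * t**2 / (1 + t**2)
    y = 2 * a * t**3 / (1 + t**2)
    return x, y
```

Every generated point satisfies the cissoid equation, and as t grows the points approach the asymptote x = 2a, which is the straight line the sliding segment's endpoint follows in Newton's construction.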
|
|Cornsnake Morph Guide ®||Genetics Tutorial||Charles Pritzel|
Here we only show a handful, but in real life, cornsnakes have tens of thousands of genes. Below are some different genetic codes of cornsnakes.
What cornsnakes have in common is that the order of their genes is the same. Each place in the order is called a locus. In our example, we have the Triangle locus, followed by the Square locus, then the Star locus, the Circle, Heart, etc.
Why are all corns similar but not exactly the same? Notice that the loci stay the same but the genes vary from snake to snake.
Each locus holds a gene. Each gene has its own function in the cornsnake. For example, say the Circle locus is where you find the gene that produces black pigment.
As you know, not all cornsnakes have black pigment. The reason is that some of them have a defective copy of the gene normally found at the Circle locus.
Any different genes that can be found at the same locus are called alleles. (Allele is pronounced "uh-leel.") The allele most commonly found at a locus is called normal or wild-type.
|
What is a salt dome?
Salt domes are massive underground salt deposits. Mushroom-shaped and thousands of feet thick, they form where shallow seas once stood. They built up over tens of thousands of years as saltwater flooded these former marine basins, then evaporated.
With time, the salt was buried by thick layers of sediment. The sediment became rock, which put pressure on the salt, causing it to become plastic and puttylike. Like lava in a lava lamp, the salt pushes up toward the surface, shoving aside denser rock and creating long, finger-like columns known as diapirs. As the salt displaces rock strata, pockets form that collect petroleum. These have been responsible for an important portion of US domestic oil production.
About 500 salt domes exist in the U.S., all located near or in the Gulf of Mexico, where an ancient sea stood until the Jurassic age. It left behind a layer of salt called the Louann Salt that stretches from East Texas to the Florida Panhandle and as far north as southern Arkansas.
In addition to trapping petroleum, salt domes have been eyed for another energy-related use; some geologists believe they could be safe, stable places to store nuclear waste. The idea is to excavate salt deep inside the dome, creating a natural vault where radioactive material could be stored. Some petroleum-related waste is already stored in similar chambers. It is argued that salt's plasticity could help make these spaces more secure than similar ones excavated from denser rock: if radioactive material caused a fissure in a vault's wall, the salt could close the rupture on its own, while other types of rock could not.
Proponents of storing waste in salt domes argue that they can withstand geologic activity and be impervious to water. But drilling near salt domes may cause them to become unstable and salt caverns used to store petroleum have leaked into surrounding sand.
|
Plants are factories that manufacture yield from light and carbon dioxide—but parts of this complex process, called photosynthesis, are hindered by a lack of raw materials and machinery. To optimize production, scientists from the University of Essex have resolved two major photosynthetic bottlenecks to boost plant productivity by 27 percent in real-world field conditions, according to a new study published in Nature Plants. This is the third breakthrough for the research project Realizing Increased Photosynthetic Efficiency (RIPE); however, this photosynthetic hack has also been shown to conserve water.
“Like a factory line, plants are only as fast as their slowest machines,” said Patricia Lopez-Calcagno, a postdoctoral researcher at Essex, who led this work for the RIPE project. “We have identified some steps that are slower, and what we’re doing is enabling these plants to build more machines to speed up these slower steps in photosynthesis.”
The RIPE project is an international effort led by the University of Illinois to develop more productive crops by improving photosynthesis—the natural, sunlight-powered process that all plants use to fix carbon dioxide into sugars that fuel growth, development, and ultimately yield. RIPE is supported by the Bill & Melinda Gates Foundation, the U.S. Foundation for Food and Agriculture Research (FFAR), and the U.K. Government’s Department for International Development (DFID).
A factory’s productivity decreases when supplies, transportation channels, and reliable machinery are limited. To find out what limits photosynthesis, researchers have modeled each of the 170 steps of this process to identify how plants could manufacture sugars more efficiently.
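The factory-line analogy can be made concrete with a toy model: a serial chain of steps runs no faster than its slowest step, so only speeding up the bottleneck raises overall throughput. The rates below are made-up numbers, not measured photosynthetic rates.

```python
def pipeline_throughput(step_rates):
    """Overall rate of a serial pipeline is limited by its slowest step."""
    return min(step_rates)

# Hypothetical relative rates for three steps; the middle one is the bottleneck.
baseline = pipeline_throughput([10.0, 3.0, 8.0])
# Doubling the bottleneck step doubles throughput...
improved = pipeline_throughput([10.0, 6.0, 8.0])
# ...while speeding up a non-bottleneck step changes nothing.
no_gain = pipeline_throughput([20.0, 3.0, 8.0])
```

This is why the researchers targeted the two slowest steps they identified rather than any of the other 168: improvements elsewhere would be invisible in the overall rate.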
In this study, the team increased crop growth by 27 percent by resolving two constraints: one in the first part of photosynthesis where plants transform light energy into chemical energy and one in the second part where carbon dioxide is fixed into sugars.
Inside two photosystems, sunlight is captured and turned into chemical energy that can be used for other processes in photosynthesis. A transport protein called plastocyanin moves electrons into the photosystem to fuel this process. But plastocyanin has a high affinity for its acceptor protein in the photosystem so it hangs around, failing to shuttle electrons back and forth efficiently.
The team addressed this first bottleneck by helping plastocyanin share the load with the addition of cytochrome c6—a more efficient transport protein that has a similar function in algae. Plastocyanin requires copper and cytochrome requires iron to function. Depending on the availability of these nutrients, algae can choose between these two transport proteins.
At the same time, the team has improved a photosynthetic bottleneck in the Calvin-Benson Cycle—wherein carbon dioxide is fixed into sugars—by bulking up the amount of a key enzyme called SBPase, borrowing the additional cellular machinery from another plant species and cyanobacteria.
By adding “cellular forklifts” to shuttle electrons into the photosystems and “cellular machinery” for the Calvin Cycle, the team also improved the crop’s water-use efficiency, or the ratio of biomass produced to water lost by the plant.
“In our field trials, we discovered that these plants are using less water to make more biomass,” said principal investigator Christine Raines, a professor in the School of Life Sciences at Essex where she also serves as the Pro-Vice-Chancellor for Research. “The mechanism responsible for this additional improvement is not yet clear, but we are continuing to explore this to help us understand why and how this works.”
2016 Field Trials conducted at the University of Illinois' Energy Farm (Credit: Brian Stauffer/University of Illinois)
These two improvements, when combined, have been shown to increase crop productivity by 52 percent in the greenhouse. More importantly, this study showed up to a 27 percent increase in crop growth in field trials, which is the true test of any crop improvement—demonstrating that these photosynthetic hacks can boost crop production in real-world growing conditions.
“This study provides the exciting opportunity to potentially combine three confirmed and independent methods of achieving 20 percent increases in crop productivity,” said RIPE Director Stephen Long, Ikenberry Endowed University Chair of Crop Sciences and Plant Biology at the Carl R. Woese Institute for Genomic Biology at Illinois. “Our modeling suggests that stacking this breakthrough with two previous discoveries from the RIPE project could result in additive yield gains totaling as much as 50 to 60 percent in food crops.”
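As a rough illustration of the stacking arithmetic in the quote above, the sketch below (my own assumed numbers, not the RIPE project's model) compares simple additive stacking of three independent ~20 percent gains with compounding them; the quoted 50 to 60 percent range is consistent with the additive case.

```python
# Back-of-the-envelope sketch (assumed arithmetic, not the project's model)
# of how three independent ~20% yield improvements could combine.
gains = [0.20, 0.20, 0.20]  # hypothetical per-trait yield gains

additive = sum(gains)  # simple additive stacking: 0.60, i.e. a 60% total gain

multiplicative = 1.0
for g in gains:
    multiplicative *= 1.0 + g
multiplicative -= 1.0  # compounding stacking: 1.2**3 - 1, roughly 0.73

print(f"additive stacking:    {additive:.0%}")
print(f"compounding stacking: {multiplicative:.0%}")
```

Either way of combining the gains lands in or above the range the researchers describe, which is why stacking independent improvements is so attractive.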
RIPE’s first discovery, published in Science, helped plants adapt to changing light conditions to increase yields by as much as 20 percent. The project’s second breakthrough, also published in Science, created a shortcut in how plants deal with a glitch in photosynthesis to boost productivity by 20 to 40 percent.
Next, the team plans to translate these discoveries from tobacco—a model crop used in this study as a test-bed for genetic improvements because it is easy to engineer, grow, and test—to staple food crops such as cassava, cowpea, maize, soybean and rice that are needed to feed our growing population this century. The RIPE project and its sponsors are committed to ensuring Global Access and making the project’s technologies available to the farmers who need them the most.
|
Dinosaurs have taken the world by storm, just as they did millions of years ago. The difference is that the present storm is the fascination sparked by the many dinosaur fossil discoveries of the past year, 2008-2009. Finds such as Matilda, Clancy, Banjo and Zac have given paleontologists something to celebrate and have brought these ancient reptiles back into the limelight.
It is no secret that dinosaurs existed. These reptiles, known for their huge size and small brains, roamed the earth over 100 million years ago and left behind their legacy in the fossils we find today. Here is a guide to the dinosaur discoveries of 2008-2009. Countries such as Australia and China have emerged as prime dinosaur territory, shedding light on the geographic distribution of different dinosaur species.
#1. Tiny T-rex Fossil in China (September 2009)
The beast Raptorex kriegsteini is believed, on the strength of its close similarities to the famous T-rex (Tyrannosaurus rex), to be a forerunner of that most dangerous of all dinosaurs. It was carnivorous and existed roughly 130 million years ago, tens of millions of years before the T-rex. The fossil found was that of a juvenile, about 3 meters long and weighing some 60 kg.
#2. Sauropod Zac: discovered in Queensland, Australia (August 2009)
This sauropod, nicknamed Zac, was unearthed on a farm in the state of Queensland, Australia. Zac is about 97 million years old, dating to the mid-Cretaceous Period, and was a herbivorous (plant-eating) reptile. It was a huge dinosaur with a long neck and an equally long tail; other features are its small head and blunt teeth.
#3. Banjo, Clancy & Matilda: discovered in Australia (July 2009)
Clancy and Matilda are two large herbivorous sauropods, identified mainly from their thigh, hip and tail bones. Banjo, a carnivorous theropod, is much smaller than the other two and was a ferocious predator, identified from parts of the lower jaw, a few ribs, forearms, legs and hands. These three dinosaurs date to approximately 112 to 98 million years ago and are believed to belong to the mid-Cretaceous Period.
#4. Limusaurus inextricabilis: A bird-like Theropod discovered in China (June 2009)
This theropod, which existed during the Jurassic Period, is believed to be a direct link between reptiles and birds in the evolutionary tree. It has quite a few bird-like features, notably its reduced upper arms, lack of teeth, well-developed beak and three-fingered hands. It dates to 159-160 million years ago.
#5. Epidexipteryx hui: pigeon-sized dinosaur discovered in Inner Mongolia, China (October 2008)
This pigeon-sized, bird-like dinosaur with four tail feathers is believed to have been arboreal, that is, able to live in trees. It existed a few million years before Archaeopteryx, a feathered dinosaur regarded as a forerunner in the evolution of birds.
|
The term acknowledges that addiction is a chronic but treatable medical condition involving changes to circuits involved in reward, stress, and self-control.
As a young scientist in the 1980s, I used then-new imaging technologies to look at the brains of people with drug addictions and, for comparison, people without drug problems. As we began to track and document these unique pictures of the brain, my colleagues and I realized that these images provided the first evidence in humans that there were changes in the brains of addicted individuals that could explain the compulsive nature of their drug taking. The changes were so stark that in some cases it was even possible to identify which people suffered from addiction just from looking at their brain images.
Alan Leshner, who was the Director of the National Institute on Drug Abuse at the time, immediately understood the implications of those findings, and it helped solidify the concept of addiction as a brain disease. Over the past three decades, a scientific consensus has emerged that addiction is a chronic but treatable medical condition involving changes to circuits involved in reward, stress, and self-control; this has helped researchers identify neurobiological abnormalities that can be targeted with therapeutic intervention. It is also leading to the creation of improved ways of delivering addiction treatments in the healthcare system, and it has reduced stigma.
Informed Americans no longer view addiction as a moral failing, and more and more policymakers are recognizing that punishment is an ineffective and inappropriate tool for addressing a person’s drug problems. Treatment is what is needed.
Fortunately, effective medications are available to help in the treatment of opioid use disorders. Medications cannot take the place of an individual’s willpower, but they aid addicted individuals in resisting the constant challenges to their resolve; they have been shown in study after study to reduce illicit drug use and its consequences. They save lives.
Yet the medical model of addiction as a brain disorder or disease has its vocal critics. Some claim that viewing addiction this way minimizes its important social and environmental causes, as though saying addiction is a disorder of brain circuits means that social stresses like loneliness, poverty, violence, and other psychological and environmental factors do not play an important role. In fact, the dominant theoretical framework in addiction science today is the biopsychosocial framework, which recognizes the complex interactions between biology, behavior, and environment.
There are neurobiological substrates for everything we think, feel, and do; and the structure and function of the brain are shaped by environments and behaviors, as well as by genetics, hormones, age, and other biological factors. It is the complex interactions among these factors that underlie disorders like addiction as well as the ability to recover from them. Understanding the ways social and economic deprivation raise the risks for drug use and its consequences is central to prevention science and is a crucial part of the biopsychosocial framework; so is learning how to foster resilience through prevention interventions that foster more healthy family, school, and community environments.
Critics of the brain disorder model also sometimes argue that it places too much emphasis on reward and self-control circuits in the brain, overlooking the crucial role played by learning. They suggest that addiction is not fundamentally different from other experiences that redirect our basic motivational systems and consequently “change the brain.” The example of falling in love is sometimes cited. Love does have some similarities with addiction. As discussed by Maia Szalavitz in Unbroken Brain, it is in the grip of love—whether romantic love or love for a child—that people may forego other healthy aims, endure hardships, break the law, or otherwise go to the ends of the earth to be with and protect the object of their affection.
Within the brain-disorder model, the neuroplasticity that underlies learning is fundamental. Our reward and self-control circuits evolved precisely to enable us to discover new, important, healthy rewards, remember them, and pursue them single-mindedly; drugs are sometimes said to “hijack” those circuits.
Metaphors illuminate complexities at the cost of concealing subtleties, but the metaphor of hijacking remains pretty apt: The highly potent drugs currently claiming so many lives, such as heroin and fentanyl, did not exist for most of our evolutionary history. They exert their effects on sensitive brain circuitry that has been fine-tuned over millions of years to reinforce behaviors that are essential for the individual’s survival and the survival of the species. Because they facilitate the same learning processes as natural rewards, drugs easily trick that circuitry into thinking they are more important than natural rewards like food, sex, or parenting.
What the brain disorder model, within the larger biopsychosocial framework, captures better than other models—such as those that focus on addiction as a learned behavior—is the crucial dimension of interindividual biological variability that makes some people more susceptible than others to this hijacking. Many people try drugs but most do not start to use compulsively or develop an addiction. Studies are identifying gene variants that confer resilience or risk for addiction, as well as environmental factors in early life that affect that risk. This knowledge will enable development of precisely targeted prevention and treatment strategies, just as it is making possible the larger domain of personalized medicine.
Some critics also point out, correctly, that a significant percentage of people who do develop addictions eventually recover without medical treatment. It may take years or decades, may arise from simply “aging out” of a disorder that began during youth, or may result from any number of life changes that help a person replace drug use with other priorities. We still do not understand all the factors that make some people better able to recover than others or the neurobiological mechanisms that support recovery—these are important areas for research.
But when people recover from addiction on their own, it is often because effective treatment has not been readily available or affordable, or the individual has not sought it out; and far too many people do not recover without help, or never get the chance to recover. More than 174 people die every day from drug overdoses. To say that because some people recover from addiction unaided we should not think of it as a disease or disorder would be medically irresponsible. Wider access to medical treatment—especially medications for opioid use disorders—as well as encouraging people with substance use disorders to seek treatment are absolutely essential to prevent these still-escalating numbers of deaths, not to mention reduce the larger devastation of lives, careers, and families caused by addiction.
Addiction is indeed many things—a maladaptive response to environmental stressors, a developmental disorder, a disorder caused by dysregulation of brain circuits, and yes, a learned behavior. We will never be able to address addiction without being able to talk about and address the myriad factors that contribute to it—biological, psychological, behavioral, societal, economic, etc. But viewing it as a treatable medical problem from which people can and do recover is crucial for enabling a public-health–focused response that ensures access to effective treatments and lessens the stigma surrounding a condition that afflicts nearly 10 percent of Americans at some point in their lives.
|
PHILADELPHIA - What was Philadelphia like during the early days of colonial America? The city had many notable characteristics: it was a thriving port, a center of culture and abolitionist sentiment, and eventually the first capital of the United States. Here's why it was so crucial to its colonial inhabitants.
Philadelphia Was A Port Town
As the largest port in colonial America, Philadelphia attracted immigrants from many parts of the world. In the early 1800s, nearly half the population was foreign-born, primarily Germans, Irish, and British. Later, almost 30,000 Russian Jews and 20,000 Italians settled in the city. By the turn of the twentieth century, the city was home to more than a million residents, a large share of them foreign-born.
The first mass migration to Philadelphia occurred in the eighteenth century, when the Pennsylvania Provincial Assembly required ship captains to submit lists of passengers and cargo before leaving port. By 1717, more than seven thousand German immigrants had made the seven-week journey to Philadelphia. From 1749 to the beginning of the American Revolution, over one hundred thousand Scotch-Irish landed in Philadelphia. By the 1880s, the port had grown into the fourth-largest immigrant port in the United States.
It Was A Center Of Culture
By the late seventeenth century, Philadelphia was the heart of colonial America's political, cultural, and intellectual life. The city was home to many of the country's most important institutions, such as the College of Philadelphia, the Library Company of Pennsylvania, and the Pennsylvania Hospital. The city attracted talented individuals from across the region and the Atlantic World, and its thriving culture supported innovations in public welfare, printing, humanities, and arts. The American Philosophical Society was founded in Philadelphia.
Philadelphia is also known as the birthplace of the American Revolution and the site of the Declaration of Independence. Among the many places of interest in the city are Independence Hall and the Liberty Bell. Independence Hall contains the Declaration of Independence and the Constitution of the United States. Betsy Ross' home is still standing on Arch Street. Several historical buildings have been designated as National Historic Sites.
It Was The Capital Of The United States
Philadelphia, Pennsylvania, was the early capital of the United States after the Constitution was ratified. However, the nation's capital moved from Philadelphia to Washington on May 14, 1800. Several factors led to the move, including a compromise between Thomas Jefferson and Alexander Hamilton over slavery and a rowdy incident involving Continental soldiers in 1783. Even so, Philadelphia had long been a vital hub of the new nation and was accessible from North and South.
The city's rapid growth followed one of the first city plans in America: Philadelphia was laid out in 1682 by William Penn and his surveyor Thomas Holme as an orderly grid between the Delaware and Schuylkill rivers. The plan helped the city quickly become a center of economic and political life for the colonies.
It Was A Center For Abolitionist Sentiment
Philadelphia was the site of a milestone in the abolitionist movement roughly 180 years ago, when abolitionists gathered in the city to write the manifesto and constitution of the American Anti-Slavery Society. They based the manifesto on the central idea of the Declaration of Independence and on the Biblical command to love one's neighbor. The society's first president was Arthur Tappan, while William Lloyd Garrison penned the Declaration of Sentiments, the body's official statement.
Abolitionists in Philadelphia were motivated by the plight of enslaved Africans in the city, and many of these activists were wealthy citizens. As late as 1775, Philadelphia merchants held auctions of enslaved people outside the London Coffee House; in that same period, the city's anti-slavery Quakers organized meetings and eventually helped the state pass the first emancipation law.
|
Color Preference Analysis
Grades: 6, 7, 8, 9, 10, 11, 12
Learning Target: Students will produce color preference data and create a method to analyze multidimensional data.
- Using student-created data (collected with an online form-generation tool), generate a table of r, g, b values (RGB color, 0-255) describing the colors students like most and least. Collect both most-favorite and least-favorite color data. Data should be formatted in the following manner:
- Copy and paste the data into these two graphs: Most Favorite, Least Favorite. You will need to delete the existing data table and rename the variables: i1, r1, g1, b1. The i variable is the ID from the data table.
- Share both the spreadsheet and the Favorite graphs to student groups. Have groups answer the following questions:
- Is there a pattern to most and least favorite colors in the class?
- How can you prove the pattern using the data?
- If there is no pattern, can you prove that using data?
- After students have developed a method for their data analysis, ask them to build histograms of the R, G, and B color data and look for a pattern. Answer the following questions:
- Would using a scatterplot show a pattern?
- What is one problem with using a scatterplot?
- What other method could be used?
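The per-channel histogram approach the lesson asks for can be sketched in a few lines. The sample data below is a hypothetical stand-in for the class spreadsheet (names and bin width are my own choices, not part of the lesson):

```python
import random
from statistics import mean

# Hypothetical stand-in for the class data: 30 students' favorite colors
# as (r, g, b) tuples in the 0-255 range. A real lesson would paste in
# the data collected from the online form instead.
random.seed(42)
favorites = [tuple(random.randint(0, 255) for _ in range(3)) for _ in range(30)]

def channel_histogram(colors, channel, bin_width=32):
    """Count how many colors fall into each bin for one channel (0=R, 1=G, 2=B)."""
    bins = [0] * (256 // bin_width)
    for color in colors:
        bins[color[channel] // bin_width] += 1
    return bins

# One histogram per channel lets students look for clustering that a
# single 3-D scatterplot can make hard to see.
for name, idx in (("R", 0), ("G", 1), ("B", 2)):
    print(name, channel_histogram(favorites, idx),
          "mean:", round(mean(c[idx] for c in favorites)))
```

Separating the channels this way is one answer to the scatterplot question above: a scatterplot can only show two of the three color dimensions at once, while three histograms treat each dimension on equal footing.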
CCSS Math Practice
- I can look for and make use of structure.
- I can look for and express regularity in repeated reasoning.
NGSS Crosscutting Concepts
|
The Northern Darwin’s frog is one of only two frogs in the world which exhibit ‘mouth brooding’ parental care, whereby the young undergo part of their development in the parent’s mouth. It is possible this species is now extinct.
Females lay their eggs on damp ground and, when the developing tadpoles start to wriggle in their egg capsules, the guarding male swallows them into his vocal sac. They stay until their jaws and digestive tracts are fully formed, whereupon the male carries them to a stream to be released. The only other species known to perform this behaviour is the closely-related Darwin's frog, which is also suffering population declines.
The Rhinodermatidae is a family that comprises the two Darwin's frogs and another unusual EDGE species, Barrio's frog. Darwin's frogs split from Barrio's frog around 40 million years ago, and together they diverged from all other amphibian lineages some 55 million years ago. In terms of mammalian evolutionary comparisons, they are as distantly related to their closest relatives as whales are to giraffes.
The Northern Darwin’s frog is currently listed as Critically Endangered by the IUCN Red List and considered ‘Possibly Extinct’. The frog has not been seen since 1981 and it could have been driven to extinction by habitat loss, climate change or disease, possibly the Chytrid fungus. Habitat loss through the planting of pine plantations and human expansion threatens much of the former and current range of the species. It is not known from any protected area, as there are none within its historical range.
- Order: Anura
- Family: Rhinodermatidae
- Population: Possibly extinct
- Trend: decreasing
- Size: 31-33mm
This species occurs in Chile from Zapallar to Ramadillas, at elevations up to 500 m above sea level.
Habitat and Ecology
This species has been found in leaf litter in temperate mixed forests, and also in bogs surrounded by forest. Females lay clutches of 12-24 small eggs in moist leaf litter on the ground, where they are guarded by the male. The species feeds on small insects and other small invertebrates, and it is terrestrial and primarily active during the day.
|
Although educational opportunities for minority and low-income students have improved over the past 30 years, the achievement gap has not been closed. Understanding diversity and how it affects teaching and learning is a critical component in reaching this goal. Yet even with additional resources for multicultural education, most educators are not aware of the many ways that racial and cultural diversity affects teaching, learning, and educational outcomes. The situation is further compounded by the placement of uncertified or inexperienced teachers in schools where the majority of students are minorities from low-income families.
There is considerable research demonstrating that teachers lacking “cultural competence” (a deep understanding of ethnic groups, learning styles, and cultural differences) have lower academic expectations and aspirations for students from diverse backgrounds. Findings from a Department of Education survey on teacher preparation indicate that inexperienced teachers do not feel well prepared to teach students from diverse cultural backgrounds or students who are English language learners.
One strategy for improving the quality of teaching for diverse learners is to diversify the teaching profession: The typical teacher is young, white, female, a recent college graduate with limited contact or experience with people of other races or cultures. Researchers argue that students are better served by teachers who share their cultural and social backgrounds, since it is assumed those teachers will have greater cultural awareness and understanding, higher aspirations for student achievement, and the ability to provide positive role models.
Recruiting teachers from diverse backgrounds, however, does not sufficiently address the challenge of meeting the needs of diverse learners. First, there are not enough teachers of diverse backgrounds to go around. While the percentage of minority children in schools has increased, the percentage of minority teachers has not kept pace. In addition, the student population has become increasingly diverse on a variety of levels, making it highly unlikely that any one teacher would have the same cultural and racial background as the students in the class.
Broad changes in pre-service teacher education programs are needed to produce teachers who are effective with a diverse student body. These changes include recruiting teachers who are committed to multicultural education, integrating diversity throughout the undergraduate curriculum, and providing clinical experiences that immerse teacher candidates in the communities of their prospective students.
Because so many teachers begin teaching without the opportunity to develop the skills and knowledge needed to teach diverse students, ongoing professional development in diversity is essential.
How New Teachers Deal with Diversity
Overall, new teachers in a recent survey held positive attitudes toward cultural diversity. The teachers, the majority of whom were young (70 percent were age 35 or under), white (78 percent), and female (77 percent), did not consider schools with a predominantly non-white student population to be “diverse,” even when the background of the teaching staff differed from that of the students. The teachers also tended to view diversity in terms of individual student differences, and worked on addressing the individual interests, needs, and aptitudes of their students. Teachers seldom mentioned diversity in terms of social and educational equity, and very few described students as members of racial, cultural, or linguistic groups that, historically, have been treated unfairly by the education system.
Many indicated they felt well prepared to understand the culture and background of their students and to teach their subject areas in ways that help all students learn. However, the same teachers also said they felt least prepared to address the needs of English language learners or students with special learning needs (see 5 Areas New Teachers Feel Least Prepared, page 9), a pattern that held true not only for teachers in the PEN study but also for those participating in prior studies. These findings indicate a disconnect between the way teachers view culture and the way they view language, raising the question of whether teachers really understand what it means to teach all children.
Across all sites, teachers felt that socioeconomic diversity (in particular, poverty) and academic diversity had more of an effect on their teaching and on student learning than did race or culture. They viewed their lack of preparation in dealing with English language learners and special-needs students in terms of academic diversity, but did not link academic diversity to race or culture.
The level of awareness among teachers of how their racial and cultural backgrounds might affect their teaching and their relationships with students varied significantly. Most teachers did not mention the issue at all, even those describing their student population as predominantly African American and the teaching staff as predominantly white, while others were very conscious of how the nuances of cultural difference affected their teaching.
Some teachers said they felt more comfortable with their students if they shared the same race/culture and lived in the same community, and acknowledged that they may not have the necessary preparation to teach children of other races or cultures.
Teachers also felt it was important for schools to make a greater effort to recruit teaching staff that reflected the student body. They felt that white teachers had to work harder to establish trusting relationships with students of color and that non-white teachers seemed to be able to develop “positive” and “different” connections with students of color.
Teachers with greater diversity awareness felt it was important for all teachers to spend time learning how to relate to students of various racial and cultural backgrounds. They felt teachers need to be more proactive and self-reflective in obtaining a better understanding of how their background and experiences might affect their teaching. Since good student-teacher relationships are a key to academic success, teachers felt they should have the opportunity to learn about their students and to work in the community before they entered into formal teacher-student relationships. Interestingly, these teacher perspectives are supported by research on the integral components of teacher diversity preparation.
Jeff C. Palmer is a teacher, success coach, trainer, Certified Master of Web Copywriting and founder of https://Ebookschoice.com. Jeff is a prolific writer, Senior Research Associate and Infopreneur having written many eBooks, articles and special reports.
|
We're back with David, who's going to take us through the identification process for hardwoods.
David, where do we start?
You should prepare the surface of a wood sample before you examine its cells. Preparing the cross-sectional surface of a piece of wood properly can be frustrating and time consuming, but it is worthwhile. Make a thin, clean cut across the wood's surface with a sharp knife or razor blade. Make the thinnest slice possible to reduce tearing of the wood.
After removing the slice, use a hand lens or magnifying glass to look at the surface. Identifying wood is often a process of elimination. Look for different cell types and write down what you observe. Your notes will help you remember what you have seen and help you identify the wood.
Are there a lot of different cell types to learn? And are they easy to tell apart?
There are four major cell types: fiber tracheids, vessels (or pores), longitudinal parenchyma, and ray parenchyma. All of these cell types are easily identified, so there is no confusion about what they are, and each one serves a unique function in the tree.
Are cells the only thing to look at?
No, that's just the start. After you determine a piece of wood is a hardwood, you should examine the pores in greater detail. Remember that hardwoods contain vessel elements, or pores, that softwoods do not have. You will want to examine the size, distribution, and changes in number of pores to identify the type of hardwood.
Hardwoods can be classified into three groups based on the pores:
• Ring-porous hardwoods, such as oaks and elms, have pores whose size changes abruptly between the earlywood and the latewood, with the large pores concentrated in the earlywood. The largest of the pores are clearly visible to the naked eye.
• Semi-ring-porous hardwoods, such as walnut, pecan, and hickory, have pores that gradually change in size within a growth ring.
• Diffuse-porous hardwoods, such as yellow poplar, gum, and maple, have pores that are the same size throughout the growth ring.
Pores are also distributed in other ways in wood. They can be arranged as follows:
A. Solitary pores: Individual pores evenly spaced.
B. Pore chains: Multiple pores chained together.
C. Nested pores: Clusters of pores connected together.
D. Pore multiples: Two or more pores clustered together.
E. Wavy bands: Bands of pores with a wavy appearance.
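David describes identification as a process of elimination based on observed features. As a rough illustration only, the lookup below encodes the three pore-pattern groups and example species from the interview; the structure and function name are my own, not a formal botanical key:

```python
# Illustrative elimination table built from the pore-pattern groups above.
# The categories and example species come from the interview; the data
# structure itself is a hypothetical sketch, not a complete key.
PORE_PATTERNS = {
    "ring-porous": {
        "description": "pore size changes abruptly between earlywood and latewood",
        "examples": ["oak", "elm"],
    },
    "semi-ring-porous": {
        "description": "pore size changes gradually within a growth ring",
        "examples": ["walnut", "pecan", "hickory"],
    },
    "diffuse-porous": {
        "description": "pores are roughly the same size throughout the ring",
        "examples": ["yellow poplar", "gum", "maple"],
    },
}

def candidate_species(pattern):
    """Return example species consistent with an observed pore pattern."""
    return PORE_PATTERNS[pattern]["examples"]

print(candidate_species("ring-porous"))  # ['oak', 'elm']
```

A real identification would then narrow these candidates further using the ray characteristics discussed next.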
There is actually a lot of difference in these woods. The pictures really help, thank you. So we have cells and pores, and…?
And we have wood rays which look like small stripes that go from one edge of a piece of wood to the other edge on the cross-sectional face. Wood rays transport food and water horizontally in the tree.
The rays in most species are unique and allow for easy identification. Oaks, for example, have very large rays that are visible to the naked eye. Sycamores can also be easily identified by the number of rays.
That's what makes "rift and quartered" flooring and other "figured" wood so distinctive, right?
Exactly. When you saw the lumber, you slice open these rays, making the surface patterns. But beyond the aesthetics, examining the tangential and radial surfaces of wood for the characteristics of rays can help you identify wood species. Rays vary both in height and width, so examining both surfaces is key. Looking at the tangential surface will allow you to look at ray height. Some rays are several inches tall, while others are difficult to see at all.
The rays in oak can be over an inch high (white oak) or less than an inch (red oak). Examining the radial surface will allow us to see what is called the ray fleck of the wood. The fleck is where rays have been cut longitudinally; in the case of oaks or cherry, this gives a "tiger stripe" effect.
If rays are present in every hardwood, why don't we see significant figure in most species?
Because many of the rays are only one cell wide (we call these uniseriate rays). In fact, they can be so narrow that they can't be seen without the aid of a microscope.
And why do rays seem to shine?
The angles of the cell walls and the parenchyma cells around them tend to catch light in a desirable way. Many of the woods we use are chosen simply for the way light seems to dance back from their surfaces.
That helps so much! So next week tropical hardwoods?
I can do that.
|
The beef value chain is a complex system, which includes the production of feed, the raising of beef cattle on grass and in feedlots, processing plants, retailers, food service operations, and the consumer. Broadly, the beef value chain can be split into pre-farm gate (all the processes and activities prior to the harvest of the beef animal) and post-farm gate (all the processes and activities that take place once the beef animal leaves the farm, ranch, or feedlot). Approximately 80% of greenhouse gas (GHG) emissions produced per unit of beef in the United States occur in the pre-farm gate part of the beef value chain.1
The pre-farm gate portion of the beef value chain can be split into three major phases: the cow-calf phase, the stocker or backgrounding phase, and the feedlot or finishing phase.
Feedlots are often believed to be responsible for the largest portion of beef’s GHG emissions. In reality, the cow-calf phase is responsible for most (approximately 70%, Figure 1) of the GHG emissions in the beef value chain prior to the harvest of beef cattle.2-5 Factors that influence GHG emissions in each phase deal with three primary components: the number of animals maintained in each phase at any given time, the diet of the animals in each phase, and efficiency of feed conversion.
Animals in the cow-calf phase are either pregnant or lactating cows, replacement heifers, growing calves, or bulls. Cows that are lactating have higher daily energy and nutrient requirements than other mature, non-lactating animals. Cattle in the cow-calf phase of the industry are largely raised on pasture, consuming mostly forages that are typically of lower quality or digestibility. It has been well established by scientific research that cattle consuming feed with low digestibility tend to generate more methane emissions (a GHG 28 times more potent at trapping heat in the earth’s atmosphere than carbon dioxide6) as compared to cattle eating more digestible feed (e.g., cattle in feedlots eating high-grain diets).5 While cattle in the cow-calf phase produce more methane emissions per animal due to their diet of mostly grass and hay, those feeds are also unsuitable for human consumption; therefore, there is a sustainability tradeoff between methane emissions and the ability of cattle to convert grass into human usable products (e.g., beef, leather).2
From the cow-calf phase, cattle are typically weaned and sold and enter the stocker/backgrounding phase, where they spend additional time grazing forage. However, the GHG emissions from the stocker/backgrounding phase are lower because fewer animals are maintained in this phase and they spend less time there. To put this in perspective, cattle generally have one calf per year as a function of their gestation interval (which is similar to that of a person), so an entire herd of cows must be maintained for a full year to produce one year’s worth of calves, which may then spend approximately 120 days in the backgrounding phase. Occasionally, weaned animals enter the feedlot directly and skip the stocker/backgrounding phase altogether.
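The methane-to-CO2 comparison above can be made concrete with a short calculation. The sketch below uses only the 100-year global warming potential of 28 cited in the text; the per-phase methane quantities are made-up placeholder numbers, not sourced figures.

```python
# Converting methane emissions to CO2-equivalents using the 100-year
# global warming potential (GWP) of 28 cited in the text.
GWP_METHANE = 28  # kg CO2e per kg CH4 (100-year horizon)

def co2_equivalent(methane_kg):
    """Convert kilograms of methane to kilograms of CO2-equivalent."""
    return methane_kg * GWP_METHANE

# Hypothetical per-phase methane totals (kg CH4 per unit of beef) --
# placeholder values chosen only to illustrate the arithmetic.
methane_by_phase = {"cow-calf": 70.0, "stocker": 12.0, "feedlot": 18.0}
co2e_by_phase = {p: co2_equivalent(kg) for p, kg in methane_by_phase.items()}
total = sum(co2e_by_phase.values())
shares = {p: v / total for p, v in co2e_by_phase.items()}
```

Because the conversion is linear, each phase's share of CO2-equivalents equals its share of methane; with the placeholder numbers above, the cow-calf phase accounts for 70% of the total.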
|
A few years ago I traveled to South Vietnam. One day, as I walked along a backcountry road looking for birds (fairly rare there, because the populace of rural villages eats pretty much any wild animal), a woman stooped over in front of me and snatched a big spider off the road for a quick snack. My foray into insectivory is limited to chocolate-covered grasshoppers, but two billion people around the world eat insects. In Korea, boiled silkworm pupae are seasoned and eaten as a snack. Connoisseurs in Japan enjoy aquatic fly larvae sautéed in sugar and soy sauce. In the U.S. you can purchase protein bars made of cricket flour. And why not? Insects are 50-78 percent protein and 77-98 percent digestible.
With food this nutritious, it is no wonder that approximately 60 percent of the world’s birds are dedicated insectivores, surviving on arthropods. There is such a variety of exploitable arthropods that birds employ a wide array of foraging behaviors like hawking, sallying, gleaning, or probing. In temperate area winters, arthropods are scarce so permanent residents have to be flexible and find dormant insects, larvae or eggs, switch to another food source, or leave. Downy Woodpeckers probe galls or stems of plants for larvae. Northern Flickers eat ants and beetles from the ground and may take berries and seeds. Great Tits survive the winter on a regimen of berries and the seeds of beech and hazel.
Flycatchers, warblers, swallows, and swifts, dependent on active insects, migrate to the tropics where they have access to arthropods all year except at high elevations. What happens when all these insectivorous birds arrive, sometimes doubling the bird population? Insectivorous birds that are permanent residents in the tropics tend to be specialists, surviving in narrow foraging niches. For example, eleven percent of insectivorous birds in the upper Amazon basin feed only by acrobatically gleaning insects off aerial leaf litter (dead leaves hanging from understory plants), as some antthrushes and ovenbirds do. Although there are fewer dead leaves than live ones, dead leaves hold more arthropods and thus offer a higher energy yield. Migratory birds arriving in the tropics are opportunists and survive by feeding in the gaps between the residents’ foraging niches. One exception is the Worm-eating Warbler from the Eastern U.S. that winters in Central America. In the spring in the U.S. it spends about 75 percent of its time searching live leaves; on its wintering grounds it forages 75 percent of the time on dead leaves.
Insectivorous birds are important in keeping insect levels in check in forests and so reduce plant damage. Researchers in southern Sweden used nets to exclude birds from tree trunks and branches. After four weeks, there was a 20 percent increase in plant-eating arthropods on the protected tree parts. A similar study in a Jamaican coffee plantation resulted in a 60-70 percent increase in arthropod populations on coffee trees. Insectivores do not have as great an influence in temperate ecosystems as they do in tropical ones because cold winters keep insect populations under some control.
If you don’t want to be accused of engaging in insectivory, the proper term for humans eating bugs is entomophagy.
|
Satellite images are a great help in solving different geological problems; they are a key tool in multiple fields such as geotechnics, hydrogeology, regional geology, structural analysis, mining exploration, geomorphology, and geological mapping. Through photo-interpretive analysis, these images allow large areas to be studied in a short time and at lower cost during the exploration stages. They are also key in areas of difficult access, where it would otherwise be almost impossible to obtain information.
Satellite images are the visual representation of information captured by a sensor mounted on an artificial satellite. Some of the images most widely used in geological exploration today are LANDSAT, ASTER, ALI, QUICKBIRD, SPOT, and HYPERION.
Landsat satellites are intended for the study of natural resources. Over the years, several have been launched into space with the aim of collecting multiple images of the Earth; they systematically and repetitively cover the Earth's surface as they orbit it, later sending this information back to ground stations where it is processed. The analysis of Landsat images basically consists of photo interpretation, in which the images can be modified by means of techniques that enhance certain features or elements.
Multispectral analysis and techniques to highlight certain elements of the image, such as color compositions, digital analysis, etc., complement geological research, providing great advances in geological study. Satellite images are also used in the elaboration of geomorphological maps, updating of geological charts, as well as helping in the conservation of the environment by providing data on the changes that occur due to climate change.
Satellite images from remote sensing are very useful in geological and mining mapping, since the different wavelengths deliver a large amount of the data needed in these areas. Multispectral and hyperspectral sensors allow the identification of different types of lithology and the recognition of minerals such as alunite, illite, chlorite, kaolinite, epidote, and oxides. Satellite images have also been applied with great success in structural geology, since they allow the identification of large linear features. The bands most used for mapping minerals are those corresponding to the visible and infrared spectrum.
Some applications of satellite geological mapping and remote sensing are:
- Determination of hydrothermal alteration zones.
- Determination of mineralized zones (mining).
- Geological risk studies.
- Environmental impact studies.
- Iceberg motion control.
- Geological cartography.
- Mapping of coastal waters, forests, and urban areas.
- Discrimination of type of rocks and soils.
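One common way the visible and infrared bands mentioned above are combined is a band ratio, where one band is divided pixel-by-pixel by another to enhance a target material. The band choices and pixel values below are illustrative only; a real workflow would read calibrated reflectance bands from a Landsat or ASTER scene.

```python
# Sketch of a pixel-wise band-ratio computation, of the kind used to
# highlight alteration minerals in multispectral imagery.
def band_ratio(numerator, denominator):
    """Pixel-wise ratio of two bands; zero-denominator pixels become None."""
    out = []
    for num_row, den_row in zip(numerator, denominator):
        out.append([n / d if d != 0 else None
                    for n, d in zip(num_row, den_row)])
    return out

# Illustrative 2x2 reflectance "bands" (made-up values)
swir = [[0.30, 0.24], [0.18, 0.12]]  # shortwave-infrared band
nir = [[0.15, 0.12], [0.09, 0.00]]   # near-infrared band
ratio = band_ratio(swir, nir)  # high ratios can flag altered material
```

In practice the resulting ratio image is stretched and displayed (or combined with other ratios into a color composite) so that anomalous pixels stand out for follow-up in the field.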
|
The hardest part of a science fair is deciding on a project that suits you. Every year the old standby tornado-in-a-jar and solar system projects are displayed, so why not get creative and choose something no one else will do? If you enjoy animals, try doing a project about turtles. Turtles are easy to find as pets and in the wild, and they also make great subjects.
Compare Turtle Behaviors
Research and make a list of different species of turtles. If you already have pet turtles or turtles in your yard, use these species. Ask friends who have pet turtles or turtles near their homes if you can use their turtles. You only need two species, but use more if you can or want to. Observe these turtles at different times of the day. Make sure to observe them at the same times each day (for example, each morning, afternoon, and night). Make notes about what each turtle is doing at these times. When do they like to eat, sleep? When are they more active? When do they prefer day/light, evening/shady, or nighttime/dark? Record your observations and compare the behaviors between the different species. How are they alike? How are they different?
Is It a Turtle or a Tortoise?
Research tortoises and turtles. Gather information regarding their habitat, feeding habits, sleeping habits, hibernation, breeding, offspring, where can you find them in the United States, and so forth. Find pictures of turtles and tortoises and note the likenesses and differences in appearance. Next, examine live turtles and tortoises. Again, observe likenesses and differences in appearance. If you are able to observe them for several days, observe the differences in how they live, eat, and sleep and in their habitats. Record all of your observations, and note how they assist in telling the difference between a turtle and a tortoise.
What Color Do Turtles Prefer?
First, get your pet (or, with permission, someone else's pet) turtle. Make sure it is in its usual, comfortable habitat (tank with water, heat, light, and so forth). Provide the turtle with a variety of different foods that appeal to and are safe for it. Make sure that each food is a different, vibrant color (tomato, carrot, spinach, apple, grape, banana, or whatever you choose to use). Record which food the turtle goes to first. Repeat this procedure every day or every other day over the course of several days. Each time, record which food the turtle goes to first. Make notes about which foods the turtle is least interested in and which ones are the second favorite. Is the turtle consistent in its choices? If so, the turtle may have a favorite color. If not, color may not matter to turtles. How does this affect the way a turtle lives?
|
All religions, in order to remain both authentic and relevant, constantly reflect on how best to respond to changing values and attitudes in society. Rapid changes in such things as technology, the media, family life, marriage and other dimensions of social life all have the potential to create issues within society that in turn become matters of concern or interest for particular religions.
On 10 December 1948, the United Nations General Assembly in Paris proclaimed the Universal Declaration of Human Rights. This milestone document was drafted by a panel of representatives with various legal and cultural backgrounds, and has since been translated into over 500 different languages. The document specifies the fundamental human rights that apply to all people and are to be universally protected.
Article 14 of the Universal Declaration of Human Rights stipulates:
Everyone has the right to seek and to enjoy in other countries asylum from persecution.
This right may not be invoked in the case of prosecutions genuinely arising from non-political crimes or from acts contrary to the purposes and principles of the United Nations.
Australia was one of eight nations involved in the drafting of the Universal Declaration of Human Rights, and is a party to the declaration.
United Nations (n.d.). Universal Declaration of Human Rights. Retrieved from http://www.un.org/en/universal-declaration-human-rights/
Ted-Ed. (2016, June 16). What does it mean to be a refugee? - Benedetta Berti and Evelien Borgman. Retrieved from https://www.youtube.com/watch?v=25bwiSikRsI
Who are refugees?
Until 1951, there was no commonly accepted term for people fleeing persecution. People who fled their country were known as stateless people, migrants or refugees. There were no universally recognised definitions for these categories and different countries treated these people in different ways.
Following the mass migrations caused by the Second World War (particularly in Europe), it was decided that there needed to be a common understanding of which people needed protection and how they should be protected. This resulted in the development of the 1951 Convention relating to the Status of Refugees, which defines a refugee as:
"Any person who owing to a well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his/her nationality and is unable, or owing to such fear, is unwilling to avail himself/herself of the protection of that country."
Who are asylum seekers?
An asylum seeker is a person who has sought protection as a refugee, but whose claim for refugee status has not yet been assessed. Many refugees have at some point been asylum seekers; that is, they have lodged an individual claim for protection and have had that claim assessed by a government or UNHCR. Some refugees, however, do not formally seek protection as asylum seekers. During mass influx situations, people may be declared "prima facie" refugees without having undergone an individual assessment of their claims, as conducting individual interviews in these circumstances is generally impracticable (due to the large numbers involved) and unnecessary (as the reasons for flight are usually self-evident). In other cases, refugees may be unable to access formal status determination processes, or they may simply be unaware that they are entitled to claim protection as a refugee.
What is the difference between a refugee and a migrant?
A migrant is a person who makes a conscious choice to leave their country to seek a better life elsewhere. Before they decided to leave their country, migrants can seek information about their new home, study the language and explore employment opportunities. They can plan their travel, take their belongings with them and say goodbye to the important people in their lives. They are free to return home at any time if things don't work out as they had hoped, if they get homesick or if they wish to visit family members and friends left behind.
Refugees are forced to leave their country because they are at risk of, or have experienced, persecution. Refugees' concerns are human rights and safety, not economic advantage. They leave behind their homes, most or all of their belongings, family members and friends. Some are forced to flee with no warning, and many have experienced significant trauma or been tortured or otherwise ill-treated. The journey to safety is fraught with hazard and many refugees risk their lives in search of protection. They cannot return unless the situation that forced them to leave improves.
People who choose to migrate for economic reasons are sometimes called "economic refugees", especially if they are trying to escape from poverty. However, they are not refugees under international law. The correct term for people who leave their country or place of residence because they want to seek a better life is "economic migrant".
Internally Displaced Persons
Internally displaced persons (IDPs) are often referred to as refugees. However, while refugees and IDPs may flee for similar reasons (for example, armed conflict or persecution), their legal status is very different. Unlike refugees, IDPs remain within the borders of their home countries and are legally under the protection of their own government, even in cases where the government's actions are the cause of their flight. A person cannot be recognised as a refugee unless they are outside their home country.
The term "refugee" is also used colloquially to refer to people who have been displaced due to a natural disaster (such as an earthquake or volcanic eruption) or environmental change.
SBS Viceland. (2015, July 23). Asylum seekers and Australia: is the debate over? Retrieved from https://www.youtube.com/watch?v=kO1AKcBUkp4
|
Lyme disease (or borreliosis) is a tick-borne infection caused by certain species of the Borrelia genus (B. burgdorferi in the US; predominantly B. afzelii and B. garinii in Asia and Europe). There are three stages of Lyme disease. Stage I (early localized disease) is characterized by erythema migrans (EM), an expanding circular red rash at the site of the tick bite, and may be associated with flu‑like symptoms. In stage II (early disseminated disease), patients may present with neurological symptoms (e.g., facial palsy), migratory arthralgia, and cardiac manifestations (e.g., myocarditis). Stage III (late disease) is characterized by chronic arthritis and CNS involvement (late neuroborreliosis) with possible progressive encephalomyelitis. In Asia and Europe, further skin manifestations may also occur in stage II (lymphadenosis cutis benigna) and stage III (acrodermatitis chronica atrophicans). Lyme disease is a clinical diagnosis in patients presenting with EM. Serological tests (e.g., Western blot; enzyme-linked immunosorbent assay) can help support the clinical diagnosis, especially if the presence of EM is not known or questionable. Lyme disease is treated with antibiotics; the drugs of choice are doxycycline for localized disease and ceftriaxone for disseminated disease.
- Incidence: most commonly reported vector-borne disease in the US
- Geographical distribution: primarily the Northeast and upper Midwest of the US
Epidemiological data refers to the US, unless otherwise specified.
Various tick species: mainly Ixodes scapularis (deer or black-legged tick) in the northeastern and upper midwestern US
- Ixodes pacificus (western black-legged tick) in the northwestern US
- Ixodes ricinus (castor bean tick) in Europe
- Typically found in forests or fields on tall brush or grass
- The incidence of Lyme disease is highest between April and October (especially from June to August).
- Peromyscus leucopus, the white‑footed mouse, is the primary reservoir of B. burgdorferi in the US.
Increased risk of disease for:
- Outdoor workers (landscapers, farmers, etc.)
- Outdoor enthusiasts (i.e., hikers, hunters, etc.)
- Reservoir hosts: deer, cattle
Stage I (early localized Lyme disease)
Symptoms develop within 7–14 days after a tick bite.
Erythema chronicum migrans (EM)
- Pathognomonic of early Lyme disease
- Occurs in approx. 70–80% of infected individuals
- Usually a slowly expanding red ring around the bite site with central clearing (“bull's eye rash”)
- Typically warm, painless
- Possibly pruritic
- EM is often the only symptom.
- Self-limiting (typically subsides within 3–4 weeks)
- Flu‑like symptoms: fever, fatigue, malaise, lethargy, headache, myalgias, and arthralgias
Stage II (early disseminated Lyme disease)
Symptoms develop 3–10 weeks after a tick bite
- Migratory arthralgia: can progress to Lyme arthritis
- Early neuroborreliosis
- Lyme carditis
- Cutaneous manifestations
Stage III (late Lyme disease)
Symptoms develop months to years after the initial infection
- Chronic Lyme arthritis (10% of cases)
- Late neuroborreliosis manifestations include:
Acrodermatitis chronica atrophicans (ACA, also called Herxheimer's disease): in Europe and Asia
- Chronic progressive dermatological disease due to infection with Borrelia afzelii that occurs only in Europe and Asia and most commonly affects women > 40 years of age
- Manifestation on the extensor side of extremities
- Stages in the course of the disease:
To remember important symptoms of Lyme disease, think of someone making a FACE (Facial nerve palsy, Arthritis, Carditis, Erythema migrans) when biting into a lime.
- After a tick bite, observe for erythema chronicum migrans.
- If stage I Lyme disease is likely (e.g., EM is present), start empiric antibiotics without further testing.
- If symptoms of Lyme disease arise in a patient with possible exposure (especially if recently traveled to an endemic area), conduct two-step serological testing.
- If signs of neuroborreliosis are present and other tests are inconclusive, consider additional procedures, such as a lumbar puncture for cerebrospinal fluid testing.
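The triage steps above can be summarized as a small decision function. This is only a sketch of the flow described in the text; the parameter names and return strings are illustrative, not a clinical algorithm.

```python
# Sketch of the Lyme disease diagnostic decision flow described above.
def next_step(em_present, possible_exposure, neuro_signs=False):
    """Map the clinical findings to the recommended next action."""
    if em_present:
        # Stage I likely: treat empirically, no testing required
        return "start empiric antibiotics; no further testing needed"
    if neuro_signs:
        return "two-step serology; consider lumbar puncture if inconclusive"
    if possible_exposure:
        return "two-step serological testing (ELISA, then Western blot)"
    return "consider other diagnoses"
```

For example, a patient with a classic bull's-eye rash is treated without testing, while a patient with compatible symptoms after travel to an endemic area is routed to two-step serology first.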
Two‑step serological testing
- Initial test: enzyme‑linked immunosorbent assay (ELISA)
- Confirmatory test: Western blot
- Detect IgG and IgM antibodies against Borrelia
- Results are only significant with corresponding clinical symptoms because:
- Positive results only demonstrate exposure to Borrelia (not necessarily current infection).
- False negative results are possible if seroconversion has not yet occurred (may take up to 8 weeks).
- Various diseases can lead to a false positive serology as a result of cross-reactions, including:
- Possible blood test findings:
- Cerebrospinal fluid testing
- If peripheral neurological symptoms are present: evaluate axonal damage
Borrelia-specific intrathecal antibodies with normal protein and without pleocytosis indicate a past infection. Detection of elevated antibodies alone does not provide conclusive evidence of an active infection.
- Commonly conducted if signs of arthritis are present, but results do not allow differentiation from septic arthritis without PCR
- Synovial fluid findings
- Rule of 7s: Children who meet all of the following criteria can be identified as at low risk for Lyme meningitis and may be treated accordingly in an outpatient setting until lab results become available.
- For erythema migrans
- For Lyme carditis
- For Lyme arthritis
- For neuroborreliosis
- For differential diagnoses of tick bite: See “Overview of tick-borne diseases.”
The differential diagnoses listed here are not exhaustive.
|Stages|Presentation|General therapy|Therapy in pregnant/nursing patients|
|---|---|---|---|
|Localized Lyme disease| | | |
|Disseminated Lyme disease| | | |
If infection is likely (e.g., EM is present), start antibiotic treatment!
Possible complications following successful antibiotic treatment
Post-Lyme disease syndrome (PLDS)
- Description: a somewhat controversial syndrome (the medical community does not agree on its existence) following successful treatment of Lyme disease that is associated with pain, fatigue, and difficulty concentrating that lasts > 6 months
- Differential diagnosis: somatoform disorders, unsuccessfully treated chronic Lyme disease
- Treatment: symptomatic treatment with general medical and psychosomatic support
- There is no approved vaccine on the market for Lyme disease. There was a Lyme disease vaccine in the past that offered temporary protection, but it was discontinued in 2002 because of low demand.
- Avoid prime habitats in areas known for Lyme disease.
Tick bite prevention: Prevent and properly manage tick bites to avoid exposure.
- Wear protective clothing: e.g., long-sleeved shirts, long pants, and light colors.
- Use tick repellent and pesticides.
- Check body for tick bites.
Remove ticks immediately!
- Grasp the tick with tweezers directly above the skin's surface.
- Carefully pull upward with even pressure.
- Do not use nail polish remover, adhesives, oils, or similar substances to remove the tick. The tick should be removed quickly rather than waiting for it to detach slowly.
- Disinfect the site of the bite and dispose of the tick
- Observe the bite site for early detection of EM.
Post‑exposure prophylaxis for Lyme disease
- Although controversial, post-exposure prophylaxis may be considered for patients who meet all of the following criteria:
- The attached tick can be identified as an adult or nymphal Ixodes scapularis tick.
- The tick has been attached for ≥ 36 hours (based on degree of engorgement or amount of time since exposure).
- Prophylaxis can be started within 72 hours of tick removal.
- The local rate of tick infection with B. burgdorferi is ≥ 20% (known to occur in parts of New England, parts of the mid‑Atlantic states, and parts of Minnesota and Wisconsin).
- The patient can take doxycycline (e.g., the person is neither pregnant nor breastfeeding, nor a child < 8 years of age).
- If the patient meets all the above criteria, 200 mg of doxycycline can be given to adults and 4 mg/kg to children ≥ 8 years (maximum dose: 200 mg).
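The dosing rule above (200 mg for adults, 4 mg/kg for children ≥ 8 years up to a 200 mg maximum) can be written as a short calculation. The 18-year adult cutoff is an assumption for illustration; the text itself only distinguishes "adults" from "children ≥ 8 years". This is a sketch of the stated rule, not clinical guidance.

```python
# Single-dose doxycycline prophylaxis per the criteria described above.
ADULT_DOSE_MG = 200
PEDIATRIC_MG_PER_KG = 4
MAX_DOSE_MG = 200
MIN_AGE_YEARS = 8    # doxycycline not given below this age
ADULT_AGE_YEARS = 18  # assumed adult cutoff (not specified in the text)

def prophylaxis_dose_mg(age_years, weight_kg):
    """Return the single prophylactic dose in mg, or raise if ineligible."""
    if age_years < MIN_AGE_YEARS:
        raise ValueError("doxycycline prophylaxis is not given under age 8")
    if age_years >= ADULT_AGE_YEARS:
        return ADULT_DOSE_MG
    # Children >= 8 years: weight-based dose, capped at the adult dose
    return min(PEDIATRIC_MG_PER_KG * weight_kg, MAX_DOSE_MG)
```

A 30 kg ten-year-old would receive 120 mg, while a heavier child's weight-based dose is capped at the 200 mg adult maximum.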
|
A blank page and a lot of confusion.
Crumpled up attempts to get it right on the first try, struggling desperately to eloquently place thoughts upon the page.
Too often, students need prompts or formulas to get started because they’ve been trained into submission, learning not to trust themselves or their own ideas.
As young learners, we provide them with structures like the five-paragraph essay in order to help them understand organization. Unfortunately, once this crutch is put into place, students seldom abandon the aid.
The training wheels we provided students (with the best of intentions) as young learners become their undoing. They come to rely on them and then become afraid to take them off. It becomes easy and convenient to just do as they are told: afraid to be wrong, constrained by the choices we provide. The risk of failed creativity becomes paralyzing, and so they do what they know.
So how do we teach kids to get started and abandon the formulas?
It starts by getting them to write informally as often as possible. Whether in a journal, a notebook, or on a blog, kids need to be writing all of the time. Giving them ungraded opportunities to write and take risks until they find their voices offers low-stakes chances to see they can do it.
They must learn to trust themselves.
For students who can’t get started, it’s okay to provide open-ended prompts and allow them to do with it whatever they will. Over time, take the prompts away and allow them to free write often on whatever works for them.
Once writing becomes a part of their learning routine, then it’s time to start asking them to write more formally. Not timed writing, but structured writing. Learning to decide on a structure based on the content is a skill that can be taught. Provide students lots of options and examples of different styles of writing. Ask them to read and evaluate what others have done until they find something that resonates with them.
Remind them that they can take bits and pieces of things they have read and learn to make it their own. This is how we develop our voices over time.
When they say they don’t know how to start, tell them to brainstorm first, then write an outline. Make sure they get all of the information on the page before they start writing paragraphs.
Students often put so much pressure on themselves to get it right that they fail to understand the purpose of the writing in the first place. If there is no one right answer, then students can learn that their writing doesn’t have to look any one particular way. If we are transparent about the purpose of each writing assessment, and we provide students multiple opportunities for formative feedback, that introductory paragraph won’t be so daunting.
As we move away from the formulas, students need to remember that all writing is about saying something. First, they must be clear about what they want to say and then they can revise to say it more artfully. When engaging readers, they need to explore the connection that will draw an audience in (especially in academic writing) and go from there.
No writing is ever perfect. Even professional writers don’t get it right the first time. It takes practice and attention and patience, like all learning. Let’s teach kids the wonder of taking risks in their writing and embracing the growth that comes from that.
Revision is a practiced science, but writing is an art. So be creative when writing and then come back with a focused eye. Model these behaviors for your students. Share early drafts of your own work. Be transparent, so they can be.
What is the most challenging part of writing for you? Please share.
The opinions expressed in Work in Progress are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
|
The specification aims to encourage students to develop a range of skills, knowledge and understanding needed to create and produce music using technology. It provides a worthwhile course of study to broaden experience, foster creativity and promote personal and social development. Through coursework components, students should be able to create, develop and record musical ideas using a range of technology; identify and correct errors and misjudgements in the use of technology; and demonstrate understanding of, and comment perceptively on, the technological and contextual features of recorded music.
The A Level course consists of 4 Components:
Component 1: Recording (Externally assessed, 20% of total A level Mark). This unit gives students the opportunity to learn how to use production tools and techniques to capture, edit, process and mix an audio recording. One recording between 3 and 3½ minutes is chosen from a list of 10 songs provided by Edexcel. This will involve recording at least seven instruments to create an audio recording of the chosen song.
Component 2: Technology based Composition (Externally assessed, 20% of total A level mark). This unit gives students the opportunity to create, edit, manipulate and structure sounds into a composition and to develop their composition skills leading to the creation of one original composition. The composition is in response to a brief set by Edexcel and must include synthesis and sampling. The total time for the compositions is 3 minutes.
Component 3: Listening and Analysing (Externally assessed by examination 25% of total A level mark). This unit focuses on listening to familiar music and understanding how it works. Areas of study include Vocal Music, Instrumental Music, Music for Film, Popular Music and Jazz, Fusions and New Directions. The exam is divided into two sections.
Section A: Listening and Analysing; Four questions based on unfamiliar commercial recordings.
Section B: Extended written response: two essay questions, one comparing two unfamiliar commercial recordings and one on another commercial recording.
Component 4: Producing and analysing (Externally assessed, 35% of total A level mark). This is a written and practical exam which tests knowledge of editing, mixing and production techniques. Students will create, correct and combine audio and MIDI tracks to form a completed mix. The written component will focus on testing the application of knowledge of mixing to a specific scenario.
A minimum of a Grade 6 in GCSE Music or alternatively a Grade 5 practical award and a Grade 5 theory award.
Examination Board: Edexcel. Course Number: 9MT0
|
Retinal disorders are conditions that affect the layer of tissue at the back of the eye, known as the retina. This important part of the eye responds to light and passes on images to the brain. All retinal disorders affect your vision in some way, but some can also lead to blindness.
Macular degeneration. Also known as age-related macular degeneration (AMD), this condition affects the center part of the retina, the macula. This area is needed for the sharp, central vision that is used during everyday activities such as driving, reading or working with tools. This condition is a leading cause of vision loss in people over the age of 60. Treatment can slow the loss of vision, but it will not restore vision that has already been lost.
Diabetic eye disease. The high blood sugar (glucose) levels that occur with diabetes can also affect vision. One type of diabetic eye disease is diabetic retinopathy, which affects the blood vessels in the retina. This can lead to blurry or double vision, blank spots in the vision and pain in one or both eyes. Diabetics may also be at higher risk of developing other eye conditions, such as cataracts and glaucoma.
Retinal detachment. This medical emergency happens when the retina pulls away from or lifts off its normal position. It can cause symptoms such as floaters in the field of vision, light flashes and the feeling of a “curtain” in the way of your vision. If not treated right away, a retinal detachment can lead to permanent blindness in that eye.
Retinoblastoma. This cancer of the retina is rare overall, although it is the most common type of eye cancer in children. The cancer starts in the cells of the retina but can spread to other parts of the body (metastasize).
Macular pucker. Scar tissue on the macula can make the central vision become blurry and distorted. Although the symptoms are similar, macular pucker is not the same as age-related macular degeneration. The symptoms of a macular pucker are usually mild and do not require treatment. Sometimes, the scar tissue can fall off the retina on its own, and the vision will return to normal.
Macular hole. This condition is caused by a small break in the macula, which leads to blurriness and distortion in the central vision. Related to aging, this condition usually happens in people over the age of 60. Some macular holes close up on their own while others require surgery to help improve vision.
Floaters. These are specks, or “cobwebs,” that appear in the field of vision. Unlike scratches on the cornea, which follow your eye movements, floaters can drift even when the eyes are not moving. Most people have some floaters and have no problem with their vision. A sudden increase in the number of floaters, though, can indicate a more serious eye problem such as retinal detachment.
If you notice a change in your vision, or simply have not undergone a routine eye exam recently, schedule an appointment with an eye care professional.
Age-Related Macular Degeneration
One of the leading causes of vision loss in people who are age 50 or older is age-related macular degeneration (AMD). This common eye condition leads to damage of a small spot near the center of the retina called the macula. The macula provides us with the ability to clearly see objects that are straight ahead.
|
Direction: In the following passage there are blanks, each of which has been numbered. Choose the correct word from the given options which fits the blank appropriately.
___(1)____ October 12, 1492, the Italian ___(2)___ Christopher Columbus landed on a small island in the Caribbean, which he named San Salvador and claimed for Spain, the country that had ___(3)___ his voyage. ___(4)__ Columbus was not actually the first European to reach the Americas, and millions of indigenous people already lived there, he has traditionally been ___(5)___ in the United States as the "discoverer" of the Americas. The first Columbus Day __(6)___ took place ___(7)___ 1792, the 300th anniversary of his ___(8)____. Columbus Day celebrations ___(9)___ in popularity over the decades that followed, especially in Italian American and other Catholic immigrant communities, where amid a general climate of anti-Catholic prejudice, Columbus was ___(10)___ as a symbol of what it meant to be both Catholic and American. A federal holiday was signed into law by Franklin Delano Roosevelt in 1937.
This question was previously asked in
SSC CPO Tier-II Previous Paper 3 (Held On: 15th Dec 2017)
The correct answer is 'Grew'.
Correct Sentence: "Columbus Day celebrations grew in popularity over the decades that followed..."
|
The human male and female reproductive systems are made from the same embryonic cells and are perhaps more similar in structure and function than is first apparent. There are two ovaries protected within the pelvic cavity. The ovary is the site of egg cell production. The egg cell is the female gamete and is haploid – it has only one chromosome from each homologous pair. The ovaries are also endocrine organs that produce the female sex hormones oestrogen and progesterone.
[Indeed, the differences between the gametes are the essential difference between male and female organisms. Females are always individuals who produce a small number of large, often immobile gametes. You can easily remember this: female – few, fixed, fat. Males are organisms that produce large numbers of small, motile gametes. Male – many, mini, motile.]
This diagram shows the human egg cell after it has been released from the ovary into the Fallopian tube (or oviduct). The egg cell is coloured pink in the diagram above (if you are being picky it is not really an egg but a cell called a secondary oocyte, but I won’t stress over this now…). The egg cell is surrounded by a thick jelly-like layer called the zona pellucida and then by a whole cluster of the mother’s cells from her ovary – the corona radiata. The big idea to remember is that the egg cell is very large compared to sperm cells: it is one of the largest cells in humans, with a diameter of about 0.1 millimetres (100 micrometers).
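To get a feel for the size difference between the gametes, here is a back-of-envelope comparison, assuming a roughly 0.1 mm (100 micrometre) egg diameter and a ~5 micrometre sperm head; both are typical textbook figures used here as assumptions, so treat the result as an order-of-magnitude sketch:

```python
# Rough size comparison of human gametes (assumed typical figures).
egg_diameter_um = 100.0        # assumption: ~0.1 mm egg diameter
sperm_head_diameter_um = 5.0   # assumption: typical sperm-head diameter

# Volume scales with the cube of the diameter for roughly spherical cells.
volume_ratio = (egg_diameter_um / sperm_head_diameter_um) ** 3
print(volume_ratio)  # 8000.0 - the egg is thousands of times larger by volume
```

Even with generous error bars on both diameters, the cube law guarantees the egg dwarfs the sperm by volume.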
The Fallopian tubes carry the egg down towards the uterus. The lining of the Fallopian tubes is covered in a ciliated epithelium. The cilia waft to generate a current that helps move the egg down towards the uterus. Sperm cells have to swim against this current to reach the egg in the tubes. The Fallopian tube is the usual site for fertilisation to occur.
Once fertilisation has occurred, the newly formed zygote divides over and over again by mitosis to form a ball of cells called an embryo. The embryo continues its journey down the Fallopian tube until it reaches the uterus. The uterus (womb) is a muscular organ with a thickened and blood-rich lining called the endometrium. Implantation occurs when the embryo attaches to the endometrium and over time, a placenta forms. The embryo develops into a foetus and remains in the uterus for 9 months.
The cervix is a narrow opening between the uterus and the vagina. It holds the developing foetus in the uterus during pregnancy but dilates (widens) at birth to form part of the birth canal. The vagina is the organ into which sperm are deposited from the man’s penis during sexual intercourse. The lining of the vagina is acidic to protect against bacterial pathogens and the sperm cells released into the vagina quickly start to swim away from the acidity in grooves in the lining. These grooves lead to the cervix and hence into the uterus.
|
How many times have students been pigeon-holed into the category of displaying bad or negative behavior when opposing class work or during transitions from a state of play or break back to the classroom and vice versa?
When the body appears like this during an overt meltdown:
The Brain Actually looks like this:
The Emotional Brain highlighted here comprises two specific parts of the limbic system: the amygdala and the hypothalamus. The amygdala controls the brain’s ability to coordinate many responses to emotional stimuli, including endocrine, autonomic, and behavioral responses. Stress, anxiety, and fear are the primary stimuli that produce these responses, and the amygdala mediates among them.
The hypothalamus plays a significant role in the endocrine system and is affected by the amygdala. It is responsible for maintaining your body’s internal balance, which is known as homeostasis. This includes heart rate, blood pressure, fluid and electrolyte balance, appetite and sleep cycles, and it is the key connector between the endocrine system (glands and hormones) and the nervous system.
Now we are painting this picture of a brain developing in a functionally optimal manner, without aberrations from either genetic or environmental factors. However, some students have underlying differences in brain imaging due to those factors and manifest a type of negative behavior that can easily be mistaken for, and categorized as, a regular tantrum; in these brains, the subtle elevations in amygdala and hypothalamic responses are pushed to abnormally erratic levels.
For example, take the Attention Deficit Hyperactivity Brain in comparison to the Normal Brain:
We see clearly that the shape of the cerebrum of the ADHD brain is not elongated or similar to a normal brain’s saddle-type shape. It is oblong, with heavy concentration on temporal and occipital real estate versus the butterfly formation of the normal brain. What is also fascinating is that the corpus callosum appears lighter in the ADHD brain, meaning there is no clear path of communication between the two hemispheres as compared to a normal brain. The blues indicate calm sections of the brains, and the greens indicate a brain in an even-keeled state, balanced and not in fight-or-flight mode.
Here’s also an image of a person with and without ADHD medication:
With Adderall, the brain is utilized at full functional capacity: the chemical signaling between neurons is efficient and there are few, if any, underutilized processing areas. When Adderall is wearing off, the results are striking: the only sections of the brain with any residual function left are the orbitofrontal area of the prefrontal cortex (responsible for sensory integration and some decision making) and spotty areas across the four lobes. What is fascinating to mention here is that the loss of Adderall’s effects proceeds from the back to the front of the cerebrum.
These images provide a very clear picture of the typical versus atypical brain, especially the differences between one with ADHD and one without. If only it were as easy for a classroom teacher to distinguish a student with ADHD from a student with sensory overload. The list below is not as ‘yellow’ and ‘red’ as the brains above, but hopefully it will provide clarity and a concrete direction for you to take in order to best meet the needs of your students.
First, it is crucial to note that boys and girls with ADHD display different symptoms; therefore, they are distinguished below. Second, students whose meltdowns result from learned negative behavior will most likely present with similar symptoms; therefore, it falls to teachers to take quantitative data on the targeted behaviors, using forms like the one below:
- Fidgety while sitting
- Talk nonstop
- Constant motion, may include touching items in their path
- Difficulty sitting still
- extreme impatience
- Always “bored”
- Lack verbal filter
- Interrupt others
- Trouble with organization
- Forget directions
- Forget or incomplete homework
- Lose or misplace papers, books, personal belongings
For students with ADHD, these symptoms as well as sensory overload meltdowns will be manifested consistently throughout the day across environments, unless the student is highly engaged in a preferred activity. Students presenting with negative behaviors will have meltdowns at specific yet intermittent periods of the day or throughout the day as will be shown in the ABC Chart above. For example, when the medication is wearing off, one may see a spike in ADHD symptoms in any combination. Once you can answer when, where, how long and make valid hypotheses as to why students are displaying the behaviors below, you should be able to have a pretty strong understanding as to whether your student is having a meltdown because of learned negative behaviors or as a result of having an ADHD brain on sensory overload.
|
Locked into the chilly soil of the Northern Hemisphere's high latitudes are vast stockpiles of carbon compounds.
An estimated 1,400 billion tons of carbon is believed to be resting in the Arctic permafrost — decades' worth of today's human-generated greenhouse emissions. If it stays frozen, it goes nowhere.
But if it thaws, it can start to decompose as bacteria start to munch on it. And that could unlock those compounds, adding them to an atmosphere already warming up due to heat-trapping emissions like carbon dioxide or methane, which punches above its weight as a greenhouse gas.
A swift, massive release of methane is one of the nightmare scenarios of climate change: A feedback loop that accelerates warming, bringing on consequences like rising sea levels and changes to farmland before people or other species can adapt. But don't panic: Scientists who have studied the soil of the far North say while that "methane bomb" scenario is possible, it's unlikely — at least for now.
"The bomb is maybe there, but it will not explode anytime soon," said Vladimir Romanovsky, a geophysicist who studies permafrost at the University of Alaska in Fairbanks.
Methane is the second most common greenhouse gas, making up about 15 percent of global emissions. It lingers in the atmosphere for a far shorter time than carbon dioxide, but packs more than 80 times the heat-trapping potential during that lifespan.
The Arctic is already warming at roughly double the rate of the rest of the globe. A nearly 40-year record of data from the region "shows clearly and no doubt that permafrost is increasing in temperature, and this increase is very significant," Romanovsky said. That increase is sharpest on Alaska's North Slope, where average temperatures a meter (about 3.3 feet) below ground have gone up 5 degrees Celsius (9 degrees Fahrenheit). At 20 meters down, temperatures have still risen about 3 degrees Celsius, he said.
That leaves the near-surface temperatures about 3°C below freezing. And if it crosses that threshold — which might happen by the middle of the century — the thawing and decomposition of organic matter will result in the release of greenhouse gases, he said. That is likely to include methane, particularly in wetter areas, "but the amount of it is still small compared to CO2," Romanovsky said.
"It probably shouldn't happen within the next few decades. But the farther you go into the future, the probability increases," he said.
Methane is already seeping out from underground in some spots. Romanovsky's colleagues have documented methane bubbles frozen into the ice atop lakes and made videos of themselves setting methane plumes aflame. Eruptions of methane released from thawing underground ice are suspected in the emergence of craters across Siberia's Yamal Peninsula, home to a major Russian natural gas operation.
The chances of a widespread release of carbon compounds from the tundra might be offset by other effects of climate change, such as increased plant growth in the warming region. But it's not certain that will happen consistently enough to make a big difference, Romanovsky said.
The methane bomb scenario got a new boost into the public eye after being featured in a hotly debated New York magazine article on climate change, which argued that without sharp cuts in planet-warming carbon emissions, parts of the Earth "will likely become close to uninhabitable" by the end of this century. A methane feedback was only one float in the parade of horribles outlined in the article, which has come under fire from several prominent climate scientists as being too alarmist.
A 2014 study led by the National Snow and Ice Data Center in Colorado estimated that unless humans curb their emissions of carbon dioxide, methane, and other greenhouse gases, a widespread release of carbon trapped in permafrost around the globe could increase the resulting warming by about 8 percent — adding slightly over a third of a degree to a 4-5 degrees Celsius increase by 2100. If emissions are reined in to the point that warming can be held down close to the 2 degrees Celsius goal of the Paris climate accord, that increase might be around a tenth of a degree.
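The arithmetic behind the quoted figures is easy to check (the 8 percent amplification and the 4-5 degree baseline are taken from the study as reported above):

```python
# Rough check of the permafrost-feedback numbers quoted from the 2014 study.
baseline_warming_c = (4.0, 5.0)  # projected warming by 2100, high emissions
amplification = 0.08             # permafrost adds ~8% to that warming

extra_warming = [round(t * amplification, 2) for t in baseline_warming_c]
print(extra_warming)  # [0.32, 0.4] - i.e. "slightly over a third of a degree"
```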
However, the study adds that the bulk of the resulting emissions are likely to occur after 2100 — which could push the planet beyond the Paris target. And other studies have shown that more temperate soils can also give up more greenhouse emissions when warmed.
But how much more? That question may yield some encouraging news.
Joel Kostka, a microbiologist at Georgia Tech, is part of a team from several universities that has set up an experiment station in northern Minnesota to study just that question. The not-quite-frozen peat bogs found in those regions are also huge carbon sinks, so Kostka and his colleagues have tried to simulate what happens to that peat when it's warmed to various temperatures. The results to date suggest the soil may not give up its carbon so easily.
"The predominance of our data shows most of that methane is coming from the surface soils," Kostka said. That's "relatively recent" carbon, "not the ancient carbon that we are more concerned about." That's held up since the team published its first round of findings in December, he said.
"We still think we're not seeing evidence of that deep, ancient carbon being released as CO2 or methane," he said.
The far North's carbon stores also extend into the ocean, on continental shelves that were above water during the last ice age and in the deep ocean floor, and scientists have been watching closely for any signs that warming is freeing methane that’s currently trapped in ice crystals known as hydrates.
But Carolyn Ruppel, a research geophysicist at the US Geological Survey, said methane that escapes the deep Arctic Ocean isn't likely to reach the surface. Instead, it gets dissolved into the water and eaten by subsea microbes. The catch is, the byproducts of that digestion include carbon dioxide, which makes the oceans more acidic.
And so far on land, it's not clear whether methane or carbon dioxide would become a bigger source of emissions under future warming scenarios, said Ruppel, who leads the USGS gas hydrates research project. The world has a bigger problem in the emissions that humans pump out every day, she said.
"The bottom line is in reality, the anthropogenic CO2 emissions are far, far more important in the atmosphere than methane, even though the methane is a very potent greenhouse gas," she said.
Originally published on Seeker.
|
by Brendan James
When we really try, humans can perform echolocation à la bats and dolphins, like blind man Daniel Kish in the above video:
We can’t match the 200 or so clicks per second achieved by bats and dolphins, but it’s not really necessary. Kish, for one, simply makes a clicking noise every few seconds, with interludes of silence when he doesn’t need to get a new picture of his surroundings.
From there, the sound waves produced by the click are broadcast into our environment at a speed of roughly 1,100 feet per second. Shot out in all directions, these waves bounce off the objects, structures and people around the echolocator and arrive back in his or her ears. The volume of the returning click is much quieter than the original, but those with proper training readily identify the subtle sound. And although it might seem amazing to be able to analyze these sound waves to generate a picture of the environment, some of the basic principles in play are concepts you already rely on everyday.
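The core of this timing is just the round-trip rule for sound. A minimal sketch, assuming the ~1,100 feet-per-second figure above (the function name and the 20 ms example are illustrative, not from the article):

```python
# Distance to an object from the round-trip time of an echolocation click.
def echo_distance_ft(round_trip_seconds, speed_ft_per_s=1100.0):
    # The click travels out and back, so the one-way distance is half
    # the total path covered during the round trip.
    return speed_ft_per_s * round_trip_seconds / 2.0

print(echo_distance_ft(0.02))  # 11.0 - a 20 ms echo puts the object ~11 ft away
```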
Previous Dish on how the blind utilize other senses here.
|
Today saw the third session of the primary school math circle we’ve been running for kids in Year 2 to Year 6. (I was absent for Session 2, where my wife covered mathematical card tricks.)
- (Medial) graph representations of knot projections, moving backward and forward between them (see http://en.wikipedia.org/wiki/Knots_and_graphs)
- The importance of assigning an (under/over) decision to crossings
- Getting the kids to think of their own ideas for what knot equivalence might mean (shape, size, rotation, deformation – of what type) etc.
The key innovation we used here, which I think really brought the session to life, was to get the kids actually physically making the knots using Wikkistix, wax-coated string which allowed them to make, break, and remake their knots.
The kids really ran with this, and made their own discoveries, in particular:
- One child discovered that some knots corresponding to the graph could be manipulated to produce the unknot, some could not.
- A child discovered that it is possible to produce two interconnected knots, forming a link. Another child came to the same conclusion from the graph representation.
- Consider the graph with vertex set and edge set (is there a name for these graphs?). One child completely independently found that, for even values of the parameter, this corresponds to a link of two knots, whereas for odd values it corresponds to a single knot.
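One family that matches this even/odd pattern is the (2, n) torus links; whether these are the graphs the child was working with is an assumption on my part. For a (m, n) torus link, the number of components is gcd(m, n), so even n gives a two-component link and odd n a single knot, which can be checked in a few lines:

```python
from math import gcd

# Number of components of the (m, n) torus link is gcd(m, n):
# gcd(2, n) is 2 when n is even (a two-component link) and
# 1 when n is odd (a single knot) - the parity pattern described above.
def torus_link_components(m, n):
    return gcd(m, n)

for n in range(2, 8):
    print(n, torus_link_components(2, n))  # 2->2, 3->1, 4->2, 5->1, ...
```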
I certainly had a lot of fun this afternoon – I hope they did too!
|
All life on Earth is carbon-based, which of course makes this element of considerable interest to scientists. A carbon atom has 6 electrons, which means that it has four electrons in its valence shell; this lets it form many different kinds of chains and combinations, such as buckminsterfullerenes (often shortened to just ‘fullerenes’, more popularly known as ‘buckyballs’), which consist of 60 carbon atoms arranged in the shape of a geodesic sphere.
Now comes along a report that scientists have created a ring that consists of 18 carbon atoms. Other carbon molecules such as fullerenes, graphene, and carbon nanotubes have each carbon atom linking to three other carbon atoms. In this new molecule, each carbon links to just two other carbon atoms, enabling them to form rings called cyclocarbons.
Theory had suggested that C18 is the smallest ring of carbon that is stable, but creating it from the ground up had proved difficult. So a group of scientists tried a different tack, going from the top down. They started with the molecule C24O6 and then proceeded to successively knock out pairs of CO (carbon monoxide) molecules, producing C22O4, C20O2, and finally C18.
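The stepwise route can be sketched in a few lines as a toy bookkeeping exercise (not chemistry software; it just tracks atom counts, with each step removing one pair of CO molecules, i.e. two carbons and two oxygens):

```python
# Top-down route to C18: start from C24O6 and eliminate two CO units
# per step until only the bare carbon ring remains.
def co_elimination_route(c=24, o=6):
    route = [(c, o)]
    while o > 0:
        c, o = c - 2, o - 2   # knock out a pair of CO molecules
        route.append((c, o))
    return route

for carbons, oxygens in co_elimination_route():
    label = f"C{carbons}O{oxygens}" if oxygens else f"C{carbons}"
    print(label)  # C24O6, C22O4, C20O2, C18
```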
The big unanswered question had been whether these rings would have what is known as a polyynic structure with alternating triple and single bonds or would be cumulenic, with pairs of double bonds.
Now that the researchers had an actual molecule to study, they could answer the question.
Having finally created a stable version of the thing they could study, the researchers put to rest a long-standing debate about what kind of atomic bonds the carbon atoms within such a molecule share. The ring is made up of alternating triple and single bonds, known as a polyynic structure, the researchers found. Remarkably, they confirmed this by imaging the molecules directly, literally comparing pictures of the individual collections of atoms to the expected theoretical models.
You can read the paper in the journal Science here.
|
Defining our Place in the Cosmos – the IAU and the Universal Frame of Reference
How do you know where you are now? How do we know where we are in space? How does the International Space Station or the latest space probe keep track of its location in the Universe? The best answer would be – with great difficulty! Ever since the earliest philosophers first considered our place in the Universe, it has always been a natural first step to define our position in the overall order and structure of the cosmos.
One of the earliest Greek philosophers, Heraclitus, is often credited with advancing the concept of “everything changes or panta rhei”; a philosophy that develops the notion that the Universe is continually in motion, like a river. If we consider the Earth, the Solar System and the Universe as a whole, from the ground beneath our feet to some of the largest objects in the Universe, nothing is, in fact, immobile. On Earth the tectonic plates under our feet are moving, albeit slowly! And when we look out beyond the Earth, there is still no absolute reference point. The Earth rotates at half a kilometre a second at the equator, and is moving around the Sun at 29 kilometres a second; our Sun is also moving through space at about 19 kilometres each second and is orbiting the centre of the Milky Way (our galaxy) at about 215 kilometres a second. Stepping up a scale, the Milky Way is moving towards the Virgo Cluster, which is also in motion. As an added complication, the continuing expansion of the Universe must be included in large-scale distance measurements. The light we see today arriving from distant objects has taken so long to reach us that the Universe has expanded in the travel time of the light. So the task of defining a single reference frame from which the location of any other object in space can be defined is particularly complex. Traditionally this has required very precise measurements of the positions of many reference stars, with a catalogue of their motion across the sky through the year, referenced to their position at a particular precise date and time.
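The "half a kilometre a second" figure above can be checked with a quick back-of-envelope calculation; the equatorial radius and sidereal-day values below are standard reference numbers, not stated in the text, so this is a sketch under those assumptions:

```python
import math

# Equatorial rotation speed = equatorial circumference / sidereal day.
equatorial_radius_km = 6378.137  # standard value (assumption)
sidereal_day_s = 86164.1         # one rotation relative to the stars (assumption)

speed_km_s = 2 * math.pi * equatorial_radius_km / sidereal_day_s
print(round(speed_km_s, 3))  # 0.465 - i.e. roughly half a kilometre a second
```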
The International Astronomical Union (IAU) is responsible for defining a Universal Frame of Reference. This work touches on many aspects of our daily lives, so much so, that without a standard reference frame many of our modern gadgets would, at best, be incompatible with each other and, at worst, inaccurate or not fit for purpose.
Many people nowadays use the Global Positioning System (GPS) in their everyday lives. GPS requires several aspects of the Universal Frame of Reference to be defined. For example, the systems that controlled the launches of the GPS satellites had to have an excellent understanding of the positions of the stars, orbital elements and the definitions of various units in order to position the satellite in the correct orbit needed to complete the “constellation” of satellites. The IAU Commission A1 (Astrometry) and Commission X2 (Solar-System Ephemerides) provide valuable information about “physical position” and “position in time” respectively, mainly to astronomers and space scientists. Astronomers also need to have accurate definitions for concepts such as the celestial equator — the imaginary line on the sky above the equator on Earth — and the ecliptic — the path of the Sun across the sky — as some earlier reference frames were based on these.
However, much more than basic positional input needs to be considered to establish a Universal Frame of Reference. Scientists have to agree on definitions for certain key reference units or parameters. These topics are covered by other IAU Commissions, including Commission 31 (Time). In the context of the Universal Frame of Reference, the work of the Division A WG Time Metrology Standards is also closely linked with Commission A2 Rotation of the Earth, as knowledge of the Earth's ever-changing orientation in space is necessary to link terrestrial and celestial frames. The Earth is not fixed, nor is it moving in a way that is simply described, so much work goes into measuring and defining this complex movement. Phenomena such as precession, the slow, roughly 25 000 year cycle of movement of the direction of the Earth’s axis, and nutation, the continual “nodding” of the Earth’s axis due mainly to tidal forces from the Sun and Moon, all have to be taken into account when defining a Universal Reference Frame.
In 1997 and 1998 the IAU, in collaboration with the International Earth Rotation and Reference Systems Service (IERS) and the International Very Long Baseline Interferometry Service (IVS), established the International Celestial Reference Frame (ICRF). The ICRF uses the relative positions of 212 extragalactic radio sources to establish an origin for the system at the centre of mass of the Solar System, and coordinate axes that are aligned with the conventional axes of the celestial equator and equinox (the point at which the Sun crosses the equatorial plane moving from south to north) of the epoch J2000.0 (1200 hours Terrestrial Time on 1 January 2000), but are obtained in a way that is independent of the dynamics of the Earth’s rotation. On 20 August 1997, at the 23rd IAU General Assembly in Kyoto, Japan, the IAU adopted the ICRF, and the celestial equator and the ecliptic were no longer central in establishing a celestial or Universal Reference Frame.
In recent years more precise measurements have allowed the ICRF to be refined, allowing for a much more accurate system. At the IAU General Assembly in 2003 the IAU Working Group (WG) on the ICRF was dissolved and its work was then covered by the main Reference Frame Working Groups: Commission 8, Densification of the Optical Reference Frame and Division 1, Second Realization of International Celestial Reference Frame. On 24 August 2006, at the IAU General Assembly in Prague, new resolutions were adopted that aim to improve our definition of the Universal Frame of Reference. The Division A WG Third Realisation of International Celestial Reference Frame is currently working on this topic.
|
The Success of the Declaration of Independence
The long road to freedom was not an easy one. The kinds of rights enjoyed today in America were unheard of before and during the American Revolutionary War. The British, through King George III, reigned over thirteen colonies in America in that period. There were certain laws and subsequent actions from the British that made the American colonies unhappy. By voicing their displeasure, they angered the British, who in turn became even more oppressive. This led the colonists to start a rebellion in search of autonomy. After much consultation, the colonies settled on a document that clearly stated their grievances and the position they were going to take regarding their association with Britain. With that, the colonies adopted the Declaration of Independence, and this marked the birth of freedom from England since it captured the aspirations of the American public (Wood and Wood 88).
Primarily, the highest decision-making body at that time was the British parliament. From it came various laws that had far-reaching repercussions for all territories under British rule. The decisions of the British parliament were not open for questioning, let alone a show of defiance. Whichever territory acted contrary to such decisions would suffer severe consequences. The American colonies felt that it was wrong to follow directives from the British parliament when they had no one to represent them in that institution. They argued that if they had had representatives to voice their opposition on certain issues and were defeated by a majority, they would have no problem complying. Therefore, they settled on the Declaration of Independence because, once they were free, they would rely only on the body where they had representation: Congress.
Initially, whenever the colonies had any disagreements with England, they would petition them. However, there was no chance for them to say what they had in mind. This was troubling. Already, anti-British sentiment was growing across the country, especially after Paine published his popular pamphlet, Common Sense. Thus, the colonists now had an opportunity to air their views freely while making America independent through a document that was legally binding.
In addition, the British made a set of rules that were discriminatory. Among them were the Stamp Act and the Townshend Acts. The Townshend Acts, for example, sought to compel the colonists to pay taxes as a way of repaying the debt of British assistance in the French and Indian War (Russell 40). This was unacceptable to the colonists because they were not under direct occupation by the British. The enactment of the Tea Act led to riots and a boycott of British products, which hampered trade. The colonists, in their wisdom, included in the draft the issue of taxation and the role of government towards it, thereby closing the door on dependency on foreigners.
Similarly, they eloquently rejected the dictatorial notion that all laws coming from Britain were mandatory. In particular, they wanted to sever all formal ties with Britain, but they needed proof of unanimity among the colonies. Furthermore, they had either to be submissive to England or go to war, and since they wanted to go to war, all colonies had to seek permission from their governments. A document signed by representatives from all colonies was therefore essential as proof of unity. This document captured their war rhetoric and their desire to secede from England.
There was also consensus among the colonies that some human rights are fundamental, such that not even a government can grant or revoke them at will. This was in response to British excesses. There was also the important requirement that only through a motion in the Continental Congress could their wishes succeed. This prompted the colonies to form a committee of five to draft the text of that motion, much of it contributed by Thomas Jefferson. His famous remark, "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness," is remembered to date (McLaughlin 95). With this, the stage was set for the break from England.
On the contrary, it can be argued that the war was unjustified because the colonies should not have refused to pay their taxes. The British had sacrificed many resources helping the colonies gain territories formerly held by their rivals in the French and Indian War. It would have been a good gesture of gratitude for the colonies to pay taxes in order to continue enjoying those and other benefits.
Generally, the colonies benefited from being free of British rule, and Americans’ way of life improved in a free society. All this is courtesy of the text of the Declaration of Independence, which heavily influenced the contents of the American Constitution in use today (McLaughlin 104).
McLaughlin, Andrew C. A Constitutional History of the United States. New York: Appleton-Century-Crofts, 1935. Print.
Russell, David L. The American Revolution in the Southern Colonies. Jefferson, N.C.: McFarland & Co, 2000. Print.
Wood, Gordon S., and Louise G. Wood. Russian-American Dialogue on the American Revolution. Columbia: University of Missouri Press, 1995. Print.
|
On February 26, 1935, Nazi Germany’s ultra-modern air force–the Luftwaffe–is secretly organized under the direction of Hermann Goering. The Versailles Treaty prohibited military aviation in Germany, but the civilian airline Lufthansa allowed flight training for the men who later became Luftwaffe pilots. After seizing power in 1933, Nazi leader Adolf Hitler began to secretly develop his military air force. In February 1935, the Luftwaffe was formally organized, and in March, Hitler revealed it to the world. Two years later, a stinging sample of Germany’s new air power was felt in the brutal bombing of Guernica during the Spanish Civil War. After September 1939, Poland, France, and especially Britain and Russia discovered the Luftwaffe to be the deadliest of Germany’s armed forces. Britain’s Royal Air Force, although outnumbered 2 to 1, handed the Luftwaffe its first defeat in the Battle of Britain. Later in the war, American forces joined the RAF in the battle for Europe’s skies, and the once-proud Luftwaffe was destroyed.
|
9 Essential Technology Tips for Kids
Donna's Talk on Technology
On Children’s Week I attended a talk by Donna Cross, a UWA Professor. Donna spoke about the effects of technology on children, and said that technology was here to stay, so parents needed to learn how to use it positively. She said banning it would just create a greater chasm between parents and children. She suggested we perhaps think of it as a tool, like a window to the outside world, or like a magnifying glass that a child can use to view the world in a way they never have before.
In saying that, one of my kids was playing with a magnifying glass and managed to burn a hole in their bedroom rug! Technology is a useful tool but it’s not a great idea to let kids use a tool as much as they like, unsupervised! Also there is no longitudinal research, and we won’t know how playing with iPads affects the brain development of a toddler for another decade. So we need to use wisdom and discernment. Donna mentioned the American Academy of Paediatrics recommends that kids under two have no access to technology.
Here are Donna’s top tech tips:
1. Encourage co-viewing. Never let kids sit around an iPad alone. Encourage children to watch one thing together rather than being on separate devices.
2. Allow them to create content instead of just consuming it.
3. Create tech-free zones in your home, e.g. phones must stay in the kitchen at all times, and everywhere else in the home is a phone-free zone.
4. Balance green time with screen time.
5. 20/20/20/20 vision rule. Looking at screens for a long time can cause short-sightedness. Encourage kids to take a 20 minute break, blink 20 times, look at something at least 20 metres away, and do something physical for 20 minutes.
6. Increase the distance between the child and the device. No apps in laps.
7. Monitor use using tokens. Write down allocated amounts of time for technology use on milk bottle lids kept in a jar. Once the kids have used up all their tokens that’s it for the day.
8. Limit access one hour before bedtime as the blue light from devices inhibits production of melatonin.
9. Create a media agreement with your kids and have everyone in the family sign it.
My kids in grade five and six are on the school bus for two hours a day and play on their iPads during that time. At school they use iPads 50-70% of the time. So by the time they get home, they’ve been on their iPads for about five hours, which I think is way too much, but it is what it is. So our home is generally a tech free zone for kids at this stage. Each family needs to work out what is best for them, because technology is here to stay!
I'll leave you with an infographic from the South Texas Eye Institute, which I found interesting!
|
Bull kelp is so beautiful, especially now in the early spring when the young “sporophyte” stage is growing at an insanely fast rate. This kelp species, Nereocystis luetkeana, can grow to be 36 m long, and can apparently shoot up at a rate of up to 10 cm a day.
It is at this early stage of growth too that bull kelp is an intense colour green unlike anything else I know. When older, the colour darkens to an olive green.
Kelp is an alga, not a plant, but like plants, algae photosynthesize, converting the sun’s energy into food. However, algae have simpler structures and different chemical pathways.
Young bull kelp grows so fast to allow the leaf-like parts, called “fronds”, to be closer to the sun so that more food can be made.
The round, floating part of the kelp is the “pneumatocyst”. This bladder-like structure is completely hollow and is filled with carbon monoxide (NOT carbon dioxide), allowing the long fronds to drift at the surface to catch the sun’s rays.
Apparently, there is enough carbon monoxide in bull kelp to kill a chicken! Now that’s valuable information.
The stem-like part is called the “stipe” and it is also hollow. I’m sure it is not what Nature had intended, but this allows us humans to play the stipe of bull kelp like a trumpet or didgeridoo!
Bull kelp does not have roots. Rather a “holdfast”, a tangle of woody structures, anchors bull kelp onto rocks. However, if rocks are too light to counter the floatation of the pneumatocyst, the kelp will actually change the ocean bottom by carrying away smaller rocks, likely ending up washed onto the shore.
The stipe gets thinner and whip-like near the holdfast, which is likely how bull kelp got its name: the stipe is shaped like a “bull whip”.
And bull kelp always grows in patches, truly forming an underwater forest that is life-giving for the same reasons as terrestrial forests: kelp forests buffer the climate change gas carbon dioxide; produce oxygen; and provide food and habitat for so many other organisms. Bull kelp forests are, in fact, estimated to provide habitat for some 750 species of fish and invertebrates (animals without backbones).
Sea urchins are one of those invertebrates, living in the forest and grazing on a lot of bull kelp. If otters, mink, wolf eels and other predators did not keep urchins in check, kelp forests would be reduced even further.
Kelp forests are not what they used to be for reasons far beyond our foolishness in over-harvesting sea otters. There used to be such dense forests that it is theorized “Ancient humans from Asia may have entered the Americas following an ocean highway made of dense kelp.“
All coastal boaters still benefit from kelp. It is a navigational aid since, where it grows, you know there is shallower water.
We divers have yet an additional reason to value kelp. Since it is so strong, we can hold onto it if we need to during our safety stop (3 minutes at 5 metres depth) or if needing to gradually pull ourselves down into the depths or back to the surface.
Oh – and you can eat it. (I love pickled young bull kelp!)
And yes, you could do puppet shows with bull kelp, cutting a face into the bladder like you would into a jack-o-lantern. The fronds even look like two pig-tails!
There will be more on bull kelp here in the future. Wait till you find out how bull kelp reproduces! The “alternation of generations” is mind-blowing, with offspring that look nothing like the adults.
But for now, come underwater with me. Come into the forest, breathe in, breathe out and worship the kelp!
For more on the natural history of bull kelp see this link.
For more information on algae – click here for my blog “Every Breath You Take . . .”
|
If you get freaked out when a spider scurries across your floor, wait until you see what this newly discovered species of arachnid can do. Moving at a whopping two meters per second, this desert-dwelling spider cartwheels to get up and boogie. By cartwheeling, the spider can move twice as fast as its walking speed, allowing it to get away from predators or threatening situations.
Additionally, this spider builds tubular structures out of its silk to protect itself from predators and the intense sun. Not only is this spider a gymnast, but it is also an architect.
The man who discovered the spider, Ingo Rechenberg, was so ecstatic about the spider’s movements that he designed a robot that propels itself like the spider’s cartwheels. He believes that this movement will be perfect for exploring the bottom of the ocean or even Mars. It looks like thanks are in order to the little guy. Cartwheel away, oh cartwheeling one. Know more.
|
A proposed new time-keeping system could make atomic clocks look positively erratic, staying accurate to one twentieth of a second over 14 billion years – the age of the universe.
“This is nearly 100 times more accurate than the best atomic clocks we have now,” says professor Victor Flambaum of the University of New South Wales.
“It would allow scientists to test fundamental physical theories at unprecedented levels of precision and provide an unmatched tool for applied physics research.”
With its time-keeping system tied to the orbiting of a neutron around an atomic nucleus, the team’s proposed single-ion clock would be accurate to 19 decimal places.
Currently, atomic clocks are the standard, and are widely used in applications ranging from GPS navigation systems and high-bandwidth data transfer to tests of fundamental physics and system synchronization in particle accelerators.
Atomic clocks use the orbiting electrons of an atom as the clock pendulum. But, says Flambaum, using lasers to orient the electrons in a very specific way allows the orbiting neutron of an atomic nucleus to be used as the clock pendulum, making a so-called nuclear clock with unparalleled accuracy.
Because the neutron is held so tightly to the nucleus, its oscillation rate is almost completely unaffected by any external perturbations – unlike those of an atomic clock’s electrons, which are much more loosely bound.
“With these clocks currently pushing up against significant accuracy limitations, a next-generation system is desired to explore the realms of extreme measurement precision and further diversified applications unreachable by atomic clocks,” says Flambaum.
|
NATURAL DISASTER TYPES
Situated on the Ring of Fire, the ASEAN region faces one of the greatest threats of natural disaster due to geophysical activity along this active belt of tectonic plates. Following on from volcanoes in the last edition, this edition covers another key geophysical disaster threat – earthquakes – as well as a range of related disasters that can occur as a result of earthquake activity. Earthquakes have triggered numerous disasters in ASEAN during recent times, so understanding their varieties and impacts is important for disaster management across the region.
The AHA Centre receives ongoing information regarding earthquakes as they take place across the region. Considered relatively unpredictable, earthquake occurrences are therefore more often than not the focus of both response and preparedness activities for the AHA Centre team. As with volcanoes, Indonesia’s geographical location sees it experience earthquakes of various sizes on an almost daily basis, with their impact highly dependent upon a range of influencing factors such as force, depth, location and vicinity to human populations and infrastructure.
2018 has seen more than its fair share of significant earthquake events, particularly across Indonesia. A number of major earthquakes during August and September caused widespread death and damage on the island of Lombok and its surrounds, while most recently a 7.4M event shook central Sulawesi, causing not only extreme devastation from the earthquake itself, but a resulting tsunami that has affected millions of people. Other significant ASEAN earthquakes in recent times include:
7.2M quake that killed over 200 people in Bohol, the Philippines 2013;
6.9M earthquake that killed approximately 100 people in Myanmar, 2011;
7.6M earthquake that caused over 1,000 deaths in Padang, Indonesia 2009; and
9.1-9.3M earthquake (and resulting tsunami) with an epicentre off Aceh, Indonesia, that resulted in the loss of over 220,000 lives, and displaced millions across 14 countries, including Indonesia, Thailand, Myanmar and Malaysia.
An earthquake, identified by a shaking of the earth, is most often caused by movement along geological fault lines (at the edges of the earth’s tectonic plates) – known as an inter-plate earthquake. The three main types of faults that can result in these earthquakes are known as ‘normal’, ‘reverse thrust’ and ‘strike-slip’ faults. The first two types occur where two plates meet and move vertically relative to each other (dip-slip movement). The third, strike-slip faults, are characterised by two plates meeting and sliding past each other horizontally. While most of the earthquakes we experience are related to these naturally occurring faults, earthquakes are also caused by other events such as volcanic activity, or human-induced occurrences such as mine blasts or nuclear testing.
The power of an earthquake is measured using the Richter scale, most commonly used to describe the magnitude (for example 6M or 6MR) and impact of a quake. An earthquake’s impact and force decrease further from its epicentre, and also depend upon the location and depth of the initial fault occurrence. In general, earthquakes of higher magnitude will result in greater damage, with general guidelines shown below.
Aside from being powerful and deadly in themselves, earthquakes also lead to a range of other dangerous natural disasters. Well known to the ASEAN region is the tsunami, which is caused by shallow earthquakes with an epicentre in the ocean, resulting in giant waves that make their way towards land. Alongside this, the shaking of the earth from a quake can cause landslides in hilly or mountainous regions, as well as phenomena such as soil liquefaction, which was a major cause of death and destruction after the most recent earthquake in Central Sulawesi, Indonesia.
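A point worth remembering about the magnitude scale discussed above is that it is logarithmic: each whole-number step corresponds to roughly 31.6 times more released energy, following the empirical Gutenberg-Richter energy relation (log10 of energy grows as about 1.5 times the magnitude). A minimal sketch of that ratio, using the 7.4M Central Sulawesi quake as an example (the function name is ours, for illustration):

```python
def energy_ratio(m1: float, m2: float) -> float:
    """Approximate ratio of seismic energy released by a magnitude-m1
    earthquake versus a magnitude-m2 one, using the empirical
    Gutenberg-Richter relation log10(E) ~ 1.5 * M + const."""
    return 10 ** (1.5 * (m1 - m2))

# One whole step on the scale is ~31.6x more energy
print(round(energy_ratio(7.0, 6.0), 1))   # -> 31.6

# The 7.4M Central Sulawesi quake vs. a 6.0M event: ~126x the energy
print(round(energy_ratio(7.4, 6.0)))      # -> 126
```

This is why a 7.4M event can be so much more devastating than the far more frequent quakes in the 5M-6M range.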
Written by : William Shea
|
The picture shows the micro resonator frequency comb system
(Source: Arslan Raja, Federal Institute of Technology, Lausanne)
An optical frequency comb (OFC) is a laser source whose spectrum consists of a series of discrete, evenly spaced comb-like spectral lines that can be used for accurate measurements. Over the past two decades, frequency combs have become primary tools for applications such as precision ranging, spectroscopy and communications.
Most commercial optical frequency comb sources are based on mode-locked lasers (lasers in which longitudinal modes at different frequencies oscillate with a fixed phase relationship, producing ultrashort pulses with narrow pulse width and high peak power). These sources are bulky and expensive, which limits their potential in high-volume and portable applications. Although chip-scale optical frequency combs using microresonators first appeared in 2007, fully integrated versions were hampered by high material losses and complex excitation mechanisms.
A research team led by Tobias J. Kippenberg of the Federal Institute of Technology in Lausanne (EPFL) and Michael L. Gorodetsky of the Russian Quantum Center has now built an integrated soliton microcomb with an 88 GHz repetition rate, using a chip-level indium phosphide laser diode and a silicon nitride (Si3N4) microresonator. With a footprint of only 1 cubic centimeter, the device is the smallest of its kind to date.
The silicon nitride (Si3N4) microresonator is fabricated using the patented photonic damascene reflow process, which achieves unprecedented low losses in integrated photonics. These ultra-low loss waveguides bridge the gap between the chip-level laser diodes and the power levels required to excite the dissipative Kerr soliton state, which is the basis for generating optical frequency combs.
This method uses a commercial chip-level indium phosphide laser instead of a traditional large laser module. In the study, a small portion of the laser light is reflected back to the laser by the inherent scattering of the microresonator. This direct reflection helps stabilize the laser and create solitons. It shows that the cavity and laser can be integrated on a single chip, a unique improvement over previous technologies.
Kippenberg explained: "People have a strong interest in this optical frequency comb source, which is electrically driven and fully integrated with optoelectronics to meet the needs of next-generation applications, especially laser radar (LiDAR) and information processing in the data center. This not only represents a technological advance in the field of dissipative Kerr solitons, but also provides insight into their nonlinear dynamics under the fast feedback of the cavity."
The entire system is less than 1 cubic centimeter in volume and can be electrically controlled. Arslan Sajid, the lead author and a doctoral student on the study, explained: "This micro-comb system is characterized by a compact structure, easy adjustment, low cost and low repetition rate, and is suitable for large-scale manufacturing applications. Its main advantage is fast optical feedback, with no active electronics or any other on-chip tuning mechanism."
The goal of scientists today is to implement integrated spectrometers and multi-wavelength sources, and to further improve the manufacturing process and integration of micro-combs operating at microwave repetition rates.
|
The Changing Role of Women in Chinese History. Those familiar with Pearl S. Buck’s classic novel of Chinese life, The Good Earth, will recall that when female children were born, they were referred to with contempt and disappointment as “slaves.” In Chinese culture since ancient times, that term was not much of an exaggeration for the role of women. In a classic Chinese work from 2,000 years ago by court historian Pan Chao, it is written: “Let a woman modestly yield to others. Let her respect others. Let her put others first, herself last. Should she do something good, let her not mention it. Should she do something bad, let her not deny it. Let her bear disgrace; let her even endure when others speak or do evil to her. Always let her seem to tremble and fear…. If a wife does not serve her husband, then the proper relationship between man and woman is broken.” Many women throughout Chinese history functioned as concubines.
Men were allowed to take multiple wives, with the wives falling into a hierarchy amongst themselves based on such factors as the order in which they had been married, and which was the current favorite of the master of the household. Perhaps the best known symbol of the place of women in Chinese society was the custom of crippling women starting in childhood by “foot binding,” where the arch of each foot was broken and the feet bound to keep them from growing.
The result was women who could not walk, or could at most hobble awkwardly a few steps in great pain. If left unbound, in adulthood the feet could partly heal and limited mobility return, so generally the binding was continued until the end of their lives. Foot binding was at first a practice of the aristocracy, where rendering a woman less functional was a status symbol in that it meant that a household was wealthy enough to afford the luxury of having some of its members serve primarily ornamental functions.
But it eventually became common below that level as well, as ordinary Chinese adopted the practice in mimicry of their betters, to gain a little reflected status for themselves and to increase the chances of being able to sell their daughters into higher-class marriages. Non-aristocratic Chinese certainly could not afford the luxury of their women being unable to work, so ways were developed to keep the women “slaves” working as hard as ever on tasks that required little or no mobility. Women worked in the home at jobs such as spinning cloth, shucking oysters, and processing tea.
When their labor was needed in the fields, they could still contribute a certain amount of work by crawling about. Women with unbound feet were considered ugly and unworthy of marriage. These values were internalized to a large extent by women themselves, who generally cooperated in the crippling of themselves and the young girls in their family. Even during the worst periods of inequality for Chinese women, there were individuals and customs that showed that women could still wield a certain amount of influence.
At times, when the Chinese throne passed to a child too young to realistically lead, the child’s mother, as “Empress Dowager,” could hold real power; in some cases women could even be warriors. Marriages were most often arranged by an aunt or older female relative, which was a powerful role in that it determined which families would be allied by marriage and who would receive a dowry. In the Hunan region there was a longstanding custom of “sworn sisters,” where women were allowed to organize themselves into groups of seven lifelong friends.
Often the sworn sisters developed their own private language and system of writing, which allowed them to safely communicate even dissenting statements amongst themselves. In the 19th century, there were rumblings of discontent and calls for change in the status of women in Chinese society. But it was not until the short-lived Chinese Republic that significant progress was made. Even then, women were expected to concern themselves primarily with tending to their household.
But at least they had somewhat more access to education in the cities, and foot binding came under greater and greater disapproval. The Communists under Mao turned much of Chinese society upside down, showing a willingness to spill whatever amount of blood was necessary to impose their vision of how things should be on the populace. One of their stated goals was to erase the inequality between men and women once and for all. To some extent, they were successful.
Though most Chinese women did not have a good life under the Communists, the misery they suffered tended to be more similar to the misery that men suffered than had been true in the past. So there was a move toward equality in that sense. More than ever before, women worked outside the home, obtained educations, and involved themselves in political matters. The government’s sometimes persuasive, sometimes coercive, “one child” policy to deal with severe overpopulation removed the traditional pressure many women had been under to keep having more and more babies to produce a sufficient number of sons.
With the death of Mao and the move toward more of a state capitalism economy, the government’s zeal to forcibly combat customs regarded as backward has lessened somewhat. There has been some reversion to traditional gender inequality in some respects, though the increasing influence of consumerism and Western cultural values makes it unlikely the traditional ways will ever fully return. Foot binding itself became rarer and rarer over the course of the 20th century.
It was vehemently denounced by the Communists and stamped out as much as possible. In present-day China, it is virtually unheard of, with just a few scattered women, mostly very elderly, still following the practice.
|
Introduction to Encryption Algorithm
In the contemporary period, where the security of data and applications is a main concern, many mechanisms have been developed to protect systems against breaches, and the encryption algorithm is one of them. An encryption algorithm can be defined as the mathematical procedure that data passes through in order to be converted into ciphertext. The main purpose of an encryption algorithm is to transform critical information so that only an authorized person can understand it. The output of an encryption algorithm is typically a long string of characters that looks like junk, and one needs the appropriate key to convert that junk back into useful information.
Encryption may also be considered as a set of operations that add randomness to a string, which can be decoded only with a particular key. The output of data processed through an encryption algorithm is called ciphertext, and one needs the correct key to decode it. Encryption was developed in part to mitigate the man-in-the-middle attack, in which a malicious user intercepts the traffic between a legitimate application and an authorized user in order to sniff the data. Encryption is mainly divided into two modes, symmetric and asymmetric, which we will see later.
Different Types of Encryption Algorithm
Encryption algorithms have been developed to add security to the data exchanged between peers. Depending upon the security requirements, different encryption algorithms can be used within a cipher suite. Below are some of the important encryption algorithms:
1. AES
- AES stands for Advanced Encryption Standard, which is the most common mode of data encryption.
- AES uses 128-bit keys by default, and also supports 192- and 256-bit keys for heavier encryption.
- This encryption algorithm has been endorsed by the US government and can be considered resistant to all known practical attacks other than brute force.
2. RSA
- RSA can be defined as the de facto algorithm to encrypt data transmitted over the internet.
- It is an asymmetric algorithm, in contrast to Triple DES, which is a symmetric algorithm.
- In RSA, data is encrypted using the public key, while the private key is used to decode it. The main concern when using this algorithm is that the private key has to be kept very secure to protect the data or system from abuse.
3. Triple DES
- Triple DES can be defined as the updated or advanced version of the Data Encryption Standard that has been used to encrypt the data in many organizations.
- Triple DES is a symmetric algorithm and hence depends upon a single key to encrypt and decrypt the data.
- It has been called Triple DES as it uses three different keys of 56 bits each in order to encrypt the data, which effectively makes it 168-bit encryption.
- In some industries, Triple DES has been considered the standard for protecting data, as it has long been among the most common encryption algorithms.
4. Blowfish
- Blowfish may be defined as a symmetric algorithm that was introduced to replace the Data Encryption Standard (DES).
- This algorithm divides the entire message into blocks of 64 bits, which are then encrypted individually to enhance security.
- Blowfish is often used in the websites that accept or process the payment online in order to encrypt the card and other critical details.
5. Twofish
- Twofish can be defined as another symmetric algorithm, which is actually a successor of Blowfish.
- Like Blowfish, it uses a single key to both encrypt and decrypt the data; the key may be up to 256 bits long.
- It is freely available for anyone who wants to use it and due to its free and easy availability, it has been preferred by several software and hardware environments.
Understanding Symmetric and Asymmetric Algorithm
Let’s discuss the two modes of encryption below:
1. Symmetric Algorithm
It may be defined as an encryption approach that uses a single key to encrypt and decrypt the data. The data passes through the algorithm to be transformed into ciphertext, which can be decrypted by any of the peers using the same key that was used to encrypt it. This mode serves as the core of algorithms like Blowfish, Twofish and so on.
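As an illustrative sketch only (not the internals of DES, Blowfish or any real cipher), a one-time-pad-style XOR cipher shows the defining property of symmetric encryption: the very same key both encrypts and decrypts. The helper name `xor_bytes` is ours:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the corresponding key byte;
    # applying the same operation twice restores the original.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # the single shared secret key

ciphertext = xor_bytes(message, key)      # encrypt
recovered = xor_bytes(ciphertext, key)    # decrypt with the SAME key

assert recovered == message
```

The entire security of this scheme rests on the key staying secret between the two peers, which is exactly the key-distribution problem that asymmetric encryption was invented to ease.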
2. Asymmetric Algorithm
It may be defined as the kind of encryption algorithm that uses two different keys to encrypt and decrypt the data. The key used to encrypt the message is called the public key, while the key used to decrypt the message is called the private key. Of the two keys, the private key has to be kept very secure to protect the system from man-in-the-middle attacks. Encryption algorithms like RSA use this mode.
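A toy, numbers-only RSA round trip makes the public/private split concrete. This uses textbook-sized primes and is utterly insecure, purely a sketch of the mathematics (the modular-inverse form of `pow` requires Python 3.8+):

```python
# Toy RSA with textbook-sized primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                  # modulus, shared by both keys (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

message = 65               # a message encoded as an integer smaller than n
ciphertext = pow(message, e, n)    # encrypt with the PUBLIC key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the PRIVATE key (d, n)

assert recovered == message
```

Anyone may hold (e, n) and encrypt, but only the holder of d can decrypt, which is why, as noted above, the private key must be guarded so carefully.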
There are several encryption algorithms on the market available to secure data that has to be transmitted through the internet. A key reason for the existence of these algorithms is to protect against man-in-the-middle attacks, in which someone malicious sniffs data in an unauthorized manner. Based on the requirements of the software or hardware system, we can choose an encryption algorithm from the various available options. Some organizations select a particular algorithm as their standard for transforming messages into ciphertext.
An algorithm should also be chosen based on the required encryption speed. For instance, the Blowfish encryption algorithm works fast enough to speed up encryption processes, so many systems that require quick encryption and decryption of data use Blowfish. Government-based organizations often prefer to apply their standard encryption algorithm everywhere in order to maintain uniformity. Several algorithms have been made available for free, so organizations with a low security budget can also leverage them to protect their data being exchanged online.
|
The Townshend Acts, a series of laws designed to tax Britain's American colonists and extract revenue, met with overwhelming opposition in the colonies and prompted the dissenting colonists to call for a boycott of the taxed items. The colonists followed the boycott with both verbal and violent protests, and tensions culminated in British soldiers killing five American civilians in the Boston Massacre of 1770.
After the Stamp Act of 1765 was repealed following widespread American opposition, Chancellor of the Exchequer Charles Townshend pushed through Parliament a new series of laws meant to raise money in the colonies. These acts suspended the New York Assembly, reorganized the customs service, and imposed duties on paint, paper, glass, lead, and tea. The colonists saw the Townshend Acts as a threat to self-government, and the ensuing boycott reduced tax revenue to Britain. Because the new customs board was headquartered in Boston, the city became a hotbed of dissent. Colonial Secretary Lord Hillsborough sent four regiments of troops to Boston, and the Bostonians' outrage at the occupation led to the Boston Massacre.
Ironically, on the same day as the massacre, the British prime minister partially repealed the Townshend Acts. The duty on tea remained, however, as a symbol that Britain had the right to tax its colonies. The colonies' rebellion against this measure culminated in the Boston Tea Party of 1773, one of the key events that led to the American Revolution and the American colonies' war for independence.
|
Early Literacy Skills that every child learns before reading:
The process that children go through to learn how to send and receive verbal information. Learning and using new vocabulary is essential to language development along with learning how to follow and tell a story.
Understanding how books work: which way to hold a book, that text reads from left to right and top to bottom, and the parts of a book, such as the cover, illustrations, and author's name. The love of books and reading is what book appreciation means.
The ability to discern different sounds in a word is important for early literacy. Children must be able to recognize rhyming, letter sounds and syllables. Songs and nursery rhymes are wonderful for teaching phonological awareness.
Understanding what print is – that the scribbles on a page mean something and that words are all around us.
Recognizing the shape of the letters of the alphabet and the corresponding sounds that go with those letters.
Learning that one can express ideas through drawing, scribbling and writing which is learned by practicing and playing.
YOU are your child’s first and best teacher! Share your enthusiasm and love of reading every day.
Make reading time a regular planned activity every day.
Children learn best when they are actively involved with the book being read. Ask your child questions as you read, e.g. "What is happening in this picture?" or "What do you think will happen next?" Any of the "wh" questions are great.
Help your child notice the words that are all around us. Point out words on signs, newspapers, cereal boxes etc.
Start early! Babies enjoy short periods of sharing a book and love to just “play” with board books.
Sing to and with your child – this fun activity teaches them that words can be broken up into parts or “syllables”.
|
What is a WAPI?
Water Pasteurization Indicators (WAPIs) are very simple devices that show when water has been heated adequately to make it safe to drink.
Water Pasteurization Indicator (WAPI)
Traditionally families have been told to boil their water to make it safe, mostly because it is a visible endpoint. This sometimes requires a large amount of cooking fuel and time. In many parts of the world obtaining fuel or firewood is expensive and time consuming, and causes harm to the environment such as deforestation and erosion.
Pasteurization involves heating liquids to a temperature high enough to kill pathogenic organisms without boiling. Milk is heated to 161° Fahrenheit, which kills all dangerous bacteria and other organisms, and water can be made safe at the same temperature. It is not necessary to boil water to make it potable, only to heat it to the pasteurization temperature. The table below shows the temperatures necessary to kill pathogens.
Temperatures at which Disease Causing Organisms in Contaminated Water Are Killed
- Pasteurization of milk: 15 seconds at 71° C (160° F)
- WAPI wax melting temperature: 65° C (149° F)

Organisms killed at:
- Hepatitis A virus: 65° C (149° F)
- E. coli, Shigella: 60° C (140° F)
- Rotaviruses, polioviruses: 60° C (140° F)
- Worms, Giardia, Entamoeba, Cryptosporidium: 55° C (131° F)
A WAPI is a durable, reusable plastic tube containing a special wax that melts at 149° F. When the wax melts, it shows that the water has reached pasteurization temperature and is safe. When the tube is removed from the water, it cools, the wax hardens, and the WAPI is ready for use again.
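The kill temperatures above can be encoded in a small sketch. The temperatures and groupings come from the table; the function names are my own, and the code is illustrative, not a safety device.

```python
# Kill temperatures (deg C) taken from the table above.
KILL_TEMPS_C = {
    "Worms, Giardia, Entamoeba, Cryptosporidium": 55,
    "E. coli, Shigella": 60,
    "Rotaviruses, polioviruses": 60,
    "Hepatitis A virus": 65,
}
WAPI_WAX_MELT_C = 65  # the wax melts at 65 deg C (149 deg F)

def organisms_killed(water_temp_c):
    """Return the groups from the table killed at this water temperature."""
    return [org for org, t in KILL_TEMPS_C.items() if water_temp_c >= t]

def wapi_indicates_safe(water_temp_c):
    """Melted wax means the water reached every kill temperature listed."""
    return water_temp_c >= WAPI_WAX_MELT_C
```

Note that the wax melting point, 65° C, sits at or above every kill temperature in the table, which is why a melted WAPI is a sufficient indicator.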
|
Thermodynamic stability occurs when a system is in its lowest energy state, or chemical equilibrium with its environment. This may be a dynamic equilibrium, where individual atoms or molecules change form, but their overall number in a particular form is conserved. This type of chemical thermodynamic equilibrium will persist indefinitely unless the system is changed. Chemical systems might include changes in the phase of matter or a set of chemical reactions.
State A is said to be more thermodynamically stable than state B if the Gibbs energy of the change from A to B is positive.
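This criterion can be illustrated with the standard relation ΔG = ΔH - TΔS. The sketch below uses hypothetical numbers purely for illustration; the function names are my own.

```python
def gibbs_free_energy_change(delta_h, temp_k, delta_s):
    """Delta G = Delta H - T * Delta S, in J/mol, for a change A -> B."""
    return delta_h - temp_k * delta_s

def a_more_stable_than_b(delta_g_a_to_b):
    """A is the more thermodynamically stable state if Delta G(A -> B) > 0."""
    return delta_g_a_to_b > 0

# Hypothetical numbers: an endothermic change (+10 kJ/mol) with a modest
# entropy gain (+20 J/(mol K)) at room temperature (298.15 K).
dG = gibbs_free_energy_change(10_000.0, 298.15, 20.0)  # about 4037 J/mol > 0
```

With these numbers ΔG is positive, so state A is the more stable one: the change to B would raise the system's Gibbs energy.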
Chemical stability versus reactivity
Thermodynamic stability applies to a particular system. The reactivity of a chemical substance is a description of how it might react across a variety of potential chemical systems and, for a given system, how fast such a reaction could proceed.
Chemical substances or states can persist indefinitely even though they are not in their lowest energy state if they experience metastability: a state which is stable only if not disturbed too much. A substance (or state) might also be termed "kinetically persistent" if it is changing relatively slowly (and thus is not at thermodynamic equilibrium, but is observed anyway). Metastable and kinetically persistent species or systems are not considered truly stable in chemistry. Therefore, the term chemically stable should not be used by chemists as a synonym of unreactive, because it confuses thermodynamic and kinetic concepts. On the other hand, highly chemically unstable species tend to undergo exothermic unimolecular decomposition at high rates, so high chemical instability may sometimes go hand in hand with rapid unimolecular decomposition.
In everyday language, and often in materials science, a chemical substance is said to be "stable" if it is not particularly reactive in the environment or during normal use, and retains its useful properties on the timescale of its expected usefulness. In particular, the usefulness is retained in the presence of air, moisture or heat, and under the expected conditions of application. In this meaning, the material is said to be unstable if it can corrode, decompose, polymerize, burn or explode under the conditions of anticipated use or normal environmental conditions.
|
Physical Activity Initiative
We all want children to grow up healthy and with the knowledge and skills needed to succeed in life. A big part of being healthy for children is getting the recommended level of physical activity—at least 60 minutes daily. Studies show that physical activity not only helps kids stay active and healthy, but it can enhance important skills like concentration and problem solving, which can improve academic performance. Learn more about how you can make sure our children are on the path to an active and healthy life.
Public Service Announcements
The President's Council launched a physical activity outreach initiative highlighting the physical and cognitive benefits of regular activity for youth. The initiative includes the following national public service announcements:
This print PSA features youth and highlights the benefits of physical activity for children.
This print PSA features an adolescent boy and highlights the benefits of physical activity for children.
Check out the President's Council's ideas on ways to be active to get your kids moving today.
The Benefits of Physical Activity
Physical activity has many health benefits. It can help children maintain a healthy weight and build healthy bones, muscles, and joints. It also puts them on the path to a healthier lifestyle, which is important, considering that active children are more likely to become active and healthy adults. Compared with those who are not active, physically active youth have higher levels of aerobic fitness, stronger muscles, and stronger bones.
In addition to the health benefits, physical activity has a strong impact on academic performance and social skills. Research shows that 60 minutes or more of daily physical activity can help children in the following ways:
- Improved test scores, grades, and time management skills
- Boosted concentration, memory and classroom behavior
- Increased self-confidence and self-esteem
- Strengthened social and cooperative skills, such as teamwork and problem solving
- Reduced anxiety and stress
Studies show that physically active students score higher on standardized tests and have better grades, particularly in math, English and reading. Recess and classroom activity breaks show positive association with indicators of cognitive skills, attitudes, and academic behavior and achievement.1 When children are active their blood flow increases, improving memory and concentration, which are essential in the classroom, and hormones are released that can improve their mood and reduce anxiety and stress.2
Learn more about the importance of being active.
Physical Activity Facts
By encouraging physical activity, we empower our children to be healthy.
- The U.S. Department of Health and Human Services recommends that young people aged 6–17 years participate in at least 60 minutes of physical activity daily.3
- Only one in three children achieves the minimum amount of physical activity they need each day.4
- Physical activity is particularly important among children with physical disabilities, as 22.5% of children with disabilities are obese compared to 16% of children without disabilities.
- In 2011, 29% of high school students surveyed had participated in at least 60 minutes per day of physical activity on all 7 days before the survey.5
- Children spend an average of more than seven-and-a-half hours a day in front of a screen, watching TV, playing video games, or surfing the Web.6
- Participation in physical activity declines as young people age.7
Many organizations work to ensure that our nation's children have daily physical activity. Visit their websites for additional information and resources.
- Active Schools : A comprehensive program that empowers school champions (P.E. teachers, classroom teachers, principals, administrators, and parents) to create active environments that enable all students to get moving and reach their full potential.
- Making Health Easier : Making Health Easier is an interactive social networking site where Centers for Disease Control and Prevention-funded communities and their partners can share stories and resources around obesity and tobacco issues.
- Shape America : SHAPE America is the largest organization supporting and assisting professionals involved in PE, recreation, fitness, sport and coaching, dance, health education and promotion, and all specialties related to achieving a healthy and active lifestyle.
|
Cytolysis is the lysis, or death, of cells due to the rupture of the cell membrane. Cytolysis is caused by excessive osmosis, or movement of water, into a cell. The cell membrane cannot withstand the osmotic pressure of the water inside, and so it bursts. Osmosis occurs from a region of high water potential to a region of low water potential through a semipermeable membrane.
Cause and effects
Osmotic lysis occurs in a hypotonic environment, where water diffuses into the cell. As the water continues to diffuse into the cell, the cell grows larger, and will eventually burst if too much water enters. The cell membrane is not strong enough to prevent and stop the swelling of the cell, and eventually will rupture, releasing the cell contents.
Cytolysis does not occur in plant cells because plant cells have a strong cell wall that contains the osmotic pressure, or turgor pressure, which would otherwise cause cytolysis to occur. Contrary to organisms without a cell wall, plant cells must be in a hypotonic environment in order to have this turgor pressure, which provides the cells more structural support, preventing the plant from wilting. In a hypertonic environment, plasmolysis occurs, which is nearly the complete opposite of cytolysis: Instead of expanding, the cytoplasm of the plant cell retracts from the cell wall, causing the plant to wilt.
Osmotic lysis is often the result of a stroke, since a stroke leads to a malfunctioning of the cell's metabolism, which results in an inflow of extracellular fluid into the cell.
The Fab portion of IgG or IgM binds to epitopes on the outer membrane of the gram-negative cell wall. This activates the complement pathway, enabling the membrane attack complex (MAC) to insert through the outer membrane and cytoplasmic membrane, causing the bacterium to lyse.
Preventing osmotic lysis
Different cells and organisms have adapted different ways of preventing cytolysis. For example, the paramecium uses a contractile vacuole, which rapidly pumps out excess water to prevent the build-up of water and the lysis that would otherwise follow.
Other organisms pump solutes out of their cytosol, which brings the solute concentration closer to that of their environment and slows down the process of water's diffusion into the cell, preventing cytolysis. If the cell can pump out enough solutes so that an isotonic environment can be achieved, there will be no net movement of water.
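The tonicity logic running through this article can be sketched as a simple classifier. It is a toy comparison of solute concentrations, with names of my own choosing, not a physiological model:

```python
def classify_environment(solute_outside, solute_inside, tol=1e-9):
    """Toy tonicity classifier based on solute concentrations (e.g. mol/L).
    Water moves by osmosis toward the side with MORE solute, i.e. the
    side with lower water potential."""
    if solute_outside < solute_inside - tol:
        return "hypotonic"    # water flows in: cytolysis risk for animal cells
    if solute_outside > solute_inside + tol:
        return "hypertonic"   # water flows out: plasmolysis in plant cells
    return "isotonic"         # no net water movement
```

In these terms, pumping solutes out of the cytosol moves the cell toward the "isotonic" case, which is exactly why it prevents cytolysis.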
|
Cirrhosis of the liver describes the end result of years of organ damage. Chronic inflammation slowly leads to scar tissue, changing the organ’s shape, killing organ cells and altering blood flow. The liver becomes increasingly damaged and loses its capacity to function effectively. Every year, cirrhosis kills approximately 25,000 Americans.
Cirrhosis Progression. The development of cirrhosis often takes decades. Chronic inflammation begins the process, causing scar tissue, or fibrous tissue, to develop after healthy cells die.
Over time, the scar tissue covers greater areas of the organ, changing its shape and interfering with blood flow. The liver attempts to regenerate itself—it is the only organ in the body that can regenerate. Nodules of regenerative tissue appear in badly scarred areas, causing its shape to change even more.
In its final stages, the diseased organ is covered with large areas of scar tissue and regenerative nodules, it may have become enlarged and fatty, or it may have shrunk considerably. In either case, blood no longer flows properly through the organ, and vital bodily functions and enzyme production are severely compromised.
Causes of Cirrhosis. Hepatitis and the excessive consumption of alcohol are among the top causes of cirrhosis.
Alcoholism: Research has not yet revealed why some alcoholics develop cirrhosis and others don’t. Autopsy information shows that approximately ten to fifteen percent of alcoholics have cirrhosis at the time of death. Alcohol can cause other liver impairments besides cirrhosis.
One of the liver’s important functions is to break down toxins, and alcohol is one of these toxins. However, too much alcohol can damage liver cells even as they attempt to destroy the toxin, and the by-products of metabolizing alcohol cause further damage. Cells become inflamed and die, starting a degenerative process that results in cirrhosis. The organ often becomes enlarged and develops excess amounts of fat as its ability to function decreases. It can take up to a decade of heavy drinking to produce cirrhosis.
Alpha-1-antitrypsin Deficiency: This is a hereditary disorder that prevents the body from properly utilizing the alpha-1-antitrypsin protein. In some cases, alpha-1-antitrypsin builds up in the liver, where excess amounts can lead to tissue scarring.
Autoimmune Hepatitis: An overly active immune system can attack liver cells, causing inflammation and symptoms similar to hepatitis. Autoimmune reactions sometimes target red blood cells, which can lead to anemia.
Blocked Bile Ducts: Bile ducts become blocked due to birth defects, gallbladder surgery complications, or inflammation. If the ducts are blocked, bile flows back into the liver, damaging the organ.
Glycogen Storage Disease: This disease is essentially a deficiency of enzymes that control blood sugar (glucose) levels. Glucose is stored as glycogen in the liver, where it can cause damage if it builds up to excessive amounts.
Hemochromatosis (iron overload): Iron overload occurs when the body lacks the ability to regulate iron levels. Iron overload can damage and scar many internal organs, and requires early treatment to be successfully controlled.
Hepatitis B and Hepatitis C: Chronic infections of hepatitis B and hepatitis C can cause liver inflammation. Worldwide, hepatitis B accounts for the largest number of cirrhosis cases. Both hepatitis C and B take many years to cause extensive damage, but the results can be fatal.
Hepatotoxicity: Exposure to certain prescription drugs, environmental toxins and chemicals can all cause cirrhosis.
Non-alcoholic Steatohepatitis, Malnutrition and Diabetes: Steatohepatitis is the medical term for an enlarged, fatty liver. The condition is usually caused by alcoholism, but can also be the result of malnutrition, obesity and diabetes.
Wilson’s Disease: Excess amounts of copper are stored by the body and cause hepatotoxicity and brain dysfunctions.
|
Most people assume that air pollution is an outdoor issue; they feel an imaginary sense of protection and safety within their homes and buildings. Contrary to popular belief, however, indoor pollution is just as bad for your health, if not worse. Often people do not even realize how many household items create pollutants that are released into the air of their homes. Air quality is responsible for a large variety of health risks, and the average person spends close to 90% of their time indoors, making poor indoor air quality a major concern. Pollutants enter our air in several ways and in several forms. Take steps toward improving indoor air quality by understanding possible air pollutants and the health risks they pose. Finally, learn how to cleanse poor air, resulting from these pollutants, to create a healthier indoor atmosphere.
Pollutants come in various forms; living organisms, like mold and dust mites, are just as harmful as the variety of gases and chemicals found indoors. Secondhand smoke, for example, comes from the burning of tobacco products; when these are burned indoors, the chemicals enter the air, walls, and furniture, seeping into the bodies of anyone present. Chemicals found in many cleaning products, paints, waxes, air fresheners, pesticides, and repellents are considered volatile organic compounds, or VOCs. These products are commonly used and stored within homes, schools, and other buildings; the problem is that the chemicals within them are extremely harmful to health, and they evaporate out of the cleaners and into the air. Asthma triggers and molds are other common indoor pollutants; dust mites, animal dander, certain foods, and pollen can be found in our air, causing respiratory issues and discomfort, while molds produce spores that thrive in damp environments, like the bathroom or kitchen. Combustible pollutants, such as carbon monoxide and nitrogen dioxide, come from machines in our houses, especially heating units, which release these odorless, colorless gases into the environment when not functioning properly. Lastly, radon is a radioactive gas created in soil; it is known to enter homes and buildings through areas in contact with the earth, and cracks in the floor can allow it to seep into the air, making the air quality extremely dangerous. It is important to know the symptoms that come along with each of these pollutants, because awareness is the first step in solving the problem.
Health risks from pollutants vary in severity; symptoms of poor air quality range from allergies and fatigue to cancers and even death. Learn to identify the factors contributing to indoor air pollution by recognizing the symptoms. When allergens and molds are the culprit, symptoms tend to be a combination of a runny nose, sneezing, red eyes, and/or difficulty breathing. Secondhand smoke also causes issues with asthma and has been linked to ear infections. Ear, nose, and throat irritation, headaches, and nausea are symptoms of volatile organic compounds; if not taken care of, VOCs can damage the liver, kidneys, and central nervous system, and sometimes lead to cancer. Combustible pollutants, like carbon monoxide, inhibit oxygen from reaching the brain, resulting in headaches, dizziness, weakness, nausea, and, sometimes, death. Radon is the leading cause of lung cancer for non-smokers and the second leading cause for everyone else. If you are experiencing any of the above symptoms, it is very likely that the air quality is being affected by pollutants, and action is necessary to keep the symptoms from worsening.
To aid in the cleansing of indoor air, follow these 12 easy steps to improving air quality today.
1. Fresh Air
Fresh air will clear out a lot of the indoor pollutants, so it is important to open the windows or doors of buildings for at least an hour a day. Some buildings have air conditioning units with a setting that exchanges indoor air with outdoor air.
2. Dehumidify
Purchase a high-quality dehumidifier. Moisture within buildings gives mold and bacteria ideal conditions for growing. Indoor humidity is best kept between 30% and 50%. Be sure to run fans in areas where moisture is likely, i.e. bathrooms and kitchens.
3. Air Filters
Heating and air conditioning units have built-in filters that are meant to collect dust and other pollutants. If the filters are not replaced regularly, they lose efficiency. Put in high-quality filters and maintain the utmost efficiency in pollutant filtration.
4. No Smoking
Do not allow smoking inside. Cigarettes and other tobacco products are loaded with harmful chemicals that are released into the air and remain in the air, walls, furniture, etc. of the building. Everyone in the building becomes subject to the chemicals' harmful effects.
5. Clean Thoroughly
It is important to dust and clean homes and other buildings thoroughly. Clean away mold and mildew; brush away dust bunnies. Get rid of all the things that cause allergies within the house. Also use cleaning products that are green, or environmentally safe, because they contain fewer harmful chemicals.
6. Fix Leaks
Fix all water leaks. Anywhere water can sit will grow mildew, mold, and other bacteria that are bad for air quality.
7. Wash Sheets
Make sure to wash all sheets, blankets, towels and furniture covers in hot water. Items for bedding should be washed once a week. Hot water will kill off any pollutants and keep the sleeping environment fresh and clean.
8. Allergen-Proof Bedding
Purchase an allergen-proof mattress. Dust mites and other organic pests can grow and burrow in beds and furniture, and it is important that where you sleep is free of them. Use anti-allergen spray for the rest of the house.
9. Pet Grooming
Keep pets groomed. Pet dander can be found floating through the air of houses and buildings where pets live. Shampoos, oils, and medicines can help reduce pet dander in the air.
10. Seal Cracks
Seal cracks in the floor so that radon cannot enter the house from the ground. Make sure windows are sealed efficiently to keep pests, pollen, and other air pollutants out. This will also help reduce moisture entering the house or building from outside.
11. Gas Warnings
Install carbon monoxide and radon detectors to warn you if either deadly gas is present in the home. If need be, a radon sump can be installed to remove radon from the air.
12. Safe Paint
Strip away old chemical-based paint, and repaint interiors with environmentally safe paint products.
So many items within households are harmful pollutants, and most of the time we are not even aware of their harmful nature. Keep health at its best by following these 12 steps to improve indoor air quality; it is worth the time, money, and effort needed to make these changes. Imagine how much air we breathe in a day, and how many pollutants enter our bodies each day. Improved air quality means an improved quality of life.
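The 30% to 50% humidity guideline from the dehumidifier step can be expressed as a quick check. This is a toy sketch: the thresholds come from the text, while the function name and labels are my own.

```python
def humidity_status(relative_humidity_pct):
    """Classify indoor relative humidity against the 30-50% guideline."""
    if relative_humidity_pct < 30.0:
        return "too dry"
    if relative_humidity_pct <= 50.0:
        return "ok"
    return "too humid"  # mold and bacteria thrive above ~50%
```

A hygrometer reading fed through such a check tells you whether to run the dehumidifier.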
The following products are just a few trustworthy items that can help improve indoor air quality:
- CO800 Carbon Monoxide Detector warns when deadly gas is present.
- Green Promise by Benjamin Moore Paints are safe for everyone, including nature.
- Simple Green All-Natural Cleaner works just as well as regular cleaners, but lacks the dangerous chemicals.
- Frigidaire FAD704TDP Dehumidifier is the best on the market for large space areas, like homes and schools.
- Use a vacuum with a HEPA filter; used in surgical rooms to maintain cleanliness and sanitation, these filters are top of the line for improved air quality.
Author: Conrad Mackie
|
The scientific method is a process for creating models of the natural world that can be verified experimentally. The scientific method requires making observations, recording data, and analyzing data in a form that can be duplicated by other scientists. In addition, the scientific method uses inductive reasoning and deductive reasoning to try to produce useful and reliable models of nature and natural phenomena. Inductive reasoning is the examination of specific instances to develop a general hypothesis or theory, whereas deductive reasoning is the use of a theory to explain specific results. In 1637 René Descartes published his Discours de la Méthode in which he described systematic rules for determining what is true, thereby establishing the principles of the scientific method.
The subject of a scientific experiment has to be observable and reproducible. Observations may be made with the unaided eye, a microscope, a telescope, a voltmeter, or any other apparatus suitable for detecting the desired phenomenon. The invention of the telescope in 1608 made it possible for Galileo to discover the moons of Jupiter two years later. Other scientists confirmed Galileo's observations and the course of astronomy was changed. However, some observations could not withstand tests of objectivity, such as the canals of Mars reported by astronomer Percival Lowell. Lowell claimed to be able to see a network of canals on Mars that he attributed to intelligent life on that planet. Bigger telescopes and satellite missions to Mars failed to confirm the existence of canals. This was a case where the observations could not be independently verified or reproduced, and the hypothesis about intelligent life was unjustified by the observations. To Lowell's credit, he predicted the existence of the planet Pluto in 1905 based on perturbations in the orbits of Uranus and Neptune. This was a good example of deductive logic: the application of the theory of gravitation to the known planets predicted that they should be in a different position from where they were, so if the law of gravitation was not wrong, something else had to account for the variation. Pluto was discovered 25 years later.
Real science hops from failure to failure, from several falsifiable hypotheses in confused competition to the next set, until a consensus evolves around a surviving paradigm that often uses aspects of its predecessors, adding unexpected novel ideas that lead to productive questions and more definitive tests, as disparate data starts to fit an overall unifying view. — R. Murray
The apparatus for making a scientific observation has to be based on well-known scientific principles. The telescope, for instance, is based on magnification of an image using light refraction through lenses. It can be proved that the image perceived through the telescope corresponds to that of the object being observed. In other words, you can trust observations made through telescopes. This is in contrast to magic wands, divining rods, or other devices for which no basis in science can be found. A divining or dowsing rod is a "Y" shaped branch of a tree, which is supposed to be able to help to identify places where there is underground water. The operator holds the divining rod by the top of the "Y", and the single end is supposed to dip when the operator passes over a section of land where there is water. What is the force that makes the divining rod dip? How does the divining rod "sense" the water? A scientist would try to answer these questions by experiments. Place the divining rod on a scale, for example, and then put a bowl of water under the divining rod. Is there a change of weight that indicates force? In another experiment the scale with the divining rod may be placed over a place known to have underground water, and over another place known to be dry. If these experiments show no force being exerted on the divining rod, we have to conclude that divining rods cannot be used as instruments for detecting water. We also have to conclude that any movement of the rod is accomplished by the hands of the person holding it, no matter how much the person denies it.
The scientific method requires that theories be testable. If a theory cannot be tested, it cannot be a scientific theory. Formulating a theory from observations involves inductive reasoning, as described above. This approach can be used to study gravitation, electricity, magnetism, optics, chemistry, etc. Sometimes more than one theory can be proposed to explain observable events. In such cases, different predictions made with each theory can be used to set up experiments that select one theory over another. In the 17th century there were competing theories about whether electromagnetic radiation, such as visible light, consisted of particles or waves. At the beginning of the 20th century Max Planck postulated that energy can only be emitted or absorbed in small, discrete packets called quanta. This seemed to favor the particle theory, particularly after Einstein demonstrated that light behaves like a stream of particles in photoelectric cells. However, diffraction experiments with electrons, which were considered particles because they had a measurable mass, showed all the characteristics of waves. In 1926, Erwin Schrödinger developed an equation that described the wave properties of matter, and this became the foundation for the branch of physics called quantum mechanics.
How can waves behave like particles and particles behave like waves? Some scientific facts are very hard to comprehend. Yet, these are observable phenomena verified over and over again by many people all over the world. The behavior of the speed of light is another physical fact that is hard to understand. The speed of light in a vacuum is approximately 299,792 kilometers per second. The speed is reduced by about 0.03% in air and by about 25% in water. A famous experiment conducted by Michelson and Morley at the end of the 19th century showed that the speed of light was the same perpendicular to the orbit of the earth and parallel to the orbit of the earth. The orbital speed of the earth of 29 kilometers per second could not be detected in the measurement of the speed of light. Einstein's theory of relativity is based on the constancy of the measured speed of light for all observers. A train has its headlight on. The speed of the light emanating from the train is the same whether the train is moving toward you or not! It is hard to accept, but many experiments for over one hundred years have come to the same conclusion.
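The reduced speeds in different media follow from the relation v = c / n, where n is the refractive index of the medium (approximately 1.0003 for air and 1.33 for water). A quick sketch:

```python
C_VACUUM_KM_S = 299_792.458   # speed of light in a vacuum, km/s

def speed_in_medium(refractive_index):
    """v = c / n: light travels slower in a medium with refractive index n."""
    return C_VACUUM_KM_S / refractive_index

c_air = speed_in_medium(1.0003)   # about 0.03% slower than in vacuum
c_water = speed_in_medium(1.333)  # about 25% slower than in vacuum
```

Note that these reductions apply to light inside a medium; the Michelson-Morley result concerns something different, namely that the measured speed does not depend on the motion of the observer or the source.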
Science has some well-known limitations. Science works by studying problems in isolation. This is very effective at getting good, approximate solutions. Problems outside these artificial boundaries are generally not addressed. The consistent, formal systems of symbols and mathematics used in science cannot prove all statements, and furthermore, they cannot prove all TRUE statements. Kurt Gödel showed this in 1931. The limitations of formal logical systems make it necessary for scientists to discard their old systems of thought and introduce new ones occasionally. Newton's gravitational model works fairly well for everyday physical descriptions, but it is not able to account for many important observations. For this reason, it has been replaced by Einstein's general theory of relativity for most celestial phenomena. Instead of talking about gravity, we now are supposed to talk about the curvature of the four-dimensional space-time continuum. Scientific observations are also subject to physical limits that may prevent us from finding the ultimate truth. The Heisenberg Uncertainty Principle states that it is impossible to determine simultaneously, with arbitrary precision, both the position and momentum of an elementary particle. So, the more precisely we know the location of a particle, the less precisely we can determine its velocity, and vice versa. Jacob Bronowski wrote that nature is not a gigantic formalizable system, because to formalize it we would have to make some assumptions that cut some of its parts from consideration, and having done that, we cannot have a system that embraces the whole of nature.
The application of the scientific method is limited to independently observable, measurable events that can be reproduced. The scientific method is also applicable to random events that have statistical distributions. In nuclear physics, for example, it is impossible to predict when one specific atom will decay and emit radiation, but it is possible to devise theories and formulas to predict when half of the atoms of a large sample will decay. Irreproducible results cannot be studied by the scientific method. There was one day when many car owners reported that the alarm systems of their cars were set off at about the same time without any apparent cause. Automotive engineers were not able to discover the reason because the problem could not be reproduced. They hypothesized that it could have been radio interference from a passing airplane, but they could not prove it one way or another. Mental conceptual experiences cannot be studied by the scientific method either. At this time there is no instrumentation that enables someone to monitor what anybody else conceives in their mind, although it is possible to determine which part of the brain is active during any given task. It is not possible to define experiments to determine objectively which works of art are "great", or whether Picasso was better than Matisse. So-called miracles are also beyond the scientific method. A person has tumors and faces certain death, and then, the tumors start shrinking and the person becomes healthy. What brought about the remission? A change in diet? A change in mental attitude? It is impossible to go back in time to monitor all variables that could have caused the cure, and it would be unethical to plant new tumors into the person to try to reproduce the results for a more careful study.
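This statistical predictability can be illustrated with a short simulation. The sketch below is purely illustrative (the atom count and per-step decay probability are arbitrary values chosen for the example): although no single atom's decay time can be predicted, the time for half of a large sample to decay closely matches the theoretical half-life.

```python
import math
import random

def simulate_decay(n_atoms, decay_prob_per_step, max_steps=10_000):
    """Simulate random decay: each remaining atom decays independently
    in every time step with the same fixed probability. Returns the
    first step at which no more than half of the atoms remain."""
    remaining = n_atoms
    for step in range(1, max_steps + 1):
        decayed = sum(1 for _ in range(remaining)
                      if random.random() < decay_prob_per_step)
        remaining -= decayed
        if remaining <= n_atoms / 2:
            return step
    return max_steps

random.seed(42)
p = 0.01  # illustrative per-step decay probability
observed_half_life = simulate_decay(50_000, p)

# For a decay constant of roughly p per step, theory predicts a
# half-life of ln(2) / p, even though each individual decay is random.
predicted_half_life = math.log(2) / p
print(observed_half_life, round(predicted_half_life, 1))
```

Running this, the simulated half-life lands within a step or two of the theoretical value, which is the sense in which random events with statistical distributions are amenable to the scientific method.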
The scientific method relies on critical thinking, which is the process of questioning common beliefs and explanations to distinguish those beliefs that are reasonable and logical from those which lack adequate evidence or rational foundation.
An argument consists of one or more premises and one conclusion. A premise is a statement that is offered in support of a claim being made. Premises and claims can be either true or false. In deductive arguments the premises provide complete support for the conclusion. If the premises provide the required degree of support for the conclusion then the argument is valid, and if, in addition, all its premises are true, then the conclusion must be true. In inductive arguments the premises provide some degree of support for the conclusion. When the premises of inductive arguments are true, their conclusion is likely to be true. Arguments that have one or more false premises are unsound.
Arguments are subject to a variety of fallacies. A fallacy is an error in reasoning in which the premises given for the conclusion do not provide the needed degree of support. A deductive fallacy is an invalid deductive argument: one whose premises can all be true while its conclusion is false. An inductive fallacy consists of arguments where the premises do not provide enough support for the conclusion. In such cases, even if the premises are true, the conclusion is not likely to be true.
Common fallacies are categorized by their type, such as Ad Hominem (personal attack), and appeals to authority, belief, fear, ridicule, tradition, etc. An example of an Ad Hominem fallacy would be to say "You do not understand this because you are American (or Chinese, etc.)". The national origin of a person (the premise) has nothing to do with the conclusion that a person can understand something or not, therefore the argument is flawed. Appeals to ridicule are of the form: "You would be stupid to believe that the earth goes around the sun". Sometimes, a naive or false justification may be added in appeals to ridicule, such as "we can plainly see the sun go around the earth every day". Appeals to authority are of the form "The president of the United States said this, therefore it must be true". The fact that a famous person, great person, or authority figure said something is not a valid basis for something being true. Truth is independent of who said it.
Direct or Experimental evidence. The scientific method relies on direct evidence, i.e., evidence that can be directly observed and tested. Scientific experiments are designed to be repeated by other scientists and to demonstrate unequivocally the point that they are trying to prove by controlling all the factors that could influence the results. A scientist conducts an experiment by varying a single factor and observing the results.
When appropriate, "double blind" experiments are conducted to avoid the possibility of bias. If it is necessary to determine the effectiveness of a drug, an independent scientist will prepare the drug and an inert substance (a placebo), identifying them as A and B. A second scientist selects two groups of patients with similar characteristics (age, sex, etc.), and not knowing which is the real drug, administers substance A to one group of patients and substance B to the second group of patients. By not knowing whether A or B is the real drug, the second scientist focuses on the results of the experiment and can make objective evaluations. At the end of the experiment, the second scientist should be able to tell whether the group receiving substance A showed improvements over those receiving substance B. If no effect can be shown, the drug being tested is ineffective. Neither the second scientist nor the patients can cheat by favoring one substance over another, because they do not know which is the real drug.
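The bookkeeping behind blinding can be sketched in a toy simulation. All the numbers below are hypothetical (the effect size, group size, and outcome model are invented for illustration; this is not a real clinical protocol): one party assigns the A/B labels, the evaluator works only with labels, and the code is broken only after the comparison is made.

```python
import random
from statistics import mean

random.seed(1)

# Blinding step: one party labels the real drug and the placebo as
# "A" or "B" at random; the evaluating scientist sees only the labels.
labels = ({"drug": "A", "placebo": "B"} if random.random() < 0.5
          else {"drug": "B", "placebo": "A"})

def treat(is_real_drug):
    """Toy outcome model: a pain score after treatment (lower is
    better). The real drug shifts the score down; both groups are
    subject to the same random variation."""
    score = random.gauss(5.0, 1.0)
    return score - 1.5 if is_real_drug else score

# The evaluator records outcomes by label only, never by substance.
outcomes = {"A": [], "B": []}
for substance, is_real in (("drug", True), ("placebo", False)):
    label = labels[substance]
    for _ in range(200):
        outcomes[label].append(treat(is_real))

# Evaluation happens while still blind ...
better = "A" if mean(outcomes["A"]) < mean(outcomes["B"]) else "B"
# ... and only afterwards is the code broken to identify the drug.
print(better == labels["drug"])
```

Because neither the evaluation step nor the patients ever see which substance is which, any preference for one label over the other cannot come from bias about the drug itself.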
Anecdotal, Correlational, or Circumstantial Evidence. "Where there is smoke, there is fire" is a popular saying. When two things occur together frequently, it is possible to assume that there is a direct or causative relationship between them, but it is also possible that there are other factors. For example, if you get sick every time that you eat fish and drink milk, you could assume that you are allergic to fish. However, you may be allergic to milk, or only to the combination of fish with milk. Correlational evidence is good for developing hypotheses that can then be tested with the proper experiments, e.g., drink milk only, eat fish only, eat fish and milk together.
There is nothing wrong with using representative cases to illustrate an inductive conclusion drawn from a fair sample. The problem arises when a single case or a few selected cases are used to draw a conclusion which would not be supported by a properly conducted study.
Argumentative Evidence consists of evaluating facts that are known and formulating a hypothesis about what the facts imply. Argumentative evidence is notoriously unreliable because anybody can postulate a hypothesis about anything. This was illustrated above with the example about the "channels" of Mars implying intelligent life. The statement "I heard a noise in the attic, it must be a ghost" also falls in this category.
Testimonial Evidence. A famous football player appears on television and says that Drug-XYZ provides relief from pain and works better than anything else. You know that the football player gets paid for making the commercial. How much can you trust this evidence? Not very much. Testimonials are often biased in favor of a particular point of view. In court proceedings, something actually experienced by a witness (eyewitness information) has greater weight than what someone told a witness (hearsay information). Nevertheless, experiments have repeatedly demonstrated that eyewitness accounts are highly unreliable when compared with films of the events. The statement "I saw a ghost last night." is an example of testimonial evidence that probably cannot be verified and should not be trusted. On the other hand, the statement "I saw a car crash yesterday." can be objectively verified to determine whether it is true or false by checking for debris from the accident, hospital records, and other physical evidence.
|
Tara’s Lessons for ESL Students: Learning English with Books
Many of my students and I enjoy using different types of fictional and non-fictional texts during our classes. One of my favorites (and one of my students’ favorites as well) is Charlie and the Chocolate Factory by the beloved British author Roald Dahl. Although considered a children’s book, it is a story that is fun for young and old readers and appeals to people from all over the world.
There are so many reasons why this novel is great for students of English! First of all, the vocabulary used in this book is interesting and very useful in everyday life. The new words and idiomatic phrases you learn are repeated throughout the story, and will STAY in your mind because you are reminded of them from start to finish. Also, there is lots of SPOKEN English used in this book, so you get an idea for how native speakers really talk to each other, and then hopefully go out and use it! Finally, this novel allows ESL students to have fun while accomplishing the goal of reading a complete book in English!
In today’s blog we will take a look at a typical lesson for this novel. I hope you enjoy it!
Chapter 1 – Here Comes Charlie
1. Listen to the Audio Recording, read by me! (You can find the text online.)
2. Now take a look at the vocabulary words I have selected and their part of speech. Which words do you know? Which words are new? Can you use the words in a sentence? Go back and listen again, now that you understand all of the vocabulary.
Wooden (adjective), Edge (noun), Mattress (noun), Draught (noun), Cap (noun), Cabbage (noun), Tummy (noun), Slabs (noun), Greedily (adverb), Torture (noun), Nibble (noun), Tremendous (adjective), Marvelous (adjective), Enormous (adjective), Sniff (noun), Gorgeous (adjective), To look forward to sth (phrasal verb), To make your mouth water (idiomatic phrase), To long for (idiomatic phrasal verb)
3. Did you understand the text? Let’s do some comprehension/opinion questions:
- Tell me about your family.
- Describe your house.
- Tell me about your job.
- Tell me about the food you eat with your family.
- Describe some large building/attraction that stands out in your city.
- Tell me about something you long for more than anything, and why.
- Now, use your imagination to invent 2 new crazy sweets that Willy Wonka could make. What would they taste like? What would they be made of?
Well, I hope you enjoyed listening to the first chapter in Charlie and the Chocolate Factory. It takes about 2 hours to finish a complete chapter and you can expect the schedule of the lesson to be as follows:
- Just Listen: You listen to the Audio recording of the chapter WITHOUT reading, once or twice. Then you explain the main idea or “gist” of the chapter. Tell me anything you remember hearing.
- Listen and Read: You listen to the Audio recording of the chapter AND read the text – for better understanding.
- Echo and Vocabulary: This is when we practice pronunciation and stress. I say a few words (or a sentence if you are more advanced) and then you REPEAT the same words/sentence trying to sound EXACTLY like me. When you listen to the Mp3 recording of our class, you will be able to hear the differences and then practice the words and stress that are hard for you. During the "echo" you have the chance to ask about all new vocabulary words and grammar structures.
- Practice makes Perfect: You listen to the Mp3 of our class AND the Audio of the chapter during your free time and practice your pronunciation and stress, trying to sound like me in the original recording.
- Speaking and Comprehension: After you practice on your own, you read the chapter out loud so that I can listen to you and your improvement. Then I ask you several questions about the chapter allowing you to show me that you fully understand. Finally, we discuss all the concepts that are presented in the chapter such as the “Role of the Media” or “Advertising to Children”.
- Written Work: Some students like to have a chance to practice their writing skills. Here I give you several options for writing assignments that we go over together during our next class. You have a chance to use the new vocabulary and grammar you learned.
So, if you are interested in reading this novel with me or would like more information about my classes, send me an email today and we can talk further about your goals for learning English. I hope you have a great day!
|
SAT SUBJECT TEST MATH LEVEL 1
Equations and Inequalities
THE SUBSTITUTION METHOD
If in a system of equations either variable has a coefficient of 1 or –1, solving the system by the substitution method may be just as easy or even easier than solving it by the addition method. Look at the system of equations you solved in Example 27: 2x + y = 13 and 3x – y = 12. By subtracting 2x from each side of the first equation, you can rewrite that equation as y = 13 – 2x. Now in the second equation you can replace y by 13 – 2x:
3x – (13 – 2x) = 12
This is now a simple equation in one variable that you can solve using the six-step method: 3x – 13 + 2x = 12, so 5x = 25 and x = 5.
Then substitute 5 for x:
y = 13 – 2(5) = 13 – 10 = 3
The solution is x = 5 and y = 3.
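As a quick check of the algebra, the same substitution steps can be mirrored in a few lines of Python. (The system 2x + y = 13, 3x − y = 12 is inferred here from the substitution shown above, since Example 27 itself is not reproduced in this section.)

```python
from fractions import Fraction

# System (reconstructed from the worked substitution above):
#   2x + y = 13
#   3x - y = 12
# Step 1: solve the first equation for y:   y = 13 - 2x
# Step 2: substitute into the second one:   3x - (13 - 2x) = 12  ->  5x = 25
x = Fraction(25, 5)
y = 13 - 2 * x
print(x, y)  # 5 3
# Check the solution in both original equations:
assert 2 * x + y == 13 and 3 * x - y == 12
```

Using exact fractions rather than floating point keeps the check airtight: both equations are satisfied exactly by x = 5, y = 3.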
|
mumps; parotid; gland; MMR; immunise; immunisation; vaccine; vaccination; immunize; immunization; virus; parotitis;
Mumps is an infection which affects the glands that make saliva. These glands are on the cheeks near to the ears. With mumps, they can swell and be painful, and often people with mumps have a very bad headache for a day or so. Mumps does not usually cause any ongoing problems, but sometimes it can affect the testes of older boys (after puberty) or men.
- Mumps is an infection caused by a virus called a paramyxovirus.
- It can cause swelling of one or both of the parotid glands. The parotid glands (which make saliva) are just in front of and below the ears. The mumps virus may also infect the other salivary glands which are lower, along the jaw.
- Mumps can cause a severe headache and sometimes it may be difficult to be sure this is 'only' mumps, because the symptoms can be similar to meningitis.
- Only 30% to 40% of people who have the infection will have swollen parotid glands.
- Most people who get a mumps infection will not have any signs that they have the infection, while some will have 'cold'-like symptoms (often with a bad headache).
- Mumps is now uncommon, since children are immunised against it, but before the days of immunisation, most people had mumps when they were children (most often between 5 and 9 years).
- While it is rare for mumps infections to cause severe problems, there have been some deaths from mumps (in Australia there were ten deaths between 1978 and 1997, and in 2000 there were 2 deaths, both of men over 80 years old).
How is mumps spread?
- The infection is spread by droplets of infected saliva from the mouth and by sneezing and coughing.
- Also, tissues used by a person with mumps can transfer the infection to another person if that person handles the used tissue and does not wash his hands after holding it.
How long does mumps take to develop?
- Between 12 - 25 days after being in contact with someone who has mumps, a child will become unwell for one or two days before the swelling appears.
How long are people infectious?
A person is infectious from about 6 days before becoming unwell, until about 9 days afterwards. If the swelling goes down before 9 days, the person will no longer be infectious.
Children or adults who are infectious should stay away from other people.
People who don't have any swellings can still pass the infection on.
- Less than 50% of children with mumps become unwell.
- Usually the first sign is feeling ill, with a fever, loss of appetite and headache.
- After 1 or 2 days, there may be swelling of the salivary glands in front of, and just below the ear. These swollen glands are usually tender.
- The headache of mumps often becomes more severe a couple of days after the swelling of the parotid glands, with the child being distressed by bright light, not willing to eat, and sometimes vomiting. Headaches due to mumps can occur even when there is no swelling of the parotid glands.
- Other 'glands' which are often felt below the jaw and below the ear are quite different to the salivary glands. These other glands often swell when children have infections, such as ear infections; they are usually firm and feel round or oval. These are lymph nodes (or glands) which help fight infections.
- The swollen salivary glands are higher on the face, soft and usually do not have a clear edge.
- There are other salivary glands along and just below the jaw and these might also be swollen, but not hard.
- Most children have swelling on both sides, but about 30% have swelling only on one side of the face.
- The swelling usually goes down after 5 to 10 days.
Possible problems from mumps
- Only a few children with mumps get any on-going problems from mumps, but adolescents and adults who get mumps have a higher risk of developing problems.
- 15-35% of males who get mumps after puberty get a very swollen and sore testis (orchitis) usually only on one side. It is not likely that this will stop them from making sperm and having children even if both testes are affected, but this can occasionally happen. Orchitis usually starts 7 to 10 days after swelling of the glands in the face starts. The boy or man becomes unwell again, with a fever and painful swelling of usually one testis, which lasts for several days.
- 5% of females who have mumps after puberty get the same sort of infection in the ovary (oophoritis), causing pain in the lower part of the tummy. This does not prevent the girl or woman from having children.
- In 4% of cases the pancreas (another gland) is affected, but this does not seem to cause any problems.
- Permanent hearing loss happens rarely.
- Mumps illness may cause mumps meningitis, with severe headache and neck stiffness. If this happens the child or young person should be seen by a doctor urgently, but mumps meningitis does not require treatment and almost always goes away without leaving any problems.
- There is no clear evidence that babies are harmed if the mother gets mumps during a pregnancy, but getting mumps in early pregnancy may possibly cause a miscarriage.
What you can do
- There is no specific treatment for mumps (no antibiotics for example).
- Children or adults with mumps need to rest and may be much more comfortable in a darkened room while they have a headache.
- Medication for pain (eg paracetamol or ibuprofen) may be useful for the headache, tenderness of the swollen glands and for pain if other glands are involved (see the topic 'Using paracetamol or ibuprofen')
- Offer lots of drinks.
- Offer different kinds of soft foods to see which the child feels is easiest to eat but if she doesn't eat much for a few days this will not cause any problems.
- Contact the doctor if the child or young person has a bad headache, stiff neck or other problems that don't go away.
- Protect other children by keeping your child at home until the swelling goes down, or for 9 days (which ever happens first).
- The topic 'Feeling sick' has other ideas about caring for sick children.
Protecting children and adults from mumps
- Immunisation with MMR (Measles, Mumps, Rubella vaccine) is recommended for children when they are 12 months old and again at 4 years. See the topic 'Immunisation'.
- Immunisation is not usually done earlier because of the low risk of serious illness from mumps and because immunisation may not work as well if it is given during the first year of life.
- It is advised that pregnant women do not have the vaccine during pregnancy.
- People born in Australia since 1966 may not have had two doses of MMR vaccine, and those born before 1966 may not have had any MMR vaccine. If anyone has not had these vaccines, or does not know that they have had them, they should be immunised. It is safe to give the vaccines to people who have already had them or have had the illnesses, but are not sure whether they are protected.
- It is very important that young adults who intend to go into health or teaching professions are fully immunised with MMR.
- This immunisation is free in Australia.
MMR immunisation: no links to other health problems
- Expert opinion of researchers around the world is that there is no link between MMR immunisation and autism, autism spectrum disorders or bowel disease.
Department of Health and Ageing, Australian Government: Australian Immunisation Handbook - 10th Edition, 2013:
SA Health: 'Mumps - symptoms, treatment and prevention'
Novak M 'Mumps' in Garfunkel LC, Kaczorowski J, Christy C, (Ed) 'Pediatric Clinical Advisor', Mosby 2007.
The information on this site should not be used as an alternative to professional care. If you have a particular problem, see a doctor, or ring the Parent Helpline on 1300 364 100 (local call cost from anywhere in South Australia).
This topic may use 'he' and 'she' in turn - please change to suit your child's sex.
|
February 11, 2014
Plant growth is orchestrated by a spectrum of signals from hormones within a plant. A major group of plant hormones called cytokinins originate in the roots of plants, and their journey to growth areas on the stem and in leaves stimulates plant development. Though these phytohormones have been identified in the past, the molecular mechanism responsible for their transportation within plants was previously poorly understood.
Now, a new study from a research team led by biochemist Chang-Jun Liu at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory identifies the protein essential for relocating cytokinins from roots to shoots.
The research is reported in the February 11 issue of Nature Communications.
Cytokinins stimulate shoot growth and promote branching, expansion and plant height. Regulating these hormones also improves the longevity of flowering plants, tolerance to drought or other environmental stresses, and the efficiency of nitrogen-based fertilizers.
Manipulating cytokinin distribution by tailoring the action of the transporter protein could be one way to increase biomass yield and stress tolerance of plants grown for biofuels or agriculture. "This study may open new avenues for modifying various important crops, agriculturally, biotechnologically, and horticulturally, to increase yields and reduce fertilizer requirements, for instance, while improving the exploitation of sustainable bioenergy resources," Liu said.
Using Arabidopsis, a small flowering plant related to mustard and cabbage that serves as a common experimental model, the researchers studied a large family of transport proteins called ATP-binding cassette (ABC) transporters, which act as a kind of inter- or intra-cellular pump moving substances in or out of a plant's cells or their organelles. While performing gene expression analysis on a set of these ABC transporters, the research team found that one gene – AtABCG14 – is highly expressed in the vascular tissues of roots.
To determine its function, they examined mutant plants harboring a disrupted AtABCG14 gene. They found that knocking out this transporter gene resulted in plants with weaker growth, slenderer stems, and shorter primary roots than their wild-type counterparts. These structural changes in the plants are symptoms of cytokinin deficiencies. Essentially, the long-distance transportation of the growth hormones is impaired, which causes alterations in the development of roots and shoots. The disrupted transport also resulted in losses of chlorophyll, the molecule that transforms absorbed sunlight into energy.
The team then used radiotracers to confirm the role of the AtABCG14 protein in transporting cytokinins through the plants. They fed Carbon-14-labeled cytokinins to the roots of both the wild-type and mutant seedlings. While the shoots of the wild-type plants were full of the hormones, there were only trace amounts in the shoots of the mutant plants, though their roots were enriched. This demonstrates a direct correlation between cytokinin transport and the action of AtABCG14 protein.
"Understanding the molecular basis for cytokinin transport enables us to more deeply appreciate how plants employ and distribute a set of signaling molecules to organize their life activity and for their entire body building," Liu said.
"From a biotechnology view, manipulating the activity of this identified transporter might afford us the flexibility to enhance the capacity and efficiency of plants in energy capture and transformation, and the storage of the reduced carbon, or the ability of plants to adapt to harsh environments, therefore promoting either the production of renewable feedstocks for fuels and bio-based materials, or grain yields to meet our world-wide food and energy demands."
This work was completed in concert with researchers from Palacky University & Institute of Experimental Botany, and St. John's University. It was funded by DOE's Office of Science and the National Science Foundation toward understanding plant cell wall biogenesis and functions of ABC transporters, respectively.
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
2014-1608 | Media & Communications Office
|
This lesson builds oral vocabulary and phonological awareness by separating words in sentences. In 24 pages, the child will
• preview four color words: red, blue, yellow, and green,
• separate the words in simple sentences by clapping,
• review key vocabulary, and
• trace the first letter of each color word.
The activities are aligned to the following standards:
Common Core: ELA Kindergarten
RF.1.1c: Understand that words are separated by spaces in print.
Also: RF.1.1a, RF.1.1b, RF.4.4, L.1.1b, L.1.2c
Texas Education Knowledge and Skills: ELA Kindergarten
1.E: Recognize that sentences are comprised of words separated by spaces and demonstrate the awareness of word boundaries.
Also: 1.A, 1.C, 1.F, 3.D, 15.B, 17.A
This prereading pack is the first lesson of Book 1 from the Newitt Beginning Reading and Writing Program. The program provides a fundamental yet integrated approach to reading, writing, listening, and speaking the English language. The standards-based series of 24 books guides novice learners through a full year curriculum. Each lesson discovers, connects, and practices the basics of English.
A key feature of the program is that it uses linear incremental steps and repeated practices linked to higher level thinking skills. This approach gives the learner a solid foundation of reading and writing experiences. In the process, the learner gains routine study habits, motivation to learn, confidence, and mastery.
Please print the lesson double-sided and then cut in half, so that each page is 5 1/2” by 8 1/2”. Each lesson has 24 sides.
This lesson connects to "Identify Categories: Lesson 1, Book 3 (Newitt Prereading Series)". The companion lesson builds comprehension skills using vocabulary introduced here.
Please click here for more lessons on separating words in sentences.
“Developmentally appropriate and challenging for all ages striving to learn English. The Newitt Beginning Reading and Writing Program will ensure success for every scholar that embarks on a path to learn the English language.”
- Dr. Patricia A. McNames, former teacher, middle school guidance counselor, elementary principal, and associate professor.
|
A valley in Mexico has provided clues about the Martian crater currently being explored by NASA's Rover Curiosity, after similar rocks were found on both sites.
The biological reserve Cuatro Ciénegas, a valley in the northern state of Coahuila, has similar properties to the Gale crater on Mars.
Millions of years ago, fire and water formed gypsum rocks that were then locked into the Mexican valley. Gypsum rocks are made up of sulphate mineral and are formed as the result of evaporating sea water in massive prehistoric basins.
Exploration of the Gale crater by NASA has also detected gypsum rocks. Scientists now believe that a large meteorite crashed into the planet's 'primitive sea'.
The presence of gypsum rocks on Mars indicates that water rich in minerals was present. It also shows that sulphur was able to form because of the meteor that caused the Gale crater.
Valeria Souza, evolutionary ecologist at the National Autonomous University of Mexico, said: "Cuatro Ciénegas is extraordinarily similar to Mars. As well as the Gale crater where Curiosity is currently located on its exploration of the red planet, this landscape is the home to gypsum formed by fire beneath the seabed."
Scientists have not yet been able to confirm tectonic movement on Mars.
However, the study of bacteria at Cuatro Ciénegas provides an insight into how species could survive in a place with so little nutrients, and has implications for how life could survive on the red planet.
Souza said: "This oasis in the middle of the Chihuahuan desert is a time machine for organisms that, together as a community, have transformed our blue planet yet have survived all extinctions. How they have managed to do this can be revealed by their genes."
One bacterium has adapted to live without nitrogen, while another can live with virtually no phosphorus.
"The bacterial communities have survived all types of cataclysms here such as the extinction of the dinosaurs or the majority of marine creatures. But the only thing they are not adapted for is the lack of water," Souza added.
Luis David Alcaraz, a Mexican researcher participating in the study from the Higher Public Health Research centre of Valencia, said: "Understanding the usage and exploitation strategies of phosphorus is necessary in understanding what could happen in extreme scenarios like on other planets where there is a possibly serious limitation to this and other nutrients."
|
A team of scientists from the University of Portsmouth have developed new scientific tests to better understand the effects of pollution on wildlife behaviour.
The field of behavioural toxicology is gaining traction within the environmental sciences with an increasing number of studies demonstrating that chemical exposure can alter animal behaviour.
An organism's behaviour is fundamentally important to its survival, through feeding, finding mates, and escaping predators. Any chemical that interferes with these responses has the potential to impact the food chain.
Using small shrimp-like crustaceans called amphipods, which are commonly used in environmental toxicology monitoring, a team led by Professor Alex Ford and PhD student Shanelle Kohler has been designing experiments to best answer these questions. Having previously determined that these animals prefer to swim away from light (negative phototaxis) and to stay in contact with the sides of their tanks (positive thigmotaxis), they first set about asking whether these preferences could be altered by the size and shape of the testing tanks.
The results from their study, published this month in the journal PeerJ, found that tank size and shape can alter their exploratory behaviours, the time they spent next to a wall (wall-hugging) and the speed at which they swam. In a second set of experiments, the results published in this month's Aquatic Toxicology journal, they wanted to determine whether two closely related species (one marine and one freshwater amphipod) reacted in the same way to a stimulus of light. Interestingly, they found that the two species reacted very differently to a short (two-minute) burst of light.
Professor Ford from the University's Institute of Marine Sciences, said: "These results are really important for us and the scientific community in determining the correct experimental design. If scientists don't give the organisms the space to behave they might not detect the impacts of chemical pollution."
He added: "Environmental toxicologists around the world often use similar processes but not always the same species for their pollution testing. This could lead to two groups of scientists getting very different results if their study organisms are not the same species. For example, a chemical might have the capacity to alter a certain behaviour but if two closely related species have subtly different reactions to a stimulus (light for example) then this might mask the impacts of the pollutant."
Shanelle Kohler said: "These results highlight the importance of standardising behavioural assays, as variations in experimental design could alter animal behaviour. It is essential to gather baseline behaviours on your test organism to ensure that they are sensitive to your assay and prevent erroneous interpretations of results, for example is your animal unaffected by your contaminant or are they simply not sensitive to your assay?"
Co-author on the paper Dr Matt Parker, Senior Lecturer in Behavioural Pharmacology and Molecular Neuroscience at the University of Portsmouth, said: "One of the critical issues in scientific ethics is the necessity to choose the least sentient organism possible for use in research. This set of studies has highlighted behavioural diversity in two closely related invertebrate species, suggesting that these organisms may be useful for studying the basis of more complex behaviours, and the potential to study the effects of different drugs on behavioural responses."
|
Critical Facts
Earth's climate is very complex. Below is a summary of the key factors affecting Earth's climate.
1. There are at least 22 climate change causes (drivers).
The sheer number of drivers "speaks" to the complexity of understanding climate change. The principal influence of each of the drivers and their impact is shown in the table below. Some drivers exert a more immediate influence, while others contribute over much longer time scales. CO2 (carbon dioxide) is one of the drivers, and while everyone agrees that CO2 does contribute to climate change as a greenhouse gas, the magnitude of CO2's influence has not been settled within the overall scientific community, the political systems, the media, or the population in general. We have determined that CO2's influence, while significant at low concentrations in the atmosphere, is of minor impact as more and more is added to the atmosphere, a view that we address in the following Critical Facts list. For an expanded discussion of each of these 22 climate drivers, refer to our Recommended Reading List, "Fire, Ice, Paradise".
3. The sun supplies over 99% of the heat to Earth's surface.
Not only is this statement true, but the amount of solar irradiance (heat) leaving the sun does vary. A well-followed example is the number of sunspots present on the surface of the sun at any one time and the length of the sunspot cycles. More sunspots, which have bright, hot haloes around them, correlate well with a warmer surface of the Earth. Fewer sunspots generally correlate with cooler times. The length of the sunspot cycles also correlates with Earth's temperature (see figure below). Trying to forecast climate change with models that do not include solar variations cannot be expected to produce reliable outcomes or ranges of outcomes.
That no one factor correlates perfectly with whether Earth's surface is warming or cooling is expected: while the sun is putting out a little more or less heat, Earth's distance from the sun in its orbit may be changing, or the amount of cloud cover on Earth may be changing and exerting an opposite effect on the climate. Some of the drivers are predictable, like Earth's orbital variations, while others, like solar intensity or volcanic eruptions, are not predictable at this time. Regarding atmospheric CO2: as oceans warm they release more CO2 into the atmosphere, just as a bottle of carbonated water does as it warms, and as oceans cool they tend to absorb more CO2 from the atmosphere.
In summary, the sun is Earth's ultimate heat engine and thus plays a dominant role in determining global warming or cooling. Changes in ocean currents also affect Earth's climate in a major way, but the oceans receive their heating predominantly from the sun.
4. Earth's temperature changes, then CO2 follows.
In 1999, more than a decade after the "manmade CO2 is the major cause of global warming" hypothesis became popular and many prominent politicians and academics had taken strong public positions in favor of that view, a startling discovery came to light: atmospheric CO2 changes followed, rather than caused, temperature changes (see figure below). This was a paradigm-changing discovery, because CO2, which had been assumed to be the cause of climate change, was now revealed to be following it. Yet those who had declared so boldly that man was committing another sin against nature tried to play down the significance of the discovery. Even those who made the discovery were almost apologetic in releasing the information. After all, we now had a generation or more that had been taught that almost anything humans did was bad for the planet, including our school children, now grown up, as well as teachers and professors who had taught only one view of the subject to their classes for years.
The push was either to prove it wasn't true or to minimize the significance of the finding that changes in atmospheric CO2 were not the initial cause of temperature changes. So the argument moved to "well, it still causes positive feedback, or amplifies the temperature change," which in the case of warming had been caused by a (shock!) naturally occurring event. It goes like this: temperature rises, which causes a rise in CO2, presumably released from the oceans; that additional CO2 causes a further rise in temperature, which then causes a release of still more CO2, and so on. This, they say, could lead to runaway global warming and roast the planet. The problem is that this runaway warming, which they were and still are predicting, has never occurred, at least not in the last 500 million years of Earth's temperature history, even though naturally occurring CO2 levels have been as high as 7,000 parts per million, versus the warming catastrophists' predictions that a simple doubling or less of our current 400 ppm will cause runaway warming. Despite the real-world, empirical data, the logic seems to be lost on those who want to believe that man, once again, is the culprit. We concur that man has been the culprit many times and in many ways on environmental issues, but in this case, from what we have now learned regarding CO2's influence on Earth's plant and animal kingdoms, man's activities causing a rise in airborne CO2 appear totally beneficial. Remember, a cause does not follow an effect.
5. CO2's ability to trap more heat declines very rapidly.
Decades ago it was determined that CO2's ability to trap heat rising from Earth's surface declines logarithmically, i.e., very rapidly (see first figure below). This means that early on, at low concentrations, CO2 does exert a significant warming of the lower atmosphere. But as the absorption bands in which CO2 captures this rising heat begin to get saturated, CO2 can capture less and less heat with each additional unit of CO2. Depending on how sensitive or reactive one thinks Earth is to additional CO2, the level of influence of rising CO2 today can be very small or still of significant impact. Once again, we have chosen the path recommended long ago by Winston Churchill, who once said, "The farther backward you look, the farther forward you are likely to see." As we look back at Earth's climate history, far beyond the popular 1980s and 1990s, which happened to see a supposedly rapid rise in temperature coinciding with a real and admittedly rapid rise in airborne CO2, we find many examples where rises in CO2 were accompanied by declining temperatures (see the Predictions vs. Reality figure below).
These real world observations lead us to believe that Earth is not very sensitive to CO2 and that many other factors have a stronger influence on the climate. This is one of the reasons that one of the world's most prominent scholars, Professor of Meteorology, Dr. Richard Lindzen of M.I.T., has been "going crazy" for decades at humanity's infatuation that CO2 is a major cause of global warming. ("Resisting Climate Hysteria" by Richard S. Lindzen, 7-26-09)
Observe how rapidly (logarithmically) CO2's ability to cause additional temperature rise declines. Warming from a doubling (or tripling) of atmospheric CO2 will be very small. This physical limitation explains how Earth could have entered an ice age when CO2 levels were several thousand parts per million. Note that CO2's warming effect is strong only at low levels of atmospheric CO2, predominantly at less than 100 ppm.
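The logarithmic dependence discussed here is commonly written, in a widely used simplified expression, as ΔF = 5.35 · ln(C/C₀), giving radiative forcing in W/m² relative to a baseline concentration C₀. A quick sketch shows the key property: each doubling adds the same fixed increment, so each additional ppm contributes less than the one before.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified radiative forcing (W/m^2) relative to a baseline
    concentration, using the common approximation dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same increment (~3.7 W/m^2), illustrating the
# logarithmic "diminishing returns" of added CO2.
for c in (280, 560, 1120):
    print(c, round(co2_forcing(c), 2))
```

Whether that fixed per-doubling increment translates into a large or small temperature change is exactly the sensitivity question the surrounding text debates; the formula itself only captures the logarithmic shape.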
6. Empirical observations indicate CO2 is not a major driver of climate change.
As previously discussed in Critical Facts 4 and 5, empirical (real-world) observations are important. A hypothesis such as "manmade CO2 is a (or the) major cause of global warming" must stand up to testing by real-world observations to remain valid. Review Critical Facts 4 and 5 again to see that Earth's temperatures have declined for decades even as CO2 levels continued to rise. Observe in the figures below how temperatures have varied even though the ice cores and IPCC data show that CO2 levels remained quite stable at approximately 280 ppm, with the exception of the last 150 years. Also note that the recent rate of change is not unprecedented, a fact now admitted by Professor Phil Jones, who directed the Climatic Research Unit at the University of East Anglia that fed data to the IPCC committees.
Clearly other drivers were powering climate change on decade-long scales. Looking back over 1,000 years, we find that the supposedly stable interglacial CO2 concentration of 280 ppm, as professed by the manmade warming advocates, was accompanied by centuries-long temperature rises and falls. Doesn't this make you want to stop and think what could be the agenda of those who would choose to ignore the real data and even be willing to reduce the input of something as beneficial to the Earth as CO2?
Our goal is to educate the public on these astonishing benefits of more airborne CO2 and in doing so, help protect these benefits from being diminished in the name of a hypothesis that we believe has been proven false.
7. Climate models that have focused on CO2 have been very poor at hind-casting Earth's known climate history, as well as at their recent forecasts of the future.
Climate models that have focused on, and been tuned to demonstrate, a significant impact by CO2 on Earth's climate have generally failed the test of simply explaining the changes in Earth's past climate, where the answers are already known. Couple this with the fact that the forecasts of the Intergovernmental Panel on Climate Change (IPCC) models of the past 19 years have already failed in their projections of early 21st-century global warming (see figure below): current global temperatures fall below even the lowest rise projected by the IPCC models. These forecasts are no longer called forecasts (since they have failed) but are now being called merely possible scenarios. Regardless, none of the models or scenarios allowed for what is actually occurring. The most likely cause of the failure of the IPCC models is that the modelers were charged with finding the manmade signal in the climate and, given the lack of precision of the many factors that go into the models and the modelers' inability to pin down the magnitude of the effect of many factors, it is likely that none of the modelers wanted (or expected) to show no significant effect from manmade CO2. Astonishingly, the modelers were even told by the leaders of the IPCC to ignore variations in solar effects, which would include not only variations in solar intensity but also the possible effect of solar magnetic variations on shielding the Earth from cosmic rays, whose potential influence on cloud generation could help cool the Earth.
Note the monthly global temperatures: up, down, steep rise, slow rise, etc., all despite a steady increase in CO2 that is in Earth's atmosphere 24/7, worldwide.
8. The science of what is causing global warming, including humanity's impact, is clearly not settled; debates are badly needed.
Nearly all media, the politicians, and even some of the scientists who declared so firmly their belief that manmade CO2 is the major driver of recent (until about 8 years ago) global warming have continued to profess that the science is settled and that we should now turn to how to get rid of manmade CO2. The consensus cry is not only misleading, it is simply not true (see figures below). Wouldn't you prefer to see real scientific debates on what is or is not causing global warming or global climate change, instead of just hearing the false claim that the science is settled? Demand the debates. When someone claims the science is settled, ask them if they would be willing to publicly debate the issue. There should be plenty of scientists nearby to challenge them in the debate. And if you, too, begin to have doubts about the veracity of the earlier consensus, weigh your doubts against the tremendous and extensively documented benefits of more CO2 for the plant and animal kingdoms.
9. More research is needed on all climate drivers, not just a focus on one driver.
Billions of dollars have been spent trying to assess the impact of manmade CO2 on the climate, to the detriment of more thorough examinations of the impacts of the other, natural drivers of climate change. Also, few dollars have been spent on researching what mitigating measures would be needed should the climate, from whatever causes, get appreciably warmer or, in particular, colder. Research on ice cores and deep-sea sediments indicates that when Earth is colder, it is also windier and drier. These conditions are very bad for agriculture and humanity, as well as for the robustness and number of habitats available to plants and animals.
|
Starr Sackstein combines drawing and literature in her classroom. In her article, “Make it Visual: Students Draw Austen’s characters,” she believes that the process of students drawing what they read “connect them to the work and allow them to make meaning on their own.” Through drawing, we can bring literature to life in the classroom. Sackstein argues that “Too often students are accustomed to teachers having an expectation of what they are supposed to learn about a text that they don’t know how to approach a text on their own. They never learn to trust their gut when they read.” Arguably, drawing can bridge this gap in literary studies.
More literature teachers should consider incorporating Sackstein's methods into their own classrooms.
|
The Indefinite Articles: A, AN
The Indefinite Articles – A, An – are used to represent one of whatever noun they are modifying.
They are called indefinite because, though they refer to one instance of the noun they modify, they can represent any instance of that noun.
They are non-specific (not definite).
A screwdriver (any screwdriver)
A car (any car)
A horse (any horse)
An apple (any apple)
An Orangutan (any orangutan)
An electric shaver (any electric shaver)
The Article “A” is used before words that begin with a consonant sound
The Article “An” is used before words that begin with a vowel sound (note that the sound, not the spelling, decides: “a university” but “an hour”)
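Because the choice follows the first sound of the word rather than its first letter, a purely letter-based rule needs exception lists. Here is a tiny sketch of that heuristic; the exception lists are illustrative, not exhaustive:

```python
def indefinite_article(noun):
    """Pick 'a' or 'an' by first letter -- a rough heuristic, since the
    real rule depends on the first SOUND ('an hour', 'a university')."""
    # Illustrative exception lists; robust usage needs pronunciation data.
    vowel_letter_consonant_sound = ("uni", "use", "one", "eu")
    silent_h = ("hour", "honest", "heir", "honor")
    word = noun.lower()
    if word.startswith(silent_h):
        return "an"
    if word.startswith(vowel_letter_consonant_sound):
        return "a"
    return "an" if word[0] in "aeiou" else "a"

print(indefinite_article("apple"))       # an
print(indefinite_article("car"))         # a
print(indefinite_article("hour"))        # an
print(indefinite_article("university"))  # a
```

The prefix lists show why the rule is usually taught by sound: "university" starts with a vowel letter but a consonant sound ("yoo"), and "hour" the reverse.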
The Definite Article: The
“The Indefinite Article” on Wikipedia
|
See the World through Infographics: Information Visualization
See and learn the world through vivid infographics such as maps, column charts and so on. Communicate more effectively by means of information visualization.
Introduction to Information Visualization
To begin with, let's define information visualization. Information visualization is the communication of knowledge, advice, or instruction through imagery that is easily processed by the human brain. This method integrates information into pictures, called infographics. It can accurately and clearly depict complicated information while making reading more interesting and convenient. Below are a wide range of examples demonstrating the benefits of infographics.
See how to implement infographics to Improve Presentation, Improve Business Reports, and Improve Your Essay Writing, and gain better Information Conveyance.
Infographics Example 1 World Map
A World map template is available for free download and print. You can apply it in various areas, such as geography, marketing and transportation. See more details about its application in article Customizable World Map Presentation Templates.
Infographics Example 2 World Data map
From this table, you can learn some common knowledge about the world. All shapes in this diagram are pre-drawn, so users don't need any drawing skill or experience. Just drag and drop suitable symbols to visualize your ideas.
Infographics Example 3 Cultural Difference Diagram
With a few simple images, some differences between western and eastern culture are clearly and interestingly laid out.
Infographics Example 4 Top 10 Richest Countries
You can see the ranking numerically as well as visually. The figures of the 10 richest countries in terms of GDP are compared intuitively through such a column chart. Note: the data is collected in 2014.
6 Tips for Infographics Design
- Information communication is the priority.
- Make it knowledgeable.
- Make it attractive.
- Make it intuitive.
- Ensure the accuracy.
- Make it personalized.
Automatic Infographics Design Software
The above diagrams are all created by Edraw, an all-in-one diagramming tool with predefined symbols. Edraw offers you a variety of diagram makers, and provides diagram examples and libraries of ready-made templates and shapes for quick and simple creation of professional-quality diagrams. All templates and examples are editable, printable and for free download.
It also supports export of vector graphic multipage documents into multiple file formats: vector graphics (SVG, EMF, EPS), bitmap graphics (PNG, JPEG, GIF, BMP, TIFF), web documents (HTML, PDF), PowerPoint presentations (PPT), and Adobe Flash (SWF).
|
New Dinosaur Species: Cold-Weather Hadrosaur Found in Alaska
A new cold-weather hadrosaur species was discovered in the Liscomb Bone Bed, along the Colville River. This species, known as Ugrunaaluk kuukpikensis, roamed the North Slope of Alaska and endured long periods of winter darkness and snowy conditions.
This new species was a type of duck-billed dinosaur that could grow up to 30 feet long. It was also equipped with hundreds of teeth suited to eating coarse vegetation, according to researchers from the University of Alaska Fairbanks.
The fossils were excavated from a fossil-rich location known as the Prince Creek formation and are dated to be roughly 69 million years old. This find represents the northernmost dinosaurs known to date.
"Today we find these animals in polar latitudes," Pat Druckenmiller, Earth sciences curator at the Museum of the North, said in a news release. "Amazingly, they lived even farther north during the Cretaceous Period. These were the northern-most dinosaurs to have lived during the Age of Dinosaurs. They were truly polar."
At the time that the Prince Creek Formation was deposited, Arctic Alaska was covered in polar forests, since the climate was so much warmer, the release noted.
"The finding of dinosaurs this far north challenges everything we thought about a dinosaur's physiology," Gregory Erickson, a Florida State University researcher who specializes in bone and tooth histology, said in a statement. "It creates this natural question. How did they survive up here?"
Over 6,000 bones of this new species have been excavated from this site. Since a large number of them were juvenile fossils, "It appears that a herd of young animals was killed suddenly, wiping out mostly one similar-aged population to create this deposit," Druckenmiller added.
Researchers suggest this area holds an abundance of new species waiting to be found. Their findings were recently published in the journal Acta Palaeontologica Polonica.
|
1.Dynamic systems theory (Adolph, Karasik, & Tamis-LeMonda, 2010; Thelen & Smith, 2006).
Infants assemble motor skills for perceiving and acting, and the two are coupled together. In order to develop motor skills, infants must perceive something in the environment that motivates them to act, then use their perceptions to fine-tune their movements. Motor skills thus present solutions to the infant's goals. Whatever infants do, their behaviors reflect their aims: for example, climbing to high places to get the toys they want, touching a picture and pointing it out to others, or standing to kick or throw a ball without falling. Both gross motor skills (e.g., moving their arms and walking) and fine motor skills (e.g., grasping a toy, using a spoon, or anything that requires finger dexterity) can be seen in their movement.
Erikson (1968) stressed that independence is an important issue in the second year of life. Erikson's second stage of development is identified as autonomy versus shame and doubt. Autonomy builds as the infant's mental and motor abilities develop. At this point, not only can infants walk, but they can also climb, open and close, drop, push and pull, and hold and let go. From my observation I found that the kids are able to put several toys into a basket, and they can also open the door of the cabinet. What's more, the boy who climbed successfully to the high place yelled out, very excited. They feel pride in these new accomplishments and want to do everything themselves.
By age 2, children can use language to define their feeling states and the context that is upsetting them (Kopp, 2011). During my observation I found that the boy who kept dropping his spoon was angry and shouted "No, No" when the caregiver stopped his behavior. Besides, the boy who was not willing to wash his hands also showed his impatience to the caregiver and showed no response to the caregiver's...
|
(dailyRx News) According to researchers at the Military Mental Health Research Center and the Rudolf Magnus Institute of Neuroscience, soldiers' brains adapt to perceived threats rather than actual events during a mission.
In other words, stress is caused by what the individual views as a threat, not necessarily by the actual threat that exists. This perceived threat is responsible for neural changes.
Combat-induced stress is known to cause complications such as fatigue, slower reaction times, disconnection from one's surroundings, and difficulty prioritizing.
In a study of 36 soldiers deployed in Afghanistan between 2008 and 2010, researchers found that continual exposure to stress leads to changes in the neural circuits of the brain that control fear and vigilance. The study is the first of its kind to have a control group comparison.
Soldiers underwent two brain scans in addition to filling out a questionnaire about their combat experiences. Researchers found increased activity in the amygdala and insula, the parts of the brain that control fear and vigilance. Changes to the frontal lobe (the emotional center of the brain) depended on how soldiers perceived events during deployment. Although the soldiers did not develop post-traumatic stress disorder (PTSD), the changes to their brains led to similar symptoms that lasted for at least two months after the soldiers returned home.
Future studies will focus on how long these changes last in soldiers' brains, and whether perceived high levels of stress put soldiers at increased risk of developing symptoms of post-traumatic stress.
The study is published in the journal Molecular Psychiatry.
|
EL wire is powered by alternating current (AC), not direct current (DC). When electricity is applied to an electroluminescent wire, electrons in the wire's phosphor coating are knocked to a higher energy level or orbital. When these electrons move back to their original energy level, they emit light particles called photons. Only at the point when the electrons release their extra energy and return to their previous state will they release photons, causing the phosphor to glow. So, why is AC better than DC for electroluminescent wire? If direct current (DC) were applied to the EL wire, light emission would stop as soon as the current stops exciting the electrons. As DC moves in only one direction, the glowing would begin and end very quickly as the current passed.
For EL wire to continuously glow, it needs a constant supply of electrical current. With alternating current (AC), the electrical current moves back and forth between the positive and negative poles of the circuit. This alternating polarity means that electricity constantly flows through the circuit, giving the EL wire a constant electrical supply. Consequently, the phosphor atoms are continuously being ionized, or having their electrons change energy level. Since these ions are always having their electrons jump from one energy level to another -- the process by which they emit light -- they will glow continuously. Additionally, AC can provide higher voltages than DC because alternating current can be stepped up or down by using transformers. This is key for determining how brightly your EL wire will shine. Higher voltages cause more electron excitation, which in turn leads to a brighter light [source: PBS].
For EL wire applications that won't work with standard electrical outlets, battery power is the way to go. Because batteries provide direct current (DC), your battery-powered EL wires require an inverter, which converts DC into the AC you need. When it comes to inverters, two factors will help you determine what you need: the project you have in mind and the power you'll need to make it work. Your EL wire's length determines how much power you'll need, and the power you use determines how bright your EL wire will appear. Using an inverter that's too small in relation to the length of the wire will hinder its ability to produce bright light. An inverter that's correctly sized will dramatically increase EL wire's brightness.
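As a rough sketch of the sizing logic above, suppose an EL wire draws an approximately fixed current per metre at the inverter's output; the inverter then needs at least that much capacity, plus some headroom. The figures below (`ma_per_m`, `headroom`) are hypothetical placeholders for illustration, not manufacturer specifications:

```python
def min_inverter_capacity_ma(wire_length_m, ma_per_m=10.0, headroom=1.25):
    """Estimate the inverter output current (mA) needed to drive a given
    length of EL wire at full brightness.

    ma_per_m and headroom are HYPOTHETICAL placeholder values; real draw
    per metre varies with wire gauge and inverter voltage/frequency.
    """
    return wire_length_m * ma_per_m * headroom

# A 3 m costume run under these placeholder assumptions:
print(min_inverter_capacity_ma(3.0))  # 37.5 mA
```

An inverter rated below this estimate would dim the wire, matching the behaviour described above; doubling the wire length doubles the required capacity.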
|
Scientists have discovered a decline in the amount of oxygen in the world’s oceans, a result of climate change that could have “detrimental consequences” on marine life, fisheries, and coastal economies, according to a study in the journal Nature.
Climate models have long predicted such a decline. Warmer water holds less dissolved gas, and according to the U.S. Environmental Protection Agency, sea surface temperature rose at an average rate of 0.13°F per decade from 1901 through 2015. In addition, as surface water heats up, the ocean mixes less, which prevents oxygen pulled from the atmosphere and created by surface-dwelling marine life from reaching deeper, denser waters.
The new paper, published by climate and marine scientists at the GEOMAR Helmholtz Center for Ocean Research in Germany, concluded that the ocean’s dissolved oxygen content has declined 2 percent globally, with the Pacific and Arctic Oceans experiencing the sharpest drops. Climate models project that the oceans could lose up to 7 percent of their oxygen by 2100.
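The per-decade warming trend quoted above can be turned into a cumulative figure by simple linear extrapolation, which is only a first-order sketch since real trends are not perfectly linear:

```python
def total_change(rate_per_decade, start_year, end_year):
    """Linearly extrapolate a per-decade trend over a span of years."""
    return rate_per_decade * (end_year - start_year) / 10.0

# EPA figure from the text: sea surface temperature rose ~0.13 F per
# decade from 1901 through 2015.
print(round(total_change(0.13, 1901, 2015), 2))  # ~1.48 F cumulative
```

That cumulative warming of roughly 1.5°F in surface waters is the mechanism the article describes: warmer water holds less dissolved gas and mixes less, so less oxygen reaches depth.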
|
Handwriting analysis is the interpretation of the symbols and strikes in a person’s handwriting to better understand his or her personality.
You can tell how a person is feeling, if he or she has any physical ailments, and even some things about his or her past from handwriting.
To start, it is good to get handwriting in cursive. While you can still tell a lot about a person in print, there is more of an emotional connection in cursive writing so you get a more detailed read. There are five basic elements that I always look for in a person’s handwriting:
In handwriting analysis, there are three zones: upper, middle, and lower. The three zones represent the three major parts of the body: the upper zone the head, the middle zone the torso, and the lower zone the lower, erogenous zones of the body. It is always good to determine which zone is most dominant in a person's handwriting.
The upper zone includes letters that touch the top line, such as lowercase l's, t's, and b's. If the upper zone is dominant, then the person is probably more of an intellectual or philosophical type.
The middle zone includes most lowercase letters, such as a, e, i, o, and u. If the middle zone is dominant, then that person probably bases many decisions on emotion, since the heart is located in the middle zone.
The lower zone consists of all the letters that drop below the lower line, such as lowercase y's, g's, q's, p's, and j's. If the lower zone is most dominant, then that person is probably a physical person, materialistic, or one with a huge sexual appetite. Checking for the dominant zone is always a good first step.
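The zone rules above amount to bucketing letters by where their strokes sit. As a toy illustration only: a real analyst judges stroke size visually, so counting zone letters in a transcription is merely a rough proxy, and the letter lists below are just the examples given in the text.

```python
ZONES = {
    "upper": set("ltb"),     # letters that reach the top line
    "middle": set("aeiou"),  # letters confined to the middle zone
    "lower": set("ygqpj"),   # letters that drop below the lower line
}

def dominant_zone(sample):
    """Count which zone's letters appear most often in a transcribed
    sample -- a toy proxy for the visual dominance an analyst judges."""
    counts = {zone: 0 for zone in ZONES}
    for ch in sample.lower():
        for zone, letters in ZONES.items():
            if ch in letters:
                counts[zone] += 1
    return max(counts, key=counts.get)

print(dominant_zone("little bottle"))  # upper
```

Under the text's interpretation, a sample dominated by upper-zone letters would suggest an intellectual bent, middle-zone dominance an emotional one, and lower-zone dominance a physical one.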
The slant is also very important when it comes to handwriting because it represents how a person relates to other people. There are three slants in handwriting: slant to the left, the right, and straight up.
When a person’s handwriting is slanted to the left, it means that this person tends to be emotionally withdrawn and repressed.
When the writing is slanted to the right, it means that the person leans more toward people and is overall more of a people person.
When a person’s handwriting slants straight up, this represents a person who is more of the independent type.
All Three Slants in One Word
This represents a person who can change moods easily, and can be unpredictable when it comes to communicating with others. This is usually a dangerous sign when meeting someone new.
Pressure is important to be able to tell how easily or hard it is to persuade this person. The best way to test for pressure is to feel the back of the paper. If you can feel the writing through the other side of the paper, then it is probably hard to persuade that person if his or her mind is made up. If you don’t feel any pressure, then that person can probably be persuaded easily.
There are three basic sizes when it comes to handwriting: big, medium, and small. This can tell many things about a person.
If the handwriting is big it usually means that the person is more of a people person and likes to socialize. A person with big handwriting will probably attempt a project by looking at the big picture first and then focusing on the smaller details later.
If a person writes small, it is usually the opposite of big handwriting. This person is more of an introvert and keeps more to themselves. They attempt projects by paying attention to the details first and focusing on the big picture later. A small writer is also more comfortable in small spaces.
If a person has medium sized handwriting, this is the medium between the other two extremes. They can take on any of the characteristics from smaller and bigger handwriting. The size of handwriting can also change depending on how a person is feeling at that time.
This is the most influential letter in the entire alphabet, because it represents you as an individual. This letter shows how one’s relationship is with their mother and father.
In the personal pronoun I there are two loops. The loop that touches the top line is the mother loop. The loop that touches the lower line is the father loop. There are many variations to this letter, and the meaning depends on where the variation occurs and what type it is. If there is no father loop present, then there was probably no father figure in that person’s life. If there is no mother loop, there was probably no mother figure in that person’s life. If the straight line in the father loop is curved, that means the person lets their dad get away with too many things. If the tip of the mother loop is pointed, that means there is some aggression toward the mother. If the two loops touch, that means the mother and father are probably close. It is always good to see which loop is bigger, to see which parent had a bigger influence on the individual.
There are many reasons to take up handwriting analysis as a hobby, or even a possible career. While this only shows a brief explanation of what it is and what to look for, I highly suggest that everyone reads more literature on handwriting analysis themselves. You will be pleasantly surprised as to how relevant it really is.
|
Comparing Energy Resources: Pros and Cons
Lesson 3 of 6
Objective: Students will be able to compare the pros and cons of various energy resources and determine which resource is the best option to both meet our future energy needs and minimize our environmental impact
This lesson is a follow up to the previous lesson on energy resources and gives students an opportunity to conduct short research to learn more about the advantages and disadvantages of various energy resources.
In this lesson, students will form small groups which are each assigned a particular energy resource (e.g., a coal group, a petroleum group, a solar group, etc.). They will then conduct short research about the pros and cons of their particular energy resource and make an informative poster. All students will then do a gallery walk around the room taking notes from the posters of other groups. Following the gallery walk, students will be asked to write a short argument explaining which energy resource(s) best balances our society's energy needs with consideration of the environment. Finally, we have a short discussion and a mock vote to choose two energy resources that our society can use in the future.
Please Note that you will need the following materials for this lesson:
- Poster paper
- Computers or smartphones with internet access
Connection to Standards:
In this lesson, students will conduct short research and integrate multiple information sources to solve a problem, present information clearly and distinctly in a visual format, write an argument using content knowledge and vocabulary, develop claims supported by evidence, and make a concluding statement that follows the established argument.
I begin this lesson with a quick review of the concept of a cost benefit analysis from the economics and the environment lesson way back when in the second unit of class. Since this lesson comes towards the end of the school year, I use the Prom as a means of providing an example of a cost benefit analysis. I ask students to list some of the costs and benefits of going to Prom. Students respond that prom has serious time and financial costs, but that it has the benefit of "a memory of a lifetime". Similarly, skipping prom may have the benefit of saving time and money, but it costs someone the chance to have the typical American Prom experience.
I then review the concept of electricity being a secondary energy resource made from primary energy resources, which was discussed in the previous lesson on energy resources. I do this by asking students to describe some of the ways that energy is being used in the classroom right now. Although a few students will point out that their bodies are busy using up chemical energy from food they've consumed, most students will point out that most of the energy use in the classroom is in the form of devices such as lights, AC, their phones, the sound system, etc. that run on electrical energy. I then ask if electricity is a primary energy resource by asking if these devices are powered by lightning strikes. Students respond that, of course, this is not the case. I then try and elicit the explanation that electricity is a secondary energy resource because it must be produced by converting some other type of energy. I ask students to offer a few examples of these primary energy resources and hopefully they can offer most of the energy resources that we discussed in the previous lesson. As they offer these resources, I write them on the board:
- natural gas
I then explain to students that, like going to the prom or making any other decision, each energy resource has particular pros and cons, and that today they will form small groups to do a quick cost benefit analysis of a specific energy resource, share that information with their classmates, and then decide as a group which energy resource best balances our energy needs and our desire to protect the environment.
Once we've done this quick warm up and review, I distribute the activity instructions and let students know that we'll soon form groups and get started.
Since there are nine energy resources that will be examined, I form groups by having students count off in nines. I then have students go to a table with their new group and I use the list of energy resources on the activity instruction worksheet to assign specific energy resources to the numbered groups (e.g., group 1 gets petroleum, group 2 gets coal, and so on).
Since I have a small class, most groups wind up with only 2 or 3 members, making individual students more accountable to their group.
Once groups have been assigned their energy resource, I have a member from each group get poster paper and markers. If necessary, I allow groups to get a computer to do their research, but most of my students just used their smartphones.
Before students begin work on their poster, I go over the requirements that their poster must include:
- the name of the energy resource
- an image that represents the resource
- a list of pros and cons of the resource
I then let students know they will have 30 minutes to complete their poster and I give warnings when there are 15, 10 and 5 minutes left.
While students are working, I move around the class offering help where necessary. Most groups can get right to work, but some need a little prodding to get going. Since most groups have multiple smartphones, I suggest to struggling groups that one member might do an internet image search for their energy resource (so that they can select and draw a visual representation of the resource) and the other search for pros and cons (e.g., I suggested one group Google "solar power pros and cons"). If there's a third member, they can be tasked with organizing the info on the poster. Because of the time limit, it's important for students to divide up the work so that making the poster is a cooperative and collaborative endeavor.
As you can see in this image of students doing research, there are already several web resources that have essentially organized this information already, it's just a matter of finding it and deciding which to include in their poster. Once most groups are on track, I spend my time just making sure groups know what the time limit is and make myself available for their questions.
However, I try and minimize my involvement because, by this point in the year, I'm hoping my students have acquired the skills to gather and analyze information on their own. That's essentially the point of this lesson: if you're trying to make a decision, how can you sift through the copious amounts of available information to do so?
After all groups have completed posters, students are given 15 minutes to do a "gallery walk" and view other groups' posters. I let students know that they should move quickly enough to visit all stations. Rather than have a student stay with their posters to explain their work, I prefer to have students let their posters speak for themselves. Additionally, because each student will be required to write an argument about which energy resource is the best option for our energy needs, it's important that everyone move around the room to see every poster.
During this time, students use the space provided on the activity instructions sheet to take notes about the different resources from the posters that their peers made. Before they begin, I let them know that they don't need to write down everything from every poster, but that they should rather write down what they think are the most relevant points to the essential question of the lesson: Which energy resources are the best compromise between meeting our energy needs and protecting the environment?
This is an opportunity for more "distributed teaching": students take notes from each other's work. It's less work for me and more engaging for students.
Once the time limit for the gallery walk is up, I have students leave their groups and return to their original seats. I then ask them to focus their attention to the short answer question at the bottom of the activity instructions sheet:
Consider the pros and cons of each energy resource. Which resource(s) do you think societies should use to meet their future energy needs?
Although it's written at the bottom of the page, I deliver the following caveat out loud: Make sure your response includes an explanation of why you made your choice by addressing the disadvantages of the energy source, but ultimately why its advantages make it the best choice.
I ask students if there are any questions they have before beginning, and then give them about 10 minutes of silent time to compose their written response.
If students need assistance, I might quietly point out that the notes they took during the gallery walk should provide a framework to consider which resources have the strongest benefits relative to their costs. Again, since this lesson occurs at the end of the year, I do less handholding here and expect that students are able to use the facts they've collected to make and support a claim.
As you can see in this student's response, they not only explained why nuclear power was their preferred energy resource, they preemptively addressed expected counterclaims and explained why they did not think solar power was a realistic choice to meet our energy needs.
Once students have had time to write their own opinion, we have a short class discussion allowing students to explain their choices for the written segment. I ask students to volunteer their choices and their reasoning. After a few students have done so, I refer back to the list of resources I wrote on the board during the warm up and use it to elicit a wider variety of responses and supporting evidence (e.g., "Ok... did anyone choose wind power? Ok... why did you choose that?" or "A lot of you are mentioning the pros of water power, but I haven't heard any mention of the cons... does someone want to share a reason it wouldn't be a good idea to rely on water as our primary energy resource?"). I continue in this way until arguments for or against most of the energy resources on the board have been heard.
Finally, we have a vote. I tell students that they are members of a congressional committee that will determine only 2 energy resources that Americans can use to supply our energy needs. I then explain that students are allowed to vote for two separate resources, but not twice for the same resource. I pass out two small pieces of paper to each student and ask them to vote by writing a resource of their choice on each paper. I then collect the pieces of paper, tally the votes next to the resource list on the board and announce the winners. In the case of my class, the two "winners" were the environmentally friendly solar and water power.
After this vote, if time allows we close with a discussion about whether students agree or disagree with the class' decision. If not, I ask students to write a short statement about whether they agree or disagree with the choices of the congressional committee along with an explanation why, which we then discuss shortly at the beginning of the next class meeting.
|
Massive sediment movements take place in steep areas when an external factor such as rainfall, volcanic activity, or seismic motion exceeds some critical level, and can cause sediment disasters of potentially high cost. On the other hand, even if a large debris flow takes place, nobody will recognize the occurrence of a disaster if there are no human activities in the area. Japan, located along the tectonically active circum-Pacific island arcs, is a mountainous country, about seventy percent of which consists of steep and hilly areas, with a rainy season and frequent typhoons yielding severe rainfall. In addition to such geological, topographical and meteorological conditions, it is densely populated. Correspondingly, people have long suffered from sediment-induced disasters.
|
There is a challenge for librarians to move on from the discourse of "quick and easy" approaches to information searching to a more "slow and steady" approach, which ultimately may be more successful for learners. This idea developed from the slow movement (e.g., slow eating). It draws on the theories of Poirier and Robinson (2014), who define principles of slow information searching; these were linked to the ACRL framework of information literacy.
Undergraduates today live a "fast" life and this has effects on the brain. This manifests in stress, frustration, unreasonable expectations, and sensory overload. The resulting pedagogical approaches lead to surface learning, and a lack of deep reading of texts. Mindful practices and reflection can counter these problems, and help students focus on tasks, choose quality over quantity, and enjoy learning more.
These librarians aim to incorporate a slow "critical" approach in their own teaching, so aim to be critical, problem posing, creative, intellectual, process-based. Slow principles contribute to students' lifelong learning.
Teaching strategies include focusing on open-ended questions, time for reflection, using debate, and interviewing each other about research topics. Students are asked to evaluate news stories and engage in problem-based learning.
|
While studying radioactivity, we have seen that an α-particle is emitted from radium-226 and radon-222 is obtained. This nuclear change is represented by the following equation:
Such an equation represents a nuclear reaction. The above-mentioned nuclear reaction takes place of its own accord. However, it was Rutherford who first expressed the opinion that, besides natural radioactive decay processes, other nuclear reactions can also occur: a particle x is bombarded on a nucleus X, and this process yields a nucleus Y and a light particle y, as given below:
X + x → Y + y
Rutherford performed an experiment on the nuclear reaction in 1918. He bombarded α-particles on nitrogen. He observed that as a result of this reaction, oxygen is obtained and a proton is emitted. That is
This reaction indicated that when an α-particle enters the nucleus of nitrogen-14, an excitation is produced in it, and as a result oxygen-17 and a proton are produced. Since the experiment of Rutherford, innumerable nuclear reactions have been observed. For a nuclear reaction to take place, certain conditions must be fulfilled.
Before and after any nuclear reaction the number of protons and neutrons must remain the same, because protons and neutrons can neither be destroyed nor created. We elaborate this point with the example of Rutherford’s nuclear reaction of nitrogen-14 and the α-particle:
A nuclear reaction can take place only when the total energy of the reactants, including the rest mass energy, is equal to the total energy of the products. For its explanation we again take the example of Rutherford’s nuclear reaction involving nitrogen-14 and the α-particle. In this reaction the mass of the reactants is:
Mass of nitrogen-14 = 14.0031 u
Mass of helium-4 (α-particle) = 4.0026 u
Total mass of the reactants = 18.0057 u
In the same way the mass of the products is:
Mass of oxygen-17 = 16.9991 u
Mass of hydrogen-1 (proton) = 1.0078 u
Total mass of the products after the reaction = 18.0069 u
This shows that the total mass after the reaction is greater than the total mass before the reaction by 0.0012 u. We know that a mass of 1 u is equivalent to 931 MeV of energy; therefore a mass difference of 0.0012 u is equivalent to an energy of 931 MeV × 0.0012 ≈ 1.12 MeV. Hence this reaction is possible only when this additional mass-energy is supplied to the reactants, that is, when the minimum kinetic energy of the α-particle is about 1.12 MeV. The energy of the α-particles used was 7.7 MeV, which is greater than 1.12 MeV. Had the α-particles been obtained from a source whose α-particles had energy less than 1.12 MeV, this reaction would not have taken place.
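As a quick check of the energy bookkeeping above, the threshold can be recomputed from the quoted masses. This is a minimal sketch; the 931 MeV per u conversion is the rounded value used in this text, and the product works out to about 1.12 MeV (some texts round this to 1.13 MeV):

```python
# Energy balance of Rutherford's reaction: nitrogen-14 + alpha -> oxygen-17 + proton.
# Masses in atomic mass units (u), as quoted in the text above.
m_n14 = 14.0031   # nitrogen-14
m_he4 = 4.0026    # helium-4 (alpha particle)
m_o17 = 16.9991   # oxygen-17
m_h1  = 1.0078    # hydrogen-1 (proton)

reactants = m_n14 + m_he4   # 18.0057 u
products  = m_o17 + m_h1    # 18.0069 u

# The products are heavier, so the alpha particle must supply the
# mass difference as kinetic energy for the reaction to occur.
delta_m = products - reactants     # about 0.0012 u
threshold_MeV = delta_m * 931      # 1 u is roughly 931 MeV

print(f"mass difference = {delta_m:.4f} u")
print(f"minimum alpha energy = {threshold_MeV:.2f} MeV")
```

Since the 7.7 MeV α-particles used comfortably exceed this threshold, the reaction can proceed.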
From the conditions described above we can tell whether any nuclear reaction is possible or not. There is an interesting aspect of a nuclear reaction: it can also take place in the opposite direction. We know that oxygen-17 is obtained by the reaction of nitrogen-14 with an α-particle of appropriate energy. If we accelerate protons with the help of a machine like a cyclotron, increase their velocity, and then bombard these high-velocity protons on oxygen-17, Rutherford’s nuclear reaction will proceed in the backward direction as:
By bombarding different elements with α-particles, protons and neutrons, many nuclear reactions have been produced. Now we describe one such nuclear reaction, with the help of which James Chadwick discovered the neutron in 1932. When beryllium-9 was bombarded with α-particles emitted from a radioactive source, a nuclear reaction yielded carbon-12 and a neutron. This reaction is shown below with an equation:
As the neutron carries no charge, it presented a great amount of difficulty for its identification. However, when neutrons were passed through a block of paraffin, fast-moving protons were ejected, and these were easily identified. It may be remembered that a large amount of hydrogen is present in paraffin, and the nuclei of hydrogen atoms are protons. The emission of protons is the consequence of elastic collisions between the neutrons and the protons. This indicates that the mass of the neutron is equal to the mass of the proton. It may be remembered that when an object of a certain mass collides elastically with another object of equal mass at rest, the moving object comes to rest and the stationary object begins to move with the velocity of the colliding object. The discovery of the neutron brought about a revolution in nuclear reactions: as neutrons carry no charge, they can easily enter the nucleus.
The arrangement of Chadwick’s experiment for the discovery of the neutron.
“Such a reaction in which a heavy nucleus, like that of uranium, splits up into two nuclei of almost equal size, along with the emission of energy during the reaction, is called a fission reaction.”
Otto Hahn and Fritz Strassmann of Germany, while working on nuclear reactions, made a startling discovery. They observed that when slow-moving neutrons are bombarded on uranium-235, then as a result of a nuclear reaction barium, krypton and an average of three neutrons are obtained. It may be remembered that the mass of both krypton and barium is less than the mass of uranium. This nuclear reaction differed from hitherto studied nuclear reactions in two ways. First, as a result of the breakage of the uranium nucleus, two nuclei of almost equal size are obtained, whereas in the other nuclear reactions the difference between the masses of the reactants and the products was not large. Secondly, a very large amount of energy is given out in this reaction. The fission reaction of uranium-235 can be represented by the equation:
Here Q is the energy given out in this reaction. By comparing the total energy on the left side of the equation with the total energy on the right side, we find that in the fission of one uranium nucleus about 200 MeV of energy is given out. It may be kept in mind that there is no difference between the sum of the mass and charge numbers on the two sides of the equation. The fission reaction can be easily explained with the help of the binding energy curve. This graph shows that the binding energy per nucleon is greatest for the middle elements of the periodic table and a little less for the very light or very heavy elements; that is, the nucleons in the very light or very heavy elements are not so rigidly bound. For example, the binding energy per nucleon for uranium is about 7.6 MeV, whereas for the products of the fission reaction of uranium, namely barium and krypton, it is about 8.5 MeV, a difference of 8.5 − 7.6 = 0.9 MeV per nucleon. Thus when a uranium nucleus breaks up, as a result of the fission reaction, into barium and krypton, energy at the rate of 0.9 MeV per nucleon is given out. This means that an energy of 235 × 0.9 = 211.5 MeV is given out in the fission of one uranium nucleus.
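The figure of roughly 200 MeV per fission can be recovered directly from the binding-energy arithmetic above. A minimal sketch, using the rounded per-nucleon values 7.6 and 8.5 MeV quoted in the text:

```python
# Estimate the energy released in the fission of one uranium-235 nucleus
# from the binding energy per nucleon (rounded textbook values).
be_uranium   = 7.6   # MeV per nucleon, heavy end of the binding energy curve
be_fragments = 8.5   # MeV per nucleon, middle-mass fragments (Ba, Kr region)
nucleons = 235       # nucleons in uranium-235

# The fragments are more tightly bound, so the difference is released.
gain_per_nucleon = be_fragments - be_uranium   # 0.9 MeV per nucleon
energy_MeV = gain_per_nucleon * nucleons

print(f"energy per fission = {energy_MeV:.1f} MeV")  # roughly 200 MeV
```

The same estimate applies to other fragment pairs near the flat top of the curve, which is why "about 200 MeV" holds regardless of the exact fission products.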
The fission process of uranium does not always produce the same fragments (Ba, Kr). In fact, any of the nuclei present in the upper horizontal part of the binding energy curve could be produced. Two possible fission reactions of uranium are given below as examples:
Hence in the uranium fission reaction several products may be produced. All of these products (fragments) are radioactive. Fission is not confined to uranium alone; it is possible in many other heavy elements. However, it has been observed that fission takes place very easily with slow neutrons in uranium-235 and plutonium-239, and mostly these two are used for fission purposes.
Fission Chain Reaction:
We have observed that during a fission reaction a nucleus of uranium-235 absorbs a neutron and breaks into two nuclei of almost equal masses, besides emitting two or three neutrons. By properly using these neutrons, fission can be produced in more uranium atoms, such that the fission reaction continuously maintains itself. This process is called a fission chain reaction. Suppose that we have a definite amount of uranium-235, and a slow neutron originating from some source produces a fission reaction in one atom of uranium. In this reaction about three neutrons are emitted. If conditions are appropriate, these neutrons produce fission in some more atoms of uranium. In this way the process rapidly proceeds, and in an infinitesimally small time a large amount of energy, along with a huge explosion, is produced. This is an uncontrolled fission chain reaction.
It is possible to produce conditions in which only one neutron, out of all the neutrons created in one fission reaction, becomes the cause of a further fission reaction. The other neutrons either escape or are absorbed in some medium other than uranium. In this case the fission chain reaction proceeds at its initial speed. If the amount of uranium is too small, the resulting neutrons escape into the air before producing further fission, and so no chain reaction can be produced; likewise, if only some of the neutrons produced in the first fission reaction each produce one more fission, no sustained chain reaction results. If the sphere of uranium is sufficiently big, then most of the neutrons produced by the fission reaction get absorbed in the uranium before they escape out of the sphere, and they produce a chain reaction. Such a mass of uranium, in which one neutron out of all the neutrons produced in one fission reaction produces a further fission, is called the critical mass. The volume of this mass of uranium is called the critical volume.
If the mass of uranium is much greater than the critical mass, then the chain reaction proceeds at a rapid speed and a huge explosion is produced. The atom bomb works on this principle. If the mass of uranium is less than the critical mass, the chain reaction does not proceed. If the mass of uranium is equal to the critical mass, the chain reaction proceeds at its initial speed, and in this way we get a source of energy. Energy in an atomic reactor is obtained according to this principle. The chain reaction is not allowed to run wild, as in an atomic bomb, but is controlled by a series of rods, usually made of cadmium, that are inserted into the reactor. Cadmium is an element that is capable of absorbing a large number of neutrons without becoming unstable or radioactive. Hence, when the cadmium control rods are inserted into the reactor, they absorb neutrons to cut down the number of neutrons available for the fission process. In this way the fission reaction is controlled.
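The three regimes described above can be illustrated with a toy multiplication model. This is only a sketch of the idea, not reactor physics: the multiplication factor k (the number of neutrons per fission that go on to cause another fission) is a hypothetical, illustrative value.

```python
def neutron_population(k, generations, start=1.0):
    """Toy chain-reaction model: each fission generation multiplies
    the neutron count by k, the number of neutrons per fission that
    go on to cause another fission (illustrative values only)."""
    n = start
    for _ in range(generations):
        n *= k
    return n

# k > 1: supercritical -- runaway growth, as in an atom bomb.
supercritical = neutron_population(k=2.5, generations=10)
# k = 1: critical -- steady rate, as in a controlled reactor.
critical = neutron_population(k=1.0, generations=10)
# k < 1: subcritical -- the reaction dies away.
subcritical = neutron_population(k=0.5, generations=10)
```

Inserting control rods corresponds to pushing k down toward 1 (steady power) or below 1 (shutdown).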
In a nuclear power station the reactor plays the same part as does the furnace in a thermal power station. In a furnace, coal or oil is burnt to produce heat, while in a reactor a fission reaction produces heat. When fission takes place in an atom of uranium or another heavy atom, an energy of about 200 MeV per fission is produced. This energy appears in the form of kinetic energy of the fission fragments. These fast-moving fragments, besides colliding with one another, also collide with uranium atoms. In this way their kinetic energy gets transformed into heat energy. This heat is used to produce steam, which in turn rotates the turbine. The turbine rotates the generator, which produces electricity. A sketch of a power station is shown in the figure:
A reactor usually has four important parts. These are:
- The most important and vital part of a reactor is called the core. Here the fuel is kept in the shape of cylindrical tubes. Reactor fuels are of various types. Uranium was used as fuel in the early reactors; in this fuel the proportion of uranium-235 is enriched to 2 to 4 percent. It may be remembered that the proportion of uranium-235 in naturally occurring uranium is only 0.7 percent. Nowadays plutonium-239 and uranium-233 are also being used as fuels.
- The fuel rods are placed in a substance of small atomic weight, such as water, heavy water, carbon or hydrocarbons. These substances are called moderators. The function of these moderators is to slow down the speed of the neutrons produced during the fission process and to direct them toward the fuel. Heavy water, it may be remembered, is made with deuterium, a heavy isotope of hydrogen, instead of ordinary hydrogen. The neutrons produced in the fission reaction are very fast and energetic, and are not suitable for producing fission in a reactor fuel like uranium-235 or plutonium-239. For this purpose slow neutrons are more useful, and to achieve this moderators are used.
- Besides the moderator, there is an arrangement for the control of the number of neutrons, so that of all the neutrons produced in fission, only one neutron produces a further fission reaction. This is achieved with cadmium or boron, because they have the property of absorbing fast neutrons. The control rods made of cadmium or boron are moved in or out of the reactor core to control the neutrons that can initiate further reactions. In this way the speed of the chain reaction is kept under control. In case of emergency, or for repair purposes, the control rods are allowed to fall back into the reactor, and thus stop the chain reaction and shut down the reactor.
- Heat is produced due to the chain reaction taking place in the core of the reactor. The temperature of the core therefore rises to about 1200 °C. To produce steam from this heat, it is transported to a heat exchanger with the help of water, heavy water or some other liquid under great pressure. In the heat exchanger this heat is used to produce steam from ordinary water. The steam is then used to run the turbine, which in turn rotates the generator to produce electricity. The temperature of the steam coming out of the turbine is about 300 °C. This is further cooled to convert it into water again; to cool this steam, water from a river or the sea is generally used. In some reactors heavy water is used both as the moderator and for the transportation of heat from the reactor core to the heat exchanger, and sea water is used to cool the steam coming out of the turbine.
The nuclear fuel once used for charging the reactor can keep it operating continuously for a few months. Thereafter the fissile material begins to decrease. The used fuel is then removed and fresh fuel is fed in instead. The used-up fuel is an intensely radioactive substance, and the half-lives of these radioactive remnant materials are many thousands of years. The radiations and particles emitted by this nuclear waste are very injurious and harmful to living things. Unfortunately there is no perfect arrangement for the disposal of nuclear waste. It cannot be dumped into the oceans or left in any place where it will contaminate the environment, such as through the soil or the air, and it must not be allowed to get into drinking water. The best place so far found to store this waste is at the bottom of old salt mines, which are very dry and thousands of meters below the surface of the Earth. There it can remain and decay without polluting the environment.
|
The most distant manmade object has crossed a new far-away boundary into interstellar space, and a University of Iowa physicist is still listening as it phones home. NASA’s Voyager One, launched 28 years ago, has crossed what’s called the “termination shock” at the far edge of our solar system, eight-point-seven billion miles from the sun. The U-of-I’s Don Gurnett says no spacecraft, or anything else made on Earth, has ever been so far away. Voyager is now 94 astronomical units away, or 94 times the distance between the Earth and the Sun. Gurnett is principal investigator for the spacecraft’s plasma wave gear which, even twice as far away as the planet Pluto, is still sending back valuable data every day. He says there’s a radioactive power supply onboard which should enable it to operate through 2020, perhaps longer. Gurnett says as Voyager voyages on into new territories, it becomes harder to hear its call. Radio waves take about a second to reach the Moon from Earth; radio signals from Voyager now take nearly 13 hours to reach Earth. When interviewed by Radio Iowa, Gurnett was in New Orleans to present his findings during the spring 2005 meeting of the American Geophysical Union. Other sounds of Voyager’s encounters can be heard by visiting Gurnett’s Web site at: “www-pw.physics.uiowa.edu/space-audio/”.
|
What is Discrimination?
In plain English, to "discriminate" means to distinguish, single out, or make a distinction. In everyday life, when faced with more than one option, we discriminate in arriving at almost every decision we make. But in the context of civil rights law, unlawful discrimination refers to unfair or unequal treatment of an individual (or group) based on certain characteristics, including:
- Marital status
- National origin
- Religion, and
- Sexual orientation.
Lawful vs. Unlawful Discrimination
Not all types of discrimination will violate federal and/or state laws that prohibit discrimination. Some types of unequal treatment are perfectly legal, and cannot form the basis for a civil rights case alleging discrimination. The examples below illustrate the difference between lawful and unlawful discrimination.
Example 1: Applicant 1, an owner of two dogs, fills out an application to lease an apartment from Landlord. Upon learning that Applicant 1 is a dog owner, Landlord refuses to lease the apartment to her, because he does not want dogs in his building. Here, Landlord has not committed a civil rights violation by discriminating against Applicant 1 based solely on her status as a pet owner. Landlord is free to reject apartment applicants who own pets.
Example 2: Applicant 2, an African-American man, fills out an application to lease an apartment from Landlord. Upon learning that Applicant 2 is an African-American, Landlord refuses to lease the apartment to him, because he prefers to have Caucasian tenants in his building. Here, Landlord has committed a civil rights violation by discriminating against Applicant 2 based solely on his race. Under federal and state fair housing and anti-discrimination laws, Landlord may not reject apartment applicants because of their race.
Where Can Discrimination Occur?
Federal and state laws prohibit discrimination against members of protected groups (identified above) in a number of settings, including:
- Government benefits and services
- Health care services
- Land use / zoning
- Lending and credit
- Public accommodations (Access to buildings and businesses)
Most laws prohibiting discrimination, and many legal definitions of "discriminatory" acts, originated at the federal level through either:
- Federal legislation, like the Civil Rights Act of 1964 and the Americans with Disabilities Act of 1990. Other federal acts (supplemented by court decisions) prohibit discrimination in voting rights, housing, extension of credit, public education, and access to public facilities.
- Federal court decisions, like the U.S. Supreme Court case Brown v. Board of Education, which was the impetus for nationwide racial desegregation of public schools. Other Supreme Court cases have shaped the definition of discriminatory acts like sexual harassment, and the legality of anti-discrimination remedies such as affirmative action programs.
Today, most states have anti-discrimination laws of their own which mirror those at the federal level. For example, in the state of Texas, Title 2 Chapter 21 of the Labor Code prohibits employment discrimination. Many of the mandates in this Texas law are based on Title VII of the Civil Rights Act of 1964, the federal law making employment discrimination unlawful.
Municipalities within states (such as cities, counties, and towns) can create their own anti-discrimination laws or ordinances, which may or may not resemble the laws of the state itself. For example, a city may pass legislation requiring domestic partner benefits for city employees and their same-sex partners, even though no such law exists at the state level.
Discrimination: Getting a Lawyer's Help
If you believe you have suffered a civil rights violation such as discrimination, the best place to start is to speak with an experienced Discrimination Attorney. Important decisions related to your case can be complicated -- including which laws apply to your situation, and who is responsible for the discrimination and any harm you suffered. A Discrimination Attorney will evaluate all aspects of your case and explain all options available to you, in order to ensure the best possible outcome for your case.
|
These games teach valuable skills and have a high fun and educational rating.
Your child develops counting skills while matching and counting pictures of shapes with Oobi.
Your child develops rhyming skills as they help Oobi find the rhyming word and picture pairs.
Your child develops letter recognition and early reading skills as they sound out words that start with the given letter.
Your child develops memory skills as they try to remember what piece is missing from the pattern.
Your child improves comprehension by following the adventures of Uma, Oobi, and friends.
|
Colliding one object with another in the air is extremely challenging, and yet dragonflies do it routinely every time they need a bite. For a long time, such prey-interception behavior was compared to a guided missile: the missile locks onto the target at a constant bearing, or constant relative orientation, and by maintaining that geometry it stays on a collision course with the target. However, we found the dragonfly's behavior to be much more organic. Instead of following the most efficient interception trajectory, the dragonfly actively positions itself behind and below the prey. This tactic keeps the dragonfly in the prey's blind spot, preventing the prey from taking any evasive action. The dragonfly is not catching a ball, after all; it's catching another flying animal.
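The constant-bearing idea can be sketched in a few lines: for a target flying in a straight line at constant speed, a pursuer heading that cancels the rotation of the line of sight puts the two on a collision course. This is a toy 2-D illustration of the missile-guidance comparison only, not a model of the dragonfly or of any real missile; all function names and numbers are made up for the example.

```python
import math

def constant_bearing_heading(px, py, tx, ty, tvx, tvy, speed):
    """Pursuer heading (radians) that keeps the line-of-sight bearing fixed,
    or None if the pursuer is too slow to intercept."""
    los = math.atan2(ty - py, tx - px)          # line-of-sight angle
    tv = math.hypot(tvx, tvy)
    if tv == 0:
        return los                               # stationary target: fly straight at it
    # Cancel the target's velocity component perpendicular to the line of
    # sight: speed * sin(lead_angle) = tv * sin(theta).
    theta = math.atan2(tvy, tvx) - los
    s = tv * math.sin(theta) / speed
    if abs(s) > 1:
        return None                              # geometry impossible: pursuer too slow
    return los + math.asin(s)

def intercepts(px, py, tx, ty, tvx, tvy, speed, dt=0.01, steps=10_000):
    """Fly the fixed heading and report whether the separation closes to zero."""
    h = constant_bearing_heading(px, py, tx, ty, tvx, tvy, speed)
    if h is None:
        return False
    for _ in range(steps):
        px += speed * math.cos(h) * dt
        py += speed * math.sin(h) * dt
        tx += tvx * dt
        ty += tvy * dt
        if math.hypot(tx - px, ty - py) < 0.05:  # close enough to count as a catch
            return True
    return False

print(intercepts(0, 0, 10, 10, -1, 0, speed=3))  # True: collision course achieved
```

Note that the heading is computed once and never updated; the bearing stays constant precisely because the geometry is already a collision course, which is what makes the missile analogy (and its contrast with the dragonfly's shadowing tactic) interesting.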
|
CHFD 445 (Family Communication)
July 20th 2011
Communication Differences between Men and Women in the Workplace
Men and women have cohabited on the planet with all the idiosyncrasies that are well known and experienced by all of us at some stage of our lives. Men and women who live in the same places must also work together, and in the process they inevitably become entangled with one another's personal perspectives. It is my position that men and women are equal but different. When I say equal, I mean that men and women have a right to equal opportunity and protection under the law. The fact that people in this country are assured these rights does not negate my observation that men and women are at least as different psychologically as they are physically. For centuries, the differences between men and women were socially defined and distorted through a lens of sexism, in which men assumed superiority over women and maintained it through domination. As the goal of equality between men and women grows closer, we are also losing our awareness of important differences. In some circles of society, politically correct thinking is obliterating important discussion, as well as our awareness of the similarities and differences between men and women. The vision of equality between the sexes has narrowed the possibilities for discovering what truly exists within a man and within a woman. The world is less interesting when everything is the same.
Gender differences in the workplace typically stem from social factors that influence the behaviors of men and women. Some organizations welcome gender diversity and encourage the inclusion of both sexes when making company decisions and offering promotional opportunities; other organizations discourage gender inclusion and promote bias in the workplace. In most companies, gender differences add value and varying perspectives to an organization. Gender differences involve both physical and emotional factors; they are essentially the characteristics that influence male and female behavior in the workplace.
These influences may stem from psychological factors, such as upbringing, or physical factors, such as an employee's capability to perform job duties. Differences may also stem from gender stereotypes about men and women. For instance, a stereotypical assessment is that women belong in the home while men work and provide support. Stereotypes often lead to sex discrimination in the workplace.
I decided to research the questions, "Do women and men communicate differently?" and "Does it make a difference in the workplace?" In conducting the research, I came across very interesting articles and have summarized the findings below. An article by Fiona Sheridan conveys the outcome of a study she conducted that examined "the role that gendered talk plays in the workplace in both task and non-task related interactions" (2007, p. 319). Sheridan's research found that men and women communicate differently in workplace situations based on their gender, and that "the consequences of differences in linguistic activity between men and women in the workplace are enormous" (2007, p. 320). For example, the study found that men and women differ in the way they give orders and manage people, and in the communication they use for both. According to her research, men tend to be more direct when giving orders, while women have a tendency to be more indirect, to "soften their demands and statements," and to "use tagged phrases like 'don't you think' following the presentation of an idea, 'if you don't mind' following a demand or 'this may be a silly idea, but' preceding a suggestion" (2007, p. 323). Unfortunately, perception is often reality, and while women may not mean to come across as tentative, this style of communication may actually hinder them at...
|
Pattern blocks are helpful tools for creating a plethora of hands-on math activities. They are not just for students in first grade and below; activities with pattern blocks can be adapted to fit a wide range of skills and difficulty levels, strengthening geometric reasoning and spatial awareness. Here are some activities and games students can enjoy:
- The Last Block is a 2-4 player game that challenges students to be the last player to place a block on the gameboard. You can use it as a board for the pattern block game.
- FirstGradeParade adapted Musical Chairs into a game where students add blocks to the patterns created by other students. This is a great way to get students up and moving while practicing with patterns!
- MathLearningCenter has free pattern block lesson plans to download and use in class. Activities are suited for K-2 students.
- MarcialMiller lists several games and activities using pattern blocks. Ideas include everything from working with tessellations and fractions to making pictures of animals and flowers.
|
This tutorial contains the following attachments:
- ECE 332 - Week 5 - DQ 1 - The Importance of Play.doc
ECE 332 Week 5 DQ 1 The Importance of Play
Children learn as they play and, while playing, they learn how to learn. Take a moment to read the article “The Importance of Play – Activities for Children.” Then, think about the childhood games that you played. Share your favorite playtime activity with your classmates and explain why it was your favorite. Describe what you learned from your play and how it enhanced your development as a child.
Using your own experience with play and the information in the reading, also describe five takeaways from each article that you will use to support play and enhance the development of children under your supervision.
|