In school and in the workplace, an individual's abilities are grouped into the general categories of soft skills vs hard skills. Let's look at the difference between these two skill sets.

Soft Skills vs Hard Skills Examples
Hard skills are learned through school or on-the-job training. These skills are specific to a particular job. For example:
- A hard skill for a cashier is using a cash register.
- A hard skill for a teacher would be lesson planning.
- A hard skill for an electrician would be the ability to use specialized tools and machines.
Every job requires an individual to have a particular set of hard skills in order to perform their duties.
Soft skills are non-specialized skills that may be useful no matter what an individual does for a living. You may also hear them referred to as "transferable skills" because you can transfer them from one job to another. Soft skills are often used in everyday situations as well, not just in the workplace. Examples of soft skills include:
- The ability to work with a team
- Communicating with others effectively and efficiently
- Time management
- Problem solving
As you can see, soft skills can be used in a variety of everyday situations. That's the greatest difference between soft skills and hard skills. The ability to use a cash register is really only useful while working as a cashier, whereas the ability to multitask is useful at just about every job. As you may also gather from the soft skills and hard skills lists above, while the two sets of skills are different from one another, both are necessary to be successful on the job.

Developing Soft Skills vs Hard Skills
Hard skills are more objective and concrete than soft skills. That means that once you learn how to do a particular task, you possess that skill. Soft skills, on the other hand, are more difficult to develop. They are not learned through training sessions; rather, they are acquired over time by practicing them in the real world with other people. Hard skills are easy to measure, as employers can get a fairly good idea of an individual's hard skills by looking at their education, previous work experience, and certifications. Soft skills are more difficult to evaluate, as they cannot simply be communicated through a cover letter or resume. Employers typically cannot evaluate soft skills without going through a job interview, or seeing how an individual performs during their first few weeks on the job. One thing that soft skills and hard skills have in common is that a particular skill may come naturally to some people, while others do not have such an easy time with it. So an individual should not be discouraged if he or she feels they don't possess a particular soft skill. Just as a teacher can become more efficient at lesson planning over time, a person can also become more efficient at multitasking over time. Another way to understand soft skills is by comparing them to executive functioning skills, which are all technically soft skills. Executive functioning skills are learned in the same way as soft skills, they are not easy to evaluate, and they take time to develop. Like executive functioning skills, soft skills are also versatile and transferable from school, to work, to social situations, and to independence at home. For example, skills like pacing, self-monitoring, taking initiative, and prioritization can be used at various times throughout one's life.
By contrast, hard skills are specialized abilities that are difficult to transfer outside of the situations in which they're most useful. Cooking is a great hard skill to have, but it's only useful when you're in the kitchen preparing meals. Knowing how to build a computer is another skill that, while nice to have, is not something that can be transferred to other tasks. That brings us to another term you may be familiar with — generalization. In terms of acquiring skills, generalization is the concept of using past learning in present situations. It allows people to transfer knowledge across multiple situations. This is something everyone can relate to, and it is directly tied to both soft skills and executive functioning skills. Take self-monitoring and editing, for example. You may discover, in various situations, that using manners such as "please" and "thank you" evokes a more positive response than omitting those words. Knowing that, you become conscious of using good manners more often. By now you should have a better idea of the differences between soft skills and hard skills. Believe it or not, one way to evaluate a child's soft skills and hard skills is through fun activities like computer games. We encourage you to introduce your son or daughter to Identifor's unique selection of games, which can help identify the strengths in their skill sets.
The Zhangjiajie National Forest Park is a popular national park located in China. The park attracts visitors from all over the world. Apart from its unique geographical and geological features, the park is home to a vast number of flora and fauna, including the dove tree and the Chinese chestnut tree. The park is also part of the unique Wulingyuan Scenic Area. China first recognized Zhangjiajie National Forest Park in 1982. The park is the most famous part of a much larger scenic area, Wulingyuan, which covers an area of 153.5 square miles. The Wulingyuan Scenic Area was officially listed as a UNESCO World Heritage Site in 1992. In 2001, the Ministry of Land and Resources recognized the Wulingyuan Scenic Area as the Zhangjiajie Sandstone Peak Forest National Geopark. UNESCO listed the Zhangjiajie Geopark as a Global Geopark in 2004. Before it was approved as a national forest park, Zhangjiajie was a state-run tree farm. The Zhangjiajie National Forest Park has unique habitats composed of varied ecosystems such as water bodies, cliffs, valleys, and forests, which support a large variety of animal and plant species. These animals include many types of birds, giant salamanders, and rhesus monkeys. Throughout the Zhangjiajie National Forest Park are pillar-like formations that resemble karst terrain. However, unlike limestone karst, which is formed through chemical dissolution, the Zhangjiajie area lacks limestone deposits. The pillar-like structures are the result of years of physical erosion, caused mostly by the plants growing in the region and by ice expanding during the winter. The foliage around the pillars is dense due to the year-round humid weather. Streams carry away the weathered materials. These pillar-like structures are a distinct feature of China's landscape and can be seen in many Chinese paintings. As part of the larger Wulingyuan Scenic Area, the Zhangjiajie National Forest Park faces threats from visitor overcrowding, which leads to other problems such as damage to vegetation, disruption of wildlife, and pollution. The number of tourists to the park keeps rising every season, affecting the integrity of the park. Other threats arise from environmental conditions such as storms, avalanches, and floods, which pose dangers to tourists as well as destroy wildlife.
Gamma decay is analogous to the emission of light (usually visible light) that occurs when the electrons surrounding the nucleus drop to lower-energy orbits. In each case the energy states, and the wavelengths of the emitted radiation, are governed by the laws of quantum mechanics. But while the electron orbits have relatively low energy, the nuclear states have much higher energy. For example, the sodium "D" spectral line has a wavelength of 0.6 microns and a corresponding quantum energy of about 2 electron volts, whereas a gamma ray emitted after cobalt-60 decay has a wavelength of about 1 picometer (10⁻¹² meters) and a quantum energy of about 1 million electron volts. Nuclei are not normally in excited states, so gamma radiation is typically incidental to alpha or beta decay—the alpha or beta decay leaves the nucleus in an excited state, and gamma decay happens soon afterwards. Gamma radiation is the most penetrating of the three kinds of radiation; gamma ray photons can travel through several centimeters of aluminum, for example.
↑ Wile, Jay L. Exploring Creation With Physical Science. Apologia Educational Ministries, Inc., 1999, 2000.
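As a rough sanity check on the two energy figures above, here is a minimal Python sketch of the photon-energy relation E = hc/λ. The constants are the standard physical constants; the wavelengths are the approximate values quoted in the text, not measured data.

```python
# Photon energy from wavelength: E = h*c / wavelength.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in electron volts for a wavelength given in meters."""
    return H * C / wavelength_m / EV

print(photon_energy_ev(0.6e-6))  # sodium D line, ~0.6 microns -> about 2 eV
print(photon_energy_ev(1e-12))   # cobalt-60 gamma, ~1 picometer -> about 1.2 MeV
```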
The new GDI measures the gender gap in human development achievements in three basic dimensions of human development: health, measured by female and male life expectancy at birth; education, measured by female and male expected years of schooling for children and female and male mean years of schooling for adults ages 25 and older; and command over economic resources, measured by female and male estimated earned income. The index uses the same methodology as the HDI. The goalposts are also the same, except for life expectancy at birth, where the minimum and maximum goalposts differ (a minimum of 22.5 years and a maximum of 87.5 years for females; the corresponding values for males are 17.5 years and 82.5 years). The rationale is to take into account the biological advantage, averaging five years of life, that females have over males. For more details on the computation, see the Technical notes. Countries are ranked based on the absolute deviation from gender parity in HDI. This means that the ranking takes into consideration gender gaps hurting females as well as those hurting males. The GDI reveals that gender gaps in human development are pervasive. On average, at the global level, the female HDI value is about 8% lower than the male HDI, but disparities do exist across countries, human development groups, and regions. Across countries, gender gaps in HDI values range between 0% and 40%. Gender gaps in HDI values tend to be smaller in the Very High Human Development group and widen as one moves towards the Low Human Development group (from a gap of 2.5% to one of 17%). Across regions, the gap is lowest for the OECD countries at 3.6%, followed by the Latin America and the Caribbean region at 3.7%, and rises to 17% in South Asia.
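To make the methodology described above concrete, here is a minimal Python sketch of a GDI-style calculation: build a female and a male HDI with the same goalposts except for life expectancy, then take their ratio. The education and income goalposts used here follow the commonly published HDI values and should be treated as assumptions, and the country figures are invented for illustration.

```python
# Hedged sketch of the GDI as described above: GDI = female HDI / male HDI.
from math import log

def dim_index(value, lo, hi):
    """Dimension index on a 0-1 scale: (actual - min) / (max - min)."""
    return (value - lo) / (hi - lo)

def hdi(life_exp, exp_school, mean_school, income, life_goalposts):
    health = dim_index(life_exp, *life_goalposts)
    education = (dim_index(exp_school, 0, 18) + dim_index(mean_school, 0, 15)) / 2
    income_idx = dim_index(log(income), log(100), log(75000))
    return (health * education * income_idx) ** (1 / 3)  # geometric mean

# Life expectancy goalposts: 22.5-87.5 years for females, 17.5-82.5 for males.
hdi_f = hdi(78.0, 13.5, 9.0, 12000, (22.5, 87.5))
hdi_m = hdi(73.0, 13.0, 9.5, 20000, (17.5, 82.5))
print(round(hdi_f / hdi_m, 3))  # a GDI below 1 means a gap disfavoring females
```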
Much discourse is emerging from scientific circles detailing the results of genetic testing in relation to human migration patterns. These studies attempt to show the distribution of ethnic genetic codes over certain geographic areas in relation to time. This article attempts to explain some of this research. Scientists have now identified the human lineages of the world descended from 10 sons of a genetic Adam and 18 daughters of Eve. This ancestral human population lived in Africa and started to split up 144,000 years ago. This time period is when both the mitochondrial and Y chromosome trees first branch out. You will also notice that the analysis of DNA from many ancient skeletons and mummies (studies mentioned below) is performed on the mitochondrial DNA, or mtDNA. This "ancient" DNA is often degraded and present in very small quantities. mtDNA offers the best chance of isolating DNA from ancient samples because it is small and is present in the cell in many copies.

18 Daughters of a Genetic Eve
Dr. Douglas C. Wallace and his colleagues at the Emory University School of Medicine in Atlanta constructed a world female genetic tree based on mitochondrial DNA. Dr. Wallace found that almost all American Indians have mtDNA that belongs to lineages he named A, B, C and D. Europeans belong to lineages H through K and T through X. The split between the two main branches in the European tree suggests that modern humans reached Europe 39,000 to 51,000 years ago, Dr. Wallace calculates, a time that corresponds with the archaeological date of at least 35,000 years ago. In Asia the ancestral lineage is known as M, with descendant branches E, F and G. In the Americas are lineages A through D. In Africa there is a single main lineage, known as L, which is divided into three branches. L3, the youngest branch, is common in East Africa and is believed to be the source of both the Asian and European lineages. Dr. Wallace's mitochondrial DNA lineages are formally called "haplogroups" but are popularly known as "daughters of Eve," because all of the lineages are branches of the trunk that stems from the mitochondrial Eve. Dr. Wallace is now exploring the root of the mitochondrial tree. In the March 2000 American Journal of Human Genetics, he and colleagues identify the Vasikela Kung of the northwestern Kalahari desert in southern Africa as the population that lies nearest to the root of the human mtDNA tree. Another population that seems almost equally old is that of the Biaka pygmies of Central Africa.

The 7 European Daughters of Eve
Professor Sykes and Oxford University researchers in England have identified seven ancestral matriarchal groups from which all Europeans appear to be descended. Every European can trace his or her evolutionary history back to one of the seven ancestral mother groups, also referred to as the Seven European Daughters of Eve. Sykes et al. obtained buccal cells from 6,000 individuals and analyzed the samples using mitochondrial DNA (mtDNA) analysis. It is known that mtDNA mutates at a very slow rate, on the order of one mutation every 10,000 years. From this they figured that the women would have lived between 8,000 and 45,000 years ago. What is amazing is that all seven of the genetic groups appear to be descended from the Lara clan, one of three clans that still exist today in Africa. This is called the African Eve theory. It was proposed in the late 1980s by Allan Wilson, Mark Stoneking and others. The African Eve theory states that all humans share a common African ancestor.
The Seven European Daughters of Eve matriarchal groups correspond to Dr. Wallace's lineages above, and were given names by Professor Sykes:
Helena: This clan lived in the ice-capped Pyrenees. As the climate warmed, Helena's descendants trekked northward to what is now England, some 12,000 years ago. Members of this group are now present in all European countries.
Jasmine: Her people had a relatively happy life in Syria, where they farmed wheat and raised domestic animals. Jasmine's descendants traveled throughout Europe, taking their agricultural innovations with them.
Katrine: Members of this group lived in Venice 10,000 years ago. Today most of Katrine's clan lives in the Alps.
Tara: This group settled in Tuscany 17,000 years ago. Descendants ventured across northern Europe and eventually crossed the English Channel.
Ursula: Users of stone tools, Ursula's clan members drifted across all of Europe.
Valda: Originally from Spain, Valda and her immediate descendants lived 17,000 years ago. Later relatives moved into northern Finland and Norway.
Xenia: Her people lived in the Caucasus Mountains 25,000 years ago. Just before the Ice Age, this clan spread across Europe, and even reached the Americas. (As Dr. Wallace discovered, the X pattern is a rare European lineage and is also found among northern Native Americans such as the Ojibwa and Sioux.)

10 Sons of a Genetic Adam
A male genetic tree based on analyses of the Y chromosome has been constructed by Dr. Peter A. Underhill and Dr. Peter J. Oefner of Stanford University. In March 2000, a colleague published the preliminary findings of this study in a book, Genes, People and Languages, by Dr. Luca Cavalli-Sforza (see Dr. Cavalli-Sforza's own Two Waves study below). The tree starts with a single Y-chromosomal Adam with 10 principal branches. Of these sons of Adam, the first three (designated I, II and III) are found almost exclusively in Africa. Son III's lineage migrated to Asia and fathered sons IV-X. These sons then spread through the rest of the world. Son IV spread to the Sea of Japan, son V to northern India, and sons VI and IX to the South Caspian.

Other Recent Research

Two Migration Waves out of Africa
This study came out of the University of Padua, Italy, under the direction of Dr. Luca Cavalli-Sforza and was published in the December issue of the journal Nature Genetics. In the study, the mtDNA in the blood of people from India and east Africa was analyzed. The results showed that a common maternal ancestor coming out of Africa existed 50,000 years ago between the people of Ethiopia and the Arabian peninsula, and India. Matches were not found in the Middle Eastern populations. In another, earlier study, it was found that an earlier migration occurred, pegged at 100,000 years ago, involving a common maternal ancestor coming out of Africa by a northern route, settling in the Mediterranean and in Greece.

Migration Effect on Languages
There are of course efforts under way to take all of the studies above and relate them to the formation of languages. Dr. Cavalli-Sforza believes the Y chromosome lineages may be associated with the major language groups of the world. Dr. Joseph Greenberg, a linguist at Stanford University, has proposed three migrations, corresponding to the three language groups of the Americas, known as Amerind, Na-Dene and Eskimo-Aleut.

You know all the tests tracing back living people's mtDNA to a most recent common ancestor or matriarchal line?
Well, in a December 1999 article in the journal Science, Philip Awadalla of the University of Edinburgh, Scotland basically says that those early assumptions may not be fully true. If not, the rate of mutation for mtDNA, often thought of as one mutation every 10,000 years, will have to be recalculated. He says this because there may be some reason to suspect that male and female mtDNA somehow combine. It has been known that male mtDNA in sperm is destroyed by the egg after fertilization. It is anybody's guess how male mtDNA could be involved with the female mtDNA. More studies will have to be done to replicate this study and to take it further.

In the journal Nature, March 2000, William Goodwin of the University of Glasgow and counterparts in Russia and Sweden state that DNA from the bones of a Neanderthal baby who died 29,000 years ago in Russia's Caucasus Mountains is proof that Neanderthals are not ancestors of modern humans. This study agrees with another Neanderthal study from 1997, in which DNA from the bones of a Neanderthal found in Feldhofer Cave in Germany was analyzed. What we should all look for now are specimens that show signs of Neanderthals and humans interbreeding. If we take the analyses of the 1997 Feldhofer study and the Caucasus study and compare them with future studies, we may find a significant divergence to support that hypothesis. The Caucasus study showed that the baby's mitochondrial DNA differed from that of the other Neanderthal in 3.5 percent of the locations tested. However, as compared to humans, the divergence of the Neanderthal DNA was 7 percent, or double. Because of this, coupled with the expected rate of change, Neanderthals and humans had a common ancestor about 500,000 years ago. Let us not forget another study, in the October 26, 1999 issue of the Proceedings of the National Academy of Sciences. In that study, Neanderthal bones coming out of Vindija cave north of Zagreb, Croatia, indicate that Neanderthals and modern man must have coexisted in central Europe for at least 6,000 years. Probabilities of cohabitation and genetic exchange go up, don't you think?

Cheddar Man, England
This is my favorite, maybe because of the name. He is a 9,000-year-old skeleton who lived in a cave, and who has a distant male relative living right down the street in Cheddar, England. Cheddar Man was a Stone Age hunter-gatherer who lived in southwestern England. Scientists from Oxford University's Institute of Molecular Medicine, led by Dr. Sykes, analyzed mitochondrial DNA extracted from one of Cheddar Man's molar teeth. The results were compared to those of 20 people in the area. Researchers say that it shows that Britons descended from European hunter-gatherers rather than Middle Eastern farmers. I would note that since mtDNA analyses were done, we cannot say that Cheddar Man fathered any children, since Cheddar's mtDNA would have been passed down from his mother. The living relative and Cheddar had a most recent common ancestor 10,000 years ago.

Kennewick Man, Washington
Yes, have you heard? As of April 25, 2000, the court is allowing DNA testing of this male skeleton, 9,300 years old, from the state of Washington (not originally, if you know what I mean!). Will the study show that Kennewick Man belongs to one of four identified haplogroups, or genetic groups, that have been identified among American Indians? Ancestor of the American Indians, or European heritage? Ancestor of the Ainu people of Japan? Stay tuned.
(See the book Uncovering the Life and Times of a Prehistoric Man Found in an Alpine Glacier, by Brenda Fowler; also The Man in the Ice, by Konrad Spindler, although his conclusions about the Iceman's death are strange.) Iceman was a body found frozen in the Alps in September 1991. He was taken to the Forensic Medicine Institute at Innsbruck University in Innsbruck, Austria. Iceman was found on the Italian side of the Austrian-Italian border, by only a few feet. Anyway, when did Iceman live? The answer is 5,348 to 5,298 years ago! The DNA tests showed that the Iceman's DNA fit with DNA sequences of Europeans. Iceman's DNA matches DNA sequences of individuals living in the Ötztal Valley and Alpine regions (Handt 1994:1775). Iceman is now in a museum in Bolzano, Italy.

The Ice Maiden was a girl only 12-14 years old who was apparently sacrificed by Inca priests 500 years ago. She was a frozen and well preserved mummy, discovered in September 1995 on Mt. Ampato in the Peruvian Andes by anthropologist Johan Reinhard and Miguel Zarate. Her DNA was analyzed at The Institute for Genomic Research (TIGR) in Rockville, Maryland. Some mtDNA from a heart sample was analyzed using the PCR method and gel electrophoresis. "We conclude from our analysis that the Ice Maiden's mitochondrial DNA HV1 sequence places her precisely in the native American Indian Haplogroup A. Her HV2 DNA sequence represents a new HV2 variant not found in the current mitochondrial DNA sequence databases and is most closely related to the Ngobe people of Panama" (Mike Knapp, TIGR). For an article on the Ice Maiden, see the January 1997 issue of National Geographic.

Beringia and Travel to the Americas
Beringia was a land bridge linking Siberia with what is now Alaska; it was exposed when glaciation lowered sea levels, and it was still usable between 12,000 and 13,000 years ago. What is disputed by scientists is which people came over to the Americas, when, and how. By land? By boat? Paleo-Indians are believed to have used Beringia. Much DNA evidence is pointing to the use of water travel by Asians. There is the study involving the Olmec "celt" inscriptions versus the Chinese Shang writing, which in many cases is very close. We must also remember the concept of independent invention - that humans do independently invent things.

Chinese Migration to Mexico, B.C.
Researchers studied Native Americans from the Navajo, Chamorro and Flathead tribes. They then determined that all three groups possess a unique type of retrovirus gene, JCV, found only in China and Japan (National Academy of Sciences, 1997). This would seem to suggest travel by boat.

Virus Links Andes with Japan
There is a theory that South America was colonized from Asia thousands of years before any Spaniards set foot in South America. DNA from the bone marrow of 1,500-year-old mummies found in northern Chile was analyzed. The results show that a virus associated with adult T-cell leukemia was prevalent in native Andeans and in a small section of people from southwest Japan. The study also theorizes that the virus may have originated with paleo-Mongoloids who migrated to Japan and South America more than 10,000 years ago. No doubt this was an mtDNA PCR study (Nature Medicine, 1999).

Irish with Spanish Genetic Influence?
Hill, E., Jobling, M., and Bradley, D. "Y-chromosome variation and Irish origins." Nature, Volume 404, No. 6776, 23 March 2000.

Americans Descended from Australians?
Americans of European ancestry are traced to one of the daughters of African Eve, as found in a study above.
A further study examined an 11,500-year-old skull, found in Brazil, which appears to belong to a woman of African or Aboriginal (Australian) descent. This might suggest boat travel.
Before the unification of the German Empire in 1871, about half of the 27 individual German states issued their own stamps. Among the most interesting are the low-value stamps created for the states of Brunswick and Mecklenburg-Schwerin, which could be divided into quarters for minor postage needs. Most German stamps at the time featured large numerical denominations surrounded by fanciful scrolls indicating the specific currency and postal service. Prussia's first stamps appeared in 1850, portraying Friedrich Wilhelm IV. The king was replaced by the Prussian armorial design in 1861 after his successor, Wilhelm I, initiated this new trend. By the 1860s, other states like Bavaria had adopted designs emblazoned with their official coat of arms at center and the monetary value marked along the border. Armorial imagery generally included conventional symbols like castles, lions, crowns, keys, and dragons. After the North German Confederation was established in 1868, the stamps of individual member states were replaced with numerical designs in either groschen or kreuzer currencies for the northern and southern regions, respectively. The unification of 1871 inspired stamps featuring the German eagle emblem to showcase the strength of a consolidated German Empire. A special issue from 1899 shows the actress Anna Führing in her popular role as Germania, wearing full battle gear and looking severely toward an unseen enemy. Germany became the first country to produce stamps using photolithography in 1911 with a series celebrating the 90th birthday of Prince Regent Luitpold; two years later, stamps featuring Luitpold's son Ludwig III were the first to use the photogravure process. Designs from the Weimar Republic era, which began in 1919, still incorporated the established title "Deutsches Reich," or "German State," but highlighted imagery of the labor class. Special issues of this era often had high premium costs to benefit state welfare funds. The hyperinflation of German currency in 1923 required overprinting of many German stamps to match new values, which changed faster than designs could be released. The highest stamp value was eventually raised to 50 billion marks, with stamps labeled "50 milliard." Many of the most collectible German stamps come from the great period of upheaval which began with Hitler's ascent to power in the 1930s and lasted through 1949, when two distinct German states were established. In 1933, Hitler, leader of the National Socialist German Workers' Party, was appointed chancellor, and shortly afterward the passage of the "Enabling Act" gave him absolute power. Under the Nazis, philately was used as a means of disseminating state propaganda, as seen in many stamps from the 1930s, like the design featuring ominous, disembodied hands extending toward a giant glowing swastika. Other stamps celebrated the expansion of the Reich, including the acquisition of the Saar region, which inspired stamps featuring a symbolic embrace between mother and child. As G... Finally, in 1941, Hitler replaced the classic portrait of Paul von Hindenburg on the country's definitive stamps with his own image. Two years later, in the midst of World War II, Germany introduced stamps commemorating its attacks on Poland and began labeling postage "Grossdeutsches Reich," or "Great German State." After the fall of the Third Reich in 1945, Germany's postal system was completely wrecked, and many provisional systems appeared to fill the void.
Very popular today are the stamps bearing Hitler’s portrait which were still used in eastern Germany where the Soviets had no immediate provisions for a new postal service. These stamps were routinely altered so that Hitler’s face was no longer visible, and only circulated for a few months following the war. The occupying allied forces each issued stamps, creating unique versions for each postal zone within their own territories. Additional series of generic stamps for use in all of the American, British, and Soviet zones were created beginning in 1946, including the modernist series depicting a pair of hands breaking free from chains while reaching for a dove carrying an olive branch. Many municipalities again released their own stamps with charity surcharges to aid in reconstruction. Finally, in 1949, the three western occupied zones became the Federal German Republic while the Soviet territory became the German Democratic Republic (GDR). The western portion of Berlin was officially ceded as a province of the Federal Republic in 1950, and would issue its own stamps through 1990. Stamps used in West Berlin were often just modified versions of the Federal Republic’s original designs, like the 1971 stamp commemorating the German postal service’s centenary. Stamps in the GDR emphasized historical communist figures and anniversaries, as well as an idealized working class. The GDR also recognized many painful moments from Germany’s recent past, with series memorializing the Kristallnacht riots or the terror of concentration camps. However, the GDR’s postal service still attempted to censor philately that suggested problems with communism. For example, in 1965, a stamp commemorating the 20th anniversary of refugees fleeing East Germany was produced by the German Federal Republic; letters marked with this postage and mailed to the East were either returned or had their stamps painted over. After the fall of the Berlin Wall in 1989, the currency of East Germany was replaced by the Deutschmark, and new stamps were printed reading simply “Deutsche Post.” One of the earliest designs depicted the reunification of Berlin, with revelers climbing the wall in front of the Brandenburg Gate.
Electric fields
An electric field is a region where charged particles feel an electric force. The units used to measure electric fields are newtons per coulomb. Electromagnetism is closely related to both electricity and magnetism because both involve the movement of electric charge. Electric fields can be drawn as arrows. The arrows show which way a positive particle, like a proton, will be pushed if it is in the field. Negative particles, like electrons, will go in the direction opposite to the arrows. In an electric field, arrows point away from positive particles and towards negative ones. So, a proton in an electric field would move away from another proton, or towards an electron. Through electromagnetic induction, a changing magnetic field can produce an electric field. This concept is used to make electric generators, induction motors, and transformers work. Since the two types of fields are dependent on each other, the two are thought of as one. Together they are called the electromagnetic field. The electromagnetic force is one of the fundamental forces of nature. The electromagnetic force is the force that causes the attraction between electrons and the positive nucleus. All forces between atoms are caused by the electromagnetic force.
Electromagnetic radiation
Electromagnetic radiation is thought to be both a particle and a wave. This is because it sometimes acts like a particle and sometimes acts like a wave. To make things easier, we can think of an electromagnetic wave as a stream of photons (symbol γ).
Photons
A photon is an elementary particle. It is the particle that light is made up of. Photons also make up all other types of electromagnetic radiation, such as gamma rays, X-rays, and UV rays. The idea of photons was thought up by Einstein. Using his theory of the photoelectric effect, Einstein said that light existed in small "packets" or parcels, which he called photons. Photons have energy and momentum. When two charged objects act on each other electromagnetically, they exchange photons. So photons carry the electromagnetic force between charged objects. Photons are also known as messenger particles in physics because these particles carry messages between objects. Photons send messages saying "come closer" or "go away" depending on the charges of the objects being looked at. If a force exists while time passes, then photons are being exchanged during that time. Fundamental electromagnetic interactions occur between any two particles that have electric charge. These interactions involve the exchange or production of photons. Thus, photons are the carrier particles of electromagnetic interactions. Electromagnetic decay processes can often be recognized by the fact that they produce one or more photons (also known as gamma rays). They proceed less rapidly than strong decay processes with comparable mass differences, but more rapidly than comparable weak decays.
History
In 1600, William Gilbert said that electricity and magnetism were two different effects in his book De Magnete. The link between electricity and magnetism was found through the work of Hans Christian Ørsted. A scientist named Ampère then applied mathematics to electromagnetism. Many physicists then developed a theory of electromagnetism now known as classical electromagnetism.
James Clerk Maxwell then brought everything together into one theory of electromagnetism. This theory was based on Maxwell's equations and the Lorentz force law. Maxwell's studies showed what light actually was. Maxwell's work did not fit with classical mechanics, because it said that the speed of light was always constant, depending only on the electric and magnetic properties (the permittivity and permeability) of the substance it was travelling through. This led to the development of the theory of special relativity by Einstein.
Problems in classical electromagnetism
Albert Einstein's work on the photoelectric effect and Max Planck's work on black-body radiation did not fit with the traditional view of light as a continuous wave. This problem would be solved after the development of quantum mechanics in 1925. This development led to quantum electrodynamics, which was developed by Richard Feynman and Julian Schwinger. Quantum electrodynamics was able to describe the interactions of particles in detail.
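To put a number on the "newtons per coulomb" unit mentioned in the Electric fields section above, here is a minimal Python sketch of the force-on-a-charge relation F = qE. The field strength is an arbitrary illustrative value, not one from the text; the charge is the standard elementary charge.

```python
# Force on a charged particle in an electric field: F = q * E.
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def electric_force(charge_c: float, field_n_per_c: float) -> float:
    """Force in newtons on a charge (coulombs) in a field (newtons per coulomb)."""
    return charge_c * field_n_per_c

# A proton in a 100 N/C field is pushed along the field arrows; an electron of
# the same charge magnitude is pushed the opposite way (opposite sign).
print(electric_force(ELEMENTARY_CHARGE, 100.0))   # about 1.6e-17 N
print(electric_force(-ELEMENTARY_CHARGE, 100.0))  # same size, opposite direction
```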
Battery capacity is the time during which the battery can power the connected load. Capacity is normally measured in ampere-hours, and for small batteries in milliampere-hours. The quantity of electricity (charge) stored in the battery is its capacity; charge is measured in coulombs, where 1 coulomb is 1 amp x 1 second. To find a battery's capacity, charge it fully, discharge it at a known current I, and measure the time T needed for a complete discharge. Multiplying the time (T) by the current (I) gives Q, the battery's capacity.
To check a battery, fully charge it and connect it to the test unit. Zero the clock and press Start. The relay should then close contacts 4-5 and 5-6, the battery begins to discharge through the resistor R, and voltage is applied to the clock. The voltage across the battery and the resistor slowly declines; when the voltage across the resistor drops to 1 V, the relay opens its contacts, the discharge stops, and the clock stops. During discharge, the control current passing through contacts 1-2 of relay 8 drops to 2 mA. With a drive current of 3 mA, the resistance of contacts 4-5 and 5-6 is less than 0.04 ohms (a value low enough to ignore when measuring the current). If you need a discharge current of 1 A, use a resistor R = 1.2 ohms. After the discharge stops, the battery voltage rises back to 1.1-1.2 V; this occurs because of the internal resistance of the cell.
Keep in mind that the capacity of a freshly charged battery will read higher, because over time some of the charge is lost to self-discharge. To estimate the amount of self-discharge, measure the capacity immediately after charging, and then again about a week later. Self-discharge for some batteries can be 10% per week or more. When working with this setup, try to reduce the contact resistance of the battery holder and connectors. At discharge currents of 0.5-1 A, poor contacts can seriously degrade measurement accuracy (you can lose 0.1 V or more across them). The steel springs used in some battery holders can also cause losses, so shunt those contacts and other steel parts with copper wire.
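Here is a minimal Python sketch of the capacity arithmetic described above: capacity is the discharge current multiplied by the time to full discharge, and self-discharge is estimated by repeating the measurement a week later. The numbers are invented for illustration, not results from the circuit described.

```python
# Capacity from a constant-current discharge test: Q = I * T.
def capacity_mah(discharge_current_ma: float, discharge_time_hours: float) -> float:
    """Capacity in milliampere-hours."""
    return discharge_current_ma * discharge_time_hours

fresh = capacity_mah(1000, 2.1)        # measured right after charging
week_later = capacity_mah(1000, 1.9)   # measured after a week on the shelf

self_discharge_pct = (fresh - week_later) / fresh * 100
print(f"capacity: {fresh:.0f} mAh, self-discharge: {self_discharge_pct:.1f}% per week")
```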
What is Jolly Grammar?
Jolly Grammar is the next stage, after a first year with Jolly Phonics. The materials provide guidance and resources to help teach grammar to children; the approach is active and multi-sensory, with emphasis on consolidating the children's knowledge from Jolly Phonics and helping them develop an understanding of how grammar works.
How does Jolly Grammar work?
By teaching key essential grammar rules, it helps children bring diversity to their writing and improve their spelling in a structured way. Jolly Grammar teaches a wide range of language forms, including the parts of speech, plurals, punctuation, and the past, present, and future tenses. It also teaches a wide range of spelling rules, including defining aspects such as the short vowels. Jolly Grammar uses colors and actions to help children identify parts of speech in sentences; for example, verbs are red and nouns are black. Filled with great ideas and fun ways to remember some of the rules, Jolly Grammar provides teachers with a systematic way to teach grammar, spelling and punctuation. Jolly Grammar is designed for four years of teaching. Together with Jolly Phonics, it provides a course for the first four years.
The Jolly Grammar Program
Traditionally, grammar was seen as a formal academic subject, far too difficult for young children to learn. However, with the Jolly Grammar program, young children can be introduced to grammatical concepts in a fun and accessible way. The Jolly Grammar Handbooks provide enough lesson plans and activity pages for 36 weeks, with two one-hour lessons per week. The first of these lessons is devoted to spelling and to increasing the children's phonic knowledge. The second lesson focuses on teaching grammar. The term 'grammar' is used loosely here, and the Jolly Grammar lessons introduce such topics as sentence structure, antonyms and synonyms, punctuation, and dictionary work, as well as teaching the children about parts of speech. Teachers can use Jolly Grammar twice a week to cover the structural aspects of the English language, and devote their remaining literacy lessons to other areas, such as group reading, creative writing, and comprehension exercises. Just as each letter sound was introduced with an accompanying story and action in Jolly Phonics, Jolly Grammar introduces each different part of speech with an associated color and action. The colors and actions not only make the grammar lessons fun for the children, but also make the grammatical terms easier for them to learn. The colors used to introduce each part of speech are the same as those used in Montessori schools.
Computer Products Guide - Network Hubs and Switches
Hubs and switches function as a common connection point for the workstations, printers, file servers and other devices that make up a network. The main difference between hubs and switches is the way in which they communicate with the network.
What is a Hub?
A hub functions as the central connection point of a network. It joins together the workstations, printers, and servers on a network, so they can communicate with each other. Each hub has a number of ports that connect it to the other devices via a network cable.
How does a Hub work?
A hub is an inexpensive way to connect devices on a network. Data travels around a network in 'packets', and a hub forwards these data packets out to all the devices connected to it. Because a hub distributes packets to every device on the network, when a packet is destined for only one device, every other device connected to the hub receives that packet as well. And because all the devices connected to the hub are contending to transmit data, the individual members of a shared network only get a percentage of the available network bandwidth. This can slow down a busy network. A 10Base-T Ethernet hub provides a total of 10 Mbit/sec of bandwidth, which all users share. If one person on the network is downloading a very large file, for example, little or no bandwidth is available for other users. These users will experience very slow network performance.
What is a Switch?
A switch is more sophisticated than a hub, giving you more options for network management, as well as greater potential to expand. A switch filters the data packets, and only sends a packet to the port which is connected to the destination address of that packet. It does this by keeping a table of each destination address and its port. When the switch receives a packet, it reads the destination address and then establishes a connection between the source port and the destination port. After the packet is sent, the connection is terminated.
What are the advantages of a Switch?
A switch provides higher total throughput than a hub because it can support multiple simultaneous conversations. For example, when a 100Mbit/sec hub has five workstations, each receives only 20Mbit/sec of the available bandwidth. When a 10/100Mbit/sec switch is used, every port on the switch represents a dedicated 100Mbit/sec path, so each workstation receives 100Mbit/sec of bandwidth. Switches also run in full duplex mode, which allows data to be sent and received across the network at the same time. Switches can effectively double the speed of the network when compared to a hub, which only supports half duplex mode.
Why choose one of our Switches?
Switches improve the performance and efficiency of a network and should be used when you:
- Need to make best use of the available bandwidth
- Have multiple file servers
- Require improved performance from file servers, web servers or workstations
- Use high speed multi-media applications
- Are adding a high speed workgroup to a 10Mbit/sec LAN
- Plan to upgrade from 10 to 100Mbit/sec or a Gigabit network
The standard features on all N-Way switches are:
- 10/100Mbit/sec Auto-Negotiation on all ports: the switch automatically senses the speed of the attached device and configures the port for the proper speed. This simplifies deployment in mixed Ethernet and Fast Ethernet environments
- Auto MDI/MDI-X, which auto-detects whether the connected cable type is normal or cross-over
- Full or Half Duplex operation
Which Switch do I need?
If you are setting up a home or small office network, an ideal solution is to use a switch with 5 to 8 ports. Switches can be linked together as your network expands. The compact 8 Port 10/100Base-TX Fast Ethernet Switch features Auto MDI/MDI-X on all ports, 10/100Mbit/sec Auto-Negotiation, and full and half-duplex modes, and can be desktop or wall mounted. These 19" rackmount switches are the perfect solution for expanding a 10/100 network.
Gigabit Ethernet Switches
Our GIGA N-Way Switches provide cost effective scalability of the network by utilising the existing copper CAT5e cabling environment. Connectivity is not sacrificed because the same cabling is used for Ethernet, Fast Ethernet and Gigabit Ethernet. These switches also incorporate VLAN technology. This feature is accessed from a console port on the switch and provides network administrators with advanced configuration options and the ability to set up "virtual" LANs which function as separate, secure networks.
24 Port 10/100Base-TX Switch with two 10/100/1000Base-T Gigabit Ethernet ports and VLAN technology.
A managed switch allows the ports on the switch to be configured, monitored, enabled and disabled. Switch management can also gather information on a variety of network parameters, such as:
- The number of packets that pass through each of its ports
- What types of packets they are
- Whether the packets contain errors
- The number of collisions that have occurred
You should look for the following features on a managed switch:
- Gigabit Ethernet support
- SNMP management and remote control capabilities
- A management interface that can be accessed through an internet browser
- Auto-negotiation support, which auto-senses the speed and duplex capabilities of connected devices
- Built-in expansion capability
The Fully Managed SNMP 24 Port 10/100Base-TX + GIGA Expansion N-Way Switch (Part No. 25030) is a high performance web-managed Layer 2 switch that provides 24 Fast Ethernet 10/100Mbps ports. The built-in expansion slot can accommodate a number of different modules. Optional Gigabit/Fast Ethernet modules can be copper or fibre based and support 10/100/1000Base-T, 100Base-FX, and 1000Base-SX. This switch is ideal for organisations wishing to create a new, or upgrade their existing, network infrastructure. The switch features advanced SNMP (Simple Network Management Protocol) management and remote control capabilities, and supports an easy to use Layer 2 management interface that can be accessed through an internet browser. Fully managed SNMP 24 Port Fast Ethernet with full Gigabit backbone support and remote management.
Using a managed switch can reduce hidden costs by providing:
- Switch and traffic monitoring to help head off problems before they occur, reducing user downtime
- Management tools that offer an intuitive graphical user interface (GUI) that simplifies configuration and monitoring tasks
- Management functions that can be performed remotely using a web browser or directly via a console connected to the switch
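As a small illustration of the shared-versus-dedicated bandwidth arithmetic described in the advantages section above, here is a minimal Python sketch. The port count and link speeds mirror the example figures from the text, not measurements of any particular product.

```python
# Per-device bandwidth: a hub shares one medium, a switch dedicates each port.
def hub_bandwidth_per_device(total_mbit: float, devices: int) -> float:
    """All devices contend for the same medium, so the total is divided up."""
    return total_mbit / devices

def switch_bandwidth_per_device(port_speed_mbit: float) -> float:
    """Each port is a dedicated path, so every device gets the full port speed."""
    return port_speed_mbit

print(hub_bandwidth_per_device(100, 5))   # 20.0 Mbit/sec each on a shared hub
print(switch_bandwidth_per_device(100))   # 100 Mbit/sec each on a switch
```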
The March on Washington: Looking Back on 50 Years August 28 marks the 50th anniversary of the March on Washington. It is a time to celebrate a movement, a speech, and leaders who influenced generations of people around the globe and achieved genuine progress for diverse groups of Americans. There is no doubt that America has come a long way since the civil rights era. But while the indignities of segregated public accommodations have largely disappeared, another significant theme of the march remains highly relevant half a century later: the struggle for economic opportunity and equality. It was perhaps due to the march and the great success of the larger civil rights movement that opposition to this sort of equality was immediate, persists to this day, and is reflected in all three branches of the federal government. The 1963 March on Washington for Jobs and Freedom was based on 10 concrete demands, including comprehensive civil rights legislation, desegregation of public schools, voting rights, job training and dignified work, and an increased minimum wage. The potential to expand economic opportunity and lift African Americans, other Americans of color, and white Americans out of poverty and low-income status was clear—not just through direct tools such as wages and work but also through indirect avenues such as improved educational opportunities and the opportunity to vote for political candidates who work to advance economic justice. Celebrating and commemorating genuine progress This month’s anniversary raises the question: “Was the march a success?” In so many ways, the answer is “yes.” The combined efforts of the 1963 marchers, policymakers in Congress, the office of the presidency, and researchers such as Michael Harrington who raised awareness about poverty among Americans of all races resulted in an unprecedented succession of legislative victories. New laws—including the Civil Rights Act of 1964; the Economic Opportunity Act of 1964, which helped wage the War on Poverty; the Voting Rights Act of 1965; and the Civil Rights Act of 1968, or the Fair Housing Act—largely reflected the marchers’ demands and advanced the causes of economic and social justice. These efforts rapidly improved conditions for African Americans in the United States. Between 1959 and 1979 black poverty rates dropped significantly, from 55 percent to 31 percent, and African Americans saw gains in education, employment, and democratic participation. Over the same time period, the number of black children attending majority-minority schools dropped from 77 percent to 63 percent, the annual wage gap between blacks and whites decreased from $8,901 to $7,285 in 2011 dollars, and black voter registration increased by nearly 20 points in the former Confederate states. Other groups benefited as well, including women, poor whites, other communities of color, people with disabilities, and senior citizens. They were either directly included in the civil rights and anti-poverty legislation that passed from 1964 to 1968 or benefited from subsequent laws patterned after that legislation—for example, Title IX, which prohibited sex discrimination in education; the Americans with Disabilities Act; and the Age Discrimination in Employment Act. But something even larger came out of the march and the civil rights movement. 
They became the ultimate example of some important concepts, such as the power of peaceful social movements to bring about significant societal change, the ability of Congress and the president to work together to solve national problems, and the value to America of structures that protect minorities from injustice, even if that injustice is at the hands of the majority. This legacy informs our nation 50 years later, as concerned Americans may feel discouraged about the ways in which government cuts are setting back efforts to aid the poor; partisan divides are limiting the ability of Congress and the president to solve national problems such as elevated unemployment and the need for jobs; and various groups, such as immigrant DREAMers, black youth in the wake of the Trayvon Martin decision, and Occupy Movement protesters, continue to strive for economic and social equality using strategies reminiscent of the 1960s. The path to regression So we must also face the bad news: Efforts to stop progress have always existed, and they persist to this day. When too many people remain indifferent, progress stagnates or turns to regression. Attacks in reaction to the march found early expression when conservatives who opposed the Civil Rights Act of 1964 rallied around the presidential campaign of former Sen. Barry Goldwater (R-AZ). In Awakening from the Dream, Lee Cokorinos and Alfred Ross outline the anti-civil rights movement from that point forward. After his landslide loss to President Lyndon B. Johnson, the focus of Sen. Goldwater’s supporters shifted to building national and regional think tanks such as The Heritage Foundation and the Manhattan Institute for Policy Research. The Reagan administration also appointed many of Sen. Goldwater’s supporters to positions in federal administrative agencies and on the federal bench. Efforts persisted into the 1990s and 2000s, during which time legal and advocacy organizations such as The Federalist Society grew and developed, creating legal arguments to defeat the cause of social justice in the courts and a network of conservative lawyers to fulfill the mission. Media outlets including Fox News also emerged and began to demonstrate their influence. It is clear that the courts matter to America’s post-march story. On that August day in 1963, people demanded voting rights; soon after, the Voting Rights Act of 1965 passed. By the year 2012, for the first time in our nation’s history, blacks voted at a higher rate than whites. But in 2013 the U.S. Supreme Court gutted the Voting Rights Act in Shelby County v. Holder, ending the practice of requiring states with a history of voting discrimination to get prior Justice Department approval before changing their voting practices. The marchers also demanded school integration. But after significant initial progress—especially in the southern states—the U.S. Supreme Court struck substantial blows to their cause with a pair of 1970s cases—Milliken v. Bradley, 418 U.S. 717 (1974), and San Antonio Independent School District v. Rodriguez, 411 U.S. 1 (1973)—that significantly limited the effectiveness of mandatory school-desegregation plans and declared that education is not a fundamental right. More recently, the Court even placed limitations on the ability of school districts to voluntarily create school-integration plans. Many districts have historically been required to integrate via court order after findings of discrimination. 
Over the years other decisions have made it more difficult for people to take their claims of violations of the Civil Rights Act of 1964 to court. New barriers to achieving the 1963 marchers’ dreams definitely exist. Consider what’s happened—or hasn’t happened—in Congress. The 1963 marchers demanded job training and decent work. By 1978 Congress’s spending in this area reached a high of $38.6 billion in 2013 dollars. As the nation entered the Great Recession in 2007, however, that number had tanked to $8 billion in 2013 dollars. After several failed attempts, Congress hasn’t completed a reauthorization of the Workforce Investment Act of 1998, the nation’s largest job-training program, in 15 years. Some important methods that would improve services, such as shifting from an emphasis on short-term placements to one focused on developing skilled workers through education and training, have not been incorporated into the legislation. The marchers also demanded an increase in the minimum wage. Over the years Congress has given America’s workers a few raises, but the real value of the minimum wage is now lower than it was in 1964 and can still leave a full-time worker with children below the poverty line. And as the nation’s fast-food workers join together to fight for decent wages, they are being ridiculed by conservatives. Congress can also have an impact on the courts. Various members of Congress have introduced legislation that would rectify the damage done by the courts, including the Civil Rights Act of 2008. Notably, the Supreme Court’s voting-rights decision suggests that it is now Congress’s role to draft new legislation. Congress must also address partisan rancor over confirming judicial nominees. Not only do conservative efforts to load the bench with like-minded judges put the future of social justice at risk, but they also put all justice at risk. As the number of judicial vacancies grows to emergency levels, there are not enough judges to hear the cases piling up on dockets across the country. Moving forward, believers in economic and racial justice should take inspiration from the March on Washington and its reverberation, as this period is the best possible testament to the power of mass movement to bring about change. It is instructive for those seeking to generate the types of progress and government action needed—passing legislation that reshapes and improves upon existing structures, securing adequate funding for relevant services, improving the effectiveness of the judicial-confirmation process, and restoring gains that have been eroded by the courts. Joy Moses is a Senior Policy Analyst with the Poverty and Prosperity program at the Center for American Progress. Zach Murray was a Congressional Hunger Fellow at the Center. 
Hands-On Equations Introductory Webinar Tuesday, February 19, 2013 from 1:15 PM to 2:00 PM (EST) Registration for this webinar is now closed. Please click on the link below to register for our next webinar. DEMYSTIFYING THE LEARNING OF ALGEBRA! This interactive webinar will provide an overview of our powerful visual and kinesthetic approach to introducing students in grades 3 - 9 to algebraic concepts. We will show how the game pieces along with physical actions are used to represent and solve equations such as:
- 4x + 3 = 3x + 9
- 2(x + 4) = x + 10
- 3x + (-x) = x + 4
- 2x = (-x) + 12
In addition, we will provide a glimpse of how Hands-On Equations can assist students in solving verbal problems such as:
- Three times a number, increased by 2, is the same as the number increased by 10. Find the number.
- Sally is four years older than Tim. If the sum of their ages is 22, how old is each?
Be prepared for an interactive and fun webinar experience! Full-day workshops are available to provide educators with in-depth training. Please click here to conduct a SABA systems check. Registered participants will receive a link to the webinar one day prior to the webinar date. "I thought that the whole experience was great. I've worked with online programs before, but nothing like this. Easy and instant!" - webinar participant Borenson and Associates, Inc. Since 1990, Borenson and Associates has provided the Making Algebra Child's Play workshop to more than 50,000 educators in the United States.
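For readers who want to check the answers to the sample equations above, here is a short symbolic sketch using the sympy library. This is only an illustration of the solutions; it is not how the Hands-On Equations program itself teaches students to solve them.

```python
# Quick symbolic check of the sample equations listed above (requires sympy).
from sympy import symbols, Eq, solve

x = symbols("x")
print(solve(Eq(4*x + 3, 3*x + 9), x))    # [6]
print(solve(Eq(2*(x + 4), x + 10), x))   # [2]
print(solve(Eq(3*x + (-x), x + 4), x))   # [4]
print(solve(Eq(2*x, -x + 12), x))        # [4]

# Verbal problem: Sally is four years older than Tim; their ages sum to 22.
t = symbols("t")                          # Tim's age
tim = solve(Eq(t + (t + 4), 22), t)[0]
print(tim, tim + 4)                       # Tim is 9, Sally is 13
```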
Cloud space is the new level of data storage, another technological advancement of this century. It is data storage made available by different hosts and can usually be accessed through web-based programs. Cloud space, in layman's terms, puts data in a theoretical "cloud" where it can simply be retrieved, edited, and put back. These clouds do not take up the memory of the computer used to save the data. Instead, the data is saved to a virtual location created by physical servers. In simpler terms, this allows a person to save information through the Internet to a place that is different from their actual computer memory but is just as accessible, as long as an Internet connection is available. This new advancement offers many different advantages to K12 classrooms, such as the following: The most obvious advantage of having cloud space for a classroom is that large amounts of data can be saved on the cloud. Many times, memory is taken up by school files, making computer use inefficient and inadvisable. To counteract this using cloud space, files can be placed online with ease, where they will be secure and accessible only to people who are given the proper permissions. File sharing for K12 The most used aspect of cloud space is that it can be a very helpful platform for file sharing. Any files that are uploaded to the cloud space can be accessed by people who share the same cloud. For a classroom, this means that every student can access the files uploaded by the teacher and vice versa. Students can also upload files that should be shared with each other. Cloud space presents a great way to keep files backed up. In this era, many people rely on saving files in soft-copy versions. In other words, most documents are simply saved on a laptop, tablet, or phone, and fewer print materials are being produced. As a consequence, new ways of losing data have also evolved. Flash drives and computers can get viruses and crash at the most inopportune moment, leaving students without their data when they need it the most. Now that cloud space technology is available, it has become fairly easy to ensure that files have a backup. These backup files can then be accessed and retrieved from any Wi-Fi-enabled device. Cloud space can be used by anyone and any group for so many different reasons that it is very easy to see how a classroom can benefit from one. Students can easily retrieve and edit files that have been shared on the cloud space, whether they are uploaded by the teacher or another student. It also ensures that files are backed up, such as important submissions that might otherwise get misplaced. Another avenue of use for cloud space would be communication between teachers, where a department's files can quickly be made available to everyone involved.
Teaching Geography: Workshop 5 Programs and Activities: Part 1. South Africa Going Further 1 (15 minutes) Multiple Media: Getting Beyond the Text Site Leader: This activity is split into three time periods. Please help participants keep track of the time and facilitate the group discussion. The classroom lesson we just saw focused on the land-allocation group activity. In an earlier part of this lesson that was not shown, Maureen Spaight had her students listen to part of an audio recording of Alan Paton reading from his book Cry, The Beloved Country in order to help them better understand the history of South Africa. Maureen has also used popular films such as Out of Africa and novels such as Animal Farm to introduce concepts relevant to the study of Africa and colonization. Take three minutes to list as many specific examples as you can of similar types of media—such as poetry, music, films, novels—that you might use to introduce and/or reinforce a lesson about South Africa. Please be specific, e.g., the 1987 film about Biko, Cry Freedom. Now, take two minutes to choose one of your examples and briefly explain why it would be an effective classroom tool: how does it connect to the subject and how does it help connect your students to the subject? For the remaining ten minutes, share and discuss your ideas, keeping this question in mind: How does using multiple media appeal to different styles of learning?
Food Detectives at WEEFWC At the Early Education and Family Wellness Centre, children are participating in the Food Detectives program. This program is a way for children to explore food in a safe and fun environment. Children are exposed to foods that they may not have experienced before, with a focus on exposure to different textures and properties. We are working hard as a staff to eliminate judgements of food, such as yummy and yucky, and instead focus on sensory properties, such as wet, dry, crunchy, squishy, etc. This helps children be more open to new experiences when we haven't already placed a judgement on them. One of the most popular activities in Food Detectives is the bite-and-drop bin, where children practice holding food in their teeth and dropping it into a water table filled with coloured water. Children have the opportunity to playfully engage with food that they might not otherwise feel safe putting in their mouths. There is no expectation of eating the food; rather, the focus is on exposure without pressure. When we do the bite and drop, we talk about the properties of the food in our hands and mouths, as well as in the water. For example, we watch which ones sink and which float. We make predictions about what will happen if we leave the food in the water, and then we come back later to explore the food with our hands. We again talk about the food in terms of its sensory properties, discussing whether the food changed or stayed the same. Food Detectives is a team approach. It is planned by our Occupational Therapist and Speech-Language Pathologists, and is implemented in conjunction with classroom staff. Through the process, staff learn how to best support children who are selective eaters.
A stock, also referred to as a share, is commonly a share of ownership in a company. Purpose of Equities The owners and financial backers of a company may want additional capital to invest in new projects within the company. If they were to sell the company, it would represent a loss of control over the company. Alternatively, by selling shares, they can sell part or all of the company to many part-owners. The purchase of one share entitles the owner of that share to literally a share in the ownership of the company, including the right to a fraction of the assets of the company, a fraction of the decision-making power, and potentially a fraction of the profits, which the company may issue as dividends. However, the original owners of the company often still have control of the company, and can use the money paid for the shares to grow the company. In the common case, where there are thousands of shareholders, it is impractical to have all of them making the daily decisions required in the running of a company. Thus, the shareholders will use their shares as votes in the election of members of the board of directors of the company. However, the candidates are usually nominated by insiders or by the board of directors themselves, which over time has led to most of the top executives being on each other's boards. Each share constitutes one vote (except in a co-operative society, where every member gets one vote regardless of the number of shares they hold). Thus, if one shareholder owns more than half the shares, they can out-vote everyone else, and thus have control of the company. History of Equities The history of stock market trading in the United States can be traced back more than 200 years. Historically, the colonial government decided to finance the war by selling bonds, government notes promising to pay out at a profit at a later date. Around the same time, private banks began to raise money by issuing stocks, or shares in the company. This was a new market, a new form of investing money, and a great scheme for the rich to get richer. In 1792, a meeting of twenty-four large merchants resulted in the creation of a market known as the New York Stock Exchange (NYSE). At the meeting, the merchants agreed to meet daily on Wall Street to trade stocks and bonds. In the mid-1800s, the United States was experiencing rapid growth. Companies needed funds to assist in the expansion required to meet the new demand. Companies also realized that investors would be interested in buying stock, partial ownership in the company. History has shown that stocks facilitated the expansion of these companies, and the great potential of the recently founded stock market was becoming increasingly apparent to both investors and companies. By 1900, millions of dollars' worth of stocks were traded on the street market. In 1921, after twenty years of street trading, the stock market moved indoors. Progress brought us the Industrial Revolution, which also played a role in changing the face of the stock market. A new form of investing began to emerge when people started to realize that profits could be made by re-selling the stock to others who saw value in a company. This was the beginning of the secondary market, known also as the speculators' market. This market was more volatile than before, because it was now fueled by highly subjective speculation about a company's future.
Types of Equities These include the following: Ordinary share or common stock By far the most common security, representing ownership in a company. Holders of ordinary shares exercise control by electing a board of directors and voting on company policy. Ordinary shareholders are at the bottom of the priority ladder in the ownership structure: in the event of liquidation, ordinary shareholders have rights to a company's assets only after bondholders, preferred shareholders, and other debt holders have been paid in full. Preferred share or stock A class of ownership in a company with a stated dividend that must be paid before dividends to ordinary shareholders. Preferred shares do not usually have voting rights. Convertible share or convertible preferred stock A preferred share that can be converted into an ordinary share.
There are over one million fractures (broken bones) each year in the UK alone. Fractures can occur in people of any age, but two groups tend to sustain the most fractures: children and the elderly. In children a broken forearm is the most common fracture, with boys sustaining fractures more often than girls. Teenagers tend to be the most active age group, which increases their risk of injury, and their bones are more prone to breaking following the period of rapid growth during adolescence. In the elderly, a combination of osteoporosis (decreased bone density) and an increased incidence of falls means that the number of broken bones increases with age. In this older age group women suffer more fractures than men; this is because hormonal changes during the menopause increase the incidence of osteoporosis. The most common fractures in the elderly are of the hip and wrist. This guide explains exactly what bone is and the four main stages of bone healing. By understanding bone healing better, you can feel more in control of the rehabilitation process and help your fracture to heal.
10. In what other ways can sound exposure affect children and adolescents? The SCENIHR opinion states: 3.9. Non-auditory effects The non-auditory effects of noise on children and adolescents basically fall into two categories: (1) effects at the psychological level, seen as changes in reading, memory, attention, school achievement, and motivation, and (2) other effects, mainly those that show up at the biological or physiological level. 3.9.1. Psychological effects Pertaining to the psychological effects on cognition and attention, there is no reported research on noise from PMPs. However, there are reliable findings on the effects of other noise sources on cognition and attention in children and young adults. Thus, to consider possible outcomes of PMP use it is worthwhile to briefly summarize relevant research, coming mainly from studies of aircraft and road-traffic noise. 3.9.1.1. Reading and memory The best documented impact of noise on children's performance is research showing negative effects on reading acquisition. Close to twenty studies have found indications of negative relations between chronic noise exposure and delayed reading acquisition in young children (Evans and Lepore 1993). There are no contradictory findings, and the few null results are likely due to methodological problems, such as comparing children across school districts who have different reading curricula (Cohen et al. 1986). There are fewer studies of other cognitive processes and noise among children relative to reading. However, noise effects on memory have been the focus of a handful of studies. The most ubiquitous memory effects occur in chronic noise, particularly when complex, semantic materials are probed (Hygge 2003). Several studies of both chronic (Evans et al. 1995, Haines et al. 2001a, Hygge et al. 2002) and acute noise (Boman 2004, Boman et al. 2005, Hygge 2003, Hygge et al. 2003) have found adverse impacts of aircraft or road traffic noise exposure on long-term memory for complex, difficult material. Stansfeld et al. (2005) replicated these effects on long-term memory for chronic aircraft noise. In the experimental acute noise studies by Boman (2004), Hygge (2003) and Hygge et al. (2003), long-term learning and memory in children was impaired (by approx. 15-20%) by exposure to aircraft noise, road-traffic noise and speech noise at 66 dB(A) Leq during a 15-minute exposure while reading a text; memory of the text was tested an hour or a week later. For aircraft noise, memory was impaired even after 15 minutes of exposure at 55 dB(A) Leq. For chronic aircraft noise exposure, the Munich study (Hygge et al. 2002) and the RANCH study (Stansfeld et al. 2005) indicated that children exposed to chronic aircraft noise showed cognitive deficits compared to children who had not been exposed to chronic aircraft noise. It was also found that the children at the old airport in Munich, who were no longer exposed to aircraft noise, improved their cognitive performance. Thus, there was some reversibility in the negative effects of noise on cognition when the noise ceased. To what extent this recovery depends on the age of the children in question (11-12 years) and the accompanying continuing growth in cognitive development, we do not know. Thus, short-term exposure (15 min) to noise with average levels of 65 dB(A) impairs memory and learning.
Long-term chronic exposure, at least to aircraft noise, indicates that there will be statistically significant impairments of memory and language skills when noise levels increase from around or below 55 to above 60 dB(A) Leq. 3.9.1.2. Attention and distraction Music is sometimes used to distract from a noisy working environment, and sometimes this is beneficial. One reason is that the more boring, repetitive and simple a task is, the more it will benefit, both in quality and quantity, from being performed in noise (Kryter 1994). On the other hand, the more complex and difficult the task is, the more it is prone to be hampered by excessive sounds. When the noise is preferred music from a PMP, one would in addition expect greater perceived comfort. Further, when the music from the PMP also masks distracting sounds in the environment that are devoid of relevant information or warning characteristics, it will most likely be a subjective advantage to listen to the PMP rather than to shut it off. On the other hand, the more cognitively demanding the task is, the more it depends upon speech communication, and the more potential warning sounds there are in the immediate environment, the more PMP listening works to the disadvantage of task performance and the safety of the listener. With regard to attention, there is always a risk that the sound of the music listened to from the PMP will acoustically mask warning sounds, e.g. from approaching cars, street crossings or reversing trucks. Even if the music does not mask the warning sound in a physical sense, the focused attention on the music will from time to time make the listener inattentive to other sounds, some of which may be warning sounds. 3.9.1.3. School performance There are several cross-sectional studies that have reported a covariation between high noise levels (from aircraft or road traffic) and low grades or low levels of school achievement (Cohen et al. 1981, Cohen et al. 1986, Green et al. 1982, Evans and Maxwell 1997, Haines et al. 2001a, Haines et al. 2001b, Haines et al. 2001c, Haines et al. 2002, Maser et al. 1978, Stansfeld et al. 2005). However, cross-sectional studies suffer from two possible shortcomings. The first is the differential socio-demographic composition of the noise dose groups, which may favour children in quiet middle-class housing and living areas. Adjusting statistically for the social class effects may not be sufficient to control for this. The second is the possible confound between being exposed to noise both while learning and when tested for what is learnt. Noise at testing may lower the test scores without learning being affected, but the effects of noise on learning and on performing cannot be disentangled. Thus, cross-sectional studies are not the best platform for a strong inference on cause-effect relationships. One laboratory study (Glass 1977) and several field studies (Bullinger et al. 1999, Cohen et al. 1986, Evans et al. 1995, Maxwell and Evans 2000) have found that children chronically exposed to noise are less motivated when placed in achievement situations where task performance is contingent upon persistence. Cohen et al. (1986) also found that a second index of motivation, abrogation of choice, was affected by chronic noise exposure.
Children chronically exposed to noise, following a set of experimental procedures in quiet conditions, were more apt to relinquish choice over a reward to an experimenter, in comparison to their well-matched quiet counterparts. Haines et al. (2001a) could not replicate the effects of aircraft noise on puzzle persistence in elementary school children, although they administered the task in small groups rather than individually. Perceived control is at the heart of the theorising about noise after-effects. When the noise-exposed person perceives that (s)he has control over the noise exposure or noise source, the motivational after-effects vanish. Thus, we cannot really expect that persons who freely expose themselves to music from PMPs will lose any motivation just because of that. 3.9.1.4. Lasting after-effects on cognition from listening to PMPs No directly relevant study of lasting after-effects (effects that last after the cessation of noise exposure) of listening to PMPs on memory, learning, attention or other facets of cognition has been located in the international research literature. Studies of lasting cognitive effects from involuntary exposure to chronic aircraft and road traffic noise (Hygge et al. 2002, Stansfeld et al. 2005) have indicated impaired memory and learning with increased noise levels. It is questionable, though, whether those studies can validly be stretched to make any inference about voluntary, non-chronic exposure to music. And even if the studies of chronic noise and cognition are in some ways applicable to PMP listening, they cannot state in any detail how long (in years) the chronic noise must be present to result in impaired cognition, or whether this cognitive impairment will be permanent. For instance, in a study around the Munich airport (Hygge et al. 2002), children chronically exposed to noise at the old airport, who were lagging behind their quiet control group on memory and language performance, recovered from their deficits within 18 months after the airport was closed down. Thus, there does not seem to be sufficient research on PMPs to conclude anything about long-lasting effects on cognition, and the available evidence from research on other noise sources is not detailed enough to give any strong indications about exposure duration and the permanence of cognitive deficits. 3.9.2. Other Effects The obvious beneficial effect of listening to PMPs is indulging in a preferred activity, which is also the intended outcome. As long as this activity does not interfere with intended or required task performance, there should be no need to restrict listening to PMPs. Although there is not much relevant research, the little there is points to children having somewhat better sleep than adults. Lukas (1972) stated that children are not as easily awakened by noises as adults are. Öhrström et al. (2006) compared children aged 9-12 years with their parents in a road traffic study and reported that for parents there was a significant exposure-effect relationship between noise and several self-reported sleep parameters, but this relationship was less marked for children. 3.9.2.1. Cardiovascular and other physiological effects Twelve studies found some association between increased blood pressure and noise-induced hearing loss (Pyykkö et al. 1981, Lang et al. 1986, Pyykkö et al. 1987, Verbeek et al. 1987, Milković-Kraus 1990, Talbott et al. 1990, Solerte et al. 1991, Starck et al. 1999, Souto Souza et al. 2001, Toppila et al. 2001, Narlawar et al.
2006, Ni et al. 2007). In contrast, eleven other studies did not find such an association (Lees and Roberts 1979, Willson et al. 1979, Ickes and Nader 1982, Kent et al. 1986, Gold et al. 1989, Kontosić et al. 1990, Tarter and Robins 1990, Hirai et al. 1991, Garcia and Garcia 1993, Zamarro et al. 1992, Barberino 1995). Overall, the two groups of positive and negative studies are quite comparable in sampling and other methodologies. It must be noted, however, that the positive findings report moderate average differences, sometimes restricted within studies to sub-groups (such as only the more exposed, the younger subjects, or those who also smoked) showing altered blood pressure. The question of causality remains open, the cardiovascular differences having simply been observed as concomitant. Two studies (Tomanek 1975, Dengerink et al. 1982) produced experimental temporary threshold shifts which were found to be related to altered cardiovascular parameters; however, the physiological processes underlying temporary and permanent threshold shifts are known to be notably different. A recent extensive review by Babisch (2006), dealing specifically with the effects of exposure to road or aircraft noise on blood pressure, hypertension and ischaemic heart disease, concludes that there is no clear evidence of increased blood pressure, whereas for aircraft noise (but not road noise exposure) most recent studies (Babisch 2006) indicate some significant relationship. Concerning ischaemic heart disease, more recent studies also suggest a trend towards increased risk as compared with previous studies. Exposing oneself to music from a PMP is a matter of personal choice of leisure activity. Harmful, lasting and irreversible non-auditory effects of excessive listening to PMPs can be expected in three areas: (1) cardiovascular effects, (2) cognition, and (3) distraction and masking effects. Cardiovascular effects, in particular increases in blood pressure, build up and accumulate over time when there is not enough silent time in between noise exposures to recover. However, we do not have sufficient evidence to state that music from PMPs constitutes a risk for hypertension and ischaemic heart disease in children and young adults. Effects on cognition (memory and learning) of excessive sound exposure have been shown from acute noise exposure and from chronic noise exposure. Noise exposure for 15 minutes at 66 dB(A), and for aircraft noise down to 55 dB(A), has been shown to cause impaired learning and memory of a text. We have no study stating that the same is true for music, but we also have no reason to believe that music should be substantially less harmful to cognition than aircraft noise, road traffic noise or speech noise. Thus, listening to music from a PMP while at the same time trying to read a text and learn from it will hamper memory and learning. This learning impairment has been shown at fairly short (15 min) exposure times and at moderate sound levels (55-65 dB(A)). Prolonged exposure to chronic aircraft noise has been shown to impair cognition in children, but there is also one indication that children may recover from the noise-induced cognitive deficit when the noise exposure stops. We do not as yet have a sufficient scientific basis to assume that excessive voluntary PMP listening leads to lasting and irreversible cognitive and attention deficits after the cessation of the noise. Source & ©: SCENIHR
The mortality experience of a population is best represented by a life table. A life table is the life history of a hypothetical cohort of persons which, over a period of time, gets depleted systematically because of the death of its members until all the persons are dead. In other words, a life table can be defined as "a summary presentation of the death history of a cohort". The credit for preparing the first life table goes to John Graunt, who published a rudimentary life table based on the analysis of the 'Bills of Mortality' in 1662. Thereafter, several scholars have contributed towards its improvement. The concept of a life table is very simple. Let us take a cohort of babies born at a particular time to be P0. This group will experience depletion due to the deaths of its members at various ages until all of them have died. Thus, at the end of each successive year, the size of the cohort will be reduced to P1, P2, P3, ... and finally Pω, where ω is the maximum length of life and Pω is equal to zero. This sequence P1, P2, P3, ..., Pω describes the attrition in a cohort. A life table is the summary of this gradual process of attrition in a cohort over time. A life table so constructed is called a cohort or generation life table. However, in a real-life situation, in view of the length of the life span of a cohort, it is not possible to obtain the actual sequence corresponding to P1, P2, P3, ..., Pω. A solution to this problem is to take a hypothetical cohort and subject it to the age-specific death rates prevailing in a population at a particular time. Such a life table is known as a current life table or period life table. Thus, life tables can be grouped into two categories, namely a current or period life table, and a cohort or generation life table. While the former is based on current mortality experience, the latter depicts the actual mortality experience of a birth cohort. The construction of a cohort or generation life table requires the collection of data over a very long period. The collection of such data is almost impossible in real-life situations, and this restricts the utility of such life tables. The current life table is, therefore, more commonly used in any population analysis. The present discussion is also confined to the current life table only. Life tables can further be grouped into complete life tables and abridged life tables. A life table based on single-year age data is called a complete life table. Obviously, a complete life table becomes very clumsy and unmanageable. On the other hand, a life table based on broad age groups, say 5- or 10-year interval data, is more concise, easier to construct and is the most commonly used life table in any population analysis. Such a life table is called an abridged life table. As the mortality experiences of males and females in a population differ from each other, separate life tables are usually constructed for the two sexes. The construction of a life table is based on certain assumptions. A life table is customarily constructed for a hypothetical cohort of 100,000 newborn babies. This is called the radix of the life table. The radix is assumed to be closed to migration; it gets depleted only through the death of its members. A life table population thus resembles a stationary population where births and deaths are equal. The members of the cohort die according to a given schedule of age-specific death rates, and there is no periodic fluctuation in the death schedule due to random factors.
A life table is, therefore, a deterministic model. And, finally, the number of deaths, barring the few early years, is supposed to be uniformly spread over a year. Columns of a Life Table: As the name suggests, a life table is usually presented in a tabular form consisting of different columns. The reader will note that all these columns are interrelated, and once a crucial column is known, the rest of the columns can be generated from it. A brief account of these columns and their functional relations is given below (also see Table 9.1): Age x to x + n: The first column of a life table relates to age, represented by x. Age here means 'exact age'. In an abridged life table it is expressed as 'x to x+n', where n is the age interval. nqx is the probability of dying of a person in the age group 'x to x+n'. When the age interval is 1 year it is denoted as qx. In a current life table this is the crucial column. The values of this column are obtained from the age-specific death rates of the population. npx is the probability of survival of a person between the ages x and x + n. A person will either survive or die, hence npx is equal to 1 − nqx. Since npx is not required in the generation of other columns, it is generally not included in most life tables. lx is the number of persons surviving at the beginning of age x. This column starts with l0, the size of the birth cohort, and undergoes decline through deaths at each subsequent age of life. The value at the next age, lx+n, is obtained by subtracting the number of deaths in the age group from the corresponding lx. In other words, lx+n = lx − ndx, or equivalently lx+n = lx · npx (9.8). In the case of a cohort or generation life table, this column is already known and the rest of the columns are generated from it. ndx is the number of deaths in the age group 'x to x+n'. It is obtained in the following manner: ndx = lx · nqx (9.10). nLx is the person-years lived by the lx persons in the age group 'x to x + n'. This column is equivalent to the population and hence it is called the life table population. Tx is the total number of years lived by the cohort after exact age x, and is obtained by cumulating the nLx column upward from the last row. ex is the end product of a life table. It is the average number of years a person aged x years is expected to live. This column is worked out in the following manner: ex = Tx / lx (9.11). Life expectancy at birth is thus denoted by e0. It is a summary measure of mortality conditions in a population as a whole. It has been found that life expectancy, except for the early age groups in a life table, tends to decline with increasing age. With a somewhat greater risk of death at age 0, life expectancy is lower at this age than at age 1. As noted earlier, in the construction of a life table nqx is the crucial column, and once this column is known, the columns corresponding to ndx and lx can be generated. It has also been noted that the values of nqx are approximated from age-specific death rates. Thus, all that is needed for the construction of a life table is data on age-specific death rates in the population concerned. It should be noted that while age-specific death rates relate to the mid-year population (see equation 9.3), nqx as a probability relates to the population at the beginning of the age interval. Under the assumption of a linear distribution of deaths over the age interval, nqx is calculated as under: nqx = 2n · nmx / (2 + n · nmx) (9.12)
where nmx is the age-specific death rate in the age group x to x + n, and n is the age interval. This formula can be used for all age groups, including the 1-4 years age group (Woods, 1979). For the probability of dying at age '0', i.e., q0, however, the suggested formula is: q0 = 2 · m0 / (2 + m0) (9.13). In the last row of the column, since all the survivors at the beginning of the age group will die in due course of time, the value of the probability of dying is equal to 1. Once the probability of dying has been obtained, lx and ndx can be systematically generated from top to bottom using equations 9.8 and 9.10 respectively. Under the assumption of a uniform distribution of deaths over the age interval, Lx is the mid-year population, i.e., Lx = (lx + lx+1) / 2, in a life table based on single-year data. However, the assumption of uniform mortality is not applicable to the first year of life. Therefore, a variety of 'separation factors' are employed to weight what would normally be the average of l0 and l1. The suggested formula is: L0 = 0.3 · l0 + 0.7 · l1 (9.14). It should, however, be noted that these weights are not applicable universally. Keeping in mind the mortality experiences, different weights are suggested for different populations. For the age groups beyond the first year of life, a uniform weight of 0.5 is generally used in the case of a complete life table. In an abridged life table, the values of the subsequent nLx are obtained in the following manner: nLx = (n / 2) · (lx + lx+n) (9.15). Note that this is similar to the weight of 0.5 used in the case of a complete life table. As noted earlier, a life table generally terminates with an open-ended interval, for instance 70+ or 80+. The nLx value corresponding to the last row, say for '70 years and above', can be approximated in the following manner: ∞L70 = ∞d70 / ∞m70 (9.16), where ∞d70 is the number of deaths in the age group 70 and above, and ∞m70 is the age-specific death rate of that age group. And finally, the expectation of life (ex), the last column of the life table, can be generated using equation 9.11. Table 9.1 shows a life table of females in India based on the age-specific death rates by sex for the year 1998. The above-discussed procedure for the construction of a life table is based on the assumption of linearity in the distribution of deaths. This assumption is, however, not always empirically acceptable. For the construction of a life table, scholars have therefore suggested several alternative procedures. It should, however, be noted that all of them suffer from one defect or another (Ramakumar, 1986:85). We limit our discussion to two of them, which give better results and are widely used in the construction of life tables. Reed and Merrell proposed a method in 1939 which is simple to calculate and gives fairly accurate results. They suggested the following formula to arrive at the nqx values: nqx = 1 − exp[−n · nmx − a · n³ · nmx²] (9.17), where the value of 'a' is taken as 0.008, which gives a good fit for age intervals 1 to 10 and for ages 0 to 80. Reed and Merrell also constructed a series of tables of nqx values corresponding to different values of n and age-specific death rates (Shryock, 1976). For the values of …, Reed and Merrell suggested the following equation:
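The column-by-column procedure described above is easy to verify with a short script. The sketch below is a minimal illustration, not taken from the source text: it builds an abridged life table from a made-up schedule of age-specific death rates, using equations 9.12/9.13 for nqx, 9.10 and 9.8 for ndx and lx, 9.14-9.16 for nLx, and 9.11 for ex. All input rates (and the 0.3/0.7 separation factors applied to the first year) are assumptions chosen only to show the mechanics.

```python
# Minimal sketch of an abridged life-table calculation (hypothetical input rates).
# Columns follow the text: nmx -> nqx (eqs. 9.12/9.13) -> ndx, lx (eqs. 9.10/9.8)
# -> nLx (eqs. 9.14-9.16) -> Tx -> ex (eq. 9.11).

# (start age, interval n, age-specific death rate nmx); None marks the open-ended last group.
rates = [
    (0, 1, 0.060), (1, 4, 0.010), (5, 5, 0.003), (10, 5, 0.002),
    (15, 5, 0.003), (20, 10, 0.004), (30, 10, 0.006), (40, 10, 0.010),
    (50, 10, 0.020), (60, 10, 0.050), (70, None, 0.120),
]

radix = 100_000            # hypothetical cohort of newborns (the radix)
lx = float(radix)
rows = []

for age, n, m in rates:
    if n is None:                        # open-ended interval: all survivors die here
        q = 1.0
    elif age == 0:
        q = 2 * m / (2 + m)              # eq. 9.13
    else:
        q = 2 * n * m / (2 + n * m)      # eq. 9.12
    d = lx * q                           # eq. 9.10
    if n is None:
        L = lx / m                       # eq. 9.16: person-years in the open interval
    elif age == 0:
        L = 0.3 * lx + 0.7 * (lx - d)    # separation factors as in eq. 9.14 (illustrative weights)
    else:
        L = n / 2 * (lx + (lx - d))      # eq. 9.15
    rows.append({"age": age, "lx": lx, "qx": q, "dx": d, "Lx": L})
    lx -= d                              # eq. 9.8: survivors entering the next age group

# Tx cumulates nLx upward from the last row; ex = Tx / lx (eq. 9.11).
T = 0.0
for row in reversed(rows):
    T += row["Lx"]
    row["ex"] = T / row["lx"]

for row in rows:
    print(f"age {row['age']:>3}: lx={row['lx']:9.0f}  qx={row['qx']:.4f}  ex={row['ex']:5.1f}")
```

Running it prints one row per age group; the ex value in the first row is the life expectancy at birth (e0) implied by the assumed rates.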
Chapter 12 ~ The Arthropods The three largest groups within the Phylum Arthropoda are the insects (moth larva), crustaceans (barnacle), and arachnids (spider). Introduction to the Arthropods Arthropods (Phylum Arthropoda) constitute the largest phylum of animals, and include the insects, arachnids (e.g., mites, spiders), crustaceans (crabs, lobsters, shrimp), and other similar creatures. Between 75 and 80% of all organisms on planet Earth are arthropods—over a million modern species are known, and the fossil record reaches back to the early Cambrian. However, species in the world's tropical forests remain largely undiscovered; Thomas (1990) estimated that perhaps 6 to 9 million species are yet to be discovered in this environment alone. Arthropods are common in all environments, and include symbiotic and parasitic forms. They range in size from microscopic plankton (~0.25 mm) up to forms several metres long. Its class Insecta alone is thought to include almost 1.4-1.5 million species. Because this group is so large, we will devote the next several chapters to the various subgroups: the subphyla and classes of arthropods. - Read Arthropods (links need not be pursued at this time) Because of their hard exoskeleton, arthropods tend to make excellent fossils. A particularly large group of arthropods known as trilobites is known only from the fossil record, flourishing in Cambrian seas and into the lower Palaeozoic. The last of the trilobites disappeared at the end of the Permian. - Thomas, C. D. 1990. Fewer species. Nature, 347: 237.
An uncertain future for our living, blue planet A new report on the health of the ocean finds that marine vertebrate populations declined by 49 percent between 1970 and 2012. WWF's Living Blue Planet Report tracks 5,829 populations of 1,234 mammal, bird, reptile, and fish species through a marine living planet index. The evidence, analyzed by researchers at the Zoological Society of London, paints a troubling picture. In addition to the plummeting numbers of marine vertebrates, populations of locally and commercially fished species have fallen by half, with some of the most important species experiencing even greater declines. These findings coincide with the growing decline of marine habitats: mangroves are being lost at three to five times the rate of forests overall, coral reefs could be lost across the globe by 2050, and almost one-third of all seagrasses have been lost. Climate change is one of the major drivers causing the ocean to change more rapidly than at any other point in millions of years. The oceans store huge quantities of energy and heat, but as the climate responds to increasing carbon emissions, the exchange intensifies. This may result in extreme weather events, changing ocean currents, rising sea temperatures, and increasing acidity levels—all of which aggravate the negative impacts of overfishing and other major threats such as habitat degradation and pollution. Finding solutions for saving oceans Though the challenge seems immense, it is possible for governments, businesses, communities and consumers to secure a living ocean. To reverse the downward trend we need to preserve the ocean's natural capital, produce better, consume more wisely, and ensure sustainable financing and governance. Our ocean needs a strong global climate deal, and work is already underway: President Obama and leaders of the Arctic nations recently pledged to work together to boost strong action on climate change. But more needs to be done to prioritize ocean and coastal habitat health.
Rocks explored by the Rover (image: JPL/NASA) This diagram shows the path traveled by the Mars Pathfinder Rover during its 80-day mission. In addition to sampling the atmosphere and the Martian soil during its travels on the Martian surface, the Rover took measurements at a number of different rocks. These measurements helped scientists classify the rocks, which were scattered all around the Pathfinder landing site. The Rover visited many rocks, and scientists gave them names like Pop Tart, Ender, Mini-Matterhorn, Wedge, Baker's Bench, Flat Top, and the Broken Wall. We will not show you all the rocks visited by the Rover, but will discuss a representative sample. Rocks not in this list can be viewed in the MPF Image Archive.
The history of Juliaca (from the Quechua word xullasca, meaning "covered in snow") goes back thousands of years. The earliest evidence of human presence found so far, from hunters and gatherers, dates to around 10,000 BC. The reason for this, it is believed, was the weather, less harsh in those years, which contributed to the proliferation of many animals such as vizcachas, deer and camelids (llamas, alpacas, vicuñas), as well as birds, all of these, apparently, favorite foods of the new inhabitants. Then, with the discovery of pottery, a new culture emerged: Qaluyo, the first of the area, later followed by cultures like Pukara (around mount Waynarroque) and Tiwanaku; by the 3rd and 4th centuries AD, Qaluyo had become a society now known as the Waynarroque culture. The Waynarroque dedicated themselves to agriculture, livestock, fishing and hunting. Approximately from the 7th to the 10th century, Tiwanaku's colonial state, the most important pre-Inca civilization, took control of a great part of the plateau that would later become the Kollasuyo. The people of Juliaca, however, despite being geographically under the domain of Tiwanaku, did not receive much of its cultural influence (which allowed them to develop almost independently), because Juliaca belonged to the Aymara kingdom of Qolla. The Incas came later. Guided by Pachacutec, they fought, and won, against the Qolla army ruled by Chuchi Capac. The victories occurred in Ayaviri and Pukara, which from that moment became part of Tawantinsuyo's territory. The Qollas rebelled on many occasions, but by 1474 they were subjugated and later relocated to settlements, where they were found by the Spanish who came with the conquest around the 16th century and who transformed the town of Xullaca into Juliaca, incorporating it into the Viceroyalty of Buenos Aires.
Uranus, named after the Greek god of the sky, was discovered on March 13, 1781 (the first planet found using a telescope) by British astronomer William Herschel, who originally believed it to be a comet. It is the seventh planet from the Sun, orbiting at an average distance of 2,876,679,082 km or 19.23 AU, the third largest with a diameter of 51,118 km, and the fourth most massive. Voyager 2 gathered valuable information as it passed by on its way to Neptune; the spacecraft's closest approach was on January 24, 1986. Uranus, like Neptune, is often referred to as a gas giant, but like its neighbor a more appropriate title would probably be ice giant. The atmosphere is thin (as compared to the true gas giants Jupiter and Saturn): mostly hydrogen (83%), helium (15%), and methane, acetylene and other hydrocarbons (2%), with a mean cloud temperature of -193 °C. Its interior is composed primarily of water, ammonia and methane ices surrounding a small rocky core. Uranus has an axial tilt of 97.77 degrees, which places it on its side with respect to the plane of the ecliptic. (The poles are where most planets have their equators.) Uranus has a faint ring system composed of eleven darkly colored narrow bands: 1986 U2R, Six, Five, Four, Alpha, Beta, Eta, Gamma, Delta, 1986 U1R, and the furthest out, Epsilon, which orbits at a distance of 51,140 km from the planet's center. To date Uranus has 27 known moons: Cordelia, Ophelia, Bianca, Cressida, Desdemona, Juliet, Portia, Rosalind, Mab, Belinda, Perdita, Puck, Cupid, Miranda, Francisco, Ariel, Umbriel, Titania, Oberon, Caliban, Stephano, Trinculo, Sycorax, Margaret, Prospero, Setebos and Ferdinand. The Greek god of the sky is actually Ouranos; Uranus is the Latinised form of the name. A planetary ring is a flat, disk-shaped band composed of rock or ice dust, larger rocks, boulders and ice chunks which circle in a planet's equatorial plane.
When shopping for greenery, it is imperative to select species that will thrive in Manitoba’s temperate climate. Every nursery in the province carries a variety of plants that have been tried and tested to survive our extreme temperature swings. Hardiness refers to a plant’s ability to survive in adverse growing conditions. Hardiness Zones are geographic regions around the world that are able to support certain vegetation types. These zones are defined by temperature ranges in increments of 5°F. There are a total of 10 Hardiness Zones worldwide, ranging from Zone 0 through Zone 9, with further Sub Zones designated alphabetically (e.g. Zone 1a, Zone 1b). Zone 0 has the coldest temperatures, and Zone 9 the warmest. Manitoba ranges from Zone 1b in the North, to a small area of Zone 4a in the South. Winnipeg falls into Zone 3a, with a cold temperature extreme of -40°F. This means that plants rated for Hardiness Zones 3a or lower can survive in Winnipeg. Some plants rated for Zone 4a may survive as well, depending on certain variables. For more information, please visit: Veseys. “Manitoba.” Online Image. 2000. https://www.veseys.com/ca/en/learn/reference/hardinesszones/manitoba.
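To make the zone comparison concrete, here is a small illustrative sketch that is not part of the source text: it encodes the rule that a plant rated for the location's zone or a colder one should survive. The plant names and their zone ratings are hypothetical; only the Winnipeg zone (3a) comes from the passage above.

```python
# Minimal sketch: does a plant's hardiness rating allow it to survive at a location?
# Zone labels like "3a"/"3b" are ordered: a lower number (and "a" before "b") means colder-hardy.

def zone_key(zone: str) -> tuple[int, int]:
    """Convert a zone label such as '3a' into a sortable (number, subzone) pair."""
    number = int(zone[:-1])
    subzone = 0 if zone[-1].lower() == "a" else 1
    return (number, subzone)

def survives(plant_rated_zone: str, location_zone: str) -> bool:
    """A plant rated for the location's zone or a colder one should survive there."""
    return zone_key(plant_rated_zone) <= zone_key(location_zone)

location = "3a"  # Winnipeg, per the text
# Hypothetical plant ratings, for illustration only
for plant, rating in [("Siberian crabapple", "2a"), ("Sugar maple", "4a")]:
    print(plant, "->", "likely hardy" if survives(rating, location) else "risky")
```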
What invention combined the tools of wine-making and the skills of goldsmithing? In this episode of Johan Norberg's New and Improved, Norberg takes us back to the mid-15th century and the invention of one of history's most revolutionary devices: Johannes Gutenberg's printing press. Gutenberg revolutionized printing with movable metal type, which pressed ink onto damp paper. This enabled books to be mass-produced rather than hand-copied, lowering the cost of a book from roughly six months' wages to six days'. By making books accessible across Europe, and history's great thinkers available to the common man, the printing press revolutionized the spread of ideas, encouraged debate and discovery, and unleashed the Enlightenment and the Scientific Revolution.
THE RUSSIAN REVOLUTION ___________________________ 1. How could the Russian Revolution have been avoided? What factors could have been changed that might have stemmed the call for revolution? Or, was the Russian Revolution inevitable? Why? The Russian Revolution, started by Lenin and his followers, was a rebellion in 1917 that forced those in power to respond to the needs of the lower class. For instance, many citizens feared for their own survival because basic necessities were scarce following an early drought. Furthermore, the czar at the time was unfit for his position and made disastrous decisions. For example, when Czar Nicholas entered World War I, he sent untrained troops into countless failed battles, which cost enormous numbers of lives (paragraph 23). The Russian Revolution could have been halted or prevented if, in earlier times, Russia had been given a czar with more experience. Furthermore, the revolutionaries wanted to rise up against the decisions made by their tragic excuse of a czar, Nicholas II. As the idea of a revolution gained followers, the movement grew greatly in the hope of creating change. These actions were right because they supported what the people needed, which was equal treatment and protection not only for people of higher authority but for everyone. Once Lenin gained control of Russia as its new leader, great changes were made. As promised, Lenin followed through with the changes he wanted and made them a reality in Russian society. In some circumstances, Lenin made accusations, won wars decisively, and was treated as a threat out of fear that he might start a World War III. Although he was treated as a powerful and dangerous figure, Lenin also brought improvements to life in Russia after his revolution. Although Russia was once again in a terrible position for war, they fought in the First World War, and the country and its people faced further hardships. The people began to revolt, took over the government, and then assassinated Nicholas II's entire family. Post-WWI, Russia was still not industrialized, was suffering economically and politically, and was in need of a leader after Lenin's death. "His successor, Joseph Stalin, a ruthless dictator, seized power and turned Russia into a totalitarian state where the government controls all aspects of private and public life." Stalin showed these traits by using methods of enforcement, state control of individuals and state control of society. The journey of Stalin begins now. Question: Evaluate the rule of Stalin in the Soviet Union, taking into consideration the changes made and the methods used. Russia's turbulent start in the 20th century was characterized by its involvement in the First World War, which was the critical factor in the Bolsheviks' seizure of power in the October Revolution of 1917. Vladimir Lenin rose to power and led Russia toward a communist nation with extreme centralization and doctrinaire socialism, but the Kronstadt Rebellion of March 1921 forced Vladimir Lenin to begin the New Economic Policy in order to stay in power. The policy allowed private ownership and management of agriculture, trade, and small businesses. However, upon Lenin's death in 1924, Joseph Stalin rose as the leader of the Soviet Union. The Russian Revolution resulted in the overthrow of the country's monarchy and the establishment of the Soviet Union.
It started off with many protests and strikes that forced Tsar Nicholas II out of power. As a result, a provisional government was put in place, but it was weak and ineffective, so the Bolsheviks took control and established a socialist government. The Bolshevik Revolution was caused by a combination of an unstable and corrupt monarchy, unfair treatment of the populace, and a lagging industry, which eventually led to the creation of the USSR. Karl Liebknecht once said, "The Russian revolution was to an unprecedented degree the cause of the proletariat of the whole world becoming more revolutionary." The revolution was a result of tension and disaffection among the Russian people. The Russian Revolution was also connected with Russia's withdrawal from WWI because of the destruction the war brought to the Russian economy. The Russian Revolution was caused by hard labor, unprepared leaders, and Russia's industrial backwardness. The Russian Revolution of 1917 marked one of the most radical turning points in the country's 1,300-year history and established the Soviet Union as a Communist state. Russia in the 19th century was a massive empire stretching from Poland to the Pacific. Ruling such a massive country was quite the undertaking, especially because Russia's long-term problems were approaching the surface. In 1917, these problems finally produced a revolution, which completely wiped the old system away. The Russian Revolution was a rebellion carried out by the Russian people against the Russian elite. The Russian Revolution of 1917 marked the end of the Romanov dynasty and centuries of Russian Imperial rule. During the Russian Revolution, the Bolsheviks, led by leftist revolutionary Vladimir Lenin, seized power and destroyed the tradition of czarist rule. Civil war broke out in Russia between the Red and White Armies. The Red Army fought for Lenin's Bolshevik government. The White Army represented a large group of monarchists, capitalists and supporters of democratic socialism. Throughout Russia's history, there have been many rulers who tried to manage the country in different ways. Even though all of these rulers had their own unique ways of ruling, all of them were seen as terrible by the people. This eventually reached a tipping point for the Russian citizens, and the Russian Revolution took place. The goal for these people was to gain freedom from their oppressive czar, but instead they got an even worse leader. Joseph Stalin was the leader of the Soviet Union from 1929 to 1953, and he was known for his ability to strike fear into people. To the Russian people, this was their only way to meet their goals, because if they spoke out against the Tsar, they would have been. Both revolutions had workers march on a palace: in France there was the March on Versailles to the Palace of Versailles, and in Russia there was Bloody Sunday, where workers marched on the Winter Palace. They both had clusters of riots because of the increasing price of bread. A few differences between the French Revolution and the Russian Revolution's radical uprisings are that during the French Revolution, France declared war on Austria, and Prussia joined Austria, while during the Russian Revolution, Russia had a civil war. There were more panicked uprisings during the French Revolution because of rumors and the lack of technology for Czar Nicholas II. In 1917 the long trial of the Russian Revolution fell upon the citizens and serfs of Russia.
The Russian Revolution was influenced by many people, but the country especially suffered from the choices of two men: Czar Nicholas and Vladimir Lenin. Both leaders had a different impact on the country, but Czar Nicholas's poor leadership and stubbornness were the main contributors to the start of the Russian Revolution. By doing this, they overthrew the poorly run government, as the Russian people were in favour of a new system that would work in their favour. The Russian Revolution was triggered by social, political and economic problems that, combined, caused the Russian people to rebel. This revolution was triggered by the poverty of the Russian people, the losses from the wars, the scheming of Rasputin and the failures of the Tsar, Nicholas II. The social causes of the Russian Revolution arose from centuries of oppression of the lower classes.
Sports and Visual Skills When we think of athletes, we probably think of speed and strength first, but what about vision? Strong visual skills are just as important to an athlete’s success as strong muscles. Athletes have to be able to process visual information very quickly so that they can respond to it. The cool thing is that, like muscles, some visual skills can be improved with practice. What Visual Skills do Athletes Use? Here are some of the most essential visual skills that help athletes perform at the top of their game: - Color vision. It’s a lot easier to recognize the difference between teammate and opponent when you can see the different jersey colors! - Depth perception. Athletes need to be able to judge the distances of objects and other players. - Dynamic visual acuity. Beyond just having clear vision, athletes need to be able to see fast-moving objects clearly too. - Eye tracking. Athletes also need to be able to track fast-moving objects with their eyes instead of jeopardizing their balance by turning their heads or torsos. - Eye-hand-body coordination. Being able to adjust the position of your body, hands, and feet based on what you see is essential for succeeding in sports. - Peripheral vision. Athletes need to be able to react to what’s happening at the edges of their vision, not just the things happening straight ahead. - Visual concentration. An athlete needs to be able to focus on what matters even when there are a lot of distractions trying to draw their eyes. - Visual reaction time. The faster an athlete can process and respond to visual information, the faster they can get into position. - Visualization. Athletes need to be able to picture different scenarios to prepare themselves for potential obstacles and opportunities — all while focusing on the events of the moment. - Visual memory. An athlete must keep a great deal of visual information in their heads while playing, including the positions of other players based on where they saw them last. You Can Train Your Visual Skills on the Go You won’t need a gym to train several of these visual skills. A simple exercise for depth perception, for example, is to hold a pen at arm’s length and repeatedly put the cap on it. You could also hold a small pebble at arm’s length and try dropping it into a drinking straw. A great way to train peripheral awareness is by turning our heads to the side while we use a computer or watch TV. We can improve the flexibility of our eyes by switching rapidly between focusing on something close and something far away. To practice dynamic visual acuity, try cutting out different sized letters from a magazine and taping them to a turntable. Then see how well you can identify the letters when it spins at different speeds. Keep Your Eye on the Ball, Athletes! Keeping our eyes sharp (and healthy) is something we might overlook (no pun intended) when thinking about staying in shape for sports. If you want to learn more about how you can improve your visual skills or if you’re experiencing any changes in your eyesight, get in touch with us!
Published: 01.07.2019 Updated: 07.07.2020

After an extended courtship, eggs are fertilized by the male as they are laid by the females. On contact with water, the eggs become sticky and clump into a nest on rocky bottoms. The male lumpfish cares for the eggs by aerating and guarding them against predators. After laying their eggs, the females seem to leave the spawning ground rapidly. Females may release two egg batches, and spawning occurs over a 4-month period from February to May. Each female spawns about 1/7 of her body weight as eggs. Eggs from different females have different colors, so the clump of eggs guarded by one male may be green, yellow, and red.

The small lumpfish grow up in the kelp forest, hiding and attaching themselves to the kelp with their suction disc, where we can see them as small buds. When they are a year old, and slightly larger than a golf ball, they swim out into the open sea. Here they feed on plankton for 2–4 years before they wander back to the coast to spawn. The species is found throughout the eastern Atlantic Ocean, North Sea, Baltic Sea and Barents Sea. Lumpfish may travel great distances in the ocean, and it is uncertain whether there are several distinct populations, and how large these are. In Norway, we estimate that the main population spawns in Nordland, Troms and Finnmark, but there are lumpfish spawning along the rest of the coast.

Use as cleanerfish

In recent years, lumpfish have been used to rid aquaculture salmon of the parasitic copepod, the salmon louse. Only juveniles are used, and these are also raised in aquaculture but from wild broodstock. As with all newly farmed species, there are some challenges, but lumpfish have a promising future as a cleanerfish. In contrast to wrasse, lumpfish are well adapted to low temperatures and can be used throughout Norway.

Status, advice, and fishery

Stock assessment is based on the lumpfish abundance in the Barents Sea and the northern part of the Norwegian Sea, where most of the fishing takes place. Lumpfish abundance was low up until 1997 and increased to a peak in 2006–2007. The increase appears to be a direct effect of the reduced quota in 1997, but it was also strongly correlated with increased temperature and an increased supply of Atlantic water. In recent years, lumpfish have shown a large spread north in the Barents Sea. The calculation for 2017 shows that the biomass of spawners is around 87,279 tons, which corresponds to 17,456 tons of roe. Under this scenario, and with the current quota of 4 tons, the catch rate remains around 1%. The advice of the Institute of Marine Research is that regulatory measures must ensure that the total quantity does not exceed approximately 400 tons of roe.

Calculation of lumpfish population

Data on lumpfish have been recorded during the Ecosystem Survey in the Barents Sea since 1965, and the collection has been standardized since 1980. Between 196 and 425 stations are trawled annually, and lumpfish are recorded during this survey. Since 2012, these data have been used in the assessment to calculate the abundance and biomass of lumpfish using a stratified swept-area index computed with StoX, a program developed by the Institute of Marine Research. The lumpfish biomass in the Barents Sea follows the temperature fluctuations, and both biomass and temperature have increased since the 1980s.
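To make the swept-area idea concrete, here is a minimal sketch of how a stratified swept-area index can be computed. It is not the StoX implementation; the stratum areas, station catches, and swept areas below are invented purely for illustration.

```python
# Minimal sketch of a stratified swept-area biomass index.
# All numbers are hypothetical; the real assessment uses survey data
# from the Barents Sea Ecosystem Survey processed with StoX.

# Each stratum: its total area (km^2) and its trawl stations,
# each station given as (catch_kg, swept_area_km2).
strata = {
    "stratum_A": {"area_km2": 50_000, "stations": [(12.0, 0.020), (8.5, 0.018), (15.2, 0.021)]},
    "stratum_B": {"area_km2": 80_000, "stations": [(3.1, 0.019), (5.4, 0.020)]},
}

def stratum_biomass_kg(area_km2, stations):
    # Mean catch density (kg per km^2 swept), scaled up to the whole stratum.
    densities = [catch / swept for catch, swept in stations]
    mean_density = sum(densities) / len(densities)
    return mean_density * area_km2

total_kg = sum(stratum_biomass_kg(s["area_km2"], s["stations"]) for s in strata.values())
print(f"Swept-area biomass index: {total_kg / 1000:.0f} tonnes")
```

In the actual assessment, survey stratification, catchability assumptions, and uncertainty estimates would also enter the calculation.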
- Honey Bee Life Cycle Worksheets. Honey bees, bumble bees, and all the other little black-and-yellow striped creatures busily going from one flower to the other collecting nectar fascinate every little nature observer. Capture that interest in learning with these honey bee life cycle worksheets.
- How Does a Bee Pollinate a Flower? Pollination Activity. A worksheet to be used with the Cheeto pollination activity: students eat Cheetos at one station and move to another, where they wipe their fingers on a paper to represent how bees pollinate. Students can use the worksheet to record observations and make an analysis.
- Printable Bee Coloring Pages. Flowers are the main food source for bees. Bees eat pollen from flowers, and in addition to pollen they also eat nectar, a sweet liquid produced by flowers that is useful as a source of energy.
- Butterflies, Hummingbirds, and Bees, Oh My! Pollinators. Students report on the kinds of flowers that were bee-pollinated and conclude whether those flowers had special characteristics (scent, a certain color, size, etc.). You can also have students infer why a certain pollinator prefers a particular plant. Students may need to make observations and do further research when getting back to the classroom.
- Reading Comprehension Worksheet and Kid's Fable: The Bee (K5 Learning). What is trapped in the flower? A bee. Why was it in the flower? It went to the flower for some honey. What do Lucy and her mother think about honey? They like honey. What will it do once it is let out? It can go to other flowers and get honey.
- Pollinating Flowers Worksheets and Activities (GreatSchools). Give your child a boost using our free printable worksheets.
- 20 Engaging Hands-On Activities Exploring Bees for Kids. Bees are important in the process of plant pollination. They help flowers grow and produce fruits, so it's important to teach our kids how important they are. We can do that by teaching them all about bees, and these hands-on bee activities will help you do just that.
- Bees Crafts, Activities, Lessons, Games, and Printables. Bee crafts, activities, games, and printables for preschool and kindergarten. What's the buzz all about? It's about bees! What would we do without these little creatures? They provide us with honey and they pollinate flowers, and they are like humans in many ways: bees have jobs, bees make bread, and bees like to dance.
- Bee Movie Worksheet (Ms. Krenn's 8th grade science class). "According to all known laws of aviation, there is no way that a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway, because bees don't care what humans think is impossible."

Bee Flower Worksheet. The worksheet is an assortment of 4 intriguing pursuits that will enhance your kid's knowledge and abilities.
The worksheets are offered in developmentally appropriate versions for kids of different ages. Adding and subtracting integers worksheets come in many ranges, including a number of choices for parentheses use. For cursive practice, you can begin with the uppercase letters and then move on to the lowercase ones. Handwriting for kids is rather simple to develop in this fashion, and if you're an adult who wishes to improve your handwriting, that can be accomplished too. So if you really wish to enhance your child's handwriting, explore the advantages of an intelligent learning tool.

Sometimes letters have to be adjusted to fit in a particular space. When a letter does not have any verticals, like a capital A or V, the first diagonal stroke is regarded as the stem. The connected and slanted letters will be quite simple to form once the many shapes are learnt well. Even something as easy as guessing the beginning letter of long words can help your child improve his or her phonics abilities.

Cursive writing is basically joined-up handwriting. Practice reading on your own as often as possible. Each handwriting lesson should start on a fresh new page, so the child has enough room to practice, and every handwriting lesson should begin with the alphabet. Handwriting is one of the most important learning needs of a child, and learning how to read isn't just challenging, but fun too.

The use of grids is vital in helping your child improve handwriting. Being able to modify the tracking helps fit more letters into a small space or spread out letters if they're too tight. Bear in mind that your first try may not bring anything relevant, but don't stop trying; once you are able to work, you might be surprised how much you get done. Perhaps you can enlist the aid of another person to encourage you or help you stay focused.

Remember, you always have to treat your child with great care, compassion, and affection to help him or her learn. You may also ask your child's teacher for extra worksheets. Your son or daughter will not just learn a different sort of font, but will also learn how to write elegantly, because cursive writing is quite beautiful to look at. A child with ADHD may find that his or her handwriting is affected. Accordingly, if children are taught to form the different shapes in a suitable fashion, it will enable them to compose the letters in a smooth and easy way.
Although it can be cute when a youngster says he "runned" on the playground, students need to understand how to use the past tense in order to speak and write correctly. If you would like to improve your son's or daughter's handwriting, it's obvious that you need to give them plenty of practice; as they say, practice makes perfect. Without phonics skills, it's almost impossible, especially for kids, to learn how to read new words. If you discover that your child is inattentive to his or her learning, especially when it comes to reading and writing, it is essential to begin working on ways to improve it. Use a student's name in every sentence so there's a single sentence for each kid. Because each child learns at his or her own rate, there is some variability in the age when a child is ready to learn to read. Teaching your kid to form the alphabet is quite a complicated practice.
“Music serves to enliven many an hour of sadness, or what would be sadness otherwise. It is an expression of the emotions of the heart, a disperser of gloomy clouds.” (Juliette Montague Cooke; Punahou)

Hawaiians devised various methods of recording information for the purpose of passing it on from one generation to the next. The chant (mele or oli) was one such method. Elaborate chants were composed to record important information, e.g. births, deaths, triumphs, losses, good times and bad. In most ancient cultures, the composing of poetry was confined to the privileged classes. What makes Hawai‘i unique is that poetry was composed by people of all walks of life, from the royal court chanters down to the common man.

“As the Hawaiian songs were unwritten, and adapted to chanting rather than metrical music, a line was measured by the breath; their hopuna, answering to our line, was as many words as could be easily cantilated at one breath.” (Bingham)

The Pioneer Company of missionaries (April, 1820) introduced new musical traditions to Hawai‘i – the Western choral tradition, hymns, gospel music, and Western composition traditions. Theirs was a tradition of strophic hymns and psalm tunes from late-18th-century America. The strophic form is one where different lyrics are put to the same melody in each verse. Later on, with the arrival of new missionaries, another hymn tradition was introduced: the gospel tune with verse-chorus alternation. (Smola)

The missionaries also introduced new instrumentation with their songs. Humehume (George Prince, son of Kauai’s King Kaumuali‘i) was given a bass viol or ‘Church Bass’ (like a large cello) and a flute that he had learned to play well. He returned to the Islands with the Pioneer Company. Later, church organs, pianos, melodeons, and other instruments were introduced to the Islands.

Bingham and others composed Hawaiian hymns from previous melodies, sometimes borrowing an entire tune, using Protestant hymn styles. In spite of the use of English throughout Hawaii, the Hawaiian language continues to be used in Bible reading and in the singing of hīmeni (hymns) in many Christian churches. Hīmeni still preserve the beauty of the Hawaiian language. (Smithsonian)

The first hymnal in the Hawaiian language was ‘Nā Hīmeni Hawaii; He Me Ori Ia Iehova, Ka Akua Mau,’ published in 1823. It contained 60 pages and 47 hymns. It was prepared by Rev. Hiram Bingham and Rev. William Ellis, a London Missionary Society missionary who was visiting.

On June 8, 1820, Rev. Hiram Bingham set up the first singing school at Kawaiaha‘o Church. He taught native Hawaiians Western music and hymnody. These ‘singing schools’ emphasized congregational singing with everyone actively participating, not just passively listening to a designated choir. By 1826, there were 80 singing schools on Hawai‘i Island alone. By the mid-1830s, church choirs began to become part of the regular worship. This choral tradition partially grew out of the hō‘ike, or examination, when the students being examined would sing part of their lessons.

“For more than 100 years, love of the land and its natural beauty has been the poetry Hawaiian composers have used to speak of love. Hawaiian songs also speak to people’s passion for their homeland and their beliefs.” (Hawaiian Music Museum)

Next time you and others automatically stand, hold hands and sing this song together, you can thank an American Protestant missionary, Lorenzo Lyons, for writing Hawai‘i Aloha – and his expression of love for his home.
Na Lani Eha

In 1995, when the Hawaiian Music Hall of Fame selected its first ten treasured composers, musicians, and vocalists to be inducted, ‘Na Lani Eha’ (The Royal Four) were honored as the Patrons of Hawaiian music. ‘Na Lani Eha’ comprises four royal siblings who, in their lifetimes, demonstrated extraordinary talent as musicians and composers. They were, of course, our last king, Kalākaua; his sister, Hawai‘i’s last queen, Lili‘uokalani; their brother, the prince, Leleiōhoku; and their sister, the princess, Likelike, mother of princess Ka‘iulani.

In August 2000, ‘Ka Hīmeni Ana’, the RM Towill Corporation’s annual contest at Hawai‘i Theatre for musicians playing acoustic instruments and singing in the Hawaiian language, was dedicated to missionary Juliette Montague Cooke, the Chiefs’ Children’s teacher and mother. John Montague Derby, Sr., who accepted this honor for the Cooke family, said, “(it is) with gratitude for the multitude of beautiful Hawaiian songs that we enjoy today which were composed by her many students.”

Planning ahead … the Hawaiian Mission Bicentennial – Reflection and Rejuvenation – 1820–2020 – is approaching (it starts in about a year). If you would like to get on a separate e-mail distribution on Hawaiian Mission Bicentennial activities, please use the following link:
Cultural anthropology is a branch of anthropology focused on the study of cultural variation among humans. It is in contrast to social anthropology, which perceives cultural variation as a subset of a posited anthropological constant. The umbrella term sociocultural anthropology includes both the cultural and social anthropology traditions.

Anthropologists have pointed out that through culture people can adapt to their environment in non-genetic ways, so people living in different environments will often have different cultures. Much of anthropological theory has originated in an appreciation of and interest in the tension between the local (particular cultures) and the global (a universal human nature, or the web of connections between people in distinct places/circumstances). Cultural anthropology has a rich methodology, including participant observation (often called fieldwork because it requires the anthropologist spending an extended period of time at the research location), interviews, and surveys.

The rubric cultural anthropology is generally applied to ethnographic works that are holistic in approach, oriented to the ways in which culture affects individual experience, or aim to provide a rounded view of the knowledge, customs, and institutions of a people. Social anthropology is a term applied to ethnographic works that attempt to isolate a particular system of social relations such as those that comprise domestic life, economy, law, politics, or religion, give analytical priority to the organizational bases of social life, and attend to cultural phenomena as somewhat secondary to the main issues of social scientific inquiry. Parallel with the rise of cultural anthropology in the United States, social anthropology developed as an academic discipline in Britain and in France.

One of the earliest articulations of the anthropological meaning of the term "culture" came from Sir Edward Tylor, who writes on the first page of his 1871 book: "Culture, or civilization, taken in its broad, ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society." The term "civilization" later gave way to definitions given by V. Gordon Childe, with culture forming an umbrella term and civilization becoming a particular kind of culture.

The rise of cultural anthropology took place within the context of the late 19th century, when questions regarding which cultures were "primitive" and which were "civilized" occupied the minds of not only Freud, but many others. Colonialism and its processes increasingly brought European thinkers into direct or indirect contact with "primitive others." The relative status of various humans, some of whom had modern advanced technologies that included engines and telegraphs, while others lacked anything but face-to-face communication techniques and still lived a Paleolithic lifestyle, was of interest to the first generation of cultural anthropologists. Anthropology is concerned with the lives of people in different parts of the world, particularly in relation to the discourse of beliefs and practices.
In addressing this question, ethnologists in the 19th century divided into two schools of thought. Some, like Grafton Elliot Smith, argued that different groups must have learned from one another somehow, however indirectly; in other words, they argued that cultural traits spread from one place to another, or "diffused". Other ethnologists argued that different groups had the capability of creating similar beliefs and practices independently. Some of those who advocated "independent invention", like Lewis Henry Morgan, additionally supposed that similarities meant that different groups had passed through the same stages of cultural evolution (see also classical social evolutionism). Morgan, in particular, acknowledged that certain forms of society and culture could not possibly have arisen before others. For example, industrial farming could not have been invented before simple farming, and metallurgy could not have developed without previous non-smelting processes involving metals (such as simple ground collection or mining). Morgan, like other 19th-century social evolutionists, believed there was a more or less orderly progression from the primitive to the civilized.

20th-century anthropologists largely reject the notion that all human societies must pass through the same stages in the same order, on the grounds that such a notion does not fit the empirical facts. Some 20th-century ethnologists, like Julian Steward, have instead argued that such similarities reflected similar adaptations to similar environments. Although 19th-century ethnologists saw "diffusion" and "independent invention" as mutually exclusive and competing theories, most ethnographers quickly reached a consensus that both processes occur, and that both can plausibly account for cross-cultural similarities. But these ethnographers also pointed out the superficiality of many such similarities. They noted that even traits that spread through diffusion often were given different meanings and functions from one society to another. Analyses of large human concentrations in big cities, in multidisciplinary studies by Ronald Daus, show how new methods may be applied to understanding people living in a globalized world shaped by the actions of extra-European nations, highlighting the role of ethics in modern anthropology.

Accordingly, most of these anthropologists showed less interest in comparing cultures, generalizing about human nature, or discovering universal laws of cultural development, than in understanding particular cultures in those cultures' own terms. Such ethnographers and their students promoted the idea of "cultural relativism", the view that one can only understand another person's beliefs and behaviors in the context of the culture in which he or she lived or lives. Others, such as Claude Lévi-Strauss (who was influenced both by American cultural anthropology and by French Durkheimian sociology), have argued that apparently similar patterns of development reflect fundamental similarities in the structure of human thought (see structuralism). By the mid-20th century, examples of people skipping stages, such as going from hunter-gatherers to post-industrial service occupations in one generation, were so numerous that 19th-century evolutionism was effectively disproved.

Cultural relativism is a principle that was established as axiomatic in anthropological research by Franz Boas and later popularized by his students.
Boas first articulated the idea in 1887: "...civilization is not something absolute, but ... is relative, and ... our ideas and conceptions are true only so far as our civilization goes." Although Boas did not coin the term, it became common among anthropologists after Boas' death in 1942, to express their synthesis of a number of ideas Boas had developed. Boas believed that the sweep of cultures, to be found in connection with any sub-species, is so vast and pervasive that there cannot be a relationship between culture and race. Cultural relativism involves specific epistemological and methodological claims. Whether or not these claims require a specific ethical stance is a matter of debate. This principle should not be confused with moral relativism. Cultural relativism was in part a response to Western ethnocentrism. Ethnocentrism may take obvious forms, in which one consciously believes that one's people's arts are the most beautiful, values the most virtuous, and beliefs the most truthful. Boas, originally trained in physics and geography, and heavily influenced by the thought of Kant, Herder, and von Humboldt, argued that one's culture may mediate and thus limit one's perceptions in less obvious ways. This understanding of culture confronts anthropologists with two problems: first, how to escape the unconscious bonds of one's own culture, which inevitably bias our perceptions of and reactions to the world, and second, how to make sense of an unfamiliar culture. The principle of cultural relativism thus forced anthropologists to develop innovative methods and heuristic strategies. Boas and his students realized that if they were to conduct scientific research in other cultures, they would need to employ methods that would help them escape the limits of their own ethnocentrism. One such method is that of ethnography: basically, they advocated living with people of another culture for an extended period of time, so that they could learn the local language and be enculturated, at least partially, into that culture. In this context, cultural relativism is of fundamental methodological importance, because it calls attention to the importance of the local context in understanding the meaning of particular human beliefs and activities. Thus, in 1948 Virginia Heyer wrote, "Cultural relativity, to phrase it in starkest abstraction, states the relativity of the part to the whole. The part gains its cultural significance by its place in the whole, and cannot retain its integrity in a different situation." Lewis Henry Morgan (1818-1881), a lawyer from Rochester, New York, became an advocate for and ethnological scholar of the Iroquois. His comparative analyses of religion, government, material culture, and especially kinship patterns proved to be influential contributions to the field of anthropology. Like other scholars of his day (such as Edward Tylor), Morgan argued that human societies could be classified into categories of cultural evolution on a scale of progression that ranged from savagery, to barbarism, to civilization. Generally, Morgan used technology (such as bowmaking or pottery) as an indicator of position on this scale. Franz Boas (1858-1942) established academic anthropology in the United States in opposition to Morgan's evolutionary perspective. His approach was empirical, skeptical of overgeneralizations, and eschewed attempts to establish universal laws. 
For example, Boas studied immigrant children to demonstrate that biological race was not immutable, and that human conduct and behavior resulted from nurture, rather than nature. Influenced by the German tradition, Boas argued that the world was full of distinct cultures, rather than societies whose evolution could be measured by how much or how little "civilization" they had. He believed that each culture has to be studied in its particularity, and argued that cross-cultural generalizations, like those made in the natural sciences, were not possible. In doing so, he fought discrimination against immigrants, blacks, and indigenous peoples of the Americas. Many American anthropologists adopted his agenda for social reform, and theories of race continue to be popular subjects for anthropologists today. The so-called "Four Field Approach" has its origins in Boasian anthropology, dividing the discipline into the four crucial and interrelated fields of sociocultural, biological, linguistic, and archaeological anthropology. Anthropology in the United States continues to be deeply influenced by the Boasian tradition, especially its emphasis on culture.

Boas used his positions at Columbia University and the American Museum of Natural History to train and develop multiple generations of students. His first generation of students included Alfred Kroeber, Robert Lowie, Edward Sapir, and Ruth Benedict, who each produced richly detailed studies of indigenous North American cultures. They provided a wealth of details used to attack the theory of a single evolutionary process. Kroeber and Sapir's focus on Native American languages helped establish linguistics as a truly general science and free it from its historical focus on Indo-European languages.

The publication of Alfred Kroeber's textbook Anthropology (1923) marked a turning point in American anthropology. After three decades of amassing material, Boasians felt a growing urge to generalize. This was most obvious in the "Culture and Personality" studies carried out by younger Boasians such as Margaret Mead and Ruth Benedict. Influenced by psychoanalytic psychologists including Sigmund Freud and Carl Jung, these authors sought to understand the way that individual personalities were shaped by the wider cultural and social forces in which they grew up. Though such works as Mead's Coming of Age in Samoa (1928) and Benedict's The Chrysanthemum and the Sword (1946) remain popular with the American public, Mead and Benedict never had the impact on the discipline of anthropology that some expected. Boas had planned for Ruth Benedict to succeed him as chair of Columbia's anthropology department, but she was sidelined in favor of Ralph Linton, and Mead was limited to her offices at the AMNH.

In the 1950s and mid-1960s anthropology tended increasingly to model itself after the natural sciences. Some anthropologists, such as Lloyd Fallers and Clifford Geertz, focused on processes of modernization by which newly independent states could develop. Others, such as Julian Steward and Leslie White, focused on how societies evolve and fit their ecological niche, an approach popularized by Marvin Harris. Economic anthropology, as influenced by Karl Polanyi and practiced by Marshall Sahlins and George Dalton, challenged standard neoclassical economics to take account of cultural and social factors, and introduced Marxian analysis into anthropological study.
In England, British Social Anthropology's paradigm began to fragment as Max Gluckman and Peter Worsley experimented with Marxism and authors such as Rodney Needham and Edmund Leach incorporated Lévi-Strauss's structuralism into their work. Structuralism also influenced a number of developments in the 1960s and 1970s, including cognitive anthropology and componential analysis. In keeping with the times, much of anthropology became politicized through the Algerian War of Independence and opposition to the Vietnam War; Marxism became an increasingly popular theoretical approach in the discipline. By the 1970s the authors of volumes such as Reinventing Anthropology worried about anthropology's relevance.

Since the 1980s issues of power, such as those examined in Eric Wolf's Europe and the People Without History, have been central to the discipline. In the 1980s books like Anthropology and the Colonial Encounter pondered anthropology's ties to colonial inequality, while the immense popularity of theorists such as Antonio Gramsci and Michel Foucault moved issues of power and hegemony into the spotlight. Gender and sexuality became popular topics, as did the relationship between history and anthropology, influenced by Marshall Sahlins, who drew on Lévi-Strauss and Fernand Braudel to examine the relationship between symbolic meaning, sociocultural structure, and individual agency in the processes of historical transformation. Jean and John Comaroff produced a whole generation of anthropologists at the University of Chicago that focused on these themes. Also influential in these issues were Nietzsche, Heidegger, the critical theory of the Frankfurt School, Derrida, and Lacan.

Many anthropologists reacted against the renewed emphasis on materialism and scientific modelling derived from Marx by emphasizing the importance of the concept of culture. Authors such as David Schneider, Clifford Geertz, and Marshall Sahlins developed a more fleshed-out concept of culture as a web of meaning or signification, which proved very popular within and beyond the discipline. Geertz was to state: "Believing, with Max Weber, that man is an animal suspended in webs of significance he himself has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental science in search of law but an interpretive one in search of meaning." (Clifford Geertz, 1973)

Geertz's interpretive method involved what he called "thick description." The cultural symbols of rituals, political and economic action, and of kinship are "read" by the anthropologist as if they are a document in a foreign language. The interpretation of those symbols must be re-framed for their anthropological audience, i.e. transformed from the "experience-near" but foreign concepts of the other culture into the "experience-distant" theoretical concepts of the anthropologist. These interpretations must then be reflected back to their originators, and their adequacy as a translation fine-tuned in a repeated way, a process called the hermeneutic circle. Geertz applied his method in a number of areas, creating programs of study that were very productive. His analysis of "religion as a cultural system" was particularly influential outside of anthropology. David Schneider's cultural analysis of American kinship has proven equally influential.
Schneider demonstrated that the American folk-cultural emphasis on "blood connections" had an undue influence on anthropological kinship theories, and that kinship is not a biological characteristic but a cultural relationship established on very different terms in different societies. In the late 1980s and 1990s authors such as James Clifford pondered ethnographic authority, in particular how and why anthropological knowledge was possible and authoritative. They were reflecting trends in research and discourse initiated by feminists in the academy, although they excused themselves from commenting specifically on those pioneering critics. Nevertheless, key aspects of feminist theory and methods became de rigueur as part of the "post-modern moment" in anthropology: ethnographies became more interpretative and reflexive, explicitly addressing the author's methodology; cultural, gendered, and racial positioning; and their influence on his or her ethnographic analysis. This was part of a more general trend of postmodernism that was popular contemporaneously. Currently anthropologists pay attention to a wide variety of issues pertaining to the contemporary world, including globalization, medicine and biotechnology, indigenous rights, virtual communities, and the anthropology of industrialized societies.

Modern cultural anthropology has its origins in, and developed in reaction to, 19th-century ethnology, which involves the organized comparison of human societies. Scholars like E. B. Tylor and J. G. Frazer in England worked mostly with materials collected by others - usually missionaries, traders, explorers, or colonial officials - earning them the moniker of "arm-chair anthropologists".

Participant observation is one of the principal research methods of cultural anthropology. It relies on the assumption that the best way to understand a group of people is to interact with them closely over a long period of time. The method originated in the field research of social anthropologists, especially Bronisław Malinowski in Britain, the students of Franz Boas in the United States, and in the later urban research of the Chicago School of Sociology. Historically, the group of people being studied was a small, non-Western society. However, today it may be a specific corporation, a church group, a sports team, or a small town. There are no restrictions as to what the subject of participant observation can be, as long as the group of people is studied intimately by the observing anthropologist over a long period of time. This allows the anthropologist to develop trusting relationships with the subjects of study and receive an inside perspective on the culture, which helps him or her to give a richer description when writing about the culture later. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time, and researchers can discover discrepancies between what participants say - and often believe - should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior. Interactions between an ethnographer and a cultural informant must go both ways.
Just as an ethnographer may be naive or curious about a culture, the members of that culture may be curious about the ethnographer. To establish connections that will eventually lead to a better understanding of the cultural context of a situation, an anthropologist must be open to becoming part of the group, and willing to develop meaningful relationships with its members. One way to do this is to find a small area of common experience between an anthropologist and his or her subjects, and then to expand from this common ground into the larger area of difference. Once a single connection has been established, it becomes easier to integrate into the community, and more likely that accurate and complete information is being shared with the anthropologist. Before participant observation can begin, an anthropologist must choose both a location and a focus of study. This focus may change once the anthropologist is actively observing the chosen group of people, but having an idea of what one wants to study before beginning fieldwork allows an anthropologist to spend time researching background information on their topic. It can also be helpful to know what previous research has been conducted in one's chosen location or on similar topics, and if the participant observation takes place in a location where the spoken language is not one the anthropologist is familiar with, he or she will usually also learn that language. This allows the anthropologist to become better established in the community. The lack of need for a translator makes communication more direct, and allows the anthropologist to give a richer, more contextualized representation of what they witness. In addition, participant observation often requires permits from governments and research institutions in the area of study, and always needs some form of funding. The majority of participant observation is based on conversation. This can take the form of casual, friendly dialogue, or can also be a series of more structured interviews. A combination of the two is often used, sometimes along with photography, mapping, artifact collection, and various other methods. In some cases, ethnographers also turn to structured observation, in which an anthropologist's observations are directed by a specific set of questions he or she is trying to answer. In the case of structured observation, an observer might be required to record the order of a series of events, or describe a certain part of the surrounding environment. While the anthropologist still makes an effort to become integrated into the group they are studying, and still participates in the events as they observe, structured observation is more directed and specific than participant observation in general. This helps to standardize the method of study when ethnographic data is being compared across several groups or is needed to fulfill a specific purpose, such as research for a governmental policy decision. One common criticism of participant observation is its lack of objectivity. Because each anthropologist has his or her own background and set of experiences, each individual is likely to interpret the same culture in a different way. Who the ethnographer is has a lot to do with what he or she will eventually write about a culture, because each researcher is influenced by his or her own perspective. 
This is considered a problem especially when anthropologists write in the ethnographic present, a present tense which makes a culture seem stuck in time, and ignores the fact that it may have interacted with other cultures or gradually evolved since the anthropologist made observations. To avoid this, past ethnographers have advocated for strict training, or for anthropologists working in teams. However, these approaches have not generally been successful, and modern ethnographers often choose to include their personal experiences and possible biases in their writing instead.

Participant observation has also raised ethical questions, since an anthropologist is in control of what he or she reports about a culture. In terms of representation, an anthropologist has greater power than his or her subjects of study, and this has drawn criticism of participant observation in general. Additionally, anthropologists have struggled with the effect their presence has on a culture. Simply by being present, a researcher causes changes in a culture, and anthropologists continue to question whether or not it is appropriate to influence the cultures they study, or possible to avoid having influence.

In the 20th century, most cultural and social anthropologists turned to the crafting of ethnographies. An ethnography is a piece of writing about a people, at a particular place and time. Typically, the anthropologist lives among people in another society for a period of time, simultaneously participating in and observing the social and cultural life of the group. Numerous other ethnographic techniques have resulted in ethnographic writing or details being preserved, as cultural anthropologists also curate materials, spend long hours in libraries, churches and schools poring over records, investigate graveyards, and decipher ancient scripts. A typical ethnography will also include information about physical geography, climate and habitat. It is meant to be a holistic piece of writing about the people in question, and today often includes the longest possible timeline of past events that the ethnographer can obtain through primary and secondary research.

Bronisław Malinowski developed the ethnographic method, and Franz Boas taught it in the United States. Boas' students such as Alfred L. Kroeber, Ruth Benedict and Margaret Mead drew on his conception of culture and cultural relativism to develop cultural anthropology in the United States. Simultaneously, Malinowski and A. R. Radcliffe-Brown's students were developing social anthropology in the United Kingdom. Whereas cultural anthropology focused on symbols and values, social anthropology focused on social groups and institutions. Today socio-cultural anthropologists attend to all these elements.

In the early 20th century, socio-cultural anthropology developed in different forms in Europe and in the United States. European "social anthropologists" focused on observed social behaviors and on "social structure", that is, on relationships among social roles (for example, husband and wife, or parent and child) and social institutions (for example, religion, economy, and politics). American "cultural anthropologists" focused on the ways people expressed their view of themselves and their world, especially in symbolic forms, such as art and myths. These two approaches frequently converged and generally complemented one another. For example, kinship and leadership function both as symbolic systems and as social institutions.
Today almost all socio-cultural anthropologists refer to the work of both sets of predecessors, and have an equal interest in what people do and in what people say.

One means by which anthropologists combat ethnocentrism is to engage in the process of cross-cultural comparison. It is important to test so-called "human universals" against the ethnographic record. Monogamy, for example, is frequently touted as a universal human trait, yet comparative study shows that it is not. The Human Relations Area Files, Inc. (HRAF) is a research agency based at Yale University. Since 1949, its mission has been to encourage and facilitate worldwide comparative studies of human culture, society, and behavior in the past and present. The name came from the Institute of Human Relations, an interdisciplinary program/building at Yale at the time. The Institute of Human Relations had sponsored HRAF's precursor, the Cross-Cultural Survey (see George Peter Murdock), as part of an effort to develop an integrated science of human behavior and culture. The two eHRAF databases on the Web are expanded and updated annually. eHRAF World Cultures includes materials on cultures, past and present, and covers nearly 400 cultures. The second database, eHRAF Archaeology, covers major archaeological traditions and many more sub-traditions and sites around the world.

Comparison across cultures includes the industrialized (or de-industrialized) West. Cultures in the more traditional standard cross-cultural sample of small-scale societies are:

- Africa: Nama (Hottentot), Kung (San), Thonga, Lozi, Mbundu, Suku, Bemba, Nyakyusa (Ngonde), Hadza, Luguru, Kikuyu, Ganda, Mbuti (Pygmies), Nkundo (Mongo), Banen, Tiv, Igbo, Fon, Ashanti (Twi), Mende, Bambara, Tallensi, Massa, Azande, Otoro Nuba, Shilluk, Mao, Maasai
- Circum-Mediterranean: Wolof, Songhai, Wodaabe Fulani, Hausa, Fur, Kaffa, Konso, Somali, Amhara, Bogo, Kenuzi Nubian, Teda, Tuareg, Riffians, Egyptians (Fellah), Hebrews, Babylonians, Rwala Bedouin, Turks, Gheg (Albanians), Romans, Basques, Irish, Sami (Lapps), Russians, Abkhaz, Armenians, Kurd
- East Eurasia: Yurak (Samoyed), Basseri, West Punjabi, Gond, Toda, Santal, Uttar Pradesh, Burusho, Kazak, Khalka Mongols, Lolo, Lepcha, Garo, Hajong, Lakher, Burmese, Lamet, Vietnamese, Rhade, Khmer, Siamese, Semang, Nicobarese, Andamanese, Vedda, Tanala, Negeri Sembilan, Atayal, Chinese, Manchu, Koreans, Japanese, Ainu, Gilyak, Yukaghir
- Insular Pacific: Javanese (Miao), Balinese, Iban, Badjau, Toraja, Tobelorese, Alorese, Tiwi, Aranda, Orokaiva, Kimam, Kapauku, Kwoma, Manus, New Ireland, Trobrianders, Siuai, Tikopia, Pentecost, Mbau Fijians, Ajie, Maori, Marquesans, Western Samoans, Gilbertese, Marshallese, Trukese, Yapese, Palauans, Ifugao, Chukchi
- North America: Ingalik, Aleut, Copper Eskimo, Montagnais, Mi'kmaq, Saulteaux (Ojibwa), Slave, Kaska (Nahane), Eyak, Haida, Bellacoola, Twana, Yurok, Pomo, Yokuts, Northern Paiute, Klamath, Kutenai, Gros Ventres, Hidatsa, Pawnee, Omaha (Dhegiha), Huron, Creek, Natchez, Comanche, Chiricahua, Zuni, Havasupai, Tohono O'odham, Huichol, Aztec, Popoluca
- South America: Quiché, Miskito (Mosquito), Bribri (Talamanca), Cuna, Goajiro, Haitians, Calinago, Warrau (Warao), Yanomamo, Kalina (Caribs), Saramacca, Munduruku, Cubeo (Tucano), Cayapa, Jivaro, Amahuaca, Inca, Aymara, Siriono, Nambikwara, Trumai, Timbira, Tupinamba, Botocudo, Shavante, Aweikoma, Cayua (Guarani), Lengua, Abipon, Mapuche, Tehuelche, Yaghan
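As a rough, hypothetical illustration of what testing a claimed universal against coded cross-cultural data involves, the short sketch below tallies a marriage-system code across a handful of societies. The society names and codes are placeholders invented for this example, not actual eHRAF or standard cross-cultural sample data.

```python
# Toy illustration of checking a claimed "human universal" against coded
# cross-cultural data. Society names and codes are placeholders, not real
# eHRAF or standard cross-cultural sample values.
from collections import Counter

marriage_system = {
    "Society A": "monogamy",
    "Society B": "polygyny",
    "Society C": "polygyny",
    "Society D": "monogamy",
    "Society E": "polyandry",
}

counts = Counter(marriage_system.values())
print(counts)

# The trait would be "universal" in this sample only if every society shared one code.
is_universal = len(counts) == 1
print("Monogamy universal in this sample?", is_universal)
```

In practice, such comparisons draw on systematically coded samples like the one listed above rather than invented codes.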
Ethnography dominates socio-cultural anthropology. Nevertheless, many contemporary socio-cultural anthropologists have rejected earlier models of ethnography as treating local cultures as bounded and isolated. These anthropologists continue to concern themselves with the distinct ways people in different locales experience and understand their lives, but they often argue that one cannot understand these particular ways of life solely from a local perspective; they instead combine a focus on the local with an effort to grasp larger political, economic, and cultural frameworks that impact local lived realities. Notable proponents of this approach include Arjun Appadurai, James Clifford, George Marcus, Sidney Mintz, Michael Taussig, Eric Wolf and Ronald Daus.

A growing trend in anthropological research and analysis is the use of multi-sited ethnography, discussed in George Marcus' article, "Ethnography In/Of the World System: the Emergence of Multi-Sited Ethnography". Looking at culture as embedded in macro-constructions of a global social order, multi-sited ethnography uses traditional methodology in various locations both spatially and temporally. Through this methodology, greater insight can be gained when examining the impact of world-systems on local and global communities. Also emerging in multi-sited ethnography are greater interdisciplinary approaches to fieldwork, bringing in methods from cultural studies, media studies, science and technology studies, and others.

In multi-sited ethnography, research tracks a subject across spatial and temporal boundaries. For example, a multi-sited ethnography may follow a "thing," such as a particular commodity, as it is transported through the networks of global capitalism. Multi-sited ethnography may also follow ethnic groups in diaspora, stories or rumours that appear in multiple locations and in multiple time periods, metaphors that appear in multiple ethnographic locations, or the biographies of individual people or groups as they move through space and time. It may also follow conflicts that transcend boundaries. An example of multi-sited ethnography is Nancy Scheper-Hughes' work on the international black market for the trade of human organs. In this research, she follows organs as they are transferred through various legal and illegal networks of capitalism, as well as the rumours and urban legends that circulate in impoverished communities about child kidnapping and organ theft.

Sociocultural anthropologists have increasingly turned their investigative eye onto "Western" culture. For example, Philippe Bourgois won the Margaret Mead Award in 1997 for In Search of Respect, a study of the entrepreneurs in a Harlem crack den. Also growing more popular are ethnographies of professional communities, such as laboratory researchers, Wall Street investors, law firms, or information technology (IT) computer employees.

Kinship refers to the anthropological study of the ways in which humans form and maintain relationships with one another, and further, how those relationships operate within and define social organization. Research in kinship studies often crosses over into different anthropological subfields, including medical, feminist, and public anthropology. This is likely due to its fundamental concepts, as articulated by linguistic anthropologist Patrick McConvell: "Kinship is the bedrock of all human societies that we know.
All humans recognize fathers and mothers, sons and daughters, brothers and sisters, uncles and aunts, husbands and wives, grandparents, cousins, and often many more complex types of relationships in the terminologies that they use. That is the matrix into which human children are born in the great majority of cases, and their first words are often kinship terms."

Throughout history, kinship studies have primarily focused on the topics of marriage, descent, and procreation. Anthropologists have written extensively on the variations within marriage across cultures and its legitimacy as a human institution. There are stark differences between communities in terms of marital practice and value, leaving much room for anthropological fieldwork. For instance, the Nuer of Sudan and the Brahmans of Nepal practice polygyny, where one man has several marriages to two or more women. The Nyar of India and the Nyimba of Tibet and Nepal practice polyandry, where one woman is often married to two or more men. The marital practice found in most cultures, however, is monogamy, where one woman is married to one man. Anthropologists also study different marital taboos across cultures, most commonly the incest taboo of marriage within sibling and parent-child relationships. It has been found that all cultures have an incest taboo to some degree, but the taboo shifts between cultures when the marriage extends beyond the nuclear family unit.

There are similar foundational differences where the act of procreation is concerned. Although anthropologists have found that biology is acknowledged in every cultural relationship to procreation, there are differences in the ways in which cultures assess the constructs of parenthood. For example, in the Nuyoo municipality of Oaxaca, Mexico, it is believed that a child can have partible maternity and partible paternity. In this case, a child would have multiple biological mothers in the case that it is born of one woman and then breastfed by another. A child would have multiple biological fathers in the case that the mother had sex with multiple men, following the commonplace belief in Nuyoo culture that pregnancy must be preceded by sex with multiple men in order to have the necessary accumulation of semen.

In the twenty-first century, Western ideas of kinship have evolved beyond the traditional assumptions of the nuclear family, raising anthropological questions of consanguinity, lineage, and normative marital expectation. The shift can be traced back to the 1960s, with the reassessment of kinship's basic principles offered by Edmund Leach, Rodney Needham, David Schneider, and others. Instead of relying on narrow ideas of Western normalcy, kinship studies increasingly catered to "more ethnographic voices, human agency, intersecting power structures, and historical context". The study of kinship evolved to accommodate the fact that it cannot be separated from its institutional roots and must pay respect to the society in which it lives, including that society's contradictions, hierarchies, and individual experiences of those within it. This shift was furthered by the emergence of second-wave feminism in the early 1970s, which introduced ideas of marital oppression, sexual autonomy, and domestic subordination.
Other themes that emerged during this time included the frequent comparisons between Eastern and Western kinship systems and the increasing amount of attention paid to anthropologists' own societies, a swift turn from the focus that had traditionally been paid to largely "foreign", non-Western communities. Kinship studies began to gain mainstream recognition in the late 1990s with the surging popularity of feminist anthropology, particularly with its work related to biological anthropology and the intersectional critique of gender relations.

At this time, there was the arrival of "Third World feminism", a movement that argued kinship studies could not examine the gender relations of developing countries in isolation and must pay respect to racial and economic nuance as well. This critique became relevant, for instance, in the anthropological study of Jamaica: race and class were seen as the primary obstacles to Jamaican liberation from economic imperialism, and gender as an identity was largely ignored. Third World feminism aimed to combat this in the early twenty-first century by promoting these categories as coexisting factors. In Jamaica, marriage as an institution is often substituted for a series of partners, as poor women cannot rely on regular financial contributions in a climate of economic instability. In addition, there is a common practice of Jamaican women artificially lightening their skin tones in order to secure economic survival. According to Third World feminism, such anthropological findings cannot treat gender, racial, or class differences as separate entities, and must instead acknowledge that they interact together to produce unique individual experiences.

Kinship studies have also experienced a rise in interest in reproductive anthropology with the advancement of assisted reproductive technologies (ARTs), including in vitro fertilization (IVF). These advancements have led to new dimensions of anthropological research, as they challenge the Western standard of biogenetically based kinship, relatedness, and parenthood. According to anthropologists Marcia C. Inhorn and Daphna Birenbaum-Carmeli, "ARTs have pluralized notions of relatedness and led to a more dynamic notion of 'kinning', namely, kinship as a process, as something under construction, rather than a natural given". With this technology, questions of kinship have emerged over the difference between biological and genetic relatedness, as gestational surrogates can provide a biological environment for the embryo while the genetic ties remain with a third party. If genetic, surrogate, and adoptive maternities are involved, anthropologists have acknowledged that there can be the possibility for three "biological" mothers to a single child. With ARTs, there are also anthropological questions concerning the intersections between wealth and fertility: ARTs are generally only available to those in the highest income bracket, meaning the infertile poor are inherently devalued in the system. There have also been issues of reproductive tourism and bodily commodification, as individuals seek economic security through hormonal stimulation and egg harvesting, which are potentially harmful procedures. With IVF, specifically, there have been many questions of embryonic value and the status of life, particularly as it relates to the manufacturing of stem cells, testing, and research.
Current issues in kinship studies, such as adoption, have revealed and challenged the Western cultural disposition towards the genetic, "blood" tie. Western biases against single parent homes have also been explored through similar anthropological research, uncovering that a household with a single parent experiences "greater levels of scrutiny and [is] routinely seen as the 'other' of the nuclear, patriarchal family". The power dynamics in reproduction, when explored through a comparative analysis of "conventional" and "unconventional" families, have been used to dissect the Western assumptions of child bearing and child rearing in contemporary kinship studies. Kinship, as an anthropological field of inquiry, has been heavily criticized across the discipline. One critique is that, as its inception, the framework of kinship studies was far too structured and formulaic, relying on dense language and stringent rules. Another critique, explored at length by American anthropologist David Schneider, argues that kinship has been limited by its inherent Western ethnocentrism. Schneider proposes that kinship is not a field that can be applied cross-culturally, as the theory itself relies on European assumptions of normalcy. He states in the widely circulated 1984 book A critique of the study of kinship that "[K]inship has been defined by European social scientists, and European social scientists use their own folk culture as the source of many, if not all of their ways of formulating and understanding the world about them". However, this critique has been challenged by the argument that it is linguistics, not cultural divergence, that has allowed for a European bias, and that the bias can be lifted by centering the methodology on fundamental human concepts. Polish anthropologist Anna Wierzbicka argues that "mother" and "father" are examples of such fundamental human concepts, and can only be Westernized when conflated with English concepts such as "parent" and "sibling". A more recent critique of kinship studies is its solipsistic focus on privileged, Western human relations and its promotion of normative ideals of human exceptionalism. In "Critical Kinship Studies", social psychologists Elizabeth Peel and Damien Riggs argue for a move beyond this human-centered framework, opting instead to explore kinship through a "posthumanist" vantage point where anthropologists focus on the intersecting relationships of human animals, non-human animals, technologies and practices. The role of anthropology in institutions has expanded significantly since the end of the 20th century. Much of this development can be attributed to the rise in anthropologists working outside of academia and the increasing importance of globalization in both institutions and the field of anthropology. Anthropologists can be employed by institutions such as for-profit business, nonprofit organizations, and governments. For instance, cultural anthropologists are commonly employed by the United States federal government. The two types of institutions defined in the field of anthropology are total institutions and social institutions. Total institutions are places that comprehensively coordinate the actions of people within them, and examples of total institutions include prisons, convents, and hospitals. Social institutions, on the other hand, are constructs that regulate individuals' day-to-day lives, such as kinship, religion, and economics. 
Anthropology of institutions may analyze labor unions, businesses ranging from small enterprises to corporations, government, medical organizations, education, prisons, and financial institutions. Nongovernmental organizations have garnered particular interest in the field of institutional anthropology because they are capable of fulfilling roles previously ignored by governments, or previously realized by families or local groups, in an attempt to mitigate social problems.

The types and methods of scholarship performed in the anthropology of institutions can take a number of forms. Institutional anthropologists may study the relationship between organizations or between an organization and other parts of society. Institutional anthropology may also focus on the inner workings of an institution, such as the relationships, hierarchies and cultures formed, and the ways that these elements are transmitted and maintained, transformed, or abandoned over time. Additionally, some anthropology of institutions examines the specific design of institutions and their corresponding strength. More specifically, anthropologists may analyze specific events within an institution, perform semiotic investigations, or analyze the mechanisms by which knowledge and culture are organized and dispersed.

In all manifestations of institutional anthropology, participant observation is critical to understanding the intricacies of the way an institution works and the consequences of actions taken by individuals within it. Simultaneously, anthropology of institutions extends beyond examination of the commonplace involvement of individuals in institutions to discover how and why the organizational principles evolved in the manner that they did. Common considerations taken by anthropologists in studying institutions include the physical location at which a researcher places themselves, as important interactions often take place in private, and the fact that the members of an institution are often being examined in their workplace and may not have much idle time to discuss the details of their everyday endeavors. The ability of individuals to present the workings of an institution in a particular light or frame must additionally be taken into account when using interviews and document analysis to understand an institution, as the involvement of an anthropologist may be met with distrust when information being released to the public is not directly controlled by the institution and could potentially be damaging.
by Loreena Thiessen During the long cold days of winter, except for people, all the earth seems to be asleep. Trees are bare of their leaves, plants have died back, animals hide in burrows or nest in the trunks of trees, and only a few birds flit about looking for food. How do trees and plants know when it’s time to wake up for Spring? A lot of it is still a mystery, but scientists tell us there are two ways trees know when to begin to wake up. The first is that trees respond to warmer weather. The second is they react to a change in daylight—how long there is daylight. As winter nears its end the nights are shorter and the days grow longer. As the sun’s rays are more direct, they begin to feel warmer on your face and there are more hours of daylight. Trees can sense how long there is daylight; they also know how long it has been warm. This causes buds to sprout and develop. And trees begin their new cycle of growth. As the days get warmer animals and insects begin to stir. Bears have been asleep all winter; their bodies have slowed down. Now they awaken and come out of their dens looking for food. At first they eat berries and the new shoots of plants. As their appetites increase, they head for rivers and streams to hunt for fish. Insects come out from their burrows and hiding holes. Plants open up and begin to flower just in time for insects to come along and pollinate them. This is to make sure that all plants continue to reproduce. Birds have an inner clock that tells them to leave when food gets scarce and the ground freezes. They migrate to warmer places where there is more food. As the weather warms up, they sense a need to return to where they were born. Once more it’s all about food. They arrive just in time to find the right food, insects and berries. This is where they will build new nests, lay their eggs, and raise the new hatchlings. To escape the cold some frogs burrow into the soft mud at the bottom of the pond and in the river banks. Their bodies slow down, their limbs freeze, and their hearts stop altogether. Now as the sun warms the earth they too wake up. Their limbs thaw and their hearts and lungs start working again. On land their bodies warm and they are once again fully alive. God’s creation is all about order. The sun continues to rise and set giving us night and day. Seasons follow one after the other. Water evaporates from seas and rivers and returns to land as rain causing flowers to bloom, lawns to turn green and crops to grow. Fish swim in the sea, birds fly through the air, and your feet remain on the ground as you walk along. God is a God of order not disorder…and of peace (1 Corinthians 14:33). Read Ecclesiastes 1:4-6 and 3:1. Activity: Look for Signs of Spring - Need: camera, notebook pencil, pencil crayons. - Do: Take a look through a window at what is happening around you. Is the air warmer? Can you feel the sun’s rays? Is snow melting? Are any buds visible? Are birds out? Is there open water? Do you see any birds that have returned from the south? - Take photos or draw what you see. Share your findings with family and friends in some way.
Functions of the somatic nervous system: The somatic nervous system is one of the components or divisions of the complex human nervous system. This system is capable of both transmitting information to the brain and carrying out the orders it issues to the rest of the body. Without this system, people would not be able to analyze environmental stimuli and issue adaptive responses or behaviors. If you want to learn more about it, keep reading this online psychology article on the somatic nervous system: what it is and what it does.

What is the somatic nervous system
To understand what the somatic nervous system is, we must first know that the nervous system is divided into two main parts:
- The central nervous system, formed by the brain and spinal cord.
- The peripheral nervous system, which contains those nerves that are not found in the central nervous system.
The somatic nervous system, together with the autonomic nervous system, is part of the peripheral nervous system.

Somatic nervous system: definition
What do we call the somatic nervous system? The somatic nervous system is a part of the nervous system composed of different structures responsible for transmitting information. This system is responsible for maintaining the communication of sensory and motor information with the brain and spinal cord, that is, with the central nervous system.

Parts of the somatic nervous system
The somatic nervous system is formed by the set of neurons that connect the skin, muscles, and sensory organs with the central nervous system. The somatic system is formed by two types of neurons:
- Sensory neurons, which are related to the senses and perception.
- Motor neurons, which are related to movement.
The transmission of information is bidirectional: sensory neurons are afferent and transport nerve impulses to the central nervous system, while motor neurons are efferent and carry these impulses from the brain and spinal cord to the skeletal muscles.

Somatic nervous system: function
What is the function of the somatic nervous system? What does the somatic nervous system take care of? The functioning of the somatic nervous system normally begins with the transmission of sensory information, captured by sensory neurons, to the central nervous system, where it is processed by the brain. Once the information is interpreted, the central nervous system sends a series of signals or orders through the motor neurons to the organs and skeletal muscles. From this scheme, the somatic nervous system performs a series of functions of vital importance for the proper functioning of the organism:

Functions of the somatic nervous system
- The main function of the somatic nervous system is communication and connection between the central nervous system and the organs, skin, and muscles of the organism.
- It transmits information from sensory receptors, conscious and unconscious, to the central nervous system.
- It carries the orders and decisions of the brain to the skeletal muscles.
- This system allows both the interpretation of stimuli, through sensory neurons, and the production of responses based on the processing of this information, through motor neurons. The somatic nervous system therefore enables the relationship with, and adaptation to, the environment.
- Thanks to the sensory neurons of the somatic nervous system, the brain can perceive odors, flavors, sounds, etc.
- Another of the functions of this system is nociception, that is, the transmission of information about pain and temperature to the brain, with the aim of activating responses that favor survival.
- Voluntary movements and complex actions, such as writing or running, are regulated and controlled by this system. This is made possible by contracting the skeletal musculature.
- Involuntary movements, or reflex acts, are another function of the somatic nervous system. These acts are carried out when sensory and motor pathways connect directly with the spinal cord.
- Another function of the somatic nervous system is proprioception, the process by which the body is informed about the state or position of the musculature. This function allows balance and coordination, among others.

Somatic and autonomic nervous system: differences
Both the somatic nervous system and the autonomic nervous system are part of the so-called peripheral nervous system. Despite this, they are not the same. Here are the main differences:
- The somatic nervous system is mainly responsible for voluntary movements and, to a lesser extent, also for reflex acts. Instead, the autonomic nervous system is responsible for involuntary functions, those that do not require conscious control, such as breathing and digestion.
- The somatic nervous system also has a sensory function, which the autonomic nervous system lacks.
- The somatic nervous system is a two-way system, afferent and efferent, so that information and nerve impulses flow in both directions between it and the central nervous system. In the autonomic nervous system, however, nerve impulses are transmitted only from the brain and spinal cord outward; it is, therefore, a purely efferent system.
- The autonomic nervous system is functionally divided into two other systems, the sympathetic and parasympathetic systems, while the somatic nervous system is unitary.
- The somatic nervous system is made up of spinal and cranial nerves. The autonomic nervous system is formed by roots, plexuses, and nerve trunks.
- The action of the somatic nervous system is always excitatory on skeletal musculature, but that of the autonomic nervous system can be excitatory or inhibitory.

Diseases of the somatic nervous system
Below we list and explain some of the most common diseases or conditions of the somatic nervous system:
- Disc herniation: occurs when a disc in the spine presses on the spinal nerves, generating pain, numbness and/or loss of sensation.
- Radial nerve paralysis: known as "wrist drop", it is a pathology that affects the nerve that controls the muscles that allow arm extension. This paralysis causes the inability to extend the wrist, so the hand hangs down.
- Carpal tunnel syndrome: pressure on a nerve in the wrist, causing numbness and loss of movement in the palm of the hand and fingers. This syndrome is associated with people who work with their hands performing repetitive movements.
- Neuralgia: caused by nerve damage or irritation, producing an intense and intermittent sensation of pain and shock.
- Spinal stenosis: narrowing of the spinal canal that houses the nerves. This creates weakness, cramps, tingling or numbness in the neck and back.
- Guillain-Barré syndrome: a disorder in which the immune system mistakenly attacks the body's own nerves. The first manifestations are tingling and weakness in the extremities, which spread rapidly and can produce paralysis in the body; the condition can remit with treatment.
Ikebana for kids: Hong Kong and beyond As a parent and educator, what has been most rewarding is to observe the differences in character and behaviour in a child. Children who learn Ikebana tend to become more observant and sensitive to the natural environment around them. Do children learn fast? It depends on age. Usually, children who learn at a younger age show more interest in all aspects such as the different colourful flower materials, and containers which catch their eye. Through Ikebana, we teach children about etiquette from Japanese culture of cleanliness, consideration of others and the environment, and harmony with natural elements. For example, flowers are cut in water to prolong their life, as flowers like us, require water to help them grow. It helps develop their sense of protecting and caring for the natural environment in the future. Do children in Japan learn Ikebana? In Japan, there are Ikebana workshops for children. Children join study tours at botanical gardens and attend performances by Ikebana teachers where an arrangement is used as a teaching tool to learn about the flow of water from rain to river, mountains and forests. Children participate in exhibitions where they share their work with others, and may also create group arrangements together. Ikebana is a fun way to foster creativity, patience, and focus - not just in children, but also in adult learners. For kids, it trains their hand-eye coordination skills. Most importantly, however, as students of Ikebana, we come to appreciate nature and learn to think of how we wish to care for our planet. It's most rewarding to see personal growth in people not just in skill or technique, but as a person. Q&A with Pauline Tsang Wong. December 2020.
How to Make a Basket
Fibers play a major role in determining the character of a basket. The type of fiber used will also influence the design of the basket. The different types of fibers are flat, round, and flexible. The most common type of fiber is cotton. It is woven in a continuous pattern, while flexible fibers are wrapped around another fiber to form a coil. The resulting coil is then stitched in place, forming the sides of the basket.

The next step in the process is to decide on a basket's shape, materials, and function. After deciding on the materials, students should think about how they plan to use the basket. This is an excellent opportunity to ask for input from peers, so that each basket can be uniquely designed. Then, each student will discuss their choice of materials and decide on a form. Once the design is decided, the students can begin creating the basket.

The next step in making a basket is to decide how the fiber will be shaped. The most traditional method uses a rod to strip away the white fiber and make a round or oblong shape. In the past, women and children did this work by hand with a rod; today, a machine can do the job. The formers help shape the basket and are typically made of wood or metal.

After deciding on a design and materials, the next step is to make the handle. The handle is made of the strongest reed available and should be smooth to the touch. The ends of the rod are threaded into the sides of the basket. The overlap should be large enough to prevent the handle from slipping out. After completing the base, the lid is woven using the same process.

The top edge is finished with a border. The spoke ends must be soaked before the border is created. Once the spoke ends are soaked, they can be turned down into the sides of the basket to create a smooth, even edge. Adding the border is important because it can affect the shape of the basket. After the spokes are turned down, the border can be woven to finish the basket. The ends should be rolled into a smooth loop.

The material for the basket is also an important consideration. The chosen material must be durable and easily transportable. The materials must be sturdy enough to carry the items in the basket. They should also be easy to clean. For the best results, the basket should be easy to carry: when a person holds the basket while dancing, they should not need to touch or adjust it, and a good result is that the feet of the women dancers never seem to touch the ground.
In the Southern Ocean, primary productivity—the rate at which living organisms such as phytoplankton produce organic compounds—is limited by low concentrations of iron. Although earlier studies in the Ross Sea have shown the most important sources of this nutrient include icebergs, windblown dust, and melting sea ice, seasonal streamflow from ice-free areas is another potential contributor. To date, however, the amount of iron these streams supply to coastal Antarctic waters has been poorly constrained.

Now Olund et al. have measured iron concentrations in four streams that flow from the McMurdo Dry Valleys, the southern continent's largest ice-free area, into the Ross Sea to determine their potential impact on coastal water biogeochemistry. These streams, which flow only from 4 to 10 weeks per year, were sampled along their lengths from late December 2015 through late January 2016. The results indicate that two of the streams, Commonwealth and Wales, contribute an average of 240 moles of filterable iron to the Ross Sea each year, an amount that is several orders of magnitude less than the contributions from other sources. The team also discovered that the ratio of iron to other vital nutrients, including nitrogen, phosphorus, and silicon, differs substantially from the ratios found in coastal phytoplankton communities. This finding indicates that seasonal streams are important sources of both phosphorus and iron for the Ross Sea's plankton communities.

By increasing our understanding of iron fluxes into the seas surrounding Antarctica, this study highlights the importance of local nutrient inputs to the Southern Ocean. In addition, because primary production can boost the uptake of carbon dioxide and the consequent sequestration of carbon in marine sediment, this study has implications for understanding future changes in productivity and the cycling of carbon in the region as increased melting augments the flux of iron to the sea via these coastal streams. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2017JG004352, 2018) —Terri Cook, Freelance Writer
Manifest Destiny has plagued Native peoples of the United States for over 160 years. Historians have written extensively about the philosophy which seemed to become policy of the United States and its citizens in the 19th century. Under the notions of Manifest Destiny, tens of thousands of Indians were killed and Native Nations were pushed out of the way so that Americans could settle their lands. The notions of Manifest Destiny made what was happening to the tribes a justified action because it was the divine will of providence that Americans should own the whole of the continent. In this manner millions of acres of land were illegally settled and taken from tribes of the West. It would take generations for the tribes to gain payments for the lands illegally stolen. Many treaties were made with tribes, yet the United States, over time, has reneged on each one in some fashion. Either payments were not appropriately made, lands were taken away, services to tribes were not correctly applied, or tribes lost reservations when Congress declared them assimilated Indians. The lives and cultures lost have yet to be dealt with in any fashion.

Generally the philosophy is understood as being the destiny of the American nation to extend its borders to the Pacific Ocean. That it is the destiny, manifest or clearly apparent, that Americans are meant to own the whole continent and create a great nation. Indian tribes cringe at this, as the philosophy ignores the extensive previous occupation and, at the time, the present ownership and presence of Native peoples on their homelands. When the philosophy was in practice, the notion inspired thousands of Americans to take a journey westward to seek their manifest lands, thereby creating the Oregon and California Trails, because their nation would at some point take all the land anyway.

Richard White, the great historian, wrote in 1991 about the foundations of Manifest Destiny as first articulated by John O'Sullivan:

Away, away with all these cobweb tissues of rights of discovery, exploration, settlement, contiguity, etc… The American claim is by the right of our manifest destiny to overspread and to possess the whole of the continent which Providence has given us for the development of the great experiment of liberty and federative self-government entrusted to us. It is a right such as that of the tree to the space of air and earth suitable for the full expansion of its principle and destiny of growth… it is in our future far more than in our past or in the past history of Spanish exploration or French colonial rights, that our True Title is to be found.

O'Sullivan's notion, as White notes, is that all of the laws and policies and facts, like the ownership of the land by Native nations, are just "cobwebs" to be pushed aside because Americans have a divine right, through Providence, to take all the land. These cobwebs are the various international and national laws, conventions, and agreements which form the laws of the United States: policies like the Northwest Ordinance, which states that Native Nations are not to be disturbed and their lands are to be respected by Americans, and international agreements like the agreement that Great Britain and the United States would jointly occupy the Oregon Territory. At the time Manifest Destiny was written it did not catch much attention, but in time the notions behind it became part of the American way of interaction on the frontier, with Native nations, and with other countries.
Though never officially a policy, the notions behind Manifest Destiny can become a regime's policy, and they are dangerous to ethnic minorities and small nations that get in the way of colonizing powers. Tribal Nations witnessed the practice of Manifest Destiny. Oregon may have been the first, where all of the best lands were saved for the settlers. Tribes were moved to the fringes of American society, to frontier lands that in 1855 Americans did not yet want. But in time, when all the good land was gone and claimed, Congress acted to open up the reservations for more lands for Americans. In 1865, 1875, 1887, and 1901, the Coast, Siletz and Grand Ronde Reservations lost lands to the notion that Americans deserved them more than the Native nations. In 1954, the western Oregon tribes and the Klamath Tribe lost all their lands under the notion that they did not need them anymore, that they were fully assimilated, and that the lands could be better used by logging outfits or for energy generation. In time Manifest Destiny worked to fully alienate Native Nations within their own lands.

Outside of Indian affairs, actions and policies by the United States look as if they are inspired by the notions of Manifest Destiny. Nation building around the world, the taking of Hawaii, the imposition of leaders and dictators in other countries, all appear to be inspired by the notion of Manifest Destiny. As if the normal rules of conduct by nations of the world do not matter, Americans can do what they want to, because "Providence" gives us the license to do so. American exceptionalism as an international concept seems to be on the same branch as Manifest Destiny. The philosophy appears to have expanded exponentially into the manner in which the United States became the policeman of the world and assumes the moral authority to do whatever it wants to, because it's the United States. Is the United States practicing Manifest Destiny in its international policy today?

White, Richard. "It's Your Misfortune and None of My Own": A New History of the American West, 1991.
Young children are particularly vulnerable to the effects of tobacco smoke and other environmental toxicants, but their exposure is often difficult and expensive to measure. The results of those measurements, however, can be crucial for research on the success of preventative measures. Jenny Quintana and her research team at San Diego State University’s School of Public Health, have developed wristbands children can wear as an effective tool to measure nicotine exposure. Similar wristbands have been used to measure exposure to other toxic chemicals like pesticides and flame retardant, but never before for second-hand smoke exposure in children. “It is important to measure exposure in all groups, but children are often overlooked because it’s difficult to test them,” said Quintana, an SDSU professor. “Children are more challenging than adults, but if you don’t measure exposure well, you will not be able to tell if interventions or policy changes or other things actually worked and had an effect.” The wristbands, which are made of silicone and resemble the kind bearing motivational messages, were placed on three groups of kids divided by exposure type. The groups included children exposed to nonsmokers and non-e-cigarette users; children exposed only to smokers; and finally, children exposed only to e-cigarette users. Researchers directed the children to wear one wristband for two days and then two wristbands for seven days. After the seven days of usage, urine samples were collected from the children, a more traditional method of documenting exposure. The prevalence of metabolized nicotine in urine, cotinine, was compared to the nicotine measurements recorded by the wristbands. The tests revealed the two forms of measurement produced comparable results, meaning Quintana’s team had developed a simpler way of testing toxicant exposure in children. “We were extremely surprised with how well the data correlated,” said Quintana. “It is also amazing how well this worked with such simple instructions. We just told the kids to wear the wristbands.” Quintana and her team determined that these silicone wristbands may become a useful tool for epidemiology and intervention studies of tobacco product exposure in children and adults in the future. Their findings are detailed in an article in the Journal of Exposure Science & Environmental Epidemiology. “This is all exciting because it can potentially measure exposure in adults and exposure to more than just nicotine,” said Quintana. “We can now think about using these wristbands to measure other compounds such as car or truck exhaust and other air pollutants in impacted communities.” Quintana’s work is representative of long-term efforts by SDSU researchers to measure second- and third-hand smoke in people of all ages. Quintana and a team at SDSU have studied residual tobacco effects on material items and exposure to third-hand smoke. They plan to continue this third-hand smoke exposure research using the wristbands. Eunha Hoh, a collaborator with Quintana and also a professor at SDSU, recently co-authored an article in Environmental Science & Technology with assistant professor Carlos Manzano. They used the wristbands to measure and track patterns of personal exposure to urban air pollution among people living and working in Chile. “This used the wristbands and a non-targeted analysis,” said Hoh. “We weren’t looking for just nicotine or one particular chemical; we looked at a wide range of chemicals. 
This was really a proof of concept study and a great finding for the future of these wristbands and second- and third-hand smoke exposure studies.”
November 29, 2018 – A new theory based on the physics of cloud formation and neutron scattering could help animators create more lifelike movies, according to a Dartmouth-led study. Software developed using the technique focuses on how light interacts with microscopic particles to develop computer-generated images. Researchers from Pixar, Disney Research, ETH Zurich and Cornell University contributed to the study. A research paper detailing the advancement will be published in the journal Transactions on Graphics and presented at SIGGRAPH Asia, taking place from December 4-7 in Tokyo, Japan.

[Image: Researchers borrowed from nature to help moviemakers design more realistic and physically accurate animation. Images courtesy of Dartmouth Visual Computing Lab.]

Objects like clouds contain billions of individual water droplets that are not practical to plot in computer graphics for movie scenes. As a result, current techniques only allow artists to specify the density of particles in each part of a cloud to define its shape and appearance. Existing systems do not allow any control over how the particles are actually arranged with respect to one another. “By only controlling the density, current techniques basically assume that the particles are arranged randomly, without any interdependence,” said Wojciech Jarosz, an assistant professor of computer science at Dartmouth College who oversaw the research. “But this limitation can have a dramatic effect on the final appearance.”

In reality, particles are not always randomly arranged. They can clump together or spread evenly apart, depending on the type of material. Understanding how particles are arranged and how light interacts with them provides a variety of new artistic options for moviemakers. “There is a whole range of dramatically different appearances that artists just couldn’t explore until now,” said Jarosz. “Previously, artists basically had one control that could affect the appearance of a cloud. Now it’s possible to explore a vastly richer palette of possibilities, a change that is as dynamic as the transition from black-and-white images to color.”

In the Dartmouth study, researchers compared how a beam of light travels through a material composed of randomly arranged particles with how it travels through a material consisting of particles that are more naturally ordered. The team averaged the results of millions of trials demonstrating how far photons travel before slamming into particles or other objects. Ordinarily, a graph modelling how photons move through a material with independently arranged particles appears as an even, “exponential” curve indicating light evenly dropping off as it travels. When particles clump together, like in a cloud, photons survive longer distances on average, resulting in a curve with a longer tail.

Not only is the result exciting in mathematical models; the team also programmed the finding into software that will allow artists to create a wider variety of looks by customizing how light travels through “volumetric materials” like clouds, fog, mist, a marble statue, or our own skin. Importantly, the creative result will also be a more accurate depiction of real-world physics. The breakthrough allows artists to maintain a realistic result while responding to creative direction by effectively “steering” the physics to achieve particular artistic effects.
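To make the "longer tail" intuition concrete, here is a small illustrative Monte Carlo sketch in Python. It is not the researchers' actual model or software: it simply samples photon free-flight distances in a medium with independently placed particles and in a toy "clumpy" medium built from a mixture of sparse and dense regions with the same mean free path, then prints how many photons survive past several depths. The mixture weights and extinction values are invented for the example.

```python
import random

# Illustrative only: not the paper's method. Compare how far photons travel
# before a collision in (a) a medium with independently placed particles
# (classical exponential free paths) and (b) a toy "clumpy" medium built as a
# 50/50 mixture of sparse gaps and dense clumps. Both have mean free path 1.0,
# but the clumpy medium lets more photons survive to large depths.

random.seed(0)
N = 200_000

def free_path_uniform():
    # Exponential free path with mean 1.0 (uncorrelated particles).
    return random.expovariate(1.0)

def free_path_clumpy():
    # Sparse gap: mean free path 1.6; dense clump: mean free path 0.4.
    # The 50/50 average is still 1.0, but the tail is heavier.
    if random.random() < 0.5:
        return random.expovariate(1.0 / 1.6)
    return random.expovariate(1.0 / 0.4)

uniform_paths = [free_path_uniform() for _ in range(N)]
clumpy_paths = [free_path_clumpy() for _ in range(N)]

print(f"{'depth':>6}  {'uniform survival':>16}  {'clumpy survival':>15}")
for depth in (0.5, 1.0, 2.0, 4.0, 8.0):
    uniform_surv = sum(d > depth for d in uniform_paths) / N
    clumpy_surv = sum(d > depth for d in clumpy_paths) / N
    print(f"{depth:6.1f}  {uniform_surv:16.4f}  {clumpy_surv:15.4f}")
```

Running the sketch shows the uniform medium dropping off as a clean exponential while the clumpy mixture keeps a noticeably larger fraction of photons at depths of four or eight mean free paths, which is the qualitative effect the study describes.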
“There is an interesting interaction between art and science when you are creating animated films,” said Benedikt Bitterli, a PhD student at Dartmouth who co-authored the research paper. “You’re doing this physics simulation, but the people using it are not physicists. We are creating software and simulations for use by artists.” To tackle the problem of understanding how particles organize themselves, the research team turned to atmospheric sciences and neutron transport. In those research fields, knowing the arrangement of water droplets or reactor material has important implications for studying climate change and keeping nuclear reactors safe. While researchers have been looking to overcome the challenge of particle arrangement for some time, no set of equations had yet been developed that solves the problem in a general way. “This wasn’t simply a matter of taking techniques from other research areas and using them for generating pretty pictures with computer graphics,” said Bitterli, who will present the work at SIGGRAPH Asia. “Getting the physics equations to work properly was a new and extraordinarily difficult challenge.” The research team also applied the technique to solid objects like marble statues where some light reflects off the surface, but some also travels through the material, leading to its translucent appearance. The new technique allows artists to change the way light interacts with the objects but without changing the density. The Dartmouth-led research comes after a recent study from University of Zaragoza that looked at similar problems but that focused only on objects with uniform density. Both studies come as more powerful computers and software innovations have spurred film studios to develop more sophisticated techniques based on the physical world. Srinath Ravichandran (Dartmouth College), Steve Marschner (Cornell University), Thomas Müller (Disney Research/ETH Zurich), Magnus Wrenninge (Pixar) and Jan Novák (Disney Research) all participated in this research.
The application of DNA barcoding by Australian Museum (AM) researchers has been used to unravel the species complex Heterolepisma sclerophyllum, in addition to investigating silverfish phylogenies in the remote islands off Eastern Australia. Silverfish are a fascinating group of insects. Most infamous for invading our homes, they belong to the ancient order Zygentoma. Possessing primitive characteristics, they are believed to have evolved over 400 million years ago. Small and wingless, their scaly bodies taper to a point ending in three distinct appendages. Silverfish are found all over the globe, and call a broad range of environments home. There are species living in the driest of deserts, where they absorb moisture from the air through their anus in order to survive the conditions. Some are close-knit neighbours of termites and ants, found living amongst them in their mound-like homes. Several species are found in the remote and semi-tropical islands off the east coast of Australia. And lastly, there are numerous that are blind, inhabiting the dark depths of caves and seemingly inaccessible rock cracks. The identification of silverfish is notoriously difficult as they continuously moult, even following sexual maturity. This perpetual moulting can result in substantial morphological differences amongst individuals of the same species, making taxonomic evaluations tricky. Prior to two 2019 studies by AM Researchers Dr Graeme Smith and Dr Andrew Mitchell, 24 species of Heterolepisma were known from around the world. Based on morphological characteristics Heterolepisma sclerophyllum was described as a single species in 2014, with a range from the tip of Queensland to the southern reaches of New South Wales. Since its discovery, numerous Heterolepisma sclerophyllum specimens with similar characteristics have been collected, spanning the Australian east coast. Graeme and Andrew analysed 68 of these specimens for DNA barcodes, also comparing the sequences 16S and 28S (these nuclear and mitochondrial rDNA sequences are commonly used in phylogenetic studies). The data showed considerable differences between QLD and NSW populations, as well as within state populations. A detailed morphological examination was also undertaken. Following these analyses, two new species were described; one from southern Queensland (Heterolepisma cooloola) and one from Glen Davis NSW (Heterolepisma coorongooba). The results of the genetic analysis also aided in the determination of which morphological characteristics are most useful in differentiating species within Heterolepisma. Scale shape, the absence of large bristles from the forehead and the number of pairs of abdominal styli were found to be the most important traits. Without DNA barcoding, disentangling this species complex using morphological characteristics alone would have been all-but-impossible. This is due to numerous species sharing similar traits, as well as considerable variability between individuals of the same species, due to continuous moulting. Not satisfied with discovering two novel species, Graeme and Andrew took their silverfish fervour to several islands off Eastern Australia. A lack of wings and love of the desert has not prevented these primitive insects from colonising remote islands. Lord Howe Island, Norfolk Island, Balls Pyramid and the tropical Herald Cays are home to a diverse fauna of silverfish. The pair examined 14 Heterolepisma specimens from these islands, with DNA and molecular analyses supporting two new species. 
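As a rough illustration of the idea behind barcode comparisons, and not the actual sequences, markers, or analyses used in these studies, the sketch below computes uncorrected pairwise p-distances between short made-up fragments and flags pairs whose divergence exceeds an arbitrary 5% threshold. Real work relies on full COI, 16S, and 28S data, proper alignments, and model-based methods.

```python
# Toy illustration of DNA barcoding logic: pairwise sequence divergence
# between specimens. The sequences and the 5% threshold are invented for
# this example; the published studies used far longer markers and
# model-based phylogenetic analyses, not a simple p-distance.

specimens = {
    "QLD_01": "ATGGCTCTAAGCCTCCTTATTCGAGCCGAA",
    "QLD_02": "ATGGCTCTAAGCCTCCTTATTCGAGCCGAG",
    "NSW_01": "ATGGCACTTAGCCTACTTATTCGGGCTGAA",
    "NSW_02": "ATGGCACTTAGCCTACTTATCCGGGCTGAA",
}

def p_distance(a: str, b: str) -> float:
    """Uncorrected proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned to the same length"
    diffs = sum(1 for x, y in zip(a, b) if x != y)
    return diffs / len(a)

names = list(specimens)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        d = p_distance(specimens[n1], specimens[n2])
        flag = "likely same species" if d < 0.05 else "candidate split"
        print(f"{n1} vs {n2}: {d:.3f}  ({flag})")
```

Even this toy version shows the pattern the researchers looked for: within-population pairs differ by only a site or two, while the Queensland and New South Wales fragments diverge well past the threshold, hinting that more than one species may be hiding under a single name.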
Using morphological criteria, with the aid of molecular data, a new genus was also described! The genus, Maritisma, was discovered on the Herald Cays coral atoll. Unfortunately, Maritisma, along with a new species also described from the low-lying Herald Cays (H. heraldense), is at risk of becoming endangered due to rising sea levels. Although they are not the most popular of insects, silverfish have stood the test of time. They have endured in the toughest of habitats, survived two mass extinctions and are an incredibly diverse group. Increased use of DNA barcoding in taxonomic studies is bound to further reveal cryptic species, as well as increase our understanding of the silverfish phylogenetic tree. Emma Flannery, Science Communicator, Australian Museum Research Institute

Smith, G. B. 2014. Two new species of Heterolepisma (Zygentoma: Lepismatidae) from eastern New South Wales. General and Applied Entomology: The Journal of the Entomological Society of New South Wales 42(2013): 7–22.

Smith, Graeme B., Andrew Mitchell, Timothy R. C. Lee, and Luis Espinasa. 2019. DNA barcoding and integrative taxonomy of the Heterolepisma sclerophylla species complex (Zygentoma: Lepismatidae: Heterolepismatinae) and the description of two new species. Records of the Australian Museum 71(1): 1–32. https://doi.org/10.3853/j.2201-4349.71.2019.1677

Smith, Graeme B., and Andrew Mitchell. 2019. Species of Heterolepismatinae (Zygentoma: Lepismatidae) found on some remote eastern Australian Islands. Records of the Australian Museum 71(4): 139–181. https://doi.org/10.3853/j.2201-4349.71.2019.1719
Body Mass Index Define
Body Mass Index (BMI) is a measure of body weight relative to height, based on a calculation involving both. The World Health Organization (WHO) set the criteria and categories for BMI, also known as the Quetelet index. BMI is an indicator of body fat and potential health risk and an indirect indicator of obesity. Other indirect indicators of obesity include waist circumference and the ratio of waist to hips. Knowing your BMI allows you to compare your BMI measurement to standard weight ranges to see if your body weight is considered appropriate relative to your height.

HOW IS YOUR BMI USED?
Though BMI is often used to assess individuals, it was not originally intended to be used for this purpose. BMI was created to evaluate large groups of people for weight issues, such as obesity, in academic studies. Such studies often consider the relationship between BMI levels and health conditions. BMI's uses also include creating body composition estimates, reference standards, and baseline data for studies as an academic tool. Academics also use BMI to observe trends in certain populations and determine the risks for certain health outcomes. BMI is used as a screening tool for weight issues and in private health insurance underwriting for individuals. However, it is not considered to be a diagnostic tool and should be used with other measurements. Many medical professionals use BMI along with other measurements and tests to make treatment recommendations to their patients.

HOW DO I CALCULATE MY BMI?
To calculate your BMI, divide your weight in pounds by your height in inches squared and multiply the result by 703. Alternatively, you can calculate your BMI by dividing your weight in kilograms by your height in meters squared. BMI calculators are available online in which you enter your height and weight, and the calculation is done for you. The BMI calculation must be made using your correct height and weight. BMI is often underestimated or overestimated when height or weight is misreported. Both of these numbers can change with age. Some people cannot stand up straight for an accurate measure of their height due to health issues like disease, weakness, or spine curvatures.

WHAT ARE THE BMI CATEGORIES?
The BMI categories are:
- underweight
- normal weight
- overweight
- obese
- severely obese
While all countries use these categories, there is some variation in which BMI values fall into each category in certain countries. The differences reflect variations in disease risk among different people groups. In the United States, the BMI standards for adults are:
- underweight: under 18.5
- normal: 18.5-24.9
- overweight: 25 to 29.9
- obese: 30-40
- severely obese (more than 100 pounds over ideal weight): over 40
For children, teens, and young adults between the ages of two and 19, BMI is calculated and evaluated differently. Height, weight, age, and sex are included in this BMI calculation. The resulting BMI figure is expressed as a percentile relative to other people of the same age and sex. For this BMI calculation, those below the fifth percentile are considered underweight. Those between the 85th and 94th percentiles are considered overweight, and those in the 95th percentile and above are considered obese.

WHAT DOES BMI TELL YOU?
Though BMI has its limitations, it is a good screening tool for determining if you are overweight or obese, as well as an indicator of your risk for obesity-related diseases. There is also a connection between your BMI and your total fat mass.
However, your age, gender, ethnicity, and fitness level should also be considered when you use your BMI to determine your health risk. If you have a high BMI, you are not necessarily at risk for the health problems associated with being overweight or obese. For longevity in adults, a BMI between 20.5 and 24.9 is optimal, though people who are moderately overweight may also have an advantage that translates into a longer life. WHAT DOES AN INCREASING BMI REVEAL? If you are an adult and your BMI is increasing, it is often associated with chronic diseases like type two diabetes, high blood pressure, coronary artery disease, and high cholesterol. It is also a good indicator of the potential for sleep apnea, degenerative joint disease, and certain cancers. An increasing BMI is often linked to depression, low self-esteem, physical disability, social discrimination, and unemployment. In children, annual BMI increases are most often related to an increase in lean mass, not fat tissue. When young people reach late adolescence, their fat mass begins to affect their BMI numbers. WHY IS THE BMI MEASUREMENT CONSIDERED PROBLEMATIC? BMI is considered problematic for certain adults because it only looks at weight and height without context. Therefore, it can be misunderstood or misused. For example, BMI may not correctly measure obesity in some people. Older adults may have less muscle mass, so their obesity may be underestimated using BMI. Simultaneously, muscular people, such as athletes, will fall into an obese category even though their heavier weight comes from more muscle mass rather than excess weight. Similarly, people may have diseases or take medications that cause significant water retention, resulting in incorrect BMI numbers. Another way that BMI fails to account for differences in people is body fat distribution. Some adults have excess body fat in their abdominal region. Aging also can change where body fat is carried. While such people’s BMI may be in the normal range, they may actually be obese due to an increase in their waist-to-hip ratio. Additionally, BMI does not allow for differences in bone structure. Thus, if you have a large frame, your BMI could place you in the overweight or obese category even though you have low body fat. If you have a small or slender frame, your BMI could be normal even though you have excess body fat, this is one of the reasons why Body Mass Index Define is not always easy to do. Check Your BMI Index Here “BMI Calculator.” Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/obesity/in-depth/bmi-calculator/itt-20084938 (accessed October 20, 2018). “Body Mass Index (BMI).” Centers for Disease Control and Prevention. https://www.cdc.gov/healthyweight/assessing/bmi/. (accessed October 20, 2018). “Calculate Your Body Mass Index.” National Heart, Lung, and Blood Institute. https://www.nhlbi.nih.gov/health/educational/lose_wt/BMI/bmicalc.htm (accessed October 20, 2018).
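For readers who want to see the arithmetic spelled out, here is a minimal Python sketch of the adult calculation and categories described above. The cutoffs follow the US figures listed earlier; the child and teen percentile method is deliberately left out because it requires age- and sex-specific growth-chart data.

```python
def bmi_metric(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by height in meters squared."""
    return weight_kg / (height_m ** 2)

def bmi_imperial(weight_lb: float, height_in: float) -> float:
    """BMI = (weight in pounds / height in inches squared) * 703."""
    return weight_lb / (height_in ** 2) * 703

def adult_category(bmi: float) -> str:
    # Adult cutoffs as listed above (US standards). Children and teens use
    # age- and sex-specific percentiles instead, which this sketch omits.
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    if bmi <= 40:
        return "obese"
    return "severely obese"

if __name__ == "__main__":
    b = bmi_metric(70, 1.75)        # 70 kg at 1.75 m
    print(f"BMI: {b:.1f} -> {adult_category(b)}")
    b2 = bmi_imperial(154, 69)      # roughly the same person in pounds and inches
    print(f"BMI: {b2:.1f} -> {adult_category(b2)}")
```

Both calls land near a BMI of 23, which falls in the normal range; the small difference between the metric and imperial results comes only from rounding the converted weight and height.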
Concentrating solar thermal power (CSP) technologies, which employ reflective material to concentrate the sun’s heat to drive steam or gas turbines to produce electricity, are used in solar thermal electric (STE) plants. The number of STE plants is rising around the world, together with the increasing reliance on renewable sources of energy. According to the Renewables 2017 Global Status Report, from REN21, an international non-profit association which is part of the United Nations environment programme (UNEP), emerging countries with high levels of solar exposure, no or few oil and gas reserves and with a political agenda that favours industrialization and job creation, are increasingly likely to adopt policies favouring the building of such facilities. Most new STE plants can store heat during the day and convert it into electricity at night, making solar thermal attractive for large-scale energy production. As STE plants are situated in sun-drenched areas of the world, they are also a source of predictable and reliable energy. According to the REN21 report, while Spain remains the global leader in installed CSP capacity, new facilities have recently come online in countries including South Africa, China and Morocco. STE projects are on-going in India, Israel and the Middle East. IEC TC 117: Solar thermal electric plants, was established in 2011, following a proposal from the Spanish National Committee (NC), to draft International Standards in the CSP field. It augured the growth of CSP capacity across the world and the requirement for such Standards. The scope of TC 117 is to prepare International Standards for the conversion of solar thermal energy into electrical energy in STE plants. The Standards are expected to cover current different types of systems in installed plants: Blazing a trail Simulation studies of plant power production are often required during the various stages of planning, design and building of an STE plant. A standard methodology based on the annual solar radiation (ASR) data set is used to generate data representative of a typical meteorological year and to extrapolate plant production over the long term. As the Chair of TC 117 Werner Platzer explains: "For the financing of projects, we need a reliable, comprehensive and unambiguous calculation of future generation throughout the lifetime of plants. This can be predicted with representative meteorological data and a precise simulation and prediction of the yield using the meteorological data. The newly published Technical Specifications deal with the question of how to prepare such data sets." TC 117 has issued its two first publications, IEC Technical Specification (TS) 62862-1-2:2017, Solar thermal electric plants-Part 1-2: General – Creation of annual solar radiation data sets for solar thermal electric (STE) plant simulation and IEC TS 62862-1-3:2017, Solar thermal electric plants – Part 1-3: General - Data format for meteorological data sets. The first Technical Specification defines the procedures for the creation of ASR data sets used in STE plant simulation. The document also describes the components and parameters of an ASR data set, including factors such as geographic and time identification. The scope of the second TS is to reduce the efforts involved in preparing data exchange and to avoid misunderstandings rising from the use of different data formats for meteorological data sets. 
It proposes one format which demonstrates: The data format proposed has been inspired by the thesaurus on solar radiation proposed at EnvironInfo 2007.
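As a loose illustration of why standardized meteorological data sets matter, the sketch below turns an hourly annual direct normal irradiance (DNI) file into a back-of-the-envelope yield estimate. The CSV column name, plant size, and lumped efficiency are assumptions invented for this example; the IEC Technical Specifications define the actual data-set requirements and formats, not this calculation.

```python
import csv

# Rough, illustrative yield estimate from an hourly annual solar radiation
# data set. The CSV layout ("timestamp,dni_w_per_m2"), the aperture area and
# the single lumped efficiency are assumptions for this sketch; they are not
# taken from IEC TS 62862-1-2 or 62862-1-3.

APERTURE_AREA_M2 = 500_000.0    # total mirror aperture (assumed)
PLANT_EFFICIENCY = 0.15         # lumped solar-to-electric efficiency (assumed)

def annual_yield_mwh(path: str) -> float:
    energy_wh = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            dni = float(row["dni_w_per_m2"])  # average W/m2 over this hour
            energy_wh += dni * APERTURE_AREA_M2 * PLANT_EFFICIENCY  # 1-hour step
    return energy_wh / 1e6                    # Wh -> MWh

if __name__ == "__main__":
    print(f"Estimated annual output: {annual_yield_mwh('asr_hourly.csv'):,.0f} MWh")
```

A common, unambiguous file format is what makes even a simple loop like this portable between tools; the same reasoning, applied to full plant simulations, is what the two Technical Specifications aim to support.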
What Is Eating My Marigolds?
While the pungent aroma and taste of marigolds (Tagetes spp.) wards off many unwanted garden pests, some insects aren't put off by the bitterness, and several diseases don't discriminate between the types of plants they infect. Chewed, yellowed or wilting leaves, powdery substances on stems and petioles, and holes in leaves and blossoms are clear indicators that something is not quite right in marigold-land.

Of the several species of aphids, those that attack marigolds include the dark gray bean aphid, the green peach aphid and the yellow to greenish-yellow melon, or cotton, aphid. While the destructive insects sometimes occur singly on plants, they are more commonly found in clusters along the undersides of leaves and stems. Large numbers of aphids produce a sticky substance called honeydew, and they can cause marigold leaves to turn yellow and stunt new growth. Spider mites are tiny pale green arachnids that live in colonies beneath leaves and often spin webs along leaf stems. Damage includes small holes in the leaves where the mites feed, which in severe cases causes leaves to turn yellow or red and fall off. Damage caused by thrips, tiny winged insects with almost translucent bodies, includes stunted growth of marigold plants and misshapen leaves.

Diseases that affect marigolds are often caused by molds and fungi that develop in the soil and are transmitted from plant to plant. Like insects, fungi are living organisms that feed on plant matter, causing the tissues in leaves and stems to wilt, which often results in the death of the plants. Botrytis blight (Botrytis cinerea) causes spotting and discoloration of flowers; the marigold buds rot before they even open, and leaves become discolored, rot and fall off. Root and crown rot of marigold plants caused by several different fungi produce stunted growth, soft darkened stems that break off and wilting of entire plants. Leaf spots in the form of yellow or brownish raised lesions cause leaves to fall off, while verticillium wilt causes leaves to turn yellow, tan or brown from the bottom of the marigold plants upward, resulting in death.

Solutions and Prevention
Environmentally conscious methods of marigold plant pest control include destroying all affected plant parts and flushing small pests such as spider mites, aphids and thrips with a strong stream of water from the garden hose. For more severe infestations, spray all parts of the plants with a solution of neem, a horticultural oil that smothers the insects. In a spray bottle, mix 1 teaspoon of neem oil and about 1/2 teaspoon of dish detergent or insecticidal soap with 1 quart of warm water. Mix thoroughly and apply to the entire marigold plant at the first sign of problems and once a week until no sign of the pests remains. Do not apply insecticidal soap when the temperature is above 90 degrees Fahrenheit. Treat fungal diseases by mixing 1/2 to 2 ounces of liquid copper fungicide concentrate with 1 gallon of water, and spray all parts of the plant at the first sign of infection and every seven to 10 days as long as necessary. Wear gloves and protective clothing, and observe label safety precautions when mixing or applying garden chemicals. Marigolds can be grown year-round as perennials in U.S. Department of Agriculture plant hardiness zones 9 to 11, and elsewhere as warm-season annuals.
According to the University of California, Davis, many insect and fungal pests can be prevented with good gardening practices that include proper watering, providing proper sunlight and air circulation and keeping the area weed-free. When watering marigolds, wet the soil rather than the plants, and allow any excess water to drain off, since chronically damp soil invites disease in plants that have been weakened by insect damage. Keep the plants trimmed of all diseased or damaged parts, and aerate the soil around them periodically to prevent any potential insect colonies from forming.
Amazing discoveries on seahorses could lead scientists to a better understanding of the evolutionary process. What makes a seahorse so different from other fishes and sea creatures? That’s what a team of international researchers were trying to discover when they began to sequence the genome of a tiger tail seahorse, one of the 47 known species of seahorses in the world today. This is the first attempt at sequencing the seahorse genome, and the researchers are hoping the process might explain why they don’t resemble any other ocean creatures. Seahorses are the only vertebrates on Earth that reproduce through the male of the species, and combine that with the vertical body orientation and bony plates covering the body instead of scales, well, you certainly have a unique specimen to study. The genome analysis, which focused on the tiger tail species because of the abundance of that species near the research lab in Singapore, is still very early in the process, but already the team is saying they have learned a lot about evolution of the seahorse. “Regulatory elements are DNA segments that control the function of genes,” according to a statement issued by the University of Konstanz. “Some of them barely change during the course of evolution since they have important regulatory functions. But several such unchanging and seemingly crucial elements are missing in sea-horses. This is also and especially the case for elements that are responsible for the typical development of the skeleton in fish, but also in humans. This is probably one of the reasons why the seahorse’s skeleton has been so greatly modified.” The statement continued to say the seahorse’s body “is armoured with bony plates that add strength and better protection from predators. Additionally, its prehensile curly tail allows seahorses to be camouflaged and remain motionless by holding on to seaweed or corals. The genome sequences suggest that the loss of the corresponding regulatory sequence led to this ossification.” Findings from the study were published in the journal Nature.
Desalination is a process by which salt water and brackish water are drawn from the ocean and run through a desalination and purification system to produce clean, drinkable water. Desalination technology is hailed as a positive answer to worldwide water shortages, and is being developed and encouraged in areas that are close to oceans but lacking in freshwater supplies. However, desalination is not a fail-safe process and carries with it many environmental repercussions. The disadvantages of desalination are causing many people to think twice before starting desalination projects.

As with any process, desalination has by-products that must be taken care of. The process of desalination requires pretreatment and cleaning chemicals, which are added to water before desalination to make the treatment more efficient and successful. These chemicals include chlorine, hydrochloric acid and hydrogen peroxide, and they can be used for only a limited amount of time. Once they've lost their ability to clean the water, these chemicals are dumped, which becomes a major environmental concern. These chemicals often find their way back into the ocean, where they poison plant and animal life.

Brine is the other major by-product of desalination. While the purified water goes on to be processed and put into human use, the water that is left over, which carries a highly concentrated load of salt, must be disposed of. Most desalination plants pump this brine back into the ocean, which presents another environmental drawback. Ocean species are not equipped to adjust to the immediate change in salinity caused by the release of brine into the area. The highly concentrated salt water also decreases oxygen levels in the water, causing animals and plants to suffocate. The organisms most commonly affected by brine and chemical discharge from desalination plants are plankton and phytoplankton, which form the base of the marine food chain. Desalination plants therefore have the ability to negatively affect the population of animals in the ocean. These effects are compounded by the problems of "impingement" and "entrainment": while sucking ocean water in for desalination, the plants trap and kill animals, plants and eggs, many of which belong to endangered species.

Desalination is not a perfected technology, and desalinated water can be harmful to human health as well. By-products of the chemicals used in desalination can get through into the "pure" water and endanger the people who drink it. Desalinated water can also be acidic enough to corrode pipes and upset digestive systems. In an age when energy is becoming increasingly precious, desalination plants have the disadvantage of requiring large amounts of power. Other water treatment technologies are more energy efficient.
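The scale of that power requirement is easier to appreciate with a rough, back-of-the-envelope calculation. The sketch below is illustrative only: the plant size and the energy cost per cubic meter (roughly 3 to 4 kWh per cubic meter is a commonly cited range for seawater reverse osmosis) are assumptions, not figures for any specific facility.

```python
# Rough, illustrative estimate of desalination energy demand.
# All values are assumptions, not measurements from a particular plant.
DAILY_OUTPUT_M3 = 100_000       # fresh water produced per day (cubic meters), assumed
ENERGY_PER_M3_KWH = 3.5         # assumed energy cost per cubic meter (kWh), typical of reverse osmosis

daily_energy_kwh = DAILY_OUTPUT_M3 * ENERGY_PER_M3_KWH
average_power_mw = daily_energy_kwh / 24 / 1000   # spread over 24 hours, converted to megawatts

print(f"Daily energy use: {daily_energy_kwh:,.0f} kWh")
print(f"Average continuous power demand: {average_power_mw:.1f} MW")
```

Under those assumptions, a mid-sized plant draws on the order of 15 MW around the clock, which helps explain why energy use is one of the most frequently cited disadvantages of desalination.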
A few students have asked us for tips on programming. Just watching a video or listening to a programming class will not make you a professional programmer. Here is a list of tips for programming/coding.
- Don't waste money buying books; try searching on Google and watching videos on YouTube.
- Try to use the PDFs and eBooks available online.
- Don't copy and paste code; try to type it out yourself.
- Look at the example code.
- When you learn coding, make sure you write it down on paper or note it on your computer.
- Don't just read the code; run it.
- Always keep a backup of your programming files.
- Try to understand the code.
- Learn to use a debugger.
- Try to write your own program once you have learned a concept.
- Code your programs on Linux to get broader experience.
- Experiment with changes to the code.
- Try to pick one programming language and learn it thoroughly.
- Learn fundamental languages like Python, Java, HTML, and C.
- Learn the core concepts of the language.
- Focus on one concept at a time.
- Examine the syntax.

There is no standard process for learning to code. There are lots of guidelines, courses, ideologies and set traditions, but there is no single correct way. Learning to code is quite manageable when you devote a sufficient amount of time and effort, and you can develop very strong programming skills in a short amount of time.
- Get started with C, C++ or Java, because these are the standard languages used in programming competitions.
- Learn C++ if you are already good at C; it is a very popular language, it is fast, and it has an excellent library in the form of the STL (Standard Template Library).
- There are high-quality websites for learning to code online, such as code.org and many more.
- To start with, pick simple problems that only require you to translate plain English into code and that do not require any knowledge of algorithms (a small example appears at the end of this post).
- At the beginning there is no need to write long pieces of code. Keep your programs short, simple and easy to follow.
- Keep practicing problems until you are comfortable with them.
- Start using basic algorithms. You can learn them from sites such as Topcoder.
- Once you can implement the popular algorithms, start solving medium-level problems.
- Try to participate regularly in programming contests, and afterwards solve the problems you could not solve during the contest.
- Read the code written by highly rated programmers and compare your solutions with theirs. Analyse how their approaches could improve your own.
- Do not spend too much time when you're stuck somewhere. Understand the algorithm and then code it, but do not look at the answer before you have tried to write the code on your own.
- Programming is a hands-on skill. It is not enough to solve a problem theoretically; you have to code it and get the solution accepted. Knowing which algorithm to use and implementing it are two different things, and it takes both to be good at programming.

Learning to code is going to take a lot of time, and the key is practicing regularly. Do not give up after reading this post; try to implement these tips, even if it takes many hours or days. Remember that everything requires practice to master. The final tip: giving up is not an option.
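To make the advice about "simple problems that translate plain English into code" concrete, here is a minimal sketch in Python. The problem statement, the function name, and the sample scores are all made up for illustration; the point is practicing the habit of turning a plain-language requirement directly into a short, runnable program.

```python
# Beginner exercise: "Given a list of exam scores, count how many are
# passing (50 or above) and report the highest score."
# The sample scores below are made-up data for illustration.

def summarize_scores(scores, pass_mark=50):
    passing = sum(1 for s in scores if s >= pass_mark)  # count scores at or above the pass mark
    highest = max(scores)                               # find the best score
    return passing, highest

if __name__ == "__main__":
    sample = [38, 72, 50, 91, 46, 64]
    passed, best = summarize_scores(sample)
    print(f"Passing scores: {passed}")   # Passing scores: 4
    print(f"Highest score: {best}")      # Highest score: 91
```

Typing out a small program like this by hand, running it, changing it, and running it again covers several of the tips above (type the code yourself, run it rather than just reading it, and experiment with changes) in a single sitting.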
Humans sweat for a reason: sweating is a physiological process the body uses to cool itself down and maintain its internal temperature, a process called thermoregulation. However, some people have conditions that cause them to sweat too much or not enough due to various physiological issues. It is called hyperhidrosis when a person sweats in excess of what is needed by the body to maintain thermoregulation. This just means that a person is sweating more than is useful for the body. Hyperhidrosis itself is not dangerous, but the underlying issues that cause it can be. There are two main types of hyperhidrosis: primary focal hyperhidrosis and secondary generalized hyperhidrosis. Primary focal hyperhidrosis develops when a person is younger and is not dangerous. However, it does cause both physiological and psychological problems for people that can greatly impact their quality of life. Secondary generalized hyperhidrosis comes on suddenly in adulthood, and it can be an indication that someone is unwell. Secondary generalized hyperhidrosis can indicate a serious underlying health issue, and this type of hyperhidrosis needs to be managed by a doctor. Most often, secondary hyperhidrosis is caused by a medication side effect. In this case a patient can choose to stop the medication or use an oral medication to treat hyperhidrosis symptoms if going off of their medicine is not an option. Sometimes, however, there is a physiological condition or disease that is causing hyperhidrosis, and it is very important to have it checked out by a doctor. Some conditions that cause secondary hyperhidrosis, like pregnancy, are not medically dire, but some are. Cancers, like lymphoma, and infections, like tuberculosis, can cause excessive sweating, so someone who suddenly develops secondary hyperhidrosis needs to seek medical attention. In most instances excessive sweating is caused by a medication or a benign medical condition, but it is important to have it checked out. Primary focal hyperhidrosis is not physically dangerous, but it can be bad for your health in general and cause some physical issues if it is not well managed. Excessive sweating can make it difficult to maintain cleanliness, make it more likely for someone to develop a skin infection, and can destroy a person's clothing. Hyperhidrosis has its largest impact on a person's quality of life, however, and this is where it does the most damage. People with hyperhidrosis often struggle with anxiety due to the effect it has on their lives. It can be an extremely embarrassing condition and it can cause people to lose out on experiences they would have otherwise enjoyed. Many people with hyperhidrosis find that it impacts several aspects of their daily lives, including intimate relationships, leisure activities, personal hygiene, work, and self-esteem. The effect hyperhidrosis has on a person's quality of life should not be underestimated. It is just as important to seek treatment for the psychological aspects of the disease as the physical. Luckily, there are several effective treatment options that can help people with hyperhidrosis improve their quality of life. Specifically, Botox treatment for axillary hyperhidrosis and a surgical procedure for primary focal hyperhidrosis, called endoscopic thoracic sympathectomy, have been shown to significantly improve patients' quality of life. This is especially true when someone suffers from severe hyperhidrosis.
There are many other treatments for sweaty hands and feet, and for axillary hyperhidrosis, that can also improve patients' symptoms and their quality of life. There has been some question about whether antiperspirant is safe and, so far, no studies have found negative health outcomes for those who use aluminum-based antiperspirants.
- Pariser, D. M. (2014). Hyperhidrosis (4th ed., Vol. 32). Philadelphia, PA: Elsevier.
Where Does it All End Up? From FUTURESTATES collection, lesson plan 8 of 13 Audience: Grade 9 Biology, Environmental Science, Earth Science, Grades 10-12 Advanced Earth/Space Science, Advanced Biology, Advanced Environmental Science. Overview of Plastic Bag: Struggling with its immortality, a discarded plastic bag ventures through the environmentally barren remains of America as it searches for its maker. Summary of the Lesson: In this lesson, the students will view the film Plastic Bag and evaluate the information presented in the film. They will gather data and determine the scope of plastic bag use and disposal issues. Students will investigate predictions related to Plastic Bag on the FUTURESTATES Predict-O-Meter website and discuss their viability. They will present proposed solutions to the problem presented in the film and post their own predictions on the website. National Educational Standards: All components are aligned to the National Science Education Standards as presented by the National Academy of Science and available as a free download. NS.9-12.1 SCIENCE AS INQUIRY As a result of activities in grades 9-12, all students should develop: - Abilities necessary to do scientific inquiry - Understanding about scientific inquiry NS.9-12.3 LIFE SCIENCE As a result of their activities in grades 9-12, all students should develop understanding of: - Interdependence of organisms S.9-12.4 EARTH AND SPACE SCIENCE As a result of their activities in grades 9-12, all students should develop an understanding of: - Geochemical cycles NS.9-12.6 PERSONAL AND SOCIAL PERSPECTIVES As a result of activities in grades 9-12, all students should develop understanding of: - Natural resources - Environmental quality - Natural and human-induced hazards - Science and technology in local, national, and global challenges In addition to the National Standards for Science, the lesson plans provide an excellent framework for instruction in Media Literacy. This instruction further supports both NS.9-12.1 SCIENCE AS INQUIRY and NS.9-12.7 HISTORY AND NATURE OF SCIENCE by instructing students in methods to make them more effective in media analysis. Information on Media Literacy can be found here. Background Brief: This is the information for the teacher. It includes information about the disposal, recycling, and environmental problems associated with the widespread use of plastic bags, and may help you direct your students through the lesson plans for Plastic Bag. Paper or Plastic? The commonly asked question “Paper or plastic?” is simply a reflection of how common the use of plastic bags has become. In the United States alone, over a billion plastic bags are given away to customers every day. Less than 3% of those bags are ever recycled. The rest are destined for the landfill, although many of them never make it there. The bags are so light that they often become airborne and can drift until they become stuck on something or bogged down in water. Those that do make it to the landfill will take as much as a thousand years to degrade. Ironically, the plastic bag is in many ways more desirable than the alternative, the paper bag. Both are expensive in terms of the energy needed to produce them, but the paper bag requires the most energy to produce. In addition, paper bags take up much more room in our landfills. Paper bags are biodegradable, but because our landfills are designed to keep out water and air, paper bags take much longer to break down than they would in a normal environment. 
In a landfill, it may take a century for a paper bag to decompose, which creates significant volume issues for landfills. What’s the Problem? In most cases, the real problem with plastic bags is the fact that they don’t reach the landfills. The United States alone introduces over 8 billion pounds of plastic into the waste stream every year. When even a fraction of that amount escapes the waste stream as litter, the consequences can be devastating, particularly if the plastic becomes airborne. Land animals and birds often mistake bits of plastic for food. When the plastic is ingested, the plastic can choke the animal or block the intestinal tract. Even when the pieces are tiny, they can be very hazardous to wildlife. The plastic particles are polymers, which act as “sponges,” accumulating hazardous chemicals. The effects of plastic bags on land animals are significant, but the effects on marine life are devastating. Over a million seabirds and over 100,000 marine mammals and sea turtles die every year from ingesting pieces of plastic or tangling in plastic netting and line. Small pieces of plastic bags look like jellyfish to turtles. Animals have been found dying with nothing but pieces of plastic bags in their stomachs. Seals and turtles alike have suffocated, encircled by plastic rings that slowly choke them as they grow. The Pacific Ocean The major oceans are made of gigantic gyres that drive the currents and the flow of water around the planet. In the center of the North Pacific Gyre is an area of virtually no wind and very high air pressure. It is the center of a slowly circling vortex of water. Wind and water currents have come together to create a “trap” for the plastic debris floating in the ocean. In an area roughly twice the size of the continental United States, over 100 million tons of flotsam—mostly plastic in origin—have created a “trash vortex” that continues to circulate through the Pacific Ocean. The danger to wildlife is significant and the vortex is growing. What Can We Do? The most obvious answer is to reduce the amount of plastic entering the waste stream. Some countries, such as Ireland, have already instituted a tax on the use of plastic bags. Cities like Boston, and the entire state of California, have considered bans on plastic bags. The problem is that people will either pay the tax or increase the use of paper bags. The current focus for most people working on this issue is on the development of more efficient, consumer-friendly ways to recycle the plastic bags. - The Making of Plastic Bag - Paper Bags in Landfills - An article on the plastic ducks ending up in the Pacific gyre - Are Plastic Grocery Bags Sacking the Environment? - The World’s Dump: Ocean Garbage From Hawaii to Japan - Good Stuff? - An adventure ecology project that follows the creation of a huge catamaran from recycled plastic and its voyage through the Pacific vortex Curricula Writer: A 23 year veteran of teaching, Kathie L. Hilbert is currently the Science Chair at Connersville High School in Connersville, Indiana. Ms. Hilbert has both a BA (University of Evansville) and MAT (Miami of Ohio) in Biology. Ms. Hilbert has taught all levels of Biology and Earth Science, as well as Botany and Geology. She has also accompanied and supported her students on several summer Marine Biology programs held in Hawaii. Ms. Hilbert has written and developed curriculum for Botany, Geology, and Early College Earth Science as well as revised curriculum for other classes. 
She has also written curriculum for community Science Outreach Programs and was a Science Ambassador for the CDC (writing lesson plans for their website). Ms. Hilbert was Fayette County’s Teacher of the Year in 2001 when she also successfully attained National Board Certification in science teaching. Download activity handouts and lesson plan materials at http://itvs.org/educators. - Compare and contrast the benefits and problems associated with the use of paper and plastic bags. - View the film Plastic Bag. - Calculate theoretical plastic bag use in the students’ community or state for one day. - Paper and plastic bag - Computer with internet access - Plastic bag use worksheet (optional, see page 8) - Calculators (optional) Beginning (10 minutes) The teacher will have a paper bag and a plastic bag. Ask: “Have you ever been asked to choose ‘Paper or plastic?’ What did you choose? Why?” Follow with a brief discussion of the pros and cons of each choice. The teacher will differentiate between biodegradable and recyclable. “Have you ever wondered how many plastic bags are actually used every day? Let’s try to develop an educated guess. How many bags do you typically use when groceries are purchased?” Come up with a class consensus. How many people buy products in a single day in a single store? (Select and discuss a specific store, such as Costco or Best Buy.) Multiply the number of customers by the number of bags, then multiply that by number of days in a week, the number of other stores in the community, etc. “Are you surprised at the number? We are now going to watch a film that presents a different perspective on the use of plastic bags. It is a futuristic fantasy with a very strong message. Enjoy the film, but listen to its message.” Middle (30 minutes) View Plastic Bag. Upon completion of film, discuss student impressions of the film. Sample questions: * What did the film reveal that you didn’t already know? * Is this film based on fact, opinion, or something else? * Do you think the “Pacific Vortex” is real? * What is left out of this message that you probably need to know? * What might have happened to the pieces of the plastic bag? * Why are the people “missing”? * Have any of you seen A.I.? It is the story of an android that lives forever, waiting for its “mother.” How is the android like the plastic bag? End (3-5 minutes) Tomorrow we will investigate some of the claims in the film. We’ll try to find an answer to the question “How bad is the problem of plastic bags...really?” * Responses to intro discussion * Responses to discussion questions after viewing film The students will: - Investigate and verify or refute information from the film. - Assess the nation’s current usage of plastic bags per day or year. - Identify procedures in place to reduce plastic bag use. - Identify procedures in place to safeguard wildlife from non-biodegradable waste. * Computer with internet access * Website evaluation guide (see page 9) * Plastic Bag Web Research Sheet (see page 10) Beginning (10-12 minutes) Remind students of information from the film. Remind students of the theoretical amount of plastic bags generated per day in their community or state. “The film presents a very strong warning about the continued use of plastic bags, but is it a valid warning? Where did the filmmakers get their information? 
Today we are going to search for more information on the problems associated with plastic bag disposal, but first everyone will examine the information used in the making of Plastic Bag.” Students will go to the FUTURESTATES website, click on Plastic Bag, and then click on “The Making Of” video the left hand bar. The film is about five minutes long and provides additional information on the problems associated with plastic bag disposal. Note: “The Making Of” film may be used as a source of research or as an introduction to the student’s own research. Middle (35-40 minutes) Students will be assigned specific areas to research. The teacher will provide suggestions for evaluating the credibility of websites. Students will share their information with class using worksheet provided (see supplemental documents). General instructions using Google (any search engine may be used for this investigation): Group 1: Google: plastic bag environment Group 2: Google: plastic bags in the Pacific Ocean Group 3: Google: plastic bags in the food chain Group 4: Google: plastic bags recycling Group 5: Google: plastic bags in landfills Students are to gather information from the internet pertaining to the group topic. Each group should divide the website investigations among the members. Each student should be responsible for evaluating and gleaning information from 1-2 websites. End (5 minutes) Tomorrow we will discuss our findings and investigate the predictions on the FUTURESTATES Predict-O- Meter website. Begin thinking of a prediction of your own about future environmental issues related to plastic bags. You must be able to support your prediction with current facts or trends. - Responses during introductory discussion - Accurate location and evaluation of assigned website - Completion of information worksheet The student will: - Investigate the predictions for Plastic Bag posted on the Predict-O-Meter located on the FUTURESTATES website. - Create and post their predictions about the future effects of plastic bag usage. - Computer with internet access - Prediction Evaluation sheet (see supplemental materials) Beginning (10-15 minutes) Students will share and discuss the information they found from the previous day’s lesson. The teacher will aid the students in creating a summary (list) of the important facts about plastic bag use and disposal. Middle (30-35 minutes) Students will visit the FUTURESTATES website and investigate the predictions posted on the Predict-O- Meter for Plastic Bag. Students will select three predictions to analyze on the provided worksheet (see supplemental materials). Students will then create 1-3 predictions of their own to post on the site. The predictions must be based on science and approved by the teacher. The predictions may alter a course projected in a Predict-O-Meter prediction. Students may require an example of a valid prediction. Using the rubric to instruct the students, prepare a sample prediction and lead the class in an analysis of the statement. The following is an example of a proposed prediction and the evaluation of it using the prepared rubric. Proposed Prediction: “In 2012, following the disastrous leaks of undersea oil rigs during 2010 and 2011, a new strand of petroleum-eating bacteria is developed. The organism is capable of devouring many plastic polymers as well.” - Is the prediction based on scientific possibilities? Yes: there are already bio-engineered bacteria that can consume oil. - Do the consequences of the prediction support the film? Don’t know. 
The film does not present an answer to pollution already present. - Do known events in the past support the prediction? Yes: Archaebacteria have species that are chemosynthetic. - Is this prediction plausible? This is the evaluator’s opinion based on the evidence presented in defense of the prediction. End (Time Varies) Go over the Predict-O-Meter activity instructions with students (see supplemental materials) and direct them to complete the activity. Tell the class that tomorrow we will share our predictions and revisit the film. To save time, the teacher could already have the calculations finished for the plastic bag use calculation activity. The activity could be extended into a homework assignment in which students select different establishments and collect actual data on the number of customers and the number of bags used by the store. The film could be viewed as enrichment following instruction in ecology/pollution or it could also be paired with Mr. Green, as both address environmental concerns. The search for information on this topic could easily extend to two class periods. There is a wealth of information to investigate. If desired, the students could write a research paper on one of the group topics. It may be desirable to simply investigate the Predict-O-Meter site. Students may explore the site without formal evaluation or development of predictions. If time permits, the unit could be expanded by viewing the film a second time and: - Responding to the following writing prompt: “How has your perception of the film changed from the first time you saw it? What is your answer to the question ‘How bad is it...really?’” Information for creating writing rubrics: - AP 9-point rubric) - Sample Six Traits rubrics) Instead of a writing prompt, the students could propose a sequel to the film based on what they have learned. Students would work in teams of 3-5 to develop an outline of their sequel. Upon approval by the instructor and depending upon availability of equipment, students could write a short skit and either perform or film it for the class. The film could also be posted on the school or class website. If desired, the students could analyze the presentations using the “key questions” presented by the National Association for Media Literacy Education. Students could analyze the effectiveness of the film Plastic Bag in educating students about the problem of waste plastic bags. If the students have not previously been instructed in media literacy, this lesson could provide an opportunity to do so. In addition to Plastic Bag, students could watch the film Story of Stuff and evaluate it using the internet site evaluation guide from Lesson 2. Students could then compare the differences in the approaches used by the filmmakers as well as the effectiveness of each film in educating students about the problems of waste. (Story of Stuff is 20 minutes long.)
A windmill is a machine that harnesses the energy of the wind for tasks such as grinding grain, pumping water and, today, generating electricity. The secret behind how windmills work is the driving force of the wind acting upon a number of vanes or sails. People have understood how windmills work since ancient times; the first known structures of this type were built in Persia during the 7th century AD, and they were later widely used in Europe, especially in Holland. Today, they are primarily used to generate electricity.

How Windmills Work

Windmills use blades to catch the wind that flows over them; the blades also generate lift, which turns the rotor. The blades are linked to a drive shaft, which in turn is linked to an electric generator. Electricity is produced as the blades turn the drive shaft, and it is then sent through wires and collected. The windmill's location is crucial, because it determines whether the mill has access to the best possible wind resources.

Types of Windmills

How windmills work also depends on their type. There are various types of windmills, classified by the orientation of the axis around which their blades rotate. The two most common designs are horizontal-axis turbines and vertical-axis turbines. Horizontal-axis turbines are by far the most popular and the more traditional type; they resemble an airplane propeller. Vertical-axis windmills have blades arranged like an egg beater; they are far less popular and make up only a very small percentage of the total.

Modern windmills, also called wind turbines, are positioned in large groups known as wind farms, and some of these turbines are actually offshore. Turbines generate about forty percent of the total electricity on Earth; however, most of them are driven by steam produced by burning fossil fuels or by nuclear fission, rather than by wind.

Size matters in how windmills work. The amount of electricity a windmill can produce depends on the size of the structure: bigger windmills sweep more area and can generate more electricity. A small windmill can power a single household, while a wind farm can generate many megawatts of electricity, enough to power a whole community. Wind turbines produce zero emissions and use an entirely renewable fuel source. Electricity production does depend on the wind itself, which can be irregular, and problems may occur when gusts slacken. Because windmills use renewable energy, further development is encouraged, and over time technology has enhanced the effectiveness of wind turbines. They have become a cost-effective enterprise for both producers and consumers.
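The relationship between turbine size and output can be made concrete with the standard wind-power equation, P = 1/2 × ρ × A × v³ × Cp, where ρ is air density, A is the area swept by the blades, v is wind speed, and Cp is the fraction of the wind's energy the turbine actually captures (theoretically capped at about 59 percent, the Betz limit). The rotor radius, wind speed and efficiency in the sketch below are assumed round numbers for illustration, not the specifications of any real machine.

```python
import math

# Rough power estimate for a single wind turbine: P = 0.5 * rho * A * v^3 * Cp.
# All input values are illustrative assumptions.
AIR_DENSITY = 1.225     # kg/m^3, air at sea level
ROTOR_RADIUS = 40.0     # m (blade length), assumed
WIND_SPEED = 10.0       # m/s, assumed steady wind
POWER_COEFF = 0.40      # Cp, assumed capture efficiency (Betz limit is ~0.59)

swept_area = math.pi * ROTOR_RADIUS ** 2                                      # m^2
power_watts = 0.5 * AIR_DENSITY * swept_area * WIND_SPEED ** 3 * POWER_COEFF
print(f"Estimated output: {power_watts / 1e6:.2f} MW")                        # ~1.2 MW
```

Because the swept area grows with the square of the rotor radius and the power grows with the cube of the wind speed, both turbine size and siting have an outsized effect on how much electricity a windmill can produce.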
Marxism-Leninism considers all questions in their historical settings. Marxist-Leninists view bourgeois nationalism under the given historical conditions. Drawing a distinction between its different objective roles, they decide what different attitudes the proletariat should take toward it. In the early period of capitalism, the national movement led by the bourgeoisie had as its objective the struggle against oppression by other nations and the creation of a national state. This national movement was historically progressive, and the proletariat supported it. In the present period, such bourgeois nationalism still exists in the colonial and semi-colonial countries. This variety of nationalism also has a certain objective progressive historical significance. The bourgeoisie of Europe, the United States, and Japan has established the imperialist system of colonial and semi-colonial oppression in many backward countries. In such colonial and semi-colonial countries as China, India, Korea, Indonesia, the Philippines, Viet-Nam, Burma, Egypt, etc., bourgeois nationalism naturally developed. This was because the national bourgeoisie in these countries has interests antagonistic in the first place to those of imperialism, and in the second place to those of the domestic backward feudal forces. Moreover, these feudal forces unite with imperialism in restricting and hampering the development of the national bourgeoisie. Therefore, the national bourgeoisie in these countries is revolutionary in a certain historical period and to a certain degree. Bourgeois nationalism in these countries has a decidedly progressive significance when the bourgeoisie mobilize the masses in the struggle against imperialism and the feudal forces. As Lenin pointed out (in a speech delivered at the Second Congress of the Eastern Peoples), nationalism of this type "has historical justification". Therefore the proletariat, with the aim of overthrowing the rule of imperialism and the feudal forces, should collaborate with this bourgeois nationalism, which plays a defiantly anti-imperialist and anti-feudal role, provided, as Lenin said, that these allies do not hinder us in educating and organizing the peasantry and the broad masses of the exploited people in a revolutionary spirit. The clearest example of this type of collaboration was that which existed between the Chinese Communists and Sun Yat-sen. Sun Yat-sen's nationalism was a form of bourgeois nationalism. The Three People's Principles of Sun Yat-sen, as Comrade Mao pointed out in his New Democracy, underwent great changes in the two historical periods before and after the Russian October Socialist Revolution. In the former period, they came under the category of old democracy; that is, they remained within the scope of the bourgeois democratic revolution of the old world and were a part of the bourgeois and capitalist world revolution. In the latter period, however, they belonged to New Democracy; that is, they pertained to the scope of the new bourgeois democratic revolution and were a part of the proletarian Socialist world revolution. Sun Yat-sen's nationalism in the old democratic era had a dual character. His opposition to the current rulers of China, the Manchu Dynasty, had a progressive character. Yet the Greater Han-ism he advocated had a reactionary character. After the October Revolution, when China entered the New Democratic era, Sun Yat-sen received help from the U.S.S.R. and from us Chinese Communists.
He then revised his nationalism, characterized by Greater Han-ism, and turned toward a revolutionary nationalism characterized by his active opposition to imperialist aggression and his adoption of the three policies of alliance with the Soviet Union, alliance with the Chinese Communist Party and support for the workers and peasants. He also advocated that "the Chinese nation should strive to liberate itself" and that "there should be equality for all nationalities within the country" (Declaration of the First Congress of the Kuomintang). Thus he turned toward New Democracy, and we Communists therefore adopted the policy of collaborating with him. This collaboration was absolutely correct and necessary for national liberation and was in accord with the interests of the proletariat at the time, even though it was an unreliable, temporary and unstable alliance which was later undermined by the shameless betrayers of Dr. Sun Yat-sen's cause. Although Sun Yat-sen's world outlook at the time was still of a bourgeois or petty-bourgeois character, and although his nationalism was still a form of bourgeois nationalism preserving some reactionary features (for instance, his concepts of so-called "common blood," "state and nation," "Greater Asianism," etc.), nevertheless he stood for the doctrine of a national revolution which called for "arousing the people and uniting in a common struggle with all nations in the world who treat us as equals." He also put into effect the three great policies of alliance with the U.S.S.R., alliance with the Chinese Communist Party and support for workers and peasants. This was an excellent illustration of the progressive character of revolutionary bourgeois nationalism in colonial and semi-colonial countries during the new era of world Socialist revolution. It was of enormous revolutionary significance. However, shortly after Sun Yat-sen's death, the brazen betrayers of his cause - the representatives of the big bourgeoisie such as Chiang Kai-shek, Wang Ching-wei and other reactionary leaders of the Kuomintang - began to turn Sun Yat-sen's doctrine of national revolution in the opposite and extremely counter-revolutionary direction. They swung from the anti-imperialist struggle to capitulation to imperialism, from alliance with the Soviet Union to struggling against it, from unity with the Chinese Communist Party to attacks on the Party, and from supporting the workers and the peasants to slaughtering them. Moreover, they used the conservative and reactionary features of Sun Yat-sen's nationalism as their anti-national banner. It therefore became necessary for the Communist Party, in order to defend the interests of the nation, to adopt a firm policy of opposition to the Kuomintang reactionaries, who were headed by Chiang Kai-shek and Wang Ching-wei. Of course, the Communists in other colonial and semi-colonial countries such as India, Burma, Siam, the Philippines, Indonesia, Indo-China, South Korea, etc., must for the sake of their national interests similarly adopt a firm and irreconcilable policy against national betrayal by the reactionary section of the bourgeoisie, which has already surrendered to imperialism. If this were not done, it would be a grave mistake. On the other hand, the Communists in these countries should enter into an anti-imperialist alliance with that section of the national bourgeoisie which is still opposing imperialism and which does not oppose the anti-imperialist struggle of the masses of the people.
Should the Communists fail to do so in earnest, or should they, to the contrary, oppose or reject such an alliance, it would also constitute a grave mistake. Such an alliance must be established in all sincerity, even if it should be of an unreliable, temporary and unstable nature. The experience of the revolution in other countries as well as in China fully confirms the correctness of the scientific Marxist-Leninist conclusion that the national question is closely linked with the class question, and the national struggle with the class struggle. An historical analysis of class relations reveals why, in certain periods, one country is oppressed by another and becomes a colony or semi-colony of imperialism; why national traitors may appear in such a country, not only from the ranks of the feudal classes, but also from the ranks of the bourgeoisie - for instance, from the ranks of the compradore, bureaucratic bourgeoisie in China. Such an analysis also reveals under what conditions, and under the leadership of which class, national liberation can be achieved. An historical analysis of class relations also reveals that although such outstanding national revolutionists as Sun Yat-sen sprang from China's petty bourgeoisie or national bourgeoisie, yet this bourgeoisie, generally speaking, views the national question solely in the light of its own narrow class interests and changes its position solely in accordance with its own class interests. In the same way, only the class interests of the proletariat are really in full accord with the fundamental interests of the people of a given country, and with the common interests of all mankind. When the proletariat of an oppressed nation, as is the case in China, enters the arena of struggle and becomes the leader of the national liberation struggle against imperialism and the saviour of the whole nation, then every genuinely patriotic class, party, group or individual inevitably forms an alliance with the Communist Party, as did Sun Yat-sen (and thus becomes linked with the policies of alliance with the Soviet Union and support for the workers and peasants). On the other hand, those persons or groups - like Chiang Kai-shek and Wang Ching-wei - who oppose the Communist Party (an opposition linked with opposition to the Soviet Union and to the interests of the workers and peasants) inevitably become servile lackeys of imperialism and the most vile, contemptible national traitors, who sell out their own country. An historical analysis of class relations further discloses that under the new conditions, in the new period of accentuated international and internal struggle, as a result of threats combined with all kinds of tempting offers and enticements held out by the imperialists, and owing to the developing class struggle within the country, there may appear within the revolutionary ranks such people as Chen Tu-hsiu and Chang Kuo-tao in China and Tito in Yugoslavia. These people capitulate to reactionary bourgeois nationalism, betray the common interests of the toilers of all countries and place the liberation of their own people in serious jeopardy. They are the spokesmen of bourgeois nationalism inside the ranks of the proletariat. They cynically desert the cause of national liberation in mid-course, and they divert their country down the road leading to its transformation into an imperialist colony. The Communist Parties of all countries and each individual Communist must be alert to this danger.
You will use what you learn about number relationships and the basic properties of operations on numbers in scientific notation to solve problems. After completing this tutorial, you will be able to complete the following:

Exponents, also called powers, tell how many times to multiply the base number. In the expression a^3, a is the base and 3 is the exponent. The expression means that a is used as a factor three times. If a = 2, then 2 × 2 × 2 = 8.

Scientific notation is a way of using exponents and powers of 10 to write very large or very small numbers. It is the product of two factors: a base number × a power of 10. The base number, a, is a decimal greater than or equal to 1 but less than 10. The exponent can be either positive or negative.

a × 10^b
4,000,000 = 4 × 10^6
0.0057 = 5.7 × 10^-3

Professionals in the science, math, and medical fields often find it very useful to write numbers using scientific notation.

9.46 × 10^15 meters (the distance light travels in one year)
5.44 × 10^6 square miles (the area of the Arctic Ocean)
7.53 × 10^-7 grams (weight of a particle of dust)

Please note that in the Activity Object all numbers except zero are written in scientific notation. The Laws of Exponents and the Properties of Powers assist in these operations. There are specific rules when performing operations with numbers in scientific notation. In the following example from the Activity Object, addition, subtraction, and multiplication are modeled.

The problems can be solved using number relationships. Despite the scientific notation, many of the problems can be solved by examining the relationships between numbers. Every number can be broken down into smaller numbers. For example, 7 can be created by adding several combinations of numbers - 2 + 5, 3 + 4, and 6 + 1. Students use information about number relationships to determine the placement of the numbers in the problems. Using numbers 1 through 9, A must be 1, 2, 3, or 4. When each of these numbers is doubled (or added to itself), the sum is less than ten. B must be 2, 4, 6, or 8, as these numbers are the only possible sums of 1, 2, 3 or 4 doubled. Using number relationships to solve this example allows us to narrow the possible options. When multiple problems are solved in Levels 2 and 3, students will be able to eliminate options more easily as they analyze the number relationships.

The Identity Property of Addition states that when any number is added to zero, the sum is the number. Students should be able to quickly recognize this property when solving the problems in this Activity Object. Algebraically, this can be expressed as: a + 0 = a. Zero is called the Identity Element of Addition. Using the Identity Property of Addition, we can see that the number added to C must be zero, since the sum of the addition problem is C. Therefore, D is zero.

The Identity Property of Multiplication states that when any number is multiplied by one, the product is the number. Again, most students should be able to quickly recognize this property when solving the problems in this Activity Object. Algebraically, this can be expressed as: a × 1 = a. One is called the Identity Element of Multiplication. Using the Identity Property of Multiplication, we can see that E must be one, since the product of the multiplication problem is the same as the other factor, F. Therefore, E is one.

The Multiplication Property of Zero states that when any number is multiplied by zero, the product is zero. This property is also easily recognizable when solving problems.
Algebraically, this can be expressed as: a × 0 = 0. Using the Multiplication Property of Zero, we can see that H must be zero, since one of the factors in the multiplication problem is zero. Therefore, H is zero.

Approximate Time: 10 Minutes
Pre-requisite Concepts: Students should understand the concept of number sense.
Type of Tutorial: Skills Application
Key Vocabulary: numbers in scientific notation, operations, properties
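As a quick illustration of the kinds of operations this tutorial describes, the short sketch below works through a multiplication and an addition in scientific notation. The particular numbers are arbitrary examples; Python's scientific-notation literals (4e6 means 4 × 10^6) are used so the arithmetic can be checked directly.

```python
# Working with numbers written in scientific notation.
# 4e6 means 4 x 10^6; 5.7e-3 means 5.7 x 10^-3.
a = 4e6
b = 5.7e-3

# Multiplication: multiply the base numbers and add the exponents.
# (4 x 10^6) * (5.7 x 10^-3) = (4 * 5.7) x 10^(6 + -3) = 22.8 x 10^3 = 2.28 x 10^4
print(f"{a * b:.2e}")        # 2.28e+04

# Addition: rewrite both numbers with the same power of 10, then add the base numbers.
# (3 x 10^4) + (5 x 10^3) = (3 x 10^4) + (0.5 x 10^4) = 3.5 x 10^4
print(f"{3e4 + 5e3:.1e}")    # 3.5e+04
```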
Over the last few weeks in Spanish class, 8th graders have been learning how to make comparisons of equality and inequality. They have practiced this by writing a mini-dialogue about something not being fair in their "No es justo" assignment, and have also made interview questions to ask their classmates about how fast they run, how tall they are, and more, so that they could continue to learn how to compare. This week, we turned our focus to Spanish-speaking countries and the verb jugar (to play). Students looked at the sports that are played in many Spanish-speaking countries, and interviewed each other to find out what sports are played in the Spanish-speaking world. They then created graphics to demonstrate what they learned. By doing some more cultural investigations, students not only have a better understanding of how to use the sentence structures they've been practicing, but are also gaining new insights into the values and traditions of the people who speak the language we are learning.

This week we have been unpacking the differences between ser and estar. This is a difficult endeavor because both of these words mean "to be." Although they cannot be used interchangeably, students have been learning the specific way to use each of them. The basic rule we have been working with is: Use estar for talking about the condition or state of something. For example: ¿Cómo estás? (How are you?) Use ser to talk about something's essential qualities. ¿Cómo eres? (What are you like?)

We recently practiced this by assuming the identities of famous Spanish-speaking writers, artists, and activists. Starting by choosing a Spanish-speaking country from a box, students looked for famous people from that country who they wanted to pretend to be. Then, students looked up information on the person: their birthdate, place of origin, and other important facts about them. Using this information and their new identity, students interviewed each other to learn more about the other famous folks in the room. Some of the people chosen were: Teresa de la Parra. After interviewing all of the famous people in the room, students designed diagrams to illustrate what they learned about each person they interviewed. Buen trabajo to all of our Spanish learners and gracias for all of their hard work!
Enter a World Where Small Is Powerful

Your computer uses a microprocessor to do its work. Smaller and thinner than a dime, this tiny silicon chip contains millions of transistors that work together to help you do everything from write a school report to search the Web for the current population of the Svalbard Islands. But what really is a microprocessor? How are they made? And how do they do all the things they do? Let's shrink ourselves down and explore the world of microprocessors.
As we age, it becomes harder and harder to recall names, dates—even where we put down our keys. Although we may fear the onset of Alzheimer's, chances are, our recollective powers have dulled simply because we're getting older—and our brains, like our bodies, are no longer in tip-top shape. But what is it that actually causes memory and other cognitive abilities to go soft with senescence? Previous research has shown that bundles of axons (tubular projections sent out by neurons to signal other nerve cells) wither over time. These conduits, collectively referred to as white matter, help connect different regions of the brain to allow for proper information processing. Now, researchers have found that these white matter pathways erode as we age, impairing communication or "cross talk" between different brain areas. "What we were looking at was the communication or cross talk between different regions of the brain," says study co-author Jessica Andrews-Hanna, a Harvard University graduate student. "The degree to which white matter regions are actually stable predicts the degree to which other regions are able to communicate with each other." Andrews-Hanna and other Harvard researchers (along with collaborators at the University of Michigan at Ann Arbor and Washington University in St. Louis) concluded that white matter naturally degrades as we age—causing disrupted communication between brain regions and memory deficits—after conducting a battery of cognitive tests and brain scans on 93 healthy volunteers, ages 18 to 93. Participants fell into two age groups: one 18 to 34 and the other 60 to 93 years of age. Scientists asked study subjects to perform several cognitive and memory exercises, such as determining whether certain words referred to living or nonliving objects. As they answered, researchers monitored activity in the fronts and backs of their brains with functional magnetic resonance imaging (fMRI) to determine whether those areas were operating in sync. According to the results, published in Neuron, communication between brain regions appeared to have "dramatically declined" in the older group. The researchers pinpointed the potential reason for the dip by doing further brain scans using diffusion tensor imaging, an MRI technique that gauges how well white matter is functioning by monitoring water movement along the axonal bundles. If communication is strong, water flows as if cascading down a celery stalk, says Randy Buckner, a cognitive neuroscientist at Harvard; if it is disrupted, the pattern looks more like a drop of dye in a water bucket that has scattered in all directions. The latter was more evident in the older group, an indication that their white matter had lost some of its integrity. The older crowd's performance on memory and cognitive skill tests correlated with white matter loss: the seniors did poorly relative to their younger peers. The researchers note that the white matter appears to fray more over time in the forebrain than in the brain's rear. They speculate that age-related depletion of neurotransmitters (the chemical signals sent between neurons) as well as the shrinking of gray matter (the tissue made up of the actual nerve cell bodies and supporting cells) also contribute to dimming memory and cognitive skills. Buckner says that the team now plans to examine how aging affects white matter as well as gray matter and neurotransmitters. "We want to know," he says, "is this an important factor in why some people age gracefully and others age less gracefully?"
The constant operation of the engine results in a large buildup of heat, which affects the performance of a vehicle once the amount of heat reaches a certain threshold. Too much heat will warp metal and cause permanent damage to the cylinders as well as other parts within the engine. In order to combat this issue, the internal combustion engine requires a process of heat transfer to remove the heat from the engine; this process is powered by the water pump.

Heat Transfer to Reduce Engine Heat Buildup

Within the world of physics, the principles of thermodynamics describe how heat affects the ability of a system to perform work. In the case of vehicles, these principles are applied to produce a moving piston. It's important to understand that in all reactions, including the combustion reactions within the engine, energy is neither created nor destroyed; instead, it is converted from one form to another, and much of it ends up as heat. Temperature is a measure of the kinetic energy that molecules possess: as the molecules become more excited, the temperature increases. Heat can be transferred from one area to another through three different processes: convection, conduction, and radiation.

The Water Pump Produces Forced Convection and Conduction Heat Transfer

Forced convection occurs when an external pushing force (here, the water pump) drives the heated fluid in a chosen direction instead of relying on the weak internal currents that buoyancy alone would produce. The cooling system runs throughout the engine, and the water pump forces the fluid, antifreeze/coolant, throughout this system. If the water pump fails to provide this force, the flow within the cooling system cannot carry heat away from the engine quickly enough, which is why any failing pump should be replaced according to the vehicle specifications as soon as possible. When the fluid passes near heated areas, heat is transferred from the engine to the fluid through convection. The proximity of the cooling system to the heated areas also allows heat transfer to occur through direct contact with heated parts as conduction, which happens when the vibrations of the atoms within an object (the kinetic energy mentioned above) are passed along to adjacent molecules.

Coolant Moves Toward the Radiator's Higher Surface Area

After absorbing heat from the engine, the fluid is pushed by the water pump toward the radiator. The radiator derives its name from the third process of heat transfer in thermodynamics: radiation. A heated fluid gives off thermal radiation in the form of electromagnetic waves, and the rate at which heat is shed depends on the surface area over which the fluid is spread. The radiator increases that surface area enormously, and the pressure supplied by the water pump keeps hot coolant moving through it so the heat transfer completes more quickly. Once the fluid has returned to a lower temperature, it flows back to the water pump, and the cycle repeats.

The concept of using a water pump to reduce heat buildup sounds relatively simple, but the laws of physics, specifically thermodynamics, are at the heart of effective heat management within the engine. Without the driving force behind the coolant, the engine would overheat, seize, and fail to perform.
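A back-of-the-envelope calculation shows how much heat the circulating coolant can carry away. The governing relationship is Q = m_dot × c × ΔT, where m_dot is the coolant mass flow rate the pump maintains, c is the coolant's specific heat, and ΔT is how much the coolant warms while passing through the engine. The flow rate, specific heat, and temperature rise below are assumed round numbers for illustration, not the specifications of any particular vehicle.

```python
# Rough estimate of heat carried away by engine coolant: Q = m_dot * c * delta_T.
# All values are illustrative assumptions.
MASS_FLOW_KG_S = 1.5          # coolant mass flow rate maintained by the water pump (kg/s), assumed
SPECIFIC_HEAT_J_KG_K = 3600   # approximate specific heat of a 50/50 water-glycol mix (J/(kg*K))
TEMP_RISE_K = 10.0            # assumed coolant temperature rise across the engine (K)

heat_removed_watts = MASS_FLOW_KG_S * SPECIFIC_HEAT_J_KG_K * TEMP_RISE_K
print(f"Heat carried to the radiator: {heat_removed_watts / 1000:.0f} kW")   # ~54 kW
```

If the pump slows or fails, the mass flow rate drops, and the same heat load forces a much larger temperature rise in the coolant and the engine, which is exactly the overheating scenario described above.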
When Margaret Rubega first read about how hummingbirds drink, she thought to herself: That can’t possibly be right. Hummingbirds drink nectar using tongues that are so long that, when retracted, they coil up inside the birds’ heads, around their skulls and eyes. At its tip, the tongue divides in two and its outer edges curve inward, creating two tubes running side by side. The tubes don’t close up, so the birds can’t suck on them as if they were straws. Instead, scientists believed that the tubes are narrow enough to passively draw liquid into themselves. That process is called capillary action. It’s why water soaks into a paper towel, why tears emerge from your eyes, and ink runs into the nibs of fountain pens. This explanation, first proposed in 1833, was treated as fact for more than a century. But it made no sense to Rubega when she heard about it as a graduate student in the 1980s. Capillary action is a slow process, she realized, but a drinking hummingbird can flick its tongue into a flower up to 18 times a second. Capillary action also is aided by gravity, so birds should find it easier to drink from downward-pointing flowers—and they don’t. And capillary action is even slower for thicker liquids, so hummingbirds should avoid supersweet nectar that’s too syrupy—and they don’t. “I was in this very odd position,” says Rubega. “I was only a graduate student and all these really well-known people had done all this math. How could they be wrong?” Even while she turned her attention to other birds, the hummingbird dilemma continued to gnaw at her. And decades later, as a professor at the University of Connecticut, she hired a student named Alejandro Rico-Guevara who would help her solve the mystery. Born in Colombia, Rico-Guevara remembers spotting a hermit hummingbird on a fateful field trip in the Amazon. In the jungle, most animals are heard rather than seen, but the hermit flew right up and hovered in front of his face. “It was just there for a split second but it was clear that it had a completely different personality than other birds in the forest.” He fell in love, and started studying the birds. And when he read the capillary-action papers, he felt the same pang of disbelief that Rubega did. “We decided to go after it,” says Rubega. “Is it capillary action? And if not, what’s going on? We just wanted to know.” Rico-Guevara handcrafted artificial flowers with flat glass sides, so he could film the birds’ flickering tongues with high-speed cameras. It took months to build the fake blooms, to perfect the lighting, and to train the birds to visit these strange objects. But eventually, he got what he wanted: perfectly focused footage of a hummingbird tongue, dipping into nectar. At 1,200 frames per second, “you can’t see what’s happening until you check frame by frame,” he says. But at that moment, “I knew that on my movie card was the answer. It was this amazing feeling. I had something that could potentially change what we knew, between my fingers.” Here’s what they saw when they checked the footage. As the bird sticks its tongue out, it uses its beak to compress the two tubes at the tip, squeezing them flat. They momentarily stay compressed because the residual nectar inside them glues them in place. But when the tongue hits nectar, the liquid around it overwhelms whatever’s already inside. The tubes spring back to their original shape and nectar rushes into them. The two tubes also separate from each other, giving the tongue a forked, snakelike appearance. 
And they unfurl, exposing a row of flaps along their long edges. It’s as if the entire tongue blooms open, like the very flowers from which it drinks. When the bird retracts its tongue, all of these changes reverse. The tubes roll back up as their flaps curl inward, trapping nectar in the process. And because the flaps at the very tip are shorter than those further back, they curl into a shape that’s similar to an ice-cream cone; this seals the nectar in. The tongue is what Rubega calls a nectar trap. It opens up as it immerses, and closes on its way out, physically grabbing a mouthful in the process. “This has been going on literally under our noses for the entire history of our association with hummingbirds and there it was,” says Rubega. “We were the first to see it.” This same technique is also how the hummingbird swallows. Every time it extends its tongue, it presses down with its beak, squeezing the trapped nectar out. And since there’s limited space inside the beak, and the tongue is moving forward, there’s nowhere for that liberated nectar to go but backward. In this way, the tongue acts like a piston pump. As it pulls in, it brings nectar into the beak. As it shoots out, it pushes that same nectar toward the throat. The tongue even has flaps at its base, which fold out of the way as it moves forward but expand as it moves backwards, sweeping the nectar even further back. The thing that really astonishes Rico-Guevara about all of this is that it is passive. The bird isn’t forcing its tongue open—that happens automatically when the tip enters liquid, because of the changing surface tension around it. Rico-Guevera proved that by sticking the tongue of a dead hummingbird into nectar—sure enough, it bloomed on its own. Likewise, the tongue closes automatically. It releases nectar automatically. It pushes that nectar backward automatically. The bird flicks its tongue in and out, and all else follows. In hindsight, the surprising reality of the hummingbird tongue should have been entirely unsurprising. Almost everything about these animals is counterintuitive. Hummingbirds are the bane of easy answers. They’re where intuition goes to die. Consider their origins. Today, hummingbirds are only found in the Americas, but fossils suggest that they originated in Eurasia, splitting off from their closest relatives—the scythe-winged swifts—around 42 million years ago. These ancestral hummingbirds likely flew over the land bridge that connected Russia and North America at the time. They fared well in the north, but they only thrived when they got to South America. In just 22 million years, those southern pioneers had diversified into hundreds of species, at least 338 of which are still alive today. And around 40 percent of those live in the Andes. As evolutionary biologist Jim McGuire once told me, “the Andes are kind of the worst place to be a hummingbird.” Tall mountains mean thin air, which makes it harder to hover, and to get enough oxygen to fuel a gas-guzzling metabolism. And yet, the birds flourished. Their success shows no sign of stopping, either. By comparing the rates at which new species have emerged and old species go extinct, McGuire estimated that the number of hummingbird species will probably double in the next few million years. As they evolved, they developed one of the most unusual flying styles of any bird—one that’s closer to insects. The wings of medium-sized species beat around 80 times a second, but probably not in the way you think. 
When I ask people to mimic a hummingbird's wingbeats, they typically stick their hands out to the side and flap them up and down as fast as they can. That's not how it works. Try this, instead. Press your elbows into your sides. Keep your forearms parallel to the ground and swing them in and out. Now, rotate your wrists in figure eights as you do it. Congratulations, you look ridiculous, but you're also doing a decent impression of hummingbird flight. That unusual wingbeat allows them to hover, but it also allows for more acrobatic maneuvers. Hummingbirds use that aerial agility to supplement their nectar diet with insects, which they snatch from the air. While many birds can do that, they typically have short beaks and wide gapes. Hummingbirds, by contrast, have long flower-probing bills and narrow gapes. "It's like flying around with a pair of chopsticks on your face, trying to catch a moving rice grain," says Rubega. But once again, she has shown that there's more to these birds than meets the eye. Another of her students, Gregor Yanega, found that as the birds open their mouths, they can actively bend the lower half of their beaks, giving it a pronounced kink and getting it out of the way. Then, the hummingbirds essentially ram insects with their open mouths. High-speed cameras again revealed their trick. "The moment Gregor first saw a bird fly into frame and open its beak, he stopped, and said: Hey, can you look at this?" says Rubega. She walked in and he played the footage. She asked him to play it again, and he did. Just one more time, she said. He played it again. "That is wild, and you should know that nobody has ever seen that before you," she told him.
A white hole is the theorized time reversal of a black hole. Where the event horizon of a black hole draws matter in, the event horizon of a white hole ejects matter, even though the white hole itself still attracts matter gravitationally. The main difference between the two is the action of the event horizon. The event horizon of a black hole will engulf every particle of matter that it encounters; a white hole, however, shrinks away from any and all matter, so that nothing ever crosses its event horizon. Any surrounding matter ends up scattered when the constantly receding hole finally collapses. Using quantum mechanics, Stephen Hawking demonstrated that a black hole emits Hawking radiation and can come to thermal equilibrium. That thermal equilibrium is unchanged under time reversal, so the time reverse of a black hole in thermal equilibrium is again a black hole in thermal equilibrium; meaning that, in this state, a black hole and a white hole are effectively the same thing. The concept of a white hole appears only in the vacuum solution to Einstein's field equations used to describe a Schwarzschild wormhole. A wormhole is a black hole on one end, drawing in matter, and a white hole on the other, emitting matter. Schwarzschild wormholes are unstable; they collapse as soon as they form. Moreover, such wormholes are only a solution to the Einstein field equations in a vacuum, where no matter interacts with the hole. Real black holes are formed by the collapse of stars, but because white holes shrink away from matter, they could not exist in connection with true black holes; the presence of matter would cause them to collapse. A white hole remains a purely theoretical concept. No one has ever observed one, and probably no one ever will. A few scientists think that a white hole could be part and parcel of a concept called the fecund universe. We have written many articles about the white hole for Universe Today. Here's an article about wormholes, and here's an article about our universe inside a larger universe. We've also recorded an episode of Astronomy Cast all about White Holes. Listen here, Episode 31: String Theory, Time Travel, White Holes, Warp Speed, Multiple Dimensions and Before the Big Bang.
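The Hawking-radiation argument above rests on a black hole having a temperature at which it can reach thermal equilibrium. As a rough illustration (added here, not part of the original article), the Hawking temperature of a non-rotating black hole is T = ħc³ / (8πGMk_B); the snippet below evaluates it for a black hole of one solar mass.

    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    c = 2.99792458e8         # speed of light, m/s
    G = 6.67430e-11          # gravitational constant, m^3/(kg*s^2)
    k_B = 1.380649e-23       # Boltzmann constant, J/K
    M_sun = 1.989e30         # solar mass, kg

    def hawking_temperature(mass_kg):
        """Hawking temperature of a Schwarzschild black hole of the given mass."""
        return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

    print(f"{hawking_temperature(M_sun):.2e} K")   # about 6e-8 K, far colder than deep space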
Experts Agree Ocean Acidification Caused by Carbon Dioxide Emission from Human Activity Change our behaviors or expect significant economic and ecosystem loss to our world’s oceans. A coral reef has been completely destroyed, probably by bleaching as high sea surface temperatures cause corals to expel their symbiotic zooxanthellae. Science Daily reported yesterday that experts conclude the acidity of the world’s ocean may increase 170 percent by the end of this century. The summary was led by the International Geosphere-Biosphere Program and results from the world's largest gathering of experts on ocean acidification ever convened. The Third Symposium on the Ocean in a High CO2 World was held in Monterey, California (September 2012), and attended by 540 experts from 37 countries. The summary will be launched at the UNFCCC climate negotiations in Warsaw, 18 November, for the benefit of policymakers. Experts conclude that marine ecosystems and biodiversity are likely to change as a result of ocean acidification, with far-reaching consequences for society. Economic losses from declines in shellfish aquaculture and the degradation of tropical coral reefs may be substantial owing to the sensitivity of molluscs and corals to ocean acidification. One of the lead authors of the summary, and chair of the symposium, Ulf Riebesell of GEOMAR Helmholtz Centre for Ocean Research Kiel said: “What we can now say with high levels of confidence about ocean acidification sends a clear message. Globally we have to be prepared for significant economic and ecosystem service losses. But we also know that reducing the rate of carbon dioxide emissions will slow acidification. That has to be the major message for the COP19 meeting.” The summary for policymakers makes 21 statements about ocean acidification with a range of confidence levels from “very high” to “low.” Very high confidence - Ocean acidification is caused by carbon dioxide emissions from human activity to the atmosphere that end up in the ocean. - The capacity of the ocean to act as a carbon sink decreases as it acidifies - Reducing carbon dioxide emissions will slow the progress of ocean acidification. - Anthropogenic ocean acidification is currently in progress and is measurable - The legacy of historical fossil fuel emissions on ocean acidification will be felt for centuries. - If carbon dioxide emissions continue on the current trajectory, coral reef erosion is likely to outpace reef building some time this century. - Cold-water coral communities are at risk and may be unsustainable. - Molluscs (such as mussels, oysters and pteropods) are one of the groups most sensitive to ocean acidification. - The varied responses of species to ocean acidification and other stressors are likely to lead to changes in marine ecosystems, but the extent of the impact is difficult to predict. - Multiple stressors compound the effects of ocean acidification. - Negative socio-economic impacts on coral reefs are expected, but the scale of the costs is uncertain. - Declines in shellfisheries will lead to economic losses, but the extent of the losses is uncertain. - Ocean acidification may have some direct effects on fish behaviour and physiology. - The shells of marine snails known as pteropods, an important link in the marine food web, are already dissolving. Additional information, including related topics, can be found at www.sciencedaily.com
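Because pH is a logarithmic scale, the 170 percent figure refers to the increase in hydrogen-ion concentration, not a 170 percent change in pH itself. The snippet below is an illustration added here, not part of the summary; the pH drops used are approximate, commonly cited values. A drop of roughly 0.4 pH units corresponds to an acidity increase of about 170 percent, while the roughly 0.1-unit drop already observed since preindustrial times corresponds to an increase of about 30 percent.

    def acidity_increase_percent(ph_drop):
        """Percent increase in hydrogen-ion concentration for a given drop in pH."""
        return (10 ** ph_drop - 1) * 100

    print(f"{acidity_increase_percent(0.1):.0f}%")   # ~26%: roughly the change observed so far
    print(f"{acidity_increase_percent(0.43):.0f}%")  # ~169%: roughly the end-of-century projection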
The inside of a scanning electron microscope is an extremely hostile environment. With beams of strong radiation firing in an airless vacuum, it has long been a given that anything destined to enter the microscope won't ever be coming out alive. These instruments are meant to view dead material at remarkably high magnifications, and that's it. However, that wasn't the case for ticks at Kanazawa Medical University, in Uchinada, Japan, which unknowingly crawled, carefree, through desiccator tubes and into an environment where anything else would have met its demise. Water bears (tardigrades), the previous title-holders for toughest bug, were able to survive the high radiation as well, and even the vacuum of space, but only after being dehydrated into a nigh-mummified state of hibernation. Ticks, on the other hand, need no such treatment. The ticks weren't invincible, though. The beams of strong radiation do damage them, as does a prolonged vacuum, just not enough to kill them outright; they survived after 30 minutes of exposure. As stated in the research journal of the findings: "Different from tardigrades, H. flava in our experiments were hydrated and mobile. Then, the ticks keep their water inside of their body and condition is different from tardigrades in space. The ticks in the present study have a pair of spiracular plates, and they breathe through the stigma in them. Therefore, vacuum conditions may cause severe respiratory system damage and death. Actually, some anti-tick agents are emulsifying agents containing fatty acids such as sorbitan esters of fatty acids, which can seal the stigma. This implies that ticks can be choked to death" So not entirely indestructible, after all. The finding that ticks can be choked to death is especially welcome given that ticks themselves can be deadly. The tick's natural resilience to being battered with radiation and exposed to vacuum also has implications for how life could survive on other planets under similar conditions, and it broadens our understanding of what organisms can and cannot survive. Just keep in mind on that next camping trip that keeping your socks tucked in will be a better defence against them than a high-powered radiation beam.
Colour blindness is a condition where the eyes have trouble distinguishing certain colours. Most people have either red or green colour blindness. Blue colour blindness and monochromatism, a condition in which a person sees only black, white, and grey, are very rare. Most people have mild forms of colour blindness that don't interfere much with their daily lives. 8% of Caucasian men and less than 1% of Caucasian women have either red or green colour blindness. This condition is rare among people of Asian, First Nations, or African descent. Colour blindness is divided between inherited and acquired kinds. The area at the back of the eye, called the retina, is sensitive to light and colour. It contains specialized cells, called cones, which respond to colour. There are three types of cone cells. One responds best to red light, one to green light, and one to blue light. When a specific type of cone cell doesn't work properly, a person will have trouble seeing the colour that particular cone cell responds to. For example, a person with red colour blindness has a defect in red cone cells. Most colour blindness is inherited, although some cases are caused by an injury or disease of the retina or optic nerve, the nerve that takes information from the eye to the brain. People inherit colour blindness as a result of a defect on the gene(s) for colour located on the X chromosome. Men inherit colour blindness 10 times as often as women do. Colour blindness "shows up" in men because they have only one X chromosome. Since women inherit two X chromosomes, a healthy gene on one X chromosome can override the unhealthy gene on the other. A woman can still have the unhealthy gene; it just doesn't always show up. She can, however, pass the gene to her children. A person who doesn't have a genetic condition like colour blindness but who can pass it to her children is called a "carrier." Symptoms and Complications Colour blindness ranges from very mild to very severe forms, with most people having mild symptoms. People with colour blindness can't see the difference between certain colours. For example, a person with severe green colour blindness (deuteranopia) has trouble seeing the difference between oranges, greens, browns, and pale reds. In someone with severe red colour blindness (protanopia), all red colours look very dull. A few people have trouble distinguishing blue. This condition, called tritanopia, is either inherited or is caused by a reaction to drugs or poisons that damage the retina or optic nerve. It can also be due to a loss of function in these two areas over time. Most people don't know they have colour blindness until someone else notices they have trouble telling shades apart. For example, someone may notice the colour-blind person has trouble matching colours. Making the Diagnosis Tests for colour blindness are generally given to children and to people applying for jobs where colour discrimination is important, such as in the case of pilots, train engineers, or electricians. Colour blindness is tested in daylight, using special coloured cards. A more complicated test uses an instrument called an anomaloscope. It shines a changing mixture of red and green light. The person is asked to change the mixture until it looks the same as a yellow light. The examiner can tell how severely colour blind a person is by looking at the redness or greenness of the adjusted mixture. Treatment and Prevention Inherited colour blindness is not treatable. 
In cases of acquired colour blindness, a doctor will treat the underlying disease or injury. People with mild colour blindness lead fairly normal lives. Severe colour blindness can interfere with tasks such as seeing traffic signals properly. People with severe colour blindness shouldn't do tasks that require colour discrimination. Source: www.medbroadcast.com/condition/getcondition/Colour-Blindness
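The prevalence figures quoted earlier (about 8% of Caucasian men and under 1% of women) follow directly from the X-linked inheritance described above. As a rough check, added here for illustration only, assume the defective colour gene occurs on about 8% of X chromosomes: a man needs only one affected X chromosome to be colour blind, while a woman needs two.

    q = 0.08  # assumed frequency of the colour-blindness allele on an X chromosome

    male_prevalence = q          # one X chromosome, so one affected copy is enough
    female_prevalence = q ** 2   # two X chromosomes, so both copies must be affected

    print(f"men:   {male_prevalence:.1%}")    # 8.0%
    print(f"women: {female_prevalence:.2%}")  # 0.64%, i.e. under 1%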
Table Of Contents: Nervous System 1. Functions of Nervous System The nervous system provides electrical circuit pathways that can connect neurons throughout the body. When the body’s electrical stimulus connects neurons, this neural connection allows us the capacity to sense changes within and outside of the body, to interpret and understand those changes, to make very complex decisions and judgments, and to control our body functions and reactions. 2. Organs of the Nervous System The main component of the nervous system is the nerve cell, or neuron. Neurons are special cells with attributes that support the functions of the nervous system. The central nervous system’s organs, the brain and the spinal cord, are complex, organized collections of neurons. These organs connect to all parts of the body through neurons of the peripheral nervous system. 3. Structure of Neurons Neurons have a large cell body that includes two key parts. Dendrites are little extensions at the end of the neuron which receive electrical impulses. The impulse travels to the axon in the neuron, which then conducts the impulse toward a target. 4. Types of Neurons Within the human body, there are three different types of neurons. Sensory neurons carry impulses toward the brain and spinal cord. Motor neurons carry impulses away from the brain and spinal cord, and interneurons carry impulses within the brain and spinal cord. 5. The Nerve Impulse The electrical impulses that neurons receive and send travel extremely fast, in milliseconds. Impulses travel down the insulated axon. At the end of the axon, a chemical called a neurotransmitter is released into a space called the synapse. These neurotransmitters cause a change at the target site, such as a muscle contraction. 6. Central Nervous System The central nervous system is composed of the brain and spinal cord. The spinal cord serves as a neuron highway of neurological information. It enables communication of information up to the brain and from the brain down to body organ systems. 7. Structure of the Brain The brain is the center of higher function in the human body. The large cerebrum is the part of the brain that connects to the spinal cord via the brain stem. The neuron activity of the cerebrum determines our personality, decision-making, behavior, and emotions. It also controls how we initiate body movement and speech as well as how we interpret everything in the world around us that we see, hear, taste, smell and feel. The cerebellum, a mini-brain attached to our brainstem, controls body movement and coordination. 8. Peripheral Nervous System The brain and spinal cord communicate with all other parts of the body through the nerves that make up the peripheral nervous system. A reflex is a motor response to a sensory stimulus and is either involuntary (automatic) or a learned response that serves as a controlled reaction to a challenging situation or stimulus. Reflex systems are found connected with muscles, tendons, ligaments and skin (somatic) and are also found within internal organs, such as in the control of urination and blood pressure. 9. Senses: Vision, Hearing and Balance, Smell, Taste Humans possess specialized neuron receptors in various regions of the head which respond to different stimuli and provide valuable, diverse information about the surrounding environment. These specialized neurons allow complex interpretations of data from sounds, sights, tastes, smells and the overall position of the body. 
The eyes, for example, interpret visual information, and the ears include neurons that capture sounds. 10. How We See The neurons in our retinas translate light signals into impulses the body can comprehend. Light travels to the back of the eye to the retina where light stimuli are converted to neuron impulses. The neurons transmit these impulses to the back of the brain where the brain enables us to interpret what we see. Think how quickly this all happens! 11. How We Hear Sound waves enter the ear and cause movements of the eardrum and the smallest bones in the body, the ossicles, within the middle ear compartment. Those movements ultimately cause nerve impulses to be generated that travel up to the brain so we can make sense of what we hear. 12. How We Taste and Smell There are specialized neurons in our nose and tongue that respond to molecules in the air we breathe and the food and fluids we eat and drink. Impulses travel from these regions back to our brain so we can understand differences in taste and smell. 13. Drug and Alcohol Abuse Neurons communicate with each other through the release of chemical neurotransmitters. Drugs and alcohol can alter the release and uptake of neurotransmitters, thus changing our reactions and perceptions. For example, alcohol inhibits neuron function of the brain and poisons other organ systems, such as the liver. Cocaine changes the release of “feel-good” neurotransmitters. This results in dependency and elevated feelings of pleasure that can lead to addiction.
All around the world, people have different ways of honoring those who have died. In some cultures these observances are solemn, while in others they are more festive. Here are some sample activities to build cross-cultural awareness and respect, from the book Hands Around the World by Susan Milord:
- Compare ways people honor the dead, such as Obon or Día de los Muertos.
- Stamp adinkra designs. Adinkra cloth is worn at funerals in Ghana.
- Make sugar skulls.
- Make gravestone rubbings to explore the beautiful art or imagine the lives of those buried long ago.
The main effect of these activities is to make death less fearsome. You will know best how much your children can handle.
You may have noticed a theme when it comes to the English language: most rules are not completely standardized. This (somewhat frustrating) fact is especially true when it comes to spelling out numbers. Should you write them out in words or leave them as numerals? To write numbers properly, you will have to identify potential differences between major style guides (such as MLA, APA, and Chicago, to name a few) because these guides often outline different rules for using numbers in writing. To make it easier, let's use an example. Say you're working on a paper evaluating the importance of the local public library in your community. The document will make use of small numbers, large numbers, decades, and statistics. Thankfully, when using numbers in writing, you can count on a few conventions that apply to most situations; just be sure to consult your specific style guide if one has been assigned. Small and Large Numbers A simple rule for using numbers in writing is that small numbers ranging from one to ten (or one to nine, depending on the style guide) should generally be spelled out. Larger numbers (i.e., above ten) are written as numerals. For example, instead of writing, "It cost ten-thousand four-hundred and sixteen dollars to renovate the local library," you would write, "It cost $10,416 to renovate the local library." The reason for this is relatively intuitive. Writing out large numbers would not only waste space but could also be a major distraction to your readers. Beginning a Sentence Here is a rule that you can truly rely on: always spell out numbers when they begin a sentence, no matter how large or small they may be. Incorrect: 15 new fiction novels were on display. Correct: Fifteen new fiction novels were on display. If the number is large and you want to avoid writing it all out, rearrange the sentence so that the number no longer comes first. Revised: There were 15 new fiction novels on display. Whole Numbers vs. Decimals Another important factor to consider is whether you are working with a whole number or a decimal. Decimals are always written as numerals for clarity and accuracy. To revisit our library example, perhaps circulation statistics improved in 2015. If a number falls in the range of one to ten and is not a whole number, it should be written as a numeral. Incorrect: The circulation of library materials increased by four point five percent in 2015. Correct: The circulation of library materials increased by 4.5% in 2015. When two numbers come next to each other in a sentence, be sure to spell out one of these numbers. The main purpose of this rule is to avoid confusing the reader. Incorrect: There were 12 4-year-old children waiting for the librarian to begin story time. Correct: There were twelve 4-year-old children waiting for the librarian to begin story time. Correct: There were 12 four-year-old children waiting for the librarian to begin story time. Decades and Centuries Decades or centuries are usually spelled out, especially if the writing is formal. Incorrect: The library was built in the '50s. Correct: The library was built in the fifties. If you are referring to a specific year (e.g., 1955), use the numeral. Always strive for consistency, even if it overrides a previous rule. For example, if your document uses numbers frequently, it is more appropriate for all numbers to remain as numerals to ensure that usage is uniform throughout. 
Similarly, if a single sentence combines small and large numbers, make sure that all the numbers are either spelled out or written as numerals. Incorrect: The library acquired five new mystery novels, 12 new desktop computers, and 17 new periodicals. Correct: The library acquired 5 new mystery novels, 12 new desktop computers, and 17 new periodicals. Let's complicate things a bit, shall we? If your work must follow the rules of a specific style guide, understand that they all have rules for spelling out numbers that may differ slightly from the rules listed above. For example, MLA style indicates that writers may spell out numbers if they are not used too frequently in the document and can be represented with one or two words (e.g., twenty-four, one hundred, three thousand). APA style advises that common fractions (e.g., two-thirds) be expressed as words. A number of specific rules for spelling out numbers are outlined in section 9.1 of the Chicago Manual of Style. Your ultimate authority will always be a style guide, but in the absence of one, following the rules outlined above will help you be consistent in your use of numbers in writing.
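Because the general conventions above are mechanical (spell out small whole numbers, keep larger numbers and decimals as numerals, and never open a sentence with a numeral), they can be sketched in code. The helper below is a simplified illustration of those general rules only; it is not an implementation of MLA, APA, or Chicago style.

    SMALL = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
             6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten"}

    def format_number(value, sentence_start=False):
        """Render a number following the general conventions described above."""
        if not float(value).is_integer():
            return str(value)                    # decimals always stay numerals
        n = int(value)
        if sentence_start and n not in SMALL:
            # A large number should not open a sentence; flag it so the writer
            # can rearrange the sentence instead.
            raise ValueError("rearrange the sentence so the number is not first")
        if n in SMALL:
            return SMALL[n]                      # small whole numbers are spelled out
        return f"{n:,}"                          # larger whole numbers stay numerals

    print(format_number(4.5))      # "4.5"
    print(format_number(7))        # "seven"
    print(format_number(10416))    # "10,416"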
In comparison to alkenes and alkynes, alkanes are relatively unreactive because their carbon skeletons lack the weaker pi bonds found in those compounds. However, there are a few classes of reactions that are commonly performed with alkanes. The most important reaction that alkanes undergo is combustion. Smaller, linear alkanes generally oxidize more readily than larger, more branched molecules. Alkanes can be burned in the presence of oxygen to produce carbon dioxide, water, and energy; in situations with limited oxygen, the products are carbon monoxide, water, and energy. For this reason, alkanes are frequently used as fuel sources. The combustion of methane is shown: CH4 + 2 O2 → CO2 + 2 H2O + energy. With the addition of a halogen gas and energy, alkanes can be halogenated, with the reactivity of the halogens proceeding in the following order: Cl2 > Br2 > I2. In this reaction, UV light or heat initiates a chain reaction, cleaving the covalent bond between the two atoms of a diatomic halogen. The halogen radicals can then abstract hydrogen atoms from the alkanes, and the resulting radicals can combine or react further to form more radicals. Alkanes can be halogenated at a number of sites, and this reaction typically yields a mixture of halogenated products. In the monobromination of propane, for example, propane is brominated using diatomic bromine. The product distribution in this reaction has to do with the stability of the intermediate radicals, a topic beyond the scope of this section. The complex alkanes with high molecular weights that are found in crude oil are frequently broken into smaller, more useful alkanes by thermal cracking; alkenes and hydrogen gas are also produced by this method. Thermal cracking is typically performed at high temperatures, and often in the presence of a catalyst. A mixture of products results, and these alkanes and alkenes can be separated by fractional distillation. In summary, the main reactions of alkanes are oxidation to carbon dioxide, water, and energy by burning in the presence of oxygen; radical halogenation initiated with heat or UV light; and thermal cracking in the presence of a metal catalyst. Conversely, alkanes can be produced by hydrogenation of double bonds in the presence of molecular hydrogen.
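The product distribution mentioned above can be made concrete with a simple weighting. Propane has six primary hydrogens and two secondary hydrogens; a purely statistical substitution would therefore favour 1-bromopropane, but the greater stability of the secondary radical makes the secondary positions far more reactive toward bromine. The sketch below is an added illustration; the relative reactivity used (secondary hydrogens taken as roughly 80 times as reactive as primary ones in bromination) is a commonly quoted textbook estimate, not a value from this passage.

    # Monobromination of propane: weight each hydrogen by an assumed relative reactivity.
    primary_H, secondary_H = 6, 2
    rel_reactivity = {"primary": 1.0, "secondary": 80.0}   # illustrative bromination values

    w_primary = primary_H * rel_reactivity["primary"]
    w_secondary = secondary_H * rel_reactivity["secondary"]
    total = w_primary + w_secondary

    print(f"1-bromopropane: {w_primary / total:.1%}")    # ~3.6%
    print(f"2-bromopropane: {w_secondary / total:.1%}")  # ~96.4%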
Lesson: Thar She Blows!
Contributed by: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
Educational Standards:
Learning Objectives
After this lesson, students should be able to:
Introduction/Motivation
Have you ever felt a strong wind? What could you do with a steady, strong wind? (Possible answers: Fly a kite, sail a boat, dry laundry on a line, turn a windmill, open the window to cool or air out your house, etc.) When we get wind to do something for us, we are making it work. Who remembers the definition of energy? (Answer: Energy is when something does work.) Using this definition, is wind an energy source? Yes it is! People have been harnessing the wind's energy for thousands of years. Have you ever seen a wooden windmill with large blades? Or, have you ever seen those tall, modern, white windmills located in large groups on open hillsides? We call those tall, white windmills wind turbines, and when there are several of them together, it is called a wind farm. These wind farms are how engineers convert wind energy into usable energy for people, in the form of electricity. A wind farm is a power plant that uses wind turbines to create electricity. Did you know that some people get the electricity for their home from the wind? Today, engineers continue to make improvements in wind power technology. Wind energy has several advantages. Can you think of any? Well, it is a renewable energy source and it does not pollute the environment. Of course there are some disadvantages, too. What happens when there is no wind? (Answer: No work can be done. So, it is important to be able to store the wind energy for continual use.) Also, it is expensive to change wind into usable energy. Engineers are working to make wind power cheaper, more reliable and safer for birds that might fly into the turbines. As power companies install wind farms in more and more locations, engineers design turbines and generators that work under all weather conditions. Engineers must design turbines to work in severe weather conditions as well as on typical windy days. Sometimes the force of the wind can be steady, and sometimes it can exert a powerful repetitive force on the wind turbine, similar to a flag flapping in the wind. If the wind turbine is improperly designed, it might fall apart in a severe windstorm. As another example, engineers designed a wind farm in Maine that works in the bitter cold of winter. The turbines include rotor blades with a slippery, black surface to minimize the buildup of ice and absorb the sun's energy to melt the ice. In addition, several heaters and synthetic lubricants enable the rotors to operate in temperatures as low as -40°C. Since the wind does not blow all of the time, electrical engineers devise ways to ensure that extra energy generated during windy periods can be stored for use during calmer times. Engineers also design wind farms to protect wildlife. Laws, such as the Migratory Bird Treaty Act and the Endangered Species Act, prohibit the killing of a single bird if it is a protected species. An early 1990s wind farm in California's Altamont Pass caused the loss of so many golden eagles that concerns were raised about building more wind farms. Engineers are involved with research projects to address the bird-wind power problem in a variety of settings, so they can reduce bird deaths from wind plants. You might not expect engineers to be concerned that wind turbines kill thousands of insects.
But, dead bugs on the blades can significantly reduce how well the turbines work. Occasionally, utilities must stop the turbines and pressure-wash hundreds of blades. To reduce the problems caused by insects, engineers incorporate nonstick surfaces and different blade angles into their designs. Today, we are going to be engineers and learn more about how wind energy can be captured and used to work for us. Are you ready?
Lesson Background & Concepts for Teachers
See the Wind Power Reading for a more in-depth look at how wind energy is harnessed. (This article is not at an appropriate reading level for fourth-grade students.) From where does wind come? Wind is the movement of air relative to the surface of the Earth. Uneven heating of the atmosphere by the sun produces horizontal and vertical differences in atmospheric pressure, which in turn cause air to flow as winds. In this way, wind can be thought of as a form of solar energy. While only 2% of the solar energy reaching the Earth is converted into wind power, the total amount of energy is very large. The direction and strength of the wind are modified by the Earth's terrain, bodies of water and vegetative cover. So, some locations consistently have strong winds from a particular direction, while other locations have erratic or little wind. How can wind be used to do work? For thousands of years people have converted wind flow into energy to do work. Windmills have been used to convert the kinetic energy in the wind into mechanical energy for tasks such as pumping water or grinding grain. Modern wind turbines have generators that convert mechanical energy into electricity. All electric-generating wind turbines, no matter what size, are composed of a few basic components: a tower, a rotor (the part that actually rotates in the wind), a speed control system and an electrical generator. To capture the most energy, wind turbines are mounted on a tower. At 30 meters or more above ground, they can take advantage of faster and less turbulent wind. Usually, two or three blades are mounted on a shaft, like a propeller, to form a rotor. A blade in a wind turbine acts much like an airplane wing; when the wind blows, a pocket of low-pressure air forms on the downwind side of the blade. The low-pressure air pocket pulls the blade toward it, causing the rotor to turn. This is called lift. The force of the lift is actually much stronger than the wind's force against the front side of the blade, which is called drag. The combination of lift and drag causes the rotor to spin like a propeller, and the turning shaft spins a generator to make electricity. An electric generator is a rotating machine that supplies an electrical output with voltage and current. In most generators, a voltage is induced in coils of wire by a change in magnetic field as the machine rotates. The needed magnetic field is produced either by direct current in field coils or by permanent magnets. In the simplest case, the induced voltage is alternating in sign (plus/minus), since the direction of the magnetic field reverses as the machine rotates. The amount of energy contained in the wind can be calculated using the following equation: E = ½ × m × v², where m is the mass of the moving air and v is the speed of the wind. If the wind speed doubles, the energy of the wind quadruples. Or, if the wind speed increases by three times, then the kinetic energy in the wind increases by nine times. Kinetic energy is measured in Joules (J).
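The quadratic dependence on wind speed is easy to verify numerically. The snippet below is an added illustration of the kinetic energy equation above; the air mass used is an arbitrary example value.

    def wind_kinetic_energy(mass_kg, speed_m_s):
        """Kinetic energy of moving air: E = 1/2 * m * v^2, in joules."""
        return 0.5 * mass_kg * speed_m_s ** 2

    base = wind_kinetic_energy(1000, 5)            # 1,000 kg of air moving at 5 m/s
    print(wind_kinetic_energy(1000, 10) / base)    # 4.0 -> doubling the speed quadruples the energy
    print(wind_kinetic_energy(1000, 15) / base)    # 9.0 -> tripling the speed gives nine times the energy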
We usually want to know the amount of power that a wind turbine can produce rather than the kinetic energy of the wind. The power produced by a wind turbine is the amount of energy gained from the wind in a certain time: P = E/t. Power is measured in watts, W. One watt is equal to one Joule per second (J/s). The power derived from the wind over the area swept by the rotor blades can be expressed as: P = ½ × e × ρ × A × v³, where ρ is the density of air, A is the exposed rotor area, and v is the wind speed. Wind turbines do not perfectly convert wind energy into electric power, so the formula includes an efficiency factor, e. Since modern turbines have only about 25% efficiency, the electric power generated is no more than one-quarter of the energy in the wind.
Vocabulary/Definitions
Associated Activities
Lesson Closure
Today we learned about wind energy. What type of machine changes wind energy into usable energy? (Answer: A wind turbine or windmill.) What do we call it when we have many of these machines together in one place to generate electrical power? (Answer: A wind farm.) How can the energy of the wind be used to do work? (Answer: By building structures [such as windmills, wind turbines, boat sails, etc.] that are moved by the wind and cause something else to move.) What is a disadvantage to using wind energy? (Answer: It requires constant wind to work, and it can kill birds and insects.) Engineers are working on ways to eliminate the disadvantages of using wind energy by making the wind turbines cheaper to run, safer for birds and insects, and able to store energy for less windy times. Would you want to use electricity generated by the wind?
Attachments
Assessment
Brainstorming: As a class, have the students engage in open discussion about what energy is and how we use it to do work, such as transportation, and heating and cooling of our homes and schools. Remind them that in brainstorming, no idea or suggestion is "silly." All ideas should be respectfully heard. Take an uncritical position, encourage wild ideas and discourage criticism of ideas. Have students raise their hands to respond. Write their ideas on the board. Journaling: Have each student write a journal entry on how they might use energy from the wind instead of from fossil fuels. Possible scenarios include transportation by sailboats or providing electricity for their homes. Have them describe two positive and two negative aspects for each replacement case. For example, a sailboat is quieter than a gas engine-powered boat and does not produce as much pollution, but you cannot move as quickly and you could get stuck in the middle of a lake if the wind stops blowing. Definitions: Have students write their own definitions for the following terms: wind energy, wind turbine and wind farm.
Lesson Summary Assessment
Engineering Design Challenge: A local community, Windy Town, has windy days every day. The Windy Town mayor wants to help the townspeople save money and reduce the pollution in the town. So, your company, Windy Town Transportation Engineering Company, was asked to develop an alternative transportation method to cars that uses wind. Start with another method of transportation, such as a skateboard, bicycle or roller skates, and design (draw) a Windy Town version of this method to get around. How would you modify the object to use the wind to make it go? Present your ideas to the Windy Town mayor and city council members (rest of the class).
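Returning to the power equation from the lesson background above, the snippet below estimates the electric power for one illustrative turbine. The rotor size, wind speed, air density, and 25% efficiency are assumed example values, not specifications from the lesson.

    import math

    def turbine_power_watts(efficiency, air_density, rotor_radius_m, wind_speed_m_s):
        """Electric power from a turbine: P = 1/2 * e * rho * A * v^3."""
        swept_area = math.pi * rotor_radius_m ** 2
        return 0.5 * efficiency * air_density * swept_area * wind_speed_m_s ** 3

    p = turbine_power_watts(efficiency=0.25, air_density=1.225,
                            rotor_radius_m=40.0, wind_speed_m_s=10.0)
    print(f"{p / 1e6:.2f} MW")   # roughly 0.8 MW for these example values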
Inside-Outside Circle: Have the class form into two concentric circles (an inner-outer circle), so that each student has a partner facing him/her from the other circle. The outside circle faces in and the inside circle faces out. Three people may work together if necessary. Ask the students a question (see below). Have partners consult each other to discuss the answer. Call on either the inner or outer circle group to answer the question all together. Repeat until all the questions have been answered correctly. Ask the students:
Lesson Extension Activities
The U.S. Department of Energy provides wind maps for almost every state. Print the map for your state from their website: http://www.windpoweringamerica.gov/wind_maps.asp. Ask students to point out the best areas for a wind farm. Engineers use this information to determine the best places to locate wind farms. Have students explain why some areas are better than others in your state (for example, mountains may block wind, location of bodies of water, mountain gaps funnel wind, type of vegetation, open areas with no wind breaks, etc.). Wind energy has many different applications. See the National Renewable Energy Laboratory (NREL) website for some great resources for learning about wind: http://www.nrel.gov/rredc/.
References
Bugs Can Gum Up Wind-Power Turbines. Published July 5, 2001. USA TODAY.com. http://www.usatoday.com/news/science/enviro/2001-07-05-wind-power-bugs.htm Accessed October 19, 2005.
Hewitt, Paul G. Conceptual Physics. Boston, MA: Little, Brown and Company, 1977.
Kagan, S. Cooperative Learning. San Juan Capistrano, CA: Kagan Cooperative Learning, 1994. (Source of Inside-Outside Circle assessment tool.)
Learning About Renewable Energy & Energy Efficiency. National Renewable Energy Lab (NREL), a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy. www.nrel.gov/learning/ Accessed October 19, 2005.
Wind Energy Fact Sheets. 2004. American Wind Energy Association (AWEA). Accessed October 19, 2005.
Wind Powering America: State Wind Resource Map. Updated August 16, 2005. Wind & Hydropower Technologies Program, Energy Efficiency and Renewable Energy, U.S. Department of Energy. http://www.windpoweringamerica.gov/wind_maps.asp Accessed October 19, 2005.
Contributors: Xochitl Zamora-Thompson, Sabre Duren, Natalie Mach, Malinda Schaefer Zarske, Denise W. Carlson
Copyright © 2005 by Regents of the University of Colorado.
Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
Acknowledgements: The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education and National Science Foundation GK-12 grant no. 0338326. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government.
When lost in the desert or a thick forest (terrains devoid of landmarks), people tend to walk in circles. Blindfolded people show the same tendency; lacking external reference points, they curve around in loops as tight as 66 feet (20 meters) in diameter, all the while believing they are walking in straight lines. Why can't we walk straight? Only recently have scientists begun to make gains in answering this age-old question. By conducting a series of experiments with blindfolded test subjects, a group of researchers at the Max Planck Institute for Biological Cybernetics in Germany have systematically ruled out several plausible explanations for loopy walking. For example, body asymmetry has been posed as one theory, but the team found no correlation between factors such as uneven leg lengths and right- or left-side dominance and walkers' veering directions. The researchers also ruled out random physical errors, such as incorrect gauging of how you need to move your legs to walk straight, arguing that these would cause walkers to meander back and forth in a zigzag fashion rather than to trace out circles. The researchers believe that loopy paths follow from a walker's changing sense of "straight ahead." With every step, a small deviation is likely added to a person's cognitive sense of what's straight, and these deviations accumulate to send that individual veering around in ever tighter circles as time goes on. This increasing curvature doesn't happen when external reference points are visible, because these allow the walker to frequently recalibrate his or her sense of direction. When walking down the street, for example, the looming presence of a nearby building (as seen in your peripheral vision) prevents you from curving into it. As yet, no one is sure where in our inner workings the accumulating deviations arise. However, as detailed in the July 2011 issue of the journal Experimental Brain Research, the Max Planck team thinks the brain's vestibular (balance-maintaining) and proprioceptive (body awareness) systems combine to enable regular spatial updating, and it may be the vestibular system in the inner ear that malfunctions in the absence of visual clues. "We will continue to work on these issues in the near future," Marc Ernst, group leader, told Life's Little Mysteries. That inner-ear system is already known to exhibit biases: Some people have vestibular disorders so severe that they find walking in straight lines impossible even under normal circumstances. For most of us, the subtle leftward or rightward bias of our sense of direction would only rear its head if we were trying to find our way through a dense forest, or, perhaps, blindfolded by pirates and made to walk the plank.
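The accumulating-deviation explanation is easy to explore with a toy simulation. The sketch below is my own illustration, not the Max Planck group's actual model: at every step a small random error (plus any persistent personal bias) is added to the walker's sense of "straight ahead," and with no landmarks to recalibrate against, the path meanders and curls rather than staying straight.

    import math, random

    def blindfolded_walk(steps=2000, step_len=0.75, drift_sd=0.03, bias=0.0, seed=1):
        """Simulate a walker whose sense of 'straight ahead' drifts a little each step."""
        random.seed(seed)
        x = y = 0.0
        heading = 0.0   # radians; the walker believes this direction is straight ahead
        for _ in range(steps):
            heading += bias + random.gauss(0.0, drift_sd)   # uncorrected error accumulates
            x += step_len * math.cos(heading)
            y += step_len * math.sin(heading)
        return x, y

    x, y = blindfolded_walk()
    total_walked = 2000 * 0.75   # matches the default arguments above
    print(f"walked {total_walked:.0f} m but ended only {math.hypot(x, y):.0f} m from the start")
    # A nonzero bias (e.g. bias=0.05) bends the drifting path into repeating loops.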
What is bullying? Bullying is repeated and intentional behavior by an individual or group of individuals that causes harm to another. Bullying can take many forms: - Physical (shoving, hitting, tripping) - Emotional (name calling, jeering, humiliating) - Social (isolation, exclusion from activities) - Cyberbullying (use of electronic technology such as cell phones or internet to threaten, intimidate or spread gossip about the victim) Why don't the schools just deal with the bullying? Bullying does not only happen at school. It occurs among siblings at home, in parks, daycare settings and recreational facilities. Bullying also occurs through Internet and cell phone use. One group cannot prevent or reduce bullying behavior alone. The responsibility for bullying prevention rests with all adults who are concerned for children. - The most successful bullying interventions include strong parent involvement. - Bullying behavior is reduced when schools have strong prevention programs. - Bullying behavior is reduced when children targeted by bullying behavior learn different ways to respond and to report the bullying each time it happens. - Bullying behavior is reduced when other kids, those who are bystanders, intervene and stand up and become upstanders to the bullying behavior. - Children who bully need to be held accountable and often need help to change this pattern of behavior. I have talked with the school about my child being bullied and it keeps happening. What is being done? Most schools have written policies and procedures to follow regarding bullying. These are often outlined in the student handbook and on the school's website. Schools have to investigate each bullying incident and make a determination. This can take time. Privacy laws prevent schools from providing details of the outcome of investigations. In addition, a student who has been bullying may not stop just because of a consequence. This is why it is important to report the bullying each time it occurs. This helps the school establish the problem behavior as more chronic an increase the level of consequence and accountability for the child who is bullying. When reporting bullying, ask the teacher or administrator when the investigation will be concluded and how you will be informed. For schools: Consider having follow-up conversations with the parents of the child who was bullied to determine if the bullying has ended or if it is continuing and to maintain a clear line of communication. What should my school be doing to prevent bullying? Public school districts are required by law to have anti-bullying policies in place. Each school district approaches this problem differently. Programs and policies are often determined by school district administrative leadership and resources available. Each school district's anti-bullying policy should be available to both parents and students. Check your student's school district handbook for more information or ask your school administrator. Are there national/state laws about bullying? Yes. There are 49 states that have laws regarding bullying in place. Each state differs. To find out the law in your state, go to bullypolice.org. Who should I talk to if my child is the one doing the bullying? You may want to start with your child. If you are aware of the bullying behavior, either at home or at school, consider exploring what is reinforcing the behavior. Is it social status? Is it about maintaining power? 
Does your child appear remorseful or show empathy for the victims of their behavior? Children who persist in bullying behavior are at greater risk for developing legal, academic, social and relationship problems later in life. If your child is engaging in bullying behavior and seems unable or disinterested in stopping this behavior even though they experience consequences, they may need professional help. Start with your school professionals or your pediatrician or family doctor to find out how to get help for your child. How can I teach my kids to stand up for themselves and their friends against bullies? Standing up for yourself is different than fighting. Having frequent conversations with your child about how to deal with conflicts or problems with other children is part of teaching children about relationships. It is never a good idea to bully another child, even if they have bullied your child. Teaching your child to walk away, ignore, not over-react, tell an adult, and be assertive with bullies are all good strategies to use at different times. These strategies may take practice for children to feel comfortable in trying them out. Some of the activities available at this website can help. Does my pediatrician/health care provider screen for bullying? The American Academy of Pediatricians developed recommendations for screening school-age children in 2009 by asking children and their parents about their school and friendship experiences. There are not specific screening tools available. Talk with your pediatrician or family practice provider if you have concerns about your child. The school investigation seems to be at a standstill. Do I intervene? Investigations of bullying can take time. Teachers and student witnesses need to be interviewed. Victims, bystanders and children engaged in bullying behavior often have different stories. Parents of children involved are often contacted to get additional information. If you are a concerned parent, ask when you can expect to be informed about how the bullying incident will be resolved. Be persistent. Be respectful. How do I communicate with my school about bullying? If you are concerned about your child being part of a bullying experience at school contact the school teacher or principal and ask to meet with them. Inform them that you are concerned about a bullying incident that has occurred. Make sure you have as much detail as your child has reported to you: - When the bullying occurred - Location of the bullying - Names and description of the children involved - What happened Please remember this information is your child's perspective of what happened. There may be more information that your child did not see or does not recall. It is the school's responsibility to investigate further. How young can bullying behavior begin? Aggressive behavior in children can start in toddler years. When children are aggressive, and their behavior helps them to get what they want, the aggression is reinforced. This can begin a pattern of bullying. Teaching children patience, kindness, empathy, rules for turn-taking, problem-solving and dealing with emotions can help reduce risk for bullying behaviors. These activities may help. When is it appropriate to talk to the parents of the bully? If bullying has occurred in the school environment, let the school handle the situation. However, bullying can also occur in team sports, church groups, or at the park. It is important to stay calm and not make threatening comments or accusations to others…ever. 
Talk first with adults responsible for the activity to determine how bullying is handled. Consider how you would want to be approached if your child was being accused of bullying.
The Woodland period in Alabama, which began about 1000 BC and lasted until about AD 1000, was characterized by increasing cultural complexity and population growth. During this era, people widely adopted horticulture, pottery-making, the bow and arrow, and complex ceremonies surrounding death and burial. Archaeologists divide the Woodland period into the Early, Middle, and Late Woodland time periods. They base these divisions on changes in the way people lived, including their settlement patterns (where they lived), subsistence (what they ate), the tools they used, and mortuary practices (how they buried their dead).
Early Woodland (1000 BC to AD 1)
The Early Woodland lasted from about 3,000 years ago to about 2,000 years ago. The transition from the Late Archaic to the Early Woodland is marked by an increase in cultural developments that can be traced to the Middle and Late Archaic. Although pottery, horticulture, and earthen mounds were familiar to some people who lived during the Archaic period, after about 1000 BC such innovations became widespread across Eastern North America. As in the Archaic, Early Woodland people lived in small groups of related families, known as bands, who shared a base camp most of the year. People usually located their base camps along the Gulf Coast or in Alabama's river valleys and then left as needed to hunt or fish in the surrounding areas. However, unlike the people of the Late Archaic, Early Woodland peoples generally did not travel long distances from their base camps. As a result, the long-distance exchange networks that developed during the Late Archaic broke down. Leadership during the Early Woodland probably consisted of a male elder who provided guidance to the band but had no real power. Everyone was of generally equal status in Early Woodland society. Archaeologists learn about the lives of prehistoric peoples by studying the remains of the things that they made and used, which they call artifacts. The most common types of artifacts found at prehistoric sites are made of pottery and stone because these materials do not deteriorate as easily as bone, textiles, or other organic remains. People made pottery in different ways and decorated it with different patterns at different points in time, and archaeologists use these changes to determine cultural stability and change as well as the age of an archaeological site or artifact. People collected local clays to make their pots and other vessels. Before they formed the object, they added temper to the clay to prevent the pot from cracking as the clay dried and then hardened in an open-pit fire. Tempers included plant fiber, grit (coarse sand), crushed limestone, crushed bone, and grog (crushed potsherds). The earliest pottery included plant fibers as temper and was made during the Archaic period, about 2500 BC. Such pottery was not widespread, however, and people seem to have preferred using stone bowls for cooking well into the Early Woodland. Between 1500 and 1000 BC, people began using sand as temper, and pottery-making became much more common and widely distributed. Early Woodland people made a variety of pottery, including bowls and straight-sided beakers for serving and jars for cooking, serving, or storing food. The pieces usually were decorated with stamped, punctuated, pinched, brushed, or incised designs.
Jars with pointed bases seem to have been the most popular types for cooking directly on a fire, and Early Woodland pottery is known for its jars with three or four nodes (leg-like pieces) on their bases. Because Early Woodland people did not move around as much as Archaic people, the various bands did not see each other and share ideas as much, so styles of making pottery became very distinct from region to region. For example, people in northern Alabama tempered their pottery with crushed limestone and decorated it with stamped designs, but in south Alabama, pottery was tempered with sand and decorated with different stamped designs. At the same time, Early Woodland people in central Alabama made their pottery with sand temper but decorated it by using sharpened sticks or other tools to incise designs on the exterior. The stone tools of the Early Woodland are similar to those made during the Archaic. People continued to make stemmed points with broad blades, but they were slightly smaller. Because people tended to remain near their base camps in the Early Woodland, they used stone from nearby sources for making tools. Tubular stone pipes first appeared during the Early Woodland and were likely used for ritual and ceremonial smoking. A remarkable development of the Early Woodland was the widespread construction of earthen mounds. Mound-building seems to have originated in what is now Louisiana during the Archaic, but by about 1000 BC the tradition was adopted by people all over eastern North America. Like early mounds elsewhere, those in Alabama were usually conical or dome-shaped and were small, usually between two and five feet high and 30 to 60 feet across at the base. The mounds generally were built on top of burial pits or tombs of important individuals. Often buried with the person were items such as projectile points, natural pigments like ocher, or a few special trade items. Not all Early Woodland people were buried under mounds, however. Village sites often include graves in round pits scattered over the site. Bodies were buried in a tightly flexed position, with knees and arms folded up against the chest, and grave goods were uncommon. As with pottery styles, there was much geographic variability in Early Woodland mortuary practices. Middle Woodland (AD 1 to AD 500) The Middle Woodland lasted from about AD 1 to 500 and is marked by changes in settlement and subsistence patterns. Populations increased, and people began to spread into a variety of environments where they could take advantage of diverse food resources. They also tended gardens and gathered shellfish from the local rivers, which enabled them to live in one place for long periods of time without having to hunt for food as often. An increase of exotic artifacts at Middle Woodland sites indicates that there was more interaction between different regions than there had been during the Early Woodland. People living near the Gulf Coast and Mobile Bay area likely interacted both with people in the interior and with other coastal peoples, as reflected in the similarities in their pottery styles, the nonlocal sources of stone for their tools, and the presence of exotic items. The most remarkable aspect of Middle Woodland culture is the development of the Hopewell Ceremonial Complex. As used by archeologists, the term "complex" refers to a group of specific artifact styles and mortuary practices that occur together. 
The Hopewell Complex first developed in what is now the Ohio Valley and other parts of the Midwest and gradually spread southward. It is characterized by large, geometric earthworks; conical mounds that contain elaborate tombs of logs and stone with many exotic grave offerings; and nonutilitarian artifacts made of exotic materials such as copper, mica, obsidian, and ocean shells. The elaborate tombs are especially important because they indicate that the person buried there had a special status. Although Woodland society was still basically egalitarian, these tombs suggest that some people may have achieved higher status possibly because of their activities as important traders, warriors, or religious figures. In Alabama, the Hopewell Complex appeared in the northern Tennessee Valley region, where it is called the Copena Mortuary Complex and is marked by village settlement patterns, burials in caves, and burial mounds. The Oakville Indian Mounds, southeast of Moulton in Lawrence County, are excellent examples of these structures. The name Copena comes from the first three letters of copper and the last three letters of galena (lead ore), which are commonly found in the burials, either as raw materials or fashioned into items. Other characteristic Copena artifacts include copper reel-shaped gorgets (a type of necklace) and earspools (cylinders worn through holes in the ears); cups made from ocean shell; and long, stemless, stone projectile points. Some of these items were traded in from long distances. Pottery was generally absent from Copena burials, and Middle Woodland pottery styles remained basically unchanged from those of the Early Woodland. Sand temper became more and more common, and pots with nodes on the bottom became smaller and less popular. Groups of conical mounds are found in the Tennessee River Valley and the central Tombigbee River Valley. However, flat-topped, or platform, mounds also began to be constructed during the Middle Woodland. The remains of houses have been found on top of the mounds, which, along with elaborate graves, is another indicator that some individuals in Middle Woodland society had achieved a high social status. In north Alabama, archeologists excavated a Copena platform mound at the Walling site. Artifacts in the mound were more diverse than those in the surrounding village, and food remains from the mound consisted primarily of deer bones. This indicates that special rituals and feasts took place on top of the mound. Another important Middle Woodland site is the Pinson Mounds, located in western Tennessee. The site was a major Middle Woodland ceremonial center of at least 12 conical and platform mounds, including a geometric earthwork. Log-covered tombs with shell beads, copper, and engraved turtle-shell rattles were found under some of the mounds. Late Woodland (AD 500 to AD 1000) The Late Woodland period began about AD 500 and lasted about 500 years, until AD 1000. Populations increased and settlements filled up the landscape, spreading northward up small streams. People continued to live in base camps, but their increased numbers led to competition for resources and an increase in warfare. By this time, the use of the bow and arrow had spread from cultures to the west and fulfilled the need for a more accurate hunting tool and weapon. The bow and arrow made hunting less of a communal activity than it had been in the past, and individual families became more self-sufficient. 
People began making stone projectile points that were shorter, thinner, and more triangular so they could be attached to arrows. Late Woodland people still hunted, fished, and gathered wild foods, but they also spent increasing amounts of time tending their plots of maize, squash, and other plants. Because they now grew food that could be stored, people developed large, rounded jars used for storage of surplus food. They continued to use sand, grog, limestone, or grit temper in their pottery. As the Hopewell culture declined, mortuary practices became more variable and simplified. Small amounts of exotic items still occur in Late Woodland graves, but they seem not to have been part of an elaborate mortuary complex. The decline in ceremonialism may indicate the development of a new form of religion that focused on a reverence for the ancestors of certain lineages. There is evidence that many small groups occasionally gathered together to build mounds and maintain long-range ties. Likely as a result of these regional gatherings, pottery from different places developed widespread similarities in form and decoration. The mound centers expanded their functions from places for burial to places where civic and ceremonial functions occurred. The combined developments of surplus food, special lineages, and mound centers marked changes in society that were much different from how people had lived up to that point. And these changes set the stage for the developments that would take place in the Mississippian period.

Bense, Judith. Archaeology of the Southeastern United States: Paleoindian to World War I. San Diego: Academic Press, 1994.
Hudson, Charles. The Southeastern Indians. Knoxville: University of Tennessee Press, 1976.
Knight, Vernon J. Excavation of the Truncated Mound at the Walling Site: Middle Woodland Culture and Copena in the Tennessee Valley. Tuscaloosa: Alabama State Museum of Natural History, 1990.
Walthall, John. Prehistoric Indians of the Southeast: Archaeology of Alabama and the Middle South. Tuscaloosa: University of Alabama Press, 1980.
Zschomler, Kristen, and Ian W. Brown. Alabama Archaeology: Now and Forever? Montgomery: Alabama Historical Commission, 1996.
What Is Floating A Horse's Teeth?
Floating a horse's teeth means to file or rasp their teeth to make the chewing surfaces relatively flat or smooth. The type of file used for this is called a "float," which is where the procedure gets its name.
Below: One type of file, or "float," for smoothing the teeth of a horse. This particular float is a power float powered by electricity.
Below: A different type of float. This one is manually operated and is shown in a bucket of disinfectant.

Why Is Floating A Horse's Teeth Important?
Unlike some other species which can properly digest food even if it is swallowed with little or no chewing, horses must chew their food efficiently in order to effectively digest it. If a horse's teeth do not have a flat surface to properly chew food, its digestion is greatly hindered. This can result in weight loss, ranging from mild to dramatic, and in poor absorption of nutrients. Oddly enough for a species in which a flat chewing surface is so important, horses are prone to developing uneven chewing surfaces. This is due, in part, to a horse's upper jaw being wider than its lower jaw. This unequal width results in a natural wear pattern that causes the edges of the teeth on the upper jaw to be longer on the outside of the mouth where they overhang the lower jaw. The opposite is true on the lower jaw, where the edges of the teeth wear longer on the inside of the mouth where they extend inside the upper jaw.
Below: Teeth on one side of the bottom jaw of a horse. The inside of the teeth (closest to the tongue in the center of the photo) are higher than the outside, creating an inefficient, uneven chewing surface. In addition, you can see the teeth have several sharp edges.
Below: The same teeth shown above after being floated. They are now much more level, creating a more efficient chewing surface. Also, the sharp edges are now gone.
Since a horse's teeth continually emerge from the gum line for most of its adult life, and because of the unequal widths of the upper and lower jaws, a horse's teeth are unlikely to grind off during normal chewing to create a flat surface. In addition to hampering a horse's ability to digest food, a horse's teeth might become so uneven that sharp, razor-like edges will form. These sharp edges can cut the horse inside its mouth. Floating a horse's teeth, or at least examining the teeth to see if floating or some other care is needed, should be considered a basic part of routine care.
Below: These sharp points were removed from a 10-year-old mare during a routine float. These points, often called "hooks," were thin enough that they broke off when the float touched them. If they had been thicker the float would have been used to file them off.

Does Floating A Horse's Teeth Hurt?
No. There are not any nerves at the surface of the tooth where the floating is performed. However, that doesn't mean a horse will stand willingly for the procedure. Depending on the preference of the person performing the float and the horse's nature, some horses are sedated to have their teeth floated while others are not.

When Should A Horse Have Its Teeth Floated?
In years past it was common practice only for horses approximately age 10 or older to have their teeth floated. However, modern horse management has taught us that all horses, regardless of age, should have their teeth examined at least once a year. We now know it is not uncommon for younger horses as well as older horses to require floating or some other dental care.
A routine examination of a horse's teeth by an equine veterinarian or other qualified person can be vital to a horse's health and well-being.

Photos Of Horses Having Their Teeth Floated
Below are several different horses having their teeth floated by three different veterinarians.
Below is a Quarter Horse mare named Foxy. A veterinarian is holding her tongue to the side so he can look inside her mouth to see her teeth, and also feel them with his fingers. A veterinarian or other qualified professional might examine a horse's teeth in this manner or by using a dental speculum or dental wedge to hold the horse's mouth open. (Please note that care must be taken when holding a horse's tongue as shown in the picture so as not to injure the horse.)
Below: A veterinarian looks at and feels Foxy's teeth.
CAUTION! Examining a horse's teeth can be far more dangerous than it may seem, even when examining gentle horses. For example, if you reach inside a horse's mouth to feel the teeth with your fingers you can get your fingers severely bitten. In addition, if you aggravate a sore area inside the horse's mouth the horse could react violently to the pain. If you want to learn to examine a horse's teeth, be sure to learn safe techniques from someone who is qualified and experienced.
To float Foxy's teeth, the veterinarian used a manual float and a dental wedge. The float was used to file the uneven and/or sharp surfaces of Foxy's teeth, while the wedge was used to keep her mouth open during the procedure. There are different types of floats and wedges. The ones used when these photos were taken are two common types. Foxy was not restrained for the procedure other than the use of a normal halter, and was not sedated.
Below: Foxy getting her teeth floated with a manual float. The lady on the right side of the photo is keeping Foxy's mouth open with a dental wedge (photo of the wedge below).
Here is a better look at the dental wedge in the photo above. This type of wedge is sometimes called a "spool." The spool-shaped part of the wedge is placed inside the horse's mouth between the back teeth to keep the mouth open.
Below: One type of equine dental wedge.
Unlike Foxy, above, the bay mare below was placed into specially made stocks to have her teeth floated. She was also blindfolded and lightly sedated. The veterinarian performing the float on this horse used a "power" float powered by an air compressor, and an equine dental speculum to hold her mouth open instead of a wedge. The speculum had "bite plates" that covered the incisors and were controlled by hinges at the sides of the mouth. As the speculum was ratcheted open or closed, the bite plates would open or close the mouth. Although it's not clearly visible, there is also a special halter on the horse that helped to position and elevate her head.
The veterinarian is floating the mare's teeth. To the left you can see air hoses going down into a white bucket where they are attached to different floats sitting in disinfectant.
This is a closer look at the same mare above as she is getting her teeth floated.
Similar to the bay mare immediately above, this sorrel gelding was placed into specially made stocks to have his teeth floated. He was not blindfolded, but he was lightly sedated. The veterinarian performing the float on this horse also used a power float, but hers was powered directly by electricity (not by an air compressor like the power float above). She also preferred to use a speculum, not a wedge, to hold the horse's mouth open.
The white ring seen in the photo is an equine dental halter that helps position and elevate the horse's head. Ask any veterinarian: Floating a horse's teeth is often a popular spectator sport. Depending on the preference of the person performing the float, the owner's wishes, and the horse's nature, some horses are sedated to have their teeth floated while others are not. In the photos above the veterinarian performing the float on the first horse did not sedate her. While a little annoyed by having her teeth floated the mare generally took the procedure well, with only a little fussing. With a few pauses here and there to give her a break the procedure was over fairly quickly without serious risk of harm to the mare, the veterinarian, or the mare's owner who was assisting the vet. Another factor was that this vet used a manual float which is quieter than a power float, and therefore more easily accepted by some horses. However, some horses do not accept having their teeth floated as well as the first mare. For example, the other two horses shown in the photos above were sedated for the float. The veterinarians and owners agreed that light sedation was a reasonable precaution to minimize the anxiety of the horse and/or risk of injury to the horse or humans. Equine Dental Wedges and Speculums Two common pieces of equipment often used in floating a horse's teeth include the equine dental wedge and the equine dental speculum. • An equine dental wedge (one type, the "spool" type, is seen in the photos above) is placed between the back teeth of the upper and lower jaw. It is primarily used to keep the horse's mouth open during the floating process. • An equine dental speculum has metal plates that fit over the upper and lower incisor teeth, and a ratchet mechanism that spreads the plates apart. A speculum is used to provide a more unobstructed view of a horse's mouth (when compared to a wedge), and also to keep the horse's mouth open during floating. In the world of equine dentistry, each piece of equipment has its fans and its detractors. Dental wedges have been blamed for damaging the molars, while some say they have used a wedge for years with no mishaps, and merely say it must be used properly, and/or be the right type of wedge. Speculums seem to be the preferred piece of equipment when compared to wedges, but we did find one online veterinarian / equine dental professional who blamed an improperly used speculum for damaging a horse's jaw. When having your horse's teeth floated it's important to ask questions of the veterinarian or equine dental professional that's doing the floating. Don't be afraid to ask them about any of the equipment they're using, the procedure itself, possible negative results, and what can be done to avoid them. While floating a horse's teeth, in general, is a safe and often necessary procedure, an informed owner is always a horse's best advocate. Bit Seat - See "performance float," below. Hypsodont - Hypsodont teeth are teeth with high crowns that slowly continue to emerge from the gum for most of the animal's life. As the top of the tooth is worn down, more tooth slowly erupts from the gum line to replace what has been worn away. Horses and other grazing animals like cattle and deer have hypsodont teeth. Malocclusion - Abnormal or incorrect contact between the teeth of the upper and lower jaws. Mastication - The process of mashing or grinding food between the teeth. For horses, mastication is the first step of the digestion process. 
Unlike some other species which can properly digest food even if it is swallowed with little or no chewing, a horse must efficiently chew, or masticate, its food (grasses, hay, grain, etc.) before swallowing in order to effectively digest it. Occlusion - The manner of contact between the teeth of the upper and lower jaws. Float - To file or rasp the teeth of a horse to make the chewing surfaces relatively flat or smooth. Performance Float - A "performance float" is different from a "regular" float. A performance float is when the front sides of the first cheek teeth, which are the teeth right behind where a bit sits in a horse's mouth, are floated to round them off. In some horse people's opinions this creates a more comfortable area for the horse when bitted. This is also sometimes called a "bit seat."
The mystery of why Jupiter's Great Red Spot did not vanish centuries ago may now be solved, and the findings could help reveal more clues about the vortices in Earth's oceans and the nurseries of stars and planets, researchers say. The Great Red Spot is the most noticeable feature on Jupiter's surface — a storm about 12,400 miles (20,000 kilometers) long and 7,500 miles (12,000 km) wide, about two to three times larger than Earth. Winds at its oval edges can reach up to 425 mph (680 km/h). This giant storm was first recorded in 1831 but may have first been discovered in 1665. "Based on current theories, the Great Red Spot should have disappeared after several decades," researcher Pedram Hassanzadeh, a geophysical fluid dynamicist at Harvard University, said in a statement. "Instead, it has been there for hundreds of years." Vortices like the Great Red Spot can dissipate because of many factors. For instance, waves and turbulence in and around the storm sap its winds of energy. It also loses energy by radiating heat. Moreover, the Great Red Spot rests between two powerful jet streams in Jupiter's atmosphere that flow in opposite directions and may slow down its spinning. Some researchers suggest that large vortices such as the Great Red Spot gain energy and survive by absorbing smaller vortices. However, "this does not happen often enough to explain the Red Spot's longevity," researcher Philip Marcus, a fluid dynamicist and planetary scientist at the University of California, Berkeley, said in a statement. The Great Red Spot is not the only mysterious vortex. In fact, vortices in general, including ones in Earth's oceans and atmosphere, often live much longer than current theories can explain. To help solve the mystery of the Great Red Spot's endurance, Hassanzadeh and Marcus developed a new 3D, high-resolution computer model of large vortices. Models of vortices generally focus on swirling horizontal winds, where most of the energy resides. Although vortices also have vertical flows, these have much less energy. Therefore, "in the past, most researchers either ignored the vertical flow because they thought it was not important, or they used simpler equations because it was so difficult to model," Hassanzadeh said. The researchers now find that vertical flows hold the key to the Great Red Spot's longevity: When the storm loses energy, vertical flows move hot and cold gases in and out of the storm, restoring part of the vortex's energy. Their model also predicts radial flows that suck winds from the high-speed jet streams around the Great Red Spot toward the storm's center, helping it last longer. Together, vortices — whether on Jupiter or in Earth's oceans — may decay up to 100 times slower than researchers previously thought. "Some vortices in the oceans have been observed to last for several years and are believed to play an important role in the oceanic ecosystem and ocean-atmosphere interaction," Marcus told SPACE.com. In addition, "vortices with physics very similar to the Great Red Spot are believed to contribute to star and planet formation processes, which would require them to last for several million years. Both oceanic and astrophysical vortices are subjected to dissipating processes, and the mechanism described here for the longevity of the Great Red Spot presents a very plausible explanation for their longevity as well." The scientists caution that their model does not entirely explain the Great Red Spot's long life span.
They suggest that occasional mergers with smaller vortices may help prolong the giant storm's life as well, and have begun modifying their computer model to test this idea. In addition, their "current model does not account for compressibility of the flow or sphericity of the planet," Hassanzadeh told SPACE.com. "Although we believe that these effects do not change the conclusions of our work, we are planning to modify our model in the next step and include these effects." The scientists will detail their findings Nov. 25 at the annual meeting of the American Physical Society's Division of Fluid Dynamics in Pittsburgh.
Fish gills act as the equivalent of a mammal's lungs by taking in oxygen and releasing carbon dioxide. Oxygen and carbon dioxide travel across small, thin-walled blood vessels in both lungs and gills. In terms of evolution, gills are significantly older than lungs. While mammals' lungs work with air, which is 200,000 parts per million oxygen, a fish's gills work with water, which contains only up to 8 parts per million oxygen. Because those same thin-walled vessels are exposed to the surrounding water, fish must also work to control the flow of salt through the blood vessels in their gills. For a freshwater fish, the challenge is to prevent the loss of salt from its body. For a saltwater fish, the challenge is preventing excess salt from getting into its system. Another reason fish must work harder to "breathe" is that water is thicker than air and therefore more difficult to process. Fish gills are actually much more efficient at oxygen extraction than lungs. However, a fish is unable to breathe out of water because its gill surfaces collapse when taken out of water and exposed to air. Crustaceans, such as crab, lobster and crawfish, and mollusks also breathe through gills, although theirs are structured slightly differently.
NGC 7793: A galaxy about 12.7 million light years away containing a so-called microquasar. Caption: Combined data from Chandra (red, green, and blue) as well as optical light (light blue) and hydrogen emission (gold) reveals a “microquasar” in the galaxy NGC 7793. This system contains a stellar-mass black hole that is being fed by a companion star, shown in X-rays in the upper inset. Material falling onto the black hole is blowing outward via two powerful jets that plow into the surrounding gas and heat it. The lower inset shows the nebula that is being illuminated by the output from these jets. Wide field image is 9 arcmin across (about 34,000 light years); Inset image is 45 arcsec wide (about 2,800 light years). Chandra X-ray Observatory
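As a quick check on the sizes quoted in the caption, the physical extents follow from the small-angle relation between angular size and distance. The arithmetic below is illustrative and uses only the numbers already given (the 12.7-million-light-year distance and the 9-arcminute angular size); it is not from the original source.

\[
s = d\,\theta = \left(1.27\times10^{7}\ \text{ly}\right)\left(\frac{9}{60}\times\frac{\pi}{180}\ \text{rad}\right)\approx 3.3\times10^{4}\ \text{ly}
\]

This matches the "about 34,000 light years" quoted for the wide-field image; applying the same relation to the 45-arcsecond inset gives roughly 2,800 light years.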
Yellows evolved as well. Around 1300 painters began using lead-tin yellow, a pigment formed by heating lead and tin oxides in a crucible to tremendously high temperatures; chemists could control the hue by varying the temperature. The pigment disappeared from use around 1750, when the recipe was lost, according to some accounts. Another shade, called Indian yellow, was made on the subcontinent by feeding mangoes to cows and concentrating the urine to retrieve the calcium and magnesium salts that create the color. (The British government outlawed the process in 1908 on the grounds of cruelty.) Since many of these pigments are toxic, "the task of grinding them and mixing them with oils during the Renaissance fell to some lowly—and dispensable—apprentice," says Sandra Webber, a conservator from the Williamstown lab. Chemists can use these distinct chapters in the technological history of paint to help determine the authenticity of a work of art. In the 1980s, during an inspection of a painting known as The Virgin and Child with Saints John the Evangelist and Paul, believed to date from the 1400s, a chemical analysis of a tiny chip of paint found a nasty surprise: zinc. Ambrogio da Fossano, called Il Bergognone, the purported painter, could not possibly have had a zinc-based pigment on his palette some four centuries before the element's discovery. "The painting was deemed a forgery and moved downstairs into the basement storage room," says Martin. Suspicious that the zinc pigment was applied during a modern restoration, Martin took a closer look at the painting in 1994 with a group of undergraduates. Lifting a scalpel to the surface of the rich yellow tunic worn by the Christ child, he gently pressed in and removed an all but invisible core sample. He encased this fleck of paint in epoxy and polished it smooth to expose a cross section of the chip. Then, working with a microscope and a computer monitor, he examined the geologic-looking strata of varnish, paint, and gilding. He focused on the pigment particles in the layer of yellow paint, then slid the sample into a scanning electron microscope. In addition to producing a picture, this microscope produces a line graph with peaks corresponding to the elements present. Martin found lead and tin, an indication that the pigment was lead-tin yellow—which dated the painting to before 1750. "We can't authenticate paintings using scientific techniques alone, but we can present evidence that art historians can then interpret," says Martin. The painting was attributed to the school of Il Bergognone and returned to its wall in the neighboring Clark Art Institute. Chemists can also help with conservation—and ease the damage inflicted by past barbarisms. In many old paintings, the original work has been lost beneath countless layers of touch-ups and revisions by later artists. "Art used to be done in secret, so that no one had any idea what the condition of the original painting was," says Martin. Using a variety of techniques, from X-rays to infrared imaging, modern conservators can find out. In the case of Il Bergognone's Virgin and Child the results are not encouraging: a later, inferior hand painted a floral motif across the bottom of the painting and gave the Virgin a wardrobe change by outfitting her in an entirely new dress of dark cloth with a fleur-de-lis pattern. Often such changes are irreversible. "If oil paint is used on top of oils, it binds to the surface and can't be removed," says Martin. This practice was common during the Victorian era.
Damaged areas were filled in, nude figures were covered modestly in clothes, and sometimes entire characters were transformed. One Spanish portrait at the Williams College Art Museum features St. Lucy bearing her trademark symbol—a plate holding a pair of eyeballs. In the nineteenth century she was recast as the primly appealing St. Cecilia, carrying a book and wearing a veil. Fortunately, it was possible to reverse the makeover because it was applied on top of varnish rather than directly onto the oil paint. Deciding what to fix and what to leave alone has become a touchy issue for modern conservators. But in the past, restorers had no such qualms. One of the many treasures now at the Clark, by the Italian painter Perugino, was originally painted on a wood panel. During the 1940s, before the museum acquired it, a restorer peeled the paint off like a fruit roll-up and reapplied it to a flat piece of masonite. “There’s a photograph of a painting being held up in the air, and you can see the light shining through from behind,” says Webber. “It’s really scary. I mean, it would be fun to try—but not with a Perugino!” Using microscopic analysis to determine the chemical composition of the original pigments, conservators now apply paint only to areas where the damage is so distracting it cannot be left alone. Cotton swabs soaked with saliva are used to dab paintings clean; the enzymes in saliva dissolve surface proteins and grime safely, without harming the picture. Moreover, the work of the conservators is reversible because they use materials that are stable but don’t form permanent bonds with the painting. “We do everything in our power not to alter the intent of the artist,” says Martin. “Even if we think a shiny surface would look better than a matte finish, we can’t change it.” Still, he admits that no system is perfect, and each cleaning and analysis removes something of the original: “Every time an object comes in this door, it gives something of itself up in order to continue to live.” Works of modern art present a whole new set of problems for conservators. Twentieth-century painters have experimented not only with abstract forms but with abstract substances as well. Jackson Pollock, for instance, used drums of World War II surplus paints for his splashiest effects, and he often added texture by mixing in things such as cigarette butts. These appear to be stable, at least so far. Another abstract painter, Franz Kline, used house paints for his trademark black-on-white forms; already the whites are yellowing. Willem de Kooning, an abstract expressionist, often used nonhardening safflower oils that remain tacky even today, some 30 years later. “There is some dust on the surface that can be brushed out carefully, but the canvases basically can’t be cleaned, ever, without removing the paint,” says de Kooning expert Susan Lake, a painting conservator with the Smithsonian Institution’s Hirshhorn Museum and Sculpture Garden in Washington, D.C. As for the persistent rumor that de Kooning also mixed pigments with mayonnaise—an ersatz cross between egg tempera and oil—Lake is still searching for the evidence. “I haven’t found cholesterol yet in any of the paintings I’ve looked at,” she says. Despite such tricky substances, conservators of modern paintings have an easier time than conservators of modern sculpture, where a favorite medium just now is chocolate. 
Deciding whether it’s worth the effort to conserve what an artist obviously planned as an ephemeral form has given rise to a debate among critics and historians: Should art ever be allowed to die? Some say yes: even the act of preserving these works, they believe, would alter the artist’s original intent. But the chemist would like to have a crack at conserving everything else. “It’s a challenge, because you never know what you’re going to find when you look closely at a canvas,” says Martin. And therein lies the art.
Ganymede is Jupiter's largest moon; in fact it's the solar system's biggest moon. Now members of the Icy Worlds team at NASA's Jet Propulsion Laboratory (JPL) think that the giant moon, which is even larger than the planet Mercury, may have several layers of ice and liquid oceans piled atop each other, much like a club sandwich or other type of stacked sandwich. The JPL scientists based their findings on computer models of Ganymede's makeup. The research also revealed that the icy moon may have hosted primitive life. They drew attention to areas of the Jovian moon where water and rock intermingle and said those interactions are important for the development of life. The researchers pointed out that life on our own planet may have also gotten its start in a similar way. Some scientists propose that about 3.6 billion years ago, key life-giving elements contained within material that originated deep beneath Earth's surface bubbled out of hydrothermal ocean vents and eventually developed into our planet's earliest life forms. Until these recent findings, it was thought that the rocky sea bottom of Ganymede was covered with ice instead of liquid, which is something that could prevent the development of life. The computer models the scientists produced for their research also led them to believe that the first layer atop the moon's core might be salty water. "This is good news for Ganymede," said JPL's Steve Vance, who led the study. "Its ocean is huge, with enormous pressures, so it was thought that dense ice had to form at the bottom of the ocean. When we added salts to our models, we came up with liquids dense enough to sink to the sea floor." Models of Ganymede's oceans produced in the past led scientists to assume that salt had little effect on a liquid's properties under pressure. But the JPL team conducted laboratory tests that showed the density of liquids under the same harsh conditions found inside Ganymede increased with salt. While some may find it odd that the ocean could be made denser with the addition of salt, the researchers suggested an experiment that can be tried at home that will show how this is possible. Simply add some regular table salt to a glass of water. You should be able to notice that instead of increasing the volume of liquid within the glass, it shrinks and becomes denser. This is, according to the scientists, because the salt ions attract water molecules. As the JPL scientists progressed through their computer models they noticed that things got a little more complicated when they took the different forms or phases of ice into consideration. The cubes of ice you add to your drink to make it colder are a form referred to as "Ice Ih." It's lighter than liquid water and is the least dense form of ice. But as more pressure is added, the structure of the ice crystals becomes much more compact. Incredibly high pressure, such as what is thought to be found in the deep oceans of Ganymede, produces ice that is so dense that it can actually drop to the bottom of the ocean. Study scientists believe the densest form of ice on the Jovian moon is "Ice VI." The team added these processes to their computer model and came up with an ocean sandwiched between up to three ice layers that cover the rocky seafloor of Ganymede. The lightest ice makes up the top layer and the bottom layer consists of the saltiest liquid. The JPL team said that their findings can also be applied to the study of exoplanets or planets beyond our solar system.
Some have proposed that a number of the rocky exoplanets that are more massive than Earth – Super-Earths – are also covered in oceans. Vance and his colleagues think that scientists conducting laboratory experiments with models similar to or even more complex than those used in their research could help determine whether or not life could exist on these alien "water worlds".
Origin and Radiation of the Earliest Vascular Land Plants. 2009. P. Steemans, et al. Science 324: 353. Abstract: Colonization of the land by plants most likely occurred in a stepwise fashion starting in the Mid-Ordovician. The earliest flora of bryophyte-like plants appears to have been cosmopolitan and dominated the planet, relatively unchanged, for some 30 million years. It is represented by fossilized dispersed cryptospores and fragmentary plant remains. In the Early Silurian, cryptospore abundance and diversity diminished abruptly as trilete spores appeared, became abundant, and underwent rapid diversification. This change coincides approximately with the appearance of vascular plant megafossils and probably represents the origin and adaptive radiation of vascular plants. We have obtained a diverse trilete spore occurrence from the Late Ordovician that suggests that vascular plants originated and diversified earlier than previously hypothesized, in Gondwana, before migrating elsewhere and secondarily diversifying.
Nature's colors can delight the eye. But these dazzling displays can also have many practical uses. For example, some animals hide themselves from predators by changing color to blend into their surroundings. This is known as camouflage. Researchers from Europe have taken inspiration from this to develop a new material. It changes color when exposed to moisture. And the researchers can decide beforehand which colors or patterns that moisture will reveal. It all has to do with the new material's structure. Consider a peacock's feathers. They're a fairly boring brown. Yet your eye doesn't perceive them that way. The feathers appear vibrant and multi-colored due to what's known as structural color. Microscopic features on a plume's surface can reflect or scatter light in some special way. This alters the material's apparent color. Waves of certain frequencies of light — colors — can sometimes interfere with, or block, each other. The result? The color seen by the observer is different from the object's true hue. Besides peacock feathers, other examples of structural color include fish scales and certain butterfly wings. Monali Moirangthem and Albertus Schenning are materials scientists. They work in the Netherlands at the Eindhoven University of Technology. These researchers specialize in creating "smart" materials. These are ones that have been designed to exhibit unusual properties based on the conditions of their environment. (Such conditions include temperature, pressure, moisture level or the light shining on it.) The researchers were particularly intrigued by beetles that seem to change color in response to differences in humidity. (Humidity is how much moisture is in the air.) This inspired their new artificial material with similar color-changing traits. While other scientists have achieved this, "They were only able to change between two colors," Schenning explains. His team didn't want to limit its color palette to just two hues. They wanted moisture to be able to change their material from one color to any or all others. And on January 31 they described their success in ACS Applied Materials and Interfaces.

Ink's color changes with its depth
First, the team produced a solid blue polymer film. A polymer is a material made up of long molecular chains. This special film swells when it makes contact with water. The researchers then used an inkjet printer to print images onto this polymer film. Their "ink" was a chemical — calcium nitrate — dissolved into water. Altering the number of layers of this ink printed onto the film would change its apparent color as soon as it made contact with water. One layer of calcium nitrate appears orange. Two layers: green. Three layers look blue. So, to make a certain color show up when the film gets wet, they simply adjust how many layers of ink are printed on it. Mark MacLachlan is a chemist who focuses on the structure of materials on a molecular scale. He works in Canada at the University of British Columbia, in Vancouver. "When the polymer film comes in contact with water and swells," he notes, "it changes the dimensions of the pre-printed surface structures." This, he explains, changes the wavelength — or color — of light that reflects back to the eye of the observer. Once the polymer film dries out again, it returns to its original blue color. This once again camouflages the image that had been printed on it.
To bring it back, one need only add water. And that can be as simple as breathing moist air onto it! Schenning is excited about potential uses for such a material. He can imagine smart textiles, cars or buildings that would change color as the level of moisture in the air changed. These could "be most interesting," he thinks. MacLachlan is excited about possible security applications. Crooks are always making knock-off products or counterfeiting money and medicines. To fight this, companies and governments want to mark the real ones with some type of label or tag. These should be hard to mimic, he says, but easy to recognize. "A tag that changes color when you breathe on it would be great," he says. Other applications include color-changing vehicles. Imagine, he says, cars that change color on a rainy day. However, MacLachlan warns, tweaking these materials so that they can withstand prolonged use will be challenging. That concern doesn't deter Schenning. He wants to take the masking ability of his polymers up a notch or two. Again, he is turning to nature for inspiration. "I want to develop a polymer with the camouflaging capabilities of a cuttlefish — the master of camouflage," he says. These aquatic animals can change their body coloring. By doing so, they can totally blend into the patterns of the environment — and seemingly disappear.

application A particular use or function of something.
camouflage Hiding people or objects from an enemy by making them appear to be part of the natural surroundings. Animals can also use camouflage patterns on their skin, hide or fur to hide from predators.
chemical A substance formed from two or more atoms that unite (bond) in a fixed proportion and structure. For example, water is a chemical made when two hydrogen atoms bond to one oxygen atom. Its chemical formula is H₂O. Chemical also can be an adjective to describe properties of materials that are the result of various reactions between different compounds.
cuttlefish Lesser-known members of the cephalopod family, which includes octopuses and squid. Hunting by night, cuttlefish use their big eyes and arms with suckers. Masters of disguise, these animals can hide in plain sight by changing their colors to blend into their surroundings.
deter An event, action or material that keeps something from happening. For instance, a visible pothole in the road will deter a driver from steering his car over it.
environment The sum of all of the things that exist around some organism or the process and the condition those things create. Environment may refer to the weather and ecosystem in which some animal lives, or, perhaps, the temperature and humidity (or even the placement of components in some electronics system or product).
hue A color or shade of some color.
humidity A measure of the amount of water vapor in the atmosphere. (Air with a lot of water vapor in it is known as humid.)
microscopic An adjective for things too small to be seen by the unaided eye. It takes a microscope to view objects this small, such as bacteria or other one-celled organisms.
moisture Small amounts of water present in the air, as vapor. It can also be present as a liquid, such as water droplets condensed on the inside of a window, or dampness present in clothing or soil.
molecule An electrically neutral group of atoms that represents the smallest possible amount of a chemical compound. Molecules can be made of single types of atoms or of different types.
For example, the oxygen in the air is made of two oxygen atoms (O₂), but water is made of two hydrogen atoms and one oxygen atom (H₂O).
nitrate An ion formed by the combination of a nitrogen atom bound to three oxygen atoms. The term is also used as a general name for any of various related compounds formed by the combination of such atoms.
polymer A substance made from long chains of repeating groups of atoms. Manufactured polymers include nylon, polyvinyl chloride (better known as PVC) and many types of plastics. Natural polymers include rubber, silk and cellulose (found in plants and used to make paper, for example).
predator (adjective: predatory) A creature that preys on other animals for most or all of its food.
technology The application of scientific knowledge for practical purposes, especially in industry — or the devices, processes and systems that result from those efforts.
textile Cloth or fabric that can be woven or nonwoven (such as when fibers are pressed and bonded together).
trait A characteristic feature of something.
wavelength The distance between one peak and the next in a series of waves, or the distance between one trough and the next. Visible light — which, like all electromagnetic radiation, travels in waves — includes wavelengths between about 380 nanometers (violet) and about 740 nanometers (red). Radiation with wavelengths shorter than visible light includes gamma rays, X-rays and ultraviolet light. Longer-wavelength radiation includes infrared light, microwaves and radio waves.

Journal: M. Moirangthem and A.P.H.J. Schenning. Full color camouflage in a printable photonic blue-colored polymer. ACS Applied Materials and Interfaces. Vol. 10, January 31, 2018, p. 4168. doi: 10.1021/acsami.7b17892.
Students will learn how to survive a hurricane through the use of an interactive website and related worksheets.
- Students will understand the factors that lead to the development of a hurricane.
- Students will be able to identify the stages and categories of storm development.
- Students will learn how to prepare for a hurricane.
- Students will determine the effects of a hurricane on Florida's physical environment and predict effects on industries.
- After using the Red Cross Disaster pack to read about what to do before, during and after a hurricane, students will go to the web site for Hurricane Strike at http://deved.meted.ucar.edu/hurrican/strike/. Here students will complete 5 worksheets, one for each day, while navigating throughout the site and completing the hurricane preparedness activities.
Matt Strassler [April 18, 2012] It’s not easy to see dark matter, which makes up most of the matter in the universe. It’s dark. And yet, there is one way that dark matter might, in a sense, shine. How? If dark matter is made from particles that are their own anti-particles (as is true for photons, Z particles, and [assuming they exist] Higgs particles, and perhaps neutrinos), then it is possible that two dark matter particles might encounter each other and annihilate (just as an electron and a positron can annihilate, or two photons can annihilate) and turn into something else that we can potentially detect, such as two photons, or indeed any other known particle and its anti-particle. Whether this is an effect that we could hope to observe depends on a lot of things that we don’t know… but there’s no harm in looking for it, and good reason to try. How would we hope to find it? First, we may want to look toward the center of our galaxy, the Milky Way. Just as the most likely place to see an automobile accident is in heavy rush-hour traffic, the place where collisions of dark matter particles are most likely to occur would be wherever the density of dark matter is highest. And that density is largest in the centers of galaxies. The reason (see Figure 1) is that galaxies of stars form in and around large clumps of dark matter — indeed, most of the mass of the Milky Way galaxy is dark matter, distributed in some fashion that is very roughly a sphere, though with a detailed structure that is unknown and possibly very complicated. The stars, and the big clouds of atoms out of which they form, form a rotating disk with spiral arms, sitting within that big sphere, with a ball of stars (the “bulge”) at its heart. The stars in the disk and bulge are presumably centered on the highest concentration of dark matter. So collisions, and consequent annihilations to particles that we can potentially detect, may be occurring near the center of our galaxy, and for this reason we might want to design scientific instruments that can look in that direction, seeking a hint that these annihilations are taking place. Unfortunately, hints are not so easily obtained, because there aren’t many types of known particles that, if produced in dark matter annihilation near the center of the galaxy, can travel from there to Earth. The only particles that live long enough to reach the Earth are electrons, anti-electrons (positrons), protons, anti-protons, some other stable atomic nuclei (such as helium), neutrinos, anti-neutrinos and photons. But neutrinos (and anti-neutrinos) are extremely difficult to detect, while almost all of the others are electrically charged, so their paths bend and loop in the galaxy’s magnetic field, causing most of them never to reach Earth at all and assuring that we can’t tell, if they make it here, whether they came from the galactic center or not. That leaves photons as the only particles that both can travel straight from the region of the galactic center to Earth and can be easily detected. So a good hint of dark matter annihilation could come from an unusual class of high-energy photons that are streaming from the galactic center but not from anywhere (or almost anywhere) else; see Figure 2. However, there’s still a big challenge for that strategy. There are a lot of unusual astronomical objects at the galactic center, and they make high-energy photons also. 
How can we tell the difference between photons that come from dark matter annihilation and photons that are coming from some kind of unknown class of stellar processes that might be more common at the center of the galaxy than elsewhere? The answer is that it isn't easy, except in one special case. If dark matter particles (which have some definite mass, let's call it M) can sometimes annihilate to two and only two photons, then both of those photons will have motion-energy equal (to a very, very good approximation) to the mass-energy Mc² of the dark matter particles. The reason is very simple. It is the same as described in this article on particle/anti-particle annihilation, and as seen in Figure 3. If a particle and anti-particle are (nearly) at rest, then the energy of each is (almost) entirely mass-energy and (nearly) equal to Mc². Both have momentum (nearly) zero. Energy and momentum are conserved, so the total energy is (nearly) 2Mc² before the annihilation and after it too. When the particle and anti-particle annihilate to a different particle and anti-particle, both the new particle and new anti-particle will have energy (nearly) equal to Mc². In general, this will be a mix of mass-energy and motion-energy. In the specific case in which the final particle and anti-particle are photons, which have no mass and consequently no mass-energy, all of the energy will be in motion-energy. Now we don't know what the mass M of the dark matter particles is, and we don't know therefore what the energies of the resulting photons will be. But since, just as every electron has the same mass and every proton has the same mass, every dark matter particle has the same mass M, every single dark matter annihilation will produce two photons of energy just about equal to Mc². And that means that if we measure, with a special purpose telescope, the high-energy photons coming from the region near the center of the galaxy, and we make a plot of the number of photons that we detect with a given energy, we should expect astrophysical processes to generate lots of photons at lots of different energies, forming a smooth background, but the dark matter processes will add a bunch of photons that all have the same energy — a bump sticking up above that background. See Figure 4. It's almost impossible to imagine any astronomical object, such as a bizarre star, that would be simple enough to generate a bump of this sort, so a signal in the form of a narrow bump would be a smoking gun for pairs of dark matter particles annihilating. This gives us a very powerful way to look for dark matter. It won't work if dark matter particles aren't their own anti-particles and can't annihilate at all. It won't work if dark matter particles don't often make photons when they annihilate. But it might work. And so there are efforts ongoing, most notably using the Fermi Large Area Telescope, a satellite experiment, which is out in space now, measuring photons coming from all across the sky, including those coming from the galactic center.
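The energy argument made above can be summarized in one line. For two dark matter particles of mass M annihilating while (nearly) at rest into exactly two photons, conservation of energy and momentum gives (this is just the standard two-body kinematics sketched in the text, written in the article's own notation):

\[
E_{\text{total}} = 2Mc^{2}, \qquad \vec{p}_{\text{total}} \approx 0 \quad\Longrightarrow\quad E_{\gamma_1} = E_{\gamma_2} \approx Mc^{2},
\]

with the two photons emitted back to back. This is why the expected signal is a narrow line at the single photon energy Mc², sitting on top of the smooth astrophysical background.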
These torpedo-shaped ‘living fossils’ with a flattened alligator-like head have been around since the days of dinosaurs, with fossil records tracing their existence to the Early Cretaceous over 100 million years ago. They can grow up to 3 metres in length and weigh up to 140 kilogrammes. Here are more unusual facts about the rare Platinum Alligator Gars (Atractosteus spatula): 1. Their rare colour is due to Leucism Alligator gars are typically brown or dark olive-green dorsally, fading to yellowish white ventrally. Platinum Alligator Gars’ snowy hue is the result of a pigmentation disorder called Leucism – a partial loss of pigmentation resulting in white, pale or patchy colouration of the skin, hair, feathers, scales or cuticles, but not the eyes. 2. Breathes in air AND underwater One reason why they managed to survive this long is their ability to thrive even in low oxygen waters. Like their ancestors from the dinosaur age, they have a swim bladder that they can use as a primitive lung. They fill this swim bladder by gulping air to supplement their gill breathing. 3. Ganoid scales that protect like chainmail Unlike the flexible scales of other fishes, Platinum Alligator Gars have stiff, white enamel-like, jagged diamond-shaped ganoid scales that form an interlocking, protective armour similar to medieval chainmails. 4. Two rows of teeth Their upper jaw has two rows of fang-like teeth which are used to impale and hold prey. Platinum Alligator Gars are stalking, ambush predators feeding primarily on fish, but they will also eat water fowl and small mammals found floating on the water surface. 5. Slow to mature As with most ancestral species, Platinum Alligator Gars are slow to mature. Most females reach sexual maturity only after 10 years while males reach sexual maturity in half that time. 6. Poisonous roe A female Platinum Alligator Gar can produce about 150,000 eggs per spawn. The eggs are bright red and poisonous to humans if ingested. We feed them fish and prawns on alternate days, and they are docile towards us when we dive clean the habitat. ——-– Alex Lee, aquarist in charge of Platinum Alligator Gars These ‘swimming dinosaurs’ can be found at the Central and South American exhibits of S.E.A. Aquarium, located next to the Twilight Reef habitat.
The goal of Spanish for 3rd and 4th grade is to begin to recognize basic/rudimentary vocabulary necessary for effective communication in Spanish. Each year the students build on their skills of recognizing, pronouncing, and spelling a variety of vocabulary terms. Grades 5-8 focus on acquiring more sophisticated vocabulary that is necessary in communicating in Spanish as well as integrating Spanish grammar concepts on a more advanced level. Grades 5-8 use prior knowledge and new content to begin writing sentences and speaking in the target language. Aside from grammar and vocabulary, the students are also exposed to the Hispanic culture and holidays such as Day of the Dead (El d a de los muertos). By the time the students reach the 8th grade, they are able to pray the Glory Be, the Our Father, and the Hail Mary in Spanish! Meet Our Teacher Mrs. América Farmer BA of Psychology, Universidad Juárez del Estado de Durango MA of Education, Universidad Autónoma del EStado de Morelos
Strange symmetry of observations

Due to the 0.6 c velocity of ship B relative to ship A, the observers(c) aboard A observe that ship B is now only 80 meters long. This foreshortening of ship B observed aboard A will seem strange to those who have not been exposed to this phenomenon in physics courses. But experiments have shown that this foreshortening occurs -- just as relativity theory predicts. The question we are concerned with is, WHY do this phenomenon and related phenomena occur? Can you think of any reason? We will see that the phenomena are natural consequences of a medium through which quanta of energy are propagated.

Even stranger is that observers(c) aboard ship B observe the passing of the ships as follows! They observe that A is the foreshortened ship. This also is predicted by relativity theory. But why should A be foreshortened? Nothing occurred that would cause any physical changes aboard A. Could it be that something occurred aboard ship B that caused the observers(c) aboard B to observe a foreshortening of A? We will see that this is the case. The quantum medium and the change in B's velocity through the medium cause physical changes aboard B that cause the observers(c) aboard B to observe a foreshortening of A.

There is a symmetry between the observations of observers(c) in the reference frame of A and the reference frame of B. This strange symmetry of observations is a natural consequence of the quantum medium, and it is compelling evidence of the medium's existence. It is not easy to understand quickly why this symmetry of observations occurs, and we hope the following explanation will succeed.
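For readers who want to see where the 80-metre figure comes from, it is the standard special-relativity length-contraction result. The arithmetic below assumes ship B has a rest length of 100 metres, which is implied but not stated in the passage above:

```latex
% Length contraction at v = 0.6c, assuming a rest length L_0 = 100 m for ship B:
L \;=\; L_0\sqrt{1 - \frac{v^2}{c^2}}
  \;=\; 100\,\mathrm{m}\times\sqrt{1 - 0.6^2}
  \;=\; 100\,\mathrm{m}\times\sqrt{0.64}
  \;=\; 100\,\mathrm{m}\times 0.8
  \;=\; 80\,\mathrm{m}
```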
3D Spintronic Microchip Developed
Category: Science & Technology Posted: January 31, 2013 03:17PM
One of the most important developments for city growth was steel strong enough to allow skyscrapers to be built. With these tall buildings, cities confined to a limited amount of area could still grow by building up. Now researchers are trying to do the same thing by developing 3D microchips, and those at the University of Cambridge have finally achieved this. The typical microchip has a fairly flat design and information within it can only move within a plane. The new microchip though has multiple layers that store information, with messenger layers in between to move the information from one layer to another. The information is not stored electronically but instead utilizes electron spin, or the magnetic moment of individual electrons. The storage layers are made of cobalt and platinum while ruthenium atoms act as the messengers. Using a laser probe and switching a magnetic field on and off allowed the researchers to watch as information climbed through the layers. The ability to have information travel through a 3D space like this could greatly affect the world of electronics by enabling much higher data storage densities. To achieve similar movement with today's technology, one would have to employ a series of transistors, which would be much larger than the atoms used in this design. Source: University of Cambridge
14 February 2013 Cosmic rays are extremely high-energy particles from far beyond our Solar System. They provide us with important samples of material from outer space. But the magnetic fields in our Galaxy and Solar System scramble their paths so much that we can't trace them back to their source. But now, using the remains of a star that died a thousand years ago, astronomers have found clues as to where exactly cosmic rays form. A long time ago, in the year 1006, a new dot of light appeared in the southern skies. It shone so brilliantly that it rivalled the brightness of the Moon and was even visible during the day! The source of this mysterious object was a huge star going through a dramatic end of life phase: it was exploding! Astronomers call the explosion of a star a ‘supernova’. Fast-forward about 1000 years and astronomers have finally located the strewn remains of this ancient star. A glowing, expanding ring of material is all that is left. You can see part of this ring in the second image. By looking at this supernova remnant, astronomers have found what they call the 'seeds' of cosmic rays. These particles can be seen zooming around inside the star remnant. However, they just don't have enough energy to be cosmic rays...yet. Astronomers believe they could go on to grow into cosmic rays by colliding with the material of the ring. This way they could eventually gain enough energy to fly off into space as fully-grown cosmic rays! Cool Fact: Astronauts have seen some truly amazing sights: the Northern lights from above, the curve of the Earth and the dark side of the Moon. On top of this, astronauts aboard Skylab, the Shuttle, Mir, and the International Space Station have reported seeing strange flashes of light. These are caused by cosmic radiation zipping through their eyes like teeny tiny bullets. When one of these particles strikes the nerves in the eye it triggers a false signal that the brain interprets as a flash of light.
Grade 1 Skip Counting Worksheets Grade 1 skip counting worksheets to rocket your students to the next level in Mathematics. Skip counting is a part of math foundations. It leads on to many concepts in the future and should be practiced daily. Skip counting by 2, 5 and 10 can be made fun. Give your students little challenges or worksheets to color. These skip counting worksheets have students shading, circling and drawing to keep them engaged. Challenge your children to work quickly and you’ll see their mental math skills increase! Simply click on the images below to download. To keep up to date with quality resources, follow my TpT store! Skip Counting Worksheets Skip Counting ideas Try to make skip counting fun and have your students do it daily. During your maths warm up is a great time. Here are some ideas to get you started: - Skip count together as a class. - Have skip counting races. - Take your class outside and do skip counting hopping. Make these into fun races. - Do skip counting skipping! Get some skipping ropes and have your students count their jumps by 2s. - Class skip counting – Sit in a circle and have students skip count after each other. Start with one child at 2, the next child says 4 and so on. - The teacher skip counts and when he/she pauses, the class has to call out the number. Give these skip counting games a try and watch your students improve! Need mental maths worksheets? Check out these resources. They’re a great way to get your students working on mental maths daily. No prep needed, just print and go!
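For teachers who also use a computer to prepare answer keys, the skip-counting sequences above are easy to generate with a few lines of Python. This is an optional extra, not part of the worksheets; the starting values and sequence length are just examples:

```python
# Print skip-counting sequences for a grade 1 answer key.
# step, how_many and start are illustrative values - adjust to match your worksheet.
def skip_count(step, how_many=10, start=None):
    """Return a list that skip counts by `step`, beginning at `step` by default."""
    first = step if start is None else start
    return [first + step * i for i in range(how_many)]

for step in (2, 5, 10):
    print(f"Counting by {step}s:", skip_count(step))
# Counting by 2s:  [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
# Counting by 5s:  [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
# Counting by 10s: [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```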
The nervous system functions as the command center for the body, regulating body activities. It is made up of two main parts—the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS consists of the brain and spinal cord, while the PNS is made up of the remaining nerves throughout the body including the peripheral nerves and the autonomic nerves. Nerves, or fibers, carry messages to and from the body and the brain. The nervous system is responsible for various body functions including: - Basic functions such as heartbeat, breathing and digestion - Sensory functions including touch, pressure, pain, sight, hearing, taste and smell - Movement and coordination - The ability to process information for learning, memory, language, thoughts, emotions and reasoning There are hundreds of diseases and disorders of the nervous system that impact the way the body functions. Injury, infection, disease, structural defects, blood flow disruptions and cancer can all cause nervous system damage. A few nervous system diseases and disorders are: Some experts consider headache the most common neurological disorder. Some headaches are classified as secondary headaches indicating they are due to another condition such as sinusitis, hangover, brain tumor, aneurysm, ear infection, high blood pressure and other various conditions. Primary headaches are those that have no clear underlying cause and include migraine, migraine with aura, cluster headache, tension headache, and trigeminal autonomic cephalalgia. - Symptoms of headache include pain that can be sharp, dull or throbbing. The pain may be on one or both sides of the head and may radiate across the head. Headache pain may last for only a short time or persist for days. - Treatment of headache varies greatly depending on the cause. Prescription or over-the-counter medications, hot or cold compresses and massage may be recommended treatment options for the various types of headaches. Parkinson’s Disease (PD) Parkinson’s disease is a progressive disorder that is thought to be caused by a combination of genetic and environmental factors, although the exact cause is not known. Brain cells in the basal ganglia and substantia nigra are affected. - Parkinson’s symptoms are generally mild in the early stages and include tremor in one hand, rigidity and loss of facial expression on one side. As the disease progresses, both sides of the body are affected. Patients experience loss of balance, blinking slows and rapid involuntary movement occurs. As the disease worsens, patients lose the ability to stand and eventually become incapacitated. In the later stages, some patients experience hallucinations or delusions. - Treatment options for PD vary depending on the symptoms the patient is experiencing. There is no cure for Parkinson’s and treatment is aimed at symptom management. Lifestyle changes that provide benefit include aerobic exercise, getting plenty of rest, stretching and balance exercises. Physical therapy and speech therapy may also be helpful. Symptoms are usually caused by decreased dopamine levels in the brain, but dopamine cannot be administered directly to the brain, so medications are prescribed to temporarily replace the dopamine or mimic dopamine action. Dosages are personalized to each patient, depending on their specific needs and symptoms. 
Many of these medications interact with certain foods, other prescription medications and over-the-counter supplements and may need to be administered in various doses throughout the day to control symptoms. Surgical treatment of Parkinson’s disease is reserved for patients who have exhausted other treatment options. Some surgical interventions used in the past were aimed at destroying specific areas of the brain to control motor symptoms. These surgeries are rarely used today. Current surgical options include deep brain stimulation to disrupt specific brain signals and Duopa therapy surgery. Duopa is a gel form of carbidopa/levodopa that requires surgical insertion of a tube through the patient’s stomach into the intestine. The tube is connected to an external pump that administers the medication. Research is ongoing for the treatment of Parkinson’s Disease including experimental stem cell therapy. Multiple sclerosis (MS) Multiple sclerosis occurs when the body’s own immune system attacks the central nervous system – the brain and spinal cord. Inflammation of the nerve fibers and the myelin sheath (a protective coating that covers the nerves) causes damage, disrupting or blocking signals from the nerves to various parts of the body, which causes the symptoms of MS. The exact cause of multiple sclerosis is unknown, but the chronic disease may be due to a combination of factors including genetics, immune system defects, infections or environmental causes. Symptoms vary greatly and are not the same for everyone. Patients often experience fluctuations and changes in symptoms, which may come and go for days, weeks or years. MS can affect all areas of the body but common symptoms include: - Problems walking. Patients may experience weakness, spastic movement, balance problems, sensory loss and fatigue. - Dysesthesia (MS Hug). Many patients report a squeezing or hugging sensation around the torso as their first symptom. - Fatigue. Marked fatigue can interfere with daily activities. - Spasticity. Muscle contracture, stiffness, involuntary movement and spasm is typically experienced in the legs but can happen in the arms, as well. - Numbness or tingling. Numbness in the face, arms, legs or body is an early symptom. - Weakness. Damage to nerves and lack of muscle usage can lead to weakness. - Vision disturbances. MS patients may complain of double vision, eye pain with movement, diminished vision, blurring and partial or complete loss of vision. - Bowel and bladder problems. Constipation and loss of bowel and bladder control may occur. Patients may also suffer from bladder emptying problems. - Cognitive changes. Processing information, learning, problem-solving ability and loss of attention may all occur in MS patients. - Emotional changes and depression. Mood changes, anxiety and difficulty controlling emotional responses are just some of the symptoms seen in multiple sclerosis. Depression is also prevalent. Treatment of MS is aimed at managing symptoms. Medications such as Interferon, Glatiramer acetate, monoclonal antibodies and chemotherapy drugs may all be helpful in decreasing symptom frequency, lessening exacerbations and treating increasing symptoms. Intravenous glucocorticoids – steroids – may be administered to reduce the severity of new or recurring attacks, followed by a course of steroid pills. Adrenocorticotropic hormone (ACTH) gel and plasmapheresis, a “blood-cleansing procedure,” are additional treatment options. 
An inpatient rehabilitation program that includes physical, occupational and speech therapies is also beneficial in maintaining or improving function and quality of life. Guillain-Barré Syndrome (GBS) Guillain-Barré is an autoimmune disorder, meaning the body’s own immune system attacks the peripheral nervous system (the nerves outside the brain and spinal cord). Cases range from mild and brief to severe, causing paralysis and the inability to breathe independently. GBS typically occurs after a viral or bacterial infection and occasionally is triggered by surgery. Guillain-Barré is seen in those of all ages and occurs with equal frequency in both men and women. Most patients recover fully from GBS and even those with severe symptoms recover most of their ability. Symptoms of GBS often start with numbness and tingling in the hands and feet. Weakness is a key symptom and patients may have difficulty walking or climbing steps. Other symptoms include: - Pain that may be worse at night - Muscle weakness and unsteadiness - Swallowing or speaking difficulty - Difficulty breathing - Bladder control issues - Heart rate or blood pressure problems Treatment of Guillain-Barré involves therapies to lessen the severity of symptoms and speed recovery. Since symptoms can progress rapidly, prompt treatment is required. Two treatment methods are commonly used. - Intravenous Immunoglobulin Therapy (IVIg) uses proteins the body makes to fight organisms that cause infection. The immunoglobulins are collected from a pool of thousands of donors and administered as an IV infusion to reduce the effects of the immune system attack. - Plasmapheresis (plasma exchange) is done by filtering plasma – the liquid portion of the blood – to remove the antibodies that are causing the nerve damage. The clean plasma is then returned to the body. Guillain-Barré patients are usually transferred from acute care to an inpatient rehabilitation hospital where they receive three hours of therapy a day, five days a week to help them regain strength and resume activities of daily living before returning home. Stroke or Cerebrovascular Accident (CVA) Commonly referred to as stroke, CVA occurs when a blood vessel in the brain ruptures (hemorrhagic stroke) or a clot blocks the blood flow to the brain (ischemic stroke). Depending on the part of the brain involved and extent of the damage, neurological complications can occur in different parts of the body. A stroke that happens in the left side of the brain will affect the right side of the body, while a stroke on the brain’s right side will impact the body’s left side. A stroke in the brain stem impacts both sides of the body. Symptoms of stroke should send patients to the nearest hospital because immediate medical care can help prevent extensive or permanent damage. The acronym, F.A.S.T., is used to identify stroke symptoms. F – Face Drooping A – Arm weakness S – Slurred speech T – Time to call 911 Other stroke symptoms depending on the area of the brain where the stroke occurred include: - Paralysis on one or both sides of the body - Speech/language difficulties - Vision problems - Memory loss - Changes in behavior Treatment of stroke depends on the type of stroke. Immediate treatment is vital in reducing disability and improving recovery outlook. It is imperative that patients with stroke symptoms seek immediate medical care.
Treatment options include: - Thrombolytic agents, such as tissue plasminogen activator, or tPA, can be administered to break up the clot, if a patient is treated within the first 3 hours of symptoms - Blood thinners may be used to treat ischemic stroke - Surgery to remove the clot in ischemic stroke or to stop the bleeding in hemorrhagic stroke - Endovascular embolization is a procedure done under general anesthesia to treat hemorrhagic stroke. A catheter, or tube, is inserted into an artery in the groin allowing the physician to access the point where bleeding occurred. Once the vessel is identified, it is then sealed to stop the bleed - Stroke rehabilitation begins almost immediately after the condition is stabilized. Therapy is individualized but may include physical, speech and occupational therapy. - Additional medications and lifestyle modifications may be needed to address health issues and lessen the risk of a second stroke The above conditions are just a few of the hundreds of neurological disorders. Diagnosis is critical in identifying the disorder and determining the proper treatment. The content of this site is for informational purposes only and should not be taken as professional medical advice. Always seek the advice of your physician or other qualified healthcare provider with any questions you may have regarding any medical conditions or treatments.
What Is A Mental Illness? Mental illnesses are disturbances in a person’s thinking, feeling, or behavior, or a combination of these, that reflect a problem in mental function. They cause distress or disability in social, work, or family activities. Just as “physical illness” is used to describe a range of physical health problems, “mental illness” means the same as it encompasses various mental health conditions. Mental illness can be defined as a health condition that involves changes in emotion, thinking, or behavior—or a combination of these. If left untreated, mental illnesses can have a huge impact on daily living, including your ability to work, care for family, and relate and interact with others. But, like other medical conditions like diabetes or heart disease, there is no shame in having a mental illness, and support and treatment are available. Mental Illnesses Are Incredibly Common According to SAMHSA, Mental illnesses are incredibly common in the United States. Each year: - 1 in 5 U.S. adults experience mental illness - 1 in 25 U.S. adults live with serious mental illness - 1 in 6 U.S. youth aged 6 to 17 years experience a mental health illness What Are The Types Of Mental Illnesses? There are hundreds of mental illnesses, but instead of listing all of them, here is the most common: - Anxiety disorders, including panic disorder, obsessive-compulsive disorder, and phobias - Depression, bipolar disorder, and other mood disorders - Eating disorders - Personality disorders - Post-traumatic stress disorder - Psychotic disorders, including schizophrenia What Signs And Symptoms To Look Out For Everyone experiences ups and downs in their mental health. A stressful experience, such as the loss of a loved one, might temporarily bring you down and take a toll on your psychological well-being. For your situation to be considered a mental illness, your symptoms must cause significant distress or interfere with your social, occupational, or educational functioning and last for a significant period. Each disorder has its own set of symptoms that can vary greatly in severity, but common signs of mental illness in adults and adolescents can include: - Excessive fear or uneasiness: Feeling afraid, anxious, nervous, or panicked - Mood changes: Deep sadness, inability to express joy, indifference to situations, feelings of hopelessness, laughter at inappropriate times for no apparent reason, or thoughts of suicide - Problems thinking: Inability to concentrate or problems with memory, thoughts, or speech that are hard to explain. - Sleep or appetite changes: Sleeping and eating dramatically more or less than usual; noticeable and rapid weight gain or loss - Withdrawal: Sitting and doing nothing for long periods or dropping out of previously enjoyed activities It’s important to know that the presence of one or two of these signs alone doesn’t mean that you have a mental illness. But it does indicate that you may need further evaluation. What Causes My Mental Illness? There is no one reason or single cause of mental illness. Instead, it’s thought that they stem from a wide range of factors or a combination. The following are some factors that may influence whether someone develops a mental illness: - Biology: Changes in your brain chemistry, such as an imbalance in neurotransmitters, the chemical messengers within the brain, are often associated with mental disorders. - Environmental exposures: Children exposed to certain substances in utero may be at higher risk of developing mental illness. 
For example, a mother's substance use or exposure to dangerous chemicals while pregnant can increase that risk. - Genetics: Mental illnesses tend to run in families. People who have a relative with a mental illness—such as autism, bipolar disorder, major depression, and schizophrenia—may be more likely to develop it. - Life experiences: The stressful life events that cause PTSD may contribute to the development of mental illness. Get Help Today At Agape Treatment Center At the Agape Behavioral Healthcare Network, we take mental health issues of all varieties very seriously and provide a safe, nurturing environment where individuals can learn about the disorders that are destabilizing their emotional lives. Our licensed mental health clinicians will fully assess any individual who appears to have, feels they may have, or has previously been diagnosed with a mental health disorder, and meet them exactly where they are in life.
Once your unit question has been developed you can plan the rest of your lessons for the unit. You should be teaching to the answer for that unit question. Provide opportunities for your students to explore the question from multiple points of view. Post the unit question around your room. Assess at the end of the unit based on that question. This will provide your students a deeper understanding of history that we are all seeking as Social Studies teachers.
Junk DNA in two paragraphs "Most people do not realize that all our genes only comprise about 3% of the total human genome. The rest is basically one large black box," says Kevin Verstrepen, heading the research team. "Why do we have this DNA, what is it doing?" Scientists once believed that most of the DNA outside of genes, the so-called non-coding DNA, was useless trash that either entered the genome and never left or was remnants from earlier life. One commonly known example of such 'junk DNA' are the so-called tandem repeats - short stretches of DNA that are repeated head-to-tail. "At first sight, it may seem unlikely that this stutter-DNA has any biological function," says Marcelo Vinces, one of the lead authors on the paper. "On the other hand, it seems hard to believe that nature would foster such a wasteful system." The international team of scientists found that stretches of tandem repeats influence the activity of neighboring genes. The repeats determine how tightly the local DNA is wrapped around specific proteins called 'nucleosomes', and this packaging structure dictates to what extent genes can be activated. Interestingly, tandem repeats are very unstable - the number of repeats changes frequently when the DNA is copied. These changes affect the local DNA packaging, which in turn alters gene activity. In this way, unstable junk DNA allows fast shifts in gene activity, which may allow organisms to tune the activity of genes to match changing environments -a vital principle for survival in the endless evolutionary race. Evolution in test tubes To further test their theory, the researchers conducted a complex experiment aimed at mimicking biological evolution, using yeast cells as Darwinian guinea pigs. Their results show that when a repeat is present near a gene, it is possible to select yeast mutants that show vastly increased activity of this gene. However, when the repeat region was removed, this fast evolution was impossible. "If this was the real world," the researchers say, "only cells with the repeats would be able to swiftly adapt to changes, thereby beating their repeat-less counterparts in the game of evolution. Their junk DNA saved their lives." Article: Marcelo D. Vinces, Matthieu Legendre, Marina Caldara, Masaki Hagihara, Kevin J. Verstrepen, 'Unstable Tandem Repeats in Promoters Confer Transcriptional Evolvability', Science 29 May 2009: Vol. 324. no. 5931, pp. 1213 - 1216 DOI: 10.1126/science.1170097
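To make the idea of a head-to-tail ("tandem") repeat concrete, here is a small, self-contained Python sketch that scans a DNA string for short motifs repeated back-to-back. The sequence, motif length, and repeat threshold are invented for illustration and are unrelated to the yeast promoters analysed in the paper:

```python
# Find head-to-tail (tandem) repeats of short motifs in a DNA sequence.
# The example sequence and parameters are illustrative only.
def find_tandem_repeats(seq, motif_len=3, min_copies=3):
    """Yield (start, motif, copies) for motifs repeated back-to-back at least min_copies times."""
    i = 0
    while i + motif_len <= len(seq):
        motif = seq[i:i + motif_len]
        copies = 1
        while seq[i + copies * motif_len : i + (copies + 1) * motif_len] == motif:
            copies += 1
        if copies >= min_copies:
            yield i, motif, copies
            i += copies * motif_len   # skip past this repeat tract
        else:
            i += 1

example = "ATTGCAGCAGCAGCAGTTACGACGACGTT"
for start, motif, copies in find_tandem_repeats(example):
    print(f"position {start}: '{motif}' repeated {copies} times")
```

Running it on the example string simply reports each repeat tract it finds, with its position and copy number; in the real genome it is the instability of exactly these tracts that the researchers link to fast shifts in gene activity.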
This Climate Change Chemistry unit plan also includes: Assist your class with learning the importance of caring for our environment as they complete this fun-filled lesson on climate change. Individuals perform simulations related to greenhouse gases, atmospheric gases, and the overall negative effects of climate change. They participate in a group discussion to conclude the lesson and ensure their understanding of the material. - Have students complete worksheets and divide them into groups to discuss answers - Instruct the class to research the effects of climate change in 100 years - Do not use mercury-based thermometers, as they are a health hazard if broken - Ensure the class is careful to avoid breaking lightbulbs during this activity - Multiple activities engage learners of all styles and allow participation from the entire class - Assessments contain answers for easier grading
Thyroid cancer is an abnormal growth of the cells of the thyroid gland, a butterfly-shaped gland located in front of your neck just below the voice box (larynx). The thyroid gland secretes hormones that help regulate the body’s metabolism and levels of calcium. Thyroid cancer is more common in women than men. People who are exposed to high levels of radiation to the neck and have a family history of thyroid cancer and goiter (enlargement of thyroid gland) are at a higher risk of developing thyroid cancer. There are four types of thyroid cancer: - Papillary thyroid cancer: Cancer that begins in the follicular cells and usually spreads slowly. It is the most common type of thyroid cancer and can be cured especially if early diagnosis is made - Follicular thyroid cancer: Cancer that develops in the follicular cells and usually spreads slowly. Like papillary thyroid cancer, it can be cured with early diagnosis - Medullary thyroid cancer: Cancer that arises from C cells of the thyroid gland. It produces abnormally high amounts of the hormone calcitonin. It tends to grow slowly and can be treated before spreading to the other parts of the body - Anaplastic thyroid cancer: Cancer that starts in the follicular cells of the thyroid and grows and spreads quickly to other parts of the body. It is the least common type but the most aggressive form of thyroid cancer As the cancer develops, you may notice a lump or swelling in front of your neck, pain in the neck or throat, difficulty in swallowing or breathing, cough, and changes or hoarseness in your voice. Your doctor will recommend a treatment plan based on the results of diagnostic tests such as blood tests, thyroid biopsy, thyroid scan, and ultrasound of the thyroid gland. - Thyroid scan: A thyroid scan is a nuclear medicine test that allows your doctor to check how well the thyroid gland is functioning. It uses a radioactive tracer and a scanner to measure how much tracer the thyroid gland absorbs from the blood - Ultrasound of the thyroid: It uses sound waves to create images of your body. This test uses a lubricating gel and a transducer rubbed over the neck to look at the size and texture of the thyroid gland. This test can tell whether a nodule is a fluid-filled cyst, or a mass of solid tissue Depending upon the type of thyroid cancer present, your doctor may choose one or more of the following thyroid cancer treatment options: - Surgery: Generally, surgery is the most common treatment of thyroid cancer. Total thyroidectomy is a surgical procedure to remove all of the thyroid gland. Subtotal or partial thyroidectomy is a surgery to remove part of the thyroid gland. Your doctor may also remove the lymph nodes if the cancer has spread to the lymph nodes - Radio-iodine: As thyroid tissue takes up iodine, a form of radioactive iodine (mostly I-131) can be used as a very targeted treatment to deliver a dose of radiation to thyroid tissue and therefore minimise radiation exposure to other tissues. Radio-iodine can be used for diagnostic imaging, to ablate any residual normal thyroid tissue or for targeted treatment of thyroid cancer. - Chemotherapy: It is a type of cancer treatment that uses drugs to destroy cancer cells. Chemotherapy may be used to cure the cancer, slow its growth and spread, and lessen the pain. 
Chemotherapy is used in patients with cancer that cannot be treated with surgery or is unresponsive to radioactive iodine, as well as for patients with cancer that has spread to other parts of the body - Radiation therapy: This method uses high-energy beams of radiation to destroy the cancer cells.
In this ever-evolving era, almost all manual jobs are being automated, making things easier for human beings. This is due to one of today's most popular technologies: machine learning. Currently, companies and businesses are leveraging machine learning algorithms to provide better services and meet customers’ expectations. Machine learning has a wide range of applications in a variety of industries. Image identification, self-driving cars, speech recognition, online fraud detection, traffic prediction, product recommendations, virtual personal assistants, medical diagnosis, stock market trading, and so on are just a few examples of machine learning applications. Supervised learning and unsupervised learning are the two fundamental approaches to machine learning. The primary difference between these two approaches is that the first one uses labeled data to predict the output, whereas the latter does not. This article explores the differences between supervised and unsupervised learning. But before that, we shall introduce you to what supervised and unsupervised learning are, with their upsides and downsides. So, let us get started. What is Supervised Learning? Supervised learning is a machine learning approach that uses labeled datasets to train, or supervise, a model so that it can predict the output accurately. As a result, we can define supervised learning as learning that takes place in the presence of a supervisor or teacher. Let's look at a simple example of supervised learning. Consider the following scenario: we have a basket full of various fruits. Those fruits must be identified and classified using the supervised learning model. The model learns to recognize fruits from the input data and the corresponding output labels we provide. As a result, we must train the machine with each fruit, such as: - If the object is round in shape, has a depression on the top, and is red, then it is an apple. - If the object is round, has a very small depression on the top, and is lime yellow, it is sweet lime. - The long curving cylindrical object with green-yellow color is labeled as a banana. After we train the model with the above input/output pairs, we shall test it by providing a new fruit as the input, say a banana. The model will identify it by its shape and color, confirm it is a banana, and place it under the ‘banana’ category. Therefore, a supervised model first learns from the training data provided and uses it to predict the output. Supervised learning is classified into two different kinds of algorithms, namely classification and regression. Classification algorithms classify the test data into specific categories accurately. For example, these algorithms can be used to separate apples from bananas or to determine whether an individual will default on a loan or not. A real-world example that uses a classification algorithm is Gmail, as it separates spam emails from your inbox. Some typical classification algorithms are support vector machines, decision trees, linear classifiers, and random forests. Regression algorithms identify relationships between dependent and independent variables. They are used when the output variable is a real value, like weight or revenue. Linear regression, logistic regression, and polynomial regression are some common types of regression algorithms. Some popular applications of supervised learning are spam detection, face recognition, weather forecasting, stock price predictions, customer discovery, text categorization, etc. 
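As a concrete companion to the fruit example above, here is a minimal supervised classification sketch. It assumes scikit-learn is installed; the tiny hand-made dataset of (roundness, yellowness) scores is purely illustrative and not part of the original article:

```python
# Minimal supervised learning example: a classifier trained on labeled fruit data.
# Features are made-up numeric encodings: [roundness (0-1), yellowness (0-1)].
from sklearn.tree import DecisionTreeClassifier

X_train = [
    [0.9, 0.2],  # apple: round, mostly red (low yellowness)
    [0.9, 0.1],  # apple
    [0.8, 0.9],  # sweet lime: round, lime-yellow
    [0.9, 0.8],  # sweet lime
    [0.1, 0.7],  # banana: long and curved, green-yellow
    [0.2, 0.8],  # banana
]
y_train = ["apple", "apple", "sweet lime", "sweet lime", "banana", "banana"]

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)            # "training" = learning from input/output pairs

new_fruit = [[0.15, 0.75]]             # a long, yellowish object the model has not seen
print(model.predict(new_fruit))        # expected to print ['banana']
```

The point is simply that the model is given both inputs and labels during training, and is then asked to label an input it has never seen.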
Some benefits of Supervised Learning are: - Supervised learning predicts the output depending upon the input/output pair provided to it. Therefore, the results are highly accurate, as it learns from the data provided. - It is ideal for solving several types of real-world computation problems. - With the help of previous experience, it helps you optimize the performance criteria. - You can determine the number of classes in the dataset. - The outputs in supervised learning are likely to be known as the classes used are known. Here are some downsides of Supervised Learning: - It is pretty challenging to classify large data sets using a supervised learning approach. - We need to make the machine aware of each data item in a dataset. Therefore, it consumes a lot of time. - While training the classifier, it is essential to choose several good examples from each class. What is Unsupervised Learning? Unlike supervised learning, unsupervised learning does not use labeled data, and its principal goal is to identify hidden patterns and structures from the input data. Therefore, it does not require any supervision or human intervention to find hidden patterns from the input data, as it does on its own. Hence, the name "unsupervised learning." To understand unsupervised learning better, we shall consider one example. Consider that we provided the machine with an image containing cats and dogs, and there is no training data provided, as we did in supervised learning. As the machine is not trained with input-output pairs, it does not know the features of cats and dogs. It classifies them depending on their similarities, differences, and patterns without any previous knowledge. Unsupervised learning works by identifying patterns from data that were previously undetected. There are two different types of unsupervised learning approaches , namely clustering, and association. It classifies unlabelled input data based on their similarities or differences. For example, we can use clustering to group customers depending on their purchasing behavior. It finds different relationships among the input dataset’s variables. The association is generally used for recommendation engines and market basket analysis. Some popular applications of unsupervised learning are fraud detection, conducting accurate basket analysis, identifying human errors during data entry, etc. The benefits of Unsupervised Learning are: - It does not work on labeled data and does not require training or supervision. - Unsupervised learning uncovers hidden patterns from datasets that humans cannot visualize and are incredibly important for companies and businesses. - Clustering automatically divides the dataset into groups based on their similarities. The downsides of Unsupervised Learning are: - The outputs produced in unsupervised learning are less accurate than the ones in supervised learning. - We cannot predict the outputs, as the number of classes is not known. Supervised vs Unsupervised Learning: A Head-to-Head Comparison The below table highlights the differences between Supervised and Unsupervised learning. 
| Parameters | Supervised Learning | Unsupervised Learning |
| --- | --- | --- |
| Input data | Supervised learning algorithms work on labeled data. | Unsupervised learning algorithms do not require labeled data. |
| Process | We provide the input data and its corresponding output to the machine in supervised learning. | We only provide the input data to the machine in unsupervised learning. |
| Algorithms | Supervised learning algorithms are Support Vector Machines, Random Forest, Classification Trees, Linear and Logistic Regression, and Naive Bayes. | Unsupervised learning algorithms are Hierarchical Clustering, K-means, Anomaly Detection, K-nearest Neighbour (KNN), Neural Networks, Apriori Algorithm, Principal Component Analysis, and Independent Component Analysis. |
| Results accuracy | The output of the supervised learning model is more accurate and precise. | The output of the unsupervised learning model is less accurate. |
| Output | It predicts the output depending on the training data provided. | It learns the input data and uncovers hidden patterns from it. |
| Supervision | We need to train or supervise the supervised learning model with input/output pairs. | Unsupervised learning does not require any supervision. |
| Types of problems | Classification and Regression are the two different types of problems in supervised learning. | Clustering and Associations are two different types of problems in unsupervised learning. |

Which One to Choose - Supervised or Unsupervised? Choosing the right machine learning technique for a particular task is pretty challenging, as every machine learning problem is different. To make an appropriate pick between unsupervised and supervised learning, consider the below points: - Evaluate your input: Verify whether your data is labeled or unlabeled. Also, check whether there are experts available to support additional labeling. - Define your goals: Verify whether a problem is recurring or defined. Furthermore, check if the algorithm requires predicting new problems. - Review your options for algorithms: Check whether the available algorithms best fit the problem in terms of dimensionality, i.e., number of features, characteristics, or attributes. Also, verify whether these algorithms support your data volume and structure. Supervised and unsupervised learning are the two most commonly used machine learning techniques. The first one produces accurate results but is not ideal for classifying large volumes of data, whereas the latter can handle large volumes of data but carries a higher risk of producing inaccurate results. We hope you found all the major differences between supervised and unsupervised learning in this article. However, depending on the structure and volume of your data, make the appropriate choice between these two approaches.
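To complement the comparison above, here is the unsupervised counterpart of the earlier classification sketch: a minimal clustering example with no labels at all. Again, scikit-learn is assumed, and the customer numbers are invented for illustration:

```python
# Minimal unsupervised learning example: K-means groups customers with NO labels.
# Each row is [average basket size (items), visits per month] - invented numbers.
from sklearn.cluster import KMeans

customers = [
    [2, 1], [3, 2], [2, 2],        # occasional, small-basket shoppers
    [12, 8], [14, 9], [13, 10],    # frequent, large-basket shoppers
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)   # the algorithm invents the groups itself

print(labels)                   # e.g. [0 0 0 1 1 1] - which index means which group may vary
print(kmeans.cluster_centers_)  # the "typical" customer in each discovered group
```

Here the algorithm is never told what the groups mean; it simply discovers that the data falls into two clusters, which is exactly the labeled/unlabeled distinction summarised in the table.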
What are Responsibilities? As we get older and become adults, we begin to have more choices about where we want to go, what we want to do, and how we want to live our lives. But we are also expected to be more independent and do more things for ourselves. These expected behaviours are called responsibilities. My Responsibilities as an Adult There will be many new responsibilities to prepare for as you get ready to leave school. These responsibilities include: Taking care of your own space and items. Whether you live on your own, with family, or friends, you will need to tidy your home and carefully put things where they belong. Keeping a tidy home and taking care of your own items will help make sure things don’t get lost or broken. You are taking care of your home when you: - Put things where they belong - Cook and clean up afterwards - Shop for and put away groceries - Clean and put away laundry - Put out rubbish and recycling - Feed pets Managing your money. You will need to make sure the money you get each week, or each month, is enough to cover the most important expenses. Making good decisions about how to spend your money means that you will have the things you really need to live, like housing, food, and utilities so that you have lighting, phone service, and heat in the home. We will talk a lot more about managing money in the next section. Managing your time. As a child and teenager, much of your time would have been spent at school, and your time spent at home was probably often scheduled for you. Your parents or another adult may have helped you plan visits with friends, made sure you got to the cinema on time, and organized appointments like visiting the doctor. As an adult, you will need to take more responsibility in planning your calendar and daily schedule. You will have more choices about what to do in your free time, but you will also need to plan time to do the things that need to be done; like work, studying or chores. Treating people with Respect. Respect is an important part of getting along with others, having friends, and other relationships. When you respect people and treat them well, they feel safe and happier to be around you. We can show respect by: - Listening to others when they speak - Not intentionally causing harm - Helping others - Not using or taking other people’s things without permission - Keeping spaces tidy and clean - Using kind and thoughtful words - Doing what you said you will Being a good citizen & obeying the law. As an adult, we also have responsibilities to all of the people around us, including those we don’t know but who live near us. These are very important responsibilities called laws, that are there to make sure everyone is safe and well. Breaking a law can get you in trouble with the police or gards. There are many, many laws, but some of the most important ones for you to remember are: - Do not harm anyone - Do not threaten to harm anyone - Do not enter a persons home or business without permission - Do not steal - Do not drive without a license - Do not force anyone to do something they do not want to do Many people use computers, tablets, and smart phones to keep in touch, and share pictures and information. It is important to remember that there are also laws about what we do and say online and in texts. 
For example, you must not: - Threaten to cause physical harm to others - Post comments that will cause emotional harm - Share inappropriate pictures - Use computers to access certain types of information As an adult, you will have the freedom to make many important decisions. This is very exciting, but is also a very big responsibility because some of these decisions can have a very big impact on your life. Small or Minor Decisions Every day, you will make decisions about things that probably won’t make a huge difference in your life. This might include: - what to wear - what to make for dinner - which movie to watch at the cinema - where to grocery shop Big or Major Decisions Big decisions are the ones that impact many different parts of your life and will stick with you for a long time. Big decisions might include: - Where to live - Whether to start further education or look for work - Making a big purchase (e.g. computer, or maybe - even a car) - Starting or ending a romantic relationship Making Big Decisions We always hope that we will stay happy with the decisions we make, but sometimes the decisions we make can have unwanted or harmful effects. So, it is important to think carefully about these big decisions and get help when we need it from people we trust. When making big decisions, it can be helpful to: List all the options you can think of - For each option, list all of the possible reasons why it might be a good idea. What good things might happen if you make that decision. - For each option, list all of the possible reasons why it might not be a good idea. What bad or unhelpful things might happen if you make that - Review all of the possible effects. Think about what is most important to you and decide whether you are ok if the possible bad things happen. Remember, this does not mean that the bad things will happen, you just need to be ok if they do. It can also be very helpful to talk to family, friends, or other adults that you know well and trust. But you should never feel that someone is trying to persuade you, or bully you in to making a certain choice. Remember, YOU must feel good about the decision. Also, know that it is OK to change your mind. Sometimes, big or important decisions may require time or many steps to actually make happen, so it can be very helpful to set a goal and make a plan. A goal is just a clear statement of something you want to do or achieve. For example, if you decide you want to be healthier and feel better, you might set goals to: - Get 7-9 hours of sleep every night - Drink 8 glasses of water each day - Exercise 3-5 days per week Other goals might require much more thought and planning. For example, you may decide that you want to want to get a job working in a restaurant. So, your long-term goal is ‘to get paid employment working at least 20 hours per week in a restaurant.’ But getting a job involves many different skills and behaviours. You will need to: - Decide what role you want, or what would best suit you (e.g. cooking, cleaning, interacting with customers) - Have the skills needed to do the job (e.g. cook, wash dishes, know how to safely operate any equipment) - Complete applications and job interviews - Get along well with others in the workplace - Get to and from work When you look at this list, you may realise that you need to set other, more immediate goals related to travel or learning job-related skills. When you take time to make a plan, it helps you make short-term goals that can be achieved right now. 
If we only think about the end goal, sometimes it may feel like it will never happen. If we break it down to smaller steps or short-term goals, it can help us stay positive and keep going until we get to our end goal. It can also be very helpful to talk to family, friends, or other adults that we know well and trust.
In our increasingly digital world, the importance of creating web applications that are accessible to all cannot be overstated. Accessible web applications ensure that all users, including those with disabilities, have equal access to information and functionality. This article explores the significance of web accessibility in development, outlines its key principles, and discusses practical strategies and tools to build inclusive web applications. The Importance of Accessible Web Applications Accessible web applications are vital for several compelling reasons: - Ethical Responsibility: Ensuring that digital services are accessible to all, regardless of their abilities, is a moral imperative. It reflects a commitment to inclusivity and equality in the digital space. - Legal Compliance: Numerous laws and guidelines, such as the Americans with Disabilities Act (ADA) and the Web Content Accessibility Guidelines (WCAG), mandate accessibility. Non-compliance can result in legal consequences. - Wider Audience Reach: By being accessible, web applications can reach a broader audience. This includes not only individuals with permanent disabilities but also those with temporary impairments and the elderly. - Enhanced User Experience: Accessible design often aligns with good design principles, benefiting all users. Features like clear navigation and legible text improve the overall user experience. - Search Engine Optimization (SEO): Accessible websites tend to have better SEO, as search engines favor sites that provide a good user experience. - Brand Image and Reputation: Demonstrating a commitment to accessibility can enhance a company’s brand and reputation, showing that they care about all users. Core Principles of Web Accessibility The Web Content Accessibility Guidelines (WCAG) 2.1 offer a framework to make web content more accessible, focusing on four principles: - Perceivable: Information and user interface components must be presentable in ways users can perceive. - Operable: User interface components and navigation must be operable. - Understandable: Information and the operation of the user interface must be understandable. - Robust: Content must be robust enough to be interpreted by a wide variety of user agents, including assistive technologies. Strategies for Building Accessible Web Applications 1. Semantic HTML Use semantic HTML tags to convey the structure and meaning of web content, aiding screen readers in interpreting the page. 2. Keyboard Navigation Ensure your application is fully navigable using a keyboard, catering to users who cannot use a mouse. 3. ARIA Roles Use ARIA roles to communicate the role, state, and functionality of elements when HTML semantics are insufficient. 4. Color Contrast and Text Size Maintain sufficient color contrast between text and backgrounds and allow text resizing without layout breakage. 5. Alt Text for Images Provide descriptive alternative text for images for screen reader users. 6. Form Accessibility Ensure forms have clear labels and descriptive error messages. 7. Multimedia Content Offer captions and transcripts for audio and video content. 8. Avoid Time-Limited Content Limit or avoid time-bound content or provide ample time for interaction. Tools and Testing Upcoming Accessibility Trends in Web Development The landscape of web accessibility is continually evolving, driven by technological advancements and changing standards. 
As we look to the future, several key trends and new technologies are emerging that are set to shape the way we approach accessible web development. 1. Artificial Intelligence and Machine Learning AI and machine learning are beginning to play a significant role in enhancing web accessibility. For instance, AI can be used to automatically generate alt text for images, providing descriptions for visually impaired users. Additionally, machine learning algorithms are improving the capability of screen readers to understand and interpret complex web content, including dynamic and interactive elements. Looking forward, AI might also be used to personalize web experiences for users with disabilities, automatically adjusting layouts and navigation based on individual needs. 2. Voice Navigation and Control Voice-controlled interfaces are becoming increasingly prevalent. This trend is highly beneficial for users with physical disabilities or those who prefer voice commands over traditional navigation methods. As voice recognition technology becomes more advanced, we can expect web applications to be more seamlessly navigable using voice commands, making the web more accessible for everyone. 3. Advanced Gesture Control Gesture control technology, like that used in virtual and augmented reality applications, is starting to find its way into web accessibility. This can offer a new way for users with motor impairments to interact with web content. As this technology develops, we may see web applications that can be navigated through simple hand gestures captured via cameras or specialized devices. 4. Augmented Reality (AR) and Virtual Reality (VR) AR and VR technologies are not just for gaming and entertainment; they have potential applications in web accessibility. These technologies can create immersive, 3D web experiences that are more intuitive and engaging for users with certain types of disabilities. For instance, AR can overlay additional context or descriptions on web content, enhancing understanding for users with cognitive impairments. 5. Improved Standards and Regulations As technology evolves, so too do the standards and regulations governing web accessibility. We can expect updated versions of the Web Content Accessibility Guidelines (WCAG), with a greater emphasis on new technologies like AR, VR, and AI. These evolving standards will continue to guide developers in creating accessible web experiences. 6. Personalization and Adaptability The future of web accessibility lies in personalization – the ability of websites to adapt to the individual needs of users. This could include automatic adjustments in color contrast, text size, or layout based on user preferences or disabilities. As personalization technology advances, users with disabilities will have a more tailored and accessible web experience. 7. Internet of Things (IoT) and Accessibility The integration of IoT with web accessibility could lead to smarter, more responsive environments. For example, a web application could automatically interact with a user’s environment, adjusting lighting or sound based on the content being accessed. Accessible web applications are not just a legal obligation but a reflection of a commitment to inclusivity and equality in the digital realm. By adhering to the outlined principles and strategies, developers can create applications that are not only compliant and ethically sound but also more user-friendly and inclusive. 
Accessibility should be an integral part of the development process, with ongoing efforts to maintain and improve accessibility standards.
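One practical way to keep accessibility part of the ongoing development process is to run automated checks alongside regular tests. The sketch below is a minimal example assuming the open-source axe-core library (npm package axe-core); the function name and failure behavior are illustrative. Automated rules catch only a subset of issues, so manual keyboard and screen reader testing remains necessary.

```typescript
// Minimal sketch, assuming axe-core is installed and the page under test is
// loaded in a browser or jsdom environment.
import axe from "axe-core";

async function checkAccessibility(root: Document | Element = document): Promise<void> {
  // axe.run scans the given subtree against its accessibility rule set.
  const results = await axe.run(root);

  for (const violation of results.violations) {
    // Each violation reports the rule id, estimated severity, and offending nodes.
    console.warn(`${violation.id} (${violation.impact ?? "unknown"}): ${violation.description}`);
    for (const node of violation.nodes) {
      console.warn(`  -> ${node.html}`);
    }
  }

  if (results.violations.length > 0) {
    // Failing loudly lets a CI pipeline block regressions.
    throw new Error(`${results.violations.length} accessibility violation(s) found`);
  }
}

// Example usage inside a test:
// await checkAccessibility(document.querySelector("main")!);
```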
Solar-Powered Highways: Paving the Way for a Greener Future

Solar-powered roads are a modern road solution that embeds solar panels in the asphalt or concrete of the road surface, transforming sunlight into electricity. By adding a useful function to existing road infrastructure, this green technology seeks to produce clean energy, lower carbon emissions, and promote sustainable mobility. Solar-powered roads use surface-mounted solar panels to produce clean energy from sunshine. By generating clean power, lowering the release of carbon dioxide, and promoting environmentally conscious transportation systems, this idea supports the preservation of the environment.

The Promise of Solar Panel Roads: Building the Roads to a Greener Future

As the globe grapples with the pressing need to shift to environmentally friendly energy sources, novel approaches are being explored from many directions. Solar panel roadways are one such ground-breaking proposal that is gaining popularity. This technology promises not only to create renewable energy but also to drastically cut the production of greenhouse gases, paving the path for a brighter future. Solar panel roads, also referred to as photovoltaic highways, are simply pavements that have solar panels embedded in them. These panels capture the energy of the sun and transform it into electricity, which can then be utilized to power everything from streetlights to electric vehicles and neighboring residences. The idea began as a response to the energy crisis and has now grown into a potential technique for mitigating climate change.

The Design of the Solar Highway

These solar panels are made up of numerous components, including microprocessors that enable the panels to connect with a central control station, heating elements to prevent snow and ice collection, and LED illumination to generate lines and roadway markings without any application of paint. The sheer abundance of road surface is exactly why numerous countries have come up with the notion of building solar roadways. China, France, and the Netherlands have already taken the initial steps toward implementing solar roadways. As with everything new, some challenges must be overcome. These routes must be kept clean of snow and mud to function properly. Road dust, tire dust, and diesel exhaust fumes appear to be the most problematic, since they may conceal panels and reduce power output. Panels must also be robust and dependable, since no driver wants to see a cracked panel beneath their vehicle. Fortunately, those concerns have since been resolved.

Solar Highways Address the “Space” Issue

Fortunately, there is an accessible alternative. Roads are abundant in all affluent nations. The total distance of roadway surface is immense; the United Kingdom alone has a network of over 422,000 km (262,000 miles)! If we could put even part of that area to use as solar panels, we might produce enough power to shut down coal-fired power plants (such as Poland’s Belchatow Power Station, Europe’s greatest CO2 emitter) or phase out diesel automobiles. It is predicted that, if converted into solar panels, US roads could provide more than twice the amount of power that the country now requires. In simple terms, converting from pavement to solar panel roadways might address many of the difficulties of the current energy crisis. It could help in the defeat of the most severe threat we face now: climate change.
Conclusion

Finally, solar-powered roads hold enormous promise as a trailblazing approach for fostering a future that is more sustainable and environmentally friendly. These highways may significantly help to decrease carbon emissions, increase energy self-sufficiency, and encourage greener forms of transportation by seamlessly integrating renewable energy generation with our transportation infrastructure. By continuing to innovate and invest in such breakthrough technology, we pave the road for a brighter and more environmentally responsible tomorrow.
Media bias refers to the bias of journalists and news producers in the mass media in the selection of the many events and stories that are reported, and in how they are covered. A number of national and international watchdog organizations report on media bias. These include the Project for Excellence in Journalism at the Pew Research Center, which monitors both print and television journalism.

Media bias can take many forms, including failure to cover issues that matter to minorities or women (or men), lack of diversity in news reporting sources, and/or failure to accurately report on these issues. For example, research has shown that women are under-represented in positions of power within the mainstream media. Similarly, people of color are disproportionately represented in prison systems across the world, yet there are very few reports that examine this issue from a racial justice perspective. Media bias is also demonstrated by limiting coverage of events that matter to particular groups, such as gay rights news or immigration stories, or failing to cover these topics at all.

Some have argued that media bias is a natural consequence of human nature, since news organizations must choose what stories to cover. Others argue that it is a deliberate strategy used by those who hold power to maintain their dominance over society. Still others claim that no matter how you slice it, media bias is bad for democracy because it limits public awareness about important issues that could affect voting choices. The ownership of the news source, the concentration of media ownership, the subjective selection of employees, and the preferences of a targeted audience are all market pressures that can result in a skewed presentation.

Other organizations that report on media bias include the National Association of Broadcasters, which reports on issues such as fairness in coverage between liberal and conservative voices; the American Press Institute, which tracks trends in journalism education and practice; and the Project for Excellence in Journalism, which focuses on state-based media systems. In addition to these organizations, there are several studies conducted by academic institutions that examine media bias. Two recent examples are the Columbia University NewsBusters project, which examines political bias in the news over time, and the Pew Research Center's Project for Excellence in Journalism, which assesses state-based media systems. These studies have shown that media bias exists across a variety of topics, including politics, environment/science, crime/justice, health/science, and local government. Some studies have also examined whether there is a difference in bias between sources such as newspapers and television stations. For example, one study conducted by the Columbia University School of Journalism found evidence of both liberal and conservative biases among newspaper articles, but only conservative biases among television news programs.

Other factors may cause media bias beyond those listed here. This is because every news agency has limitations on which stories may be covered and how its reporters may cover them. This essay will discuss the five elements that influence news judgments in almost every newsroom across the world: time, speed, space, profit, and prejudice.

Time means the period or division of time; it is also the fourth factor in the formula for news. All news is about events that have recently taken place or are about to take place. So, if you want to cover current affairs, you need to look at what is happening now.
The more important the event, the sooner it will likely be reported on by newspapers, so they can get the story out quickly and be first with it.

Speed means how much information we need to report on a topic in order to be first. Some events happen so quickly that we cannot keep up with them, such as sports news or political scandals. We must choose what topics we will report on carefully so that we do not miss anything important.

Space means how much detail we can include when writing about an event. It is usually limited to 500 words for features and 1,500 words for columns. If we go over this length we risk leaving out important details that readers want to know about. We need to make sure that our articles are clear and concise.

Profit means the business objective of a newspaper. The media can both report and alter events, because reporters can gather background information on a matter and then write a newspaper piece or a news report that expresses a biased perspective on the story or includes information they are not sure is accurate. For example, an editor at a newspaper may decide to print an article written by an investigative reporter. The editor may have concerns about an element of the story but feel that printing the article will be more likely to bring attention to the issue than if he or she refused to publish it.

The media's ability to influence public opinion through its coverage of current affairs is one reason why some people believe that newspapers play an important role in democracy. Newspapers can report on issues of concern to their readers, such as government policies or corporate practices that may affect their lives. They can also report on major events, such as wars or natural disasters. By doing so, newspapers can make their readers aware of these matters and encourage them to take part in democratic processes (such as voting) regarding them. However, the media can also distort facts and opinions when covering stories. For example, a journalist may include details from interviews with people who support one position on an issue but omit similar details from interviews with those who don't. This method of reporting can lead readers to form inaccurate perceptions about what is happening in the world around them.

At its best, the news media provides people with unbiased information, allowing them to stay educated and hold those in authority accountable for their actions. Unbiased information allows people to make decisions based on fact rather than opinion. It also ensures that different perspectives are heard, which helps create a more informed society. People need an independent source of information if they are to be able to make informed decisions about what matters most to them. Without this source of information, people become vulnerable to propaganda from politicians who have power over their daily lives or sensationalist stories in the tabloid press. The news media provides people with a way to obtain unbiased information about issues important to them.

Every day, journalists work with editors to determine what content should be produced by their organizations. They then use their knowledge and expertise to report on current affairs including politics, business, sports, entertainment, and science. After publishing their articles, journalists seek out interviews with other experts or witnesses to further explain their views or findings. Finally, they submit their work to publication houses which print or publish it.
News media consists of newspapers, magazines, online news sites, and others. Newspapers are published on a regular basis and contain mainly text with some illustrations.
Edited by Matthew A. McIntosh The Industrial Revolution was a major shift of technological, socioeconomic, and cultural conditions that occurred in the late eighteenth and early nineteenth century in some Western countries. It began in Britain and spread throughout the world, a process that continues as industrialization. The onset of the Industrial Revolution marked a major turning point in human social history, comparable to the invention of farming or the rise of the first city-states; almost every aspect of daily life and human society was, eventually, in some way influenced by it. The effects spread throughout Western Europe and North America during the nineteenth century, eventually affecting most of the world. The impact of this change on society was enormous. “What caused the Industrial Revolution?” remains the most important unanswered question in social science. The period of time covered by the Industrial Revolution varies with different historians. Eric Hobsbawm held that it ‘broke out’ in the 1780s and was not fully felt until the 1830s or 1840s, while T. S. Ashton held that it occurred roughly between 1760 and 1830. Some twentieth century historians such as John Clapham and Nicholas Crafts have argued that the process of economic and social change took place gradually and the term revolution is not a true description of what took place. This is still a subject of debate amongst historians. As might be expected of such a large social change, the Industrial Revolution had a major impact upon wealth. It has been argued that GDP per capita was much more stable and progressed at a much slower rate until the Industrial Revolution and the emergence of the modern capitalist economy, and that it has since increased rapidly in capitalist countries. The term “Industrial Revolution” applied to technological change was common in the 1830s. Louis-Auguste Blanqui in 1837 spoke of la révolution industrielle. Friedrich Engels in The Condition of the Working Class in England in 1844 spoke of “an industrial revolution, a revolution which at the same time changed the whole of civil society.” In his book Keywords: A Vocabulary of Culture and Society, Raymond Williams states in the entry for Industry: The idea of a new social order based on major industrial change was clear in Southey and Owen, between 1811 and 1818, and was implicit as early as Blake in the early 1790s and Wordsworth at the turn of the century. Credit for popularizing the term may be given to historian Arnold Toynbee, whose lectures given in 1881 gave a detailed account of the process. The causes of the Industrial Revolution were complex and remain a topic for debate, with some historians seeing the Revolution as an outgrowth of social and institutional changes brought by the end of feudalism in Britain after the English Civil War in the seventeenth century. As national border controls became more effective, the spread of disease was lessened, therefore preventing the epidemics common in previous times. The percentage of children who lived past infancy rose significantly, leading to a larger workforce. The Enclosure movement and the British Agricultural Revolution made food production more efficient and less labor-intensive, forcing the surplus population who could no longer find employment in agriculture into cottage industry, for example weaving, and in the longer term into the cities and the newly developed factories. 
The colonial expansion of the seventeenth century with the accompanying development of international trade, creation of financial markets and accumulation of capital are also cited as factors, as is the scientific revolution of the seventeenth century. Technological innovation was the heart of the Industrial Revolution, and the key enabling technology was the invention and improvement of the steam engine.

The historian Lewis Mumford has proposed that the Industrial Revolution had its origins in the early Middle Ages, much earlier than most estimates. He explains that the model for standardized mass production was the printing press and that “the archetypal model for the [industrial era] was the clock.” He also cites the monastic emphasis on order and time-keeping, as well as the fact that Medieval cities had at their center a church with bell ringing at regular intervals, as being necessary precursors to the greater synchronization required for later, more physical manifestations such as the steam engine.

The presence of a large domestic market should also be considered an important driver of the Industrial Revolution, particularly explaining why it occurred in Britain. In other nations, such as France, markets were split up by local regions, which often imposed tolls and tariffs on goods traded among them.

Governments’ grant of limited monopolies to inventors under a developing patent system (the Statute of Monopolies 1623) is considered an influential factor. The effects of patents, both good and ill, on the development of industrialization are clearly illustrated in the history of the steam engine, the key enabling technology. In return for publicly revealing the workings of an invention, the patent system rewards inventors by allowing, e.g., James Watt to monopolize the production of the first steam engines, thereby enabling inventors and increasing the pace of technological development. However, monopolies bring with them their own inefficiencies, which may counterbalance, or even overbalance, the beneficial effects of publicizing ingenuity and rewarding inventors. Watt’s monopoly may have prevented other inventors, such as Richard Trevithick, William Murdoch or Jonathan Hornblower, from introducing improved steam engines, thereby retarding the Industrial Revolution by up to 20 years.

Causes for Occurrence in Europe

One question of active interest to historians is why the Industrial Revolution started in eighteenth-century Europe and not in other parts of the world, particularly China, India, and the Middle East, or at other times, like in Classical Antiquity or the Middle Ages. Numerous factors have been suggested, including ecology, government, and culture. Benjamin Elman argues that China was in a high level equilibrium trap in which the non-industrial methods were efficient enough to prevent use of industrial methods with high costs of capital. Kenneth Pomeranz, in The Great Divergence, argues that Europe and China were remarkably similar in 1700, and that the crucial differences which created the Industrial Revolution in Europe were sources of coal near manufacturing centers, and raw materials such as food and wood from the New World, which allowed Europe to expand economically in a way that China could not.
However, most historians contest the assertion that Europe and China were roughly equal, because modern estimates of per capita income in Western Europe in the late eighteenth century are of roughly 1,500 dollars in purchasing power parity (and Britain had a per capita income of nearly 2,000 dollars), whereas China, by comparison, had only 450 dollars. Also, the average interest rate was about 5 percent in Britain and over 30 percent in China, which illustrates how capital was much more abundant in Britain; capital that was available for investment.

Some historians such as David Landes and Max Weber credit the different belief systems in China and Europe with dictating where the revolution occurred. The religion and beliefs of Europe were largely products of Judaeo-Christianity and Greek thought. Conversely, Chinese society was founded on men like Confucius, Mencius, Han Feizi (Legalism), Lao Tzu (Taoism), and Buddha (Buddhism). The key difference between these belief systems was that those from Europe focused on the individual, while Chinese beliefs centered around relationships between people. The family unit was more important than the individual for the large majority of Chinese history, and this may have played a role in why the Industrial Revolution took much longer to occur in China. There was the additional difference of outlook. In traditional societies, people tend to look backwards to tradition for answers to their questions. One of the inventions of the modern age was the invention of progress, where people look hopefully to the future. Furthermore, Western European peoples had experienced the Renaissance and Reformation; other parts of the world had not had a similar intellectual breakout, a condition that holds true even into the twenty-first century.

Regarding India, the Marxist historian Rajani Palme Dutt has been quoted as saying, “The capital to finance the Industrial Revolution in India instead went into financing the Industrial Revolution in England.” In contrast to China, India was split up into many competing kingdoms, with the three major ones being the Marathas, Sikhs and the Mughals. In addition, the economy was highly dependent on two sectors (subsistence agriculture and cotton), and technical innovation was non-existent. The vast amounts of wealth were stored away in palace treasuries, and as such, were easily moved to Britain.

Causes for Occurrence in Britain

The debate about the start of the Industrial Revolution also concerns the massive lead that Great Britain had over other countries. Some have stressed the importance of natural or financial resources that Britain received from its many overseas colonies or that profits from the British slave trade between Africa and the Caribbean helped fuel industrial investment. It has been pointed out, however, that slavery provided only 5 percent of the British national income during the years of the Industrial Revolution. Alternatively, the greater liberalization of trade from a large merchant base may have allowed Britain to produce and utilize emerging scientific and technological developments more effectively than countries with stronger monarchies, particularly China and Russia. Britain emerged from the Napoleonic Wars as the only European nation not ravaged by financial plunder and economic collapse, and possessing the only merchant fleet of any useful size (European merchant fleets having been destroyed during the war by the Royal Navy).
Britain’s extensive exporting cottage industries also ensured markets were already available for many early forms of manufactured goods. The conflict resulted in most British warfare being conducted overseas, reducing the devastating effects of territorial conquest that affected much of Europe. This was further aided by Britain’s geographical position—an island separated from the rest of mainland Europe.

Another theory is that Britain was able to succeed in the Industrial Revolution due to the availability of key resources it possessed. It had a dense population for its small geographical size. Enclosure of common land and the related Agricultural Revolution made a supply of this labor readily available. There was also a local coincidence of natural resources in the North of England, the English Midlands, South Wales and the Scottish Lowlands. Local supplies of coal, iron, lead, copper, tin, limestone and water power resulted in excellent conditions for the development and expansion of industry. Also, the damp, mild weather conditions of the North West of England provided ideal conditions for the spinning of cotton, providing a natural starting point for the birth of the textiles industry.

The stable political situation in Britain from around 1688, and British society’s greater receptiveness to change (when compared with other European countries), can also be said to be factors favoring the Industrial Revolution. In large part due to the Enclosure movement, the peasantry was destroyed as a significant source of resistance to industrialization, and the landed upper classes developed commercial interests that made them pioneers in removing obstacles to the growth of capitalism.

Protestant Work Ethic

Another theory is that the British advance was due to the presence of an entrepreneurial class which believed in progress, technology and hard work.1 The existence of this class is often linked to the Protestant work ethic and the particular status of dissenting Protestant sects, such as the Quakers, Baptists and Presbyterians, that had flourished with the English Civil War. Reinforcement of confidence in the rule of law, which followed establishment of the prototype of constitutional monarchy in Britain in the Glorious Revolution of 1688, and the emergence of a stable financial market there based on the management of the national debt by the Bank of England, contributed to the capacity for, and interest in, private financial investment in industrial ventures.

Dissenters found themselves barred or discouraged from almost all public offices, as well as education at England’s only two universities at the time (although dissenters were still free to study at Scotland’s four universities). When the restoration of the monarchy took place and membership in the official Anglican Church became mandatory due to the Test Act, they thereupon became active in banking, manufacturing and education. The Unitarians, in particular, were very involved in education, by running Dissenting Academies, where, in contrast to the Universities of Oxford and Cambridge and schools such as Eton and Harrow, much attention was given to mathematics and the sciences—areas of scholarship vital to the development of manufacturing technologies. Historians sometimes consider this social factor to be extremely important, along with the nature of the national economies involved.
While members of these sects were excluded from certain circles of the government, they were considered fellow Protestants, to a limited extent, by many in the middle class, such as traditional financiers or other businessmen. Given this relative tolerance and the supply of capital, the natural outlet for the more enterprising members of these sects would be to seek new opportunities in the technologies created in the wake of the Scientific Revolution of the seventeenth century.

In terms of social structure, the Industrial Revolution witnessed the triumph of a middle class of industrialists and businessmen over a landed class of nobility and gentry. Ordinary working people found increased opportunities for employment in the new mills and factories, but these were often under strict working conditions with long hours of labor dominated by a pace set by machines. However, harsh working conditions were prevalent long before the Industrial Revolution took place as well. Pre-industrial society was very static and often cruel—child labor, dirty living conditions and long working hours were just as prevalent before the Industrial Revolution.

Factories and Urbanization

Industrialization led to the creation of the factory. Arguably the first was John Lombe’s water-powered silk mill at Derby, which was operational by 1721. However, the rise of the factory came somewhat later, when cotton-spinning was mechanized. The factory system was largely responsible for the rise of the modern city, as workers migrated into the cities in search of employment in the factories. Nowhere was this better illustrated than in the mills and associated industries of Manchester, nicknamed Cottonopolis, and arguably the world’s first industrial city. For much of the nineteenth century, production was done in small mills, which were typically powered by water and built to serve local needs. Later each mill would have its own steam engine and a tall chimney to give an efficient draft through its boiler.

The transition to industrialization was not wholly smooth. For example, a group of English workers known as Luddites formed to protest against industrialization and sometimes sabotaged factories by throwing a wooden shoe (sabot) into the mechanical works. One of the earliest reformers of factory conditions was Robert Owen. In other industries the transition to factory production followed a slightly different course. In 1746, an integrated brass mill was working at Warmley near Bristol. Raw material went in at one end, was smelted into brass and was turned into pans, pins, wire, and other goods. Housing was provided for workers on site. Josiah Wedgwood and Matthew Boulton were other prominent early industrialists, who employed the factory system.

The Industrial Revolution led to a population increase. Industrial workers were better paid than those in agriculture. With more money, women ate better and had healthier babies, who were themselves better fed. Child mortality rates declined, and the distribution of age in the population became more youthful. There was limited opportunity for formal education, and children were expected to work in order to bring home wages. Employers could pay a child less than an adult even though their productivity was comparable; there was no need for strength to operate an industrial machine, and since the industrial system was completely new there were no experienced adult laborers. This made child labor the labor of choice for manufacturing in the early phases of the Industrial Revolution.
Child labor had existed before the Industrial Revolution, but with the increase in population and education it became more visible. Before the passing of laws protecting children, many were forced to work in terrible conditions for much lower pay than their elders. Reports were written detailing some of the abuses, particularly in the coal mines and textile factories, and these helped to spread knowledge of the children’s plight. The public outcry, especially among the upper and middle classes, helped stir change in the young workers’ welfare.

Politicians and the government tried to limit child labor by law, but factory owners resisted; some felt that they were aiding the poor by giving their children money to buy food to avoid starvation, and others simply welcomed the cheap labor. In 1833 and 1844, the first general laws against child labor, the Factory Acts, were passed in England: children younger than nine were not allowed to work, children were not permitted to work at night, and the work day of youth under the age of 18 was limited to twelve hours. Factory inspectors supervised the execution of the law. About ten years later, the employment of children and women in mining was forbidden. These laws decreased the number of child laborers; however, child labor remained in Europe up to the twentieth century.

Living conditions during the Industrial Revolution varied from the splendor of the homes of the owners to the squalor of the lives of the workers. Cliffe Castle, Keighley, is a good example of how the newly rich chose to live. This is a large home modeled loosely on a castle, with towers and garden walls, surrounded by a massive garden; Cliffe Castle is now open to the public as a museum. Poor people lived in very small houses in cramped streets. These homes would share toilet facilities, have open sewers and would be at risk of damp. Disease was spread through a contaminated water supply. Conditions did improve during the nineteenth century as public health acts were introduced covering things such as sewage and hygiene and placing some restrictions on the construction of homes.

Not everybody lived in homes like these. The Industrial Revolution created a larger middle class of professionals such as lawyers and doctors. The conditions for the poor improved over the course of the nineteenth century because of government and local plans which led to cities becoming cleaner places, but life had not been easy for the poor before industrialization. However, as a result of the Revolution, huge numbers of the working class died due to disease spreading through the cramped living conditions. Chest diseases from the mines, cholera from polluted water and typhoid were also extremely common, as was smallpox. Accidents in factories involving child and female workers were regular occurrences. Dickens’ novels perhaps best illustrate this; even some government officials were horrified by what they saw. Strikes and riots by workers were also relatively common.

The rapid industrialization of the English economy cost many craft workers their jobs. The textile industry in particular industrialized early, and many weavers found themselves suddenly unemployed since they could no longer compete with machines which only required relatively limited (and unskilled) labor to produce more cloth than a single weaver. Many such unemployed workers, weavers and others, turned their animosity towards the machines that had taken their jobs and began destroying factories and machinery.
These attackers became known as Luddites, supposedly followers of Ned Ludd, a folklore figure. The first attacks of the Luddite movement began in 1811. The Luddites rapidly gained popularity, and the British government had to take drastic measures to protect industry.

Organization of Labor

The Industrial Revolution concentrated labor into mills, factories and mines, thus facilitating the organization of combinations or trade unions to help advance the interests of working people. A union could demand better terms by withdrawing all labor and causing a consequent cessation of production. Employers had to decide between giving in to the union demands at a cost to themselves or suffering the cost of the lost production. Skilled workers were hard to replace, and these were the first groups to successfully advance their conditions through this kind of bargaining. The main method the unions used to effect change was strike action. Strikes were painful events for both sides, the unions and the management. In England, the Combination Act forbade workers to form any kind of trade union from 1799 until its repeal in 1824. Even after this, unions were still severely restricted.

In the 1830s and 1840s the Chartist movement was the first large-scale organized working class political movement which campaigned for political equality and social justice. Its Charter of reforms received over three million signatures but was rejected by Parliament without consideration. Working people also formed friendly societies and co-operative societies as mutual support groups against times of economic hardship. Enlightened industrialists, such as Robert Owen, also supported these organizations to improve the conditions of the working class. Unions slowly overcame the legal restrictions on the right to strike. In 1842, a General Strike involving cotton workers and colliers was organized through the Chartist movement, which stopped production across Great Britain. Eventually effective political organization for working people was achieved through the trades unions who, after the extensions of the franchise in 1867 and 1885, began to support socialist political parties that later merged to become the British Labour Party. The application of steam power to the industrial processes of printing supported a massive expansion of newspaper and popular book publishing, which reinforced rising literacy and demands for mass political participation.

During the Industrial Revolution, the life expectancy of children increased dramatically. The percentage of the children born in London who died before the age of five decreased from 74.5 percent in 1730–1749 to 31.8 percent in 1810–1829. Besides this, there was a significant increase in worker wages during the period 1813–1913.

Intellectual Paradigms and Criticism

The advent of The Enlightenment provided an intellectual framework which welcomed the practical application of the growing body of scientific knowledge—a factor evidenced in the systematic development of the steam engine, guided by scientific analysis, and the development of political and sociological analyses, culminating in Adam Smith’s The Wealth of Nations. One of the main arguments for capitalism is that industrialization increases wealth for all, as evidenced by rising life expectancy, reduced working hours, and no work for children and the elderly. Marxism is essentially a reaction to the Industrial Revolution.
According to Karl Marx, industrialization polarized society into the bourgeoisie (those who own the means of production, the factories and the land) and the much larger proletariat (the working class who actually perform the labor necessary to extract something valuable from the means of production). He saw the industrialization process as the logical dialectical progression of feudal economic modes, necessary for the full development of capitalism, which he saw as in itself a necessary precursor to the development of socialism and eventually communism.

During the Industrial Revolution an intellectual and artistic hostility towards (or an emotional retreat from) the new industrialization developed. This was known as the Romantic movement. Its major exponents in English literature included the artist and poet William Blake and poets William Wordsworth, Samuel Taylor Coleridge, John Keats, Lord Byron and Percy Bysshe Shelley. The movement stressed the importance of “nature” in art and language, in contrast to ‘monstrous’ machines and factories; the “Dark satanic mills” of Blake’s poem And did those feet in ancient time. Mary Shelley’s novel Frankenstein reflected concerns that scientific progress might be two-edged.

- Lester Russell Brown. Eco-Economy. (Earth Policy Institute.) (James & James / Earthscan.)
- Eric Hobsbawm. The Age of Revolution: Europe 1789–1848. (Weidenfeld & Nicolson Ltd.)
- Joseph E. Inikori. Africans and the Industrial Revolution in England. (Cambridge University Press.)
- Maxine Berg, Pat Hudson, Rehabilitating the Industrial Revolution, Economic History Review, New Series 45 (1) (Feb., 1992): 24-50. doi:10.2307/2598327
- Julie Lorenzen, Rehabilitating the Industrial Revolution. Central Michigan University. Accessed November 2006.
- Robert E. Lucas, Jr., “The Industrial Revolution: Past and Future.” 2003. Federal Reserve Bank of Minneapolis. Accessed 13 November 2006.
- Arnold Joseph Toynbee. Lectures On The Industrial Revolution In England. (Kessinger Publishing, 2004.)
- Pat Hudson. The Industrial Revolution. (Oxford University Press US, 1992.) Retrieved July 14, 2008.
- Phyllis Deane. The First Industrial Revolution. (Cambridge University Press.) Retrieved July 14, 2008.
- Eric Schiff. Industrialization without national patents: the Netherlands, 1869-1912; Switzerland, 1850-1907. (Princeton University Press, 1971.)
- Michele Boldrin and David K. Levine, Economic and Game Theory Against Intellectual Monopoly, PDF, 3. Retrieved July 14, 2008.
- J. Bradford DeLong, Professor of Economics, University of California at Berkeley, Why No Industrial Revolution in Ancient Greece? September 20, 2002. Accessed January 2007.
- Steven Kreis, October 11, 2006, The Origins of the Industrial Revolution in England. The History Guide.org. Accessed January 2007.
- Immanuel Chung-Yueh Hsu. The Rise of Modern China. (Oxford University Press US.) Retrieved July 14, 2008.
- Jan Luiten van Zanden, PDF, International Institute of Social History/University of Utrecht. May 2005. Accessed January 2007.
- David Landes. (1999) Wealth And Poverty Of Nations. (New York: W.W. Norton.)
- Rajni-Palme Dutt. South Asian History - Pages from the history of the Indian subcontinent: British rule and the legacy of colonization. India Today (Indian Edition published 1947). Accessed January 2007.
- Was slavery the engine of economic growth? Digital History. Retrieved July 14, 2008.
- The Royal Navy itself may have contributed to Britain’s industrial growth.
Among the first complex industrial manufacturing processes to arise in Britain were those that produced material for British warships. For instance, the average warship of the period used roughly 1,000 pulley fittings. With a fleet as large as the Royal Navy, and with these fittings needing to be replaced every 4 to 5 years, this created a great demand which encouraged industrial expansion. The industrial manufacture of rope can also be seen as a similar factor.
- Barrington Moore, Jr. Social Origins of Dictatorship and Democracy: Lord and Peasant in the Making of the Modern World. (Boston: Beacon Press, 1966), 29-30.
- R.M. Hartwell. The Industrial Revolution and Economic Growth. (Methuen and Co., 1971), 339-341.
- “Testimony Gathered by Ashley’s Mines Commission.” The Mines Act, 1842. victorianweb.org.
- “The Life of the Industrial Worker in Nineteenth-Century England.” Testimonies, 1832, Parliamentary Hearings. victorianweb.org.
- Kirkpatrick Sale, “The Achievements of ‘General Ludd’: A Brief History of the Luddites.” The Ecologist 29 (5) (Aug/Sep 1999). mindfuly.org. Retrieved July 12, 2008.
- “Peoples’ Charter” 1838. Chartism or The Chartist Movement. victorianweb.org.
- Mabel C. Buer, Health, Wealth and Population in the Early Days of the Industrial Revolution. (London: George Routledge & Sons, 1926), 30.
- ScienceDirect – Explorations in Economic History: Trends in Real Wages in Britain, 1750-1913. sciencedirect.com. 17 July 2006.
- Industrial Revolution and the Standard of Living. econlib.org. 17 July 2006.
- R.M. Hartwell. “The Rising Standard of Living in England, 1800-1850.” Economic History Review (1963): 398.
- Murray Rothbard, “Karl Marx: Communist as Religious Eschatologist.” The Review of Austrian Economics 4 (1990).
- Ashton, Thomas S. The Industrial Revolution (1760-1830). Oxford University Press, 1948. online edition
- Atkinson, Norman. Sir Joseph Whitworth. Sutton Publishing, Limited, 1996.
- Bairoch, Paul. Economics and World History: Myths and Paradoxes. University of Chicago Press, 1995.
- Berlanstein, Lenard R. The Industrial Revolution and Work in Nineteenth-Century Europe. Routledge, 1992. online edition
- Bernal, John Desmond. Science and Industry in the Nineteenth Century. Routledge, 2006.
- Birch, A. The Economic History of the British Iron and Steel Industry 1784 to 1879. London: Cass, 1967.
- Brown, Lester Russell. Eco-Economy. James & James / Earthscan.
- Buer, Mabel C. Health, Wealth and Population in the Early Days of the Industrial Revolution. London: George Routledge & Sons, 1926, 30.
- Cantrell, John, and Gillian Cookson, eds. Henry Maudslay and the Pioneers of the Machine Age. Tempus Publishing, Ltd, 2002.
- Clapham, J. H. An Economic History of Modern Britain: The Early Railway Age, 1820-1850. Cambridge University Press, 1926. online edition
- Daunton, M. J. Progress and Poverty: An Economic and Social History of Britain, 1700-1850. Oxford University Press, 1995. online edition
- Derry, Thomas Kingston and Trevor I. Williams. A Short History of Technology: From the Earliest Times to A.D. 1900. New York: Dover Publications, 1993.
- Dunham, Arthur Louis. The Industrial Revolution in France, 1815-1848. Exposition Press, 1955. online edition
- Gill, Graeme, “Farm to Factory: A Reinterpretation of the Soviet Industrial Revolution,” Economic Record 80 (2004). online edition
- Green, Constance McLaughlin. Holyoke, Massachusetts: A Case History of the Industrial Revolution in America. Yale University Press, 1939. online edition
- Hart, Ivor Blashka.
James Watt and the History of Steam Power. 1949. - Hartwell, R.M. “The Rising Standard of Living in England, 1800-1850.” Economic History Review (1963) - Hayek, Friedrich A.. Capitalism and the Historians. The University of Chicago Press, 1963 - Hills, Rev. Dr. Richard L. Life and Inventions of Richard Roberts, 1789-1864. Landmark Publishing Ltd, 2002. - Hills, Richard L. James Watt. 3 vol Vol. 1, His time in Scotland, 1736-1774. Landmark Publishing Ltd; Vol. 2, The Years of Toil, 1775-1784.; Vol. 3, Triumph through Adversity, 1784-1719 - Hobsbawm, Eric J.. Industry and Empire: From 1750 to the Present Day. Penguin, 1990. - Hudson, Pat. The Industrial Revolution. Oxford University Press US, 1992. - Hughes, Thomas Parke. Development of Western Technology Since 1500. MacMillan, 1980. - Hyde, C. K. Technological change and the British iron industry 1700-1870. Princeton NJ: Princeton University Press, 1977. - Inikori, Joseph E. Africans and the Industrial Revolution in England: A Study in International Trade and Economic Development. Cambridge University Press, 2002. - Kisch, Herbert. From Domestic Manufacture to Industrial Revolution The Case of the Rhineland Textile Districts. Oxford USA, 1989 online edition - Landes, David S. The Unbound Prometheus: Technical Change and Industrial Development in Western Europe from 1750 to the Present, 2nd ed. New York: Cambridge University Press, 2003. - King, P. W. “Sir Clement Clerke and the Adoption of Coal in Metallurgy.” Transactions of Newcomen Society 73: 33-53. - King, P. W. “The production and consumption of iron in early modern England and Wales.” Economic History Review LVIII (2005): 1-33. - Kranzberg, Melvin and Carroll W. Pursell, Jr., eds. Technology in Western Civilization, Oxford University Press, 1967. - Landes, David S. The Wealth and Poverty of Nations: Why Some Are So Rich and Some So Poor. New York: W. W. Norton & Company. 1999. - Lines, Clifford. Companion to the Industrial Revolution. London; New York: Facts on File, 1990 - Mantoux, Paul. The Industrial Revolution in the Eighteenth Century. (first English translation 1928, revised edition 1961) online edition - Mokyr, Joel. The British Industrial Revolution: An Economic Perspective. 1999. online edition. - More; Charles. Understanding the Industrial Revolution. 2000. online edition - Mott, R. A. and Peter Singer. Henry Cort: the Great Finer: Creator of Puddled Iron. Maney Publishing, 1983, - O’Brien, Patrick, and Roland Quinault, eds. The Industrial Revolution and British Society. Cambridge University Press, 1993. - Pawson, Eric. Transport and Economy: the turnpike roads of 18th century England. New York, Academic Press, 1977. - Pollard, Sidney. Peaceful Conquest: The Industrialization of Europe, 1760-1970. Oxford University Press, 1981 online edition - Roe, Joseph Wickham. English and American Tool Builders. Yale University Press, 1916. Reprint Bradley IL: Lindsay Publications Inc., 1987. - Rolt, L.T.C., and J. S. Allen. The Steam Engine of Thomas Newcomen. Landmark Publishing Ltd, 1997. - Rick, Szostak. The Role of Transportation in the Industrial Revolution: A Comparison of England and France. McGill-Queens University Press, 1991 online edition - Smelser, Neil J. Social Change in the Industrial Revolution: An Application of Theory to the British Cotton Industry. University of Chicago Press, 1959 - Stearns, Peter N. The Industrial Revolution in World History Boulder, CO: Westview Press, 1998. online version - Thompson, E. P. The Making of the English Working Class. 
Gloucester, MA: Peter Smith Publisher, Inc., 1999. - Toynbee, Arnold. Lectures on the Industrial Revolution of the Eighteenth Century in England. 1884 reprint ed. Whitefish, MT: Kessinger Publishing, 2004. - Trinder, B. The Industrial Revolution in Shropshire, 3rd ed., Phillimore, 2000. - Tylecote, R. F. A history of metallurgy, 2nd ed. Inst of Materials, 1976. - Usher, Abbott Payson. An Introduction to the Industrial History of England. 1920. Originally published by New World Encyclopedia, 01.11.2018, under a Creative Commons Attribution-ShareAlike 3.0 Unported license.
This topical collection includes videos and articles to support teachers in learning and teaching about the concept of intersectionality and being more mindful of intersectionality in their own teaching. As defined by Teaching Tolerance, intersectionality refers to the social, economic and political ways in which identity-based systems of oppression and privilege connect, overlap, and influence one another. This collection begins with a video from the National Museum of African American History and Culture that serves as a primer on the subject and also includes a TED Talk by Kimberlé Crenshaw, Washington Post articles on the subject, a Teaching Tolerance magazine article, and Crenshaw's 1989 research article, "Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics." Teachers and students may use this collection as a springboard for classroom discussions.

Examine artifacts from 1861-1865 and use them to help prepare your own scrapbook of the time period.

This topical collection introduces events that shaped the origin of the Space Race; its connections to World War II, rocketry, nuclear development, and the Cold War. After exploring the collection, students will have a better understanding of how the Space Race evolved from a specific group of geopolitical events. This collection introduces figures from American politics and outlines international events that pushed the United States to mobilize around the Mercury, Gemini and Apollo programs. Students begin by watching the overview video. The resources that follow include metadata summaries, quiz questions, and hotspots to draw attention to details in each resource, and provide an overview of the complex geopolitical situation. This collection is best used as a primer to the Space Race and could be enhanced by further discussion.

The Saturn V is the most powerful rocket flown to date, but how did it actually work? This collection investigates the three stages of the Saturn V rocket, as well as the Instrument Unit. By making comparisons between the engines and computer located on the Saturn V and familiar technologies, students will gain a better understanding of the power and function of the mighty Saturn V. This collection also uses and familiarizes students with several Earth and space science terms. When exploring this collection, discuss and provide students with the following vocabulary list: thrust, stage, orbit, velocity, combustion, vacuum.

What can we learn about global climate change by examining icebergs? This collection includes resources (pictures, articles, and videos) that give more insight on the effects of global warming on icebergs. The video and articles will provide you with more background knowledge on the subject. tags: climate change, global warming, iceberg, glacier, melt, temperature, environment

The Harmon Foundation Collection, one of the treasures of the National Portrait Gallery’s permanent collection, comprises a group of more than forty portraits of prominent African Americans. The portraits were part of an unprecedented attempt in the 1940s and 1950s to counter racist stereotypes and racial prejudice through portraiture.

Smithsonian resources that relate to labor movements, trade unions, and worker protests. The collection includes sites, sounds, and education materials.
Topics include union leadership, labor music, historic advances in labor policy, service workers, and agricultural labor. The collection also includes creative depictions of key figures in various labor movements and renowned labor musicians such as Woody Guthrie, Pete Seeger, and Joe Glazer.

The Vietnam Era is rife with people of controversy and topics worth studying. This collection aims to introduce individuals who played a role in both conflict and compromise during that era. It is not a complete list of every person, but rather a jumping-off point to get the discussion started. (http://www.vvmf.org/teaching-t...) #NHD2018 #NHD

Lesson Prompt: Look at each robot and imagine what it can do. How can it help people? If you were to design your own robot, what would you want it to do to help your family? Sketch your ideas and then draw your robot design.

Use images to introduce a stamp-printing lesson with primary students. Observe selected images and discuss...
- What shapes or lines do you see?
- Which fabrics have repeat patterns?
- Which fabrics have alternating patterns?
- What could the fabric be used for?
Play a sorting game with images printed on cards. Categories for sorting could include stripes, plaid, checkerboard, floral, polka dot, etc.
ART MAKING CHALLENGE:
- Students will stamp print on paper with cardboard edges, stampers, or found objects to create patterns.
- Printed paper will then be cut into clothing for collage self portraits.

This is a Smithsonian Learning Lab topical collection, which contains interdisciplinary education resources, including videos, images and blogs to complement the Smithsonian's national conversation on immigration and what it means to be an American, highlighted on Second Opinion. Use this sample of the Smithsonian's many resources to introduce or augment your study of this topic and spark a conversation.

This collection comes from a family festival at the National Museum of the American Indian that explored uses of leather in Native communities - literally from the hunting and tanning of deer and their hides, to their use in ritual and everyday life. The collection includes demonstrations of deer-hide tanning, moccasin making, bead working, instructions to make a leather pouch and a daisy chain bracelet, and an interview and performance by Lawrence Baker and the White Oak Singers.

Lei making is an important part of Hawaiian culture. These twisted strands are worn on important occasions and given as gifts of welcome. In this collection you'll find a demonstration video by Mokihana Scalph, as well as performances of children's stories, dance performances, and images of leis and ti leaves, to give context to the performances.

Native American Beading: Examples, Artist Interview, Demonstration and Printable Instructions for Hands-on Activity
This collection looks at examples of bead work among Native American women, in particular Kiowa artist Teri Greeves, and helps students to consider these works as both expressions of the individual artist and expressions of a cultural tradition. The collection includes work samples and resources, an interview with Ms. Greeves, demonstration video of how to make a Daisy Chain bracelet, and printable instructions.
In this collection, Educator Ramsey Weeks (Assiniboine, Lenape, and Hidatsa), from the National Museum of the American Indian, talks about Native American Ledger Art, and shares ideas for family and classroom "winter count" activities. The activities are suitable for English, art, history, and social studies classrooms. The collection also includes information and resources about Winter Counts from the National Museum of the American Indian, the National Museum of Natural History, the National Anthropological Archives, the Smithsonian Institution Archives, the Cooper-Hewitt National Design Museum, Smithsonian Libraries, and the Smithsonian Center for Learning and Digital Access.

This collection includes several images that could be used as starting points for students to engage in a dialogue about the complexities of HIV/AIDS. I would very much encourage students to be given choice when exploring a topic from an interdisciplinary approach, but often it can be helpful to provide a starting point. Works of art can be used, as there are opportunities for students to engage in conversations in pairs or small/large groups about multifaceted issues such as this. A painting or photograph can provide a low-risk way of beginning a discussion about challenging topics. Students should feel free to use other areas of knowledge beyond what I have included such as Geography and History or more detailed topics such as stigma or virology. Data from the local Department of Health could also be used in addition to or in place of the Gapminder HIV Chart. To see a sample exploration that could be used in place of a much larger interdisciplinary exploration, please see the collection titled "The Global Implications of HIV/AIDS."

Here is a collection of English and Scottish ballads, recorded by Smithsonian Folkways and sung by Ewan MacColl, who is sometimes referred to as the "godfather of British folk revival." These recordings are in the Folkways Records Collection, 1948-1986.

This topical collection features forty international stamps that were issued during the World War I era. These stamps will serve as inspiration and a starting point for teacher-created Smithsonian Learning Lab collections during the National Postal Museum's workshop, "My Fellow Soldiers: Letters from World War I" (July 2017).

This is a Smithsonian Learning Lab topical collection, which contains images, text, and other multimedia resources that may complement the Tween Tribune feature, Without Edgar Allan Poe, we wouldn't have Sherlock Holmes. Use these resources to introduce or augment your study of this topic.

Work with a partner or partners to analyze each object:
- What do you think the symbols mean?
- Are there words that help describe it?
- What patterns can you find?
- Does the design show bilateral symmetry, radial symmetry, or is it asymmetrical?
ART MAKING CHALLENGE: Design a medallion to commemorate something important to you. Some possibilities:
- An accomplishment
- A special event you participated in
- A family tradition
- A personal interest
The final artwork could be a drawing, painting, collage, clay slab, or foil repousse.

These classroom resources from different Smithsonian museums focus on American Indian history and culture.
This collection represents some of my personal favorites from the digitization project at the United States National Herbarium, at the National Museum of Natural History. The project's goal is to digitize the 4.5 million specimens held in the collection. There are hundreds of thousands of botany specimens (at the time of publishing) available here in the Learning Lab. Find your own favorites using this search. Technical descriptions of the project can be found in a series of articles from the Smithsonian's Digitization Program Office:

Keywords: plant, ferns, algae, flower, moss, stem, green, yellow, red, natural, color, growing
Color (or colour) is the visual perceptual property corresponding in humans to the categories called red, yellow, blue, and others. Color derives from the spectrum of light (distribution of light energy versus wavelength) interacting in the eye with the spectral sensitivities of the light receptors. Color categories and physical specifications of color are also associated with objects, materials, light sources, etc., based on their physical properties such as light absorption, reflection, or emission spectra. By defining a color space, colors can be identified numerically by their coordinates. Because perception of color stems from the varying sensitivity of different types of cone cells in the retina to different parts of the spectrum, colors may be defined and quantified by the degree to which they stimulate these cells. These physical or physiological quantifications of color, however, do not fully explain the psychophysical perception of color appearance. The science of color is sometimes called chromatics. It includes the perception of color by the human eye and brain, the origin of color in materials, color theory in art, and the physics of electromagnetic radiation in the visible range (that is, what we commonly refer to simply as light).
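The passage above notes that, once a color space is defined, a color can be identified numerically by its coordinates. As a minimal illustrative sketch (not drawn from the collection itself), the short Python snippet below represents one example color by its RGB coordinates and then expresses the same color in the HSV color space using the standard-library colorsys module; the particular color values are arbitrary assumptions chosen only for illustration.

```python
# Minimal sketch: identifying one color numerically in two different color spaces.
# The specific color (a warm yellow) is an assumption chosen only for illustration.
import colorsys

# Coordinates of the example color in the RGB color space, scaled to 0.0-1.0
# (equivalent to roughly (255, 200, 40) in 8-bit terms).
r, g, b = 255 / 255, 200 / 255, 40 / 255

# The same color expressed in the HSV color space: a different triple of
# numbers, but it still pins down the same color unambiguously.
h, s, v = colorsys.rgb_to_hsv(r, g, b)

print(f"RGB coordinates: ({r:.3f}, {g:.3f}, {b:.3f})")
print(f"HSV coordinates: (hue={h:.3f}, saturation={s:.3f}, value={v:.3f})")
```

The point of the sketch is simply that the choice of color space determines what the numbers mean; the same perceived color gets different, but equally valid, coordinates in each space.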
As factories kept multiplying, people started to think about running their own businesses instead of relying on the government, which increased capitalism and drove the emergence of the new middle class. Before the Industrial Revolution, Britain was ruled under a feudal system. Under feudalism, people received financial assistance from the government because they only needed to be able to support their own families, which also meant that they were expected to return their extra production, or surplus, to the government (Nairn). Once the Industrial Revolution started, the economy of Britain's higher classes improved greatly. With the growth of industry, the invention of new machines and technology allowed many landlords and owners to obtain materials more easily from the colonies and to sell their products faster and more easily. Those who were rich landlords and factory owners kept their wealth and became the upper middle class (Lobley). As factory production improved, people's ambitions grew stronger; some factory and mill owners planned to start their own businesses and became the ones taking control of the economy. The idea of the new political and economic form, capitalism, encouraged people to repeal feudalism and overthrow the upper class's power (Poynton). Not everything turned out so successfully: some people who had belonged to the working class became landless during the revolution, and their lives changed, but not as successfully as those of other classes. Rising taxes on goods, or the inability to pay fines as tenants, caused them to become vagabonds, looking for goods on the streets. Some of them were lucky and kept working as laborers in factories, becoming part of the working class (Poynton). The rise of capitalism encouraged more merchants and factory owners to become upper class, and also helped some of the landless become workers in factories.