May 11, 2001

On certain afternoons in Uganda, bright orange butterflies with black-and-white wings gather together on small patches of low grass, sometimes in the hundreds. Such congregations are nothing unusual in the animal kingdom; normally, males convene to try to win the attention of females. But the swarms--known as leks--that Acraea encedana form are bizarre: 94% of the butterflies are females, and they jostle for the attention of the few males, who seem reluctant suitors. "You wouldn't expect males to be surrounded by all these virgin females and not wanting to mate," says Francis Jiggins of Cambridge University. Even more bizarre is the cause of their sexual skew: They are plagued with a strain of bacteria known as Wolbachia, which kills males but spares females.

Wolbachia's powers would be remarkable enough if they only drove Ugandan butterflies into female-dominated leks. But this sexist microbe may be the most common infectious bacterium on Earth. Although no vertebrates (humans included) are known to carry Wolbachia, infection is rampant in the invertebrate world, showing up in everything from fruit flies to shrimp, spiders, and even parasitic worms. In case after case, researchers are finding that Wolbachia don't leave their survival to chance. To maximize their numbers, the bacteria manipulate the sex lives of many of their hosts, using some of the most baroque strategies known in evolution.

That's one reason why Wolbachia, discovered in 1924, have only recently become the darlings of evolutionary biologists. Last summer the first international Wolbachia conference was held in Crete. The first Wolbachia genome project should be finished this year by Scott O'Neill of Yale University and his colleagues. And whereas humans merited a single genome project, six other Wolbachia genome projects are under way. "The whole field is just exploding," says O'Neill. And rightly so, say Wolbachia fans.
There are tantalizing hints that Wolbachia's extraordinary ability to manipulate their hosts for their own evolutionary benefit can help turn a population of hosts into a new species. And some researchers think that Wolbachia can be used as a weapon against pests and parasites that cause diseases such as malaria and river blindness.

Researchers did not begin to fathom the remarkable ways in which Wolbachia ensure their own success until the 1970s. Wolbachia can live only inside the cells of their hosts. If they live in a female, they can infect her eggs and be passed down to her offspring. But if they live in a male, they hit a dead end; as his sex cells divide into tiny sperm, the bacteria are squeezed out. That means only infected females can keep a lineage of Wolbachia alive.

To ensure a steady stream of progeny, researchers discovered, Wolbachia sometimes boost the reproductive success of infected females at the expense of uninfected ones. Through a process known as cytoplasmic incompatibility, Wolbachia make it difficult for uninfected females to reproduce. The strategy works like this: If a healthy female mates with a male carrying Wolbachia, some or all of her fertilized eggs will die. But a female carrying Wolbachia can mate with either infected or uninfected males and produce viable eggs--all of which have Wolbachia in them. As a result, the infected females outcompete parasite-free ones, and the overall proportion of Wolbachia carriers increases in a population.

The nuts and bolts of this phenomenon remain a matter of speculation. "That's still a big open question," says John Werren, a Wolbachia expert at the University of Rochester in New York. The evidence so far suggests that the bacteria that end up in males produce a toxin that alters their host's sperm. When these males mate with uninfected females, the tainted sperm do a lousy job of fertilizing their eggs.
Meanwhile, Wolbachia living in females produce an antidote that somehow restores the sperm to their full viability.

On the run

Despite these startling discoveries, few microbiologists had even heard of Wolbachia through the 1980s. "Basically, Wolbachia was thought to be an obscure bunch of bacteria that lived in just a few insects," says Werren. That obscurity, it turned out, was simply due to the fact that Wolbachia are not easily cultured outside a host and thus escape detection through traditional means. But with the advent of the polymerase chain reaction in the early 1990s, researchers were at last able to fish through animal cells for Wolbachia genes. They caught a huge harvest. Surveying insects in Panama, England, and the United States, Werren found that about 20% in all three countries were infected. "Twenty percent is definitely a minimum, if for no other reason than that I only sampled one or two individuals per species," says Werren. Indeed, other researchers have found infection rates as high as 76%. All told, Wolbachia may infect well over 1 million species of insects, and the bacteria are not limited to insect hosts: Researchers have been finding them in such disparate groups of invertebrates as millipedes, crustaceans, and mites.

When Wolbachia enter a new population, they race through it. In the 1980s, Michael Turelli of the University of California, Davis, and Ary Hoffman, now at La Trobe University in Australia, discovered a new strain of Wolbachia in fruit flies in Southern California. To their amazement, they found that the microbe was expanding across the state at a whopping 100 kilometers a year. Since then it has swept across the country and much of the world. Wolbachia spread so quickly, researchers realized, because they take control of their hosts' reproduction. And in the past decade, researchers have discovered that cytoplasmic incompatibility is only one of many tricks the bacteria use to do so.
In some species of wasps, for example, Wolbachia completely alter the host's sex life, manipulating the host to give birth only to females, which then no longer need to mate with males to reproduce. In other species, they allow males to be born but alter their hormones to feminize them and make them produce eggs. A fourth way Wolbachia can boost their reproductive success is to destroy their male hosts (and, paradoxically, themselves in the process). In a number of hosts, Wolbachia kill all of the male eggs that they infect. When the female hosts hatch, they don't have to compete with their brothers for food--in fact, their brothers are their food. By cannibalizing the male eggs, the Wolbachia-infected females increase their chances of survival.

With so many of their brethren killed off, the few males that remain can enjoy remarkable reproductive success. A species that might normally be split 50-50 between males and females may become permanently skewed toward females, as in the case of the Ugandan butterfly Jiggins studies. And because these females have only a few males to mate with, there's more reproductive payoff in being a male than a female butterfly. This situation, Jiggins suspects, may radically alter the behavior of the butterflies, driving males to be very choosy in their mates, preferring healthy females to Wolbachia-infected ones. Indeed, "uninfected females are more likely to mate," Jiggins points out. If a male chooses an infected mate, he may father few sons or none at all, thereby reducing his chances of having grandchildren.

Wolbachia may even provide clues to how species originate. New species arise when populations become isolated. Gradually, each population acquires new genes, and, if the isolation lasts long enough, those new genes make its members unable to mate with other members of their species.
In the 8 February issue of Nature, Werren and Seth Bordenstein of the University of Rochester demonstrated that Wolbachia may be able to create just this sort of isolation, as has long been suspected. The biologists looked at two closely related species of wasps--Nasonia giraulti and Nasonia longicornis--that carry two different strains of Wolbachia. Normally these two species cannot mate. But when Werren and Bordenstein cured the wasps of their Wolbachia infection, the wasps could produce healthy hybrids that could in turn produce healthy offspring of their own. The wasps are divided into two species, Werren argues, only because they carry different strains of Wolbachia. Each species carries a strain that prevents its males from fathering offspring with females of the other species. The bacteria thus create a reproductive wall between them.

Although some evolutionary biologists have suspected for more than 40 years that Wolbachia may be agents of speciation, not everyone agrees, and only recently have researchers such as Werren and Bordenstein begun to test the possibility carefully. "Every time I look further into this topic, I'm coming away with data that say it is important," says Werren. If the work holds up, Werren concedes, they will have stumbled upon a very unconventional path to speciation. Whereas geographic isolation may take thousands of years to split a species in two, Wolbachia might be able to push their hosts apart in a few generations.

Yet other researchers argue that the Nature paper does not close the case. Although the paper is "interesting," Wolbachia expert Hoffman says that "the research does not demonstrate that Wolbachia causes speciation." He points out that the two wasp species do not live side by side in nature; they might have acquired their incompatible Wolbachia strains after they were isolated.
What's more, Hoffman adds, if two strains of Wolbachia invade a host species, mathematical models suggest that one of them will often drive the other out of existence.

Wolbachia as weapon

Given the breakneck pace at which Wolbachia sweep through the invertebrate world, researchers might be able to use them to fight pests and the diseases they carry, speculate O'Neill and others (Science, 20 October 2000, p. 440). To fight malaria, for example, researchers might introduce a gene encoding resistance to Plasmodium (the protozoan that causes the disease) into Wolbachia's genome. Researchers might then infect mosquitoes with the altered Wolbachia, which could theoretically produce antibodies that block the transmission of the parasite through the insect's body. With Wolbachia's wide reach, entire populations of the insects might become resistant, says O'Neill. Other insects that might be candidates for Wolbachia infection include tsetse flies (which spread sleeping sickness) and leafhoppers (which spread viral diseases between rice plants). At this stage, however, such strategies remain speculative. It may not be possible, for example, to find a suitable antibody gene, or the gene may not do its job properly when expressed in bacteria.

Taking a different approach to fighting malaria, O'Neill and his colleagues are investigating a virulent strain of Wolbachia that infects Drosophila melanogaster and cuts the flies' life-span by up to 50%. By killing insects before they get too old, Wolbachia could be devastating to the parasites they carry, because the parasites need time to develop inside their hosts before they can infect humans. "Under certain conditions, we should be able to see 80% to 100% reductions in disease transmissions," asserts O'Neill.
He and his colleagues have already succeeded in infecting a different species of Drosophila with the virulent strain of Wolbachia in the lab--it cuts that species' life-span as well--and they're now investigating whether they can establish it in mosquitoes. A quicker approach would be to insert the virulence-producing genes directly into the Wolbachia that live in mosquitoes--if researchers could find the genes. That's one reason O'Neill's team is sequencing the approximately 1-million-base genome of the virulent strain that infects D. melanogaster; they plan to finish it this year. Because Wolbachia strains have evolved so many adaptations for manipulating their hosts, other researchers have started genome projects on six more.

Researchers also hope to use Wolbachia to battle river blindness and elephantiasis. These diseases are caused by parasitic worms (called filarial nematodes) carried by flies and mosquitoes. But unlike the pathogens behind insect-borne diseases such as malaria and sleeping sickness, these worms carry Wolbachia and depend on the bacteria for their well-being. As early as the mid-1970s, researchers knew that some sort of bacteria were living inside the worms. In 1995, researchers sequencing the genome of one filarial nematode stumbled across Wolbachia genes (Science, 19 February 1999, p. 1105). Wolbachia have now been found in almost every other species of filarial nematode. Although Wolbachia are parasites in most invertebrates, researchers suspect that they live mutualistically with nematodes. Perhaps the clearest sign that the worms derive some benefit from an infection is the fact that they suffer if their Wolbachia are wiped out by antibiotics. Onchocerca ochengi, a filarial nematode of cattle, for example, dies when its bacteria are destroyed. In other species, the females simply become sterile.
Researchers don't yet know what sort of service Wolbachia provide the worms, but they are already investigating whether they can fight filarial diseases by killing the bacteria. German researchers reported in the 8 April 2000 issue of The Lancet that when they gave the antibiotic doxycycline to people suffering from river blindness in Ghana, the worms' embryogenesis stopped. Antibiotics might prove superior to ivermectin, the drug now used to fight river blindness, say the researchers. Ivermectin kills young parasitic worms but has to be taken every 6 months, whereas one dose of antibiotics may be able to stop the worms from producing any offspring. Whether as a mutualist or a parasite, Wolbachia are proving to be among the most versatile microbes ever found. As O'Neill says, "the discoveries are accelerating so much it's hard to predict where we're going." Some new directions are likely to emerge from the forthcoming genome sequences of these master manipulators. Copyright 2001 Carl Zimmer
Magyarization (also Magyarisation, Hungarization, Hungarisation, Hungarianization, Hungarianisation) was an assimilation or acculturation process by which non-Hungarian nationals came to adopt the Hungarian (also called "Magyar") culture and language, either voluntarily or due to social pressure, often in the form of a coercive policy.

The Hungarian Nationalities Law (1868) guaranteed that all citizens of the Kingdom of Hungary (then part of the Austro-Hungarian Empire), whatever their nationality, constituted politically "a single nation, the indivisible, unitary Hungarian nation", and that there could be no differentiation between them except in respect of the official usage of the current languages, and then only insofar as necessitated by practical considerations. In spite of the law, the use of minority languages was banished almost entirely from administration and even justice. Defiance of, or appeals to, the Nationalities Law met with derision or abuse. The Hungarian language was overrepresented in the primary schools, and almost all secondary education was in Hungarian. By the end of the 19th century, the state apparatus was entirely Hungarian in language, as were business and social life above the lowest levels. The Magyarization of the towns had proceeded at an astounding rate: nearly all middle-class Jews and Germans and many middle-class Slovaks and Ruthenes had been Magyarized. The percentage of the population with Hungarian as its mother tongue grew from 46.6% in 1880 to 54.5% in 1910. Note that the 1910 census (and the earlier censuses) registered not ethnicity but mother tongue (and religion), a basis on which it is sometimes criticized. However, most of the Magyarization happened in the centre of Hungary and among the middle classes, who had access to education, and much of it was the direct result of urbanization and industrialization.
It had hardly touched the rural populations of the periphery, and linguistic frontiers had not shifted significantly from the line on which they had stabilized a century earlier.

The process also continued in the post-Trianon era. The political and cultural rights offered to interwar Hungary's ethnic minorities were more limited than their equivalents in any other country of East-Central Europe. While anyone who resisted Magyarization was, indeed, subject to political and cultural handicaps, he was not subject to the kinds of civic and fiscal tricks (prejudicial court proceedings, overtaxation, biased application of social and economic legislation) that some of Hungary's neighbors often inflicted on their ethnic minorities.

- 1 Origin of the term
- 2 In the Middle Ages
- 3 Historical context of the modern-times Magyarization
- 4 Magyarization in the Kingdom of Hungary
- 5 Migration
- 6 Jews
- 7 Notable dates
- 8 Post-Trianon Hungary
- 9 See also
- 10 References
- 11 Sources
- 12 External links

Origin of the term

The term generally applies to the policies that were enforced in the Hungarian part of Austria-Hungary in the 19th century and early 20th century, especially after the Austro-Hungarian Compromise of 1867, and in particular after the rise in 1871 of Count Menyhért Lónyay as head of the Hungarian government.

Magyarization in a broader sense

As is often the case with policies intended to forge or bolster national identity in a state, Magyarization was perceived by other ethnic groups such as the Romanians, Slovaks, Ukrainians, Serbs, and Croats as aggression or active discrimination, especially where they formed the majority of the population.

In the Middle Ages

At the time of the Magyar conquest, the Hungarian tribal alliance consisted of tribes of different ethnic backgrounds. There must have been a substantial Turkic element (e.g. the Kabars).
The subjugated local population in the Hungarian settlement area (mainly the lowland territories) quickly merged with the Hungarians. In the period between the 9th and 13th centuries more groups of Turkic peoples migrated to Hungary (Böszörménys, Pechenegs, Ouzes, Jassics, Cumans, etc.). Their past presence is visible in the occurrence of Turkic settlement names. According to one theory, the ancestors of the Székelys are Avars or Turkic Bulgars who were Magyarized in the Middle Ages. Others argue that the Székely people descended from a Hungarian-speaking "Late Avar" population or from ethnic Hungarians who received special privileges and developed their own consciousness. As a reward for their achievements in wars, noble titles were granted to some Romanian knezes. They entered the Hungarian nobility, a part of them converting to Catholicism and their families being Magyarized: the Drágffy (Drăgoşteşti), Hunyadi, Kendefi (Cândeşti), Majláth (Mailat) and Jósika families.

Historical context of the modern-times Magyarization

Joseph II (1780–90), a leader influenced by the Enlightenment, sought to centralize control of the empire and to rule it as an enlightened despot. He decreed that German replace Latin as the empire's official language. This centralization/homogenization struggle was not unique to Joseph II; it was a trend that could be observed all around Europe with the birth of the enlightened idea of the nation-state. Hungarians perceived Joseph's language reform as German cultural hegemony, and they reacted by insisting on the right to use their own tongue. As a result, Hungarian lesser nobles sparked a renaissance of the Hungarian language and culture. The lesser nobles questioned the loyalty of the magnates, of whom less than half were ethnic Magyars, and even those had become French- and German-speaking courtiers. The Magyarization policy actually took shape as early as the 1830s, when Hungarian started replacing Latin and German in education.
Magyarization lacked any religious, racial or otherwise exclusionary component; language was the only issue. The eagerness of the Hungarian government in its Magyarization efforts was comparable to that of tsarist Russification from the late 19th century. In the early 1840s Lajos Kossuth pleaded in the newspaper Pesti Hírlap for rapid Magyarization: "Let us hurry, let us hurry to Magyarize the Croats, the Romanians, and the Saxons, for otherwise we shall perish". In 1842 he argued that Hungarian had to be the exclusive language in public life. He also stated that it is impossible for one country to speak in a hundred different languages: there must be one language, and in Hungary this must be Hungarian. Zsigmond Kemény supported a multinational state led by Magyars, but he disapproved of Kossuth's assimilatory ambitions. István Széchenyi, who was more conciliatory toward other ethnic groups, criticized Kossuth for "pitting one nationality against another". He promoted the Magyarization of non-Hungarians on the basis of the alleged "moral and intellectual supremacy" of the Hungarian population, but he felt that Hungary itself must first be made worthy of emulation if Magyarization was to succeed. However, Kossuth's radical view of Magyarization gained more popular support than Széchenyi's moderate one. The slogan of the Magyarization campaign was "One country – one language – one nation".

In July 1849, the Hungarian Revolutionary Parliament acknowledged and enacted the first laws on ethnic and minority rights in the world, but it was too late: to counter the successes of the Hungarian revolutionary army, the Austrian Emperor Franz Joseph asked for help from the "Gendarme of Europe", Tsar Nicholas I, whose Russian armies invaded Hungary. The army of the Russian Empire and the Austrian forces proved too powerful for the Hungarian army, and General Artúr Görgey surrendered in August 1849.
The Magyar national reawakening in turn triggered national revivals among the Slovak, Romanian, Serbian, and Croatian minorities within Hungary and Transylvania, who felt threatened by both German and Magyar cultural hegemony. These national revivals later blossomed into the nationalist movements of the nineteenth and twentieth centuries that contributed to the empire's ultimate collapse.

Magyarization in the Kingdom of Hungary

| Time | Total population of the Kingdom of Hungary (without Croatia) | Percentage of Hungarians |
| 1910 | 18,264,533 | 54.5% (5% Jews) |

The term Magyarization is used in regard to the national policies put into use by the government of the Kingdom of Hungary, which was part of the Habsburg Empire. The beginning of this process dates to the late 18th century, and it was intensified after the Austro-Hungarian Compromise of 1867, which increased the power of the Hungarian government within the newly formed Austria-Hungary. Some minorities, such as the Jews, had little desire to be declared a national minority as in other countries; Jews in Hungary appreciated the emancipation in Hungary at a time when anti-semitic laws were still applied in Russia and Romania.

Large minorities were concentrated in various regions of the kingdom, where they formed significant majorities. In Transylvania proper (1867 borders), the 1910 census finds 55.08% Romanian-speakers, 34.2% Hungarian-speakers, and 8.71% German-speakers. In the north of the kingdom, Slovaks and Ruthenians formed an ethnic majority; in the southern regions the majority were South Slavic Croats, Serbs and Slovenes; and in the western regions the majority were Germans. The process of Magyarization did not succeed in imposing the Hungarian language as the most used language in all territories in the Kingdom of Hungary.
In fact, the profoundly multinational character of historic Transylvania was reflected in the fact that during the fifty years of the dual monarchy, the spread of Hungarian as a second language remained limited. In 1880, 5.7% of the non-Hungarian population, or 109,190 people, claimed a knowledge of the Hungarian language; the proportion rose to 11% (183,508) in 1900, and to 15.2% (266,863) in 1910. These figures reveal the reality of a bygone era, one in which millions of people could conduct their lives without speaking the state's official language. The policies of Magyarization aimed to make a Hungarian-language name a requirement for access to basic government services such as local administration, education, and justice.

Between 1850 and 1910 the ethnic Hungarian population increased by 106.7%, while the increase of other ethnic groups was far slower: Serbs and Croats 38.2%, Romanians 31.4% and Slovaks 10.7%. The Magyarization of Budapest was rapid, and it implied not only the assimilation of the old inhabitants but also the Magyarization of immigrants. In the capital of Hungary, in 1850, 56% of the residents were Germans and only 33% Hungarians; by 1910 almost 90% declared themselves Magyars. This evolution benefited Hungarian culture and literature.

According to census data, the Hungarian population of Transylvania increased from 24.9% in 1869 to 31.6% in 1910. Over the same period, the percentage of the Romanian population decreased from 59.0% to 53.8% and the percentage of the German population decreased from 11.9% to 10.7%. Changes were more significant in cities with predominantly German and Romanian populations. For example, the percentage of the Hungarian population in Braşov increased from 13.4% in 1850 to 43.43% in 1910, while the Romanian population decreased from 40% to 28.71% and the German population from 40.8% to 26.41%.
State policy and ethnic relations

The first Hungarian government after the Austro-Hungarian Compromise of 1867, the 1867–1871 liberal government led by Count Gyula Andrássy and sustained by Ferenc Deák and his followers, passed the 1868 Nationality Act, which declared that "all citizens of Hungary form, politically, one nation, the indivisible unitary Hungarian nation (nemzet), of which every citizen of the country, whatever his personal nationality (nemzetiség), is a member equal in rights." The Education Act, passed the same year, shared this view of the Magyars as simply being primus inter pares ("first among equals"). At this time ethnic minorities de jure had a great deal of cultural and linguistic autonomy, including in education, religion, and local government.

However, after education minister Baron József Eötvös died in 1871 and Andrássy became imperial foreign minister, Deák withdrew from active politics and Menyhért Lónyay was appointed prime minister of Hungary. Lónyay became steadily more allied with the Magyar gentry, and the notion of a Hungarian political nation increasingly became one of a Magyar nation. "[A]ny political or social movement which challenged the hegemonic position of the Magyar ruling classes was liable to be repressed or charged with 'treason'…, 'libel' or 'incitement of national hatred'. This was to be the fate of various Slovak, South Slav [e.g. Serb], Romanian and Ruthene cultural societies and nationalist parties from 1876 onward…" All of this only intensified after 1875, with the rise of Kálmán Tisza, who as minister of the interior had ordered the closing of Matica slovenská on 6 April 1875. As prime minister until 1890, Tisza brought in many other measures that prevented the Slovaks from keeping pace with the progress of other European nations.

For a long time, the number of non-Hungarians living in the Kingdom of Hungary was much larger than the number of ethnic Hungarians.
According to the 1787 data, the population of the Kingdom of Hungary numbered 2,322,000 Hungarians (29%) and 5,681,000 non-Hungarians (71%). In 1809, the population numbered 3,000,000 Hungarians (30%) and 7,000,000 non-Hungarians (70%). An increasingly intense Magyarization policy was implemented after 1867.

Although in Slovak, Romanian and Serbian history writing administrative and often repressive Magyarization is usually singled out as the main factor accountable for the dramatic change in the ethnic composition of the Kingdom of Hungary in the 19th century, spontaneous assimilation was also an important factor. In this regard, it must be pointed out that large territories of the central and southern Kingdom of Hungary lost their previous, predominantly Magyar population during the numerous wars fought by the Habsburg and Ottoman empires in the 16th and 17th centuries. These empty lands were repopulated, through administrative measures adopted by the Vienna court especially during the 18th century, by Hungarians and Slovaks from the northern part of the kingdom, which had avoided the devastation (see also Royal Hungary), as well as by Swabians, Serbs (Serbs were the majority in most southern parts of the Pannonian Plain during Ottoman rule, i.e. before those Habsburg administrative measures), Croats and Romanians. Various ethnic groups lived side by side (this ethnic heterogeneity is preserved to this day in certain parts of Vojvodina, Bačka and Banat). After 1867, Hungarian became the lingua franca in the interaction between ethnic communities on this territory, and individuals born in mixed marriages between two non-Magyars often developed a full-fledged allegiance to the Hungarian nation.
Since Latin was the official language until 1844 and the country was directly governed from Vienna (which excluded any large-scale governmental assimilation policy from the Hungarian side before the Austro-Hungarian Compromise of 1867), the factor of spontaneous assimilation should be given due weight in any analysis of the demographic tendencies of the Kingdom of Hungary in the 19th century.

The other key factor in the mass ethnic changes is that between 1880 and 1910 about 3 million Austro-Hungarians migrated to the United States alone. More than half of them (1.5 million+, or about 10% of the total population) were from Hungary. Besides the 1.5 million who left for the US (two-thirds of them, or about a million, were ethnically non-Hungarian), Romanians and Serbs in particular migrated in large numbers to their newly established mother states, the Principality of Serbia and the Kingdom of Romania, which proclaimed their independence in 1878. Among them were such noted people as the early aviator Aurel Vlaicu (whose face is on the 50 Romanian lei note), the writer Liviu Rebreanu (who left first illegally in 1909, then legally in 1911), and Ion Ivanovici. Many others fled to Western Europe or other parts of the Americas.

Allegation of violent oppression

Many Slovak intellectuals and activists (such as Janko Kráľ) were imprisoned or even sentenced to death during the Hungarian Revolution of 1848. One of the incidents that shocked European public opinion was the Černová (Csernova) massacre, in which 15 people were killed and 52 injured in 1907. The massacre caused the Kingdom of Hungary to lose much prestige in the eyes of the world when the English historian R. W. Seton-Watson, the Norwegian writer Bjørnstjerne Bjørnson and the Russian writer Leo Tolstoy championed the cause.
Whether the case proves the violence of Magyarization is disputed, partly because the sergeant who ordered the shooting and all the shooters were ethnic Slovaks, and partly because of the controversial figure of Andrej Hlinka. Writers who condemned forced Magyarization in printed publications were likely to be jailed, either on charges of treason or for incitement of national hatred.

Schools funded by churches and communes had the right to provide education in minority languages. These church-funded schools, however, were mostly founded before 1867, that is, in different socio-political circumstances. In practice, the majority of students in commune-funded schools who were native speakers of minority languages were instructed exclusively in Hungarian. Beginning with the 1879 Primary Education Act and the 1883 Secondary Education Act, the Hungarian state made more efforts to reduce the use of non-Magyar languages, in strong violation of the 1868 Nationalities Law. The number of minority-language schools steadily decreased: in the period between 1880 and 1913, while the number of Hungarian-only schools almost doubled, the number of minority-language schools almost halved.

Nonetheless, Transylvanian Romanians had more Romanian-language schools under Austro-Hungarian rule than there were in the Romanian Kingdom itself. In 1880, for example, there were 2,756 schools teaching exclusively in the Romanian language in the Austro-Hungarian Empire, while in the Kingdom of Romania there were only 2,505 (the Romanian Kingdom had gained its independence from the Ottoman Empire only two years before, in 1878).

The process of Magyarization culminated in 1907 with the lex Apponyi (named after education minister Albert Apponyi), which forced all primary school children to read, write and count in Hungarian for the first four years of their education. From 1909 religion also had to be taught in Hungarian.
Approximately 600 Romanian villages were left without proper schooling as a result of these laws; by 1917, 2,975 Romanian-language primary schools had been closed. The effect of Magyarization on the education system in Hungary was very significant, as can be seen from the official statistics submitted by the Hungarian government to the Paris Peace Conference (formally, all the Jewish people of the kingdom were counted as Hungarians; they had a higher ratio in tertiary education than Christians):

| | Hungarians | Romanians | Slovaks | Germans | Ruthenians | Serbs |
| --- | --- | --- | --- | --- | --- | --- |
| % of total population | 54.5% | 16.1% | 10.7% | 10.4% | 2.5% | 2.5% |
| Junior high schools | 652 | 4 | - | 6 | 3 | - |
| Science high schools | 33 | 1 | - | 2 | - | - |
| Gymnasiums for boys | 172 | 5 | - | 7 | 1 | - |
| High schools for girls | 50 | - | - | 1 | - | - |

The census suffrage system of the post-1867 Kingdom of Hungary was unfavourable to those of non-Hungarian nationality. According to the 1874 election law, which remained unchanged until 1918, only the upper 5.9% of the whole population had voting rights. That effectively excluded almost the whole of the peasantry and the working class from Hungarian political life. The percentage of those on low incomes was higher among the other nationalities than among the Magyars, with the exception of the Germans, who were generally wealthier. From a Hungarian point of view, the structure of the settlement system was based on differences in earning potential and wages: the Hungarians and Germans were much more urbanised than the Slovaks, Romanians and Serbs in the Kingdom of Hungary. In 1900, nearly a third of the deputies were elected by fewer than 100 votes, and close to two-thirds were elected by fewer than 1,000 votes. Transylvania had even worse representation: the more Romanian a county was, the fewer voters it had.
Out of the Transylvanian deputies sent to Budapest, 35 represented the 4 mostly Hungarian counties and the major towns (which together formed 20% of the population), whereas only 30 deputies represented the other 72% of the population, which was predominantly Romanian. In 1913, even the limited electorate that elected only one-third of the deputies had a disproportionate ethnic composition. The Magyars, who made up 54.5% of the population of the Kingdom of Hungary, formed a 60.2% majority of the electorate. Ethnic Germans made up 10.4% of the population and 13.0% of the electorate. The participation of other ethnic groups was as follows: Slovaks (10.7% of the population, 10.4% of the electorate), Romanians (16.1% and 9.9%), Rusyns (2.5% and 1.7%), Croats (1.1% and 1.0%), Serbs (2.2% and 1.4%), and others (2.2% and 1.4%). Officially, Hungarian electoral laws never contained any legal discrimination based on nationality or language. A high census (property qualification for voting) was not uncommon in other European countries in the 1860s, but the countries of Western Europe later gradually lowered and eventually abolished such qualifications. That never happened in the Kingdom of Hungary, although electoral reform was one of the main topics of political debate in the last decades before World War I.

The Magyarization of personal names

Hungarian authorities put constant pressure on all non-Hungarians to Magyarize their names, and the ease with which this could be done gave rise to the nickname "Crown Magyars" (the price of registration being one korona). In 1881 the "Central Society for Name Magyarization" (Központi Névmagyarositó Társaság) was founded in Budapest. The aim of this private society was to provide advice and guidelines for those who wanted to Magyarize their surnames.
Simon Telkes became the chairman of the society and professed that "one can achieve being accepted as a true son of the nation by adopting a national name". The society began an advertising campaign in the newspapers and sent out circular letters. It also proposed lowering the fee for changing one's name; the proposal was accepted by Parliament, and the fee was reduced from 5 forints to 50 krajcárs. Name changes then peaked in 1881 and 1882 (with 1,261 and 1,065 registered changes respectively) and continued in the following years at an average of 750–850 per year. During the Bánffy administration there was another increase, reaching a maximum of 6,700 applications in 1897, mostly due to pressure from authorities and employers in the government sector. Statistics show that between 1881 and 1905 alone, 42,437 surnames were Magyarized, although this represented less than 0.5% of the total non-Hungarian population of the Kingdom of Hungary. Voluntary Magyarization of German- or Slavic-sounding surnames remained a typical phenomenon in Hungary throughout the 20th century. Alongside the Magyarization of personal names and surnames, the exclusive use of the Hungarian forms of place names, instead of multilingual usage, was also common. For places that had not been known under Hungarian names in the past, new Hungarian names were invented and used in administration instead of the original non-Hungarian names. Examples of places where names of non-Hungarian origin were replaced with newly invented Hungarian names include: Szvidnik - Felsővízköz (in Slovak Svidník, now Slovakia), Najdás - Néranádas (in Romanian Naidăş, now Romania), Sztarcsova - Tárcsó (in Serbian Starčevo, now Serbia), Lyutta - Havasköz (in Ruthenian Lyuta, now Ukraine), Bruck - Királyhida (now Bruck an der Leitha, Austria).
(Bruck and Királyhida were in fact two separate towns, separated by the boundary formed by the river Leitha.) According to Hungarian statistics, and considering the huge number of assimilated persons between 1700 and 1944 (~3 million), only 340,000–350,000 names were Magyarized between 1815 and 1944; this happened mainly inside the Hungarian-speaking area. One Jewish name in 17 was Magyarized, compared with one in 139 (Catholics) to 427 (Lutherans) among Germans and one in 170 (Catholics) to 330 (Lutherans) among Slovaks. The attempts to assimilate the Carpatho-Rusyns started in the late 18th century, but their intensity grew considerably after 1867. The agents of forced Magyarization endeavored to rewrite the history of the Carpatho-Rusyns with the purpose of subordinating them to the Magyars by eliminating their national and religious identity. Carpatho-Rusyns were pressed to add Western Rite practices to their Eastern Christian traditions, and efforts were made to replace the Slavonic liturgical language with Hungarian.

The Magyarization of place names

A list of geographical names in the former Kingdom of Hungary includes place names of Slavic, Romanian or German origin that were replaced with newly invented Hungarian names between 1880 and 1918. In the list, the former official name used in Hungarian is given first, the newly invented name second, and the name as it was restored after 1918, in the proper orthography of the given language, third. During the dualism era, there was an internal migration of segments of the ethnically non-Hungarian population to the Kingdom of Hungary's central, predominantly Hungarian counties and to Budapest, where they assimilated.
The ratio of the ethnically non-Hungarian population in the Kingdom was also dropping because of their overrepresentation among migrants to foreign countries, mainly the United States. Hungarians, the largest ethnic group in the Kingdom, representing 45.5% of the population in 1900, accounted for only 26.2% of the emigrants, while non-Hungarians (54.5% of the population) accounted for 72% of the emigrants from 1901 to 1913. The areas with the highest emigration were the northern, mostly Slovak-inhabited counties of Sáros, Szepes and Zemplén, and Ung county, where a substantial Rusyn population lived. In the next tier were some of the southern counties, including Bács-Bodrog, Torontál, Temes, and Krassó-Szörény, largely inhabited by Serbs, Romanians, and Germans, as well as the northern, mostly Slovak counties of Árva and Gömör-Kishont, and the central, Hungarian-inhabited county of Veszprém. The reasons for emigration were mostly economic. Additionally, some may have wanted to avoid Magyarization or the draft, but direct evidence of motivations other than economic ones among the emigrants themselves is limited. The Kingdom's administration welcomed the development as yet another instrument for increasing the ratio of ethnic Hungarians at home. The Hungarian government made a contract with the English-owned Cunard Steamship Company for a direct passenger line from Rijeka to New York, its purpose being to enable the government to increase the business transacted through this route. By 1914, a total of 3 million people had emigrated, of whom about 25% returned. This return migration was halted by World War I and the partition of Austria-Hungary. The majority of the emigrants came from the most indigent social groups, especially the agrarian sector.
Magyarization did not cease with the collapse of Austria-Hungary but continued within the borders of post-World War I Hungary throughout most of the 20th century and resulted in a sharp decrease in the number of ethnic non-Hungarians. In the nineteenth century, the Neolog Jews were located mainly in the cities and larger towns. They arose in the environment of the later Austro-Hungarian Empire, a generally good period for upwardly mobile Jews, especially those of modernizing inclinations. In the Hungarian portion of the Empire, most Jews (nearly all Neologs and even most of the Orthodox) adopted the Hungarian language as their primary language and viewed themselves as "Magyars of the Jewish persuasion". The Jewish minority, which, to the extent that it was attracted to a secular culture at all, was usually attracted to the secular culture in power, was inclined to gravitate toward the cultural orientation of Budapest. (The same factor prompted Prague Jews to adopt an Austrian cultural orientation, and at least some Vilna Jews to adopt a Russian one.) After the emancipation of Jews in 1867, the Jewish population of the Kingdom of Hungary (as well as the ascending German population) actively embraced Magyarization, because they saw it as an opportunity for assimilation without conceding their religion. (In the case of the Jews, this process had been preceded by the Germanization carried out earlier by the Habsburg rulers.) Stephen Roth writes, "Hungarian Jews were opposed to Zionism because they hoped that somehow they could achieve equality with other Hungarian citizens, not just in law but in fact, and that they could be integrated into the country as Hungarian Israelites. The word 'Israelite' (Hungarian: Izraelita) denoted only religious affiliation and was free from the ethnic or national connotations usually attached to the term 'Jew'.
Hungarian Jews attained remarkable achievements in business, culture and, less frequently, even in politics. But even the most successful Jews were not fully accepted by the majority of Magyars as one of their own, as the events following the Nazi German invasion of the country in World War II so tragically demonstrated." However, in the 1930s and early 1940s Budapest was a safe haven for Slovak, German and Austrian Jewish refugees and a center of Hungarian Jewish cultural life. In 2006 the Company for Hungarian Jewish Minority could not collect even the 1,000 signatures needed for a petition to declare Hungarian Jews a minority, even though there are at least 100,000 Jews in the country. The official Hungarian Jewish religious organization, Mazsihisz, advised against voting for the new status, arguing that Jews identify themselves as a religious group, not as a 'national minority'. There was no real control over the process, and non-Jewish people could also sign the petition.
- 1844 – Hungarian was gradually introduced for all civil records (kept at local parishes until 1895). German became an official language again after the 1848 revolution, but the law was reversed once more in 1881. From 1836 to 1881, 14,000 families had their names Magyarized in the area of the Banat alone.
- 1849 – During the revolution of 1848–49, the Hungarian Parliament was among the first in the world to acknowledge and enact ethnic and minority rights.
- 1874 – All Slovak secondary schools (created in 1860) were closed. The Matica slovenská was also closed down, in April 1875. Its building was taken over by the Hungarian government, and the property of the Matica slovenská, which according to its statutes belonged to the Slovak nation, was confiscated by the Prime Minister's office on the grounds that, according to Hungarian law, a Slovak nation did not exist.
- 1874–1892 – Slovak children were forcibly moved into "pure Magyar districts".
Between 1887 and 1888 about 500 Slovak orphans were transferred by FEMKE.
- 1883 – FEMKE (the Upper Hungarian Magyar Education Society) was created. The society was founded to propagate Magyar values and Magyar education in Upper Hungary.
- 1897 – The Bánffy law on villages was ratified. According to this law, all officially used village names in the Hungarian Kingdom had to be in the Hungarian language.
- 1898 – Simon Telkes published the book "How to Magyarize family names".
- 1907 – The Apponyi educational law made Hungarian a compulsory subject in all schools in the Kingdom of Hungary. This also extended to confessional and communal schools, which had the right to provide instruction in a minority language as well. "All pupils regardless of their native language must be able to express their thoughts in Hungarian both in spoken and in written form at the end of fourth grade [~ at the age of 10 or 11]"
- 1907 – The Černová massacre in present-day northern Slovakia, a controversial event in which 15 people were killed during a clash between a group of gendarmes and local villagers.
A considerable number of other nationalities remained within the frontiers of post-Trianon Hungary. According to the 1920 census, 10.4% of the population spoke one of the minority languages as their mother tongue:
- 551,212 German (6.9%)
- 141,882 Slovak (1.8%)
- 23,760 Romanian (0.3%)
- 36,858 Croatian (0.5%)
- 23,228 Bunjevac and Šokci (0.3%)
- 17,131 Serb (0.2%)
The number of bilingual people was much higher; for example:
- 1,398,729 people spoke German (17%)
- 399,176 people spoke Slovak (5%)
- 179,928 people spoke Croatian (2.2%)
- 88,828 people spoke Romanian (1.1%)
Hungarian was spoken by 96% of the total population and was the mother tongue of 89%. In the interwar period, Hungary expanded its university system so that administrators could be trained to carry out the Magyarization of the lost territories in case they were regained.
In this period the Roman Catholic clergy dwelled on Magyarization in the school system even more strongly than did the civil service. The percentage and the absolute number of all non-Hungarian nationalities decreased in the following decades, although the total population of the country increased. Bilingualism was also disappearing. The main reasons for this process were both spontaneous assimilation and the deliberate Magyarization policy of the state. Minorities made up 8% of the total population in 1930 and 7% in 1941 (on the post-Trianon territory). After World War II about 200,000 Germans were deported to Germany under the decree of the Potsdam Conference. Under the forced exchange of population between Czechoslovakia and Hungary, approximately 73,000 Slovaks left Hungary. After these population movements, Hungary became an ethnically almost homogeneous country, except for the rapidly growing number of Romani people in the second half of the 20th century. After the First Vienna Award, which gave Carpathian Ruthenia to Hungary, a Magyarization campaign was started by the Hungarian government in order to remove Slavic nationalism from the Catholic churches and society. There were reported interferences in the Uzhorod (Ungvár) Greek Catholic seminary, and the Hungarian-language schools excluded all pro-Slavic students. According to Chris Hann, most of the Greek Catholics in Hungary are of Rusyn and Romanian origin, but they have been almost totally Magyarized. According to the Hungarian Catholic Lexicon, however, though the Greek Catholics in the Kingdom of Hungary in the 17th century were mostly composed of Rusyns, Ukrainians and Romanians, they also had Polish and Hungarian members. Their number increased drastically in the 17th and 18th centuries, when, during the conflict with the Protestants, many Hungarians joined the Greek Catholic Church and so adopted the Byzantine Rite rather than the Latin.
At the end of the 18th century, the Hungarian Greek Catholics themselves began translating their rites into Hungarian and started a movement to create their own diocese.
Out in the middle of the frozen Arctic and Antarctic waters are pockets of open water called polynyas (Russian for ‘ice hole’). I first ran across polynyas when I read ‘Ice Station Zebra’ by Alistair MacLean as a teenager - I understand the book was made into a movie in 1968, but I haven’t seen it. A cold war thriller, the novel centers on a nuclear submarine traveling under the Arctic ice pack on a supposed rescue mission that turns into sabotage. Getting through the ice becomes critically important to the submarine’s crew - and normal pack ice is much too thick to break through. A polynya provides the perfect way through the ice, but why are polynyas there at all? It seems paradoxical that open water can co-exist with below-freezing air temperatures. Shouldn’t the water just freeze? Polynyas form only under very special conditions. First, a physical barrier is needed to stop ice from drifting in; a point of land or an ice bridge would do the trick. Next, something must stop new ice from forming, and these mechanisms fall into two broad categories. If the forming ice is removed by some mechanical process, the result is called a mechanically forced polynya. Suitable mechanical processes include wind, currents and tides. Because ice keeps forming and then being moved away, the surface waters become extra-salty - as sea ice forms, it rejects the brine. This salty, cold water then sinks. The second type of polynya is formed by convection. Convection is a common heat-transfer process that can be found in any kitchen; it explains how a pot of water is brought to the boiling point by a heat source below. The element heats the bottom layer of water (conduction), and this water rises, heating the water further up (convection). In Arctic waters (and, I think, Antarctic waters - I haven’t been looking into what happens in the Antarctic), the lowest layer of water is quite warm, about three degrees Celsius. It stays on the bottom because it’s dense (i.e. heavy).
If a process, like tides or upwelling, brought this warmer water up to the surface, it would keep the surface waters from freezing. An added bonus when deep waters are brought to the surface is that they tend to be nutrient-rich, supporting diverse life. As with everything in nature, polynya formation is complex. Typically, polynyas form due to a combination of factors and can even create their own feedback loops.
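The density story above can be sketched with a little arithmetic. The snippet below uses a simplified *linear* equation of state for seawater - the reference values and the two coefficients are rough, illustrative assumptions on my part, not measured constants - to show why the cold, extra-salty water left behind by brine rejection sinks even though warmer water sits below it.

```python
# A minimal sketch (not an oceanographic model) of why brine rejection
# drives sinking. Colder water is denser; saltier water is denser too.
# All numbers below are rough, illustrative assumptions.

RHO0 = 1027.0        # reference density (kg/m^3) at T0, S0
T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (psu)
ALPHA = 0.2          # density decrease per degree of warming (kg/m^3 per C)
BETA = 0.8           # density increase per unit of salinity (kg/m^3 per psu)

def density(temp_c, salinity_psu):
    """Linearized seawater density: colder and saltier means denser."""
    return RHO0 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0)

# Surface water just after sea-ice formation: near freezing and extra
# salty, because the growing ice rejects brine into the water below it.
surface = density(-1.8, 36.5)

# Deeper water as described in the text: warmer (about 3 C) but of
# ordinary salinity, so it can actually be less dense than the surface.
deep = density(3.0, 35.0)

print(surface > deep)  # True: the cold, salty surface water sinks
```

Under this toy equation of state, the brine-enriched surface water comes out around 2 kg/m^3 denser than the warmer deep water, which is the feedback the text describes: ice forms, the surface gets saltier and heavier, and the water column overturns.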
Today at school, Rosa saw a boy being bullied. Other kids were in a circle around him, calling him names. Rosa knew this was wrong, but she didn’t know what to do to help this boy. She worried that if she said anything, the other kids would start bullying her. After seeing this boy getting bullied, Rosa doesn’t feel safe at school anymore. Bullying doesn’t involve only those doing the bullying and those being bullied. Bullying involves and affects the entire school community. The three main groups that are affected by bullying are the students who are bullied, the students who bully, and the witnesses or bystanders who see it happen, like Rosa. The Impact on Bullied Students Students who are bullied can develop physical symptoms such as headaches, stomach pains or sleeping problems. They may be afraid to go to school, go to the lavatory, or ride the school bus. They may lose interest in school, have trouble concentrating, or do poorly academically. Bullied students typically lose confidence in themselves. They may experience depression, low self-esteem, and suicidal thoughts or they may lash out in violent ways--the most serious being school shootings. The Impact on Students Who Bully Students who bully do not fare much better. Research shows that these students are more likely to get into frequent fights, steal and vandalize property, drink alcohol and smoke, report poor grades, perceive a negative climate at school, and carry a weapon. Long-term research has also shown that these students are at increased risk to commit crimes later in life. It’s important to note, however, that not all students who bully others have obvious behavior problems or are engaged in rule-breaking activities. Some of them are highly skilled socially and good at ingratiating themselves with their teachers and other adults. 
For this reason it is often difficult for adults to discover, or even imagine, that these students engage in bullying behavior. The Impact of Bullying on Bystanders Students who witness bullying may also be affected. They may feel guilty for not helping, or fearful that they will be the next target. Or they may be drawn into the bullying themselves and feel bad about it afterwards. All of this may gradually shift group or classroom attitudes and norms in a harsher, less empathetic direction. The Impact on the School When bullying continues and a school does not take action, the entire school climate can be affected. The environment can become one of fear and disrespect, hampering students’ ability to learn. Students may feel insecure and come to dislike school. When students don’t see the adults at school acting to prevent or intervene in bullying situations, they may feel that teachers and other school staff have little control over the students and don’t care what happens to them. The effects of bullying are so devastating and profound that over the last few years at least 37 state laws against bullying have been adopted. There have also been civil suits brought against schools and school systems over bullying incidents, some with damages in the millions of dollars. It is important to realize that, like sexual harassment and racial discrimination, some forms of bullying are illegal actions. Bullying is a serious issue that will impact the school experience of all children involved. This is why it must be taken seriously and effective measures to prevent it must be put in place. For more information about bullying or The Michigan Bullying Prevention Summit, visit http://bit.ly/mibully or call 517-694-8955 or toll free at 800-227-0824.
Use these Space Printables to create hands-on solar system learning activities for your students. You can use these cards to teach children the names of planets and additional elements from our solar system. These cards can be used in a Montessori classroom as well, and are perfect for science centers! Use these Space Cards to create hands-on activities and to teach children about planets. Your students will be excited to see the details of the different planets in our solar system! Note: the images are realistic, based on satellite images of the planets from space! If you like these, you will also love our Planets Up-Close Cards that show your students what the surface of the planets in our solar system looks like! ★8 Planets 3 Part Cards ★Sun, Moon, Pluto, Asteroid 3 Part Cards MORE SPACE ACTIVITIES Solar System and Planets Fact Book Planets Close-up Cards Solar System Addition Space Counting Cards and Activities Click the green star at the top of this page to follow my store! You will receive access to my NEW products at 50% off for the first 24 hours after a new product is uploaded! Visit my store, Welcome to Mommyhood: Montessori-Inspired Living and Learning.
Previously, I reviewed what we currently know about anole fossils – these fossils are preserved in amber, a fossilised tree sap/resin from Mexico and the Dominican Republic (like the one pictured right). Today, I want to share how I have been using high-resolution x-ray computed tomography, a.k.a. CT scanning, to look at these fossils and so peer into the past. Background to CT scanning Amber CT scanning involves x-raying an object from many angles, and then compiling these x-rays to reconstruct 3D models of the object (more detailed description here). CT scanning works when the object being scanned is made of different materials that each absorb x-rays differently. Think of a medical x-ray; skin absorbs far fewer x-rays than bone, so the two show up as different shades of grey on the developed x-ray. The inclusions in amber are usually subfossils, where organic material still remains (e.g., bone). This means there will be different materials with different x-ray absorption. Amber absorbs more x-rays than air (it has a similar density to a plastic drinks bottle), but fewer than bone. I digitally remove the amber and make a 3D model of the fossil inside. CT scanning is a great technique for studying amber specimens because the x-rays do not damage the amber (no evidence of clouding), and it can be used to see inside even the most opaque of pieces. With this method, you can see the inclusions in the amber in great detail without any destruction to the piece. Exploring Dominican amber using CT To demonstrate how CT scanning is great for amber, I shall show a famous Dominican amber anole fossil, housed at the American Museum of Natural History. It was described by Kevin de Queiroz and co-authors in 1998, and was also featured in Losos’ ‘Lizards in an Evolutionary Tree’ book. The fossil is most likely a hatchling anole (27 mm SVL). It died in a well-laid-out pose, lying on its stomach with its legs outstretched. 
I chose to examine this specimen using micro CT to see what information we could glean from such a rare piece. To the left is a 3D model of the skeleton, shown from the back, side and belly (left to right). I discovered that the inclusion in this amber fossil is remarkably well preserved – the whole skeleton is almost completely preserved, except for three breaks that cut through the lizard and take out parts of its forelimbs. Amazingly, the lizard is not squashed – it retains its 3D form as in life. CT scanning has also shown me that there’s more to these amber fossils than skeleton. When I scanned some other amber specimens kindly lent to me by private collectors of Dominican amber, I found some very exciting things. Within the amber, there are often air pockets. Usually these are just air bubbles, perhaps due to gas escaping from rotting material encased in the amber. But sometimes, the air turns out to have a shape! When the air is contained within the body cavity, the skin leaves an impression on the amber, so we can see soft tissue in remarkable detail. To the right is an anole forelimb, where in addition to bone, we see the impression of the soft tissue as an air-filled void in the amber. Also, I occasionally see interesting invertebrate inclusions, such as the ant shown far right, which are also preserved as air-filled voids in the amber. In the last few years, CT scanning amber has become increasingly popular (e.g. see here for a video of a spider in Baltic amber, and here for the surprising story of a mite that was hitching a ride on the spider!). Scientists are finding that there is a wealth of information encased in these little golden gems. Creatures that may otherwise not fossilize well can be caught in amber and preserved in remarkable detail. The future of anole fossil research certainly lies here – watch this space! If you have a lizard in amber, we’d love to hear from you! Please contact the author of this post.
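The "digitally remove the amber" step comes down to the absorption differences described above: air, amber and bone occupy distinct intensity ranges in the reconstructed volume, so the bone can be isolated by thresholding the voxels. Below is a minimal sketch in Python/NumPy on synthetic data; the intensity values and threshold are hypothetical, purely for illustration, and a real scan would need calibrated values plus a surface-extraction step such as marching cubes.

```python
import numpy as np

# Synthetic stand-in for a reconstructed CT volume. Intensity values are
# hypothetical: air darkest, amber in between (similar density to a plastic
# drinks bottle), bone brightest -- mirroring the absorption ordering above.
rng = np.random.default_rng(0)
volume = rng.normal(loc=100, scale=5, size=(64, 64, 64))          # "amber"
volume[20:40, 20:40, 20:40] = rng.normal(255, 5, (20, 20, 20))    # "bone"
volume[0:5, :, :] = rng.normal(0, 5, (5, 64, 64))                 # "air"

# "Digitally removing the amber" = keeping only voxels brighter than a
# threshold set between the amber and bone intensity peaks (hypothetical).
BONE_THRESHOLD = 180
bone_mask = volume > BONE_THRESHOLD

# bone_mask is the raw material for a 3D surface model (e.g. marching cubes).
print(f"bone voxels kept: {bone_mask.sum()} of {volume.size}")
```

With well-separated intensity peaks a single global threshold recovers exactly the "bone" region; in practice, opaque or degraded specimens may need local thresholds or manual segmentation.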
Commerce and Youth - "Labeling the World" -- CREATE Portal Using their own clothing labels, students will locate and research information about the country found on their labels. After collecting information, students will use Microsoft Excel to construct graphs. Students will also read pre-selected Web sites for information on wages and labor to help form their opinions on social issues related to imports manufactured using cheap labor. - Approaching WTO Education: How to Bring WTO into Your Classroom by Engaging Students in International Trade Disputes-- a Curriculum for Grades 6-12. Curriculum written by Global Source Education and co-developed by educators from the World Affairs Council of Seattle, the University of Washington School of Business, and the Center for International Business Education and Research at the University of Washington, November 1999. Includes introductory readings to the WTO, multiple perspectives surrounding the debate, and four classroom lessons on various controversial policies. - Asia Pacific Management News -- "The Thai Youth Market" - Child Labor.org -- Child Labor: An Information Kit for Teachers, Educators and Their Organizations Produced by the International Labor Organization. The kit describes child labor problems and solutions to them. It illustrates various techniques, media and modes, which can be used to trigger action and stimulate new ideas. These tools are not new. They have been used and tested over time and proven to be effective in various fora in a variety of programs against child labor. - Globalization and Social Responsibility: Bridging the Real World and the Classroom, Course Handbook. Compiled and written by Global Source Education, 2000. This Course Handbook was specially developed for Global Source Education's summer 2000 Teachers' Institute on Globalization and Social Responsibility in Seattle, WA. 
The resource contains source material on the WTO, child labor, the environment, military interventionism, selective purchasing laws, world music as a vehicle for engaging in global issues, and student participation in a new civics. The guide also includes two lesson plans called "Who is Making your Sneakers?" and "Coffee: Connecting Local and Global Economies". Extensive readings for both educators and students are included, as well as resources for further inquiry. - International Affairs Department -- "Lost Futures: The Problem of Child Labor" This 16-minute video for middle school students produced by the AFT includes a brief history of child labor in the United States, a description of child labor around the globe including the story of Iqbal Masih--a freed child laborer and martyr from Pakistan--and how American schools have joined in the fight to end child labor. The video is accompanied by a teacher's guide with background information, lesson plan suggestions, and additional resources. - PBS Commanding Heights: The Battle for the World Economy The purpose of this site is to promote better understanding of globalization, world trade, and economic development, including the forces, values, events, and ideas that have shaped the present global economic system. - Using the Internet to Explore Issues: Children's Rights Children deserve to know and communicate with each other about issues that are important and relevant to their lives. Although children all over the world are still suffering from their lack of rights and from their common status as property, a children's bill of rights has been written and adopted by the United Nations, and we are beginning to see why the world's young citizens would benefit from this protection. 
In this lesson, students will search through Voices of Youth to find an interview with a child worker, at least one danger that girl children face, at least one issue that children face who live in cities, an example of how war and armed conflict affect children through their artwork, and the date and purpose of The Convention on the Rights of the Child. Students will also participate in an interactive quiz on children and work.
Full Endocrine Glands of the Head and Neck Description The pineal gland is attached to the brain superior to the midbrain and posterior to the thalamus. Light striking the retinas of the eyes keeps the pineal gland inactive during the day, but in the absence of light the pineal gland produces the hormone melatonin. Melatonin has a sedative effect on the nervous system and helps to set the body’s sleep-wake cycle, known as the circadian rhythm. A small region of the brain known as the hypothalamus plays an extremely important role in the function of the endocrine system. The hypothalamus is found at the base of the brain, just above the pituitary gland. It acts as the link between the nervous system and the endocrine system by monitoring many of the body’s internal conditions and releasing hormones. Many of these hormones control the anterior pituitary gland, which in turn produces its own hormones to control the body’s functions. Releasing hormones trigger the release of specific hormones in the anterior pituitary gland, while inhibiting hormones inhibit the anterior pituitary from releasing specific hormones. The hypothalamus also produces two hormones - oxytocin and vasopressin - that are stored and released by the posterior pituitary gland. The pituitary gland is actually two distinct structures packaged together into one anatomical region. The anterior half of the pituitary gland, the adenohypophysis, is made of glandular epithelium and produces seven hormones: - Follicle-stimulating hormone (FSH) - Luteinizing hormone (LH) - Melanocyte-stimulating hormone (MSH) - Adrenocorticotropic hormone (ACTH) - Thyroid-stimulating hormone (TSH) - Prolactin (PRL) - Human Growth Hormone (hGH) Each of these hormones targets specific regions of the body, including other glands, to stimulate their metabolism. 
The posterior pituitary gland, or neurohypophysis, is made of nervous tissue and stores and releases oxytocin and vasopressin produced by the hypothalamus. Oxytocin has many functions in the body, but is mainly involved in the production of uterine contractions during childbirth and milk release from the mammary glands during breast-feeding. Vasopressin, also known as antidiuretic hormone, helps the body to retain water by inhibiting sweat glands and increasing the efficiency of the kidneys. The thyroid gland, a butterfly-shaped mass of glandular tissue in the base of the neck, performs the vital function of controlling the body’s metabolism through its hormones triiodothyronine (T3) and thyroxine (T4). Both T3 and T4 are produced in response to TSH from the pituitary gland and boost the metabolic rate of many diverse cells throughout the body. Calcitonin, another important thyroid hormone, helps to regulate the body’s calcium levels by reducing the amount of calcium ion in the blood. On the posterior side of the thyroid gland are four small masses of glandular tissue known as the parathyroid glands. These glands produce parathyroid hormone (PTH), which acts as an antagonist to calcitonin by raising calcium ion levels in the blood. PTH stimulates osteoclast cells to dissolve the solid calcium matrix of bones to release calcium ions. Calcium ions play a vital role in the contraction of muscle cells and the conduction of nerve signals in neurons that keep the body alive. Between the actions of calcitonin and PTH, the body can maintain the homeostasis of calcium in the blood and skeleton to support healthy muscles, nerves, and bones. Prepared by Tim Taylor, Anatomy and Physiology Instructor
As the Civil War drew to a close, Congress and the President turned their attention to plans for rebuilding and readmitting Southern states into the Union. In his “Proclamation of Amnesty and Reconstruction,” issued on December 8, 1863, President Abraham Lincoln detailed his plan for Reconstruction. Known as the 10 percent plan, it offered readmission to Confederate states if 10 percent of eligible voters agreed to an oath of allegiance to the Constitution and the Union and to abide by the emancipation of slaves. Many in Congress, particularly the faction known as Radical Republicans, found Lincoln’s plan too lenient. This group advocated a much harsher approach, treating Confederate states as conquered provinces that had forfeited their civil and political rights and that would revert to territorial status after the war. Their response was the Wade-Davis Reconstruction Bill, introduced in the House on February 15, 1864. Co-sponsored by Representative Henry Winter Davis of Maryland and Senator Benjamin Wade of Ohio, it required that 50 percent of eligible voters swear an oath to support the Constitution before state governments were recognized as members of the Union. The bill passed at the close of the congressional session in July 1864, but Lincoln defeated it through use of the pocket veto.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2000 September 19 Explanation: In the depths of the dark clouds of dust and molecular gas known as M17, stars continue to form. Visible in the above recently released representative-color photograph of M17 by the New Technology Telescope are clouds so dark that they appear almost empty of near infrared light. The darkness of these molecular clouds results from background starlight being absorbed by thick carbon-based smoke-sized dust. As bright massive stars form, they produce intense and energetic light that slowly boils away the dark shroud. M17's unusual appearance has garnered it such nicknames as the Omega Nebula, the Horseshoe Nebula, and the Swan Nebula. M17, visible with binoculars towards the constellation of Sagittarius, lies 5000 light-years away and spans 20 light-years across. Authors & editors: Jerry Bonnell (USRA) NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/GSFC & Michigan Tech. U.
Climate change: Pacific Ocean acidity dissolving shells of key species In a troubling new discovery, scientists studying ocean waters off California, Oregon and Washington have found the first evidence that increasing acidity in the ocean is dissolving the shells of a key species of tiny sea creature at the base of the food chain. The animals, a type of free-floating marine snail known as pteropods, are an important food source for salmon, herring, mackerel and other fish in the Pacific Ocean. Those fish are eaten not only by millions of people every year, but also by a wide variety of other sea creatures, from whales to dolphins to sea lions. If the trend continues, climate change scientists say, it will imperil the ocean environment. “These are alarm bells,” said Nina Bednarsek, a scientist with the National Oceanic and Atmospheric Administration in Seattle who helped lead the research. “This study makes us understand that we have made an impact on the ocean environment to the extent where we can actually see the shells dissolving right now.” Scientists from NOAA and Oregon State University found that in waters near the West Coast shoreline, 53 percent of the tiny floating snails had shells that were severely dissolving — double the estimate from 200 years ago.
SARS-CoV-2 Genome Mutations Impact Global Health Outbreaks of COVID-19 infection spread while SARS-CoV-2, the virus that causes COVID-19, evolves and circulates around the world. Given time, genetic mutations occur in the genomes of all known human viruses. For example, mutations occur rapidly in common viruses such as influenza, HIV, and hepatitis C. The high mutation rate contributes to a virus' ability to quickly adapt to changes in its environment. Over the past year, thousands of mutations have occurred within the SARS-CoV-2 genome. The genetic mutation rate of this virus is about one substitution per 1,000 nucleotide sites per year, according to a recent study published in the journal Virus Research. This rate is slightly lower than those of influenza and HIV, for example, explains Joseph Yao, M.D., a Mayo Clinic researcher, in a recent scientific publication in Clinical Microbiology Newsletter. This virus mutated in previously uninfected humans and animals, resulting in altered viral replication and host-to-host transmission, Dr. Yao says. "When strains of a virus with genetic sequences containing a set of commonly shared mutations are sufficiently different from the parent viral strain, they are designated as a new viral variant ― for example, delta and omicron ― whether or not these mutations cause observable differences in viral behavior," Dr. Yao says in his review. "Most viral variants do not pose a health risk and would not necessitate public health actions." Scientists continue to study mutations in the viral spike protein of SARS-CoV-2, which is responsible for binding to the host cell receptor. The mutation types include: - Silent mutations These mutations affect only the RNA sequence and not the viral proteins. With little or no ability to change the viral proteins or the virus' behavior, no substantial clinical effects occur because of silent mutations. However, silent mutations can interfere with diagnostic tests designed to detect viral RNA. 
- Selective advantage mutations These mutations are advantageous to the virus for survival or reproduction, perhaps by evading the host immune system after a natural infection or a vaccination, by enhancing viral transmission, or by affecting host interactions. SARS-CoV-2 variants raise concerns that current vaccines, therapeutic monoclonal antibody therapies, and testing need further investigation. Factors of concern include: - Increased mortality and transmissibility One study demonstrated an increased risk of death by 28 days postinfection when infections with the B.1.1.7 variant of concern were compared with infections not caused by this variant. One study suggested increased infection rates attributable to the B.1.351 variant among people who were fully vaccinated. - Eluding the immune response The U.K. (B.1.1.7), South Africa (B.1.351), and Brazil (P.1) variants have multiple changes in the spike protein that help them elude immune responses, according to an article in Nature. - Enhanced replication One study found that the B.1.1.7 mutation may hinder the efficiency of existing vaccines and be better able to spread through a population with higher levels of immunity due to infection or vaccination. Scientists rank the threat and categorize the variant according to risk, as follows: - Variant of interest Requires further investigation. - Variant of concern Demonstrates the potential for increased risk in lab studies, but lacks clinical evidence proving that the risk is increased. - Variant of high consequence This is the highest threat level. These variants would have strong evidence that prevention and medical countermeasures will not be as effective. A variant of high consequence might cause a failure of diagnostic tests, a reduction in vaccine efficacy, an unusually high number of vaccine breakthrough cases, or low vaccine protection against severe disease. 
In other words, variants of high consequence could have reduced susceptibility to multiple therapies, and lead to increased disease severity, increased hospitalizations or evasion of testing methods. Fortunately, no SARS-CoV-2 variant is currently classified as a variant of high consequence. Identification of a new variant of high consequence would trigger the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) to create new strategies to prevent or contain the transmission, and, if needed, recommendations to update treatments and vaccines. Common SARS-CoV-2 identified variants of concern include: - B.1.1.7 lineage/alpha This strain emerged in the U.K. in September 2020 and spread through many countries, including the U.S., where it was first discovered in December 2020. The variant is associated with increased transmissibility, and increased risk of hospitalization and death, compared to other strains. - B.1.351 lineage/beta This strain emerged independent of B.1.1.7 in South Africa, but it shares some mutations with the U.K. strain. Multiple reports found that vaccine-induced antibodies could not bind to or neutralize this variant as well as prior variants. No evidence suggests that this variant affects disease severity. However, it is associated with a selection advantage, meaning it is likely more transmissible. It was detected in the U.S. in January. - B.1.617.2 lineage/delta This variant emerged in India in December 2020. Because of its increased transmissibility, it was found throughout the world, including the U.S., within months. This strain may be more than twice as contagious as previous strains, cause more severe illness and death, and create more breakthrough cases in people who have been vaccinated, according to the CDC. From the end of August until Dec. 4, the delta variant represented more than 99% of the cases in the U.S., according to the CDC's "Nowcast." By the week ending Dec. 
11, though, the delta variant percentage dropped to 87% as the omicron variant accounted for nearly 13% of cases. By the week ending Dec. 18, the omicron variant accounted for 73% of cases while the delta variant accounted for only 27%. - P.1 lineage/gamma First identified in Brazil, this strain was detected in the U.S. in January. Evidence suggests that some of the mutations in the P.1 variant may affect its transmissibility and its immunity to a vaccine. The mutations may affect the ability of antibodies to recognize and neutralize the virus. The variant's emergence and association with a higher viral density raised concerns about a potential increase in transmissibility and reinfection. - B.1.427 and B.1.429 These strains originated in California in October 2020. Before mutating, the lineage likely emerged from New York via Europe early in 2020, according to a recent study in JAMA. It may be 20% more transmissible than common strains, the JAMA study suggests. Some COVID-19 treatments may not work well against the variants. Vaccines are still effective against both strains. - B.1.1.529 lineage/omicron This variant was first identified on Nov. 11, 2021, according to the WHO. Because this strain has just emerged, scientists are still learning about its potential dangers. Preliminary reports suggest it is more transmissible and has a greater ability to evade preexisting immunity due to vaccination or prior infection. No current evidence suggests that it will cause more severe illness, the CDC reports. Because this is a new strain, however, the CDC says it is likely to cause more breakthrough cases in previously vaccinated populations. This variant went from less than 1% of cases in the U.S. before Dec. 4, to 73% by the week ending Dec. 18, according to the CDC. 
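As a quick sanity check on those CDC "Nowcast" shares: when one variant displaces another, the odds of a case being the new variant grow roughly exponentially, so the jump from about 13% (week ending Dec. 11) to 73% (week ending Dec. 18) pins down a growth rate. The logistic framing below is our own back-of-the-envelope illustration, not from the article:

```python
import math

# Omicron share of U.S. cases, from the CDC "Nowcast" figures quoted above.
share_dec11 = 0.13   # week ending Dec. 11
share_dec18 = 0.73   # week ending Dec. 18
days = 7.0

# Competing variant shares follow logistic replacement, so compare the
# *odds* (omicron cases per non-omicron case) one week apart.
odds_11 = share_dec11 / (1.0 - share_dec11)
odds_18 = share_dec18 / (1.0 - share_dec18)
weekly_growth = odds_18 / odds_11                          # roughly 18x
doubling_days = days * math.log(2) / math.log(weekly_growth)

print(f"odds grew {weekly_growth:.0f}x in one week; "
      f"doubling time ~{doubling_days:.1f} days")
```

An odds doubling time under two days is consistent with the text's description of omicron overtaking delta within a couple of weeks.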
Identifying the circulating and prevailing variants of SARS-CoV-2 that may be associated with increased infectivity, transmission or severity of infection in a given community or geographic region is essential for resource planning and for effective mitigation measures to reduce the risk of infection. In the clinical domain, analyzing SARS-CoV-2 sequences could improve the care of an individual patient if variants of high consequence emerge. Dr. Yao is the senior author of the scientific publication in Clinical Microbiology Newsletter. The first author is Blake Buchan, Ph.D., a researcher at the Medical College of Wisconsin in Milwaukee. - Adam Harringa, December 21, 2021
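The quoted mutation rate (about one substitution per 1,000 sites per year) can be turned into an intuition for how fast whole genomes drift apart. Assuming the widely cited SARS-CoV-2 genome length of roughly 30,000 nucleotides (a rounded figure added for illustration, not from the article):

```python
# Back-of-the-envelope: expected substitutions per SARS-CoV-2 genome per year.
# Both inputs are rounded: the per-site rate from the study quoted above, and
# the ~30 kb genome length (an assumption added for illustration).
rate_per_site_per_year = 1.0 / 1000.0   # substitutions per site per year
genome_length_nt = 30_000               # approximate SARS-CoV-2 genome size

subs_per_genome_per_year = rate_per_site_per_year * genome_length_nt
print(subs_per_genome_per_year)  # about 30 substitutions per genome per year
```

That is on the order of a couple of changes per month, which is why variant lineages accumulate enough distinguishing mutations within a single year to be named and tracked.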
Bird of The Week: Galapagos Penguin SCIENTIFIC NAME: Spheniscus mendiculus IUCN STATUS: Endangered HABITAT: Endemic to the Galápagos Archipelago of Ecuador The Galapagos Penguin is the smallest South American penguin, and the only one to live near the equator. It shares the Galápagos Archipelago with other seabirds such as the Galapagos Petrel and Waved Albatross. It is the most northerly-breeding penguin species in the world; in fact, a small part of the population actually lives just north of the equator. This species is closely related to the Magellanic, Humboldt, and African Penguins, which are found further south, although none of these are Antarctic. The Galapagos Penguin has a number of physical and behavioral adaptations that help it keep cool. Its small size — no more than 20 inches in height — allows it to squeeze into small caves and crevices to hide from the strong equatorial sun. In addition, bare patches on its face and behaviors such as panting and standing with flippers extended also help it to release heat. Approximately 95 percent of the Galapagos Penguin’s population is found on Isabela and Fernandina, two islands in the western part of the archipelago. There, cool waters of the Humboldt and Cromwell Currents well up and sweep along the shores, nourishing a high density of fish prey that sustains this species year-round. The Galapagos Penguin’s call is a distinctive honking bray, given mainly on the breeding grounds. This vocalization helps individuals to identify both their mates and their chicks. Galapagos Penguins breed in loose colonies in the cracks and caves of the islands’ lava flows. They mate for life, and reinforce their pair-bonds through behaviors such as mutual preening and bill dueling. These penguins are opportunistic breeders, nesting when food is plentiful — probably an adaptation to their unpredictable environment. If conditions are favorable, Galapagos Penguins will breed two to three times per year. 
Once mated, the female lays up to two eggs, and both parents help with incubation. One parent is always present to incubate while the other goes out to forage. After the chicks have hatched, one parent continues to remain at the nest until the chicks are around three weeks old. Both parents then head out to sea, bringing back food for their rapidly growing and always ravenous chicks. Like the Peruvian Diving-petrel, Inca Tern, and many other equatorial seabirds and other marine creatures, the Galapagos Penguin relies upon the cool temperatures of the Humboldt and Cromwell Currents to provide a rich supply of prey year-round. This penguin feeds close to shore on small fish such as sardines, mullet, and anchovies. It often hunts by diving down over 90 feet, below fish schools, then grabbing its prey as it rises to the surface. It also picks off stray fish. The penguin’s attacks from below push fish schools close to the surface, creating feeding opportunities for other birds such as the Brown Pelican, Brown Noddy, and Flightless Cormorant, another interesting bird found only on the Galápagos Islands. Recovering Penguin Populations The main threat facing this unique penguin is the increasing frequency of El Niño Southern Oscillation (ENSO) events, perhaps due to or exacerbated by climate change. These events reduce food availability and lead to low reproduction or starvation of colonies. Other threats to the Galapagos Penguin include drowning after entanglement in gillnets; oil spills; predation by introduced cats; and avian malaria, which is carried by mosquitoes brought to the Galápagos by humans in the 1980s. ABC’s Seabird Program works to address threats faced by the Galapagos Penguin and other ocean-going birds, including puffins and the Laysan Albatross and Pink-footed Shearwater. One of the main challenges is eliminating and reducing risks posed by fisheries. 
Since the entire Galapagos Penguin population is found within the Galápagos National Park and Marine Reserve, it is annually monitored by park biologists and rangers, who also work to control introduced predators. A program that provides artificial nest sites, begun in 2010, has shown some success and may help maintain this species’ population. SOURCE: American Bird Conservancy (abcbirds.org)
‘Snowball Earth’—a problem for the supposed origin of multicellular animals Many uniformitarian scientists believe that about five major periods, and several short periods, of glaciation have occurred on Earth.1 In the evolutionary time scale, these ice age periods sometimes lasted several hundred million years and extended back 2–3 billion years ago. These supposed ice ages have been interpreted from till-like rocks2 and other apparent glacial signatures observed within sedimentary rocks around the world. One such ice age is called the Neoproterozoic, or Late Precambrian, and is thought to have started about 950 million years ago and ended about 520 million years ago.3 During this 430-million-year period, according to evolutionary time, there were several long ‘glacial’ and ‘interglacial’ periods. ‘Snowball Earth’ hypothesis Based on early paleomagnetic studies, evolutionists deduced that most Precambrian ‘ice ages’, including the one about 2.5–2.2 billion years ago, extended as far south as the equator.4 This radical proposal caused many scientists to question the paleomagnetic results, mainly because it is easy to remagnetize rocks. After many paleomagnetic measurements and several decades (e.g. Sohl, Christie-Blick and Kent5), the idea of an equatorial ice sheet, implying a completely glaciated Earth, has become widely accepted. Kerr writes: This is the ‘snowball Earth’ hypothesis. John Crowell, one of the chief investigators of supposed ancient ice ages, had been skeptical of the paleomagnetic measurements for several decades, but has now accepted the measurements. There are several major problems with the idea that ice sheets reached the tropics at low elevation. One problem is that, once ice and snow covered the entire Earth, a frozen Earth would maintain itself indefinitely by ice-albedo positive feedback. Ice and snow have a high albedo, which causes most of the solar radiation to be reflected back to space. 
Without atmospheric warming, the temperature of the Earth would plummet far below freezing and the frozen condition would become very stable. So, a catastrophic climatic event would be required to melt a ‘snowball Earth’.

How could life have survived?

The Cambrian period and its supposed ‘explosion’ of life occurred around 550 million years ago.7 This means that the worldwide Neoproterozoic ice age was raging during, or just at the end of, the time when multicellular life exploded over the Earth. The origin of multicellular life would have occurred earlier, at the beginning of the supposed ice age, since some metazoan life occurs between 1,000 and 700 million years ago according to their time scale.8 The origin of life itself has already been pushed back to over 3 billion years ago. So, it seems that evolutionists now have a serious problem with the supposed evolution of multicellular life. Kerr asks: ‘How could life have survived … in a world in which the average surface temperature would have hovered around –50°C, not to mention the all-encompassing sea ice that would average a kilometer thick compared to the Arctic Ocean’s few metres?’6 In a later article, he asks: ‘How could early life have weathered such a horrendous environmental catastrophe without suffering a mass extinction? … How could algae and perhaps even early animals have survived 10 million years sealed off by globe-girdling ice?’9 Hyde et al. reinforce this concern: ‘But this period was a critical time in the evolution of multicellular animals, posing the question of how early life survived under such environmental stress.’8 It seems that evolutionists are caught in a bind.

The problem of the cap carbonates

Now that most geologists have accepted that the Earth was covered with snow and ice while multicellular life was evolving, another perplexing problem needs to be explained. This is the problem of the cap carbonates, which have a high amount of dolomite.
The cap carbonates are interpreted as warm-water rocks because dolomite requires hot water to precipitate from solution. These rocks are very common directly above the Neoproterozoic ‘ice age’ deposits, sometimes with a knife-sharp contact.10 The textures of the cap carbonates often indicate rapid precipitation from warm seas saturated with carbonate.11 Hoffman and Schrag state the significance of such an abrupt transition to the cap carbonates: ‘But the transition from glacial deposits to these “cap” carbonates is abrupt and lacks evidence that significant time passed between when the glaciers dropped their last loads and when the carbonates formed.’12 A ‘snowball Earth’ followed by a rapid hothouse is considered doubly bizarre by some geologists.11 Even stranger is the fact that the hothouse existed before and during the ‘ice ages’, based on the distribution of other carbonates associated with the ‘ice age’ deposits. Carbonates are located below Late Precambrian ‘ice age’ deposits, and in Scotland carbonates, including dolomite, are interlayered within ‘glacial’ deposits.13 Carbon isotope ratios in the cap carbonates also appear to reinforce the idea that practically all life died out during the ‘ice age’.14 Uniformitarian scientists used to say that the carbonates associated with ‘glacial’ deposits were ‘cold-water’ carbonates, citing evidence from patches of biogenic carbonate that form in cold water today.15 This was obviously a dodge. Now, they are simply accepting the temperature implications of these cap carbonates at face value and postulating a ‘hothouse’ immediately after the ‘glaciation’.

The freeze-fry model

Evolutionists are back to the drawing board in trying to explain how life supposedly blossomed while such overpowering catastrophes were taking place. Hoffman and Schrag12 have proposed a radical hypothesis that they believe explains the oscillating freeze-fry climate, as well as the mystery of the origin and evolution of multicellular life.
Hoffman and Schrag agree that ‘snowball Earth’ would have been an ice age catastrophe of monumental proportions: ‘Dramatic as it may seem, this extreme climate change [Late Cenozoic ice age] pales in comparison to the catastrophic events that some of our earliest microscopic ancestors endured around 600 million years ago. Just before the appearance of recognizable animal life, in a time period known as the Neoproterozoic, an ice age prevailed with such intensity that even the tropics froze over.’16 They say only geothermal heat kept the oceans from freezing clear to the bottom, leaving all but a tiny fraction of the planet’s microscopic organisms to die. Only the heat near hydrothermal vents kept patches of life going. Hoffman and Schrag grudgingly agree to the necessity of a hothouse immediately following this catastrophe: ‘To confound matters, rocks known to form in warm water seem to have accumulated just after the glaciers receded.’17 They then propose a hypothesis that produces the ‘snowball Earth’ followed by its rapid reversal to a hothouse. It is this hothouse, they claim, that subsequently caused the rapid diversification of multicellular life. Suddenly, the bizarre sequence of events is ‘expected’.18 This is just one example out of hundreds of the incredible plasticity and unfalsifiability of the evolutionary/uniformitarian paradigm. How does Earth transform from a snowball into a steam bath? In storytelling suspense typical of evolutionary scenarios, Hoffman and Schrag12 explain that volcanoes popped through the ice and belched life-saving carbon dioxide. The extra carbon dioxide in the atmosphere supposedly caused a super greenhouse effect.
But another problem arises: the hothouse is also perilous to life: ‘Any creatures that survived the icehouse must now endure a hothouse’.16 If one such freeze-fry episode seems fantastic, this scenario supposedly repeated itself four times during the Late Precambrian and at least once during the Mid Precambrian.4

Origin of banded-iron formations

The freeze-fry model is also supposed to solve another great mystery of geology—the origin of banded-iron formations.11,19 In the freeze-fry story, millions of years of ice cover would deprive the oceans of oxygen, causing iron from hydrothermal vents to become soluble in the ocean water. Once the ice melted, oxygen would mix into the ocean and cause the iron to precipitate. However, if the oceans lost their oxygen, how could life survive around those deep-sea vents? Another problem is that banded-iron formations not only follow ‘ice ages’, as predicted by the theory, but are also mixed down into the ‘glacial’ deposits.13 Complicating the issue even more, there are no banded-iron formations after the Late Precambrian ‘ice age’. Expanding their theory, Hoffman and Schrag apply the freeze-fry model to future climates, predicting dire consequences of the global warming that is assumed to result from increased carbon dioxide today: ‘Certainly during the next several hundred years, we will be more concerned with humanity’s effects on climate as the earth heats up in response to carbon dioxide emissions … but could a frozen world be in our more distant future?’20 Climate modellers used to pay no attention to the snowball Earth hypothesis. However, now that they believe it is ‘proved’, they have attempted to model it with computer climate simulations. Interestingly, many of the modelling efforts have had problems producing a totally glaciated Earth.9 For instance, the model of Hyde et al.
failed to produce a ‘snowball Earth’.8 However, their model does provide hope for multicellular life in another way—by keeping areas of open water at the equator. Yet Schrag and Hoffman21 do not believe that the ‘slushball Earth’ model of Hyde et al. agrees with the geologic and paleontologic data. The geological record supposedly indicates that the oceans were completely sealed off, or close to it, say the proponents of ‘snowball Earth’.9 Neither do Hyde et al. agree with ‘snowball Earth’, pointing out many serious problems.22 One difficulty computer modellers encounter is generating enough carbon dioxide to melt the ice, as demonstrated by Hoffman and Schrag.14 In order to reverse the ‘snowball Earth’, the concentration of carbon dioxide in the atmosphere would need to be 350 times the current atmospheric concentration.23 This is a tough challenge for volcanoes, which are more likely to cause cooling by volcanic ash and aerosols than warming by carbon dioxide.24 Models are of course imperfect,9 so proponents of ‘snowball Earth’ believe the models are wrong and need to be adjusted. Kirschvink et al.25 claim some models do predict runaway glaciation, with pack ice becoming 500–1,500 m thick, at least for the supposed ice age that occurred about 2.4 billion years ago. They also believe the melting of the ice in the Late Precambrian supplied the ‘trigger’ for the evolution of multicellular organisms.26

Painted into a corner

It seems as though evolutionists have painted themselves into a corner with their ‘snowball Earth’ hypothesis. They have several near-impossible problems to solve—all at a time when multicellular life was supposedly exploding just before or during the Cambrian explosion. This time, they have two catastrophes to work into their scenario. But evolutionists always seem to have another hypothesis to add when boxed into a corner.
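The scale of the 350-fold carbon dioxide figure quoted above can be seen with a quick back-of-envelope calculation. The present-day concentration used here is an assumed round value (roughly 370 ppm around the year 2000), for illustration only:

```python
# Back-of-envelope check on the claim that reversing a 'snowball
# Earth' would require ~350 times today's carbon dioxide level.
# The present-day concentration is an assumed round value.
current_co2_ppm = 370
required_ppm = 350 * current_co2_ppm

# 10,000 ppm = 1% of the atmosphere by volume
required_percent = required_ppm / 10_000

print(required_ppm)      # 129500
print(required_percent)  # 12.95, i.e. about 13% of the atmosphere
```

An atmosphere that is roughly one-eighth carbon dioxide by volume makes clear why this is described as a tough challenge for volcanoes.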
If only they realized it, the solution to the crazy freeze-fry idea is to challenge the glacial interpretation of the rocks in question. However, mainstream scientists have been unable to abandon their ancient ice age story, so they are stuck with their ‘weird and bizarre’ freeze-fry world, terms used by Kerr.6 For creationists, the rocks and their associated ‘glacial diagnostic features’ can be explained very easily. They are the result of gigantic submarine landslides in a warm ocean that was precipitating carbonates in the early part of the Genesis Flood.15

- Crowell, J.C., Pre-Mesozoic ice ages: their bearing on understanding the climate system, Geological Society of America Memoir 192, Geological Society of America, Boulder, 1999.
- Till is a mixture of rocks of all sizes within a fine-grained matrix. It is common over the surface of the mid- and high-latitude continents. Till is associated with a post-Flood rapid ice age that uniformitarian geologists call the Late Cenozoic or Pleistocene ice age. See: Oard, M.J., An Ice Age Caused by the Genesis Flood, Institute for Creation Research, El Cajon, 1990.
- Crowell, Ref. 1, p. 45.
- Oard, M.J., Another tropical ice age? Journal of Creation 11(3):259–261, 1997.
- Sohl, L.E., Christie-Blick, N. and Kent, D.V., Paleomagnetic polarity reversals in Marinoan (ca. 600 Ma) glacial deposits of Australia: implications for the duration of low-latitude glaciation in Neoproterozoic time, Geological Society of America Bulletin 111:1120–1139, 1999.
- Kerr, R.A., An appealing snowball Earth that’s still hard to swallow, Science 287:1734–1736, 2000; p. 1734.
- Gould, S.J., Time’s Arrow, Time’s Cycle, Harvard University Press, Boston, 1987.
- Hyde, W.T., Crowley, T.J., Baum, S.K. and Peltier, W.R., Neoproterozoic ‘snowball Earth’ simulation with a coupled climate/ice-sheet model, Nature 405:425–429, 2000; p. 425.
- Kerr, R.A., A refuge for life on snowball Earth, Science 288:1316, 2000.
- Kennedy, M.J., Christie-Blick, N. and Sohl, L.E., Are Proterozoic cap carbonates and isotopic excursion a record of gas hydrate destabilization following Earth’s coldest intervals? Geology 29(5):443–446, 2001.
- Kerr, Ref. 6, p. 1735.
- Hoffman, P.F. and Schrag, D.P., Snowball Earth, Scientific American 282(1):68–75, 2000; p. 73.
- Kerr, Ref. 6, p. 1736.
- Runnegar, B., Loophole for snowball Earth, Nature 405:403–404, 2000; p. 403.
- Oard, M.J., Ancient Ice Ages or Gigantic Submarine Landslides, Creation Research Society Monograph 6, St Joseph, pp. 28–31, 1997.
- Hoffman and Schrag, Ref. 12, p. 68.
- Hoffman and Schrag, Ref. 12, p. 69.
- Hoffman and Schrag, Ref. 12, pp. 73–75.
- Oard, M.J., Could BIFs be caused by the fountains of the great deep? Journal of Creation 11(3):261–262, 1997.
- Hoffman and Schrag, Ref. 12, p. 75.
- Schrag, D.P. and Hoffman, P.F., Life, geology and snowball Earth—reply, Nature 409:306, 2001.
- Hyde, W.T., Crowley, T.J., Baum, S.K. and Peltier, W.R., Life, geology and snowball Earth, Nature 409:306, 2001.
- Hoffman and Schrag, Ref. 12, p. 72.
- Oard, Ref. 2, pp. 33–38.
- Kirschvink, J.L., Gaidos, E.J., Bertani, L.E., Beukes, N.J., Gutzmer, J., Maepa, L.N. and Steinberger, R.E., Paleoproterozoic snowball Earth: extreme climatic and geochemical global change and its biological consequences, Proc. Nat. Acad. Sci. USA 97(4):1400–1405, 2000; p. 1400.
- Kirschvink et al., Ref. 25, p. 1403.
What Are Fugitive Emissions? Definition and Impact

By Liz Allen, marine biologist, environmental regulation specialist, and science writer. Fact checked by Elizabeth MacLennan on October 5, 2021.

Fugitive emissions are gases and vapors accidentally released into the atmosphere. Most fugitive emissions come from industrial activities, like factory operations. These emissions contribute to climate change and air pollution. Some fugitive emissions, like the release of ethylene oxide from medical sterilization facilities, pose a significant health risk to people living nearby. Other fugitive emissions, like methane unintentionally released by the oil and gas industry, add a greenhouse gas to the atmosphere that is over 25 times stronger than carbon dioxide. In the United States, fugitive emissions are primarily regulated by the Environmental Protection Agency (EPA) under the Clean Air Act.

Types of Fugitive Emissions

Fugitive emissions come in many forms, including dust, fine particles, and aerosols. Of these, the most environmentally impactful fugitive emissions are greenhouse gases, such as refrigerants and methane.

Dust

[Image: Water is sprayed onto unpaved areas to prevent vehicles from kicking up dust.]

Dust, or fine particles of soil and other organic material, is unintentionally released by driving on unpaved roads, tilling agricultural fields, and heavy construction operations.
Once kicked up, dust can contribute to air pollution. Fugitive dust can cause people to have difficulty breathing, chronic respiratory illness, and lung disease. It can also increase the risk of traffic accidents by reducing visibility, and reduce agricultural productivity by blocking sunlight. In the United States, the arid and semi-arid areas of the southwest are especially at risk of releasing fugitive dust from ongoing development. On construction sites, dust can be managed by frequently wetting unpaved areas. When wet, fine particles on the ground are too heavy to be kicked up during the operation of construction machinery. In agriculture, dust can be reduced by planting cover crops, irrigation, reducing the frequency of tilling, and combining tractor operations.

CFCs

[Image: Air conditioning systems use refrigerants, which can be released as fugitive emissions.]

Various types of chlorofluorocarbons, or CFCs, were commonly used in the 20th century as refrigerants. The production of CFCs was banned in the United States and in many countries around the world in the 1990s. However, the accidental release of these environmentally damaging chemicals continues today through the ongoing use of CFCs in outdated equipment and the use of recycled CFCs in fire suppression systems. In 2012, there was an unexpected and persistent increase in global emissions of one particular type of CFC, CFC-11, which contributes a quarter of all ozone-depleting chlorine that reaches the stratosphere. International efforts to reduce the fugitive release of CFCs led to rapid declines in CFC emissions in 2019 and 2020.

Nebulizers

[Image: Some of the aerosolized medicine delivered by nebulizers can escape into the surrounding air as fugitive emissions.]

Various aerosols commonly used in modern medicine result in fugitive emissions. One source of these emissions is nebulizers, which deliver aerosolized drugs to patients' lungs.
Nebulizers are primarily used to treat respiratory diseases. However, in the process of delivering these aerosols to a patient, some escape. These fugitive emissions can remain in the surrounding air for several hours, putting nearby people at risk of accidentally inhaling medication.

Oil and Gas

[Image: The natural gas wells created by fracking are an important source of fugitive methane emissions.]

Oil and gas wells are a substantial source of fugitive emissions. In 2018, a natural gas well in Ohio operated by a subsidiary of ExxonMobil leaked millions of cubic feet of methane into the atmosphere over the course of twenty days. This massive release of fugitive emissions was detected by a satellite's routine global survey, the first such leak to be detected using satellite technology. Methane leaks have become more common with the United States' shift from coal to natural gas, the latter of which produces fewer greenhouse gas emissions when burned. However, the accidental release of methane during natural gas extraction may counteract natural gas's emissions advantage over coal. Additional fugitive emissions come from the industry's abandoned wells: uncapped wells are known to release methane into the atmosphere well after they close, and in some cases methane escapes from poorly or improperly sealed wells.

Ethylene Oxide

Ethylene oxide is used to manufacture a variety of chemicals, like plastics, textiles, and antifreeze, and is used to sterilize foods, spices, and medical equipment. Since the 1980s, ethylene oxide has been known to cause cancer in animals, based on studies conducted on mice and rats. It is considered a known carcinogen by the US EPA and the CDC. During a recent review of hazardous emissions, the EPA found the fugitive release of ethylene oxide to be a significant driver of unacceptable health risks resulting from all hazardous air pollutants in the United States.
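The point above about leaked methane counteracting natural gas's combustion advantage can be made concrete with a rough break-even estimate. Every emission factor below is an assumed, approximate value chosen for illustration only, and the 25x multiplier is the methane figure quoted in this article; the result is a sketch, not a definitive analysis:

```python
# Rough estimate of the leak rate at which natural gas would lose
# its combustion advantage over coal. All emission factors are
# assumed, approximate values for illustration only.

GWP_METHANE = 25          # CH4 warming relative to CO2 (article's 100-year figure)
COAL_CO2_PER_MJ = 95.0    # g CO2 emitted per MJ of energy from coal (assumed)
GAS_CO2_PER_MJ = 56.0     # g CO2 emitted per MJ from burned gas (assumed)
GAS_MASS_PER_MJ = 18.0    # g of methane needed per MJ delivered (assumed)

def gas_footprint(leak_fraction):
    """g CO2-equivalent per MJ delivered, counting leaked methane."""
    # For each unit of gas delivered, leak/(1-leak) units escape unburned.
    leaked_g = GAS_MASS_PER_MJ * leak_fraction / (1 - leak_fraction)
    return GAS_CO2_PER_MJ + leaked_g * GWP_METHANE

# Scan upward to find the leak rate where gas matches coal.
leak = 0.0
while gas_footprint(leak) < COAL_CO2_PER_MJ:
    leak += 0.001

print(f"Break-even leak rate: ~{leak:.1%}")
```

Under these assumptions the break-even leak rate comes out at several percent; with the higher 20-year warming multiplier for methane, the margin would be far thinner, which is why leak monitoring matters.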
How Are Fugitive Emissions Regulated?

Most fugitive emissions are regulated by the EPA. In some cases, state and local agencies apply further regulations to the release of fugitive emissions.

Dust Regulations

Many development projects are required to go through the National Environmental Policy Act, or NEPA, which includes an assessment of a project's anticipated air quality impacts. If a project is expected to have "significant" impacts on air quality, such as through the fugitive release of dust, the EPA may require measures to mitigate the effects. Some states, like California, have an additional environmental review process that applies air quality standards to certain projects, including projects not required to go through the NEPA process. These air quality regulations include measures to reduce the risk of fugitive emissions.

CFC Regulations

Refrigerators and air conditioning devices once used various chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs). After the discovery that these chemicals were putting holes in Earth's ozone layer, the international ratification of the Montreal Protocol in 1988 and amendments to the Clean Air Act in 1990 phased out the use of these and other environmentally damaging chemicals. Hydrofluorocarbons (HFCs) and perfluorocarbons (PFCs) are used today instead. Similarly, halon was once commonly used for fire suppression. However, halon also has an ozone-depleting effect. The EPA began phasing out the production and import of new halon in 1994. Halon blends were banned in 1998. Today, only recycled halon is used, for specific fire suppression applications such as on aircraft and in oil and gas exploration operations. The EPA only allows the release of halon during the testing, maintenance, and repair of halon-containing equipment.
The EPA has the authority to levy heavy fines against those who release halon and other ozone-depleting substances accidentally or without EPA authorization. While the production of many ozone-depleting substances is banned in the United States and a number of other countries, old products containing these greenhouse gases remain in old refrigerators and air conditioning units. As these decades-old pieces of equipment deteriorate, the CFCs they hold are often released as fugitive emissions. One of these ozone-depleting substances, CFC-12, traps nearly 11,000 times the heat of carbon dioxide. Given the environmental hazard created by these old, often forgotten refrigerants, the recycling of old CFCs is now a part of the carbon-offset market: people can exchange their old refrigerants for money.

Monitoring Requirements for Fugitive Emissions

The EPA requires certain entities, like active oil wells and compressor stations, to perform semi-annual or annual tests for fugitive emissions. Once a source of fugitive emissions is discovered, the EPA requires repairs to be made within 30 days. In 2020, the EPA eliminated monitoring requirements for "low production" well sites — those producing less than 15 barrels per day. Restrictions on incidental methane emissions were also reduced, a move that even oil industry proponents criticized. The EPA similarly regulates the unintentional release of ethylene oxide. However, in 2016, the EPA increased allowable exposure levels by nearly 50-fold. In 2018, research on a Michigan sterilization facility found local ethylene oxide levels to be 100 times the EPA's 2016 limit and 1,500 times the state's limit. The study concluded that the high ethylene oxide exposure levels were largely caused by uncaptured fugitive emissions.
By order of the State of Michigan's Department of Environment, Great Lakes, and Energy (EGLE), the facility was forced to stop using ethylene oxide by January 2020 and pay a $110,000 penalty to the state.

Future Outlooks

The impact of fugitive emissions on climate change and human health has gained attention in recent years.

Carbon Offset Market for CFCs

In the United States, carbon offset markets are expected to continue filling some of the gaps in the regulation of CFC fugitive emissions by incentivizing the removal of now-banned greenhouse gases. However, carbon offset projects must wait for credits to sell before making a return on investment. For developing countries, the need for upfront capital may be a barrier to implementing effective carbon offset programs for CFCs.

Methane Emissions

According to a 2018 report published by Climate Chance, the oil and gas industry is the primary producer of fugitive emissions. The report also found the United States to be the second-largest producer of fugitive emissions among the 10 countries analyzed. The Biden administration has moved to review, and potentially remove, some of the Trump administration's rollbacks to the Clean Air Act, including decisions that reduced restrictions on allowable methane emissions from the oil and gas industry. Additional satellites are scheduled for launch in the coming years to bolster global monitoring of fugitive emissions from the oil and gas industry. According to the Environmental Defense Fund (EDF), which plans to launch a new methane-monitoring satellite in 2022, fugitive emissions from the oil and gas industry are up to 60% higher than what the EPA has found.

Ethylene Oxide Emissions

State regulations of fugitive ethylene oxide emissions continue to expand as the public becomes more aware of the health risks associated with the chemical.
For example, Illinois passed two new laws regulating ethylene oxide in 2019, making the state's ethylene oxide emissions standards the strictest in the country. Similarly, Georgia is working with sterilization facilities to implement voluntary reductions in ethylene oxide emissions. Meanwhile, the state of Texas took its ethylene oxide legislation in the opposite direction, increasing the allowable limit from 1 part per billion (ppb) to 2.4 ppb in 2020.

View Article Sources

- "Health Effects Notebook for Hazardous Air Pollutants: Ethylene Oxide." Environmental Protection Agency, 2018.
- "Importance of Methane." Environmental Protection Agency.
- Bergin, Mike H., et al. "Large Reductions in Solar Energy Production Due to Dust and Particulate Air Pollution." Environmental Science and Technology Letters, vol. 4, no. 8, 2017, pp. 339-344, doi:10.1021/acs.estlett.7b00197
- "Managing Fugitive Dust." Michigan Department of Environmental Quality, 2016.
- "Controlling Dust to Improve Air Quality." USDA Natural Resources Conservation Service, 2012.
- Montzka, Stephen A., et al. "An Unexpected and Persistent Increase in Global Emissions of Ozone-Depleting CFC-11." Nature, vol. 557, no. 7705, 2018, pp. 413-417, doi:10.1038/s41586-018-0106-2
- Montzka, Stephen A., et al. "A Decline in Global CFC-11 Emissions During 2018−2019." Nature, vol. 590, no. 7846, 2021, pp. 428-432, doi:10.1038/s41586-021-03260-5
- McGrath, James A., et al. "Investigation of Fugitive Aerosols Released into the Environment During High-Flow Therapy." Pharmaceutics, vol. 11, no. 6, 2019, p. 254, doi:10.3390/pharmaceutics11060254
- "Compare Side-by-Side." U.S. Department of Energy.
- Saint‐Vincent, Patricia M. B., et al. "An Analysis of Abandoned Oil Well Characteristics Affecting Methane Emissions Estimates in the Cherokee Platform in Eastern Oklahoma." Geophysical Research Letters, vol. 47, no. 23, 2020, doi:10.1029/2020gl089663
- Vincent, Melissa J., et al. "Ethylene Oxide: Cancer Evidence Integration and Dose–Response Implications." Dose-Response, vol. 17, no. 4, 2019, doi:10.1177/1559325819888317
- "Ethylene Oxide." Environmental Protection Agency.
- "EPA Finalizes Amendments to the Miscellaneous Organic Chemical Manufacturing National Emission Standards for Hazardous Air Pollutants." Environmental Protection Agency, 2020.
- Al-Awad, Tareq K., et al. "Halon Management and Ozone-Depleting Substances Control in Jordan." International Environmental Agreements: Politics, Law and Economics, vol. 18, no. 3, 2018, pp. 391-408, doi:10.1007/s10784-018-9393-1
- "Destruction of Ozone Depleting Substances." American Carbon Registry.
- Olaguer, Eduardo P., et al. "Ethylene Oxide Exposure Attribution and Emissions Quantification Based on Ambient Air Measurements Near a Sterilization Facility." International Journal of Environmental Research and Public Health, vol. 17, no. 1, 2019, p. 42, doi:10.3390/ijerph17010042
- Laconde, Thibault. "Fugitive Emissions: A Blind Spot in the Fight Against Climate Change." Climate Chance, 2018.
- "Major Studies Reveal 60% More Methane Emissions." Environmental Defense Fund.
Mindfulness Therapy is sometimes referred to as mindfulness-based cognitive-behavioral therapy. Essentially, Mindfulness Therapy is a combination of cognitive-behavioral therapy and mindfulness techniques. This therapy is used to bring relief from an array of symptoms of psychological illnesses. Initially, Mindfulness Therapy was used to treat those with recurrent Depression. Today, however, it is used with a wide variety of issues related to many psychological disorders, including addiction. Mindfulness is a person's ability to be present to the experiences occurring within and around them. As someone develops more awareness, he or she becomes better able to recognize negative patterns, thoughts, and beliefs, and as a result better able to make changes in life. Essentially, the underlying principle that guides Mindfulness Therapy is that the more self-aware one becomes, the more successful he or she can be in achieving and maintaining relief and recovery. This is true for any form of recovery, whether from mental illness or from addiction. By becoming more aware, one gains the ability to make choices that are healthier and more life-affirming.

Cognitive Behavioral Therapy

Cognitive Behavioral Therapy (CBT) is a therapy that invites a conscious exploration of the types of thoughts one has, as well as the associated feelings and behaviors. The point of CBT is to help someone identify their negative thinking patterns so that they can change them and replace those thoughts with positive ones. This therapy has been highly successful in treating many forms of psychological illness. In fact, because of its effectiveness, it is known as an evidence-based practice for use by clinicians and other mental health providers. As mentioned above, Mindfulness Therapy is a combination of CBT and mindfulness.
It is a therapeutic technique that enhances one's exploration of thoughts, feelings, and behaviors by inviting the person to become more aware through the use of mindfulness. Mindfulness Therapy can be used in a wide variety of circumstances to help someone change the way they react to their inner and outer triggers. For instance, it is very common for a person with an Anxiety Disorder to be triggered by a certain thought. That thought can then lead to more anxiety, which can in turn stimulate more negative thinking and fear. A cycle of negative thinking might begin and cause a person to feel overwhelmed by their inner experiences. However, with Mindfulness Therapy, a person might be aware enough to identify the original anxious thought and choose not to believe it. He or she might have enough awareness to choose not to react inwardly to that thought. This could prevent an entire downward cycle of anxiety and fear. In fact, the mindfulness practices taught in Mindfulness Therapy have been shown to activate the prefrontal cortex of the brain, which is associated with emotional regulation and self-control. The goal of Mindfulness Therapy is to facilitate greater self-awareness in a person so that he or she can avoid subsequent thoughts, feelings, and behaviors that might cause further emotional pain. If you or someone you know is suffering from a psychological illness, Cognitive Behavioral Therapy and Mindfulness Therapy might be a great option for treatment.
Effect of Change of Pressure: The physical state of matter can also be changed by increasing or decreasing the pressure.
- Gases can be liquefied by applying pressure and lowering temperature.
- When a high pressure is applied to a gas, it gets compressed (into a small volume), and when we also lower its temperature, it gets liquefied. So, we can say that gases can be liquefied (turned into liquids) by compression and cooling.
- Ammonia gas can be liquefied by applying high pressure and lowering the temperature.
- Decreasing the pressure and raising the temperature can also change the state of matter.
- Solid carbon dioxide (dry ice) is stored under high pressure. When a slab of solid carbon dioxide is kept exposed to air, the pressure on it is reduced to normal atmospheric pressure (1 atmosphere), and it starts changing directly into carbon dioxide gas.

Evaporation: The process of a liquid changing into vapour (or gas) even below its boiling point is called evaporation.
- Wet clothes dry due to the evaporation of the water present in them. Common salt is also recovered from sea-water by the process of evaporation.
- The process of evaporation can be explained as follows: some particles in a liquid always have more kinetic energy than others. So, even when a liquid is well below its boiling point, some of its particles have enough energy to overcome the forces of attraction between the particles and escape from the surface of the liquid in the form of vapour (or gas). Thus, the fast-moving particles (or molecules) of a liquid are constantly escaping from the liquid to form vapour (or gas).

Factors affecting Evaporation: The evaporation of a liquid depends mainly on the following factors: temperature, surface area, humidity, and wind speed.
- Temperature: The rate of evaporation increases on increasing the temperature of the liquid.
- Surface area of the liquid: The rate of evaporation increases on increasing the surface area of the liquid. For example,
If the same liquid is kept in a test tube and in a china dish, then the liquid kept in the china dish evaporates more rapidly. - Humidity of Air: The amount of water vapour present in air is represented by a term called humidity. When the humidity of air is low, the rate of evaporation is high, and water evaporates more readily. - Wind Speed: The rate of evaporation of a liquid increases with increasing wind speed. Cooling caused by evaporation: - The cooling caused by evaporation is based on the fact that when a liquid evaporates, it draws (or takes) the latent heat of vaporization from 'anything' which it touches. By losing heat, this 'anything' gets cooled. - During hot summer days, water is usually kept in an earthen pot (called a pitcher or matka) to keep it cool. The earthen pot has a large number of extremely small pores (or holes) in its walls. Some of the water continuously keeps seeping through these pores to the outside of the pot. This water evaporates (changes into vapour) continuously and takes the latent heat required for vaporization from the earthen pot and the remaining water. In this way, the remaining water loses heat and gets cooled. - Perspiration (or sweating) is our body's method of maintaining a constant temperature. - We should wear cotton clothes on hot summer days to keep cool and comfortable. To Show the Presence of Water Vapour in Air: - There is always some water vapour in the air around us. Water vapour comes into the air from the evaporation of water present in ponds, lakes, rivers and oceans. - Water vapour is also given out by plants through the process of transpiration, and animals give out water vapour when they breathe out air. All this water vapour goes into the air around us. - The presence of water vapour in air can be shown by the following experiment: Let us take some ice-cold water in a tumbler. Soon we will see water droplets on the outer surface of the tumbler.
The water vapour present in the air, on coming in contact with the cold outer surface of the tumbler, loses energy and gets converted into liquid state, which we see as water droplets.
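The cooling by evaporation described above follows directly from the latent heat relation Q = m × L. The sketch below (in Python, using standard textbook values for water; the pot and water masses are made-up numbers for illustration) estimates how much the water remaining in an earthen pot cools:

```python
# Sketch: cooling of water in an earthen pot by evaporation.
LATENT_HEAT_VAPORIZATION = 2.26e6  # J/kg for water (standard textbook value)
SPECIFIC_HEAT_WATER = 4186         # J/(kg*K) (standard textbook value)

def temperature_drop(mass_evaporated_kg, mass_remaining_kg):
    """Temperature drop of the remaining water when part of it evaporates,
    assuming all the latent heat is drawn from the remaining water."""
    heat_removed = mass_evaporated_kg * LATENT_HEAT_VAPORIZATION      # Q = m * L
    return heat_removed / (mass_remaining_kg * SPECIFIC_HEAT_WATER)  # dT = Q / (m * c)

# If 20 g of water seeps out and evaporates from a pot still holding 2 kg:
print(f"Temperature drop: {temperature_drop(0.020, 2.0):.1f} K")  # about 5.4 K
```

Even a small amount of evaporated water removes a large amount of heat, which is why the pitcher stays noticeably cool.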
The global market for Biological Seed Treatment is projected to reach US$1.8 billion by 2025, driven by agricultural intensification across the world to meet food security goals and by changing nuances in the practice of plant nutrition management. Having moved from the overuse of chemical fertilizers to biofertilizers as a reactive measure against agriculture's growing carbon footprint, the focus is now shifting to proactive early intervention to reduce the environmental burden of crop production. Starting with the seed is the new strategy, and it is driving robust developments in seed nutrition technologies. Starting with the seed is a good idea because it offers early defenses against the diseases and nutrition challenges faced by food crops as climate change brings about changes in our environment that are detrimental to plant health and quality. New research studies are revealing disturbing evidence of how climate change and global warming are playing a key role in stripping the nutritional value of food crops. Rising surface temperatures, water scarcity, and the accumulation of CO2 in the atmosphere are the chief factors responsible for nutrient depletion in plants. Although CO2 helps plants grow, excess CO2 causes plants to produce carbohydrates such as glucose at the expense of other vital nutrients such as protein, zinc and iron. This worrisome fact has serious consequences for public health if not addressed appropriately and in a timely manner. The scenario is driving increased interest in, and the importance of, nutrition-sensitive agricultural practices. Biological Seed Treatment, in this regard, is growing in prominence and commercial value. Defined as the application of biological agents to seeds, biological seed treatment helps suppress and control pathogenic diseases throughout the plant's life cycle.
Other benefits of bio-treated seeds include early germination and reduced early-planting risks; the ability to grow deeper roots that increase nutrient and water absorption; and a positive influence on the biological processes that control germination and root development. Dressing, coating, pelleting and inoculation are the methods by which seeds are treated. Microbial inoculation is an effective seed treatment for agricultural crops. Benefits of seed inoculation include efficient placement of microbes in a manner that enables easy colonization of seedling roots; protection against soil-borne diseases and threats; a better ability to absorb nutrition from soils; protection of roots from the soil decay that tends to destroy beneficial Rhizobium and Bradyrhizobium bacteria; improved stress tolerance during periods of drought; and protection of beneficial bacteria from the negative effects of chemical fertilizers. Bio-treated seeds tend to form root nodules that fix nitrogen from the air for better plant growth. The value of bio-treated seeds is also growing given that soil quality deterioration is leaving soils with an inadequate bacteria population for effective natural nodulation, i.e., the symbiotic interaction between soil bacteria and plant hosts. Microorganisms commonly used as inoculants include Bradyrhizobium, Azotobacter, Azospirillum, Pseudomonas, and Bacillus. Phosphate-solubilizing microorganisms are also growing in popularity. Rhizobia inoculants are fairly popular as they are easy to apply, carry no threat of over-inoculation, and therefore do not require specialized expertise and knowledge. The United States and Europe represent large markets worldwide with a combined share of 56% of the market. The U.S. also ranks as the fastest growing market with a CAGR of 11.2% over the analysis period, supported by the country's massive investment commitments towards encouraging agricultural innovation, sustainability and digital transformation.
Innovation in biological seed treatments in the country is poised to intensify in the coming years, given that new products are easier to register with the Environmental Protection Agency (EPA) than chemical substitutes. Competitors identified in this market include, among others, BASF SE, Bayer CropScience AG, Corteva Agriscience, Croda International Plc, ITALPOLLINA S.p.A, Koppert Biological Systems, Plant Health Care plc., Precision Laboratories LLC., Syngenta International AG, UPL Limited, Verdesian Life Sciences.
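A CAGR figure like the 11.2% quoted above compounds a base value forward year by year. The Python sketch below uses the report's growth rate but a hypothetical base-year value, since the U.S. base figure is not given in the text:

```python
# Sketch: turning a CAGR figure into a projection.
# The 11.2% growth rate is quoted in the text; the base-year value is hypothetical.
def project(base_value, cagr, years):
    """Compound a base value forward at a constant annual growth rate."""
    return base_value * (1 + cagr) ** years

# A hypothetical US$0.5 billion market growing at 11.2% per year for 7 years:
print(f"US${project(0.5, 0.112, 7):.2f} billion")  # about US$1.05 billion
```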
Biological Pollutants in Your Home. Prepared by: The Consumer Product Safety Commission (CPSC) and The American Lung Association, The Christmas Seal People. Contents: What Are Biological Pollutants?; The Scope of the Problem; Health Effects of Biological Pollutants; Talking to Your Doctor; Coping with the Problem; Self-Inspection: A Walk Through Your Home; What You Can Do About Biological Pollutants; Maintain and Clean All Appliances That Contact Water; Before You Move; Where Biological Pollutants May Be Found in the Home; Correcting Water Damage; Additional Sources of Information. This page will help you understand: - what indoor biological pollution is; - whether your home or lifestyle promotes its development; and - how to control its growth and buildup. Outdoor air pollution in cities is a major health problem. Much effort and money continue to be spent cleaning up pollution in the outdoor air. But air pollution can be a problem where you least expect it, in the place you may have thought was safest--your home. Many ordinary activities such as cooking, heating, cooling, cleaning, and redecorating can cause the release and spread of indoor pollutants at home. Studies have shown that the air in our homes can be even more polluted than outdoor air. Many Americans spend up to 90 percent of their time indoors, often at home. Therefore, breathing clean indoor air can have an important impact on health. People who are inside a great deal may be at greater risk of developing health problems, or of having problems made worse by indoor air pollutants. These people include infants, young children, the elderly, and those with chronic illnesses. Biological pollutants are or were living organisms. They promote poor indoor air quality and may be a major cause of days lost from work or school, and of doctor and hospital visits. Some can even damage surfaces inside and outside your house. Biological pollutants can travel through the air and are often invisible.
Some common indoor biological pollutants are: - Animal Dander (minute scales from hair, feathers, or skin) - Dust Mite and Cockroach parts - Fungi (Molds) - Infectious agents (bacteria or viruses) Some of these substances are in every home. It is impossible to get rid of them all. Even a spotless home may permit the growth of biological pollutants. Two conditions are essential to support biological growth: nutrients and moisture. These conditions can be found in many locations, such as bathrooms, damp or flooded basements, wet appliances (such as humidifiers or air conditioners), and even some carpets and furniture. Modern materials and construction techniques may reduce the amount of outside air brought into buildings, which may result in high moisture levels inside. Using humidifiers, unvented heaters, and air conditioners in our homes has increased the chances of moisture forming on interior surfaces. This encourages the growth of certain biological pollutants. Most information about sources and health effects of biological pollutants is based on studies of large office buildings and two surveys of homes in the northern U.S. and Canada. These surveys show that 30% to 50% of all structures have damp conditions which may encourage the growth and buildup of biological pollutants. This percentage is likely to be higher in warm, moist climates. Some diseases or illnesses have been linked with biological pollutants in the indoor environment. However, many of them also have causes unrelated to the indoor environment. Therefore, we do not know how many health problems relate only to poor indoor air. All of us are exposed to biological pollutants. However, the effects on our health depend upon the type and amount of biological pollution and the individual person.
Some people do not experience health reactions from certain biological pollutants, while others may experience one or more of the following reactions: Except for the spread of infections indoors, ALLERGIC REACTIONS may be the most common health problem related to indoor air quality in homes. They are often connected with animal dander (mostly from cats and dogs), with house dust mites (microscopic animals living in household dust), and with pollen. Allergic reactions can range from mildly uncomfortable to life-threatening, as in a severe asthma attack. Some common signs and symptoms are: - Watery eyes - Runny nose and sneezing - Nasal congestion - Wheezing and difficulty breathing Health experts are especially concerned about people with asthma. These people have very sensitive airways that can react to various irritants, making breathing difficult. The number of people who have asthma has greatly increased in recent years. The number of people with asthma has gone up by 59 percent since 1970, to a total of 9.6 million people. Asthma in children under 15 years of age has increased 41 percent in the same period, to a total of 2.6 million children. The number of deaths from asthma is up by 68 percent since 1979, to a total of almost 4,400 deaths per year. INFECTIOUS DISEASES caused by bacteria and viruses, such as flu, measles, chicken pox, and tuberculosis, may be spread indoors. Most infectious diseases pass from person to person through physical contact. Crowded conditions with poor air circulation can promote this spread. Some bacteria and viruses thrive in buildings and circulate through indoor ventilation systems. For example, the bacterium causing Legionnaires' disease, a serious and sometimes lethal infection, and Pontiac Fever, a flu-like illness, has circulated in some large buildings. Are you concerned about effects on your health that may be related to biological pollutants in your home?
Before you discuss your concerns with your doctor, you should know the answers to the following questions. This information can help the doctor determine whether your health problems may be related to biological pollutants. - Does anyone in the family have frequent headaches, fevers, itchy watery eyes, a stuffy nose, dry throat, or a cough? Does anyone complain of feeling tired or dizzy all the time? Is anyone wheezing or having difficulty breathing on a regular basis? - Did these symptoms appear after you moved to a new or different home? - Do the symptoms disappear when you go to school or the office or go away on a trip, and return when you come back? - Have you recently remodeled your home or done any energy conservation work, such as installing insulation, storm windows, or weather stripping? Did your symptoms occur during or after these activities? - Does your home feel humid? Can you see moisture on the windows or on other surfaces, such as walls and ceilings? - What is the usual temperature in your home? Is it very hot or cold? - Have you recently had water damage? - Is your basement wet or damp? - Is there any obvious mold or mildew? - Does any part of your home have a musty or moldy odor? - Is the air stale? - Do you have pets? - Do your house plants show signs of mold? - Do you have air conditioners or humidifiers that have not been properly maintained? - Does your home have cockroaches or rodents? TOXIC REACTIONS are the least studied and understood health problem caused by some biological air pollutants in the home. Toxins can damage a variety of organs and tissues in the body, including the liver, the central nervous system, the digestive tract, and the immune system. There is no simple and cheap way to sample the air in your home to determine the level of all biological pollutants. Experts suggest that sampling for biological pollutants is not a useful problem-solving tool.
Even if you had your home tested, it is almost impossible to know which biological pollutant(s) cause various symptoms or health problems. The amount of most biological substances required to cause disease is unknown and varies from one person to the next. Does this make the problem sound hopeless? On the contrary, you can take several simple, practical actions to help remove sources of biological pollutants, to help get rid of pollutants, and to prevent their return. Begin by touring your household. Follow your nose, and use your eyes. Two major factors help create conditions for biological pollutants to grow: nutrients, and constant moisture with poor air circulation. - Dust and construction materials, such as wood, wallboard, and insulation, contain nutrients that allow biological pollutants to grow. Firewood also is a source of moisture, fungi, and bugs. - Appliances such as humidifiers, kerosene and gas heaters, and gas stoves add moisture to the air. - A musty odor, moisture on hard surfaces, or even water stains, may be caused by: - Air-conditioning units - Basements, attics, and crawlspaces - Heating and air-conditioning ducts - Humidifiers and dehumidifiers - Refrigerator drip pans Before you give away the family pet or move, there are less drastic steps that can be taken to reduce potential problems. Properly cleaning and maintaining your home can help reduce the problem and may avoid interrupting your normal routine. People who have health problems such as asthma, or who are allergic, may need to do this and more. Discuss this with your doctor. Water in your home can come from many sources. Water can enter your home by leaking or by seeping through basement floors. Showers or even cooking can add moisture to the air in your home. The amount of moisture that the air in your home can hold depends on the temperature of the air. As the temperature goes down, the air is able to hold less moisture.
This is why, in cold weather, moisture condenses on cold surfaces (for example, drops of water form on the inside of a window). This moisture can encourage biological pollutants to grow. There are many ways to control moisture in your home: - Fix leaks and seepage. If water is entering the house from the outside, your options range from simple landscaping to extensive excavation and waterproofing. (The ground should slope away from the house.) Water in the basement can result from the lack of gutters or a water flow toward the house. Water leaks in pipes or around tubs and sinks can provide a place for biological pollutants to grow. - Put a plastic cover over dirt in crawlspaces to prevent moisture from coming in from the ground. Be sure crawlspaces are well-ventilated. - Use exhaust fans in bathrooms and kitchens to remove moisture to the outside (not into the attic). Vent your clothes dryer to the outside. - Turn off certain appliances (such as humidifiers or kerosene heaters) if you notice moisture on windows and other surfaces. - Use dehumidifiers and air conditioners, especially in hot, humid climates, to reduce moisture in the air, but be sure that the appliances themselves don't become sources of biological pollutants. - Raise the temperature of cold surfaces where moisture condenses. Use insulation or storm windows. (A storm window installed on the inside works better than one installed on the outside.) Open doors between rooms (especially doors to closets, which may be colder than the rooms) to increase circulation. Circulation carries heat to the cold surfaces. Increase air circulation by using fans and by moving furniture away from wall corners to promote air and heat circulation. Be sure that your house has a source of fresh air and can expel excessive moisture from the home. - Pay special attention to carpet on concrete floors. Carpet can absorb moisture and serve as a place for biological pollutants to grow. Use area rugs, which can be taken up and washed often.
In certain climates, if carpet is to be installed over a concrete floor, it may be necessary to use a vapor barrier (plastic sheeting) over the concrete and cover that with sub-flooring (insulation covered with plywood) to prevent a moisture problem. - Moisture problems and their solutions differ from one climate to another. The Northeast is cold and wet; the Southwest is hot and dry; the South is hot and wet; and the Western Mountain states are cold and dry. All of these regions can have moisture problems. For example, evaporative coolers used in the Southwest can encourage the growth of biological pollutants. In other hot regions, the use of air conditioners which cool the air too quickly may prevent the air conditioners from running long enough to remove excess moisture from the air. The types of construction and weatherization for the different climates can lead to different problems and solutions. - Have major appliances, such as furnaces, heat pumps and central air conditioners, inspected and cleaned regularly by a professional, especially before seasonal use. Change filters on heating and cooling systems according to manufacturer's directions. (In general, change filters monthly during use.) When first turning on the heating or air conditioning at the start of the season, consider leaving your home until it airs out. - Have window or wall air-conditioning units cleaned and serviced regularly by a professional, especially before the cooling season. Air conditioners can help reduce the entry of allergy-causing pollen. But they may also become a source of biological pollutants if not properly maintained. Clean the coils and incline the drain pans according to manufacturer's instructions, so water cannot collect in pools. - Have furnace-attached humidifiers cleaned and serviced regularly by a professional, especially before the heating season. - Follow manufacturer's instructions when using any type of humidifier.
Experts differ on the benefits of using humidifiers. If you do use a portable humidifier (approximately 1 to 2 gallon tanks), be sure to empty its tank every day and refill with distilled or demineralized water, or even fresh tap water if the other types of water are unavailable. For larger portable humidifiers, change the water as recommended by the manufacturer. Unplug the appliance before cleaning. Every third day, clean all surfaces coming in contact with water with a 3% solution of hydrogen peroxide, using a brush to loosen deposits. Some manufacturers recommend using diluted household bleach for cleaning and maintenance, generally in a solution of one-half cup bleach to one gallon of water. When using any household chemical, rinse well to remove all traces of the chemical before refilling the humidifier. - Empty dehumidifiers daily and clean often. If possible, have the appliance drip directly into a drain. Follow manufacturer's instructions for cleaning and maintenance. Always disconnect the appliance before cleaning. - Clean refrigerator drip pans regularly according to manufacturer's instructions. If refrigerator and freezer doors don't seal properly, moisture may build up and mold can grow. Remove any mold on door gaskets and replace faulty gaskets. - Clean moldy surfaces, such as showers and kitchen counters. - Remove mold from walls, ceilings, floors, and paneling. Do not simply cover mold with paint, stain, varnish, or a moisture-proof sealer, as it may resurface. - Replace moldy shower curtains, or remove them and scrub well with a household cleaner and rinse before rehanging them. Controlling dust is very important for people who are allergic to animal dander and mites. You cannot see mites, but you can either remove their favorite breeding grounds or keep these areas dry and clean. Dust mites can thrive in sofas, stuffed chairs, carpets, and bedding. Open shelves, fabric wallpaper, knickknacks, and venetian blinds are also sources of dust mites.
Dust mites live deep in the carpet and are not removed by vacuuming. Many doctors suggest that their mite-allergic patients use washable area rugs rather than wall-to-wall carpet. - Always wash bedding in hot water (at least 130° F) to kill dust mites. Cold water won't do the job. Launder bedding at least every 7 to 10 days. - Use synthetic or foam rubber mattress pads and pillows, and plastic mattress covers if you are allergic. Do not use fuzzy wool blankets, feather- or wool-stuffed comforters, or feather pillows. - Clean rooms and closets well; dust and vacuum often to remove surface dust. Vacuuming and other cleaning may not remove all animal dander, dust mite material, and other biological pollutants. Some particles are so small they can pass through vacuum bags and remain in the air. If you are allergic to dust, wear a mask when vacuuming or dusting. People who are highly allergy-prone should not perform these tasks. They may even need to leave the house when someone else is cleaning. Protect yourself by inspecting your potential new home. If you identify problems, have the landlord or seller correct them before you move in, or even consider not moving in at all. - Have professionals check the heating and cooling system, including humidifiers and vents. Have duct lining and insulation checked for growth. - Check for exhaust fans in bathrooms and kitchens. If there are no vents, do the kitchen and bathrooms have at least one window apiece? Does the cooktop have a hood vented outside? Does the clothes dryer vent outside? Are all vents to the outside of the building, not into attics or crawlspaces? - Look for obvious mold growth throughout the house, including attics, basements, and crawlspaces, and around the foundation. See if there are many plants close to the house, particularly if they are damp and rotting. They are a potential source of biological pollutants. Downspouts from roof gutters should route water away from the building.
- Look for stains on the walls, floor or carpet (including any carpet over concrete floors) as evidence of previous flooding or moisture problems. Is there moisture on windows and surfaces? Are there signs of leaks or seepage in the basement? - Look for rotted building materials, which may suggest moisture or water damage. - If you or anyone else in the family has a pet allergy, ask if any pets have lived in the home. - Examine the design of the building. Remember that in cold climates, overhanging areas, rooms over unheated garages, and closets on outside walls may be prone to problems with biological pollutants. - Look for signs of cockroaches. Where biological pollutants may be found in the home: - Dirty air conditioners - Dirty humidifiers and/or dehumidifiers - Bathroom without vents or windows - Kitchen without vents or windows - Dirty refrigerator drip pans - Laundry room with unvented dryer - Unventilated attic - Carpet on damp basement floor - Closet on outside wall - Dirty heating/air conditioning system - Dogs or cats - Water damage (around windows, the roof, or the basement) When using household chemicals, take these precautions: - Do not mix any chemical products. In particular, never mix cleaners containing bleach with any product (such as ammonia) which does not have instructions for such mixing. When chemicals are combined, a dangerous gas can sometimes be formed. - Household chemicals may cause burning or irritation to skin and eyes. - Household chemicals may be harmful if swallowed or inhaled. - Avoid contact with skin, eyes, mucous membranes and clothing. - Avoid breathing vapor. Open all windows and doors and use an exhaust fan that sends the air outside. - Keep household chemicals out of reach of children. - Rinse treated surface areas well to remove all traces of chemicals. What if damage is already done? Follow these guidelines for correcting water damage: - Throw out mattresses, wicker furniture, straw baskets and the like that have been water damaged or contain mold. These cannot be recovered.
- Discard any water-damaged furnishings such as carpets, drapes, stuffed toys, upholstered furniture and ceiling tiles, unless they can be recovered by steam cleaning or hot water washing and thorough drying. - Remove and replace wet insulation to prevent conditions where biological pollutants can grow. DISCLAIMER: This document may be reproduced without change, in whole or in part, without permission, except for use as advertising material or product endorsement. Any such reproduction should credit the American Lung Association and the U.S. Consumer Product Safety Commission. The use of all or any part of this document in a deceptive or inaccurate manner or for purposes of endorsing a particular product may be subject to appropriate legal action. Contact your local American Lung Association for copies of: Indoor Air Pollution Fact Sheets, Air Pollution in Your Home? and other publications on indoor air pollution. Contact the U.S. Consumer Product Safety Commission, Washington, D.C. 20207, for copies of Humidifier Safety Alert. To report an unsafe consumer product or product-related health problem, consumers may call the U.S. Consumer Product Safety Commission at 1-800-638-2772. A teletypewriter for the hearing impaired is available at 1-800-638-8270; the Maryland TTY number is 1-800-492-8104. You may also contact EPA's IAQ INFO Clearinghouse at 1-800-438-4318 (or (703) 356-4020) for more information on indoor air quality and to order publications from the list of IAQ publications. Created: March 31, 1997, Last Modified: March 19, 1998
JAMES COOK UNIVERSITY Research Field: Evolutionary Biology & Phylogenetics Coral reefs are home to over 30% of all marine fishes despite accounting for less than 0.1% of the world’s ocean. The centre of coral reef fish biodiversity is found in the Indo-Australian Archipelago – the ‘bullseye’ of a steep gradient in species richness that declines across the Indian and Pacific Oceans. Understanding the origins and maintenance of this biodiversity pattern across two-thirds of the world’s oceans is central to Dr Cowman’s research. By combining DNA and fossil information, Dr Cowman builds phylogenetic trees that allow him to trace the evolutionary history of fishes and estimate how fast species are produced (speciation) and how quickly they disappear (extinction). His research has shown that over the past 60 million years, coral reefs have provided a safe haven for fishes surviving past periods of climate change and have acted as a cradle where new species are produced. With the future of Indo-Pacific reefs in peril from anthropogenic climate change, understanding the origins of the biodiversity they support is a critical step towards preserving its enormous value for future generations.
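Speciation and extinction rates of the kind estimated from phylogenetic trees are usually parameters of a birth-death model. As a rough illustration (not Dr Cowman's actual method, and with made-up rates), the expected species richness under constant rates grows exponentially with the net diversification rate, speciation minus extinction:

```python
import math

# Toy constant-rate birth-death expectation, the kind of model fitted to
# phylogenetic trees. The rates below are hypothetical, not published estimates.
def expected_richness(n0, speciation_rate, extinction_rate, time_myr):
    """Expected species richness: N(t) = N0 * exp((lambda - mu) * t)."""
    return n0 * math.exp((speciation_rate - extinction_rate) * time_myr)

# One ancestral lineage diversifying for 60 Myr at lambda = 0.10, mu = 0.05
# (events per lineage per million years):
print(round(expected_richness(1, 0.10, 0.05, 60)))  # about 20 species
```

Even a small excess of speciation over extinction, sustained over tens of millions of years, yields substantial diversity, which is one way a "cradle" region accumulates species.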
What Are Solar Panels? In the 21st century, it has become essential to switch to alternate sources of energy, and solar power has emerged as a great source of energy for households, offices, and more in Cleveland. Solar panels, also referred to as photovoltaic (PV) panels, are the means by which light from the sun is converted into electricity. The light consists of energy particles known as "photons" that get converted into electricity. A solar panel is made up of multiple solar cells. When several solar cells are spread across a large area, they can generate usable amounts of power. The cells in PV panels are made of semi-conductive materials like silicon. They have both a positive layer and a negative layer, which together create an electric field. How Do Solar Panels Work? When sunlight comes in contact with the semi-conductive material in the solar PV cell, the light energy gets absorbed in the form of photons. It loosens up several electrons, which then freely float around in the cell. Solar PV cells are carefully designed so that negatively and positively charged semi-conductors are squeezed together to form an electric field. The field so formed compels the floating electrons to flow in a certain direction, specifically towards the conductive metal plates lining the cell. This flow is referred to as an energy current. The current's strength determines the amount of electricity each cell is capable of producing. As soon as the free-floating electrons meet the metal plates, the current gets steered into wires, letting electrons move as they would in any source of electricity generation. Understanding The Flow Of Current When solar panels generate electric current, the energy moves across a series of wires and into an inverter. However, solar panels create direct current (DC), while consumers typically require alternating current (AC) for buildings. An inverter is used to convert DC electricity into AC electricity, making it usable for daily needs.
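The chain described above, light in, DC out of the panel, AC out of the inverter, can be summarized with a back-of-the-envelope calculation. The Python sketch below uses illustrative numbers (panel size and efficiencies) that are not from this article:

```python
# Back-of-the-envelope power chain: sunlight -> DC at the panel -> AC after
# the inverter. All numbers here are illustrative, not from the article.
def panel_dc_watts(irradiance_w_m2, area_m2, cell_efficiency):
    """DC power from a panel: incident light * panel area * conversion efficiency."""
    return irradiance_w_m2 * area_m2 * cell_efficiency

def ac_watts(dc_watts, inverter_efficiency):
    """AC power after the inverter converts the panel's DC output."""
    return dc_watts * inverter_efficiency

dc = panel_dc_watts(1000, 1.6, 0.20)  # bright sun on a 1.6 m^2, 20%-efficient panel
ac = ac_watts(dc, 0.96)               # inverters are typically ~95-97% efficient
print(f"{dc:.0f} W DC -> {ac:.0f} W AC")
```

Note that some power is lost at each conversion step, which is why a system's rated DC capacity is always a little higher than the AC power it delivers.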
Once the electricity is converted into AC power, it goes from the inverter into the breaker box (electrical panel) and is distributed across the building. This electricity can now be used to power electronics with solar energy. What Happens To The Unused Electricity? Any electricity that the building doesn't consume is redirected to the utility grid via the utility meter. The utility meter is a device that measures the flow of electricity between the grid and your property. If your solar energy system produces more electricity than you use, the meter runs backwards, and you receive credits for the excess through net metering. Conversely, when you use more electricity than your solar panel system generates, you pull electricity from the grid via the meter, and the meter runs normally. What Are the Benefits of Solar Panels? Despite their increasing popularity, there is not much awareness about the benefits of solar panels in Cleveland. Here are some of the major advantages. 1. Decreases Air Pollution Burning fossil fuels creates a significant amount of pollutants, resulting in dirty air and smog. These pollutants are bad for our health and the environment, and with pollutants dispersed throughout the air, visibility decreases. Solar panels use the sun's energy to create clean electricity that doesn't produce any pollution. 2. Decreases Dependence On Non-Renewable Sources Of Energy Solar energy can help us reduce our dependence on non-renewable energy sources like fossil fuels. This is quite beneficial, since these non-renewable resources produce pollutants that deteriorate air quality. In addition, non-renewable sources are bound to deplete with time, whereas renewable sources of energy will never run out. 3. Helps In Fighting Climate Change Traditional energy sources dump loads of pollutants into the air, along with additional carbon dioxide. This costs our planet dearly, as it becomes difficult for the ecosystem to clean the air.
When carbon emissions go up, so does heat retention from the sun. This affects different climates in different ways, causing some areas to cool while others get warmer, and weather patterns turn volatile across the globe. After much research and experimentation, scientists and climatologists have determined that renewable energy projects are needed to control climate change. When we use solar energy, we reduce our carbon dioxide emissions and do not release significant amounts of pollutants into the air. 4. Reduces Water Usage If your current source of energy does not rely on fossil fuels, it most likely uses water to generate electricity. Nuclear energy and hydropower use huge amounts of water to produce electricity, and dams are sometimes built around water bodies to control how the water flows for producing electricity. However, this practice can negatively impact the local ecosystem. Solar panels do not require any water and therefore protect this scarce resource. 5. Improves Health Solar panels result in cleaner air, which is better for your lung health. Solar power also helps improve food security in vulnerable regions: the United Nations Development Programme (UNDP) empowers women to run their own solar energy businesses, which contributes to household income and directly supports food security. 6. Reduces Electricity Bills When you generate your own electricity with solar panels, you depend less on the electricity grid, so your utility bill goes down. Solar panels can remain efficient for decades, which means you can make long-term savings on your electricity bills. In fact, some people go completely off the grid. 7. Increases The Value Of The Home Installing solar panels on your property makes the house more valuable, because the house is now less dependent on the electricity grid and produces clean energy.
Solar panels can be a little expensive, but the returns on your investment are completely worth it. 8. Low Maintenance Costs With rapid technological advancements, the maintenance of solar panels has gotten much simpler and cheaper. Moreover, most panels come with warranties for a considerable amount of time. You do not need to spend exorbitant amounts on the upkeep of your panels; regular cleaning is good enough. What Is The Solar Panel Installation Process? Once you've made up your mind to install solar panels in Cleveland, gear up for the process that follows. There is a lot that needs to be done to ensure the proper installation of solar panels. 1. Find A Solar Panel Installer Your Cleveland solar panel installation journey begins with finding a suitable installer. The most important thing to remember here is that the installer should be MCS-accredited so that you can be assured of the quality. MCS is a certification by the Department of Energy and Climate Change (DECC). It ensures that green technology manufacturers and installers adhere to the highest possible standards. You can look for MCS-accredited companies via the MCS website. 2. Get A Quote You can ask for a quote either before or after deciding on a solar panel installer. The quote will detail the amount of time the entire project will take and how much it will cost. Getting multiple quotes can help you make an informed decision. 3. Home Assessment The installing company will send in a team with a qualified surveyor to carry out a proper assessment. They will also send a salesperson to present their various offerings. After the assessment, do not get pressured into signing a contract. Instead, look at the findings and try to understand the installer's assessment of your roof. Once you've had enough time to look at the quotes and understand the assessment, pick an installer. 4. Set Up Scaffolding The actual installation process begins with erecting scaffolding on your roof.
It ensures the safety of the workers on the roof throughout the duration of the installation. 5. Mount Installation The mounting must be installed robustly, since it will ultimately support the solar panels. Mounts can be roof mounts, ground mounts, or flush mounts, based on your needs. The mounting structure ensures sturdiness and support. The structure has to be slightly tilted, since the panels are supposed to face specific directions; the angle of the tilt is typically between 18 and 36 degrees. 6. Install The Solar Panels The PV panels need to be carefully placed in a particular direction. In the Northern Hemisphere, it is better for the panels to face south for maximum sunlight, although facing east or west will also give good results. For regions in the Southern Hemisphere, it is better if the panels face north. The solar panels are fastened to the mounting structure with nuts and bolts. This has to be carried out very carefully to ensure that the structure is sturdy and lasts for a long time. 7. Electrical Wiring The amperage, power, and voltage of a solar panel system determine how the panels are connected with the wiring. Typically, universal connectors such as MC4 are used, since these connectors work with all kinds of solar panels. The panels can be connected with each other in two ways. In a series connection, the positive wire from one PV module is connected to the negative wire of another; such a connection increases the voltage and matches it to the battery bank. In a parallel connection, positive wires are connected to other positive wires, and negative wires to other negative wires; the voltage of every panel remains the same in such wiring. 8. Connect To A Solar Inverter To connect the solar panel system to a solar inverter, the positive wire of the panel is connected to the positive terminal of the inverter, while the negative wire is connected to the negative terminal.
9. Connect The Inverter To A Solar Battery & The Grid In order to connect the solar inverter to the solar battery, the battery's positive terminal is connected with the inverter's positive terminal. Likewise, the negative terminals of both devices are connected to each other. A solar battery is required for storing backup electricity in case you wish to go off-grid. However, it is also important to connect the inverter to the grid. For this process, the installers use a normal plug to connect to the main power switchboard. Then, an output wire gets connected to the electric board that is responsible for supplying electricity to the house. 10. Start Solar Inverter Once all the electrical wiring and connections are completed, you can start using your solar inverter immediately. Most inverters come with a digital display to show you the status of usage and generation of the solar unit. How Long Does A Solar PV Installation in Cleveland Take? Before you decide on a provider, you can ask them for a quote and an estimate of how long the entire process will take. The duration depends on the complexity and size of your solar panel system in Cleveland. It takes a few weeks to get quotes and home visits from installers. But once you decide on a provider, the process will not take much longer. As soon as the scaffolding is complete, the actual installation should be done within a day in the case of regular-sized panels. However, larger systems may take longer. Are Solar Panels in Cleveland Worth It? To determine whether solar panels are worth it or not, you first need to understand that they are a long-term investment. Initially, it might be a little costly to install a solar panel system. However, the benefits will kick in gradually. Yes, solar panels are worth it under certain conditions. If you generate a sufficient amount of energy and stay in the same house for a long time, you'll receive your money's worth.
Eventually, over the course of several years, solar panels pay for themselves. How Much Electricity Do Solar Panels Produce And How Much Does It Cost? The following table details the different sizes in which solar panel systems are typically available, along with their annual energy output and estimated costs.

| System Size | Annual Energy Output | Approximate Costs |
| --- | --- | --- |
| 3 kW | 2,700 kWh | £5,000 to £6,000 |
| 4 kW | 3,600 kWh | £6,000 to £8,000 |
| 5 kW | 4,320 kWh | £7,000 to £9,000 |
| 6 kW | 5,400 kWh | £8,000 to £10,000 |

How Much Do Solar Panels Save in Cleveland? Solar energy helps you save money, as there is a dip in your energy bill. Since you no longer pull as much energy from the grid as you used to, your spending goes down. The table below outlines the average annual savings on utility bills based on the size of your solar panel system.

| System Size | Approximate Annual Savings On Utility Bills |
| --- | --- |

What Are Solar Panels Made Of? Solar panels consist of multiple solar cells that absorb light from the sun and convert it into electricity. Most solar PV panels use crystalline silicon wafers as the primary material; in fact, silicon is used to make the semiconductors in approximately 95% of all solar panels on the market. The other 5% use experimental technologies such as organic PV cells. The semiconductors are responsible for generating electricity: when they come into contact with sunlight, their electrons are knocked loose, creating electricity. This process is referred to as the photovoltaic effect. The other components of PV cells include metal, glass, wiring, and plastic. A layer of glass is generally used to cover solar panels, with an anti-reflective coating that protects the delicate silicon cells while still allowing light to pass through. This entire framework is supported by plastic or polymer frames for installation on both rooftop and ground-mounted solar panel systems.
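Figures like those in the table above make the "long-term investment" arithmetic easy to sketch. In this minimal example the grid price, export-credit rate, and self-use fraction are assumptions invented for illustration, not values from this article:

```python
# Rough payback-period estimate for a solar system, combining avoided grid
# purchases with net-metering credits for exported electricity.

def annual_savings(generated_kwh, self_used_fraction, import_price, export_credit):
    """Savings = energy used on-site valued at the grid price, plus exports
    credited at the net-metering rate (both rates assumed, in GBP/kWh)."""
    self_used = generated_kwh * self_used_fraction
    exported = generated_kwh - self_used
    return self_used * import_price + exported * export_credit

def payback_years(system_cost, yearly_savings):
    """Years until cumulative savings equal the up-front cost."""
    return system_cost / yearly_savings

# 3 kW system from the table: ~2,700 kWh/year, roughly GBP 5,000-6,000 installed.
savings = annual_savings(2700, self_used_fraction=0.5,
                         import_price=0.30, export_credit=0.15)
print(round(savings, 2))                       # annual savings in GBP
print(round(payback_years(5500, savings), 1))  # years to break even
```

With these assumed rates the system breaks even in roughly nine years, which is consistent with the article's "several years" framing; real payback depends heavily on local tariffs.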
Types Of Solar Panels With advancing technology, there are several types of solar panels available today. Monocrystalline panels are the most efficient kind of solar panels. These silicon panels are created from a single crystal. However, they are the most expensive type of panels. Polycrystalline panels are not as efficient as monocrystalline ones but are a great budget-friendly option. These silicon cells are a product of multiple silicon crystals melded together. Made with amorphous silicon, thin-film solar cells are the most flexible solar panels. However, they are the least efficient ones.
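The series and parallel wiring options described in the installation steps above trade voltage against current. A small sketch with assumed module ratings (30 V, 8 A per panel; real modules vary):

```python
# Series vs. parallel wiring of identical PV modules, as described in the
# installation steps above. Module ratings (30 V, 8 A) are assumed values.

def series_output(module_voltage, module_current, n_modules):
    """Series string: voltages add, current equals a single module's."""
    return module_voltage * n_modules, module_current

def parallel_output(module_voltage, module_current, n_modules):
    """Parallel bank: currents add, voltage equals a single module's."""
    return module_voltage, module_current * n_modules

V, I, N = 30.0, 8.0, 4  # four assumed 30 V / 8 A modules

print(series_output(V, I, N))    # (120.0, 8.0)
print(parallel_output(V, I, N))  # (30.0, 32.0)
```

Either way the array delivers the same power (960 W here); installers pick the combination whose voltage matches the battery bank or inverter input window.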
The Wildlife Manager's Role Wildlife management is a science. The wildlife manager's job is to conserve, restore, and manage wildlife species. Wildlife biologists apply the basic principles of ecology to maintain and manage wildlife populations. Wildlife biologists develop management goals and create plans to meet those goals. They are involved in developing regulations to protect or restore threatened and endangered species, allow for the harvest of surplus animals, or reduce overabundant wildlife populations. In a sense, a wildlife manager’s task is similar to a rancher’s. Just as a rancher limits the number of animals in a cattle herd to a level that the habitat can tolerate, wildlife managers try to keep the number of animals in balance with their habitat. In addition to looking at the total number of each species in a habitat, wildlife managers also monitor the breeding stock—the correct mix of adult and young animals needed to sustain a population. To manage a habitat, wildlife managers must consider historical trends, current habitat conditions, breeding population levels, long-term projections, and breeding success. With that knowledge, wildlife managers have a variety of practices at their disposal to keep habitats in balance.
A diagram is usually a two-dimensional display which communicates using visual relationships. It is a simplified and structured visual representation of concepts, ideas, constructions, relations, statistical data, anatomy, etc. It may be used in all areas of human activity to explain or illustrate a topic. Discussion The term "diagram" is used in two senses: - visual information device: Like the term "illustration", "diagram" is used as a collective term standing for a whole class of technical genres, including graphs, technical drawings and tables. - specific kind of visual display: This is the genre that shows qualitative data with shapes that are connected by lines, arrows, or other visual links. In science the term is used in both ways. For example, Anderson (1997) stated more generally: "diagrams are pictorial, yet abstract, representations of information, and maps, line graphs, bar charts, engineering blueprints, and architects' sketches are all examples of diagrams, whereas photographs and video are not". On the other hand, Lowe (1993) defined diagrams as specifically "abstract graphic portrayals of the subject matter they represent". Visual thinking Diagrams affect the mind so that the viewer comes to understand them, but not in the way one understands words. Visual thinking or problem-solving is very ancient, and largely automatic. The brain puts together an image of the world around us based on sensory input, mostly sight, without any conscious decisions. Diagrams most likely "tap in" to some of these ancient – but largely unknown – routines. The way some diagrams affect thinking is quite important. Mendeleev's periodic table summarised previous research on the elements. Far more important, though, was the way it suggested the properties of elements which were not yet discovered.
This diagram stimulated creative thought, and other examples from the history of science could be given: see Feynman diagram. Basic diagram types Examples - Circuit diagram - Euler diagram - Family tree - Feynman diagram - Gantt chart – shows the timing of tasks or activities (used in project management) - Mind map – used for learning, brainstorming, memory, visual thinking and problem solving - Piping and instrumentation diagram - Venn diagram Some main diagram types There are at least the following types of diagrams: - Graph-based diagrams: relationships are expressed as connections between the items or overlaps between the items. - Chart-like diagram techniques: these display a relationship between two variables that take either discrete values or continuous ranges of values. - Schematics and other types of diagrams. References - Brasseur, Lee E. 2003. Visualizing technical information: a cultural critique. Amityville, N.Y.: Baywood Pub. ISBN 0-89503-240-6 - Anderson, Michael 1997. "Introduction to Diagrammatic Reasoning". Archived 2008-09-15 at the Wayback Machine. Retrieved 21 July 2008. - Lowe, Richard K. 1993. "Diagrammatic information: techniques for exploring its mental representation and processing". Information Design Journal 7 (1): 3–18. doi:10.1075/idj.7.1.01low - Roman-Lantzy, Christine 2007. Cortical visual impairment. New York: AFB Press. ISBN 0-89128-829-5 - Gregory, R.L. 1970. The intelligent eye. London: Weidenfeld & Nicolson. - Gregory, Richard 1997. Knowledge in perception and illusion. Phil. Trans. R. Soc. Lond. B 352:1121–1128. - Mendeleev, Dmitry Ivanovich; Jensen, William B. (ed) 2005. Mendeleev on the periodic law: selected writings, 1869–1905. Mineola, New York: Dover Publications. ISBN 0-486-44571-2
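A graph-based diagram, in the sense described above, expresses relationships as connections between items. As a small illustration, this sketch emits Graphviz DOT text for a tiny family tree; the `to_dot` helper and the names are invented for this example:

```python
# A tiny example of a graph-based diagram: items joined by connections.
# The to_dot helper and the family names are invented for illustration;
# the output is standard Graphviz DOT text that any DOT renderer can draw.

def to_dot(edges, name="family_tree"):
    """Render parent -> child edges as a Graphviz DOT digraph."""
    lines = [f"digraph {name} {{"]
    for parent, child in edges:
        lines.append(f'    "{parent}" -> "{child}";')
    lines.append("}")
    return "\n".join(lines)

edges = [("Ann", "Carl"), ("Ben", "Carl"), ("Carl", "Dana")]
print(to_dot(edges))
```

The same edge list could equally drive a circuit diagram or a mind map; only the rendering convention changes, which is the point of the classification above.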
Given the immense complexity of brain organization, neurobiologist Jean-Pierre Changeux has come up with a new research strategy that encourages a more comprehensive understanding of autism. This approach, outlined in the interdisciplinary journal Trends in Cognitive Sciences – which covers topics ranging from neuroscience to social sciences – combines knowledge about the disorder at several different levels: the genes involved, their expression, synapse formation, and connectivity between distant neurons in the brain (known as "long-distance" connectivity). This specific connectivity is thought to be closely linked to the ability to engage in social interaction with others. With his novel approach, Jean-Pierre Changeux is attempting to shed light on the links between genes and social awareness. We take a closer look... The pioneering work on synaptic genes by Thomas Bourgeron and his team at the Institut Pasteur has revealed almost 400 genes predisposing to autism. This is a large number considering that in children with the disorder, just one component of behavior is mainly affected: the ability to engage in social interaction, a principal characteristic of autism. "This raises questions in medical terms and also in terms of fundamental research," explains Jean-Pierre Changeux, a neurobiologist and visiting scientist in the Institut Pasteur's Department of Neuroscience. "These genetic components sometimes lead us to think that the brain's functions are entirely innate. But at the same time, we know that the human genome is actually made up of a relatively small number of genes [20,000 to 25,000], while the brain contains a hundred billion neurons and millions of billions of synapses." So genetics is not responsible for everything; the environment (epigenesis) influences how genes are expressed: "When you look a baby in the eyes for just 1 second, around 10 million synapses are created! And they can be altered by experience and other factors."
We know that the brain is a dynamic organ which interacts with its internal and external environment. But research aimed at furthering our understanding of the human brain and its functions, based mainly on the use of information technology (experimental data from behavioral recordings or cerebral imaging, etc.), tends not to look at the molecular level, which is vital for designing drugs and understanding their mechanism of action. Research in this area is hampered by the difficulty of describing the complex organization of the brain and its development. "We need to bring the many concepts and disparate data together within a unified framework to enable us to understand the higher functions of the brain," says Jean-Pierre Changeux. A new unified perspective in autism research In his recent publication, the scientist proposes a deliberately simplified conception of autism. In his view, autism is "mainly linked to a lack of connectivity between distant neurons in the brain [long-distance connectivity]. This specific connectivity is thought to influence children's social behavior at a crucial stage in their development." It is acquired during the process of brain development, which occurs over a period of nearly 15 years in children, from the embryonic stage through to adolescence. On the basis of this observation, and given the considerable variability of autistic disorders in children, Jean-Pierre Changeux suggests a new approach to help us understand the disorder. His approach is to combine several levels of study that are currently often considered separately by neuroscientists: - the genes involved, - transcription factors (the regulation of gene expression), - the epigenetic impact of nerve activity (both internal and influenced by the environment) on synapse formation, - long-distance connectivity between neurons (social behavior in children). "These four levels of structural organization are interlinked and inter-regulated during the growth and development of the brain.
They can help us gradually establish links between genes and social awareness." To better understand brain dynamics So how does this new unified perspective proposed by the scientist change things? "It means that we avoid looking at things in a way that is too narrow or restrictive," continues Jean-Pierre Changeux. "Childhood disorders are not just about impaired genes. We know, for example, that behavioral therapies can have an impact on children." It goes without saying that genetics remains an important element in autism, and "we can envisage the development of drugs to target the transcription factors involved." There is also potential for new computer modeling approaches incorporating the dynamics of brain development. All these avenues of research are provisionally based on the four levels of brain organization (genes, transcription factors, epigenetics, and long-distance connectivity) explored by Jean-Pierre Changeux, but "others may be necessary," he explains. "This approach needs to be discussed and tested experimentally." Climbing brain levels of organization from genes to consciousness, Trends in Cognitive Sciences, February 3, 2017. 1: Collège de France, CNRS UMR 3571, & Institut Pasteur, Département de Neuroscience, F-75015 Paris, France.
All lifeforms across the entirety of the planet Earth have one thing in common: there is only one genetic code shared among all of the biological kingdoms, a singular chemical language handed down through the planet's history, from the earliest single-celled creatures to the complex organisms that inhabit the Earth today. The fact that only one genetic code is known to exist would appear to be an oddity, considering the diversity of lifeforms that have sprung from that singular code. However, an international team of researchers has published a study showing that other combinations of atoms could be used to make up new genetic codes, opening the possibility that the language that makes up life throughout the universe might be more varied than we imagine. "It is truly exciting to consider the potential for alternate genetic systems… that these might possibly have emerged and evolved in different environments, perhaps even on other planets or moons within our solar system," co-author Jay Goodwin, a chemist at Emory University, said in a statement. Genetic encoding on Earth comes in the form of one of two types of large biological molecules, deoxyribonucleic acid and ribonucleic acid—more commonly known as DNA and RNA, respectively. These nucleic acids are essential to all known forms of life on Earth, where they create, encode, and then store the information of every living cell of every organism. This chemically encoded information is used to drive the functions of each individual cell, and in turn the functioning of the larger organism, if that cell is part of a multi-celled creature. Ultimately, this genetic information facilitates the reproduction of the organism, copying the entirety of its complex genetic code onto its progeny.
But, unlike the myriad codes available across our various computer platforms, there is only one genetic code shared across the entirety of life on Earth, making it difficult for researchers to envision what life on other planets—indeed, life as we don't know it—might look like. Although a number of both natural and man-made molecules mimic the basic structure of DNA, no one had studied how many possible codes could theoretically exist, or illustrated what researchers could look for when either searching for, or trying to write their own, alternate codes. "There are two kinds of nucleic acids in biology," according to study co-author Jim Cleaves, a chemist at the Tokyo Institute of Technology. "We wanted to know if there is one more to be found or even a million more. The answer is, there seem to be many, many more than was expected." The study authors designed a computer program that would generate chemical formulas for nucleic acid-like molecules, using combinations of molecules that would assemble in lines the same way nucleotides couple up in DNA strands. When all was said and done, the program had assembled more than 1,160,000 different molecules that met the study's basic criteria. "We were surprised by the outcome of this computation," according to study co-author Markus Meringer, a chemist at the German Aerospace Center in Cologne. "It would be very difficult to estimate a priori that there are more than a million nucleic-acid like scaffolds.
Now we know, and we can start looking into testing some of these in the lab." Provided the planet wasn't genetically "seeded" early in its formation, as per panspermia theory, these alternate genetic codes may offer some insight into how DNA and RNA evolved on Earth. As it stands, there are no apparent "primitive" forms of DNA or RNA that would offer some understanding of how such a complex chemical code came about in Earth's early oceans, in the way that the fossil record has documented the evolution of micro- and macroscopic organisms throughout Earth's history. This lack of intermediate examples linking simpler organic molecules to complex nucleic acids makes it appear as if they simply sprang into existence, a prospect unlikely to be acceptable to mainstream science. Aside from helping xenobiologists recognize what may be truly alien lifeforms on other planets, these genetic look-alikes may also provide the basis for future medical advances: drugs that resemble DNA and RNA are already used to combat dangerous viruses and malignant cancer cells in the human body; with a wider variety of codes to work with, drug designers could expand medical science's arsenal against deadly diseases. "It is absolutely fascinating to think that by using modern computational techniques we might stumble upon new drugs when searching for alternative molecules to DNA and RNA that can store hereditary information," explains study co-author Pieter Burger, a biochemist at Emory University.
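The enumeration the researchers describe is combinatorial at heart: generate candidate building-block combinations and keep those that satisfy pairing rules. The toy sketch below only illustrates that idea; the backbones, the donor/acceptor patterns, and the pairing rule are invented for illustration and this is not the study's actual chemistry software:

```python
# Toy illustration of combinatorial scaffold enumeration. The backbones,
# donor/acceptor patterns, and pairing rule are invented assumptions;
# this is NOT the study's actual chemistry software.

from itertools import product

BACKBONES = ["ribose", "threose", "glycol"]  # assumed sugar-like units
PATTERNS = ["DA", "AD", "DD", "AA"]          # hydrogen-bond donor/acceptor faces

def complementary(p, q):
    """A donor must face an acceptor at every position, Watson-Crick style."""
    return all(a != b for a, b in zip(p, q))

# Each candidate "letter" is a backbone paired with a bonding pattern.
candidates = list(product(BACKBONES, PATTERNS))

# Keep ordered pairs of candidates whose bonding faces can pair up.
pairs = [(c1, c2) for c1, c2 in product(candidates, repeat=2)
         if complementary(c1[1], c2[1])]

print(len(candidates))  # 12 candidate "letters"
print(len(pairs))       # 36 ordered complementary pairings
```

Even this tiny rule set shows how quickly candidate spaces grow; with realistic chemical building blocks and constraints, the count reaches the million-plus scaffolds the study reports.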
A new study discusses potential treatment and prevention methods for Zika virus (ZIKV), which can cause severe birth defects in pregnancy. The researchers found that an antibody, ZIKV-117, has the potential to treat at-risk pregnant women who have the virus. Zika virus (ZIKV) is mainly spread through infected mosquitoes. As with other viruses transmitted through mosquito bites, the common symptoms of Zika virus are mild and can include fever, rash, joint pain, and red eyes. However, Zika virus can also have more severe effects, such as birth defects, as it can be passed on from a pregnant woman to her fetus. Although ZIKV can cause conditions such as Guillain-Barré syndrome and microcephaly, there are no specific vaccines or treatments available to treat or prevent it. Thus, a new study published in the journal Nature aimed to explore the effects of various antibodies which could potentially be used or altered to develop candidate therapeutic agents against ZIKV. The researchers put together a panel of human monoclonal antibodies (mAbs) from individuals who had previously been infected with different strains of ZIKV from various countries around the world. Eight individuals in the U.S. with a recent or prior ZIKV infection were involved: 2 subjects were infected with an African lineage strain during a stay in Senegal, and the other 6 were infected during a recent outbreak of an Asian lineage strain in Mexico and Brazil. The mAbs collected were then tested on pregnant and non-pregnant mice to assess their inhibitory activity against the strain. The results identified a subset of neutralizing mAbs that recognized many of the antigen sites that antibodies can attach themselves to, and that exhibited a range of potent inhibitory activity.
Of those, the monoclonal antibody ZIKV-117 was found to be the most inhibitory, and was recognized for reducing tissue pathology, placental and fetal infection, and mortality in mice. Although it is unclear to what extent observations in mice can be translated to humans, it was shown that neutralizing human mAbs have the ability to protect against maternal-fetal transmission, infection, and disease, as well as contribute to future vaccine design efforts. By: Sana Issa, HBSc
Ponderosa Pine (Ponderosa) is generally described as a perennial tree. Native to the United States, it has its most active growth period in the spring and summer. The greatest bloom is usually observed in the spring, with fruit and seed production starting in the summer and continuing into the fall; seeds are generally not retained year to year. Ponderosa Pine has a long life span relative to most other plant species and a moderate growth rate. At maturity, the typical Ponderosa Pine will reach up to 223 feet high. Ponderosa Pine is easily found in nurseries, garden stores, and other plant dealers and distributors. It can be propagated by bare root, container, or seed, and it has a slow ability to spread through seed production. Note that cold stratification is not required for seed germination, but the plant cannot survive exposure to temperatures below a certain threshold. It has a high tolerance to drought and restricted water conditions. Uses of Ponderosa Pine: Landscaping, Medicinal, Culinary, etc. Erosion control: Ponderosa pine is a rapid-growing tree with the ability to firmly anchor into most soil types. For this reason, it is suitable for use as a windbreak species. It can also be used with other natives to provide cover and erosion control on rehabilitated sites. Ethnobotanic: Native Americans used various parts of ponderosa pine for medicinal, building and household, food, and ceremonial purposes. Needles were used as dermatological and gynecological aids. They were also used to reduce coughs and fevers. The pitch was used as an ointment for sores and scabby skin, backaches, rheumatism, earaches, inflamed eyes, and as a sleeping agent for infants. The boughs of the plant were used in sweat lodges for muscular pain, as decoctions for internal hemorrhaging, and as infusions for pediatric treatments. The roots of ponderosa pine were used to make blue dye, and needles were used as insulation for underground storage pits.
The wood was used extensively for fence posts, boards for general construction, and to fabricate snowshoes. Single logs were used to make dugout canoes. Bark was used to cover houses. Most parts of the plant were used for food, including the pitch, seeds, cones, bark, buds, and cambium. The pollen and needles were used in healing ceremonies. Ornamental value: Ponderosa pine has a lush green color and pleasant odor that makes it popular for ornamental plantings. It has been planted, sometimes out of its natural range, because of its aesthetic qualities. Ponderosa pine is used as borders of forested highways, but is not planted within the right-of-way. The large stature of the tree limits its use to open spaces. Wildlife: Red-winged blackbirds, chickadees, mourning doves, finches, evening grosbeak, jays, Clark's nutcracker, nuthatches, rufous-sided towhee, turkeys, chipmunks and squirrels consume the seeds of ponderosa pine. Blue and spruce grouse use ponderosa pine needles for nesting material. Mice, porcupines, and other rodents use the bark for nesting material. The trees are also important to various birds for cover, roosting and nesting sites. Wood production: Ponderosa pine is one of the most important timber species in the western United States. The annual production of ponderosa pine is ranked third behind Douglas fir and hem-fir. Approximately 1.3 billion board feet of ponderosa pine lumber is produced annually in Oregon, the largest supplier in the United States. It is popularly used for the construction of buildings. Description General: Pine Family (Pinaceae). Ponderosa pine is a large tree that lives 300 to 600 years and reaches heights of 30 to 50 m and 0.6 to 1.3 m in diameter. The oldest trees can exceed 70 m in height and 2 m in diameter. The bottom one-half of the straight trunk is typically without branches. The crown of ponderosa pine is broadly conical to round-shaped.
The bark is characteristically orange-brown with a scaly plate-like appearance. Twigs are stout, up to 2 cm thick, orange-brown, and rough. Needles are 12 to 28 cm long, thin and pointed with toothed edges, occur in bundles of three, and give a tufted appearance to the twig. Buds are up to 2 cm long, 1 cm wide, red-brown with white-fringed scale margins. Male cones are orange or yellow and are located in small clusters near the tips of the branches. The female cone is oval, woody, 8 to 15 cm long, with a small prickle at the tip of each scale. Flowering occurs from April to June of the first year, and cones mature and shed winged seeds in August and September of the second year. Distribution: Ponderosa pine is distributed from southern British Columbia through Washington, Oregon, and California, and east to the western portions of Texas, Oklahoma, Nebraska, North Dakota, and South Dakota. For current distribution, please consult the Plant Profile page for this species on the PLANTS Web site (http://plants.usda.gov).
Required Growing Conditions Ponderosa pine is distributed from southern British Columbia through Washington, Oregon, and California, and east to the western portions of Texas, Oklahoma, Nebraska, North Dakota, and South Dakota. For current distribution, please consult the Plant Profile page for this species on the PLANTS Web site (http://plants.usda.gov). Habitat: Ponderosa pine trees occur as pure stands or in mixed conifer forests in the mountains. It is an important component of the Interior Ponderosa Pine, Pacific Ponderosa Pine-Douglas fir, and Pacific Ponderosa Pine forest cover types. In the northwest, it is typically associated with Rocky Mountain Douglas fir, lodgepole pine, grand fir, and western larch. In California it is associated with California white fir, incense cedar, Jeffrey pine, sugar pine, coast Douglas fir, California black oak, and western juniper. In the Rocky Mountains and Utah, it is associated with Rocky Mountain Douglas fir, blue spruce, lodgepole pine, limber pine, and quaking aspen. In the Black Hills, it is associated with quaking aspen, white spruce, and paper birch. In Arizona and New Mexico, it is associated with white fir, Rocky Mountain Douglas fir, blue spruce, quaking aspen, Gambel oak, and southwestern white pine at higher elevations and Rocky Mountain juniper, alligator juniper, and Utah juniper at lower elevations (Oliver & Riker 1990). Shrubs and grasses typically associated with ponderosa pine within its range include ceanothus, sagebrush, oak, snowberry, bluestem, fescue, and polargrass. Adaptation The USDA hardiness zones for ponderosa pine range from 3 to 7. It grows on a variety of soils from shallow to deep, and from gravelly sands to sandy clay loam. It is found growing on bare rock with its roots in the cracks and crevices.
It has a low tolerance to alkalinity, preferring soils with a pH of 6.0 to 7.0. It grows best in zones with 30 to 60 cm average annual precipitation on well-drained soils. Once established, it also survives hot and dry conditions, exhibiting medium to good drought tolerance. Fifty percent shade reduces the growth rate significantly. It withstands very cold winters. Ponderosa pine is a climax species at the lower elevations of the coniferous forest and a mid-successional species at higher elevations where more competitive conifers are capable of growing. It generally grows at elevations between sea level and 3,000 m. The populations at higher elevations usually occur within the southern part of its range (Oliver & Riker 1990). Cultivation and Care Site preparation is needed to control competition, which compromises seedling survival and growth. Seeds are sown in late March to early April. The seed is sown for an initial density of 237 seedlings/m2 (22 seedlings/ft2). Transplant stock should be one or two years old, with fewer than two prior transplantings, and 15 to 30 cm in height. Space the plants 1 to 3 m apart depending on the site. Initial seedling survival is reduced under moisture stress. Older seedlings can tolerate limited moisture. Competition from other vegetation should be controlled for the first three to six years until the trees become well established. General Upkeep and Control Ponderosa pine can be over-irrigated in poorly drained soils or drowned out on high water table sites. It responds well to thinning, which should be done as stands become older to develop larger crowns, resulting in heavier seed crops for wildlife. Opening the canopy also makes more forage from associated plants available for deer and elk. The use of repellents or other control measures may be necessary to prevent overuse of the trees by rodents. Ponderosa pine is resistant to fire due to its thick bark.
Low intensity surface fires control competitive species like scrub oak and shade-tolerant conifers. Ponderosa pine seedlings can also survive low intensity burns. Pests and Potential Problems Approximately 200 insect species affect ponderosa pine from its cone stage to maturity. Pine cone beetles cause tree death by transmitting blue stain fungus to the tree. Their larvae also consume the phloem, restricting the flow of nutrients to the top of the tree. Western pine beetle is a common cause of death for older trees, drought-stressed trees, and even healthy, vigorous trees during epidemics. Bark beetles are naturally present in all stands. Harvesting methods that leave large amounts of logging slash can allow bark beetle populations to explode and kill vigorous trees up to 0.5 m in diameter. The ponderosa pine budworm, also known as the sugar-pine tortrix, eats new needles on trees in New Mexico and Colorado. Several years’ worth of damage will affect the health of the tree. Early research suggests that some insecticides may help to control infestations. Dwarf mistletoe is the most widespread parasite that causes branch and stem deformation. It germinates on ponderosa pine branches and forces its roots into the phloem of the host branch, creating stem cankers that leave the wood weak and unsuitable for use as lumber. This weakens the tree and leaves it susceptible to fungal infections and insect attacks. Root diseases, rusts, trunk decays, and needle and twig blights also cause significant damage. Seeds and Plant Production Ponderosa pine is propagated by seed. Cones are ready for collection in October and November when they turn reddish brown. Mature seed is firm and brown in color. Cones should be dried on a canvas tarp in a well-ventilated area immediately after they have been collected. The seeds will drop from the cones as they dry. Several germination methods for ponderosa pine have been utilized, each with its own variations.
In general, seeds undergo an imbibition treatment before stratification. Seeds are placed in mesh bags and soaked in cold running water for 48 hours. One variation is to soak the seeds in a 40% bleach solution for 10 minutes with hand agitation prior to placing them under running water. The mesh bags are placed in plastic bags and stored at 1°C for 2 to 8 weeks. They should be checked daily for mold. Seeds are sown into containers and covered with media. The media should be kept moist throughout germination. Germination will occur at an average greenhouse temperature of 20°C. Alternating greenhouse temperatures of 21-25°C during the day and 16-18°C at night is an appropriate environment for germinating seeds. Germination will occur in approximately 15 days. Seedlings are thinned and watered daily throughout the establishment phase. They should not be moved outdoors until after the last frost of the year. Seeds can be dried to between 5 and 8% moisture and placed in airtight plastic bags, then stored for long periods of time in freezers set at –15°C. Source: USDA, NRCS, PLANTS Database, plants.usda.gov. National Plant Data Center, Baton Rouge, LA 70874-4490 USA
“Hail, Columbia,” written in Philadelphia in the closing years of the eighteenth century, became a popular patriotic song in early America and served for many years as the unofficial national anthem. Bands began to play it in honor of the vice president of the United States in the 1830s, and later it became the official song of that office. Philadelphia lawyer Joseph Hopkinson (1770–1842) created “Hail, Columbia” in the spring of 1798 when he put lyrics to the tune of the “President’s March,” a patriotic instrumental piece written in 1789 by Philip Phile (1734?–93), a German immigrant musician active in Philadelphia in the 1780s and 1790s. In his later years, Hopkinson related the story behind the song: In April 1798 a young singer-actor named Gilbert Fox (1776–1807?) asked Hopkinson to write a song for Fox to perform at an upcoming benefit concert in Philadelphia. Fox needed a rousing song for the concert and asked if Hopkinson could write lyrics to Phile’s “President’s March.” Hopkinson obliged and came up with lyrics that opened with the stirring proclamation “Hail Columbia, happy land! Hail, ye heroes, heav’n born band.” With Philadelphia then serving as the nation’s capital and the United States on the verge of war with France, Hopkinson envisioned the song as a patriotic rallying cry. The public first heard the song when Fox performed it at the Chestnut Street Theatre on April 25, 1798. The audience loved it and demanded multiple encores. A Philadelphia music publisher issued a sheet music version a few days later and the song quickly became very popular in both Philadelphia and New York. Unlike other early American patriotic songs such as “The Star-Spangled Banner” and “America (My Country ’Tis of Thee),” which featured new lyrics set to traditional English melodies, both the words and music to “Hail, Columbia” were written in the United States.
Philip Phile, who wrote the tune, first appears in the mid-1780s as a performer, composer, and music teacher in Philadelphia and New York. In 1785 he led the orchestra at Philadelphia’s Southwark Theatre. He wrote the “President’s March” in 1789, reportedly in honor of the presidential inauguration of George Washington (1732–99). Philadelphia music publisher Benjamin Carr (1768–1831) first published the piece in 1793. Phile died later that year in Philadelphia, perhaps a victim of the city’s infamous yellow fever epidemic. Joseph Hopkinson, son of well-known Philadelphia patriot and signer of the Declaration of Independence Francis Hopkinson (1735–91), was a prominent lawyer who later served as a U.S. congressman and federal judge. Joseph followed in his father’s footsteps in mixing law, statesmanship, and the arts. Francis Hopkinson, in addition to being a lawyer and judge, also became well known as a poet and musician. Considered America’s first “Poet-Composer,” Francis Hopkinson was the first native-born American to write a popular song, “My Days Have Been So Wondrous Free,” composed in 1759. “Hail, Columbia” remained popular through the centuries and was one of several songs that served as an unofficial American national anthem until Congress officially gave that designation to “The Star-Spangled Banner” in 1931. Written in the new nation’s first capital during a formative period in American history, “Hail, Columbia” was one of the first pieces of music to define the young United States in song. Later, as the official song of the vice president, it continued to play a role in America’s musical identity. Jack McCarthy is a music historian who regularly writes, lectures, and gives walking tours on Philadelphia music history. A certified archivist, he is currently directing a project for the Historical Society of Pennsylvania focusing on the archival collections of the region’s many small historical repositories. 
Jack has served as consulting archivist for the Philadelphia Orchestra and the 2014 radio documentary Going Black: The Legacy of Philly Soul Radio and is giving several presentations and helping produce the Historical Society of Pennsylvania’s 2016 Philadelphia music series, “Memories & Melodies.” (Author information current at time of publication.) Copyright 2016, Rutgers University
What is listeriosis? Listeriosis, which is caused by eating food contaminated by the bacterium Listeria monocytogenes, can be a serious disease. In the United States, an estimated 1,850 persons become seriously ill with listeriosis each year. Of these, 425 die. In Illinois, approximately 20 cases of listeriosis are reported annually; about 25 percent of those cases are fatal. Who is at risk for listeriosis? While anyone can become ill from eating food contaminated by the bacteria, pregnant women, newborns and adults with weakened immune systems are most at risk. Pregnant women are about 20 times more likely than other healthy adults to get listeriosis. About one-third of all reported cases happen during pregnancy. Infection during pregnancy may result in spontaneous abortion during the second and third trimesters or stillbirth. Those with weakened immune systems (for example, the elderly and persons with cancer, diabetes or kidney disease or HIV/AIDS) are more likely to get listeriosis than people with normal immune systems. How does a person get listeriosis? You get listeriosis from eating food contaminated by the bacteria. Babies can be born with the disease if their mothers ate contaminated food during pregnancy. Although healthy adults and children may consume contaminated food without becoming ill, those who are at increased risk can get the disease after consuming even a few bacteria. How does Listeria get into food? Listeria monocytogenes is found in soil and water. Vegetables can become contaminated from the soil or from manure used as fertilizer. Animals can carry the bacteria without appearing ill, and meat or dairy products from these animals can be contaminated. The bacteria also have been found in a variety of raw foods, such as uncooked meats and vegetables, as well as in processed foods that become contaminated after processing, such as cheese and cold cuts at the deli counter.
Unpasteurized (raw) milk or foods made from raw milk may contain the bacteria. How do you know if you have listeriosis? A person with listeriosis usually has a fever, muscle aches and, sometimes, gastrointestinal symptoms such as nausea and diarrhea. If infection spreads to the nervous system, symptoms such as headache, stiff neck, confusion, loss of balance or convulsions can occur. Infected pregnant women may experience only a mild, flu-like illness. However, infection during pregnancy can lead to premature delivery, infection of the newborn or even stillbirth. There is no routine screening test for susceptibility to listeriosis during pregnancy, as there is for rubella or some other congenital infections. If you have symptoms such as fever or stiff neck, you should consult your physician. A blood or spinal fluid test (to culture the bacteria) will show if you have listeriosis. During pregnancy, a blood test is the most reliable way to find out if symptoms are due to listeriosis. How can you reduce your risk for listeriosis? As with other foodborne illnesses, there are several general guidelines that will help to reduce the risk of infection with Listeria monocytogenes. Persons who are at high risk (pregnant women and persons with weakened immune systems) should follow additional recommendations. How is listeriosis treated? When infection occurs during pregnancy, antibiotics given promptly to the pregnant woman can often prevent infection of the fetus or the newborn. Babies with listeriosis receive the same antibiotics as adults. Even with prompt treatment, however, some infections result in death. This is particularly likely in the elderly and in persons with other serious medical problems.
Women's Voting Rights in the U.S. and Egypt – Outline Example The paper "Women's Voting Rights in the U.S. and Egypt" is a wonderful example of an outline on social science. I. Introduction: Statement of the Problem A. Research Questions 1. What are the unique aspects of the American political system that had an impact on women’s suffrage, and how do these compare with Egypt’s status? 2. What is the extent of women’s economic contributions in both countries and its general effect on their empowerment? 3. What is the role of educational opportunities in advancing the rights of women in Egypt and America? 4. How have aspects of motherhood and domesticity influenced women’s suffrage in both the American and Egyptian context? 5. How has spirituality shaped the liberation of American and Egyptian women? B. Research Hypothesis 1. There are significant similarities concerning women’s suffrage in Egypt and America. 2. There are significant differences concerning women’s suffrage in Egypt and America. II. Background and Significance of the Problem A. Operational Definitions 1. Women’s Suffrage 2. United States B. Importance of Comparing Women’s Suffrage in Both Countries 1. More Information 2. Advocacy Purposes 3. Springboard for Policies and Programs Chapter II: Literature Review A. Interest in Women’s Suffrage Issues II. Historical Background A. Description of Past Movements III. Presentation of Literature A. Books and Journals Concerning Suffrage IV. Purpose of the Study and Rationale A. Importance of Studying Women’s Suffrage in Both Countries B. Advantages of the Study V. Questions or Hypothesis VI. Operational Definitions Chapter III: Method A. Brief Significance of Methods in Research II. Research Method A. Research Design: Qualitative Research Design (Case Study) B. Selection of Subjects: Purposeful C. Instrumentation: Theories on Women’s Rights D. Procedures for Data Collection 1. Review of Case Studies and Other Researches 2. Review of Published Articles on the Topic E.
Procedures for Data Analysis 2. Tentative Hypothesis III. Assumptions and Limitations of the Study A. Application of Theories in General B. Lack of In-depth Analysis of Different Sources and Subjects Chapter IV: Results I. What are the similarities of women’s suffrage in Egypt and America in various factors? II. What are the main differences between the two countries regarding the issue and the possible reasons for them? Chapter V: Discussion and Recommendations I. Discussion of Results and Conclusion A. Implication of Results Via Theories and Issues B. Brief Statement of Inference II. Recommendations A. Improvement of the Study’s Weaknesses B. Application of Results to Present and Future Crises III. Suggestions for Further Research A. Inclusion of Related Variables Such as Territory, Crime, and Politics
This material must not be used for commercial purposes, or in any hospital or medical facility. Failure to comply may result in legal action. WHAT YOU NEED TO KNOW: What is heat exhaustion? Heat exhaustion is when your body overheats. Normally, the body has a cooling system that is controlled by the brain. The cooling system adjusts to hot conditions and lowers your body temperature by producing sweat. With heat exhaustion, the body's cooling system is not working well and results in an increased body temperature. What increases my risk for heat exhaustion? - Older age - Medicines, including those used for treating pain, allergies, or depression - Illegal drugs or alcohol What are the signs and symptoms of heat exhaustion? - Heavy sweating - Feeling faint, dizzy, or weak - A headache or tiredness - Fast breathing or a fast heartbeat - Muscle cramps - Nausea or vomiting How is heat exhaustion diagnosed? Your healthcare provider will check your temperature. You may also need any of the following: - Blood and urine tests may show if you are dehydrated and how your body is working. - An EKG test records your heart rhythm and how fast your heart beats. It is used to check for heart problems caused by heat exhaustion. What first aid can I do for heat exhaustion? - Move to an air-conditioned location or a cool, shady area and lie down. Raise your legs above the level of your heart. - Drink cold liquid, such as water or a sports drink. - Mist yourself with cold water or pour cool water on your head, neck, and clothes. - Loosen or remove as many clothes as possible. - If you do not feel better in 1 hour, go to the emergency department. How is heat exhaustion treated? - Cooling materials, such as ice-soaked blankets, may be used to quickly lower your body temperature. - IV fluids help prevent dehydration and complications of overheating. - An oral rehydrating solution is a drink that has the right amounts of water, salts, and sugar your body needs.
It is used to help prevent or treat dehydration. How can I prevent heat exhaustion? - Wear lightweight, loose, and light-colored clothing. - Protect your head and neck with a hat or umbrella when you are outdoors. - Drink lots of water or sports drinks. Avoid alcohol. - Eat salty foods, such as salted crackers and salted pretzels. - Limit your activities during the hottest time of the day. This is usually late morning through early afternoon. - Use air conditioners or fans and ensure proper ventilation. If there is no air conditioning available, keep your windows open so air can circulate. Call 911 for any of the following: - You have trouble breathing. - You are confused or cannot think clearly. - You cannot move your arms and legs. When should I seek immediate care? - You cannot stop vomiting. When should I contact my healthcare provider? - Your signs and symptoms do not improve with treatment. - You have numbness or a prickling feeling in your arms or legs. - You have questions or concerns about your condition or care. Care Agreement You have the right to help plan your care. Learn about your health condition and how it may be treated. Discuss treatment options with your caregivers to decide what care you want to receive. You always have the right to refuse treatment. The above information is an educational aid only. It is not intended as medical advice for individual conditions or treatments. Talk to your doctor, nurse or pharmacist before following any medical regimen to see if it is safe and effective for you. © 2015 Truven Health Analytics Inc. Information is for End User's use only and may not be sold, redistributed or otherwise used for commercial purposes. All illustrations and images included in CareNotes® are the copyrighted property of A.D.A.M., Inc. or Truven Health Analytics.
Samuel de Champlain Samuel de Champlain, cartographer, explorer, colonial administrator, author (born circa 1567 in Brouage, France; died 25 December 1635 in Quebec City). Known as the “Father of New France,” Samuel de Champlain played a major role in establishing New France from 1603 to 1635. He is also credited with founding Quebec City in 1608. He explored the Atlantic coastline (in Acadia), the Canadian interior and the Great Lakes region. He also helped found French colonies in Acadia and at Trois-Rivières, and he established friendly relations and alliances with many First Nations, including the Montagnais, the Huron, the Odawa and the Nipissing. For many years, he was the chief person responsible for administering the colony of New France. Champlain published four books as well as several maps of North America. His works are the only written account of New France at the beginning of the 17th century.
Sensory Processing Disorder (SPD) What it means and how it looks in daily life SPD is a neurological disorder that causes people to have difficulty organizing and responding to information that comes in through the senses. Unlike those with impairments in sight or hearing, for example, people with SPD are able to take in information through their senses, but that information gets “mixed up” in their brains — causing responses that are not appropriate for the context. Some people are over-responsive. They are repelled by bright lights and rooms full of noisy people, and they hate being touched or getting their hands messy. Others are under-responsive. They need extra sensory input to regulate their bodies, so they jump, climb, crash, and squeeze all day long. They can seem wild: constantly moving, touching things, being the loudest one in the room, and unable to sit still for long. And some people have a mix of both. Someone may be over-responsive to noise, covering their ears when a firetruck comes by and needing noise-cancelling headphones to focus in school. Yet that same person may need the input of a weighted blanket to sleep, or their body feels out of control. How we experience the world around us We all learned about the five senses in elementary school — taste, touch, smell, hearing, and seeing. And these are all important for giving our bodies sensory input. But did you know we have three other senses? The Vestibular sense is made up of parts of the inner ear and brain that control balance and eye movement. Difficulties with the vestibular system can cause clumsiness, reading problems, imbalance, trouble focusing, light sensitivity, dizziness, forgetfulness, and hearing changes. Proprioception refers to the way joints and muscles send messages to the brain to help coordinate movement.
It’s the idea of “knowing where my body is in space.” It tells our bodies how much force to apply to movement — such as how hard to press a pencil to write, how tight to give a hug, and how to keep our bodies in a chair. Interoception helps you feel and understand what’s going on inside your body. Receptors in your organs, including your skin, send information about the inside of your body to your brain. This regulates vital functions such as body temperature, hunger, thirst, digestion, and heart rate. People who struggle with interoception may not know when they are hungry, full, hot, cold, thirsty, nauseated, itchy, or ticklish. They can also have trouble feeling their emotions, as they may not be tuned in to the body cues that interpret emotion. How our products help with SPD, autism, anxiety, ADHD KnotSense offers tools that provide calming or stimulating sensory input. Our goal is to offer products that can be used every day without causing the person using them to stand out as “different.” Our sensory bracelets and fidgets help kids focus in class; help office workers calm their anxiety; and help people who pick their skin, chew on their clothing, and pull their hair focus that sensory need onto a socially acceptable tool. All of the products we offer are handmade or curated to be the most helpful for both children and adults with sensory needs.
Macular degeneration is a deterioration of the deepest layers of the retina in the area of the macula. What Is the Macula? The thin inner layer of the eye is called the retina. It is like the “camera film” of the eye. The central portion of the retina that allows us to see detail vision such as reading and recognizing people's faces is called the macula. The rest of the retina allows for “side vision” but is not able to distinguish fine detail. A person without good macular function is able to walk around without bumping into things as well as take care of their daily needs such as bathing, cooking, and eating. However, they are unable to read a newspaper, recognize details of a person's face, or see highway signs while driving. What Is Macular Degeneration? Macular degeneration is a deterioration of the deepest layers of the retina in the area of the macula. The actual cause of macular degeneration is under intense study, but it is usually associated with the aging process. The deterioration appears to be related to a buildup of oxidants and other metabolic waste products in the pigmented layer of the retina. Over time this layer begins to degenerate and form what are called drusen. As more drusen form, macular function decreases and vision begins to blur. In some cases the pigmented layer undergoes atrophy. Small gaps (scotomas) in vision develop and eventually enlarge to cause more severe vision loss. Drusen development and/or atrophy of the pigmented layer of the macula is called dry macular degeneration. In the most severe form of macular degeneration, small breaks in the layer between the retina and the vascular middle layer of the eye (choroid) can develop. These breaks allow abnormal blood vessels to develop and grow underneath the retina. These vessels (subretinal neovascular membranes) hemorrhage and scar, causing fairly rapid and severe loss of macular function and central vision. This is called wet macular degeneration.
Treatment for dry macular degeneration consists of vitamin supplements and close surveillance for visual changes. The AREDS vitamin formula is available over the counter and can reduce the risk of dry macular degeneration progressing to the wet form. The wet form of macular degeneration has many new treatments recently approved by the FDA. These treatments range from lasers to medications injected in or around the eye. Macular degeneration is a difficult disease, but with motivation and patience its effects can be significantly reduced. Early treatment and preventative measures can help slow down the condition, and low vision rehabilitation can help people to lead an independent lifestyle.
Convention on the Rights of the Child Each child is entitled to a good, safe childhood. All children have the right to grow up and attend school, as well as to play and participate. Every child is also entitled to protection and care. All children’s rights are set down in the United Nations Convention on the Rights of the Child. The Convention is a generally accepted understanding of the rights of all children regardless of their background, such as nationality, religion or family wealth. The Convention applies to all children under 18 years of age. Almost every country has adopted the Convention. In Finland, the Convention on the Rights of the Child became law in 1991. The Convention is legally binding, so it obligates states, municipalities, authorities, children’s parents and other adults just like other laws. Together with the Convention on the Rights of the Child, Finland’s national laws protect children’s rights. Finnish legislation includes a number of laws according to which children must be treated equally as individuals, and responsibility must be taken for all children equally.
The native peoples of North America are diverse in culture, language, and ecological adaptations to varied environments. This variation is expressed in their attire. The only major constant in their clothing prior to European contact was the use of the skins of animals, most notably the tanned skins of a variety of large North American mammals: buffalo or bison, antelope, mountain sheep, caribou, and others. Owing to its wide geographic distribution, deer was the most prevalent. Smaller animals such as mink, beaver, and rabbit were also used, but mainly for decorative effects. Native North Americans' Clothing Native peoples in certain regional areas did create textile clothing technologies that mainly utilized fibers harvested from gathered plant products and sometimes used spun thread made from the hair of both domesticated and killed or captured wild animals. From Alaska down through the gathering cultures of the Plateau, Great Basin, and California tribes as far to the southwest as the border of Mexico, woven products were worn literally from head to toe. Hats, capes, blouses, dresses, and even footwear were constructed of plant material. In the north, this practice reflected the deleterious effects of the constant dampness of the coastal temperate rain forest climate upon skin products, and in the south it was largely due to the scarcity or rarity of large animals for skins. For example, as a means to maximize available resources, several Great Basin tribes had developed a system of weaving strips of the skins of small animals (like rabbits) into blankets or shawls. Before contact, the main decorative additions for clothing were paints and the quills of the porcupine and the shafts of stripped bird feathers. Entire feathers from a variety of birds were used as well, with the feathers from large raptors, especially the eagle, signifying prestige and sacred power among many tribes.
Dyes and paints were used to color both the additive elements and the main bodies of the clothes themselves. These coloring agents were derived from plant and mineral sources, and in some areas very sophisticated systems for obtaining different colors from the local flora were in place. These products, as well as paints derived from regional mineral outcroppings, became important trade items. Bone and shell ornaments were used as jewelry (bracelets, earrings, combs, and hair ornaments) and to a lesser extent as clothing ornaments. Extensive precontact trade routes existed for the distribution of these items, with the coveted shimmering abalone shells and the tapering conical dentalia shells that resembled miniature elephant tusks being traded from California and the more northerly Pacific Coast to the Great Plains and beyond to the Great Lakes region. Similarly, shells found in the Gulf of Mexico and ornaments cut from them were traded up river trade routes to areas in the Northern Plains, Midwest, and Great Lakes regions. A wide network also existed for the disbursement of the beads cut from Atlantic shells, later known to early European settlers as "wampum." The only evidence of metallurgy north of Mexico occurred among the so-called Mound Builders of the Mississippi and Ohio valleys, where copper, mined largely on the islands of Lake Superior, was traded south to be turned into jewelry and other ornaments. On the Pacific Northwest Coast, exploitation of similar "native copper" deposits allowed the nearly pure copper product to be worked into jewelry, knives, and other implements. The unique shieldlike metal objects created there were a pure demonstration of wealth, representing prestige and status among the "Potlatch People" of the Northwest Coast.
The abundance of resources in the Pacific coastal region led to the extensive use of various plant sources for clothing; in the north, from Alaska to Northern California, people relied upon evergreen root and inner-bark fibers, together with sedges, grasses, and ferns. As the rain forest climate gives way to marshy environments and grassy savannas in the south, material from grasses and other smaller plants predominates. Nevertheless, this general area created some of the finest basketry products ever made, and a great array of basket-woven products was used as apparel. Large rain hats, caps, various forms of capes and wraps, dresses, kilts, leggings, and even shoes met the varying needs of the people of the western coast.

Animal Skin Clothing

Peoples of the arid Southwest and Great Basin areas also wove clothing, but to a lesser degree, incorporating more skin products. Some sedentary tribes raised cotton that had been domesticated in Mesoamerica and traded north together with chilies, corn, and squashes as part of an agricultural diffusion. The Hopi, for example, produced cotton mantas (women's dresses) as well as sashes and kilts for men; interestingly, the men wove their own apparel in this culture. In the Southwest in general, men tended to wear a belt and breechclout combination, while women wore either a skirt or kilt or a dress that covered the entire torso, depending upon the tribe. More warmth for the winter months was furnished by a robe of hide tanned with the hair on, from locally obtained deer, antelope, or sheep, or from trade-obtained bison. Woven rabbit-skin robes were also used. Footwear able to resist a rough, rocky environment and the often-thorned plants of the desert climate assumed increased importance. In the far north, the Arctic culture area, the Inuit (formerly called Eskimo) often utilized skins processed with the fur retained in such a way as to combat the frigid weather.
Fitted fur garments had hoods bordered with the fur of specific species, chosen to minimize the frost that forms around the edge as moisture from exhaled breath condenses in extreme weather. Other areas of the clothing were specifically engineered as well, with particular species' skins used for particular traits in different parts of the garment: seal for water resistance, caribou for insulation. Sealskin-soled mukluks, or boots with formed soles, were stuffed with dried grasses or mosses to insulate and protect the feet. The different species' skins were used decoratively as well, with differences in tailoring demarcating culture groups and gender. In addition, coastal groups created waterproof clothing of finely stitched seal intestine that enabled sea hunters to venture out on frigid Arctic waters, fastening themselves into their one-man kayaks in a leak-proof manner; the intrusion of frigid seawater might have meant death, both for the kayaker and for those he provided for. In the next culture area to the south, in the interior of the continent, the Athapaskan and Northern Algonquin also designed their clothing to stave off the hazards of the northern winter. Ironically, the possibility of thawing ground occasionally posed more danger than the cold itself, changing their clothing design needs as compared with those of their neighbors to the north. Additional decorative possibilities were afforded by the porcupine and moose of the boreal forest, allowing the use of quills and moose hair as overlay and embroidery elements. Indians of the Eastern Woodlands also decorated their clothing with quills and hair, both in embroidery and appliqué. Even inland tribes could obtain trade beads and shaped objects made by the coastal tribes from the shells of the abundant shellfish.
Deer, the most common large animal, provided the most common skins used for clothing. Breechclouts, worn with each end tucked into a belt, and deer-skin leggings were the norm in male attire, with women generally wearing full dresses. Moccasins in the wooded areas tended to be soft-soled, of tanned deer, moose, or caribou hide, often smoked over a smoldering fire to help it resist moisture before being cut up for the shoe's construction. Deer-hide robes aided warmth during the cooler months. Some tribes in the area did develop a textile culture using fibers from gathered plants such as the stinging nettle; however, it was largely limited to smaller objects such as pouches, bags, and sashes. By contrast, the tribes of the Plains had virtually no textile tradition. The environment of the Plains also necessitated a change in footwear technology, with most tribes favoring a two-part moccasin: a tanned-skin vamp or upper attached to a thicker rawhide sole. As in the Southwest, this was a response to the more barren ground surface and thorned plants. With the majority of the buffalo or bison in North America residing in this area, the animal assumed a central position in the cultures of the Plains tribes. This importance is reflected in clothing as well, with buffalo hide becoming a major resource. In the northern tribes especially, robes of buffalo hide tanned with the hair on were highly prized as winter attire and often highly decorated. To counter the monolithic image of the Native American, one must consider the estimated 565 viable native groups of the early 2000s in their proper cultural contexts to truly comprehend their rich cultural diversity, linguistic variation, and clothing and design of attire. The long-utilized culture area concept still has pertinence in postcolonial life.
Within these culture areas, indigenous nations were grouped mainly along the lines of material culture items, as among the Iroquois in the Northeast, where longhouses sheltered several families together based upon matrilineal clan affiliation. There, a mixed hunting and agricultural economy was fostered by matrilocal residence and inheritance through the female line and allowed a focus on seasonal ceremonies such as the midwinter and harvest festivals. Ceremonial plaited corn-husk and carved wood masks were used in these and other rituals, often in the context of healing. Stranded belts of cut-shell beads rose above mere decoration, often being created to commemorate specific events. These wampum belts served as historic record-keeping devices; quite a number of existing belts document treaties between native and European groups, for example.

Environmental Materials Driving Clothing Choices

One can select any area and explicate the clothing and adornment of its groups interacting with the environmental opportunity. The Northwest Coast comprised various peoples speaking unrelated languages but largely sharing a vibrant cultural lifestyle based upon the economic surplus afforded by the rich maritime environment. The most dazzling and elegant designs were undoubtedly those of the Haida from the Queen Charlotte Islands off the coast of present-day British Columbia in Canada. Their totemic art was embodied in monumental totem poles, decorated houses, masks for ceremonial use, and the beautification of virtually every object type in the culture, whether utilitarian or decorative. This urge to beautify transferred to clothing as well, with masterful painting incorporating the same curvilinear, stylized totemic themes on woven hats and mats made from cedar bark and on skin robes and tunics. Chilkat blankets woven of mountain-goat wool and cedar bark were important prestige items owned by powerful individuals.
All aboriginal people of North America have undergone coerced culture change at the hands of the colonizers. Although native beliefs, cultures, and languages were legally suppressed, these peoples have adapted to new lifestyles. Many wear traditional styles adapted to new materials, and in attire they evidence modern styles in new fashions.
This page explains what peritoneal dialysis is. This text is taken from NHS Choices. There are two main types of dialysis:

- Haemodialysis involves diverting blood into an external machine, where it's filtered before being returned to the body.
- Peritoneal dialysis involves pumping dialysis fluid into the space inside your abdomen (tummy) to draw out waste products from the blood passing through vessels lining the inside of the abdomen.

There are two main types of peritoneal dialysis:

- continuous ambulatory peritoneal dialysis (CAPD) – where your blood is filtered several times during the day
- automated peritoneal dialysis (APD) – where a machine helps filter your blood during the night as you sleep

Both treatments can be done at home once you've been trained to carry them out yourself. They're described in more detail below.

Preparing for treatment

Before you can have CAPD or APD, an opening will need to be made in your abdomen. This will allow the dialysis fluid (dialysate) to be pumped into the space inside your abdomen (the peritoneal cavity). An incision is usually made just below your belly button. A thin tube called a catheter is inserted into the incision and the opening will normally be left to heal for a few weeks before treatment starts. The catheter is permanently attached to your abdomen, which some people find difficult. If you're unable to get used to the catheter, you can have it removed and switch to haemodialysis instead.

Continuous ambulatory peritoneal dialysis

The equipment used to carry out CAPD consists of:

- a bag containing dialysate fluid
- an empty bag used to collect waste products
- a series of tubing and clips used to secure both bags to the catheter
- a wheeled stand that you can hang the bags from

At first, the bag containing dialysate fluid is attached to the catheter in your abdomen. This allows the fluid to flow into the peritoneal cavity, where it's left for a few hours.
While the dialysate fluid is in the peritoneal cavity, waste products and excess fluid in the blood passing through the lining of the cavity are drawn out of the blood and into the fluid. A few hours later, the old fluid is drained into the waste bag. New fluid from a fresh bag is then passed into your peritoneal cavity to replace it, and left there until the next session. This process of exchanging the fluids usually takes about 30-40 minutes to complete. Exchanging the fluids isn't painful, but you may find the sensation of filling your abdomen with fluid uncomfortable or strange at first. This should start to become less noticeable as you get used to it. Most people who use CAPD need to repeat this around four times a day. Between treatment sessions, the bags are disconnected and the end of the catheter is sealed.

Automated peritoneal dialysis (APD)

Automated peritoneal dialysis (APD) is similar to CAPD, except a machine is used to control the exchange of fluid while you sleep. You attach a bag filled with dialysate fluid to the APD machine before you go to bed. As you sleep, the machine automatically performs a number of fluid exchanges. You'll usually need to be attached to the APD machine for 8-10 hours. At the end of the treatment session, some dialysate fluid will be left in your abdomen; this will be drained during your next session. During the night, an exchange can be temporarily interrupted if, for example, you need to get up to go to the toilet. Some people who have APD worry that a power cut or other technical problem could be dangerous. However, it's usually safe to miss one night's worth of exchanges as long as you resume treatment within 24 hours. You'll be given the telephone number of a 24-hour hotline you can call if you experience any technical problems.
Fluid and diet restrictions

If you're having peritoneal dialysis, there are generally fewer restrictions on diet and fluid intake compared with haemodialysis, because the treatment is carried out more often. However, you may sometimes be advised to limit how much fluid you drink, and you may need to make some changes to your diet. A dietitian will discuss this with you if appropriate.

Dialysis and pregnancy

Becoming pregnant while on dialysis can sometimes be dangerous for the mother and baby. It's possible to have a successful pregnancy while on dialysis, but you'll probably need to be monitored more closely at a dialysis unit, and you may need more frequent or longer treatment sessions. If you're considering trying for a baby, it's a good idea to discuss this with your doctor first.

If you're having home haemodialysis or peritoneal dialysis, the supplies and equipment you need will normally be provided by your hospital or dialysis clinic. You'll be told how to get and store your supplies as part of your training in carrying out the procedure. It's important to make sure you have enough supplies in case of an emergency, such as adverse weather conditions that prevent you from obtaining more. Your doctor or nurse may suggest keeping at least a week's worth of equipment as an emergency backup supply. You should also let your electricity company know if you're using home haemodialysis or automated peritoneal dialysis, so they can treat you as a priority if your electrical supply is disrupted.
Computer facial animation

Computer facial animation is primarily an area of computer graphics that encapsulates models and techniques for generating and animating images of the human head and face. Due to its subject and output type, it is also related to many other scientific and artistic fields, from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication, together with advances in computer graphics hardware and software, has generated considerable scientific, technological, and artistic interest in computer facial animation. Although the development of computer graphics methods for facial animation started in the early 1970s, the major achievements in this field are more recent, dating from the late 1980s onward. Computer facial animation includes a variety of techniques, from morphing to three-dimensional modeling and rendering. It has become well known and popular through animated feature films and computer games, but its applications include many more areas, such as communication, education, scientific simulation, and agent-based systems (for example, online customer service representatives). Human facial expression has been the subject of scientific investigation for more than one hundred years. The study of facial movements and expressions started from a biological point of view. After some older investigations, for example by John Bulwer in the late 1640s, Charles Darwin's book The Expression of the Emotions in Man and Animals can be considered a major point of departure for modern research in behavioural biology. More recently, one of the most important attempts to describe facial activities (movements) was the Facial Action Coding System (FACS). Introduced by Ekman and Friesen in 1978, FACS defines 46 basic facial Action Units (AUs). A major group of these Action Units represent primitive movements of facial muscles in actions such as raising the brows, winking, and talking. Eight AUs are for rigid three-dimensional head movements, i.e.
turning and tilting left and right and moving up, down, forward, and backward. FACS has been successfully used for describing the desired movements of synthetic faces and also for tracking facial activities. Computer-based facial expression modelling and animation is not a new endeavour. The earliest work with computer-based facial representation was done in the early 1970s. The first three-dimensional facial animation was created by Parke in 1972. In 1973, Gillenson developed an interactive system to assemble and edit line-drawn facial images, and in 1974, Parke developed a parameterized three-dimensional facial model. The early 1980s saw the development of the first physically based, muscle-controlled face model by Platt and of techniques for facial caricature by Brennan. In 1985, the short animated film "Tony de Peltrie" was a landmark for facial animation: for the first time, computer facial expression and speech animation were a fundamental part of telling the story. The late 1980s saw the development of a new muscle-based model by Waters, the development of an abstract muscle action model by Magnenat-Thalmann and colleagues, and approaches to automatic speech synchronization by Lewis and by Hill. The 1990s saw increasing activity in the development of facial animation techniques and the use of computer facial animation as a key storytelling component, as illustrated in animated films such as Toy Story, Antz, Shrek, and Monsters, Inc., and in computer games such as The Sims. Casper (1995) is a milestone of this period, being the first movie in which a lead actor was produced exclusively using digital facial animation (Toy Story was released later the same year). The sophistication of the films increased after 2000. In The Matrix Reloaded and The Matrix Revolutions, dense optical flow from several high-definition cameras was used to capture realistic facial movement at every point on the face.
The Polar Express used a large Vicon system to capture upward of 150 points. Although these systems are automated, a large amount of manual clean-up effort is still needed to make the data usable. Another milestone in facial animation was reached by The Lord of the Rings, where a character-specific shape base system was developed. Mark Sagar pioneered the use of FACS in entertainment facial animation, and FACS-based systems developed by Sagar were used on Monster House, King Kong, and other films. Two-dimensional facial animation is commonly based upon the transformation of images, including both still photographs and frames from video sequences. Image morphing is a technique which allows in-between transitional images to be generated between a pair of target still images or between frames from sequences of video. These morphing techniques usually consist of a combination of a geometric deformation technique, which aligns the target images, and a cross-fade, which creates the smooth transition in the image texture. An early example of image morphing can be seen in Michael Jackson's video for "Black or White". In 1997, Ezzat and Poggio, working at the MIT Center for Biological and Computational Learning, created a system called MikeTalk which morphs between image keyframes, representing visemes, to create speech animation. Another form of animation from images consists of concatenating together sequences captured from video. In 1997, Bregler et al. described a technique called video rewrite, in which existing footage of an actor is cut into segments corresponding to phonetic units, which are then blended together to create new animations of a speaker. Video rewrite uses computer vision techniques to automatically track lip movements in video, and these features are used in the alignment and blending of the extracted phonetic units.
This animation technique only generates animations of the lower part of the face; these are then composited with video of the original actor to produce the final animation. Three-dimensional head models provide the most powerful means of generating computer facial animation. One of the earliest works on computerized head models for graphics and animation was done by Parke. The model was a mesh of 3D points controlled by a set of conformation and expression parameters. The former group controls the relative location of facial feature points such as the eye and lip corners; changing these parameters can re-shape a base model to create new heads. The latter group of parameters (expression) are facial actions that can be performed on the face, such as stretching the lips or closing the eyes. This model was extended by other researchers to include more facial features and add more flexibility. Different methods for initializing such a "generic" model based on individual (3D or 2D) data have been proposed and successfully implemented. Parameterized models are effective because they use a limited set of parameters associated with the main facial feature points. The MPEG-4 standard (Section 7.15.3 – Face animation parameter data) defines a minimum set of parameters for facial animation. Animation is done by changing the parameters over time. Facial animation is approached in different ways; traditional techniques include:

- shapes/morph targets,
- skeleton-muscle systems,
- motion capture on points on the face, and
- knowledge-based solver deformations.

1. Shape-based systems offer fast playback as well as a high degree of fidelity of expression. The technique involves modelling portions of the face mesh to approximate expressions and visemes and then blending the different sub-meshes, known as morph targets or shapes. Perhaps the most accomplished character animated using this technique was Gollum, from The Lord of the Rings.
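The blending step at the heart of such shape-based systems is just a weighted sum of vertex offsets. The sketch below is a minimal illustration (the array shapes and names are ours, not from any particular production rig): the deformed mesh is the base mesh plus each target's offset from the base, scaled by that target's weight.

```python
import numpy as np

def blend_shapes(base, targets, weights):
    """Evaluate a morph-target ('blend shape') rig.

    base:    (N, 3) array of neutral-pose vertex positions
    targets: list of (N, 3) arrays, one per expression or viseme
    weights: list of floats, one per target

    Each target contributes its offset from the base, scaled by its
    weight; animating the face is just varying the weights over time.
    """
    base = np.asarray(base, dtype=float)
    result = base.copy()
    for target, w in zip(targets, weights):
        result += w * (np.asarray(target, dtype=float) - base)
    return result

# A toy 'face' of 3 vertices with one 'smile' target: at weight 0.5
# the mesh lies halfway between the neutral and target poses.
neutral = np.zeros((3, 3))
smile = np.ones((3, 3))
halfway = blend_shapes(neutral, [smile], [0.5])
```

Production rigs add corrective shapes and constraints on the weights, but the core evaluation remains this weighted sum.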
Drawbacks of this technique are that it involves intensive manual labor, is specific to each character, and must be animated through tables of slider parameters. 2. Skeletal muscle systems, i.e. physically based head models, form another approach to modeling the head and face. Here the physical and anatomical characteristics of bones, tissues, and skin are simulated to provide a realistic appearance (e.g. spring-like elasticity). Such methods can be very powerful for creating realism, but the complexity of facial structures makes them computationally expensive and difficult to create. Considering the effectiveness of parameterized models for communicative purposes (as explained in the next section), it may be argued that physically based models are not a very efficient choice in many applications. This does not deny the advantages of physically based models, nor the fact that they can even be used within the context of parameterized models to provide local details when needed. Waters, Terzopoulos, Kahler, and Seidel (among others) have developed physically based facial animation systems. 3. 'Envelope bones' or 'cages' are commonly used in games. They produce simple and fast models, but struggle to portray subtlety. 4. Motion capture uses cameras placed around a subject. The subject is generally fitted either with reflectors (passive motion capture) or sources (active motion capture) that precisely determine the subject's position in space. The data recorded by the cameras is then digitized and converted into a three-dimensional computer model of the subject. Until recently, the size of the detectors/sources used by motion capture systems made the technology inappropriate for facial capture; however, miniaturization and other advances have made motion capture a viable tool for computer facial animation. Facial motion capture was used extensively in The Polar Express by Imageworks, where hundreds of motion points were captured.
This film was very accomplished, and while it attempted to recreate realism, it was criticised for having fallen into the 'uncanny valley', the realm where animation realism is sufficient for human recognition but fails to convey the emotional message. The main difficulties of motion capture are the quality of the data, which may include vibration, as well as the retargeting of the geometry of the points. A recent technology developed at the Applied Geometry Group and Computer Vision Laboratory at ETH Zurich achieves real-time performance without the use of any markers, using a high-speed structured-light scanner. The system is based on a robust offline face-tracking stage which trains the system with different facial expressions. The matched sequences are used to build a person-specific linear face model that is subsequently used for online face tracking and expression transfer. 5. Deformation solvers, such as Face Robot. Speech is usually treated differently from the animation of facial expressions, because simple keyframe-based approaches to animation typically provide a poor approximation to real speech dynamics. Often visemes are used to represent the key poses in observed speech (i.e. the position of the lips, jaw, and tongue when producing a particular phoneme); however, there is a great deal of variation in the realisation of visemes during the production of natural speech. The source of this variation is termed coarticulation, which is the influence of surrounding visemes upon the current viseme (i.e. the effect of context). To account for coarticulation, current systems either explicitly take context into account when blending viseme keyframes or use longer units such as diphone, triphone, syllable, or even word- and sentence-length units. One of the most common approaches to speech animation is the use of dominance functions, introduced by Cohen and Massaro. Each dominance function represents the influence over time that a viseme has on a speech utterance.
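The idea can be sketched numerically. The exponential fall-off below is an illustration of the dominance-function concept, not the exact functional form published by Cohen and Massaro: each viseme's influence peaks at its time centre, and a facial control parameter at time t is the dominance-weighted average of the viseme targets.

```python
import math

def dominance(t, center, magnitude=1.0, rate=1.0):
    """Illustrative dominance function: a viseme's influence peaks at
    its time centre and decays exponentially with temporal distance."""
    return magnitude * math.exp(-rate * abs(t - center))

def lip_parameter(t, visemes):
    """Blend viseme targets into one control-parameter value at time t.

    visemes: list of (center_time, target_value) pairs. The weighted
    average realises coarticulation: neighbouring visemes pull the
    current mouth shape toward their own targets.
    """
    num = sum(dominance(t, c) * v for c, v in visemes)
    den = sum(dominance(t, c) for c, _ in visemes)
    return num / den

# Two visemes: a closed-lip pose (0.0) at t=0 and an open pose (1.0)
# at t=1. Midway between them, both pull equally on the parameter.
midpoint = lip_parameter(0.5, [(0.0, 0.0), (1.0, 1.0)])
```

Because the fall-off never reaches zero, every viseme influences every frame to some degree, which is exactly the smearing-of-context effect coarticulation describes.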
Typically the influence will be greatest at the center of the viseme and will degrade with distance from the viseme center. Dominance functions are blended together to generate a speech trajectory, in much the same way that spline basis functions are blended together to generate a curve. The shape of each dominance function will differ according to both which viseme it represents and what aspect of the face is being controlled (e.g. lip width, jaw rotation, etc.). This approach to computer-generated speech animation can be seen in the Baldi talking head. Other models of speech use basis units which include context (e.g. diphones, triphones, etc.) instead of visemes. As the basis units already incorporate the variation of each viseme according to context, and to some degree the dynamics of each viseme, no model of coarticulation is required. Speech is simply generated by selecting appropriate units from a database and blending them together, similar to concatenative techniques in audio speech synthesis. The disadvantage of these models is that a large amount of captured data is required to produce natural results, and whilst longer units produce more natural results, the size of the database required grows with the average length of each unit. Finally, some models directly generate speech animation from audio. These systems typically use hidden Markov models or neural nets to transform audio parameters into a stream of control parameters for a facial model. The advantage of this method is the capability of handling voice context, natural rhythm, tempo, emotion, and dynamics without complex approximation algorithms. The training database does not need to be labeled, since no phonemes or visemes are required; the only data needed are the voice and the animation parameters. An example of this approach is the Johnnie Talker system.

Face Animation Languages

Many face animation languages are used to describe the content of facial animation.
They can be input to a compatible "player" software which then creates the requested actions. Face animation languages are closely related to other multimedia presentation languages such as SMIL and VRML. Due to the popularity and effectiveness of XML as a data representation mechanism, most face animation languages are XML-based. For instance, this is a sample from Virtual Human Markup Language (VHML):

<vhml>
  <person disposition="angry">
    First I speak with an angry voice and look very angry,
    <surprised intensity="50">
      but suddenly I change to look more surprised.
    </surprised>
  </person>
</vhml>

More advanced languages allow decision-making, event handling, and parallel and sequential actions. The following is an example from Face Modeling Language (FML):

<fml>
  <act>
    <par>
      <hdmv type="yaw" value="15" begin="0" end="2000" />
      <expr type="joy" value="-60" begin="0" end="2000" />
    </par>
    <excl event_name="kbd" event_value="" repeat="kbd;F3_up" >
      <hdmv type="yaw" value="40" begin="0" end="2000" event_value="F1_up" />
      <hdmv type="yaw" value="-40" begin="0" end="2000" event_value="F2_up" />
    </excl>
  </act>
</fml>

- Computer animation
- Computer graphics
- Facial expression
- Face Modeling Language
- Interactive online characters
- Parametric surface
- Texture mapping
- Computer Facial Animation by Frederic I.
Parke, Keith Waters 2008 ISBN 1568814488
- Data-driven 3D facial animation by Zhigang Deng, Ulrich Neumann 2007 ISBN 1846289068
- Handbook of Virtual Humans by Nadia Magnenat-Thalmann and Daniel Thalmann, 2004 ISBN 0470023163
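Because languages such as FML are plain XML, a player's first step is ordinary XML parsing. The sketch below uses Python's standard ElementTree to collect the timed actions from a fragment of the FML sample shown earlier; the element and attribute names come from that sample, while the function name and tuple layout are ours, and the scheduling a real player would perform is not shown.

```python
import xml.etree.ElementTree as ET

FML_SAMPLE = """
<fml>
  <act>
    <par>
      <hdmv type="yaw" value="15" begin="0" end="2000" />
      <expr type="joy" value="-60" begin="0" end="2000" />
    </par>
  </act>
</fml>
"""

def read_actions(fml_text):
    """Collect the timed actions inside each <par> (parallel) block as
    (tag, type, value, begin_ms, end_ms) tuples a player could schedule."""
    root = ET.fromstring(fml_text)
    actions = []
    for par in root.iter("par"):
        for move in par:
            actions.append((move.tag,
                            move.get("type"),
                            float(move.get("value")),
                            int(move.get("begin")),
                            int(move.get("end"))))
    return actions

actions = read_actions(FML_SAMPLE)
# e.g. [('hdmv', 'yaw', 15.0, 0, 2000), ('expr', 'joy', -60.0, 0, 2000)]
```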
Type 2 diabetes is one of the deadliest, yet most preventable, diseases in the Western world. Unhealthy diets combined with a lack of physical activity have driven a rise in incidence among both adults and children, and growing numbers of patients need help managing their diabetes in an already overwhelmed healthcare system. Type 2 is both preventable and manageable. If you have the condition, or are at serious risk of developing it, follow the suggestions below.
What causes type 2 diabetes
Type 2 diabetes is the result of elevated blood sugar levels over time. The pancreas makes insulin, which helps your body manage its blood sugar levels. If your sugar intake is consistently high, your body's insulin isn't enough to regulate it all. There are two consequences: first, your body develops a resistance to its natural insulin; second, your body doesn't have enough insulin to control blood sugar levels. Over time, this develops into type 2 diabetes. If you don't take action to manage the condition, it significantly increases your risk of stroke and heart disease. Diabetes is also linked to other diseases and health issues including kidney disease, nerve damage, vision loss and amputation.
Managing the condition by changing your diet
If the condition is caused by high sugar levels, the best approach is to lower them with a low-sugar diet. Not one of the fad diets that you follow for a month or two before reverting to your old lifestyle: you'll need to follow this for the rest of your life. As an example, cut out fizzy drinks and unhealthy food, and start eating more fruit and vegetables. Before you come up with a diet, it's a good idea to speak to your doctor. They can explain your current situation and what will happen if you don't make big changes to your lifestyle. This 'shock factor' also gives you the motivation to stick to the plan.
Doctors can then help you come up with a diet plan to lower your blood sugar levels. The body uses sugar as its fuel: the more active you are, the more sugar it uses. Physical inactivity is another major contributor to type 2 diabetes, especially as the population suffers from screen and internet addictions. This is a particular problem with children; instead of going outside to walk or play sport, they prefer to sit using their phones. Breaking this habit is difficult and takes a lot of willpower, but you can transform your life and will feel much better one year from now. Don't try to become an athlete overnight. Take it step by step: start walking up the stairs at work instead of taking the elevator, leave your car at home and go out on foot, or join a local sports club or weekly Zumba class. Over time, increase the amount of exercise you do each week. Then, when it becomes a habit and your body is used to it, you can increase the intensity.
Managing your type 2 diabetes
You can often halt or even reverse type 2 diabetes with a few consistent lifestyle adjustments: reduce your sugar intake and become more physically active. Speak to your doctor before coming up with a diet and exercise plan.
The Luwian culture thrived in Bronze Age western Asia Minor. It has thus far been explored mainly by linguists, who learned about the Luwian people through numerous documents from Hattuša, the capital of the Hittite civilization in central Asia Minor. Only a few excavations have been conducted in formerly Luwian territories, and excavating archaeologists have therefore not taken the Luwians into account in their reconstructions of the past. Once Aegean prehistory considers western Asia Minor and its people, it becomes possible to develop a plausible explanation for the collapse of the Bronze Age cultures around the Eastern Mediterranean.
CURRENT STATE OF KNOWLEDGE
Possibly due to its vast extent and complicated topography, for thousands of years the majority of western Asia Minor was politically fragmented into many petty kingdoms and principalities. This certainly diminished the region's economic and political significance, but it also delayed the recognition of a more or less consistent Luwian culture. From a linguistic point of view, however, the Luwian culture is relatively well known. From about 2000 BCE, Luwian personal names and loanwords appear in Assyrian documents retrieved from the trading town Kültepe (also Kaniš or Neša). Assyrian merchants who lived in Asia Minor at the time described the indigenous population as nuwa'um, corresponding to "Luwians." At about the same time, early Hittite settlements arose a little further north on the upper Kızılırmak River. In documents from the Hittite capital Hattuša written in Akkadian cuneiform, western Asia Minor is originally called Luwiya. Hittite laws and other documents also contain references to translations into the "Luwian language." Accordingly, Luwian was spoken in various dialects throughout southern and western Anatolia. The language belongs to the Anatolian branch of the Indo-European languages.
It was recorded in Akkadian cuneiform on the one hand, but also in its own hieroglyphic script, one that was used over a timespan of at least 1400 years (2000–600 BCE). Luwian hieroglyphic therefore ranks as the first script in which an Indo-European language was transcribed. The people using this script and speaking a Luwian language lived during the Bronze and Early Iron Age in Asia Minor and northern Syria.
A gap between linguistics and prehistory
Thanks to the over 33,000 documents from Hattuša, the capital of the Hittite Kingdom, linguists have been able to gain a comprehensive insight into Luwian culture. Some fundamental publications include the book Arzawa, by Susanne Heinhold-Krahmer (1977); The Luwians, edited by H. Craig Melchert (2003); and Luwian Identities, edited by Alice Mouton and others (2013). Field-oriented excavating archaeologists, on the other hand, never mention Luwians in their explanatory models. The current knowledge regarding the Aegean Bronze Age has been summarized in a number of recently published voluminous works, without attention to any Luwian culture. For a number of reasons discussed elsewhere, recognition of a Luwian civilization seems to have been delayed. The gap between linguistics and prehistory regarding the investigation of the Luwians has existed for almost a century, since Emil Forrer, the Hittitologist who first identified the Luwian language in the tablets from Hattuša, recognized the significance of the Luwians as early as 1920. Today, the term "Luwian" is well established to denote a language, a script, and an ethno-linguistic group of people who commanded either one or both of them. Since most Luwian hieroglyphic documents have thus far been found in Early Iron Age Syria and Palestine, the term Luwian is often used to denote people at the eastern end of the Mediterranean during the 10th and 9th centuries BCE. However, Luwian hieroglyphic script occurs as early as 2000 BCE in western and southern Asia Minor as well.
Therefore, the term Luwian is also applied to the indigenous people who lived in western and southern Anatolia – in addition to the Hattians – prior to the arrival of the Hittites and during the Hittite reign. In the context of this website, the term Luwian is used in a third sense – in a geographic and chronological context. It comprises the people who lived in western Asia Minor during the 2nd millennium BCE between the Mycenaeans in Greece and the Hittites in Central Anatolia, and who would not have regarded themselves as belonging to either one of the aforementioned cultures. This definition is no different from the ones we use today. Every person belongs to an ethno-linguistic group, and everyone lives in a certain jurisdiction – but of course, the two do not have to be identical. In the context of this website, the jurisdiction – Middle and Late Bronze Age western Asia Minor – and the people living within it are the focal point of attention, and not their ethnic provenance.
Dancers are a breed apart. From ballet through belly to break, the dancer performs with an extreme range of motion and emotion. Dance is the union of movement and rhythm. It spans cultures, from soaring ballet leaps to simple swaying at the school prom. It is dance: a means of recreation, of communication, of expression, and perhaps the oldest yet least preserved of the arts. Its origins are lost in prehistoric times, but from the study of the most primitive peoples it is known that men and women have always danced. Originally, rhythmic sound accompaniment was provided by the dancers themselves. Eventually a separate rhythmic accompaniment evolved, played on animal skins stretched over wooden frames and made into drums and similar instruments. Later, melodies were added. These might have imitated birdcalls or other sounds of nature, or given vocal expression to the dancer's or musician's state of mind. The rhythmic beat, however, was the most important element: this pulsation let all the dancers keep time together and also helped them remember their movements. By controlling the rhythm, the leader of a communal dance could regulate the tempo of the movement. Dances in primitive cultures all had as their subject matter the changes experienced by people throughout their lives: changes that occurred as people grew from childhood to old age, those they experienced as the seasons moved from winter to summer and back again, and changes that came about as tribes won their wars or suffered defeats. From these ceremonial dances came magical and religious dance, key types that evolved into the ethnic and social dancing we know today.
One of the most fundamental measures of quality of life is access to clean water. Today, two thirds of humanity face water stress at some point during the year, and one in ten lacks access to clean water. As populations grow, so will the demands for drinking water and agriculture; at the same time, climate change will strain available resources. Our capacity for consuming Earth's resources must be outpaced by our capacity to produce innovative answers. Science is a key to hope. Solving grand challenges, whether in water purification or in other areas such as energy generation and storage or healthcare, has the potential to address basic human needs while providing revolutionary economic growth and sustainable development. Addressing these challenges requires scientific breakthroughs in a wide range of research areas, with a common requirement for the design and development of new materials that meet increasingly complex performance requirements. Our ability to manipulate materials has characterized the ages of civilization through stone, bronze, iron, and silicon. The trial-and-error approaches of history don't equip us to tackle the complex polymers, proteins, and nanostructured alloys we work with today. Thankfully, recent years have brought significant advances in our ability to design and develop the advanced materials required to solve big technology challenges. Increasingly complex materials demand sophisticated tools for both characterization and modelling, which form the basis for the understanding that allows us to design new materials with specific properties in mind. Around the world, nations have invested in x-ray facilities such as NSLS-II at Brookhaven National Laboratory in the US and MAX-IV in Sweden to enable the study of material properties and functions with nanoscale resolution.
Neutron sources such as the Spallation Neutron Source at Oak Ridge National Laboratory and the European Spallation Source, being built right next to MAX-IV, use neutrons to explore the fundamental properties of advanced materials. Supercomputers couple the information on structure and dynamics obtained from experimental work with the models needed to inform the design and synthesis of new materials. There is a global push to extend performance to the exascale, further enhancing capabilities for modelling and simulation as well as data analytics. One challenge for scientists is how to use these new tools to more rapidly perform the translational research needed to solve critical problems such as the need for potable water. In many areas of the world, potable water is a scarce resource, and every minute of every day a newborn baby dies somewhere in the world from an infection caused by a lack of clean water and an unclean environment. Researchers around the world are pursuing multiple approaches to desalinating seawater into potable drinking water, including vacuum, multi-stage flash, multiple-effect, and vapor-compression distillation, as well as reverse osmosis (RO). RO is commonly used for a portion of the desalination process and has greatly reduced the high energy and capital costs associated with some other desalination processes. The key to the RO process is the membrane, a thin film that separates water with higher salt content from water with lower salt content. The membrane must withstand large pressure differences across its two sides, must allow rapid water transport while impeding salt transport, and must resist fouling, or becoming clogged by the salts. One major limitation impacting these different approaches to desalination is that many places in the world do not allow the disposal of the liquid waste and require the waste salts to be solid for disposal.
This requires another step in the desalination process, increasing overall cost and complexity. The complexity of the overall desalination process can be reduced either through improvements to the membrane technology used for RO, or through improvements to the underlying RO process itself. One example of the latter is the cascade RO process being developed by Battelle, which combines multiple RO stages with novel methods of reusing the intermediate waste water to significantly reduce operating pressures and energy input, achieving recoveries two to three times higher than conventional RO processes. It is up to the global scientific community to provide the world with affordable, available clean water for continued positive growth, and it's clear that advanced materials will need to be continuously developed for this to occur. Government investment in research at universities and national laboratories is building the fundamental understanding of how the structure and dynamics of materials determine the properties that make them useful, facilitating a new age of materials by design. This will enable companies and NGOs to deploy technologies such as cascade RO that address humanity's pressing needs.
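To get a feel for why RO membranes must tolerate large pressure differences and why recovery matters, the osmotic pressure of seawater can be estimated with the van't Hoff relation. This is a back-of-the-envelope sketch using typical textbook figures (not values from this article); real seawater is a mixed-salt solution, so an idealized NaCl model only approximates the true value of roughly 27 bar.

```python
# Rough estimate of seawater osmotic pressure via the van't Hoff relation,
# pi = i * M * R * T. All figures below are typical textbook values, not
# data from this article.
R = 0.08314              # gas constant, L*bar/(mol*K)
T = 298.15               # temperature, K (25 C)
salinity = 35.0          # g of salt per litre, typical open-ocean seawater
molar_mass_nacl = 58.44  # g/mol
i = 2                    # van't Hoff factor: NaCl dissociates into Na+ and Cl-

molarity = salinity / molar_mass_nacl   # mol/L
pi_bar = i * molarity * R * T           # osmotic pressure in bar
print(f"Estimated osmotic pressure: {pi_bar:.1f} bar")

# The applied feed pressure must exceed the osmotic pressure before any
# permeate flows at all. Recovery (permeate / feed) also concentrates the
# brine: a simple mass balance gives the brine concentration factor.
recovery = 0.45                          # illustrative single-pass recovery
concentration_factor = 1 / (1 - recovery)
print(f"Brine concentration factor at {recovery:.0%} recovery: "
      f"{concentration_factor:.2f}x")
```

The second calculation hints at the trade-off behind higher-recovery schemes such as cascade RO: recovering more water concentrates the brine further, raising its osmotic pressure and pushing it toward the solid-waste regime discussed above.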
Ulcerative colitis is a chronic inflammatory condition that generally affects the innermost lining (mucosa) of the large intestine (colon) and rectum. Ulcerative colitis symptoms typically involve the lining becoming inflamed (red and swollen) and tiny open sores (ulcers) forming on its surface. These ulcers might bleed; in fact, bleeding from the rectum is often the first sign that something's not quite right. The inflamed lining also produces a larger than normal amount of intestinal lubricant, or mucus, which sometimes contains pus. Most people with this condition respond well to colitis treatment, but in more severe cases, surgery may become necessary.
How does ulcerative colitis affect the intestines?
Inflammation 'attacks' the innermost lining of the colon, the mucosa, resulting in bleeding and diarrhoea. The inflammation is most often located in the rectum and lower colon, but can also involve other parts of the colon, sometimes even the entire colon. Less often, it might involve other parts of the intestine. Depending on the exact location of the inflammation, ulcerative colitis is known by other names:
- Proctitis: involves only the rectum
- Proctosigmoiditis: involves the rectum and sigmoid colon (the lower segment of the colon before the rectum)
- Distal colitis: involves only the left side of the colon
- Pancolitis: involves the entire colon
- Backwash ileitis: involves the distal ileum.
Doctors use it only when:
- Medication doesn't control your seizures
- One side of your brain is working so poorly that losing part of it won't affect you very much
Afterward, you may have fewer seizures or none at all. If a child has the operation, the healthy side of his brain should take over and do everything the missing parts used to do.
How It Works
Your brain is divided into two halves called hemispheres. They're split by a deep groove, but they talk to each other through a thick band of nerves called the corpus callosum. Each hemisphere has four lobes. The doctor will make a cut in your scalp, then take out a piece of bone from your skull. He'll move aside part of the dura, a tough membrane that covers your brain. Then he'll take out parts of the hemisphere where your seizures start. Usually it's the temporal lobe. Finally, he'll cut the corpus callosum so the hemispheres of your brain can't send signals to each other anymore. This way, if a seizure starts in the hemisphere that doesn't work right, it can't spread to the healthy one. Once the surgery is finished, your doctor will put the dura and bone back, then close up the wound with stitches or staples.
What Are the Risks?
Some are the same as with any major surgery:
- Allergic reaction to the anesthesia
Others are specific to this procedure:
- Loss of movement or feeling on the opposite side of your body (the left side of your body if the operation was on the right side of your brain, and vice versa)
- Swelling in your brain
- Loss of side vision
Before the Surgery
You'll have a lot of tests. These help your doctor figure out where in your brain the seizures start, which might mean you'll stay in a hospital or treatment center for a few days. Video EEG monitoring. In this test, you wear a transmitter that lets the doctor record your brain waves. At the same time, a video records what you're doing, like napping, talking, or watching TV.
If you have a seizure, the doctor can compare your brain waves with what you were doing when the seizure started. This tells him whether the seizure was due to abnormal electrical activity in your brain and where it started. Wada test. This checks speech and memory on one side of your brain at a time. Your doctor looks at which side of your brain controls your speech and which side has better memory (it might not be the same side). He compares the results with other tests that tell him where your seizures start. If they start in the same side that controls your speech or has better memory, he might do more tests to lower the chances that surgery will affect your speech or memory. The Wada test can also tell him whether you need to be awake during part of your surgery. During the Wada test, the doctor puts one side of your brain to sleep with a special medicine that goes into an artery in your neck. Another doctor shows you different things and pictures. When the medicine wears off, they'll ask you about what you saw. They'll test the other side of your brain the same way.
After the Surgery
You'll be in intensive care for a day or two, and then go to a regular hospital room for another 3 or 4 days. The stitches or staples will come out 10 to 14 days after surgery. You might have some side effects in the first few weeks. Usually these go away slowly. They may include:
- Trouble concentrating
- Trouble finding the right words
- Feeling tired
- Numbness in your scalp
- Muscle weakness on one side of your body (the side controlled by the part of the brain the doctor operated on)
- Puffy eyes
- Feeling depressed
Most people feel normal and can go back to work, school, and their usual lives about 6 to 8 weeks after surgery. You'll most likely have to keep taking your seizure medication for at least 2 years, even if you don't have any seizures. Your doctor will tell you if and when it's OK to lower your dose or stop taking it.
Black-throated blue warblers are found in northeastern North America during the summer breeding season. They are found from the northern Great Lakes region east to the Canadian maritime provinces, throughout New England, and south through the Appalachian mountains. In winter they are found in southernmost Florida, the Antilles south to Trinidad, and the coastal Yucatan peninsula, from Mexico and Belize to Honduras. (Holmes, et al., 2005) Black-throated blue warblers are found in tracts of undisturbed deciduous and mixed-deciduous forests in their breeding range. Forests they occur in include those with maples (Acer), birches (Betula), beeches (Fagus grandifolia), eastern hemlock (Tsuga canadensis), spruce (Picea), and fir (Abies). The elevational range of these forests varies throughout the region. They prefer forests with a dense, shrubby understory. They migrate along woodlands and woodland fragments, including riparian forests. In winter they are found in tropical forests, including secondary forest, plantations, and disturbed forest fragments. (Holmes, et al., 2005) Black-throated blue warblers are about 13 cm long and weigh from 9 to 10 g. Males and females have different plumages. Males have dark blue backs and black faces, throats, and sides. Their bellies and breasts are white. Females are olive green with buffy yellow throats, breasts, and bellies. Females have a buffy eye stripe, a white semicircle below the eye, and a small white wing spot. Immature males have a greenish tinge to their dorsal feathers. Adults have black legs, feet, and bills, but they begin life with flesh-colored legs, feet, and bills. (Holmes, et al., 2005) Black-throated blue warblers are mainly monogamous, although occasionally a male will maintain multiple female mates. Pairs are formed very soon after arrival at the breeding site. Mated pairs remain together for the breeding season through multiple broods or attempted broods.
Males guard their mates closely, yet extra-pair copulations are common in this warbler species: approximately 34% of broods had nestlings that were not fathered by the territorial male. (Holmes, et al., 2005) Black-throated blue warblers start breeding in late May or early June and may lay second clutches in late June or early July. Females choose the nest sites and build the nests out of strips of bark, cobwebs, and saliva, then line them with softer materials like moss, hair, pine needles, or shredded bark. Males may help gather nest materials. A female may build up to 5 nests in a season if she has to re-nest several times. Females lay from 2 to 5, usually 4, white, speckled eggs in a clutch. They usually lay 1 egg each day until the clutch is complete and begin incubating when the last egg is laid. Most females lay multiple clutches in a year, either after losing a clutch or as a second nesting attempt after a first, successfully raised brood. They have been reported laying up to 5 clutches, but 2 is more typical. Incubation takes 12 to 13 days and young begin to fly between 8 and 10 days after hatching. They leave the nest at that point, but remain nearby and are fed and protected by their parents for another 2 to 3 weeks after they have fledged. Black-throated blue warblers can breed in their first year after hatching, although males may be unsuccessful at attracting mates until their second year. (Holmes, et al., 2005) Females incubate the eggs and brood hatchlings. Males may feed females while they are on the nest. Young hatch naked and with their eyes closed. Their eyes open at about 4 days old and they leave the nest at 8 to 10 days old, when they are just beginning to learn to fly. Males and females both feed nestlings and fledglings for up to 3 weeks after they fledge. Both parents protect their young from predators with alarm calls and distraction displays. (Holmes, et al., 2005) The oldest recorded black-throated blue warbler was at least 10 years old.
There is some evidence that older individuals may have higher rates of survival, or higher site fidelity. Survival rates in the winter range were from 66 to 77% for females and males, respectively. Nestling mortality is largely from predation but nestlings also die from exposure during cold or rainy weather. (Holmes, et al., 2005) Black-throated blue warblers flit among vegetation and can hop on the ground. They are migratory and active during the day. They spend much of their time foraging, except when females are incubating eggs or brooding young, when they spend 75% of their time incubating or brooding. They are solitary throughout the year, except for mated pairs during breeding season. Males aggressively defend territories for feeding and nesting, excluding all conspecifics from the territory. (Holmes, et al., 2005) Foraging and nesting territories are from 1 to 4 hectares in size during the breeding season. In winter, foraging territories are from 0.2 to 0.3 hectares for males and slightly smaller for females. Although black-throated blue warblers don't seem to return to their natal site in their first year, adults seem to return to the same breeding and wintering sites each year. (Holmes, et al., 2005) Black-throated blue warblers use a series of calls and songs to communicate. Females vocalize sometimes, but males perform the majority of songs. Male songs vary with individual, but there are two main song types: 1) a song of 3 to 7 buzzing notes that trills upward at the end, sounding like "zee-zee-zee-zreeee," and 2) a song of 2 to 5 notes that descends at the end, sounding like "zee-zee-zhurrr." The first song type is the most commonly heard and varies substantially among males. Males use other kinds of songs as well, although their purposes and contexts are not well understood. Most songs are used during the breeding season, but there is some singing during migration and in winter. Males sing from perches in their home range. 
(Holmes, et al., 2005) Black-throated blue warblers are mainly insectivorous during the breeding season and supplement their insect diet with fruits during the winter. In the breeding range, these warblers eat mainly beetles, caterpillars, butterflies and moths, flies, bugs, and spiders. In the winter they eat as much as 95% insects, but supplement their diet with berries, other fruits, flower nectar, and honeydew excretions from scale insects. Black-throated blue warblers forage by themselves from 22 to 70% of daylight hours, depending on the season and their energy requirements. Females forage more during nest-building and the weeks leading up to egg laying, up to 70% of daylight hours. Males generally forage for 30 to 32% of daylight hours, but forage for an additional 20% when they are singing to defend nesting territories. They forage in undergrowth shrubs and forest canopy layers, taking most of their prey from leaves and bark. (Holmes, et al., 2005) Black-throated blue warbler adults are preyed on by birds of prey, such as Cooper's hawks (Accipiter cooperii). Eggs and nestlings are taken by a wide variety of nest predators, including sharp-shinned hawks (Accipiter striatus), blue jays (Cyanocitta cristata), red squirrels (Tamiasciurus hudsonicus), eastern chipmunks (Tamias striatus), martens (Martes americana), fishers (Martes pennanti), flying squirrels (Glaucomys), raccoons (Procyon lotor), black bears (Ursus americanus), and garter snakes (Thamnophis sirtalis). Black-throated blue warblers will mob predators and perform broken-wing displays to distract them. Parents give a high-pitched warning call when they see raptors and will respond to the warning calls of other birds. (Holmes, et al., 2005) Black-throated blue warblers are important predators of insects in their forest habitats. They may also help to disperse seeds of the fruits they eat. There are few reported parasites in black-throated blue warblers.
Only 2 nesting records indicate parasitism: bot flies (Oestridae) in one and parasitic fly larvae (Calliphoridae) in another. Brown-headed cowbirds will parasitize the nests of black-throated blue warblers, especially in areas of disturbed forest. If parasitized, they can successfully raise a cowbird young about 60% of the time. (Holmes, et al., 2005) There is no direct positive impact of black-throated blue warblers on humans. However, they are lovely and interesting members of native faunas and may attract bird watching interest. (Holmes, et al., 2005) There are no known adverse effects of black-throated blue warblers on humans. However, along with many other bird species, they carry West Nile virus. (Holmes, et al., 2005) Black-throated blue warblers have a large range and large populations without evidence of significant population declines. They are considered "least concern" by the IUCN. They are considered sensitive to forest fragmentation, preferring areas of forest over 100 hectares in size, but they are found in disturbed forests and secondary growth, provided there is a lush understory. Similarly, in their winter range, black-throated blue warblers are found in a variety of forests, including disturbed forests, orchards, and plantations, but populations may be negatively impacted by habitat destruction. They are also found dead as a result of collisions with man-made objects, such as television towers, during migration. (Holmes, et al., 2005)
Tanya Dewey (author), Animal Diversity Web.
Holmes, R., N. Rodenhouse, T. Sillett. 2005. Black-throated Blue Warbler (Dendroica caerulescens). The Birds of North America Online, 87: 1-20. Accessed April 18, 2009 at http://bna.birds.cornell.edu.proxy.lib.umich.edu/bna/species/087.
Understanding Culture and Identity Step 1: Show your students the following video that defines culture and then discuss it to check for understanding. Step 2: Have your students watch the following video on how to create an Identity Diagram. Step 3: Have your students create their own Identity Diagrams. Create your own first to show as an example. Step 1: Show your students the following video on stereotypes, prejudice, discrimination, and oppression: Step 2: Let your students know the definition of bias: Step 3: Review the following key points about bias and decide which are most important to share and discuss with your students. - Social identity bias occurs because people see the world differently based on their cultures, identities, and life experiences. - Everyone has biases. The fact that we all have them doesn't mean we can't, or shouldn't, do anything about them. - Biases are often considered to be unfair. - Society provides advantages to certain identities and disadvantages to other identities. These advantages and disadvantages may be invisible. Similarly, people in the majority often do not realize their biases because the social forces impacting those from disadvantaged groups are invisible to them. - Because of their power, the biases of majority groups influence the policies and practices of institutions (e.g., legal, educational, religious), which often adversely affect groups with less power. - In addition to the ways systems impact individuals (i.e., systemic oppression), biases can occur through individual oppression, in which biases manifest in everyday interactions between people, such as coworkers, strangers, or friends. - Any individual or group can face discrimination, harassment, or jokes about their identity, due to stereotyping (thoughts), prejudice (feelings), and discrimination (actions). 
- However, because certain social groups are better represented in decision-making positions and have greater collective resources, they influence institutions (e.g., education, media, criminal justice, religion) in ways that advantage some identities and disadvantage or oppress others (oppression). - Over time, one can also imagine how (a) systemic oppression can lead to stereotypes and prejudice in society and (b) groups that face ongoing institutional oppression may similarly begin to accept the stereotypes and/or lower status of their group identity (internalized oppression). Step 4: Show this video on oppression to your students: Step 5: Have your students go back to their Identity Diagrams and think through the different aspects of their identity they feel provide them with a clear advantage in society, and those they feel provide them a disadvantage in society. Have them journal about these ideas for 15-20 minutes, as well as about the messages they have learned about different aspects of their identity. Step 6: Ask for 2-3 student volunteers who are willing to share their reflections. Students may be hesitant to discuss the areas of advantage and disadvantage they believe they face. You can help by encouraging multiple people to share, by noting that you are just exploring at this point so there are no right or wrong answers, by reminding them that this is a safe place to have dialogues about these topics, or by modeling openness by sharing your own areas of advantage and disadvantage. Working on our own biases Step 1: Have students journal for 15-20 minutes about things that have helped them change their views or biases in the past. They can also discuss other ideas they have for addressing their own biases. Step 2: Ask students to share some of the ideas they wrote about in their journals. Write the ideas down, and consider posting them in your room to remind students on an ongoing basis of ways they can work on their own biases. 
Step 3: If your students do not share these ideas from their brainstorming, let your students know that other ways to address bias include: - making and taking opportunities to work with and get to know people who are different from you. - being mindful of your own biases and how they may affect your perceptions of things. - remembering a time when you personally were impacted by bias or stereotypes, which can help motivate you to examine the biases and stereotypes you may have about others. - retraining yourself to see that identifying a bias you have is not a bad or shameful thing, but an opportunity to learn, grow, and become a more inclusive person. Shame makes us ignore our biases; being curious about our own biases is a more effective strategy for reducing them. Addressing interpersonal bias Step 1: Have students talk about how they handle instances of bias (e.g., stereotypes, microaggressions) they encounter in their lives, and how effective they think the strategies they use are. Step 2: Let students know you are going to watch a video about different ways to address instances of bias. Ask them to think about the different strategies shared in the video and how comfortable they would feel using these strategies. Show them this video: Step 3: Post or pass around a list of the 12 strategies. Have your students work in pairs or small groups to come up with ways to handle either the situation at the end of the video, or another scenario that would be relevant to the group, based on the 12 strategies offered in the video. You can assign one or more strategies to each group, or have each group come up with examples based on each strategy. Step 4: Have your students discuss the following topics: - How comfortable do you feel using the different strategies offered in the video? Which would you feel the most comfortable using? Which feel the least comfortable? Why? 
- Do you think certain strategies are more comfortable or effective depending on whether the person you are talking with is a friend, family member, someone in a position of authority, or a stranger? Why? - What other strategies can you think of that may be effective in these situations? Bias in School Setting Step 1: Now ask your students to think about how bias plays out in their own school. Step 2: Have them get into small groups and discuss the different biases they see playing out in their school and the types of conflicts that occur around culture and identity. - They should think about what actions, if any, the school takes to address these biases and conflicts, and whether these actions are effective or not. - They should also think about what types of actions may be effective. Have each group report out to the larger group.
horsepower, the common unit of power; i.e., the rate at which work is done. In the British Imperial System, one horsepower equals 33,000 foot-pounds of work per minute—that is, the power necessary to lift a total mass of 33,000 pounds one foot in one minute. This value was adopted by the Scottish engineer James Watt in the late 18th century, after experiments with strong dray horses, and is actually about 50 percent more than the rate that an average horse can sustain for a working day. The electrical equivalent of one horsepower is 746 watts in the International System of Units (SI), and the heat equivalent is 2,545 BTU (British Thermal Units) per hour. Another unit of power is the metric horsepower, which equals 4,500 kilogram-metres per minute (32,549 foot-pounds per minute), or 0.9863 horsepower. Horsepower at the output shaft of an engine, turbine, or motor is termed brake horsepower or shaft horsepower, depending on what kind of instrument is used to measure it. Horsepower of reciprocating engines, particularly in the larger sizes, is often expressed as indicated horsepower, which is determined from the pressure in the cylinders. Brake or shaft horsepower is less than indicated horsepower by the amount of power lost to friction within the engine itself, which may amount to 10 percent or more of the indicated horsepower. Electric motor horsepower can be determined from the electrical input in watts, allowing for heat and friction losses in the motor itself. Thrust horsepower of jet engines and rockets is equal to the thrust in pounds force times the speed of the vehicle in miles per hour divided by 375 (which is equal to one horsepower measured in mile-pounds per hour).
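The conversions quoted above can be checked numerically. The sketch below is our own (the constant names and the example figures are illustrative, not from the entry itself), using the values the entry gives: 33,000 foot-pounds per minute, 746 watts, 2,545 BTU per hour, and the thrust-horsepower divisor of 375.

```python
# Constants as quoted in the entry above.
FT_LB_PER_MIN_PER_HP = 33_000       # British Imperial definition
WATTS_PER_HP = 746                  # electrical equivalent
BTU_PER_HOUR_PER_HP = 2_545         # heat equivalent
METRIC_HP_IN_IMPERIAL_HP = 0.9863   # one metric horsepower in Imperial hp

def thrust_horsepower(thrust_lbf: float, speed_mph: float) -> float:
    """Thrust horsepower = thrust (lbf) x speed (mph) / 375."""
    return thrust_lbf * speed_mph / 375.0

# Hypothetical example: a jet producing 7,500 lbf of thrust at 600 mph.
hp = thrust_horsepower(7500, 600)
print(hp)                        # 12000.0 horsepower
print(hp * WATTS_PER_HP / 1e6)   # equivalent electrical power in megawatts
```

Note that the divisor 375 is just 33,000 ft-lb/min converted to mile-pounds per hour (33,000 × 60 / 5,280 = 375).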
Before describing cooperative protocols and networks, we start with a review of networking models and common practices. Our discussion introduces the conventional decomposition of a network into protocol layers. We then employ these layers to organize the subsequent review of the physical, link, and network layer protocols. We recognize that future cooperative networks will also demand a re-examination of transport layer protocols; however, we omit a review of the higher layers as they are beyond the scope of this text. While much or all of this material is likely to be familiar, this review will serve as a baseline for comparisons with cooperative networks. Layering as embodied in the protocol stack of Figure 2.1 is a key idea in the development of networks. The stack of boxes (modules) arranged as layers represents a network node. Each module operates at a particular layer. The horizontal dashed arrows between modules in different nodes signify that a module may exchange messages with its peer modules in other network nodes. Messages that are sent through lower layer modules to peer modules in other network nodes are the basis for distributed network algorithms. Fig. 2.1 Network protocol stack. Data packets are used to communicate from a source node to a destination node via a path with intermediate nodes. At the source, data packets generated by an application are passed from module to module down through successive layers of the protocol stack. Each module typically appends its own header to each such data packet. A module may repackage the data packets, by dividing packets into smaller packets. The packet headers serve as protocol signaling for peer modules, either at intermediate nodes or at the destination node. A module also may inject its own control packets in order to communicate with peer modules. When a packet proceeds through a multihop route to a destination, a packet climbs no higher than necessary in the protocol stack. 
That is, a packet passing through an intermediate node will reach the network layer where the routing algorithm will decide to what node the packet should be forwarded. Thus a packet reaches the transport layer and application layer only at the destination. At the destination, each module is responsible for undoing the repackaging of its source node peer by stripping the additional headers and control packets injected by its peer. That is, the higher layer packets passed down to a module at the source should be passed back up to the higher layer module at the destination. In traditional wireline or wireless networks, these modules are well defined. Packets are buffered and sequenced by the transport layer, typically TCP, that implements both a reliable end-to-end connection as well as end-to-end flow control. Finding routes (via a sequence of links) to a destination is the job of the network layer. Maintaining these routes and forwarding packets along these routes is also a network layer task. The link layer ensures reliable packet communication on a single link. As shown in Figure 2.1, this may include a MAC sublayer that regulates channel access. The physical (PHY) layer represents the hardware that performs transmission and reception. In an IP network, the full stack has the simplified representation shown in the gray boxes. In a wireless setting, the MAC sublayer, the link layer, and the PHY layer are lumped together as a PHY layer. This combined PHY layer is just an interface queue that accepts IP packets. For our purposes, we start with a source node s running an application layer process that wishes to transmit messages to an application layer process at a destination node t. The messages are encoded as data packets with appropriate headers that identify the application process, the source node s and the destination node t. 
When these packets are passed to a TCP transport layer, sequence numbers are appended and the release of packets to the network layer is controlled by the reverse stream of TCP ACKs from the receiver TCP process. The network layer examines the destination address (an IP address) and determines where to send the packet using a routing table. For example, the routing table for a source node attached to an Ethernet might specify only two rules: direct transmission to destination nodes on the same Ethernet and forwarding to a gateway node for all other packets. At the data link layer, it is common practice to append a cyclic redundancy check (CRC) to each packet. The CRC allows the data link layer at the receiver to detect packet reception errors. Sequence numbers may also be added to facilitate automatic repeat request (ARQ) retransmission protocols at the link layer. The PHY layer is responsible for the transmission of bits to the receiver of a link. The coding and modulation employed at the PHY layer for a single point-to-point link may be quite complex. When forward error correction (FEC) and hybrid ARQ protocols are employed, the line between the physical and link layers is blurred. However, above the link layer, one can assume that the interfaces between layers are based on binary data packets. In the following sections of this topic, we climb the protocol stack in describing the traditional functions of the physical, link, and network layers.
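The encapsulation and CRC mechanics described above can be sketched in a few lines. This is a toy model, not any real protocol suite: the layer names, the textual header format, and the choice of CRC-32 are our own illustrative assumptions. Each layer's module prepends its header at the source; the link layer appends a CRC; peer modules at the destination check the CRC and strip headers in reverse order.

```python
import zlib

LAYERS = ["transport", "network", "link"]  # pushed down in this order at the source

def encapsulate(payload: bytes) -> bytes:
    """Each module prepends its own header; the link layer then appends a CRC."""
    pkt = payload
    for layer in LAYERS:
        pkt = f"{layer}|".encode() + pkt            # this layer's header
    return pkt + zlib.crc32(pkt).to_bytes(4, "big")  # link-layer CRC trailer

def decapsulate(frame: bytes) -> bytes:
    """Peer modules verify the CRC, then strip headers outermost-first."""
    pkt, crc = frame[:-4], frame[-4:]
    if zlib.crc32(pkt).to_bytes(4, "big") != crc:
        # In a real link layer this would trigger an ARQ retransmission.
        raise ValueError("CRC error detected")
    for layer in reversed(LAYERS):                   # link header is outermost
        header, pkt = pkt.split(b"|", 1)
        assert header == layer.encode()
    return pkt

assert decapsulate(encapsulate(b"app data")) == b"app data"
```

The point of the sketch is the symmetry the text describes: whatever a module adds at the source, its peer removes at the destination, so the higher layers see clean binary packets.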
Montessori Views Each Child As Unique - Children are recognized and respected as being different from adults and as being unique individuals distinct from each other. - Young children have “absorbent minds”–the special capacity to absorb information from their environment without concentrated effort or formal instruction. - Montessori materials invite children to learn to read, write, and calculate in the same natural way that they learn to walk and talk–at their own pace. - Young children go through “sensitive periods” when their interest in and ability to absorb a particular concept are greater than at any other time in their lives. - Children have a deep love and need for purposeful work. This work helps them develop their mental, physical, social, and psychological powers. - The Montessori approach embraces the development of “the whole child” — physical, social, emotional, cognitive, and spiritual. Montessori Prepares the Environment for Optimal Learning - The classroom is child-centered. All the materials are within the child’s reach. The tables and chairs are child-sized and the pictures and decorations are at the children’s eye level. Everything has a specific place on the shelves. Children are orderly by nature and having the room set this way allows them to grow in a positive way. - The environment provides a natural sense of discipline–expectations are clearly stated and enforced by the children and teachers. Respect for themselves, for others, and for the environment forms the basis for all classroom rules. - Children are quiet by choice and out of respect for others within the environment–the Montessori classroom allows children to return to the “inner peace” that is a natural part of their personalities. Montessori Teaches Children in Multi-Age Groups - Montessori classes place children in three-year age groups, forming communities in which the older children spontaneously share their knowledge with the younger ones. 
- Working in multi-age groups creates a natural social environment and fosters a sense of community in which every member is valued for his or her contribution to the whole. - In a mixed-age classroom, children can choose friendships based on common interest, not just age. Montessori Incorporates a Completely Different Approach to Learning - Montessori education emphasizes learning through all five senses, not just through listening, watching, or reading. - The emphasis is on concrete learning rather than abstract learning–children need to experience concepts in “hands-on” ways. - Children work with self-correcting materials, and errors are viewed as a necessary and helpful part of the learning process. The Montessori materials help children evolve from concrete, experience-based learning toward increasingly abstract thought. - Montessori teachers “follow the child.” They recognize that each child learns at a different pace and has different periods of interest, and they allow that individual growth to take place. Montessori Teachers Act as Guides - The teacher plays an unobtrusive role in the classroom–the children are not motivated by the teacher, but by the need for self-development. - Montessori teachers keep detailed records on each child so they know when the child is ready to be introduced to a new skill or concept. - The children are with the same teacher through each developmental stage, allowing a strong bond of trust to develop. The teacher knows where the children are in their development as they start each new school year and the children do not have to adjust to a new classroom or teaching style. Montessori is Backed by Research Controlling the environment, not the child - Children thrive on order, routine, and ritual. - We learn best when we are interested in what we are learning about. - People thrive when they feel a sense of choice and control. - We learn best when our learning is situated in meaningful contexts. 
- Extrinsic rewards reduce motivation and level of performance once the rewards are removed. - Children can learn very well from and with peers; after age 6 children respond well to collaborative learning situations. Angeline Stoll Lillard, Ph.D., Montessori: The Science Behind the Genius Aline D. Wolf, A Parent’s Guide to the Montessori Classroom
Specific learning disabilities are a common reason for school failure. A learning disability compounded with a lack of basic needs is a formula for disaster. Opportunities for learning can be increased with small acts of kindness. According to the Urban Institute, the vast majority of low-income parents today are working but still struggling to make ends meet. Studies have shown that learning disabilities are more common among children from low-income families. Nutritional deprivation has a negative effect on learning. Children cannot stay focused on learning when they have to worry about their next meal. In some instances, the school lunch is the only nutritional meal that they may receive during the day. Many schools also serve free breakfast for children of low-income families. A child's basic needs must be met in order for the child to be productive in school. Many children face challenges such as homelessness and hunger. These issues make it nearly impossible to learn. Many people have provided snacks, paper, pencils, clothing, shoes, socks, and nonperishable items to students and their families in need. This type of benevolent giving from generous people is considered charity. There are things that can be done to help children with learning disabilities. Volunteer services, such as reading a book, tutoring a student, telling a story, and monitoring the hallway, are some ways to help a child in need. Even the time spent with the children would provide valuable social interaction. School supplies, like pencils, crayons, paper, and glue, can be donated to your local school. Many schools provide a school supply list each year. These are the things that will be most needed by grade level during the school year. If you cannot find a supply list, consider donating money for these types of supplies. The school may be in need of things that are not on the school supply list. Encyclopedias, reading programs, and books may also be needed. 
These types of donations may also be tax deductible. You don't have to make donations alone. Supply and food drives have become popular at local churches and other organizations. Other groups, like book clubs, soccer teams, or women's groups, could schedule activities to raise funds or needed supplies for needy children. Financial difficulties can happen to anyone. All it takes is a loss of income, improper management of funds, or loss due to a natural disaster, such as fire, flood, or tornado. Unconditional love for children can be shown in different ways. Any small contribution could bridge the gap for a child.
Building a Story Within a Story From FUTURESTATES collection, lesson plan 12 of 13 Audience: This lesson is designed for high school students of all ability levels. Duration: This lesson will take 2-3 days, depending on the class. Summary of the Lesson: This lesson focuses on the narrative techniques of a frame story. Extension activities invite students to write their own frame stories and to research tent cities of the Depression era and of today. National Educational Standards: This lesson addresses the following Common Core Standards in literature: For grades 9-10 Analyze how an author’s choices concerning how to structure a text, order events within it (e.g., parallel plots), and manipulate time (e.g., pacing, flashbacks) create such effects as mystery, tension, or surprise. For grades 11-12 Analyze how an author’s choices concerning how to structure specific parts of a text (e.g., the choice of where to begin or end a story, the choice to provide a comedic or tragic resolution) contribute to its overall structure and meaning as well as its aesthetic impact. Curricula Writer: Carla Beard teaches high school English in Indiana. She often presents at NCTE and has served as Teacher in Residence for the Indiana Department of Education, where she helped teachers integrate technology into their classrooms. She maintains Web English Teacher, a web-based resource for English Language Arts teachers. - Preview the film, which is a little over 17 minutes long, not counting the credits. - Read the synopsis and watch the film The Making of Tent City. The comments in “The Making of Tent City” will help the viewer understand the writer/director’s intent. - Set up web access to view the film online. - Have a projector available so that all students can view the film. - For the introduction, bring a small, empty picture frame and a large picture, poster, or map. Objective: Students will analyze the effect of parallels identified in the two plots of Tent City. 
Introduction to the concept of a frame story (5-10 minutes) Direct students’ attention to a large picture, poster, or map. Hold up the smaller frame in front of the larger picture. Ask students, “If I put this frame right here, what happens, at least for a moment, to your attention?” Students should respond that the frame causes them to focus mostly on the framed section of the picture. Help students reach the conclusion that the frame’s purpose is to draw attention to what is inside it. If students are not already familiar with the concept of a frame story, take a moment to explain it to them. Explain that this is an ancient storytelling technique. The purpose of the outer story is usually to introduce the inner story, which is the more important plot. Some stories, however, establish a narrative tension so that the outer and inner stories influence one another. That is the case with the story for today. Pre-viewing activity (15-20 minutes) Divide the class into small groups and give each group one set of questions (below) to discuss. After 5-10 minutes, ask them to share their thoughts with the whole group. - When people lose their homes in your community, where do they go? How do they cope? Does anyone try to help them? - Sometimes young people notice aspects of a situation that older people don’t see. If you disagreed with a major decision your parents were making, how might you approach them? - What can people realistically do when corporations act illegally or when their actions are legal but have a negative impact on a community? Middle (about 50 minutes) Distribute the Tent City viewing guide. Explain to students that you will show the film twice. Prior to the first viewing, ask students to just watch the film to understand what is happening. They will not be expected to analyze the film until they see it a second time. After showing the film the first time, allow a couple of minutes for comments or for questions about anything students did not understand. 
Explain that students should take notes using the viewing guide as they watch the film a second time. Because it can be difficult to watch and take notes at the same time, they might want to divide up the work with a partner. Then show the film a second time. When it is finished, give students a few minutes to complete their graphic organizers and compare answers. Then engage students with the post-viewing discussion questions (see Teacher’s Guide): - When Matthew said that he had no choice, was he making excuses, like Ivan said, or was he seeing a bigger picture that a child can’t see? - Do you agree with the family’s decision? Why or why not? - Would you have made the same decision for your family? - What do you think Tent City will be like? How will people treat Matthew and his family when they move into the Tent City? - At what point did you guess that the story would end the way it did? - The director chose to use black-and-white photographs for the inner story. How does this affect the telling of the inner story? What does this add to the overall narrative? - What elements of the inner story highlighted or emphasized the conflicts that were taking place in the outer story? End (Time determined by needs of the class) Invite students to respond creatively to one of the following scenarios by developing and presenting a digital story. Encourage them to use the frame story narrative technique. - Tent City continues to grow. It develops problems with crime, sanitation, and chronic unemployment. What happens to Ivan and his family? - Tent City continues to grow, and eventually the people of Tent City become the largest block of voters in the city. They want programs to help them get back into their homes. City Council, however, is strongly influenced by Zone Bank, which wants the houses vacant. What happens next? - Ivan and his family are still living in Tent City when he graduates from high school. Does he have any regrets? What will he do after high school? 
- After Mr. X fell into the city’s water supply, things happened just the way the president of InkaZone planned: there was an epidemic, and the company made billions of dollars selling the only available cure. Did the company get away with it? - Someone fished Mr. X out of the reservoir and repaired and re-activated him. What happened next? Teachers may wish to use one of the following sites to assist in developing a rubric to assess student work: - Evaluating Multimedia Presentations - Kathy Schrock’s Guide for Educators: Multimedia - Overview of Evaluating Projects Other titles that use the frame story device: - The Panchatantra (collection of short stories from India) - The Canterbury Tales by Chaucer - The Celebrated Jumping Frog of Calaveras County by Mark Twain - “Alice’s Restaurant” (song) by Arlo Guthrie - The Princess Bride by William Goldman - Investigate and analyze predictions for Tent City as posted on the FUTURESTATES Predict-O-Meter. - Formulate and post their prediction on the FUTURESTATES Predict-O-Meter site. Beginning (5-7 minutes) Reactivate prior knowledge by reviewing discussions related to the film. Middle (30-35 minutes) Students will investigate predictions as presented on the Predict-O-Meter located on the FUTURESTATES website. After selecting and evaluating three of the predictions using the evaluation rubric, students will develop at least one prediction to post on the website. The proposed prediction will be evaluated by a peer and approved by the instructor before posting. A new prediction may alter the course projected by existing Predict-O-Meter predictions. Students may require an example of a valid prediction. Using the rubric to instruct the students, prepare a sample prediction and lead the class in an analysis of the statement. The following is an example of a proposed prediction and the evaluation of it using the prepared rubric. 
Proposed prediction: “In 2030, census data reveals that 50% of the urban population lives in tent cities.” - Is the prediction based on realistic possibilities? Yes. Tent cities are growing, and the economic recovery is very slow. - Do the consequences of the prediction support the film? Yes. The tent city is growing in the film. - Do known events in the past support the prediction? Yes. We can look to the Hoovervilles of the Great Depression as support. - Is this prediction plausible? This is the evaluator’s opinion based on the evidence presented in defense of the prediction.
The nuclear power plants in Japan weathered the earthquake itself without difficulty. The four plants nearest the quake's epicenter shut down automatically, meaning that the control rods were fully inserted into their reactor cores and the plants stopped producing power. This is normal operating procedure for these plants, but it meant that the first source of electricity for the cooling pumps was gone. That wasn't a problem by itself, because the plants could draw power from the power grid to run the pumps. However, the power grid became unstable and shut down as well. The second source of electricity for the cooling pumps was gone. That brought the backup diesel generators into play. Diesel generators are a robust and time-tested way to generate electricity, so there were no worries. But then the tsunami hit. And unfortunately, the tsunami was far larger than anyone had planned for. If the backup diesel generators had been higher off the ground, designed to run while submerged in water, or protected from deep water in some way, the crisis could have been averted. Unfortunately, the unexpected water levels from the tsunami caused the generators to fail. This left the last layer of redundancy -- batteries -- to operate the pumps. The batteries performed as expected, but they were sized to last for only a few hours. The assumption, apparently, was that electricity would become available from another source fairly quickly. Although operators did truck in new generators, they could not be hooked up in time, and the coolant pumps ran out of electricity. The fatal flaw in the boiling water design -- thought to be buried safely beneath so many layers of redundancy -- had nonetheless been exposed. With it exposed, the next step in the process led to catastrophe.
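The defense-in-depth scheme described above amounts to a simple fallback chain: try each power source in order, and the pumps fail only when every layer is exhausted. The toy sketch below is our own illustration of that structure (the function and source names are ours, not the plant's actual control logic).

```python
# Toy model of the layered backup scheme described above.
def available_power(sources):
    """Return the first working power source, or None if all have failed."""
    for name, working in sources:
        if working:
            return name
    return None

# The sequence of failures described in the text:
sources = [
    ("reactor generators", False),  # automatic shutdown after the quake
    ("power grid", False),          # grid became unstable and shut down
    ("diesel generators", False),   # flooded by the tsunami
    ("batteries", True),            # worked, but sized for only a few hours
]
print(available_power(sources))  # batteries
```

Once the batteries in this chain are marked failed as well, the function returns None, which is precisely the state in which the coolant pumps stopped.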
Algebra: In Simplest Terms

In this series, host Sol Garfunkel explains how algebra is used for solving real-world problems and clearly explains concepts that may baffle many students. Graphic illustrations and on-location examples help students connect mathematics to daily life. The series also has applications in geometry and calculus instruction.

1. Introduction—An introduction to the series, this program presents several mathematical themes and emphasizes why algebra is important in today's world.
2. The Language of Algebra—This program provides a survey of basic mathematical terminology. Content includes properties of the real number system and the basic axioms and theorems of algebra. Specific terms covered include algebraic expression, variable, product, sum, term, factors, common factors, like terms, simplify, equation, sets of numbers, and axioms.
3. Exponents and Radicals—This program explains the properties of exponents and radicals: their definitions, their rules, and their applications to positive numbers.
4. Factoring Polynomials—This program defines polynomials and describes how the distributive property is used to multiply common monomial factors, along with the FOIL method for binomials. It covers factoring, the difference of two squares, trinomials as products of two binomials, the sum and difference of two cubes, and regrouping of terms.
5. Linear Equations—This is the first program in which equations are solved. It shows how solutions are obtained, what they mean, and how to check them using one unknown.
6. Complex Numbers—To the sets of numbers reviewed in previous lessons, this program adds complex numbers: their definition and their use in basic operations and quadratic equations.
7. Quadratic Equations—This program reviews the quadratic equation and covers standard form, factoring, checking the solution, the Zero Product Property, and the difference of two squares.
8. Inequalities—This program teaches students the properties and solution of inequalities, linking positive and negative numbers to the direction of the inequality.
9. Absolute Value—In this program, the concept of absolute value is defined, enabling students to use it in equations and inequalities. One application example involves systolic blood pressure, using a formula incorporating absolute value to find a person's "pressure difference from normal."
10. Linear Relations—This program looks at the linear relationship between two variables, expressed as a set of ordered pairs. Students are shown the use of linear equations to develop and provide information about two quantities, as well as the applications of these equations to the slope of a line.
11. Circle and Parabola—The circle and parabola are presented as two of the four conic sections explored in this series. The circle, its various measures when graphed on the coordinate plane (distance, radius, etc.), its related equations (e.g., center-radius form), and its relationships with other shapes are covered, as is the parabola with its various measures and characteristics (focus, directrix, vertex, etc.).
12. Ellipse and Hyperbola—The ellipse and hyperbola, the other two conic sections examined in the series, are introduced. The program defines the two terms, distinguishing between them with different language, equations, and graphic representations.
13. Functions—This program defines a function, discusses domain and range, and develops an equation from real situations. The cutting of pizza and encoding of secret messages provide subjects for the demonstration of functions and their usefulness.
14. Composition and Inverse Functions—Graphics are used to introduce composites and inverses of functions as applied to calculation of the Gross National Product.
15. Variation—In this program, students are given examples of special functions in the form of direct variation and inverse variation, with a discussion of combined variation and the constant of proportionality.
16. Polynomial Functions—This program explains how to identify, graph, and determine all intercepts of a polynomial function. It covers the role of coefficients; real numbers; exponents; and linear, quadratic, and cubic functions, and touches upon factors, x-intercepts, and zero values.
17. Rational Functions—A rational function is the quotient of two polynomial functions. The properties of these functions are investigated using cases in which each rational function is expressed in its simplified form.
18. Exponential Functions—Students are taught the exponential function, as illustrated through formulas. The population of Massachusetts, the "learning curve," bacterial growth, and radioactive decay demonstrate these functions and the concepts of exponential growth and decay.
19. Logarithmic Functions—This program covers the logarithmic relationship, the use of logarithmic properties, and the handling of a scientific calculator. How radioactive dating and the Richter scale depend on the properties of logarithms is explained.
20. Systems of Equations—The case of two linear equations in two unknowns is considered throughout this program. Elimination and substitution methods are used to find single solutions to systems of linear and nonlinear equations.
21. Systems of Linear Inequalities—Elimination and substitution are used again to solve systems of linear inequalities. Linear programming is shown to solve problems in the Berlin airlift, production of butter and ice cream, school redistricting, and other situations, while constraints, corner points, objective functions, the region of feasible solutions, and minimum and maximum values are also explored.
22. Arithmetic Sequences and Series—When the growth of a child is regular, it can be described by an arithmetic sequence. This program differentiates between arithmetic and nonarithmetic sequences as it presents the solutions to sequence- and series-related problems.
23. Geometric Sequences and Series—This program provides examples of geometric sequences and series (f-stops on a camera and the bouncing of a ball), explaining the meaning of nonzero constant real number and common ratio.
24. Mathematical Induction—Mathematical proofs applied to hypothetical statements shape this discussion on mathematical induction. This segment exhibits special cases, looks at the development of number patterns, relates the patterns to Pascal's triangle and factorials, and elaborates the general form of the theorem.
25. Permutations and Combinations—How many variations in a license plate number or poker hand are possible? This program answers the question and shows students how it's done.
26. Probability—In this final program, students see how the various techniques of algebra that they have learned can be applied to the study of probability. The program shows that games of chance, health statistics, and product safety are areas in which decisions must be made according to our understanding of the odds.
A space elevator, also referred to as a "space tether", is an immense structure used to ferry large loads of material into space. Space elevators generally consist of large structures of carbon nanofiber which span straight up from the ground, thousands of kilometers high, ending at stations in space. Vehicles using the structure derive their power from strands of superconducting material. Space elevators have only been constructed by humans since the 24th century. Space elevators are a common construction of the UNSC, both on Earth and her colonies. There were six on Earth, but only the New Mombasa Orbital Elevator in New Mombasa, the Centennial Orbital Elevator in Havana, and the Quito Space Tether in Ecuador have been named. While Earth has six space elevators, many planets within the Outer Colonies have more, since humans rely heavily on the production and shipment of agricultural and mineral goods from remote worlds. Prior to the Covenant's invasion, the farming colony of Harvest, for example, had seven elevators linked to the orbital station Tiara, while some mineral-rich worlds had as many as nine. During the Human-Covenant War, many of these elevators were damaged, some seriously, in combat engagements. Of Earth's six elevators, only four remained intact after the Battle of Earth. A space elevator is a structure designed to transport and ferry materials from a planet's surface into space and onto a platform. The basic concept of a space elevator consists of a cable attached to the surface at the equator and reaching outward into space. By positioning it so that the total centrifugal force exceeds the total gravity, either by extending the cable or attaching a counterweight, the elevator stays in place in geosynchronous orbit.
Once moved far enough, climbers are accelerated further by the planet's rotation. The most common proposal is a tether, usually in the form of a cable or ribbon, that spans from the surface to a point beyond geosynchronous orbit. As the planet rotates, the inertia at the end of the tether counteracts gravity and keeps the tether taut. Vehicles can then climb the tether and escape the planet's gravity without the use of rockets. The engineering of such a structure requires an extremely light but extremely strong material (current estimates call for a density of roughly 2 g/cm³ and a tensile strength of roughly 70 GPa). Such a structure could eventually permit delivery of great quantities of cargo and people to orbit at a fraction of the cost of current means, and with very little danger. The space elevator is gigantic, reaching thousands of kilometers in height. An orbital tether's center of gravity must be at or above the point of geosynchronous orbit of the body it stands on. Because geosynchronous orbit above Earth is quite high (35,900 km above the surface), the height of the elevator would be twice the distance from the surface to the point of geosynchronous orbit, giving orbital tethers (because the same rule applies to them all) an average height of about 70,000 kilometers above the Earth's surface. Space elevators vary in size and shape, but they are all typically composed of the same raw material. Meshed together as a complex composite of intertwining nanofibers, these ingredients form a series of massive cords and rings several hundred meters wide. They bind to a grounded set of Polycrete anchors larger than most buildings, which hold the elevator's structure in place while the planet spins on its axis. The zenith of the elevator, commonly known as the "orbital" or "terminus", is then pulled taut by the planet's rotational inertia, sliding into geosynchronous orbit thousands of kilometers above the planet.
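The geosynchronous altitude quoted above (about 35,900 km) can be checked from first principles: a satellite hovers over one spot when gravity supplies exactly the centripetal force for one revolution per sidereal day. The sketch below is illustrative, using rounded standard physical constants:

```python
import math

# Geosynchronous orbit radius: G*M/r^2 = (2*pi/T)^2 * r
#   =>  r = (G*M*T^2 / (4*pi^2))**(1/3)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of Earth, kg
T = 86164.0            # sidereal day, s
R_earth = 6.371e6      # mean Earth radius, m

r = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (r - R_earth) / 1000
print(f"Geosynchronous altitude: {altitude_km:,.0f} km")  # ~35,800 km
```

The result agrees with the figure in the text to within rounding, and doubling the surface-to-geosynchronous distance gives roughly the 70,000 km tether height the article cites.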
The UNSC utilizes several designs of space elevators. Many, such as the New Mombasa Orbital Elevator, consist of a single tether reaching into space, surrounded by additional strands and massive support rings; the lower part of the tether is surrounded by an additional support frame. The Quito Space Tether utilizes a similar design. Harvest's orbital elevator system, built about two hundred years later, consisted of seven separate strands of carbon nanofiber attached to the orbital station Tiara. The climber system is often modular, and different types of containers, such as regular cargo containers, maintenance cars, or "Welcome Wagons" used to transport personnel, can move on the elevator strands. In high-capacity cargo elevator systems such as the Tiara, the cargo containers are also compatible with ground-based MagLev train lines, and they can be effectively converted into freighters by attaching a propulsion pod to them at the top of the tether. The earliest concepts of the space elevator date back to the end of the 19th century. During the second half of the 20th century and the early 21st century, multiple concepts were proposed, but it was not until almost three hundred years later that the first space elevator was built. The construction of the first space elevator, the New Mombasa Orbital Elevator, began in 2302. The cities with space elevators, "tether cities" as they came to be called, are often managed by second-generation "dumb" AIs. As shown in New Mombasa, space elevators have a significant impact on the importance and economy of the cities in which they are located. The cities and their surroundings are usually full of warehouses to store the massive amounts of cargo transported to and from orbit. During 2552, Earth had six tether cities, each managed with the help of an AI. Due to the size of the space elevator, the safety of such a structure is an obvious concern.
The catastrophic effects of a space elevator's collapse were witnessed multiple times during the Human-Covenant War, when many of the UNSC's space elevators collapsed due to the fighting. If the orbital counterweight is destroyed or the tether is cut near the top, the whole cable will fall, usually wrapping itself around the planet; the New Mombasa Orbital Elevator would be able to wrap itself around the Earth at least twice. This happened to Harvest's elevator system when Loki destroyed the Tiara station with a Mass Driver. The Centennial Orbital Elevator on Earth also collapsed due to the destruction of Station Wayward Rest on top of it. If the tether is cut halfway up, the upper portion will rise up and remain in orbit while the lower part drapes around the planet; the same will occur if the tether breaks a quarter of the way up. If the break occurs at or near the anchor point on the planet's surface, the whole tether will rise upward and end up in an unstable orbit around the planet. This was the case when New Mombasa's orbital elevator collapsed due to the damage caused by a Slipspace rupture backlash. Because of these safety concerns, tether cities are almost always designed with the possibility of a catastrophe in mind. The cities are often compartmentalized into multiple symmetrical sections to minimize the death toll and property damage in case anything were to happen to the elevator. However, contrary to the popular belief that the collapse of a space elevator will cause massive planetary destruction, the tether itself will not cause any significant damage at all; its lightweight construction allows air resistance to negate the effects of gravity. Instead, it is the support structure surrounding the tether that causes most of the damage, as evidenced by the wreckage strewn across the Tsavo Highway.
Known space elevators
- Unnamed colony

List of appearances
- Halo 2 (First appearance)
- Halo 3
- Halo 3: ODST
- Halo: Reach
- Halo: Ghosts of Onyx
- Halo: Contact Harvest
- Halo: Evolutions - Essential Tales of the Halo Universe
- Halo Graphic Novel
- Halo 4: Forward Unto Dawn
Tropical cyclones are one of the most destructive types of weather system on the planet. The obvious human interest in tropical cyclones is in their sheer power. Historically tropical cyclones have had devastating impacts on life, agriculture, water supplies and the economic well-being of tropical countries. This cyclone season (November to April), cyclone activity in the Australian region (5°S-40°S, 90°E-160°E) is likely to be above average. Typically, 12 cyclones develop or move into the region during the tropical cyclone season. The outlook issued by the Bureau’s National Climate Centre suggests an 80% chance of having more than the long-term average number of cyclones in the Australian region during the 2011-12 season. The forecast is due to the presence of a La Niña event in the Pacific Ocean. Cyclones like warm water, and La Niña events are associated with warmer than usual ocean waters in the Australian region. Historical records indicate that La Niña periods are usually, but not always, associated with an increase in tropical cyclone risk for northern Australia during the cyclone season. Almost everything about a tropical cyclone is extreme. The winds, rainfall, storm surges and flooding associated with severe storms are at the very extremes of recorded weather. Last cyclone season, the whole country watched with morbid fascination as the biggest system since 1918, Tropical Cyclone Yasi, tracked in from the Coral Sea and crossed the Queensland coast near Mission Beach on February 3. While the damage and cost of the storm was huge, the human impact was thankfully small. What are cyclones and where do they come from? A tropical cyclone is a rotating storm system characterised by a low-pressure center and numerous thunderstorms, with associated strong winds and torrential rain. 
The characteristic that separates tropical cyclones from most other cyclonic systems is that at any height in the atmosphere, the center of a tropical cyclone will be warmer than its surroundings, a phenomenon called a "warm core" storm system. The term "tropical" refers both to the geographical origin of these systems, which usually form in tropical regions, and to their formation in maritime tropical air masses. The term "cyclone" refers to the storms' cyclonic nature, with counterclockwise low-level wind flow (near the surface of the Earth) in the Northern Hemisphere and clockwise low-level wind flow in the Southern Hemisphere. A tropical cyclone is sometimes confused with a tornado, which it is not. When compared to a tropical cyclone, a tornado is a micro-sized rotating system. Around the world, there are many different common names for a tropical cyclone, including simply cyclone, cyclonic storm, tropical storm, hurricane or typhoon. In the northwest Pacific, typhoon is the regional name for a severe tropical cyclone. Hurricane is the regional term for the northeast Pacific and northern Atlantic, bordering North America. For Australian weather and climate folk, a tropical cyclone is known mostly by its initialism, "TC". TCs form and develop over tropical ocean waters. You can think of warm, tropical waters as the engine that spins a cyclonic weather system into a true TC. If they move over land, cyclones lose their strength due to the loss of the warm ocean as an energy source and to increased surface friction. This is why coastal regions can receive significant wind damage from a TC, while inland regions are relatively safe. However, torrential rains can produce significant flooding inland, as happened with both cyclones Yasi and Anthony last summer. Devastation caused by a cyclone can be significant enough to reach national disaster proportions, as was the case for severe TC Tracy.
Cyclone Tracy moved over Darwin from the Arafura Sea on Christmas Eve 1974. It is one of the most significant tropical cyclones in Australia's history. It led to the loss of 65 lives and the destruction of most of Darwin, and profoundly affected the Australian view of the tropical cyclone threat. Our readiness for Tracy contrasts strongly with our readiness for Yasi, nearly 40 years later.

More or fewer, stronger or weaker, here or there…

Tropical cyclones rely on warm tropical waters to form and develop. So what should we expect from cyclones as climate change ramps up in the next 100 years? Will we get more severe storms in future? As the oceans warm up outside of the tropics, can we expect storms in places we have never seen them before? Examining trends in cyclone data is problematic for several reasons. Since TCs are few in number each year and for each region, the overall sample size is small. This makes it hard to find statistically meaningful trends in the number of TCs. In terms of any changing strength of TCs, the science is hampered by the nature of the data. Historically, and without the benefit of satellites, the strength of each TC was manually analysed by forecast meteorologists. This means that there is no objective consistency in historical data over time and over different regions. For example, differences might be found in the manual analysis of the same storm as it crossed from one tropical cyclone warning centre's area of responsibility to another. Considering all of the problems with the data, examination of recent tropical cyclone activity in the Southern Hemisphere shows no significant trends in the total numbers of TCs, nor in numbers of severe tropical cyclones in the South Indian and the South Pacific Oceans.
In the Australian region, no categorical changes in the total numbers of tropical cyclones, or in the proportion of the most intense tropical cyclones, have been found, though there is considerable year-to-year and decade-to-decade variability. One study by the Bureau of Meteorology has suggested a decline in the most severe storms crossing the Queensland coast over the last hundred years, but the number of storms available to study is small. In the most studied region on Earth, the North Atlantic's Gulf of Mexico region, the data hint at a local increase in the most severe storms, subject to the many caveats mentioned above.

Modelling future cyclones will test our scientific limits

Whether the characteristics of tropical cyclones have changed, or will change in a warming climate — and if so, how — has been the subject of considerable investigation, with less than clear results. This is because we are posing a fundamentally difficult question to the existing science of climate modelling. Predicting how temperatures will change due to increasing greenhouse gases is relatively straightforward physics. By way of contrast, predicting how regional rainfall will change, such as the future rainfall over South Australia, is a much harder task. Predicting future changes in TCs is harder again. In fact, even if scientists had another Earth to play with, they would likely find concrete answers difficult to pin down. This is particularly true for more modest increases in greenhouse gases, such as will occur in the next few decades, as opposed to a doubling of atmospheric carbon dioxide concentrations. For large increases in greenhouse gases, extensive climate modelling has pointed to some consistent future changes. Future projections based on physics and using high-resolution dynamical models consistently indicate that greenhouse warming will cause the globally averaged intensity of tropical cyclones to shift towards stronger storms.
The average intensity will increase with global warming. Existing modelling studies also consistently project decreases in the globally averaged frequency of tropical cyclones. In other words, there will be fewer storms overall. However, that is balanced against the projected increases in the frequency of the most intense cyclones as global warming intensifies. This suggests a future world where tropical cyclones are less frequent, but those storms which do occur are more dangerous.
Runaway climate change is a theory of how things might go badly wrong for the planet if a relatively small warming of the earth upsets the normal checks and balances that keep the climate in equilibrium. As the atmosphere heats up, more greenhouse gases are released from the soil and seas. Plants and trees that take carbon dioxide out of the atmosphere die back, creating a vicious circle as the climate gets hotter and hotter. The phrase "tipping point" is heard a lot more from scientists. This is where a small amount of warming sets off unstoppable changes, for example the melting of the ice caps. Once the temperature rises by a certain amount, all the ice caps will melt. The tipping point in many scientists' view is the 2°C rise that the EU has adopted as the maximum limit that mankind can risk. Beyond that, as unwelcome changes in the earth's reaction to extra warmth continue, it is theoretically possible to trigger runaway climate change, making the earth's atmosphere so different that most of life would be threatened. As with a lot of climate science, what used to be theory is now being seen in practice on the ground. New information makes clear that reaching the tipping point is a much more immediate threat than was previously thought. The danger grows with the increase in average temperature above what is called the pre-industrial level - the mid-18th century. Some scientists estimate that when the temperature reaches an extra 2°C above that equilibrium the earth's natural systems will be in serious trouble. It will affect many species' survival prospects, including our own.

Too close for comfort

So the key question is how close are we to a 2°C rise, and when will we get there? The first thing to admit is that nobody knows for sure, but many who understand the science say the answer to this twin question is, first, that we are already very close, and second, we might get there terrifyingly soon.
In fact the 2°C threshold is much closer than almost anyone outside the specialist scientific community is prepared to acknowledge. By any standard, if you care about the future of the human race, it is too close for comfort. So to the vital question of when we might reach 2°C above pre-industrial levels - in other words, how much time do we have to curb our excess emissions? Warming is directly related to the quantities of greenhouse gases in the air, the chief of which is carbon dioxide. Concentrations of carbon dioxide in the atmosphere are already at 382 parts per million (ppm). That is up from the pre-industrial level of 280ppm, a considerable increase. To get that in perspective we need to realise that the 280ppm figure had remained more or less unchanged for 10,000 years, the period which accounts for the entire span of modern human history. The benign climate that has allowed the human race to multiply, develop and prosper has remained stable through that period. There have been minor variations: warm periods that allowed places like Greenland to be settled by the Vikings or mediaeval monks to make wine in Britain, and cold periods, known as mini-ice ages, that made it possible to hold frost fairs on the frozen Thames in London during the 17th and 18th centuries. The last one was held in the winter of 1814. These so-called natural variations in the climate have given those trying to rubbish global warming theories plenty of ammunition. But those changes have now been well studied and are better understood. It is no longer credible to suggest that what is happening now is a natural variation of a sort recorded in the last 2,000 years. In fact the variations in the quantities of carbon dioxide in the atmosphere have been small in that period, and other natural variations such as sunspots were the culprits for the previous warm and cool periods.
The recent increases in greenhouse gases have changed all the rules, and the stability in the climate system that man has enjoyed so long. Current calculations suggest that if and when the level reaches 450ppm there will be a 50% chance of the earth's temperature exceeding a rise of 2°C - in other words an even chance of potentially catastrophic climate change. To be on the safe side (the so-called precautionary principle, which so many politicians claim they endorse), some scientists believe that the carbon dioxide in the atmosphere must be pegged back to 400ppm - a mere 18ppm above the current level. So, on their current calculations, since man began the industrial revolution - and, unwittingly, an experiment with the climate - the human race has already got more than 80% of the way to causing a potential disaster. On this evidence it is clear that drastic action is needed. Some scientists have certainly been urging politicians to take urgent and immediate action. Recent evidence demands, according to a consensus of the world's best climate scientists, that we cut existing emissions by between 60% and 80% in the next 40 years to stand a chance of preventing climate change becoming unstoppable, and keeping control of our own destiny. Compare that figure with that achieved by the Kyoto protocol, to date the best effort by politicians to cut emissions. This will cut greenhouse gases from 34 of the developed countries by 5.2%, excluding the world's biggest polluter, the United States. Over the period of the agreement, which lasts only until 2012, total world emissions will rise because of the growing industries of the developing world. What does the science tell us about how much time we have left to solve the problem? Measurements taken by the Nasa Goddard Institute for Space Studies and the Columbia University Earth Institute in New York, released in December 2005, show that in the last 100 years the world's average temperature has increased by 0.8°C.
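The arithmetic behind "more than 80% of the way" can be checked directly from the figures in the text (280ppm pre-industrial, 382ppm current, and the 400ppm precautionary ceiling):

```python
# Fraction of the "safe" CO2 headroom already used, per the figures above.
pre_industrial = 280   # ppm, stable for ~10,000 years
current = 382          # ppm at the time of writing
safe_ceiling = 400     # ppm, the precautionary target cited
even_odds = 450        # ppm, ~50% chance of exceeding a 2 degree C rise

fraction_used = (current - pre_industrial) / (safe_ceiling - pre_industrial)
print(f"Headroom used: {fraction_used:.0%}")       # 85%
print(f"Remaining: {safe_ceiling - current} ppm")  # 18 ppm
```

Against the 450ppm even-odds threshold instead, the fraction used is (382-280)/(450-280), or 60%; the "more than 80%" claim refers to the stricter 400ppm target.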
That seems to leave a comfortable 1.2°C to go before the tipping point is reached, but this is where the climate plays a nasty trick. Unlike glass in a greenhouse, the extra heat-trapping gases released into the air take time to build up their full effect. This is largely because of the delaying effect of the cool oceans as they catch up with the atmosphere. Best estimates are that there is a 25- to 30-year time lag between greenhouse gases being released into the atmosphere and their full heat-trapping potential taking effect. That wipes out any feeling of comfort. It means that most of the increase of 0.8°C seen so far was caused not by current levels of carbon dioxide but by those already in the atmosphere up to the end of the 1970s. Still worse, the last three decades have seen the levels of greenhouse gases increase dramatically. In this 30-year period the earth has seen the largest increase in industrial activity and traffic in history. This great burning of fossil fuels has also coincided with the mass destruction of rainforests. So on top of the extra heat we are already experiencing, there is another 30 years of ever-accelerating warming built into the climate system.

· Global Warning: The Last Chance for Change, by Paul Brown, is published by the Guardian and A&C Black (£19.95).
Maxwell's equations describe the fundamental relationship between electricity, magnetism, and wave propagation. They underlie all radio and cable communications: light and radio waves are the same phenomenon, and the theory explains why radio waves can be focused and reflected just like light. "... we have strong reason to conclude that light itself -- including radiant heat, and other radiations if any -- is an electromagnetic disturbance in the form of waves propagated through the electromagnetic field according to electromagnetic laws." Maxwell, Dynamical Theory of the Electromagnetic Field, 1864.

The equations state the following: only in a steady state can a magnetic field exist without causing an electric field, and vice versa. When one is changing, it automatically brings the other into being for as long as the change continues. These mutually generating fields must be at right angles to each other, and they must both travel with the same velocity, which is equal to that of light. One implication: forms of electromagnetic waves other than light exist; they travel at the same speed as light but differ from it in frequency and wavelength. Verified experimentally by Hertz in the late 1880s.

Maxwell: 1850s, Professor at King's College, London; 1871, Chair in Experimental Physics at Cambridge University.

Hertz and Maxwell

The hertz is the unit of frequency: cycles per second. Hertz demonstrated experimentally the wave character of electrical transmission in space, developing apparatus that could transmit high-frequency, meter-length waves. 1883: Professor at the University of Kiel. 1885: Professor at the Technical University, Karlsruhe. He established the principle of resonance between transmitting and receiving circuits, and increased the distance over which these waves could be detected by making the transmitter and receiver identical.
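Maxwell's conclusion that the mutually generating fields travel at the speed of light follows from the vacuum constants: the propagation speed is 1/sqrt(mu0*eps0). A quick numerical check (the constants are standard SI values, not taken from the notes):

```python
import math

# Propagation speed of an electromagnetic wave in vacuum: c = 1/sqrt(mu0 * eps0)
mu0 = 4 * math.pi * 1e-7    # permeability of free space, H/m
eps0 = 8.8541878128e-12     # permittivity of free space, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"{c:.3e} m/s")  # ~2.998e8 m/s, the measured speed of light
```

The agreement between this derived speed and the measured speed of light was the key evidence that light is itself an electromagnetic wave.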
Hertz measured the inverse-distance relationship: the strength of the transmitted wave falls off as the inverse of the distance, as opposed to the inverse square (as shown by Faraday). Linear oscillator: two straight metal rods terminated by metal spheres, which store charge (capacitance). It generates waves that are "linearly polarized"; that is, the radiated electric field is parallel to the rods, and a detector placed at right angles would detect nothing. Hertz also demonstrated that short-wavelength radio waves can be concentrated into beams by parabolic reflectors, with the dimensions of the reflectors on the order of (or greater than) the wavelength. This introduced the concept of gain: transmission more effective than is possible with simple dipole aerials. 1895: the 21-year-old Guglielmo Marconi demonstrated that electromagnetic radiation, created by a spark gap, could be detected at a much greater distance than that considered by Hertz. While these effects were known to experimentalists, Marconi made many improvements to the basic antenna, coherer, and tuning components, and rapidly developed a capability to transmit signals wirelessly over several miles. Coherer: a glass tube with connections on the ends, filled with iron filings. Electromagnetic waves force the filings to "cohere", or align themselves between the connections. Invented by Branly. Tapping with a relay-controlled hammer causes the coherer to reset itself in preparation for detecting the next wave train. Marconi's insight: antennas elevated with kites, with one metal plate on the ground. This yielded much improved distance in transmission and detection. He first offered his invention to the Italian government, but they rebuffed him. The son of an Anglo-Irish mother from the Jameson whiskey family, Marconi traveled to England, where he received a warmer reception for his invention.
1897: Demonstrated his wireless telegraphy system on Salisbury Plain, winning the interest of the Royal Navy in communicating with ships at sea. Installed radio sets on the Royal yacht, and was able to report the results of ship regattas to shore. 1899: Marconi demonstrates wireless telegraphy in the US, reporting the international yacht races off Sandy Hook, New Jersey. "The possibilities of wireless radiations are enormous." Marconi, 1899. The British and Italian navies became Marconi's earliest customers, sustaining the Marconi Wireless Company during its early days. The Germans developed a rival system, unencumbered by Marconi's patents and heavily subsidized by government funds; this eventually developed into the Telefunken Corporation. The pre-WWI arms race had come to communications. Spark gap transmitters transmit across a broad range of frequencies, so a strong nearby transmitter can block out weak signals from far away. The next innovation was the notion of tuning: use of a resonant circuit to limit the spread of frequencies radiated by a transmitter and those to which a receiver responds. The general concept was demonstrated by Sir Oliver Lodge in 1889: adjust the receiver coil to the same length as the transmitter coil to achieve resonance. Marconi's patent 7777 (April 1900): transmitting aerial coupled to an induction coil; receiving aerial coupled to the coherer via a high-frequency transformer, a tapped inductor in series with the aerial, and a capacitor (Leyden jar). Vary the capacitance to "tune" the transmitter and receiver into resonance. 1901: Marconi succeeded in receiving a long-wave (> 1 km) transmission from the coast of England (Poldhu, Cornwall: 25 kW alternator-based transmitter; antenna of 50 copper wires, 160 ft high, arranged as an inverted fan) to St. John's, Newfoundland. December 12, 1901: the signal . . . ("S"), sent from England, is heard at St. John's. 700-mile transmission range during the day, improved to 2000 miles at night.
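The tuning idea in patent 7777 is LC resonance. A minimal sketch (component values are my own illustrative choices, not Marconi's): transmitter and receiver are "in tune" when their circuits share the same resonant frequency f = 1/(2π√(LC)), so varying the Leyden-jar capacitance shifts the frequency.

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A tapped inductor of 100 uH with a 250 pF capacitor resonates near 1 MHz:
f = resonant_frequency_hz(100e-6, 250e-12)
print(f"{f / 1e6:.2f} MHz")  # ≈ 1.01 MHz

# Halving the capacitance raises the frequency by a factor of sqrt(2),
# which is why a variable capacitor gives a convenient tuning control:
f2 = resonant_frequency_hz(100e-6, 125e-12)
```

A receiver whose circuit matches the transmitter's f responds strongly to it and weakly to everything else, which is exactly the selectivity that raw spark transmitters lacked.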
Practical to keep in communication with shipping across the Atlantic. Hertzian waves are not like light in one respect: they bend around the curvature of the earth. Explained by the proposed existence of an electrically conducting layer in the upper atmosphere: the ionosphere. 1903: International Conference held at Berlin to settle allocation of priorities and rights, with maritime safety a high concern. It dictates the rules of interoperation across different systems and allocates spectrum among government, military, and commercial uses. 1907: After continuous experimentation and refinement, using ever more powerful transmitters (over 100 kW!) and longer wavelengths, requiring more expensive and extensive antenna farms (30 x 100 m antenna masts), Marconi achieved regular transatlantic radio service. He discovers that range increases with increasing wavelength (16 kHz, 20,000-meter wavelengths used to communicate to the antipodes). NOTE: Marconi's system is wireless telegraphy; it is unable to transmit voice at this point. Spark radiations are difficult to modulate; sound requires a mechanism for generating continuously oscillating signals -- the vacuum tube. Marconi's company set up shore-based radio stations and placed radio sets on the majority of transatlantic steamers. 1912: a Marconi set was aboard the ocean liner Titanic when it went down; its existence is credited with averting an even greater loss of life (David Sarnoff, future chairman of RCA, mans the telegraph key during the disaster). Two ships near the Titanic had shut down their radio sets at 11 PM -- hours before the disaster -- and missed the calls from a ship in distress (CQD and SOS). Marine radio telegraphy becomes widespread, and is monopolized by Marconi. 1914-1918: Wireless plays a key role in the war. Within hours of the commencement of hostilities, a strategic coup: the British Navy cut Germany's overseas telegraphic cables. German overseas radio stations were systematically attacked and shut down.
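The wavelength figures in the notes follow from λ = c/f. A quick check (my own arithmetic, not in the notes):

```python
# Relationship between frequency and wavelength for radio waves: lambda = c / f
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz: float) -> float:
    return C / frequency_hz

# Marconi's very-longwave service to the antipodes at 16 kHz:
print(round(wavelength_m(16e3)))  # ≈ 18737 m, i.e. roughly the 20,000 m quoted
# For comparison, a 100 m "short" wave sits near 3 MHz:
print(round(wavelength_m(3e6)))   # ≈ 100 m
```

The antenna-farm economics follow directly: efficient aerials scale with wavelength, so a 20 km wave demanded the enormous mast arrays described above, while the short waves of the 1920s needed far smaller installations.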
The Germans similarly cut Britain's overland cables to India passing through Turkey, and severed communications links across the Baltic to Russia. Marconi hurried to complete several radio stations under contract from the British government, to reestablish communications with overseas possessions. The techniques of communications intelligence (comint) -- message interception, cryptanalysis, direction finding, jamming, and intelligence gathering -- developed rapidly. Mobile radio, circa WWI. 1916: Marconi's focus on longwave, high-power transmission did not yield mobile radio sets. However, radios were used on some mobile equipment and airplanes. Airborne-to-ground communications (transmit only) during WWI: "It became clear as we made our preparations ... was not going to be anywhere near as simple as it had sounded. The first problem was the sheer weight of the wireless apparatus. The guts of the system was a marvellously archaic contraption called a spark-generator. This worked by creating an arc through the teeth of a brass cog-wheel spinning against an electrode. Every time a tooth passed the electrode a spark jumped across the gap, and in this way, when connected to the aerial, it would produce a hideous, rasping crackle -- barbed wire made audible. The principle of signalling was that the operator worked a Morse key to turn this excruciating noise into a signal: a long crackle for a dash and a short one for a dot. "That part of the wireless alone weighed about thirty kilograms. But there were all the other accoutrements that went with it. Power was provided by a dynamo fixed on to a bracket under the aeroplane's nose and driven by a leather belt from a pulley-wheel on the propeller shaft: that weighed about seven kilograms. Then there was the aerial: twenty meters of wire with a lead weight at one end to trail behind us in flight, plus a cable reel to wind it in when not in use: about ten kilograms' worth in all.
Other accessories comprised a signal amplifier, a tuning coil, an emergency battery, an ammeter, a set of signal rockets plus pistol, and a repair kit. Altogether the wireless apparatus weighed about 110 kilograms. Or to put it another way, the weight of a very fat man as a third crew member." Airborne radio telephone, after WWI. 1917: US Navy takes over Marconi's stations for the remainder of the war. The Navy orders 10,000 radio sets for its fleet; radio manufacturing scales up and becomes the domain of large corporations. 1917: The infamous Zimmermann Telegram. Before the U. S. entered the war, German Foreign Minister Arthur Zimmermann sent a telegram to the German Ambassador in Washington via three routes: by radiotelegram direct to North America, via Sweden, and by the American embassy in Berlin(!) via the U. S. embassy in Copenhagen and thence to Washington. The message was in a German diplomatic code. British cryptanalysts had received copies from all three routes, and had cracked the code in their legendary Room 40 in the Admiralty. While they knew the provocative contents of the telegram, they needed a way to reveal it to the U.S. government without letting the Germans know that they had the ability to read essentially all of their diplomatic cable traffic. With the help of an employee in the Mexican Telegraph Office, a British agent in Mexico obtained a copy of the telegram which had been forwarded from Washington to Mexico City, with minor changes from the original cable. This they handed over to the U.S. Government, and the Germans believed that the interception had taken place in North America, not in Europe. In the telegram, the Germans attempted to incite the Mexican government to enter the war against the U. S., promising them Texas, New Mexico, and Arizona as their prize. The telegram's existence helped to bring the U. S. into the war.
Zimmermann Telegram in the German diplomatic code. 1920s: Marconi discovers that short waves, reflected off the ionosphere, offer a much better communications method, requiring substantially lower power and more compact antenna systems and radio sets. Initial experiments with 10 kW transmitter power and 100 m wavelength were heard at 1250 miles during the day, 2230 miles at night. Demonstrated that the received signal strength varied not with the power of the transmitter but with the distance from it, fading out and reappearing far from the transmitter site. By 1924, using shortwave techniques, Marconi was able to send a voice message from England to the antipodes (Australia). The British and U.S. governments did not believe that shortwaves could be harnessed for effective communications; these "useless" or "junk" bands were given to amateur "ham" radio operators. Further development of vacuum tubes enabled shortwave transmission. Since this time there has been a continuing race to harness higher frequencies and shorter wavelengths. This was the first time in history that something smaller and cheaper actually outperformed something larger and more expensive: cable and longwave transmission passed from the communications scene. By World War II, shortwave radio had developed to the point where small radio sets could be installed in trucks or jeeps, or carried by a single soldier. Two-way mobile communications on a large scale revolutionized mobile/combined-arms warfare over wide areas. Shortwave services are subject to interruption during ionospheric storms. Gradually replaced by submarine telephone circuits and satellite services, shortwave is still used for world-service broadcasting, maritime and defense services.
All children are individuals, and therefore the answer, in short, to this first question is ‘in a multitude of ways’. However, in order to obtain an insight into this broad subject I will firstly consider the way children learn English by analysing models of general learning styles and theories. I will then address the National Curriculum for English, and investigate how the theories are applied to its teaching. Research into Learning Ginnis (1992) states that ‘Even now the way many teachers teach is out of step with the way most learners learn.’ (Ginnis, The Teacher’s Tool Kit, p.4). He believes that if teachers follow the ‘natural laws of the learning process’ (ibid) then progress will be made. In the past teaching has centred around what is being taught and when; according to Ginnis, the focus should now be on how children are taught. Student centred learning, which is becoming the popular model in the twenty-first century, echoes the Government’s ‘Every Child Matters’ 2005 campaign and White Paper, where children are held in emotional safety, their self esteem is developed and they are given responsibilities. Bowkett (1999) wrote that the child’s emotional development is of paramount importance, and is the foundation on which learning can be built. Teachers must equip children ‘…with an emotional toolkit, and give them the right tools for the job’ (ibid, p.29). Blakemore and Frith (2005) note that the theories to assist in facilitating learning are powerless if we do not take into consideration the individual’s social-emotional state. They cite a child who cannot do up the buttons on a coat: it may be that this is due to motor skill development, or merely that the child just does not want to do them up! (Blakemore and Frith, The Learning Brain, p.94). As teachers, a priority must be to get to know the children and be able, to a certain degree, to read their body language and verbal cues to determine what emotions they are experiencing.
If a child has a particular problem or crisis, this will affect their learning greatly, and we must be flexible enough to differentiate our teaching to accommodate this. Lefrancois (1997) believed that students should be assessed on entry into school to determine their specific preferences for learning (Ginnis (2002) The Teacher’s Toolkit, p.38). Once this is complete, these pupils may have a choice over the way their lessons are organised. This student centred approach gives the student the responsibility of determining when they are ready for exams, and further responsibilities in the running of the school. The responsibilities, as mentioned previously, help to develop the emotional side of the individuals; this in turn helps to facilitate learning. An emotionally stable and happy child is one that is open to a learning experience. I believe the ideology behind Lefrancois’ model would help to achieve happy individuals who feel valued. However, in practical terms the idea of a ‘free school’ can only take place in a small scale environment; the private sector is the ideal place for this practice, for example in the successful Steiner Education Centres. In state secondary schools we can still find out what the children’s preferred learning styles are, and use the information to aid learning. An initial questionnaire may be completed by a class as part of an introductory lesson at the beginning of term. Gardner (1983) proposed a model of learning styles based on 8 different types of intelligence. These are:
1. Linguistic
2. Logical/Mathematical
3. Musical
4. Visual/Spatial
5. Body/Kinaesthetic
6. Interpersonal (co-operates and understands people well)
7. Intrapersonal (self motivated and self confident)
8. Naturalistic
To define a child’s intelligence is to understand where their preferred style of learning, and their learning potential, lies. This method, as seen on Channel Four’s 2005 series The Unteachables, is an excellent method of boosting the self esteem of children with Special Educational Needs.
If children can firstly understand that they have an intelligence (i.e. a talent) in one area, they will automatically be interested in finding out more. Once the preferred learning style is used to teach, this self esteem is multiplied by the child being able to access the learning in the way that best suits them. When a class have completed the questionnaires, the teacher can determine what the dominant styles of the group are and use a mixture of teaching styles to accommodate the children. For example, if a large proportion of the children show a leaning towards body/kinaesthetic intelligence then lessons could regularly include dispensing with the chairs and completing a physical activity. There are many physical activities and exercises that can be applied to English. I have recently taken a year seven, middle band group ‘On Tour’ (ibid, p.133) as part of a Key Stage Three writing course on fairytales. I divided the class into seven groups of approximately four pupils. Seven tables were set up in the room, and all the chairs taken away. On each table a large piece of sugar paper was placed with an original fairytale title, and a coloured felt pen. Each group chose a table, wrote for four minutes and then all groups changed. The stories evolved slowly, with each group adding to each story, and the class had an enjoyable time in the process. The success was assessed at the end of the lesson when the stories were read out. This method of writing produced stories that did not lose focus or stamina; in fact the amount written in the time given outweighed that of any exercise previously given individually. It seemed to me that the physical activity, group discussion, teamwork, visual images and colour inspired a range of students with varying learning styles, not merely those with a preference for kinaesthetic learning.
The groups went on to correct and develop a story each from the batch produced, allowing more formal work to be completed for those who prefer this style of learning. It must be remembered that not all children will like to work in groups; some prefer self-motivated individual tasks. Gregorc (2001) also believed that flexibility and variation are the keys to successful teaching. In creating a model of learning styles based on sliding scales of how individuals store experiences as information in their memories, he found that in a class of 30 children, most teachers will encounter a ‘whole spectrum of styles’ (ibid, p.41). Gregorc saw people as being either sequential and structured, or random, in the way they store information, and as either abstract in their ideas or preferring a realistic, concrete approach. These are placed on sliding scales to cater for responses from extreme to very subtle, giving four broad styles:
Concrete sequential: structure — forms, lists etc.
Abstract sequential: academic research
Concrete random: open-ended practical work
Abstract random: unstructured group work
In English, these styles can be easily accommodated. ‘Abstract random’ children will prefer activities such as ‘On Tour’, so long as they have a creative input into the themes and headings of the stories. ‘Concrete random’ children will like project work that does not include a frame for writing; therefore, with GCSE coursework these pupils will be self motivated to get on with the work. In my experience of a year ten group completing coursework on The Merchant of Venice and also the poetry of Blake, the majority do prefer a frame within which to work. This fits into Gregorc’s ratios; the majority will not like open ended work. The ‘Blake’ work has been completed using a variety of techniques. I have provided charts for the pupils to complete; this accommodates both the ‘abstract sequential’ and ‘concrete sequential’ learners.
Unstructured group work, for the ‘abstract random’ learners, has come in the form of group discussions and feedback sessions on specific poems. Once again, this only completely satisfies approximately one quarter of the group, and therefore is not a popular activity with the majority. In learning for coursework, students are under pressure to make notes on as much as possible in order to achieve top marks prior to the exam. This actually creates an artificial atmosphere to use as an example for preference of learning style. Pupils, once given the assignment, prefer, as a group, to copy or have information dictated; if the pupils were responsible for their own learning and lessons they would choose to be ‘spoon fed’ information in order to gain the best marks possible at GCSE. This, of course, is not learning: by copying a paragraph we are not constructively learning anything. The most poignant statement that Ginnis (2002) makes is in relation to constructivism. This echoes the model of making children feel safe by taking what they already know as a basis for learning. It means that children feel comfortable with what is being taught, as they have a point of entry to the learning rooted in their existing knowledge. Looking at this from a psychological perspective, new information is embedded into the memory within an existing ‘living web of understanding’ (ibid, p.19). If children can make meaningful relationships between new and existing information, then it is far more likely to be learnt. Ginnis can see very little point in giving students ‘ready made learning’ (ibid), and instead demands that teachers get the children to work things out for themselves wherever possible. Piaget is the main proponent of cognitive constructivism, which is concerned with thinking and learning, as explained above.
Vygotsky also researched constructivism, but on a social level, stating that learning is a ‘social, collaborative and interactional activity…’ (Cohen et al (1977) A Guide to Teaching Practice, p.168). Vygotsky believed that the teacher must be there to facilitate and provide scaffolding for learners, but once confidence has been gained, this scaffolding must be removed to allow learners to develop within the social group and think for themselves. According to Galton et al (1980) there are three classifications of teaching styles: 1. class enquirers (teaching the whole class with control/some individuals working alone); 2. individual monitors (teaching pupils individually within the class); 3. group instructors (teaching groups of pupils within the class). These styles focus on the dynamics of the classroom (Cohen et al (1977) A Guide to Teaching Practice, p.184). Flanders (1970) investigated these styles and found that teachers who were able to shift between techniques and naturally change from being observers to proactive ‘counsellors’ were highly successful (ibid). This notion echoes the needs of the children highlighted in the research completed on learning styles: variety of work and responsiveness of the teacher. Variety and responsiveness are highlighted in The National Curriculum’s main values, aims and purposes: the curriculum must ‘…develop enjoyment of, and commitment to, learning as a means of encouraging and stimulating the best possible progress and the highest attainment for all pupils’ (The National Curriculum (1999) p.11). I will now address the three branches of English teaching in The National Curriculum: speaking and listening, reading and writing.
Speaking and Listening Des Fountain (1994) noted that ‘Teachers in the National Oracy Project found that some aspects of the “teacherly” role of guiding and supporting pupils’ learning could in fact be provided within well planned and organised group work’ (Des Fountain (1994) in Teaching English, p.56). The year nine class that I have been teaching have completed book reviews, and these are presented at the front of the class as individual formal reviews every week, four at a time in a lesson. I initially noticed that the class, when asked to listen and ask questions at the end, split into two categories: 1. those who listened and asked questions, and 2. those who did not appear to listen. I therefore consulted the National Strategy for English for help. The result was focusing on the listening rather than the speaking in the lesson. I split the class into four groups, and gave each group a focus for their listening. The focus for each group was to listen for specific techniques used to make the speech interesting, or to listen for certain parts of the content and comment on their effectiveness. The groups then had to make notes during the speech, discuss among themselves and comment at the end; any individual might be asked to comment. The focus gave what Des Fountain calls a purpose to the talk, and assisted me in assessing the group’s ability to listen. The whole class focused and stayed on task for all four speeches each week. They made appropriate comments (with a few exceptions) and feedback after the lessons indicated they enjoyed the task. To Slavin (1990) ‘…one of the greatest benefits from co-operative learning is the raising of self esteem’ (Cohen et al, A Guide to Teaching Practice, p.180). This was certainly true in this example; emotionally happy children who are focused are able to learn. However, it must be remembered that although group work has benefits such as those noted by Slavin, there are also problems that may occur.
These include pupils failing to get on with the task, or pupils failing to get on with one another. Detailed planning by a teacher who knows his or her pupils well can overcome these (ibid). The teacher’s own speech is vitally important in a learning situation. Whorf (1956) believed that a pupil’s world is constructed through the language used in his or her society. This theory is known as linguistic determinism, and impacts greatly on learning (Fox (1993) Psychological Perspectives in Education, p.61). The development of language directly affects the pupil’s ability to think and learn. As teachers, we must consider our language use carefully as we assist in constructing the pupil’s world. Reading Traves (1994) sees reading as either the actual ‘…process [of reading] or a response to a literary text’ (ibid, p.91). The way in which pupils can follow the aims of The National Curriculum in order to achieve the best results in both progress and attainment is through Directed Activities Related to Texts (DARTS). These activities give a specific purpose to reading in different contexts, whether it is for relaxed reading or critical analysis. Traves believes that ‘…reading ought to be a dialogue between the reader and the text’ (ibid, p.95). DARTS activities teach skills that enable pupils to differentiate between different types of reading and become critical readers and thinkers. In classroom practice, DARTS activities are easy to construct, and are well received by pupils as they enable the learners to think for themselves and really engage with the text. I have practised DARTS with a year eight group who have been reading Roald Dahl’s Twisted Tales. The group are of a foundation level, and include a number of EAL pupils. Therefore, in order for a text to be enjoyed and understood, the pace must be at an appropriate level, whilst still making the lesson interesting. I have found that the DARTS activity of predicting what will happen next is an ideal technique in this situation.
The activity of stopping to write predictions as the reading is taking place gives the pupils time to really think about what has happened in the text. They are unable to predict without a sound understanding of what has happened previously. The EAL pupils find predicting a text challenging, but interesting and worthwhile, as the ‘fun’ element of this activity breaks up the process of going through a lengthy text. The EAL learners receive extra support in the lesson through scaffolded pieces of work and a table to write within. For the Twisted Tales work, I provided the children on the SEN Code of Practice with a table where they could place their predictions alongside the actual events of the story. This enabled the group to sequence the tale in a clear and straightforward way. EAL students, under Whorf’s linguistic determinism model, see the world differently to the students who were born and raised in the UK. This is because, according to Whorf, pupils construct their world according to their language (Fox (1993) Psychological Perspectives in Education, p.61). Therefore in teaching the language through English lessons in school we are also changing the way the pupils view the world. Sequencing texts is a further activity within the DARTS programme, and is an excellent way for a group to draw together all the learning from a particular project or reader. In a year seven foundation group I have coupled this idea with work on paragraphing. I cut copies of a familiar text (a famous tale) into paragraphs, and the pupils had to stick them into their books in the correct order. This kinaesthetic exercise allows the children to really engage with the words as they sequence the story. Piaget believed that a ‘…major source of learning is activity by the pupils’ (Fox (1993) Psychological Perspectives in Education, p.57). By dealing with concrete materials, pupils are far more likely to learn, through physical interaction and having the opportunity to experiment.
As an extension for those in the group who were ready to write in paragraphs, the reason for each paragraph change had to be placed between the pieces of paper stuck into the book. This allowed the children not only to revise the story, but also to revise the technique of paragraphing. Children who prefer to learn from visual stimuli were aided by having the reasons for paragraphing on the board as a series of pictures based on the acronym TIPTOP (time, place, topic and person). The pupils were asked to write TIPTOP in their books, and draw a small doodle by the side, e.g. an alarm clock for ‘time’. Writing Maybin (1994) differentiated between two major approaches to teaching writing in schools (ibid, p.186). In the Process Approach, pupils write for actual audiences, and have a specific purpose for their work. This may be a book to be held in the school library for other pupils to read. Pupils are seen as ‘apprentice authors’ (ibid). In the Genre Approach the focus is on constructing a particular kind of text. Pupils analyse an actual text provided by the teacher, deconstruct it and then write their own version. When I started teaching, an experienced colleague shared her thoughts on the way English writing is taught in schools. To her, we take a text, deconstruct it to find out how it is written, then attempt to construct our own version. She felt that this process is rather like an ‘Airfix’ model kit: you know how the model on the box is made, but your own version never quite matches up, and inevitably leaves you with a feeling of disappointment. The year ten group that I have taught were given a piece of GCSE coursework written by a top grade student which was published in a Sunday magazine. The text was about her life, and how she spends a day. Needless to say, her life was interesting; she had a role in the National Youth Parliament. The students in year ten dissected the piece of writing, and then set about composing their own piece for coursework.
The example given inspired some, and showed how a top grade may be achieved, but many students felt disillusioned and inadequate. In the light of this I feel that a compromise must be made in order to maximise learning, and leave pupils with a sense of satisfaction at the end of the process. The National Curriculum favours the Genre Approach, and the writing framework is set up in triplets that form different types of articles. The SATs tests for KS3 are designed to test the children’s ability to write in a specific genre, according to the triplet noted in the question set. Models must be used to show pupils what is expected, but may be completed on the spot by the teacher, using ideas from the pupils. This organic approach enables the teacher to support the pupils’ self esteem and develop their ideas in order to have an open dialogue in the classroom where pupils are learning from each other. The genre approach, used sporadically, is valid, but examples must hold relevance for pupils; those who can connect with the text will be able to plan their learning around their existing knowledge, making it easier. According to Maybin (1994), ‘proponents of the genre approach argue that… pupils [will] understand more fully how knowledge is constructed in different academic disciplines’ (ibid, p.192). The idea that pupils are empowered to deal with a wide range of subjects and the adult world must be a positive outcome, but pupils must retain ownership of their work through being granted some freedom of expression. The year seven group learning about fairytales were taught the conventions of the oral story telling traditions using ‘mind mapping’ before being given the first chapter of a modern tale. The first chapter was read to them in order for them to make notes. In this example, the pupils were not focused on a text, but on their perceptions and notes relating to the text.
They were then asked to finish the story using the conventions of traditional dark fairytales. The pupils were given a stimulus, but were not expected to rely on it word for word, enabling a freedom of style to develop. This process follows the social constructivism model. Further exercises could be used to develop the pupils’ independent learning, for example writing their own tale from their own initial idea in a future lesson. Through this small scale investigation into the way children learn English, and how teaching reflects this, I have been able to draw some concrete conclusions. Learning ‘style’ is a much debated area of study. Prescribed models that dictate a preferred way of learning for each individual are becoming out of vogue, and are being replaced by models that allow for greater flexibility. The children we teach have many varied learning styles, but even with a preferred way of being taught these children still benefit from a degree of variety in their school day (Griggs (1988) in Psychology in Practice: Education, p.72). This humanistic approach echoes Ginnis’ view that the focus in schools should be the student and the way children are learning, rather than what is being taught. If children are treated as individuals with preferences and needs, the teacher is far more likely to become proficient than through trying to make the children fit into an existing system of education. Lessons need to have pace and variety, and to provide a two-way interaction between the teacher and pupils. New ideas need to be delivered in a way that pupils can identify with and fit into their existing knowledge bank. The techniques I have practised so far have been on the whole successful, but I realise that teaching develops with experience, and problems faced in my first term will be easier to overcome in future. As Piaget suggests: like the pupils, teachers learn from experimenting through concrete, active exercises!
The more teachers learn through experience about teaching their subject, and more importantly about their individual pupils, the more the pupils will learn.
Where did global scourges like AIDS, smallpox, cholera and the black plague come from? Most of them got their start in other animals, then made the cross-species jump to infect humans. If only we could have spotted the malicious microbes when they were just beginning to make that jump.... That's exactly what three prominent researchers are proposing we do, by establishing a global "early warning system" for infectious diseases. The system would involve periodic testing of people who come in close contact with wild animals, ranging from zoo workers to hunters. One of the scientists says such a system could have changed the course of the global AIDS crisis ... if only it had been in place 40 years ago. The proposal comes in a research review article written for the journal Nature by Nathan Wolfe, Claire Panosian Dunavan and Jared Diamond of the University of California at Los Angeles. Diamond is the most famous member of the trio, thanks to his best-selling books "Guns, Germs and Steel" and "Collapse: How Societies Choose to Fail or Succeed." Wolfe and Panosian Dunavan are also well-known for their work in epidemiology. "In 100 years, when people look back on this period of history, they will say that we worked very hard to control existing pandemics, but we did very little to try to prevent future pandemics," Wolfe told me today. "Global disease control today is like cardiology was in the '50s. Instead of preventing pandemics, we wait until the 'heart attack' occurs - of course, at which time it's often too late." In this week's Nature article, Wolfe and his colleagues recap what we've found out about the emergence of infectious diseases over the centuries. They trace five stages leading from first cross-species transmission to human pandemic: - Pathogens found only in animals but not detected in humans under natural conditions - for example, most known malarial plasmodia.
- Pathogens that are transmitted from animals to humans but not generally among humans, such as anthrax, rabies and West Nile virus. - Pathogens that jump from animals to humans, but appear to be transmitted among humans for only a few cycles before the outbreak dies out. The Ebola and Marburg viruses are examples. - Pathogens that can be transmitted from animals to humans, and also from human to human in a long outbreak cycle. This category takes in cholera, influenza A, typhus, yellow fever and dengue fever. - Pathogens that are passed exclusively from human to human, either because they go back to the beginnings of humanity or because the species-jumping microbe quickly evolved to become human-specific. Examples of the Stage 5 sicknesses include HIV-1 M, the virus that causes AIDS, as well as measles, mumps, rubella, smallpox and syphilis. The researchers go on to note how infectious agents linked to animals have shaped history - for example, why indigenous Americans were vulnerable to European settlers' diseases but not vice versa (it has to do with domesticated animals, or the lack thereof). They wind up their paper with the call to action, starting out with a proposal for an "origins initiative" to fill the gaps in our knowledge about the roots of a dozen major diseases: AIDS, cholera, dengue fever, falciparum malaria, hepatitis B, influenza A, measles, plague, rotavirus, smallpox, tuberculosis and typhoid. Pathogens from a wide range of wild and domesticated animals would be analyzed. Here's what the Nature authors say could result from such an effort: "In addition to the historical and evolutionary significance of knowledge gained through such an origins initiative, it could yield other benefits such as: identifying the closest relatives of human pathogens; a better understanding of how diseases have emerged; new laboratory models for studying public health threats; and perhaps clues that could aid in predictions of future disease threats." 
That dovetails nicely with the early warning system: "Most major human infectious diseases have animal origins, and we continue to be bombarded by novel animal pathogens. Yet there is no ongoing systematic global effort to monitor for pathogens emerging from animals to humans. Such an effort could help us to describe the diversity of microbial agents to which our species is exposed; to characterize animal pathogens that might threaten us in the future; and perhaps to detect and control a local human emergence before it has a chance to spread globally. "In our view, monitoring should focus on people with high levels of exposure to wild animals, such as hunters, butchers of wild game, wildlife veterinarians, workers in the wildlife trade and zoo workers. Such people regularly become infected with animal viruses, and their infections can be monitored over time and traced to other people in contact with them." Samples from the target groups would be analyzed for the telltale signs of emerging diseases - for example, retroviruses in the blood of bushmeat hunters. In the event of a future outbreak, public health experts could check the tissue repository to reconstruct the roots of the pathogen and come up with countermeasures. The years-long battle against bird flu illustrates how difficult it is to fight an emerging disease - and how important the fight has become. Eight years ago, Wolfe set up a pilot project to monitor "viral chatter" in Cameroon, by testing bushmeat hunters and their kills for blood pathogens. In the course of the project, he and his team came across three previously unknown retroviruses (that is, from the same family as HIV) and educated the hunters on safer practices for handling animals and meat. Now Wolfe says he is "scrambling" to set up a bigger monitoring system in Cameroon as well as the Democratic Republic of Congo, Malaysia, China, Madagascar and Paraguay, using his $2.5 million in seed money from the National Institutes of Health. 
"The idea is that we will move out to other bilateral partners," he told me. Wolfe said such a system might have picked up on the HIV epidemic in its earliest stages, had it been around then. "Had we caught HIV in the '60s ... we would have been way ahead of the game. Each extra month of early warning leads to massive lives saved and financial resources preserved. You don't have to hit a home run. If you get a base hit with one of these systems, you get a huge benefit," he said. Wolfe emphasized that the focus of such a system would be on local health authorities, with government agencies and philanthropic institutions playing a supporting role. "What this is about is local scientists stepping up and saying, 'Look, we've got major emerging infectious diseases in our country, and we'd like to play a part,'" he said. Eventually, the system could evolve into something of an Interpol for infectious diseases - turning national public health databases into an international whole that's greater than the sum of its parts. "This takes advantage of global public health needs," Wolfe said. For the full story on infectious diseases and the proposed early warning system, check out this report on SciDev.net, then follow the Web link at the bottom of the report for free access to the Nature paper itself. This UCLA news release and this Wired article provide additional insights. Is such a system too troublesome and expensive to create - or is the cost of not creating it too great? Feel free to weigh in with your comments below.
a.k.a. e-mail address -or- Internet address -or- network address -or- Web address -or- addy A series of letters, numbers, and/or symbols by which you identify yourself and by which the Internet identifies you (actually, your computer). It is also a location where information is stored. Through the use of addresses, people can send e-mail, look at Web sites, and send or receive files and documents. An e-mail address takes the form of username@hostname.com, where the username is a name you have chosen and the host name is that of your ISP or e-mail provider. The symbol in the middle is the "at" symbol (@). Your e-mail address is verbalized as "username at hostname dot com." A Web address is the same as a URL. Think of it as a telephone number, where each one is unique. A WWW address usually starts with "http://www" followed by a "dot" and then a domain name. The Internet is global, and most companies outside the United States use their country's abbreviation instead of the popular "dot com." (For a list of country code abbreviations, see: country codes.) An Internet address refers to both of the above, as well as to an IP address, which is a number given to a computer terminal where a user logs on to the Internet. If you've ever seen a set of numbers in place of a domain name (for example, http://184.108.40.206), you've seen that Web site's IP address. For network address, see: node. NetLingo Classification: Technical Terms
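The anatomy described above can be sketched in a few lines of code. This is an illustrative example only; the addresses and the helper name `split_email` are made up, and the URL handling uses Python's standard-library parser:

```python
from urllib.parse import urlparse

def split_email(address):
    """Split an e-mail address into its username and host name parts."""
    username, _, hostname = address.partition("@")
    return username, hostname

# E-mail: the part before the @ is the username you chose,
# the part after it is the host name of your ISP or e-mail provider.
user, host = split_email("jsmith@example.com")
print(user)   # jsmith
print(host)   # example.com

# Web address (URL): a scheme such as "http", then the domain name.
url = urlparse("http://www.example.com/index.html")
print(url.scheme)     # http
print(url.hostname)   # www.example.com
```

The same `urlparse` call also exposes the path and port, which is why a URL can double as both a "where" (the host) and a "what" (the resource on that host).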
There are more items that can be added to the toolkits for students, but these I will separate by primary (gr 1-3) and intermediate (gr 4-7) levels. Again, it is hard to just mention the contents without going into activities that use the tools to help students build mathematical understanding. Hopefully the tool itself will prompt you to think about some ways to use it. - 25 chart, laminated (usually created in 5 rows of 5) - blank 5-frame (with spaces big enough to put counters on) - blank 10-frame - blank double-10-frame (two blank 10-frames on one card) - set of filled-in 10-frames (1-9, multiple 10’s) - bead bracelet (10 beads in two colours, 5 of each) to be worn draped over the fingers so the beads can be manipulated. Two bracelets may be worn to use for numbers in the teens. - large flattened paper plate or cut-out paper circle for making dot plate configurations with bingo chips - mini bags of small coloured wooden sticks or other small materials for patterning - teeny-tiny Hundreds Tens and Ones (HTO’s) — miniature place value pieces cut out of large plastic canvas (found in crafting stores) - place value cards — overlapping cards that show how, for example, 425 can be pulled apart to reveal 400, 20 and 5 - booklet of mini 100 charts to be coloured in to show multiples (x2, x3, x4, etc.) - metre tape (purchased or created by taping photocopied paper lengths together) - fraction-bar card (a card with a fraction bar in the middle — students use numeral cards to place as the numerator and denominator) - fraction percent circles (two different coloured circles partitioned into hundredths, each cut along one radius and then placed together so they “spin” over each other to show different percent values) As you can see, there are many things that can be used as “tools” in the teaching of mathematics. Creating a toolkit with students is a wonderful way to make lessons engaging.
In mathematics, an operator is some kind of function; if it comes with a specified type of operand as its function domain, it is no more than another way of talking about functions of a given type. The most frequently met usage is a mapping between vector spaces; this kind of operator is distinguished by taking one vector and returning another. For example, consider an enlargement, say by a factor of √2, such as is required to take one size of paper to another. It can also be applied geometrically to vectors as operands. In many important cases, operators transform functions into other functions. We also say an operator maps a function to another. The operator itself is a function, but has an attached type indicating the correct operand and the kind of function returned. This extra data can be defined formally, using type theory; but in everyday usage, saying operator flags its significance. Conversely, functions can be considered operators for which we forget some of the type baggage, leaving just labels for the domain and codomain. To begin with, the usage of operator in mathematics is subsumed in the usage of function: an operator can be taken to be some special kind of function. The word is generally used to call attention to some aspect of its nature as a function. Since there are several such aspects that are of interest, there is no completely consistent terminology, and a single operator might conceivably qualify under more than one heading. These are abstract ideas from mathematics and computer science; they may, however, also be encountered in quantum mechanics. There Dirac drew a clear distinction between q-number or operator quantities, and c-numbers, which are conventional complex numbers. The manipulation of q-numbers from that point on became basic to theoretical physics. Operators are usually described by the number of operands they take; this number is also called the arity of the operator.
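The enlargement example can be made concrete in code: an operator here is just a function that takes one vector and returns another. A minimal sketch (the names `enlarge` and `E` are illustrative, not from any source):

```python
import math

def enlarge(factor):
    """Return an operator: a function that scales a 2-D vector by `factor`."""
    def operate(v):
        x, y = v
        return (factor * x, factor * y)
    return operate

# Enlargement by sqrt(2), the ratio that takes one ISO paper size to the next.
E = enlarge(math.sqrt(2))
print(E((1.0, 0.0)))   # (1.4142135623730951, 0.0)
```

Note that `enlarge` itself maps a number to an operator, while the operator `E` maps vectors to vectors; keeping those two levels distinct is exactly the "attached type" the text describes.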
If an operator has an arity given as n-ary (or n-adic), then it takes n arguments. In programming, outside of functional programming, the -ary terms are more often used than the other variants. See arity for an extensive list of the -ary endings. There are three major systematic ways of writing operators and their arguments: prefix, infix and postfix notation. For operators on a single argument, prefix notation such as −7 is most common, but postfix such as 5! (factorial) or x* is also usual. There are other notations commonly met. Writing exponents such as 2^8 is really a law unto itself, since it can be read as a unary operator ("raise to the eighth power") applied postfix to 2, or as a binary operator whose second operand is written raised on a slant. In some literature, a circumflex is written over the operator name. In certain circumstances, operators are written unlike functions when they have a single argument or operand. For example, if the operator name is Q and the operand is a function f, we write Qf and not usually Q(f); this latter notation may, however, be used for clarity if there is a product, for instance Q(fg). Later on we will use Q to denote a general operator, and xi to denote the i-th argument. Notations for operators include the following. If f(x) is a function of x and Q is the general operator, we can write Q acting on f as (Qf)(x). Operators are often written in calligraphic type to differentiate them from standard functions. For instance, the Fourier transform (an operator on functions) of f(t) (a function of t), which produces another function F(ω) (a function of ω), would be represented with a calligraphic letter, as in (𝓕f)(ω) = F(ω). This section concentrates on illustrating the expressive power of the operator concept in mathematics. Please refer to individual topics pages for further details. Main article: Linear transformation The most common kind of operator encountered are linear operators. In talking about linear operators, the operator is generally signified by the letters T or L.
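The convention of writing Qf for an operator applied to a function can be illustrated with a toy operator that maps one function to another. This is a hypothetical example (the shift-by-one operator is chosen only for simplicity):

```python
def Q(f):
    """A toy operator: maps a function f to the shifted function (Qf)(x) = f(x + 1)."""
    return lambda x: f(x + 1)

def f(x):
    return x * x

Qf = Q(f)        # Qf is itself a function, ready to be evaluated
print(Qf(2))     # (Qf)(2) = f(3) = 9
```

Writing `Qf = Q(f)` mirrors the notation in the text: the operator is applied to the whole function f, and only afterwards is the resulting function evaluated at a point.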
Linear operators are those which satisfy the following conditions: for the general operator T, functions f(x) and g(x), and a constant a, T(f(x) + g(x)) = T(f(x)) + T(g(x)) and T(a f(x)) = a T(f(x)). Many operators are linear. For example, the differential operator and the Laplacian operator, which we will see later. Linear operators are also known as linear transformations or linear mappings. Many other operators one encounters in mathematics are linear, and linear operators are the most easily studied (compare with nonlinearity). An example of a linear transformation between vectors in R2 is reflection: reflection in the first axis, for instance, maps a vector x = (x1, x2) to (x1, −x2). We can also make sense of linear operators between generalisations of finite-dimensional vector spaces. For example, there is a large body of work dealing with linear operators on Hilbert spaces and on Banach spaces. See also operator algebra. Main article: Probability theory Operators are also involved in probability theory, for example expectation, variance and covariance. Calculus is, essentially, the study of one particular operator, and its behavior embodies and exemplifies the idea of the operator very clearly. The key operator studied is the differential operator. It is linear, as are many of the operators constructed from it. Main article: Differential operator The differential operator is fundamentally used in calculus to denote the action of taking a derivative. Common notations include d/dx and y′(x) to denote the derivative of y(x). However, here we will use the notation that is closest to the operator notation we have been using; that is, using D f to represent the action of taking the derivative of f. Given that integration is an operator as well (the inverse of differentiation), we have some important operators we can write in terms of integration.
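The two linearity conditions can be checked numerically for the reflection example. A small sketch (the helper names `T`, `add` and `scale` are illustrative):

```python
def T(v):
    """Reflection in the first axis: (x1, x2) -> (x1, -x2)."""
    return (v[0], -v[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    return (a * v[0], a * v[1])

u, v, a = (1.0, 2.0), (3.0, -4.0), 2.5

# Additivity: T(u + v) == T(u) + T(v)
print(T(add(u, v)) == add(T(u), T(v)))   # True
# Homogeneity: T(a u) == a T(u)
print(T(scale(a, u)) == scale(a, T(u)))  # True
```

A spot check on a handful of vectors is not a proof, of course, but for a candidate operator it quickly exposes any failure of linearity.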
Main article: Convolution The convolution of two functions is a mapping from two functions to a third, defined by an integral as follows: if x1 = f(t) and x2 = g(t), define the operator Q such that (Q(f, g))(τ) = ∫_{−∞}^{∞} f(t) g(τ − t) dt, which we write as (f ∗ g)(τ). Main article: Fourier transform The Fourier transform is used in many areas, not only in mathematics, but in physics and in signal processing, to name a few. It is another integral operator; it is useful mainly because it converts a function on one (spatial) domain to a function on another (frequency) domain, in a way that is effectively invertible. Nothing significant is lost, because there is an inverse transform operator. In the simple case of periodic functions, this result is based on the theorem that any continuous periodic function can be represented as the sum of a series of sine waves and cosine waves: f(t) = a_0/2 + Σ_{n=1}^{∞} (a_n cos(nωt) + b_n sin(nωt)). When dealing with a general function R → C, the transform takes an integral form: F(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{−iωt} dt. Main article: Laplace transform The Laplace transform is another integral operator and is involved in simplifying the process of solving differential equations. Given f = f(t), it is defined by: F(s) = ∫_{0}^{∞} e^{−st} f(t) dt. Three main operators are key to vector calculus: the gradient ∇, which at a point in a scalar field forms a vector that points in the direction of greatest change of that scalar field; the divergence, an operator that measures a vector field's tendency to originate from or converge upon a given point; and the curl, a vector operator that measures a vector field's tendency to rotate about a point. Main article: Operator (physics) In physics, an operator often takes on a more specialized meaning than in mathematics. Operators as observables are a key part of the theory of quantum mechanics. In that context, operator often means a linear transformation from a Hilbert space to another, or (more abstractly) an element of a C*-algebra.
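The convolution integral has a direct discrete analogue, which makes the operator easy to experiment with in code. A sketch under the assumption that the inputs are short finite sequences (real work would use a library routine such as numpy.convolve):

```python
def convolve(f, g):
    """Discrete convolution: out[n] = sum over all i + j == n of f[i] * g[j]."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

print(convolve([1, 2, 3], [0, 1]))   # [0, 1, 2, 3]
```

Convolving with [0, 1] simply shifts the sequence by one place, the discrete counterpart of convolving a function with a delayed impulse.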
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. January 27, 1997 Explanation: One of the most spectacular solar sights is a prominence. A solar prominence is a cloud of solar gas held above the Sun's surface by the Sun's magnetic field. The Earth would easily fit under one of the loops of the prominence shown in the above picture. A quiescent prominence typically lasts about a month, and may erupt in a Coronal Mass Ejection (CME), expelling hot gas into the Solar System. Although thought by many to be related to the magnetic field, the energy mechanism behind a solar prominence is still unknown. Authors & editors: NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/GSFC & Michigan Tech. U.
American Friends of Tel Aviv University Tel Aviv University researchers investigate sediment slides and coral reefs to study historic earthquake patterns In the wake of the devastating loss of life in Japan, the urgent question is where the next big earthquake will hit. To answer it, geologist Prof. Zvi Ben-Avraham and his doctoral student Gal Hartman of Tel Aviv University's Department of Physics and Planetary Sciences in the Raymond and Beverly Sackler Faculty of Exact Sciences are examining coral reefs and submarine canyons to detect earthquake fault zones. Working with an international team of Israelis, Americans and Jordanians, Prof. Ben-Avraham and his team are developing a new method to determine what areas in a fault zone region are most at risk. Using a marine vessel, he and his colleagues are surveying a unique geological phenomenon of the Red Sea, near the coastal cities of Eilat and Aqaba - but their research could be applied anywhere, including Japan and the west coast of the U.S. Recently published in the journal Geo-Marine Letters, the research details a “mass wasting” of large detached blocks and collapsed walls of submarine canyons along the gulf region of the Red Sea. They believe the geological changes were triggered by earthquake activity. What's next for San Andreas? The team has created the first underwater map of the Red Sea floor at the head of the Gulf of Aqaba, and more importantly, identified deformations on the sea floor indicating fault-line activity. They not only pinpointed the known fault lines along the Syrian-African rift, but located new ones that city engineers in Israel and Jordan should be alert to. “Studying fossil coral reefs and how they've split apart over time, we've developed a new way to survey active faults offshore by looking at the movement of sediment and fossil structures across them,” says Hartman. “What we can't say is exactly when the next major earthquake will hit. 
But we can tell city engineers where the most likely epicenter will be.” According to Hartman, the tourist area in the city of Eilat is particularly vulnerable. While geologists have been tracking underwater faults for decades, the new research uniquely tracks lateral movements across a fault line (a “transform fault”) and how they impact the sediment around them. This is a significant predictive tool for studying the San Andreas Fault in California as well, says Hartman. The research is supported by a USAID grant through the Middle East Regional Cooperation (MERC) program. Marching orders for city engineers Aboard a marine vessel that traversed the waters of Israel and Jordan, surveying depths of up to 700 meters, the researchers analyzed the structure of the seabed and discovered active submarine canyons, mass wasting, landslides, and sediment slumps related to tectonic processes and earthquake activity. “There are several indicators of seismic activity. The most significant is the location of the fault. Looking at and beneath the seafloor, we saw that the faults deform the upper sediments. The faults of the Red Sea are active. We managed to find some other faults too and now know just how many active faults are in the region. This should help make authorities aware of where the next big earthquake will strike,” says Hartman. What made their study particularly novel is that they used the offset along linear structures of fossil coral fringing reefs to measure what they call “lateral slip across active faults.” With this knowledge, the researchers were able to calculate total slip and slip rates, and how active the fault has been. “We can now identify high-risk locations with more certainty, and this is a boon to city planners. It's just a matter of time before we'll need to test how well cities will withstand the force of the next earthquake. It's a matter of proper planning,” concludes Hartman.
American Friends of Tel Aviv University (www.aftau.org) supports Israel's leading, most comprehensive and most sought-after center of higher learning. Independently ranked 94th among the world's top universities for the impact of its research, TAU's innovations and discoveries are cited more often by the global scientific community than all but 10 other universities. Internationally recognized for the scope and groundbreaking nature of its research and scholarship, Tel Aviv University consistently produces work with profound implications for the future.
Leaves that are falling with the change of seasons are a cheap and important source of organic material for home vegetable and flower gardens. Anyone who has walked through the woods and kicked the leaves probably has noticed the rich layer of humus that has formed on the ground as a result of many years of leaves falling and decomposing. This is nature's way of returning organic matter to the soil. The same is true of a homemade compost pile. Fallen leaves should be raked and put in a compost pile, then allowed to decay before being worked into the garden soil or added as mulch. Advantages of organic material added to the soil from a compost pile are improved tilth or workability of the soil, improved water-holding capacity, better aeration, prevention of crusting and increases in beneficial organisms such as earthworms. All make for better plant growth. When used as mulch around plants, the decomposed compost material helps to control weeds and conserve soil moisture. Minimal expense and few extra materials are needed to build a composting bin. The structure should keep the leaves from blowing away and allow air to move freely through the pile. Wire fencing material is ideal. The compost pile should be a size that can be easily handled. A 6x6 or 8x8 foot structure is common and manageable. One side should be open for adding materials, easy turning of the pile and removal of the compost. As leaves are collected and added to the compost pile, best results are obtained by adding material in layers. Begin with a layer of leaves, followed by a sprinkling of soil, then a sprinkling of fertilizer. Repeat the layers as the pile grows in size. Including soil in the mix will add millions of microscopic organisms to hasten the breakdown of leaves. Fertilizer high in nitrogen, such as ammonium nitrate, will provide supplemental food to the microorganisms and speed decay. The compost pile should be kept moist at all times.
Other ways of speeding decomposition include shredding the leaves before composting, turning the pile frequently after it goes through its initial heat stage and covering the pile with clear or black plastic during the winter to raise temperatures. If it is covered with plastic, be sure to remove the plastic and wet the pile occasionally. Even if leaves are simply piled up and allowed to decay naturally without added effort or structure, they eventually will decompose and be an important source of organic matter. It will take longer, however. Once the organic content of the garden soil has been increased, working the soil becomes a real pleasure rather than a task. For further information contact Alan Vaughn at (504) 433-3664 or (504) 278-4234.
Daily exposure to air pollution is linked to asthma, stroke, heart disease, lung cancers and dementia. For many people living in cities, exposure to high levels of pollution seems inevitable until regulations on emissions are tightened. But until that happens, there are things individuals can do to reduce daily exposure to pollution, both outdoors and indoors. IBTimes UK spoke to Prashant Kumar, who studies urban air pollution at the University of Surrey, about what air pollution is, why it is harmful and what people can do to avoid it. What is air pollution made up of? Basically you have a number of pollutants that are part of the regulations. You've got the particulate matter – very fine particles of carbon – and they come in different sizes. You've got the PM 10 (10 micrometres across) and PM 2.5 (2.5 micrometres across). The finer particles are the most harmful. And then you've got nitrogen dioxide. These are a big problem in large cities like London. In the first few days of the year, the whole quota for the limit of nitrogen dioxide for the year was breached. These are the key pollutants. Then you've also got ozone, carbon monoxide, sulphur dioxide and lead. Where do these pollutants come from and where do they end up? Some of them are what's called a primary pollutant. Primary in the sense that they come direct from the sources. That's PM 10, PM 2.5, the gases sulphur dioxide, nitrogen dioxide or carbon monoxide. They are released from sources such as vehicle exhausts. Then there are secondary pollutants – ozone is one. It's formed when nitrogen oxides react with the oxygen you have in the atmosphere in the presence of sunlight. So ozone is not coming out directly from the source but the pollutants are helping to form it, so it might be more diffuse. You recently did a study on exposure to air pollution while commuting – what did you find?
We divided the whole of London into four categories based on income. The 10% most deprived, 10% least deprived and then two middle categories, then saw what sort of mode of transport people use to go to work. It wasn't surprising in a way that we found rich people preferred to use the car, compared with the most deprived who use buses. When we look into exposure we found that if you're using a bus then you might end up having a longer time in the bus for the same journey. And in a bus you also have higher exposure to pollutants. What was the pollution like for people who commuted by car? Car drivers were exposed to the least air pollution on their journeys. Cars have better filtering systems compared with buses or the underground. But if you look at it from the perspective of per capita emissions then car drivers are responsible for the most emissions. If you're in a bus then there are usually tens of people using it, so the per capita emissions are much less. You also did a study recently on babies' and children's exposure to pollution in prams – why did you decide to look into this? The motivation for this work was because I used to drop my daughter off at school and I also have a son who's now one and a half. We had to walk past certain red lights on the way. Sometimes you can feel the smoke on your face when you're standing there and the vehicles are idling. This was on a personal basis but we had also been doing the experimental assessment work for a long time, so we had a lot of information on the exposure in cars and what happens at traffic lights. What did you find? We looked at what sort of exposure the babies get compared with the parents. Their bodies aren't fully developed and they don't have fully formed immune systems compared with adults. We found the places where they need the most protection – larger roads and intersections, where traffic is standing at red lights. So what can people do to limit their children's exposure to air pollution? 
This is a complicated question but there are several things that can be done to reduce exposure. In the case of babies' exposure, at the very least you can use the cover on the pram. This is especially helpful at the pollution hotspots, when you can see or smell the fumes. This will not solve the whole problem but it can act as a barrier and create a layer of defence. What about commuters? We found in the London commuting study that at the traffic lights in a car, if your windows are closed then essentially your car becomes a gas chamber. You are sucking the polluted air inside and it has nowhere else to go. Equally, if you are on a free-flowing road and you open up the windows then this allows you to flush out the pollution. And for pedestrians? For the people exposed to pollution on the street, one of the things you can do is to avoid those polluted routes. Take the routes that are greener, or the smaller roads further away from the main roads. If there is space or greenery such as a hedge in between you and the road then this could help too – we have another research project at the moment looking into this. But if you just have a very narrow footpath next to a busy road, and on the side you have buildings, then this doesn't let the emissions disperse very well. Do pollution masks work? Whatever you put in between you and the pollution will help. That way you are not allowing these emissions to go into your nose directly. They're passing through a filter that will stop some of these pollutants going in. I'm not sure how effective these masks are when we talk about ultra-fine particles less than 100 nanometres. They are really tiny so may be passing through the filter. But masks do work well with PM 10 and PM 2.5, so they might be doing something good to stop people inhaling those emissions. What can people do to minimise their indoor pollution exposure? There are a number of things people can think about. The major source is often the kitchen.
Cooking is a major source of indoor pollution: whether you are using gas or oil, it produces a lot of small, damaging particles. The best way to reduce your exposure is ventilation – that's the key. Either you have an extractor sucking out the emissions, or you open the window to push the polluted air outside, where it is replaced by clean air. If you have the kitchen door open to the rest of the house, cooking can also spread the indoor pollution through your whole house. The smell of something cooking is carried by these particles, which are pollutants – the smell itself may not be unpleasant, but some of the chemical constituents may not be good for your health.

Is there anything else people can do to limit exposure?

Of course there is tackling the source itself. Policymakers are working on how emissions can be reduced. The key thing, in my opinion, is for individuals to be more vigilant and to become educated about the problem. Awareness is much better than it was several years ago, but plenty more can be done to make people aware so they can devise their own actions to protect themselves from exposure to high levels of pollution.
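The bus-versus-car comparison in the interview above turns on simple arithmetic: total tailpipe emissions divided by the number of travellers sharing them. A minimal sketch, using illustrative figures that are assumptions rather than measurements from the London study:

```python
# Illustrative sketch of the per-capita emissions point made above.
# The emission rates and occupancies below are assumed example values,
# NOT figures from the study discussed in the interview.

def per_capita_emissions(grams_per_km: float, occupants: int) -> float:
    """Emissions attributable to each traveller, in g/km per person."""
    return grams_per_km / occupants

# Assumed tailpipe emissions per vehicle-km and typical occupancy:
car_g_per_km, car_occupants = 180.0, 1    # single-occupant car
bus_g_per_km, bus_occupants = 1300.0, 40  # busy urban bus

car = per_capita_emissions(car_g_per_km, car_occupants)
bus = per_capita_emissions(bus_g_per_km, bus_occupants)

print(f"car: {car:.1f} g/km per person")  # 180.0
print(f"bus: {bus:.1f} g/km per person")  # 32.5
```

Even with a bus emitting several times more than a car per kilometre, dividing by tens of passengers leaves each bus rider responsible for a small fraction of a solo driver's emissions.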
The Guadalupe fur seal (Arctocephalus townsendi) is the only species of the genus Arctocephalus found north of the equator. Small populations occur on San Benito Island and San Miguel Island, though the vast majority of this non-migratory species is found on Guadalupe Island off Mexico. Although the species is slowly recovering from near extinction (the population was reduced to a few dozen individuals by commercial sealing), it remains the least studied of all the fur seals. It is listed by the IUCN as "near threatened" and is at risk from entanglement, marine pollution, loss of habitat and other environmental factors. Unlike most seals, the Guadalupe fur seal is a solitary, non-social animal. There is a high level of sexual dimorphism, with males being larger than females. Males are polygamous and will, on average, breed with between 5 and 10 females in a season. As with other species, the male is territorial and will return to the same breeding site for several years in a row, guarding his smallish piece of turf with vocal displays of coughing and barking. They prefer to breed in caves rather than on beaches, though there is some speculation that this is a result of persecution during the commercial seal hunts that nearly wiped the species out. Females arrive a few weeks after the males and give birth shortly afterwards; birthing runs from mid-June until the end of July. Pups are born black but lighten to a tan colour as they mature. The gestation period is around a year and, as with the Cape fur seal (Arctocephalus pusillus), pups take a relatively long time to wean – around 9 months. INTERESTING FACTS ABOUT GUADALUPE FUR SEALS - In the late 1800s they were thought to be extinct; in 1928 two were spotted off the coast of Mexico. - Total protection under both US and Mexican legislation has seen these seals recover to an estimated population of around 7,000, though the gene pool is strained by a lack of diversity.
- In 1997, the first pup was born on San Miguel Island in California. - Guadalupe fur seals are estimated to live between 17 and 20 years. Their main predators are sharks and killer whales. - Their Latin name translates as "bear headed." - They are the rarest of the fur seals. - Feeding almost exclusively at night, they dive to a depth of around 20 m to catch their favourite prey of squid and fish. - In 1992, El Niño and Hurricane Darby combined to cause 33% pup mortality – not good for a species recovering from the brink of extinction.
The Galapagos Penguin (Spheniscus mendiculus) is a penguin endemic to the Galapagos Islands and the only penguin that lives north of the equator in the wild. It can survive there thanks to the cool temperatures brought by the Humboldt Current and to cool water drawn up from great depths by the Cromwell Current. The Galapagos Penguin is one of the banded penguins, the other species of which live mostly on the coasts of Africa and mainland South America. The species is endangered, with an estimated population of around 1,500 individuals in 2004, according to a survey by the Charles Darwin Research Station. The population underwent an alarming decline of over 70% in the 1980s but is slowly recovering. It is therefore the rarest penguin species (a status often falsely attributed to the Yellow-eyed Penguin). Population levels are influenced by the El Niño Southern Oscillation, which reduces the availability of shoaling fish, leading to low reproduction or starvation. Anthropogenic factors (e.g. oil pollution, fishing by-catch and competition) may be adding to the ongoing decline of the species. On Isabela Island, introduced cats, dogs, and rats attack penguins and destroy their nests. In the water, they are preyed upon by sharks, fur seals, and sea lions.
World Water Day 2001: Oral health: Many communities worldwide lack sufficient natural fluoride in their drinking water to prevent caries. Because of the powerful benefits of the right amount of fluoride, water fluoridation programmes (Box 2) have been established in many countries since the 1930s, when fluoride's ability to reduce dental caries was first recognized. Box 2: What is a fluoridation programme? A fluoridation programme is the artificial, controlled addition of a fluoride compound to a public water supply in order to adjust its fluoride concentration to an optimal level for the prevention of dental caries, usually around 1 mg/litre. A fluoride-containing chemical is added to raise the total (raw water plus dosed) level to the pre-determined concentration. The chemical is chosen for its ability to dissolve in water, its low cost and its lack of undesirable side effects. Fluoride is odourless and tasteless, so there is no perceptible change to the water. The chemicals usually used for fluoridation are hexafluorosilicic acid, disodium hexafluorosilicate or sodium fluoride. Fluoridation is carried out at water treatment works. A fluoridation programme requires good maintenance and a specially designed plant: fluoridation chemicals are corrosive in concentrated form and must be stored and handled according to safe working practices. Fluoridation of low-fluoride water supplies helps to maintain optimal dental tissue development and dental enamel resistance against caries attack during the entire life span. Fluoride in drinking water acts mainly through its retention in dental plaque and saliva. Frequent consumption of drinking water and products made with fluoridated water maintains intra-oral fluoride levels. People of all ages, including the elderly, benefit from community water fluoridation.
For example, the prevalence of caries on the root surfaces of teeth is inversely related to fluoride levels in the drinking water: in other words, within the non-toxic range for fluoride, the higher the level of fluoride in water, the lower the level of dental decay. This finding is important because, with increasing tooth retention and an aging population, the prevalence of dental root caries would be expected to be higher in the absence of fluoridation. Fluoridation of water supplies, where possible, is the most effective public health measure for the prevention of dental decay. Water fluoridation is a multi-professional activity in which engineers, chemists, physicians, nutritionists and dentists all play important roles. The efficiency of fluoridation programmes, and their acceptability to communities, depends on the general state of dental health, on whether there is good access to and attendance for free dental health care for children and young people, and on standards of diet and oral hygiene. The consensus among dental experts is that fluoridation is the single most important intervention to reduce dental caries, not least because water is an essential part of the diet for everyone in the community, regardless of their motivation to maintain oral hygiene or their willingness to attend or pay for dental treatment. In some developed countries the overall health and economic benefits of fluoridation may be small, but they are particularly important in deprived areas, where water fluoridation may be a key factor in reducing inequalities in dental health.
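The dosing described in Box 2 is, at its core, a mass-balance calculation: how much fluoride-containing chemical raises the raw-water concentration to the target level for a given daily flow. The sketch below shows that arithmetic for hexafluorosilicic acid; the flow and raw-water figures are illustrative assumptions, not operational guidance:

```python
# Hedged sketch of the dosing arithmetic behind a fluoridation programme
# (Box 2): how much chemical is needed to raise raw water to the target
# of ~1 mg/litre fluoride. All input figures are illustrative assumptions,
# not operational guidance.

# Fluoride mass fraction of hexafluorosilicic acid (H2SiF6):
# six fluorine atoms (6 x 18.998 g/mol) out of ~144.09 g/mol total.
F_FRACTION_H2SIF6 = (6 * 18.998) / 144.09  # ~0.79

def dose_kg_per_day(raw_mg_per_l: float, target_mg_per_l: float,
                    flow_m3_per_day: float,
                    fluoride_fraction: float = F_FRACTION_H2SIF6) -> float:
    """Daily mass of (pure) fluoridation chemical needed, in kg."""
    deficit_mg_per_l = max(target_mg_per_l - raw_mg_per_l, 0.0)
    # 1 m^3 = 1,000 L and 1 kg = 1,000,000 mg
    fluoride_kg = deficit_mg_per_l * flow_m3_per_day * 1000 / 1e6
    return fluoride_kg / fluoride_fraction

# Raw water at 0.2 mg/L, 1.0 mg/L target, works treating 10,000 m^3/day:
print(round(dose_kg_per_day(0.2, 1.0, 10_000), 1))  # → 10.1
```

Note the division by the fluoride mass fraction: because only part of the compound's mass is fluoride, more chemical must be dosed than the fluoride deficit alone would suggest. Water already at or above the target receives no dose.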
Freshwater Fish - Species. Species-specific regulations: Freshwater Fishing License required. Guide to Freshwater Fishes (Adobe PDF - 3MB) Pumpkinseed (Lepomis gibbosus) - Native Description: (Anatomy of a Fish) The pumpkinseed is easily recognized by the wavy, iridescent blue lines that radiate from the mouth along the side of its head. The sides of the body fade from olive, covered with gold and yellow flecks, to blue-green covered with orange spots, to a yellow or orange belly. The dorsal, anal and caudal fins are decorated with brown wavy lines or orange spots. The pectoral fin is long and pointed and usually extends well past the eye when bent forward. The gill cover, or operculum, is stiff, short and mostly black, with a light-colored edge of bright orange to red-orange. The mouth is small. Pumpkinseeds have pharyngeal teeth – molar-shaped teeth located in the throat area of the fish. Average Length: 4-6 inches Average Size: 2-4 ounces South Carolina State Record: 2 pounds 4 ounces (1997) Life Expectancy: Approximately 8 years Pumpkinseeds can survive and reproduce in a variety of habitat types, including pools and backwaters of streams, rivers, ponds and reservoirs, over a variety of bottoms. They prefer the vegetated areas of these habitats. - Diet: aquatic insects, mussels, snails and crayfish. - Pumpkinseed begin to spawn when water temperatures exceed 70 degrees Fahrenheit, around late spring to early summer. - Males construct nests in shallow water, either singly or in loose groups. - Females can produce up to 14,000 eggs during a laying season; however, they lay 2,000-3,000 sticky eggs at one time in the bottom of the sandy nest. - The male fertilizes the eggs, guards them throughout incubation and protects the young during their early development. Pumpkinseed often hybridize with other sunfish species and may spawn more than once if conditions are favorable. They rarely reach a size that makes them recreationally important to anglers.
Commonly Mistaken Species: one species of fish is commonly mistaken for this species. References: Rohde, F. C., Arndt, R. G., Foltz, J. W., and Quattro, J. M. 2009. Freshwater Fishes of South Carolina. University of South Carolina Press, Columbia, South Carolina. Wildlife and Freshwater Fisheries Division. 2009. South Carolina Guide to Freshwater Fishes. Fish illustration by Duane Raver.
Rising sharply from a narrow summit area, these majestic mountains stand prominently above their surroundings. However, many of us have yet to learn the difference between the tallest and the highest mountains. Although the terms sound the same and we often confuse the two, there is a difference: the tallest mountains are measured from the base of the mountain to its peak, whereas the highest mountains are measured from sea level to the peak. See the difference? Looking beyond our planet, the solar system holds many mountains, peaks and ridges far taller than those on Earth. These extraterrestrial mountains may be the result of crater impacts, intense volcanic activity and so on – none of which we would want happening on our home planet. 10. Makalu The fifth highest mountain in the world above sea level, at an altitude of 8,481 m, Makalu is located on the Nepal-China border. It has the unique shape of a four-sided pyramid and lies only 19 km southeast of Mount Everest. The first attempts to climb the mountain began in 1954, but the first successful ascent of the summit was made in 1955, during a French expedition, by Lionel Terray and Jean Couzy. 9. Lhotse At 8,516 m above sea level, Lhotse sits on the border of China and Nepal and is connected to Everest through the southern mountain pass. The south face of Lhotse has seen many failed attempts and fatalities, with very few successful ascents. The main summit of Lhotse was first climbed in 1956 by the Swiss team of Ernst Reiss and Fritz Luchsinger. The summit of Lhotse Middle, however, remained the highest unclimbed point on Earth until 2001, when a Russian expedition finally made the first ascent. 8. Kangchenjunga The third highest mountain in the world at 8,586 m above sea level, Kangchenjunga is located on the India-Nepal border in the Himalayan Range. Its five peaks are collectively called Kangchenjunga, meaning "The Five Treasures of Snows".
The earliest attempts to reach the summit started in 1848, and it was not until 1955 that Joe Brown and George Band made the first ascent. The landscape of Kangchenjunga is shared by four countries: China, India, Nepal and Bhutan. 7. K-2 The second highest mountain in the world, K-2, or Godwin Austen, has a peak elevation of 8,611 m above sea level and lies in the northwest of the Karakoram Range. Known as the savage mountain due to its high fatality rate, K-2 sits on the border of China and Pakistan. Since it is almost impossible to climb K-2 from China, it is mostly climbed from the Pakistani side. K-2 was named by Thomas Montgomerie, who surveyed the Karakoram and labeled its prominent peaks K-1, K-2, K-3, K-4 and K-5. The earliest attempts to climb the savage mountain began in 1902, and the first successful ascent was finally made in 1954 by Lino Lacedelli and Achille Compagnoni during an Italian expedition. One interesting fact about K-2 is that no one has ever climbed the summit during the winter season. 6. Mount Everest The world's highest mountain rises in the eastern Himalayas between Nepal and Tibet. A young limestone mountain not yet worn down by erosion, it has two peaks, one of which reaches a height of 8,848 m. Everest is covered in snow except for its bare, gale-swept summits. Many glaciers feed the rivers that rise near the Everest base. The mountain got its name in 1865 in honor of Sir George Everest, the British surveyor general of India who established the location and approximate altitude of the mountain. Its Tibetan name, Chomolungma, means "goddess mother of the world". Climbing attempts began in the early 1920s, and several expeditions came within 300 m of the top. Success came with the development of special equipment to cope with the low oxygen supply, high winds and extreme cold.
On May 29, 1953, Edmund Hillary of New Zealand and Tenzing Norgay, a Nepalese Sherpa tribesman, became the first climbers to reach the summit of Everest. 5. Mauna Kea The tallest mountain in the world, Mauna Kea is a large dormant volcano on the north-central part of the island of Hawaii, about 43 km northwest of Hilo. Measuring 4,205 m above sea level, Mauna Kea extends an additional 5,547 m down to the ocean floor; from base to peak, it is thus the tallest individual mountain in the world. Mauna Kea was last active more than 4,000 years ago; its snow-covered cone is used for skiing and is also the site of the Mauna Kea Observatory, the highest astronomical observatory in the world. The mountain's upper slopes have caves where ancient Hawaiians dug basalt for tools. The lower slopes support large cattle ranches and coffee plantations. In Hawaiian legend the mountain is regarded as the home of Poliahu, the snow goddess of Mauna Kea. 4. Maxwell Montes Rising to a height of 11,000 m, Maxwell Montes is the highest point on the surface of the planet Venus. It is located on the northern highlands of Ishtar Terra, and the origin of the mountain belt is controversial, with several competing hypotheses for its formation. Maxwell Montes was discovered in 1967 by scientists at the American Arecibo Radio Telescope in Puerto Rico. The mountain is named after the mathematician and physicist James Clerk Maxwell, whose work on electromagnetic waves made radar possible and ultimately led to the exploration of the surface of Venus. 3. Boösaule Montes Boösaule Montes is thought to be the tallest non-volcanic mountain in the Solar System. It is located on Io, the fourth largest satellite in the Solar System and the innermost of Jupiter's large Galilean moons. The geology of Io is quite interesting: it has about 400 active volcanoes and over 150 mountains on its surface.
Boösaule Montes lies northwest of the large Pele plume deposit and reaches an elevation of 17,500 m. The mountain got its name from a cave in Greek mythology where Epaphus, son of Zeus, was born. 2. Equatorial Ridge Located on the dark hemisphere of Iapetus, the third largest satellite of the planet Saturn, the Equatorial Ridge runs along the center of the hemisphere, with some isolated peaks as high as 20 km. The ridge was discovered by the Cassini spacecraft on December 31, 2004. Its formation is still debated; however, it is agreed that the ridge is ancient, as it is heavily cratered. The prominent bulge of the ridge gives Iapetus a walnut-like shape. 1. Olympus Mons The tallest mountain discovered in our Solar System so far, Olympus Mons stands 24 km above a smooth plain on the planet Mars. Approximately three times taller than Mount Everest, Olympus Mons was discovered by the US space probe Mariner 9 in 1971, when it sent back pictures of four immense volcanic mountains. The tallest of these shield volcanoes, Olympus Mons dwarfs the largest such feature on Earth, Mauna Kea. The extraordinary height of Olympus Mons owes to the absence of mobile tectonic plates on Mars: the mountain remained fixed over a stationary hotspot and kept discharging lava until it reached its enormous height. That enormous weight presses the base of the mountain about 2 km down into the Martian crust.
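The "tallest versus highest" distinction that opens this list can be made concrete in a few lines. The summit elevations are the approximate figures quoted above; Mauna Kea's submerged portion is taken as 5,547 m, and treating Everest's base as sea level is a simplifying assumption:

```python
# Sketch of "highest" (summit above sea level) vs "tallest" (base to peak).
# Figures are the approximate values quoted in the article; base depths
# are simplifying assumptions for illustration.

mountains = {
    # name: (summit elevation above sea level in m, submerged base depth in m)
    "Mount Everest": (8848, 0),  # base taken at sea level for simplicity
    "Mauna Kea": (4205, 5547),   # most of the mountain lies underwater
}

def height_above_sea(name: str) -> int:
    """'Highest' measure: summit elevation above sea level."""
    return mountains[name][0]

def base_to_peak(name: str) -> int:
    """'Tallest' measure: total height from base to peak."""
    summit, submerged = mountains[name]
    return summit + submerged

highest = max(mountains, key=height_above_sea)  # Mount Everest (8,848 m)
tallest = max(mountains, key=base_to_peak)      # Mauna Kea (9,752 m total)
print(highest, tallest)
```

The same mountain can lose one contest and win the other: Mauna Kea's 4,205 m summit is barely half of Everest's, yet adding its underwater base makes it the taller mountain from base to peak.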
Source: Image of ILP, Fair Use, https://mncis.intocareers.org In this tutorial, we'll discuss the connections between individualized learning and competency-based education. We'll begin by defining the elements of individualized learning and by exploring some best practices. We'll then talk about individualized learning plans and the benefits of using these plans. Finally, we'll connect individualized learning with CBE. Let's get started. First, what is individualized learning? Individualized learning involves customizing the content, pacing, and resources that you're using in your classroom based on student interests and abilities. Adaptive learning is one form of individualized learning. Using this method, technology tools help students progress along their learning pathways according to their specific learning needs. This already connects to competency-based education, because in this format a student demonstrates mastery of the current concept, and then the computer moves them along their learning path, perhaps to a more challenging piece of content or more in-depth information. On the other hand, if a student demonstrates that they are having trouble with the current concept, the computer recognizes this as well; it might send them to alternate presentations of that content, or provide them with more practice opportunities until they do master the material. Here are some best practices in individualized learning. Teachers and students need to collaborate to set clear and specific goals. These goals need to be challenging, but they still need to be realistic for each individual learner. These goals should be dynamic – that is, we should be able to revise them as necessary. When you are doing regular reviews of goals and assessments of student progress in the classroom, these goals should be modified if that is appropriate. Individualized learning should promote motivation and independence.
And it should promote students' ownership of their own learning and progress. Finally, individualized learning should involve parents in all different areas: in goal setting, in supporting students' learning, and in communicating about student progress, for example. An individualized learning plan, or ILP, is a personalized document that allows students both to set goals and to track their progress. Items that might be included on an individualized learning plan include grades, skills, interests, current and past activities, and test scores. Again, teachers and students collaborate to determine what will go on the plan. Then students, parents, and teachers use the ILP to help plan for the future. This might include post-high-school activities, like specific educational goals or career plans. Thinking about college and career readiness is definitely aligned with the same emphasis as it appears in the Common Core State Standards. Throughout this process, we're helping students to develop profiles that will inform their decision-making in middle school, in high school, and beyond – always keeping in mind that the end goal is the achievement of the goals they have set for themselves, with the help of their teachers and parents. What might be some of the benefits of using individualized learning plans? Well, ILPs can help students to explore careers that are aligned with both their interests and their abilities. They provide students with the opportunity to practice setting goals and tracking their own progress as they move through their school years. ILPs may incorporate service learning projects and other extracurricular activities, and they help students to explore postsecondary options that are aligned with their goals. Finally, they provide one comprehensive place to assemble and store students' educational history, including assessment results and information from school administrators and counselors. Your district may employ its own unique ILP template.
If you're interested in helping your students to create individualized learning plans but a template isn't provided for you, there are some great templates available online. If your school employs a college and career readiness program, that program might also support the development of ILPs. Here's an example of an ILP that has been generated within a college and career readiness program. Finally, let's explore the connections between individualized learning and competency-based education. Recall that these are the five design principles of competency-based education as developed by iNACOL. Principle one: students advance upon mastery. This is related to the goal-setting elements of individualized learning. Goals need to be clear, specific, and challenging, yet also realistic. Design principle two for CBE states that explicit and measurable learning objectives empower students. This also relates to the idea of goal setting. Students are more likely to be motivated, and to take ownership of their own learning, when clear learning objectives or goals are set – and this is especially true when they've had a hand in setting those goals. CBE design principle three reads: assessment is meaningful and a positive learning experience for students. In individualized learning, remember that goals need to be dynamic. That means these goals should be able to be revised as necessary, and students are provided with multiple opportunities both to review their goals and to review their progress toward them. Design principle four states that students need to receive rapid, differentiated support. Individualized learning is differentiated by its very nature. Also, ILPs allow us to formally plan for student support based on individual learning needs. Design principle five in CBE states that the learning outcomes emphasized need to include the application and creation of knowledge. In individualized learning, the focus is definitely on mastery of competencies.
And we create ILPs around students' specific skills and abilities. The particular activities that we design as part of this process can definitely focus on the application and creation of knowledge. In this tutorial, we identified the elements of individualized learning and explored some best practices. We then talked about individualized learning plans and the benefits of using ILPs. Finally, we examined the connections between individualized learning and competency-based education. Here's a chance for you to stop and reflect. Is your school currently using a college and career readiness program that would already help to create individualized learning plans for students? If not, consider searching online for some templates that might be useful to you and your students. For more information on how to apply what you learned in this video, please view the additional resources section that accompanies this video presentation. The additional resources section includes hyperlinks useful for applications of the course material, including a brief description of each resource. Thanks for joining me today. Have a great day. (00:00 - 00:25) Introduction (00:26 - 01:22) Elements of Individualized Learning (01:23 - 02:10) Best Practices in Individualized Learning (02:11 - 03:04) Individualized Learning Plans (03:05 - 03:41) Benefits of Individualized Learning Plans (03:42 - 04:16) Sample ILP (04:17 - 05:59) Individualized Learning and CBE (06:00 - 06:18) Review (06:19 - 06:53) Stop and Reflect Personalization vs Differentiation vs Individualization Report This report and accompanying slideshow by Personalize Learning is a useful overview of the differences and overlaps between three strategies that are often confused with one another despite being very different. It is a helpful tool for understanding those important differences.
edSurge: Individualized Learning This article provides helpful explanations as well as tech tools that teachers can use to support individualized teaching and learning in the classroom.
Photo: Scripps Institution of Oceanography Researchers at the Scripps Institution of Oceanography (UC San Diego) have created an innovative new tool that blends robotic technology with oceanography to answer questions about one of the ocean's most abundant life forms – plankton. Planktonic organisms serve many important functions in the ocean, including fueling ocean food webs and cycling essential nutrients. However, there is still much to be discovered about their movement, dispersal, and impacts on larger organisms and ecosystems. Studying the movement of individual plankton has remained particularly elusive to ocean scientists because of the organisms' small size. Researchers have hypothesized that plankton may form dense patches under the ocean surface as a way to feed, reproduce, and seek protection from predators. The current study by Jaffe and colleagues at Scripps sought to answer this question in a unique way. A new technology, known as "miniature autonomous underwater explorers" (M-AUEs), was developed by Jules Jaffe and colleagues at Scripps. M-AUEs are robots approximately the size of a grapefruit, capable of measuring temperature, depth, and other parameters in the ocean every 12 seconds. They are small, inexpensive, and can be tracked underwater using acoustic signals. M-AUEs are programmed to mimic plankton swimming behavior by adjusting their buoyancy in response to internal, subsurface ocean waves. To test the hypothesis that plankton form aggregations in the ocean, researchers conducted a five-hour experiment and tracked the movement of 16 M-AUEs deployed near La Jolla, California. In response to ocean movement, the M-AUEs formed aggregations in the warm waters of internal wave troughs and subsequently dispersed over the wave crests. This research provides evidence that plankton are, in fact, capable of using the physical dynamics of the ocean to congregate into "swarms" that may serve important biological functions.
Innovative research in ocean technologies, like the development of M-AUEs, may provide valuable information on the movement of larvae between habitats, tracking oil spill dispersal, and monitoring pelagic organism populations. 1. Jules S. Jaffe, Peter J. S. Franks, Paul L. D. Roberts, Diba Mirza, Curt Schurgers, Ryan Kastner, Adrien Boch. A swarm of autonomous miniature underwater robot drifters for exploring submesoscale ocean dynamics. Nature Communications, 2017; 8: 14189 DOI: 10.1038/ncomms14189
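The depth-holding behaviour that lets the M-AUEs mimic plankton can be sketched as a toy control loop: sample every 12 seconds (as the real robots do) and nudge buoyancy toward a programmed target depth while an idealized internal wave displaces the surrounding water. Everything here – the wave amplitude, period, and control gain – is an illustrative assumption, not the Scripps control code:

```python
# Toy sketch of the M-AUE depth-holding idea: the float is carried up and
# down by an internal wave but corrects toward a target depth on each
# 12-second sample. Wave parameters and gain are illustrative assumptions.
import math

def internal_wave_displacement(t, amplitude=5.0, period=600.0):
    """Vertical water displacement (m) from an idealized internal wave."""
    return amplitude * math.sin(2 * math.pi * t / period)

def simulate(target_depth=10.0, duration=1200, dt=12, gain=0.5):
    """Sample every dt seconds and correct buoyancy toward target_depth."""
    depth = target_depth
    track = []
    for step in range(0, duration, dt):
        # the wave carries the float with the surrounding water...
        depth += (internal_wave_displacement(step + dt)
                  - internal_wave_displacement(step))
        # ...and the buoyancy engine nudges it back toward the target
        depth -= gain * (depth - target_depth)
        track.append(depth)
    return track

track = simulate()
# the float should stay close to its 10 m target despite a 5 m wave
print(max(abs(d - 10.0) for d in track))
```

With a 12-second sample interval the wave can only displace the float a fraction of a metre between corrections, so even a modest gain keeps it within about a metre of the target – the same principle that lets the real drifters hold an isobar while the water around them heaves.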
Typing and Inserting Text To enter text, just start typing! The text will appear where the blinking cursor is located. Move the cursor by using the arrow buttons on the keyboard or by positioning the mouse and clicking the left button. The keyboard shortcuts listed below are also helpful when moving through the text of a document:

| Move to | Shortcut |
| Beginning of the line | HOME |
| End of the line | END |
| Top of the document | CTRL+HOME |
| End of the document | CTRL+END |

To change any attributes of text, it must be highlighted first. Select the text by dragging the mouse over the desired text while keeping the left mouse button depressed, or hold down the SHIFT key on the keyboard while using the arrow buttons to highlight the text. The following table contains shortcuts for selecting a portion of the text:

| Selection | Technique |
| Whole word | double-click within the word |
| Whole paragraph | triple-click within the paragraph |
| Several words or lines | drag the mouse over the words, or hold down SHIFT while using the arrow keys |
| Entire document | choose Edit|Select All from the menu bar, or press CTRL+A |

Deselect the text by clicking anywhere outside of the selection on the page, or press an arrow key on the keyboard. Use the BACKSPACE and DELETE keys on the keyboard to delete text. Backspace will delete text to the left of the cursor, and Delete will erase text to the right. To delete a large selection of text, highlight it using any of the methods outlined above and press the DELETE key. The formatting toolbar is the easiest way to change many attributes of text. If the toolbar as shown below isn't displayed on the screen, select View|Toolbars and choose Formatting. - Style Menu - Styles are explained in detail later in this tutorial. - Font Face - Click the arrowhead to the right of the font name box to view the list of fonts available. Scroll down to the font you want and select it by clicking on the name once with the mouse. A serif font (one with "feet", circled in the illustration below) is recommended for paragraphs of text that will be printed on paper, as serif fonts are the most readable.
The following graphic demonstrates the difference between serif (Times New Roman, on the left) and sans-serif ("no feet", Arial, on the right) fonts. - Font Size - Click on the white part of the font size box to enter a value for the font size, or click the arrowhead to the right of the box to view a list of available font sizes. Select a size by clicking on it once. A font size of 10 or 12 is best for paragraphs of text. - Font Style - Use these buttons to bold, italicize, and underline text. - Alignment - Text can be aligned to the left, center, or right side of the page, or it can be justified across the page. - Numbered and Bulleted Lists - Lists are explained in detail later in this tutorial. - Increase/Decrease Indent - Change the indentation of a paragraph in relation to the side of the page. - Outside Border - Add a border around a text selection. - Highlight Color - Use this option to change the color behind a text selection. The color shown on the button is the last color used. To select a different color, click the arrowhead next to the image on the button. - Text Color - This option changes the color of the text. The color shown on the button is the last color chosen. Click the arrowhead next to the button image to select another color. The Font dialog box allows you to choose from a larger selection of formatting options. Select Format|Font from the menu bar to access the box. A handy feature for formatting text is the Format Painter, located on the standard toolbar. For example, if you have formatted a paragraph heading with a certain font face, size, and style and you want to format another heading the same way, you do not need to manually add each attribute to the new heading. Instead, use the Format Painter by following these steps: - Place the cursor within the text that contains the formatting you want to copy. - Click the Format Painter button in the standard toolbar. Notice that your pointer now has a paintbrush beside it.
- Highlight the text you want to apply the same format to, then release the mouse button.

To add the formatting to multiple selections of text, double-click the Format Painter button instead of clicking once. The Format Painter then stays active until you press the ESC key to turn it off.

Feel free to experiment with various text styles. You can always undo your last action by clicking the Undo button on the standard toolbar or selecting Edit|Undo... from the menu bar. Click the Redo button on the standard toolbar or select Edit|Redo... to reverse the undo.
Let’s help children understand the election process by inviting them to engage in debates and use important resource guides.

President For A Day: Encourage students to understand the importance of the presidential office by selecting a different student each week, for the duration of the school year, to present information from the perspective of a president. This will introduce the students to a majority of the U.S. presidents, and the entire class can act as an audience, learning about policy and the life of a U.S. president.

Print Your Own Campaign Poster: Each student in a class can be tasked with creating a campaign poster that has a slogan and lists his or her values.

Election Crafts: Creating a patriotic flag is a great way to convey the importance of voting. By simply using blue and red markers, colored paper, and glue, it’s possible to make an easy flag. Also, write the words “election,” “vote,” and “America” on the flag.

Presidential Trivia: Trivia can be one of the most fun ways to share information.

Youth Engagement: While very young people can’t yet vote, this doesn’t mean that they shouldn’t be curious about their future and about the way that other people vote. Young students can compile a list of questions to ask their friends and family, such as “Why do you vote?” “Who did you vote for, and why?” and “What’s your political party?”

What are some other ways young people can learn about voting?
Mental health disorders can affect anyone of any gender, race, or age. Over 50 million Americans suffer from mental illness, and if you’re one of them, you’re not alone. Between the two genders, women are more likely to suffer from certain mental illnesses. The Substance Abuse and Mental Health Services Administration (SAMHSA) estimates that approximately 23.8 percent of American women have experienced a diagnosable mental health disorder in the last year, compared to an estimated 15.6 percent of men.

Studies have shown that biological factors play an important role in mental illness; biology is, in fact, a critical element in one’s mental health and the possible development of mental health disorders. Women have lower serotonin levels than men and also process the chemical at slower rates, which can contribute to fluctuations in mood. Females are also generally more predisposed to hormonal fluctuations. Biological differences alone can prove key to the development of some mental health issues.

Aside from biology, women are also strongly affected by sociocultural influences and beliefs. Culturally speaking, women have historically been treated as the subordinate gender, placed in roles as primary caregivers to children and the elderly. Even though gender roles have shifted in our culture, with women taking on more powerful careers and men staying at home to take care of children, a great deal of stress is still placed on women. This stress can lead to depression and panic attacks.

Throughout our society, females have unfortunately been the object of sexualization, whether through magazines, movies, television shows, or peer relationships. This frequently negative sexualization can interfere with the healthy development of self-esteem and self-image among females, as reported by the American Psychological Association.
Both of these factors can lead not only to unhealthy self-image but also to shame, depression, anxiety, and stress. In conjunction with the sexualization of women, violence and sexual abuse are two more important factors contributing to mental health issues in women. Reportedly, one in five women is a victim of rape or attempted rape, and females also have a higher incidence of sexual abuse. During civil unrest and violent conflicts, women make up an estimated 80 percent of victims. Indeed, the prevalence of violence against women over the course of a lifetime is cited at between 16 and 50 percent.

The World Health Organization cites that women are twice as likely as men to develop certain mental health conditions like depression, eating disorders, and panic disorders. Women are also two to three times more likely to attempt suicide, although four times more men die from suicide. Symptoms can also differ between men and women, so it’s important to understand the different factors that can contribute to each illness. For example, females tend to report more physical symptoms in relation to mental illness, including fatigue, loss of appetite, restlessness, nausea, and headaches.

Some common mental illnesses that affect women are:

About 12 percent of women experience depression compared to 6 percent of men, making women twice as likely to be affected. Depression is a feeling of overwhelming sadness or melancholy that can be episodic (bouts of depression lasting days, weeks, or longer) or chronic (persistent depression). Symptoms can also include loss of interest in daily activities, change in appetite, and a sense of worthlessness. Major depression, bipolar disorder, and postpartum depression are depressive illnesses.

Types of anxiety disorders include generalized anxiety disorder (GAD), phobias, post-traumatic stress disorder (PTSD), and social anxiety.
Of these, GAD and specific phobias are more prevalent among women. Anxiety disorders can also develop as a result of, or in addition to, other illnesses like depression and drug addiction.

Large contributing factors behind eating disorders are the sociocultural aspects mentioned above. The sexualization of women plays a big role in females developing negative self-image, negative body image, and poor self-esteem. Weight has been, and may always be, an aspect of women’s lives that is scrutinized and placed on a pedestal, so it is no wonder that females feel such pressure to be physically perfect. While eating disorders like anorexia nervosa and bulimia nervosa often develop during the teen years, the onset of such disorders can happen at any time. Of those affected by eating disorders, Everyday Health estimates that women account for 85 percent of bulimia and anorexia cases and approximately 65 percent of binge eating disorder cases.

If you or someone you love is going through the pain of a mental illness, don’t wait to seek treatment. Whatever your reason for waiting – maybe it’s “not the right time” or maybe you feel ashamed or scared – understand that the sooner you get help for a mental health disorder, the sooner you can begin a new life free from the constraints of your illness. At FRN, our professionals are skilled and caring individuals who understand how mental illness can impact your life. If you’re scared, that’s all right; you can call us anytime to learn more about how mental disorders affect women and what treatment options are available to you. Call us and get help today.
Pitcher plants are dioecious, meaning that male and female flowers grow on separate plants (4), and they only begin to flower once the upper pitchers are produced (6). During the early evening and night, the flowers produce large amounts of nectar, which evaporates by morning. This nectar attracts flies during the early evening and moths at night to aid pollination. Once fertilised, the fruit of Nepenthes species usually takes about three months to develop and ripen. These fruits usually contain between 100 and 500 very light, winged seeds, which can measure up to 30 millimetres long and are thought to be dispersed by the wind (4) (6). Despite enormous numbers of seeds being produced, only a few manage to germinate and only a fraction of those survive to maturity (6). Carnivorous pitcher plants are adapted to grow in soils low in nutrients. Although the plants do gain some nutrition through the soil, and energy through photosynthesis, they supplement this with a diet of invertebrates, usually consisting of ants, cockroaches, centipedes, flies and beetles (4). Insects are attracted to the pitchers by their bright colours and nectar, which is secreted by glands situated on the lid and the peristome of the pitcher. The insects fall into the acidic fluid at the base of the pitcher and, unable to escape, they drown. Digestive enzymes are then released to break down the captured prey (4). Despite the hostile environment of the pitchers, they can be home to a number of animals, such as the red crab spider (Misumenops nepenthicola). The red crab spider inhabits pitcher plants in Indonesia, Malaysia and Singapore, ambushing insects that crawl into the pitcher and preying upon other insects, such as mosquitoes, as they emerge from larvae that live in the pitcher fluid (6).
Developed by engineers at the Woods Hole Oceanographic Institution, the $8 million, three-ton Nereus is the world’s first hybrid research vehicle: It is able to act both as an autonomous underwater vehicle (able to survey large regions with cameras and sonar) and as a remotely operated vehicle (able to record images and collect samples on command). During its first test cruise in May, it dove 6.8 miles into the deepest part of the ocean, the Mariana Trench, a feat that only two other vessels have managed. While in remote mode, a thin 25-mile-long fiber-optic tether transmits information between the Nereus and the research ship deploying it. Bright LEDs allow researchers to see about 10 feet ahead of the vehicle in deep waters where no sunlight penetrates. Nearly 1,500 softball-size hollow ceramic spheres packed into the vehicle’s two hulls provide buoyancy and help it withstand up to 15,000 pounds per square inch of water pressure. If its tether breaks, Nereus can shift into autonomous mode, using its 2,000 rechargeable lithium-ion batteries to navigate back to the surface. Nereus will allow scientists to explore the deepest parts of the seafloor, which had previously been inaccessible. There it will seek out new species and habitats; it may also study subduction zones, where oceanic crust is recycled back into the earth’s mantle.
The success of the Kepler mission in sifting through a field of more than 150,000 stars to locate transiting planets is undeniable, and the number of planets thus far discovered has been used to estimate how often planets occur around stars like the Sun. Now comes a paper to remind us that statistical analysis based on Kepler results assumes that most of the planet candidates are real and not false positives. Alexandre Santerne, a graduate student at the University of Aix-Marseille, has worked with a team of researchers to study the false positive rate for giant planets orbiting close to their star, finding that 35 percent of these Kepler candidates may be impostors. The problem is that eclipsing binaries can mimic planetary transits, which is why scientists perform follow-up radial velocity studies or use transit timing variations (TTV) to confirm the existence of the planet. Another technique is to systematically exclude all possible false positive scenarios to a high level of confidence. Whatever the method, it’s clear that validating Kepler’s candidates — making sure that what looks like a planet really is one — has a key role to play if we’re going to interpret the Kepler results properly and extend them to the larger stellar population. Image: Both Kepler and CoRoT have detected exoplanets by looking for the drop in brightness they cause when they pass in front of their parent star. But as transit studies continue, scientists are working to filter out false positives. Credit: CNES. Santerne’s team used the SOPHIE spectrograph at Observatoire de Haute-Provence, looking at a selection of Kepler giant planet candidates for follow-up spectroscopic studies. Their sample of 46 candidates represented about 2 percent of the total list of 2321 candidates as of February 2012, and about 22 percent of the giant planet candidates with significant transit depth found thus far in the Kepler data.
Their candidates all showed a transit depth greater than 0.4%, an orbital period less than 25 days and a host star brighter than Kepler magnitude 14.7. Eleven of the candidates had already been confirmed as planets, and the researchers were able to confirm another nine. Two of the candidates turned out to be transiting brown dwarfs and another eleven were in binary star systems. All of that leaves 13 unconfirmed candidates, and leads the team to conclude that the false-positive rate for giant, close-in planets is 35 percent. It’s an interesting result in light of earlier work by Timothy Morton and John Johnson (Caltech), who calculated an expected false positive probability (FPP) of Kepler planets of 5 percent for most candidates. Morton reacted to the new work in this article in Science News: …comparisons between the two studies might not be so simple, Morton says, noting that the two groups calculated different things. Instead of looking at impostor rates in a specific population of planets, Morton determined the probability that any candidate — plucked from the sea of twinkling candidates — was real. He also excluded data from obvious impostors. “Everything here is sort of a game of probabilities,” Morton says, pointing to the abundance of candidates. “It will be impossible to confirm them all with observations.” The Santerne paper argues that Morton and Johnson did not consider undiluted eclipsing binaries — binary stars that mimic a close-in gas giant — as a source of false positives in the Kepler data, assuming that detailed analysis of Kepler photometry alone would be enough to weed these out. Santerne’s team disagrees: …we have found that more than 10% of the followed-up candidates are actually low-mass-ratio binary stars, even excluding the two brown dwarfs reported here. This source of false positives is expected to be less important for smaller-radii candidates. 
However, as it is clearly shown by the cases of KOI-419 and KOI-698, stellar companions in eccentric orbits and with relatively long periods can produce single-eclipse light curves, even for greater mass ratios. It is difficult to imagine how these candidates can be rejected from photometry alone if grazing transits are to be kept. In other words, it’s easier to mimic a planetary signature than we realized in the case of close-in giant planets. It will take radial velocity follow-up studies of giant planets on much wider orbits to determine whether the false positive rate is as high with them. And we have a lot to do to learn about the reliability of our smaller planet detections: Only a small fraction of Kepler small candidates are suited for the radial velocity follow-up. These candidates should be followed in radial velocity to determine the true value of FPP and to fill the mass-radius diagram of Neptune and super-Earth like planets. This FPP value for small size candidates is required to correctly derive and discuss the distribution of transiting planet parameters. The paper is Santerne et al., “SOPHIE velocimetry of Kepler transit candidates VI. A false positive rate of 35% for Kepler close-in giant candidates,” accepted by Astronomy & Astrophysics (preprint). Thanks to Antonio Tavani for the pointer to this one.
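As a quick sanity check on the numbers quoted in this article, the sample's bookkeeping can be tallied in a few lines. The counts come from the text above; note that the secure non-planet fraction alone comes out below 35 percent, so the published rate presumably also apportions the 13 unresolved candidates statistically (an assumption on my part, not stated explicitly here).

```python
# Tally of the Santerne et al. SOPHIE follow-up sample, using the
# counts quoted in the article above.
total = 46                 # giant-planet candidates followed up
previously_confirmed = 11  # planets already confirmed before this study
newly_confirmed = 9        # planets confirmed by this study
brown_dwarfs = 2           # transiting brown dwarfs (not planets)
binaries = 11              # eclipsing binary star systems

planets = previously_confirmed + newly_confirmed
secure_false_positives = brown_dwarfs + binaries
unresolved = total - planets - secure_false_positives

print(unresolved)                      # 13, matching the article
print(secure_false_positives / total)  # ~0.28: secure impostors alone
```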
In today's business world, there is much unethical conduct that needs to be put to a stop. Such unethical conduct includes piracy and unethical computer hacking, among other practices. Therefore, there is a need for countries to come up with copyright laws to address such problems. In this essay, I will explain why copyright laws are important and why copying software and other resources is called "piracy". In the course of explaining piracy and copyright, Intellectual Property will also be partially discussed. Lastly, I will also explain my professional code of ethics as a computer scientist in my future profession.

To begin with, a copyright is a form of protection provided by the laws of a country to the authors of original works and their Intellectual Property (United States Copyright Office, 2012). Intellectual Property (IP), on the other hand, is defined as the "creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce" (World Intellectual Property Organization (WIPO), n.d.). From this definition, the Intellectual Property that covers literary works like novels, poems and plays is called Copyright, and the Intellectual Property that includes patents for inventions, trademarks, industrial designs and geographical indications is called Industrial Property (WIPO, n.d.). Intellectual Property is very important because it encourages creativity among people.

Against this background, it is important to have copyright laws in today's business world because they provide the legal right to protect one's work. It is possible for someone to claim another person's work as his or her own; with copyright laws in place, this cannot happen, and the claimant may be prosecuted. In addition, copyright laws are important because they allow others to use one's work for a fee. Sometimes a copyright can even be sold to others for a large fee, thereby generating more income.
In this way, the provision of copyright laws can be a source of living for authors (Copyrights World, 2013).

Piracy, on the other hand, is defined as the violation of license agreements in an effort to acquire Intellectual Property without authorization from the owner or creator (The Ohio State University, n.d.). Downloading, copying, installing or distributing digitized material without the permission of the creator is one example of piracy, among many others. Most of the time, these pirated resources are sold at lower prices with the aim of enticing customers to buy them. Besides, many of these materials happen to be obtained through computer hacking, which makes such hacking unethical. Software piracy is rampant nowadays, especially in entertainment industries like music. Many musicians today complain about this, even though there are copyright laws that are supposed to protect them. Their work is used by greedy people who want to reap where they did not sow. This is an unethical thing to do, as it is the same as stealing.

As a computer scientist, my professional code of ethics will be as follows. First of all, I shall not use my professionalism for personal gain; that is, I shall not use the knowledge from my computer expertise for selfish reasons, including improper financial gain. Good money is supposed to be obtained through ethical means. Secondly, I shall make sure to be honest and trustworthy in all my future transactions with people; I shall execute my profession with a lot of integrity, because I believe that in so doing the profession shall remain very enjoyable for me. Furthermore, in case I have used a certain piece of information from another author, I shall always remember to give proper credit for intellectual property.
In this way, I shall avoid putting myself into trouble. In addition to the above professional ethics, I shall also respect existing laws and regulations pertaining to computer work; I shall always follow all public laws that apply to it. Furthermore, I shall access computing resources only when authorized to do so. The other professional ethic that I feel I cannot finish this essay without mentioning concerns computer hacking. I shall not indulge in computer hacking that is unethical; as a professional computer scientist, any hacking I do shall be ethical and for the good of all people. As I said earlier, stealing someone's intellectual property is something that cannot be accepted.

In conclusion, we can say that having copyright laws in today's business world is very important, as highlighted above. Without these copyright laws there would be far less creativity among people: no one would wish to explore for additional information in their field of specialization for fear of not being recognized, or of quarreling with colleagues who might claim the same piece of information. Copyright laws offer a platform to punish those people who indulge in piracy. Piracy is an unethical practice that should not be tolerated by any means. As a future computer scientist, I wish to conduct myself in a professional way so as not to involve myself in unethical behavior like piracy. Let's join hands together to make this world free from piracy.

1. Copyrights World (2013). Why Copyright protection is so important?
2. The Ohio State University (n.d.). Copyright and Piracy.
3. United States Copyright Office (2012). Copyright Basics.
4. World Intellectual Property Organization (WIPO) (n.d.). What is Intellectual Property?
Internal dialogue is used by authors to indicate what a character is thinking.

Direct internal dialogue refers to a character thinking the exact thoughts as written, often in the first person. (The first person singular is I; the first person plural is we.)

Example: "I lied," Charles thought, "but maybe she will forgive me."

Notice that quotation marks and other punctuation are used as if the character had spoken aloud. You may also use italics without quotation marks for direct internal dialogue.

Example: I lied, Charles thought, but maybe she will forgive me.

Indirect internal dialogue refers to a character expressing a thought in the third person (the third person singular is he or she; the plural is they) and is not set off with either italics or quotation marks.

Example: Bev wondered why Charles would think that she would forgive him so easily.

The sense of the sentence tells us that she did not think these exact words.

Posted on Tuesday, June 10, 2008, at 4:47 am.
In space we feel weightlessness because the earth's gravity has less effect on us. Why do we not see the effect of the gravitational force between the various objects in a spacecraft? We see them floating around. Since the objects in a spacecraft are comparatively close to each other we should be able to see the gravitational effect between them. Although the Earth's gravity has a lesser effect on an astronaut orbiting the Earth in a spaceship than on a person on the surface of the Earth, this is not the reason why an astronaut experiences weightlessness. The space shuttle, International Space Station and most other manned vehicles don't get that far from the Earth. The Earth's gravitational attraction at those altitudes is only about 11% less than it is at the Earth's surface. If you had a ladder that could reach as high as the shuttle's orbit, your weight would be 11% less at the top. Put another way, a person who weighs 100 pounds on the Earth's surface would weigh about 89 pounds at the top of the ladder. The reason why the person wouldn't feel weightless is because they are being pushed by the ladder - it is keeping them from falling. If they were to jump off the ladder, then they would feel weightless, at least up until the time they splatted on the ground. This is why astronauts feel weightless. The astronaut, the spaceship and everything inside it are falling towards the Earth. The reason why the astronaut doesn't go splat is because the Earth is curved and the astronaut, the spaceship and everything inside it are moving 'sideways' fast enough that, as they fall towards the Earth, the surface of the Earth curves away from them. They are always falling towards the Earth, but they never get there. The reason why you don't see gravitational effects between objects in a spacecraft is because gravity is a very, very weak force. Of the four basic forces that scientists are sure about, gravity is, by far, the weakest one. Have you ever tripped and fallen down? 
Well, it took the whole planet to do that to you. Have you ever seen a sock stick to a shirt after it has come out of a dryer? That static cling, created by a slight imbalance of charge between the sock and the shirt, is stronger than the gravitational attraction of the Earth. The gravitational attraction between two small objects in a spacecraft would be overwhelmed by other forces, such as the force of the air being circulated throughout the spacecraft. Although the force of attraction is there, it is so weak that special care would have to be taken to notice it. Steve Gagnon, Science Education Specialist (Other answers by Steve Gagnon)
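Both points in this answer, the roughly 11% reduction at orbital altitude and the feebleness of gravity between small objects, are easy to check with Newton's law of gravitation. Here is a short sketch using standard values for the constants (an altitude of 400 km is assumed as a typical shuttle/ISS orbit):

```python
G = 6.674e-11       # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

def g_at_altitude(h):
    # Gravitational acceleration at height h (meters) above the surface.
    return G * M_EARTH / (R_EARTH + h) ** 2

surface = g_at_altitude(0)      # about 9.8 m/s^2
orbit = g_at_altitude(400e3)    # at a ~400 km orbital altitude
print(1 - orbit / surface)      # ~0.11, i.e. about 11% weaker

# Attraction between two 1 kg objects floating 1 m apart in the cabin:
f = G * 1 * 1 / 1 ** 2
print(f)                        # ~6.7e-11 newtons: utterly negligible
```

That last number is why the objects in a spacecraft appear simply to float: a force of a few hundred-billionths of a newton is swamped by air currents and the slightest bump.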
Poetry is a great way to express one's creativity. It can be imaginative and express a variety of complex ideas. Writing poetry, however, requires proper scheme and scansion. English poems use a variety of rhyme schemes. A common one is the sonnet: ABAB CDCD EFEF GG. It was popularized by William Shakespeare. Here is a list of schemes: http://en.wikipedia.org/wiki/Rhyme_scheme.

Scansion is more complex. Poets use syllables and emphasis to create the flow of their works. English poems usually use multiples of four syllables and iambic meter. Iambic meter emphasizes every second syllable. It, also, was popularized by Shakespeare.

In poetry, a foot is a measure used when two or more beats get together in a recognizable pattern. Here are some of the most common (stressed syllables in capitals):
- Iamb: one-TWO
- Trochee: ONE-two
- Anapest: one-two-THREE
- Dactyl: ONE-two-three
- Spondee: ONE-TWO

There's a little poem I learned to remember the first four:

The iamb saunters through my book
Trochees rush and tumble
While the anapest runs like a hurrying brook
Dactyls are stately and classical

It takes a little more work to use a spondee, since you have to choose words that can't be unaccented in a line. For example, the phrase "dead weight" generally can't be shortcutted to DEADweight or deadWEIGHT but will be read DEAD... WEIGHT.

The one everyone knows the name of is Iambic Pentameter. Since "penta" means "five," this means "a line with five iambic feet." William Shakespeare was known for using this one in English blank verse, which means the rhythm stayed pretty steady but there were few to no specific rhymes. Bear in mind, too, that just because you set out to write Iambic Pentameter (or any other meter) doesn't mean that you have to use an iamb as every single foot. Shakespeare certainly didn't! You can substitute a trochee at times, or a spondee for emphasis; you might even add some syllables to make one of the longer feet.
The number of stressed beats per line, and the major pattern staying iambic, is what makes Iambic Pentameter. But what you're aiming for is a line that sounds as if someone were actually talking, with nothing forced or unnatural about it. That's what's really great about Iambic Pentameter: it sounds a lot like just regular ol' English.

Now, as far as other meters: just pick the number of feet you want. There are names for each (Tetrameter - four; Hexameter - six), but you don't need to worry about the names too much. As far as common usage goes, a couple of good ones are:
- Four feet per line
- Four feet the first line, three feet the next line - this one forms the basis of many hymns
- Six feet per line
- Six feet the first line, five feet the next line

Let's start with the well-known Limerick. This is constructed primarily with anapests, and uses lines of 3, 3, 2, 2, and 3 feet, with only two rhymes:

A man who was dining in Crewe
Found a rather large mouse in his stew
Said the waiter, Don't shout
And wave it about
Or the rest will be wanting one, too!

As with any poetic form, you can have fun by breaking people's expectations:

A decrepit old gas man named Peter,
While hunting around for the meter,
Touched a leak with his light,
He arose out of sight,
And, as anyone can see by reading this, it also destroyed the meter.

Now that we're moving into longer forms, it'll be harder to stick with just two rhymes. But that's what's required for the Rondel. However, it helps a bit that the Rondel sets up two lines that get repeated as a refrain. We'll use capital letters for the repeated lines:
- Verse 1: ABba
- Verse 2: abAB
- Verse 3: abbaA

...anyway, hope someone continues from where I've left off here!
It’s National Computer Science Education Week! That must mean it’s time for part 2 of my How (And Why) To Program series. Today I will discuss a tricky but powerful concept in computer science: recursion. Briefly, recursion means accomplishing a task by performing it in terms of smaller versions of the same task. For example, each morning I execute my “drive to work” routine, which is really my “drive from point A to point B” routine, where point A is home and point B is work. To do that, I first do “drive from point A to point B” where point A is home and point B is the Golden Gate Bridge (which is about halfway to work for me), followed by “drive from point A to point B” where point A is now the Golden Gate Bridge and point B is work. Each of those steps, of course, can be decomposed into smaller “drive from point A to point B” tasks. One classic example for illustrating recursion in computer code is the Fibonacci sequence — the mathematical sequence in which each number is the sum of the two before it. You might already see the weakness in that definition: what can the first and second numbers in the sequence be, if they don’t have two numbers before them that can be added together? This is a key feature of recursive functions: at some point they reduce the problem into parts so small that they reach the “base case,” where the recursive rule breaks down. It happens that the base case of the Fibonacci sequence says the first two numbers are 0 and 1. From there, the recursive rule takes over to give the numbers that follow: 1, 2, 3, 5, 8, 13, 21, 34, and so on. Let’s look at a “function,” which is the computer programming equivalent of a recipe: you give it some inputs, and it gives you an output, the result of processing the inputs in specific ways. Our function is called fibonacci and it takes one input, or “argument”: a number, which we’ll call n. 
The result of fibonacci(n) will be the nth number in the Fibonacci sequence, where the first two numbers — fibonacci(0) and fibonacci(1) (recall that in programming, lists and sequences of things are almost always numbered beginning at zero) — are 0 and 1. As before, code samples are presented in the Python programming language, though the same concepts we’re discussing apply to most other programming languages too.

def fibonacci(n):
    if n == 0 or n == 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)

We start with “def fibonacci(n),” which simply means “define a function named fibonacci taking one argument called n.” The body of the function follows. First it checks for the base case: does the caller (whoever is invoking this function) want one of the first two Fibonacci numbers? If so, the function simply “returns” (or hands back to the caller) the value of n, since by coincidence the value of fibonacci(n) is n when n is 0 or 1. If it’s not the base case, the function returns a different value: the sum of invoking fibonacci first on n-1 and then on n-2. Those recursive calls give the two prior Fibonacci numbers. For instance, if we invoke fibonacci(9), then n is 9 and fibonacci(n-1) is fibonacci(8), which is 21; and fibonacci(n-2) is fibonacci(7), which is 13. Adding those together gives 34, which is the correct result for fibonacci(9). Enough about the Fibonacci sequence. It’s a contrived example and, though it explains recursion pretty well, it doesn’t demonstrate the real-world applicability of the technique. (It also happens that, for reasons I won’t go into here, recursion is a terribly inefficient way to compute Fibonacci numbers compared to other possibilities like iteration.)
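For comparison, the iterative alternative alluded to above might look like the following sketch (this is one common formulation, not code from the original series). Instead of re-deriving the same subproblems over and over, it walks up the sequence once, keeping only the last two numbers:

```python
def fibonacci_iterative(n):
    # a and b hold fibonacci(i) and fibonacci(i+1) as i counts up to n.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci_iterative(9))  # 34, matching the recursive version
```

Each loop iteration does a constant amount of work, so this version takes time proportional to n, while the naive recursive version branches into two calls at every step.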
A few days ago, my Mensa Puzzle-a-Day Calendar presented this riddle:

The letters of a certain three-letter word can be added in sequential order (though not necessarily with all three letters together in the same place) to each of the letter strings below to form common, uncapitalized English words. You don’t need to rearrange any of the letters below. Simply add the three needed letters in sequential order. What is the three-letter word, and what are the nine new words formed?

1. alp
2. wl
3. marit
4. ealus
5. urneman
6. cke
7. disintedl
8. traectr
9. epard

(To illustrate the puzzle: the letters of what three-letter word can be inserted in both “hoyde” and “ckear” to produce common English words? The answer is “new,” to produce “honeydew” and “neckwear.”)

Staring at the puzzle for a while, I was unable to solve it. So I sat down and wrote a program to solve it for me. How’s that for real-world applicability?

Once again I relied on the file /usr/share/dict/words (or sometimes /usr/dict/words, or /usr/lib/dict/words) that is a standard feature of some operating systems; it’s simply a list of many common English words (and many uncommon ones, plus some frankly questionable ones), one per line. Reading that file, I produced two sets of words: one set of all the words, and one set of all three-letter words. Here’s how that looks:

three_letter_words = set()
all_words = set()
wordlist = open('/usr/share/dict/words')
for word in wordlist:
    word = word[:-1]
    all_words.add(word)
    if len(word) == 3:
        three_letter_words.add(word)
wordlist.close()

(Very similar code is explained in detail in part 1 of this series.)

With those two word sets in hand, and the nine letter-strings from the puzzle, this was my strategy: try all possible ways of inserting the letters of all the three-letter words in each of the letter-strings. For any three-letter word, if none of its combinations with a given letter-string produces a valid word, remove the three-letter word from further consideration.
In other words, beginning with all possible three-letter words, we whittle them away as they become disqualified. In the end, the only three-letter words left should be ones that combine, one way or another, with all of the nine letter-strings to produce valid words. So, for example, the three-letter words “see” and “era” both can be added to the letter-string “alp” to produce valid words (“asleep” and “earlap”). But the three-letter word “new” can’t be, so after running through all the three-letter words on the letter-string “alp,” “see” and “era” will still be in the set three_letter_words, but “new” won’t be.

Here’s how that strategy looks:

for string in ("alp", "wl", "marit", "ealus", "urneman", "cke", "disintedl", "traectr", "epard"):

This starts a loop that will run nine times, once for each letter-string, giving each letter-string the name “string” on its turn through the body of the loop.

    three_letter_words_to_discard = list()

This creates an empty list called three_letter_words_to_discard. It’s empty now, but as we progress we will fill it with words to remove from the three_letter_words set. (If you’re wondering why I sometimes use lists for collections of things, and sometimes use sets, gold star! The answer is that they are two different kinds of data structure, each one good at some things and bad at others. A set is very fast at telling you whether a certain item is in it or not; a list is slow at that. On the other hand, a list keeps things in the same order in which you added them; a set doesn’t do that at all.)

    for three_letter_word in three_letter_words:

This starts a nested loop. It’ll run through the complete list of three_letter_words each of the nine times that the outer loop runs.
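The set-versus-list tradeoff described in that parenthetical can be seen directly. Here is a small demonstration of mine (Python 3 syntax), separate from the puzzle program:

```python
items = ["joy", "era", "see", "era"]  # note the duplicate "era"

as_list = list(items)
as_set = set(items)

# A list preserves insertion order and keeps duplicates...
print(as_list)       # ['joy', 'era', 'see', 'era']

# ...while a set discards duplicates and promises no particular order.
print(len(as_set))   # 3

# Both support membership tests, but the set answers in roughly constant
# time no matter how big it gets; the list must scan item by item.
print("see" in as_set)   # True
print("new" in as_set)   # False
```

That constant-time membership test is exactly why all_words is a set: the puzzle program checks thousands of candidate strings against it.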
        combinations = combine(three_letter_word, string)

Here we presume there’s a function called combine that takes the current three-letter word and the current letter string, and produces the complete list of ways that the letters of three_letter_word can be interspersed with the letters of string. For example, combine(“abc”, “def”) should produce the list [“abcdef”, “abdcef”, “abdecf”, “abdefc”, “adbcef”, “adbecf”, “adbefc”, “adebcf”, “adebfc”, “adefbc”, “dabcef”, “dabecf”, “dabefc”, “daebcf”, “daebfc”, “daefbc”, “deabcf”, “deabfc”, “deafbc”, “defabc”]. That’s where recursion is going to come into play. We’ll get to writing the combine function in a moment.

        good_combinations = list()
        for combination in combinations:
            if combination in all_words:
                good_combinations.append(combination)

With the list of combinations in hand, we now look through them to see which of them are valid words, if any. We set good_combinations to be a new empty list where we’ll accumulate the valid words we find. We loop through the combinations, testing each one to see if it’s a member of the set all_words. If one is, we add it to the list good_combinations.

        if good_combinations:
            print three_letter_word, "+", string, "=", good_combinations
        else:
            three_letter_words_to_discard.append(three_letter_word)

After the “for combination in combinations” loop, we check to see whether good_combinations has anything in it. (“If good_combinations” is true if the list has something in it, and false otherwise.) If it does, we print out the current three-letter word, the current letter-string, and the list of valid words they make. If it doesn’t, then three_letter_word goes into our list of three-letter words to discard.

    for word in three_letter_words_to_discard:
        three_letter_words.remove(word)

After the “for three_letter_word in three_letter_words” loop, this small loop does the discarding of disqualified three-letter words.
Why not simply discard those words from three_letter_words in the preceding loop, as we run across them? Why save them up to remove them later? The answer is that when you’re looping through the contents of a data structure, it’s a bad idea to add to or remove from that data structure. The loop can get confused and lose its place in the structure. It may end up running twice with the same member, or skip a member entirely. It’s safe to make changes to the membership of the data structure only after the loop finishes.

Finally, after the outermost loop has finished, it’s time to see which three-letter words remain in our set:

print three_letter_words

And that’s all! All except the tricky part: the combine function. Here is how it starts:

def combine(string1, string2):

It takes two strings. We’ll give them generic names, string1 and string2, so as not to assume that either one is a three-letter word. As you’ll see, often neither one is.

Now, how to approach writing a recursive function? It’s usually a safe bet to start with the base case, the conditions under which combine isn’t recursive. The recursive step will involve passing shorter and shorter strings to combine, so the base case is when one or both of the strings is empty. Obviously if either string is empty, the result should be the other string — or more precisely, the list containing the other string as its one member (since we’ve already stipulated that the result of combine is a list of strings). In other words, combine(“”, “def”) should produce the list [“def”] — which after all is the result of interspersing the letters of “” among the letters of “def” — and combine(“abc”, “”) should produce [“abc”].

So here’s the body of combine so far. It’s just the base case:

    if len(string1) == 0:
        return [string2]
    elif len(string2) == 0:
        return [string1]

(Recall that “elif” is Python’s abbreviation for “else if.”) Now for the case where string1 and string2 are both non-empty; the recursive case.
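That warning about modifying a collection mid-loop is worth seeing in action. In Python specifically, changing a set’s size while iterating over it doesn’t just risk confusion: it raises an error outright. A small demonstration of mine (Python 3 syntax):

```python
words = {"joy", "era", "new"}

# Removing from a set while looping over it raises an error:
error_seen = False
try:
    for w in words:
        words.remove(w)
except RuntimeError:
    error_seen = True
print(error_seen)  # True: "Set changed size during iteration"

# The safe pattern used in the puzzle program: collect the items to
# discard in a separate list, then remove them after the loop is done.
words = {"joy", "era", "new"}
to_discard = [w for w in words if w != "joy"]
for w in to_discard:
    words.remove(w)
print(words)  # {'joy'}
```

Other languages handle this differently (some corrupt the iteration silently), but the collect-then-remove pattern is safe everywhere.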
The key to writing the recursive step of a function like this is figuring out (a) how to make the problem the same but smaller, and then (b) what to do with the result of computing the smaller solution.

One way to make the problem smaller is to lop off the first letter of string1. So if combine were originally invoked with the strings “abc” and “def,” the recursive call would invoke it with “bc” and “def.” Presuming combine works correctly — which is the counterintuitive assumption you must always make about the recursive step in a function like this — we’ll get back the list [“bcdef”, “bdcef”, “bdecf”, “bdefc”, “dbcef”, “dbecf”, “dbefc”, “debcf”, “debfc”, “defbc”]. None of those belongs in the result list of combine(“abc”, “def”); but if we now restore to the beginning of each of those strings the same letter we lopped off, we get [“abcdef”, “abdcef”, “abdecf”, “abdefc”, “adbcef”, “adbecf”, “adbefc”, “adebcf”, “adebfc”, “adefbc”]. This is halfway to the complete answer: it’s all the strings in the result list that begin with the first letter of string1.

We only need to add all the strings in the result list that begin with the first letter of string2, and we’re done. We do this by treating string2 the same way we just treated string1: we lop off its first letter in another recursive call to combine, then paste it back on to each string in the result. Continuing the example, this means calling combine(“abc”, “ef”), which produces [“abcef”, “abecf”, “abefc”, “aebcf”, “aebfc”, “aefbc”, “eabcf”, “eabfc”, “eafbc”, “efabc”]. Sticking the “d” back onto the beginning of each of those strings gives [“dabcef”, “dabecf”, “dabefc”, “daebcf”, “daebfc”, “daefbc”, “deabcf”, “deabfc”, “deafbc”, “defabc”], and adding this list to the list from the first recursive call gives the complete solution.

In Python, the first letter of string is denoted string[0]. The rest of string, without its first letter, is denoted string[1:].
So here’s the complete version of combine, with the (double) recursive step added in:

def combine(string1, string2):
    if len(string1) == 0:
        return [string2]
    elif len(string2) == 0:
        return [string1]
    else:
        recursive_result1 = combine(string1[1:], string2)
        recursive_result2 = combine(string1, string2[1:])
        result = []
        for string in recursive_result1:
            result.append(string1[0] + string)
        for string in recursive_result2:
            result.append(string2[0] + string)
        return result

This is the crazy magic of recursion: at each step, you simply assume the next-smaller step is going to work and give you the result you need. All you have to get right is the base case and the way to process the recursive result, and — well, look:

hol + alp = ['alphol']
has + alp = ['alphas']
sae + alp = ['salpae']
her + alp = ['halper']
see + alp = ['asleep']
eta + alp = ['aletap']
era + alp = ['earlap']
soe + alp = ['aslope']
yin + alp = ['alypin']
pus + alp = ['palpus']
een + alp = ['alpeen']
kas + alp = ['kalpas']
ecu + alp = ['alecup']
ist + alp = ['alpist']
doh + alp = ['adolph']
pal + alp = ['palpal']
cul + alp = ['calpul']
ped + alp = ['palped']
Moe + alp = ['Malope']
clo + alp = ['callop', 'callop']
gos + alp = ['galops']
tid + alp = ['talpid']
yum + alp = ['alypum']
pon + alp = ['palpon']
hin + alp = ['alphin']
joy + alp = ['jalopy']
hol + wl = ['wholl', 'wholl']
sae + wl = ['swale']
soe + wl = ['sowel', 'sowle']
joy + wl = ['jowly']
joy + marit = ['majority']
joy + ealus = ['jealousy']
joy + urneman = ['journeyman']
joy + cke = ['jockey']
joy + disintedl = ['disjointedly']
joy + traectr = ['trajectory']
joy + epard = ['jeopardy']
set(['joy'])
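One sanity check on combine, added by me (re-expressed in Python 3 syntax; the original post uses Python 2): the number of interleavings of an m-letter string and an n-letter string is the binomial coefficient “m+n choose m,” because an interleaving is fully determined by choosing which of the m+n positions get letters of the first string. So combine(“abc”, “def”) should return exactly 20 strings:

```python
from math import comb  # binomial coefficient, available since Python 3.8

def combine(string1, string2):
    """All ways to intersperse string1's letters among string2's, keeping order."""
    if len(string1) == 0:
        return [string2]
    elif len(string2) == 0:
        return [string1]
    results = []
    for rest in combine(string1[1:], string2):
        results.append(string1[0] + rest)  # interleavings starting with string1[0]
    for rest in combine(string1, string2[1:]):
        results.append(string2[0] + rest)  # interleavings starting with string2[0]
    return results

results = combine("abc", "def")
print(len(results))                # 20
print(len(results) == comb(6, 3))  # True: "6 choose 3" placements of "abc"
print("adebfc" in results)         # True, one of the twenty listed earlier
```

This count also explains the duplicates in the program output above (like “callop” appearing twice): when the two strings share a letter, two different interleavings can spell the same word.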
Math Problem Solving Exercises

In this article, Jennie suggests that we can support this process in three principal ways. Pattern Problems: visual patterns that require students to draw what comes next. The site offers three different levels of difficulty for each strand. Factor Investigation challenges students to list all factors of the numbers 1-25 and identify the numbers as abundant, deficient, perfect, or prime. Make the impossible possible with this free problem solving game that kids will enjoy. Use just one piece of information to confidently label the fruit bags without seeing everything that’s inside them. Scroll down to see groups of tasks from the site which will give learners experience of specific skills. Becoming confident and competent as a problem solver is a complex process that requires a range of skills and experience. Read Lynne's article, which discusses the place of problem solving in the new curriculum and sets the scene. Have fun and enjoy these free problem solving games for kids. Enjoy this bridge crossing game, a classic problem solving activity that takes some thought to solve.
Students can refer to this list when playing Factor Blaster or Factor Game. The Million Dollar Mission asks students to decide which salary is the better offer for one month's work: one million dollars, or one cent on the first day, two cents on the second day, four cents on the third day, etc. Your young learners will love practicing their math skills with the following math exercises. Once they have nailed the basics of addition and subtraction, continue the targeted learning and multiply their knowledge gained through the following multiplication exercises that teach everything from how to multiply by 2 to working through multi-digit multiplication and word problems. Name That Number - 2 is designed to measure student understanding of place value as it is used in the Everyday Math game of the same name. Open-ended Math Problems from the Franklin Institute Online offers monthly problems in Number Theory; Geometry; Measurement; Patterns, Algebra & Functions; and Data, Statistics & Probability. Measure different amounts of water with just 2 jugs: can you do it? Give this educational brain teaser a try and find out.
Know the reason behind Genomic research being done on domesticated animals in the lab:

The study of the effects of individual chemical constituents of drugs on animals helps researchers understand the optimal treatment regimens that minimize the side-effects on humans.

Genomic research and studies on animals in the lab can provide a potential cure for many types of cancer

Over the last several decades, studies have been conducted on inbred strains of mice. The mice within an inbred strain are almost genetically identical to one another, and the effect of chemotherapeutic drugs on these mice is very similar to that on humans. Apart from similarities in genes, certain mammals like mice, rats and monkeys have similar protein expression profiles. The study of their transcriptome can shed new light on the expression and progression of genetic diseases. NGS has opened up new avenues for researchers to continue RNA-sequencing of samples obtained from patients, as well as standardized transcriptomes of healthy specimens, for studying the effects of genetic diseases and their responses to specific treatment.

The case is quite similar for the development of treatment for non-Hodgkin’s lymphoma. It is a cancer of the lymph nodes that used to be incurable even a couple of decades ago. Today, the prognosis of the disease is much better, and available chemotherapeutic treatments not only limit the cancer but keep the patient in remission for a long time. The ability of researchers to study chronic diseases and their response to chemotherapy on models very similar to humans has helped in the evolution of successful treatment plans for cancer victims. Other modern treatment options, including bone marrow transplant, make it possible for survivors to live a long and healthy life.

Animal models can provide the treatment for degenerative diseases in humans

Certain diseases are results of small but critical mutations in the genome that persist at the RNA level as well.
The translation of these RNAs leads to the production of “defective” proteins that interfere with a cell’s metabolic or signaling pathways. The discovery of the role of mutations in the RNA that can cause diseases is comparatively new. Whether these arise from point mutations or larger insertions and deletions (indels), they impact protein expression at the cellular level. RNA-seq analysis is the only way to spot the specific location of these mutations and learn about their heritability. Studies of degenerative and incurable disorders like ALS, frontotemporal dementias (FTD), and myotonic dystrophy have become possible due to the presence of animal models and their RNA-seq data in the lab. Such work was once difficult due to RNA’s transient and unstable nature and its high tissue specificity. However, the advent of new sequence analysis and data archiving methods has made transcriptome analysis of any tissue possible within a short timeframe. Visit Basepair’s solutions to RNA-seq data for more details on the analysis of differential expression of RNA and the quantification of the transcriptome. In the same way, clinical tests on inbred mice give a thorough insight into the drug interactions, side effects, and dosage indications during human use.

Transplant rejection studies and Genomic research on animals can save millions of lives

That brings us to organ transplant, which is not only a complex field of surgery but also deals with aspects of immunology and pathology. Without the early experimentations on mice and other animals similar to humans, it would have been impossible for immunologists and surgeons to perform successful transplants. Researchers uncovered the factors (major histocompatibility complexes, or MHC) that were responsible for the rejection of organs in the host by studying organ rejection in immune-deficient mice.
When one group of researchers discovered the role of MHC in organ rejection, another group began to work on immunosuppressant drugs that could suppress the immune reaction to organs that did not match the recipient’s MHC. Their research on mice increased the chances of survival of patients who received organ transplants, since the MHC of mice is very similar to that of human beings. Close to 6,000 living donations happen every year in the US. According to data from the Organ Procurement and Transplantation Network (OPTN), over 30,000 transplants were performed in 2015!

Genomic research on animals forms a prominent part of preclinical trials

Apart from chemotherapeutic drugs, researchers need to test the effectiveness and side-effects of other pharmaceutical compounds. A close study of the immunological factors involved in the process helps in determining the influence of the drugs on human pathology and the prognosis of a disease. Every drug requires preclinical trials before it can move to the clinical trial phases. During the preclinical studies, the research is conducted through in vitro (cell line) and in vivo (animal) experiments. These stages are instrumental in determining:

- The usefulness of a drug or procedure in treating a condition or disease.
- The toxicity and lethal dose of a compound of interest.
- The optimal dosage of the compound for the treatment of target diseases or conditions.

Although the in vivo preclinical studies are brief, they provide critical information that can move the drug, procedure, or treatment forward to its clinical trials. The FDA does not approve any new drug or procedure that does not show promising results in its clinical trial phases. Clinical trials of a particular medicine or medical procedure highlight the risks, contraindications, and advantages of a new drug, device, and/or therapy. They enable researchers, doctors, and consumers or patients to obtain complete information.
The genetic likeness of rats, mice, pigs, and primates makes them ideal candidates for preclinical trials in many countries.

Experimentation on genetically similar animals can open new doors for psychotherapeutics

Apart from chemotherapy and organ transplant, preclinical trials play critical roles in the development of medication that can treat chronic and acute mental illnesses, including chronic depression and bipolar disorder. The genetic similarity of the murine and porcine families has made them ideal candidates for the testing of serotonin-reuptake-inhibitor compounds and other drugs that bear the promise of remission from mental health problems. One of the most popular instances is the development of different compounds containing lithium (for example, lithium carbonate) for the treatment of manic depression or bipolar disorder. In the early 1940s, lithium was widely used as a substitute for sodium in low-sodium diets. High doses led to intoxication and several deaths in the same decade. Therefore, it was denied acceptance, although there were multiple instances of lithium-mediated treatment of manic disorder and bipolar disorder. It took another two decades of extensive animal study for the scientific community to accept it as an effective treatment for mental disorders.

Why studies on animals in labs must go on

There are several stories from cancer survivors, chronic depression victims and survivors of acute bacterial infections that would have seemed like miracles three or four decades ago, before scientists and medical professionals had the information they now have from animal studies. It is strange to think that mention of animal testing can be found in the works of Aristotle in the 4th and 3rd centuries BCE. Although genomic and medical research on animals can raise some ethical issues, their genomic similarity to human beings makes them the best candidates for preclinical research.
It reduces the risks to the participants of clinical trials for drugs, including chemotherapy, and for procedures like transplants. The animals’ genetic similarity makes their immune systems very similar to those of humans. That makes it possible for researchers to test new vaccines, like those for polio, measles, and tetanus, for safety of administration.
Measuring the quantum limit with a perfect silicon mirror

Scientists from Hannover and Jena have developed a new method of making a silicon crystal into a perfect mirror: they have etched its surface into a specially structured nano-lattice. Such a surface completely reflects laser light - an effect which until now could only be achieved by vapor deposition of a reflective layer system. This new method is highly promising for high-precision measurements in the fields of quantum mechanics and gravitational wave research. The researchers recently published their results in the scientific journal Physical Review Letters, issue 104.

For highly precise experiments, especially in quantum optics and gravitational wave research, optical mirrors are required that reflect light as efficiently as possible. In order to achieve the required high level of reflectivity, the conventional method is to coat a ground crystal or polished quartz glass with numerous layers of optically differing materials (a so-called “coating”). However, there is a disadvantage to this method: the coating material exhibits particularly strong Brownian motion (thermal motion of the particles in the coating). As a result, when measurements are undertaken, a thermal background noise is superimposed on the actual signal, thereby restricting the accuracy of measurement.

Daniel Friedrich and Frank Brückner, part of the research groups of Prof. Roman Schnabel (Institute of Gravitational Physics, QUEST Cluster of Excellence, Leibniz Universität Hannover and the Max Planck Institute for Gravitational Physics, Hannover) and Prof. Andreas Tünnermann (Institute for Applied Physics, Friedrich Schiller University of Jena and Fraunhofer Institute for Applied Optics and Precision Mechanics, Jena), have together developed a new method designed to eliminate this disturbing noise. Harnessing this method, they engraved a nano-lattice onto the surface of a silicon crystal.
This lattice structure functions as a resonant waveguide for light of a certain wavelength, in this case infrared radiation at 1,550 nm. Perpendicularly incident light is diffracted by the lattice geometry into several partial beams, which subsequently become superimposed (“interference”). In this particular surface structure, however, constructive interference occurs only in the reverse direction; light rays moving in other directions cancel one another out. All in all, this leads to perfect reflection.

“This effect is very similar to one found in nature: the wings of Morpho butterflies have such a bright blue shimmer because their surface also carries a periodic nano-structure that selectively reflects certain colors of the incident light,” explains Frank Brückner.

The reflectivity achieved in this experiment is 99.8%, and up to 100% is theoretically possible. A smoothly polished silicon crystal, by contrast, would reflect only about 30% of perpendicularly incident infrared radiation. The nano-lattice thus replaces the coating of materials with varying refractive indices.

“By dispensing with optical coatings, the thermal noise associated with them should also disappear. In measuring processes at the quantum limit, this noise is one of the most important sources of interference and significantly reduces the sensitivity of the measurement,” says Daniel Friedrich. “With a nano-lattice structure acting as a mirror on crystal surfaces, we now expect a whole new quality in high-precision measurements conducted in different areas of research.”

The results of their work were published in an article that appeared on April 23, 2010 in the journal Physical Review Letters, issue 104. The research was undertaken within the framework of the special research area TR7 “Gravitational Wave Astronomy” and funded by the German Research Foundation (DFG).
The article “Realization of a monolithic high-reflectivity cavity mirror from a single silicon crystal” by F. Brückner, D. Friedrich et al. can be found at: http://prl.aps.org/abstract/PRL/v104/i16/e163903.

The novel technique of surface treatment can, in principle, also be applied to other crystals used in optics, and it also works with visible light when the corresponding structural parameters are selected. In the present case, the scientists chose silicon and infrared laser light with a wavelength of 1,550 nm because these parameters are good candidates for future interferometers in earth-bound gravitational wave detectors. Much of the technology used today in gravitational wave detectors was developed at the GEO600 Gravitational Wave Detector in Ruthe near Hannover. The nanostructured silicon mirror was created at the Friedrich Schiller University of Jena. Its design and characterization were undertaken in close collaboration with the Hannover work groups involved with GEO600. The success of this collaboration once again demonstrates the role of GEO600 as a unique think tank for international gravitational wave research.

The testing of the silicon mirror

The next step is to prove that thermal noise can actually be adequately suppressed by the specially treated crystal surface. In order to test this, the scientists will soon be developing a new silicon mirror that will be incorporated into a highly sensitive, 10-meter interferometer operated at the University of Glasgow. Measurements should show that the interferometer's sensitivity is increased through use of the new mirrors. If this proves successful, the new technology will also be employed in the large, heavy mirrors (weighing several kilograms) utilized in the GEO600 Gravitational Wave Detector. The new technology can also be harnessed for “mirroring” tiny oscillating crystals.
Through the use of cold temperatures it is hoped that Brownian motion could be reduced to such an extent that it would be possible to directly observe the quantum mechanical motion of the vibrating crystal described by Heisenberg's uncertainty principle. In addition to applications of the novel optics in basic research, these mirrors are also interesting for the control of high-performance lasers in laser material processing.

Special Research Area Transregio 7 - “Gravitational Wave Astronomy: Methods - Sources - Observation”

In 2002 the Deutsche Forschungsgemeinschaft established the Special Research Area/Transregio 7 (SFB/TR 7) for gravitational wave research. The following institutions are taking part in SFB/TR 7: the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Golm and Hannover, the Max Planck Institute for Astrophysics in Garching, the Leibniz Universität Hannover, the Friedrich-Schiller-Universität Jena and the Eberhard Karls University of Tübingen. The SFB/TR 7 is devoted to theoretical and experimental astrophysics in the field of gravitational wave research. In the investigation of the field equations of gravitation, the development of new mathematical methods stands at the forefront. The aim is to investigate the structure and dynamics of compact astrophysical objects such as neutron stars, black holes, binary systems and collapsing matter, thereby calculating their emission of gravitational waves. In the experimental field, the design, production and application of effective reflection optics for beam splitting and beam superposition in various types of interferometers are to be investigated on the basis of diffractive structures that have been applied to highly reflective layer systems using micro- and nanostructure techniques. Complementary to this is the creation of nanostructures on the crystal surfaces themselves, whose optical properties are comparable to those of the layer systems.
The use of new interferometer topologies (signal recycling, resonant sideband extraction, active vibration isolation, cooling, QND techniques, optimized mirror systems) will significantly increase the possibility of influencing the sensitivity curve of gravitational wave detectors of the second and third generations.
Comparative Speech Topics

A comparative speech, or comparison-contrast speech, requires a minimum of two topics. The similarities and differences between the topics form the connecting thread of the speech. Students give the best speeches when they start with a familiar topic and then add research. Choose topics that are easy to organize, since students can get confused as they move back and forth between details during the speech.

Compare two people whom the audience may not see as similar. Choose a modern singer or band and compare their music, public identity and personal lives to a musician or group from 40 or 50 years ago. Look at the business world and compare two CEOs. Explore whether their drive and education are alike or different. Consider researching celebrities, sportscasters or athletes. Or choose more historical figures. For instance, how do two presidents compare when you evaluate them? What about famous siblings?

Most high school students have experience with travel and varying living arrangements. Use this background to compare home and apartment living. Students should explain the intricacies, such as lawn care, rules and neighbors. Also use traveling wisdom, such as comparing various theme parks, museums or ballparks. Students who explore nature can compare differing hiking trails or campsites. Another idea is to compare their town to a nearby one, or their state to one they have also lived in. Students heading to college may research dorms and apartments to determine the benefits and negatives associated with each.

High School Life

Plenty of things that high school students can compare surround them every day. Sports, courses and clubs share similarities and differences, from the number of students involved to time commitments and histories. Compare specific classes, like the class of 2013 to 2014, by looking at their extracurricular involvements, size and fundraising efforts. Students can also compare general classes, say freshmen to seniors, by looking at how teenagers mature and how their beliefs change.
Comparing dances brings up details to analyze, such as proper attire, cost and decorations.

High school students pay attention to brand names in many arenas. Students can compare professional teams to college teams by looking at rules, merchandise and fan bases. Students also can analyze electronics, from computers and games to phones and applications. Video games, their consoles and accessories all have factors to evaluate. Teenagers frequent clothing stores and can compare different ones, along with area malls. They should also research restaurants and compare service, food variety and health options.
Sepsis is a condition characterized by infection spreading into the bloodstream and the body's system-wide response to it. The most common symptoms of sepsis include an elevated heart rate, temperature, and respiratory rate. Symptoms can also include generalized weakness and related complaints such as dizziness, light-headedness, and nausea. Patients suffering from sepsis typically experience a gradual increase in the severity of these symptoms. An increased rate or severity of infections may also indicate sepsis, and individuals should pay close attention to additional symptoms to determine whether medical assistance is necessary.

One of the most common symptoms of sepsis is an elevated heart rate. In most cases, those diagnosed with sepsis have a heart rate of at least 100 beats per minute, whereas healthy adults without diagnosed health conditions usually have a resting heart rate between 60 and 90 beats per minute. Experienced health care workers can measure a heart rate manually, but individuals who are not familiar with this technique may require assistance.

An elevated temperature is another of the many symptoms of sepsis. As with heart rate, there is a range of body temperatures considered normal; for most individuals, this range is roughly 94 to 101 degrees Fahrenheit (34.4 to 38.3 degrees Celsius). Individuals whose temperature is significantly above or below this range may be suffering from sepsis. Those with an abnormal temperature who are also experiencing other significant symptoms should seek medical attention as soon as possible.

In some cases, an increased respiratory rate can indicate sepsis. High respiratory rates are generally associated with infection and can be cause for concern. At rest, most healthy individuals have a respiratory rate of around 15 breaths per minute.
Respiratory rates that are significantly higher or lower can also simply reflect the current fitness level of the individual in question. Another common symptom of sepsis is generalized weakness. Feelings of light-headedness, dizziness, or nausea may also indicate sepsis. These conditions come on gradually in most cases and increase in intensity over a significant period of time; weakness, dizziness, or light-headedness that comes on rapidly is typically not considered to be related to sepsis. An increased rate of infection may also indicate sepsis. Those who experience a sudden increase in the rate of urinary tract infections, colds, or other conditions may be suffering from sepsis. Individuals should consider both the occurrence rate and the severity of these infections to determine whether medical assistance is necessary.
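The vital-sign thresholds quoted above (a heart rate of 100 beats per minute or more, a temperature outside the 94 to 101 degrees Fahrenheit range, and a resting respiratory rate well above 15 breaths per minute) can be collected into a small screening sketch. This is purely illustrative, not a diagnostic tool; the respiratory cutoff of 20 is an assumption rather than a figure from the article:

```python
def flag_vitals(heart_rate_bpm, temp_f, resp_rate_bpm):
    """Flag vital signs outside the ranges quoted in the article.

    Illustrative only -- not a diagnostic tool or medical advice.
    """
    flags = []
    if heart_rate_bpm >= 100:        # sepsis patients often show >= 100 bpm
        flags.append("elevated heart rate")
    if not 94 <= temp_f <= 101:      # quoted normal range, in Fahrenheit
        flags.append("abnormal temperature")
    if resp_rate_bpm > 20:           # resting rate is ~15; cutoff of 20 assumed
        flags.append("elevated respiratory rate")
    return flags

print(flag_vitals(110, 102.5, 24))   # all three flags raised
print(flag_vitals(70, 98.6, 14))     # -> []
```

As the article stresses, anyone whose readings trip several of these flags at once should seek medical attention rather than rely on self-assessment.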
People often talk about COVID-19 testing like it means only one thing. But in reality, the U.S. Food and Drug Administration (FDA) has so far granted emergency-use authorization to more than 200 different tests meant to detect a current or past infection from SARS-CoV-2, the virus that causes COVID-19. Most recently, the agency made headlines for authorizing the first such test that uses saliva samples, the aptly named SalivaDirect test out of the Yale School of Public Health. These COVID-19 tests fall into three main categories: PCR, antigen and antibody. Dr. Aneesh Mehta, chief of infectious diseases services at Emory University Hospital in Atlanta, Ga., broke down the differences between them—and what to keep in mind if you decide to get tested. The majority of COVID-19 testing happening in the U.S. right now uses polymerase chain reaction (PCR) technology. These tests detect disease by looking for traces of the virus’ genetic material on a sample most often collected via a nose or throat swab. The U.S. Centers for Disease Control and Prevention (CDC) considers PCR tests the “gold standard” of COVID-19 testing, but, like all tests, they’re not perfect. Studies have suggested as many as 30% of COVID-19 PCR test results are inaccurate. (For comparison, the CDC in 2018 estimated that rapid flu tests have about the same rate of incorrect results.) With COVID-19 tests, false negatives seem to be much more common than false positives—so if you get a positive result, you very likely do have the virus. If you get a negative result but have coronavirus symptoms or recently encountered someone sick with the virus, you should still self-isolate until symptoms subside. False negatives can happen if health professionals do not go deep enough into the nose or throat to collect a good sample. The timing of the test matters, too. Infections can be missed if testing happens too soon after exposure, research shows. The reverse is also possible.
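The advice that a negative PCR result should not end self-isolation can be made concrete with a short Bayes'-rule calculation. Here the 70% sensitivity follows from the roughly 30% false-negative figure above, while the 20% prior and 98% specificity are illustrative assumptions, not numbers from the article:

```python
def p_infected_given_negative(prior, sensitivity, specificity):
    """P(infected | negative test) via Bayes' rule."""
    p_neg_if_infected = 1 - sensitivity       # false-negative rate
    p_neg_if_healthy = specificity            # true-negative rate
    p_neg = prior * p_neg_if_infected + (1 - prior) * p_neg_if_healthy
    return prior * p_neg_if_infected / p_neg

# A symptomatic person with a 20% prior chance of infection who tests
# negative on a 70%-sensitive, 98%-specific test still has roughly a
# 7% chance of being infected.
print(round(p_infected_given_negative(0.20, 0.70, 0.98), 3))
```

The higher the prior (clear symptoms, known exposure), the less a single negative result should change your behavior, which is exactly the article's point.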
“Sometimes after the virus has been killed off, there’s still a lot of [genetic material] left over in the body,” Mehta says. This can cause someone to test positive even if they’re not actively sick. Getting tested roughly five days after a possible exposure seems to be the sweet spot. Running a PCR test and reading its results requires specific equipment and chemicals (known as reagents) that are in short supply, which is partially why the U.S. has hit such a testing backlog. To try to cut down on wait times, several companies have developed tests that can detect a virus’ genetic material in minutes, but some—like the Abbott ID NOW test used in the White House—have high reported rates of false negatives. These rapid tests aren’t readily available to most of the American public yet, but some experts argue they could serve a valuable purpose despite their questionable accuracy. Fast tests could significantly ramp up testing capacity, feasibly catching more cases of COVID-19 than our current testing strategy, despite the accuracy issues. Coronavirus saliva tests are a new type of PCR diagnostic for COVID-19. Saliva testing “does depend on standard PCR technology, and it does require some manual labor in order to move it through the steps of the test,” Mehta says. But collecting spit is less invasive than a nose or throat swab and easier to do at home or without medical training, Mehta says. SalivaDirect, the test from Yale, also does not require proprietary chemical reagents or test tubes, which its developers hope will help ease supply and access issues. Early Yale research conducted by testing professional basketball players suggests the saliva test is about as accurate as a traditional nasal PCR test, but Mehta says “we need to more broadly test it” to see if that finding holds true. Antigen tests can turn around results in minutes—but speed comes with tradeoffs. Like PCR tests, antigen tests usually require a nose or throat swab. 
But unlike PCR tests, which look for genetic material from the SARS-CoV-2 virus, antigen tests look for proteins that live on the virus’ surface. This process is a little less labor-intensive than PCR testing, since there isn’t as much chemistry involved, but it’s also less sensitive. Mehta says that opens the door for possible false positives (if the test picks up on proteins that look similar to those from SARS-CoV-2) or negatives (if it misses proteins entirely). False positives are rare with antigen tests, but as many as half of negative results are reportedly inaccurate. If you test negative but are showing symptoms or have had a risky exposure, your doctor may order a PCR test to confirm the result. While antigen testing is becoming more common in the U.S., only a few such tests have been approved by the FDA so far. Much like with rapid genetic tests, some experts argue that fast-moving antigen tests could help ease testing bottlenecks enough to compensate for their reduced accuracy. Unlike the other tests listed here, antibody tests aren’t meant to pick up on current infection with SARS-CoV-2. Rather, they search the blood for antibodies, proteins the body makes in response to an infection that may provide immunity against the same disease in the future. These tests look for SARS-CoV-2-specific antibodies to see if you’ve previously had coronavirus. Right now, antibody tests can’t do much except satisfy curiosity. For one thing, Mehta says, false results are fairly common. Even if the results are accurate, scientists do not yet know how well or for how long coronavirus antibodies protect someone from a future case of COVID-19. A positive antibody test result does not mean you can’t get COVID-19 again, at least as far as current science suggests. 
Wide-scale antibody testing is useful for researchers, since it could inform estimates about how many people have actually had COVID-19 and help scientists learn more about if or how antibodies bestow immunity to coronavirus. “From the research perspective, there’s a lot of information we can get from antibody testing if we collect it over time,” Mehta says. But in terms of actionable information for individuals, antibody tests don’t reveal much at this point. “Just because we can detect antibodies does not necessarily mean you’re fully protected from acquiring that infection,” Mehta says. “Continue to take all the same precautions that everyone else is taking.”
Infrared Ring Nebula
Creator: Spitzer Space Telescope, Pasadena, CA, USA

NASA's Spitzer Space Telescope finds a delicate flower in the Ring Nebula, as shown in this image. The outer shell of this planetary nebula looks surprisingly similar to the delicate petals of a camellia blossom. A planetary nebula is a shell of material ejected from a dying star. Located about 2,000 light-years from Earth in the constellation Lyra, the Ring Nebula is also known as Messier Object 57 and NGC 6720. It is one of the best examples of a planetary nebula and a favorite target of amateur astronomers. The "ring" is a thick cylinder of glowing gas and dust around the doomed star. As the star begins to run out of fuel, its core becomes smaller and hotter, boiling off its outer layers. The telescope's infrared array camera detected this material expelled from the withering star. Previous images of the Ring Nebula taken by visible-light telescopes usually showed just the inner glowing loop of gas around the star. The outer regions are especially prominent in this new image because Spitzer sees the infrared light from hydrogen molecules. The molecules emit infrared light because they have absorbed ultraviolet radiation from the star or have been heated by the wind from the star.

Image Use Policy: http://www.spitzer.caltech.edu/info/18-Image-Use-Policy
- Object Name: Ring Nebula (Messier 57, M57, NGC 6720)
- Subject: Milky Way; Nebula, Type: Planetary
- Position (ICRS): RA = 18h 53m 34.8s, DEC = 33° 1’ 37.9” (north is 15.7° CCW)
- Field of View: 10.7 x 8.8 arcminutes
- Filters: Spitzer (IRAC), Infrared (Near-IR), 3.6 µm; Spitzer (IRAC), Infrared (Near-IR), 4.5 µm; Spitzer (IRAC), Infrared (Mid-IR), 5.8 µm; Spitzer (IRAC), Infrared (Mid-IR), 8.0 µm
Find the number of elements in the left subtree. If it is n-1, the root is the median. If it is more than n-1, the median has already been found in the left subtree; otherwise it is in the right subtree.

12. What is Diffie-Hellman?
It is a method by which two users can securely establish a shared key without ever actually exchanging it.

13. What is the goal of the shortest distance algorithm?
The goal is to completely fill the distance array so that, for each vertex v, the value of distance[v] is the weight of the shortest path from start to v.

14. Explain the depth of recursion.
This is the number of times a procedure is called recursively in the process of evaluating a given argument or arguments. Usually this quantity is not obvious, except for extremely simple recursive functions such as FACTORIAL(N), for which the depth is N.

15. Explain the algorithm ORD_WORDS.
This algorithm constructs the vectors TITLE, KEYWORD and T_INDEX.

16. What are the categories of sorting algorithms?
Sorting algorithms can be divided into five categories: a) insertion sorts, b) exchange sorts, c) selection sorts, d) merge sorts, and e) distribution sorts.

17. Define a brute-force algorithm. Give a short example.
A brute-force algorithm proceeds in a simple and obvious way but requires a huge number of steps to complete. For example, finding the factors of a given number N with this sort of algorithm means trying every candidate divisor one by one.

18. What is a greedy algorithm? Give examples of problems solved using greedy algorithms.
A greedy algorithm makes the locally optimal choice at each stage in the hope of finding the global optimum. Classical problems solved exactly by greedy algorithms include minimum spanning trees (Kruskal's and Prim's algorithms) and Huffman coding. Greedy strategies are also used as heuristics for hard problems such as the traveling salesman problem and graph coloring, but for those they give approximate rather than guaranteed-optimal solutions.
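The key exchange in question 12 can be sketched in a few lines of Python. The tiny prime and generator below are illustrative only; real deployments use primes of 2048 bits or more and a vetted cryptography library:

```python
# Diffie-Hellman key exchange with toy numbers (illustrative only).
p, g = 23, 5            # public prime modulus and generator

a = 6                   # Alice's private key (kept secret)
b = 15                  # Bob's private key (kept secret)

A = pow(g, a, p)        # Alice sends A = g^a mod p over the open channel
B = pow(g, b, p)        # Bob sends B = g^b mod p over the open channel

alice_secret = pow(B, a, p)   # Alice computes B^a mod p = g^(ab) mod p
bob_secret = pow(A, b, p)     # Bob computes A^b mod p = g^(ab) mod p

assert alice_secret == bob_secret   # both arrive at the same shared key
print(alice_secret)                 # -> 2
```

Both sides compute g^(ab) mod p without ever transmitting a or b; an eavesdropper who sees only p, g, A and B is left facing the discrete logarithm problem.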
19. What is a backtracking algorithm? Provide several examples.
It is an algorithm that systematically considers all possible outcomes of each decision, abandoning a partial solution as soon as it can no longer lead to a valid result. Examples include solving the eight queens problem and generating the permutations of a given sequence.

20. What is the difference between a backtracking algorithm and a brute-force one?
Because a backtracking algorithm considers all the possible outcomes of each decision, it is similar in this respect to a brute-force algorithm. The difference is that a backtracking algorithm can sometimes detect that an exhaustive search is unnecessary and can therefore perform much better.
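The contrast drawn in question 20 is easy to see in the eight queens problem from question 19: backtracking abandons a partial placement the moment two queens attack each other, instead of enumerating every full arrangement. A sketch that counts solutions:

```python
def solve_n_queens(n):
    """Count solutions to the n-queens problem using backtracking."""
    solutions = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal solutions
        if row == n:                  # every row holds a queen: one solution
            solutions += 1
            return
        for col in range(n):
            # Prune: skip any column or diagonal already under attack.
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)            # recurse into the next row
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0)
    return solutions

print(solve_n_queens(8))   # -> 92
```

solve_n_queens(4) returns 2 and solve_n_queens(8) returns 92, the well-known solution counts, while visiting far fewer states than a brute-force enumeration of all placements.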
A Complete College-Level Music Theory Curriculum. This edition of the course includes levels 1, 2, & 3. What you'll learn - Read Music Using Proven Techniques - Understand All the Symbols (Not Only the Notes) of a Music Score - Read, Play, and Count Rhythms Accurately - The Elements of the Score - Pitch Names - Pitch Classes - The White Keys - The Black Keys (not the band!) - Half-Steps and Whole-Steps - Naming Octaves - Identifying Notes on the Staff - Identifying Notes on the Keyboard - Beat and Beat Divisions - Downbeats and Upbeats - Dotted Rhythms - Time Signatures - Form in Music Notation - Chromatic and Diatonic Scales - Ordered Pitch Class Collections - The Pattern of a Major Scale - Scale Degrees - Writing Melodies with Major Scales - Analyzing Melodies - What It Means to Be “in Key” - Key Signatures - How to Identify Key Signatures - Popular Song Analysis - Building Triads (Chords) - Diatonic Chord Progressions - Roman Numeral Analysis - Finding Chords by Formula - The Thirds Inside of a Chord - Finding Fifths by Finding Thirds - Diminished Triads - Augmented Triads - Chords on the Guitar - Full Analysis: Canon in D (Pachelbel) - Full Analysis: Minuet in G (Bach) - 7th Chords - Major 7th Chords - Minor 7th Chords - Dominant 7th Chords - Tendency Chords - Using the Circle of Fifths for Songwriting and Composition - Borrowing from Closely Related Keys - Scale Degree Names - Tendency Tones - Compound Meters - Compound Meter Signatures - Reading and Writing Compound Meters - Triplets, Duplets, and Quadruplets - Finding Minor Keys by Alterations to Major - Patterns in Minor Keys - Relative Minor Keys - Parallel Minor Keys - Minor Keys in the Circle of Fifths - Using Minor Keys for Songwriting and Composition - Diatonic Chord Progressions in Minor - The V Chord in Minor and the Leading Tone Problem - Harmonic Minor Scales - Melodic Minor Scales - Students should be enthusiastic about music, but do not need to be producers or musicians.
No prior experience is needed in music – all are welcome! - I'll be using a piece of software in this Music Theory Comprehensive Complete! (Levels 1 2 & 3) course that I would like students to get. Don't worry – it's free! And it works on both Mac and PC. I'll tell you more in the first few videos.

** UDEMY BEST SELLER ** This course is “5-Star Certified” by the International Association of Online Music Educators and Institutions (IAOMEI). This Music Theory Comprehensive Complete! (Levels 1 2 & 3) course has been independently evaluated by a panel of specialists and has received an excellent 5-star rating.

Welcome to the COMPLETE Music Theory Guide! This is a class designed for the average person who is ready to turn music theory (or an interest in music) into a practical skill. Whether you are a working musician or an aspiring one, this class is perfect for you. For years I've been teaching Music Theory in the college classroom. These classes I'm creating for Udemy use the same curriculum I've used in my college classes for years, at a fraction of the cost. I believe anyone can learn Music Theory – and cost shouldn't be a barrier. My approach to music theory is to minimize memorization. Whether you've tried to learn music theory in the past or are just starting out, this series of courses is the perfect fit.

Dr. Allen is a professional musician, top-rated Udemy instructor, and university professor. In 2017 the Star Tribune featured him as a “Mover and a Shaker,” and he is recognized by the Grammy Foundation for his music education classes. This is a Comprehensive class – it has many parts, going through my entire yearly curriculum.

This Edition of the class is the “Complete” Edition: it consists of levels 1, 2, & 3 in their entirety. Included in this course: - 151 video lectures, following my college Music Theory curriculum. - 28 downloadable worksheets for practice (with answers!).
- Access to discounts across my whole network of music classes. - Membership in the class theory-learner community.

Because this is three classes combined into one, going through every topic we cover would make for a really long list. Here is just a hint of all the topics we cover: - My approach to Music Theory. - Tools you will need to learn Music Theory quickly and efficiently. - Music software: notation programs. - The Elements of the Score. - Pitch Names. - Pitch Classes. - The White Keys. - The Black Keys (not the band!). - Half-Steps and Whole-Steps. - Naming Octaves. - Identifying Notes on the Staff. - Identifying Notes on the Keyboard. - Beat and Beat Divisions. - Downbeats and Upbeats. - Dotted Rhythms. - Time Signatures. - Form in Music Notation. - Chromatic and Diatonic Scales. - Ordered Pitch Class Collections. - The Pattern of a Major Scale. - Scale Degrees. - Writing Melodies with Major Scales. - Analyzing Melodies. - What It Means to Be “in Key”. - Key Signatures. - How to Identify Key Signatures. - Popular Song Analysis. - Building Triads (Chords). - Diatonic Chord Progressions. - Roman Numeral Analysis. - Finding Chords by Formula. - The Thirds Inside of a Chord. - Finding Fifths by Finding Thirds. - Diminished Triads. - Augmented Triads. - Chords on the Guitar. - Full Analysis: Canon in D (Pachelbel). - Full Analysis: Minuet in G (Bach). - 7th Chords. - Major 7th Chords. - Minor 7th Chords. - Dominant 7th Chords. - Tendency Chords. - Using the Circle of Fifths for Songwriting and Composition. - Borrowing from Closely Related Keys. - Scale Degree Names. - Tendency Tones. - Compound Meters. - Compound Meter Signatures. - Reading and Writing Compound Meters. - Triplets, Duplets, and Quadruplets. - Finding Minor Keys by Alterations to Major. - Patterns in Minor Keys. - Relative Minor Keys. - Parallel Minor Keys. - Minor Keys in the Circle of Fifths.
- Using Minor Keys for Songwriting and Composition. - Diatonic Chord Progressions in Minor. - The V Chord in Minor and the Leading Tone Problem. - Harmonic Minor Scales. - Melodic Minor Scales. - … and much, much more!

And of course, as soon as you enroll in this class, you automatically get substantial discounts on all the upcoming parts of this class. You will not find another chance to learn Music Theory in a more thorough way than this. All the tools you need to successfully learn Music Theory are included in this course, and the whole course is based on real-life experience – not just academic theory.

Please click the “Take This Music Theory Comprehensive Complete! (Levels 1 2 & 3) Course” button so you can launch your music career today. This course is ideal preparation for the Praxis II Test (ETS Praxis Music), the ABRSM Music Theory Exam (up to Grade 8), the AP Music Theory Exam, college placement exams (Music Theory), and other common secondary and post-secondary placement exams.

** I guarantee that this Music Theory Comprehensive Complete! (Levels 1 2 & 3) course is the most comprehensive music theory course available ANYWHERE on the market – or your money back (30-day money-back guarantee). **

Closed captions have been added to all lessons in this course. Captions are also included in Spanish, Portuguese, and Chinese.

Praise for courses by Jason Allen:

⇢ “It seems like every little detail is covered in an incredibly simple fashion. The learning process becomes relaxed and allows complex concepts to be absorbed easily.”

⇢ “Great for everyone with no knowledge so far. I purchased all 3 parts… It's the best investment in leveling up my skills so far.” – Z. Palce.

⇢ “Excellent explanations! No more and no less than what is needed.” – A. Tóth.

⇢ “VERY COOL.
I've waited for years to see a good video course; now I don't have to wait any longer.”

⇢ “I am learning LOTS! And I really like having the worksheets!” – A. Deichsel.

⇢ “The essentials explained very clearly – loads of really useful tips!” – J. Pook.

⇢ “Jason is really fast and great with questions, always a terrific resource for an online class!” – M. Smith.

Students who sign up for this Music Theory Comprehensive Complete! (Levels 1 2 & 3) course will get ongoing exclusive content and discounts for all future classes in the series.

Who this course is for: - Anyone in any country, and at any age, who is ready to start learning music in an enjoyable, casual, and supportive way. - Students who have either never tried to learn music theory before, or tried and couldn't grasp the concepts. - Students who want to understand everything about music theory, from the ground up.

Created by Jason Allen
Last updated 4/2020
Size: 3.01 GB
A swarm of tiny probes could zip through the clouds of Jupiter by 2030, beaming home data about the gas giant's dense atmosphere. The bantam spacecraft should survive for about 15 minutes in Jupiter's thick air before bursting into flame, according to researchers developing a concept mission called SMARA (SMAll Reconnaissance of Atmospheres). During this brief time, the microprobes would transmit enough information to give scientists a greater understanding of the atmosphere of Jupiter. "Our concept shows that for a small enough probe, you can strip off the parachute and still get enough time in the atmosphere to take meaningful data while keeping the relay close and the data rate high," John Moores, of York University in Toronto, said in a statement. Moores and his team laid out the SMARA concept in a study published recently in the International Journal of Space Science and Engineering. Under their concept, the microprobes would have multiple, separate functions in order to provide the most complete picture of Jupiter's skies, study team members said. Some might take images, while others could measure the atmosphere's chemical composition. But the SMARA mission wouldn't just improve scientists' knowledge of Jupiter, which is the closest gas giant to Earth and makes up two-thirds of the mass of the solar system, excluding the sun. SMARA could also shed light on other aspects of planetary science, such as the composition of the nebula from which the solar system formed and the nature of small cosmic bodies like asteroids, researchers said. Studying Jupiter in depth could additionally help scientists understand gas giants outside Earth's solar system, yielding general insights about flow dynamics, cloud physics and other phenomena. The probes' small size is essential to the scientists' plan, as larger spacecraft — anything weighing more than, say, 660 lbs. (300 kilograms) — would sample less of Jupiter's atmosphere before burning up. 
Moores and his colleagues would ideally like to coordinate their mission with the European Space Agency's Jupiter Icy Moons Explorer effort (JUICE), which is scheduled to launch in 2022 and reach the Jovian system in 2030. Tiny satellites are already making their mark in space. In February 2014, for example, astronauts on the International Space Station released a record-breaking 33 tiny satellites, also known as cubesats, into orbit.
Have you ever wanted to dive below the surface of the water? If so, it’s important to learn more about equalizing. As you descend, pressure builds up in the inner and outer parts of your ears, which at a few feet can be uncomfortable, but a bit deeper, can lead to major ear damage. Equalizing balances out that pressure, and opens something called the Eustachian tubes, which connect to the empty air space of the middle ear. Throughout our daily lives, we are often equalizing the pressure in our ears by simply swallowing or yawning. When we dive underwater however, we often need a more deliberate method to make sure that the pressure becomes equalized. One of the most common techniques for equalizing underwater is something called the Valsalva maneuver. For this method, you simply pinch your nose, keep your mouth closed, and blow out. The high amount of pressure in your throat then prompts the Eustachian tubes to open and equalize the pressure in your ears. This method is one of the first that is generally taught to new divers, but it is important to understand that there are still risks. Making sure not to blow too hard can help to prevent damage to the round and oval windows in the inner ear. Other common equalizing techniques include the Toynbee Method, the Lowry Technique, and Voluntary Tubal Opening, among others. For the Toynbee Method, the nose is pinched, and the diver swallows to open the Eustachian tubes. The Lowry Technique combines both the Valsalva and Toynbee methods by pinching the nose and then blowing out and swallowing at the same time. As divers practice equalizing, they may become more comfortable with a specific technique, or learn to control their muscles especially well. For Voluntary Tubal Opening, a diver may learn how to continuously equalize the pressure in their ears by tensing their throat and jutting their lower jaw forward and down similar to a yawn. So when do you equalize? 
The most important things to remember are to equalize early and to equalize often. If you have a dive planned for later in the day, it is a good idea to start equalizing every few minutes beginning a few hours beforehand. Before going underwater, always equalize on the surface. Then, as you descend, make an effort to equalize about every two feet to avoid torn tissue or eardrum damage. If at any point your ears hurt, don't push through the pain and continue to descend. Equalization takes practice, but diving underwater can take you to a whole new world in the ocean!

Written by: Jaclyn Lucas
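The "early and often" advice follows from hydrostatic pressure, P = P_atm + ρgh, a standard physics relation rather than something stated in the article. Ambient pressure climbs fastest, in relative terms, near the surface, which is why the first few feet of a descent are the hardest on the ears. A quick sketch, assuming a seawater density of about 1025 kg/m³:

```python
def pressure_at_depth_atm(depth_m, rho=1025.0, g=9.81, p_atm=101325.0):
    """Total ambient pressure at a given depth, in atmospheres.

    Assumes seawater density of ~1025 kg/m^3 (an assumption, not a
    figure from the article).
    """
    return (p_atm + rho * g * depth_m) / p_atm

# Pressure roughly doubles over just the first ~10 m (about 33 ft):
for depth in (0, 2, 5, 10):
    print(f"{depth:>2} m: {pressure_at_depth_atm(depth):.2f} atm")
```

Going from the surface to 10 m adds a full atmosphere of pressure, while the next 10 m adds the same amount on top of a much larger total, so the fractional squeeze on the middle ear is greatest at the start of the dive.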
Updated: Dec 3, 2020

Whenever I am teaching children, adolescents or adults, my mind is going in at least two directions: first, what concepts, facts and procedures I need to teach a student, and second, how that student is learning what I am teaching. Bloom's Revised Taxonomy trained me to do this, but many other models of thinking emphasize the same ideas. It gives us teachers practical strategies to delve into whether our students are learning in depth or merely memorizing information instead of really learning it. In my course, A Teacher's Guide to Teaching Reading and Spelling: Bringing the Science of Reading into the Classroom, Letter/Sound Patterns and Orthography, Syllable Types, and Morphology, I emphasize these two aspects of learning. If you have a "Friday speller," a student who studies her spelling words diligently during the week, aces the Friday spelling test, and then the next week misspells the very words she knew perfectly, you want to know more about these two processes of learning.

A couple of examples from Structured Literacy will illustrate these ideas. If I am teaching students about consonants, I want them to know the following facts: Consonants are closed phonemes; they are pronounced by closing off the air with your lips, your tongue and/or your teeth and then releasing it. Put your hand in front of your mouth and say /p/ and you will see what I mean. Consonants can be either voiced or unvoiced, and when spelling, people often confuse two letters that are articulated the same way except that one is voiced and the other is unvoiced. Put your hands over your ears and say /s/, /z/ and you will hear the difference between voiced and unvoiced. Consonants linguistically form pairs: two different letters that sound different but are articulated with the same part of the mouth, the same airflow and the same movements, except that one is voiced and the other is unvoiced.
Examples of pairs are T and D, F and V, and S and Z. I also want my students to be able to answer questions that demonstrate whether they have remembered these facts: Is this letter a consonant or a vowel? Is this consonant made using your lips or your tongue? How is the air coming out of your mouth, in a stream or a puff? Is this consonant voiced or unvoiced? I want to see if my student understood the concepts and can explain the ideas in their own words: Can you define the characteristics of a consonant? Why do two consonant letters get grouped together in a pair? I also want them to apply the ideas to new learning: Here is a group of four consonants that we haven't yet learned; which two form a consonant pair? Explain your thinking.

If my student makes a mistake when we are working together, that is another extraordinarily important opportunity to focus on learning processes. I want to carefully analyze in my head what mistake they made, where, and why. I want to ask them about their thinking, because what I thought was inaccurate, and why, may be totally different from their thinking. I can then reteach the ideas they need to think through and ask a question with alternate answers, like "Is this letter a consonant or a vowel?" or "Is this consonant made using your lips or your tongue?", and they can answer thoughtfully. This teaches them how to self-correct! If you want to learn more about these ideas and much more, come to my class in January, 2020 and visit my website, www.sashaborenstein.com. See you there.
This semester, students in my Teaching Composition class designed infographics to share ideas drawn from our course readings and discussions. These multimodal projects, which blend writing and visual design, had to convey ideas from the scholarly articles we were reading in an accessible, visually-appealing way. Many of our future English teachers designed their infographics to be used in their future classrooms. The topics varied widely, though all the projects focused on different aspects of writing. You can check out each infographic by clicking on the images below. In their infographic, Robert Elkins, Mary Kate Hynek, and Samantha Kohrt wanted to, as they put it, “help writers craft effective rhetorical arguments based on Bitzer’s Rhetorical Situation. Having a framework of the rhetorical situation and its three constituents makes a writer’s argument meaningful and relevant to the situation. This allows them to be aware of the needs, values, and expectations of their audience.” Another group was interested in teaching rhetorical concepts to their future students with their infographic as well. English Education majors Alexis Ceballos and Rachel Webber “constructed a useful infographic for upper middle school students and underclassman high school students to utilize when writing a rhetorical piece. Students can refer to this poster to quickly gather the key components of writing a persuasive speech or paper.” In her infographic aimed at students, Cassie Claffy wanted to help students understand how different kinds of writing can serve different purposes. As she put it, Cassie “created her infographic to provide students with exposure to personal writing. It is essential for students to be supported in both academic and personal writing in order to develop their own voice and writing process.” Another group of English Education majors, Alyia Cady, Hannah Bolden, and Kathryn Drey, focused on genres of writing. 
Their infographic provides an overview of literary genres frequently taught in English classes and is designed for students to reference. At this moment late in the semester, they ask, “Have you read more this year than your brain can handle? If so, take a look at this poster to get a quick reminder of what each genre entails!” Eve Odum, an Elementary Education major, created her infographic to teach young writers how to communicate ethically. As she put it, “the Writing with Ethics infographic shows students what, why, and how to write effectively and ethically. Students learn that their words have power and, therefore, they should use that power to do good. Students can feel empowered when they read about the three young people who have changed the world by using ethics in their speeches and writings.” English majors Sara Cahill and Sarah Deffenbaugh created their infographic for future high school English students. As they put it, “their topic focused on how to organize writing in order to create an energetic and engaging piece. Their infographic stemmed from researching writing theory and how to effectively implement these ideas into a classroom.” Bringing his interest in technical writing to the project, Daniel Snyder created an infographic designed to introduce the field of technical writing to English majors, who might not realize this is a great job opportunity after graduation. To introduce the infographic, Daniel asks: “Are you on track for an English degree and still have no idea what to do with it? Have you ever wondered about what a career as a technical writer might look like? Are you one of the three writing concentration students here at USF? 
Look no further than this infographic detailing the ways you can use your writing talents in a variety of technical fields.” It is always exciting to see how students synthesize our class readings in rhetoric and writing studies to create original, audience-focused infographics, and this semester, the Teaching Composition students have created a diverse set of engaging projects.
The San Gabriel Convent dates to 1520 and was built by the Spanish on a site that previously had a temple dedicated to a pre-Columbian deity. The convent was designed in the Plateresque style, a design popular in Spain and its colonies in the 15th and 16th centuries whose name derives from the Spanish word for silver (plata) and is meant to evoke the fine, delicate work of silversmiths. Natural disasters, including fires and earthquakes such as the major 1999 tremor, had damaged the convent. In addition, the Pilgrim’s Portal, a long, arcaded structure facing the central courtyard, had been filled in with concrete. How We Helped In 2001 WMF, through the Robert W. Wilson Challenge to Conserve Our Heritage, supported the repair and conservation of the Pilgrim’s Portal. The exterior arcade was restored and sealed with glass to create an enclosure for the restored rare books. New marble and wood floors were installed. New electrical and mechanical systems were put in place, including exterior lighting, and a fence was erected. Mural paintings in corridors, reading rooms, and in what is now a small museum for sacred art were conserved. Why It Matters The Franciscan church of San Gabriel is one of the oldest of the religious sites in the Americas and is a fine example of Spanish colonial architecture in Mexico. Cholula played an important role in the early colonial history of the area as the site of Cortez’s Massacre of Cholula, part of his campaign that ultimately toppled the Aztec Empire. The diverse uses of the structure help to outline the history of the region, including its new use as an archive center, which will offer a comprehensive collection of the numerous written materials used by the Franciscans in the region.
The Minor Pentatonic Scale Our previous lesson focused on the major scale. The minor pentatonic scale is the relative minor scale that complements the major scale. In the key of A, F#m is the relative minor. The F# Minor Pentatonic Scale The first step in playing this scale is positioning your fingers. Each finger (one through four) stays on its own fret and does not move side to side to different frets – only up and down to different strings. For this scale, your first (index) finger plays only on the 4th fret, your second (middle) finger plays only on the 5th fret, and so on. Start on the 5th fret, low E string, with your 2nd finger, and follow the diagram above. The blue notes indicate the root notes of the scale.
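For readers who like to see the pattern spelled out, the scale above can be derived from a fixed set of semitone steps. This small Python sketch (not part of the lesson itself) builds the minor pentatonic from its interval formula and finds the relative minor of a major key:

```python
# The twelve chromatic notes, using sharps.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Minor pentatonic formula: root, minor 3rd, 4th, 5th, minor 7th
# (0, 3, 5, 7, and 10 semitones above the root).
MINOR_PENTATONIC = [0, 3, 5, 7, 10]

def minor_pentatonic(root):
    start = NOTES.index(root)
    return [NOTES[(start + step) % 12] for step in MINOR_PENTATONIC]

def relative_minor(major_root):
    # The relative minor sits three semitones below the major key's root.
    return NOTES[(NOTES.index(major_root) - 3) % 12]

print(relative_minor("A"))     # F#
print(minor_pentatonic("F#"))  # ['F#', 'A', 'B', 'C#', 'E']
```

So the five notes under your fingers in this box are F#, A, B, C#, and E, with F# as the root (the blue notes in the diagram).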
Familial hypercholesterolemia is a disorder that is passed down through families. It causes the LDL ("bad") cholesterol level to be very high. The condition begins at birth and can cause heart attacks at an early age. Related topics include: Type II hyperlipoproteinemia; Hypercholesterolemic xanthomatosis; Low density lipoprotein receptor mutation Familial hypercholesterolemia is a genetic disorder. It is caused by a defect on chromosome 19. The defect makes the body unable to remove low density lipoprotein (LDL, or "bad") cholesterol from the blood. This results in a high level of LDL in the blood. A high level of LDL cholesterol makes you more likely to have narrowing of the arteries from atherosclerosis at an early age. The condition is typically passed down through families in an autosomal dominant manner. That means you only need to get the abnormal gene from one parent in order to inherit the disease. In rare cases, a child may inherit the gene from both parents. When this occurs, the increase in cholesterol level is much more severe. The risk for heart attacks and heart disease is high, even in childhood. In the early years there may be no symptoms. Symptoms that may occur include: - Fatty skin deposits called xanthomas over parts of the hands, elbows, knees, ankles and around the cornea of the eye - Cholesterol deposits in the eyelids (xanthelasmas) - Chest pain (angina) or other signs of coronary artery disease; may be present at a young age - Cramping of one or both calves when walking - Sores on the toes that do not heal - Sudden stroke-like symptoms such as trouble speaking, drooping on one side of the face, weakness of an arm or leg, and loss of balance Exams and Tests A physical exam may show fatty skin growths called xanthomas and cholesterol deposits in the eye (corneal arcus). The doctor will ask questions about your personal and family medical history. 
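The autosomal dominant pattern described above can be made concrete with a short sketch. The allele labels "D" (defective copy) and "d" (normal copy) are invented here for illustration; the code simply enumerates the combinations a child can inherit when one parent carries a single defective copy:

```python
from itertools import product

# One parent is heterozygous (one defective copy "D"); the other is unaffected.
affected_parent = ["D", "d"]
unaffected_parent = ["d", "d"]

# Each child gets one allele from each parent.
children = list(product(affected_parent, unaffected_parent))

# Dominant inheritance: a single "D" copy is enough to cause the disease.
affected = [child for child in children if "D" in child]
print(f"chance a child inherits FH: {len(affected) / len(children):.0%}")  # 50%
```

This is why the condition shows up in generation after generation of an affected family, and why the rare two-copy ("D" from both parents) case described above is so much more severe.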
There may be: - A strong family history of familial hypercholesterolemia or early heart attacks - High level of LDL cholesterol in either or both parents People from families with a strong history of early heart attacks should have blood tests done to determine lipid levels. Blood tests may show: - High total cholesterol level - High LDL cholesterol level Other tests that may be done include: - Studies of cells called fibroblasts to see how the body absorbs LDL cholesterol - Genetic test for the defect associated with this condition The goal of treatment is to reduce the risk of atherosclerotic heart disease. People who get only one copy of the defective gene from their parents may do well with diet changes and statin drugs. The first step is to change what you eat. Most of the time, the doctor will recommend that you try this for several months before prescribing medicines. Diet changes include lowering the amount of fat you eat so that it is less than 30% of your total calories. If you are overweight, losing weight is very helpful. Here are some ways to cut saturated fat out of your diet: - Eat less beef, chicken, pork, and lamb - Replace full-fat dairy products with low-fat products - Eliminate trans fats You can lower the amount of cholesterol you eat by eliminating egg yolks and organ meats such as liver. It may help to talk to a dietitian who can give you advice about changing your eating habits. Weight loss and regular exercise may also help lower your cholesterol level. If lifestyle changes do not lower your cholesterol level or you have a very high risk of this condition, your doctor may recommend that you take medicines. There are several types of drugs available to help lower blood cholesterol level, and they work in different ways. Some are better at lowering LDL cholesterol, some are good at lowering triglycerides, while others help raise HDL cholesterol. Statin drugs are commonly used and are very effective. These drugs help lower your risk of heart attack and stroke. 
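The "less than 30% of total calories from fat" guideline above translates into a daily fat-gram budget. The 2,000-calorie figure below is an assumed example for illustration, not a number from this article; fat supplying about 9 calories per gram is standard nutrition arithmetic:

```python
# Hypothetical example: a 2,000-calorie diet with fat capped at 30% of calories.
daily_calories = 2000
fat_calorie_share = 0.30
CALORIES_PER_GRAM_FAT = 9  # fat supplies about 9 calories per gram

fat_calorie_budget = daily_calories * fat_calorie_share        # 600 calories
fat_gram_budget = fat_calorie_budget / CALORIES_PER_GRAM_FAT   # ~67 grams
print(f"Fat budget: {fat_calorie_budget:.0f} calories, about "
      f"{fat_gram_budget:.0f} g of fat per day")
```

Food labels list fat in grams, so a gram budget like this is often easier to track day to day than a percentage.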
- Lovastatin (Mevacor) - Pravastatin (Pravachol) - Simvastatin (Zocor) - Fluvastatin (Lescol) - Atorvastatin (Lipitor) - Pitavastatin (Livalo) - Rosuvastatin (Crestor) Other cholesterol-lowering medicines include: - Bile acid-sequestering resins - Fibrates (such as gemfibrozil or fenofibrate) - Nicotinic acid - PCSK9 inhibitors, such as alirocumab (Praluent) and evolocumab (Repatha) People with a severe form of the disorder may need a treatment called apheresis. Blood or plasma is removed from the body. Special filters remove the extra LDL cholesterol, and the blood or plasma is then returned to the body. How well you do depends on how closely you follow your doctor's treatment advice. Making diet changes, exercising, and taking your medicines correctly can lower cholesterol level. These changes can help delay a heart attack, especially for people with a milder form of the disorder. Men and women with familial hypercholesterolemia typically are at increased risk of early heart attacks. Risk of death varies among people with familial hypercholesterolemia. If you inherit two copies of the defective gene, you have a poorer outcome. That type of familial hypercholesterolemia does not respond well to treatment and may cause an early heart attack. Possible complications include: - Heart attack at an early age - Heart disease - Peripheral vascular disease When to Contact a Medical Professional Seek immediate medical care if you have chest pain or other warning signs of a heart attack. Call your health care provider if you have a personal or family history of high cholesterol level. A diet low in cholesterol and saturated fat and rich in unsaturated fat may help to control LDL level. People with a family history of this condition, particularly if both parents carry the defective gene, may want to seek genetic counseling. Reviewed By: Larry A. Weinrauch MD, Assistant Professor of Medicine, Harvard Medical School, Cardiovascular Disease and Clinical Outcomes Research, Watertown, MA. Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Isla Ogilvie, PhD, and the A.D.A.M. Editorial team.