The UN SDGs are an urgent, universal call to action to change the world and steer it towards a better and more sustainable path. One goal is missing: an 18th SDG on the sustainable use of outer space. Planet Earth and the human race are strongly connected to each other. The sustainability of this planet can be pursued not only from Earth but also from outer space. Natural disasters and other phenomena that could lead to the extinction of the human race in the long term can be observed from outer space through satellites that provide essential information. Space exploration and research have given humanity many advantages in technology and in understanding ourselves and our origins. These benefits will only increase in the future, and beyond those domains, space also holds economic opportunities. ‘The long-term survival of the human race is at risk as long as it is confined to a single planet… but once we spread into space... our future should be safe.’ - Stephen Hawking The “SDG 18 – SPACE FOR ALL” initiative aims to have another SDG, focused on space, implemented in the very near future and to have space recognised on the global agenda.
Scientists have long known that circadian clocks—biochemical oscillators that control physiology, metabolism and behavior on a roughly 24-hour cycle—are present in all forms of life, including animals, plants, fungi and some types of bacteria. However, the molecular mechanisms that “run” these systems remain largely unknown. In a study published Sept. 7 in Molecular Cell, a team led by Harvard Medical School researcher Charles Weitz shows that a set of core clock proteins organize themselves into a handful of molecular machines that control the precise workings of circadian rhythms. Providing the first structural glimpse of the clock’s machinery, the results offer a starting point for explaining how circadian clocks run and an understanding of the variety of conditions that can develop—including sleep disorders, metabolic aberrations and cancer—when something in the clock machinery goes awry. In the late 1990s, Weitz, the Robert Henry Pfeiffer Professor of Neurobiology at Harvard Medical School, and researchers from other labs discovered several key proteins involved in the clock system. These include three different period proteins (PER), two different cryptochrome proteins (CRY), and casein kinase-1 (CK1). When these proteins accumulate inside cells and enter the cell nucleus, they bind to a protein called CLOCK-BMAL1 that is attached to DNA responsible for making more PER and CRY. The influx and accumulation of these proteins inside the nucleus effectively shut down the production of PER and CRY. However, when the levels of PER and CRY drop, the CLOCK-BMAL1 can once again resume work unhindered so that the DNA responsible for making PER and CRY can do its job. The completion of this feedback loop—production of PER and CRY, their attachment to CLOCK-BMAL1, shutting down PER and CRY production so that it can start over again—takes about 24 hours, Weitz explains. 
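The 24-hour negative feedback loop described above can be caricatured in a few lines of code. The model below is purely illustrative and not from the study: a single "PER/CRY-like" protein level that represses its own production after a time lag (standing in for transcription, translation and nuclear entry) and decays steadily is already enough to generate a sustained rhythm. All parameter values here are invented; real clocks tune the lag and decay so the cycle comes out near 24 hours.

```python
from collections import deque

def simulate_clock(hours=300.0, dt=0.05, delay=6.0, decay=0.3, hill=6):
    """Toy delayed negative feedback loop, loosely inspired by the
    PER/CRY cycle: production is repressed by the protein level
    `delay` hours in the past; the protein decays at a fixed rate."""
    lag_steps = int(delay / dt)
    history = deque([0.0] * lag_steps, maxlen=lag_steps)
    level = 0.0
    series = []
    for _ in range(int(hours / dt)):
        delayed = history[0]                         # level `delay` hours ago
        production = 1.0 / (1.0 + delayed ** hill)   # Hill-type repression
        level += (production - decay * level) * dt
        history.append(level)                        # oldest value drops off
        series.append(level)
    return series

levels = simulate_clock()
# The trace rises, gets repressed after the lag, decays, and repeats:
# a self-sustaining oscillation driven purely by delayed negative feedback.
```

Without the delay the same equation settles to a fixed point, which is one way to see why the cell's multi-step assembly and transport machinery matters for keeping time.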
The traditional view, he adds, is that these proteins enter the cell nucleus independently or in small groups to do different jobs. The Weitz team’s findings revealed otherwise. To figure out precisely how these proteins might run the clock, Weitz and colleagues used a laboratory technique that selectively pulled out proteins from the nuclei of mouse cells at the peak of PER and CRY negative feedback. Their findings turned up a single large protein complex that incorporated each of the six important clock proteins: the three PERs, two CRYs, and CK1, along with about thirty other accessory proteins. Additionally, the experiments showed that the protein complex, which electron microscopy revealed to be quasi-spherical, was associated with CLOCK-BMAL1. Although their initial experiments were done in mouse livers—large organs with a strong concentration of different proteins—experiments in other tissues, including kidney and brain, detected the presence of the same large protein complex. The results suggest that this complex, which the researchers named the PER complex, is universal in tissues throughout the body. They also suggest that the six key clock proteins probably don’t operate individually; instead, they seem to organize themselves to work in concert to run the circadian clock’s negative feedback loop. To determine when this organization happens, the researchers looked for the presence of the six main clock proteins in the cytoplasm, the gooey liquid inside a cell that surrounds the nucleus and other organelles. There, they found four other complexes composed of different groups of the six proteins—one with all six, named the upper complex—and three others missing one or more of these key proteins. The researchers hypothesized that these complexes were in various states of assembly, but that the six key proteins entered the nucleus as a group.
The upper complex also had a seventh protein called GAPVD1, known from other studies to help shepherd chemicals to different locations inside cells. Although the role of GAPVD1 in the circadian clock remains somewhat unclear, Weitz said, experiments in which he and his colleagues trimmed this protein out of the upper complex caused disruptions in the circadian cycle—an observation suggesting that GAPVD1 plays a key role in the clock. Weitz cautions that the precise orchestration performed by this constellation of proteins in running the body’s clock has yet to be teased out. However, he said, learning more about how these proteins interact has given researchers a clearer view of the inner workings of the system overall. “The circadian clock is a very deep timing system that controls a large part of the physiology and behavior in all cells in the body to shape multiple processes,” Weitz said. “The more we learn about it, the more links we’ll get to certain kinds of disease states that aren’t easily amenable to treatment today. Now that we understand how these molecular machines are built, we can start asking questions about how they work.” Co-investigators included Rajindra Aryal, Pieter Bas Kwak, Alfred G. Tamayo, Po-Lin Chiu and Thomas Walz. Source: Harvard Medical School
Reforestation is a term many people are unfamiliar with. Others know it as a movement but do not realize the scope of its benefits. Reforestation is defined as “the planting of trees on bare or deforested land.” What Has the Impact of Deforestation Been on the Earth? According to the Union of Concerned Scientists, “Since 1950, we have lost nearly a third of the world’s productive forestland – an area larger than South Africa.” Why Has Deforestation Become Such a Problem? Deforestation is a major factor in climate change. Carbon dioxide and other greenhouse gases are released into the air when trees are burned for firewood. These gases lead to a rise in atmospheric temperatures, which causes other problems such as drought and famine. What Are Some of the Benefits of Reforestation? Reforestation helps combat climate change by removing carbon dioxide from the atmosphere. In addition, forests provide habitat for wildlife, retain water, and control erosion. Reforestation has been shown to help reduce greenhouse gases and other pollutants in the atmosphere, increase biodiversity, create jobs for people who wish to be reforesters, and protect watersheds. Reforestation, then, is not just an act of planting trees because they look nice or because it is the right thing to do; there are many practical reasons to carry it out. Reforesting bare and deforested land can help reduce pollutants in the atmosphere while also preventing soil erosion, reducing landslides, limiting erosion of coastal areas, and even slowing down global warming. Another benefit of reforestation is the growth of biodiversity that comes when vegetation is returned to an area. This can be seen when different forms of vegetation are grown in one place versus another. Studies have shown that having multiple vegetation types per area can help protect wildlife habitats because it creates natural diversity in the ecosystem.
Reforesting helps create jobs because there is a need for people to plant and care for reforested areas. The International Labor Organization, which aims to promote employment opportunities for women and youth, also supports reforestation efforts because such work offers job training and work experience. Another benefit of reforestation is the protection of watersheds. A forested area can act as a natural barrier against soil erosion, landslides, and flooding. Although reforestation seems like an extensive project, there are many benefits to carrying it out worldwide. Unfortunately, planting trees on its own may not seem like enough incentive for people to get involved. Still, when you consider all of the benefits reforestation can bring to the planet, it becomes clear that this is an initiative we should all be making a part of our lives. Why Is Reforestation Important on a Local Level? While deforestation is prevalent worldwide, it has had the most drastic effects in developing countries. Most of these countries are also struggling to provide basic amenities such as food, water, and shelter. Large-scale reforestation efforts have been shown to help decrease poverty levels in these areas by providing the basic needs mentioned above. In addition, reforestation helps promote education by providing school books and supplies and creating jobs through planting trees. Beyond the aforementioned benefits, reforestation can also help protect biodiversity in a local area. Studies have shown that a single tree species does not provide sufficient food for wildlife. Therefore, the more diverse the vegetation, the more varied the wildlife.
FREE [1.00 Continuing Education Credit Hours] Course Number: #02-549 CE Course Description Youth violence is widespread in the United States and it impacts the health of individuals, families, schools, and communities. The purpose of this brief continuing education course, developed using information from the Centers for Disease Control and Prevention, is to provide an overview of the prevalence and characteristics of teen violence and bullying and to address prevention efforts. Risk and protective factors, research findings, and strategies to help youth who are exposed to violence and bullying are also discussed. CE Course Objectives 1. Discuss the prevalence and characteristics of youth violence, teen dating violence, school violence, and bullying. 2. Identify how youth violence and bullying impact personal well-being as well as the health of schools and communities. 3. Describe risk factors that contribute to perpetration and victimization among youth, as well as protective factors that decrease risk. 4. Outline specific strategies to address and prevent bullying, teen dating violence, and school violence. 5. Summarize the relationship between bullying and suicide, including research findings and how school personnel can take action.
Pride and Prejudice Analysis Essay Sample Picture by GregMontani from Pixabay Discuss the Importance of Dialogue to Character Development in the Novel Pride and Prejudice Characters play a vital role in every literary work. They deliver exposition and contribute to the development of themes in every storyline. Authors normally choose the most engaging techniques available to build their characters. In the novel ‘Pride and Prejudice,’ Jane Austen makes the most of her expertise in dialogue, developing characters that symbolize qualities such as love, arrogance, pride, and intelligence. As readers move deeper into the novel, they are presented with new developments in each character. Through the conversation between Darcy and Elizabeth, Austen develops Darcy’s character by conveying his regret and self-reflection, depicting his sincere affection towards Elizabeth in a gentlemanlike manner, unlike before (Shmoop Editorial Team). Likewise, through the same conversation, Elizabeth’s character is further developed as she expresses her changed view of Darcy, revealing her ability to think rationally and respond wisely to the crisis arising in their relationship (Austen 450-457). Another instance is the conversation between Lady Catherine de Bourgh and Elizabeth in Longbourn. Lady Catherine’s true colors as someone overly conscious of her social class trigger Elizabeth’s confidence: she becomes even bolder in delivering her standpoint, protecting her pride from being manipulated by wealth (Pride). Though Lady de Bourgh is a minor character, her part is crucial in supporting the characters allied with her, and she continually whips up Elizabeth’s loathing of Darcy by being an elitist. Through a thorough reading of the novel, one sees that Austen exposes each character through the arrangement of dialogue, stoking readers’ excitement and curiosity about what is coming next.
Every conversation is important in elaborating each character, and together these dialogues act as major contributors to the plot. - Austen, Jane. Pride and Prejudice. Planet eBook, 2008. - Shmoop Editorial Team. “Mr. Darcy in Pride and Prejudice.” Shmoop. Shmoop University, Inc., 11 Nov. 2008. Web. 25 Jan. 2018. - “Pride and Prejudice: Comprehensive Storyform.” Dramatica. White Brothers, Inc., n.d. Web. 25 Jan. 2018. In this “Pride and Prejudice” analysis essay, we turned to the work of Jane Austen because she is a unique author in the history of English literature. Although Jane Austen lived and wrote two centuries ago, her style is still a model for many authors. Students are often assigned essays on her books – that’s why this “Pride and Prejudice” essay will be helpful for you. Each paper that you are assigned to write can be easily handled by our writers. Many students who lack writing skills come to use our service, and they remain satisfied with it. In pursuit of quality papers, you can easily place an order and get the paper of your dreams. Transform your grades from low to high by getting our writing help.
Adolescent sexual activity is increasing. Premature sexual intercourse results in high rates of adolescent pregnancy and abortion, as well as in increased risk of sexually transmitted diseases (STDs). Lack of information on the prevention of STDs and poor hygiene in both boys and girls are also main reasons for increased morbidity from STDs during adolescence. Contraceptive behavior during adolescence varies between countries and communities. It seems, however, that the condom and oral contraceptives (OC) are popular contraceptive methods. Ineffective methods such as periodic abstinence and coitus interruptus (withdrawal before ejaculation) are also in use. On the other hand, adolescents’ compliance with contraception is poor. The above are additional causes of increasing rates of adolescent pregnancy. Countries providing sexual education programs in schools present lower rates of pregnancy and abortion. Adolescent pregnancy is safe if careful follow-up is accepted by the teenager. A significant number of homeless youth are gay or lesbian adolescents. Most of them are at high risk for HIV infection, AIDS, and STDs. It is concluded that sexual education programs are absolutely necessary to offer adolescents knowledge of the complications of premature sexual activity, as well as of the prevention of undesired pregnancy and STDs. (C) Lippincott-Raven Publishers.
STONES, pieces of rock of any shape, usually detached from bedrock and of no great size, as in stream beds. Both the nature and age of the bedrock of the ancient Near E varies greatly. Over much of the southern part of the region, Precambrian rocks of the Arabo-Nubian Massif appear (Fig. 1). These are more than 600 million years old and granite is common. Adjacent to this crystalline massif is a zone of flat-lying sedimentary strata in which sandstones, varying in age from 570-100 million years, predominate. Further NW, N and NE the strata are gently folded, with limestones common, and vary in age from 38-100 million years. In the region of northern Syria, eastern Kurdistan and western Persia, the rocks are complexly folded and form part of the Alpine mountain belt. Younger sedimentary rocks occur in the Jordan Rift Valley and under the coastal plain of Pal., while in Syria and down to the region of Lake Tiberias, volcanic activity (brimstone, q.v.) took place spasmodically over the past thirty-eight million years. Thick piles of basalt lava flows developed, one of the youngest flows in Syria having been dated by radiocarbon analysis of carbonized organic matter as being only some 4,000 years old. This great variation in rock type and rock structure combined with the extremes of climatic condition from desert in the S to snowcapped mountain peaks in the N has resulted in great contrasts in the stones found in various parts of the region. On either side of the Red Sea, the joint pattern of the granitic and related crystalline rocks exercised a strong control on the shapes of the exposed pieces of rock. The Red Sea Mountains supplied Egypt, and later Imperial Rome, with monumental stones and some metals. There, as in the mountains of the Sinai Peninsula, frost wedging is active at high altitudes. The freezing of water that has seeped down through joints splits the granitic rocks into rough thin rectangular blocks (cf. the tables of stone). In the district from the Gulf of Aqaba to the Dead Sea which includes Edom, wind played an important role in erosion, particularly in carving wide valleys between mountains of sandstone which often can be seen to rest on a plinth of the Aqaba Granite Complex. Narrow gorges, following faults or master joints, are also found in the district, including the vicinity of Petra and the Wadi Yitan where “the King’s road” of Biblical times ascended from Egypt to Jordan and on to Damascus and Mesopotamia. The plateau of Jordan, E of the River Jordan, is open and flat, with much of the higher ground covered by flint gravels, residuals after wind erosion of the chalk strata that once enclosed the flints (q.v.). Occasional flat-topped hills break the monotony of the stony plateau, with the limestones of the Belqa Series (Fig. 3) that cap the hills used as building stone. Lightly incised wadis drain eastward to inland depressions that are filled with gravels, sands, salts and muds. Toward the N there are some volcanic cones and in Syria thick basalt lava flows form the Hauran Plain. They break down to yield good red soil. In the S, adjacent to the Dead Sea, canyons, such as the Wadi Hasa, have cut down as much as 1750 meters, generally along fault lines. In the canyon walls are exposed the whole sequence of geological formations, from the Aqaba Granite Complex upward. The greater part of the hill country W of the River Jordan is hewn from hard wellbedded limestones and dolomites of the Judean Limestone (Figs. 3, 4). This rock formation contains aquifers feeding springs and wells and in it many caves (q.v.) have been formed by the action of ground water. These caves have provided places of refuge. A series of depressions, many of them fault bounded, cut through the hill country. They include the Beer-sheba plain, with Beer-sheba the principal oasis of the Negeb, and the Esdraelon plain.
These depressions, and the coastal plain with which they join, are underlain mainly by recently deposited alluvium, much of which is covered by blown sand and dunes. The few small stones that do occur are pieces of soft shale. Mt. Carmel divides the coastal plain into two, N and S of Haifa. It is made up of a faulted block of Judean Limestone, with various strata of well-jointed limestone and dolomite providing flat rectangular blocks easily erected into an edifice, such as an altar. The floor of the Jordan Rift Valley (earthquake q.v.) is barren and arid by contrast with the bordering mountain areas. The River Jordan meanders through its flood plain, incised to about fifty meters below the main plain of the valley and flanked on either side by badlands formed by the erosion of the very soft strata making up the main valley floor. South of the Dead Sea some of the rocks are white due to the presence of rock salt. This salt is interbedded with clays (q.v.) and the district is prone to landslides, particularly when earthquakes (q.v.) occur. This, together with erosion channeling resulting from thunderstorm precipitation, has meant the production of odd erosional forms, including some with the appearance of pillars of salt. E. M. Blaiklock (ed.), The Zondervan Pictorial Bible Atlas (1969), 1-35, 438-452.
Key Instant Recall Facts (KIRFs) are the core facts that children need to be able to recall quickly in order to support their work across the numeracy curriculum. Each year group will have a new focus for each half term. Although each fact will be introduced and practised in some lesson starters, we would ask that you support your child by practising with them at home. The new target will be sent out each half term, with a top tips sheet to help you and your child. Please speak to your child's class teacher if you would like any further information or support.
Why should you use Graph? Here are some of the things Graph can do for you: Graph can plot standard functions, parametric functions and polar functions. You can use a lot of built-in functions, e.g. sin, cos, log, etc. You may specify color, width and line style of the graphs, and the graphs may be limited to an interval. It is also possible to show a circle at the ends indicating an open or closed interval. Graph can show any equation and inequality, for example sin(x) < cos(y) or x^2 + y^2 = 25. You can choose line width and color for the equations, and color and shading style for the inequalities. Shadings may be used to mark an area related to a function. They can be created with different styles and colors in a user-specified interval. You can create series of points with different markers, colors and sizes. Data for a point series can be imported from other programs, e.g. Microsoft Excel. It is possible to create a line of best fit from the data in a point series, either from one of the built-in models or from a user-specified model. Graph can symbolically calculate the first derivative of a function and plot the resulting function. It is also possible to plot tangents and normals to a function. You can save the coordinate system with graphs as an image, either as a Windows Bitmap (bmp), Portable Network Graphics (png), JPEG, Windows Enhanced Metafile (emf), Scalable Vector Graphics (svg) or Portable Document Format (PDF). You may also copy the coordinate system into another program, e.g. Microsoft Word, either as a normal image or as an OLE object, which may later be edited by double-clicking on it. Given an x-coordinate, Graph will calculate the function value and the first two derivatives for any given function. Alternatively, the function may be traced with the mouse. In addition to evaluating single values, Graph can also fill a table with evaluated function coordinates in a user-specified range.
Data from the table can easily be copied into another program, e.g. Microsoft Excel. Graph can help you calculate the area between the graph of a function and the x-axis in a given interval and the distance along the curve between two points on the function. For standard functions, the area is the same as the definite integral. In addition to the optional legend used to describe each function, a label may be added anywhere in the system. A label can contain text with different fonts, images and objects created in other programs. You can create your own custom functions and constants for use in functions, relations, etc. You can for example create a custom function sinc(x)=sin(x)/x and a constant R=8.314510. You can then plot the function f(x)=R*sinc(x). With the animation feature, you can create animations showing what happens to a function when a constant changes value. You can for example plot the function f(x)=a*x^2+b*x+c and animate what happens when b changes between different values from -5 to 5.
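The custom-function example above translates readily into code. As a rough sketch only (this is ordinary Python rather than Graph itself, and the helper names are ours), here are the sinc custom function, the constant R, and a Graph-style evaluation: a function value, a numerically estimated first derivative, and a small coordinate table.

```python
import math

R = 8.314510  # the user-defined constant from the example above

def sinc(x):
    # Custom function sinc(x) = sin(x)/x, with the removable
    # singularity at x = 0 patched by its limit value, 1.
    return 1.0 if x == 0 else math.sin(x) / x

def f(x):
    # The plotted function f(x) = R * sinc(x).
    return R * sinc(x)

def derivative(g, x, h=1e-6):
    # Central-difference estimate of the first derivative,
    # analogous to Graph's derivative evaluation at a point.
    return (g(x + h) - g(x - h)) / (2 * h)

# A small table of evaluated coordinates, like Graph's table feature:
table = [(x, f(x)) for x in [0.0, 1.0, 2.0]]
print(table[0][1])  # 8.31451  (f(0) = R, since sinc(0) = 1)
print(derivative(f, 0.0))  # 0.0  (sinc is an even function)
```

Graph computes the derivative symbolically; the finite-difference helper here is just a stand-in to show the idea of evaluating f and f' at a chosen x.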
Addressing feral cats' diet may help protect native species Because reducing the impacts of feral cats—domestic cats that have returned to the wild—is a priority for conservation efforts across the globe, a research team recently reviewed the animals' diet across Australia and its territorial islands to help consider how they might best be managed. The investigators recorded 400 vertebrate species that feral cats feed on or kill in Australia, including 16 globally threatened birds, mammals and reptiles. The cats feed mainly on rabbits when they are available, but they switch to other food groups when they are not. Reptiles were eaten most frequently in desert areas, whereas medium-sized mammals, such as possums and bandicoots, were eaten most frequently in the temperate southeast. "Our most significant finding was a pattern of prey-switching from rabbits to small native mammals," said Tim Doherty, lead author of the Journal of Biogeography study. "This is important because control programs for rabbits could inadvertently lead to feral cats killing more native mammals instead. This means that land managers should use a multi-species approach for pest animal control."
Amblyopia, also known as lazy eye, is one of the most common eye issues seen in children. At Children’s Eye Center of Orange County, Dr. Golareh Fazilat provides amblyopia treatment for children. What causes amblyopia? The underlying cause of amblyopia depends on the type of amblyopia being experienced, which is why a thorough consultation is an important step in treating the condition. - Strabismic amblyopia – This is the most common type of amblyopia. If the eyes are not properly aligned, the brain ignores the input from the misaligned eye. - Refractive amblyopia – This is caused by unequal refractive errors in the two eyes, even if they are perfectly aligned (for example, uncorrected nearsightedness in just one eye). - Deprivation amblyopia – This is caused by a congenital cataract or a similar obstruction that blocks light from entering the eye. Determining the cause of your child’s amblyopia is the first step in treatment.
The Secure Sockets Layer (SSL) is a mechanism for wrapping network communication in a security layer that can be used to encrypt communication between the client and the server. It also provides an integrity mechanism to ensure that the communication is not altered between the client and the server. The encryption is based on public-key cryptography using certificates. SSL was originally a proprietary protocol developed by Netscape Communications. It has since been standardized, but the name has been changed to Transport Layer Security (TLS). Nevertheless, SSL is still a commonly used term to refer to this capability, and it is the term used throughout the directory server in order to avoid confusion with the StartTLS extended operation.
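As a small illustration of the client side of this (using Python's standard `ssl` module, not anything directory-server specific), a default TLS context enables exactly the two guarantees described above: encryption with certificate-based authentication of the peer, and integrity protection of the stream. The hostname `example.com` in the commented-out part is just a placeholder.

```python
import socket
import ssl

# A default client context verifies the server's certificate chain
# against the system trust store and checks that the certificate
# matches the hostname being contacted.
context = ssl.create_default_context()
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True

# Wrapping a TCP connection would look like this (not executed here,
# since it needs network access):
#
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())  # e.g. "TLSv1.2" or "TLSv1.3"
```

Everything written through the wrapped socket is encrypted and integrity-protected; tampering with the bytes in transit causes the record-layer check to fail rather than delivering altered data.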
Conservation-Courtship continued By: Kevin Williams, Grundy County Conservation Director May 17, 2013 Last week, word restraints forced me to cut things way short of the information I wanted to share about bird courtship. It all started with the rooster pheasants doing battle near my house. Besides the fighting episodes influencing a female’s decision for a mate, there are a host of other aspects of bird courtship, depending on the species. Singing: Singing is one of the most common ways birds attract a mate. The intricacy of the song, or the variety of different songs one bird can produce, helps to advertise its maturity and intelligence – both very desirable characteristics for a healthy mate. Singing can also advertise the boundaries of one bird’s territory, warning off competition. For some species, only one gender (most usually the males) will sing, while other species may create a duet as part of the bonding ritual. Displays: These generally involve flamboyant plumage colors but can also be elaborate displays of prominent feathers (peacocks), skin sacs (prairie chickens), or even body shape (blue jay’s crest) to show off how strong and healthy a bird is, advertising its suitability as a mate. In most species, birds may use subtle changes in posture to show off their plumage to the best effect, sort of like bodybuilders in their posing sequences. Dancing: Physical movements, from daring dives to intricate sequences of wing flaps, head dips, or different steps, can be part of a courtship ritual. In many species, the male alone will dance for his female while she observes his actions, while in other species both partners will interact with one another. Dance mistakes show inexperience or hesitancy and would likely not lead to a successful mating, but from what I’ve observed they still might land them an audition for Dancing With The Stars. Preening: Close contact between male and female birds can be part of the courtship rituals.
The birds may lightly preen one another, sit with their bodies touching or otherwise lean on one another to show that they are not intending to harm their partner. Feeding: Offering food is another common part of bird courtship behavior for many species. A male bird may bring a morsel to the female, demonstrating that he is able not only to find food, but that he can share it and is able to provide for her while she incubates eggs or tends the brood. For some species the male may just bring food and transfer it to the female for her to feed herself, while in other species he will place a seed or insect directly in her mouth just as he might be expected to do when helping feed hungry nestlings. Building: Some birds seek to attract a mate by showing off their architectural skills. Constructing nests before the female arrives is a way for males to claim territory and show the suitable nesting areas they can defend. Male bluebirds and wood ducks arrive ahead of the females and do this. Other species may decorate the nest with pebbles, moss, flowers or even our own litter to make it more eye-catching. The female may then choose the nest she prefers. Male wrens may fill a half dozen cavities with twigs, allowing the female to choose which of the homes she prefers. I’m glad my wife didn’t demonstrate that behavior or I’d still be single.
competition, in economics, rivalry in supplying or acquiring an economic service or good. Sellers compete with other sellers, and buyers with other buyers. In its perfect form, there is competition among many small buyers and sellers, none of whom is too large to affect the market as a whole; in practice, competition is often reduced by a great variety of limitations, including copyrights, patents, and governmental regulation, such as fair-trade laws, minimum wage laws, and wage and price controls. Competition among merchants in foreign trade was common in ancient times, and it has been a characteristic of mercantile and industrial expansion since the Middle Ages. By the 19th cent. classical economic theorists had come to regard competition, at least within the national state, as a natural outgrowth of the operation of supply and demand within a free market economy. The price of an item was seen as ultimately fixed by the confluence of these two forces. Early capitalist economists argued that supply-and-demand pricing worked better without any regulation or control. Their model of perfect competition was marked by absolute freedom of trade, widespread knowledge of market conditions, easy access of buyers to sellers, and the absence of all action restraining trade by agencies of the state. Under such conditions no single buyer or seller could materially affect the market price of an item. After c.1850, practical limitations to competition became evident as industrial and commercial combinations and trade unions arose to hamper it. A major theme in the history of competition has been the monopoly, which represents a business interest so large that it has the ability to control prices in a given industry. Some governments attempted to impose competition through legislation, as the United States did in the Sherman Antitrust Act of 1890, which made many monopolistic practices illegal. 
President Teddy Roosevelt was well known for his "trust-busting," filing lawsuits against over 40 major corporations during his two terms in office (1901–09). Later legislation in the United States, such as the Clayton Act (1914), the Robinson-Patman Act (1936), and the Celler-Kefauver Act (1950), offered revisions and clarifications of the Sherman Act. The Federal Trade Commission, created in 1914, is a regulatory agency with the mission of encouraging competition and discouraging monopoly. Until the mid-20th cent., there was widespread government acceptance of the existence of industrial and commercial combinations, together with an effort to apply regulation administered either by the state or by the industries themselves. Governments had accepted the existence of what were considered "practical monopolies," particularly in the field of public utilities (see utility, public). This attitude changed somewhat after the 1970s; for example, the U.S. government forced the breakup (1984) of American Telephone and Telegraph and deregulated (1985) natural-gas prices. In the 1990s, state regulators began to allow competition among some utilities (especially natural-gas and electricity suppliers) in order to bring prices down. This was also a trend in some European countries; Germany, for example, deregulated its electric power industry in 1999. See M. L. Greenhut et al., Economics of Imperfect Competition (1987); L. G. Telser, A Theory of Effective Cooperation and Competition (1987); T. Frazer, Monopoly, Competition and the Law: The Regulation of Business Activity in Britain, Europe and America (1988). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Common Garter Snake

The Common Garter Snake (Thamnophis sirtalis) is common throughout North America. T. s. parietalis has also been introduced to northern Holland in Sweden, where it thrives. The habitat of these snakes ranges from forests, fields and prairies to streams, wetlands, meadows, marshes and ponds, but, being semi-aquatic, they are most often found near water. Habitats range from sea level to mountain locations. Their diet consists of amphibians, insects, fish, small birds, and rodents. Predators of the common garter snake include large fish, bullfrogs, snapping turtles, milk snakes, hawks, and foxes. Most garter snakes have a pattern of yellow stripes on a brown background, and their average length is about 3 – 4.5 ft (1 – 1.5 m). Like any other snake, garter snakes use their tongue to smell. Water contamination, urban expansion, and residential and industrial development are all threats to the species. The San Francisco Garter Snake (T. s. tetrataenia), which is extremely scarce and occurs only in the vicinity of ponds and reservoirs in San Mateo County, California, has been listed as an endangered species by the U.S. Fish and Wildlife Service since 1967. Garter snakes can make excellent pets, as they are small, easily kept in terrariums, and feed readily on goldfish and other commercially available live foods. It is advisable not to feed them a steady diet of earthworms or night crawlers, as these lack sufficient vitamins for the snake’s health. Although they are usually found near water, the pet habitat must be dry, with only a water bowl, to avoid serious skin diseases. This is true of all snake species, including water snakes. The Common Garter Snake is a diurnal snake. During the summer it is most active in the morning and late afternoon; in cooler seasons or climates, it restricts its activity to the warm afternoons.
In southern, warm areas, the Common Garter Snake is active year-round; otherwise, it hibernates in communal dens, sometimes in great numbers. On warm winter afternoons, some Common Garter Snakes have been observed emerging from their hibernacula to bask in the sun. Garter snakes generally mate in March or April, after hibernation. The species is viviparous; females give birth to a litter of 12-40 live young anytime from July through October. The saliva of a garter snake may be toxic to amphibians and other small animals. For humans, a bite is not dangerous, but it may produce swelling or a burning rash. Most garter snakes also secrete a foul-smelling fluid from postanal glands when handled or harmed. Like any predator, they can be unpredictable.
Friction is everywhere and can be either helpful or wasteful depending on the situation. In this investigation you will test models of friction against actual measurements to get a sense of how accurate these friction models are.

Coefficient of static and kinetic friction using a friction block

Open the experiment file 05C_Friction, and then connect the Smart Cart to the software using Bluetooth. Set up the equipment as shown in the picture. Zero the Smart Cart force sensor while nothing is touching the hook. Start data collection, and then very slowly pull on the string, increasing the force you exert until the block starts to slide. Once the block is sliding, keep pulling at a constant speed until you reach the edge of the table, and then stop data collection. Record the mass of the block in a table. Repeat this activity two more times, each time adding a 250-g mass on top of the block. For each trial, use the graph tools to find the maximum force exerted just as sliding was about to start (the static friction force) and the average force while the block was sliding (the kinetic friction force). Record the values for each trial in the table. Draw two free-body diagrams of the friction block: one representing the moment just before it began to slide (static friction), and one representing the time when it was sliding at a constant speed (kinetic friction). Label all the forces acting on the block in each diagram, including friction. The coefficients of static and kinetic friction are defined as the ratios of each frictional force to the normal force on the block. Write two equations, one for the coefficient of static friction and one for the coefficient of kinetic friction, both in terms of the frictional force and the mass of the block. Determine the coefficients of static friction and kinetic friction for each of the trials you performed. Enter these values in the table. Calculate the average for each.
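The coefficient calculation above can be sketched numerically. The masses and forces below are made-up example readings (a 0.55 kg block plus two added 250 g masses), not data from an actual run:

```python
g = 9.8  # m/s^2

# Hypothetical trial data: (total mass in kg, peak force just before
# sliding in N, average force while sliding in N).
trials = [
    (0.55, 1.62, 1.13),
    (0.80, 2.35, 1.65),
    (1.05, 3.09, 2.16),
]

# mu = friction force / normal force, where N = m * g on a level table.
mu_s = [f_s / (m * g) for m, f_s, _ in trials]
mu_k = [f_k / (m * g) for m, _, f_k in trials]

avg_s = sum(mu_s) / len(mu_s)
avg_k = sum(mu_k) / len(mu_k)
print("mu_s per trial:", [round(x, 3) for x in mu_s], "average:", round(avg_s, 3))
print("mu_k per trial:", [round(x, 3) for x in mu_k], "average:", round(avg_k, 3))
```

If the friction model holds, the per-trial coefficients should cluster around the average even as the mass grows, because both the friction force and the normal force scale with m.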
The model for static friction treats μs as constant, even as the mass of the friction block is increased. Do your data support this model? Explain your answer. Compare your average values to the tabulated values for the coefficients of static and kinetic friction in section 5.4 of your text. Using your data, evaluate the accuracy of the tabulated coefficients.
A ‘mysterious network’ of mud springs on the edge of the ‘market town’ of Wootton Bassett, near Swindon, Wiltshire, England, has yielded a remarkable surprise.1 A scientific investigation has concluded that ‘the phenomenon is unique to Britain and possibly the world’. The mud springs Hot, bubbling mud springs or volcanoes are found in New Zealand, Java and elsewhere, but these Wootton Bassett mud springs usually ooze slowly and are cold. However, in 1974 River Authority workmen were clearing the channel of a small stream in the area, known as Templar’s Firs, because it was obstructed by a mass of grey clay.2 When they began to dig away the clay, grey liquid mud gushed into the channel from beneath tree roots and for a short while spouted a third of a metre (one foot) into the air at a rate of about eight litres per second. No one knows how long these mud springs have been there. According to the locals they have always been there, and cattle have fallen in and been lost! Consisting of three mounds each about 10 metres (almost 33 feet) long by five metres (16 feet) wide by one metre (about three feet) high, they normally look like huge ‘mud blisters’, with more or less liquid mud cores contained within living ‘skins’ created by the roots of rushes, sedges and other swampy vegetation, including shrubs and small trees.3 The workmen in 1974 had obviously cut into the end of one of these mounds, partly deflating it. Since then the two most active ‘blisters’ have largely been deflated and flattened by visitors probing them with sticks.4 In 1990 an ‘unofficial’ attempt was made to render the site ‘safe’.5 A contractor tipped many truckloads of quarry stone and rubble totalling at least 100 tonnes into the mud springs, only to see the heap sink out of sight within half an hour! Liquid mud spurted out of the ground and flowed for some 600 metres (about 2,000 feet) down the stream channel clogging it. 
Worried, the contractor brought in a tracked digger and found he could push the bucket down 6.7 metres (22 feet) into the spring without finding a bottom.

‘Pristine fossils’ and evolutionary bias

So why all the ‘excitement’ over some mud springs? Not only is there no explanation of the way the springs ooze pale, cold, grey mud onto and over the ground surface, but the springs are also ‘pumping up’ fossils that are supposed to be 165 million years old, including newly discovered species.6 In the words of Dr Neville Hollingworth, paleontologist with the Natural Environment Research Council in Swindon, who has investigated the springs, ‘They are like a fossil conveyor belt bringing up finds from clay layers below and then washing them out in a nearby stream.’7 Over the years numerous fossils have been found in the adjacent stream, including the Jurassic brachiopod Rhactorhynchia inconstans, characteristic of the so-called inconstans bed near the base of the Kimmeridge Clay, estimated as being only about 13 metres (almost 43 feet) below the surface at Templar’s Firs.8 Fossils retrieved from the mud springs and being cataloged at the British Geological Survey office in Keyworth, Nottinghamshire, include the remains of sea urchins, the teeth and bones of marine reptiles, and oysters ‘that once lived in the subtropical Jurassic seas that covered southern England.’9 Some of these supposedly 165-million-year-old ammonites are previously unrecorded species, says Dr Hollingworth, and the real surprise is that ‘many still had shimmering mother-of-pearl shells’.10 According to Dr Hollingworth these ‘pristine fossils’ are ‘the best preserved he has seen … . You just stand there [beside the mud springs] and up pops an ammonite.
What makes the fossils so special is that they retain their original shells of aragonite [a mineral form of calcium carbonate] … The outsides also retain their iridescence …’11 And what is equally amazing is that, in the words of Dr Hollingworth, ‘There are the shells of bivalves which still have their original organic ligaments and yet they are millions of years old’!12 Perhaps what is more amazing is the evolutionary, millions-of-years mindset that prevents hard-nosed, rational scientists from seeing what should otherwise be obvious: such pristine ammonite fossils, still with shimmering mother-of-pearl iridescence on their shells, and bivalves still with their original organic ligaments, can’t possibly be 165 million years old. Upon burial, organic materials are relentlessly attacked by bacteria, and even in seemingly sterile environments they decompose into simpler substances in a very short time.13,14 Without the millions-of-years bias, these fossils would readily be recognized as victims of a comparatively recent event, for example, the global devastation of Noah’s Flood only about 4,500 years ago. Even with Dr Hollingworth’s identification of fossils from the Oxford Clay,15 which underlies the Kimmeridge Clay and Corallian Beds, scientists such as Roger Bristow of the British Geological Survey office in Exeter still don’t know what caused the mud springs.16 English Nature, the Government’s wildlife advisory body, which also has responsibility for geological sites, has requested that research be done. The difficulties the scientists involved face include coming up with a driving mechanism, and unravelling why the mud particles do not settle out but remain in suspension.17 They suspect some kind of naturally-occurring chemical is being discharged from deep within the Kimmeridge and Oxford Clays, where some think the springs arise from a depth of between 30 and 40 metres (100 and 130 feet).
So Ian Gale, a hydrogeologist at the Institute of Hydrology in Wallingford, Oxfordshire, is investigating the water chemistry.18 Clearly an artesian water source is involved.19 Alternatively, a feeder conduit may cut through the Oxford Clay, Corallian Beds and Kimmeridge Clay strata, rising from a depth of at least 100 metres (330 feet).20 The mud’s temperature shows no sign of a thermal origin, but there are signs of bacteria in the mud, and also chlorine gas.21 But why mud instead of water? Does something agitate the underground water/clay interface so as to cause such fine mixing?22 Research may yet unravel these mysteries. But it will not remove the evolutionary bias that prevents scientists from seeing the obvious. The pristine fossils disgorged by these mud springs, still with either their original external iridescence or their original organic ligaments, can’t be 165 million years old! Both the fossils and the strata that entombed them must be recent. They are best explained as testimony to the global watery cataclysm in Noah’s day about 4,500 years ago.
Albatrosses exploit a phenomenon called dynamic soaring. They ascend from an essentially windless trough of a wave into an area with strong winds blowing above the wave. Crossing the boundary gives the birds a burst of kinetic energy that they use to climb to heights of 10 to 15 meters. (Photo by Phil Richardson, Woods Hole Oceanographic Institution) "Great albatross! The meanest birds Spring up and flit away, While thou must toil to gain a flight, And spread those pinions grey; But when they once are fairly poised, Far o'er each chirping thing Thou sailest wide to other lands, E'en sleeping on the wing." —Perseverando by Charles Godfrey Leland For Phil Richardson, it began with a simple question. How do albatrosses soar so effortlessly, flying around the world without flapping their wings? On an expedition in 1997 to the South Atlantic Ocean off Cape Town, South Africa, he added himself to the list of sailors, scientists, and poets who for centuries have been captivated by this wide-winged symbol of power and elegance overhead. Long before Samuel Taylor Coleridge immortalized the bird in The Rime of the Ancient Mariner, sailors looked on them with awe. "Certain great fowles as big as swannes, soared about us," wrote the great English sailor Richard Hawkins in 1593. More than 400 years later, Richardson, an oceanographer at Woods Hole Oceanographic Institution (WHOI), found himself similarly fascinated by the bird's dramatic, swooping flight pattern, its grace and efficiency. "It was surprising and delightful to see them almost magically soar upwind in wind speeds of 10 to 20 knots," Richardson said. A lover of sailing, plane piloting, and the natural patterns of the ocean, Richardson was intrigued by the aerodynamics of the bird and its soaring capacity. His scientific instincts kicked in. He wondered how albatrosses could soar in any direction they chose.
What particularly amazed him was how albatrosses seemed to fly into the wind without losing speed or steadiness, with no wing flapping and scant apparent effort. Other work got in the way, but a decade after he first observed the albatrosses, Richardson found time to pursue his wonder. He pored over historical studies of albatross aerodynamics, adding his own experiences and insights, and he slowly constructed a new picture to explain the mechanics of albatross flight. Nearly 14 years after his 1997 cruise, Richardson published his findings in the winter 2011 issue of the journal Progress in Oceanography. To achieve his new understanding, Richardson capitalized on interests and experiences over a life's journey. Richardson grew up loving the wind and the water. Raised on a cattle ranch north of San Francisco, he was particularly fond of science class, even though he missed school from time to time for cattle roundups. His father, Arthur Richardson, who died when Richardson was four, was an architect. So were his grandfather and his great-grandfather, Henry Hobson Richardson, who designed Trinity Church in Boston. The younger Richardson tried architecture, too, he said, "but it didn't take." Richardson's stepfather, George Wheelwright III, was a physicist-turned-rancher who co-founded Polaroid Corporation with Edwin Land, the pioneering camera inventor. Wheelwright moved out West after working as a flight navigator during World War II. As a boy, Richardson picked up a love of flying. He enjoyed model planes, and when he grew older, he earned a pilot's license. He enjoyed gliders, too, even hang-gliding. A lifelong love for sailing began on the old sailboat that his family used when they summered in Maine. After graduating from high school, Richardson left behind his days on the cattle ranch and headed off to study civil engineering at the University of California, Berkeley.
After college, in the Vietnam War era, Richardson opted for alternative service, becoming an officer with the U.S. Coast and Geodetic Survey, a federal agency that surveyed and charted waterways and coastal regions. Soon after, Richardson had a conversation with his cousin, Columbus Iselin, a former director of WHOI, who encouraged him to earn a Ph.D. in physical oceanography at the University of Rhode Island. Upon graduation, Richardson came to work at WHOI in 1974. Not much was known about the ocean's currents at that time. Research methods had yet to advance significantly from 19th-century approaches. A few nascent current meters existed, but oceanographers more often than not measured ship drifts or sent off messages in bottles to see where they ended up. "They would record where the bottles were picked up and how long they took to get there," Richardson said. "They'd put out hundreds of those things. But you only knew where a bottle started and stopped. We wanted to know how it got there. What was its real path?" To find out more about the movement of the ocean's currents, Richardson and others began taking advantage of new technologies. Their "bottles" evolved into sophisticated floats equipped with scientific instruments, which drifted along with currents. At first, oceanographers used military listening systems to record acoustic signals from subsurface floats; or they used surface drifters that they tracked via radio signals or satellites to reveal something about the fluid pathways through the ocean. "It was a time of interesting theories," Richardson said. "It was not easy to make measurements, so almost any measurement you made told you something new about the ocean. It was very exciting. As my father was an architect, I guess I studied the architecture of the ocean." 
Over his long oceanographic career, Richardson used floats, satellites, and hydrography—measurements of water's physical characteristics, such as temperature and salinity—to examine many of the major currents in the Atlantic Ocean. Each is something like a major highway in a global oceanic "interstate" system. He investigated the North Brazil Current transporting water northwestward over the equator; the Caribbean Current; the Gulf Stream; and the Agulhas Current carrying water from the Indian Ocean to the southern tip of Africa. The Agulhas Current doesn't flow directly into the Atlantic, but as the Agulhas veers back eastward off South Africa, huge, swirling rings of water called eddies pinch off and spiral westward into the South Atlantic. Eddies spinning off from major currents became another major focus of Richardson's research, and he explored their formation, pathways, and impacts. "Phil contributed to a better understanding of ocean currents and eddies," said Amy Bower, a senior scientist in Woods Hole's Department of Physical Oceanography and a colleague and friend of Richardson. "His figures are often used in textbooks, because of their clarity and his ability to portray complex ocean current circulation patterns in a relatively simple way." Richardson's knowledge of ocean currents and his natural curiosity occasionally prompted forays into peripheral scientific territory. In the months leading up to the quincentennial of Columbus's 1492 voyage, for example, Richardson was intrigued by questions about which still-unverified island Columbus first landed on in the New World. He collaborated with WHOI researcher Roger Goldsmith, applying scientific information on the effects of currents, winds, and variations in Earth's magnetic field to records in Columbus's logbook on his distances traveled and compass readings.
"We didn't have a lot of information about the early explorers," Richardson said, "just as we don't have a lot of information about the albatross." After a long career, Richardson formally retired in 1999, though he remains at Woods Hole as a scientist emeritus. "When you're a full-time scientist, you can't follow up on many of your interests," he said. But after he retired, he had time to pursue curiosities like albatross flight. In doing so, Richardson applied more than his understanding of the ocean's currents. He also drew on his love of sailing and flying. Albatrosses spend the majority of their long lives above the ocean. By the age of 50, an albatross has typically flown at least 1.5 million miles. Adults routinely fly hundreds of miles to gather food before returning home to feed a youngster. Placing its beak next to its offspring's, the adult albatross injects liquid food, converted from its prey of fish, squid, or krill, directly into the baby's beak. In recent times, many albatrosses are being lured to their deaths by bait on long fishing lines, Richardson said. The young birds face a perilous path to adulthood. About 40 percent don't make it, because they themselves become prey or because they don't learn to fly well enough. "Gravity and drag relentlessly force a gliding albatross down through the air," Richardson wrote in his paper. "To continuously soar, an albatross must extract sufficient energy from the atmosphere." But how? Richardson knew waves had to play a vital role. Strong prevailing winds blow steady parades of waves across great stretches of the ocean, especially in the vast Southern Ocean. That makes the ocean surface and winds above it "lumpy and bumpy and gusty," he said. "An albatross can take advantage of that." Early on, like many other students of albatross flight, Richardson assumed that the birds use updrafts of air that flow up the backs of waves—similar to updrafts that form over ridges on land. 
Certainly albatrosses exploit wave updrafts. And certainly they also gain energy from tailwinds blowing horizontally. But these couldn't account for the "accelerated twisting, turning, swooping flight of albatrosses" that Richardson had observed. Nor could they answer a question that kept nagging him: "How can they be flying into the wind and, at the same time, keep up alongside our ship?" Richardson was inspired by a theory described by the Nobel laureate physicist Lord Rayleigh in a paper written in 1883. Rayleigh knew that horizontal winds don't blow uniformly; often they blow faster the higher you ascend. He proposed a two-layer scheme with an imaginary boundary, above which winds blew faster. This boundary is often referred to as a "wind shear." A bird flying up across a wind shear would abruptly gain airspeed and could use this pulse of kinetic energy to climb upward. Then the bird could turn and swoop downward. Descending through the boundary, it would gain airspeed by flying against weaker winds. Richardson saw something similar going on in the ocean. Building on a hypothesis by British scientist Colin Pennycuick, he outlined the following scenario. In the trough of waves, there is little wind, because the waves block it. But above the waves and their troughs, winds blow briskly across the ocean in thin layers, stacked somewhat like cards in a deck: Lower layers are slowed by air-sea friction near the ocean surface, but wind speeds increase as you go farther from the surface and higher up. An albatross ascending from a wave trough at an angle would encounter progressively faster winds. This would increase the bird's speed through the air—a burst of kinetic energy that it uses to climb to heights of 10 to 15 meters. Then the albatross makes a tight turn downwind and swoops down into another wave trough, adding airspeed as it descends through the wind shear into progressively slower winds.
Each addition of airspeed balances the loss of energy caused by drag on the bird. With another turn in the trough, the albatross ascends to begin the cycle again. Each swoop cycle takes about 10 seconds. (See accompanying diagrams.) This phenomenon is called dynamic soaring. The pilot in Richardson knew that in the late 1990s, radio-controlled glider pilots began using the same tactic—looping in strong winds blowing over ridges, rather than waves—to achieve surprisingly fast speeds. A new world record of 468 miles per hour was set this year with an albatross-sized glider. As he did with his diagrams portraying complex ocean currents, Richardson devised a relatively simple model that captures the essential physics of dynamic soaring of albatrosses, incorporating both winds and waves. Evaluating the two theories of albatross flight, he concluded that using wind shear, rather than updrafts from waves, accounted for 80 to 90 percent of the energy needed for albatrosses to fly. Using his model, he calculated that albatrosses need a minimum wind speed of 7 knots to soar. He also calculated that an albatross could soar upwind at a speed of 12 knots, "which is just what I observed from our ship," he said. But how did they fly upwind? Then Richardson thought about his experience sailing. That was his Eureka moment. "To travel upwind, a sailor tacks into the wind, alternating sailing in a direction around 45 degrees to the right and then to the left of the wind direction," he said. "That's what albatrosses are doing—they're tacking!" Again using his model, Richardson calculated that the fastest course upwind for an albatross is to tack about 30 degrees to the right and left of the wind. "One trick I observed is that the birds climb upwind but often dive perpendicular to the wind to maximize their average velocity in an upwind direction," he said.
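The airspeed bookkeeping in this two-layer picture can be sketched numerically. The mass and wind speeds below are illustrative assumptions, not figures from Richardson's model:

```python
# A bird climbing upwind crosses from slow air near the trough into
# faster air above the shear. Its groundspeed can't change instantly,
# so its airspeed jumps by the wind-speed difference across the shear.
m = 9.0            # albatross mass in kg (assumed)
airspeed = 16.0    # m/s relative to the air below the shear (assumed)
wind_below = 5.0   # m/s (assumed)
wind_above = 10.0  # m/s (assumed)

delta_w = wind_above - wind_below
new_airspeed = airspeed + delta_w

# Kinetic energy gained in the crossing, available to offset drag:
gain = 0.5 * m * (new_airspeed**2 - airspeed**2)
print(new_airspeed, gain)  # 21.0 832.5
```

On the downwind swoop the same jump happens in reverse: descending into slower-moving air again raises the bird's speed relative to the surrounding wind, which is how each roughly 10-second cycle can keep balancing the energy lost to drag.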
To paint a more precise picture of albatross flight, Richardson said he would love to see sophisticated microsensors developed that could be attached to albatrosses. Such sensors could measure the nuances of the albatrosses' flights and their navigation through wind and wave patterns. Richardson isn't alone in his interest in the mechanics of soaring and gliding. "Flight dynamics at small scales has suddenly become a hot topic," he said. "Bird, bat, and insect flight has become very interesting to the military." The new autonomous "drone" flying vehicles being developed for military usage, especially the smaller ones, may benefit from the study of bird flight dynamics. For his own part, Richardson is moving on to new hobbies. His latest interest is photography. He loves capturing the beauty and grace of birds in flight. Whether or not you can chart the path of an albatross or analyze the mechanics of its flight, Richardson said, you can recognize nature's beauty in a snapshot of a bird in flight. And beyond photography, there is plenty of oceanographic data still to be analyzed, he said. The oceanographer in him will never give up that passion. There are endless patterns of wind and water to ponder.
Butterfly Unit – From Caterpillar to Butterfly
Art, Language Arts, Math, Science
Title – Butterfly Unit – From Caterpillar to Butterfly
By – Leslie Tetrault
Primary Subject – Science
Secondary Subjects – Math, Art, Language Arts
Grade Level – 1-2
Butterfly Unit Contents:
- What Caterpillars Eat
- From Caterpillar to Butterfly
- Life Cycles of Butterflies
- Life Cycle Sequencing
- Eat Like a Butterfly
- How to Attract Butterflies
- Symmetry Lesson
- Butterfly Poetry
- What Have We Learned?
Note from LessonPlansPage.com: This Butterfly Unit uses some materials (books, life cycle cards, worksheets, butterfly shape cutouts, etc.) that are not included. You may be able to create your own version of the materials, purchase the materials, do without the materials, or contact the author at the email address at the bottom of this lesson plan to request more information on the materials.
Standards Met (Science): 1.1 Recognizes diversity of living things, 1.1.1 Identifies diversity of living things, 1.3 Recognizes parts of living things, 1.3.1 Identifies parts of living things. (Language Arts): 1.1 Listens attentively, 1.2.2 Contributes orally within groups, 1.2.5 Uses effective speaking strategies.
Objectives: The objective of this lesson is that the children will discover, through reading and discussion, the life cycle of the caterpillar.
Introduction: To introduce this lesson, we will do the first and second steps of the K-W-L chart. This will help me learn what the children already know about the life cycle of the butterfly, and what they would like to learn from this lesson.
Sequence of Activities: After working on the first two parts of the K-W-L chart, we will read and discuss books on our topic. I will use a lot of open-ended questions to assess their knowledge and to help them think and focus. After we have read and discussed the books, we will write on our chart what the children have learned.
If the children still have unanswered questions, we will refer to our books and find the answers. I will put our chart on the wall where the children can refer to it. The books we read will also be available in the same area in case the children want to learn more later. Materials: Creepy, Crawly Caterpillars; From Caterpillar to Butterfly; Caterpillar Caterpillar; Discovering Butterflies; Amazing World of Butterflies and Moths; The Monarch Butterfly; The Butterfly Book; A Kid’s Guide to Attracting, Raising, and Keeping Butterflies; paper for K-W-L chart. Self-Evaluation: I will evaluate the children’s learning by reviewing the K-W-L chart with them at the end of the unit to see what they have learned and to answer any questions they still have. E-Mail Leslie Tetrault!
The Storyline method of teaching and learning is used in schools, universities, and communities around the world. In Storyline, curriculum is structured through story format; teachers choose settings appropriate for the curriculum and students develop characters to inhabit those settings. Through the interactions of the characters and the setting, plot develops. Students find that they need information to help their characters solve problems that arise within the plot. Teachers have used Storyline since the mid-1960s to integrate curriculum, differentiate student work, develop student ownership of the curriculum, and deepen students' sense of community. This site was developed in response to a growing desire to have one source which could direct researchers, grant writers, teachers, and others to resources on Storyline research and practice. Resources are from authors in many countries, and items in languages other than English are included. Most resources are linked, but a few resources have been provided in full by the copyright owners. Please continue to check back with us as we expect the site to continue to develop. The first stop for those unfamiliar with this method of teaching and learning should be Storyline International, which provides background information on the Storyline strategy, news from Storyline practitioners around the world, and links to Storyline websites in many different languages. These sites also provide information on conferences.
“How can we produce a controlled chromatic aberration for study?” Chromatic aberration is one of the most perplexing phenomena in the study of optics, but creating controllable versions for laboratory study is extremely difficult. So how can we build a machine that produces adjustable chromatic aberrations at will? Let’s use an engineering mindset to solve this scientific problem. We know that if we pass monochromatic light into transparent glass at an angle, some of it will be reflected and some refracted. This creates two beams of light of identical wavelength for us to study. Now, let’s take this a step further. We need to direct these beams back toward each other so that they interfere, which can be accomplished by placing mirrors in the path of the beams. So what if we were to bring this system into reality? This machine is known as an interferometer, and it is used in physics labs all over the world to study the intricacies of chromatic aberration.
A photocopiable resource book designed as supplementary material for young teenage English language learners at Beginner and Elementary level. It aims to activate and improve all the essential areas of language and grammar at this level and is ideal for putting language skills into active practice. The worksheets can be used in pairs or small groups to promote interactive classroom practice, and are also suitable for self-study. Features: 64 Vocabulary, Grammar and Skills Worksheets; Audio CD; Answer Key and Audio Scripts.
All the gory details: First, two protons collide and form a deuterium nucleus (one proton + one neutron). This is not as simple as it sounds. Only one collision in ten trillion trillion actually produces deuterium. In fact, the average proton must wait some 10 billion years to take part in the proton-proton chain. During this collision, a positron and a neutrino are released. A positron is the anti-matter equivalent of an electron (an electron with a positive charge). As soon as the positron encounters an electron, the two annihilate one another to form two gamma rays. The neutrino has virtually no mass and passes right through the sun and out into space. The deuterium nucleus collides with a proton in less than 1 second. The two form a light helium nucleus, made up of two protons and one neutron, and another gamma ray is released. The gamma rays carry away the energy that results from the conversion of mass to energy. On average, about a million years elapses before two light helium nuclei collide to form a regular helium nucleus (made up of 2 protons and 2 neutrons), releasing two protons. Final tally: 4 protons --> 1 He nucleus + 6 gamma rays + 2 neutrinos
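The final tally can be sanity-checked with a short back-of-the-envelope script. This is only an illustrative sketch using standard reference masses (values not taken from the text above); it shows that roughly 0.7% of the protons' mass is converted to energy in the process:

```python
# Illustrative mass-to-energy bookkeeping for the final tally above.
# The masses are standard reference values in atomic mass units (u).
M_PROTON = 1.007276   # mass of a free proton, u
M_HE4 = 4.001506      # mass of a helium-4 nucleus, u
U_TO_MEV = 931.494    # energy equivalent of 1 u, in MeV

# Mass lost when four protons become one helium nucleus...
mass_defect = 4 * M_PROTON - M_HE4        # about 0.0276 u, roughly 0.7%
# ...is carried away by the gamma rays and neutrinos.
energy_mev = mass_defect * U_TO_MEV
print(round(energy_mev, 1))               # about 25.7 MeV per helium nucleus
```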
Does This Sound Like Your Child? Many children of normal or superior intelligence do not perform well in school. Do some of these symptoms seem to apply to your child? • Difficulty in learning to read or spell • Reverses letters, syllables and words • Cannot understand written or spoken words • Struggles with math calculations and/or concepts • Gets upset over changes in established routine • Easily distracted - short attention span • Lacks continuity of effort and perseverance • Often hyperactive and restless - easily confused and impulsive • Avoids homework or gets very frustrated working on homework • Becomes stubborn, uncooperative, overly apprehensive and illogical • Feels sad, anxious, and/or irritable, particularly on school days At times, these conditions have been described by various professionals as learning disabilities, perceptual handicaps, minimal brain dysfunction, dyslexia, developmental aphasia, etc. Yet these children have great potential for success when given appropriate and timely help. Innovative educational methods based on theories of child development can help them achieve their potential within a regular or special education classroom. Children who struggle in school may develop anxiety and/or depression because school is a primary activity in their lives and they are unable to meet the expectations of parents, teachers, or themselves. Some of them may benefit from medication, some from nutritional and diet control; all need academic support, emotional support and cooperation between parents and professionals. Explaining the Problems Further Learning difficulties often stem from difficulties with perception. Perception is the process of understanding experience, then comprehending and organizing the information given by the senses. Without consistent perception it is difficult to separate ideas, words or sounds; to discriminate between the important and the irrelevant; or to coordinate and use information effectively.
To illustrate, the individual may be experiencing one or more of the following difficulties: Visual: Has considerable difficulty remembering what words look like even after much exposure to them; may transpose words (“was” for “saw”); or may have trouble focusing on a word. Auditory: May confuse similar words, such as “lecture” for “electric,” or will transpose sounds or syllables, such as “plasket” for “plastic” or “aminal” for “animal.” Also, background noises disorganize his or her learning ability. Thinking: Ideas are often out of order, characterized by poor sequencing. He/she has trouble following directions and following through until tasks are complete. Language: Usage of words may be confused or poorly organized. Self-expression may be difficult, with a tendency to use only basic vocabulary. Movement: May be awkward in coordinating arms and legs, may be a slow writer or may have illegible handwriting. Often he/she has poor integration of vision and movement, and very often cannot determine the consequences of a particular movement. It must be emphasized that these examples constitute only a few of the problems experienced by these children and adults. Identification of the learning disabled child is difficult because the symptoms and deficits can be exhibited occasionally (but not continually) by all children. Therefore, having a complete evaluation is important to assess and diagnose cognitive, perceptual and academic strengths and weaknesses. Participating in a program designed to address their specific needs can improve their skills and the quality of their lives. Today the future is bright for students with learning difficulties. They can lead normal and productive lives. Many can go to college and become respected leaders of their community.... If . . . their problem is correctly diagnosed. If . . . they are provided with specialized attention. If . . . they are treated with compassion and understanding. If . . .
instead of pity - they are given the tools to help themselves. ACLD LEARNING CENTER (established 1972) People in the Mahoning Valley are fortunate because there is a non-profit, tax-deductible, centrally located learning center available and ready to help with these learning difficulties. ACLD Learning Center is equipped to evaluate learning problems and offer effective solutions. This program specializes in tutoring individuals ages 4½ through adulthood who are having the following problems: • Reading, Math and/or Written Expression skills below age/grade level expectations • Learning Disabilities • Attention Problems that Affect Learning • Mild Early Sensory & Language Delays • Perceptual Motor Difficulties How does ACLD Learning Center help address learning difficulties? The most effective approach to strengthening learning skills is through individualized tutoring, often in conjunction with perceptual motor training. The goal is to individually prescribe academic remediation for reading, writing and arithmetic skills, and to individually prescribe auditory, visual and motor activities to strengthen underlying difficulties. Professional recommendations for remediation will be implemented by trained professionals in each of these fields. The complete evaluation offered at ACLD Learning Center is optional and not always necessary before enrolling in the tutoring program. The psycho-educational evaluation involves two to three hours of assessment time. Approximately two weeks later there will be a parent conference with a written report and specific recommendations. Our diagnostician will then talk to school personnel if requested by the parent/guardian.
The psycho-educational evaluation explores: • Organizational skills • Academic skills • Reading Accuracy • Phonetic Ability • Auditory Discrimination • Auditory Memory • Visual Memory • Perceptual Motor Efficiency Individual Tutoring and Programs Diagnostic screening instruments are always given at the onset of tutoring to measure the student's skills and design a program that meets his/her needs. Those enrolled usually attend a 60-minute session. The frequency of the sessions may range from 1 to 3 times per week, depending on the prescribed needs. Most children benefit most from attending two sessions each week. Diagnostic Tests: The ACLD Learning Center uses diagnostic tests to determine your child’s approximate level of academic functioning. These tests are designed to identify strengths and weaknesses. This allows us to set goals for your child and meet his or her individual needs. Developmental Program: The ACLD offers a developmental program that includes training in Visual Perception, Auditory Perception, and Perceptual Motor Skills. This program is designed to help build your child’s underlying learning skills, as well as remediate specific kinds of developmental problems. Visual Perception: This program helps students handle the way visual information is perceived, interpreted, and stored in the learning process. It involves skills such as: visual discrimination, visual memory, visual closure and position in space. Auditory Perception: This program helps students handle the way auditory information is perceived, interpreted and stored in the learning process. It involves skills such as: auditory discrimination, auditory memory, and auditory closure (sound blending). Perceptual Motor Skills: This program incorporates movement activities designed to strengthen the student’s balance, eye-hand coordination, body awareness, position in space, and integration of movement and sensory skills. It also targets organizational, fine motor, and memory skills.
Reading Skills: Reading recognition involves being able to sight-read or decode words. Reading comprehension involves understanding what is read. When evaluating reading, scores can be classified into three levels: independent, instructional, and frustration reading levels: Independent – 94% accuracy and above Instructional – 84%-93% accuracy Frustration – below 84% accuracy Math Skills: Students are evaluated in the areas of understanding concepts, computation, and applied skills in problem solving. The concepts include: numeration, algebra, geometry, measurement, and data analysis. Written Expression: Students work to develop written communication skills, focusing on the writing process (steps in writing), applications (types of writing), and conventions (grammar, spelling, punctuation). We start with writing good sentences and build to various types of essays, including comparison/contrast, descriptive, persuasive, and correspondence. Students also learn to answer two- and four-point response questions that are commonly found on the OAA and OGT. Grade Equivalency: These scores are used as approximate grade levels to give us a starting point for your child and also allow us to demonstrate progress and growth. Scores on diagnostic tests may vary. Goals: We use the State Standards to select appropriate goals for each child. Some of the goals may not reflect the grade level your child is in because we may need to back up and develop lower-level skills and knowledge. Assessment of Goals: After students practice and master new skills and knowledge, they will receive an informal assessment. The results will be reported in the monthly progress reports. Parent Conferences: The ACLD Learning Center will schedule parent conferences near the end of the school year and again at the end of summer tutoring. You will have the opportunity to meet with your child’s tutors to discuss his or her progress.
ACLD Learning Center is open Monday through Thursday from 9:00 am to 6:00 pm. Sessions for children take place during after-school hours. For a tour of the learning center or for more information, call (330) 746-0604. ACLD Learning Center offers a complete, on-going, comprehensive tutoring program throughout the school year as well as during the summer months. Students who struggle with learning often experience a deterioration of skills without support over the summer. Summer Hourly Tutoring Summer tutoring on an hourly basis is available Monday through Thursday 8:00 am - 4:00 pm. This is the same tutoring program offered after school throughout the school year. If you are interested in registering for The ACLD Learning Center Tutoring Program please click here.
A locale identifies language-specific information about how users in a specific region, culture, or custom expect data to be presented. Locales define how data in different languages is interpreted, sorted, and collated. Directory Server supports multiple languages through the use of locales. A locale specifies the following information. The code page is an internal table used by an operating system to relate keyboard keys to character fonts displayed on a screen. A locale can indicate what code page an application should select for interaction with an end user. The collation order provides information about how the characters of a given language should be sorted. It specifies: the sequence of the letters in the alphabet; how to compare letters with accents to letters without accents; whether there are characters that can be ignored when comparing strings; and the direction (left to right, right to left, or up and down) in which the language is read. The character type distinguishes alphabetic characters from numeric or other characters. It defines the mapping of uppercase letters to lowercase letters. For example, in some languages the pipe character (|) is considered punctuation, while in others it is considered alphabetic. The monetary format specifies the monetary symbol used in a region, whether the symbol goes before or after its value, and how monetary units are represented. The time and date formats determine the appearance of times and dates in a region. The time format indicates whether the locale uses a 12-hour or 24-hour clock. The date format includes both the short date order and the long date format, and includes the names of months and days of the week in each language.
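The ideas above can be sketched with Python's standard locale module. This is a minimal illustration, not Directory Server code; the portable "C" locale is used here because region-specific locale names such as "de_DE.UTF-8" are platform-dependent and may not be installed on every system:

```python
import locale

# Select the portable "C" locale (always available). A real application
# would pass a region-specific name such as "de_DE.UTF-8" (platform-dependent).
locale.setlocale(locale.LC_ALL, "C")

# Collation: sort strings according to the active locale's collation order.
# In the "C" locale this is raw code-point order, so "Apple" sorts first.
words = ["banana", "Apple", "cherry"]
print(sorted(words, key=locale.strxfrm))   # ['Apple', 'banana', 'cherry']

# Monetary and numeric conventions are exposed through localeconv().
conv = locale.localeconv()
print(conv["decimal_point"])               # "." in the "C" locale
```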
The first time American women fought in a war was during the American Civil War. Women were not permitted to be drafted, so some disguised themselves as men and fought; a few were found to be women only when they were discovered dead. American women were first officially permitted to serve in the army during the First World War. Many of these women were nurses and staff who cooked and provided food for wounded soldiers. However, most of them were white, as the legacy of slavery and prejudice at the time kept black women from offering their services to America. The successful formation of the all-black female battalion was thanks to Mary McLeod Bethune, an African American civil rights activist who appealed to the then-First Lady of America, Eleanor Roosevelt, to create more meaningful roles for black women in the army to help balance out the shortage of soldiers. Mary’s appeal gained the attention of the First Lady, who then helped the military create a space for an all-black female group to work in the war in Europe. Women were recruited and trained until May 1942, when the Women’s Auxiliary Army Corps was formed and women of all races were allowed to serve in the war officially. Soon after, in July 1943, through their hard work and dedication, women were given full benefits in the military, and the word “auxiliary” was removed from their name; the corps became known as the Women’s Army Corps. The military trained women of all races in all divisions and sections of the army in preparation for war. In 1945, history was made when the first all-black female battalion was sent from the U.S. to serve in parts of Europe during the Second World War. Known as the 6888th Central Postal Directory Battalion, this all-black female battalion of the Women’s Army Corps was sent to parts of France and England to help solve problems that the Second World War brought with it.
With the main task of clearing several years of abandoned and backlogged mail in Europe, the 6888th Central Postal Directory Battalion was trained and sent off to help manage the postal service in Europe. They set off for Europe on February 3, 1945, and arrived in Britain on February 14, where they were quickly taken to Birmingham, England. The battalion sent to Europe was made up of 855 women who served under the command of Major Charity Adams. Their motto was “No mail, low morale,” and they were popularly known as the Six Triple Eight. Between 1945 and 1946, the majority of the women worked in the mail service while others served as cooks, mechanics, nurse assistants and in other roles as needed. They worked under dangerous and risky conditions in abandoned and pest-infested aircraft hangars and offices throughout the war. For their hard work, they were honoured with the European-African-Middle Eastern Campaign Medal, the Good Conduct Medal and the World War II Victory Medal while they were still in service.
Ries and WHOI colleagues Anne Cohen and Dan McCorkle kept conchs (Strombus alatus) in seawater under different levels of atmospheric carbon dioxide (CO2) to see how their shells were affected by the increased ocean acidity caused by elevated CO2. On the left, a conch from seawater under today's CO2 levels (400 parts per million, or ppm) has a normal shell, with normal bumpy protuberances. The conch on the right, reared under very high CO2 conditions (2,850 ppm), has a shell that has begun to deteriorate, its protuberances dissolved away in the more acidic seawater. (Photo by Tom Kleindinst, Woods Hole Oceanographic Institution) A new study has yielded surprising findings about how the shells of marine organisms might stand up to an increasingly acidic ocean in the future. Under very high experimental CO2 conditions, the shells of clams, oysters, and some snails and urchins partially dissolved. But other species seemed as if they would not be harmed, and crustaceans, such as lobsters, crabs, and prawns, appeared to increase their shell-building (see interactive). “Marine ecosystems—particularly those based on calcium-carbonate shell-building, such as coral or oyster reefs—could change with increasing atmospheric CO2 (carbon dioxide),” said Justin Ries, a marine biogeochemist and lead author of the study, published online Dec. 1, 2009, in the journal Geology. Sensitive species could lose their protective shells and eventually die out, while other species that build stronger shells could become dominant in a future ocean that continues to absorb the buildup of CO2 in the atmosphere caused by industrial emissions, deforestation, and other human activities.
Excess CO2 dissolves into the ocean and is converted to corrosive carbonic acid, a process known as “ocean acidification.” At the same time, the CO2 also supplies carbon that combines with calcium already dissolved in seawater to provide the main ingredient for shells—calcium carbonate (CaCO3), the same material found in chalk and limestone. While a postdoctoral scholar at the Woods Hole Oceanographic Institution (WHOI), Ries worked with WHOI scientists Anne Cohen and Dan McCorkle. In tanks filled with seawater, they raised 18 species of marine organisms that build calcium carbonate shells or skeletons. The scientists exposed the tanks to air containing CO2 at today’s level (400 parts per million, or ppm), at levels that climate models forecast for 100 years from now (600 ppm) and 200 years from now (900 ppm), and at a level (2,850 ppm) that should cause the types of calcium carbonate in shells (aragonite and high-magnesium calcite) to dissolve in seawater. The test tanks’ miniature atmospheres produced elevated CO2 in the tiny captive oceans, generating higher acidity. The researchers measured the rate of shell growth for the diverse species ranging from crabs to algae, from both temperate and tropical waters. They included organisms such as corals and coralline algae, which form foundations for critical habitats, and organisms that support seafood industries (clams, oysters, scallops, conchs, urchins, crabs, lobsters, and prawns). In waters containing more CO2, organisms have more raw material (carbon) to use for shells. But they can only benefit from the high CO2 if they can convert the carbon to a form they can use to build their shells and can also protect their shells from dissolving in the more acidic seawater. The scientists found clear differences among species. “The wide range of responses among organisms to higher CO2—from extremely positive to extremely negative—is the truly striking thing here,” Ries said. 
As expected, in the highest CO2 used, the shells of some species, such as conchs—large, sturdy Caribbean snails—noticeably deteriorated. The spines of tropical pencil urchins dissolved away to nubs. And clams, oysters, and scallops built less and less shell as CO2 levels increased. However, two species of calcifying algae actually did better at 600 ppm (predicted for the year 2100) than at present-day CO2 levels, but then they fared worse again at even higher CO2 levels. Temperate (cool-water) sea urchins, unlike their tropical relatives, grew best at 900 ppm, as did a temperate limpet. Crustaceans provided the biggest surprise. All three species tested—the blue crab, American lobster, and a large prawn—defied expectations and grew heavier shells as CO2 swelled to higher levels. "We were surprised that some organisms didn't behave in the way we expected under elevated CO2," said Anne Cohen, second author on the Geology paper. "Some organisms were very sensitive [to CO2 levels], but there were a couple [of species] that didn't respond 'til it was sky-high—about 2,800 parts per million. We're not expecting to see that [CO2 level] any time soon." Ries and colleagues found that species with more protective coverings on their shells and skeletons—crustaceans, the temperate urchins, mussels, and coralline red algae—are less vulnerable to the acidified seawater than those with less protective shells, such as conchs, hard clams, and tropical urchins. All of the test organisms continued to create new shell throughout the experiment, Ries said, but some suffered a net loss of shell because older, more massive portions of their shells dissolved under the highest CO2 conditions. To build shells, organisms extract calcium ions (Ca2+) and carbonate ions (CO32-) from seawater, which combine into the solid crystals of calcium carbonate (CaCO3) that shells are made of. However, seawater also contains hydrogen ions (H+), or protons. 
These tend to bond with negatively charged carbonate ions, leaving fewer for organisms to build shells. So shell-builders have a task: They have to eliminate hydrogen ions in the places where they lay down shell. One theory, proposed and discussed by Cohen and colleague, geochemist Ted McConnaughey, is that shelled organisms solve the problem by creating small, enclosed, fluid-filled spaces next to their shells. From these spaces, they forcibly pump out protons, leaving behind calcium and carbonate ions that combine into the crystals that compose their shells. In a more acidic ocean with more protons, species with stronger “proton pumps” could have an advantage. But even these species might pay a price: Like an air-conditioner working harder in hotter weather, the pumps would require more energy. “This increased energy consumption to build shells may come at the expense of other critical life processes, such as tissue growth and reproduction,” Ries said. Temperate urchins fared better than their tropical relatives in the experiments, and Ries and colleagues hypothesize an evolutionary explanation. Cold water absorbs more CO2 than warm water, so temperate seas already contain more CO2 and hydrogen—and therefore less carbonate—than the tropics. Ries speculates that temperate species may have evolved stronger proton pumps to compensate for the naturally lower carbonate levels in these waters. The results, Ries said, suggest that the predicted rise in CO2 over the coming centuries could cause changes in marine ecosystems—particularly those composed largely of shell-builders, such as tropical coral reefs. Moreover, even organisms that appear to benefit from the elevated CO2 may suffer from the decline of less tolerant species upon which they depend for food or habitat. 
“These results suggest that different types of marine calcifying organisms will respond in very different ways to any future ocean acidification caused by increased CO2,” said Ries, now an assistant professor at the University of North Carolina. "Crabs, lobsters, shrimp, calcifying algae, and limpets could build more massive skeletons, while tropical corals and urchins, and most snails, oysters, and clams could be less successful at defending themselves from predators than they are today. However, given the complex relationships that exist amongst benthic marine organisms, it is difficult to predict how even subtle changes in organisms' abilities to calcify will ultimately work their way through these ecosystems." Justin Ries was a postdoctoral scholar of the Ocean and Climate Change Institute at WHOI. This work was also supported by the WHOI Tropical Research Initiative and the National Science Foundation.
Methane hydrate is often called “fiery ice.” Methane hydrate looks like ice, and starts burning when an open flame is brought close to it; hence the name. Only water is left after combustion. It is a strange substance. Note that the natural methane hydrate existing in areas surrounding Japan, which MH21 aims to develop, is not a pure white agglomeration like artificial methane hydrate. Since methane hydrate exists in between the sand particles of sandy sediments as shown in the photo below, methane hydrate-bearing layers do not appear white but rather look similar to soil. What is methane hydrate? Water molecules form a cage-like structure in a certain temperature and pressure environment. This cage-like structure encaging methane molecules is called methane hydrate. Methane is the primary component of natural gas, and the development of methane hydrate follows almost the same procedure as that for natural gas. Although it is generally called gas hydrate, MH21 habitually uses the term methane hydrate, since almost 100% of the natural hydrate distributed around Japan contains methane. [Figure: a small cage in the crystalline structure of methane hydrate. Green: methane molecule; red: water molecules. The water molecules form the cage-like structure and a methane molecule is contained in it.] So, how much methane is contained in methane hydrate? For example, 1 m³ of methane hydrate dissociates into approximately 160–170 m³ (at 0ºC and 1 atmosphere) of methane gas, although the exact amount varies depending on the measuring environment. Conversely, methane hydrate can hold approximately 160–170 times its own volume of methane. Aside from the development of natural methane hydrate, studies that exploit this property are now being conducted in an attempt to hydrate natural gas, which consists primarily of methane, so as to decrease its volume and ensure better transportation efficiency.
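The 160–170× figure above can be wrapped in a tiny conversion helper. This is only a sketch; the default factor of 165 is simply the midpoint of the quoted range, not a measured constant:

```python
def hydrate_to_gas(hydrate_m3, expansion=165):
    """Approximate volume of methane gas (m^3 at 0 degrees C, 1 atm) released
    when hydrate_m3 cubic meters of methane hydrate dissociate.
    The default factor 165 is an assumed midpoint of the 160-170x range."""
    return hydrate_m3 * expansion

print(hydrate_to_gas(1))     # 165 m^3 of gas from 1 m^3 of hydrate
print(hydrate_to_gas(2.5))   # 412.5
```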
(Reference: JOGMEC (Japan Oil, Gas and Metals National Corporation) website) We live in an environment where the pressure is 1 atmosphere and water freezes at 0℃. Under a pressure of 1 atmosphere, methane hydrate can exist only at a low temperature of -80℃ or below. When the temperature is 0℃, it can exist only under a high pressure of 23 atmospheres or above. Low temperature and high pressure: methane hydrate requires such an environment in order to exist. The areas that permit the existence of methane hydrate on the Earth are limited to (1) permafrost zones in polar regions, and (2) layers within several hundred meters of the seafloor in oceans with a depth of 500 m or more.
A Mother’s Day Recipe - Grades: PreK–K, 1–2, 3–5, 6–8 Looking for a way to tie in some of your great poetry studies from last month? How about Mother’s Day? Wrap up all things poetry with a Mother’s Day recipe poem for that special mom in your students' lives. Have your students research and study different types of recipes, from food to playdough and more, so they can get a handle on what it takes to be a recipe writer. Once they are completely familiar with recipes, they can take what they have learned (format, word choice, and purpose), and mix it all up to create a one-of-a-kind recipe for Mom. Read on for the step-by-step process I use with my class when creating recipe poems: Step 1: Introduction Bring in all sorts of recipes for students to view. I show a variety of recipes and invite them to bring in some of their own. Ask the class to write down or share out loud what they notice. Chart this information on chart paper for students to refer back to when creating their poems. Some guiding questions might be: What do you notice about the format of the recipe? What information comes first? What information comes last? How is the information shared with you? What is the purpose of writing a recipe? Step 2: Word Choice We spend some time discussing the word choice and sentence structure of recipes. Again, we start with an inquiry. I ask, “What do you notice about the words used in your recipe example?" Students go back to their recipes and share their findings with the class. This information is also charted on our class findings. We make a list of recipe verbs because many recipes use a variety of verbs as commands. For example: “Mix 3 cups of flour," or "Sprinkle with salt and pepper.” As students continue exploring recipes, I ask them to write down the verbs that they come across. These verbs become a great word bank for students to use when writing their own recipes. We also talk about how recipes are written in numerical order. 
Many of the recipes we looked at were written in step order. This observation was noted on our class findings chart paper. One student shared that instead of having a beginning, middle, and end, recipes have a three-part process of introduction to the product, ingredients, and then procedure. There will be a variety of findings that your students will catch. Remember to write these down on the chart paper for the class to use as a guide when writing their own recipe poems. Step 3: Brainstorm and Write Once my students have researched how to create a recipe, it becomes their turn. We start with a brainstorming exercise where I ask the class to think about all the characteristics found in moms. From there, I ask them to use fractions (Yes! Math review!) to create amounts of each characteristic. I usually ask, “If you had to measure the amount of each trait you listed for your mom, how much would you need?” From there, students create their ingredients list. Once that is finished, I let them write creatively. I have a form for those who want a structure that you can download here. For those who don’t, I just let them create. I keep samples and visuals (class notes on chart paper) out for students to refer back to. I ask my students to wait until the end to create the introduction to their poem. This is a short description of what it is they plan to make. While they have the option to write this first, I find it is easier for them to describe their recipe and connect their ideas after they get them down on paper. Step 4: Revise, Title, Publish After rough drafts are completed, I have students partner up and share their poems. Partners are advised to critique with purpose. This means that any feedback given comes with a suggestion or idea to help improve their partner's work. Students make revisions to their poems and are then asked to create a title. 
When titling our work, we abide by the “Rule of three C’s.” This is a quick reminder that I learned at the San Marcos Writing Project. A title can be Common, (A Recipe for Mom), Catchy (Mom’s Marvelous Masterpiece), or Creative (Never-ending Love). I have the class rewrite their poems as nicely as possible. They can choose which type of paper they want to use. They can also choose to type it, or write it out by hand. (I personally like hand-written. It is such a neat thing to go back and see what your child’s writing once looked like.) For the finishing touch, I take pictures of each student wearing a chef’s hat and apron while holding a mixing bowl. I’ve used the picture differently every time I’ve introduced this recipe poem activity. One year I put the picture at the top of the recipe. Another year I made a card and the picture was on the front. This year I noticed a Scholastic post about making a book for Mother’s Day complete with an "About the Author" page. I thought about adding something like that to the back of the recipe with the picture in the middle. The sky’s the limit with ways you can incorporate their cute little faces into the final product. We wrap up their finished work with tissue paper and send it home as a gift for the kids to give. As a mother, I feel like what I want captured in a gift are the words my child might say at any given age and what they look like. Creating a recipe poem allows students a chance to write about how special their mom is and the process it could take to create her. What other fantastic ideas do you use to celebrate Mother’s Day? I’d love to hear from you. Thank you for reading.
In mathematics, an addition chain for computing a positive integer n can be given by a sequence of natural numbers v and a sequence of index pairs w such that each term in v is the sum of two previous terms, the indices of those terms being specified by w:
- v = (v0, ..., vs), with v0 = 1 and vs = n
- for each 0 < i ≤ s: vi = vj + vk, with wi = (j, k) and 0 ≤ j, k ≤ i − 1
Often only v is given, since it is easy to extract w from v, but sometimes w is not uniquely reconstructible. The length of an addition chain is the number of sums needed to express all its numbers, which is one less than the cardinality of the sequence of numbers. An introduction is given by Knuth.
As an example: v = (1, 2, 3, 6, 12, 24, 30, 31) is an addition chain for 31 of length 7, since
- 2 = 1 + 1
- 3 = 2 + 1
- 6 = 3 + 3
- 12 = 6 + 6
- 24 = 12 + 12
- 30 = 24 + 6
- 31 = 30 + 1
Addition chains can be used for exponentiation; the chain above computes x^31 using seven multiplications:
- x^2 = x^1 × x^1
- x^3 = x^2 × x^1
- x^6 = x^3 × x^3
- x^12 = x^6 × x^6
- x^24 = x^12 × x^12
- x^30 = x^24 × x^6
- x^31 = x^30 × x^1
Methods for computing addition chains
Calculating an addition chain of minimal length is not easy; a generalized version of the problem, in which one must find a chain that simultaneously forms each of a sequence of values, is NP-complete. There is no known algorithm that can calculate a minimal addition chain for a given number with any guarantee of reasonable running time or small memory usage. However, several techniques for calculating relatively short chains exist. One very well known technique is the binary method, similar to exponentiation by squaring. Other well-known methods are the factor method and the window method.
Let l(n) denote the smallest s such that there exists an addition chain of length s which computes n. It is known that
log2(n) + log2(ν(n)) − 2.13 ≤ l(n) ≤ ⌊log2(n)⌋ + ν(n) − 1,
where ν(n) is the Hamming weight (the number of ones) of the binary expansion of n. It is clear that l(2n) ≤ l(n) + 1. Strict inequality is possible, as l(382) = l(191) = 11, observed by Knuth.
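The binary method can be sketched in a few lines. This is an illustrative implementation (the function name and structure are my own, not from the source): it scans the bits of n after the leading 1, doubling the largest element at every bit and adding 1 at every set bit, which yields a chain of length ⌊log2(n)⌋ + ν(n) − 1.

```python
def binary_chain(n):
    """Addition chain for n >= 1 via the binary method (generally non-minimal)."""
    chain = [1]
    for bit in bin(n)[3:]:            # binary digits of n after the leading 1
        chain.append(2 * chain[-1])   # doubling step: c = c + c
        if bit == "1":
            chain.append(chain[-1] + 1)  # add-one step for each set bit
    return chain
```

For n = 31 this produces (1, 2, 3, 6, 7, 14, 15, 30, 31), a chain of length 8, one longer than the minimal chain of length 7 shown above.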
A Brauer chain or star addition chain is an addition chain in which each sum uses the immediately preceding element of the chain: that is,
- for each k > 0: a_k = a_{k−1} + a_j for some j < k.
A Brauer number is one for which a Brauer chain is minimal, i.e. l(n) = l*(n), where l*(n) denotes the length of the shortest star chain. Brauer proved that
- l*(2^n − 1) ≤ n − 1 + l*(n).
For many values of n, and in particular for all n ≤ 2500, l(n) = l*(n). But Hansen showed that there are some values of n for which l(n) ≠ l*(n), such as n = 2^6106 + 2^3048 + 2^2032 + 2^2016 + 1, which has l*(n) = 6110 and l(n) ≤ 6109.
The Scholz conjecture states that
- l(2^n − 1) ≤ n − 1 + l(n).
It is known to be true for Hansen numbers, a generalization of Brauer numbers; N. Clift checked by computer that every n ≤ 5784688 is a Hansen number (while 5784689 is not). Clift further verified that the conjecture holds with equality for n ≤ 64.
References
- D. E. Knuth, The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, Section 4.6.3, 3rd edition, 1997.
- Downey, Peter; Leong, Benton; Sethi, Ravi (1981). "Computing sequences with addition chains". SIAM Journal on Computing. 10 (3): 638–646. doi:10.1137/0210047. A number of other papers state that finding a shortest addition chain for a single number is NP-complete, citing this paper, but it does not claim or prove such a result.
- Otto, Martin (2001). Brauer addition-subtraction chains (PDF). Diplomarbeit, University of Paderborn.
- Schönhage, A. (1975). "A lower bound on the length of addition chains". Theoretical Computer Science. 1: 1–12.
- Guy (2004) p. 169.
- Clift, Neill Michael (2011). "Calculating optimal addition chains" (PDF). Computing. 91 (3): 265–284. doi:10.1007/s00607-010-0118-8.
- Brauer, Alfred (1939). "On addition chains". Bulletin of the American Mathematical Society. 45 (10): 736–739. doi:10.1090/S0002-9904-1939-07068-7. ISSN 0002-9904. MR 0000245.
- Richard K. Guy (2004). Unsolved Problems in Number Theory. Springer-Verlag. ISBN 0-387-20860-7. OCLC 54611248. Zbl 1058.11001. Section C6.
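Exact values such as l(n) quoted above are only known for modest n because they must essentially be found by exhaustive search. The sketch below (my own illustration, not an algorithm from the source) computes l(n) for small n by iterative deepening over strictly increasing chains, pruning any branch that could not reach n even by doubling at every remaining step.

```python
def min_chain(n):
    """A shortest addition chain for n, by iterative-deepening search (small n only)."""
    if n == 1:
        return [1]
    limit = 1
    while True:
        result = _search([1], n, limit)
        if result is not None:
            return result
        limit += 1

def _search(chain, n, limit):
    last = chain[-1]
    if last == n:
        return chain
    steps_left = limit - (len(chain) - 1)
    if steps_left <= 0 or last << steps_left < n:   # cannot reach n in time
        return None
    # Candidate next elements: pairwise sums of chain members, kept increasing.
    candidates = sorted({a + b for a in chain for b in chain
                         if last < a + b <= n}, reverse=True)
    for c in candidates:
        found = _search(chain + [c], n, limit)
        if found is not None:
            return found
    return None
```

For example, min_chain(31) returns a chain of length 7, confirming that the chain for 31 shown earlier is minimal. Restricting the search to strictly increasing chains is safe because any addition chain can be rearranged into an increasing one of no greater length.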
- OEIS sequence A003313 (Length of shortest addition chain for n). Note that the initial "1" is not counted (so element #1 in the sequence is 0).
- F. Bergeron, J. Berstel, S. Brlek, "Efficient computation of addition chains"
Many times, we adults deprive ourselves of the one thing that can help refresh our bodies and minds overnight: sleep. And as adults, we sometimes make choices that cause our sleep patterns to get out of whack. But do your children have a choice when it comes to the amount of sleep they get? It is our duty to help them adopt healthy sleep habits while they’re young so they can grow into happy, energetic, and healthy adults. Because their bodies are growing, children need more sleep than adults. An important part of a child’s healthy sleep is a steady bedtime routine, says Judith Owens, M.D., FAAP, co-author of Take Charge of Your Child’s Sleep: The All-in-One Resource for Solving Sleep Problems in Kids and Teens. “At the end of the day, both the body and mind need to wind down, relax, and prepare physically and mentally for sleep,” she says. “A bedtime routine is the best way to make sure that there is enough time to make that transition.” The body’s natural cycles of sleeping and being awake are sometimes called circadian rhythms. These sleep patterns are regulated by light and dark. Children begin to develop a cycle around six weeks of age, and most have a regular pattern by three to six months. What is keeping our children awake? The National Sleep Foundation’s (NSF) 2004 Sleep in America poll showed that about 69 percent of children age 10 and under experience some type of sleep problem. Some of the most common include the following conditions and occurrences: Insomnia occurs when a child complains of having trouble falling or staying asleep, or of waking up too early in the morning. Nightmares occur late at night during REM (rapid-eye movement) sleep and awaken a child. Restless Legs Syndrome (RLS) is a movement disorder that includes uncomfortable feelings in the legs, which cause an overwhelming urge to move. Sleep terrors (also called night terrors) occur early in the night. 
A child may scream out and be distressed, although he is neither awake nor aware during a sleep terror. Sleep terrors may be caused by not getting enough sleep, an irregular sleep schedule, stress, or sleeping in a new environment. Sleeptalking occurs when the child talks, laughs, or cries out in her sleep. As with sleep terrors, the child is unaware and has no memory of the incident the next day. Sleepwalking is experienced by as many as 40 percent of children, usually between ages 3 and 7. Snoring occurs when there is a partial blockage in the airway that causes the back of the throat to vibrate, creating the noise we all know. About 10 to 12 percent of normal children habitually snore. Sleep apnea occurs when snoring is loud and the child is having trouble breathing. Symptoms include pauses in breathing during sleep caused by blocked airway passages, which can wake the child up repeatedly.
Lack of Sleep = Health Problems
Sleep deprivation in children has been linked with potentially serious health issues. These can include some of the most pressing illnesses facing American children today.
- Anxiety and Depression: Insomnia can contribute to anxiety by raising levels of cortisol, the stress hormone. Sleep problems can also make other symptoms of depression worse and are much more common than oversleeping in people with depression.
- Obesity: “About two-thirds of the children diagnosed with sleep apnea in our clinic are overweight or obese,” says Owens. Obese children tend to have more fat tissue around their neck, which puts more pressure on the airway and further blocks air from getting through to the lungs.
- Diabetes: New research presented at an American Diabetes Association conference showed that inadequate sleep may prompt development of insulin resistance, a well-known risk factor for diabetes.
- Immunity problems: Several nights of poor rest can hamper the production of interleukin-1, an important immune booster.
A good night’s sleep helps your child’s body fight off illness and stay healthy.
- ADHD: A University of Michigan study published in the March 2002 issue of Pediatrics discovered that youngsters who often snore or have sleep problems are almost twice as likely to suffer from ADHD as those who sleep well. Other research has shown that children who don’t get enough sleep tend to have more problems concentrating during the day.
What Can Parents Do?
Talk to your pediatrician if you notice any of the following symptoms:
- An infant who is extremely and consistently fussy
- A child having problems breathing
- A child who snores, especially if it’s loud
- Unusual awakenings
- Difficulty falling asleep and maintaining sleep, especially if you see daytime sleepiness or behavioral problems
Here are some important things you can do to help your child get enough sleep.
- Set a regular bedtime for everyone each night and stick to it.
- Establish a relaxing bedtime routine, such as giving your child a warm bath or reading her a story.
- After one year of age, let your child pick a doll, blanket, stuffed animal, or other soft object as a bedtime companion.
- Do not allow a TV or computer in your child’s bedroom.
- Avoid giving children anything with caffeine within six hours of bedtime, and limit the amount of caffeine children consume.
- Keep noise levels low, rooms dark, and indoor temperatures slightly cool.
- Talk to your pediatrician if your child has symptoms of RLS. There are several options for treating this condition.
- Talk to your pediatrician if your child is showing signs of sleep apnea. There are proven treatments for this condition, as well.
Monitoring the Mississippi
When Europeans first gazed upon it, the Mississippi River looked much different than it does today. In 1797, Nicolas de Finiels, who traveled the river from the mouth of the Ohio River to St. Louis, described "many lengthy detours, endless islands, bends where the current moves as swiftly as lightning, innumerable sand bars, snags, fallen trees here and there, rocks, sometimes in the channel, sometimes along the banks." Even then, people were interested in using the river for commerce. Several historical accounts describe how commercial harvesters cleared the river islands of trees. In the interest of commerce and easy navigation, the river was made straight and deep, or as straight and deep as dikes, rip-rapped banks and dredging could make it. The result of this transformation was a river that no longer could meander within its floodplain. The river's natural processes of eroding and flooding, which created new habitats from old, were either destroyed or arrested. The river's ecological health suffered at the expense of economic growth. To better understand the effects of human-induced changes on the river's ecology, Congress in 1986 created the Environmental Management Program. The program is designed to provide river managers with information and tools to help them balance the competing interests of navigation, industry, conservation and recreational uses of the river. The EMP consists of two major elements or programs: Habitat Rehabilitation and Enhancement and Long Term Resource Monitoring. Five basin states, Illinois, Iowa, Minnesota, Missouri and Wisconsin, maintain Long Term Resource Monitoring field stations to collect data along the 1,300-mile Upper Mississippi River system. The Open River Field Station, near Cape Girardeau, began operating in January 1991. It was the last of six field stations added to the Long Term Resource Monitoring Program.
At full staff, the field station supports six permanent employees specializing in fisheries biology, limnology (water quality), invertebrates, botany and ecology. "Open river" refers to the stretch of the Upper Mississippi River not impounded by dams. It lies between the confluences of the Missouri and Ohio rivers. The open river study area is between river miles 30 and 80 (roughly 25 miles north and south of Cape Girardeau). The station's staff also conduct specific studies beyond their study area. Field station biologists collect information on water quality, water levels and flows, bathymetry, vegetation, fish, invertebrates, sediment types and distribution, sediment and nutrient transport and land cover and use. Some of the things they look for include changes in aquatic vegetation, fish communities, water quality and sediment, as well as navigation impacts, nutrient transport and water level fluctuations. All field stations use the same gear and methods to ensure consistent and reliable data. The data they collect is transferred electronically to the United States Geological Survey's Upper Midwest Environmental Science Center in La Crosse, Wisconsin. Thanks to the information collected by the field stations, the environmental science center now is in possession of the largest source of nutrient transport data on the Upper Mississippi. This information is being used to address what is known as the "dead zone" in the Gulf of Mexico. Years of fertilizer use in the Upper Mississippi River basin has helped create a large area outside the mouth of the Mississippi River so depleted of oxygen it cannot support life. Data collected by the field stations helps us understand the sources of the nutrients, how they are transported and how we can manage the problem. The Open River Field Station works to blend the monitoring program with goals of the Missouri Conservation Department. 
The monitoring program accounts for most of the effort, but the staff also is active in research, coordination with other agencies, and outreach and education. The research is primarily dedicated to developing new and more efficient sampling techniques and to understanding the ecology of rare animals or communities in the river. Because the Mississippi River near Cape Girardeau is big, deep, muddy and swift, biologists have had difficulty studying it. Until the Open River Field Station was established, virtually nothing was known about the river's ecology and how to adequately sample it. Open River Field Station biologists are continually developing new techniques to capture organisms that live in these waters. One new sampling technique is a trawl designed to capture small fishes and the young of large fishes. The experiments have led to discoveries of rare animals thought to be gone, or nearly so, in the open Mississippi River. In 1998, Open River Field Station biologists used the trawl to capture a pallid sturgeon measuring only about 3 inches long. It was the first time a young-of-the-year pallid sturgeon was captured in the wild. Pallid sturgeon belong to an ancient order of fishes that have managed to survive to modern times. They were once common in the Mississippi and Missouri rivers. Around the turn of the century, this species comprised a large percentage of the commercial fishery catch in those rivers. Pallid sturgeon numbers began to decline in the early 1900s, probably from overharvesting. Since then, habitat in the Mississippi and Missouri rivers has been greatly altered, which may account for the low numbers we see today. The experimental trawl was originally developed to capture several species of rare big river chubs (minnows). Specifically, these included sturgeon chubs and sicklefin chubs. Both species live only in big, turbid rivers. In Missouri, they are found only in the Missouri and Mississippi rivers.
Historical data suggest that both species declined in the open Mississippi River and in much of the Missouri River. Both species are candidates for federal listing and remain species of special concern in Missouri because the specific habitat they require is rare in our big rivers. In 1991, Open River Field Station biologists "rediscovered" a large freshwater prawn thought to be extinct in the Upper Mississippi River. The prawn, called the Ohio shrimp, grows large enough for humans to eat. They were once so common in the Mississippi River that they supported commercial fisheries in several small towns along the river. Many of these small towns held annual "shrimp fries." The species began to decline sometime in the late 1930s. The last known collection of Ohio shrimp in the open Mississippi River was near Cairo, Ill., in 1962. Discoveries like that of the Ohio shrimp suggest a lack of information about invertebrates in the Mississippi River. Furthermore, Open River Field Station biologists recently captured several larval specimens of pseudiron mayfly from a Mississippi River gravel bar. This is the first time this species has been found in Missouri's portion of the Mississippi River. New research is helping develop even better methods of sampling for big river invertebrates. The Long Term Resource Monitoring and the Open River Field Station programs have caught the attention of river ecologists around the world. In 1993, during the biggest flood in years, scientists from Russia's Institute of Biology of Inland Waters visited the Open River Field Station. Dr. Arthur Poddubney, head of the Department of Ichthyology, and Gregory Scherbina, from the Laboratory of Aquatic Invertebrate Ecology, toured the facility and flooded areas. During their visit, we exchanged much information about our respective programs on the Mississippi and Volga rivers.
In 1995, Peruvian biologists Enrique Rios Tsern and Norma Flores Arana of the Universidad Nacional de la Amazonia Peruana visited twice. They are responsible for "managing" the Amazon River and were interested in our procedures. By definition, monitoring something means watching, observing or checking it for some purpose. This may seem easy, but in the case of the Mississippi River, which has an imposing and sometimes harsh environment, special skills and gear are required to detect changes. Furthermore, biologists must take special care when designing a monitoring program. They have to be careful not to bias the data, which would lead to erroneous conclusions. Finally, because monitoring programs are designed to detect broad environmental changes over many years (long term), they cannot determine the direct causes of biological trends. Therefore, a certain type of research, called "cause and effect," should be added to a monitoring program to help explain long-term changes. This is the approach used by the Open River Field Station. Since the establishment of the Open River Field Station, much has been learned about the ecology of the open Mississippi River. Trend data are only about 10 years old, so definitive statements about trends in the health of the river may be premature. However, the data seem to indicate little short-term change in the aquatic communities and water quality of the open river. Most damage to animals and their habitat occurred long before the monitoring program was established. Because of a dearth of information about the open Mississippi River, biologists can't be sure how many species may have been extirpated from the river or to what degree the community of animals is disturbed. Because no one had ever taken the time to look, some animals thought to be rare or extirpated may not be. The Long Term Resource Monitoring Program will provide more information about existing populations over time.
The benefits to Missourians from the Environmental Management Program include free information and aerial photos, the construction of habitat rehabilitation projects to improve the river's environmental conditions and better fishing. In addition, partnerships have been established that have helped bring environmentalists, engineers and industrialists together to balance the river's delicate ecology with economic needs. This is important, considering recreational uses (including sportfishing) of the Upper Mississippi River account for more than $1.2 billion annually in direct expenditures. The Mississippi River will never look like it did before European settlement, nor is that one of the goals of the Environmental Management Program. However, we do need to know how the river is reacting to human activities so we can manage it for the benefit of all who depend on it. Recognizing the value of the data collected by the field stations to commerce, navigation and recreation, Congress has authorized the Environmental Management Program into perpetuity. As the years go by, we will know more and more about our country's most valuable river.
East African Campaign (World War II)
The East African Campaign was a series of battles fought in East Africa during World War II by the British Empire, the British Commonwealth of Nations and several allies against the forces of Italy from June 1940 to November 1941. Under the leadership of the British Middle East Command, British allied forces involved consisted not only of regular British troops, but also many recruits from British Commonwealth nations (Sudan, British Somaliland, British East Africa, the Indian Empire, South Africa, Northern Rhodesia, Southern Rhodesia, Nyasaland, British West Africa, as well as the British Mandate of Palestine). In addition to the British and Commonwealth forces, there were Ethiopian irregular forces, Free French forces, and Free Belgian forces. The Italian forces included Italian nationals, East African colonials (Eritreans, Abyssinians, and Somali Dubats), and a small number of German volunteers (the German Motorized Company). The majority of the Italian forces were East African colonials led by Italian officers. Fighting began with the Italian bombing of the Rhodesian air base at Wajir in Kenya, and continued, pushing the Italian forces through Somaliland, Eritrea, and Ethiopia until the Italian surrender after the Battle of Gondar in November 1941.
The following is a list of recipients of the Victoria Cross (VC) during this campaign:
- Eric Charles Twelves Wilson (Somaliland Camel Corps) - received during the Italian invasion of British Somaliland
- Premindra Singh Bhagat - received during fighting on the Northern Front
- Richhpal Ram - received during fighting on the Northern Front
- Nigel Gray Leakey (cousin of Louis Leakey and sergeant in the 1/6 Battalion King's African Rifles) - received during fighting on the Southern Front.
Lower Salmon River - Natural History
The Lower Salmon River winds through the volcanic rocks and metamorphosed sediments of the Seven Devils Group and the lava flows of the Columbia River Basalt. About 200 million years ago the Seven Devils Mountains, now located just west of Riggins, were a chain of volcanic islands called the "Wallowa Terrane." The Wallowa Terrane was located in the Pacific Ocean, near the modern Aleutian Islands. Movement of crustal plates brought the Wallowa Terrane to the west coast of North America, where it "slammed" against the continent at the speed of a few centimeters per year. Over time, the slow but relentless collision caused the rocks to fold and push up, nearly vertical in some places. About 15 million years ago, molten basalt flowed repeatedly from large rifts, or cracks, in the earth's surface. Most of the rifts occurred near what is now the Columbia River in the southeastern corner of Washington state. Because molten basalt is very fluid, these enormous flows covered huge areas - nearly half of Washington, large areas of northern Oregon, and northern and central Idaho - before they hardened. As the basalt cooled, columns were formed. These columns are visible today in many places along the Salmon, particularly in the vicinity of Wapshilla Rapids. The width and orientation of the columns was determined by the way the hot lava that formed them cooled. Wide columns indicate slow, even cooling, while narrow columns signify rapid cooling. Vertical columns formed when the lava cooled from the surface. Curved and horizontal columns resulted when water entered the lava through cracks and cooling proceeded from the center out. Other features of the Lower Salmon River also provide clues about the area's geology. Where the canyon walls are steep and confining, the rock is generally hard and resistant to the erosive action of water.
The "pool and drop" character of the river, alternating between stretches of deep, slow water and rapids, indicates that some layers of the Seven Devils bedrock are more resistant than others. Places where the canyon widens and the river slows, making lots of riffles and a few mild rapids, signify that the river is passing through the Columbia River basalt, which is very susceptible to erosion and offers about the same resistance in every layer. Most of the Lower Salmon River rolls through arid grassland, a relatively small yet distinctive vegetative region of the Pacific Northwest. The semi-arid climate features hot, dry summers and mild, moist winters with the longest growing season and most frost-free days of any region in Idaho. Elevations within the river canyon range from 900 to over 5,000 feet, enabling many plant communities to thrive. Native species common to the Lower Salmon River include bluebunch wheatgrass, prickly pear cactus, poison ivy, lupine, arrowleaf balsamroot, yarrow, mullein, willow, curl leaf mahogany, netleaf hackberry and ponderosa pine. Much of the easily accessible land surrounding the river canyon has been disturbed by grazing, logging or fire, facilitating the invasion of non-native or introduced plant species. Non-native species include yellow star thistle, cheatgrass, teasel, knapweed, and horticultural species such as apricot, apple and walnut trees. Floods, which vary widely in frequency and duration on free-flowing rivers like the Salmon, have created distinct bands of lichen and moss on the canyon walls. Four distinct zones are normally apparent. The low water zone, usually underwater, contains lichen and algae. The normal flood zone, covered by water only during normal high flow periods, contains whitish-gray lichen and eddy moss. The high flood zone, covered by water only during extreme high flow periods, contains two types of flood moss. This zone occurs more consistently than any other.
The extreme flood zone supports terrestrial vegetation and is predominantly barren of lichen or moss.
Measuring photosynthetically active radiation (PAR) under the surface of water is useful in a variety of fields, from algal biofuels research to environmental quality. When light passes through a water column it is attenuated based on the thickness of the water column and the turbidity of the water. Measuring PAR underneath the surface of a lake, stream, bay, ocean, pond, or bioreactor can provide an excellent gauge of how much light is available to unicellular and multicellular phototrophic organisms such as algae, aquatic plants, protists, and phytoplankton. This, in turn, can be used as a gauge of the overall productivity of a system, though the specific calculations are often complex and include many other factors. Upwelling and downwelling radiation are two aspects of underwater PAR that can be useful to researchers and environmental quality investigators. Upwelling radiation is radiation received from below the sensor due to reflectance off a lower surface of some type, while downwelling radiation is a measure of radiation from above the sensor, usually due to sunlight or other external light sources. The combined upwelling and downwelling radiation measurements provide an overall measure of PAR available in the water column.
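As a concrete, simplified illustration of this attenuation (a textbook-style model, not a procedure from the source), downwelling PAR is often approximated as decaying exponentially with depth, PAR(z) = PAR(0) · e^(−Kd·z), where the diffuse attenuation coefficient Kd increases with turbidity. The function names and example values below are hypothetical:

```python
import math

def par_at_depth(par_surface, kd_per_m, depth_m):
    """Approximate downwelling PAR at a given depth (Beer-Lambert-style decay).

    par_surface : PAR just below the surface (e.g. umol photons m^-2 s^-1)
    kd_per_m    : diffuse attenuation coefficient Kd in 1/m (higher = more turbid)
    depth_m     : depth below the surface in meters
    """
    return par_surface * math.exp(-kd_per_m * depth_m)

def euphotic_depth(kd_per_m, fraction=0.01):
    """Depth at which PAR falls to `fraction` (default 1%) of its surface value."""
    return -math.log(fraction) / kd_per_m
```

Under this model, relatively clear water with Kd = 0.2 m^-1 keeps 1% of surface PAR down to about 23 m, while turbid water with Kd = 2 m^-1 crosses the same threshold at about 2.3 m.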
This newspaper article illustrates the use of the four Japanese writing systems – kanji, hiragana, katakana and the roman alphabet. Kanji, hiragana and katakana are all well represented in this typical newspaper article; there are also a few words in the roman alphabet. The red box shows the kanji 奎, which is unusual and unknown to most people, so its reading is placed beside it in hiragana, i.e., 奎 is read ‘けい’ (kei); these small hiragana characters that indicate the reading of an unusual kanji are called furigana. On the top-right of the article we find the word ‘Tyrannosaurus’ written in katakana as ティラノサウルス (ti-ra-no-sa-u-ru-su), from top to bottom, as shown by the red arrows. This is the traditional Japanese writing direction: vertical, from top to bottom, with the columns read from right to left. Most novels and books are written this way. On the bottom-left corner we find the word ‘Tyrannosaurus’ again, this time written horizontally, from left to right, as shown by the blue arrows. This way of writing Japanese is identical to that of most Indo-European languages. Nowadays, both vertical and horizontal directions are common in Japanese writing. In Japanese there are no spaces between words, so using the different writing systems helps break up the text into words, i.e., parse the sentence. With respect to numbers, this article uses Arabic numerals (1, 2, 3, 4, …) almost exclusively, a popular alternative to writing them in kanji (一, 二, 三, 四, …). Although it is not the case here, numbers are commonly written in kanji when the text runs vertically and in Arabic numerals when it runs horizontally.
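Because each writing system occupies its own Unicode block (hiragana U+3040–U+309F, katakana U+30A0–U+30FF, and the common kanji in the CJK Unified Ideographs block U+4E00–U+9FFF), the mix of scripts can even be detected mechanically. A rough sketch (my own illustration; rare kanji outside the basic block would be misclassified):

```python
def script_of(ch):
    """Classify a single character into one of the Japanese writing systems."""
    cp = ord(ch)
    if 0x3040 <= cp <= 0x309F:
        return "hiragana"
    if 0x30A0 <= cp <= 0x30FF:
        return "katakana"
    if 0x4E00 <= cp <= 0x9FFF:
        return "kanji"      # CJK Unified Ideographs (basic block only)
    if ch.isascii() and ch.isalpha():
        return "roman"
    return "other"
```

For instance, script_of("け") is "hiragana", script_of("テ") is "katakana", and script_of("奎") is "kanji"; runs of the same script are a useful first pass at splitting a spaceless Japanese sentence into words.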
A host virtual machine is the server component of a virtual machine (VM), the underlying hardware that provides computing resources to support a particular guest virtual machine (guest VM). The host virtual machine and the guest virtual machine are the two components that make up a virtual machine. The guest VM is an independent instance of an operating system and associated software and information. The host VM is the hardware that provides it with computing resources such as processing power, memory, disk and network I/O (input/output), and so on. A virtual machine monitor (VMM) or hypervisor intermediates between the host and guest VM, isolating individual guest VMs from one another and making it possible for a host to support multiple guests running different operating systems. A guest VM can exist on a single physical machine but is usually distributed across multiple hosts for load balancing. A host VM, similarly, may exist as part of the resources of a single physical machine or as smaller parts of the resources of multiple physical machines.
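As a purely illustrative toy model (not any real hypervisor API), the host/guest relationship can be sketched as a host carving a fixed pool of CPU and memory into per-guest allocations, refusing guests it cannot satisfy:

```python
from dataclasses import dataclass, field

@dataclass
class HostVM:
    """Toy model of a host's resource pool; real hypervisors are far richer."""
    cpus: int
    mem_gb: int
    guests: dict = field(default_factory=dict)  # name -> (cpus, mem_gb)

    def free(self):
        """Return (cpus, mem_gb) still unallocated on this host."""
        used_cpu = sum(c for c, _ in self.guests.values())
        used_mem = sum(m for _, m in self.guests.values())
        return self.cpus - used_cpu, self.mem_gb - used_mem

    def start_guest(self, name: str, cpus: int, mem_gb: int) -> None:
        """Allocate resources for a guest, or fail if the pool is exhausted."""
        free_cpu, free_mem = self.free()
        if cpus > free_cpu or mem_gb > free_mem:
            raise RuntimeError(f"host cannot satisfy {name}: insufficient resources")
        self.guests[name] = (cpus, mem_gb)

# Two guests with different operating systems sharing one host's pool.
host = HostVM(cpus=16, mem_gb=64)
host.start_guest("linux-guest", cpus=4, mem_gb=16)
host.start_guest("windows-guest", cpus=8, mem_gb=32)
print(host.free())  # capacity remaining for further guests
```

The point of the sketch is the isolation boundary: guests only see what the host grants them, which is the role the hypervisor plays between real hosts and guests.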
Spiders are arachnids (not insects); they are related to scorpions and ticks. Young spiders are often cannibals (they will eat each other), and females often eat the male after mating. Spiders are carnivores (meat-eaters); most eat insects (like moths and crickets), but the larger spiders, like tarantulas, will eat many other small animals. Webs: Spiders produce silk in abdominal glands and spin it through organs called spinnerets. Spiders use silk to make webs and traps (for catching prey), shelter, life lines, cocoons, and diving bells (for those spiders who hunt underwater). The tips of the spider's legs are oily; this oil keeps them from getting trapped in their own webs. Weight for weight, spider silk is stronger than steel. Anatomy: All spiders have eight legs; each leg has 2 to 3 tiny claws at the end. They have a two-part body and strong jaws (usually with venomous fangs). They have a hard exoskeleton rather than an internal skeleton. Life Cycle: After mating with a male, the female spider produces an egg sac that can contain up to a thousand tiny spider eggs. The egg sac is made of silk, and its color varies from species to species. In some species, the female spider carries the egg sac on her spinnerets or in her jaws until the eggs hatch. In other species, the egg sac is hidden under a rock, attached to a plant stalk, or encased in a web. Tiny spiderlings (baby spiders) hatch from the eggs; they look like tiny versions of an adult spider. Some spiderlings are on their own and receive no care from their mother. Others climb onto their mother's back after hatching, where she feeds them. In some species, the mother dies when the young are ready to go off on their own, and the spiderlings eat her carcass.
METAL-ARC WELDING AND CUTTING
Upon completion of this chapter, you will be able to do the following:
Identify the equipment of arc-welding systems and describe the procedures and techniques used in shielded metal-arc welding.
Identify the different types and classes of bare and covered electrodes and select the proper electrode and heat settings for typical welding.
Describe the safety equipment used in metal-arc welding and the correct procedures for striking, establishing, maintaining, and breaking the arc.
Describe the characteristics of aluminum, their effect on its weldability, and the procedures required to prepare aluminum for welding.
Recognize the basic techniques used in gas tungsten-arc (GTA) welding, and describe the function and maintenance requirements of associated welding equipment.
Specify the methods used in making gas metal-arc (GMA) welds in various positions, and describe some of the equipment used.
State the procedures to be followed in metal and carbon-arc cutting.
Explain the procedures to follow in air carbon-arc cutting.
Electric welding processes include shielded metal-arc welding, shielded gas metal-arc welding, stud welding, and resistance welding. This chapter deals primarily with the first two processes: shielded metal-arc welding and shielded gas metal-arc welding. The other processes are summarized briefly at the end of the chapter, along with the arc cutting processes. To understand the operation of electrical welding equipment, you must have a basic knowledge of electricity. To become familiar with the terms used to describe electrical equipment and with the units of measurement used in this chapter, study the applicable parts of NEETS, modules 1 and 2.
SHIELDED METAL-ARC WELDING
Most of your metal-arc welding will be done by the shielded metal-arc process. This is a nonpressure process, and the heat necessary for coalescence is generated by an electric arc between a heavily covered electrode and the base metal. The arc develops an intense heat that melts the base metal and forms a molten pool of metal. At the
Teaching With Documents Lesson Plan: Constitutional Issues: The Separation of Powers This lesson correlates to the National History Standards. - Era 8-The Great Depression and World War II (1929 - 1945) - Standard 2C-Demonstrate understanding of opposition to the New Deal, the alternative programs of its detractors, and the legacy of the New Deal. This lesson also correlates to the National Standards for Civics and Government. - Standard III.B.1-Evaluate, take, and defend positions on issues regarding the purposes, organization, and functions of the institutions of the national government. Share this exercise with your history and government colleagues. - Review the definitions of the following words before reading the document. camouflage (verb)-to disguise in order to conceal expedite (verb)-to hasten dissertation (noun)-a formal and lengthy report absolutism (noun)-system where ruler has unlimited powers integrity (noun)-honesty, wholeness tribunal (noun)-court of justice - After reading and working with the document, ask students to write a brief story of the court-packing controversy using five words from the list. Reading for the Main Idea Students should review what their textbook has to say about the court-packing controversy. Ask them to read the document and answer the following questions. - How many Justices does FDR want to add to the Supreme Court? - What does Gannett feel will be the result of this increase? - What alternative method for changing the system does Gannett propose? - List three principles of government that Gannett mentions in this statement. The Constitutional Issue - Ask students to define the constitutional issue. Why was this issue so important? - In paragraph 4, Gannett expresses his fear that the executive will dominate the other two branches of government. Ask students to recall other times in our history when one of the three branches became too powerful.
- Some have argued that our system of separation of powers and checks and balances paralyzes the efficient working of government and that we should amend the Constitution to provide for a parliamentary system of government. Ask interested students to research and stage a debate for the class on the question: RESOLVED that the Constitution should be amended to provide for a parliamentary system of government. - In the third paragraph, the author uses a metaphor when he compares the Supreme Court to an anchor. Play with this idea with your students. How is the Court like an anchor? If the Court is the anchor, what is the ship? What is the sea? What other storms might there have been in our history? Invite them to suggest other possible metaphors for the Court's role in our system. - Supporters of Roosevelt's plan would have seen the Supreme Court differently. Follow the steps below to help students write their own metaphorical statement. - List on the board how the supporters of the President's plan might have viewed the Supreme Court. - Ask students to look at the list and suggest something in nature or something mechanical that has those qualities. List their suggestions on the board. - Ask students to write several possible metaphorical statements that FDR's supporters might have used to describe the Court. Techniques of Persuasion Ask students to reread the document and underline the parts that are particularly persuasive, and then to complete one of the following activities. - Rank in order of importance the three most persuasive sections and discuss why they are most persuasive. - Write a brief paper describing the reasons why this document is or is not persuasive. For Further Study The number of Justices on the Supreme Court has been changed six times in our history: 1789, 1801, 1802, 1837, 1863, and 1869.
Ask students to investigate the circumstances under which the number was changed.
“Othello, no matter how respected or how much he can claim the privileges of whiteness, cannot escape his blackness.” Writing in 1952 about the effects of French colonialism on the black Antillean, psychoanalyst Frantz Fanon claimed that as the white man establishes blackness as inferior to whiteness, the black man internalizes racist attitudes and wants to be white. The black man is elevated above his jungle status in proportion to his adoption of his civilizing nation’s cultural standards. His inferiority complex develops when he rejects his blackness and strives to be white. The black man who has lived in France, breathed and eaten the prejudices of racist Europe, and assimilated the collective unconsciousness of that Europe will “be able . . . to express only his hatred of the Negro.”1 In William Shakespeare’s The Tragedy of Othello, the Moor of Venice, Othello holds a similar inferiority complex. While Othello is not a colonized person, the psychological impact Fanon describes is seen in Othello’s character. His sense of identity is altered by the complex dynamics of his presence as a racialized other in Venetian society. Fed negative ideas about blackness, Othello seeks the privileges of whiteness in order to deny his perceived inferiority, a quest that eventually leads to his downfall. Attitudes toward blackness during Shakespeare’s time manifest in the other characters’ descriptions of and actions toward Othello. Fanon claims that in European unconsciousness, the black man has become the symbol of evil and sin throughout the developments of different time periods. He writes, “The torturer is the black man, Satan is black . . . when one is dirty one is black.”2 Fanon is writing a few hundred years after Shakespeare, but these associations with blackness had already formed during the Shakespearean era. The black Devil in particular was solidified earlier in the medieval period, before interactions between Europeans and black Africans took place.
Historian Jeffrey B. Russell states: “The Ethiopian as the Devil, far from being new with Othello or even with the Song of Roland, is found in the writings of the [Church] Fathers . . . There is a deep psychological terror of blackness associated with death and night. The ‘black man’ is also a Jungian archetype of the brute or of the lower natures or drives and is found in this capacity long before any considerable contact between Europeans and Black Africa.”3 While black Satan and the black barbarian only existed in the minds of Europeans during the medieval times, later during the Elizabethan era, increased trade with Africa and an influx of dark-skinned individuals into Shakespeare’s England gave his contemporaries throughout Europe physical incarnations of the terrible black Devil. Russell notes that the same characteristics assigned to black males by white racists in contemporary times—such as animal strength, hairiness, outsized organs, and great sexual potency—were applied half a millennium ago to the black Devil. As characteristics of blackness and maleness were assigned to the Devil, the traits Russell mentions became attributed to the black man. The devil evolved during this period from a cosmic entity to the treacherous and powerful Prince of Darkness. Othello shows us that these associations have stayed in European unconsciousness during the early modern period. Othello is consistently referred to as the devil or the black devil in the play. Iago urges Brabantio to take action “or else the devil will make a grandsire of you.”4 In the final scene, Emilia tells Othello after Desdemona’s death, “Thou dost belie her, and thou art a devil.”5 Othello’s final identity, despite his positions as a general, civil servant, hero and leader, is that of the devil. Furthermore, besides directly identifying Othello as the devil throughout the play, characters ascribe traits associated with the devil to Othello. 
Othello’s physical prowess is acknowledged even by Iago and Roderigo, who state that in the Cyprus wars “of his fathom they have none.”6 The other characters consistently describe Othello through animalistic imagery like the “old black ram.”7 Iago states to Brabantio in the same scene, “You’ll have/ your daughter covered with a Barbary horse; you’ll have coursers for cousins, and jennets for germans.”8 Another medieval component that added to Satan’s grotesque inhumanity in the people’s minds is the fear of bestiality and the animal within. Medieval thought viewed human nature as something to be feared and susceptible to the evil within, of which bestiality was the greatest evil of all. Due to the adoption of classical humanism, typical Renaissance works elevated human nature from its unfavorable status. However, the medieval suspicion of human nature still existed. Renaissance thought adopts and builds on the medieval ideas of human fallibility and the omnipresent risk of primitive behavior. Pico della Mirandola’s canonical Renaissance work, Oration on the Dignity of Man, demonstrates this twofold trend of elevation and caution by discussing the duality of the human soul. While human nature is commended for having the capability to imitate divine beings like the seraphim, it is still unclean and possesses desires of the lower drives. Through philosophy, one must continuously defend oneself from the temptations of the soul and strive to cleanse oneself of moral filth in order to fulfill one’s potential to be divine-like. Shakespeare’s characters share the same belief, painting human nature as something to be feared and susceptible to the evil within. People must consistently mentally defend themselves or else they will fall into their animalistic wants and other deviant sexual desires.
Iago tells Roderigo: If the beam of our lives had not one scale of reason to poise another of sensuality, the blood and baseness of our natures would conduct us to most prepost’rous conclusions. But we have reason to cool our raging motions, our carnal stings, our unbitted lusts. Iago is acknowledging the possibility of falling into carnal desires. Only reason can stop people from falling into these desires and becoming beasts. Othello is already assigned animalistic characteristics that render him a brute with a carnal sinful nature or, as Iago claims, “the lusty Moor” set out to satisfy his sexual craze by stealing and bedding European women.10 In others’ eyes, Othello embraces the moral filth of humankind. However, Othello is clearly not a brute. Shakespeare painstakingly leads us to realize that Othello is an honorable, well-liked, and levelheaded leader in Venetian society. Lodovico’s surprise at the state of Othello near the end of the play speaks to Othello’s honorable reputation: Is this the noble Moor whom our full Senate Call all in all sufficient? Is this the nature Whom passion could not shake? Whose solid virtue The shot of accident nor dart of chance Could neither graze nor pierce?11 Othello has adopted the language, religion, and customs of Venice. He has converted to Christianity, speaks the language eloquently, and, as a respected general, holds a high position in Venetian society. Othello thus represents an assimilated person. While he does not have a permanent home, he has adopted Venetian values and integrated into the mainstream society. However, similar to Fanon’s black Antillean who has breathed and eaten the prejudices of racist Europe, Othello’s assimilation comes with the adoption of deeply rooted Venetian prejudices and hatred for blackness that have carried over from medieval times. Othello is met with constant reminders of his blackness no matter where he turns.
Mentions of his blackness are not confined to the insults made behind Othello’s back. He is constantly referred to as “the Moor” or “the black Moor” in conversation. Even when complimented, Othello is referred to as “the valiant Moor.”12 When the Duke attempts to comfort Brabantio about Desdemona’s marriage to Othello, the Duke states, “If virtue no delighted beauty lack/ Your son-in-law is far more fair than black.”13 Othello is acknowledged as a good and virtuous man, but his goodness must stem from a source fairer than his blackness. Othello, no matter how respected or how much he can claim the privileges of whiteness, cannot escape his blackness. Although Desdemona falls in love with Othello, his blackness is still seen as unattractive to her. She looks past his blackness in order to love him for his mind and character. When Othello marries Desdemona, white society is outraged by the implications of his interracial marriage. Before the union, Brabantio loved Othello and often welcomed him into his home to ask about his past battles and victories. Perhaps Brabantio’s fondness for Othello is partly due to Othello’s exotic appeal, but regardless of his reasons, Brabantio did not dislike Othello prior to the marriage and was in fact quite fond of him. Brabantio’s attitude and behavior toward Othello immediately change when Othello dares to think that he is worthy of marrying into the family. Othello, despite all his great tales, courage and privileges, is not worthy of true whiteness. Brabantio immediately expresses his outrage and makes xenophobic and racist remarks, stating, “For if such actions may have passage free/ Bond-slaves and pagans shall our statesmen be.”14 His remarks suggest that if inferior people are treated as the white man’s equal, people who belong under white authority will eventually take away the white man’s power. Othello thus faces reminders of his blackness from his wife, enemies, friends and soldiers.
He is in all respects assimilated into Venetian culture. His background is from “men of royal siege,” and he is wealthy, honorable, noble and courageous.15 Othello is higher in the class hierarchy than Iago, Roderigo, Cassio, and many other characters, but Othello is not white. His color will never allow him to acquire the full privileges of whiteness. Othello can be liked and respected, but he can never be a white man’s equal. Blackness determines his permanent place under his peers despite any class or character advantages Othello holds. Fanon writes, “After having been the slave of the white man, he [the black man] enslaves himself.”16 As Othello continuously faces his blackness and perceives blackness as wickedness, ugliness, barbarism, and immorality, he learns to hate his blackness. Othello’s soul can be white, but his skin is black. Othello’s assimilation can be read as a journey to whiteness, but his journey is hindered by the realization that he is black and thus inferior when he is finally convinced by Iago of Desdemona’s infidelity. It is ultimately Othello’s inferiority complex, formed through his adoption of the white man’s hatred of blackness, that fuels his jealousy and Desdemona’s eventual death. Iago describes Desdemona’s love for Othello as one of “foul disproportions, thoughts unnatural,” which makes Othello doubt Desdemona because he himself believes that his blackness is inferior. He states, “Haply, for I am black/ And have not those soft parts of conversation/ That chamberers have.”17 Iago tells Othello, “Men should be what they seem;/ Or those that be not, would they might seem none.”18 Ironically, the characters in Othello cannot be judged by appearances. Perhaps Shakespeare is criticizing the absolute binaries of black/white and good/evil that existed in society’s subconscious. Although Othello has a good character, he is associated with evil primarily because of his color.
While Iago is white and has the appearance of a man with good character, he is the villain of the play. This critique is exemplified in the minor character Bianca. Bianca’s name interestingly means white, but Bianca is not associated with the traditional traits of whiteness. Instead of being associated with purity, goodness and virginity, Bianca is a courtesan, or from Iago’s perspective, Cassio’s whore. While the other characters continuously uphold the binary of whiteness as goodness and blackness as evil, Bianca’s placement in the play seems to question and deconstruct this binary. The message in Othello is clear: The black man who tries to survive in European society will need to reject blackness and strive for whiteness, but he will never succeed because he will always be black. Whether or not Shakespeare was conscious of this racial dynamic, Othello serves as a critique of the impossible task the white man has prepared for the black man. Othello’s tragedy is one regarding the blackness of his skin and the psychological effects of racist attitudes on the black man. - Frantz Fanon, Black Skin, White Masks, translated by Charles Lam Markmann (New York: Grove Press, 1967), 188. - Ibid., 189. - Jeffrey Burton Russell, Witchcraft in the Middle Ages (Ithaca: Cornell University Press, 1972), 114. - William Shakespeare, Othello, edited by Russ McDonald (New York: Penguin Group, 2001), 1.1.90. - Ibid., 5.2.133. - Ibid., 1.1.87. - Ibid., 1.1.109-12. - Ibid., 1.3.326-331. - Ibid., 2.1.292. - Ibid., 4.1.258-262. - Ibid., 1.3.47. - Ibid., 1.3.289-290. - Ibid., 1.2.98-9. - Ibid., 1.2.22. - Frantz Fanon, Black Skin, White Masks, 192. - William Shakespeare, Othello, 3.3.233.
Table of Contents How does behaviorism influence education practice? By providing valuable and speedy feedback, rewarding good behavior and getting students used to routines, teachers start to create habits in students that improve their learning. This can give teachers greater control over the class and empower them to take the lead in lessons. What are the advantages of behaviorism? - Scientifically credible: brings in processes like replication and objectivity. - Set in lab settings. - Real-life application. - Social learning theory emphasizes the importance of mental processes. - A process mediates between stimulus and response. Why is behavior important in education? Students know and understand what’s expected of them, which gives them confidence. Students monitor themselves and take more responsibility for their behavior — and their learning. Students gain a sense of safety and security. The classroom culture and the school culture become more positive overall. What are the advantages of the behaviorist learning theory? An obvious advantage of behaviorism is its ability to define behavior clearly and to measure changes in behavior. According to the law of parsimony, the fewer assumptions a theory makes, the better and the more credible it is. What is the main focus of behaviorism? Behaviorism, or the behavioral learning theory, is a popular concept that focuses on how students learn. Behaviorism focuses on the idea that all behaviors are learned through interaction with the environment. What does behaviorism look like in the classroom? Behaviorism can also be thought of as a form of classroom management. An example of behaviorism is when teachers reward their class or certain students with a party or special treat at the end of the week for good behavior throughout the week. The same concept is used with punishments. What are the problems with behaviorism?
Behaviorism is harmful for vulnerable children, including those with developmental delays, neurodiversities (ADHD, autism, etc.) and mental health concerns (anxiety, depression, etc.). The concept of Positive Behavioral Interventions and Supports is not the issue; the promotion of behaviorism is the issue. What is the greatest strength of behaviorism? Strengths: One of the greatest strengths of behavioral psychology is the ability to clearly observe and measure behaviors. Behaviorism is based on observable behaviors, so it is sometimes easier to quantify and collect data when conducting research. How does behavior affect learning? A student’s behavior can affect her ability to learn as well as other students’ learning environment. Students who behave disruptively by bullying other students, talking during lectures or by requiring the teacher to interrupt lessons to discipline them can have a negative effect on an entire classroom. Why is behavior so important? Since behavior is within our locus of control, affirmative feedback on behavior offers a positive lead for personal development, showing where and how we can adapt to meet the needs of a particular situation or job role. What is the behaviorist approach to learning? Behaviorism, or the behavioral learning theory, is a popular concept that focuses on how students learn. This learning theory states that behaviors are learned from the environment, and says that innate or inherited factors have very little influence on behavior. A common example of behaviorism is positive reinforcement. What are the pros and cons of behaviorism? Pros and Cons of Behaviorism in Education - Pro: Behaviorism can be a very effective teaching strategy. - Pro: Behaviorism has been a very effective method of psychotherapy. - Con: Some aspects of behaviorism can be considered immoral. - Con: Behaviorism often doesn’t get to the core of a behavioral issue.
Water, the elixir of life, is a finite resource pivotal for the survival of all living organisms. As global populations rise and climate change intensifies, the demand for water has reached unprecedented levels. Still, it’s essential to recognize that water consumption is not just about quenching our thirst; it involves a complex journey from its source to our taps. This article explores the environmental impact of water consumption, shedding light on the processes that contribute to both its scarcity and its pollution.

The Source: Water’s journey begins at its source, which can be underground aquifers, rivers, lakes, or reservoirs. Understanding the health of these sources is paramount, as over-extraction, contamination, and habitat destruction can have severe consequences. Industrial activity, agriculture, and urban development frequently place immense pressure on these sources, leading to depletion and ecosystem disruption.

Extraction and Treatment: Once sourced, water undergoes extraction and treatment processes. Pumping water from underground aquifers or diverting it from rivers and lakes consumes energy, primarily derived from fossil fuels. In addition, water treatment involves the use of chemicals, further contributing to the carbon footprint. It is pivotal to optimize these processes to minimize environmental impact.

Transportation: The transportation of water from its source to distribution points adds another layer to its environmental impact. Whether it is moved through pipelines, by tanker, or by other means, the energy needed for transportation and the associated emissions must be considered. Local sourcing and efficient transportation methods can help reduce these environmental costs.

Distribution: The distribution network that delivers water to homes and businesses is a critical element. Leaks, inefficient infrastructure, and the energy needed to pump water through the network contribute to waste and environmental degradation. Implementing smart technologies and sustainable infrastructure can improve efficiency and reduce environmental impact.

Consumption: At the consumer level, water usage habits play a significant part in the environmental impact. Wasteful practices, such as leaving taps running or over-irrigating lawns, contribute to unnecessary water consumption. Raising awareness about water conservation and promoting responsible usage can mitigate these impacts.

Wastewater Management: After use, water becomes wastewater. Inadequate wastewater management can lead to pollution of natural water bodies. Advanced treatment technologies and the implementation of circular-economy principles, such as water recycling, can minimize pollution and ensure a more sustainable water cycle.

Impact on Ecosystems: The environmental impact of water consumption extends beyond human activity. Aquatic ecosystems, dependent on the balance of water volume and quality, are directly affected. Over-extraction and pollution can lead to habitat loss, biodiversity decline, and ecosystem collapse, impacting both aquatic and terrestrial life.

Conclusion: Understanding the intricate journey of water from its source to our taps is pivotal for addressing the environmental challenges associated with its consumption. Sustainable water management practices, technological advancements, and individual responsibility all play key roles in ensuring a future where water is not just a resource for today but a legacy for generations to come. As stewards of this precious resource, it is our responsibility to consider the entire lifecycle of water and work towards a more sustainable and equitable water future.
The use of two independent mechanisms to verify the identity of a user. There are four authentication factors, as follows: 1. What you know (password, PIN, personal data). 2. What you have (private cryptographic key, authentication token). 3. What you are (biometric scan). 4. What you do (speak a phrase, hand write a signature). Any two of the four are used in two-factor authentication (2FA); for example, using a password with a token (1 and 2) or a password and fingerprint scan (1 and 3). A password and security question such as "what is your grandmother's maiden name" may be two factors, but they both fall into the "what you know" category, and both could be acquired illegally from the same website. One factor from two different categories is more secure. Cellphone Second-Factor Codes Another common two-factor method is that after users log in with a password, a code is texted to their cellphone ("what you have"). Copying that security code from the phone into the login process provides the second factor. See FIDO, smart card and one-time password. Cellphone two-factor authentication is increasingly used for financial transactions. The transmitted verification code is only valid for a short period of time before it expires.
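A widely used variant of the texted-code flow is the time-based one-time password (TOTP, RFC 6238) computed by an authenticator app, where the enrolled device holding the shared secret is the "what you have" factor. A minimal sketch using only the Python standard library (the secret below is the RFC 6238 test key, used purely for illustration, never in production):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, step=30, at=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded for the app.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET))  # valid only for the current 30-second window
```

Unlike a texted code, nothing is transmitted at login time: server and device compute the same short-lived code independently, which is why the code expires with each 30-second window.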
A new method for monitoring drug concentrations during treatment uses light to identify harmful overdosing or inefficient underdosing, according to research published online this week in Nature Chemical Biology. The simplicity of this method may be particularly valuable for monitoring drug treatment outside of diagnostic laboratories. The optimal drug dosage, or the concentration of drug that is prescribed, can vary widely depending on the disease and the profile of the specific patient being treated. Some drugs have a very limited ‘treatment window’, meaning they have to be used at specific concentrations to effectively treat the disease without too much toxicity to the patient. Consequently, being able to confirm that the prescribed dose is at the right concentration is very important. However, identifying drug concentrations currently requires lengthy procedures or machines that are not readily used outside of doctors’ offices and clinical laboratories. Kai Johnsson and colleagues now incorporate luminescent proteins, or proteins that give off light, as part of a larger sensor molecule made of both proteins and synthetic components and introduce these molecules into blood samples. In the absence of the drug molecule, this sensor system gives off red light. However, upon binding the drug, the sensor switches to blue light, with the ratio of red and blue light dependent on the drug concentration. The authors demonstrate the system works with six different commonly used drugs. As the system gives off light, the signal can be detected from a single drop of blood with a commercially-available digital camera, suggesting the method could be extended for use in developing world regions with limited infrastructure or in patient homes.
- Find out about how we fight fires today compared with fires in the 17th century.
- Generate questions and research the answers about the Great Fire of London to write reports for a class newspaper, the 'Great Fire'.
- Contrast the design, properties and materials used in modern buildings with those at the time of the Great Fire of London. Make 3D models and 2D collages of Tudor homes to re-enact the fire with tissue-paper 'flames'!
- Find out about historical songs and chants connected to the Great Fire of London. Explore dynamics, pitch and tempo. Create a simple 4-part music and movement composition, inspired by the Great Fire.
- Using drawing, imagination and communication, use charcoal drawing and potato printing to develop artistic ideas inspired by St Paul's, before designing, making and decorating a model Cathedral.
- Learn about modern and 17th-century fire-fighting. Understand how the Great Fire of London started, how it spread and what the results were. Finally, think about your own fire safety before creating a poster.
- Find out about the famous diarists Samuel Pepys and John Evelyn. Write your own diary entries, including a realistic entry set during the Great Fire. Share diaries in a 'coffee house' setting.
- Discover food eaten at the time. Contrast the diet of the rich and poor. Compare and make contemporary and period recipes.
- Study the Great Fire monument in London. Build your own structure.
- Prepare tours of London, make souvenirs, role-play key people and draw maps to transform your classroom into 17th-century London. Become tour guides, teaching visitors about the historic events of 1666.
Your Heart’s Electrical System

The heart has a special system that creates and sends electrical signals. First, signals tell the atria (singular: atrium) to squeeze. This moves blood to the ventricles. Next, signals tell the ventricles to squeeze. This moves blood to the lungs and body. Groups of special cells in the right atrium, called nodes, send the heart’s electrical signals. The signals travel along pathways. In the ventricles, these pathways are called bundle branches.

The SA Node
This sets the pace of the heartbeat. It starts each beat by releasing a signal telling the atria to squeeze.

The AV Node
This receives the signal from the atria. It is the “gateway” between the atria and the ventricles. The AV node channels the signal into the ventricles.

The Bundle Branches
These carry the signal through the ventricle walls. As the signal moves through the ventricles, the ventricles squeeze.
Part 7 of 10: Springs and Rivers - Category: Viktor Schauberger

Springs are considered wells. They have always been thought to have healing powers. "I can remember living in the Ukraine for a summer in 1991. There was one spring with cool, clean water that people traveled for miles around to come visit and stock up on water. I can remember drinking directly from it with my hand and it was so refreshing on that hot summer day!" There are also hot springs, which are thought to be healing for physical aches and pains too. Viktor Schauberger invented a “spring water machine” which he said had the same properties as a natural spring.

There are seepage springs and true springs. Seepage springs are formed by the overflowing of excess water from the top layers of the Earth; it is very easy for them to dry up in the heat or overflow in the rainy season. True springs well up from a much deeper source in the ground, and those water sources are said to possibly be hundreds of years old and full of the soil’s minerals. Because the deeper earth is much cooler, the spring water is cool as well. Since the spring water has been depleted of oxygen through passing under plant roots, it is important to gather it far enough away from the source that it has been re-oxygenated as it reaches the surface of the Earth. Otherwise the spring water will want the oxygen that is in the body and take it from our cells.

Rivers are a wonderful source of cool movement, and even on a hot summer day they will still be cold, having flowed down from their mountain source. In order to keep rivers cool enough so they do not flood excessively, the author of Hidden Nature (Alick Bartholomew) lists four ways this is possible in Chapter 11:

1. Replant trees. Evaporation cools the tree’s sap, and this goes into the roots and effectively cools the water. It will also aid the nutrient deficiency in rivers.

2. Place better-constructed dams in the right places.
Dams need to release cool water, not water that is warmer than the current river temperature. This heat increase hurts the natural ecosystem of the river; however, at certain times it is possible to release water that is too cold as well, which disturbs the river equally. Schauberger designed a dam which took temperature differences into account and released water at just the right temperature.

3. Install flow-deflecting guides. The right flow in a river creates a vortex that will cool the river to the correct temperature, and fertilization abounds as oxygen and nutrients reach the plant life that lines the river.

4. Place ‘energy bodies’ in rivers. This system creates a vortex as well, but is designed for a portion of the river that is not bending but rather traveling in a straight line for some distance. They were tested by Schauberger himself, and did remove unwanted sediment from the banks of the river.

In conclusion, with the right knowledge of what has been tested and by placing simple structures in the path of flowing water, our water sources can once again bring structure and life to our planet.

Source: Hidden Nature: The Startling Insights of Viktor Schauberger by Alick Bartholomew.
Low-emission (low-E) glass is glass with a low-emissivity coating applied to it in order to control heat transfer through windows. Windows manufactured with low-E coatings typically cost about 10–15% more than regular windows, but they reduce energy loss by as much as 30–50%.

Glass is one of the most popular and versatile building materials used today, partly because of its constantly improving solar and thermal performance. One way this performance is achieved is through the use of passive and solar-control low-E coatings. To understand coatings, it is important to understand the solar energy spectrum, the energy from the sun. Ultraviolet (UV) light, visible light and infrared (IR) light all occupy different parts of the solar spectrum; the differences between the three are determined by their wavelengths.

- Ultraviolet light, which causes interior materials such as fabrics and wall coverings to fade, has wavelengths of 310–380 nanometers when reporting glass performance.
- Visible light occupies the part of the spectrum between wavelengths of about 380–780 nanometers.
- Infrared light, or heat energy, is transmitted as heat into a building and begins at wavelengths of 780 nanometers. Solar infrared is commonly referred to as short-wave infrared energy, while heat radiating off warm objects has longer wavelengths than the sun's and is referred to as long-wave infrared.

Low-E coatings have been developed to minimize the amount of ultraviolet and infrared light that passes through glass without compromising the amount of visible light that is transmitted. When heat or light energy is absorbed by glass, it is either carried away by moving air or reradiated by the glass surface. The ability of a material to radiate energy is known as emissivity. In general, highly reflective materials have a low emissivity and dull, darker-colored materials have a high emissivity.
All materials, including windows, radiate heat in the form of long-wave infrared energy, depending on the emissivity and temperature of their surfaces. Radiant energy is one of the important ways heat transfer occurs with windows, so reducing the emissivity of one or more of the window glass surfaces improves a window’s insulating properties. Low-E glass has a microscopically thin, transparent coating – much thinner than a human hair – that reflects long-wave infrared energy (heat). Some low-E coatings also reflect significant amounts of short-wave solar infrared energy. When interior heat energy tries to escape to the colder outside during the winter, the low-E coating reflects the heat back to the inside, reducing the radiant heat loss through the glass. The reverse happens during the summer.

To use a simple analogy, low-E glass works the same way a thermos does. A thermos has a silver lining that reflects the temperature of the drink it contains back in. The temperature is maintained by this constant reflection, as well as by the insulating benefit of the air space between the inner and outer shells of the thermos – similar to an insulating glass unit. Since low-E coatings consist of extremely thin layers of silver or other low-emissivity materials, the same principle applies: the silver low-E coating reflects the interior temperature back inside, keeping the room warm or cool.

There are two different types of low-E coatings: passive low-E coatings and solar-control low-E coatings. Most passive low-E coatings are manufactured using the pyrolytic process: the coating is applied to the glass ribbon while it is being produced on the float line, where it “fuses” to the hot glass surface, creating a strong bond, or “hard coat,” that is very durable during fabrication. Finally, the glass is cut into stock sheets of various sizes for shipment to fabricators.
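The insulating effect of a low-emissivity surface can be illustrated with the standard parallel-plate radiation formula. This is a simplified sketch: it ignores conduction and convection across the gap, and the emissivity values are typical textbook figures (uncoated glass around 0.84, a good soft-coat around 0.04), not data from this article:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2·K^4)

def radiant_flux(t_warm_k: float, t_cold_k: float, e1: float, e2: float) -> float:
    """Net radiant heat flux (W/m^2) between two parallel surfaces with
    emissivities e1 and e2 facing each other across a gap."""
    return SIGMA * (t_warm_k**4 - t_cold_k**4) / (1 / e1 + 1 / e2 - 1)

# A 20 C inner pane facing a 0 C outer pane across the air space
plain = radiant_flux(293.15, 273.15, 0.84, 0.84)  # both surfaces uncoated
low_e = radiant_flux(293.15, 273.15, 0.84, 0.04)  # one surface low-E coated
print(f"uncoated: {plain:.1f} W/m^2, low-E: {low_e:.1f} W/m^2")
```

With these assumed numbers the single low-E surface cuts the radiant exchange between the panes by roughly 95%, which is why the choice of coated surface inside the insulating glass unit matters so much.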
Passive low-E coatings are good for very cold climates because they allow some of the sun’s short-wave infrared energy to pass through and help heat the building during the winter, while still reflecting the interior long-wave heat energy back inside. Most solar-control low-E coatings are manufactured using the MSVD process: the coating is applied off-line to pre-cut glass in a vacuum chamber at room temperature. This coating, sometimes referred to as a “soft coat,” needs to be sealed in an IG or laminated unit and has lower emissivity and superior solar-control performance. The best-performing solar-control coatings are MSVD and are ideal for mild to hot climates dominated by air-conditioning use in commercial buildings.

Low-E coatings are applied to the various surfaces of insulating glass units. In a standard double-pane IG unit there are four potential coating surfaces: the first (#1) surface faces outdoors; the second (#2) and third (#3) surfaces face each other inside the insulating glass unit and are separated by an airspace and an insulating spacer; and the fourth (#4) surface faces directly indoors. Whether a low-E coating is considered passive or solar control, it offers improvements in performance numbers. The following measures are used to judge the effectiveness of glass with low-E coatings:

- U-value is the rating given to a window based on how much heat loss it allows.
- Visible light transmittance (VLT) is a measure of how much light passes through a window.
- Solar heat gain coefficient (SHGC) is the fraction of incident solar radiation admitted through a window, both directly transmitted and absorbed and re-radiated inward. The lower a window’s SHGC, the less solar heat it transmits.
- Light-to-solar gain (LSG) is the ratio between the window’s solar heat gain coefficient (SHGC) and its visible light transmittance (VLT) rating.
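As a small worked example of the last metric: LSG is simply VLT divided by SHGC, so a coating that keeps visible light high while cutting solar gain scores better. The unit values below are hypothetical, not taken from any manufacturer's data sheet:

```python
def light_to_solar_gain(vlt: float, shgc: float) -> float:
    """LSG ratio: visible light transmitted per unit of solar heat admitted."""
    return vlt / shgc

# Hypothetical glazing units for comparison
clear_double = light_to_solar_gain(vlt=0.79, shgc=0.70)   # uncoated double pane
solar_control = light_to_solar_gain(vlt=0.64, shgc=0.27)  # solar-control low-E
print(f"clear: {clear_double:.2f}, solar-control low-E: {solar_control:.2f}")
```

A higher LSG (figures above roughly 1.25 are often cited, though the threshold varies by source) is commonly used as a rough marker of "spectrally selective" glazing, i.e. glass that admits daylight while rejecting solar heat.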
Both of these subjects are planned for separately. In History the emphasis is upon developing a sense of the passing of time and an awareness that the circumstances we experience today have been shaped by our past. We begin at a personal level with your child’s direct experiences and gradually introduce wider historical issues as defined in the National Curriculum. When teaching Geography we begin with your child’s experience within the neighbourhood and then plan progression following the National Curriculum so that they are introduced to: - A wider knowledge of the world - The relative position of the countries - The effects of mankind on the landscape and an investigation into geographical issues. We subscribe to oddizzi which can be used to view videos, maps, photographs and fact files to support homework or to simply follow interests. A bit like an online Geographical Encyclopedia. Our History and Geography policies can be downloaded below as can the curriculum overviews for each subject.
“In 1930 Mannes and Godowsky were invited to join the staff of the Kodak Research Laboratory, where they concentrated on methods of processing multilayer films, while their colleagues worked out ways of manufacturing them. The result was the new Kodachrome film, launched in 1935. Three very thin emulsion layers were coated on film base, the emulsions being sensitised with non-wandering dyes to red, green and blue light, the red-sensitive layer being at the bottom.” (Coe, Brian (1978): Colour Photography. The First Hundred Years 1840-1940. London: Ash & Grant, pp. 121 ff.)

“The Konicolor system, introduced by Konishiroku Shashin Kogyo (now Konica Minolta Holdings, Inc.), split the image into three colors and shot them separately onto three b&w films. In that sense it had something in common with the US ‘Technicolor system’, but this was not a contact print with color dye to create positive film, but used coated emulsion to develop each color in a triple process, which is peculiar. […]”

The original two-color Kodachrome process was invented in 1913 by John G. Capstaff for still photography and subsequently adapted to motion pictures. During capture, a beam-splitter combined with filters in the camera divided the incoming light into a red and a green separation negative on black-and-white stock. Two frames were advanced simultaneously, one located above the other, the light passing either through two lenses or through a beam-splitter fitted with red and green filters. The release print was exposed through a beam-splitter whereby the alternate frames were projected onto either side of double-coated stock. After development by a usual b/w process, the film was tanned to harden the exposed areas; the soft areas were dyed red-orange and blue-green respectively.
When projected in the cinema, the two images were combined simultaneously by additive mixture through corresponding red and green filters into one picture consisting of red and green colored light. The reduction of the whole color range to two colors (and their additive combinations) was necessary because of the complex optical arrangement. Kinemacolor was an additive process operated with alternating red and green filters applied to the shutter in front of the camera and in front of the projector. At a rate of at least 32 fps, the frame rate was double the minimal frame rate of 16 fps. Time parallax, with small differences between the red and green records, resulted in color fringes that became visible when objects or scenes were moving.
In this lesson, students will work on their literary analysis paper, focusing on the thesis. They will also spend time on their group research project. - Read the lesson and student content. - Anticipate student difficulties and identify the differentiation options you will choose for working with your students. - Decide how you will have students share their ideas, questions, and comments in Tasks 3 and 4. Group Project Check-In - Students should return to the project groups. - If appropriate, allow groups to plan and organize their use of class time. Let students know that presentations will be during Lesson 25 and will be in chronological order. Report back to your group on your homework. - Were you able to complete the work assigned to you? - What problems did you face? - How can you and your group resolve any problems or difficulties? Short Story Share - If possible, find a way for students to share their thoughts in a central location. Share your thoughts on the second short story you read for homework. - Review each of the questions you answered in the previous lesson, about the second short story, with your classmates: - Who is the protagonist? - What is the setting? - Who is the author? What do you know about him or her? - Does the biographical information about the author contribute at all to your understanding of the story? - What theme or themes emerges from your reading of the story? - Can you connect this theme to any of the works you have read in this unit? Any of the works in your project? - SWD: Some students may benefit from using a Venn diagram to graphically organize the similarities and differences between their two stories. If time allows, you can demonstrate how this would work for the class. - The questioning strategy works best when students are encouraged to answer one question by developing another one. 
They should narrow down their ideas from a broad question (e.g., “What are these authors trying to say about America?”) to a more focused question (e.g., “How does the author use descriptions of the landscape to express the main characters' shared sense of isolation?”). - Display or project the following qualities of a critical question for thesis generation: - ✓ These questions should be open-ended. - ✓ It should not be possible to answer the question in a single sentence. - If feasible, provide a central location where students can share ideas and questions. If Internet access is not available, a whiteboard would work. Before you continue working, revisit the strategies you used to develop the thesis for your first paper in this unit, in Lesson 10. When you have finished, make notes on the following questions. - What do the story you read for homework and the story you read in class have in common? - What commonalities among stories might you explore in your paper? - What questions do you have that might help you generate a meaningful thesis? Write at least four questions. It might be helpful to review the assignment and your Independent Reading Journal to help you stay on track during this process. Response to Peer Questions - If Internet access is not available, put students in small groups and have them share digitally, trade tablets, or use some other method so they can comment on each other's questions. - Make sure each student receives a few substantive responses from peers. - ELL: This task can be a good opportunity to check in with these students and review the responses that they’ve received. Share the questions you developed in the previous task. - Follow your teacher’s instructions to respond to at least four of your classmates’ questions. You will have at least four responses to your questions by the end of this task. Project Homework Assignment - Check in with each group to make sure everyone is clear on their assignment. 
Assign homework for each member of your group. - What still needs to be done? - What will each person do to prepare for the next lesson? Independent Project Work - Remind students that presentations will take place in a couple of lessons. - Also, remind students to continue their Independent Reading and to continue filling out the Independent Reading Journal they began in Lesson 16, Task 6. - Complete the work assigned by your group. Continue working on your Independent Reading Journal.
A new analysis is reigniting a concern agricultural scientists have been voicing for years: That rising carbon dioxide could exacerbate malnutrition by reducing the nutrient content of staple crops. The study, published Monday in Nature Climate Change, projects that if atmospheric carbon dioxide levels rise to 550 parts per million (ppm)—a level conceivable by later this century if we don’t aggressively reduce emissions—it could result in an additional 175 million zinc deficiencies and 122 million protein deficiencies worldwide. The study also concluded that by mid-century, some 1.4 billion women and children under five will live in regions facing a high risk of iron deficiency. There are a lot of assumptions baked into these model-borne estimates, but the findings fall in line with other recent analyses. For instance, a study published last month projected that rising carbon dioxide could result in an additional 126 million life years lost by 2050 due to falling iron and zinc concentrations in crops. And a widely-publicized paper from May projected that the protein, iron, zinc, and B vitamin content of various cultivars of rice would fall as carbon dioxide rises. “The growing body of literature on the impacts of rising carbon dioxide concentrations on the nutritional quality of our food indicates the health consequences could be significant, particularly for poorer populations in Africa and Asia,” Kristie Ebi, director of the Center for Health and the Global Environment at the University of Washington and a co-author on the aforementioned papers from May and July, told Earther. The basic premise here is that crops take up carbon dioxide from the atmosphere for photosynthesis, and as carbon dioxide levels rise, some crop plants incorporate more of it into their tissue relative to nutrients they acquire from the soil. This, in essence, dilutes the nutrients. 
Experimental studies have demonstrated this effect for a wide variety of nutrients and crops, including zinc and iron in major staples like wheat and rice. Previous modeling studies, meanwhile, have projected the impacts of this into the future under a range of different assumptions. The new study expands on prior research by looking at 151 countries and 225 foods under a single set of assumptions and using more detailed food supply datasets for individual countries. The Harvard-based authors also pulled data on human nutritional requirements, and from their own previous analyses of the carbon dioxide-responsiveness of various crops, to consider the impacts of a 550 ppm world on dietary intake of iron, zinc, and protein. The hundreds of millions of additional nutrient deficiencies the researchers project are most concentrated in low-income regions featuring more plant-rich diets, including India, Southeast Asia, Sub-Saharan and North Africa, and the Middle East. The findings reinforce the overall picture of a disproportionate climate change burden falling on the world’s poor. But they also rely on some big assumptions, chiefly that diets in these countries won’t change over the coming century. Changes in the types of crops grown—some staples, like maize and millet, appear less sensitive to carbon dioxide—or the particular cultivars, could counteract a decline in nutritional quality. Many agricultural changes might be necessitated around the world this century as temperatures climb, rainfall patterns change, fire seasons lengthen, and droughts and heat waves intensify. How carbon dioxide’s effects on nutrition slot into this bigger, more complex picture is something researchers are still working out. 
As a final caveat, Ebi pointed out that the authors’ model makes use of some unpublished datasets about the effects of carbon dioxide on minerals, “making it difficult to check the validity of the raw data.” We’ve reached out to the study authors for comment and will update this post if and when we hear back. Overall, the authors’ message that all else being equal, rising carbon dioxide levels could exacerbate nutritional deficiencies is part of a growing body of work. The suggestion that vulnerable regions should monitor their crops and plan for such an effect, whether through breeding new cultivars, different growing techniques, or national supplementation programs, seems well-advised. “Another clear and direct intervention globally would be to redouble efforts to reduce global CO2 emissions,” the authors write.
Children are steeped in television and social media, and it is difficult for them to avoid hearing something about the horrific shooting at Sandy Hook Elementary School in Newtown, Connecticut. At this time, it is important for parents and teachers to have open discussions with their children and students about what they are experiencing as they learn about the events that transpired. Here are some tips on how to approach the sensitive subject of violence: Model Calm Behavior When talking about a traumatic event with children, it is first important to model calm and controlled behavior. Children look to adults to see how to respond in situations, and will be more likely to exhibit composure if adults set the scene as a quiet, safe one. Remind children that they are in a safe place, and that they have adults to care for them, as well as officials like police officers who work day and night to make sure they are safe. Validate Children’s Feelings – It’s OK to be Upset It is also integral for adults to validate children’s feelings. “Let children know that it is okay to feel upset,” said the National Association of School Psychologists in a handout on national tragedies. “Explain that all feelings are okay when a tragedy like this occurs. Let children talk about their feelings and help put them into perspective.” Be Interested in What Children Have to Say To start the conversation about the actual event, adults should first let children know that they are interested in how they are feeling, as well as how they are coping. “Listen to their thoughts and point of view,” recommends the American Psychological Association. “Don't interrupt — allow them to express their ideas and understanding before you respond.” After they have shared their thoughts, adults can share their own opinions, but should do so without putting down children’s opinions. 
Tell the Truth and Stick to the Facts The APA also believes that adults should tell students the truth of what happened – not by releasing any details that could be upsetting, but acknowledging the tragedy that occurred. “Don’t try to pretend the event has not occurred or that it is not serious,” wrote the APA. “Children are smart. They will be more worried if they think you are too afraid to tell them what is happening.” Instead of speculating about what happened and possibly providing wrong information, just tell students the facts that are known. Explanations must also be appropriate for the developmental stage of the child – young children should only need simple information with lots of reassurances of their safety, whereas older children may want to discuss causes of violence and how to make society a safer place. Regardless of age, the NASP says that it is very important for children to be able to verbalize their thoughts and feelings and have an adult to listen to how they feel. Monitor Children for Changes and Give News Breaks Some children may not be as quick to verbalize something that is upsetting them, so look for changes in behavior, such as children being more quiet than usual or changes in eating habits, sleep patterns, or anxiety levels. To help take the edge off the stress they may feel due to the national attention on the tragedy, give children what the APA calls “news breaks.” Older students may be watching television or seeing updates on the event online, but being constantly bombarded by information about the traumatic event can cause children to become more stressed. Encourage them to change the channel, play a game, or engage in a leisurely activity to take their mind off of the event. Looking Toward the Future Remind children of the people who are trustworthy and are looking to prevent such violence from happening in the future, such as local security guards, police, firefighters, and military. 
If an adult is concerned over the mental, physical, and emotional well-being of their child after experiencing stress related to anxiety or grief from a traumatic event, contact the school’s counselor or psychologist to speak with the child and provide further assistance.
A focus group is a qualitative market research method in which a trained moderator conducts a collective interview of a small group of participants – usually between five and twelve people, with six to ten most common – who share common characteristics such as age, background, geography, or a relevant experience. The moderator acts as a facilitator, and participants are encouraged to talk openly about particular topics the moderator raises. These could include new products, brands, companies, or political views, as well as prominent societal figures such as politicians. A typical session lasts between one and two hours.

Focus groups differ from one-on-one and group interviews in that they capitalise on communication between participants: respondents stimulate each other, jointly constructing meaning about a topic, and this group interaction is a unique strength not found in any other method. Focus group interviewing is particularly suited to obtaining several perspectives on the same topic, and group processes help people explore and clarify their points of view. To ensure that the maximum number of different ideas or reactions is captured, companies typically hold several focus groups, often in different cities; three to four is common. Discussions can also be held online, whether text-based or as video focus groups, so when planning a new study it is worth weighing the pros and cons of an online versus an offline format.

The method also has clear limitations, and it is important to acknowledge them in any study:

- Results cannot be generalized to a wider population, since only six to twelve people take part in each group and qualitative data is often context-specific.
- It is not useful for answering questions like "how many" or "how much"; as a quasi-structured qualitative method it offers no quantitative insights.
- Participants may try to answer questions in a way that pleases the interviewer, and a moderator may unknowingly limit open, free expression of group members.
- Group pressures can produce a "polarization effect," in which opinions shift as participants react to one another.
- Focus groups are not ideal for sensitive topics. Research about views on sexual risk, for example, can be considered 'sensitive' because talking about risk-taking may involve disclosing perceived moral failures, given the contemporary emphasis on moral accountability for personal welfare.
- If the topic is of minor concern to participants, or they have little experience with it, the discussion yields little.
- Text-based online focus groups are not suitable for every qualitative research project.
- Running a good group takes skill: as Ritchie and Lewis (p. 196) explain, the most stimulating and successful focus groups come with experience, and the moderator must remain objective and be able to screen for appropriate participants.

Focus groups therefore remain most effective as a preliminary research tool, where a more structured approach may be premature and where the data collected does not require much analysis. Examples of the method in practice are numerous: Amy Slater and Marika Tiggemann (2010) conducted six focus groups with 49 adolescent girls between the ages of 13 and 15 to learn more about girls’ attitudes towards participation in sports, and focus group discussion is frequently used in the social sciences, with particular application in developmental program evaluation (Doody, Slevin, & Taggart, 2013). Focus groups have also become an increasingly popular method of data collection in nursing research. Students should learn to understand their own conditions to choose the best research method for their study, and the limitation section of any write-up should give readers a clear context for evaluating the impact and relevance of the results.

Source: "Using and analysing focus groups: Limitations and possibilities," International Journal of Social Research Methodology, Vol. 3, No. 2 (2000), pp. 103–119.
BIOANATOMY AND BIOMECHANICS
Together with the eyes, the nose forms the centerpiece for initial impression. It has a complex three-dimensional structure, and its skin is nonuniform. With its many distinct forms, the nose is among the most challenging sites for surgical reconstruction, and one of the most rewarding. The takeoff point of the nose is the nasion. From this point, the nasal bone extends inferiorly and anteriorly. The majority of the nose is composed of cartilage, fascia, muscle, and skin. The structure of the upper nose rests on the upper lateral cartilages. The lower nose is supported by the columella and the lower lateral alar cartilages. Biomechanically, the nasal tissues are relatively inelastic. The upper nose tends to have thin, nonsebaceous skin. The lower nose is more sebaceous. The very tip of the nose and the columella are often thinner and less sebaceous as well. These three zones of nasal skin are identified as types I, II, and III (Fig. 7.1). They do not exist in a predictable location and transition variably. Some individuals have thin, mobile type I skin on the majority of the nose, while others—particularly older men—have a thick sebaceous quality to almost all of the nasal skin. In these individuals, reconstruction with local flaps can be particularly challenging due to the inherent visibility of complex surgical scars. The bony/cartilaginous structure of the nose is lined internally by a loose, thin subcutaneous tissue and mucosa. The external surface of the upper and mid nose is covered by epidermis, dermis, a thin loose superficial fascia, a layer of nasalis muscle, and a thicker multilayer inframuscular fascia. Three types of skin are present on the nose. The type I skin of the nasal bridge and upper nose is less sebaceous and more mobile. Type II skin is present on the alae and distal nose. It is thicker and sebaceous in nature. Type III skin lines the nares, columella, and soft triangle.
It is thin, less sebaceous, and relatively immobile. The shape of the nose varies dramatically (Fig. 7.2). Hooked, or so-called Roman or aquiline, noses have a very prominent convex shape with a sharper nasal spine. The hawk nose is an accentuated hooked nose, with a very thin side-to-side profile. The Greek or straight nose has no curve to it, proceeding straight from the nasal spine to the tip. The Nubian nose starts thin at the upper bridge and widens and enlarges in thickness toward the wide, open nares. This nose type is common in African Americans. The pug nose is a short, slightly concave nose with a flattened tip. The upturned or celestial nose is a long, thin nose with an upwardly projected tip, often with large, prominent lower lateral alar cartilages. Each such nasal subtype presents a different challenge to ...
There is no easy, one-size-fits-all solution for creating the ideal learning environment. The multitude of factors ranging from teachers’ teaching styles to community involvement and everything in between necessitates that the ideal learning spaces for a school will vary. However, after almost a decade of working with schools to create their ideal learning environments, we have found that there are 5 essential elements that, when combined, create a high impact learning environment. (Learner mobility is showcased in the breakout space shown above. Students’ learning is not confined to the classroom.) High impact learning environments center on the reality that the 21st Century knowledge worker will need extremely high agility and adaptability in order to succeed. They have to be able to assimilate new technologies, adopt new skill sets, and validate information that they are receiving. Sure, you can look up bits and pieces of information online, but effectively sourcing, analyzing, and validating that data – then using it to collaborate with others – is an extremely important soft skill that not all students are acquiring at the K-12 level. And while the physical classroom setting doesn’t necessarily correct this problem, it does support the lifelong learner and his or her future needs. A supportive, collaborative, high-impact learning environment includes the following critical elements:
- Integrated Technology: The integration of technology into the educational environment is more involved than placing computers in a classroom. In a high impact learning environment, integrated technology becomes an integral part of the learning experience.
- Learner Mobility: Today’s learner is mobile. Formal and informal learning contexts are now prevalent as a result of pedagogy and technology.
- Adaptability: Use of the learning facility is likely to change as often as education does; therefore, the design of a space must allow owners many options for its use.
- Multiple Modalities: A high impact learning environment is designed so that differentiated instruction may take place with ease. This means creating spaces, configurations, and flexibility to allow for highly varied learning environments.
- Dynamic Ergonomics: Humans are made to move, and an active learning environment stimulates cognitive development.
My previous post discussed the mathematical concepts of function and relation. Because the content of this post heavily depends on an understanding of the ideas presented in that post, you may find it helpful to read it before continuing. The concept of the inverse of a relation is a natural extension of the important concept of a relation. The central idea is that an inverse relation is about reversing a relationship by exchanging variables, reversing/undoing an operation, or reversing/undoing a series of operations in a specific order. The following five questions and situations illustrate how a person uses the concept of the inverse of a relation to solve a problem.
(1) If we know a formula to convert Fahrenheit temperatures to Celsius, what formula converts Celsius temperatures to Fahrenheit?
(2) If we know a formula that tells us how to calculate the area of a circle from its radius, what formula will tell us how to calculate the radius of the circle from its area?
(3) A diner in a restaurant uses the restaurant’s menu function in inverse mode to determine what food items on the menu he/she can afford.
(4) A criminal investigator uses the one-to-one function that matches people with DNA molecules in inverse mode to match a sample of DNA molecules with a criminal.
(5) When solving for the sides and angles of a triangle, a trig student uses the inverse trig functions on his/her calculator to find the measure of an angle that has a specific trig function value.
The purpose of this post is to discuss inverse functions and relations when the matching rule is given by an x-y variable equation where both the domain and range are subsets of the real numbers. These concepts will be discussed from algebraic and geometric points of view. I will begin by looking at inverses of functions and relations from a geometric point of view. The two text boxes below summarize the geometric relationships between a relation and the inverse of a relation.
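The first question above has a concrete answer, and it illustrates the "undo the operations in reverse order" idea. Here is a minimal Python sketch (the function names are my own) showing that the Celsius formula and its inverse undo each other:

```python
def f_to_c(f):
    """Convert Fahrenheit to Celsius: subtract 32, then multiply by 5/9."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """The inverse: undo the operations in reverse order --
    multiply by 9/5, then add 32."""
    return c * 9 / 5 + 32

print(f_to_c(212))          # 100.0
print(c_to_f(100))          # 212.0
print(c_to_f(f_to_c(72)))   # back to 72 (up to floating-point rounding)
```

The round trip is exactly the f(f⁻¹(x)) = x check discussed later in the post.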
The companion graphs illustrate the geometric relationships described in the text boxes. Notice that exchanging the variables in an equation gives us the equation of the inverse relation. These observations, of course, follow from the definition of the inverse of a relation, the midpoint formula, the definition of slope, and the fact that the product of the slopes of two perpendicular lines equals -1. The text box below shows examples of elementary functions and the corresponding inverse relations, which may or may not be functions. Notice that the inverses of the functions y = x² and y = |x| are relations, but not functions, since y = x² and y = |x| are not one-to-one functions. As a reminder, the symbol √(x) means take the positive square root of x; every positive real number has both a positive square root and a negative square root. Also note that the function y = Sin(x) is not one-to-one, and therefore the inverse relation is not a function. Calculators get around this problem by restricting the range of the function Sin⁻¹(x) to values that range from –π/2 to π/2. The next part explains how I teach the inverses of the trig functions y = Sin(x) and y = Cos(x). Initially, students struggle with the definitions of the inverse trig functions. Consider the equations listed in the edit box and graphs below. Because the trig functions are periodic, there are infinitely many solutions for each equation. Because the calculator keys Cos⁻¹(x) and Sin⁻¹(x) are function keys, the calculator should display only one of the infinitely many possible output values. When x ranges from 0 to π, Cos(x) is one-to-one in adjacent quadrants I and II, and all possible output values of Cos(x) from -1 to 1 can be generated in quadrants I and II. Therefore Cos⁻¹(x) is a function if the output is restricted to range values from 0 to π radians.
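The same range restriction is built into programming languages, not just calculators. A quick way to let students see it is Python's math module, whose acos and asin answer only in [0, π] and [–π/2, π/2] respectively:

```python
import math

# math.acos always answers in [0, pi]; math.asin in [-pi/2, pi/2],
# mirroring the restricted-range Cos^-1 and Sin^-1 calculator keys.
print(math.acos(-1))    # 3.141592653589793, i.e. pi
print(math.acos(0.5))   # approximately pi/3
print(math.asin(-1))    # approximately -pi/2

# Cosine is periodic, so infinitely many angles share the same cosine;
# the function key reports only the representative in [0, pi].
theta = math.acos(math.cos(5))  # 5 radians lies outside [0, pi]
print(theta)            # approximately 2*pi - 5, the quadrant I/II representative
```

Note that acos(cos(5)) does not return 5; it returns the unique angle in [0, π] with the same cosine.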
When x ranges from –π/2 to π/2, Sin(x) is one-to-one in adjacent quadrants I and IV, and all possible output values of Sin(x) from -1 to 1 can be generated in quadrants I and IV. Therefore Sin⁻¹(x) is a function if the output is restricted to range values from –π/2 to π/2 radians. I have my trig students find six solutions of simple trig equations. Example: Find six angles β in degrees in quadrant III, 3 positive and 3 negative, such that Cos(β) = -0.951056516. Round solutions to the nearest tenth of a degree. I will conclude this post by showing you how I teach my students to find the inverse of a function when the function is composed of basic functions. The steps in the algorithm involve applying inverse operations in the reverse order of the order-of-operations rules. Exercises of this type reinforce concepts and are a good way to practice algebra skills. If you want to add some rigor to your course, have students check their solution by showing f(f⁻¹(x)) = f⁻¹(f(x)) = x. I remind students that an initial equation like x = y/(3y – 4) is an equation of the inverse relation, but it’s not expressed as a function of x. When a relationship is expressed as a function of x, we can graph the relation with a graphing utility. This is one of the reasons that we teach kids to solve an equation for a given variable. Sometimes I tell students to rearrange the equation for some variable because it makes more sense to them. Useful tools from Math Teacher’s Resource:
• The graphs in my posts are created with my software, Basic Trig Functions. I think that you will find it very useful for teaching mathematical concepts in your classroom and developing custom instructional content. Relations can be entered as an explicitly defined function of x, an explicitly defined function of y, or as an implicitly defined x-y variable relation. Check it out at mathteachersresource.com/trigonometry.
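To make the f(f⁻¹(x)) = x check concrete, suppose the function behind the equation x = y/(3y – 4) above was f(x) = x/(3x – 4) (my assumption for illustration). Solving x = y/(3y – 4) for y gives x(3y – 4) = y, so y(3x – 1) = 4x, hence f⁻¹(x) = 4x/(3x – 1). A short numeric check:

```python
def f(x):
    # assumed original function; undefined at x = 4/3
    return x / (3 * x - 4)

def f_inv(x):
    # obtained by solving x = y/(3y - 4) for y; undefined at x = 1/3
    return 4 * x / (3 * x - 1)

# f(f_inv(x)) = f_inv(f(x)) = x for every x in the common domain
for x in (2, -1, 0.5, 10):
    assert abs(f(f_inv(x)) - x) < 1e-9
    assert abs(f_inv(f(x)) - x) < 1e-9
print("round trips check out")
```

A check like this catches most algebra slips students make while reversing the operations.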
• There are a wide variety of free handouts that teachers can use to create lessons or give to students as a handy reference. Among these handouts are the Inverse Relations and Functions, Even and Odd Functions, and Relations and Functions Introduction handouts. Go to mathteachersresource.com/instructional-content to download MTR handouts. All content is available for immediate download. No sign-up required; no strings attached!
Comments Regarding My Previous Post:
• Some readers wanted to know the equation of the lead graph in my previous post. The equation of the graph is Cos(x) + Cos(y) >= 0.4, where both x and y range from -15 to 15. In view of the fact that Cos(x) is an even function, it should be no surprise that the graph has symmetry with respect to the x-axis, the y-axis, and the origin.
• The equation of the strange graph at the end of my previous post is 2xSin(3x) + 2y <= 3yCos(x + 2y) + 1. If you are skeptical, here are six solutions that you can plug into the equation to verify that the equation really does have solutions that satisfy the equality relationship. Just make sure that your calculator is in radian angle mode. (-5.4, 5.195 577 636) (5.5, 5.976 946 313) (8.680 865 276, -5.2) (-6.8, -6.786 215 284) (0.578 827 17, -3) (0.051 781 64, 5.8)
Video codecs are often misunderstood, or in some cases forgotten completely. However, they are arguably the most important part of the video format, and it is crucial that you know why that is the case – and what they do.
“What is a Video Codec?”
Simply put, a video codec is a tool that encodes and decodes video data using specific algorithms. The algorithms are designed to compress the video data so that it takes up less space when it is stored in the video file within the container format that is used. The decoder in the video codec then decompresses the video data for playback, allowing it to be viewed on your display. Both the encoder and the decoder in the video codec are important. Without the encoder it would be impossible to compress and store the video data, whereas without the decoder it wouldn’t be possible to view it.
“Why are Video Codecs Important?”
Now that you know what video codecs are, you may already be starting to see why they are important. Suffice it to say, they are what determines the compression that is used by the video (whether lossless or lossy) and therefore its file size. Not all codecs use the same algorithms to compress videos, and in fact the algorithms are constantly being improved over time. That is why newer video codecs are able to provide more efficient compression and reduce the file size further while retaining the same video quality. Aside from its role in compression, the video codec is also the main factor in determining whether a video format is supported by specific devices. If a device has the right decoder to view the data encoded by the video codec, it is supported. On the other hand, if it does not, the video codec will not be supported. It should be noted that some formats have stricter specifications that affect their compatibility aside from the video codec. These can encompass the audio codec, frame rate, resolution, and other video settings.
For example, to convert a video to DVD format you will need to follow a strict set of specifications in order for it to be compatible. On the other hand, learning how to convert DVD to MP4 is relatively easy, and you can do it quickly in Movavi Video Converter, for example. That is because MP4 has less strict specifications, and so long as the codec you use is supported you should have no trouble viewing it on most devices. Now that you understand the role that video codecs play, you should see why they are important. To put it simply, the video codec you choose can affect both the file size of the video and its compatibility.
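A little arithmetic shows just how much work the codec's compression does. The figures below are illustrative assumptions (24-bit color, 1080p at 30 fps, and a 5 Mbit/s encoded bitrate, which is a plausible but not universal H.264-class setting), not properties of any particular codec:

```python
# Uncompressed 1080p30 video at 3 bytes (24 bits) per pixel
width, height, fps = 1920, 1080, 30
raw_bytes_per_sec = width * height * 3 * fps
print(raw_bytes_per_sec / 1e6)   # about 186.6 MB of raw data every second

# Assume the codec targets 5 Mbit/s (an assumption for illustration)
encoded_bytes_per_sec = 5_000_000 / 8
print(raw_bytes_per_sec / encoded_bytes_per_sec)  # roughly 300:1 compression
```

Under these assumptions a two-hour movie shrinks from over a terabyte of raw frames to a few gigabytes, which is why codec choice dominates file size.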
Blood group antigens are polymorphic residues of protein or carbohydrate on the red cell surface. They can provoke an antibody response in individuals who lack them, and some antibodies can lead to hemolytic transfusion reaction or hemolytic disease of the fetus/newborn (HDFN). Researchers have identified the molecular basis of many red cell blood group antigens, and an actively maintained database currently lists more than 1,600 alleles of 44 genes. A mini-review, published in the March issue of CLN, describes the major applications of the explosion of knowledge in blood group genetics to the practice of blood banking and transfusion medicine. Blood banks and clinical laboratories routinely type blood donors and patients for ABO and Rh(D), as these are the most critical antigens for safe transfusion, writes Suneeti Sapatnekar, MD, PhD, a transfusion medicine staff physician at Cleveland Clinic in Cleveland, Ohio. Laboratories generally do not type for minor antigens—of the Rh, Kell, Duffy, Kidd, and MNS systems—but they do screen plasma for antibodies against these antigens (antibody screen). If a patient’s antibody screen is negative, units for red blood cell (RBC) transfusion must be ABO- and Rh(D)-compatible. If an antibody to a clinically significant minor antigen is present, units must additionally lack the corresponding antigen. The red cell phenotype is the complement of antigens on the red cell surface. In transfusion practice, this term refers to the status of clinically significant antigens other than ABO and Rh(D), typically some or all of the following minor antigens: C/c and E/e (Rh system); K/k (Kell system); Fya/Fyb (Duffy system); Jka/Jkb (Kidd system); and M/N and S/s (MNS system). Red cell phenotype testing of blood donors and donor RBC units is used to identify antigen-negative units for transfusion, usually for patients with red cell antibodies, but sometimes for transfusion-dependent patients without antibodies, to prevent alloimmunization.
Phenotype testing is performed by serological typing with specific antisera, using direct or indirect (antihuman globulin phase) hemagglutination. Serological typing methods are simple, but they require reliable typing antisera, and typing for multiple antigens is labor-intensive. Moreover, standard serological typing cannot be used if the patient was transfused recently, because donor red blood cells can persist in the circulation for up to 3 months after transfusion. Also, standard serological typing cannot be used for many antigens if the patient has a positive direct antiglobulin test (DAT), as only antigens detectable by direct agglutination can be typed. Specialized serological methods can overcome these limitations but are not always successful. The expression of many clinically significant antigens is determined by single nucleotide polymorphisms (SNPs). Detecting these SNPs can predict the red cell phenotype and is an alternative to serological typing. Multiple SNPs can be included in a single assay, allowing efficient screening for multiple antigens. For this reason, molecular typing is eminently suitable for the mass screening of blood donors and is expected to greatly expand the pool of blood donors (and donor RBC units) who are negative for multiple antigens or negative for a high-prevalence antigen. Molecular typing also provides the means to identify antigen-negative donors when typing antisera are not available. Pick up the March issue of CLN to learn more about molecular typing.
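Conceptually, predicting a phenotype from a biallelic SNP genotype is a lookup. The sketch below uses the Duffy system, where the Fya/Fyb antigens are commonly described as differing at a single coding SNP (G vs. A alleles); treat both the allele assignments and the code as illustrative teaching material, not a validated typing assay:

```python
# Illustrative only: predicted Duffy phenotype from the two alleles
# a person carries at the SNP distinguishing Fy(a) from Fy(b).
DUFFY_PHENOTYPE = {
    ("G", "G"): "Fy(a+b-)",   # homozygous for the Fy(a)-encoding allele
    ("G", "A"): "Fy(a+b+)",   # heterozygous: both antigens expressed
    ("A", "G"): "Fy(a+b+)",
    ("A", "A"): "Fy(a-b+)",   # homozygous for the Fy(b)-encoding allele
}

def predict_duffy(allele1, allele2):
    """Predict the Duffy red cell phenotype from two SNP alleles."""
    return DUFFY_PHENOTYPE[(allele1.upper(), allele2.upper())]

print(predict_duffy("G", "A"))  # Fy(a+b+)
```

Real assays must also handle variants that silence antigen expression despite the coding genotype (the Duffy-null promoter variant is a well-known example), which is one reason molecular typing panels interrogate multiple SNPs per system.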
Make a 3D model of any element atom. The atom model can be used to teach students about the structure of elements; and it is a suitable project for students learning the basics of chemistry. While preparing the element atom model, students can learn about the element, protons, neutrons, electrons and energy levels. The models are created using simple crafting supplies and the technique can be applied to any element on the Periodic Table. Research the element that you will use to create the model. To build the model, you will need to know the number of neutrons, protons and electrons in the atom. Also, refer to an electron configuration table for the number of electrons in each of the atom's energy levels. Draw a model of the atom. Draw a centre circle for the nucleus and fill it with the appropriate number of protons and neutrons for your element. Draw circles around the nucleus to represent the number of energy levels for the atom, then fill them with the appropriate number of electrons by drawing smaller circles along the rings. Assign a colour to the neutrons, protons, and electrons. Paint the 1.5-inch styrofoam balls in the colours selected for the protons and neutrons. Paint the one inch balls in the colour selected for the electrons. Allow the painted balls to dry completely. Use the glue gun to glue the protons and neutrons together into the nucleus. Alternate between colours so that the protons and neutrons are connected in an alternating pattern. Set the piece aside and allow it to dry. Measure the wire rings against the nucleus. The first ring should be large enough to allow for a one inch space between the nucleus and the ring. Increase the ring size for each new energy level. Cut the wire rings to create an opening in each ring. Gently pull the ring apart to create a one inch gap in the cut. Thread the electrons onto the rings, following the number of electrons per energy level outlined by the electron configuration table. 
The electrons should be arranged around each ring, at the top, bottom, left and right sides. Use the glue gun to close the cut made in the rings. Cut the clear thread into an 18 inch piece. Take the largest ring and tie the thread at its top end, allowing the remainder of the thread to hang down into the ring. Secure the knot in the thread. Measure one inch down the thread and tie the next ring onto it. Continue measuring and attaching rings to the thread until all of the energy levels have been attached. Measure two inches down on your thread and cut the remaining string. Measure one inch on the string and mark that point. Attach the thread at the marked point to the centre of the nucleus, securing it with clear tape. Cut any remaining thread hanging from the nucleus. Hang the atom from the banana hanger by its largest ring. Use a dab of hot glue to secure the rings to the banana hanger. If no banana hanger is available, the atom model can be attached to an S-hook and hung to display. Label the atom and its protons, neutrons and electrons. Use a permanent marker to label the charges on the protons and electrons.
Things you need
- 1-inch styrofoam craft balls
- 1.5-inch styrofoam craft balls
- 3 shades of acrylic or craft paint
- Wire banana hanger
- Glue gun
- Craft rings, varying sizes
- Wire cutters
- Clear thread
- Clear tape
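For the research step at the start of the project, the electron bookkeeping for a neutral atom can be sketched in a few lines of Python. The simple 2n² shell-capacity rule used here matches real electron configurations only up through argon (Z = 18), so consult an electron configuration table for heavier elements:

```python
def shell_counts(atomic_number):
    """Electrons per energy level, filling each shell n to its 2*n^2
    capacity. Accurate only for elements up to argon (Z = 18)."""
    shells = []
    remaining = atomic_number  # neutral atom: electrons equal protons
    n = 1
    while remaining > 0:
        capacity = 2 * n ** 2
        shells.append(min(capacity, remaining))
        remaining -= shells[-1]
        n += 1
    return shells

print(shell_counts(8))   # oxygen: [2, 6]
print(shell_counts(11))  # sodium: [2, 8, 1]
```

The returned list tells you how many wire rings to cut and how many small electron balls to thread onto each one.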
Lactation describes the secretion of milk from the mammary glands and the period of time that a mother lactates to feed her young. The process occurs in all female mammals, although it predates mammals. In humans the process of feeding milk is called breastfeeding or nursing. In most species milk comes out of the mother's nipples; however, the platypus (a non-placental mammal) releases milk through ducts in its abdomen. In only one species of mammal, the Dayak fruit bat, is milk production a normal male function. Newborn infants often produce a small amount of milk themselves, known as witch's milk. Galactorrhea is milk production unrelated to nursing; it can occur in males and females of many mammal species as a result of hormonal imbalances or unusual physiological stimuli. The chief function of lactation is to provide nutrition and immune protection to the young after birth. In almost all mammals, lactation induces a period of infertility, which serves to provide the optimal birth spacing for survival of the offspring.
Human lactation
Hormonal influences
- Progesterone influences the growth in size of alveoli and lobes; high levels of progesterone inhibit lactation before birth. Progesterone levels drop after birth; this triggers the onset of copious milk production.
- Estrogen stimulates the milk duct system to grow and differentiate. Like progesterone, high levels of estrogen also inhibit lactation. Estrogen levels also drop at delivery and remain low for the first several months of breastfeeding. Breastfeeding mothers should avoid estrogen-based birth control methods, as a spike in estrogen levels may reduce a mother's milk supply.
- Prolactin contributes to the increased growth and differentiation of the alveoli, and also influences differentiation of ductal structures. High levels of prolactin during pregnancy and breastfeeding also increase insulin resistance, increase growth factor levels (IGF-1) and modify lipid metabolism in preparation for breastfeeding.
During lactation, prolactin is the main factor maintaining tight junctions of the ductal epithelium and regulating milk production through osmotic balance.
- Growth hormone is structurally very similar to prolactin and contributes to its galactopoietic function.
- ACTH (adrenocorticotropic hormone) and glucocorticoids have an important lactation-inducing function in several animal species. ACTH is thought to contribute as it is structurally similar to prolactin. Glucocorticoids play a complex regulating role in the maintenance of tight junctions.
- TSH is a very important galactopoietic hormone whose levels are naturally increased during pregnancy.
- Oxytocin contracts the smooth muscle of the uterus during and after birth, and during orgasm(s). After birth, oxytocin contracts the smooth muscle layer of band-like cells surrounding the alveoli to squeeze the newly produced milk into the duct system. Oxytocin is necessary for the milk ejection reflex, or let-down, to occur.
- Human placental lactogen (HPL) – from the second month of pregnancy, the placenta releases large amounts of HPL. This hormone appears to be instrumental in breast, nipple, and areola growth before birth.
- Follicle stimulating hormone (FSH)
- Luteinizing hormone (LH)
By the fifth or sixth month of pregnancy, the breasts are ready to produce milk. It is also possible to induce lactation without pregnancy.
Secretory Differentiation
During the latter part of pregnancy, the woman's breasts enter the Secretory Differentiation stage. This is when the breasts make colostrum (see below), a thick, sometimes yellowish fluid. At this stage, high levels of progesterone inhibit most milk production. It is not a medical concern if a pregnant woman leaks any colostrum before her baby's birth, nor is it an indication of future milk production.
Secretory Activation
At birth, prolactin levels remain high, while the delivery of the placenta results in a sudden drop in progesterone, estrogen, and HPL levels.
This abrupt withdrawal of progesterone in the presence of high prolactin levels stimulates the copious milk production of Secretory Activation. When the breast is stimulated, prolactin levels in the blood rise, peak in about 45 minutes, and return to the pre-breastfeeding state about three hours later. The release of prolactin triggers the cells in the alveoli to make milk. Prolactin also transfers to the breast milk. Some research indicates that prolactin in milk is greater at times of higher milk production, and lower when breasts are fuller, and that the highest levels tend to occur between 2 a.m. and 6 a.m. Other hormones—notably insulin, thyroxine, and cortisol—are also involved, but their roles are not yet well understood. Although biochemical markers indicate that Secretory Activation begins about 30–40 hours after birth, mothers do not typically begin feeling increased breast fullness (the sensation of milk "coming in") until 50–73 hours (2–3 days) after birth. Colostrum is the first milk a breastfed baby receives. It contains higher amounts of white blood cells and antibodies than mature milk, and is especially high in immunoglobulin A (IgA), which coats the lining of the baby's immature intestines and helps to prevent pathogens from invading the baby's system. Secretory IgA also helps prevent food allergies. Over the first two weeks after the birth, colostrum production slowly gives way to mature breast milk.
Autocrine control – Galactopoiesis
The hormonal endocrine control system drives milk production during pregnancy and the first few days after the birth. When the milk supply is more firmly established, the autocrine (or local) control system begins. During this stage, the more milk is removed from the breasts, the more milk the breast will produce. Research also suggests that draining the breasts more fully increases the rate of milk production.
Thus the milk supply is strongly influenced by how often the baby feeds and how well it is able to transfer milk from the breast. Low supply can often be traced to:
- not feeding or pumping often enough
- inability of the infant to transfer milk effectively, caused by, among other things:
  - jaw or mouth structure deficits
  - poor latching technique
- rare maternal endocrine disorders
- hypoplastic breast tissue
- inadequate calorie intake or malnutrition of the mother
Milk ejection reflex
This is the mechanism by which milk is transported from the breast alveoli to the nipple. Suckling by the baby stimulates the paraventricular nuclei and supraoptic nucleus in the hypothalamus, which signal the posterior pituitary gland to produce oxytocin. Oxytocin stimulates contraction of the myoepithelial cells surrounding the alveoli, which already hold milk. The increased pressure causes milk to flow through the duct system and be released through the nipple. This response can be conditioned, e.g. to the cry of the baby. Milk ejection is initiated in the mother's breast by the act of suckling by the baby. The milk ejection reflex (also called the let-down reflex) is not always consistent, especially at first. Once a woman is conditioned to nursing, let-down can be triggered by a variety of stimuli, including the sound of any baby. Even thinking about breastfeeding can stimulate this reflex, causing unwanted leakage, or both breasts may give out milk when an infant is feeding from one breast. However, this and other problems often settle after two weeks of feeding. Stress or anxiety can cause difficulties with breastfeeding. The release of the hormone oxytocin leads to the milk ejection or let-down reflex. Oxytocin stimulates the muscles surrounding the breast to squeeze out the milk. Breastfeeding mothers describe the sensation differently.
Some feel a slight tingling, others feel immense amounts of pressure or slight pain/discomfort, and still others do not feel anything different. A poor milk ejection reflex can be due to sore or cracked nipples, separation from the infant, a history of breast surgery, or tissue damage from prior breast trauma. If a mother has trouble breastfeeding, different methods of assisting the milk ejection reflex may help. These include feeding in a familiar and comfortable location, massage of the breast or back, or warming the breast with a cloth or shower. A surge of oxytocin also causes the uterus to contract. During breastfeeding, mothers may feel these contractions as afterpains. These may range from period-like cramps to strong labour-like contractions and can be more severe with second and subsequent babies. Some women's breasts also become dry and chapped, and may even crack open and bleed, while breastfeeding. This has many different causes. The best way to treat painful nipples is to address the underlying cause; in the meantime, gel pads or lanolin rubbed on the nipples and areola can reduce the associated pain. Lactation without pregnancy, induced lactation, relactation In humans, induced lactation and relactation have been observed frequently in primitive cultures and demonstrated with varying success in adoptive mothers. It appears plausible that the ability to induce lactation in women (or females of other species) who are not biological mothers confers an evolutionary advantage, especially in groups with high maternal mortality and tight social bonds. The phenomenon has also been observed in most primates, some lemurs and dwarf mongooses. Lactation can be induced in humans by a combination of physical and psychological stimulation, by drugs, or by a combination of those methods. Some couples may stimulate lactation outside of pregnancy for sexual purposes.
Rare accounts of male lactation (as distinct from galactorrhea) exist in historical medical and anthropological literature, although the phenomenon has not been confirmed by more recent literature. Darwin correctly recognised that mammary glands developed from cutaneous glands, and hypothesized that they evolved from glands in the brood pouches of fish, where they provided nourishment for eggs. The latter aspect of his hypothesis has not been confirmed, but recently the same mechanism has been postulated for early synapsids. However, the discus fish (Symphysodon aequifasciata) has become known for feeding its offspring (biparentally) by epidermal mucus secretion. A closer look reveals that, as in most mammals, the secretion of this nourishing fluid may be controlled by prolactin. During the early evolution of lactation, secretion occurred through pilosebaceous glands, and mammary hairs transported the nourishing fluid to the eggs or young. Later, the development of the mammary patch rendered mammary hairs obsolete. Another well-known example of nourishing young with glandular secretions is the crop milk of pigeons. As in mammals and discus fish, this also appears closely tied to prolactin. Other birds, such as flamingos and penguins, use similar feeding techniques. Lactation is also the hallmark of adenotrophic viviparity, a breeding mechanism developed by some insects, most notably tsetse flies. The single egg of the tsetse develops into a larva inside the uterus, where it is fed by a milky substance secreted by a milk gland inside the uterus. At least one cockroach species is also known to feed its offspring with milky secretions.
Salix herbacea, the dwarf willow, least willow or snowbed willow, is a species of tiny creeping willow (family Salicaceae) adapted to survive in harsh arctic and subarctic environments. Distributed widely in alpine and arctic environments around the North Atlantic Ocean, it is one of the smallest of woody plants. [Image: female plant with red fruits] S. herbacea is adapted to survive in harsh environments, and has a wide distribution on both sides of the North Atlantic, in arctic northwest Asia, northern Europe, Greenland, and eastern Canada, and further south on high mountains, south to the Pyrenees, the Alps and the Rila in Europe, and the northern Appalachian Mountains in the eastern United States. It grows in tundra and rocky moorland, usually at over 1,500 m altitude in the south of its range but down to sea level in the Arctic. The dwarf willow is one of the smallest woody plants in the world. It typically grows to only 1–6 cm (0.4–2.4 inches) in height, with spreading prostrate branches, reddish brown and very sparsely hairy at first, growing just underground and forming open mats. The leaves are deciduous, rounded, crenate to toothed, and shiny green with paler undersides, 0.3–2 cm long and broad. Like other willows, it is dioecious, with male and female catkins on separate plants. As a result, the plant's appearance varies; the female catkins are red-coloured when ripe, while the male catkins are yellow-coloured. - Meikle, R. D. (1984). Willows and Poplars of Great Britain and Ireland. BSBI Handbook No. 4. ISBN 0-901158-07-0. - Salicaceae of the Canadian Arctic Archipelago: Salix herbacea - "Salix herbacea". Germplasm Resources Information Network (GRIN). Agricultural Research Service (ARS), United States Department of Agriculture (USDA). Retrieved 21 December 2017. - Blamey, M.; Fitter, R.; Fitter, A. (2003). Wild Flowers of Britain and Ireland: The Complete Guide to the British and Irish Flora. London: A & C Black. ISBN 978-1408179505. - Stace, C. A. (2010).
New Flora of the British Isles (Third ed.). Cambridge, U.K.: Cambridge University Press. ISBN 9780521707725.
The immune system uses cytokines to "sound the alarm." Like a well-trained fire station crew, the immune system can quickly spring into action at the "sound" of danger. Instead of alarms, though, immune cell substances called cytokines (taken from the Greek words for cell and movement) trigger immune cell movement and get cells where they need to be to respond effectively. Cytokines come in various forms that allow for responses against a variety of threats, including cancer. In addition to affecting immune cells, cytokines can also act directly on other types of cells, both healthy and diseased. One cytokine, tumor necrosis factor alpha (TNF-α), was named (by Dr. Lloyd J. Old) for its ability to induce cancer cell death. TNF-α also plays important roles in responses to bacteria and viruses, helping to eliminate infected cells. Another cytokine, interferon gamma (IFN-γ), is an important stimulator of adaptive immune responses that are capable of targeting tumors with incredible precision. The activity of IFN-γ was crucial in the work of Robert D. Schreiber, Ph.D., that validated the immunosurveillance hypothesis. Other CRI scientists have also made important contributions to our understanding of cytokines and immune cell migration. Image credit: Creative Commons (via Ben Schumin)
On June 30, 1948, AT&T Bell Labs unveiled the transistor to the world, creating a spark of explosive economic growth that would lead into the Information Age. William Shockley led a team of researchers, including Walter Brattain and John Bardeen, who invented the device. Like the existing triode vacuum tube, the transistor could amplify signals and switch currents on and off, but the transistor was smaller, cheaper, and more efficient. Moreover, it could be integrated with millions of other transistors onto a single chip, creating the integrated circuit at the heart of modern computers. Today, most transistors are manufactured with a minimum feature size of 60-90 nm, roughly 200-300 atoms. As the push continues to make devices even smaller, researchers must account for quantum mechanical effects in device behavior. With fewer and fewer atoms, the positions of impurities and other irregularities begin to matter, and device reliability becomes an issue. So rather than shrink existing devices, many researchers are working on entirely new devices based on carbon nanotubes, spintronics, molecular conduction, and other nanotechnologies. Learn more about transistors from the many resources on this site, listed below. Use our simulation tools to simulate performance characteristics for your own devices.
- ECE 606 Lecture 10: Additional Information (16 Feb 2009; Muhammad A. Alam). Outline: potential, field, and charge; E-k diagram vs. band diagram; basic concepts of donors and acceptors; conclusion.
- ECE 606 Lecture 13a: Fermi Level Differences for Metals and Semiconductors. Short chalkboard lecture on Fermi level and band diagram differences for metals and semiconductors.
- ECE 606 Lecture 9: Fermi-Dirac Statistics (04 Feb 2009; Muhammad A. Alam). Outline: rules of filling electronic states; derivation of Fermi-Dirac statistics: three techniques; intrinsic carrier concentration; conclusion.
- ECE 606 Lecture 8: Density of States. Outline: calculation of density of states; density of states for specific materials; characterization of effective mass; conclusions.
- ECE 606 Lecture 7: Energy Bands in Real Crystals. Outline: E-k diagram/constant energy surfaces in 3D solids; characterization of E-k diagram: bandgap; characterization of E-k diagram: effective mass; conclusions.
- ECE 606 Lecture 5: Energy Bands. Outline: Schrodinger equation in periodic U(x); Bloch theorem; band structure; properties of electronic bands; conclusions.
- ECE 606 Lecture 6: Energy Bands (continued). Outline: properties of electronic bands; E-k diagram and constant energy surfaces; conclusions.
- ECE 606 Lecture 4: Solution of Schrodinger Equation. Outline: time-independent Schrodinger equation; analytical solution of toy problems; bound vs. tunneling states; conclusions. Additional notes: numerical solution of Schrodinger equation.
- ECE 606 Lecture 3: Elements of Quantum Mechanics (28 Jan 2009; Muhammad A. Alam). Outline: why we need quantum physics; quantum concepts; formulation of quantum mechanics; conclusions.
- ECE 606 Lecture 2: Geometry of Periodic Crystals. Outline: volume and surface issues for BCC, FCC, cubic lattices; important material systems; Miller indices; conclusions. Helpful software tool: Crystal Viewer in the ABACUS tool suite.
- ECE 606 Lecture 1: Introduction. Outline: course information; current flow in semiconductors; types of material systems; classification of crystals.
- Illinois ECE 440 Solid State Electronic Devices, Lecture 7: Temperature Dependence of Carrier Concentrations (30 Dec 2008; Eric Pop).
- Illinois ECE 440 Solid State Electronic Devices, Lecture 6: Doping, Fermi Level, Density of States (04 Dec 2008; Eric Pop, Umair Irfan).
- Illinois ECE 440 Solid State Electronic Devices, Lecture 1: Introduction (26 Nov 2008; Eric Pop). Introduction to solid state electronic devices.
- ECE 606 Lecture 32: MOS Electrostatics I (19 Nov 2008; Muhammad A. Alam).
- ECE 606 Lecture 26: Schottky Diode II.
- ECE 612 Lecture 20: Broad Overview of Reliability of Semiconductor MOSFET (14 Nov 2008; guest lecturer: Muhammad A. Alam).
- Lecture 1: Percolation in Electronic Devices (04 Nov 2008; Muhammad A. Alam). Even a casual review of modern electronics quickly convinces everyone that randomness of geometrical parameters must play a key role in understanding the transport properties. Despite the diversity of these phenomena, however, the concepts of percolation theory provide a broad theoretical framework...
- From density functional theory to defect levels in silicon: Does the "band gap problem" matter? (01 Oct 2008; Peter A. Schultz). Modeling the electrical effects of radiation damage in semiconductor devices requires a detailed description of the properties of point defects generated during and subsequent to irradiation. Such modeling requires physical parameters, such as defect electronic levels, to describe carrier...
- Illinois ECE 440 Solid State Electronic Devices, Lecture 3: Energy Bands, Carrier Statistics, Drift (19 Aug 2008; Eric Pop). Discussion of scale; review of atomic structure; introduction to energy band model.
How are half-life and radiocarbon dating used by scientists? A popular way to determine the ages of biological substances no more than 50,000 years old is to measure the decay of carbon-14 into nitrogen-14. This process begins as soon as a living thing dies and is unable to produce more carbon-14. Scientists know how quickly radioactive isotopes decay into other elements over thousands, millions and even billions of years. Scientists calculate ages by measuring how much of the isotope remains in the substance. Radiocarbon dating allows the dating of organic material from the last 60,000 years. Willard Libby, who developed the method, received numerous awards for this work, including the 1960 Nobel Prize for Chemistry. However, when an organism dies, the carbon-14 in its cells decays and is not replaced. A half-life measures the time it takes for one half of a radioisotope's atoms to break down into another element.
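The arithmetic behind this is simple exponential decay: the fraction of carbon-14 remaining after time t is (1/2)^(t/T), where T is the half-life (about 5,730 years for carbon-14). A minimal Python sketch of the calculation (the function names are my own, for illustration):

```python
import math

C14_HALF_LIFE_YEARS = 5730  # approximate half-life of carbon-14

def remaining_fraction(age_years):
    """Fraction of the original carbon-14 left after age_years."""
    return 0.5 ** (age_years / C14_HALF_LIFE_YEARS)

def estimate_age(fraction_remaining):
    """Invert N/N0 = (1/2)^(t/T) to recover the sample's age t."""
    return C14_HALF_LIFE_YEARS * math.log(fraction_remaining) / math.log(0.5)

# After one half-life, half the carbon-14 remains:
print(remaining_fraction(5730))  # 0.5
# A sample retaining 25% of its carbon-14 is two half-lives old:
print(estimate_age(0.25))        # 11460.0
```

This also shows why the method fades out around 50,000-60,000 years: after roughly ten half-lives, less than 0.1% of the original carbon-14 remains, which is too little to measure reliably.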
Don't bring fear and confusion into your child's place value homework; try one of Turtle Diary's activities that will give your kid an entertaining way to get much-needed place value practice. Use place value to add; draw a quick picture. Scroll down to print free place value worksheets for 2nd grade math, or keep reading further. Bead Numbers is a place value investigation involving a tens and ones abacus. 4-digit place value worksheets are for our grade 3 beginners. Round any number to the nearest 10, 100 or 1000. For example, recognize that 700 ÷ 70 = 10 by applying concepts of place value and division. Place value, rounding, and algorithms for addition and subtraction: this unit is filled with everything you need to teach 6 place value standards/concepts. Give us some feedback on pages you have used and enjoyed. This game will help your fourth grader learn place value in an efficient manner. - SplashLearn. Common Core standards: our place value worksheet will help your kid. Problem H2: find the base ten fractions represented by the following:
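The two skills these worksheets drill (decomposing a number into its place-value parts, and rounding to the nearest 10, 100 or 1000) can be sketched in a few lines of Python (the function names are my own, for illustration):

```python
def place_values(n):
    """Split a whole number into its place-value parts, e.g. 4832 -> 4000 + 800 + 30 + 2."""
    parts = []
    place = 1
    while n > 0:
        digit = n % 10          # digit in the current place
        if digit:
            parts.append(digit * place)
        n //= 10                # move one place to the left
        place *= 10
    return parts[::-1]          # largest place first

def round_to(n, place):
    """Round n to the nearest multiple of place (10, 100, 1000, ...)."""
    return int(round(n / place)) * place

print(place_values(4832))   # [4000, 800, 30, 2]
print(round_to(4832, 100))  # 4800
```

(Note that Python's built-in round uses round-half-to-even at exact halfway points, which only matters for inputs like 4850 rounded to the nearest 100.)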
Give your child with hearing loss the best support - learn how you can support and help in their everyday life. Childhood hearing loss is often identified during newborn hearing screenings that are performed 24-48 hours after birth. However, some children who pass these screenings show signs of loss as they grow older. 1.1 billion people between the ages of 12-35 are at risk of hearing loss due to exposure to noise in recreational settings, with 12.5% having permanent damage. Early identification and intervention is critical and allows professionals to work with families to educate them on how to support their child's communication and developmental needs. Your hearing care professional will assess your child's hearing and guide you through the next steps. Depending on the type and severity of your child's hearing loss, interventions may include a combination of the following: - Hearing aid technology - electronic devices worn on the ear to amplify sound - Cochlear implants - surgically implanted devices that directly stimulate the auditory nerve in the inner ear with electrical stimulation - Speech therapy - therapeutic instruction designed to improve language development and communication - Assistive hearing devices - technology that helps transmit sounds directly to a child's hearing aids and/or cochlear implants There are many ways you can help your child be successful, including the following: - Actively participate in and promote communication with your child. Establish routines around talking and reading books. Speak clearly and from no further than 6 feet away. - Encourage your child to wear their hearing aids. Understand they may not want to wear them. Celebrate small successes until hearing aids are worn full time. - Be prepared for the unexpected. Carry extra batteries, utilize a hearing aid clip that attaches the hearing aid to their clothing and carry the audiologist's contact information. - Reduce background noise whenever possible.
Make it easier for your child to hear. - Consider using assistive listening devices such as a ReSound Micro Mic, ReSound Multi Mic or an FM system.
Flamingos can stand on one leg for far longer than humans can. They can even do it while asleep. Now scientists have shed some more light on just how these pink birds manage such a balancing act without getting tired. The researchers from the Georgia Institute of Technology in the US tested one of the main theories used to explain this behaviour: the muscle fatigue hypothesis. The more a muscle is used, the more likely it is to become tired, and so most animals standing on one leg need to regularly switch. But flamingos can use one leg for much longer periods of time. So the theory is that the leg holding them up doesn't get fatigued. The two scientists wanted to test whether it was possible for flamingos to remain stable on only one leg without the need for active muscular effort. To see if a flamingo could do this, they used a novel method involving two dead flamingos, obtained from a local zoo. The researchers positioned the bodies on one leg using clamps and measured how well each cadaver could hold its body weight and maintain balance. They also dissected the leg structures to see if muscle control was used when the birds stood on one leg. And they collected information from living flamingos to see how much body sway was affected by how many legs the birds stood on. They not only found that a flamingo could support its body weight passively (with no need for muscular activity) on one leg, but also that it was impossible for the bird to hold a stable, balanced position on two legs. They concluded that a flamingo standing on two legs uses more muscular energy to maintain a steady posture. So why is a one-legged posture more efficient, and why does it use no active muscular movement? Apparently, it is down to the weight of the bird itself. When a flamingo is standing on one leg, its bodyweight forces the joints in its leg into a fixed arrangement.
By moving the dead flamingo, the scientists noticed that there must be a group of muscles and ligaments that lock into place (known as a stay apparatus) in the proximal (near centre) part of the limb. This stay apparatus resists certain types of movement and keeps the flamingo stable, without the need for it to use leg muscles to keep balanced. The efficient balancing action is only possible when the bird's foot is placed directly below its body, the position the birds naturally adopt. This actually becomes even easier when the flamingo is asleep, because it moves less and so there is less variation in the centre of pressure. This is the first evidence of a passive, gravity-driven bodyweight support mechanism in a bird's proximal leg joints. That means the bird supports itself without conscious effort because of the anatomy of the joints in its leg. What the study cannot demonstrate is why, in terms of its behaviour, a sleeping, unipedal flamingo should benefit from being so stable and secure. This requires further investigation. For example, we know that birds can lose a significant amount of heat through their legs, and this can help them maintain the right body temperature. Even more heat escapes if the birds are stood in water (as flamingos often are), and so being able to easily stand on one leg would help to reduce the amount of heat lost. This would be particularly beneficial for flamingos in areas where water temperature is close to, or below, freezing. The heat loss theory is plausible and makes sense, but is probably supported by the muscular activity findings, too. What is clear is that flamingos, as familiar and fascinating as they are, still challenge our understanding of their physiology, biology and evolutionary history. Many birds stand on one leg, but the flamingos' balancing act may appear more noticeable because they are such strikingly shaped and coloured animals, which adds to their sense of being weird and wonderful.
So the debate about exactly why they stand on one leg is sure to continue well into the future. Paul Rose is an associate fellow at the Centre for Research in Animal Behaviour, University of Exeter. This article was originally published at The Conversation.
Related Species: Wild Olive (Olea africana), Oleaster (O. europaea var. oleaster). Distant Affinity: American Olive (Osmanthus americana), Fragrant Olive (O. fragrans). Origin: The olive is native to the Mediterranean region, tropical and central Asia and various parts of Africa. The olive has a history almost as long as that of Western civilization, its development being one of civilized man's first accomplishments. At a site in Spain, carbon-dating has shown olive seed found there to be eight thousand years old. O. europaea may have been cultivated independently in two places, Crete and Syria. Archeological evidence suggests that olives were being grown in Crete as long ago as 2,500 B.C. From Crete and Syria olives spread to Greece, Rome and other parts of the Mediterranean area. Olives are also grown commercially in California, Australia and South Africa. There is some disagreement over when the trees first appeared in California. Some say they were introduced in 1769, when seeds brought from Mexico were planted. Others cite the date 1785, when trees were brought in to make olive oil. Adaptation: The olive requires a long, hot growing season to properly ripen the fruit, no late spring frosts to kill the blossoms and sufficient winter chill to insure fruit set. Home grown olives generally fruit satisfactorily in the warmer coastal valleys of California. Virtually all U.S. commercial olive production is concentrated in California's Central Valley, with a small pocket of olive acreage outside Phoenix. The tree may be grown as an ornamental where winter temperatures do not drop below 12° F. Green fruit is damaged at about 28°, but ripe fruit will withstand somewhat lower temperatures. Hot, dry winds may be harmful during the period when the flowers are open and the young fruits are setting. The trees survive and fruit well even with considerable neglect. The olive can also be grown in a large container, and has even appeared in shows as a bonsai.
Foliage: The olive's feather-shaped leaves grow opposite one another. Their skin is rich in tannin, giving the mature leaf its gray-green appearance. The leaves are replaced every two or three years, leaf-fall usually occurring at the same time new growth appears in the spring. Flowers: The small, fragrant, cream-colored olive flowers are largely hidden by the evergreen leaves and grow on a long stem arising from the leaf axils. The olive produces two kinds of flowers: a perfect flower containing both male and female parts, and a staminate flower with stamens only. The flowers are largely wind pollinated with most olive varieties being self-pollinating, although fruit set is usually improved by cross pollination with other varieties. There are self-incompatible varieties that do not set fruit without other varieties nearby, and there are varieties that are incompatible with certain others. Incompatibility can also occur for environmental reasons such as high temperatures. Fruit: The olive fruit is a green drupe, becoming generally blackish-purple when fully ripe. A few varieties are green when ripe and some turn a shade of copper brown. The cultivars vary considerably in size, shape, oil-content and flavor. The shapes range from almost round to oval or elongated with pointed ends. Raw olives contain an alkaloid that makes them bitter and unpalatable. A few varieties are sweet enough to be eaten after sun drying. Thinning the crop will give larger fruit size. This should be done as soon as possible after fruit set. Thin until remaining fruit average about 2 or 3 per foot of twig. The trees reach bearing age in about 4 years. Soils: Olives will grow well on almost any well-drained soil up to pH 8.5 and are tolerant of mild saline conditions. Irrigation: Irrigation is a necessity in California with its dry summers. A monthly deep watering of home grown trees is normally adequate. 
Because of its small leaves, with their protective cuticle and slow transpiration, the olive tree survives even extended dry periods. Fertilization: Fertilizing olive trees with additional supplies of nitrogen has proved beneficial. In California farmers systematically apply fertilizers well ahead of the time flowers develop so the trees can absorb the nitrogen before fruit set. Many growers in Mediterranean countries apply organic fertilizers every other year. Pruning: Proper pruning is important for the olive. Pruning both regulates production and shapes the tree for easier harvest. The trees can withstand radical pruning, so it is relatively easy to keep them at a desired height. The problem of alternate bearing can also be avoided with careful pruning every year. It should be kept in mind that the olive never bears fruit in the same place twice, and usually bears on the previous year's growth. For a single trunk, prune suckers and any branches growing below the point where branching is desired. For the gnarled effect of several trunks, stake out basal suckers and lower branches at the desired angle. Prune flowering branches in early summer to prevent olives from forming. Olive trees can also be pruned to espaliers. Propagation: None of the cultivated varieties can be propagated by seed. Seed propagated trees revert to the original small-fruited wild variety. The seedlings can, of course, be grafted or chip budded with material from desired cultivars. The variety of an olive tree can also be changed by bark grafting or top working. Another method of propagation is transplanting suckers that grow at the base of mature trees. However, these would have to be grafted if the suckers grew from the seedling rootstock. A commonly practiced method is propagation from cuttings. Twelve to fourteen inch long, one inch wide cuttings from the two year old wood of a mature tree are treated with a rooting hormone, planted in a light rooting medium and kept moist. 
Trees grown from such cuttings can be further grafted with wood from another cultivar. Cutting grown trees bear fruit in about four years. Pests and diseases: The olive tree is affected by some pests and diseases, although it has fewer problems than most fruit trees. Around the Mediterranean the major pests are medfly and the olive fruit fly, Dacus oleae. In California, verticillium wilt is a serious fungal disease. There is no effective treatment other than avoiding planting on infested soils and removing damaged trees and branches. A bacterial disease known as olive knot is spread by pruning with infected tools during rainy months. Because the olive has fewer natural enemies than other crops, and because the oil in olives retains the odor of chemical treatments, the olive is one of the least sprayed crops. Harvest: Olive fruits that are to be processed as green olives are picked while they are still green but have reached full size. They can also be picked for processing at any later stage up through full ripeness. Ripe olives bruise easily and should be handled with care. Mold is also a problem for the fruit between picking and curing. There are several classical ways of curing olives. A common method is the lye-cure process in which green or near-ripe olives are soaked in a series of lye solutions for a period of time to remove the bitter principle and then transferred to water and finally a mild saline solution. Other processing methods include water curing, salt curing and Greek-style curing. Explicit directions for various curing and marinating methods can be found in several publications including Maggie Blyth Klein's book, Feast of the Olives, and the University of California Agricultural Sciences Publications Leaflet 21131. Both green-cured and ripe-cured olives are popular as a relish or snack. For California canned commercial olives, black olives are identical to green olives. 
The black color is obtained by exposure to air after lye extraction and has nothing to do with ripeness. Home production of olive oil is not recommended; the equipment required and the sheer mass of fruit needed are beyond most households.

Commercial Potential: Commercial olive production is a multimillion-dollar business in California. In the Mediterranean region, olives and olive oil are common ingredients of everyday foods. Raw olives are sometimes sold in specialty produce stores, and home growers in California often sell their excess crop to others interested in home curing. There is also a growing interest in specialty olive oils, often produced commercially from small groves of olive trees.
There is a notable increase in the use of the word 'digital' for products and services that are part of our everyday life: digital camera, digital watch, digital weighing machine, digital signature, digital payment, digital art and so on. The digital prefix associates a term with digital technology and is considered a step up in the performance delivered at a given cost. The digital world provides easy storage and reproduction, immunity to noise and interference, flexibility in processing, a variety of transmission options and, very importantly, inexpensive building blocks in the form of integrated circuits. Digital systems represent and manipulate digital signals. Such signals take on only a finite number of discrete values. A signal can be discrete by nature, or a continuous signal can be discretized for digital processing and then converted back. Manipulation and storage of digital signals involve switching, which is done through electronic circuits. Basic gates made from electronic circuits are the primary building blocks of digital systems. These gates combine in different ways to develop digital circuits with different functionalities, an effort helped by an understanding of Boolean algebra. The functional blocks, in turn, combine to form a complex digital system. There are general-purpose programmable blocks, too. This course is aimed at developing a deep understanding of digital electronic circuits. At the end of the course, one should be able to analyze and synthesize different kinds of combinational and sequential digital systems for real-world use.

INTENDED AUDIENCE: Electronics, Electrical, Instrumentation, Computer Science
PRE-REQUISITES: A basic understanding of diode and transistor operation.
If these topics were not covered in the student's 10+2 board curriculum, they may be studied from a Basic Electronics or Analog Electronic Circuits course.
INDUSTRY SUPPORT: NIL

Week 1: Introduction; Relation between switching and logic operation; Use of diode and transistor as switch; Concept of noise margin, fanout, propagation delay; TTL, Schottky TTL, Tristate; CMOS logic, interfacing TTL with CMOS
Week 2: Basic logic gates, Universality of NAND and NOR gates, AND-OR-Invert gates, Positive and negative logic; Boolean algebra axioms and basic theorems; Standard and canonical representations of logic functions, Conversion between SOP and POS; Simplification of logic functions, Karnaugh map, Don't-care conditions
Week 3: Minimization using entered variable map, Minimization using the QM algorithm; Cost criteria, Minimization of multiple output functions; Static-0, Static-1 and Dynamic hazards and their cover
Week 4: Multiplexer; Demultiplexer/decoder, BCD to 7-segment decoder driver; Encoder, Priority encoder; Parity generator and checker
Week 5: Number systems: binary, signed binary, octal, hexadecimal; Binary arithmetic, One's and two's complement arithmetic; Codes, Code converters; Adder, Subtractor, BCD arithmetic
Week 6: Carry look-ahead adder; Magnitude comparator; ALU; Error detecting and correcting codes
Week 7: Bistable latch; SR, D, JK, T flip-flops: level triggered, edge triggered, master-slave; Various representations of flip-flops; Analysis and synthesis of circuits that use flip-flops
Week 8: Register, Shift register, Universal shift register; Applications of shift registers: ring counter, Johnson counter, sequence generator and detector, serial adder; Linear feedback shift register
Week 9: Up and down counters, Ripple (asynchronous) counters, Synchronous counters; Counter design using flip-flops, Counter design with asynchronous reset or preset; Applications of counters
Week 10: Design of synchronous sequential circuits using the Mealy and Moore models: state transition diagram, algorithmic state machine (ASM) chart; State reduction techniques
Week 11: Digital-to-analog converters: weighted resistor converter, binary ladder converter, accuracy and resolution; Analog-to-digital converters: quantization and encoding, different types of conversion, accuracy and resolution
Week 12: Memory organization and operation, Memory expansion; Memory cell; Different types of memory: ROM, PROM, PAL, PLA, CPLD, FPGA
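The "universality of NAND" listed under Week 2 can be illustrated with a short Python sketch (an illustration only; the course itself works at the transistor and gate level, not in software). Every basic gate can be composed from NAND alone:

```python
# Sketch: building the basic gates from NAND alone, illustrating
# the "universality of NAND" topic. Signals are modeled as 0/1 ints.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)              # NOT x = x NAND x

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))        # AND = NOT(NAND)

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))  # De Morgan: a OR b = (NOT a) NAND (NOT b)

def xor(a: int, b: int) -> int:
    c = nand(a, b)                 # classic 4-NAND XOR construction
    return nand(nand(a, c), nand(b, c))

# Verify the truth tables against Python's built-in bitwise operators.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
    assert not_(a) == 1 - a
```

The same exercise works with NOR as the sole primitive, which is why both gates are called universal.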
About the Library

- Search here for information about a variety of technologies that support literacy development in students who are deaf and hard of hearing, including different applications of video and captioning technologies. Also learn about descriptive video service for students who are blind and visually impaired.
- Investigate hypermedia tools and instructional programs that incorporate a range of media to address the needs of students with various learning styles.
- Learn about software-based tools that can help students with learning difficulties organize information for deeper understanding and enhanced expression.
- Probe the various issues confronting educators as they provide broader access to laptops and other portable electronic tools for their students with and without disabilities.
- Discover the range of assistive and instructional technologies available for children with disabilities in preschool and early childhood settings.
- Explore ways a variety of technology tools support students who are blind and visually impaired in the development and application of literacy skills.
- Discover how students with disabilities can develop their writing abilities online and how telecommunications can provide these students access to authentic learning opportunities within their broader communities.
- Learn about a unique software feature that can assist students with physical and learning impairments in developing their written language skills.

This material was developed by the National Center to Improve Practice (NCIP), located at Education Development Center, Inc. in Newton, Massachusetts. NCIP was funded by the U.S. Department of Education, Office of Special Education Programs from October 1, 1992 - September 30, 1998, Grant #H180N20013. Permission is granted to copy and disseminate this information. If you do so, please cite NCIP.
Contents do not necessarily reflect the views or policies of the Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by NCIP, EDC, or the U.S. Government. This site was last updated in September. © Education Development Center, Inc.
What is the TMJ?

TMJ is the abbreviation for temporomandibular joint. Everyone has two of these joints, and the term “TMJ” refers to the normal, healthy joint on each side of the skull that allows the lower jaw to function during speech, swallowing, and chewing. The name of the joint comes from the two bones that make up each side of the joint. Like all joints, the TMJ is made of muscle, ligaments, cartilage, and bone, with supporting nerves and a nutritional supply in the form of synovial fluid. The temporal bones of the skull form the “roof” and “inside” of the TMJ, and the mandible (lower jaw) makes up the floor of the joint, which moves in three directions: it rotates around an imaginary axis for the first part of the opening stroke of the jaw and the last part of the closing stroke; it translates, or slides, down and forward from the endpoint of rotation to maximum jaw opening; and it moves side to side. Specifically, the part of the mandible that is involved with the TMJ is called the condyle, and the part of the temporal bone involved in the joint is called the temporal fossa. Because the TMJ moves in three planes of space, unlike joints that move like a simple hinge, it is called a ginglymoarthrodial joint. Between the two bones is a cartilaginous disc that serves as a sort of shock absorber to protect the joint and that slides with the condyle during the range of motion. Attaching to the disc and the mandible is a specific muscle, called the lateral pterygoid, that pulls the disc and the jaw forward as the jaw translates. There are also ligaments, called collateral ligaments, that hold the disc to the condyle on each side of the disc. A discussion of the TMJ is not complete without mentioning that the mandible is essentially a bone floating in space. In fact, when ancient skulls are discovered in archaeology, the mandibles are usually absent because the soft tissue has been lost, allowing separation of the mandible from the skull.
This means that the function, health, and stability of the TMJ are totally dependent on the supporting muscles and ligaments of the head and neck and on the teeth, which provide a stop at the right place for optimal chewing strength. The masseter muscles, the temporalis muscles, the digastric muscles, the medial pterygoid muscles, and the lateral pterygoid muscles are the primary muscles involved in jaw function. Since these muscles all work by pulling on the bones of the skull, it is important to consider that the skull is like a bowling ball balancing on a broken broomstick, which is the spine. This balancing act requires harmony in the function of many supporting muscles of the upper back, neck, chest, and shoulders. Therefore, it is easy to understand how many problems of the head, neck, and upper back can manifest themselves as TMJ problems; sometimes, TMJ problems can also present as dental problems, neck pain, headaches, etc. Often, the term “TMJ” is incorrectly used to refer to a problem that does not easily fit another diagnosis by the medical community.

What is TMD?

When any part of the anatomical structures or supporting structures of the TMJ is injured or damaged, dysfunction occurs. While “TMJ” refers to the temporomandibular joint itself, “TMD” refers to temporomandibular disorder. TMD syndrome is a vague term that usually involves one or more of the conditions listed below and/or others not mentioned:
- Myofascial pain
- Degenerative Joint Disease
- Headaches of various types
- Dislocation of the disc
- Subluxation of the disc
- Muscle spasm

Treatment for TMD depends on the specific diagnoses involved and is typically directed at resolving pain rather than reducing joint noises (popping, clicking, etc.). Joint noises are evidence that injury has occurred and are signs rather than symptoms. Pain, compromised quality of life, and compromised function, however, are the symptoms that treatment is typically directed toward.
Treatment for TMJ disorders is controversial, and clinicians often differ in their approach. Some tend to treat physically (physical therapy, bite splint therapy, chiropractic, etc.), some approach treatment from a medical model (medications, mental health therapy, etc.), and some hold that TMD tends to be self-limiting and opt not to treat but rather to provide supportive care only. The reality is that successful management of TMD usually involves elements of each approach and often requires a team involving some or all of the following healthcare providers: dentist, mental health specialist, family physician, physical therapist, chiropractor, massotherapist, orofacial pain specialist, oral surgeon, etc. Because TMD is often a chronic pain disorder, it can be accompanied by compromises in mental health in the form of depression, anxiety, or psychosomatic conditions, all of which may require the assistance of a psychiatrist.

Dentists often use bite splints to treat TMD syndrome. The type of splint used should depend on a specific diagnosis of a specific condition. For this reason, different types of splints may be used at different times during the treatment of TMD. In fact, splints may also be used for diagnostic purposes to rule out complicating factors.

Do I Have TMJ?

When someone has jaw pain, pain in or around the ear, and/or popping or clicking in front of the ear, they’re likely to ask their dentist if they have “TMJ.” However, this isn’t technically the right question to ask, since EVERYONE has TMJ—two of them, to be accurate. The term TMJ is an abbreviation that refers to the jaw joint, known as the temporomandibular joint. Everyone has these joints, and they allow us to chew, swallow, speak and keep our airways open. When someone asks if they have “TMJ,” it is more likely that they have a temporomandibular disorder, or TMD.
Even then, TMD is a broad term and not a diagnosis of a specific disorder. There are many possible issues that fall under this blanket, each of which may require a different type of treatment (if they require treatment at all)—muscle injuries, bone problems, inflammation of blood vessels, etc. Most patients with a temporomandibular disorder will have more than one of these, and, for best results, they will require treatment that targets each of the issues contributing to their symptoms.

Treatment for TMD

Initial treatments for temporomandibular disorders typically include self-care regimens assigned by your provider, exercises, behavioral therapy, physical therapy, bite splints (“nightguards”), and the like. Various medications may also be prescribed, including antidepressants, anti-inflammatories or muscle relaxants. Dietary supplements can help as well. Caution: athletic-type mouthguards and store-bought bite splints may cause significant injury in many types of TMD and should never be recommended. It’s only after these options have been explored that more permanent options such as bite adjustments or surgical procedures will be considered. A thorough diagnosis of TMD involves an in-depth review of the patient’s medical and dental history, including looking for risk factors for sleep disorders and psychological risks, as well as a thorough physical exam of the head and screening of the cranial nerves. Imaging such as panoramic x-rays, cone-beam CT, and MRI is considered and used as appropriately indicated. As there are over forty recognized diagnoses that fall under “TMD,” a thorough approach is required to find the source(s) of the problem before an appropriate treatment plan can be developed. Being told you may have TMD is a starting point, but it isn’t enough to determine the best course of treatment for you.
If you’ve suffered from any of the symptoms described above and have had ineffective treatment in the past, it’s very likely that you need a more accurate diagnosis of the real problems behind your pain. Make an appointment with Dr. Huff to find out what a specialist in orofacial pain can do for you.
American Japanese Internment Camps

Japanese Americans are all Americans of Japanese heritage: those who were born in Japan and the descendants of those born in Japan. Initially they were the largest Asian American group, but currently they are the sixth largest such group, including those of mixed race and mixed ethnicity. The largest group of these people is found in California, while others are distributed in other states such as Washington, New York, Illinois and Hawaii. Although a considerable number of Japanese immigrants enter the United States every year, net migration remains low, since older Japanese Americans still leave the United States and go back to their original country, Japan. Japanese Americans have a long history in the United States: history records that the first group arrived in America in the late 1800s. In the year 1942, the United States government forced all the Japanese Americans and the Japanese who had settled along the Pacific Coast to relocate to war relocation camps, referred to as internment camps. Since the internment camps resulted from the presence of Japanese Americans, this research shall first focus on their history and later discuss the internment camps.

2.0 History of Japanese Americans in the 19th Century

The United States has long been known as a country of immigrants, a result of war, food shortages and political persecution in the countries the immigrants hail from. Japanese people make up a large percentage of these immigrants, and, as highlighted earlier, they began to migrate into the United States in the late 1800s. The main cause of Japanese immigration was work in the sugar plantations established along the Pacific by traders who had settled in the Hawaiian Kingdom.
The sugar industry had grown tremendously, aided by the American Civil War of 1861-1865, and that called for more workers as the Hawaiian population was decreasing due to disease. Other workers were leaving the plantations for better work, and as a result Hawaii's foreign minister sought more workers from Japan. Consequently, in the year 1868, the first one hundred and forty-nine Japanese immigrants arrived in Hawaii. Since they were not used to the harsh conditions in the region and the hard work in the sugar plantations, about forty of them returned to Japan. The rest went ahead and even intermarried with the Hawaii residents. These first Japanese immigrants to Hawaii formed the Japanese American community. In the year 1886, Japan and Hawaii signed a labor convention, after which many Japanese migrants arrived in Hawaii as contract workers and some went to California as student laborers. According to the studies of Niiya and the Japanese American National Museum-Los Angeles, Calif. (1993), the Japanese migration to Hawaii was mainly labor migration, which intensified following Chinese exclusion from the United States in the year 1882. It also involved emigration back to Japan and to the West Coast. It was halted by the Gentlemen's Agreement in the year 1908 and finally by the Exclusion Act in the year 1924.

2.1 Reasons for the Japanese Migration to America

Although most of the Japanese went to America for contract labor, some had other reasons. For instance, some simply followed their parents, like one teenage girl who narrates that she just followed her dad. In another case, a woman followed her spouse after he had stayed for quite some time without returning to Japan. Though she had thought that they would make enough money and return home, they ended up settling there permanently. Student immigrants also made up a good number of Japanese Americans, especially in San Francisco.
In the year 1890, there were about three thousand Japanese students in America. Since they did not have enough money for their upkeep and studies, they resorted to working in the plantations to earn extra money. Consequently, they ended up living in very poor conditions, and one newspaper described them as “poor students and youths who have rashly left their native shores. Hundred of such are landed every year, with miserably scant funds in their pockets…Their objection is to earn with labor of their hands, a pittance sufficient to enable them to pursue their studies in language, sociology and politics” (Niiya & Japanese American National Museum (Los Angeles, Calif.) 1993 pp. 3).

2.2 Japanese Americans' Life in the Early 20th Century

Contrary to what most Japanese had expected, life in America was quite hard for anyone other than the native-born. Life and work were made difficult by the banks, labor recruiters, and immigration agents who used to charge Japanese immigrants extortionate fees. In addition to the economic exploitation, the Japanese Americans also faced racial discrimination. Social attitudes, laws, and practices limited and excluded them from fully enjoying life, liberty, and property. The salary they were getting was barely enough to sustain them, let alone allow them to save money to go back to Japan. Most of them wished they were back in Japan, like one worker who used to be paid fourteen dollars a month and, out of those dollars, paid more than half for his sleeping quarters. The rest was spent on food and other personal needs. In such a situation, it was practically impossible for such a person to save enough money to return to Japan. As a result, the majority were eventually forced to settle permanently in America (Niiya & Japanese American National Museum (Los Angeles, Calif.) 1993).
The harsh living conditions of Japanese Americans continued to worsen as the years progressed. In the year 1941, the situation worsened further, especially after Japan attacked and damaged Pearl Harbor. The Americans accused the Japanese Americans of collaborating with Japan and thereby betraying America. Since everyone had started spreading rumors of how the Japanese Americans had helped Japan in the war, the whole of the American population started to have a bad attitude towards them. As a result, many people started to propose their removal from the Western states, fearing that Japan might attack from the West Coast, although Japan had no such plans. However, other Americans had other reasons for their removal, since some coveted their farms. The groups pressing for the Japanese Americans' removal from the West Coast continued to grow as anti-immigration organizations, chambers of commerce from every city, and the American Legion joined the rest. The major reason the Americans wanted the Japanese Americans removed was mere hatred, rather than the reasons they were giving initially. Henry McLemore of the San Francisco Examiner was quoted as saying, “let us have no patience with the enemy or with any one whose veins carry his blood.” He continued, “I personally hate Japanese” (Spickard 2009 pp. 106). Still, some politicians continued to express their sentiments towards the Japanese, some saying that it was impossible to know whether they were loyal or not; the Japanese were often referred to as inscrutable Orientals. With such hatred, it was obvious that the Japanese Americans were not going to escape relocation. The decision to relocate or imprison the Japanese Americans was made in Washington, D.C., by the Roosevelt administration, guided by the military leaders.
They argued that it was a military necessity to do so, though they were not able to demonstrate that necessity. The military leaders believed that the Japanese were dangerous regardless of whether they were loyal or not. Moreover, they argued that even giving them citizenship would not help in any way, since that would not change their nature. Despite the fact that a few protesters argued that all the dangerous Japanese Americans had already been jailed, the administration went ahead and made the decision to remove all of them from the West Coast. Studies of Spickard (2009) record that on 19th February 1942, President Roosevelt issued Executive Order 9066, which empowered the Secretary of War, Henry Stimson, to designate military areas with the aim of excluding Japanese Americans from the West Coast. As a result, Arizona, Washington, Oregon and California were divided into two military regions, and the Japanese Americans were prohibited from the western parts of the states and some inland sections. Following the order, some of the Japanese Americans started to move east with their belongings and families. However, moving on such short notice was almost impossible for them, and many Americans did not want them to settle in their territories. They were continuously harassed, and because of this they continued to move east. One governor from Idaho was quoted as saying, “The Japs live like rats, breed like rats, and act like rats. We do not want them buying or leasing land or becoming permanently settled in our state” (Spickard, 2009 pp. 107). When voluntary migration failed to produce the desired results, on March 27 DeWitt stopped it and put travel restrictions on the Japanese Americans in the military zone. In addition, the army decided to move all of them into the concentration camps.
3.0 Concentration Camps

The concentration camps were the barbed-wire enclosures to which the Japanese Americans were moved after the executive order was issued in the year 1942 to bar them from residing in the West Coast parts of America. Though there had been camps earlier in the history of America, these camps were exceptional because a whole ethnic group was forced to reside there. Since, as some people argue, the Japanese Americans were passive by nature and accepted anything imposed on them, they did not resist moving into the camps, nor did they move out without an order. Some planned to resist legally, though little came of this, since it did not stop them from being evacuated. Studies of Spickard (2009) record that during the evacuation, one hundred and twelve thousand Japanese Americans were taken to the evacuation camps. The camps were in very poor condition: it is recorded that even the ground was wet on the day of evacuation. There was no adequate light, and the rooms were very small. The environment was not favorable either, since it was hot during the day and very cold at night. Whatever the case, they had no alternative but to stay in the barbed-wire enclosures. The ten camps were located at different sites, particularly in the interior West, in isolated desert areas. Some of the camps were located at Amache, Minidoka, Poston, Manzanar (California), Jerome, Tule Lake (California) and Heart Mountain. After evacuation, only six Japanese Americans remained in the local hospitals, since they were seriously sick. Since they were living communally, all facilities were shared by about two hundred and fifty people. Given that the conditions in the camps were not conducive at all, around one thousand two hundred left the camps when they were given the chance of joining the US Army.
Although many of the Japanese Americans had become desperate and frustrated at first, given that some of them even attempted suicide, they later decided to adapt to the life of the camps. Each camp had government-owned farmland that was leased to them; they engaged in agricultural activities and produced poultry and dairy products. The cost of food was not high, and other services like medical care were provided free of charge. Education was also offered free of charge up to the high school level, and many of the internees were recruited as teachers while others were trained to fit into the employment programs available at the camps.

3.1 Japanese Americans' Life after Relocation from the Concentration Camps

After January 1945, all the internees were finally allowed to leave the internment camps. The Japanese Americans were given identification cards and told that once they presented them to the authorities, they would be allowed to go back to their homes. However, though the government had allowed them to leave, they were still afraid, for many Americans remained hostile towards them. Even the people who received them were similarly harassed by the rest. One man who had returned to California in May after the executive order was lifted described the situation: “Everybody was afraid of being attacked by the white people. The war was still going on at that time and prejudice and oppression were very severe” (Niiya & Japanese American National Museum-Los Angeles, Calif., 1993 pp. 19). Moreover, on top of racial discrimination and other forms of harassment, the Japanese Americans still went through a lot trying to rebuild their lives. The Japanese Americans are among the many immigrant groups found in the United States. Since the late 1800s nearly half a million Japanese immigrants have settled in America, and more than twice that number today claim Japanese ancestry.
Although they went to America optimistic that they would work hard and establish themselves, some of these dreams were never realized. Some thought that after making some money they would go back to their motherland, which never came to be, since life in America was characterized by great economic hardship. In addition, they faced a great deal of prejudice and discrimination. The worst came during the Second World War, when all the Japanese Americans were forced into camps for no apparent reason other than sharing ancestry with America's enemy, Japan. The relocation camps, located far from the West Coast, were characterized by poor living conditions. Having been relocated to the internment camps in the year 1942, the Japanese Americans were able to go back only after the year 1945, when the executive order was finally removed.

Subject: Race and Ethnicity
University/College: University of Arkansas System
Type of paper: Thesis/Dissertation Chapter
Date: 16 October 2016
WHO Report on Global Surveillance of Epidemic-prone Infectious Diseases - Yellow Fever

- Yellow fever is an important public health threat which needs more attention. Currently it is endemic in Africa and South America, but other continents, particularly Asia, with mosquitoes known to transmit yellow fever virus, must be considered potentially at risk.
- The efficacy of immunization has been well documented historically, and immunization of at-risk populations is the most important action to take for the prevention of epidemics. Yellow fever control programmes have lapsed in many countries, and current levels of immunization are well below their targets.
- In the absence of adequate immunization levels, surveillance for yellow fever cases is essential to rapidly control disease outbreaks. Physicians must promptly report suspected cases, and health officials in at-risk countries should have the laboratory capacity to perform diagnostic tests for yellow fever. Monitoring and surveillance of yellow fever incident cases and immunization coverage need strengthening to assess risk and detect outbreaks.
Australia is home to more than 200 native frog species. The fauna is unusual in that it is dominated by three families — Hylidae, Microhylidae and Myobatrachidae. The first two families are widespread in warmer parts of the world, whereas the latter is endemic to Australia and New Guinea. Although the family is diverse elsewhere, only one species of Ranidae occurs in the country (Rana daemelii in NE Queensland). No other amphibians are indigenous. No native bufonids or pipids, no salamanders, newts or caecilians. But if you want frogs, we've got 'em.

The highest diversity occurs in the tropics. Western Arnhem Land (Northern Territory), the Wet Tropics (NE Queensland) and southern Queensland – northern New South Wales are all hot spots containing 30 or more species. Diversity drops off rapidly away from the coast. Much of inland Australia is home to only one or two species.

Slatyer, Rosauer and Lemckert (2007) examined patterns of endemism in Australian frogs. On analysing almost 100,000 records covering 75% of the country — there are no records from a large part of the Nullarbor Plain and Western Desert — they found that many of the areas with middling diversity were actually high in endemic species.

Patterns of (a) weighted endemism and (b) species richness for Australian anurans (Slatyer et al.)

By weighting endemics according to the size of their ranges (the smaller the range, the greater the weighting), they identified 11 hot spots with high endemism scores. Apart from the expected areas (mentioned above), other locations, such as the McIlwraith and Iron Ranges on Cape York Peninsula, the Townsville, Eungella and Gladstone regions (Queensland) and the Walpole, Bunbury – Augusta and Mitchell Plateau regions (Western Australia), were also packed with endemics. Slatyer and colleagues propose that this type of analysis should be considered in determining locations of conservation significance. In that case, the analysis must be applied to a range of taxa.
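The range-size weighting used by Slatyer and colleagues can be illustrated with a toy calculation. The species names and grid cells below are invented for the example; the idea is simply that each species contributes 1/(number of cells in its range) to every cell it occupies, so narrow-range endemics dominate the score:

```python
from collections import defaultdict

# Toy illustration of weighted endemism (hypothetical data, not from
# the study): each species adds 1/range_size to every grid cell it
# occupies, so small-range endemics count for more.
occurrences = {
    "widespread_frog": {"A", "B", "C", "D"},  # range of 4 cells
    "regional_frog":   {"A", "B"},            # range of 2 cells
    "local_endemic":   {"B"},                 # range of 1 cell
}

weighted_endemism = defaultdict(float)
richness = defaultdict(int)
for species, cells in occurrences.items():
    for cell in cells:
        weighted_endemism[cell] += 1 / len(cells)
        richness[cell] += 1

# Cell B holds the single-cell endemic, so it scores 1.75 against
# cell A's 0.75, even though richness is only 3 versus 2.
print(max(weighted_endemism, key=weighted_endemism.get))  # "B"
```

That gap between richness and endemism scores is exactly the distinction the study turns on: a cell of middling richness can still rank highly once ranges are weighted.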
Using a similar method, Crisp et al. (2001) identified endemic hot spots for plants, which included all of the above plus Tasmania, Adelaide and Kangaroo Island, the Australian Alps and the Sydney sandstone. None of these were of significance in the frog study. But neither the frog nor the plant study picked up central Australia as an area of endemism, although there are numerous short-range endemic snails in that region … The study highlights the complex relationship between diversity and endemism. It's not always easy to spot. Caveat conservator. [Figure: Patterns of corrected weighted endemism for Australian plants (Crisp et al.)] Crisp, M.D., Laffan, S., Linder, H.P. & Monro, A. (2001). Endemism in the Australian flora. Journal of Biogeography 28: 183–198. Slatyer, C., Rosauer, D. & Lemckert, F. (2007). An assessment of endemism and species richness patterns in the Australian Anura. Journal of Biogeography 34: 583–596.
If you are sitting in a boat or in St. Nicholas, a sound coming from the north shore will sound louder than the same sound heard by a person on land. Sound can be amplified when it travels over water. The reason is that the water cools the air above its surface, which slows down the sound waves near the surface. This causes refraction, or bending, of the sound wave, so that more sound reaches the boat passenger or St. Nicholas resident. Sound waves skimming the surface of the water can add to the amplification effect if the water is calm. The speed of sound is the distance travelled during a unit of time by a sound wave propagating through an elastic medium. In dry air at 68 °F, the speed of sound is 1,126 ft/s. This is 768 mph, or approximately one mile in five seconds. However, the speed of sound varies from substance to substance: sound travels faster in liquids and non-porous solids than it does in air. It travels about 4.3 times as fast in water (4,868.8 ft/s), and nearly 15 times as fast in iron (16,797.9 ft/s), as in air at 68 °F. Just hold the concerts on wet, windy days when the water is choppy and you'll have no complaints.
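The conversions and ratios quoted above are easy to verify; here is a quick sketch, with the speeds taken from the text:

```python
# Checking the speed-of-sound figures quoted in the text.
FT_PER_MILE = 5280.0

v_air = 1126.0     # ft/s, dry air at 68 °F (quoted)
v_water = 4868.8   # ft/s in water (quoted)
v_iron = 16797.9   # ft/s in iron (quoted)

mph = v_air * 3600 / FT_PER_MILE      # ft/s -> miles per hour
seconds_per_mile = FT_PER_MILE / v_air

print(round(mph))                     # 768 mph
print(round(seconds_per_mile, 1))     # ~4.7 s: "one mile in five seconds"
print(round(v_water / v_air, 1))      # ~4.3x faster in water
print(round(v_iron / v_air, 1))       # ~14.9x: "nearly 15 times" in iron
```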
Sunday, February 19, 2012 Blog Post 4 The Benefits of Podcasting in the Classroom The Benefits of Podcasting in the Classroom by Joe Dale names various benefits of podcasting. The students of today have never lived in a world without technology. They use technology every day, and it consumes their free time. By using podcasts, teachers are expanding their teaching methods and making the students' learning experience more enjoyable. Also, if a student has to miss class for some reason, they do not have to worry about the information they missed, because it will be available for them in a podcast. These podcasts would also be there to help them review for tests. Then there are students' podcasts. When students make podcasts, they are using project-based learning. This requires them to actually learn the skills of making a podcast instead of just memorizing facts that are given to them. The students also get a more enriched learning experience when they use podcasts to role-play. Podcasting with First Grade Podcasting with First Grade is a great way to motivate the students. This article pointed out that even younger students will get something from this experience, and they are especially excited when they get comments from people or teachers around the world. In this example, the first graders read a book as a class and made their podcast like an interview between the two main characters. Even some of the students who were normally shy started to open up and gain confidence. This project was great for the first graders because it helped them with many skills, such as speech, listening, comprehension, and technology. There are so many skills that can be learned through podcasting, even at this young age, that I was unaware of until I read this article. Listening-Comprehension-Podcasting discusses using podcasting as a tool for learning a foreign language.
When learning another language, you have to hear not only the word by itself, but you also need to hear it in context to fully understand it. In this case, they used the podcast to tell the story of Purim. The kids had to write a script, and then piece back together everyone's sentences to form the whole story. This wasn't necessarily about learning to make a podcast, but rather learning a language. The podcast was just a tool to engage the students and enhance their learning experience. This is yet another use of podcasts that I hadn't thought of before.
Trying to Make Sense of the English Language For many reasons (most of them too ugly to go into here), English is a pretty tough language to learn. If you're a native speaker of English, you're probably familiar with the idiosyncrasies that make the language so downright mind-boggling. If you're a nonnative speaker, you may lack the familiarity that native speakers have. Either way, the following sections offer a very basic explanation for why English words are the way they are. Understanding how English words are formed and where they come from can help when you come up against unfamiliar words. Borrowing words from other languages One of the things that makes the English language so rich (and sometimes so overwhelming) is that English words come from — or are influenced by — lots of different languages. The English language is essentially a Germanic language. (Other Germanic languages are German, Dutch, Flemish, and the Scandinavian languages.) English has a lot of words that reach back to its Germanic roots: ox, cow, meadow, grass, pig, king, knife, knight, and skirmish are just a few. But, being the accommodating language that it is, English absorbed and adopted words (and parts of words) from lots of other languages, too, like Latin, Greek, French, and Spanish. Tree, for example, comes from English's Germanic roots. In English, the word tree means, well, "tree." We also use the word arbor, which is the Latin word for tree, to mean "tree." Arbor Day is a day for planting trees. So, in English, if you know what tree means, and you know that arbor is another word for tree, you know that anytime you see arbor in a word, that word has something to do with trees. What it has to do with trees depends on what prefixes and suffixes the word uses (the topic of the next section). English has adopted numerous words from other languages. (Garage, for example, is actually a French word; piano is an Italian word.)
And if English didn't adopt the whole word, as is the case with many Greek and Latin words, it probably took parts of it. Hexagon uses two Greek elements: hexa meaning "six," and gon meaning "angles." Triumvirate uses Latin elements: trium meaning "three," vir meaning "man," and -ate, a suffix meaning "acted upon in a specific way." A hexagon is a six-sided object; a triumvirate is a group of three people who are in power in some context. Breaking out word elements: Prefixes, roots, and suffixes A strong understanding of common prefixes, roots, and suffixes can go a long way toward improving your vocabulary. The root is a word's foundation. Prefixes and suffixes are elements that are attached to the root to shape the word's meaning. For example, one of the most common prefixes is un-, which means "not or against." Stick that prefix in front of almost any word, and you have that word's opposite: patriotic --> unpatriotic (not patriotic) predictable --> unpredictable (not able to be predicted) reliable --> unreliable (not reliable) Suffixes come at the end of the word and usually indicate what part of speech the word is. (Knowing the part of speech is important, not only for defining a word but also for using it correctly.) Using the earlier example of the Latin root arbor (meaning "tree"), you can assemble a lot of different words simply by attaching different suffixes: - Arboreous uses the suffix -ous, which means "full of." So that word means — you guessed it — "full of trees." The -ous suffix makes adjectives (adjectives modify people, places, or things): Correct: The terrain was dark and arboreous. Incorrect: If a tree falls in the arboreous and no one is around to hear, does it still make a noise? - Arboreal means "of or relating to trees." The suffix -al means "of or relating to" and turns words into adjectives: Correct: Arboreal animals live in trees. Incorrect: The animal lives in an arboreal. - An arborist is one who works with and cares for trees. 
The suffix -ist means "one who does." As such, -ist makes the word a noun. Correct: We had to call in an arborist to help us transplant the trees in the back yard. Incorrect: The arborist book shows all kinds of trees. Both prefixes and suffixes modify the root. By knowing what each element means, you can get a general idea of the word's definition, which is often all you need to make sense of what's being said or read. Assembling blended words In addition to taking words wholesale from other languages and combining roots with prefixes and suffixes to make words, English also creates words by sticking two complete words together. By knowing what each word means by itself, you can get a general idea of what the combined (or compound) word means. Check out these examples: backbone = spine freshman = first-year student eggshell = exterior of an egg cost-effective = economical bedspread = comforter Sometimes, when the words come together, a few letters get squeezed out. These types of words are called portmanteau words: agriculture + business = agribusiness (business related to farming) basket + cart = bascart (shopping cart) cafeteria + auditorium = cafetorium (area used as both a cafeteria and an auditorium) tangerine + lemon = tangemon (hybrid fruit of a tangerine and a lemon) You may not use any of these hybrid words every day, but seeing how they're composed can help you decipher other portmanteau words you come across. Figuring out English oddities, peculiarities, and quirks Here's the rub — and it's a particularly abrasive one for people who are learning English as a second language: Sometimes, there's no way, other than context, to tell what an English word means. Why? Because English is full of oddities.
The result of such a rich linguistic heritage — English words, German words, French words, Spanish words, Greek and Latin words, word parts from everywhere, combined words, blended words, and so on — is that the English language has few rules you can rely on all the time: - Many words spelled similarly don't sound alike. Bomb, comb, and womb don't rhyme with each other: You say "bom," "kohm," and "woom." Sometimes, words spelled exactly the same are pronounced differently: tear (teer), "a teardrop," and tear (tehr), "to rip," for example. - Many words spelled differently do sound alike: Write, right, and rite are all pronounced the same but mean different things. These types of words are called homophones. - One word can have various meanings: Pool (the place to swim), pool (the billiards game), pool (to put together) — and that's not even all of the definitions of pool. These types of words are called homonyms, and English is absolutely full of them. These types of odd words plague all English speakers, native and otherwise. When you come up against them, the best you can do is use the context of what is being said or written, or, if you still aren't sure, head to a dictionary.
The health belief model in behavioural psychology is termed an ‘expectancy-value’ model. This means the model assumes that an individual takes action based on their evaluation of the most likely outcome of engaging in a new behaviour or of changing an existing one. The model is very popular and has proven its durability in the field of health education. It details the complex relationship between motivation, health behaviour and outcome. Development of the Model Hochbaum originally developed this model based on interviews he conducted during the 1950s. During this period, tuberculosis was considered a serious health problem, but not everyone was going in for chest x-rays. Through his interviews with such people, Hochbaum developed a model that tried to predict the likelihood of an individual taking up a recommended course of preventive action to safeguard his health. The model dealt with the motivation and decision-making processes that influenced a person’s choice to seek medical intervention. Health Belief Model The health belief model is a framework that helps indicate whether or not a person will adopt a recommended health behaviour. According to the model, an individual’s decision to engage in a health behaviour is based on his perceptions; therefore, by changing his perceptions, one can get him to adopt a new behaviour. A person takes a health care decision based on the following six factors: 1. Perceived Susceptibility: This refers to how vulnerable a person feels to being afflicted by a disease, including the fear that one is more prone to an illness compared to others. 2. Perceived Severity: This refers to the serious repercussions that could follow from not adopting a recommended health behaviour. These could range from becoming bedridden or dying to social consequences, such as the extent to which an illness affects a family, inability to work, etc. 3.
Perceived Benefits: The person evaluates the value of getting medical treatment by comparing the cost and side effects of the treatment with the expected consequences of being struck by an illness. 4. Perceived Barriers: These include the cost of the treatment, the complexity of adopting a new dietary or health regimen, lack of belief that one has the ability to change, side effects and length of treatment. 5. Health Value: This refers to how much a person values his or her health. One has to value one's health to be motivated enough to make the necessary changes. 6. Cues to Action: These are signals that prompt the person to take the initiative to treat an illness. They can range from exposure to health reports and messages in the mass media, watching a friend or relative suffer from the disease, or reading a health pamphlet, to the onset of symptoms in one’s body. The model postulates that an individual will seek treatment if he thinks that he is prone to a disease that has severe consequences. Crucial to the decision, though, is his evaluation of whether the benefits of taking up treatment will outweigh the difficulties that he will face in the process. In addition to the six factors that influence a health care decision, various demographic factors like age, sex, race, social class, education, employment status, knowledge and experience play a role in how a person perceives the urgency of taking proper action to deal with his health condition. Failure to Change Behaviours The main reasons for the failure to change one’s lifestyle behaviour are perceived susceptibility and barriers to change. A person who feels that he is highly vulnerable to disease is more likely to pay attention to any health message. But barriers, like social pressure, may prevent change even if the person is highly motivated.
Application of the Model - Breast and Cervical Cancer: The model has been used to find ways in which women can be encouraged to go in for cancer screening. - Diet Change: The model has been used to predict the likelihood of people adopting a healthier diet. - Smoking: The model is used to identify if a person is likely to quit smoking by taking into account various factors like peer pressure, threat of cancer, onset of symptoms like breathing problems, etc.
HydroElectric Power: Risks and Rewards Cheap Energy vs the Environment The Case of Hydroelectric Power Historical Growth of Hydroelectric power: - Currently hydro power is 7% of the total US energy budget, and this share has been decreasing - The share varies considerably with region in the US due to the availability of free-flowing water - Dam building really was initiated in the 1930s as part of a public works program to combat the Depression - Low cost per kWh (see below) caused an exponential increase in dam building from 1950-1970 (much of it on the Columbia) - Since 1970 hydro production has levelled off and has therefore become an increasingly smaller percentage of the US energy budget. Hydropower is a natural renewable energy source, as it makes use of the water cycle. Hydropower production is sensitive to secular evolution of weather: seasonal snowpacks, etc. Long-term droughts (10 years or so) seem to occur frequently in the West. About 30% of the hydro potential in the US has been tapped. Why is hydro so attractive? The energy density in stored elevated water is high: one liter of water per second on a turbine generates 720 watts of power. If this power can be continuously generated for 24 hours per day for one month, then the total energy per month is 720 watts x 24 hours/day x 30 days/month = 518 kWh/month. Power generating capacity is directly proportional to the height the water falls. For a fall of, say, only 3 m, roughly 30 times less electricity would be generated (e.g. 17 kWh/month), but this is just for a minuscule flow rate of 1 kg/sec. Capacities of some large dams: Grand Coulee 1942 6,500 MW; John Day 1969 2,200 MW; Niagara (NY) 1961 2,000 MW; The Dalles 1957 1,800 MW; Chief Joseph 1956 1,500 MW; McNary 1954 1,400 MW; Hoover 1936 1,345 MW; Glen Canyon 1964 950 MW; Three Gorges 2000 18,000 MW. The Pacific Northwest has 58 hydroelectric dams generating 63% of its total electricity. Most of the rest comes from coal-fired steam plants (e.g. Centralia, Washington).
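The energy-density arithmetic above follows the standard formula P = (mass flow) × g × (head). Note that the head is not stated in the text; the roughly 73 m used below is inferred from the quoted 720 W at a 1 kg/s flow, so treat it as an assumption.

```python
# Hydropower sketch: P = m_dot * g * h (ideal, no losses).
# The 720 W figure for 1 kg/s implies a head of about 73 m;
# that head is an inference, not a value given in the source.
g = 9.81  # m/s^2

def hydro_power_watts(mass_flow_kg_s, head_m, efficiency=1.0):
    """Power extracted from water falling through head_m metres."""
    return mass_flow_kg_s * g * head_m * efficiency

p = hydro_power_watts(1.0, 73.4)     # ~720 W for 1 liter (1 kg) per second
kwh_per_month = p * 24 * 30 / 1000   # ~518 kWh over a 30-day month
print(round(p), round(kwh_per_month))
```

The same function also shows the proportionality the text mentions: dropping the head from ~73 m to 3 m scales the output down by the same factor as the head.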
Note: the Trojan Nuclear Power Plant was relatively easy to shut down because replacement power was immediately available. Again, the main advantages of hydro are (a) it is renewable and (b) there is a lot of energy available. Some real disadvantages: dams are frequently located upstream from major population centers. Hydroelectric Power - The Risks: - 1918-1958: 33 major dam failures, resulting in 1,680 documented deaths - 1959-1965: 9 major dams failed throughout the world - 1976: Teton Dam failure in Idaho - Most of the dams on the Columbia have been built since 1950 and are not close to their failure points - The Salmon Problem: - An extremely emotional issue: the salmon is an icon of the PNW - Some federal dam licenses can now be lost because of salmon migration problems - Some studies suggest federal dams are mostly responsible for the drop from 16 million to 300,000 wild fish per year - Actual salmon count data are available for these dam sites - It is estimated that, to improve migration, utility rates in the PNW will rise by 8% - There are lots of other factors at work as well: - El Niño - Aggressive fishing - Poor logging practices and increased soil erosion Note that reservoirs offer expanded habitat for geese, pelicans, eagles and osprey. They also help with flood control, thus minimizing soil erosion in the watershed.
Adverse effects of dams on salmon: - migratory barrier - killed in turbines (especially young ones swimming downstream) - supersaturation of air in the water (the high pressure of falling water forces air into solution) - reduced oxygen content if river flow is reduced (summer) due to separation of warm and cold water; the cold water doesn't mix and so isn't aerated (this is mostly a problem in the Tennessee Valley) Possible remedies: - Build fish "passages" to direct them towards tributaries; this has proven successful for trout in Oregon - Better turbine design and screen systems can help eliminate fish kill on the downstream migration - Minimize turbulence in the operation of the turbine - Have better flow control How will potential lost power be compensated for? - energy conservation? - sale of hydro to the US by Canada? - coal-fired plants?
NASA trained several pairs of eyes on Saturn as the planet put on a dancing light show at its poles. While NASA’s Hubble Space Telescope, orbiting around Earth, was able to observe the northern auroras in ultraviolet wavelengths, NASA’s Cassini spacecraft, orbiting around Saturn, got complementary close-up views in infrared, visible-light and ultraviolet wavelengths. Cassini could also see northern and southern parts of Saturn that don’t face Earth. The result is a kind of step-by-step choreography detailing how the auroras move, showing the complexity of these auroras and how scientists can connect an outburst from the Sun and its effect on the magnetic environment at Saturn. “Saturn’s auroras can be fickle — you may see fireworks, you may see nothing,” said Jonathan Nichols of the University of Leicester in England, who led the work on the Hubble images. “In 2013, we were treated to a veritable smorgasbord of dancing auroras, from steadily shining rings to super-fast bursts of light shooting across the pole.” The Hubble and Cassini images were focused on April and May of 2013. Images from Cassini’s ultraviolet imaging spectrometer (UVIS), obtained from an unusually close range of about six Saturn radii, provided a look at the changing patterns of faint emissions on scales of a few hundred miles (kilometers) and tied the changes in the auroras to the fluctuating wind of charged particles blowing off the Sun and flowing past Saturn. “This is our best look yet at the rapidly changing patterns of auroral emission,” said Wayne Pryor, a Cassini co-investigator at Central Arizona College in Coolidge, Ariz. “Some bright spots come and go from image to image. 
Other bright features persist and rotate around the pole, but at a rate slower than Saturn’s rotation.” The UVIS images, which are also being analyzed by team associate Aikaterini Radioti at the University of Liege, Belgium, also suggest that one way the bright auroral storms may be produced is by the formation of new connections between magnetic field lines. That process causes storms in the magnetic bubble around Earth. The movie also shows one persistent bright patch of the aurora rotating in lockstep with the orbital position of Saturn’s moon Mimas. While previous UVIS images had shown an intermittent auroral bright spot magnetically linked to the moon Enceladus, the new movie suggests another Saturn moon can influence the light show as well. The new data also give scientists clues to a long-standing mystery about the atmospheres of giant outer planets. “Scientists have wondered why the high atmospheres of Saturn and other gas giants are heated far beyond what might normally be expected by their distance from the Sun,” said Sarah Badman, a Cassini visual and infrared mapping spectrometer team associate at Lancaster University, England. “By looking at these long sequences of images taken by different instruments, we can discover where the aurora heats the atmosphere as the particles dive into it and how long the cooking occurs.” The visible-light data have helped scientists figure out the colors of Saturn’s auroras. While the curtain-like auroras we see at Earth are green at the bottom and red at the top, Cassini’s imaging cameras have shown us similar curtain-like auroras at Saturn that are red at the bottom and purple at the top, said Ulyana Dyudina, an imaging team associate at the California Institute of Technology, Pasadena, Calif. The color difference occurs because Earth’s auroras are dominated by excited nitrogen and oxygen molecules, and Saturn’s auroras are dominated by excited hydrogen molecules. 
“While we expected to see some red in Saturn’s aurora because hydrogen emits some red light when it gets excited, we also knew there could be color variations depending on the energies of the charged particles bombarding the atmosphere and the density of the atmosphere,” Dyudina said. “We were thrilled to learn about this colorful display that no one had seen before.” Scientists hope additional Cassini work will illuminate how clouds of charged particles move around the planet as it spins and receives blasts of solar material from the Sun. “The auroras at Saturn are some of the planet’s most glamorous features — and there was no escaping NASA’s paparazzi-like attention,” said Marcia Burton, a Cassini fields and particles scientist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif., who is helping to coordinate these observations. “As we move into the part of the 11-year solar cycle where the Sun is sending out more blobs of plasma, we hope to sort out the differences between the effects of solar activity and the internal dynamics of the Saturn system.” There is still more work to do. A group of scientists led by Tom Stallard at the University of Leicester is busy analyzing complementary data taken during the same time window by two ground-based telescopes in Hawaii — the W. M. Keck Observatory and NASA’s Infrared Telescope Facility. The results will help them understand how particles are ionized in Saturn’s upper atmosphere and will help them put a decade of ground-based telescope observations of Saturn in perspective, because they can see what disturbance in the data comes from Earth’s atmosphere.
Key term: A transposon is a DNA sequence able to insert itself at a new location in the genome (without any sequence relationship with the target locus). Genomes evolve both by acquiring new sequences and by rearranging existing sequences. The sudden introduction of new sequences results from the ability of vectors to carry information between genomes. Extrachromosomal elements move information horizontally by mediating the transfer of (usually rather short) lengths of genetic material. In bacteria, plasmids move by conjugation (see 12 The replicon), while phages spread by infection (see 11 Phage strategies). Both plasmids and phages occasionally transfer host genes along with their own replicon. Direct transfer of DNA occurs between some bacteria by means of transformation (see 1 Genes are DNA). In eukaryotes, some viruses (notably the retroviruses discussed in 16 Retroviruses and retroposons) can transfer genetic information during an infective cycle. Rearrangements are sponsored by processes internal to the genome. One cause is unequal recombination, which results from mispairing by the cellular systems for homologous recombination. Nonreciprocal recombination results in duplication or rearrangement of loci (see 4 Clusters and repeats). Duplication of sequences within a genome provides a major source of new sequences. One copy of the sequence can retain its original function, while the other may evolve into a new function. Furthermore, significant differences between individual genomes are found at the molecular level because of polymorphic variations caused by recombination. We saw in 4 Clusters and repeats that recombination between "minisatellites" adjusts their lengths so that every individual genome is distinct.
Another major cause of variation is provided by transposable elements or transposons: these are discrete sequences in the genome that are mobile, able to transport themselves to other locations within the genome. The mark of a transposon is that it does not utilize an independent form of the element (such as phage or plasmid DNA), but moves directly from one site in the genome to another. Unlike most other processes involved in genome restructuring, transposition does not rely on any relationship between the sequences at the donor and recipient sites. Transposons are restricted to moving themselves, and sometimes additional sequences, to new sites elsewhere within the same genome; they are therefore an internal counterpart to the vectors that can transport sequences from one genome to another. They may provide the major source of mutations in the genome. Transposons fall into two general classes. The groups of transposons reviewed in this chapter exist as sequences of DNA coding for proteins that are able directly to manipulate DNA so as to propagate themselves within the genome. The transposons reviewed in the next chapter are related to retroviruses, and the source of their mobility is the ability to make DNA copies of their RNA transcripts; the DNA copies then become integrated at new sites in the genome. Transposons that mobilize via DNA are found in both prokaryotes and eukaryotes. Each bacterial transposon carries gene(s) that code for the enzyme activities required for its own transposition, although it may also require ancillary functions of the genome in which it resides (such as DNA polymerase or DNA gyrase). Comparable systems exist in eukaryotes, although their enzymatic functions are not so well characterized. A genome may contain both functional and nonfunctional (defective) elements.
Often the majority of elements in a eukaryotic genome are defective, and have lost the ability to transpose independently, although they may still be recognized as substrates for transposition by the enzymes produced by functional transposons (for review see Finnegan, 1985). A eukaryotic genome contains a large number and variety of transposons. The fly genome has >50 types of transposon, with a total of several hundred individual elements. Transposable elements can promote rearrangements of the genome, directly or indirectly. The intermittent activities of a transposon seem to provide a somewhat nebulous target for natural selection. This concern has prompted suggestions that (at least some) transposable elements confer neither advantage nor disadvantage on the phenotype, but could constitute "selfish DNA," concerned only with their own propagation. Indeed, in considering transposition as an event that is distinct from other cellular recombination systems, we tacitly accept the view that the transposon is an independent entity that resides in the genome. Such a relationship of the transposon to the genome would resemble that of a parasite with its host. Presumably the propagation of an element by transposition is balanced by the harm done if a transposition event inactivates a necessary gene, or if the number of transposons becomes a burden on cellular systems. Yet we must remember that any transposition event conferring a selective advantage (for example, a genetic rearrangement) will lead to preferential survival of the genome carrying the active transposon (for review see Campbell, 1981). Campbell, A. (1981). Evolutionary significance of accessory DNA elements in bacteria. Annu. Rev. Microbiol. 35, 55-83. Finnegan, D. J. (1985). Transposable elements in eukaryotes. Int. Rev. Cytol. 93, 281-326.
THE FLIGHT OF ATOMS PHOTOGRAPHED (Oct, 1923) By W. D. HARKINS, Professor of Physical Chemistry, University of Chicago An atom is 2,000 times too small to be seen through a microscope, and it is apt to stagger the imagination of most people to hear about photographing atoms in flight. Not so long ago an atom was spoken of as the smallest particle of matter, but now it is believed to represent a grouping of electrons around a nucleus, much in the manner that the planets arranged around the sun constitute the solar system. Air that is not too dry, confined in a cylinder and alternately compressed and expanded under the action of a piston, will produce a mist of minute water particles similar to a rain cloud. If the cylinder is vertical, the head and upper part of the walls are made of glass, and there is sufficient illumination, this phenomenon can be seen with the naked eye as the miniature fog settles toward the piston. In further developing the apparatus, the terminals of an electric circuit of high voltage are connected at the top and bottom of the cylinder, in circuit with a device that makes contact at the proper instant during the stroke of the piston. Charged helium atoms, or nuclei, emanating from a speck of radioactive material, called polonium, located on the side of the cylinder, are projected through the air, and about 200,000 minute water drops are deposited in the path of each helium atom, each making what appears like the trail of a tiny skyrocket. The helium nuclei, or part atoms, travel at a speed 20,000 times that of a rifle bullet, and as they pass through the atoms in the air, they bump into the atom parts, called electrons, knocking them out of the atoms (this is called ionizing or charging the atoms) and producing the effect seen in the trails in one of the illustrations. On each electron thus knocked out, and on each ion left by tearing an electron out of an atom, a minute water drop deposits itself.
A great number of these drops, as described above, constitute each atom track, which, when illuminated by a powerful arc light, is extremely bright. When one of the helium nuclei collides with the nucleus of another atom, the result is similar to one billiard ball hitting another, but usually a nucleus shoots through 400,000 atoms without striking another nucleus. A motion-picture camera can be mounted directly above the top of the cylinder and arranged to operate in synchronism with the movements of the piston so as to photograph each series of flights as it occurs. It is interesting to note that out of 30,000 tracks photographed, only one indicated a nearly direct collision between a helium nucleus and the nucleus of another atom. By measuring the angles of incidence and deflection shown in such a collision, it is possible to compute the diameter of the atom nucleus involved.
Peer deeply into a star system of any size, and you'll probably find a black hole. That's the lesson from new observations by the Hubble Space Telescope, which has spotted the signs of midsize black holes at the hearts of ancient stellar swarms called globular clusters. The masses of these black holes suggest that some precise but unknown cosmic recipe dictates how large a given black hole will become. Discoveries during the last 2 decades have unveiled holes of many sizes. The deaths of giant stars in supernova explosions can create black holes with several times the mass of our sun. At the other extreme, galaxies harbor supermassive black holes millions or even billions of times more massive. Last year, x-ray astronomers also found hints of "intermediate" black holes with hundreds to thousands of times our sun's mass in other galaxies (ScienceNOW, 7 June 2001), but they hadn't measured the gravitational pulls of such holes--the best way to confirm their presence and gauge their masses. Now, Hubble has done just that for two globular clusters: M15, in our Milky Way, and G1, in the nearby Andromeda galaxy. The clusters are tight knots of hundreds of thousands to millions of stars that orbit around galactic centers like moths around a streetlamp. Two research teams used Hubble's sharp vision to spy stars moving at the cores of the clusters. The rapid motions could arise only from the strong gravity of hidden objects: black holes with 4,000 solar masses in M15 and 20,000 solar masses in G1. Astronomers announced the results on 17 September at NASA Headquarters in Washington, D.C. Astronomers had long debated whether globular clusters were massive enough for black holes to form, either when the clusters condensed in the early universe or when gas and stars accumulated at their cores. Curiously, the fraction of each cluster's mass that resides in the black hole--about 0.5%--is the same ratio seen for supermassive black holes in the central bulges of giant galaxies. 
"Whenever you see such a perfect relationship in astronomy, there's almost always an underlying cause," says astronomer Karl Gebhardt of the University of Texas, Austin, a member of both research teams. However, it's not yet clear whether all black holes and their host star systems are born with that half-percent ratio, or whether they grow at the same rate over time. Nor do astronomers know whether all globular clusters house black holes today, or whether many lost theirs when gravitational jostling at their crowded hearts flung the holes into space.
Trigonometry - Sine and Cosine Rule
The solution of an oblique triangle can be found by applying the Law of Sines and the Law of Cosines, simply called the Sine and Cosine Rules. An oblique triangle, as we all know, is a triangle with no right angle: either all of its angles are acute, or one angle is obtuse.
Sine Rule (The Law of Sines)
The Sine Rule is used in the following cases:
CASE 1: Given two angles and one side (AAS or ASA)
CASE 2: Given two sides and a non-included angle (SSA)
The Sine Rule states that the sides of a triangle are proportional to the sines of the opposite angles. In symbols, for a triangle with sides a, b, c opposite angles A, B, C:
a / sin A = b / sin B = c / sin C
Case 2: SSA or The Ambiguous Case
In this case, there may be two triangles, one triangle, or no triangle with the given properties. For this reason, it is sometimes called the ambiguous case. Thus, we need to examine the possibility of no solution, one solution, or two solutions.
Cosine Rule (The Law of Cosines)
The Cosine Rule is used in the following cases:
1. Given two sides and an included angle (SAS)
2. Given three sides (SSS)
The Cosine Rule states that the square of the length of any side of a triangle equals the sum of the squares of the lengths of the other sides minus twice their product multiplied by the cosine of their included angle. In symbols:
a^2 = b^2 + c^2 - 2bc cos A
b^2 = a^2 + c^2 - 2ac cos B
c^2 = a^2 + b^2 - 2ab cos C
Go to the next page to start practicing what you have learnt.
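The two rules can be combined to solve a triangle numerically. Below is a minimal Python sketch (the function name and sample values are illustrative, not from the lesson): the Cosine Rule gives the unknown side of an SAS triangle, then the Sine Rule gives a second angle.

```python
import math

def solve_sas(b, c, angle_a_deg):
    """Solve a triangle given two sides and the included angle A (SAS)."""
    A = math.radians(angle_a_deg)
    # Cosine Rule: a^2 = b^2 + c^2 - 2bc cos A
    a = math.sqrt(b * b + c * c - 2 * b * c * math.cos(A))
    # Sine Rule: sin B / b = sin A / a  (B is opposite the smaller given side,
    # so asin's acute result is the correct angle here)
    B = math.degrees(math.asin(b * math.sin(A) / a))
    # The angles of a triangle sum to 180 degrees
    C = 180.0 - angle_a_deg - B
    return a, B, C

# Example: sides 5 and 7 enclosing a 60-degree angle
a, B, C = solve_sas(5, 7, 60)
print(round(a, 3), round(B, 1), round(C, 1))
```

Note that in the SSA (ambiguous) case the Sine Rule may also admit a second, obtuse solution (180 degrees minus the angle returned by asin), which is exactly why that case needs the extra checks described above.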
They are beautiful and sometimes otherworldly. Existing beneath the surface of the planet, caves have attracted humans for hundreds of thousands of years. Considered by some cultures as sacred, caves have been used in rituals and ceremonies. They have served both as shelter and burial tombs. The human remains and artifacts found in them have aided archaeologists in learning about early humans. Pictographs (rock paintings) in caves, some estimated to be more than 30,000 years old, attest to the creativity of early humans and their relationship to the natural world. The shape of the land The scientific study of caves is called speleology (pronounced speelee-AH-luh-jee; from the Greek words spelaion, meaning "cave," and logos, meaning "study of"). A cave is generally defined as a naturally formed cavity or hollow beneath the surface of Earth that is beyond the zone of light and is large enough to be entered by humans. Some sources use the word cavern interchangeably with cave. Technically, a cavern is a large chamber within a cave. A series of caves connected by passages is a cave system. Individual caverns and cave systems may be immense. In the Chiquibul (pronounced chee-ke-BOOL) Cave System in Belize and Guatemala, the Belize Chamber measures nearly 1,600 feet (490 meters) long by 600 feet (180 meters) wide. It is the largest cavern in the Western Hemisphere. The largest recorded cave system in the world is Mammoth Cave System. It extends for more than 345 miles (555 kilometers) in south-central Kentucky. Several other types of caves, formed in different areas by different geologic processes, do not meet this general definition. Glacier caves are formed inside glaciers by meltwater (water from melted ice or snow) that runs through cracks in the ice, producing tunnels and cavities. Sea caves are formed in cliffs and ledges along the shores of oceans and other large bodies of water where the constant pounding of waves wears away rock.
Lava tube caves are formed when the outer surface of a lava flow begins to cool and harden while lava inside remains hot. Once the stream of molten lava inside drains out, a tube or tunnel remains. Kazumura Cave in Hawaii, measuring approximately 38 miles (61 kilometers) in length, is the longest lava tube cave in the world. The most common, largest, and most spectacular caves, however, are solution caves. These caves are formed through the chemical interaction of air, water, soil, and rock. They usually form in areas where the dominant rock is limestone, a type of sedimentary rock (rock formed by the accumulation and compression of sediment, which may consist of rock fragments, remains of microscopic organisms, and minerals). Many solution caves feature streams and lakes and unusual mineral formations. These formations are known as speleothems (pronounced SPEE-lee-ohthems; from the Greek words spelaion, meaning "cave," and thema, meaning "deposit"). Because of the way they form, speleothems are also commonly known as dripstone. The primary speleothems are stalactites, stalagmites, columns, curtains, and flowstones. A stalactite (pronounced sta-LACK-tite) is an icicle-shaped formation that hangs from the ceiling of a cave. A similarly shaped deposit, though often not as pointy, that projects upward from the floor of a cave is a stalagmite (pronounced sta-LAG-mite). Stalagmites generally form underneath stalactites. The two deposits often grow until they join, forming a stout, singular deposit known as a column. A curtain (sometimes called drapery) is a mineral deposit that forms a thin, wavy or folded sheet that hangs from the ceiling of a cave. Any mineral deposit that forms sheets on a wall or floor of a cave is known by the general term flowstone. Although normally whitish or off-white in color, speleothems may contain traces of different minerals that add shades of brown, orange, yellow, red, pink, green, black, and other colors. Cave ceilings often collapse. 
As they do, the rock or ground above them also collapses. If the cave is located near Earth's surface, a bowl-like depression known as a sinkhole can develop on the surface. Sinkholes may also form above areas where limestone or other sedimentary rock has been eroded away (erosion is the gradual wearing away of Earth surfaces through the action of wind and water). Sinkholes may range in diameter from a few feet to a few thousand feet. A landscape dominated by sinkholes on the surface and extensive cave systems underneath is known as karst topography or karst terrain. Karst (Kras in Serbo-Croatian) is the name of a limestone plateau in the Dinaric Alps in northwest Slovenia that is marked by such geological formations. It was the first area to be studied based on these formations. Karst topography also features losing streams, which are streams on Earth's surface that are diverted underground through sinkholes or caves, and springs, which are areas where water from underground flows out almost continuously through an opening at Earth's surface. As karst topography continues to develop, a variety of landforms may arise on the surface. This is especially true in tropical or humid climate areas. Caves that grow ever larger soon start to collapse. Sinkholes in the area enlarge and merge. Sections of the ground remain elevated as streams and other running water erode the limestone rock mass around them ever deeper. These sections may form hills, known as cone karst, separated by the sinkholes. Eventually, steep limestone landforms called karst towers may remain standing hundreds of feet above the surrounding landscape. With nearly vertical walls, the towers are often bare of vegetation. The world's most impressive karst towers are perhaps those found in the Guangxi (pronounced GWAN-shee) Province in southern China.
Cave: Words to Know
- Cave: A naturally formed cavity or hollow beneath the surface of Earth that is beyond the zone of light and is large enough to be entered by humans.
- Cavern: A large chamber within a cave.
- Cave system: A series of caves connected by passages.
- Curtain: A thin, wavy or folded sheetlike mineral deposit that hangs from the ceiling of a cave.
- Erosion: The gradual wearing away of Earth surfaces through the action of wind and water.
- Flowstone: The general term for a sheetlike mineral deposit on a wall or floor of a cave.
- Groundwater: Freshwater lying within the uppermost parts of Earth's crust, filling the pore spaces in soil and fractured rock.
- Karst topography: A landscape characterized by the presence of sinkholes, caves, springs, and losing streams.
- Limestone: A sedimentary rock composed primarily of the mineral calcite (calcium carbonate).
- Losing stream: A stream on Earth's surface that is diverted underground through a sinkhole or a cave.
- Sedimentary rock: Rock formed by the accumulation and compression of sediment, which may consist of rock fragments, remains of microscopic organisms, and minerals.
- Sinkhole: A bowl-like depression that develops on Earth's surface above a cave ceiling that has collapsed or on an area where the underlying sedimentary rock has been eroded away.
- Speleothem: A mineral deposit formed in a cave.
- Stalactite: An icicle-shaped mineral deposit hanging from the roof of a cave.
- Stalagmite: A cone-shaped mineral deposit projecting upward from the floor of a cave.
Forces and changes: Construction and destruction
Caves are found almost everywhere around the planet. More than 17,000 have been identified in the United States, underlying 20 percent of the country's land surface. They are found in 48 of the 50 states (only Louisiana and Rhode Island lack caves). While the processes that form lava tube caves, glacier caves, sea caves, and other caves are obvious, those that form solution caves—the most common caves of all—are not.
Solution caves are not formed by volcanic activity or by the abrasive forces of water or wind. The primary force behind their formation is chemical weathering, which alters the internal structure of minerals by removing or adding elements. It begins in the sky The formation of a solution cave begins in Earth's atmosphere. As precipitation (mainly rain) falls to the planet's surface, the water (H2O) reacts with carbon dioxide (CO2) in the atmosphere to form weak carbonic acid (H2CO3). This is the same acid found in soda pop that produces its "fizz." Once this water and carbonic acid solution reaches Earth's surface and begins to percolate down through the soil, it reacts with carbon dioxide given off by decaying plants and animal matter to form even more carbonic acid solution. The main mineral in limestone is calcite (calcium carbonate). Most seashells are made of this mineral. Limestone is almost insoluble (unable to be dissolved) in water. Carbonic acid, however, dissolves calcite from limestone. Over hundreds of thousands to millions of years, as carbonic acid moves downward through cracks and fractures in limestone, it dissolves the rock and forms crevices. Over time, these crevices widen to become passages and caverns. A Sinking State The entire state of Florida lies on limestone. Much of this underlying rock is weathered, featuring cavities exceeding 100 feet (30 meters) in height and width. Although many are buried beneath sediments, sinkholes dot the land surface. This is especially true in central Florida, an area prone to sinkhole formation. The water table in this area is often only 5 to 10 feet (1.5 to 3 meters) below the surface of the ground. The largest sinkhole to have formed in Florida in recorded history appeared suddenly in May 1981 in the city of Winter Park. In the span of one day, a hole measuring 350 feet (107 meters) wide and 110 feet (34 meters) deep opened up.
The Winter Park sinkhole, as it became known afterward, swallowed a house, five cars from a nearby parking lot, and part of a city swimming pool. The city later stabilized and sealed the sinkhole, converting it into an urban lake. The dissolution that forms solution caves occurs in an area beneath Earth's surface where freshwater fills all pore spaces and microscopic openings in rocks and sediment. These openings include the spaces between grains of sand as well as cracks and fractures in rocks. As rain or melted snow seeps through the ground, some of it clings to particles of soil or to roots of plants. The remaining water moves deeper, drawn downward by gravity, until it reaches a layer of rock or sediment, such as clay, through which it cannot easily pass. It then fills the empty spaces and cracks above that layer. This water is known as groundwater, and the area where it fills all the spaces and pores underground is the zone of saturation. The top surface of this zone is called the water table. Above it, the pores and spaces in rock hold mainly air, along with some water. This is called the zone of aeration. Caves initially form just below the water table. Filled with water, the cavities and fractures in the limestone are enlarged by the continuous movement of water and carbonic acid through them. Air enters a cave only when the water table is lowered through some geologic event, such as erosion of the land surface above or uplift of the rock beneath the cave. When this occurs, the cave stops enlarging and water begins to drain out of the cave down through cracks and other passages in the surrounding limestone. Areas of the cave may continue to lie below the water table and, therefore, are still water-filled. An underground stream, whose water source lies farther away, may still flow through the cave. Drip by drip The air-filled sections of the cave provide the perfect environment for the development of speleothems.
Even though the water table may have dropped, water weaving its way downward from Earth's surface still enters a cave through cracks and crevices in its ceiling and walls. When this water and carbonic acid solution enters the cave, some of the carbon dioxide in the solution escapes into the air (much like a soda pop that loses carbon dioxide and goes "flat" when left uncovered). This changes the chemical structure of the solution, and it can no longer hold the dissolved calcite. The calcite is then deposited in crystallized form as a speleothem. Its shape depends on where and how quickly water enters the cave. Though growth rates of speleothems vary from cave to cave, it may take 120 years or longer for 1 cubic inch (16.4 cubic centimeters) of calcite to be deposited on a cave formation. The Largest Enclosed Space on Earth The largest cavern in the world is the Sarawak Chamber of the Good Luck Cave in Sarawak, Malaysia. It measures approximately 1,970 feet (600 meters) in length, 1,310 feet (400 meters) in width, and 330 feet (100 meters) in height. It has a total area of 1,751,300 square feet (162,700 square meters). The cavern is large enough to hold eight Boeing 747 aircraft lined up nose to tail. By comparison, the largest cavern in the United States is the Big Room in the Carlsbad Caverns cave system in New Mexico. Covering an area of 357,472 square feet (33,210 square meters), it is just over one-fifth the size of the Sarawak Chamber. Water slowly dripping from a small opening in the ceiling of the cave initially forms a soda straw. This tubelike formation develops when each drop evaporates, leaving behind a small amount of calcite around its border. As more drops fall, more calcite is deposited and the tube grows downward. Even though they are quite fragile and have the diameter of a drop of water, soda straws may grow to 3 feet (1 meter) or more in length. 
If the tube becomes blocked and more drops begin to fall, then a stalactite forms around the soda straw. If the drip from the ceiling increases even further, drops may fall off a stalactite before evaporating and form a stalagmite. Because the drops spread when they hit the floor or ledge of a cave, a stalagmite is often wider than the stalactite under which it grows. An extremely rapid drip from a ceiling may form a pool of water on the floor of a cave. As the water evaporates along the edges of the pool, calcite may form terraces. If water drips from various points in a crack in a cave ceiling, stalactites may grow in a row. Eventually, they may grow together, forming a continuous sheet. A flowing sheet may also develop if water seeps slowly along the length of a thin slit in the ceiling. When a crack appears in a cave wall, a film of water may flow down the wall and over ledges, forming sheets of flowstone. The multitude of speleothems that develop in caves vary widely. In fact, no two caves are ever alike. The air temperature of the cave, the amount and chemical composition of the water entering it, and the size of the joints and cracks in its ceiling and walls are just a few of the factors that determine a cave's particular appearance. Caves are environments that contain not only fantastic mineral formations but rare and unusual animals. These include blind fish, colorless spiders, and many other troglobites (pronounced TROG-lah-bites), animals that live in caves and cannot survive outside of them. Troglobites have evolved over millions of years, becoming adapted to the absolute blackness and meager food offerings of cave life. Caves are also home to animals that venture out periodically in search of food. Beetles, crickets, frogs, salamanders, and others are of this type. Finally, caves serve as temporary homes to animals that move freely in and out of them. Bats, bears, moths, and skunks are examples of these.
For many people, cave exploration is a fascinating and fun activity. Spelunking (pronounced spi-LUNG-king) is the term given to such exploration. Spelunking societies, organizations, and groups exist across the country, helping people explore the more than 100 caves that are open to the public for study and enjoyment. Although caves are carved out of rock, they are fragile. Vandalism, property development, and air and water pollution have all had a devastating effect on caves and cave life. Even oil left on a speleothem by the accidental touch of a human hand can alter its formation, eventually destroying it. Of the more than 130 species that inhabit the Mammoth Cave System in Kentucky, dozens are considered threatened or endangered. For the continued study and exploration of caves and the life they harbor, great care must be taken. Most caves are constantly changing. Some are still enlarging, with new passages being formed below the water table (in a cave system, the oldest caves and passages are closest to Earth's surface). Many caves are still wet, with calcite being deposited on various formations. Other caves and cave systems, however, are dry and are no longer enlarging or growing speleothems. Eventually, in a dry cave, the thin ceiling may lose support and collapse, exposing the cave to the surface through a sinkhole. Spotlight on famous forms Lechuguilla Cave, New Mexico The deepest limestone cave in the United States is Lechuguilla (pronounced lech-uh-GEE-yah) Cave. Part of the Carlsbad Caverns cave system in southeast New Mexico, it extends to a depth of 1,571 feet (479 meters). The cave was discovered by a group of cavers in 1986. Scientists estimate that the cave has existed beneath Earth's surface for at least 2 million years. The cave is notable not only for its size, but for its fantastic array of rare speleothems. Unlike other solution caves, Lechuguilla was not formed by carbonic acid. 
Rather, rising hydrogen sulfide from nearby oil fields reacted with groundwater to form sulfuric acid. This acid dissolved the limestone and created a cave filled with lemon-yellow sulfur formations. Among those is a 24-foot (7.3-meter) soda straw, the longest in the world. In addition to unusual speleothems, Lechuguilla contains rare bacteria that feed on the sulfur, iron, and manganese minerals present in the cave. Scientists believe these bacteria may have played a part in the formation of the cave and its speleothems. They also believe the sulfur-laden environment of Lechuguilla may be similar to that on the surface of Mars, so they have studied the cave's bacteria to determine how life may exist on that planet. Mammoth Cave System, Kentucky The Mammoth Cave System, properly known as the Mammoth Cave-Flint Ridge System, is the largest cave system in the world. Lying beneath the surface in south-central Kentucky, the system extends for more than 345 miles (555 kilometers) and to a depth of 379 feet (116 meters). Geologists believe there may be an additional 600 miles (965 kilometers) of undiscovered passageways connected to the system. Scientists estimate the system began to form in the limestone rocks underlying the area some 30 million years ago. Archaeologists have found evidence that early Native Americans inhabited the cave system as many as 4,000 years ago. The land surface above Mammoth Cave System is marked by sinkholes and losing streams. Underneath this karst topography lie tunnels, passages, caverns, and almost every type of speleothem. Underground rivers flow through some of the system's deepest caverns. Mammoth Dome is a cavity in the system that measures 192 feet (59 meters) in height. Another extraordinary feature is Frozen Niagara, a mass of flowstone 75 feet (23 meters) tall and 4 feet (1.2 meters) wide. 
Voronya Cave, Republic of Georgia On January 6, 2001, a team of Ukrainian and Russian cavers exploring a cave in the Abkhazia region of the Republic of Georgia reached a depth of 5,610 feet (1,710 meters). This event confirmed Voronya Cave (also known as Krubera Cave) as the world's deepest cave. The previous record holder had been Lamprechtsofen-Vogelshacht Cave in Austria, which measures 5,355 feet (1,632 meters) in depth. Voronya Cave was so named because of the large number of crows that gather around its entrance (voron is Russian for "crow"). Discovered in the late 1960s, the cave is located in a valley in the western Caucasus Mountains. Meandering downward through dense limestone, the cave features one entrance that leads to three branches. When first explored in the 1980s, the cave was thought to end in a narrow passage 1,110 feet (338 meters) beneath the surface. In 1999, an expedition found new passages that led to deeper pits. For More Information Aulenbach, Nancy Holler, and Hazel A. Barton. Exploring Caves: Journeys into the Earth. Washington, D.C.: National Geographic, 2001. Gillieson, David S. Caves: Processes, Development, and Management. Cambridge, MA: Blackwell Publishers, 1996. Moore, George W., and Nicholas Sullivan. Speleology: Caves and the Cave Environment. Third ed. St. Louis, MO: Cave Books, 1997. Palmer, Arthur N., and Kathleen H. Lavoie. Introduction to Speleology. St. Louis, MO: Cave Books, 1999. Taylor, Michael Ray. Caves: Exploring Hidden Realms. Washington, D.C.: National Geographic, 2001. "Cave Facts." American Cave Conservation Association. http://www.cavern.org/CAVE/ACCA_index.htm (accessed on August 14, 2003). "Caves Theme Page." Gander Academy. http://www.stemnet.nf.ca/CITE/cave.htm (accessed on August 14, 2003). "Karst Topography Teacher's Guide and Paper Model." U.S. Geological Survey. http://wrgis.wr.usgs.gov/docs/parks/cave/karst.html (accessed on August 14, 2003). "NOVA: Mysterious Life of Caves."
WGBH Educational Foundation. http://www.pbs.org/wgbh/nova/caves/ (accessed on August 14, 2003). "Park Geology Tour of Cave and Karst Parks." National Park Service, Geologic Resources Division. http://www.aqd.nps.gov/grd/tour/caves.htm (accessed on August 14, 2003).
M.Ed., Stanford University. Patrick has been teaching AP Biology for 14 years and is the winner of multiple teaching awards. Sex linked genes are carried on the X and Y chromosomes, also known as the sex chromosomes. If a gene is carried on the Y chromosome, only men can inherit it. Recessive genes carried on the X chromosome are always expressed in men, who only have one X chromosome, but must be homozygous to be expressed in women. One of the most common complications to introduce to Mendel's laws is the idea of sex linked traits. Now sex linked genes are those genes that are found on the x chromosome. That's a significant difference from the standard way of doing genetics, which deals with the autosomes, the non-sex chromosomes, because males only get one of those x chromosomes. Some common examples that are used during tests or in class that are sex linked are hemophilia and color blindness. Now because they're on the x chromosome, when you write them, instead of putting just the letter big H for having normal blood clotting and little h for having hemophilia, you actually put the x to represent the x chromosome, and typically as a superscript floating up high like an exponent you put the capital H for normal or the lower case version for hemophilia. Now males again only get one x chromosome. So you can't say they're homozygous or heterozygous or anything like that, because they don't have two of the same or two different alleles. Instead, some people will call that hemizygous, because it has half the normal number of chromosomes and hemi means half. Females, on the other hand, will actually get two x chromosomes. So they can be homozygous dominant with x big h x big h, heterozygous x big h x little h, or homozygous recessive x little h x little h. Now what would some of the problems that involve sex linked traits look like? Here's a standard kind of question.
A normal man marries a woman who is a carrier for hemophilia. Predict their offspring. So the first thing you have to do is figure out: a normal man, what does that mean? Well, that means that he does not have the hemophilia allele. So he has the normal blood clotting allele. The woman, she is normal but she's a carrier. So that means she has one normal and one hemophilia allele. So she is heterozygous for this trait. So now that I've got their genotypes figured out, let me plug them into this Punnett square here. I put the guy's genes up here, hers right there. And now it's time for them to have some babies. This daughter here is xx and gets a big h here and a big h there from mummy and daddy. An x from mummy with a big h and daddy's y: that's a son. Then we have another daughter: little h from mummy and big h from daddy. Remember, you always put the dominant allele first. It doesn't matter who you get it from. You always put the dominant first. So we'll see that the daughters are both normal. Here mummy gives the x that she has, daddy gives the y that he has, and now we have our two sons. And what you'll see with sex linked traits, the way to spot them, is an unequal expression of the traits in males versus females. Here this daughter is normal. This daughter is normal; even though she's a carrier, they both have normal blood clotting. So 100 percent of the females (I'll use the symbol for female) are normal, whereas this guy is normal, so 50 percent of the males are normal, and this guy here, however, he's a hemophiliac. 50 percent of the males are hemophiliacs. So that's the easy way to do it. So of course, some bio teachers like me like to sneak in stealthy little sex linked examples. Let me show how that could happen. In fruit flies, white eyes are recessive to red.
So we have a white eyed male fruit fly mating with a red eyed female to produce 24 white males, 28 white females, 27 red males and 23 red females. Can this be sex linked? Well, obviously, based on this video it can. But if you just saw these results you might say, well, it's affecting the males and females the same; it's just somebody who's homozygous recessive crossing with somebody who's heterozygous, and if they gave you no other information you'd be right. That's the simplest answer, and often the simplest answer is the right one. However, there is a way to make this sex linked, and usually what will happen is they'll talk about another generation. So let me show you how you could make this sex linked. We have our white eyed male. Because red is the dominant trait, I'm going to use big r for red, little r for white. So he's x little r y. Now, she's red, we know that. If she was homozygous dominant, all of her offspring are going to be red, because that's what homozygous dominants do. So let's make her heterozygous; she's got this little r right there. So let's take a look. This daughter here is big r little r. This daughter here is little r little r. This son here is big r, nothing, so red, and this son is x little r, white. So, as predicted by our Punnett square, half of our female offspring have red eyes, and we see that; half of our male offspring have red eyes, and we see that; and half have white and half have white. Now let's take this white daughter here (ooh, fly incest) and mate her with this red son here, and we'll see x big r x little r, red female; x big r x little r, red female. 100 percent of our females are red. And white male, white male. So ultimately, in the second generation, we finally see that unequal distribution of the effects between the genders that signals sex linked traits.
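The Punnett-square bookkeeping in the first worked example (normal man crossed with a carrier woman) can be automated. Here is a minimal Python sketch; the allele labels and function names are illustrative, not standard notation:

```python
from itertools import product

# Alleles carried on the X chromosome: "XH" (normal, dominant)
# and "Xh" (hemophilia, recessive). "Y" carries neither allele.
mother = ["XH", "Xh"]   # carrier female
father = ["XH", "Y"]    # normal male

def phenotype(genotype):
    """Return (sex, trait) for a tuple of inherited sex chromosomes."""
    sex = "male" if "Y" in genotype else "female"
    # The recessive trait shows only when no dominant XH allele is present.
    trait = "normal" if "XH" in genotype else "hemophiliac"
    return sex, trait

# Every egg/sperm combination is one cell of the Punnett square.
for egg, sperm in product(mother, father):
    child = tuple(sorted([egg, sperm]))  # dominant allele listed first
    print(child, phenotype(child))
```

Running the loop reproduces the transcript's result: both daughters are normal (one a carrier), one son is normal, and one son is a hemophiliac.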
The U.S. Environmental Protection Agency (EPA) estimates that between 0.1% and 0.4% of usable surface aquifers are contaminated by industrial impoundments and landfills (1). Dumps and landfills are a threat to water supplies when water percolates through waste, picking up a variety of substances such as metals, minerals, organic chemicals, bacteria, viruses, explosives, flammables, and other toxic materials. This contaminated water is called leachate and is produced when the waste becomes saturated with water (2). Wastes with high moisture content, or those that receive artificial irrigation, rainwater, or surface or groundwater infiltration, produce leachate and methane gas. It has been shown that once a dump is saturated, annual precipitation of 36 inches per year can percolate 1 million gallons of contaminated water per acre (3). If the leachate is not contained and migrates from a site, the chemical and physical properties of the substances and the soil, as well as the hydrogeological conditions around the site, will determine the extent of contamination. If leachate reaches ground or surface water it could contaminate water supply wells. Dumps and landfills are not entirely synonymous and a distinction should be made. A dump is defined as "a site used to dispose of solid wastes without environmental controls" (4). The term 'landfill' is replacing 'dump' due to the modernization of our solid waste facilities. A landfill is defined as a "facility in which solid waste from municipal and/or industrial sources is disposed; sanitary landfills are those that are operated in accordance with environmental protection standards" (2). This distinction is very important because it allows us to distinguish between two different eras and practices. Even so, some modernized landfills are poorly engineered or located in environmentally unsound areas.
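The 36-inch figure quoted above is easy to verify with standard unit conversions. A quick back-of-the-envelope check in Python, assuming (as the source does) that essentially all of the precipitation percolates through the saturated waste:

```python
ACRE_SQ_FT = 43_560          # square feet in one acre
GALLONS_PER_CU_FT = 7.48052  # US gallons in one cubic foot

precip_ft = 36 / 12          # 36 inches of annual precipitation, in feet
volume_cu_ft = precip_ft * ACRE_SQ_FT   # 130,680 cubic feet per acre
gallons = volume_cu_ft * GALLONS_PER_CU_FT

print(round(gallons))  # roughly 1 million gallons per acre per year
```

The result comes out just under a million gallons, consistent with the estimate cited from Salvato et al. (3).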
The upgrade of waste disposal sites from dumps to environmentally sound solid waste disposal systems was mandated by a set of hazardous waste amendments passed in 1986. Landfills are now regulated at one of three class levels depending on the nature of solid or hazardous waste accepted. Well designed landfills should not cause water quality problems because leachate problems are anticipated and controlled.

1. USEPA (1980b). Planning Workshop to Develop Recommendations for a Ground Water Protection Strategy. Appendixes. Washington, DC. p. 171.
2. EPA Drinking Water Glossary: A Dictionary of Technical and Legal Terms Related to Drinking Water. USEPA Office of Water. June 1994. p. 17.
3. Salvato, J.A., et al. 1971. Sanitary Landfill: Leaching Prevention and Control. Journal Water Pollution Control Federation, 43(10):2084-2100.
4. Environmental Glossary, 4th ed. 1986. Edited by G. William Frick and Thomas F.P. Sullivan. Government Institutes, Inc., Rockville, MD. p. 99.

This page was prepared by T.L. Pedersen, June 1997. UCD EXTOXNET FAQ Team. Revised by B.T. Johnson, November 1997.
The year 1958 held much promise for the United States space program. Both the US and the Soviet Union were preparing to orbit a satellite as part of the International Geophysical Year (IGY), a series of activities planned between July 1957 and December 1958, intended to allow scientists around the world to study the Earth and space through coordinated observations. Given the Cold War competition between the two superpowers, the first to launch a satellite could claim technological pre-eminence. The Soviet Union leaped ahead of the US and stunned the world when it orbited Sputnik, the world's first artificial satellite, on October 4, 1957. The US response to Sputnik was two-fold. The first was to accelerate the Vanguard program, a joint National Academy of Sciences/US Naval Research Laboratory project, which unfortunately resulted in the spectacular and embarrassing launch failure of Vanguard TV3 on December 6. By that time, the Soviets had already achieved their second success with Sputnik 2, carrying a dog named Laika, the first live animal in space. The second response was to resurrect the Army Ballistic Missile Agency's (ABMA) Jupiter-C rocket program, in which Wernher von Braun's team and the California Institute of Technology's Jet Propulsion Laboratory (JPL) had tested reentry vehicles in sub-orbital launches. JPL designed and built the Explorer satellite. The ABMA and JPL completed the job of modifying the Jupiter-C into the Juno rocket and building Explorer 1 in 84 days, and it was hoped that 1958 would start off much better than 1957 had ended. The Juno rocket could trace its ancestry back to the German V-2 rocket, which von Braun had also designed. Once working in the US after World War II, he used the V-2 to develop the Redstone intermediate range ballistic missile, from which he developed the Jupiter-C as a high-performance three-stage rocket. The addition of a fourth stage created the Juno rocket, capable of launching a satellite into orbit.
Explorer 1 successfully launched from Cape Canaveral's Pad 26 on January 31, 1958. A team of women mathematicians at JPL computed Explorer's trajectory and were able to confirm that it was indeed in orbit around the Earth, although its orbit of 224 miles by 1,575 miles was somewhat higher than planned. Explorer 1 weighed 30 pounds, of which more than 18 pounds were scientific instruments developed under the direction of James Van Allen of the University of Iowa. The instrumentation consisted of a cosmic-ray detector, five temperature sensors and two micrometeoroid detectors. The cosmic-ray detector indicated a much lower cosmic-ray count than expected. Van Allen postulated that the instrument was giving these readings because it was actually saturated by energetic charged particles originating mainly in the Sun and trapped by Earth's magnetic field. Explorer 1's discovery of these trapped radiation belts, subsequently named after Van Allen, is considered one of the outstanding scientific discoveries of the IGY. Explorer 1 continued to record and transmit data until its batteries died on May 23, 1958. By then it had been joined in orbit by Explorer 3, also launched on a Juno rocket, on March 26 (Explorer 2 failed at launch). Although no longer active, Explorer 1 remained in orbit until March 31, 1970, when it burned up on reentry over the Pacific Ocean. It was not only America's first satellite in orbit, but also the first of a long-running series of scientific satellites that returned a wealth of useful information about the Earth, its environment, and its interactions with the Sun. The competition between two separate groups to independently develop and orbit the first American satellite contributed to the recognition of the need for a single civilian space organization to plan future efforts. Following lengthy committee hearings, Congress passed the National Aeronautics and Space Act of 1958 on July 16, and President Dwight D.
Eisenhower signed it into law on July 29, establishing the National Aeronautics and Space Administration. NASA officially began operating on October 1, 1958. A fully instrumented flight backup of Explorer 1 is on display at the Smithsonian Institution's National Air and Space Museum's Milestones of Flight Gallery, as is a model of a Juno rocket. A mockup of a Juno rocket is on display at the Kennedy Space Center Visitors Center.
Research into the diet of mankind’s Stone Age ancestors reveals that horses were well adapted to cold winters. It even suggests that horses may at one time have been hibernators. Studies into the diet of Stone Age man have examined the essential fatty acid content of frozen animals such as mammoths, bison and horses that might have formed part of the diet of our ancestors many years ago. Humans, and presumably their ancestors, have certain nutritional requirements that have to be met from food. One requirement is for essential fatty acids such as linoleic acid and alpha-linolenic acid. Researchers have been looking at the fatty acid profile of adipose tissue of animals found in the permafrost of Siberia. The lead researcher was José L. Guil-Guerrero of the Chemistry of Biomolecules and Food Processing Research group at the University of Almería, Spain. The latest issue of Equine Science Update reports that six specimens were included in the study: two mammoths (one a baby calf, the other a young female) that died about 40,000 years ago; two bison from about 9,000 years ago; and two adult horses that died about 4,500 years ago. The researchers used gas-liquid chromatography-mass spectrometry (GLC-MS) and a GLC-flame ionization detector (GLC-FID) to determine the current fatty acid content of the specimens. Then, using information on how fats change when frozen for long periods, they were able to deduce the likely fatty acid profile of the animals at the time of death. Several factors influence the fatty acid profile of such animals. First is the composition of the vegetation on which the animals had been feeding. A detailed analysis of the stomach contents of the mammoth calf revealed plants known to be good sources of polyunsaturated fatty acids such as alpha-linolenic acid. The animal’s digestive physiology is also important.
Single-stomached animals, such as mammoths and horses, are better able to assimilate fatty acids from the food they eat than are ruminants such as bison. The researchers concluded that the fat of single-stomached mammals like mammoths and horses, which were often eaten by Stone Age hunters, contained suitable amounts of omega-3 and omega-6 fatty acids, possibly in quantities sufficient to meet today’s recommended daily intake for good health. They added that the results also suggest that mammoths and horses at that time were hibernators. They found high proportions of both linoleic acid and alpha-linolenic acid in the reconstructed fatty acid profiles of both frozen horses examined. Such a profile is ideally suited to animals that hibernate. These polyunsaturated fatty acids are important because they influence the metabolic rate and the length of hibernation bouts in hibernating mammals. Animals without linoleic acid in their diet tend to have higher metabolic rates and shorter bouts of hibernation. Shorter bouts of hibernation mean that the animal arouses from hibernation more frequently, using more of its energy stores. This could adversely affect its chance of survival. The researchers point to similarities with present-day Yakutian horses, which are well adapted to living in cold conditions. They have an unusually thick layer of fat under the skin and in the abdomen. During the winter, although they move a little, they stay mainly in the sleeping position with little feeding or other activity. The researchers conclude: “The results of this study indicate that the monogastric animals analysed, i.e. the woolly mammoth and the horse, might have had a hibernating or semi-hibernating behaviour, while their subcutaneous fat could have been consumed by Stone Age hunters to fulfil the daily needs in essential fatty acids.” Guil-Guerrero JL, Tikhonov A, Rodríguez-García I, Protopopov A, Grigoriev S, et al.
(2014) The Fat from Frozen Mammals Reveals Sources of Essential Fatty Acids Suitable for Palaeolithic and Neolithic Humans. PLoS ONE 9(1): e84480.
Florant GL (1998) Lipid Metabolism in Hibernators: The Importance of Essential Fatty Acids. Amer. Zool. 38(2): 331-340.
Richter scale (rĭkˈtər), measure of the magnitude of seismic waves from an earthquake. Devised in 1935 by the American seismologist Charles F. Richter (1900–1985) and technically known as the local magnitude scale, it has been superseded by the moment magnitude scale, which was developed in the 1970s. The Richter scale is logarithmic; that is, the amplitude of the waves increases by powers of 10 in relation to the Richter magnitude numbers. The energy released in an earthquake can easily be approximated by an equation that includes this magnitude and the distance from the seismograph to the earthquake's epicenter. Numbers for the Richter scale range from 0 to 9, though no real upper limit exists. An earthquake whose magnitude is greater than 4.5 on this scale can cause damage to buildings and other structures; severe earthquakes have magnitudes greater than 7. Like ripples formed when a pebble is dropped into water, earthquake waves travel outward in all directions, gradually losing energy, with the intensity of earth movement and ground damage generally decreasing at greater distances from the earthquake focus. In addition, the nature of the underlying rock or soil affects ground movements. In order to give a rating to the effects of an earthquake in a particular place, the modified Mercalli scale, based on a scale developed by the Italian seismologist Giuseppe Mercalli, is often used. It measures an earthquake's intensity: the severity of an earthquake in terms of its effects on the inhabitants of an area, e.g., how much damage it causes to buildings. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
My children really enjoy learning about animals. They like to catch and release critters, visit animals at the zoo, do animal science projects like dissecting owl pellets, watch movies like Whale Rider and The Story of the Weeping Camel, do craft projects like this blue morpho butterfly craft, and read books like these about Australian animals. We are animal lovers! So on a recent visit to the zoo, we learned about ratites: large flightless birds. They share several characteristics, even though they are spread widely among different continents. Many scientists believe that their similarities and distance from each other suggest that the earth’s land masses were once much closer together than they are now. Scientists also believe that flightless birds on islands like Australia and New Zealand evolved because they had little reason to fly, since there were few predators. These birds developed short wings, great running or swimming skills, and special defenses like large toe claws. Let’s discover some special characteristics of these unique birds! An excellent book that explains how flightless birds evolved is “Charlie and Kiwi” by Peter Reynolds and the New York Hall of Science. Learn how “little changes in each generation can add up to BIG changes” in the long term. After reading the book, begin the lesson by having your child make a chart with the types of birds across the top. After locating Australia and New Zealand on a map, read about the birds below, watch the videos, and check out the pictures. Have your kids write down important facts about each of the birds as they learn. Flightless Birds of New Zealand and Australia Cassowary: this large and heavy bird is mainly found in New Guinea, though one species lives in northern Australia. Cassowaries are HUGE birds, standing at 4-5 ft (1.2-1.5 m) tall, and they have a large bony growth on their heads called a “casque,” which is taller on the female than the male.
Notice that the rest of their head is featherless and usually bright blue. Despite their size, cassowaries are shy creatures that live in the forest, moving around at night. They eat lots of different types of fruits, and also flowers, fungi, snails, insects, frogs, birds, fish, rats, mice, and carrion (Wikipedia). Cassowaries have very powerful legs and long feet with 3 toes, and the inner toe on each foot has a sharp claw used for defense. The emu is the national bird of Australia! Emus have adapted to the dry central plains of Australia by feeding on different things during the different seasons. For example, they feed on seeds in the dry season, and on seasonal plants and insects such as grasshoppers and crickets once it has rained. The emu is the second largest living bird, standing 5-6 ft (1.5-1.8 m) tall, but its wings are so small they are mere stubs! Scientists believe that the emu has been on Earth since prehistoric times, its lineage dating back 80 million years roaming the outback of Australia. The kiwi is native to New Zealand, and is the national symbol of the country. In fact, the term Kiwi is used all over the world as a nickname for New Zealanders! Kiwis have no tails, tiny wings that are useless, and feathers that are almost like coarse hairs. Unlike the other flightless birds, kiwi are the size of a domestic chicken, making them the smallest living ratites. They use their unique long and skinny beaks, which have nostrils at the end, to poke into the ground in search of earthworms, their favorite food. Another interesting fact is that a female kiwi lays one egg that is nearly 1/4 of her body weight! The kagu is another *almost* flightless bird, and no one can say for certain what this bird is or what it is related to! Unlike the other ratites here, its wings are large, but not quite strong enough for flight. The kagu is the size of a duck, and wanders around on the forest floor, using its beak to stab creatures on the ground.
In the video above, you can see how the chick hides among the leaves while its mother brings it a worm. The kagu’s feathers are silky, it has distinctive bright red eyes, and a wild, wispy crest. Kagus are in danger from dogs, and their nests are frequently targeted by cats and rats. Like the other flightless birds, the kagu is in danger of extinction. The kakapo are huge, goose-sized parrots confined to New Zealand, and on the edge of extinction (fewer than 150 left in the world!). Introduced predators such as dogs, weasels, and rats have decimated the population since the early 1900s. Kakapos are probably the longest-living birds in the world: the average life expectancy is 90 years, and some live even longer! Like the other ratites, because it lives on islands without mammalian predators, it is large, slow-moving, and flightless. Kakapos are vegetarian, with an acute sense of smell used to find nutritious leaves and stems. The kakapo spends its time walking around, using its strong claws to climb shrubs and trees to feed. Kakapo have lime green feathers, and the males use a low booming sound to attract females. Additional Resources about Flightless Birds Print out these free printables about flightless birds. Read this article for reading comprehension on flightless birds. Check out these National Geographic photos of flightless birds. Play this Flightless Birds Quiz. See all of the Bird Unit lesson plans at Mosswood Connections *All of the pictures in this post have been taken from Wikipedia, used under the Creative Commons license.
Climate change results from an increased concentration of greenhouse gases like carbon dioxide, nitrous oxide, and methane associated with economic activities, including energy, industry, transport, and land use patterns. Rich countries emit the majority of these gases, while poor countries are more vulnerable to their negative effects. Developing countries are also less able to adapt to these changing climatic conditions because of their locations; greater dependence on agriculture and natural resources; larger variations in weather and temperature conditions; and lower availability of critical resources like water, land, production inputs, capital, and public services. The inability of developing countries to respond and act immediately to lessen the impacts of climate change will have serious global economic consequences. Appropriate climate change policies, if adopted now, can stimulate pro-poor investment. More specifically, they can increase the profitability of environmentally sustainable practices even as they generate income for small producers and investment flows for rural communities. Climate mitigation through carbon offsets and carbon trading can increase income in rural areas in developing countries, directly improving livelihoods while enhancing adaptive capacity. In its recently released fourth assessment report, the Intergovernmental Panel on Climate Change concluded that a portfolio of both adaptation and mitigation will be required. This brief supports this conclusion as it explores pro-poor adaptation, risk management, and mitigation strategies in response to climate change.
Alternative Names: Various Indian nations and clan names Location: throughout USA, especially west and south-west Population: approximately 1.5 million % of Population: 0.63% Language: English and various Indian dialects There are approximately one-and-a-half million Indians living in the USA today. They are descendants of the original inhabitants of North America and do not represent a homogeneous group but have different social, cultural, economic, and linguistic characteristics. The Bureau of Indian Affairs (BIA), which supervises all Indian affairs in the United States, recognizes 283 tribes in the mainland United States. These tribes receive special federal services, and trusteeships for their lands and assets, based on treaties signed in the nineteenth century. Tribes range widely in size and character: from reservations of more than 22,000 square miles with populations of more than 130,000, to tiny bands of fewer than 100 people with a few acres, to groups who in outward appearance are almost indistinguishable from their white neighbours. Some Indians live in cities and towns and therefore cease to be eligible for BIA services. Additionally, there are some groups who identify themselves as Indian but are not officially recognized as such. These are tribes who had their status terminated in the 1950s and 1960s or groups that never had federal status at all. Before European discovery and settlement there were perhaps three million Indians in present-day USA, with 600 distinct societies ranging from tiny hunting and gathering bands to sophisticated agricultural nations. Indian societies were generally small communities of only a few hundred people, divided by distance and traditional hostilities. Even at their zenith the larger nations numbered only approximately 60,000 individuals. Each group adapted to its own environment, and had distinct cultures, economies, beliefs and customs. The Atlantic and Pacific seaboards were the most densely populated.
The west coast and northwest had an abundance of fish, game, and wild plants, so the groups in these areas had prosperous, settled communities with rich cultures. The eastern seaboard was populated by farming nations whose people lived in permanent, well-ordered towns, and were usually organized into confederacies for mutual defence. Westward, across the Appalachian mountains, lived smaller, more scattered migratory groups who usually depended on hunting, supplemented by a small amount of agriculture. Further west, in the great plains region, lived societies that depended primarily on hunting, while in the south-west, between the southern plains and present-day California, lived the Pueblo peoples, whose civilizations were influenced by the great indigenous civilizations of Central America and Mexico. They had adobe towns and cultivated the earth. This region was also home to the wandering bands of Navajos and Apaches. The Great Basin region, in present-day Utah and Nevada, was the poorest area. It was populated by tiny migratory bands of 15-20 people. In the larger societies there were hereditary hierarchies and elementary policing systems, but in general decisions were reached by consensus, and individuals who disagreed with the decisions of the group could leave to join another tribe or form a society of their own. Religion played a very important part in the life of the Indians. They believed in influential spirit forces as well as a cosmic unity that embraced man, animals, plants, and the elements. They had a reverence for the land and adapted their cultures to the peculiarities of their environments. The notion that the earth was their mother was a literal belief. They had an immense knowledge of nature and the resources of their own areas, and their diet was more varied and plentiful than in Europe.
The first European conquerors were the Spanish who in 1598 declared the territory of the Pueblos in the south-west to be part of the Spanish empire, established the capital of Santa Fe and forced the Pueblos to work as slaves. The Pueblos later rebelled and were successful but 12 years later all the Pueblo tribes except the Hopi were again subdued. By 1656 the Spanish had also established settlements in Florida. Meanwhile the Dutch established a trading colony in Manhattan and the French established Port Royal in modern Nova Scotia. By the end of the seventeenth century the French, who enjoyed relatively peaceful relations with the Indians (because their primary interest was the fur trade and not land acquisition), had spread out from Canada down along the Illinois river to the mouth of the Mississippi. The British had meanwhile founded Jamestown in Virginia. Less than a century later the English had colonies stretching from Maine to the Carolinas. In New England as well as in Virginia relations between the Europeans and the Indians were at first friendly. The Indians helped the early colonists to survive, sometimes even providing protection and the Europeans gave iron implements and goods to the Indians. By the 1630s the colonists had become self-sufficient. There was increased immigration and as a result they encroached further onto Indian territory. The Europeans had a devastating effect on the Indians. They brought diseases that wiped out whole Indian populations. By 1662 a long stretch of the New England coast had been depopulated and whole communities wiped out. Over-hunting caused the extermination of fur-bearing animals from region to region and trade with the Indians eventually bred dependence on European trade goods, iron tools and weapons which were clearly superior to the implements of the Indians. 
It was inevitable that the three European powers would fight over ultimate control of the territory, and between 1689 and 1763 they were fighting among themselves. It was the Indians, however, who were most affected by these wars. Indians had turned against each other to aid their European allies. Some tribes were wiped out by other Indian tribes. At the end of these wars many of the tribes east of the Mississippi were destroyed. The end result was that Indian land was confiscated by Europeans, and eventually even those tribes that had fought with and for the victor found their lands taken over by whites as European immigration to the new world increased. After the wars the British government realized the necessity of native allies on the frontier, so it issued the Royal Proclamation of 1763, which outlined plans for a permanent Indian territory west of the Alleghenies. The proclamation forbade private individuals or organizations to take or buy tribal lands, but because the authorities could not police the frontier indefinitely against the stream of settlers moving south and west, the proclamation was a failure. In 1776, when the colonies rebelled, the Indians were again divided and weakened. Those tribes that had fought for the British lost their lands, and the new United States signed treaties with the south-eastern nations, forcing them to cede lands already seized by whites, but recognizing and guaranteeing their title to the lands remaining. This “nation to nation” relationship was reaffirmed by the US Congress in October 1988 on the 200th anniversary of the Constitution. In 1828 Andrew Jackson, an avowed Indian hater, was elected president. He was of the opinion that all tribes east of the Mississippi river should be moved, by force if necessary, west of the Mississippi.
Jackson’s policy had the greatest effect on the “Five Civilized Tribes”: the Creeks, Chickasaws, Cherokees and Choctaws of Mississippi, Alabama and Georgia, and the Seminoles of Florida, which had been ceded to the USA by Spain in 1821. These tribes were living peacefully with their non-Indian neighbours and had embraced European social, educational and political systems. In 1830, when President Jackson’s Indian Removal Act became law, the tribes were removed one by one to land in present-day Oklahoma. The Cherokees, under the leadership of chief John Ross, fought back through the federal courts, which in 1832 upheld their case in a decision that said that they were independent political communities that retained their individual rights. Jackson’s reply was that John Marshall (the Chief Justice of the Supreme Court) had made his decision, so now let him enforce it. Later that year the Georgia government held a lottery and much of the Cherokees’ land was distributed to the winners. Some of the Cherokees resisted and continued to live a marginal existence in the area, but they were eventually moved by force to modern Oklahoma. In this move 4,000 Cherokee died and the journey became known as the “trail of tears”. A small group of Cherokees, however, managed to escape and hid in the Carolina mountains, as did a large part of the Seminoles, who held out in the Florida swamps. After seven years of fighting the army gave up and left the Indians alone but, despite this small victory, 1832 marked the end of armed resistance east of the Mississippi. This pattern was followed throughout much of the 1800s. As new territory in the west was granted statehood, lands which had been promised to Indians in perpetuity were gradually taken away and the Indians herded onto reservations in areas that were not yet given statehood or had not yet been found to be economically profitable for the whites.
When Indian land was not given freely it was taken by deceit or force, and sometimes whole populations were wiped out in the process. In California, the Indian population fell from an estimated pre-European level of 350,000 to 20,000 by 1880. The only successful resistance effort came from the plains region, which was home to the tribes who, since the middle of the eighteenth century, had adopted the horse and gun from the Europeans. These tribes, the Sioux, Cheyenne and Comanche among them, had established a warrior ethic and developed great military skill to protect their hunting territories from encroachment by whites. Sporadic fighting continued in this region until the 1880s, when small bands of Apaches, who had continued to hold out in the south-west, were finally subdued. By the end of the century the Indians were completely dependent on their conquerors, and the population was down to one tenth of its pre-European level due to disease and warfare. The General Allotment Act was passed in 1887. Also known as the Dawes Act, after the Senator who initiated the proposal, it was an honest attempt by some to transform Indian society by assimilation. Indians were each given a plot of land, approximately 160 acres, held in trust until the Indian owner was thought competent enough to hold the land in fee simple. The economic effect of the Act was that, by 1890, 17.4 million acres of Indian land which had been retained after the wars of the 1880s were now part of the public domain, and eventually more than 90 million acres of Indian land no longer belonged to them. The social effects of the Act were even greater. Tribes were broken up when tribal lands were lost and the social structure of the tribes was threatened. The old system of communal property, which was vital to Indian social and traditional survival, was destroyed and the Indians were left feeling discontented and hopeless.
Young Indians were deprived of their traditions and did not get the skills necessary to survive in the white world. As a result they found themselves caught between two worlds, neither of which they were equipped to deal with. The Meriam Report, published in 1928, described the Indians as destitute and their housing, sanitation and health conditions as deplorable. In 1933, when Franklin Roosevelt became president, the direction of Indian policy changed. The Reorganization Act of 1934 was accepted by 191 tribes and became law. It re-established the sovereignty of Indian tribes, and tribal governments were given the authority to draw up constitutions and to assume judicial and fiscal control over the reservations. Allotment of tribal lands was halted, two million dollars was allotted for Indian land acquisition, and a 10 million dollar loan fund was established so that economic enterprises could be undertaken by the Indians themselves. Religious freedoms were extended, educational programmes were re-evaluated, and the O’Malley Act of 1934 gave the BIA authority to make contracts with federal, state and local agencies for specific Indian programmes. Despite the war and the depression, the Indian New Deal achieved good results. Indian beef-cattle holdings increased by 105% and their yield of animal products more than twentyfold. Indians became good credit risks, for of the $12 million that had been loaned to Indians only $3,627 had been cancelled as uncollectable. However, there were those who argued that the Indians should be more rapidly absorbed into mainstream America, and during the 1950s, when Dillon S. Myer (who had been in charge of Japanese internment during the Second World War) became the Commissioner of Indian Affairs, official policy toward the Indians changed.
The policy was embodied in House Concurrent Resolution (HCR) 108, which was adopted by Congress in 1953 and stated that the Indians should be freed as soon as possible from all federal supervision and responsibility; thus they would be forced to assimilate into white society. Another bill was passed extending the authority of states to enact similar legislation. Some Indian groups fought against termination, arguing that supervision should continue for a period of time so that the tribes could prepare themselves for termination. However, by 1960, 61 Indian tribes and bands had been terminated. Termination was also followed by a policy of withdrawal which meant that development projects were discontinued, loan funds were frozen and federal services ceased. In 1944 the National Congress of American Indians was formed to represent every recognized tribe in the country. By 1960 they, along with the new Indian leaders who had fought in World War II and the Korean war, and were more aware of how white society functioned, managed to stop the termination policy and it has never actually been resumed. The original function of the Bureau of Indian Affairs (BIA) was to hold the lands in trust for the tribes. Because of its structure the BIA has over the years become unresponsive to the plight of the Indians and very bureaucratic. The head of the bureau is the Assistant Secretary for Indian Affairs, who has ultimate control over the various Indian nations’ constitutions, the composition of their governments, their power to make contracts, the disposition of their property and the funding and implementation of most programmes that affect them. The BIA has authority to veto decisions made by the tribal Councils. The policies of the Bureau are decided by the Congressional Committee on Interior and Insular Affairs and by the Indian section of the Bureau of the Budget, which is subject to changing political fashions in the country.
The BIA also has a position in the Department of the Interior which is under the management of the Assistant Secretary for Public Land Management. The Bureau has at times been negligent or has abused its function and authority. One example of the many abuses by the BIA concerns the land rights of the 3,650 Cheyenne people living in their original homeland of eastern Montana on 433,434 acres of land that is rich in coal. Between 1966 and 1971 the BIA drew up leases with energy companies which were economically unfavourable to the tribe, without any clauses to protect either the land or the Cheyenne population. After six years of legal battles, the leases were finally revoked by act of Congress in 1980. Although several mineral companies received compensation, the Cheyenne did not. The 1960s saw an increase in Indian political activity. In 1961 the Chicago American Indian Conference was held and representatives of 90 tribes set out the goals of the Indian community. They wanted to retain their Indian culture and special relationship with the Federal government but they also proposed improving government programmes so that one day Indians would be self-sufficient. The same year the National Indian Youth Council (NIYC) was founded by 10 college-educated Indians. The NIYC was a more radical group whose leaders were impatient with the BIA. They wanted a clear definition of Indian culture and Indian rights and their first focus of attention was on the north-west states and native fishing rights. They staged sit-ins and fish-ins and demanded recognition of the rights guaranteed to the Indians in treaties made with them by the Federal government. When the BIA was slow to act the NIYC decided to use force if necessary to resist state action and a series of confrontations followed. The efforts were successful and the government eventually did file charges against the state governments on behalf of the Indians. Other more radical Indian groups also followed.
One was the American Indian Movement (AIM), which consisted mainly of urban Indians. They used confrontations and demonstrations to draw attention to the problems of native Americans. In 1972 they and a number of other groups organized a march on Washington known as the “Trail of Broken Treaties” to present a list of grievances and a 20-point programme to stress the treaty rights of the tribes. The Indians were not able to meet with officials and the 20-point programme, which was formulated by a number of representatives from different groups around the country, was never considered. This was partly the fault of AIM itself, which was perceived by the public as destructive because of damage done to the BIA building which the marchers occupied for six days when they found out that officials would not meet with them. In addition, members of the Tribal Chairman’s Association, which was formed to counteract the more radical elements of groups like AIM, held a press conference denouncing the demonstrators. During the 1960s legal aid organizations were also set up to help the Indians fight for their rights in the courts. One of the most important of these organizations was the Native American Rights Fund (NARF). The NARF encouraged non-recognized Indian groups such as some Eastern Indian groups to present claims for their land. The government eventually settled claims with many of the tribes. One such example is the Passamaquoddy and Penobscot Indians of Maine, who received 300,000 acres of undeveloped land and $27.5 million. The Indians went on to invest their money and land in small businesses that have provided jobs for Indians as well as non-Indians in the area. American Indian organizations have forged international links with other oppressed indigenous peoples. In response to their attempts to be awarded UN representation as sovereign nations, the UN Working Group on Indigenous Populations was established in 1982. Indians tend to be very poor.
Most still live on reservations where work is scarce. In 1985 half of the Indian workforce had no work while in some areas unemployment was as high as 75%. There are housing shortages on the reservations and 55% of homes are sub-standard. The Indian population has a greater incidence of communicable diseases and fatal infectious illnesses. Over the years a welfare society has developed. Many Indian people are depressed, lacking in initiative and self-assurance, and unable to live successfully in either their own culture or the white culture. These symptoms usually manifest themselves in violence, delinquency, drunkenness and despair. Accidents and suicide are the leading causes of Indian deaths. The suicide rate is twice the national average and most of the accidents are related to alcohol and drug abuse. Crimes of violence are 10 times more frequent on reservations than among the population as a whole. The Indians were forced to part with 64% of the land which they retained at the end of the Indian Wars of the 1880s and today less than 53 million acres, mostly in the mid-west and the south-west, belongs to them. These areas tend to have severe water shortages and limited economic potential. The Indian population has increased five-fold over the past century and the land base which has remained constant is unable to sustain them. The BIA estimated that 75% of the land is suitable only for the least intensive grazing, the least profitable form of agriculture, while 10% of the land has viable resources of oil, gas and minerals. Twenty-five per cent of all remaining Indian lands is in the hands of non-Indian owners because of legal entanglements. In 1964 the Economic Opportunity Act was passed, and Indians gained access to funds not controlled by the BIA. Although the budget for the Indians was small the results were good as Indians planned and implemented programmes.
For example in Washington state the Lummis, who were one of the poorest tribes, were able to establish a successful fish-farming business based on Indian cultural traditions. Because of these successes more money was eventually channelled to Indian communities. In 1970 President Nixon outlined various proposed administrative reforms. Although many of his proposals were never followed, the sacred Blue Lake of Taos Pueblo which the Indians had been trying to recover since 1906 was returned to them, the composition of the Congressional Sub-Committee of Indian Affairs was changed so that it was more responsive to Indian needs and Louis Bruce, a businessman of Indian ancestry, became Commissioner of the BIA. Later in the 1970s legislation was passed to allow some Indian tribes to take responsibility for running most or all of their federal programmes. In 1975, 25 Indian tribes in the north-west joined together to form the Council of Energy Resource Tribes, which is modelled on OPEC. In addition many other tribes have been taking advantage of legislation such as the Indian Tax Status Act of 1980 to enter into enterprises that can attract money from outside the reservation. Indeed these and other types of legislation have allowed Indians to become more self-reliant. Although reservations that follow a policy of economic and industrial expansion have a higher percentage of social breakdown, other legislative measures, like the Indian Religious Freedom Act, may help to offset these developments by allowing the Indians to retain their traditions and culture. The Act, which was passed during the 1970s, gives the same degree of protection to Indian faiths that is given to other religious faiths in the USA. This has also meant that there could be greater protection for Indian burial places and sacred sites. Government aid to the Indians and Indian programmes has continued to increase despite budget cuts during the Reagan administration.
Despite this there has been little improvement in the economic circumstances of the Indians. They are still unable to support themselves on their own land; therefore economic dependence on the government continues. The increase in government funding has meant increased involvement in the lives of the Indians. The BIA and other federal agencies now provide more than half the jobs for Indians and 60% of the Indians’ personal income. These figures are higher in reservation communities where there are no significant alternative sources of employment and wealth. In 1979 the largest Indian land settlement in American history was awarded to Sioux Indians when they won a court case against the USA which had been going on for almost a century. The Indians were awarded $105 million for the illegal seizure of the Black Hills in 1880. The Indians refused to accept the money and wanted the land instead, for it represented more than just an economic opportunity. They saw it as a chance once again to be reunited as one nation in their traditional homeland. In 1985, a Senator from New Jersey introduced a bill whereby the federally owned land, including some of the most important Indian burial sites, would be returned to them. There are a number of other land claims pending, such as those of the Western Shoshone in Nevada and the Yurok, Karok and Tolawa in the northwest. A highly controversial case has been the partition of disputed land between members of the Hopi and Dine (Navajo) peoples. In the early 1970s the Indian Education Act was passed. This allowed the Indian communities to run their own schools and to emphasize their own cultures and histories. Indian education was then put under the Department of Health, Education and Welfare. The head of the Indian Education Office was given the rank of Assistant Secretary and therefore had direct access to the White House.
In addition the Tribally Controlled Community Colleges Act was also established and is today a very successful programme. It gives the tribal government authority to establish centres for further education where members are able to return to college and learn more about their own history and culture while receiving qualifications. This was an important step for American Indians whose school drop-out rate is between 45% and 62% and who do not always have the skill or the capital required to undertake enterprises that would make the best use of their lands and resources. However, the Indians have been able to make advances legally, politically, educationally and, when given the opportunity, they have also been successful economically.
A Level Chemistry 9701/12/O/N/20 Q9 Chlorine dioxide, ClO2, reacts with sodium hydroxide in the reaction shown. Which statement correctly describes this redox reaction? A Chlorine atoms are oxidised and oxygen atoms are reduced. B Chlorine atoms are reduced and oxygen atoms are oxidised. C Some chlorine atoms are oxidised and some chlorine atoms are reduced. D Some oxygen atoms are oxidised and some oxygen atoms are reduced.
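The reaction itself is not reproduced above. Assuming it is the usual disproportionation of chlorine dioxide in alkali, 2ClO2 + 2NaOH → NaClO2 + NaClO3 + H2O (an assumption, since the question's equation is not shown here), the oxidation-state bookkeeping can be sketched with oxygen held at −2 throughout:

```python
def oxidation_state_of_cl(n_oxygen, charge=0):
    """Oxidation state of Cl in a Cl/O species, assuming each O is -2.

    Cl must make up the difference between the species' overall charge
    and the total contribution of the oxygen atoms.
    """
    return charge + 2 * n_oxygen

# ClO2 (neutral): Cl is +4
assert oxidation_state_of_cl(2) == 4
# ClO2^- (chlorite): Cl is +3 -> some chlorine atoms are reduced
assert oxidation_state_of_cl(2, charge=-1) == 3
# ClO3^- (chlorate): Cl is +5 -> some chlorine atoms are oxidised
assert oxidation_state_of_cl(3, charge=-1) == 5
```

Under that assumed equation, chlorine goes from +4 to both +3 and +5 while oxygen stays at −2, i.e. some chlorine atoms are oxidised and some are reduced.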
Word Search by Letters
How to make the process of word search accurate:
- Enter the letters you know in the order in which they are found in the word.
- Select the desired word length if you are looking for words with a certain number of letters.
- The system will present the matching words, grouped in blocks.
You have the opportunity not only to learn new words based on the parameters you set, but also to become familiar with their use in the text, which helps you remember the lexical meaning of a word better.
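The lookup described above (letters matched in order, with an optional length filter) can be sketched in a few lines; the `matches` helper and the sample word list are illustrative, not the site's actual implementation:

```python
def matches(word, letters_in_order, length=None):
    """True if `word` has the given length (when specified) and contains
    the entered letters as a subsequence, i.e. in the same order."""
    if length is not None and len(word) != length:
        return False
    it = iter(word)
    # `ch in it` advances the iterator, so letters must appear in order.
    return all(ch in it for ch in letters_in_order)

words = ["planet", "plant", "pant", "lane"]
found = [w for w in words if matches(w, "pln")]   # -> ["planet", "plant"]
```

The iterator trick makes this an ordered-subsequence test rather than a simple "contains all letters" test, which is exactly why the entry order of the letters matters.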
Sudden Sensorineural Hearing Loss (SSHL), or sudden deafness, is a rapid loss of hearing. SSHL can happen to a person all at once or over a period of up to 3 days. It should be considered a medical emergency. A person who experiences SSHL should visit a doctor immediately. A doctor can determine whether a person has experienced SSHL by conducting a normal hearing test. If a loss of at least 30 decibels in three connected frequencies is discovered, it is diagnosed as SSHL. A decibel is a measure of sound. A decibel level of 30 is half as loud as a normal conversation. Frequency is another way of measuring sound. Frequencies measure sound waves and help to determine what makes one sound different from another sound. Hearing loss affects only one ear in 9 out of 10 people who experience SSHL. Many people notice it when they wake up in the morning. Others first notice it when they try to use the deafened ear, such as when they make a phone call. Still others notice a loud, alarming “pop” just before their hearing disappears. People with SSHL often experience dizziness or a ringing in their ears (tinnitus), or both. Some patients recover completely without medical intervention, often within the first 3 days. This is called a spontaneous recovery. Others get better slowly over a 1 or 2 week period. Although a good to excellent recovery is likely, 15 percent of those with SSHL experience a hearing loss that gets worse over time. Approximately 4,000 new cases of SSHL occur each year in the United States. It can affect anyone, but for unknown reasons it happens most often to people between the ages of 30 and 60. Though there are more than 100 possible causes of sudden deafness, it is rare for a specific cause to be precisely identified. Only 10 to 15 percent of patients with SSHL know what caused their loss. Normally, diagnosis is based on the patient’s medical history. Possible causes include:
- Infectious diseases.
- Trauma, such as a head injury.
- Abnormal tissue growth.
- Immunologic diseases such as Cogan’s syndrome.
- Toxic causes, such as snake bites.
- Ototoxic drugs (drugs that harm the ear).
- Circulatory problems.
- Neurologic causes such as multiple sclerosis.
- Relation to disorders such as Ménière’s disease.
People who experience SSHL should see a physician immediately. Doctors believe that finding medical help fast increases the chances for recovery. Several treatments are used for SSHL, but researchers are not yet certain which is the best for any one cause. If a specific cause is identified, a doctor may prescribe antibiotics for the patient. Or, a doctor may advise a patient to stop taking any medicine that can irritate or damage the ear. The most common therapy for SSHL, especially in cases with an unknown cause, is treatment with steroids. Steroids are used to treat many different disorders and usually work to reduce inflammation, decrease swelling, and help the body fight illness. Steroid treatment helps some SSHL patients who also have conditions that affect the immune system, which is the body’s defense against disease. Another common method that may help some patients is a diet low in salt. Researchers believe that this method aids people with SSHL who also have Ménière’s disease, a hearing and balance disorder. Two factors that help hearing function properly are good air and blood flow inside the ear. Many researchers now think that SSHL happens when important parts of the inner ear do not receive enough oxygen. A common treatment for this possible cause is called carbogen inhalation. Carbogen is a mixture of oxygen and carbon dioxide that seems to help air and blood flow better inside the ear. Like steroid therapy, carbogen inhalation does not help every patient, but some SSHL patients taking carbogen have recovered over a period of time.
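The diagnostic rule stated earlier (a loss of at least 30 decibels at three connected frequencies) can be expressed as a small check over an audiogram. This is a hypothetical sketch of that criterion, not clinical software; the function name and the example loss values are illustrative:

```python
def meets_sshl_criterion(loss_db_by_freq, threshold_db=30, run_length=3):
    """loss_db_by_freq: hearing-level losses (dB) at consecutive test
    frequencies. Returns True if at least `run_length` consecutive
    frequencies each show a loss of `threshold_db` dB or more."""
    run = 0
    for loss in loss_db_by_freq:
        run = run + 1 if loss >= threshold_db else 0
        if run >= run_length:
            return True
    return False

# Three consecutive frequencies at >= 30 dB loss -> meets the criterion.
assert meets_sshl_criterion([10, 35, 40, 45, 20]) is True
# Large losses, but never at three consecutive frequencies.
assert meets_sshl_criterion([10, 35, 20, 45, 30]) is False
```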
The climate disaster is here Earth is already becoming unlivable. Will governments act to stop this disaster from getting worse? The enormous, unprecedented pain and turmoil caused by the climate crisis is often discussed alongside what can seem like surprisingly small temperature increases – 1.5C or 2C hotter than it was in the era just before the car replaced the horse and cart. These temperature thresholds will again be the focus of upcoming UN climate talks at the COP26 summit in Scotland as countries variously dawdle or scramble to avert climate catastrophe. But the single digit numbers obscure huge ramifications at stake. “We have built a civilization based on a world that doesn’t exist anymore,” as Katharine Hayhoe, a climate scientist at Texas Tech University and chief scientist at the Nature Conservancy, puts it. The world has already heated up by around 1.2C, on average, since the preindustrial era, pushing humanity beyond almost all historical boundaries. Cranking up the temperature of the entire globe this much within little more than a century is, in fact, extraordinary, with the oceans alone absorbing the heat equivalent of five Hiroshima atomic bombs dropping into the water every second. Until now, human civilization has operated within a narrow, stable band of temperature. Through the burning of fossil fuels, we have now unmoored ourselves from our past, as if we have transplanted ourselves onto another planet. The last time it was hotter than now was at least 125,000 years ago, while the atmosphere has more heat-trapping carbon dioxide in it than any time in the past two million years, perhaps more. Since 1970, the Earth’s temperature has raced upwards faster than in any comparable period. The oceans have heated up at a rate not seen in at least 11,000 years. “We are conducting an unprecedented experiment with our planet,” said Hayhoe. “The temperature has only moved a few tenths of a degree for us until now, just small wiggles in the road. 
But now we are hitting a curve we’ve never seen before.” No one is entirely sure how this horrifying experiment will end but humans like defined goals and so, in the 2015 Paris climate agreement, nearly 200 countries agreed to limit the global temperature rise to “well below” 2C, with an aspirational goal to keep it to 1.5C. The latter target was fought for by smaller, poorer nations, aware that an existential threat of unlivable heatwaves, floods and drought hinged upon this ostensibly small increment. “The difference between 1.5C and 2C is a death sentence for the Maldives,” said Ibrahim Mohamed Solih, president of the country, to world leaders at the United Nations in September. There is no huge chasm after a 1.49C rise; we are tumbling down a painful, worsening rocky slope rather than about to suddenly hit a sheer cliff edge – but by most standards the world’s governments are currently failing to avert a grim fate. “We are on a catastrophic path,” said António Guterres, secretary general of the UN. “We can either save our world or condemn humanity to a hellish future.” Earth’s atmosphere, now saturated with emissions from human activity, is trapping warmth and leading to more frequent periods of extreme heat. This year has provided bitter evidence that even current levels of warming are disastrous, with astounding floods in Germany and China, Hades-like fires from Canada to California to Greece and rain, rather than snow, falling for the first time at the summit of a rapidly melting Greenland. “No amount of global warming can be considered safe and people are already dying from climate change,” said Amanda Maycock, an expert in climate dynamics at the University of Leeds.
A “heat dome” that pulverized previous temperature records in the US’s Pacific northwest in June, killing hundreds of people as well as a billion sea creatures roasted alive in their shells off the coast, would’ve been “virtually impossible” if human activity hadn’t heated the planet, scientists have calculated, while the German floods were made nine times more likely by the climate crisis. “The fingerprint of climate change on recent extreme weather is quite clear,” said Michael Wehner, who specializes in climate attribution at Lawrence Berkeley National Laboratory. “But even I am surprised by the number and scale of weather disasters in 2021.” After a Covid-induced blip last year, greenhouse gas emissions have roared back in 2021, further dampening slim hopes that the world will keep within the 1.5C limit. “There’s a high chance we will get to 1.5C in the next decade,” said Joeri Rogelj, a climate scientist at Imperial College London. For humans, a comfortably livable planet starts to spiral away the more it heats up. At 1.5C, about 14% of the world’s population will be hit by severe heatwaves once every five years, with this number jumping to more than a third of the global population at 2C. Beyond 1.5C, the heat in tropical regions of the world will push societies to the limits, with stifling humidity preventing sweat from evaporating and making it difficult for people to cool down. Extreme heatwaves could make parts of the Middle East too hot for humans to endure, scientists have found, with rising temperatures also posing enormous risks for China and India. A severe heatwave historically expected once a decade will happen every other year at 2C. “Something our great-grandparents maybe experienced once a lifetime will become a regular event,” said Rogelj. Globally, an extra 4.9 million people will die each year from extreme heat should the average temperature race beyond this point, scientists have estimated.
At 2C warming, 99% of the world’s coral reefs also start to dissolve away, essentially ending warm-water corals. Nearly one in 10 vertebrate animals and almost one in five plants will lose half of their habitat. Ecosystems spanning corals, wetlands, alpine areas and the Arctic “are set to die off” at this level of heating, according to Rogelj. Earth’s hotter climate is causing the atmosphere to hold more water, then releasing the water in the form of extreme precipitation events. Across the planet, people are set to be strafed by cascading storms, heatwaves, flooding and drought. Around 216 million people, mostly from developing countries, will be forced to flee these impacts by 2050 unless radical action is taken, the World Bank has estimated. As much as $23tn is on track to be wiped from the global economy, potentially upending many more lives. Some of the most dire impacts revolve around water – both the lack of it and inundation by it. Enormous floods, often fueled by abnormally heavy rainfall, have become a regular occurrence recently, not only in Germany and China but also from the US, where the Mississippi River spent most of 2019 in a state of flood, to the UK, which was hit by floods in 2020 after storms delivered the equivalent of one month of rain in 48 hours, to Sudan, where flooding wiped out more than 110,000 homes last year. Meanwhile, in the past 20 years the aggregated level of terrestrial water available to humanity has dropped at a rate of 1cm per year, with more than five billion people expected to have an inadequate water supply within the next three decades. At 3C of warming, sea level rise from melting glaciers and ocean heat will also provide torrents of unwelcome water to coastal cities, with places such as Miami, Shanghai and Bangladesh in danger of becoming largely marine environments.
The frequency of heavy precipitation events, the sort that soaked Germany and China, will start to climb, nearly doubling the historical norm once it heats up by 2C. Earth’s hotter atmosphere soaks up water from the earth, drying out trees and tinder that amplify the severity of wildfires. Virtually all of North America and Europe will be at heightened risk of wildfires at 3C of heating, with places like California already stuck in a debilitating cycle of “heat, drought and fire”, according to scientists. The magnitude of the disastrous “Black Summer” bushfire season in Australia in 2019-20 will be four times more likely to reoccur at 2C of heating, and will be fairly commonplace at 3C. A disquieting unknown for climate scientists is the knock-on impacts as epochal norms continue to fall. Record wildfires in California last year, for example, resulted in a million children missing a significant amount of time in school. What if permafrost melting or flooding cuts off critical roads used by supply chains? What if storms knock out the world’s leading computer chip factory? What happens once half of the world is exposed to disease-carrying mosquitos? “We’ve never seen the climate change this fast so we don’t understand the non-linear effects,” said Hayhoe. “There are tipping points in our human-built systems that we don’t think about enough. More carbon means worse impacts which means more unpleasant surprises.” Unpredictable weather, like too much or too little rainfall, decreases the quantity and quality of crop yields. There are few less pleasant impacts in life than famine and the climate crisis is beginning to take a toll on food production. In August, the UN said that Madagascar was on the brink of the world’s first “climate change famine”, with tens of thousands of people at risk following four years with barely any rain.
Globally, extreme crop drought events that previously occurred once a decade on average will more than double in their frequency at 2C of temperature rise. Heat the world a bit more than this and a third of all the world’s food production will be at risk by the end of the century as crops start to wilt and fail in the heat. Many different aspects of the climate crisis will destabilize food production, such as dropping levels of groundwater and shrinking snowpacks, another critical source of irrigation, in places such as the Himalayas. Crop yields decline the hotter it gets, while more extreme floods and storms risk ruining vast tracts of farmland. Despite the rapid advance of renewable energy and, more recently, electric vehicles, countries still remain umbilically connected to fossil fuels, subsidizing oil, coal and gas to the tune of around $11m every single minute. The air pollution alone from burning these fuels kills nearly nine million people each year globally. Decades of time have been squandered – US president Lyndon Johnson was warned of the climate crisis by scientists when Joe Biden was still in college and yet industry denial and government inertia mean the world is set for a 2.7C increase in temperature this century, even if all emissions reduction pledges are met. By the end of this year the world will have burned through 86% of the carbon “budget” that would allow us just a coin flip’s chance of staying below 1.5C. The Glasgow COP talks will somehow have to bridge this yawning gap, with scientists warning the world will have to cut emissions in half this decade before zeroing them out by 2050. “2.7C would be very bad,” said Wehner, who explained that extreme rainfall would be up to a quarter heavier than now, and heatwaves potentially 6C hotter in many countries. Maycock added that much of the planet will become “uninhabitable” at this level of heating. “We would not want to live in that world,” she said.
A scenario approaching some sort of apocalypse would comfortably arrive should the world heat up by 4C or more, and although this is considered unlikely due to the belated action by governments, it should provide little comfort. Every decision – every oil drilling lease, every acre of the Amazon rainforest torched for livestock pasture, every new gas-guzzling SUV that rolls onto the road – will decide how far we tumble down the hill. In Glasgow, governments will be challenged to show they will fight every fraction of temperature rise, or else, in the words of Greta Thunberg, this pivotal gathering is at risk of being dismissed as “blah, blah, blah”. “We’ve run down the clock but it’s never too late,” said Rogelj. “1.7C is better than 1.9C which is better than 3C. Cutting emissions tomorrow is better than the day after, because we can always avoid worse happening. The action is far too slow at the moment, but we can still act.” *This article was amended on 15 October 2021 with the correct IPCC projections for when global temperatures are expected to reach each threshold and to correct the spelling of Wooroloo. The time ranges in each map have also been amended to show time range projections from the Climate Action Tracker's current policies pathway. 14 October 2021
In electronics there are thin slices of semiconductor material called wafers. These semiconductor wafers are usually made by etching silicon and are widely used in the production of integrated circuits and various other microelectromechanical systems (MEMS). These parts are essential in almost all modern technology including computers, cell phones, and all other digital home devices. Si etching in general, and silicon-wafer etching in particular, is one of the most widespread and best-understood etching processes. Usually, the etching is performed by a working gas or gas mixture, which contains fluorine atoms. Sometimes noble gases such as xenon or argon are also added. Some popular plasma species for wafer etching are CF4, CCl4, SF6, XeF2, or radicals like CF3 or CF2. The reason for this common usage of fluorine components in the etching process is that this element reacts swiftly with silicon. This plasma etching process is even more rapid when reactive ion etching (RIE) is applied. In that technology, strong fluxes of highly energetic ions are directed toward the wafer surface. This increases the etch rates by up to an order of magnitude compared to purely chemical etching. For example, one argon ion with a kinetic energy of 1 keV can remove up to 25 silicon atoms from the wafer surface. These high etch rates are partly achieved by sputtering and partly because the fluorine atoms chemically bind to the surface atoms and vaporize. The removal of the top layers of the substrate either occurs via isotropic etching (e.g. as it happens in wet chemical processes) or via anisotropic etching, which can be achieved through plasma and RIE. Another common type of etchant is Cl. If chlorine is used for silicon wafer etching, the etch rates are negligible at room temperature and, in general, lower than for fluorine. This can be an advantage when it comes to selectivity. For example, masks on a silicon surface will be etched by chlorine while the Si surface is left intact.
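The figure of up to 25 silicon atoms removed per 1 keV argon ion translates directly into an etch rate once an ion flux is assumed. The sketch below uses the known atomic density of silicon; the flux value of 1e15 ions/cm²/s is an assumed illustrative number, not a figure from the text:

```python
SI_DENSITY = 5.0e22  # atomic density of silicon, atoms per cm^3

def etch_rate_nm_per_min(ion_flux_cm2_s, atoms_removed_per_ion):
    """Etch rate implied by an ion flux and a per-ion removal yield."""
    atoms_removed_per_cm2_s = ion_flux_cm2_s * atoms_removed_per_ion
    rate_cm_per_s = atoms_removed_per_cm2_s / SI_DENSITY
    return rate_cm_per_s * 1e7 * 60  # convert cm/s to nm/min

# Assumed flux of 1e15 ions/cm^2/s, 25 Si atoms removed per ion
rate = etch_rate_nm_per_min(1.0e15, 25)  # -> 300 nm/min
```

Even with this modest assumed flux, the ion-assisted rate lands in the hundreds of nm/min, which is consistent with the order-of-magnitude advantage over purely chemical etching described above.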
Another important aspect of chlorine and fluorine is the so-called doping effect. This effect leads to a higher etch rate for n-type silicon than for p-type silicon wafers. Depending on the aim of the process, this can either be a bug or a feature, which has to be kept in mind. Plasma etching silicon wafers is a crucial step in the production process for integrated circuits that need to be very small and thin in size. Trying to create the patterns necessary on the surface of these minuscule devices is difficult without damaging them, but with the help of plasma technology, etching silicon wafers can be done very effectively. When etching silicon for the production of integrated circuits, Thierry uses a process called reactive ion etching, or RIE. This process is a different form of plasma etching that combines chemically active plasma such as XeF2 with particles that deliver more physical energy, like argon plasma. Etching silicon requires a more vigorous approach and RIE can accomplish this task and carve out the necessary micro-structure in the silicon wafers that will be used to make the integrated circuits. Silicon nitride or Si3N4 is also commonly etched with fluorine-containing plasma. One of the biggest advantages is that fluorine has a very good selectivity, even at room temperature. This means that the plasma predominantly etches the silicon at the surfaces rather than dopants, for example. The same holds for silicon oxides and similar materials. It is important to etch Si3N4 selectively because it is used as a dielectric material for semiconductors. Silicon nitride can also be utilized as a masking material for the selective oxidation of pure silicon. After the oxidation process, the silicon nitride layer often has to be removed via etching. If conventional dry etching is employed, the etching itself is isotropic, comparable to wet etching processes.
The latter have the drawback that they are highly isotropic and the etching chemicals can even undercut applied etching masks. However, in plasma, electric fields can be applied to achieve anisotropic etching. These fields accelerate energetic ions toward the surface. These ions are then able to etch practically only in one direction. The feedstock in plasma-assisted Si3N4 etching is normally some kind of fluorocarbon-containing gas. When Si3N4 is etched with plasma it has to be taken into account that the etch rate depends on how the silicon nitride layer was created. The etch rate is higher if the substrate was coated with a PECVD process than for LPCVD depositions. Silicon dioxide, like other Si-containing compounds, is readily etched by fluorine or carbon-fluorine-containing plasma. At room temperature typical etch rates are several tens of nm/min. These rates are comparatively low because SiO2 is much harder to etch (about 40 times) than pure silicon. For this reason, the best way to go is RIE, as it is used in the system by Thierry Corp. If the ion energy is around 500 eV, or even higher, the etching becomes mostly anisotropic and the etch rate surpasses 200 nm/min. If a carbon-fluorine compound like CF4 is used in the process, one can achieve great selectivity of SiO2 compared to Si. The main disadvantage of fluorocarbon etching is the unwanted deposition that can occur parallel to the etching process. To learn more about etching, check out our eBook titled "Plasma Etching and Cleaning Strategy for Better Product Quality."