During the 1960s, when most African nature reserves were being established, lions tended to be born free. But today, freedom doesn't always serve them well. Fifty years ago, human population densities were low in the areas where lions roamed. But since then, the human population in that part of Africa has increased fourfold to fivefold, and demands on land have intensified. The prey that lions rely on has been reduced by poaching and habitat loss, which means that lions living in unfenced preserves roam out into farms and pastures, where they kill livestock — or humans. In the last 20 years, lions have attacked more than 1,000 people in southern Tanzania. The big cats have become a problem not because of anything they're doing wrong. They're just being lions. The problem is that few African nations can invest adequately in the management of their parks. Lions live at the top of the ecological pyramid, and they can thrive only in healthy ecosystems. But although African nations have allotted more than 400,000 square miles as wildlife areas — more real estate than California, Oregon and Nevada combined — the money to take care of those parks is inadequate. How much does it cost to conserve a species like the lion? Along with 57 other scientists, I recently compiled data on the current status of lion populations in 11 African countries. We assessed how well lions were being managed in each area by comparing their current population sizes against the numbers that would be predicted on the basis of prey abundance in each park. We found that conservation success depends on two things: dollars and fences. Unfenced lion populations needed budgets of about $5,000 a square mile each year to reach even half their potential size. Without that, lion populations are losing ground. We estimated that nearly half the unfenced populations are at risk of extinction in the next 20 to 30 years. In parks that are surrounded by wildlife-proof fences — such as South Africa's Kruger National Park, which is about the size of New Jersey — it's a very different picture. Their lion populations exceed 80% of their potential, and the cost of conserving them is only about $1,250 a square mile annually. Moreover, none of the fenced populations are heading toward extinction. Yet wildlife fencing is surprisingly contentious. Some conservationists worry that physical barriers disrupt fundamental ecological processes; others seek to retain a sense of untouched wilderness in romantic destinations such as Kenya and Tanzania. But open plains cannot protect wildlife, especially because so few unfenced reserves are able to raise the revenue needed to manage themselves effectively. South Africa's story provides hope even beyond the fences of its national parks. By the 1890s, South Africa was covered by Western-style ranches and farms, and dangerous wildlife had been extirpated everywhere except for Kruger and Kalahari parks. But during the 20th century, many ranchers converted their land to conservancies and established private game reserves. To allay the fears of local communities about bringing lions and other potentially dangerous animals into closer proximity to humans, the reserves were fenced. Today, more wild lions live in South Africa's fenced parks and conservancies than a century ago, yet no one complains of livestock losses, much less of man-eating lions.
It's true that fencing could destroy migratory ecosystems like that of Tarangire National Park in Tanzania, where wildebeest leave the park and mingle with livestock each year. But many of the places lions thrive, such as Tanzania's Selous Game Reserve, which holds the largest surviving lion population in Africa, would be suited to fencing. Selous encloses a nonmigratory ecosystem the size of Switzerland, and its management budget is about $5 million a year. To maintain the reserve's lion population at even 50% of what the area could sustain would take about $110 million a year. But if Selous were fenced, a $28-million annual budget could safely secure 80% of the lion population the area could sustain. Fencing Selous would cost about $30 million, well beyond the budget of the Tanzanian government but not of major donors like the World Bank. The donor community spends billions every year on human health and economic development in Africa. And because tourism directly contributes to economic development, better management of the wildlife that draws tourists would seem like exactly the kind of thing the World Bank should be funding. In the absence of a comprehensive management plan, lion populations are likely to be fragmented into an archipelago of tiny parks no larger than the scattered tiger reserves of Asia. Conservationists are already failing to save elephants and tigers, and lions won't fare any better unless there's a change in approach. If the world really wants to conserve iconic wildlife for the next 1,000 years, we need a latter-day Marshall Plan that integrates the true costs of park management into the economic priorities of international development agencies. Lions are too valuable to take for granted. Ecologist Craig Packer is a professor at the University of Minnesota and director of the Lion Research Center.
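The economics of fencing Selous can be made concrete with some simple arithmetic on the figures quoted above. The sketch below (Python) uses only numbers taken from the article, so it glosses over the different population targets (50% of potential unfenced versus 80% fenced), but it shows how quickly a one-time fence would pay for itself in reduced annual management costs.

```python
# Back-of-the-envelope comparison of the Selous figures quoted above.
# All dollar amounts come from the article; only the arithmetic is added.

unfenced_annual_cost = 110e6   # $/yr to hold lions at ~50% of potential, unfenced
fenced_annual_cost = 28e6      # $/yr to hold lions at ~80% of potential, fenced
fencing_capital_cost = 30e6    # estimated one-time cost of fencing Selous

annual_saving = unfenced_annual_cost - fenced_annual_cost
payback_years = fencing_capital_cost / annual_saving

print(f"Annual saving once fenced: ${annual_saving / 1e6:.0f} million")
print(f"Fence pays for itself in about {payback_years * 12:.0f} months")
```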
Above: Hubble Deep Field image showing myriad galaxies dating back to the beginning of time. Image by Robert Williams and the Hubble Deep Field Team (STScI) and NASA. A new study from the University of Oxford looks at the possibility of human colonization throughout the universe. Scientists as eminent as Stephen Hawking and Carl Sagan have long believed that humans will one day colonize the universe. But how easy would it be, why would we want to, and why haven’t we seen any evidence of other life forms making their own bids for universal domination? A new paper by Dr Stuart Armstrong and Dr Anders Sandberg from Oxford University’s Future of Humanity Institute (FHI) attempts to answer these questions. To be published in the August/September edition of the journal Acta Astronautica, the paper takes as its starting point the Fermi paradox – the discrepancy between the likelihood of intelligent alien life existing and the absence of observational evidence for such an existence. Dr Armstrong says: “There are two ways of looking at our paper. The first is as a study of our future – humanity could at some point colonize the universe. The second relates to potential alien species – by showing the relative ease of crossing between galaxies, it makes the lack of evidence for other intelligent life even more puzzling. This worsens the Fermi paradox.” The paradox, named after the physicist Enrico Fermi, is of particular interest to the academics at the FHI – a multidisciplinary research unit that enables leading intellects to bring the tools of mathematics, philosophy and science to bear on big-picture questions about humanity and its prospects. Dr Sandberg explains: “Why would the FHI care about the Fermi paradox? Well, the silence in the sky is telling us something about the kind of intelligence in the universe. Space isn’t full of little green men, and that could tell us a number of things about other intelligent life – it could be very rare, it could be hiding, or it could die out relatively easily. Of course it could also mean it doesn’t exist. If humanity is alone in the universe then we have an enormous moral responsibility. As the only intelligence, or perhaps the only conscious minds, we could decide the fate of the entire universe.” According to Dr Armstrong, one possible explanation for the Fermi paradox is that life destroys itself before it can spread. “That would mean we are at a higher risk than we might have thought,” he says. “That’s a concern for the future of humanity.” Dr Sandberg adds: “Almost any answer to the Fermi paradox gives rise to something uncomfortable. There is also the theory that a lot of planets are at roughly the same stage – what we call synchronized – in terms of their ability to explore the universe, but personally I don’t think that’s likely.” As Dr Armstrong points out, there are Earth-like planets much older than the Earth – in fact most of them are, in many cases by billions of years. Dr Sandberg says: “In the early 1990s we thought that perhaps there weren’t many planets out there, but now we know that the universe is teeming with planets. We have more planets than we would ever have expected.” A lack of planets where life could evolve is, therefore, unlikely to be a factor in preventing alien civilizations. Similarly, recent research has shown that life may be hardier than previously thought, further weakening the idea that the emergence of life or intelligence is the limiting factor.
But at the same time – and worryingly for those studying the future of humanity – this increases the probability that intelligent life doesn’t last long. The Acta Astronautica paper looks at just how far and wide a civilization like humanity could theoretically spread across the universe. Past studies of the Fermi paradox have mainly looked at spreading inside the Milky Way. However, this paper looks at more ambitious expansion. Dr Sandberg says: “If we wanted to go to a really remote galaxy to colonize one of these planets, under normal circumstances we would have to send rockets able to decelerate on arrival. But with the universe constantly expanding, the galaxies are moving further and further away, which makes the calculations rather tricky. What we did in the paper was combine a number of mathematical and physical tools to address this issue.” Dr Armstrong and Dr Sandberg show in the paper that, given certain technological assumptions (such as advanced automation or basic artificial intelligence, capable of self-replication), it would be feasible to construct a Dyson sphere, which would capture the energy of the sun and power a wave of intergalactic colonization. The process could be initiated on a surprisingly short timescale. But why would a civilization want to expand its horizons to other galaxies? Dr Armstrong says: “One reason for expansion could be that a sub-group wants to do it because it is being oppressed or it is ideologically committed to expansion. In that case you have the problem of the central civilization, which may want to prevent this type of expansion. The best way of doing that is to get there first. Pre-emption is perhaps the best reason for expansion.” Dr Sandberg adds: “Say a race of slimy space aliens wants to turn the universe into parking lots or advertising space – other species might want to stop that. There could be lots of good reasons for any species to want to expand, even if they don’t actually care about colonizing or owning the universe.” He concludes: “Our key point is that if any civilization anywhere in the past had wanted to expand, they would have been able to reach an enormous portion of the universe. That makes the Fermi question tougher – by a factor of billions. If intelligent life is rare, it needs to be much rarer than just one civilization per galaxy. If advanced civilizations all refrain from colonizing, this trend must be so strong that not a single one across billions of galaxies and billions of years chose to do it. And so on.” “We still don’t know what the answer is, but we know it’s more radical than previously expected.” Publication: Stuart Armstrong, Anders Sandberg, “Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox,” Acta Astronautica, Volume 89, 2013, Pages 1–13; doi:10.1016/j.actaastro.2013.04.002 Source: Stuart Gillespie, University of Oxford Image: Robert Williams and the Hubble Deep Field Team (STScI) and NASA
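To get a feel for the "factor of billions" mentioned above, it helps to see how many galaxies sit inside a sphere a few billion parsecs across. The numbers in the sketch below are illustrative assumptions, not results from the Armstrong and Sandberg paper: the reach radius is an arbitrary example, and the galaxy density is a rough average implied by common estimates for the observable universe.

```python
import math

# Illustrative only: neither figure comes from the Armstrong & Sandberg paper.
reach_radius_gpc = 4.0        # assumed comoving reach of a colonization wave, in gigaparsecs
galaxies_per_gpc3 = 1.7e7     # rough mean density (~2e11 galaxies over the observable volume)

reachable_volume = (4.0 / 3.0) * math.pi * reach_radius_gpc ** 3   # Gpc^3
reachable_galaxies = reachable_volume * galaxies_per_gpc3

print(f"Reachable volume: ~{reachable_volume:.0f} Gpc^3")
print(f"Galaxies within reach: ~{reachable_galaxies:.1e}")   # on the order of billions
```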
View of European Space Agency (ESA) astronaut Andre Kuipers, Expedition 30 flight engineer, working with the Kubik facility in the Columbus Module of the International Space Station. (NASA) European Space Agency (ESA) astronaut Thomas Reiter works with Astrolab; one of the experiments for Astrolab was Leukin, an experiment to study how human immune system cells adapt to weightlessness. (ESA) When we get sick, our immune systems kick into gear to tell our bodies how to heal. Our T cells - white blood cells that act like tiny generals - order an army of immune cells to organize and attack the enemy. Microgravity studies aboard the International Space Station are helping researchers pinpoint what drives these responses, leading to future medical treatments on Earth. "The lack of gravity is important, because we are removing a variable," said Millie Hughes-Fulford, Ph.D., a former NASA astronaut; director of the Laboratory of Cell Growth at the University of California, San Francisco; and principal investigator for the Leukin study. This investigation looks at how human immune system cells adapt to microgravity. "Much like in math, whenever you remove a variable, you can solve the equation. The space station laboratory offers us the ability to look at things in a new way and therefore perhaps find a new answer as to how the immune system works." Scientists have known since the early days of human spaceflight that living in microgravity suppresses the immune system. During the Apollo Program, for instance, 15 of the 29 astronauts developed an infection either during or right after flight. Forty years later, Leukin results show that immunosuppression begins within the first 60 hours of flight. Findings from this investigation enabled researchers to pinpoint some specific genetic triggers for the go/no-go of the immune system responses in the T cells. "We got really good results!" said Hughes-Fulford. "It was the first time anyone has been able to absolutely prove that gravity is making a difference in activation of the T cell." Samples were paired person-to-person from four different donors, allowing researchers to look directly at the changes per individual and compare them against each other. This powerful way to look at data is called a paired ANOVA - a two-way analysis of variance, based in this case on microgravity. The study used the Kubik facility aboard the station to create a control sample with simulated 1g so that all other factors, such as radiation and temperature, would be the same. This helps isolate the response of the samples to the variable of gravitational force for clear results. Answering the questions of how and why microgravity impacts immunosuppression aids researchers as they identify ways to increase the body's chances in health-related battles. "Once you activate your T cell, you are activating other parts of the immune system," said Hughes-Fulford. "So it's not just the T cell, it's the entire immune system that is affected. When a T cell does not activate, all the cells that it brings into the war against the invader are not activated or are only activated in a partial way." A healthy body depends on these T cells giving orders for the immune system to function properly as it marches into battle. There are factors that can hinder victory, however, such as signal interruption, delayed responses or even outright cell death.
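The paired, donor-matched comparison described above (each donor's cells measured both in microgravity and in the Kubik 1g control) can be illustrated with a small statistical sketch. The expression values below are invented for illustration and are not Leukin data, and the use of statsmodels is simply one common way to run this kind of two-way analysis; it is not the team's actual analysis pipeline.

```python
# A minimal sketch of a paired, two-factor design: donor is kept as a factor so
# that only the within-donor difference drives the estimated gravity effect.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "donor":      ["d1", "d1", "d2", "d2", "d3", "d3", "d4", "d4"],
    "gravity":    ["1g", "micro"] * 4,
    "expression": [10.2, 7.1, 11.0, 7.8, 9.6, 6.9, 10.8, 7.5],  # hypothetical T-cell gene activity
})

model = ols("expression ~ C(gravity) + C(donor)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # gravity effect tested with donor pairing accounted for
```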
A suppressed immune system is like an army with an ineffective leader, significantly reducing the chances of a successful fight. Results revealed that specific genes within T cells showed downregulation - a decrease in cell response - when exposed to microgravity. This combined downregulation in the genetics of T cells leads to a reduction in the body's defense against infections during spaceflight in various ways. For instance, there is a reduced proinflammatory response - the cell's protective reaction to initiate healing. Cells also produce fewer cytokines, the proteins responsible for signaling communications between cells. There is even a negative impact on a cell's ability to multiply, known as mitogenesis - the triggering of the cell division necessary for cell reproduction. Examples of immunosuppression on Earth include HIV infection and AIDS, rheumatoid arthritis and even age-related declines in the immune system, which is why the elderly have a difficult time fighting off infections like pneumonia. Identifying how the immune system works at the cellular level provides a powerful tool to develop treatments at the root of the defense response. This is like negotiating peace talks before conflict breaks out, instead of trying to raise a white flag in the midst of an already raging battle. If doctors can isolate and control specific immune responses, they increase the chance for recovery. "What is really important is what we're going to do with the data," said Hughes-Fulford. "Using this method we are able to start looking at the immune system and new control points to either activate or deactivate…That's the whole goal of what we're doing when looking at the bioinformatics to use that in application to immune diseases here on Earth." Hughes-Fulford is currently preparing for her next immunology study aboard the space station, funded by a grant from the National Institutes of Health and sponsored by the Center for Advancement of Science in Space (CASIS). Scheduled to launch with the SpaceX-3 commercial resupply mission, the investigation, called T-Cell Activation in Aging, will look at another class of control points in T cells that trigger immune response. Finding the genes that tell the cells to turn on and off is key to advancing medical options to improve immune system functions. "The base of this grant is the fact that we are able to use T cells from healthy human beings that are younger to look at the control points, like the go/no go from engineering," said Hughes-Fulford. "These are points that tell the cells yes or no, and by looking at those points we can pinpoint new potential pharmaceutical targets to treat immunosuppression."
Field theory is a psychological theory which examines patterns of interaction between the individual and the total field, or environment. The concept was developed by Kurt Lewin, a Gestalt psychologist, in the 1940s. Field theory holds that behavior must be derived from a totality of coexisting facts. These coexisting facts make up a "dynamic field", which means that the state of any part of the field depends on every other part of it. Behavior depends on the present field rather than on the past or the future. Kurt Lewin (1890-1947) was a famous, charismatic psychologist who is now viewed as the father of social psychology. Born in Germany, Lewin emigrated to the USA after the Nazis came to power. Lewin viewed the social environment as a dynamic field which interacted with human consciousness: adjust elements of the social environment and particular types of psychological experience predictably ensue; in turn, the person's psychological state influences the social field, or milieu. Lewin was well known for his terms "life space" and "field theory". He was perhaps even better known for the practical use of his theories in studying group dynamics, solving social problems related to prejudice, and group therapy (T-groups). Lewin sought not only to describe group life, but also to investigate the conditions and forces which bring about change or resist change in groups. In the field (or 'matrix') approach, Lewin believed that for change to take place, the total situation has to be taken into account. If only part of the situation is considered, a misrepresented picture is likely to develop.
Mahatma Gandhi was a scrawny, sickly kid and a mediocre student. He became a lawyer as an adult, but his shyness made him ineffective. He could be rude and tactless. And he wasn't remotely charismatic. Until, that is, he got angry. Really angry. Gandhi had moved from India to South Africa in 1893 to work as a lawyer, and one day he was traveling there by train. Although he had a first-class ticket, a white man didn't want him sitting there, so a guard threw him off. Shivering in a dark waiting room, Gandhi had an epiphany. Within a week, he was speaking out publicly on discrimination and mesmerizing crowds with his passion. He shed the English clothing he'd favored and began wearing the simple tunic-like garb of Indian farmers. Soon, his modus operandi of nonviolent protest through civil disobedience was born, which he used to work toward human rights and political equality. The more prominence and success he achieved, the more he was viewed as charismatic [sources: Denning, Daniel]. After helping to change some of the discriminatory laws in South Africa, Gandhi moved back to India in 1915. Soon, he was mobilizing the people to peacefully revolt against their British colonizers. Specifically, Gandhi instructed Indians to boycott everything British: British-made clothing, British universities and even British laws. One such law stipulated that Indians couldn't produce salt, but instead had to buy it from licensed factories -- all of which were owned by the British. So in 1930, Gandhi staged a 24-day march to the sea, later known as the Great Salt March. Hundreds of thousands of his countrymen joined the march; when they reached the sea, they used it to make their own salt [sources: Denning, Daniel]. Gandhi's tactics worked. India gained its independence in 1947, and the new country of Pakistan was also created out of the northeastern and northwestern areas, which were predominantly Muslim. Unfortunately, Gandhi was assassinated in 1948 by a Hindu nationalist who despised him for his tolerance of Muslims [source: History Learning Site].
MadSci Network: Earth Sciences
In a technical sense, it is not true that warmer air "holds" more water vapor than cold air. Actually, it is the temperature of the water vapor itself that governs the amount of water vapor that may be held in the atmosphere. The warmer the water vapor, the greater its maximum vapor pressure. Vapor pressure is the portion of atmospheric air pressure attributable to water vapor. The greater the maximum (saturation) vapor pressure, the greater the capacity of the mixture of air and vapor to hold water vapor. Since the amount of water vapor in the air is quite small compared to the rest of the gases in the atmosphere, the temperature of the water vapor is governed by the temperature of the rest of the air in which it resides. This leads to the somewhat inaccurate but very convenient notion that warmer air holds more water vapor. In order to explain this to 4th graders, we won't differentiate between vapor pressure and "air capacity." It is probably sufficient to say that the air is like a sponge. When air temperature increases, that sponge grows a little and the air can hold more water vapor. When air temperature decreases, the sponge shrinks and the air can hold less vapor. Keep in mind, though, the reality of the situation: if air temperature increases, the water vapor's temperature does too. This results in a higher saturation (or maximum) vapor pressure. If there isn't enough vapor in the air to meet the maximum, evaporation occurs as the atmosphere strives to reach balance. If air temperature decreases, the saturation vapor pressure decreases as well. If there is more vapor present than this maximum value can support, condensation occurs as the atmosphere strives to reach balance.
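For readers who want the quantitative version of the sponge analogy, a standard empirical formula makes the point. The sketch below uses the Tetens approximation for saturation vapor pressure, which is not mentioned in the answer above but is a common textbook choice; it shows the saturation value roughly doubling for every 10 degrees Celsius of warming.

```python
import math

def saturation_vapor_pressure_kpa(temp_c: float) -> float:
    """Tetens approximation for saturation vapor pressure over liquid water, in kPa."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

for t in (0, 10, 20, 30):
    print(f"{t:2d} degC -> {saturation_vapor_pressure_kpa(t):.2f} kPa")
# Output climbs from ~0.61 kPa at 0 degC to ~4.25 kPa at 30 degC:
# each 10 degC of warming roughly doubles the vapor the air can hold.
```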
The Sac Fungi
Sac fungi get their names from the fact that they produce their spores, called ascospores, in special pods or sac-like structures called asci (singular ascus). Technically this group of fungi is known as the Ascomycetes or Ascomycota. The drawing at the right shows a cross-section of a cup fungus, a kind of sac fungus, and the microscopic view shows how asci cover the inside of the cup. The orange color is just to serve as a background. You might want to compare this diagram showing asci and ascospores with the diagram showing basidia and basidiospores on the club fungus page. One of the most famous sac fungi is the Morel mushroom shown at the right. Morels are famous because they are so good to eat! Common in many parts of North America, they grow in rich soil, and though there are several species (this one is Morchella angusticeps), they are all delicious. There are some "false morels," which are not so good for eating. However, any mushroom with a honeycombed cap like the one at the right, in which the depressions are very deep, is a morel. The depressions in false morels are shallow or nonexistent. Spore-producing sacs, or asci, are borne in palisade-like layers lining the cap's depressions. Right now a terrible disease is spreading through eastern North America, killing Flowering Dogwoods, one of our most beautiful and treasured native tree species. The disease is caused by the Dogwood Anthracnose Fungus, and the picture at the left shows the disease's symptoms. Note the dark brown spots (dead tissue) surrounded by yellowness. In the disease's later stages the leaves are often red, as during the fall. The fungus kills by releasing a substance that breaks down the dogwood's tissues into something the fungus can absorb and use for its own purposes. The fungus also releases toxins that kill tissue outright, and it can spread into the stem to infect new limbs and cause cankers. There are two main types of anthracnose fungus on dogwoods. One is a species of Discula to which science hasn't yet given a species name. The other, more deadly, fungus is called Discula destructiva. Since this disease is so important, there is plenty on the Web about it, including good material on identifying the fungi by looking at their DNA. At the left you see more Flowering Dogwood leaves, this time very sick with a completely different kind of fungal disease. You can see that the newly developed spring leaves are pale, puckery, and appear to be covered with a white dust or powder. The close-up at the lower right shows that there's not much to this white stuff -- it really is like powder. That powder is the fungus, and this species doesn't get much more glamorous-looking than that. The fungus is causing Dogwood Powdery Mildew, and the genus name for it is either Microsphaera or Phyllactinia. The white matter you see consists of masses of tangled hyphae that obtain their food by sending rootlike haustoria into the leaf's living cells. At this early stage in the mildew's lifecycle it is reproducing with special kinds of asexual spores called conidia. This species can reproduce when the tips of certain vegetative hyphae simply constrict in certain places (no sex involved), forming egg-shaped spores that can then blow away and, under proper conditions, sprout new hyphae. Later in the year sexual reproductive structures will form in the powdery area (tiny items barely large enough to see with the naked eye) and they will produce regular sex-based spores.
The mold on the table grapes at the right, which had been stored for too long in a refrigerator, is a species of the genus Penicillium. This is the same genus from which the powerful antibiotic penicillin is derived. If you ever opened an old jar of jelly or jam and its surface was covered with a greenish mat, that was probably Penicillium, too. In case you don't like to throw food away, you will be happy to know that when you find Penicillium covering your favorite jam, the fungus itself is not poisonous, and if the mold layer is removed the food will still be edible, assuming that something else hasn't spoiled it. Another big genus among the blue-green molds is Aspergillus, whose species aren't always so generous. They can cause ear and lung infections. The black-and-white little fungus at the right (only 0.4 inch or 10 mm high) is a sac fungus of the genus Xylaria, growing on the wood of a log fallen in the woods. Most people would never notice this little being unless they were specifically looking for small things, poking around on fallen logs. This species isn't illustrated in most mushroom field guides. In fact, I can't even find a common English name for it. You may be more familiar with other sac fungi, at least by name. For example, you have probably heard of yeast and maybe ergot. I'm looking for pictures of these and other sac fungi, so stay tuned...
Cool Australia and Planet Ark’s National Recycling Week have created three resources to help educators connect children to the wonders of the natural world through sensory and play-based learning. This lesson has been developed as part of the Schools Recycle Right Challenge for Planet Ark’s National Recycling Week. Turn your recycling efforts into classroom learning by accessing these online lesson plans and digital resources. Each lesson is aligned with the Early Years Learning Framework and includes hands-on, child-centred learning activities that are fun and engaging. The three free-to-access lessons below cover:
- Waste and recycling investigations
- Describing and sorting waste
- Sorting waste according to size, shape, weight, colour and texture
- Sensory exploration of paper (feel, smell and sound)
- Drawing the connection between paper and trees
Activities are differentiated for 0 to 2 year olds and 3 to 5 year olds, and suggested learning evaluation criteria are included in each lesson. These activities will help you embed sustainability and recycling into your daily practice at your centre and in your community. Check out the digital library for National Recycling Week – it’s jam-packed with videos, pictures, factsheets and news articles to inspire recycling in your centre.
Yellow Fever in Brazil: The Latest Global Health Security Threat June 23, 2017 On January 13, 2017, Minas Gerais—Brazil’s second most populous state—declared a 180-day state of emergency following eight yellow fever-related deaths. Since then, the outbreak has swept across much of Brazil, with confirmed cases in rural villages spanning 130 municipalities in nine states. And while Brazil has responded aggressively to the outbreak, concerns remain that the virus could eventually reach one of its major cities or even move beyond its borders, thereby igniting a larger yellow fever epidemic. In late May, the Center for Strategic and International Studies (CSIS) Global Health Policy Center traveled to Brazil to look at the state of U.S.-Brazilian cooperation on matters of global health security. Throughout the course of our discussions, we came to learn more about the yellow fever outbreak and the risks it poses to the international community. Yellow fever—a mosquito-borne virus that can cause fever, chills, muscle aches, headaches, and (in serious cases) bleeding, organ failure, and death—has long been endemic to Brazil. First brought to the Americas from Africa in the 1600s during the slave trade, yellow fever used to kill hundreds of thousands of people annually throughout the region. That changed in the early 1900s with aggressive regional Aedes aegypti mosquito elimination programs in cities (Aedes aegypti being the primary mosquito to transmit yellow fever in urban settings), the development of an effective yellow fever vaccine, and the implementation of large-scale urban immunization campaigns. These combined efforts caused the number of human fatalities in Latin America to drop exponentially and largely confined the virus to monkey populations living deep in the Amazon. However, the risk of yellow fever outbreaks in Brazil has remained. Several different mosquito species can transmit the virus from monkeys to humans who live or work near the jungle, resulting in localized yellow fever outbreaks. It’s of particular concern if an individual infected with the virus then travels to a major city. There, the presence of the Aedes aegypti mosquito increases the likelihood of a larger urban yellow fever outbreak. Aedes aegypti—the same mosquito that transmits dengue, chikungunya, and the Zika virus—is ubiquitous in large Brazilian cities and is uniquely suited to quickly spread yellow fever throughout a population. Unlike other mosquito species that feed on only a single person, Aedes aegypti feeds on several individuals throughout the course of its two- to four-week lifespan. This means that if an Aedes aegypti mosquito bites a person infected with yellow fever, it could potentially transmit the virus to dozens of individuals within a matter of days. Such is the concern among health officials regarding the current situation in Brazil, where outbreaks in remote parts of the country are now threatening to spread to major urban centers for the first time in the Americas since 1942. Accelerated urbanization along the Amazon’s border in recent years has concentrated a large number of non-immunized individuals in traditionally high virus transmission areas, resulting in the current spike of yellow fever cases in the country. As of May 31, there have been 3,240 suspected cases of yellow fever reported to the Brazilian Ministry of Health—792 of which have been confirmed and 519 of which are still under investigation.
Nearly all confirmed cases thus far have been concentrated in rural areas in just four states: São Paulo, Minas Gerais, Rio de Janeiro, and Espírito Santo. Three of those states—São Paulo, Minas Gerais, and Rio de Janeiro—are home to nearly 40 percent of Brazil’s 208 million citizens and boast some of Brazil’s largest cities. The cities of São Paulo and Rio de Janeiro alone are home to a combined 18.5 million people. The sheer size of these cities, plus their role as major regional and global transportation hubs, could pose a significant public health risk if a case of yellow fever were to reach them. To Brazil’s credit, the government has responded aggressively to prevent such a scenario from occurring. It has worked closely with international actors like the Pan American Health Organization (PAHO)/World Health Organization (WHO) and has leveraged its considerable domestic health capacities—including the expertise of its world-renowned Oswaldo Cruz Foundation (Fiocruz) and its domestic yellow fever vaccine production capabilities—to avert a major urban outbreak. Since January, the Ministry of Health has dispatched over 26 million doses of yellow fever vaccine to affected states. Over 1,000 municipalities have conducted surge vaccination campaigns, while surveillance and case management capacities have been strengthened. Brazil’s experience combatting the 2015-2016 Zika epidemic has also helped stave off an urban outbreak. In the lead-up to the 2016 Olympics, the Brazilian government undertook an extensive vector-control campaign in the city of Rio de Janeiro. The Ministry of Health partnered with the Brazilian military to spray for Aedes aegypti throughout much of the city and surrounding countryside, and the Rio city government spent months eliminating potential mosquito breeding grounds. That included cleaning up standing pools of water, removing trash from the streets, and properly discarding used tires (which serve as ideal mosquito breeding sites). Such efforts, coupled with the fortuitous arrival of the cooler, drier winter months, which typically experience a decrease in mosquito populations, might prove enough to curb the current outbreak. However, concerns remain. First, the outbreak response to date has severely depleted the global supply of yellow fever vaccine, thereby calling into question whether the international community has the means to effectively respond to a potential outbreak in a major Brazilian city or elsewhere in the world. Brazil is one of only four manufacturers of yellow fever vaccine worldwide and is a key contributor to global yellow fever vaccine emergency stockpiles like those of the International Coordinating Group (ICG) on Vaccine Provision and the PAHO Revolving Fund. As Brazil has been forced to use more of its domestically produced yellow fever vaccine to deal with its own outbreak, less is available to contribute to such global reserves. Additionally, the outbreak response has been so taxing on Brazil’s national yellow fever vaccine stockpile that it has had to request an additional 3.5 million vaccine doses from the ICG’s 6 million dose vaccine inventory. With global vaccine stockpiles already low from the Angolan outbreak last year and the long lead time needed to produce additional yellow fever vaccine, this additional strain has severely weakened global capacities to respond to a broader epidemic.
Second, while the number of yellow fever cases in Brazil is currently declining, risks remain that the virus could still reach other parts of the Americas. As occurred with Zika, it’s possible that the yellow fever outbreak in Brazil could spread to neighboring parts of Latin America and the Caribbean through travel-related case importation or vector-borne transmission. Granted, there are procedures currently in place meant to address travel-related case importation, including provisions in the International Health Regulations (IHR) and country requirements to show a WHO vaccination yellow card upon entry. However, with over one hundred million airline passengers traveling to and from Brazil per year, countless overland crossings with neighboring countries, and Aedes aegypti pervasive throughout much of the region, it is conceivable that isolated cases could still slip by, potentially leading to similar outbreaks with local transmission of the virus in neighboring countries. While a significant yellow fever outbreak in the United States is unlikely, the situation in Brazil still poses some risk to the homeland. A combination of low yellow fever vaccination rates—citizens of and travelers to the U.S. aren’t required to have a yellow fever shot—and the concentration of Aedes aegypti throughout a large portion of the South means that it is plausible that a limited outbreak with some localized transmission of the virus could occur if a travel-related case of yellow fever were to reach the southern U.S. Effectively responding to such a scenario could prove particularly challenging right now, as manufacturing problems at the only U.S.-licensed yellow fever vaccine production facility have caused a significant vaccine supply shortage, with the U.S. Centers for Disease Control and Prevention (CDC) estimating that the U.S. stockpile could run out by mid-July. Finally, there is a chance—albeit minimal—that yellow fever could ultimately migrate beyond the Americas and lead to localized outbreaks in Asia. Using the Angolan outbreak last year as an example, ten travel-related cases of yellow fever were confirmed in China, including six in Fujian Province. These cases suggest that even though China requires travelers from yellow fever endemic countries to produce a WHO vaccination card upon entry, unvaccinated individuals were still able to enter. Considering China’s large expatriate community in Angola, it’s possible that additional cases may have been imported into China as well, yet gone undetected. The deepening of economic and commercial ties between Brazil and Asia in recent years, large Chinese and Japanese expatriate communities in Brazil, the abundance of Aedes aegypti throughout much of China and Southeast Asia, and the potential for a similar public health surveillance breakdown with the current outbreak at least raise the possibility of travel-related yellow fever cases with some degree of local transmission arising in Asia, a prospect that must be taken quite seriously. The yellow fever outbreak in Brazil is simply the latest in a long line of health security threats the world has faced in recent years. While Brazil has responded admirably to the outbreak, there are still concerns of a broader public health crisis unfolding should the virus reach a major urban center or be exported out of the country.
Ensuring such scenarios don’t come to pass will require continued efforts by the Brazilian government to boost yellow fever vaccination rates in rural areas, implement mosquito-control programs in major cities, and bolster surveillance capacities throughout the country. It will also require a concerted effort by the international community to strengthen global health security capacities in the region and beyond, including strengthening detection and response capacities throughout Latin America and the Caribbean, rectifying shortcomings in the current global vaccine stockpile system, and shoring up potential gaps in countries’ travel-related case detection protocols. As recent history has demonstrated time and time again, a health threat anywhere is a health threat everywhere. As Brazil continues its tireless efforts to prevent a yellow fever epidemic from taking hold, the world would be wise to take heed.
Golden Ratios, A Treasure in Math Lesson 5 of 13 Objective: Students will be able to approximate the Golden Ratio. I have this written on the board: “Write down your observations. What strikes you as interesting in these videos?” As students enter the room, I welcome them to the “Golden Movie Theatre” and give them their “Golden Movie Ticket,” which has the prompts on the board written down. I show students selected segments from ViHart’s video series on the Golden Ratio. She currently has three videos available and I show clips from the first two, which discuss the Fibonacci sequence and show the patterns from fruit and flowers. These videos confirm their discoveries from the days before. The third video is especially helpful, because she discusses the universal connection between plants with Fibonacci and non-Fibonacci leaves and petals. Essentially, our discussion mixes biology and mathematics and points out that plants will grow however they need to in order to get the most sunlight possible. The angle spread between leaves tends to maximize a plant’s exposure to sunlight. This might change with the orientation and climate of the plant, but that is essentially the connection. She briefly mentions Lucas Numbers (2, 1, 3, 4, 7, …) and that is the launching point for our investigation for the day. Remember that Fibonacci numbers are named after Fibonacci. Lucas numbers are also attributed to their creator. Today you will create your own number sequence and explore the results of your pattern. The basic rule is that your pattern needs to be recursive and each number should be the sum of the two numbers before it. However, you can start with any number. When you are happy with the pattern sequence you have set up, come get your official number sequence log from me. We want to record your invention and share it with the class. This is a fun activity because students take ownership of their pattern. They often start with fun types of numbers. It’s especially fun when students pick decimals and large numbers. They name their patterns and then fill out the official number log of our class. This is listed as the “student numbers” document. They fill out their numbers and then write out pairs of consecutive numbers as both fractions and ratios. I ask them to write their observations on the back of the sheet. If they have time, I ask them to make a scatter plot of the terms, where the x-axis represents the number pair (1st pair, 2nd pair, etc.) and the y-axis represents the decimal approximations (with increments of about .05 and a range of 0-1.7). This is a fun chance to spiral back to lines of best fit and other data content from the Common Core curriculum. Students are curious about each other’s patterns and are excited to share their patterns and table results. I always know things are going well when a pair of students notices how “crazy” it is that their decimal results are about the same. This summary is more about their number discoveries than the Golden Ratio. We mention Phi and I use various images of spirals and references to structures to emphasize the widespread recognition of this ratio, but the Golden Ratio is not something that students can fully comprehend in a single lesson or set of lessons. It is an enormously rich topic that we are simply introducing. The summary starts with a number sequence share. I encourage students to simply describe the sequence that they created. We popcorn around the room and share a sequence of patterns.
The great thing is that they seem to have nothing in common except for the general recursive nature of the sequence. After they share, I ask: “What did all of these patterns have in common?” “Each number is the sum of the two before it.” Of course this is an unfair question, but it’s meant simply to remind them that there is more here than they might have thought. That is when I show a table as a sample. We talk about the decimal results as we read the table: “Do these decimals seem to be approaching a certain value?” I like to help students understand this pattern by setting up a quick scatter plot (something they are familiar with from the 8th grade Common Core standards). We think about the line of best fit and set it at about 1.6. Students are surprised to find that all of the patterns hover around this line. We talk about this and share that the Fibonacci sequence and every sequence built like it approaches the same number, phi. I usually follow this by discussing the premise of the golden ratio with a simple line diagram cut where the ratio of the segments corresponds to the golden ratio. “The golden ratio is often associated with the golden spiral, which you can draw through the golden rectangles formed by tiling the Fibonacci numbers. You tiled the Fibonacci numbers the other day. I will return them to you and give you a chance to draw the golden spiral through the rectangles in the tiling pattern.” If nothing else, students enjoy the rich connections through all the facets of the lessons in this series. There is a level of mystery around these numbers that one can’t help but get excited about. There is an undeniable, pleasant beauty to the shapes that exhibit the golden ratio and spiral. Students draw the spirals easily on the Fibonacci tilings from their previous lesson and are now aware that there is a world of beauty in ratios.
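The class's observation that every sum-the-previous-two sequence gives ratios settling near 1.6 is easy to check with a few lines of code. The sketch below is just an illustration of the activity; the seed values are arbitrary examples, standing in for whatever starting numbers students choose.

```python
# Any sequence where each term is the sum of the previous two has consecutive-term
# ratios that approach the golden ratio, no matter which two seeds start it off.
def recursive_sequence(a, b, n=12):
    terms = [a, b]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms

for seeds in [(1, 1), (2, 1), (0.5, 7.3)]:   # Fibonacci, Lucas, and an arbitrary "student" pick
    terms = recursive_sequence(*seeds)
    ratios = [terms[i + 1] / terms[i] for i in range(len(terms) - 1)]
    print(seeds, [round(r, 4) for r in ratios[-4:]])   # the last few ratios hover near 1.618
```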
February 25, 2008 An enormous plume of dust and water spurts violently into space from the south pole of Enceladus, Saturn's sixth-largest moon. This raging eruption has intrigued scientists ever since the Cassini spacecraft provided dramatic images of the phenomenon. Now physicist Nikolai Brilliantov of the University of Leicester and colleagues in Germany have revealed why the dust particles in the plume emerge more slowly than the water vapor escaping from the moon's icy crust. Enceladus orbits in Saturn's outermost E-ring. It is one of only three outer solar system bodies that produce active eruptions of dust and water vapor. Moreover, aside from the Earth, Mars, and Jupiter's moon Europa, it is one of the only places in the solar system for which astronomers have direct evidence of the presence of water. The erupting plume on Enceladus is ejected by geyser-like volcanic eruptions from deep, tiger-stripe cracks within the moon's south pole. Some astronomers have suggested that the myriad tiny grains of dust from these eruptions could be the actual source of Saturn's E-ring. However, the dynamics and the origin of the plume itself have remained a mystery. Now Brilliantov, who is also on the faculty at the University of Potsdam, Germany, and Moscow State University, working with Juergen Schmidt and Frank Spahn of Potsdam and Sascha Kempf of the Max Planck Institute for Nuclear Physics in Heidelberg and the Technical University of Braunschweig, Germany, has developed a new theory to explain the formation of these dust particles and why they are ejected into space. The researchers point out that once ejected, the dust particles (which are in fact icy grains) and the water vapor are too dilute to interact with each other, so the water vapor cannot be the cause of the dusty slowdown. Instead, the team suggests that the shift in speed must occur below the moon's surface before ejection. The numerous cracks through which the plume material escapes from the moon's icy surface, and which can be hundreds of meters deep, are narrower at some points along their length. At these points the temperature and pressure of the vapor drop drastically, causing the vapor to condense into icy grains and forming the dust-vapor mixture. The vapor density required to accelerate the grains to the observed speeds implies temperatures at which liquid water can exist in equilibrium with solid ice and water vapor within the moon's frozen crust. These peculiar conditions allow the water vapor to erupt rapidly, carrying the dust particles with it. However, the particles undergo countless collisions with the channel walls, and the resulting friction slows them down before final ejection. The larger the particle, the slower the ejection speed. This effect, quantified by the new theory, explains the structure of the plume and ultimately the particle size distribution of Saturn's E-ring. The existence of liquid water is a prerequisite for life, and while the researchers are not suggesting there is life on Enceladus, the finding points to another extraterrestrial place that might be searched.
1. The Dead Sea Scrolls were discovered in eleven caves along the northwest shore of the Dead Sea between the years 1947 and 1956. The area is 13 miles east of Jerusalem and is 1300 feet below sea level. The mostly fragmented texts are numbered according to the cave that they came out of. They have been called the greatest manuscript discovery of modern times.
2. Only Caves 1 and 11 have produced relatively intact manuscripts. Discovered in 1952, Cave 4 produced the largest find. About 15,000 fragments from more than 500 manuscripts were found.
3. In all, scholars have identified the remains of about 825 to 870 separate scrolls.
4. The Scrolls can be divided into two categories—biblical and non-biblical. Fragments of every book of the Hebrew canon (Old Testament) have been discovered except for the book of Esther.
5. There are now identified among the scrolls 19 copies of the Book of Isaiah, 25 copies of Deuteronomy and 30 copies of the Psalms.
6. Prophecies by Ezekiel, Jeremiah and Daniel not found in the Bible are written in the Scrolls.
7. The Isaiah Scroll, found relatively intact, is 1000 years older than any previously known copy of Isaiah. In fact, the scrolls are the oldest group of Old Testament manuscripts ever found.
8. In the Scrolls are found never-before-seen psalms attributed to King David and Joshua.
10. The Scrolls are for the most part written in Hebrew, but there are many written in Aramaic. Aramaic was the common language of the Jews of Palestine for the last two centuries B.C. and the first two centuries A.D. The discovery of the Scrolls has greatly enhanced our knowledge of these two languages. In addition, there are a few texts written in Greek.
11. The Scrolls appear to be the library of a Jewish sect. The library was hidden away in caves around the outbreak of the First Jewish Revolt (A.D. 66-70) as the Roman army advanced against the rebel Jews.
12. Near the caves are the ancient ruins of Qumran. They were excavated in the early 1950's and appear to be connected with the scrolls.
13. The Dead Sea Scrolls were most likely written by the Essenes during the period from about 200 B.C. to A.D. 68. The Essenes are mentioned by Josephus and in a few other sources, but not in the New Testament. The Essenes were a strict Torah-observant, Messianic, apocalyptic, baptist, wilderness, new covenant Jewish sect. They were led by a priest they called the "Teacher of Righteousness," who was opposed and possibly killed by the establishment priesthood in Jerusalem.
14. The enemies of the Qumran community were called the "Sons of Darkness"; they called themselves the "Sons of Light," "the poor," and members of "the Way." They thought of themselves as "the holy ones," who lived in "the house of holiness," because "the Holy Spirit" dwelt with them.
15. The last words of Joseph, Judah, Levi, Naphtali, and Amram (the father of Moses) are written down in the Scrolls.
17. The Temple Scroll, found in Cave 11, is the longest scroll. Its present total length is 26.7 feet (8.148 meters). The overall length of the scroll must have been over 28 feet (8.75 m).
18. The scrolls contain previously unknown stories about biblical figures such as Enoch, Abraham, and Noah. The story of Abraham includes an explanation of why God asked Abraham to sacrifice his only son Isaac.
19. The scrolls are mostly made of animal skins, but some are papyrus and one is of copper. They are written with a carbon-based ink, from right to left, using no punctuation except for an occasional paragraph indentation. In fact, in some cases there are not even spaces between the words.
20. The Scrolls have revolutionized textual criticism of the Old Testament. Interestingly, now with manuscripts predating the medieval period, we find these texts in substantial agreement with the Masoretic text as well as widely variant forms.
22. Although the Qumran community existed during the time of the ministry of Jesus, none of the Scrolls refer to Him, nor do they mention any of His followers described in the New Testament.
23. The major intact texts, from Caves 1 and 11, were published by the late fifties and are now housed in the Shrine of the Book museum in Jerusalem.
24. Since the late fifties, about 40% of the Scrolls, mostly fragments from Cave 4, remained unpublished and were inaccessible. It wasn't until 1991, 44 years after the discovery of the first Scroll, after the pressure for publication mounted, that general access was made available to photographs of the Scrolls. In November of 1991 the photos were published by the Biblical Archaeology Society in a nonofficial edition; a computer reconstruction, based on a concordance, was announced; and the Huntington Library pledged to open its microfilm files of all the scroll photographs.
25. The Dead Sea Scrolls enhance our knowledge of both Judaism and Christianity. They represent a non-rabbinic form of Judaism and provide a wealth of comparative material for New Testament scholars, including many important parallels to the Jesus movement. They show Christianity to be rooted in Judaism and have been called the evolutionary link between the two.
Humans destroy the equivalent of one Ireland-sized swath of tropical rainforest every year—mostly due to the expansion of agriculture. Cocoa, which fuels the multi-billion-dollar chocolate market, is grown in tropical rainforest—mostly in Sub-Saharan Africa on small, family-owned farms. African farmers typically cannot afford pesticides, so they rely on nature for pest insect removal, but neither farmers nor scientists know which birds and bats provide this service. Fortunately, cocoa is not nearly as destructive as other crops—it is grown under a lush canopy of rainforest trees, which, if managed appropriately, can support biodiversity comparable to that of virgin rainforest. These shade trees also provide habitat for the African birds and bats that undoubtedly save farmers millions of dollars through pest control. Sadly, very little is known about how to manage African cocoa plantations—either for the improvement of agricultural production or for the benefit of biodiversity. However, in a technological breakthrough, Biodiversity Initiative scientists are using cutting-edge genetic techniques that allow us to sequence the bits of insect and plant DNA left in bird and bat faeces. This then allows us to map thousands of species in the food web—including shade trees and important pest insects. With this breakthrough, we have the ability to determine:
- Which shade trees benefit birds, bats and farmers via pest insect removal, and
- Which shade trees benefit rainforest biodiversity in general.
Chocolate consumption is rapidly increasing worldwide as the middle class of developing nations swells, thus driving cocoa farming deeper into tropical rainforests. With this novel framework, we will create a system in which African farmers benefit through inexpensive, sustainable management of cocoa, and rainforest animals benefit through the planting of trees that mimic their natural habitat. More to come soon as this project continues to develop!
Watch students respond to the question "Why a Mountain?" For the last several years, fifth-grade students at PS 119 have analyzed an excerpt of Dr. Martin Luther King Jr.'s last speech, in which he speaks of having been to the mountaintop. The lesson "Why a Mountain?" involves students in discussions of Dr. King's use of poetic imagery in his speeches and of the civil rights struggle, as they use the mountain as a timeline of famous historic events in the struggle for civil rights. Ugonna Igweatu, a visiting Service in Schools intern and a film student at Yale who was at the school last year to attend its Pajama Program Service Award Ceremony, donated his time to create this video of students' poetry written in response to the question "Why a Mountain?" This story is part of a series promoting the ongoing school-based programs and activities offered through the Respect For All initiative.
Encyclopædia Britannica’s Student Library CD-ROM teaches students to research information on their own. Covering the most important topics for kids in grades 1 to 9, it includes the Britannica Elementary Encyclopedia, the Britannica Student Encyclopedia, the Merriam-Webster Student Dictionary and Thesaurus, games and interactive study guides, and other rich multimedia that make learning fun and keep children engaged. Taking students from frogs to physics, these two complete encyclopedias form the cornerstone of a total reference library that interests and involves young students.
- Britannica Elementary Encyclopedia for young learners and the Britannica Student Encyclopedia for middle-school students.
- Merriam-Webster’s Student Dictionary & Thesaurus plus a complete world atlas.
- Article and media tours to explore.
- Thousands of photographs, videos, audio clips, virtual tours, and timelines.
- Virtual note cards.
- Learning games and interactive study guides that make learning fun.
- Useful homework resources, including a video subject browse and how-to documents on topics such as writing a book review.
- A–Z browse and search by keyword or question that allow children to find information quickly and easily.
Are you using these final few weeks of the summer term to get your KS3 planning wrapped up? Have a look at our section on lesson plans for 11–14 year olds. There are lesson plans on the science of baking, bread- and cake-making skills, health and nutrition, as well as farming and growing. All the lesson plans are free and come with curriculum links, activity sheets and homework ideas. You can use the resources as a complete lesson or pick and choose elements to complement your existing lesson plan.
Summertime brings hot weather, family vacations, and severe thunderstorms. Those thunderstorms do more than change your afternoon plans - they cause a lot of damage. A bolt of lightning can cause injury or death, fire, and destruction of electric-powered equipment. Lightning is electricity, and electricity is always looking for the fastest way to reach the earth. Water, metal, trees, or even people can serve as a conductor to provide that path to ground. When a thunderstorm threatens, you should get inside. Even though the thunderstorm may not be directly overhead, lightning can strike several miles from the parent cloud. Hilltops, hillsides, and buildings surrounded by flat fields all tend to attract lightning. A wooden rain shelter or stand of trees doesn't provide adequate protection. If you are caught outside, you should avoid being higher than your surrounding area. In an open area, find a low spot to wait out the storm. Stay away from open water and get off tractors or other open metal vehicles. You should also avoid wire fences, clotheslines, metal pipes, and rails. Do not stand underneath a telephone pole or a tall, isolated tree. When a person is struck by lightning, they receive a severe electrical shock and may be burned. It is not dangerous to touch them. If someone appears to have been killed by lightning, they can frequently be revived if you act quickly. According to the American Red Cross, you should immediately begin mouth-to-mouth resuscitation if a victim is not breathing. Cardiopulmonary resuscitation (CPR) is necessary if both a pulse and breathing are absent. A victim should still be given first aid for shock even if they appear to be unharmed. Look for burns at fingers and toes, as well as next to jewelry and buckles. To help prevent lightning damage to buildings, you should prepare ahead of time. Install lightning rods on high points of buildings, especially vents and air-handling units. Ground rods are another protection device; they should be installed around the perimeter of buildings, along with interconnecting cables. Surge arresters can protect inside wiring and appliances from any lightning-generated surges that travel through power lines. You should also protect any tree that is within ten feet of a building and taller than the building. A lightning protection system for a tree attaches lightning rods and cables to the branches and tree trunk. The conductor cable is buried at the base of the tree and extends at least twelve feet from the trunk to ground the connections. This system not only protects the tree but might also prevent damage to the building if the tree should fall. You probably want to get a certified installer to install a lightning protection system for your home and trees, rather than trying to do it yourself. Check the weather forecast and be prepared when lightning strikes.
Turkish literature can be divided into three main periods: the purely Turkish period before the conversion of the Turks to Islām, covering approximately the 8th to the 11th century AD; the period of Islāmic culture, from the 11th to the mid-19th century, when Arabic and Persian influences were strong; and the modern period, from the accession of Sultan Abdülmecid I in 1839, when the influence of Western thought and literature became predominant. The oldest literary legacy is to be found in the Orhon inscriptions (q.v.), discovered in the valley of the Orhon River, northern Mongolia, in 1889 and deciphered in 1893 by the Danish philologist Vilhelm Thomsen. They are carved in a script used also for inscriptions found in Mongolia, Siberia, and western Turkistan and called by Thomsen “Turkish runes.” They relate in epic and forceful language the origins and ancient history of the Turks. Their polished style suggests considerable earlier development of the language. With conversion to Islām, the Turks gradually adopted Arabo-Persian metres and literary traditions. Linguistically there arose three traditions. First, the Chagatai language of the eastern Turks was used mainly in Central Asia, by the Golden Horde; in Egypt; and in the Indian courts of the Mughal period. Lacking a political and literary centre, it was influenced by local spoken dialects. ʿAlī Shir Navāʾī and the Mughal emperor Babur were among the great classical writers in this dialect. The second tradition centred on Azeri, the literary language of the eastern Oğuz in western Persia, Iraq, and eastern Anatolia before the Ottoman conquest. Seyid İmadeddin Nesimi is its first outstanding representative; his poems have rare beauty and religious feeling. Shah Ismāʿīl, founder of the Ṣafavid dynasty of Persia, had a lasting influence on popular religious literature in Anatolia. His poems, a blend of religious emotion and political propaganda, preach the Shiʿite doctrine. Mehmed bin Süleyman Fuzuli, the greatest representative of the classical school, influenced Azeri and Ottoman poets of all succeeding generations. The third and most prolific Turkish literature was written in Anatolian, or Ottoman, Turkish, the language of Anatolian Seljuqs and of the Ottoman Empire after the 13th century. In the earliest, preclassic period, spanning the 14th and 15th centuries, the influence of the Persian classics was paramount. But by the mid-15th century, with the establishment of the Turks in Istanbul, the golden age of Turkish letters began, lasting through the 17th century. The Persian classics were no longer mechanically imitated but had been fully assimilated, enabling Turkish poets to evolve a genuine classical poetry that bore the imprint of their own individuality. A price had to be paid, however, for this assimilation. The Turkish language lost some of its purity by accepting many Persian and Arabic words and constructions. As a result, Turkish literature was restricted to a small educated class. The two greatest poets among the many splendid poets of the era were Fuzuli and Bâkî. The prose of the classical period also showed great variety—with folktales, half-religious and half-epic narratives, belles lettres written in a rather heavy and artificial style, and, particularly, the work of the chroniclers, who were masters of classical prose. Notable in the postclassic period, beginning in the 18th century, was Ahmed Nedim, who sang in colourful poems of the so-called Tulip Age of Istanbul under the sultan Ahmed III.
In the second half of the century, Gâlib Dede, the last great classical author, wrote the original mystical romance Hüsn ü Aşk (“Beauty and Love”). In the 19th century the introduction of Western reforms had its effect on literature. Mainly under French influence, various writers adopted and adapted Western literary forms, such as the novel, the drama, and the essay, while the rigid forms and attitudes of classical poetry gradually went out of fashion in the country. In the 20th century a Turkish nationalist literature flourished, and an especially rich and varied literature developed after Kemal Atatürk’s reforms. Particularly after the 1930s, a fundamental change began to take place; for the first time, a native and original literature began to develop. Unlike the preceding literary schools, which dealt only with the life and problems of the upper and middle classes of the old capital of Istanbul, republican literature—such as that of Yashar Kemal—was increasingly concerned with the problems and destiny of the people in every part of Turkey. The earliest Turkish literature was produced in Mongol-controlled Anatolia during the later 13th century. Among the numerous Turkic dynasties of Central Asia, South Asia, the Middle East, and the Caucasus, only the post-Mongol Anatolian states and then the Ottoman Empire maintained Turkish as a literary language. From the 14th through the early 20th century, writing in Turkish flourished in the Ottoman Empire, and it subsequently continued in the Turkish republic. Despite changes in language and culture from the Mongol and Ottoman periods to the emergence of modern-day Turkey, Turkish literature has remained an important means of expression for the Turkish-speaking peoples of Anatolia and the adjacent areas of the Balkans. Much of this region’s literary activity has centred on Istanbul, its central urban metropolis since the mid-15th century. The oldest genre of Turkish literature is the heroic epic, of which the prime example is the Kitab-i Dede Korkut (“The Book of My Grandfather Korkut”; Eng. trans. The Book of Dede Korkut), which has survived in two 16th-century manuscripts. The actual date of the work is unknown. At least one of the tales was already circulating in written form in the early 14th century, and Central Asian sources suggest that the shaman-bard Korkut and his tales date from the 9th and 10th centuries. The style of the epic—which consists of prose narrative mixed with verse speeches—suggests oral composition. The language of the text is Oghuz Turkish, containing both Anatolian and Azerbaijani elements. There is no overall narrative framework, but most of the 12 tales revolve around legendary Oghuz heroes. The original poem (if not the 16th-century manuscripts) was evidently created by an oral bard, or ozan, the heir to a partly shamanic tradition, although the circumstances of the epic’s transformation to written literature are unknown, and the work as such had no influence on the subsequent development of Turkish literature. Both manuscripts known at the turn of the 21st century were discovered in Europe, the larger one in Germany in the early 19th century. Yet Turkish interest in the Book of Dede Korkut emerged nearly a century after significant German and Russian work. In the 20th century major studies of the text were undertaken in Turkey, Russia, and Azerbaijan as well as in Europe. 
Much of the style of the Book of Dede Korkut predates the heroic tradition of the Oghuz Turkish poet-musician known as the âşik, who emerged in the 16th century in Anatolia, Iran, and the southern Caucasus and eventually supplanted the ozan. The âşik (ashoog in Azerbaijani; from the Arabic ʿashiq, “lover” or “novice Sufi”) was a professional or semiprofessional performer, singing a variety of epic, didactic, mystical, and lyrical songs to the accompaniment of a long-necked lute (saz). The classical âşik of the Anatolian Turkmen tribes was Karacaoğlan, who flourished in the later 16th century or possibly the mid-17th century (his date of death is sometimes given as 1679). He is mentioned in several biographical dictionaries (tezkires) of the period. In its formal qualities his poetry is closely related to folk verse, and he generally treats lyrical themes without the mystical subtext that was common in courtly verse of the period. His style influenced such 17th-century âşiks as Âşik Ömer of Aydin and Gevherî, as well as the âşiks of the 18th century. During the 17th century the popular urban song (şarkı) was taken up by court poets and musicians, and it became fashionable for courtiers to entertain themselves by performing these songs with the folkloric bağlama. The great 17th-century poet Nâʾilî was the first to include such songs in his divan (collected works), a practice that reached its culmination in the following century with Ahmed Nedim. The outstanding âşik of the later 17th century was Âşik Ömer, who wrote both folkloric qoşma poems and courtly lyrics, or gazels (Persian: ghazals). Thus, during the 17th century the âşik became a bridge between the literary taste of the court and the people of the towns. The interplay between this popular poetry and the courtly gazel continued into the 19th century, when it was exemplified by the work of İbrahim Dertli. By the middle of the 13th century, mystical (Sufi) poetry had become a major branch of Turkish literature, with Sufi poets working primarily in Anatolian Turkish. One of the two well-known poets of the 13th and 14th centuries was Âşık Paşa, author of the Garībnāmeh (“The Book of the Stranger”), a didactic poem of some 11,000 couplets that explores philosophical and moral themes. It is considered among the finest mesnevîs (Persian: masnavīs; see masnawi) of the era. Yunus Emre, author of a divan and of the didactic mesnevî Risâletʿün nushiyye (“Treatise of Counsel”), was the period’s other well-known poet. Despite Yunus Emre’s evident scholastic learning, he wrote in a language and style that appealed to popular taste. His poetry was read and studied in Ottoman times, and it remains central today to the dhikr ceremony of ritual prayer practiced by Sunni brotherhoods (tarikats) and to the ayîn-i cem ritual of the Alevî Bektashi, an order of tribal Shīʿite Sufis. Later, in the 14th century, Seyid İmadeddin Nesimi, probably of southeast Anatolia, created brilliant Sufi verse in Persian and in a form of Turkish rather closer to Azerbaijani. The 15th century saw a split between heterodox Sufi tendencies, as seen in the verse of Kaygusuz Abdal, and the orthodox Sufism of Eşrefoğlu Rumi. Like Yunus Emre, Eşrefoğlu wrote verse in which the Sufi poet functions as a charismatic and sacred figure who writes poetry in order to communicate his sacerdotal authority to his disciples.
By the early 16th century, this style of poetry, generally known as ilâhî (“divine”), was practiced by such sheikh-poets as İbrahim Gülşeni and his son Gülşenîzâde Hayali as well as Muslihiddin Merkez, Muhiddin Uftade, Seyyid Seyfullah Nizamoğlu, and Aziz Mahmud Hüdâyî. The growth during the 16th and 17th centuries of this type of poetry, which was intended to be sung in the dhikr ceremony, was a function of the monopoly over mysticism held by the Sufi brotherhoods of that era. The most outstanding representative of this tradition is Niyazi Mısri, a 17th-century poet of the Halvetiye tarikat. His verse was enormously popular in his lifetime and throughout the 18th century. Like Yunus Emre, Niyazi Mısri was able to express subtle mystical insights using very simple language:
I was seeking a cure for my trouble;
My trouble became my cure.
I was seeking a proof for my origin;
My origin became my proof.
I was looking to the right and the left
So that I could see the face of the Beloved.
I was searching outside,
But the Soul was within that very soul.
While the Ottomans wrote a great deal of prose (especially on history, theology, mysticism, biography, and travel), poetry was the focus of literary thought; hence, the following discussion will confine itself to verse. The forms, genres, and themes of pre-Ottoman and Ottoman Turkish literature—those works written between about 1300 and 1839, the year in which the wide-ranging Tanzimat reforms were begun—were generally derived from those of Persian literature, either directly or through the mediation of Chagatai literature. Anatolia and parts of the Balkans, although increasingly Turkish-speaking, developed a high literary culture of the type known as Persianate. The dominant forms of Ottoman poetry from its origins in the 14th century until its decline in the late 19th century were the gazel and the kasîde (originally from the Arabic qaṣīdah; see qasida). The formal principles of the gazel were the same for both Persian and Ottoman varieties. Composed of a series of couplets (distichs), it was subject to a single metrical scheme and was usually in monorhyme, often using a repeated word (redîf). The pen name (mahlas) of the poet usually appeared in the closing distich. In the 15th and 16th centuries Ottoman gazels might extend from 5 to 10 couplets, but in the mid-17th century 5 became the norm. The tropes and images of the classical Ottoman gazel were extremely conventional; in many cases they appeared as early as the 12th century in the Persian ghazals of Sanāʾī. In general, the images of the gazel cast the poet as the lover singing to his beloved—that is, as the nightingale singing to the rose. The world of the gazel is thus largely confined to a garden, with a vocabulary related to the appearance and growth of flowers and plants and also to birds. A second family of images concerns the hair and face of the beloved, focusing on the eyes, eyebrows, mouth, and cheeks as well as the expressions created by these features. The speaker, addressee, and theme might also change from couplet to couplet. It was mainly religious gazels that retained a single speaker and theme; these single-melody poems were known as yek-ahenk. But by the mid-17th century, with the work of poets Cevri, Nâʾilî, Fehim, and Neşatî, gazels of all sorts became largely monothematic. The kasîde was an encomium whose object was to praise its subject. It had two major varieties, secular and religious.
Unlike the gazel, whose mystical references (as well as its secular ones) were often ambiguous, the religious kasîde had as its ostensible subject God, the Prophet Muhammad, or ʿAlī, Muhammad’s son-in-law and the fourth caliph. Secular kasîdes usually took as their subject individuals—a sultan, a vizier, a pasha, or a high member of the secular bureaucracy (ulema)—or specific events, such as a military victory. All kasîdes were divided into several sections. In the secular kasîde the lyric prologue (nesîb) often described some aspect of nature or the garden, while in the religious kasîde it might take a more general moral or philosophical theme. The medhîye followed, a section that named and praised the subject. In secular kasîdes this section’s imagery was usually drawn from the Shāh-nāmeh (“Book of Kings”), the epic completed by the Persian poet Ferdowsī in the 11th century, while in religious kasîdes allusions to the Qurʿān and Hadith were very common. After the medhîye came a couplet, the hüsn-i tahallus (literally, “beauty of the pen name”), in which the poet mentioned his own name. It led into a section of self-praise (the fahrîye), in which the poet lauded his skills. The poem might end with a hüsn-i taleb (literally, “beauty of the request”), in which he sought patronage or a favour. Within these parameters, the kasîde could take a wide variety of forms. Some are centred to such an extent on a specific situation or request of the poet that the distinctions between these sections become somewhat blurred. Other kasîdes share with gazels a lyric mood. During the 17th century a number of kasîdes incorporated into the fahrîye praise for poetry in general or, similarly, broader meditations on the nature of poetry. In size the kasîde varies from 14 to more than 100 couplets. The Ottomans’ principal narrative poetic form, the mesnevî, was also made up of couplets. (It was common practice for poets to insert gazels or other stanzaic forms into a mesnevî to express the speech of the characters.) Starting with Aşık Paşa and Yunus Emre in the 14th century, the mesnevî was often used by Sufi writers as a vehicle for didactic works. During the late 14th and early 15th centuries, Ottoman writers achieved distinction by writing original mesnevîs, such as the Çengname (“Tale of the Harp”), a mystical allegory by Ahmed-i Dâi, and the satirical Harname (“Tale of the Donkey”), by Sinan Şeyhi. A century later, Lâmiî Çelebi of Bursa initiated translations of the major Persian mesnevîs into Turkish. He was especially influenced by the 15th-century Persian scholar and poet Jāmī. Nevertheless, the major innovations in the narrative structure of the mesnevî created by the brilliant Chagatai poet ʿAlī Shīr Navāʾī, who was a student of Jāmī, had little effect among the Ottomans. Indeed, the acknowledged Ottoman master of the genre in the late 16th and early 17th centuries, Nevʾî-zade Atâyî, broke up the narrative into small unconnected tales and criticized Navāʾī for the complexity of his mesnevîs. The mesnevî was still used successfully at times for didactic works such as the 17th-century Hayrîyye of Yusuf Nâbî. By the 17th century both the Persian and the Chagatai mesnevî forms had gone into decline, and Ottoman writers generally ceased to treat the genre as one of first-rate literary significance.
Nevertheless, the final two major works of Ottoman literature were written in the mesnevî form: Hüsn ü aşk (1782; “Beauty and Love”), a mystical allegory by Şeyh Galib, and Mihnetkeşan (1822; “The Sufferers”), a self-satirizing autobiography by Keçecizade İzzet Molla. Thus, the Ottoman mesnevî was generally of the first order of literary significance only at the beginning and end of Ottoman literary history. The one striking exception is the Leyla ü Mecnun (Leylā and Mejnūn) of Mehmed bin Süleyman Fuzuli, written in the 16th century. Although this work has been accepted into the Ottoman canon, its author wrote within the Turkmen literary tradition, under the influence of the Chagatai mesnevî. While the gazel was the Ottoman lyric form par excellence, stanzaic forms were also in limited use. Stanzas ranged from 4 to 10 lines and were of two basic types: the müzdeviç, in which the last line (or couplet) of each stanza has the same rhyme, and the mükerrir, in which the last line (or couplet) is the same in each stanza. The four-line murabbaʾ form seems to have emerged from both Persian quatrain forms (especially the robāʿī) and Turkic quatrain forms (especially the tuyugh). Ottoman murabbaʾs often feature an epigrammatic style. The tercibend and terkibbend are more-elaborate stanzaic forms. Both feature stanzas with the stylistic features of the gazel, but, unlike gazels, each stanza in these forms is followed by a couplet with a separate rhyme. In the tercibend the same couplet is repeated after each stanza, while in the terkibbend each couplet following a stanza is unique. Poems that use these forms are frequently elegies, in which case they are called mersiyes. A masterpiece of the terkibbend genre is the elegy for Sultan Süleyman I written by Bâkî in the 16th century. Other Ottoman stanzaic forms utilize varying numbers of couplets, such as the müseddes, which has three. A fine example of this form is the Müseddes der ahvâl-i hod (“Six-Line Poem on His Own State”), by Nâʾilî. Less common are the müsemmen, with four couplets, the muʾaşşer, with five couplets, and the müsebbaʾ, with seven lines. The muhammes, a five-line poem, was generally reserved for a type of poetic imitation in which a second poet closed the poem by writing three lines that mimicked the style of the opening couplet, written by a first poet. The second poet might also insert three new lines between the first and second lines of the other poet’s couplet. In the muhammes the aim was for the second poet to subordinate his style to that of the first poet. (The type of imitation used in the muhammes was distinct from that used in certain types of gazel and kasîde in which a poet referred to a poem by another poet—or sometimes by two or three previous poets—in order to “answer” and surpass his predecessors.) Poetry’s place within Turkish society prior to the second half of the 15th century is relatively unknown, but the 16th century saw the composition of seven biographical dictionaries (tezkires) by Ottoman poets that make clear the high esteem in which poets and their poetry were held. Of these, five—by Sehî Bey (1538), Latifî (1546), Âşık Çelebi (1568), Hasan Çelebi (1585), and Ali Efendi (1599)—may be considered major examples of the genre. All five are large-scale works that include much biographical material as well as many anecdotes and some aesthetic judgments. Early in the 17th century, three more tezkires were written, of which one (by Riyazî) covers the entire 16th century in detail. 
Patronage for Ottoman poets in the classical age took a variety of forms. The location of this patronage varied as well: poets were attached to the imperial household in Bursa or, later, Istanbul, or they were supported at the provincial Anatolian courts of the Ottoman princes. These princes also sometimes took poets along on military campaigns. Aside from the sultan, the leading ministers of state might also contribute toward the upkeep of poets. The simplest form of patronage was the annual stipend. During the 15th and 16th centuries the sultan Bayezid II granted an annual stipend to each of more than 30 poets. Throughout the Ottoman Empire’s early history, either official patronage or a good position in the bureaucracy—or both—was available to (and often attained by) poets who were from provincial cities or otherwise outside the inner circles of Ottoman rulers. During the second reign (1451–81) of the sultan Mehmed II, the poet İsa Necati, who was of obscure origins, was able to attract the attention of the sultan, who read and admired one of his gazels and immediately had him enrolled as a chancery secretary. Hayali Bey, the most influential poet of the first half of the 16th century, was the son of a timar sipahî (feudal cavalryman) from Rumeli, in the Balkans. He began his career with a troupe of wandering dervishes and eventually came under the protection of the vizier İbrahim Paşa. Through the vizier he became a favourite of Sultan Süleyman I, who granted him a yearly stipend and the income of several fiefs. A major basis for this structure of poetic patronage was the bureaucratization of the ulema. (See Ottoman Empire: Classical Ottoman society and administration.) Once the ilmiye (ulema class) had become firmly attached to the imperial bureaucracy, it was possible for a talented poet who was a graduate of a madrassa (Turkish: medrese; a Muslim school of theology) to expect an appointment first as a mülâzim (assistant professor) and eventually as a müderris (professor). Among the many candidates for these professorships, a considerable number composed poetry and were, at least in their own minds, identified as “poets.” Some of the most talented or ambitious could use their poetry to advance quickly in the system. Bâkî is perhaps the supreme example of a poet who achieved success in the ilmiye system of mid-16th-century Turkey, but he is in no way typical. These two trends—the integration of the Islamic clergy into the Ottoman bureaucratic system and the separation (and subsequent expansion) of the secular bureaucracy from the madrassa-educated potential clergy—came to alter fundamentally the meaning of the word poet as a professional designation by the middle of the 16th century. From the beginning of the reign of Sultan Selim I in 1512 until the 1539 reorganization of the bureaucracy (following the execution of İbrahim Paşa in 1536), the Ottoman state seemed to be able both to fill its expanded bureaucracies and to support leading poets. But it appears that, after this time, the state began to view its bureaucratic and fiscal needs as holding priority over its literary ones. The entry of Rüstem Paşa into the office of grand vizier in 1555 ushered in a new period of fiscal austerity and antiliterary sentiment in which new poets had a much slimmer chance of patronage. The real and apparently inexorable decline in state patronage for poetry set in with the accession of Murad III as sultan in 1574.
Ottoman poetry of the later 15th and 16th centuries represents a mature synthesis of the three major Islamic languages—Turkish, Persian, and Arabic—within a secure matrix of Turkish syntax. Despite the hybridization of courtly literary language, the literary production of the Ottoman court, almost alone among Turkic dynasties of the period, remained predominantly Turkish. A close analysis of the language of the classical-age poets reveals the liberal use of Turkish linguistic features, sometimes linked with popular and humorous effects, even to the point of self-parody. Stylistically, the 16th century was marked by two major trends: the further elaboration of the Turkish courtly style of the later 15th century, represented by Necati, and the creation of a new synthesis of Sufi and secular concerns. The foremost representative of the former movement was Bâkî; the foremost representative of the latter was Hayali Bey. In the second half of the 16th century, the courtly style asserted itself by way of the brilliant poetry of Bâkî. A ranking member of the ulema, Bâkî perfected an essentially secular style that held a central position in the poetry of the period. Among Bâkî’s couplets are
Behold the beauty that expands the heart within the mirror of the rose—
Behold the one who holds the mirror to the shining face of Truth.
and
Behold the love-addicted heart—a beggar wandering in the street.
Behold the beggar who loves kingship and sovereignty.
In the first half of the 17th century this courtly style was represented most notably by Yahya Efendi, who rose to the position of şeyhülislâm, the highest rank within the ulema. However, this style was challenged by Yahya Efendi’s contemporary Nefʾi, an aristocrat from the eastern Anatolian provinces who was an outsider in the Ottoman capital. Nefʾi was a master of the kasîde, but he is also remembered for couplets such as
I am the wonder-speaking parrot.
Whatever I say is no idle chatter.
He emphasized his outsider identity by perfecting his satirical verse (hiciv; Arabic: hijāʾ) and by adopting features of the new Indo-Persian style of the Mughal court in northern India. In doing so, he initiated a major stylistic movement in Ottoman poetry. The principal poets of this school, some of them students or followers of Nefʾi, were Cevri, Nâʾilî, Fehim, and Neşatî, all of whom wrote some of the very finest verse in Ottoman Turkish. By using an almost exclusively Persian lexicon, however, their poetry reversed the dominant trend of Ottoman poetry. In the 17th century this newer style of poetry was termed tâze-gûʾî (“fresh speech”) or tarz-i nev (“new style”). (By the early 20th century it had come to be known as poetry of the Indian school, or Sabk-i Hindī.) In the late 16th century the two most important figures had been the Indian-born poet Fayzî and the Iranian Urfî (who was patronized in India). The Persian poets of the next generation, such as Kalîm Kâshânî and Saib-i Tabrizi, were encouraged by the Mughal court to develop their meditations on the poetic imagination. Much of this new philosophy of literature and poetic style influenced a major group of 17th-century Ottoman poets. The death of the Ottoman sultan Murad IV in 1640 was followed by a series of events that resulted in a progressively weaker basis for governmental patronage of poetry. While higher clerical positions continued to be monopolized by a group of prominent Istanbul ulema families in the mid-17th century, places in the secular bureaucracies were being apportioned largely according to political patronage.
Poets continued to rise through the ranks of the bureaucracies, but only rarely was their poetic ability a major factor in their careers or a source of much material benefit for them. The ulema, however, continued to produce poets, the most illustrious of whom was the şeyhülislâm Bahayî Efendi. Like his predecessor Yahya Efendi, he was the scion of an illustrious ulema family. Bahayî Efendi’s poetry is a continuation of Bâkî’s style as it was developed by Yahya Efendi, and, as such, it furnishes the prime example of the neoconservative tendencies of the poets of his class. It is also indicative of the secondary position of poetry within his life that his divan is very small; it contains only 6 kasîdes and 41 gazels. The major contemporary source for knowledge about the poets of the mid-17th century is the Teşrîfâtʾ üs-şuarâ of Edirneli Güftî, written in 1660–61—the only Ottoman tezkire composed as a mesnevî. It was neither commissioned nor, apparently, presented to any patron, and its major function seems to have been as a means for the author to satirize and slander many of his contemporaries. It was also a general attack on and complaint about the literary situation in Turkey. Beginning in the early 17th century, the Mawlawīyah (Turkish: Mevleviyah), an order of dervishes who were followers of the 13th-century Sufi mystic and poet Rūmī, were exerting a major influence on poetry. Cevri and Neşatî are the prime examples of leaders of the “fresh speech” who were committed Mawlawīyah. In the Ottoman capital the order began to create an alternative structure of literary evaluation that was independent of the courtly tradition, which had by this time become largely dominated by the higher ulema. The leading poet of the later 17th century was Nâbî, a provincial notable who became an intimate of the second vizier, Köprülü Fazıl Mustafa Paşa, and eventually served as his chancery secretary. In his youth Nâbî attracted the notice of Nâʾilî, the most eminent poet of his time. Nâbî’s fame rests mainly on his didactic mesnevî Hayrîyye, which contains moral maxims for his son. The 18th century witnessed significant changes in style and genre that led ultimately to the dissolution of the classic form of Ottoman poetry. But these changes were incremental and resulted in major stylistic splits only after the middle of the century. The first third of the 18th century was dominated by Ahmed Nedim, scion of an illustrious ulema family, who rose to prominence under the grand vizier Damad İbrahim Paşa between 1718 and 1730. Nedim’s fame rests largely on his kasîdes, the strongest and most original since those of Nefʾi a century earlier, and on two lesser genres that were undergoing development at this time—the tarîh (chronogram) and the şarkı (a form of urban popular song). The tarîhs of Nedim display an entirely new awareness of the physical characteristics of the buildings being praised, thereby registering a perceptible shift from formal, highly stylized techniques of literary representation to ones based partly on observation of worldly phenomena. Similarly, his şarkıs revel in the physical surroundings of the pleasure grounds of Saʿdābād Palace in Istanbul. The leading poet of the middle of the 18th century was Koca Ragıb Paşa, whose public life was that of a high bureaucrat and diplomat. His career extended from serving as chief secretary of foreign affairs and, later, as grand vizier to being governor of several large provinces.
Ragıb Paşa made no striking formal innovations, but the language of his gazels shows a happy synthesis of the canonical tradition of Bâkî with the “fresh” (or “Indian”) style of Nâʾilî. By this period such stylistic departures no longer aroused the acrimony of a century earlier. The last third of the 18th century saw a lack of faith in older lyric metaphors. Drawing on the tradition of popular theatre, poets turned toward colloquial speech. At times they also embraced a new form of poetic subversion by which the praise characterizing the traditional lyric was replaced by its traditional opposite—hiciv, the poetry of satire. Vâsif Enderunî combined local Istanbul speech with a strong reminder of Nedim’s kasîdes and gazels in his poetry. Fazıl Enderunî went even further in his development of the şehrengiz (city-description) genres, of which Hubanname (“The Book of Beauties”), Zenanname (“The Book of Women”), and Çengîname (“The Book of Dancing Boys”) were part. All of these are replete with dialogue and descriptions that are both satirical and vulgar. The album paintings accompanying manuscripts of these works emphasize the new realism of their style and contents. These tendencies took a somewhat more mature form in the Mihnetkeşan (1823–24) of Keçecizade İzzet Molla, who wrote a humorous autobiographical mesnevî that has been hailed by some as the first work of modern Ottoman literature. Unique in Ottoman literature, the tale has no purpose other than to describe the author’s trials and misfortunes as he was sent into exile from the capital. One of the most important Ottoman literary classics was created at the end of the 18th century, when Şeyh Galib, a sheikh of the Galata Mawlawīyah dervishes, wrote his mesnevî Hüsn ü aşk (1782; “Beauty and Love”), an allegorical narrative poem. Galib, who had been befriended by Sultan Selim III, wrote with considerable reference to the Indian style, although by his era Ottoman poets were no longer conversant with contemporary Indo-Persian literature. Despite the masterly quality of Beauty and Love, which is perhaps the greatest mesnevî ever written by an Ottoman poet, neither Galib’s mystical theme nor his highly Persianate language was to have much influence on succeeding generations of Ottoman writers. The last chapter of traditional Ottoman verse was written in the mid- and late 19th century within a bureaucratic circle, the Encümen-i Şuarâ (“Council of Poets”) group of Leskofçali Galib Bey, which also included Arif Hikmet Bey and Yenişehirli Avnî Bey. The Indian-style poets of the mid-17th century, especially Nâʾilî, Neşatî, and Fehim, furnished the models for these late Ottoman poets, who rejected the type of change that began engulfing Ottoman literature in the 1840s. Two of the major poets of this generation, Ziya Paşa and Namık Kemal, began their literary careers as members of this conservative circle, only to break with it in their own mature works. The lack of faith in traditional literary models that had emerged during the later 18th century took a drastic new turn for the generation that experienced the Tanzimat reforms, which began in 1839 and, under the influence of European ideas, were aimed at modernizing the Ottoman state. The most radical new voice was that of İbrahim Şinasi, who studied in France and then returned to Constantinople for several years, during which time he started the newspaper Tasvir-i Efkar (“Description of Ideas”).
He subsequently remained active as a journalist and as a translator, and he also became the first modern Ottoman playwright with his Şair evlenmesi (1859; The Wedding of a Poet). At midcentury the central literary conflict was between Şinasi and Leskofçali Galib Bey, and Şinasi succeeded in winning both Ziya Paşa and Namık Kemal over to the cause of modernization. Ziya Paşa led a successful career as a provincial governor, but in 1867 he fled to France, England, and Switzerland; while in exile he collaborated with Şinasi. In Geneva in 1870, Ziya Paşa wrote the Zafername (“The Book of Victory”) as a satire on the grand vizier Mehmed Emin Âli Paşa and as a general attack on the state of the empire. Written in classical language, it nonetheless represents a far-reaching modern development of the type of satire used by Vasif Enderunî in the previous generation. Ziya Paşa’s poetic anthology Harabat (1874; “Mystical Taverns”) is a thoughtful attempt to evaluate the Ottoman literary heritage and to create a classical canon. Namık Kemal took over the newspaper Tasvir-i Efkar when Şinasi fled to Paris in 1865, but in the late 1860s he left Turkey for London, where he published the newspaper Hürriyet (“Freedom”). Eventually he devoted himself to poetry and theatre that usually carried a strong nationalist and modernizing message. His most famous play was Vatan; yahut, Silistre (1873; “The Motherland; or, Silistria”). After the accession of Abdülhamid II as sultan in 1876, Kemal spent most of the rest of his life in exile. The increasingly strict censorship in the reign of this sultan, which lasted until the revolution of the Young Turks in 1908, limited the possibilities for the development of new Ottoman literature. The novel made its appearance in Turkish in the late 19th century, most notably with the works of Ahmet Mithat, who published prolifically between 1875 and 1910. During Mithat’s lifetime, both the novel and poetry assumed a strongly public, didactic orientation that would prove highly influential among many writers well into the 20th century. Tevfik Fikret became a major literary voice of the late Ottoman era through his editorship of the literary journal Servet-i fünun (1896–1901; “The Wealth of Knowledge”) and his leadership of the literary circle of the same name. His poetry displays a shift from the romanticism of his early works (such as Rübab-i şikeste [1900; “The Broken Viol”]) to social and political criticism after 1901. Abdülhak Hâmid’s career spans the late Ottoman, Young Turk, and early republican eras. While maintaining a successful life as a state official and diplomat, he wrote poetry and plays using a style that mixed classical and journalistic effects. Despite the numerous political problems of the Young Turk era (1908–18), the relative easing of censorship compared with the previous regime allowed writers a greater freedom of expression, which they were quick to take advantage of, both thematically and stylistically. Refik Halid Karay was a journalist who became one of the leading short-story writers in Turkey. His political columns, mainly of a satirical nature, appeared between 1910 and 1913 in various journals; they were published under the pen name Kirpi (“The Porcupine”) and were collected in Kirpinin dedikleri (1919; “What the Porcupine Said”). Many of his columns display a highly nuanced ear for the local speech of various social groups and a keen eye for detail in locations within Istanbul. These qualities also characterized his short-story writing. 
Among his best short stories are Şeftali bahçeleri (1919; “The Peach Orchards”), a half-ironic description of the placid lives of an earlier generation of provincial bureaucrats, and Şaka (1913; “The Joke”), a more jaundiced view of the same class during his own time. Karay also wrote a number of novels, none of them matching the quality of his short stories. His political opinions, expressed mainly in his journalistic writings, led to his being exiled to Anatolia from 1913 to 1918 by the Young Turks and to Lebanon and Syria from 1922 to 1938 by Kemal Atatürk. Other writers who emerged during the early 20th century, such as Memduh Şevket Esendal, Ömer Seyfeddin, Yakup Kadri Karaosmanoğlu, and Reşat Nuri Güntekin, employed the short story mostly as a vehicle for social edification and commentary. One of the period’s more striking figures was Halide Edib Adıvar. Educated at the American College for Girls in Istanbul, she later taught English literature at Istanbul University (1939–64) and wrote some of her best-known works in English. Among these are The Clown and His Daughter (1935), which later became a best seller in Turkish as Sinekli bakkal. Although she and her husband joined Atatürk’s rebellion against the Allies and the Ottoman government, they were exiled from Turkey soon afterward (1923–39). Nevertheless, she maintained a strongly nationalist stance in her work. Halide Edib was the first Turkish female writer to attain widespread recognition. Şevket Süreyya Aydemir was principally a writer of short stories, but his autobiographical novel Suyu arayan adam (1961; “The Man in Search of Water”) displays a brilliant style and reveals a deep search for a personal and national self that was rare in Turkish prose. In poetry the outstanding figure of that generation was Yahya Kemal Beyatlı. Born in Skopje (Üsküb; now in Macedonia), Beyatlı studied in Paris for several years and subsequently taught at Istanbul University. After the proclamation of the Turkish republic, he held several ambassadorial posts. Although he supported republican principles, much of Beyatlı’s poetry glorifies the Ottoman past. His lasting artistic achievement was his synthesis of classical Ottoman and contemporary French poetry. One of the most multifaceted figures of 20th-century Turkish literature is Ahmed Hamdi Tanpınar. A scholar of modern Turkish literature, he taught at Istanbul University for most of his life and published much literary criticism, including a major critical work on the poetry of Beyatlı, under whom he had studied. But Tanpınar’s scholarship was overshadowed by his short stories, novels, and lyrical poetry. Tanpınar is considered the founder of modernist fiction in Turkey largely on the basis of his novels. Saatleri ayarlama enstitüsü (serialized 1954, published in book form 1961; The Time Regulation Institute), the most complex novel written in Turkish until the 1980s and ’90s, is his most important. It is the autobiography of Hayri Irdal, a poorly educated petit bourgeois born in Istanbul in the 1890s. He follows charlatans of various types until he begins working for the founder of the Time Regulation Institute, which is responsible for ensuring that all clocks in Istanbul are set to the same time. Both the founder—the American-style entrepreneur of the Turkish present—and Hayri—the conformist of the Turkish past—emerge as reprehensible figures who offer scant hope for a Turkish society immersed in cultural self-deception.
His other novels include Huzur (1949; “Contentment”) as well as the posthumously published Sahnenin dışındakiler (1973; “Those Not on the Stage”), Mahur beste (1975; “Composition in the Mahur Mode”), and Aydaki kadın (1987; “The Woman on the Moon”). Tanpınar’s poetic output, while not extensive, is also highly regarded. Throughout all his writings he demonstrated his ability to bridge the cultural gap between the Ottoman and the republican periods. Most poets of the 1930s and ’40s rejected Beyatlı’s neo-Ottomanism and preferred a much simpler style reminiscent of folk poetry. The outstanding figure of the era was Nazım Hikmet. Born in Salonika (now Thessaloníki, Greece), he studied in Moscow, became a devoted communist, and was much influenced by the modernist style of the Russian poets Aleksandr Blok and Vladimir Mayakovsky. His Şeyh Bedreddin destani (1936; The Epic of Sheikh Bedreddin), written during a short imprisonment in Turkey, is an unprecedented work that blends a folk ballad style with poetic modernism. His politically motivated incarceration was followed by a much longer period of imprisonment in 1938, from which he was freed only in 1951. He spent the rest of his life in Russia and traveling in Europe. His later poetry was not published in Turkey until 1965, two years after his death, and so affected only a much younger generation. He went on to become the most widely known and translated Turkish poet of the 20th century. His major works include Moskova senfonisi (1952; The Moscow Symphony) and Memleketimden insan manzaraları (1966–67; Human Landscapes from My Country). In 1941 three poets—Orhan Veli Kanık, Oktay Rifat, and Melih Cevdet Anday—initiated the Garip (“Strange”) movement with publication of a volume of poetry by the same name. In it they emphasized simplified language, folkloric poetic forms, and themes of alienation in the modern urban environment. Later, Anday broke with this style, treating philosophical and aesthetic issues in his poetry in a more complex manner. One of his best-known collections of poetry is Göçebe denizin üstünde (1970; On the Nomad Sea). The Garip group had immense influence on Turkish poetry. A contemporary poet who also used folk metrics and language with success was Cahit Sıtkı Tarancı; unlike the Garip poets, who were urban dwellers, he was born deep in Anatolia. Behçet Necatigil chose a distinct poetic path, eventually creating poems that are models of brevity and wit; they occasionally refer obliquely to the Ottoman culture of the past. Sevgilerde (1976; “Among the Beloveds”) is a collection of his earlier poetry. Fazıl Hüsnü Dağlarca wrote modernist poetry, often with a socialist outlook, while pursuing a career in the military, which he left in 1950. He became one of Turkey’s most influential poets during the post-World War II era. Choosing a simplified and modernist literary form, Necip Fazıl Kısakürek, who taught literature in Turkey at the University of Ankara, turned his critique of the alienation of the individual in modern society into a conservative Islamist political message. Collections of his poetry include Sonsuzluk kervanı (1955; “The Caravan of Eternity”) and Şiirlerim (1969; “My Poems”). The two outstanding short-story writers of the mid-20th century were Sait Faik Abasıyanık and Sabahattin Ali. 
Leading a reclusive and uneventful life as a high-school teacher in Istanbul, Abasıyanık revolutionized the Turkish short story by choosing a stream-of-consciousness style in which plot is de-emphasized; this style focuses the reader’s attention on the perspective of the writer in a way that had not been attempted previously in Turkish. As such, he was a lonely figure in Turkish letters and came to be better appreciated only after his death, in 1954. His many collections of short stories include Semaver (1936; “The Samovar”) and Mahkeme kapısı (“The Court-House Gate”), published posthumously in 1956. Sabahattin Ali was probably the most powerful and effective of the 20th-century short-story writers in Turkey who addressed social themes. He was born into a military family in northern Greece, and he studied and taught in Germany, where his controversial writing caused him to lose his teaching position and to be imprisoned for libel in 1948. A year later, after his release, he was assassinated under mysterious circumstances. His short story Ses (1937; “The Voice”) is representative of his thematic concerns: it describes an encounter between two educated urbanites and a village troubadour, through which Sabahattin Ali points toward the incompatibility of the aesthetic ideals of the West and those of a Turkish village. His most famous collections of short stories are Değirmen (1935; “The Mill”) and Kağnı (1936; “The Oxcart”). As literacy spread to the countryside after the founding of the Turkish republic in 1923 and the output of urban writers became more varied, Turkish writers expanded their thematic horizons. Among the most influential novelists of the generation born in the 1920s is Yashar Kemal. Born in a small village in southeastern Anatolia, where he never completed his secondary education, Kemal arrived in Istanbul in 1951 and found work with the prestigious newspaper Cumhuriyet. In 1955 he published the novella Teneke (“The Tin Pan”) and his first full-length novel, Ince Memed (“Thin Memed”; Eng. trans. Memed, My Hawk), both of which brought him immediate recognition in Turkey. The latter, like many of his other novels and short stories, is set in the rural eastern Anatolia of his youth and portrays the glaring social contradictions there, often aggravated by the process of modernization under the capitalist system. The novel gained an international audience and was adapted as a film in 1983. Kemal subsequently published novels at frequent intervals, including Yer demir gök bakır (1963; Iron Earth, Copper Sky), Binbogalar efsanesi (1971; “The Legend of a Thousand Bulls”), and Demirciler çarşısı cinayeti (1974; Murder in the Ironsmiths Market). He also published highly acclaimed collections of short stories. While Kemal’s works appear to be realistic and straightforward, his subtle narrative techniques ensure that his works are appreciated by a wide range of readers. Another leading novelist born in the 1920s, Adalet Ağaoğlu, portrayed life from a more personal and introverted perspective than Kemal. She was one of the generation that suffered from the repression after the military coup d’état in 1971, and she based some of her fiction on these experiences. Her novels deal with a broad spectrum of the social changes that occurred within the Turkish republic, and she was among the first Turkish writers to treat sexual themes in depth. Her first novel, Ölmeye yatmak (1973; “Lying Down to Die”), brought her considerable success. 
She followed this with Fikrimin ince gülü (1976; “The Slender Rose of My Desire”), Bir düğün gecesi (1979; “A Wedding Party”), Üç beş kişi (1984; “A Few People”; Eng. trans. Curfew), Hayır (1987; “No”), and Ruh üşümesi (1991; “A Shiver in the Soul”). The promising literary career of Sevgi Soysal was cut short by her untimely death in 1976. Born in Istanbul, Soysal studied philology in Ankara and archaeology and drama in Germany. Her first novel, Yürümek (1970; “To Walk”), features a stream-of-consciousness narrative and a keen ear for local dialogue; its treatment of sexual issues was unusually open for its time. She was imprisoned after the coup of 1971, an experience she recounted in her memoir Yıldırım bölge kadınlar koğuşu (1976; “The Yildirim Region Women’s Ward”). Other novels by Soysal include Yenişehirʿde bir öğle vakti (1973; “Noontime in Yenişehir”) and Şafak (1975; “Dawn”). She also wrote short stories. Beginning with Troyaʿda ölüm vardı (1963; Death in Troy), Bilge Karasu created works that display a sophisticated narrative style. Among his novels and novellas are Uzun sürmüş bir günün akşamı (1970; “The Evening of One Long Day”), Göçmüş kediler bahçesi (1979; The Garden of Departed Cats), Kısmet büfesi (1982; “The Buffet of Fate”), and Kılavuz (1990; “The Guide”). Although he wrote prolifically in every genre, Necati Cumalı is known primarily as a short-story writer. He abandoned a career as a lawyer for writing and subsequently lived in Paris, Istanbul, and Israel. His first collection of short stories was Yalnız kadın (1955; “The Lonely Woman”). He explores the end of Turkish life in the Balkans in his collection Makedonya 1900 (1976; “Macedonia 1900”). The poet Attilâ İlhan also wrote several successful novels. He lived and worked in Paris intermittently between 1949 and 1965 and later worked as a journalist in Turkey. His poetry, while modernist in its use of highly sophisticated language, often refers to Ottoman poetry, music, and history. Several of his poem cycles refer to the Young Turk era and the Balkan Wars. His poems on the latter focus on the political catastrophes that led to the fall of the Ottoman Empire and the emergence of modern Turkey, a theme that he also developed in several of his novels, including Dersaadette sabah ezanları (1982; “Morning Calls to Prayer at the Sublime Porte”). İlhan’s collections of poems include Duvar (1948; “The Wall”), Sisler bulvarı (1954; “The Avenue of Mist”), Yağmur kaçağı (1955; “The Rain Deserter”), Tutuklunun günlüğü (1973; “Diary of Captivity”), Korkunun krallığı (1987; “The Principality of Fear”), and Ayrılık sevdaya dahil (1993; “Separation Is Included Within Love”). Among the poets of the latter half of the 20th century, Sezai Karakoç blended European and Ottoman sensibilities with a right-wing Islamist perspective. His poetry collections include Körfez (1959; “The Gulf”) and Şiirler VI (1980; “Poems VI”). Karakoç also published numerous essays on Islam. The poet İsmet Özel began his career as a Marxist, but by the 1980s his writing had become strongly Islamist, although of a more liberal variety than Karakoç’s. Özel’s volumes of poetry include Evet isyan (1969; “Yes, Rebellion”) and Celladıma gülümserken (1984; “While Smiling at My Executioner”). Ataol Behramoğlu studied in Ankara and Moscow as well as in England and France. Often seen as the successor to Nâzım Hikmet, he merged political themes and folkloric forms.
Among his collections of poetry are Kuşatmada (1978; “During the Siege”) and Türkiye üzgün yurdum, güzel yurdum (1985; “Turkey My Sad Home, My Beautiful Home”). Hilmi Yavuz worked as a journalist in London, where he also completed a degree in philosophy, and he later taught history and philosophy in Istanbul. In his poems the aesthetics of Ottoman civilization become the object of deep, at times nostalgic, reflection within a thoroughly modernist framework. His volumes of poetry include Bakiş kuşu (1969; “The Glance Bird”) and Doğu şiirleri (1977; “Poems of the East”); Seasons of the Word (2007) is a collection of his poetry in English translation. The two best-known novelists in Turkey at the turn of the 21st century were Orhan Pamuk and Latife Tekin. In very distinct ways, both expanded the scope of the novel in Turkish and opened up modern Turkish literature to readers in Europe and North America. To a large extent, their differences in social background and gender impelled them toward radically divergent literary paths. In his first novel, Cevdet Bey ve oğullari (1982; “Cevdet Bey and His Sons”), Pamuk wrote about the entry of Turkish Muslim merchants into the higher bourgeoisie during the Young Turk era. He gained international fame with Beyaz kale (1985; The White Castle), the first of his novels to be translated into English. This work, set in the mid-17th century, is a meditation on the oppositions between East and West. He returned to these themes in his subsequent novels, including Kara kitap (1990; The Black Book)—which, set in contemporary Istanbul, alludes to Ottoman mystical literature while playfully deconstructing the Turkish cultural present—and Kar (2002; Snow). Pamuk, who won the Nobel Prize for Literature in 2006, was perhaps the only Turkish novelist of his time to have built upon the avant-garde aspect of Tanpınar’s writing. Tekin’s first novel, Sevgili arsiz ölüm (1983; Dear Shameless Death), depicts many of her own experiences as a displaced villager from Anatolia in the metropolis of Istanbul. Berci Kristin çöp masallari (1984; Berji Kristin: Tales from the Garbage Hills) focuses on female characters living in a new shantytown. Tekin’s deconstruction of narrative duplicates the deconstruction of every element of the life of the former villagers, which does not spare any part of their former religious and social belief system. The manner in which her novels use the Turkish language sets her critique of modernity apart from and beyond earlier attempts to treat similar themes in Turkish literature. Her other novels include Gece dersleri (1986; “Night Lessons”) and Buzdan kılıçlar (1989; Swords of Ice). The works of Pamuk and Tekin mirrored Turkey’s identity at the turn of the 21st century, when the country was the heir to, on the one hand, a sophisticated urban civilization with a history of both confronting the West and desiring to assimilate its values and, on the other hand, a rural culture that remained embedded in the developing world and vulnerable to a predatory modernity. Karl Reichl, Turkic Oral Epic Poetry: Tradition, Forms, Poetic Structure (1992), is the best introduction to the epics of the Turkic peoples. A useful study of The Book of Dede Korkut is Ilhan Başgöz, “Epithet in a Prose Epic: The Book of My Grandfather Korkut,” in Ilhan Başgöz and Mark Glazer (eds.), Studies in Turkish Folklore, in Honor of Pertev N. Boratav (1978), pp. 25–45. Kemal Silay (ed.), An Anthology of Turkish Literature (1996), includes translations of epic and Sufi poetry. E.J.W. 
Gibb, A History of Ottoman Poetry, 6 vol. (1900–09, reprinted 1963–84), is the standard history. While based primarily on critics of the late Ottoman period, it had not been entirely superseded even at the turn of the 21st century. Walter G. Andrews, An Introduction to Ottoman Poetry (1976), and Poetry’s Voice, Society’s Song: Ottoman Lyric Poetry (1985), examine both the formal properties of Ottoman poetry and its social context. Victoria Rowe Holbrook, The Unreadable Shores of Love: Turkish Modernity and Mystic Romance (1994), is a study of Şeyh Galib’s Beauty and Love that is also a polemic about the study of Ottoman poetry. J. Stewart-Robinson, “The Ottoman Biographies of Poets,” Journal of Near Eastern Studies, 24(1/2):57–74 (January–April 1965), explores the social background of the Ottoman poets. Walter G. Andrews, Najaat Black, and Mehmet Kalpaklı, Ottoman Lyric Poetry: An Anthology (2006), has the best group of translations of Ottoman gazels. Nermin Menemencioğlu and Fahir İz (eds.), The Penguin Book of Turkish Verse (1978), contains many fine translations of Ottoman poetry. Among the few critical treatments in English of modern Turkish literature is Ahmet Ö. Evin, Origins and Development of the Turkish Novel (1983). Louis Mitler, Contemporary Turkish Writers: A Critical Bio-Bibliography of Leading Writers in the Turkish Republican Period Up to 1980 (1988), is a useful starting point for scholarship produced during and before the 1980s. İhsan Işik, Encyclopedia of Turkish Authors, 3 vol. (2005; originally published in Turkish, 2004), provides comprehensive coverage of modern Turkish authors as well as of authors from earlier periods. Talat Sait Halman (ed.), Contemporary Turkish Literature: Fiction and Poetry (1982), is a valuable anthology. Fahir İz (ed.), An Anthology of Modern Turkish Short Stories (1978, reissued 1990), contains a number of classic tales by Refik Halid Karay, Sait Faik Abasıyanık, and others.
Secret of birdsong something to tweet about

London - Birdsong is one of nature’s greatest delights – and scientists believe they may have discovered why. They have found that songbirds possess a musical instrument more complex than anything found in an orchestra to produce their beautiful sound. Unlike humans, birds possess two usable pairs of vocal cords, allowing them to produce two different notes at the same time, even in flight. While humans and other mammals make sound with their larynx, birds have evolved a unique structure, known as the syrinx, located where the windpipe forks to the lungs. Researchers using 3D imaging examined a zebra finch’s version of the voice box. They discovered how muscles, cartilage and bone work in tandem, allowing birds to sing highly intricate songs even on the wing. Much like humans, birds use sounds to communicate with each other. Different calls can signal their identity or their species and are vital in attracting partners and dealing with predators, as well as educating their young. Lead scientist Dr Coen Elemans, from the University of Southern Denmark, said songbirds routinely perform while in constant motion. “Songbirds in particular excel at vocal communication”, said Dr Elemans. “Just imagine an orchestra musician playing his instrument while performing a dance. How do birds do this? We know quite a bit about how the songbird brain codes and decodes songs and how young songbirds learn to imitate the songs of their adult fathers. But we know very little about the instrument itself. “We used cutting-edge 3D imaging techniques to understand the complicated structure of the vocal organ of songbirds, the syrinx. We show how it is adapted for superfast trills and how it can be stabilised while the bird moves.” The research is published in the online journal BMC Biology. - Daily Mail
Social Classes in India (Rural and Urban)

Social Classes in Rural India:

In British India a new type of landlord was created out of the erstwhile tax collectors, the Zamindars, by the permanent land settlement. Under the terms of this settlement the right of ownership was conferred on the Zamindars. Before this settlement, the land was auctioned by the State on a patta basis, under which the Zamindars had only the right to collect revenue. After the settlement, the land became theirs permanently; that is, they became its hereditary owners. The Zamindars’ only obligation was the payment of a fixed land revenue to the British government. Broadly, there were two types of landlords: (i) the Zamindars and taluqdars (old landlords) and (ii) money-lenders, merchants and others. Those who held such ownership or tenure rights were often referred to as intermediaries. On the eve of independence, the class of intermediaries owned a large portion of the land, while the peasant cultivators had little or no land.

The Zamindari system was abolished in the 1950s in order to do away with intermediaries. The abolition of the Zamindari system had several consequences and led to the formation of new classes. When the Zamindari system was abolished, the Zamindars declared themselves the owners of the land and forced most of the tenants off the land they had given on lease. As a result, most of the tenants who had actually been cultivating the land prior to the land reforms were thrown off their lands, and most of them became landless labourers. After the land reforms the Zamindars in U.P. simply came to be renamed bhoomidars, i.e., cultivators of the soil. However, the Zamindars lost their right to extract taxes from the peasants and were left with truncated landholdings. The erstwhile landlords took maximum benefit from the Green Revolution programme launched by the government. This led to the development of a class of “gentleman” or progressive farmers who had some education and training in agriculture.

Another settlement made by the British is known as the Ryotwari Settlement. Under this system ownership of land was vested in the peasants, the actual cultivators, subject to the payment of revenue. This settlement gave rise to a class of peasant proprietors. In the process, a few climbed up in the socio-economic hierarchy, but a large number fell from their previous rank and position. A great majority of them were transformed into tenants and even agricultural labourers. In the post-independence period, there was an increase in the number of peasant proprietors. This was due to the abolition of the Zamindari system and the ceiling on existing landholdings. The peasant proprietors, in the past as well as in the present, hardly constitute a homogeneous category. They may be broadly divided into three categories: (i) rich, (ii) middle and (iii) poor peasants.

(i) Rich Peasants: They are proprietors with considerable holdings. They perform no field work but supervise cultivation and take personal interest in land management and improvement. They are emerging as a strong capitalist farmer group.

(ii) Middle Peasants: They are landowners of medium-sized holdings. They are generally self-sufficient. They cultivate the land with family labour.

(iii) Poor Peasants: They are landowners with holdings that are not sufficient to maintain a family. They are forced to rent in others’ land or to supplement their income by working as labourers.
They constitute a large segment of the agricultural population. Non-cultivating landlords, peasant proprietors and tenants constituted the social groups connected with agriculture. There was also a progressive rise in the number of agricultural labourers. The agricultural labourers were, and still are, broadly of three types. Some owned or held a small plot of land in addition to drawing their livelihood from the sale of their labour. Others were landless and lived exclusively by hiring out their labour. In return for their labour they were paid wages, which were very low. Their condition was far from satisfactory. There was another type of labour class prevailing in many parts of the country, whose status was almost that of bonded or semi-bonded labourers. The Dublas and Halis in Gujarat and the Padials in Tamil Nadu are a few examples of such a bonded labour force in India; such labour exists in some parts even today.

Many studies of the agrarian class structure in India have been conducted, by scholars including Andre Beteille, Hamza Alavi, Daniel Thorner and A.R. Desai. Andre Beteille, in his work “The Case of the Jotedars”, writes that in rural society people divide up their social universe not only in terms of categories of caste but also in terms of economic categories. The first set of categories can be called the ‘community type’ and the second the ‘class type’.

T.K. Oommen lists the following five categories:

(i) Landlords, who own but do not cultivate land, either employing intermediaries or leasing out land.

(ii) Rich farmers, who look upon agriculture as a business proposition, produce for the market and for profit, employ wage labour, and supervise rather than cultivate.

(iii) Middle peasants, who cultivate their own land and hire labourers only for certain operations or at certain points of time.

(iv) Poor peasants, who own small and uneconomic holdings and often have to work as part-labourers or as share-croppers or tenants.

(v) Landless agricultural workers, who sell their labour and fully depend on the first three categories for their livelihood.

The Indian Communist parties give a five-fold classification:

(i) Landlords (feudal and capitalist), who do not take part in manual labour.

(ii) Rich peasants, who participate in manual work but mainly employ wage labourers.

(iii) Middle peasants, who own or lease land which is operated predominantly by their family and also by wage labourers.

(iv) Poor peasants, whose main income is derived from land leased or owned, but who employ no wage labourers.

(v) Agricultural labourers, who earn their livelihood mainly through selling their labour in agriculture or allied occupations.

Hamza Alavi adopted the three-fold classification of peasants under the headings of rich, middle and poor peasants.

In rural areas the class of artisans forms an integral part of the village community. They have existed since ancient times, contributing to the generally self-sufficient image of the Indian village. Among them are the carpenter (Badhai), the ironsmith (Lohar), the potter (Kumhar) and so on. Rural artisans and craftsmen were hard hit under British rule. They could not compete with the mass-manufactured goods produced by British industries. The artisans suffered badly and most of them had to revert to agriculture. This in turn flooded the agricultural fields with surplus labour, which became counterproductive rather than useful.
After independence, the Indian government has taken steps to improve the condition of the artisans. However, the class of artisans and craftsmen in the rural areas is not a homogeneous lot. Within the group some are highly skilled while others are semi-skilled or less skilled. Thus, socially they cannot all be ranked in a single occupation, but in a broad sense they can be considered a class by virtue of their occupation.

Social Classes in Urban India:

In the urban areas the social classes comprise principally (i) capitalists (commercial and industrial), (ii) professional classes, (iii) petty traders and shopkeepers and (iv) the working class.

1. Commercial and Industrial Class: During British rule there grew up a class of merchants engaged in the export-import business. Thus there came into being a commercial middle class in the country. Subsequently, the rich commercial middle class invested its savings as capital in large-scale manufacture and modern industries. Indian society thus came to include such new groups as mill owners, mine owners, etc. Economically and socially this class turned out to be the strongest class in India. After independence, major fields like agriculture, industry and trade were left to private individuals, while the creation of infrastructure and the establishment of heavy industries were taken up by the State sector. This type of economy led to a phenomenal rise in the number of industries owned and controlled by capitalists. It also led to the rise of the commercial classes. There is a heavy concentration of assets, resources and income in a few business houses such as the Tatas, Birlas, Dalmias and a few others.

2. Professional Classes: During British rule there came into being an expanding professional class. Such social categories were linked with modern industry, agriculture, commerce, finance, administration, the press and other fields of social life. The professional classes comprised lawyers, doctors, teachers, managers and others working in modern commercial and other enterprises, engineers, technologists, agricultural scientists and so on. Rapid industrialization and urbanization in post-independence India opened the way for large-scale employment opportunities in industry, trade and commerce, construction, transport, services, etc. Similarly, the State has created a massive institutional set-up comprising a complex bureaucratic structure throughout the length and breadth of the country. Bureaucrats, management executives, technocrats, doctors, lawyers, teachers, journalists, etc., have grown considerably in number ever since independence. But this class hardly constitutes a homogeneous category. Within this non-proprietary class of non-manual workers a deep hierarchy exists: there are some highly paid cadres at the top and low-paid ones at the bottom. They differ in their style of life as well. In view of this, they have not crystallised into a well-defined middle class.

3. Petty Traders, Shopkeepers and Unorganized Workers: There has also existed in urban areas a class of petty traders and shopkeepers. These classes have developed with the growth of modern cities and towns. They constitute the link between the producers of goods and commodities and the mass of consumers. They make their living on the margin between the prices at which they buy and sell their goods. Like all other classes, this class has grown on a large scale in post-independence India.
The unprecedented growth of the cities has stimulated the growth of this class. The growing urban population creates demand for various kinds of goods and services, and petty shopkeeping and trading cater to these needs. Besides these spheres of activity, urbanisation also offers opportunities for employment in the organised and unorganised sectors of the economy. The bulk of rural migrants lack educational qualifications, and hence the organised sector is closed to them. They fall back upon the unorganised sector of the economy, working in small-scale production units, crafts, industry or manual service occupations. They get low wages and are also deprived of the benefits enjoyed by the organised labour force. This class, too, constitutes an amorphous category. It comprises, on the one hand, self-employed petty shopkeepers, traders, vendors and hawkers and, on the other, semi-skilled and unskilled workers in the informal sector.

4. The Working Class: This was another class which emerged during British rule in India: the modern working class, the direct result of modern industries, railways and plantations. This Indian working class was formed predominantly out of pauperised peasants and ruined artisans. The working class has grown in size in post-independence India and has been distributed across different regions and different sectors of industry. Thus, the working class has become much more heterogeneous, and this diversity has given rise to a complex set of relations among its different sectors. In post-independence India, the government’s attitude towards the working class has become more favourable, and several Acts have been passed granting some facilities to the workers. Trade union movements have developed in independent India, yet considerable division exists among the trade unions in terms of control, sector and region of industry.
URLs can contain a wide variety of characters; however, only certain characters can be put directly into a URL. For example, you cannot put a space into a URL. If you wanted to pass the value “John Smith” to the variable “name”, you could not simply write a query string such as name=John Smith, because spaces are not allowed in a URL. To put a space into a URL, encode it, writing the query string as name=John%20Smith. The characters “%20” stand in for the space: a space is character code 32, and the decimal number 32 is 20 in hexadecimal. Any byte value, from 0 (hexadecimal 00) to 255 (hexadecimal FF), can be represented this way. However, the space character is special: there is a second way it can be represented. Spaces can also be written with the plus (+) character, so it is equally valid to write the query string as name=John+Smith. Because the plus sign is reserved to stand for a space, the only way to represent a literal “+” symbol in a URL is to use its hexadecimal code, “%2B”. Now that you understand how to construct a URL, it is time to see how to use one. This will be covered in the next section.
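To make the encodings above concrete, here is a minimal Python sketch using the standard library’s urllib.parse module. The host example.com and the path are placeholders of my own, not values taken from the text.

```python
from urllib.parse import quote, quote_plus, urlencode

# Percent-encode the space as %20 (example.com and the "name"
# parameter are illustrative placeholders).
print("https://example.com/lookup?name=" + quote("John Smith"))
# https://example.com/lookup?name=John%20Smith

# Encode the space as "+" instead, the form-encoding convention.
print("https://example.com/lookup?name=" + quote_plus("John Smith"))
# https://example.com/lookup?name=John+Smith

# A literal "+" in a value must become %2B, otherwise it would be
# read back as a space.
print(urlencode({"expr": "1+1"}))
# expr=1%2B1
```

Note the design split: quote() follows the %20 convention, while quote_plus() and urlencode() follow the form-encoding convention of using + for spaces and %2B for a literal plus.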
The formation of condensation is a warming process because it releases energy into the atmosphere, causing its temperature to increase. When condensation forms, water vapor condenses from a gas into a liquid. The liquid state requires less energy than the gas state, so when vapor becomes liquid the extra energy is released into the atmosphere. With less energy, the molecules slow down and form a liquid. The amount of heat that is released when water vapor condenses into liquid is known as the latent heat of condensation. A process that removes heat from the atmosphere, such as melting, is a cooling process.
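As a rough worked example of the latent heat of condensation (the figure below is an assumed typical value of about 2.5 × 10^6 J per kilogram of water vapor, not a number given in the passage), even a small mass of condensing vapor releases a noticeable amount of heat:

```python
# Rough illustration of the latent heat of condensation.
# Assumed value of ~2.5e6 J/kg; the exact figure varies with temperature.
LATENT_HEAT_CONDENSATION = 2.5e6  # J per kg of water vapor

def heat_released(mass_kg: float) -> float:
    """Energy (J) released to the surrounding air when `mass_kg`
    of water vapor condenses into liquid droplets."""
    return mass_kg * LATENT_HEAT_CONDENSATION

# Condensing 1 gram of vapor (a small wisp of cloud) releases ~2,500 J,
# which warms the surrounding air.
print(f"{heat_released(0.001):.0f} J")
```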
Ruins of Jamestown.
- former village near the mouth of the James River, Va.: the 1st permanent English colonial settlement in America (1607)
- capital of St. Helena: pop. 1,500
Origin of Jamestown: after James I
- The capital of St. Helena in the southern Atlantic Ocean.
- A former village of southeast Virginia, the first permanent English settlement in America. It was founded in May 1607 and named for the reigning monarch, James I. Jamestown became the capital of Virginia after 1619 but was almost entirely destroyed during Bacon's Rebellion (1676) and further declined after the removal of the capital to Williamsburg (1698–1700).
- A city of western New York on Chautauqua Lake near the Pennsylvania border. It is the trade center of a farming and grape-producing region.
- The capital of the island Saint Helena.
- A city in Kentucky
Our aim is to maintain the individuality and encourage the potential of each child in all possible areas of school life. We aim to involve each child fully in their own learning through our use of active learning strategies, assessment and target setting in order to enable them to make maximum progress. We enable our children to develop mathematical thinking, to become readers and lovers of good books, to learn techniques and strategies to acquire scientific, geographical and historical methods of working, and to fully appreciate a variety of creative and physical aspects of learning. We value practical investigation and first-hand experience. We consider that respect for a child's achievements is extremely important, and a high level of presentation and display of work is expected, both for and by the children.

We follow the 2014 New National Curriculum in Key Stage One and Key Stage Two. Full details of the changes to the Curriculum together with our assessment system can be found here. In KS1 we follow the Letters and Sounds phonics programme and we use the Oxford Reading Tree scheme of reading books to support individual reading and for some guided group sessions. The Nursery and Reception children follow the Foundation Stage Curriculum, providing purposeful play and exploratory activity both indoors and outdoors. To find out more about Early Years education click here.

The subjects included in the National Curriculum for Key Stage One and Two are:
- Religious Education (RE)
- Computing (ICT)
- Design and Technology (DT)
- Physical Education

At each Key Stage, Programmes of Study set out what pupils should be taught, and attainment targets set out the expected standards of performance. The curriculum is delivered in a variety of ways. Using the Programmes of Study, individual teachers will adopt a variety of approaches, including whole-class work, group work, individual work and, as appropriate, inter-year work. Whole-school curriculum planning, individual teacher planning and the overview of subject coordinators ensure a whole-school approach to curriculum organisation under the general direction of the Headteacher. Thereby we ensure breadth and balance within the delivery of our school curriculum. Cross-curriculum themes are carefully selected to ensure that each child experiences the appropriate range of Attainment Targets in each Key Stage.

Foundation Stage (ages 3 - 5): Nursery & Foundation
Key Stage 1 (ages 6 - 7): Years 1 and 2
Key Stage 2 (ages 7 - 11): Years 3 to 6

Our school curriculum comprises Religious Education and the National Curriculum. For more information about what each year group is currently studying, please click the links from the above "Curriculum" tab.
It took Arthur Conan Doyle just three weeks to write the first Sherlock Holmes novel, A Study in Scarlet, published in 1887. The novel first introduced the classic literary characters of the detective Sherlock Holmes and his friend Dr. John Watson, and it is one of just four full-length novels featuring the characters. The title refers to a passage in the story in which Holmes describes his murder investigation as “a study in scarlet.” The Sherlock Holmes character became so popular with the public that 56 short stories were published up until 1927 – even after Doyle originally killed off the Holmes character in 1893’s The Final Problem.
Awesome LEGO learning activities and ideas for kids that include math, science, literacy, fine motor, art, and STEM, with free printable activities too. Also pick up your copy of the Unofficial Guide to Learning with LEGO!

LEGO Learning Pages and Free Printables! Printable LEGO activities for math, literacy, science, challenges, emotions pages, coloring sheets, and search-and-find pages. Printable LEGO learning pages for preschool, kindergarten, and early elementary age kids.

LEGO learning games: exploring LEGO and play dough. This is a great activity for sensory play, imaginative play, letter recognition and sight words. It would also work well in an autism classroom while learning long vowel sounds with silent E.
Venus is said to be our sister planet: about the same size, mass and density as Earth, and just 30% closer to the sun. Venus has an atmosphere filled with CO2, and, because of the heat-trapping properties of this greenhouse gas, averages over 400 degrees C on a good day – hotter than Mercury, which is closer to the sun…. I’d still go … to either.

Anyway: the Transit of Venus is a rare astronomical event. It happens in pairs: the most recent pair was in 2004 and 2012, and the previous pairing was in 1874 and 1882. Historically, we measured the size of our solar system by observing this and similar events. Now, we have the Hubble Space Telescope, the Kepler spacecraft and others to get similar and other interesting measurements. The Transit of Venus won’t happen again until 2117. On June 5, 2012, Venus crossed between us and the sun. It was something the whole world could marvel at. I went to witness it.

This was an exciting event and got me to think about many things. Too many for a single post. However, I will elaborate on the issue of distance. The simplest analogy for the vastness of our solar system and the universe that I have come across was stated by Sir Martin Rees in his book Just Six Numbers, p. 81-82:

Suppose our star, the Sun, were modeled by an orange. The Earth would be a millimeter-sized grain twenty meters (65 feet!) away, orbiting around it. Depicted to the same scale, the nearest stars would be 10,000 kilometers away (about 6,214 miles!): that is how thinly spread matter is in a galaxy like ours.

Try to visualize that one: Earth, Sun, the next closest star; a grain of sand, 65 feet, over 6,000 miles! He goes on, but … You know what? I’m gonna give you the rest of the quote, because it blows me away even though I have a difficult time imagining it. It has to do with understanding why the universe’s expansion is not slowing to a halt because of gravity, like we initially thought, but continuing. I ❤ this. Have fun, or just skip to the end:

But galaxies are, of course, especially high concentrations of stars. If all the stars from all of the galaxies were dispersed through intergalactic space, then each star would be several hundred times further from its nearest neighbor than it actually is within a typical galaxy – in our scale model, each orange would be millions of kilometers from its nearest neighbor. If all the stars were dismantled and their atoms spread uniformly through our universe, we’d end up with just one atom in every ten cubic meters. There is about as much again (but seemingly no more) in the form of diffuse gas between the galaxies. That’s a total of 0.2 atoms per cubic meter, twenty-five times less than the critical density of five atoms per cubic meter that would be needed for gravity to bring cosmic expansion to a halt.

OK: Vast, Huge stuff. The other major issue that I thought about when viewing the Transit of Venus is The Iconic Geometry of the Circle. That’s for a future post.
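As a quick sanity check on the “five atoms per cubic meter” critical-density figure in the quote above, here is a back-of-the-envelope Python sketch. The Hubble-constant value of 70 km/s/Mpc is an assumed round number of mine, not something taken from the post.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22           # metres in a megaparsec
H0 = 70e3 / MPC          # Hubble constant (~70 km/s/Mpc) in s^-1
M_HYDROGEN = 1.67e-27    # mass of a hydrogen atom, kg

# Critical density of the universe: rho_c = 3 H0^2 / (8 pi G)
rho_critical = 3 * H0**2 / (8 * math.pi * G)   # kg per cubic metre
atoms_per_m3 = rho_critical / M_HYDROGEN

print(f"critical density ~ {rho_critical:.1e} kg/m^3")
print(f"                 ~ {atoms_per_m3:.1f} hydrogen atoms per cubic metre")
# Comes out around 5-6 atoms/m^3, in line with the figure in the quote.
```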
All green plants require their own specific combination of 16 different elements, water and soil. Although plants vary in those unique needs, they all require sunlight and its warmth. Elements like carbon and oxygen are only available through the air, while the sun's light is the only energy source a plant takes in. Without light, any green plant will fail and die.

According to the North Carolina Department of Agriculture and Consumer Services, photosynthesis literally means "making things with light." The chlorophyll in a plant's leaves takes the energy from light and uses it to produce starch from the other elements and minerals.

All garden plants have preferences in regard to the length of time they receive light. Labels like eight hours, six hours and four hours indicate whether a plant needs light for longer or shorter periods during the day. Inadequate light exposure causes leaf browning and death. Light strength is another important qualification for light exposure. Green plants require conditions like full sun (full exposure), partial sun (sun during part of the day, or dappled sunlight), or full shade (no direct sunlight). For a plant to thrive and grow, it must receive the level of light it requires.

If plants are put in an inadequate lighting situation, they do their best to remedy it by growing toward better conditions and turning their leaves in a specific direction. Plants that need more light grow leggy, with long stems that reach their leaves toward the light. Plants that need more shade grow away from the light source.

Lack of Light

Lack of light is detrimental to plant growth. If a plant can't remedy the situation by reaching toward the light, its leaves yellow, die and drop off. Plants that don't get enough light don't have the resources they require, and fail to bloom or fruit.
What are gravitational waves? Well, for us to understand that we must first go back to the 17th century. In 1687 Sir Isaac Newton published his work in the book Philosophiae Naturalis Principia Mathematica, in which he postulated that the force that makes an apple fall to the earth is also the one that keeps the moon in its orbit around the earth. Essentially, every celestial body exerts an attractive force on every other, which is known as the gravitational force. He proposed that the force was proportional to the product of the masses of the two bodies and inversely proportional to the square of the distance between them.

Discrepancies in Newtonian gravity

By the end of the 19th century, a discrepancy in Mercury’s orbit had pointed out flaws in Newton’s theory. Mercury’s orbit showed slight perturbations that could not be accounted for entirely under Newton’s theory. Moreover, the idea that gravitational force is exerted instantaneously between distant bodies contradicted Einstein’s special theory of relativity, which holds that nothing travels faster than the speed of light.

Einstein came to the rescue

In 1915, Albert Einstein proposed in his general theory of relativity that gravitational force is the result of the warping of the space-time fabric. This can be pictured by imagining the space-time fabric as a two-dimensional rubber sheet, with a celestial body as a massive ball creating a curvature in it. When a smaller ball is rolled on the rubber sheet, it revolves around the large ball along the curvature for a while before falling into it. Einstein said that all bodies create curvature around themselves in the space-time fabric, and other bodies follow that curvature. Just as a pebble thrown into a pond produces ripples in the water, cataclysmic events in the cosmos, like merging black holes, produce ripples in the space-time fabric. These ripples are known as gravitational waves. These waves cascade outwards from the event at the speed of light, stretching and squeezing space-time as they go. By the time the waves reach earth, the squeezing and stretching has shrunk to a minute fraction of the width of an atomic nucleus.

According to general relativity, a pair of black holes orbiting around each other lose energy through the emission of gravitational waves, causing them to gradually approach each other over billions of years and then much more quickly in the final minutes. During the final fraction of a second, the two black holes collide at nearly one-half the speed of light and form a single, more massive black hole, converting a portion of the combined black holes’ mass to energy according to Einstein’s formula E=mc^2. This energy is emitted as a final strong burst of gravitational waves.

About the discovery

About 1.3 billion years ago two black holes swirled closer and closer together until they merged, giving rise to a new black hole and creating a gravitational field so strong that it distorted space-time, producing gravitational waves. This collision was the first of its kind ever detected, and its waves were the first ever seen. The gravitational waves were detected on September 14, 2015, by both of the twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington, USA. The LIGO Observatories are operated by Caltech and MIT. Based on the observed signals, LIGO scientists estimate that the black holes for this event were about 29 and 36 times the mass of the sun.
About 3 times the mass of the sun was converted into gravitational waves in a fraction of a second, with a peak power output about 50 times that of the whole visible universe. Astrophysicists say the detection of gravitational waves opens up a new window on the universe, revealing faraway events that can’t be seen by optical telescopes, but whose faint tremors can be felt, even heard, across the cosmos.

How LIGO works

Being able to detect gravitational waves opens a new window to the universe. Everything we know about the universe has so far come from observations made through electromagnetic waves. But unlike electromagnetic waves, gravitational waves can pass through the universe unobstructed, so they carry information that we cannot obtain otherwise. The ability to detect gravitational waves opens up the new field of gravitational-wave astronomy. We will now be able to explore even the early stages of the universe, since gravitational waves could propagate freely through the hot plasma of the early universe, which is opaque to light.
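To put rough numbers on the figures above, here is a small Python sketch. It uses E = mc^2 for the radiated energy and the strain relation h = ΔL / L for the arm-length change an interferometer must resolve. The 4 km arm length and the peak strain of about 10^-21 are typical published LIGO values used here as assumptions, not figures quoted in the text.

```python
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

# Energy radiated as gravitational waves: about 3 solar masses via E = m c^2.
energy_joules = 3 * M_SUN * C**2
print(f"E ~ {energy_joules:.1e} J")            # ~5e47 J

# Strain is the fractional change in length, h = dL / L.  With an assumed
# peak strain of ~1e-21 over a 4 km interferometer arm:
strain = 1e-21
arm_length_m = 4000.0
delta_L = strain * arm_length_m
print(f"arm length change ~ {delta_L:.1e} m")  # ~4e-18 m
```

The resulting length change, roughly 4 × 10^-18 m, is the “minute fraction of the width of an atomic nucleus” referred to earlier.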
Find out who invented the acoustic piano and when. This article also takes you through the development of the acoustic piano to the modern piano of today. You can also find out what materials are used in the making of upright and grand pianos and what the names of all the parts are called. History of the Acoustic Piano A piano is a large musical instrument with a keyboard. Its sound is produced by strings stretched on a rigid frame. These vibrate when struck by felt-covered hammers, which are activated by the keyboard. The word piano is derived from the original Italian name for the instrument, gravicembalo col piano e forte. Literally harpsichord with soft and loud, this refers to the ability of the piano to produce notes at different volumes depending on how hard its keys are pressed. As a keyboard stringed instrument, the piano is similar to the clavichord and harpsichord. The three instruments differ in the mechanism of sound production. In a harpsichord, strings are plucked by quills or similar material. In the clavichord, strings are struck by tangents which remain in contact with the string. In a piano, the strings are struck by hammers which immediately rebound, leaving the string to vibrate freely. The piano was invented by Bartolomeo Cristofori in Florence, Italy. When he built his first piano is not entirely clear, but an inventory made by Cristofori’s employers, the Medici family, indicates the existence of an early Cristofori instrument by the year 1700. Cristofori built only about twenty pianos before he died in 1731; the three that survive today date from the 1720s. Like many other inventions, the piano was founded on earlier technological innovations. In particular, it benefited from centuries of work on the harpsichord, which had shown the most effective ways to construct the case, the soundboard, the bridge, and the keyboard. Cristofori was himself a harpsichord maker and well acquainted with this body of knowledge. Cristofori’s great success was to solve, without any prior example, the fundamental mechanical problem of piano design: the hammers must strike the string but not continue to touch it once they have struck (which would damp the sound). Moreover, the hammers must return to their rest position without bouncing violently, and it must be possible to repeat a note rapidly. Cristofori’s piano action served as a model for the many different approaches to piano actions that were to follow. Cristofori’s early instruments were made with thin strings and were much quieter than the modern piano. However, in comparison with the clavichord (the only previous keyboard instrument capable of dynamic nuance) they were considerably louder, with greater sustain. Cristofori’s new instrument remained relatively unknown until an Italian writer, Scipione Maffei, wrote an enthusiastic article about it (1711), including a diagram of the mechanism. This article was widely distributed, and most of the next generation of piano builders started their work as a result of reading it. One of these builders was Gottfried Silbermann, better known as an organ builder. Silbermann’s pianos were virtually direct copies of Cristofori’s, but with an important exception: Silbermann invented the forerunner of the modern damper pedal (also known as the sustaining pedal or loud pedal), which permits the dampers to be lifted from all the strings at once. Virtually all subsequent pianos incorporated some version of Silbermann’s idea. Silbermann showed Bach one of his early instruments in the 1730s. 
Bach did not like it at that time, claiming that the higher notes were too soft to allow a full dynamic range. Though this earned him some animosity from Silbermann, the latter did apparently heed the criticism. Bach did approve of a later instrument he saw in 1747, and apparently even served as an agent to help sell Silbermann’s pianos.

Piano-making flourished during the late 18th century in the work of the Viennese school, which included Johann Andreas Stein (who worked in Augsburg, Germany) and the Viennese makers Nannette Stein (daughter of Johann Andreas) and Anton Walter. The Viennese-style pianos were built with wooden frames, two strings per note, and leather-covered hammers. It was for such instruments that Mozart composed his concertos and sonatas, and replicas of them are built today for use in authentic-instrument performance. The piano of Mozart’s day had a softer, clearer tone than today’s pianos, with less sustaining power. The term fortepiano is nowadays often used to distinguish the 18th-century style of instrument from later pianos. For further information on the earlier part of piano history, see fortepiano.

The development of the modern piano

In the lengthy period lasting from about 1790 to 1890, the Mozart-era piano underwent tremendous changes which ultimately led to the modern form of the instrument. This evolution was in response to a consistent preference by composers and pianists for a more powerful, sustained piano sound. It was also a response to the ongoing Industrial Revolution, which made available technological resources like high-quality steel for strings (see piano wire) and precision casting for the production of iron frames. Over time, piano playing became a more strenuous and muscle-taxing activity, as the force needed to depress the keys, as well as the length of key travel, was increased. The tonal range of the piano was also increased, from the five octaves of Mozart’s day to the 7 1/3 (or even more) octaves found on modern pianos.

In the first part of this era, technological progress owed much to the English firm of Broadwood, which already had a strong reputation for the splendour and powerful tone of its harpsichords. Over time, the Broadwood instruments grew progressively larger, louder, and more robustly constructed. The Broadwood firm, which sent pianos to both Haydn and Beethoven, was the first to build pianos with a range of more than five octaves: five octaves and a fifth during the 1790s, six by 1810 (in time for Beethoven to use the extra notes in his later works), and seven by 1820. The Viennese makers followed these trends. The two schools, however, used different piano actions: the Broadwood one more robust, the Viennese more sensitive.

By the 1820s, the centre of innovation had shifted to the Érard firm of Paris, which built pianos used by Chopin and Liszt. In 1821, Sébastien Érard invented the double escapement action, which permitted a note to be repeated even if the key had not yet risen to its maximum vertical position, a great benefit for rapid playing. As revised by Henri Herz about 1840, the double escapement action ultimately became the standard action for grand pianos, used by all manufacturers. Some other important technical innovations of this era include the following:

The modern concert grand achieved essentially its present form around the beginning of the 20th century, and progress since then has been only incremental. For some recent developments, see Innovations in the piano.
Some early pianos had shapes and designs that are no longer in use. The once-popular square piano had the strings and frame on a horizontal plane, but running across the length of the keyboard rather than away from it. It was similar to the upright piano in its mechanism. Square pianos were produced through the early 20th century; the tone they produced is widely considered to be inferior. Most had a wood frame, though later designs incorporated increasing amounts of iron. The giraffe piano, by contrast, was mechanically like a grand piano, but the strings ran vertically up from the keyboard rather than horizontally away from it, making it a very tall instrument. These were uncommon.

Piano history and musical performance

The huge changes in the evolution of the piano have somewhat vexing consequences for musical performance. The problem is that much of the most widely admired music for piano (for example, that of Haydn, Mozart, and Beethoven) was composed for a type of instrument that is rather different from the modern instruments on which this music is normally performed today. Even the music of the early Romantics, such as Chopin and Schumann, was written for pianos substantially different from ours.

One view that is sometimes taken is that these composers were dissatisfied with their pianos, and in fact were writing visionary “music of the future” with a more robust sound in mind. This view is perhaps more plausible in the case of Beethoven, who composed at the beginning of the era of piano growth, than it is in the case of Haydn or Mozart. Others have noted that the music itself often seems to require the resources of the early piano. For example, Beethoven sometimes wrote long passages in which he directs the player to keep the damper pedal down throughout (a famous example occurs in the last movement of the “Waldstein” sonata, Op. 53). These come out rather blurred on a modern piano if played as written but work well on (restored or replicated) pianos of Beethoven’s day. Similarly, the classical composers sometimes would write passages in which a lower violin line accompanies a higher piano line in parallel; this was a reasonable thing to do at a time when piano tone was more penetrating than violin tone; today it is the reverse.

Current performance practice is a mix. A few pianists simply ignore the problem; others modify their playing style to help compensate for the difference in instruments, for example by using less pedal. Finally, participants in the authentic performance movement have constructed new copies of the old instruments and used them in performance; this has provided important new insights and interpretations of the music.

The modern piano

Types of piano

Modern pianos come in two basic configurations and several sizes: the grand piano and the upright piano. Grand pianos have the frame and strings placed horizontally, with the strings extending away from the keyboard. This avoids the problems inherent in an upright piano, but takes up a large amount of space and needs a spacious room with high ceilings for proper resonance. Several sizes of grand piano exist. Manufacturers and models vary, but as a rough guide we can distinguish the “concert grand”, approximately 3 m long; the “grand”, approximately 1.8 m; and the smaller “baby grand”, which may be a bit shorter than it is wide.
All else being equal, longer pianos have better sound and lower inharmonicity of the strings (so that the strings can be tuned closer to equal temperament in relation to the standard pitch with less stretching), so that full-size grands are almost always used for public concerts, whereas baby grands are only for domestic use where space and cost are crucial considerations. Upright pianos, also called vertical pianos, are more compact because the frame and strings are placed vertically, extending in both directions from the keyboard and hammers. It is considered harder to produce a sensitive piano action when the hammers move sideways, rather than upward against gravity; however, the very best upright pianos now approach the level of grand pianos of the same size in tone quality and responsiveness. For recent advances, see Innovations in the piano. In 1863, Henri Fourneaux invented the player piano, a kind of piano which “plays itself” from a piano roll without the need for a pianist. Also in the 19th century, toy pianos began to be manufactured. A relatively recent development is the prepared piano, which is a piano adapted in some way by placing objects inside the instrument, or changing its mechanism in some way. Since the 1980s, digital pianos have been available, which use digital sampling technology to reproduce the sound of each piano note. Digital pianos have become quite sophisticated, with standard pedals, weighted keys, multiple voices, MIDI interfaces, and so on in the better models. However, with current technology, it remains difficult to duplicate a crucial aspect of acoustic pianos, namely that when the damper pedal (see below) is depressed, the strings not struck vibrate sympathetically with the struck strings. Since this sympathetic vibration is considered central to a beautiful piano tone, digital pianos are still not considered by most experts as competing with the best acoustic pianos in tone quality. Progress is now being made in this area by including physical models of sympathetic vibration in the synthesis software. Almost every modern piano has 88 keys (seven octaves and a bit, from A0 to C8). Many older pianos only have 85 (from A0 to A7), while some manufacturers extend the range further in one or both directions. The most notable example of an extended range can be found on Bösendorfer pianos, some of which extend the normal range downwards to F0, with others going as far as a bottom C0, making a full eight octave range. On some models these extra keys are hidden under a small hinged lid, which can be flipped down to cover the keys and avoid visual disorientation in a pianist unfamiliar with the extended keyboard; on others, the colours of the extra keys are reversed (black instead of white and vice versa) for the same reason. The extra keys are added primarily for increased resonance; that is, they vibrate sympathetically with other strings whenever the damper pedal is depressed and thus give a fuller tone. Only a very small number of works composed for piano actually use these notes. More recently, the Stuart and Sons company has also manufactured extended-range pianos. On their instruments, the range is extended up the treble for a full eight octaves. The extra keys are the same as the other keys in appearance. For the arrangement of the keys on a piano keyboard, see Musical keyboard. 
This arrangement was inherited from the harpsichord without change, with the trivial exception of the colour scheme (white for naturals and black for sharps) which became standard for pianos in the late 18th century. Pianos have had pedals, or some close equivalent, since the earliest days. (In the 18th century, some pianos used levers pressed upward by the player’s knee instead of pedals.) The three pedals that have become more or less standard on the modern piano are the following. The damper pedal (also called the sustaining pedal or loud pedal) is often simply called “the pedal,” since it is the most frequently used. It is placed as the rightmost pedal in the group. Every note on the piano, except the top two octaves, is equipped with a damper, which is a padded device that prevents the strings from vibrating. The damper is raised off the strings of its note whenever the key for that note is pressed. When the damper pedal is pressed, all the dampers on the piano are lifted at once, so that every string can vibrate. This serves two purposes. First, it permits notes to be connected (i.e., played legato) when there is no fingering that would make this possible. More important, raising the damper pedal causes all the strings to vibrate sympathetically with whatever notes are being played, which greatly enriches the tone. Piano music starting with Chopin tends to be heavily pedalled, as a means of achieving a singing tone. In contrast, the damper pedal was used only sparingly by the composers of the 18th century, including Haydn, Mozart and Beethoven; in that era, pedalling was considered primarily as a special coloristic effect. The soft pedal or “una corda” pedal is placed leftmost in the row of pedals. On a grand piano, this pedal shifts the action to one side slightly, so that hammers that normally strike all three of the strings for a note strike only two of them. This softens the note and also modifies its tone quality. For notation of the soft pedal in printed music, see Italian musical terms. The soft pedal was invented by Cristofori and thus appeared on the very earliest pianos. In the 18th and early 19th centuries, the soft pedal was more effective than today, since it was possible at that time to use it to strike three, two or even just one string per note—this is the origin of the name “una corda”, Italian for “one string”. In modern pianos, the strings are spaced too closely to permit a true “una corda” effect—if shifted far enough to strike just one string on one note, the hammers would also strike the string of the next note over. On upright pianos, the soft pedal is replaced by a mechanism for moving the hammers’ resting position closer to the strings. This reduces volume, but does not change tone quality as a true “una corda” pedal does. Digital pianos often use this pedal to alter the sound of other instruments like organs, guitars, and harmonicas. Pitch bends, leslie speaker on/off, vibrato modulation, etc. increase the already-great versatility of such instruments. The sostenuto pedal or “middle pedal” maintains in the raised position any damper that was raised at the moment the pedal was depressed. It makes it possible to sustain some notes (depress the sostenuto pedal before releasing the notes to be sustained) while the player’s hands have moved on to play other notes, which can be useful for musical passages with pedal points and other tricky situations. 
The sostenuto pedal was the last of the three pedals to be added to the standard piano, and to this day many cheap pianos—and even a few good ones—do not have a sostenuto pedal. (Almost all modern grand pianos have a sostenuto; most upright pianos do not.) A number of twentieth-century works call for the use of this pedal.

Over the years, the middle pedal has served many different functions. Some upright pianos have a practice pedal in place of the sostenuto. This pedal, which can usually be locked in place by depressing it and pushing it to one side, drops a strip of felt between the hammers and the strings so that all the notes are greatly muted—a handy feature for those who wish to practice at odd hours without disturbing others in the house. The practice pedal is rarely used in performance. Other uprights have a bass sustain as a middle pedal. It works the same as the damper pedal except that it only lifts the dampers for the low-end notes. Irving Berlin’s famed Transposing Piano used the middle pedal as a clutch to shift the keyboard with a lever. The entire action of the piano would shift to allow the operator to play in any key.
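As an aside on the keyboard ranges discussed above (88 keys from A0 to C8, with some extended instruments reaching down to C0), the following Python sketch maps key numbers to their ideal equal-temperament frequencies. The numbering convention assumed here (A0 = key 1, A4 = key 49 tuned to 440 Hz) is the standard one but is not stated in the text, and real pianos are stretch-tuned slightly because of the string inharmonicity mentioned earlier.

```python
def key_frequency(key_number: int, a4_hz: float = 440.0) -> float:
    """Ideal equal-temperament frequency of a key on a standard 88-key
    piano, numbering A0 as key 1 through C8 as key 88 (A4 is key 49)."""
    return a4_hz * 2 ** ((key_number - 49) / 12)

print(f"A0 ~ {key_frequency(1):.2f} Hz")   # ~27.5 Hz, the lowest standard key
print(f"A4 = {key_frequency(49):.1f} Hz")  # 440.0 Hz by convention
print(f"C8 ~ {key_frequency(88):.1f} Hz")  # ~4186 Hz, the highest standard key
```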
Three potential responses that could occur along the mid-Atlantic coast in response to sea-level rise over the next century were identified. - Bluff and upland erosion. Shorelines composed of older geologic units that form headland regions of the coast will retreat landward with rising sea level. As sea level rises over time, the uplands are eroded, and sandy materials are incorporated into the beach and dune systems along the shore and adjacent compartments. It is expected that bluff and upland erosion will persist for all four sea-level rise scenarios. A possible management reaction to bluff erosion is armoring of the shore. This may reduce bluff erosion in the short term, but will probably increase erosion of the beach in front of the armored bluff due to wave reflection as well as increased erosion of adjacent coastal segments by modifying the littoral sediment budget. - Overwash, inlet processes, shoreline retreat, and barrier island narrowing. Five main processes were identified as agents of change as sea-level rise occurs. First, storm overwash will become more likely. In addition, recent studies suggest that hurricanes have become more intense over the last century (Emanuel, 2005; Webster and others, 2005). Some have argued that there is insufficient data to support this finding (Landsea and others, 2006), but recent work supports this trend for the North Atlantic (Kossin and others, 2007) and the contention that the increased storm activity is linked to 20th century climate and ocean warming (Holland and Webster, 2007). Tidal inlet formation and migration will also be important components of future shoreline changes. Barrier islands are often subject to inlet formation by storms. If the storm surge produces channels that extend below sea level, an inlet may persist after the storm abates. These inlets can persist for some time until the inlet channels are filled with sediments accumulated from longshore transport, or they may remain open for months to decades. Geological investigations along the shores of the mid-Atlantic Bight have encountered numerous geomorphic features and deposits indicating former inlet positions (Fisher, 1962; Everts and others, 1983; Leatherman, 1985; McBride and Moslow, 1991; Moslow and Heron, 1994; Riggs and others, 1995; McBride, 1999). Historically, most inlets have opened by the storm surge associated with major hurricanes. In the 20th century four of the most important inlets in the mid-Atlantic Bight were formed by storm surges and breaches from the 1933 hurricane (Barden's Inlet, NC; Ocean City Inlet, MD; Indian River Inlet, DE; and Moriches Inlet, NY). Most recently, tidal inlets have formed in the North Carolina Outer Banks in response to Hurricane Isabel in 2003 and on Nauset beach, Cape Cod, MA in response to a spring 2007 storm. The combined effect of rising sea level and stronger storms potentially acting at higher elevations on the barrier could be expected to accelerate shoreline retreat in many locations. Assessments of shoreline change on barrier islands indicate that barrier island narrowing has been observed on some islands over the last century (Leatherman, 1979; Jarrett, 1983; Everts and others, 1983; McBride and Byrnes, 1997; Penland and others, 2005). Actual barrier island migration is less widespread, but has been noted at Core Banks, NC (Riggs and Ames, 2007), the Virginia barriers (Byrnes and Gingerich, 1987; Byrnes and others, 1989), and the northern end of Assateague Island, MD (Leatherman, 1984). - Threshold Behavior. 
Barrier islands are dynamic environments that are sensitive to a variety of driving forces. Some evidence suggests that changes in some or all of these processes can lead to conditions where a barrier system becomes less stable and crosses a geomorphic threshold. In this situation, the potential for rapid barrier-island migration or segmentation/disintegration is high. It is difficult to precisely define an unstable barrier, but indications of instability can be:
- rapid landward recession of the ocean shoreline
- decrease in barrier width and height
- increased overwash during storms
- increased barrier breaching and inlet formation
- chronic loss of beach and dune sand volume.
Given the unstable state of some barrier islands under current rates of sea-level rise and climate trends, it is very likely that conditions will worsen under accelerated sea-level rise rates. The unfavorable conditions for barrier maintenance could result in barrier segmentation/disintegration as witnessed in coastal Louisiana (McBride and others, 1995; McBride and Byrnes, 1997; Penland and others, 2005; Day and others, 2007; Sallenger and others, 2007). This segmentation/disintegration may result from a combination of 1) limited sediment supply by longshore or cross-shore transport, 2) accelerated rates of sea-level rise, and 3) permanent removal of sand from the barrier system by storms. Changes in sea level, coupled with changes in the hydrodynamic climate and sediment supply in the broader coastal environment, contribute to the development of unstable behavior. The threshold behavior of unstable barriers could result in a) landward migration/roll-over, b) barrier segmentation, or c) disintegration. If the barrier were to disintegrate, portions of the ocean shoreline could migrate or back-step toward and/or merge with the mainland. During storms, large portions of low-elevation, narrow barriers can be inundated under high waves and storm surge. The parts of the mid-Atlantic coast most vulnerable to threshold behavior can be estimated from their physical dimensions. Narrow, low-elevation barrier islands are most susceptible to storm overwash, which can lead to landward migration and the formation of new tidal inlets. The northern portion of Assateague Island and segments of the North Carolina Outer Banks are examples of barrier islands that are extremely vulnerable to even modest storms because of their narrow width and low elevation (e.g., Leatherman, 1979; Riggs and Ames, 2003). The future evolution of narrow, low-elevation barriers will likely depend in part on the ability of salt marshes in back-barrier lagoons and estuaries to keep pace with sea-level rise (FitzGerald and others, 2003 and 2006; Reed and others, 2007). It has been suggested that a reduction of salt marsh in back-barrier regions could change the hydraulics of back-barrier systems, altering local sediment budgets and leading to a reduction in sandy sediment available to sustain barrier systems (FitzGerald and others, 2003 and 2006). In these cases, even barrier systems that are relatively wide and exhibit well-developed dunes may evolve toward narrow, low-elevation barriers as local sand supplies are reduced.
Trichloroethylene is a colourless, toxic, volatile liquid belonging to the family of organic halogen compounds; it is nonflammable under ordinary conditions and is used as a solvent and in adhesives. Trichloroethylene has a subtle, sweet odour. Trichloroethylene was first prepared in 1864; its commercial manufacture, begun in Europe in 1908, is based on the reaction of 1,1,2,2-tetrachloroethane with dilute caustic alkali. The compound is denser than water, in which it is practically insoluble. Trichloroethylene is used in dry cleaning, in degreasing of metal objects, and in extraction processes, such as removal of caffeine from coffee or of fats and waxes from cotton and wool. It is also used in adhesives, such as cement for polystyrene plastics like those found in model-building kits. Industrially, an important use for trichloroethylene is in the manufacture of tetrachloroethylene: trichloroethylene is treated with chlorine to form pentachloroethane, which is converted to tetrachloroethylene by reaction with caustic alkali or by heating in the presence of a catalyst. Inhalation of the vapours (glue-sniffing) induces euphoria; the practice can be addictive. Inhalation of more than 50 ppm (parts per million) of trichloroethylene can produce acute effects on the body, including nausea and vomiting, eye and throat irritation, dizziness, headache, and liver, heart, or neurological damage. Trichloroethylene exposure has been linked to Parkinson disease. The manufacture, use, and disposal of trichloroethylene have led to the chemical's presence in sources of groundwater and surface water. Studies in animals suggest that consumption of trichloroethylene-contaminated water can cause organ damage and may lead to heart defects in developing fetuses. Hence, it is recommended that pregnant women avoid exposure.
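The preparation and downstream chemistry described above can be sketched as a short reaction scheme. This is only an illustration: sodium hydroxide is assumed here to stand in for the unspecified "dilute caustic alkali", and the actual industrial reagents, catalysts, and conditions are not given in the text.

```latex
% Sketch only: NaOH is assumed for the "caustic alkali"; conditions are not specified in the text.
\begin{align*}
\underbrace{\mathrm{CHCl_2CHCl_2}}_{\text{1,1,2,2-tetrachloroethane}} + \mathrm{NaOH}
  &\longrightarrow \underbrace{\mathrm{CHCl{=}CCl_2}}_{\text{trichloroethylene}} + \mathrm{NaCl} + \mathrm{H_2O} \\
\mathrm{CHCl{=}CCl_2} + \mathrm{Cl_2}
  &\longrightarrow \underbrace{\mathrm{CHCl_2CCl_3}}_{\text{pentachloroethane}} \\
\mathrm{CHCl_2CCl_3} + \mathrm{NaOH}
  &\longrightarrow \underbrace{\mathrm{CCl_2{=}CCl_2}}_{\text{tetrachloroethylene}} + \mathrm{NaCl} + \mathrm{H_2O}
\end{align*}
```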
Student Designed Lab: Graphing the Results
Lesson 5 of 7
Objective: SWBAT accurately choose and create a graph displaying experimental results.
While students are familiar with creating a variety of graph types, most students don't know when to use specific graphs. This is a concept that students need to master prior to leaving middle school, as it builds their skills within SP4: Analyzing and Interpreting Data. This lesson specifically focuses on the following areas within Practice 4:
- Construct, analyze, and/or interpret graphical displays of data and/or large data sets to identify linear and nonlinear relationships.
- Distinguish between causal and correlational relationships in data.
- Consider limitations of data analysis (e.g., measurement error), and/or seek to improve precision and accuracy of data with better technological tools and methods (e.g., multiple trials).
As students enter class, they are asked to look at the data they collected the previous day and determine what type of graph they will use and why they are choosing that particular graph. I ask a few students to share their answers with the class. Most students will choose a type of graph, but their reasons are usually unclear or "because it is easy to make." I address the class, asking, "Does anyone know how to determine when to use specific graphs?" Most have no idea that there are right and wrong graph choices. At this point I have students take formal notes as we go over the Common Graphs PowerPoint. I tell students that they don't need to write every word; they just need to get the key points to create a student-friendly cheat sheet. (I identify the key points on the first few slides and then ask students to identify the key points for the rest.) I want students to have the key information written neatly in a format they can easily find and refer to over the course of the year. Additionally, this connects to Common Core RI.8.2: Determine a central idea of a text... (and) provide an objective summary of a text. Summarizing is a critical skill that students will be practicing throughout the year. This video is a screencast explaining how I use this PowerPoint with my students.
When we finish going through the notes, I either project the graph types guided practice onto the screen or pass out a copy for each student. I have students answer the questions independently before we go over the answers together to ensure the students are developing an understanding of the concept. Students work in their groups to determine which type of graph to use for the data from their independent experiment. Once they agree, each student must make his/her own graph so everyone gets practice making the graph. Students can hand draw or use the computer to create their graph. I have students work on this independently so I can assess how well each student constructs a graph while they continue to develop the skills found within Science Practice 4, particularly constructing graphical displays of data. I often use this time to teach students how to use the free online program Create a Graph, but it depends on the ability to get access to the team computers. I like this program because it is easy to use and students can either print or email their graph, depending on what is preferred. The downside is that you need to be aware of cheating, as it is easy for one student to do the work and just change the name while printing different graphs.
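For teachers who like a coded illustration of the same "which graph for which data" idea the cheat sheet captures, here is a minimal sketch in Python with matplotlib. It is not part of the lesson and is unrelated to the Create a Graph tool; the rules of thumb, function name, and sample data are invented for illustration.

```python
# A hedged illustration (not part of the lesson): cheat-sheet style rules for picking
# a graph type, plus one worked example. All data and names here are made up.
import matplotlib.pyplot as plt

def choose_graph(kind):
    """Return a rule of thumb for the given kind of data."""
    rules = {
        "categories": "bar graph - comparing separate groups",
        "change over time": "line graph - showing a trend across a continuous variable",
        "parts of a whole": "circle (pie) graph - showing percentages of one total",
        "relationship": "scatter plot - looking for a relationship between two variables",
    }
    return rules.get(kind, "re-check the data type before graphing")

# Example: plant growth measured each day is "change over time", so use a line graph.
days = [1, 2, 3, 4, 5]
height_cm = [2.0, 2.6, 3.5, 4.1, 5.0]  # invented sample data

print(choose_graph("change over time"))
plt.plot(days, height_cm, marker="o")
plt.xlabel("Day")
plt.ylabel("Plant height (cm)")
plt.title("Plant growth over five days")
plt.show()
```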
If you are unfamiliar with this program, the CREATE-A-GRAPH instructions are what I give the students to become familiar with what to do. I still suggest you play around with the program, as there is some trial and error involved in making the different types of graphs. When students finish their independent graph, they will begin the Graph Practice worksheet. This should be done independently so that I can accurately gauge each student's understanding of the concept. Students hand this in when it is complete. The following video shows a student using the cheat sheet we just created to determine the answer to a practice question.
The human brain is good at remembering rhymes, and I like to use that to my advantage as much as possible. To conclude this lesson, I ask students to create a rhyme for each type of graph, starting with the graph they find the most confusing, and to write it in their science journals. For example, when considering a circle graph I might write, "When comparing parts of a whole, a circle graph is how I roll." Rhymes should make sense based on when to accurately use each graph but must be memorable for the student. Here are some Graph Rhyme Student Examples that may be helpful. I like to pick 2 or 3 of the best rhymes for each of the 4 types of graphs we use, retype them, and hang them on the wall for students to reference as they learn this material.
Presence is a theoretical concept describing the extent to which media represent the world (in both physical and social environments). Presence is further described by Matthew Lombard and Theresa Ditton as "an illusion that a mediated experience is not mediated." Today, the term often refers to the effect that people experience when they interact with a computer-mediated or computer-generated environment. The conceptualization of presence borrows from multiple fields, including communication, computer science, psychology, science, engineering, philosophy, and the arts. The concept of presence underlies a variety of computer applications and Web-based entertainment today that are developed on the fundamentals of the phenomenon in order to give people the sense of, as Sheridan called it, "being there."
Evolution of 'presence' as a concept
Typology of human experience in the study of presence
The specialist use of the word "presence" derives from the term "telepresence", coined by Massachusetts Institute of Technology professor Marvin Minsky in 1980. Minsky's research explained telepresence as the manipulation of objects in the real world through remote-access technology. For example, a surgeon may use a computer to control robotic arms to perform minute procedures on a patient in another room. Or a NASA technician may use a computer to control a rover to collect rock samples on Mars. In either case, the operator is granted access to real, though remote, places via televisual tools. As technologies progressed, the need for an expanded term arose. Sheridan extrapolated Minsky's original definition. Using the shorter "presence," Sheridan explained that the term refers to the effect felt when controlling real-world objects remotely as well as the effect people feel when they interact with and immerse themselves in virtual reality or virtual environments.
Lombard's conceptualization of presence
Lombard and Ditton went a step further and enumerated six conceptualizations of presence:
- presence can be a sense of social richness, the feeling one gets from social interaction
- presence can be a sense of realism, such as computer-generated environments looking, feeling, or otherwise seeming real
- presence can be a sense of transportation; this is a more complex concept than the traditional feeling of "being there," since transportation also includes users feeling as though something is "here" with them or as though they are sharing common space with another person
- presence can be a sense of immersion, either through the senses or through the mind
- presence can provide users with the sense that they are social actors within the medium; no longer passive viewers, users, via presence, gain a sense of interactivity and control
- presence can be a sense of the medium as a social actor.
Lombard's work discusses the extent to which presence is felt and how strongly a mediated experience can be perceived as though no medium were involved. The article reviews the contextual characteristics that contribute to an individual's feeling of presence. The most important determinants of presence are those that involve sensory richness or vividness: the number and consistency of sensory outputs. Researchers believe that the greater the number of human senses for which a medium provides stimulation, the greater the capability of the medium to produce a sense of presence.
Additional important aspects of a medium are visual display characteristics (image quality, image size, viewing distance, motion and color, dimensionality, camera techniques) as well as aural presentation characteristics, stimuli for other senses (interactivity, obtrusiveness of medium, live versus recorded or constructed experience, number of people), content variables (social realism, use of media conventions, nature of task or activity), and media user variables (willingness to suspend disbelief, knowledge of and prior experience with the medium). Lombard also discusses the effects of presence, including both physiological and psychological consequences of "the perceptual illusion of nonmediation." Physiological effects of presence may include arousal, or vection and simulation sickness, while psychological effects may include enjoyment, involvement, task performance, skills training, desensitization, persuasion, memory and social judgement, or parasocial interaction and relationships.
Lee's typology of virtual experience
Presence has been delineated into subtypes, such as physical-, social-, and self-presence. Lombard's working definition was "a psychological state in which virtual objects are experienced as actual objects in either sensory or nonsensory ways." Later extensions expanded the definition of "virtual objects" to specify that they may be either para-authentic or artificial. Further development of the concept of "psychological state" has led to study of the mental mechanism that permits humans to feel presence when using media or simulation technologies. One approach is to conceptualize presence as a cognitive feeling, that is, to take spatial presence as feedback from unconscious cognitive processes that inform conscious thought. Several studies provide insight into the concept of media influencing behavior.
- Cheryl Bracken and Lombard suggested that people, especially children, interact with computers socially. The researchers found, via their study, that children who received positive encouragement from a computer were more confident in their ability, were more motivated, recalled more of a story, and recognized more features of a story than those children who received only neutral comments from their computer.
- Nan, Anghelcev, Myers, Sar, and Faber found that the inclusion of anthropomorphic agents that relied on artificial intelligence on a Web site had a positive effect on people's attitudes toward the site. The research of Bracken and Lombard and of Nan et al. also speaks to the concept of presence as transportation. The transportation in this case refers to the computer-generated identity: users, through their interaction, have a sense that these fabricated personalities are really "there".
- Communication has been a central pillar of presence since the term's conception, and many applications of the Internet today depend largely on virtual presence. Rheingold and Turkle offered MUDs, or multi-user dungeons, as early examples of how communication developed a sense of presence on the Web prior to the graphics-heavy existence it has developed today. "MUDs...[are] imaginary worlds in computer databases where people use words and programming languages to improvise melodramas, build worlds and all the objects in them, solve puzzles, invent amusements and tools, compete for prestige and power, gain wisdom, seek revenge, indulge greed and lust and violent impulses."
While Rheingold focused on the environmental sense of presence that communication provided, Turkle focused on the individual sense of presence that communication provided. "MUDs are a new kind of virtual parlor game and a new form of community. In addition, text-based MUDs are a new form of collaboratively written literature. MUD players are MUD authors, the creators as well as consumers of media content. In this, participating in a MUD has much in common with script writing, performance art, street theater, improvisational theater - or even commedia dell'arte."
- Further blurring the lines of behavioral spheres, Gabriel Weimann wrote that media scholars have found that virtual experiences are very similar to real-life experiences, and that people can confuse their own memories and have trouble remembering whether those experiences were mediated or not.
- Philipp, Vanman, and Storrs demonstrated that unconscious feelings of social presence in a virtual environment can be invoked with relatively impoverished social representations. The researchers found that the mere presence of virtual humans in an immersive environment caused people to be more emotionally expressive than when they were alone in the environment. The research suggests that even relatively impoverished social representations can lead people to behave more socially in an immersive environment.
Presence in popular culture
- Sheridan's view of presence earned its first pop culture reference in 1984 with William Gibson's pre-World Wide Web science fiction novel "Neuromancer", which tells the story of a cyberpunk cowboy of sorts who accesses a virtual world to hack into organizations.
- Joshua Meyrowitz's 1986 "No Sense of Place" discusses the impact of electronic media on social behavior. The book discusses how social situations are transformed by media. Media, he claims, can change one's "sense of place" by mixing traditionally private and public behaviors - or back-stage and front-stage behaviors, respectively, as coined by Erving Goffman. Meyrowitz suggests that television alone will transform the practice of front-stage and back-stage behaviors, as television provides increased information to different groups who may physically not have access to specific communities but who, through media consumption, are able to determine a mental place within the program. He references Marshall McLuhan's concept that "the medium is the message," and that media provide individuals with access to information. With new and changing media, Meyrowitz says that the patterns of information and shifting access to information change social settings and help to determine a sense of place and behavior. With the logic that behavior is connected to information flow, Meyrowitz states that front- and back-stage behaviors are blurred and may be impossible to untangle.
- Lee, Kwan Min (2004). "Presence, Explicated". Communication Theory 14 (1): 27-50. doi:10.1093/ct/14.1.27.
- Lombard, M.; Ditton, T. (1997). "At the heart of it all: the concept of presence". Journal of Computer-Mediated Communication 3 (2). doi:10.1111/j.1083-6101.1997.tb00072.x.
- Sheridan, T. B. (1999). Presence: Teleoperators and Virtual Environments 8 (5): 241-246.
- Sheridan, T. B. (1992). Presence: Teleoperators and Virtual Environments 1: 120-126.
- Minsky, M. (June 1980). "Telepresence". MIT Press Journals: 45-51.
- Steuer, J. "Defining virtual reality: Dimensions determining telepresence" (PDF). Retrieved 2008.
- Schubert, Thomas W. (2009). "A New Conception of Spatial Presence: Once Again, with Feeling". Communication Theory 19 (2): 161-187. doi:10.1111/j.1468-2885.2009.01340.x.
- Bracken, C.; Lombard, M. (2004). "Social presence and children: Praise, intrinsic motivation, and learning with computers". Journal of Communication 54: 22-37. doi:10.1093/joc/54.1.22.
- Nan, X.; Anghelcev, G.; Myers, J. R.; Sar, S.; Faber, R. J. (2006). "What if a website can talk? Exploring the persuasive effects of web-based anthropomorphic agents". Journalism and Mass Communication Quarterly 83 (3): 615-631. doi:10.1177/107769900608300309.
- "Presence-Research.org". Welcome. Retrieved 2008.
- "International Society for Presence Research". About ISPR. Retrieved 2008.
- Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. Reading, MA: Addison-Wesley.
- Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York, NY: Simon & Schuster.
- Weimann, G. (2000). Communicating unreality: Modern media and the reconstruction of reality. Thousand Oaks, CA: Sage Publications, Inc.
- Philipp, M. C.; Storrs, K.; Vanman, E. (2012). "Sociality of facial expressions in immersive virtual environments: A facial EMG study". Biological Psychology 91: 17-21. doi:10.1016/j.biopsycho.2012.05.008.
- Meyrowitz, Joshua (1986). No sense of place: the impact of electronic media on social behavior (12th printing ed.). New York: Oxford University Press. ISBN 978-0-19-504231-3.
- Goffman, Erving (1990). The presentation of self in everyday life (Reprint ed.). Harmondsworth: Penguin. ISBN 978-0140135718.
- McLuhan, Marshall (1964). Understanding Media: The Extensions of Man. New York: McGraw Hill.
- Bob G. Witmer, Michael J. Singer (1998). Measuring Presence in Virtual Environments: A Presence Questionnaire.
- G. Riva, J. Waterworth (2003). Presence and the Self: a cognitive neuroscience approach.
- W. IJsselsteijn, G. Riva (2003). Being There: The experience of presence in mediated environments.
An individual is a person or a specific object. Individuality (or selfhood) is the state or quality of being an individual: a person separate from other persons, possessing his or her own needs or goals, and being self-expressive and independent. From the 15th century and earlier, and also today within the fields of statistics and metaphysics, individual meant "indivisible", typically describing any numerically singular thing, but sometimes meaning "a person" (q.v. "The problem of proper names"). From the seventeenth century on, individual indicates separateness, as in individualism. Early empiricists such as Ibn Tufail and John Locke introduced the idea of the individual as a tabula rasa ("blank slate"), shaped from birth by experience and education. This ties into the idea of the liberty and rights of the individual, society as a social contract between rational individuals, and the beginnings of individualism as a doctrine. Hegel regarded history as the gradual evolution of Mind as it tests its own concepts against the external world. Each time the mind applies its concepts to the world, the concept is revealed to be only partly true, within a certain context; thus the mind continually revises these incomplete concepts so as to reflect a fuller reality (commonly known as the process of thesis, antithesis, and synthesis). The individual comes to rise above his or her own particular viewpoint and grasps that he or she is a part of a greater whole insofar as he or she is bound to family, a social context, and/or a political order. With the rise of existentialism, Kierkegaard rejected Hegel's notion of the individual as subordinated to the forces of history. Instead, he elevated the individual's subjectivity and capacity to choose his or her own fate. Later existentialists built upon this notion. Nietzsche, for example, examines the individual's need to define his/her own self and circumstances in his concept of the will to power and the heroic ideal of the Übermensch. The individual is also central to Sartre's philosophy, which emphasizes individual authenticity, responsibility, and free will. In both Sartre and Nietzsche (and in Nikolai Berdyaev), the individual is called upon to create his or her own values rather than rely on external, socially imposed codes of morality. In Buddhism, the concept of the individual lies in anatman, or "no-self." According to anatman, the individual is really a series of interconnected processes that, working together, give the appearance of being a single, separated whole. In this way, anatman, together with anicca, resembles a kind of bundle theory. Instead of an atomic, indivisible self distinct from reality (see Subject-object problem), the individual in Buddhism is understood as an interrelated part of an ever-changing, impermanent universe (see interdependence, Nondualism, reciprocity). Ayn Rand's Objectivism regards every human as an independent, sovereign entity who possesses an inalienable right to his or her own life, a right derived from his or her nature as a rational being.
Individualism and Objectivism hold that a civilized society, or any form of association, cooperation, or peaceful coexistence among humans, can be achieved only on the basis of the recognition of individual rights — and that a group, as such, has no rights other than the individual rights of its members. The principle of individual rights is the only moral base of all groups or associations. Since only an individual man or woman can possess rights, the expression "individual rights" is a redundancy (which one has to use for purposes of clarification in today's intellectual chaos), but the expression "collective rights" is a contradiction in terms. Individual rights are not subject to a public vote; a majority has no right to vote away the rights of a minority; the political function of rights is precisely to protect minorities from oppression by majorities (and the smallest minority on earth is the individual).
- Abbs 1986, cited in Klein 2005, pp. 26-27.
- G. A. Russell (1994). The 'Arabick' Interest of the Natural Philosophers in Seventeenth-Century England. Brill Publishers, pp. 224-262. ISBN 90-04-09459-8.
- Ayn Rand, "Individualism", Ayn Rand Lexicon.
- Ayn Rand (1961), "Collectivized 'Rights'", The Virtue of Selfishness.
Belize Coast Mangroves
The mangrove swamps here are a nursery ground for many fish species associated with the huge Belize Barrier Reef. By filtering runoff from rivers and trapping sediment, the mangroves also protect the clarity of the coastal waters, helping the coral reef to survive. Numerous cays—small islands composed largely of coral or sand—along the coast are covered with mangroves and form a habitat for birds. In all, more than 250 bird species share the swamps with West Indian manatees and a variety of reptiles, including boa constrictors, American crocodiles, and iguanas.
- Atlantic Ocean West
- Principal species: Red, black, white, and button mangroves
- Area: 1,100 square miles (2,800 square km)
- Location: Eastern Belize, on the western margins of the Caribbean Sea
The foundations for reading and writing at Bressingham Primary School are taught daily in phonics lessons, but also applied across the curriculum. This enables us to extend phonics teaching and learning beyond 'dedicated phonics time' to ensure learning is applied in a relevant context, reinforced and becomes 'sticky knowledge'.
What is phonics?
Phonics is designed to help teach children to read and spell by teaching the skills of segmenting and blending, the alphabetic code and an understanding of how this is used in reading and spelling. To simplify, it is sounding out a word and blending the sounds back together to read the whole word. When writing, it is hearing the sounds in a word and writing them down to spell it correctly. Phonics is taught daily in Reception and Year 1 using the Little Wandle Letters and Sounds Revised programme. This is a systematic synthetic phonics programme, validated by the Department for Education. Spoken English uses about 42 sounds (phonemes). These phonemes are represented by letters (graphemes). The alphabet contains only 26 letters, but we use it to make all the graphemes that represent the phonemes of English. In other words, a sound can be represented by a letter (e.g. 's') or a group of letters (e.g. 'th' or 'igh').
Once children begin learning letters, they are used as quickly as possible in reading and spelling words. For this reason, the first six letters taught are 's', 'a', 't', 'p', 'i', 'n'. These can immediately be used to make a number of words such as 'sat', 'pin', 'pat', 'tap', 'nap'. Following this, children continue learning sounds and the letters that represent them in a particular order. Our reading books are organised into the same order, so the children can practise reading the sounds and words they are learning in lessons. The following links provide further information and resources to support your child at home. Below are overview documents from the Little Wandle Letters and Sounds Revised phonics programme. They show the order the sounds and tricky words are taught during Reception and Year 1. As identified in the 'Overview', the Little Wandle Letters and Sounds Revised programme is organised into phases containing letter sets. The graphemes (letters) are accompanied by images that help the children recall these. These images are available on grapheme mats (below) that the children use in school to support reading and writing. Children learn phrases that help them remember the correct formation. See the guidance on forming the letters below. Please note: 'y' and 'w' will be pronounced in the usual way in Phase 2. Later in the programme children will learn that 'y' also makes the 'ee' sound.
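For the technically curious, the segmenting-and-blending idea described above can be sketched in a few lines of code. This is only a toy illustration: the grapheme list is a small invented subset, the matching rule is simplified, and it is not the Little Wandle programme or any tool the school uses.

```python
# A toy sketch of "segmenting and blending". The grapheme list is a tiny invented subset;
# real phonics teaching covers far more graphemes and alternative spellings.
GRAPHEMES = ["igh", "th", "sh", "s", "a", "t", "p", "i", "n", "l", "g"]

def segment(word):
    """Split a word into graphemes, always taking the longest matching grapheme first."""
    graphemes, i = [], 0
    while i < len(word):
        for g in sorted(GRAPHEMES, key=len, reverse=True):
            if word.startswith(g, i):
                graphemes.append(g)
                i += len(g)
                break
        else:
            # unknown letter: treat it as its own grapheme
            graphemes.append(word[i])
            i += 1
    return graphemes

def blend(graphemes):
    """'Blend' the sounds back together into the whole word."""
    return "".join(graphemes)

print(segment("night"))         # ['n', 'igh', 't']
print(blend(segment("night")))  # night
```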
When many people think of slavery, they think of the transatlantic trade that took place between Africa, the Americas and the Caribbean. The legacy of enslavement in the Americas (particularly in the United States) is known globally through the cultural and political impact of African-American iconography, films, history and references in popular culture. For many people of African descent across the world, it is one of the clearest historical links that binds us together, even if we do not have west African or American ancestry. But the slave trade across the Atlantic Ocean is not the only history of longstanding mass global enslavement. Less well known is a system that went on for centuries longer, but which took place across the opposite ocean: the Indian Ocean. The Indian Ocean slave trade encompassed Africa, Asia and the Middle East, with people from these areas involved as both captors and captives. The numbers of people enslaved and the exact length of the trans-Indian slave trade have not been definitively established, but historians believe that it preceded the transatlantic enslavement by centuries. Even though it is largely ignored as an international slave trade, examples of its impact abound. Writing on Indian Ocean slavery frequently mentions African people in China and Persia as well as in the Muslim holy cities of Mecca and Medina, which also served as central slave markets. The longevity of the Indian Ocean slave trade is also evident in key historical moments. Long before the slave revolt of Haiti under Toussaint L'Ouverture (touted as the most successful slave revolt in modern history, and the revolt that established the first black republic in the western hemisphere), African slaves in the southern Iraqi city of Basra established political power centres in Iraq and parts of present-day Iran for a period of fourteen years. The Zanj rebellion, and subsequent rule of East African slaves in parts of Iraq, took place between 869 and 883 AD.[1] Centuries later, when American president Barack Obama was elected as the first African-American president in the United States, his election proved inspirational to their black descendants, who continue to live in Basra. But focussing solely on African people enslaved across Asia would be hiding the extent of Indian Ocean slavery: Asian people were enslaved for centuries as well, with Asian slaves who survived shipwrecks on European ships found living with the indigenous population on South Africa's coast long before colonisation. There are also reports of Indian people enslaved and living in Kenya and Tanzania, and later there was the large-scale movement of enslaved Asian people sent to work as slaves in colonial South Africa, starting from Dutch colonisation in 1652. Enslaved Asian people in South Africa came from as far afield as Japan and Timor, but the majority were from India, Sri Lanka, Indonesia and China. In addition, men from Baluchistan in present-day Pakistan are regularly mentioned working as guards in relation to the slaving community based in Tanzania in the 1800s, overseen by the Omani sultanate who ruled Zanzibar, and Indian and Chinese slaves were to be found in South Africa, as well as in parts of the African eastern coast. The Ottoman Empire enslaved non-Muslim populations in the Balkans, and women were often the target for sexual slavery, hence the Orientalist "allure" of the harem, and likely the source of the term "white slavery". Afro-Turks also continue to live in Turkey.
At its most pernicious, the effects of Asian enslavement are seen in contemporary racist European depictions of Asian women, which often have roots and metaphors in the sexual abuse inherent in the enslavement of Asian women and their status in the early days of colonialism. There are other contemporary reverberations of the Indian Ocean slave trade, and continuing practices of enslavement in parts of north Africa, including in Mauritania. Enslavement of "African" populations by the "Arab" Sudanese ruling class in Sudan was one of the key reasons for the breakup of the Republic of Sudan and the secession of South Sudan. Even today, being a darker-skinned African is synonymous with being called abd/abeed (slave) by Arabs. This includes Arab people who have been born and have lived all of their lives in western Europe and north America. (The Twitter hashtag #abeed will show you how prevalent and contemporary the epithet is.) Words like "coolie" and "kaffir", often associated with the Asian indentured labour system prevalent under later European colonialism, had roots and common usage in the periods of Indian Ocean slavery from the 1600s onwards. Starting today, Media Diversified will be publishing an ongoing series on slavery across the Indian Ocean (#IndianOceanSlavery). The articles will have most of their starting points in South Africa, which was one of the epicentres of the Indian Ocean slave trade, with the country importing slaves as part of its colonisation process. This series will include articles looking at the history of Asian political prisoners in the country, the history of Chinese people in Africa, which goes back at least a millennium, and the wider resonances of both slavery and very specific under-reported histories in Australia, Ireland and India. Although the descendants of enslaved Africans and Asians continue to live in South Africa, outside of academic publications the country has very little knowledge about its own history of slavery. What will become apparent is that slavery in Africa stretched much further than the west African coast from which most of the transatlantic slave trade took place. It also decimated the African interior for centuries longer than the period in which the transatlantic slave trade took place. Southern, central and east Africa were similarly affected, including by the large-scale movement of enslaved people within Africa, most notably in places like Mozambique and Madagascar. At the same time, there was extensive enslavement in Asia, in India as well as in Indonesia and other parts of south-east Asia, including Japan. Publishing this series on Indian Ocean slavery is significant because it brings together key aspects of a largely underplayed history for general readers. When I started reading up on the topic, I was surprised at how many academic tracts had been published on the issue, and yet that knowledge had not in any significant way filtered through to the populations from whom the history was drawn. If anything, despite the extensive body of research on Indian Ocean slavery, the information remains "hidden within books". It challenges the history we tell ourselves in Africa, Asia and the Middle East about how we came to be, and it also challenges the history that we tell ourselves about other continents.
It brings to light that what was perceived as anti-colonial solidarity in the 1950s and 1960s (often with India as its centre) was a continuation of a centuries-long historical twinning between what is collectively called the "Third World" or developing world. Very often finding the information involved following whispers of conversation or remembering a fact that I had heard long ago and could not make historical sense of at the time. The internet made researching information easier at times, but I would not have been able to do concerted research without the extensive archives in Cape Town and the dedicated staff who manage them. I also would not have been able to find the background material without the well-stocked libraries in South Africa. In fact, if I had attempted this project outside of South Africa, there would likely have been very little in terms of records and libraries to bolster my knowledge. In a wider context, I also drew strength from the burgeoning interest in the history of slavery from the descendants of enslaved people in South Africa. At the moment it reaches a small group of people, but it is the start of reversing the trend of historians writing about history as if there are no contemporary resonances and impact, and as if there are no contemporary living descendants of slaves in South Africa, the wider African continent and Asia. Outside of the formal research, finding the information has been an astonishing experience, which led me to retrace all of my life's journey, especially the often disparate lives that I have led across Africa and Asia during the past two decades, stretching from Senegal to east Africa, across Turkey and Afghanistan to south-east Asia. What was previously incongruous to me made sense when I walked into the Slave Lodge in Cape Town and saw a map detailing the places where enslaved people in South Africa came from. The map of slaves' origins was in fact a map of all of the places that I had lived in or had very significant contact with. And so, many of the gaps were things that only I could have known, having lived a very particular life: why in Turkey I encountered the exact same fig jam recipe as my grandmother's in Cape Town, which is a traditional Cape Malay dish; the common words close to isiZulu that I would hear when I lived in northern Uganda; why, besides the common vocabularies of Persian, Kiswahili and isiZulu that I'd draw on in Kabul and Nairobi, Persians in Iran and Afghanistan as well as Zulus in South Africa both ate maas/maast/amasi (plain yoghurt/fermented milk) with their meals. These were small questions that I could not answer up to now, but the thread of what I have discovered is much bigger than I had anticipated. The first article in this series will look at the history of the Dutch Christmas icon, Zwarte Piet (Black Peter). The iconography around Zwarte Piet brings together my own questions about slavery in South Africa, and about why the soot-smeared, golden-earringed icon who arrived in a wooden boat continues to be such a key cultural figure in Holland. But researching the history of Zwarte Piet took me far away from what had been the familiar framing of enslavement to me for most of my life, namely the trade between mainly west Africa and the Americas.
I hope that for you, the reader, it will be as fruitful to read as it was for me to spend the past year in musty archives, running after snippets of information, being surprised again and again, and ultimately giving voice not to an academic pursuit, but to real people who lived and breathed, who were part of my history, and might be part of how you came to be as well.
[1] Encyclopaedia Britannica, "Zanj Rebellion". For more information on the African-Iraqi community in Basra, see: http://www.britannica.com/event/Zanj-rebellion (first accessed 08/04/2016).
Karen Williams works in media and human rights across Africa and Asia. She was part of the democratic gay rights movement that fought against apartheid in South Africa. She has worked in conflict areas and civil wars across the world and has written extensively on the position of women as victims and perpetrators in the west African and northern Ugandan civil wars.
Indian Ocean Slavery is a series of articles by Karen Williams on the slave trade across the Indian Ocean and its historical and current effects on global populations. Commissioned for our Academic Space, this series sheds light on a little-known but extremely significant period of international history. This article was commissioned for our academic experimental space for long-form writing, edited and curated by Yasmin Gunaratnam.
- I. Introduction
- II. The Origins of the Pacific War
- III. The Origins of the European War
- IV. The United States and the European War
- V. The United States and the Japanese War
- VI. Soldiers' Experiences
- VII. The Wartime Economy
- VIII. Women and World War II
- IX. Race and World War II
- X. Toward a Postwar World
- XI. Conclusion
- XII. Primary Sources
- XIII. Reference Material
I. Introduction
The 1930s and 1940s were trying times. A global economic crisis gave way to a global war that became the deadliest and most destructive in human history. Perhaps eighty million individuals lost their lives during World War II. The war saw industrialized genocide and threatened the eradication of an entire people. It also unleashed the most fearsome technology ever used in war. And when it ended, the United States found itself alone as the world's greatest superpower. Armed with the world's greatest economy, it looked forward to the fruits of a prosperous consumers' economy. But the war raised as many questions as it would settle and unleashed new social forces at home and abroad that confronted generations of Americans to come.
II. The Origins of the Pacific War
Although the United States joined the war in 1941, two years after Europe exploded into conflict in 1939, the path to the Japanese bombing of Pearl Harbor, the surprise attack that threw the United States headlong into war, began much earlier. For the Empire of Japan, the war had begun a decade before Pearl Harbor. On September 18, 1931, a small explosion tore up railroad tracks controlled by the Japanese-owned South Manchuria Railway near the city of Shenyang (Mukden) in the Chinese province of Manchuria. The railway company condemned the bombing as the work of anti-Japanese Chinese dissidents. Evidence, though, suggests that the initial explosion was neither an act of Chinese anti-Japanese sentiment nor an accident but an elaborate ruse planned by the Japanese to provide a basis for invasion. In response, the Japanese Guandong (Kwantung) Army, stationed in Manchuria to guard the railway, began shelling the Shenyang garrison the next day, and the garrison fell before nightfall. Hungry for Chinese territory and witnessing the weakness and disorganization of Chinese forces, but under the pretense of protecting Japanese citizens and investments, the Japanese Imperial Army ordered a full-scale invasion of Manchuria. The invasion was swift. Without a centralized Chinese army, the Japanese quickly defeated isolated Chinese warlords and by the end of February 1932, all of Manchuria was firmly under Japanese control. Japan established the nation of Manchukuo out of the former province of Manchuria.1 This seemingly small skirmish—known by the Chinese as the September 18 Incident and by the Japanese as the Manchurian Incident—sparked a war that would last thirteen years and claim the lives of over thirty-five million people. Comprehending Japanese motivations for attacking China and the grueling stalemate of the ensuing war are crucial for understanding Japan's seemingly unprovoked attack on Pearl Harbor, Hawaii, on December 7, 1941, and, therefore, for understanding the involvement of the United States in World War II as well. Despite their rapid advance into Manchuria, the Japanese put off the invasion of China for nearly three years. Japan occupied a precarious domestic and international position after the September 18 Incident.
At home, Japan was riven by political factionalism due to its stagnating economy. Leaders were torn as to whether to address modernization and lack of natural resources through unilateral expansion (the conquest of resource-rich areas such as Manchuria to export raw materials to domestic Japanese industrial bases such as Hiroshima and Nagasaki) or international cooperation (a philosophy of pan-Asianism in an anti-Western coalition that would push the colonial powers out of Asia). Ultimately, after a series of political crises and assassinations enflamed tensions, pro-war elements within the Japanese military triumphed over the more moderate civilian government. Japan committed itself to aggressive military expansion. Chinese leaders Chiang Kai-shek and Zhang Xueliang appealed to the League of Nations for assistance against Japan. The United States supported the Chinese protest, proclaiming the Stimson Doctrine in January 1932, which refused to recognize any state established as a result of Japanese aggression. Meanwhile, the League of Nations sent Englishman Victor Bulwer-Lytton to investigate the September 18 Incident. After a six-month investigation, Bulwer-Lytton found the Japanese guilty of inciting the September 18 incident and demanded the return of Manchuria to China. The Japanese withdrew from the League of Nations in March 1933. Japan isolated itself from the world. Its diplomatic isolation empowered radical military leaders who could point to Japanese military success in Manchuria and compare it to the diplomatic failures of the civilian government. The military took over Japanese policy. And in the military’s eyes, the conquest of China would not only provide for Japan’s industrial needs, it would secure Japanese supremacy in East Asia. The Japanese launched a full-scale invasion of China. It assaulted the Marco Polo Bridge on July 7, 1937, and routed the forces of the Chinese National Revolutionary Army led by Chiang Kai-shek. The broken Chinese army gave up Beiping (Beijing) to the Japanese on August 8, Shanghai on November 26, and the capital, Nanjing (Nanking), on December 13. Between 250,000 and 300,000 people were killed, and tens of thousands of women were raped, when the Japanese besieged and then sacked Nanjing. The Western press labeled it the Rape of Nanjing. To halt the invading enemy, Chiang Kai-shek adopted a scorched-earth strategy of “trading space for time.” His Nationalist government retreated inland, burning villages and destroying dams, and established a new capital at the Yangtze River port of Chongqing (Chungking). Although the Nationalists’ scorched-earth policy hurt the Japanese military effort, it alienated scores of dislocated Chinese civilians and became a potent propaganda tool of the emerging Chinese Communist Party (CCP).2 Americans read about the brutal fighting in China, but the United States lacked both the will and the military power to oppose the Japanese invasion. After the gut-wrenching carnage of World War I, many Americans retreated toward isolationism by opposing any involvement in the conflagrations burning in Europe and Asia. And even if Americans wished to intervene, their military was lacking. The Japanese army was a technologically advanced force consisting of 4,100,000 men and 900,000 Chinese collaborators—and that was in China alone. The Japanese military was armed with modern rifles, artillery, armor, and aircraft. By 1940, the Japanese navy was the third-largest and among the most technologically advanced in the world. 
Still, Chinese Nationalists lobbied Washington for aid. Chiang Kai-shek’s wife, Soong May-ling—known to the American public as Madame Chiang—led the effort. Born into a wealthy Chinese merchant family in 1898, Madame Chiang spent much of her childhood in the United States and graduated from Wellesley College in 1917 with a major in English literature. In contrast to her gruff husband, Madame Chiang was charming and able to use her knowledge of American culture and values to garner support for her husband and his government. But while the United States denounced Japanese aggression, it took no action during the 1930s. As Chinese Nationalists fought for survival, the Communist Party was busy collecting people and supplies in the northwestern Shaanxi Province. China had been at war with itself when the Japanese came. Nationalists battled a stubborn communist insurgency. In 1935 the Nationalists threw the communists out of the fertile Chinese coast, but an ambitious young commander named Mao Zedong recognized the power of the Chinese peasant population. In Shaanxi, Mao recruited from the local peasantry, building his force from a meager seven thousand survivors at the end of the Long March in 1935 to a robust 1.2 million members by the end of the war. Although Japan had conquered much of the country, the Nationalists regrouped and the communists rearmed. An uneasy truce paused the country’s civil war and refocused efforts on the invaders. The Chinese could not dislodge the Japanese, but they could stall their advance. The war mired in stalemate. III. The Origins of the European War Across the globe in Europe, the continent’s major powers were still struggling with the aftereffects of World War I when the global economic crisis spiraled much of the continent into chaos. Germany’s Weimar Republic collapsed with the economy, and out of the ashes emerged Adolf Hitler’s National Socialists—the Nazis. Championing German racial supremacy, fascist government, and military expansionism, Hitler rose to power and, after aborted attempts to take power in Germany, became chancellor in 1933 and the Nazis conquered German institutions. Democratic traditions were smashed. Leftist groups were purged. Hitler repudiated the punitive damages and strict military limitations of the Treaty of Versailles. He rebuilt the German military and navy. He reoccupied regions lost during the war and remilitarized the Rhineland, along the border with France. When the Spanish Civil War broke out in 1936, Hitler and Benito Mussolini—the fascist Italian leader who had risen to power in the 1920s—intervened for the Spanish fascists, toppling the communist Spanish Republican Party. Britain and France stood by warily and began to rebuild their militaries, anxious in the face of a renewed Germany but still unwilling to draw Europe into another bloody war.3 In his autobiographical manifesto, Mein Kampf, Hitler advocated for the unification of Europe’s German peoples under one nation and that nation’s need for Lebensraum, or living space, particularly in Eastern Europe, to supply Germans with the land and resources needed for future prosperity. The Untermenschen (lesser humans) would have to go. Once in power, Hitler worked toward the twin goals of unification and expansion. In 1938, Germany annexed Austria and set its sights on the Sudetenland, a large, ethnically German area of Czechoslovakia. 
Britain and France, alarmed but still anxious to avoid war, agreed—without Czechoslovakia's input—that Germany could annex the region in return for a promise to stop all future German aggression. They thought that Hitler could be appeased, but it became clear that his ambitions would continue pushing German expansion. In March 1939, Hitler took the rest of Czechoslovakia and began to make demands on Poland. Britain and France promised war. And war came. Hitler signed a secret agreement—the Molotov-Ribbentrop Pact—with the Soviet Union that coordinated the splitting of Poland between the two powers and promised nonaggression thereafter. The European war began when the German Wehrmacht invaded Poland on September 1, 1939. Britain and France declared war two days later and mobilized their armies. Britain and France hoped that the Poles could hold out for three to four months, enough time for the Allies to intervene. Poland fell in three weeks. The German army, anxious to avoid the rigid, grinding war of attrition that took so many millions in the stalemate of World War I, built its new modern army for speed and maneuverability. German doctrine emphasized the use of tanks, planes, and motorized infantry (infantry that used trucks for transportation instead of marching) to concentrate forces, smash front lines, and wreak havoc behind the enemy's defenses. It was called Blitzkrieg, or lightning war. After the fall of Poland, France and its British allies braced for an inevitable German attack. Throughout the winter of 1939–1940, however, fighting was mostly confined to smaller fronts in Norway. Belligerents called it the Sitzkrieg (sitting war). But in May 1940, Hitler launched his attack into Western Europe. Mirroring the Germans' Schlieffen Plan of 1914 in the previous war, Germany attacked through the Netherlands and Belgium to avoid the prepared French defenses along the French-German border. Poland had fallen in three weeks; France lasted only a few weeks more. By June, Hitler was posing for photographs in front of the Eiffel Tower. Germany split France in half. Germany occupied and governed the north, and the south would be ruled under a puppet government in Vichy. With France under heel, Hitler turned to Britain. Operation Sea Lion—the planned German invasion of the British Isles—required air superiority over the English Channel. From June until October the German Luftwaffe fought the Royal Air Force (RAF) for control of the skies. Despite having fewer planes, British pilots won the so-called Battle of Britain, saving the islands from immediate invasion and prompting the new prime minister, Winston Churchill, to declare, "Never in the field of human conflict was so much owed by so many to so few." If Britain was safe from invasion, it was not immune from additional air attacks. Stymied in the Battle of Britain, Hitler began the Blitz—a bombing campaign against cities and civilians. Hoping to crush the British will to fight, the Luftwaffe bombed the cities of London, Liverpool, and Manchester every night from September to the following May. Children were sent far into the countryside to live with strangers to shield them from the bombings. Remaining residents took refuge in shelters and subway tunnels, emerging each morning to put out fires and bury the dead. The Blitz ended in June 1941, when Hitler, confident that Britain was temporarily out of the fight, launched Operation Barbarossa—the invasion of the Soviet Union.
Hoping to capture agricultural lands, seize oil fields, and eliminate the military threat of Stalin’s Soviet Union, Hitler broke the two powers’ 1939 nonaggression pact and, on June 22, invaded the Soviet Union. It was the largest land invasion in history. France and Poland had fallen in weeks, and German officials hoped to break Russia before the winter. And initially, the Blitzkrieg worked. The German military quickly conquered enormous swaths of land and netted hundreds of thousands of prisoners. But Russia was too big and the Soviets were willing to sacrifice millions to stop the fascist advance. After recovering from the initial shock of the German invasion, Stalin moved his factories east of the Urals, out of range of the Luftwaffe. He ordered his retreating army to adopt a “scorched earth” policy, to move east and destroy food, rails, and shelters to stymie the advancing German army. The German army slogged forward. It split into three pieces and stood at the gates of Moscow, Stalingrad, and Leningrad, but supply lines now stretched thousands of miles, Soviet infrastructure had been destroyed, partisans harried German lines, and the brutal Russian winter arrived. Germany had won massive gains, but winter found its armies exhausted and overextended. In the north, the German army starved Leningrad to death during an interminable siege; in the south, at Stalingrad, the two armies bled themselves to death in the destroyed city; and, in the center, on the outskirts of Moscow, in sight of the capital city, the German army faltered and fell back. It was the Soviet Union that broke Hitler’s army. Twenty-five million Soviet soldiers and civilians died during the Great Patriotic War, and roughly 80 percent of all German casualties during the war came on the Eastern Front. The German army and its various conscripts suffered 850,000 casualties at the Battle of Stalingrad alone. In December 1941, Germany began its long retreat.4

IV. The United States and the European War

While Hitler marched across Europe, the Japanese continued their war in the Pacific. In 1939 the United States dissolved its trade treaties with Japan and the following year cut off supplies of war materials by embargoing oil, steel, rubber, and other vital goods. American leaders hoped that economic pressure would shut down the Japanese war machine. Instead, Japan’s resource-starved military launched invasions across the Pacific to sustain its war effort. The Japanese called their new empire the Greater East Asia Co-Prosperity Sphere and, with the cry of “Asia for the Asians,” made war against European powers and independent nations throughout the region. Diplomatic relations between Japan and the United States collapsed. The United States demanded that Japan withdraw from China; Japan considered the oil embargo a de facto declaration of war.5

Japanese military planners, believing that American intervention was inevitable, planned a coordinated Pacific offensive to neutralize the United States and the European colonial powers and provide time for Japan to complete its conquests and fortify its positions. On the morning of December 7, 1941, the Japanese launched a surprise attack on the American naval base at Pearl Harbor, Hawaii. Japanese military planners hoped to destroy enough battleships and aircraft carriers to cripple American naval power for years. Twenty-four hundred Americans were killed in the attack. American isolationism fell at Pearl Harbor.
Japan also assaulted Hong Kong, the Philippines, and American holdings throughout the Pacific, but it was the attack on Hawaii that threw the United States into a global conflict. Franklin Roosevelt called December 7 “a date which will live in infamy” and called for a declaration of war, which Congress answered within hours. Within a week of Pearl Harbor the United States had declared war on the entire Axis, turning two previously separate conflicts into a true world war.

The American war began slowly. Britain had stood alone militarily in Europe, but American supplies had bolstered its resistance. Hitler unleashed his U-boat “wolf packs” into the Atlantic Ocean with orders to sink anything carrying aid to Britain, but Britain’s and the United States’ superior tactics and technology won them the Battle of the Atlantic. British code breakers cracked Germany’s radio codes, and the resulting surge of intelligence, dubbed Ultra, coupled with massive naval convoys escorted by destroyers armed with sonar and depth charges, gave the advantage to the Allies. By 1942, Hitler’s Kriegsmarine was losing ships faster than they could be built.6

In North Africa in 1942, British victory at El Alamein began pushing the Germans back. In November, the first American combat troops entered the European war, landing in French Morocco and pushing the Germans east while the British pushed west.7 By 1943, the Allies had pushed Axis forces out of Africa. In January President Roosevelt and Prime Minister Churchill met at Casablanca to discuss the next step of the European war. Churchill convinced Roosevelt to chase the Axis up Italy, into the “soft underbelly” of Europe. Afterward, Roosevelt announced to the press that the Allies would accept nothing less than unconditional surrender.

Meanwhile, the Army Air Force (AAF) sent hundreds (and eventually thousands) of bombers to England in preparation for a massive strategic bombing campaign against Germany. The plan was to bomb Germany around the clock. American bombers hit German ball-bearing factories, rail yards, oil fields, and manufacturing centers during the day, while the British RAF carpet-bombed German cities at night. The bombers flew in formation and initially went unescorted, since many believed that bombers equipped with defensive firepower flew too high and too fast to be attacked. However, advanced German technology allowed fighters to easily shoot down the lumbering bombers. On some disastrous missions, the Germans shot down almost 50 percent of American aircraft. Eventually, the advent and implementation of long-range escort fighters let the bombers hit their targets more accurately while the escorts confronted opposing German aircraft.

In the wake of the Soviets’ victory at Stalingrad, the Big Three (Roosevelt, Churchill, and Stalin) met in Tehran in November 1943. Dismissing Africa and Italy as a sideshow, Stalin demanded that Britain and the United States invade France to relieve pressure on the Eastern Front. Churchill was hesitant, but Roosevelt was eager. The invasion was tentatively scheduled for 1944. Back in Italy, the “soft underbelly” turned out to be much tougher than Churchill had imagined. Italy’s narrow, mountainous terrain gave the defending Axis the advantage. Movement up the peninsula was slow, and in some places conditions returned to the trenchlike warfare of World War I. The Americans attempted to land troops behind the German lines at Anzio on the western coast of Italy, but, quickly surrounded, they suffered heavy casualties.
Still, the Allies pushed up the peninsula, Mussolini’s government collapsed, and a new Italian government quickly made peace. Two days after the American army entered Rome, American, British, and Canadian forces launched Operation Overlord, the long-awaited invasion of France. D-Day, as it became popularly known, was the largest amphibious assault in history. American general Dwight Eisenhower was uncertain enough of the attack’s chances that the night before the invasion he wrote two speeches: one for success and one for failure. The Allied landings at Normandy were successful, and although progress across France was much slower than hoped for, Paris was liberated roughly two months later. Allied bombing expeditions meanwhile continued to level German cities and industrial capacity. Perhaps four hundred thousand German civilians were killed by Allied bombing.8

The Nazis were crumbling on both fronts. Hitler tried but failed to turn the war in his favor in the west. The Battle of the Bulge failed to drive the Allies back to the English Channel, but the delay cost the Allies the winter. The invasion of Germany would have to wait, while the Soviet Union continued its relentless push westward, ravaging German populations in retribution for German war crimes.9 German counterattacks in the east failed to dislodge the Soviet advance, destroying any last chance Germany might have had to regain the initiative. The year 1945 dawned with the end of the European war in sight. The Big Three met again at Yalta in the Soviet Union, where they reaffirmed the demand for Hitler’s unconditional surrender and began to plan for postwar Europe. The Soviet Union reached Germany in January, and the Americans crossed the Rhine in March. In late April, American and Soviet troops met at the Elbe; the Soviets, pushed relentlessly by Stalin to reach Berlin first, took the capital city in May, days after Hitler and his high command had died by suicide in a bunker beneath the city. Germany was conquered. The European war was over. Allied leaders met again, this time at Potsdam, Germany, where it was decided that Germany would be divided into pieces according to current Allied occupation, with Berlin likewise divided, pending future elections. Stalin also agreed to join the fight against Japan in approximately three months.10

V. The United States and the Japanese War

As Americans celebrated V-E (Victory in Europe) Day, they redirected their full attention to the still-raging Pacific War. As in Europe, the war in the Pacific started slowly. After Pearl Harbor, the American-controlled Philippine archipelago fell to Japan. After running out of ammunition and supplies, the garrison of American and Filipino soldiers surrendered. The prisoners were marched eighty miles to their prisoner-of-war camp without food, water, or rest. Ten thousand died on the Bataan Death March.11

But as Americans mobilized their armed forces, the tide turned. In May and June 1942, American naval victories at the Battle of the Coral Sea and the aircraft carrier duel at the Battle of Midway crippled Japan’s Pacific naval operations. To dislodge Japan’s hold over the Pacific, the U.S. military began island hopping: attacking island after island, bypassing the strongest but seizing those capable of holding airfields to continue pushing Japan out of the region. Combat was vicious. At Guadalcanal American soldiers saw Japanese soldiers launch suicidal charges rather than surrender. Many Japanese soldiers refused to be taken prisoner or to take prisoners themselves.
Such tactics, coupled with American racial prejudice, turned the Pacific Theater into a more brutal and barbarous conflict than the European Theater.12 Japanese defenders fought tenaciously. Few battles were as one-sided as the Battle of the Philippine Sea, where American pilots downed so many Japanese planes that they dubbed the lopsided engagement the Great Marianas Turkey Shoot. Japanese soldiers bled the Americans in their advance across the Pacific. At Iwo Jima, an eight-square-mile island of volcanic rock, seventeen thousand Japanese soldiers held the island against seventy thousand Marines for over a month. At the cost of nearly their entire force, they inflicted almost thirty thousand casualties before the island was lost.

By February 1945, American bombers were in range of the Japanese mainland. Bombers hit Japan’s industrial facilities but suffered high casualties. To spare bomber crews from dangerous daylight raids, and to achieve maximum effect against Japan’s wooden cities, many American bombers dropped incendiary weapons that created massive firestorms and wreaked havoc on Japanese cities. Over sixty Japanese cities were firebombed. American firebombs killed one hundred thousand civilians in Tokyo in March 1945.

In June 1945, after eighty days of fighting and tens of thousands of casualties, the Americans captured the island of Okinawa. The mainland of Japan was now open before them, and Okinawa offered a viable base from which to launch a full invasion of the Japanese homeland and end the war. Estimates varied, but given the tenacity of Japanese soldiers fighting on islands far from their home, some officials estimated that an invasion of the Japanese mainland could cost half a million American casualties and perhaps millions of Japanese civilians. Historians debate the many motivations that ultimately drove the Americans to use atomic weapons against Japan, and many American officials criticized the decision, but these would be the numbers later cited by government leaders and military officials to justify their use.13

Early in the war, fearing that the Germans might develop an atomic bomb, the U.S. government launched the Manhattan Project, a hugely expensive, ambitious program to harness atomic energy and create a single weapon capable of leveling entire cities. The Americans successfully exploded the world’s first nuclear device, Trinity, in New Mexico in July 1945. (Physicist J. Robert Oppenheimer, the director of the Los Alamos Laboratory, where the bomb was designed, later recalled that the event reminded him of Hindu scripture: “Now I am become death, the destroyer of worlds.”) Two more bombs—Little Boy and Fat Man—were built and detonated over two Japanese cities in August. Hiroshima was hit on August 6. Over one hundred thousand civilians were killed. Nagasaki followed on August 9. Perhaps eighty thousand civilians were killed. Emperor Hirohito announced the surrender of Japan on August 15. On September 2, aboard the battleship USS Missouri, delegates from the Japanese government formally signed their surrender. World War II was finally over.

VI. Soldiers’ Experiences

Almost eighteen million men served in World War II. Volunteers rushed to join the military after Pearl Harbor, but the majority—over ten million—were drafted into service. Volunteers could express their preference for assignment, and many preempted the draft by volunteering. Regardless, recruits judged I-A, “fit for service,” were moved into basic training, where soldiers were conditioned physically and trained in the basic use of weapons and military equipment.
Soldiers were indoctrinated into the chain of command and introduced to military life. After basic, soldiers moved on to more specialized training. For example, combat infantrymen received additional weapons and tactical training, and radio operators learned transmission codes and the operation of field radios. Afterward, an individual’s experience varied depending on what service he entered and to what theater he was assigned.14

Soldiers and Marines bore the brunt of on-the-ground combat. After transportation to the front by trains, ships, and trucks, they could expect to march carrying packs weighing anywhere from twenty to fifty pounds containing rations, ammunition, bandages, tools, clothing, and miscellaneous personal items in addition to their weapons. Sailors, once deployed, spent months at sea operating their assigned vessels. Larger ships, particularly aircraft carriers, were veritable floating cities. In most, sailors lived and worked in cramped conditions, often sleeping in bunks stacked in rooms housing dozens of sailors. Senior officers received small rooms of their own. Sixty thousand American sailors lost their lives in the war.

During World War II, the Air Force was still a branch of the U.S. Army, and soldiers served in ground and air crews. World War II saw the institutionalization of massive bombing campaigns against cities and industrial production. Large bombers like the B-17 Flying Fortress required pilots, navigators, bombardiers, radio operators, and four dedicated machine gunners. Airmen on bombing raids left from bases in England or Italy or from Pacific islands and endured hours of flight before approaching enemy territory. At high altitude, and without pressurized cabins, crews breathed from oxygen tanks and endured plummeting on-board temperatures. Once in enemy airspace, crews confronted enemy fighters and anti-aircraft flak from the ground. Even with fighter pilots flying as escorts, American air crews suffered heavy casualties. Tens of thousands of airmen lost their lives.

On the ground, conditions varied. Soldiers in Europe endured freezing winters, impenetrable French hedgerows, Italian mountain ranges, and dense forests. Germans fought with a Western mentality familiar to Americans. Soldiers in the Pacific endured heat and humidity, monsoons, jungles, and tropical diseases. And they confronted an unfamiliar foe. Americans, for instance, could understand surrender as prudent; many Japanese soldiers saw it as cowardice. What Americans saw as a fanatical waste of life, the Japanese saw as brave and honorable. Atrocities flourished in the Pacific at a level unmatched in Europe.

VII. The Wartime Economy

Economies win wars no less than militaries. The war converted American factories to wartime production, reawakened Americans’ economic might, armed Allied belligerents and the American armed forces, effectively pulled America out of the Great Depression, and ushered in an era of unparalleled economic prosperity.15

Roosevelt’s New Deal had ameliorated the worst of the Depression, but the economy still limped along through the late 1930s. But then Europe fell into war, and, despite the country’s isolationism, Americans were glad to sell the Allies arms and supplies. And then Pearl Harbor changed everything. The United States drafted the economy into war service. The “sleeping giant” mobilized its unrivaled economic capacity to wage worldwide war.
Governmental entities such as the War Production Board and the Office of War Mobilization and Reconversion managed economic production for the war effort, and economic output exploded. An economy that had been unable to provide work for a quarter of the workforce less than a decade earlier now struggled to fill vacant positions. Government spending during the four years of war totaled twice as much as all previous federal spending in American history combined. The budget deficit soared, but, just as Depression-era economists had counseled, the government’s massive intervention annihilated unemployment and propelled growth. The economy that came out of the war looked nothing like the one that had begun it.

Military production came at the expense of the civilian consumer economy. Appliance and automobile manufacturers converted their plants to produce weapons and vehicles. Consumer choice was foreclosed. Every American received rationing cards and, legally, goods such as gasoline, coffee, meat, cheese, butter, processed food, firewood, and sugar could not be purchased without them. The housing industry was shut down, and the cities became overcrowded.

But the wartime economy boomed. The Roosevelt administration urged citizens to save their earnings or buy war bonds to prevent inflation. Bond drives were held nationally and headlined by Hollywood celebrities. Such drives were hugely successful. They not only funded much of the war effort, they helped tame inflation as well. So too did taxes. The federal government raised income taxes and boosted the top marginal tax rate to 94 percent.

With the economy booming and twenty million American workers placed into military service, unemployment virtually disappeared. African Americans continued to leave the agrarian South for the industrial North. And as more men joined the military and more positions went unfilled, women joined the workforce en masse. American producers also looked outside the United States, southward to Mexico, to fill their labor needs. Between 1942 and 1964, the United States contracted thousands of Mexican nationals to work in American agriculture and railroads in the Bracero Program. Jointly administered by the State Department, the Department of Labor, and the Department of Justice, the binational agreement secured five million contracts across twenty-four states.16 With factory work proliferating across the country and agricultural labor experiencing severe labor shortages, the presidents of Mexico and the United States signed an agreement in July 1942 to bring the first group of legally contracted workers to California. Discriminatory policies toward people of Mexican descent prevented bracero contracts in Texas until 1947.

The Bracero Program survived the war, enshrined in law until the 1960s, when the United States liberalized its immigration laws. Though braceros suffered exploitative labor conditions, for the men who participated, the program was a mixed blessing. Interviews with ex-braceros captured the complexity. “They would call us pigs . . . they didn’t have to treat us that way,” one said of his employers, while another said, “For me it was a blessing, the United States was a blessing . . .
it is a nation I fell in love with because of the excess work and good pay.”17 After the exodus of Mexican migrants during the Depression, the program helped reestablish Mexican migration, institutionalized migrant farm work across much of the country, and further planted a Mexican presence in the southern and western United States.

VIII. Women and World War II

President Franklin D. Roosevelt and his administration had encouraged all able-bodied American women to help the war effort. He considered the role of women in the war critical for American victory, and the public expected women to assume various functions to free men for active military service. While most women opted to remain at home or volunteer with charitable organizations, many went to work or donned a military uniform.

World War II brought unprecedented labor opportunities for American women. Industrial labor, an occupational sphere dominated by men, shifted in part to women for the duration of wartime mobilization. Women applied for jobs in converted munitions factories. The iconic illustrated image of Rosie the Riveter, a muscular woman dressed in coveralls with her hair in a kerchief beneath the slogan We Can Do It!, came to stand for female factory labor during the war. But women also worked in various auxiliary positions for the government. Although such jobs were often traditionally gendered female, over a million administrative jobs at the local, state, and national levels were transferred from men to women for the duration of the war.18

For women who elected not to work, many volunteer opportunities presented themselves. The American Red Cross, the largest charitable organization in the nation, encouraged women to volunteer with local city chapters. Millions of women organized community social events for families, packed and shipped almost half a million tons of medical supplies overseas, and prepared twenty-seven million care packages of nonperishable items for American and other Allied prisoners of war. The American Red Cross further required all female volunteers to certify as nurse’s aides, providing an extra benefit and work opportunity for hospital staffs that suffered severe personnel losses. Other charity organizations, such as church and synagogue affiliates, benevolent associations, and social club auxiliaries, gave women further outlets for volunteer work.

Military service was another option for women who wanted to join the war effort. Over 350,000 women served in several all-female units of the military branches. The Army and Navy Nurse Corps Reserves, the Women’s Army Auxiliary Corps, the Navy’s Women Accepted for Volunteer Emergency Service (WAVES), the Coast Guard’s SPARs (named for the Coast Guard motto, Semper Paratus, “Always Ready”), and Marine Corps units gave women the opportunity to serve as either commissioned officers or enlisted members at military bases at home and abroad. The Nurse Corps Reserves alone commissioned 105,000 army and navy nurses recruited by the American Red Cross. Military nurses worked at base hospitals, in mobile medical units, and aboard hospital “mercy” ships.19

Jim Crow segregation in both the civilian and military sectors remained a problem for Black women who wanted to join the war effort. Even after President Roosevelt signed Executive Order 8802 in 1941, supervisors who hired Black women still often relegated them to the most menial tasks on factory floors.
Segregation was further upheld in factory lunchrooms, and many Black women were forced to work at night to keep them separate from whites. In the military, only the Women’s Army Auxiliary Corps and the Nurse Corps Reserves accepted Black women for active service, and the army set a limited quota of 10 percent of total end strength for Black female officers and enlisted women and segregated Black units on active duty. The American Red Cross, meanwhile, recruited only four hundred Black nurses for the Army and Navy Nurse Corps Reserves, and Black Army and Navy nurses worked in segregated military hospitals on bases stateside and overseas.

And for all of the postwar celebration of Rosie the Riveter, after the war ended the men returned and most women voluntarily left the workforce or lost their jobs. Meanwhile, former military women faced a litany of obstacles in obtaining veterans’ benefits during their transition to civilian life. The nation that had called millions of women into service during the four-year crisis hardly stood ready to accommodate their postwar needs and demands.

IX. Race and World War II

World War II affected nearly every aspect of life in the United States, and America’s racial relationships were not immune. African Americans, Mexicans and Mexican Americans, Jews, and Japanese Americans were profoundly impacted.

In early 1941, months before the Japanese attack on Pearl Harbor, A. Philip Randolph, president of the Brotherhood of Sleeping Car Porters, the largest Black trade union in the nation, made headlines by threatening President Roosevelt with a march on Washington, D.C. In this “crisis of democracy,” Randolph said, many defense contractors still refused to hire Black workers and the armed forces remained segregated. In exchange for Randolph calling off the march, Roosevelt issued Executive Order 8802, popularly known as the Fair Employment Act, banning racial and religious discrimination in defense industries and establishing the Fair Employment Practices Committee (FEPC) to monitor defense industry hiring practices. While the armed forces remained segregated throughout the war, and the FEPC had limited influence, the order showed that the federal government could stand against discrimination. The Black workforce in defense industries rose from 3 percent in 1942 to 9 percent in 1945.20

More than one million African Americans served in the armed forces during the war, most of them in segregated, noncombat units led by white officers. Some gains were made, however. The number of Black officers increased from five in 1940 to over seven thousand in 1945. The all-Black pilot squadrons, known as the Tuskegee Airmen, completed more than 1,500 missions, escorted heavy bombers into Germany, and earned several hundred merits and medals. Many bomber crews specifically requested the Red Tail Angels as escorts. And near the end of the war, the army and navy began integrating some of their units and facilities, before the U.S. government finally ordered the full integration of its armed forces in 1948.21

While Black Americans served in the armed forces (though they were segregated), on the home front they became riveters and welders, rationed food and gasoline, and bought victory bonds. But many Black Americans saw the war as an opportunity not only to serve their country but to improve it. The Pittsburgh Courier, a leading Black newspaper, spearheaded the Double V campaign.
It called on African Americans to fight two wars: the war against Nazism and fascism abroad and the war against racial inequality at home. To achieve victory, to achieve “real democracy,” the Courier encouraged its readers to enlist in the armed forces, volunteer on the home front, and fight against racial segregation and discrimination.22

During the war, membership in the NAACP jumped tenfold, from fifty thousand to five hundred thousand. The Congress of Racial Equality (CORE) was formed in 1942 and spearheaded the method of nonviolent direct action to achieve desegregation. Between 1940 and 1950, some 1.5 million Black southerners, the largest number of any decade since the beginning of the Great Migration, also indirectly demonstrated their opposition to racism and violence by migrating out of the Jim Crow South to the North. But transitions were not easy. Racial tensions erupted in 1943 in a series of riots in cities such as Mobile, Beaumont, and Harlem. The bloodiest race riot occurred in Detroit and resulted in the deaths of twenty-five Black and nine white Americans. Still, the war ignited in African Americans an urgency for equality that they would carry with them into the subsequent years.23

Many Americans had to navigate American prejudice, and America’s entry into the war left foreign nationals from the belligerent nations in a precarious position. The Federal Bureau of Investigation (FBI) targeted many on suspicions of disloyalty for detainment, hearings, and possible internment under the Alien Enemies Act. Those who received an order for internment were sent to government camps secured by barbed wire and armed guards. Such internments were supposed to be for cause. Then, on February 19, 1942, President Roosevelt signed Executive Order 9066, authorizing the removal of any persons from designated “exclusion zones”—which ultimately covered nearly a third of the country—at the discretion of military commanders. Thirty thousand Japanese Americans fought for the United States in World War II, but wartime anti-Japanese sentiment built on historical prejudices, and under the order, people of Japanese descent, both immigrants and American citizens, were detained and placed under the custody of the War Relocation Authority, the civil agency that supervised their relocation to internment camps. They lost their homes and jobs. Over ten thousand German nationals and a smaller number of Italian nationals were interned at various times in the United States during World War II, but American policies disproportionately targeted Japanese-descended populations, and individuals did not receive personalized reviews prior to their internment. This policy of mass exclusion and detention affected over 110,000 Japanese and Japanese-descended individuals. Seventy thousand were American citizens.24

In its 1982 report, Personal Justice Denied, the congressionally appointed Commission on Wartime Relocation and Internment of Civilians concluded that “the broad historical causes” shaping the relocation program were “race prejudice, war hysteria, and a failure of political leadership.”25 Although the exclusion orders were found to have been constitutionally permissible under the vagaries of national security, they were later judged, even by the military and judicial leaders of the time, to have been a grave injustice against people of Japanese descent. In 1988, President Reagan signed a law that formally apologized for internment and provided reparations to surviving internees.
But if actions taken during war would later prove repugnant, so too could inaction. As the Allies pushed into Germany and Poland, they uncovered the full extent of Hitler’s genocidal atrocities. The Allies liberated massive camp systems set up for the imprisonment, forced labor, and extermination of all those deemed racially, ideologically, or biologically “unfit” by Nazi Germany. But the Holocaust—the systematic murder of eleven million civilians, including six million Jews—had been under way for years.

How did America respond? Initially, American officials expressed little concern for Nazi persecutions. At the first signs of trouble in the 1930s, the State Department and most U.S. embassies did relatively little to aid European Jews. Roosevelt publicly spoke out against the persecution and even withdrew the U.S. ambassador to Germany after Kristallnacht. He pushed for the 1938 Evian Conference in France, in which international leaders discussed the Jewish refugee problem and worked to expand Jewish immigration quotas by tens of thousands of people per year. But the conference came to nothing, and the United States turned away countless Jewish refugees who requested asylum.

In 1939, the German ship St. Louis carried over nine hundred Jewish refugees. They could not find a country that would take them. The passengers could not receive visas under the U.S. quota system. A State Department wire to one passenger read that all must “await their turns on the waiting list and qualify for and obtain immigration visas before they may be admissible into the United States.” The ship cabled the president for special privilege, but the president said nothing. The ship was forced to return to Europe. Hundreds of the St. Louis’s passengers would perish in the Holocaust.

Anti-Semitism still permeated the United States. Even if Roosevelt wanted to do more—it’s difficult to trace his own thoughts and personal views—he judged the political price for increasing immigration quotas to be too high. In 1938 and 1939, the U.S. Congress debated the Wagner-Rogers Bill, a measure that would have admitted twenty thousand German-Jewish children into the United States. First lady Eleanor Roosevelt endorsed the measure, but the president remained publicly silent. The bill was opposed by roughly two-thirds of the American public and was defeated. Historians speculate that Roosevelt, anxious to protect the New Deal and his rearmament programs, was unwilling to expend political capital to protect foreign groups that the American public had little interest in protecting.26

Knowledge of the full extent of the Holocaust was slow in coming. When the war began, American officials, including Roosevelt, doubted initial reports of industrial death camps. But even when they conceded their existence, officials pointed to their genuinely limited options. The most plausible response for the U.S. military was to bomb either the camps or the railroads leading to them, but those options were rejected by military and civilian officials who argued that it would do little to stop the deportations, would distract from the war effort, and could cause casualties among concentration camp prisoners. Whether bombing would have saved lives remains a hotly debated question.27

Late in the war, secretary of the treasury Henry Morgenthau, himself born into a wealthy New York Jewish family, pushed through major changes in American policy. In 1944, he helped create the War Refugee Board (WRB) and became a passionate advocate for Jewish refugees.
The WRB saved perhaps two hundred thousand Jews and twenty thousand others. Morgenthau also convinced Roosevelt to issue a public statement condemning the Nazis’ persecution. But it was already 1944, and such policies were far too little, far too late.28

X. Toward a Postwar World

Americans celebrated the end of the war. At home and abroad, the United States looked to create a postwar order that would guarantee global peace and domestic prosperity. Although the alliance of convenience with Stalin’s Soviet Union would collapse, Americans nevertheless looked for the means to ensure postwar stability and economic security for returning veterans.

The inability of the League of Nations to stop German, Italian, and Japanese aggression caused many to question whether any global organization or agreements could ever ensure world peace. This included Franklin Roosevelt, who, as Woodrow Wilson’s assistant secretary of the navy, had witnessed the rejection of this idea by both the American people and the Senate. In 1941, Roosevelt believed that postwar security could be maintained by an informal agreement between what he termed the Four Policemen—the United States, Britain, the Soviet Union, and China—instead of a rejuvenated League of Nations. But others, including secretary of state Cordell Hull and British prime minister Winston Churchill, disagreed and convinced Roosevelt to push for a new global organization. As the war ran its course, Roosevelt came around to the idea. And so did the American public. Pollster George Gallup noted a “profound change” in American attitudes. The United States had rejected membership in the League of Nations after World War I, and in 1937 only a third of Americans polled supported such an idea. But as war broke out in Europe, half of Americans did. America’s entry into the war bolstered support, and, by 1945, with the war closing, 81 percent of Americans favored the idea.29

Whatever his reservations, Roosevelt had long shown enthusiasm for the ideas later enshrined in the United Nations (UN) charter. In January 1941, he announced his Four Freedoms—freedom of speech, of worship, from want, and from fear—that all of the world’s citizens should enjoy. That same year he signed the Atlantic Charter with Churchill, which reinforced those ideas and added the right of self-determination and promised some sort of postwar economic and political cooperation. Roosevelt first used the term united nations to describe the Allied powers, not the subsequent postwar organization. But the name stuck. At Tehran in 1943, Roosevelt and Churchill convinced Stalin to send a Soviet delegation to a conference at Dumbarton Oaks, in the Georgetown neighborhood of Washington, D.C., in August 1944, where they agreed on the basic structure of the new organization. It would have a Security Council—the original Four Policemen, plus France—which would consult on how best to keep the peace and when to deploy the military power of the assembled nations. According to one historian, the organization demonstrated an understanding that “only the Great Powers, working together, could provide real security.” But the plan was a kind of hybrid between Roosevelt’s policemen idea and a global organization of equal representation. There would also be a General Assembly, made up of all nations; an International Court of Justice; and a council for economic and social matters.
Dumbarton Oaks was a mixed success—the Soviets especially expressed concern over how the Security Council would work—but the powers agreed to meet again in San Francisco between April and June 1945 for further negotiations. There, on June 26, 1945, fifty nations signed the UN charter.30

Anticipating victory in World War II, leaders looked not only to the postwar global order but also to the fate of returning American servicemen. American politicians and interest groups sought to avoid another economic depression—the economy had tanked after World War I—by gradually easing returning veterans back into the civilian economy. The brainchild of Warren Atherton, the head of the American Legion, the G.I. Bill won support from progressives and conservatives alike. Passed in 1944, the G.I. Bill was a multifaceted, multibillion-dollar entitlement program that rewarded honorably discharged veterans with numerous benefits.31

Faced with the prospect of over fifteen million members of the armed services (including approximately 350,000 women) suddenly returning to civilian life, the G.I. Bill offered a bevy of inducements to slow their influx into the civilian workforce as well as reward their service with public benefits. The legislation offered a year’s worth of unemployment benefits for veterans unable to secure work. About half of American veterans (eight million) received $4 billion in unemployment benefits over the life of the bill. The G.I. Bill also made postsecondary education a reality for many. The Veterans Administration (VA) paid the lion’s share of educational expenses, including tuition, fees, supplies, and even stipends for living expenses. The G.I. Bill sparked a boom in higher education. Enrollments at accredited colleges, universities, and technical and professional schools spiked, rising from 1.5 million in 1940 to 3.6 million in 1960. The VA disbursed over $14 billion in educational aid in just over a decade. Furthermore, the bill encouraged home ownership. Roughly 40 percent of Americans owned homes in 1945, but that figure climbed to 60 percent a decade after the close of the war. Because the bill did away with down payment requirements, veterans could obtain home loans for as little as $1 down. Close to four million veterans purchased homes through the G.I. Bill, sparking a construction bonanza that fueled postwar growth. The VA also helped nearly two hundred thousand veterans secure farms and offered thousands more guaranteed financing for small businesses.32

Not all Americans, however, benefited equally from the G.I. Bill. Because the military limited the number of female personnel, men qualified for the bill’s benefits in far higher numbers. Colleges also limited the number of female applicants to guarantee space for male veterans. African Americans, too, faced discrimination. Segregation forced Black veterans into overcrowded “historically Black colleges” that had to turn away close to twenty thousand applicants. Meanwhile, residential segregation limited Black home ownership in various neighborhoods, denying Black veterans the equity and investment that came with owning a home. There were other limits and other disadvantaged groups. Veterans accused of homosexuality, for instance, were similarly unable to claim G.I. Bill benefits.33 The effects of the G.I. Bill were significant and long-lasting.
It helped sustain the great postwar economic boom and, even if many could not attain it, it nevertheless established the hallmarks of American middle-class life.

XI. Conclusion

The United States entered the war in a crippling economic depression and exited at the beginning of an unparalleled economic boom. The war had been won, the United States was stronger than ever, and Americans looked forward to a prosperous future. And yet new problems loomed. Stalin’s Soviet Union and the proliferation of nuclear weapons would disrupt postwar dreams of global harmony. Meanwhile, Americans who had fought a war for global democracy would find that very democracy denied around the world in reestablished colonial regimes and at home in segregation and injustice. The war had unleashed powerful forces that would reshape the United States at home and abroad.

XII. Primary Sources

Charles Lindbergh won international fame in 1927 after completing the first non-stop, solo flight across the Atlantic Ocean. As Hitler’s armies marched across the European continent, many Americans began to imagine American participation in the war. Charles Lindbergh and the America First Committee championed American isolationism.

As the United States prepared for war, Black labor leader A. Philip Randolph recoiled at rampant employment discrimination in the defense industry. Together with NAACP head Walter White and other leaders, Randolph planned “a mass March on Washington” to push for fair employment practices. President Franklin Roosevelt met with Randolph and White on June 18, and, faced with mobilized discontent and a possible disruption of wartime industries, signed Executive Order 8802 on June 25. The order prohibited racial discrimination in the defense industry. Randolph and other leaders declared victory and called off the march.

The leaders of the United States and United Kingdom signed the Atlantic Charter in August 1941. The short document neatly outlined an idealized vision for the political and economic order of the postwar world.

During World War II, the federal government removed over 120,000 men, women, and children of Japanese descent (both foreign-born “issei” and native-born “nisei”) from the West Coast and interned them in camps. President Roosevelt authorized the internments with his Executive Order No. 9066, issued on February 19, 1942.

Aiko Herzig-Yoshinaga was born in 1924 in Los Angeles, California. A second-generation (“Nisei”) Japanese American, she was incarcerated at the Manzanar internment camp in California and later at other internment camps in Arkansas. Here she describes learning about Pearl Harbor, her family’s forced evacuation, and her impressions of her internment camp.

On August 6, 1945, Harry Truman disclosed to the American public that the United States had detonated an atomic bomb over Hiroshima, Japan.

Vietnam, which had been colonized by the French and then by the Japanese, declared its independence from colonial rule—particularly the re-imposition of a French colonial regime—in the aftermath of Japan’s defeat in World War II. Proclaimed by Ho Chi Minh in September 1945, Vietnam’s Declaration of Independence echoed the early promises of the Allies in World War II and even borrowed directly from the American Declaration of Independence.

The Tuskegee Airmen stand at attention as Major James A. Ellison returns the salute of Mac Ross, one of the first graduates of the Tuskegee cadets.
The Tuskegee Airmen continued a tradition of African American military service while honorably serving a country that still considered them second-class citizens.

This pair of U.S. military recruiting posters demonstrates the way that two branches of the military—the Marines and the Women’s Army Corps—borrowed techniques from advertising professionals to “sell” a romantic vision of war to Americans. The two images take different strategies: one shows Marines at war in a lush jungle, reminding viewers that the war was taking place in exotic lands; the other depicts women taking on new jobs as a patriotic duty.

XIII. Reference Material

This chapter was edited by Joseph Locke, with content contributions by Mary Beth Chopas, Andrew David, Ashton Ellett, Paula Fortier, Joseph Locke, Jennifer Mandel, Valerie Martinez, Ryan Menath, and Chris Thomas.

Recommended citation: Mary Beth Chopas et al., “World War II,” Joseph Locke, ed., in The American Yawp, eds. Joseph Locke and Ben Wright (Stanford, CA: Stanford University Press, 2018).

- Adams, Michael. The Best War Ever: America and World War II. Baltimore: Johns Hopkins University Press, 1994.
- Anderson, Karen. Wartime Women: Sex Roles, Family Relations, and the Status of Women During WWII. Westport, CT: Greenwood, 1981.
- Black, Gregory D. Hollywood Goes to War: How Politics, Profit and Propaganda Shaped World War II Movies. New York: Free Press, 1987.
- Blum, John Morton. V Was for Victory: Politics and American Culture During World War II. New York: Marine Books, 1976.
- Borgwardt, Elizabeth. A New Deal for the World: America’s Vision for Human Rights. Cambridge, MA: Harvard University Press, 2005.
- Daniels, Roger. Prisoners Without Trial: Japanese Americans in World War II. New York: Hill and Wang, 1993.
- Dower, John. War without Mercy: Race and Power in the Pacific War. New York: Pantheon, 1993.
- Honey, Maureen. Creating Rosie the Riveter: Class, Gender, and Propaganda During World War II. Amherst: University of Massachusetts Press, 1984.
- Hooks, Gregory Michael. Forging the Military-Industrial Complex: World War II’s Battle of the Potomac. Champaign: University of Illinois Press, 1991.
- Kaminski, Theresa. Angels of the Underground: The American Women Who Resisted the Japanese in the Philippines in World War II. New York: Oxford University Press, 2015.
- Keegan, John. The Second World War. New York: Viking, 1990.
- Kennedy, David. Freedom from Fear: America in Depression and War, 1929–1945. New York: Oxford University Press, 1999.
- Leonard, Kevin Allen. The Battle for Los Angeles: Racial Ideology and World War II. Albuquerque: University of New Mexico Press, 2006.
- Lichtenstein, Nelson. Labor’s War at Home: The CIO in World War II. New York: Cambridge University Press, 1982.
- Malloy, Sean L. Atomic Tragedy: Henry L. Stimson and the Decision to Use the Bomb. Ithaca, NY: Cornell University Press, 2008.
- Meyer, Leisa D. Creating G.I. Jane: The Regulation of Sexuality and Sexual Behavior in the Women’s Army Corps During WWII. New York: Columbia University Press, 1992.
- Murray, Alice Yang. Historical Memories of the Japanese American Internment and the Struggle for Redress. Palo Alto, CA: Stanford University Press, 2007.
- O’Neill, William L. A Democracy at War: America’s Fight at Home and Abroad in World War II. Cambridge, MA: Harvard University Press, 1995.
- Rhodes, Richard. The Making of the Atomic Bomb. New York: Simon and Schuster, 1988.
- Russell, Jan Jarboe.
The Train to Crystal City: FDR’s Secret Prisoner Exchange Program and America’s Only Family Internment Camp During World War II. New York: Scribner, 2015.
- Schulman, Bruce J. From Cotton Belt to Sunbelt: Federal Policy, Economic Development, and the Transformation of the South, 1938–1980. New York: Oxford University Press, 1991.
- Sparrow, James T. Warfare State: World War II Americans and the Age of Big Government. New York: Oxford University Press, 2011.
- Spector, Ronald H. Eagle Against the Sun: The American War with Japan. New York: Random House, 1985.
- Takaki, Ronald T. Double Victory: A Multicultural History of America in World War II. New York: Little, Brown, 2000.
- Wynn, Neil A. The African American Experience During World War II. New York: Rowman and Littlefield, 2010.
- For the second Sino-Japanese War, see, for instance, Michael A. Barnhart, Japan Prepares for Total War: The Search for Economic Security, 1919–1941 (Ithaca, NY: Cornell University Press, 1987); Dick Wilson, When Tigers Fight: The Story of the Sino-Japanese War, 1937–1945 (New York: Viking, 1982); and Mark Peattie, Edward Drea, and Hans van de Ven, eds., The Battle for China: Essays on the Military History of the Sino-Japanese War of 1937–1945 (Palo Alto, CA: Stanford University Press, 2011). [↩]
- See Joshua A. Fogel, The Nanjing Massacre in History and Historiography (Berkeley: University of California Press, 2000). [↩]
- On the origins of World War II in Europe, see, for instance, P. M. H. Bell, The Origins of the Second World War in Europe (New York: Routledge, 1986). [↩]
- Antony Beevor, Stalingrad: The Fateful Siege, 1942–1943 (New York: Penguin, 1999); Omer Bartov, The Eastern Front, 1941–45: German Troops and the Barbarization of Warfare (New York: Palgrave Macmillan, 1986); Catherine Merridale, Ivan’s War: Life and Death in the Red Army, 1939–1945 (New York: Picador, 2006). [↩]
- Herbert Feis, The Road to Pearl Harbor: The Coming of the War Between the United States and Japan (Princeton, NJ: Princeton University Press, 1950). [↩]
- For the United States on the European front, see, for instance, John Keegan, The Second World War (New York: Viking, 1990); and Gerhard L. Weinberg, A World at Arms: A Global History of World War II (New York: Cambridge University Press, 2005). [↩]
- Rick Atkinson, An Army at Dawn: The War in North Africa, 1942–1943 (New York: Holt, 2002). [↩]
- Max Hastings, Overlord: D-Day and the Battle for Normandy (New York: Simon and Schuster, 1985). [↩]
- Richard Overy, Why the Allies Won (New York: Norton, 1997). [↩]
- Christopher Duffy, Red Storm on the Reich: The Soviet March on Germany, 1945 (New York: Da Capo Press, 1993). [↩]
- For the Pacific War, see, for instance, Ronald Spector, Eagle Against the Sun: The American War with Japan (New York: Vintage Books, 1985); Keegan, Second World War; John Costello, The Pacific War: 1941–1945 (New York: Harper, 2009); and John W. Dower, War Without Mercy: Race and Power in the Pacific War (New York: Pantheon Books, 1986). [↩]
- Dower, War Without Mercy. [↩]
- Michael J. Hogan, Hiroshima in History and Memory (New York: Cambridge University Press, 1996); Gar Alperovitz, The Decision to Use the Atomic Bomb (New York: Vintage Books, 1996). [↩]
- Works on the experiences of World War II soldiers are seemingly endless and include popular histories such as Stephen E. Ambrose’s Citizen Soldiers (New York: Simon and Schuster, 1997) and memoirs such as Eugene Sledge’s With the Old Breed: At Peleliu and Okinawa (New York: Presidio Press, 1981). [↩]
- See, for instance, Michael Adams, The Best War Ever: America and World War II (Baltimore: Johns Hopkins University Press, 1994); Mark Harrison, ed., The Economics of World War II: Six Great Powers in International Comparison (Cambridge, UK: Cambridge University Press, 1998); and Kennedy, Freedom from Fear. [↩]
- Deborah Cohen, Braceros: Migrant Citizens and Transnational Subjects in the Postwar United States and Mexico (Chapel Hill: University of North Carolina Press, 2011). [↩]
- Interview with Rogelio Valdez Robles by Valerie Martinez and Lydia Valdez, transcribed by Nancy Valerio, September 21, 2008; interview with Alvaro Hernández by Myrna Parra-Mantilla, February 5, 2003, Interview No. 33, Institute of Oral History, University of Texas at El Paso. [↩]
- Alecea Standlee, “Shifting Spheres: Gender, Labor, and the Construction of National Identity in U.S. Propaganda During the Second World War,” Minerva Journal of Women and War 4 (Spring 2010): 43–62. [↩]
- Major Jeanne Holm, USAF (Ret.), Women in the Military: An Unfinished Revolution (Novato, CA: Presidio Press, 1982), 21–109; Portia Kernodle, The Red Cross Nurse in Action, 1882–1948 (New York: Harper), 406–453. [↩]
- William P. Jones, The March on Washington: Jobs, Freedom, and the Forgotten History of Civil Rights (New York: Norton, 2013). [↩]
- Stephen Tuck, Fog of War: The Second World War and the Civil Rights Movement (New York: Oxford University Press, 2012); Daniel Kryder, Divided Arsenal: Race and the American State During World War II (New York: Cambridge University Press, 2000). [↩]
- Andrew Buni, Robert L. Vann of the Pittsburgh Courier: Politics and Black Journalism (Pittsburgh, PA: University of Pittsburgh Press, 1974). [↩]
- Dominic J. Capeci Jr. and Martha Wilkerson, Layered Violence: The Detroit Rioters of 1943 (Jackson: University Press of Mississippi, 1991). [↩]
- Greg Robinson, By Order of the President: FDR and the Internment of Japanese Americans (Cambridge, MA: Harvard University Press, 2001). [↩]
- Commission on Wartime Relocation and Internment of Civilians, Personal Justice Denied: Report of the Commission on Wartime Relocation and Internment of Civilians (Washington, DC: U.S. Government Printing Office, 1982), 18. [↩]
- Richard Breitman and Allan J. Lichtman, FDR and the Jews (Cambridge, MA: Belknap Press, 2013), 149. [↩]
- Peter Novick, The Holocaust in American Life (New York: Houghton Mifflin, 1999). [↩]
- David Mayers, Dissenting Voices in America’s Rise to Power (Cambridge, UK: Cambridge University Press, 2007), 274. [↩]
- Fraser J. Harbutt, Yalta 1945: Europe and America at the Crossroads of Peace (Cambridge, UK: Cambridge University Press, 2010), 258; Mark Mazower, Governing the World: The History of a Modern Idea (New York: Penguin, 2012), 208. [↩]
- Paul Kennedy, The Parliament of Man: The Past, Present, and Future of the United Nations (New York: Random House, 2006). [↩]
- Kathleen Frydl, The G.I. Bill (New York: Cambridge University Press, 2009); Suzanne Mettler, Soldiers to Citizens: The G.I. Bill and the Making of the Greatest Generation (New York: Oxford University Press, 2005). [↩]
- Frydl, G.I. Bill; Mettler, Soldiers to Citizens. [↩]
- Lizabeth Cohen, A Consumer’s Republic: The Politics of Mass Consumption in Postwar America (New York: Knopf, 2003). [↩]
Galápagos Islands finches

Darwin's theory of evolution explains how species of living things have changed over geological time. The theory is supported by evidence from fossils, and by the rapid changes that can be seen to occur in microorganisms such as antibiotic-resistant bacteria. Many species have become extinct in the past and the extinction of species continues to happen.

Charles Darwin (1809–1882)

Charles Darwin was an English naturalist. He studied variation in plants and animals during a five-year voyage around the world in the 19th century. He explained his ideas about evolution in a book called On the Origin of Species, which was published in 1859. Darwin's ideas caused a lot of controversy, and this continues to this day, because the ideas can be seen as conflicting with religious views about the creation of the world and creatures in it.

Darwin studied the wildlife on the Galápagos Islands - a group of islands on the equator almost 1,000 kilometres west of Ecuador. He noticed that the finches - songbirds - on the different islands there were fundamentally similar to each other, but showed wide variations in their size, beaks and claws from island to island. For example, their beaks were different depending on the local food source. Darwin concluded that, because the islands are so distant from the mainland, the finches that had arrived there in the past had changed over time.

Darwin's drawings of the different heads and beaks he found among the finches on the Galápagos Islands

Darwin studied hundreds more animal and plant species. After nearly 30 years of research, in 1858 he proposed his theory of evolution by natural selection.
Making the transition from middle school to high school to postsecondary education to a career is difficult for most students, but especially for traditionally underserved youth who lack support, guidance, counseling, financing, and opportunities to explore and learn about possible futures. The K-12 and postsecondary education systems, working with employers, must align their efforts to create smoother, transparent pathways, to accelerate learning, to help youth identify their chosen career pathway, and to ensure high standards and expectations for all. These pathways should include opportunities for work-based learning so that youth can learn about occupations and potential careers and develop employability skills and relationships with adults around authentic work and problem-solving. To obtain a more comprehensive view of College and Career Readiness and Success, please see below:

- Alternative Education refers to educational settings that are nontraditional or different from the traditional K-12 school setting. These schools address the needs of students not typically met in a more traditional educational setting.
- Career Pathways refers to a series of structured and connected programs, supports, and experiences that help students transition to postsecondary education and work, including career and technical education (CTE), apprenticeships, internships, dual and concurrent enrollment, and work-based learning.
- Deeper Learning refers to the accumulation of six interrelated competencies: mastery of rigorous core academic content, critical thinking and problem solving, teamwork and collaboration, effective communication, learning how to learn, and cultivation of an academic mindset.
- Postsecondary Education refers to all educational activities beyond high school. This includes 2- and 4-year colleges and universities, certificate programs, workforce development, as well as adult and continuing education programs.
- Skill Development refers to the mastery of the academic and technical content youth need in order to develop a range of non-academic, social and emotional, and employability skills. These skills are sometimes referred to as "soft skills" and are seen as essential for youth to be successful in college, career, and beyond.
1. "State suggestions or directions in a positive form." Tell children what they should do instead of what they should not. Keep directions simple so children will understand what is expected of them. Be specific so they know exactly how to follow your directions. "Be nice" or "Share" may be too general to help children change their behavior. "Use kind words like ___" or "Give it to her when you are finished playing with it" are more precise. Here are a few more examples of positive instructions: • “Use a soft voice” instead of “Don’t yell” • “Walk” instead of “Don’t run” • “Put your feet on the floor” rather than “Don’t climb on the table” 2. "Give the child a choice only when you intend to leave the situation up to him." 3. "Use your voice as a teaching tool." Young children can be sensitive to adults' voices - they may perceive our tone as unfriendly or angry, even if we are not using a loud or harsh voice. Use a pleasant tone that communicates "I like being with you" whenever possible. To get children's attention, get close and use a quiet voice, or try singing. When children must do something, be like Mary Poppins, "kind but extremely firm"! 4. "Make health and safety of the children a primary concern." The COVID-19 pandemic has certainly made all of us even more aware of the primary importance of children's health and safety. Although we knew beforehand to wash hands diligently, to clean and sanitize toys, and to avoid sharing eating utensils, our current protocols have probably increased our vigilance of these and many other practices. Even after the threat of this deadly disease has passed, we must continue to minimize the risk of communicable diseases. The physical environment must also provide for safe play, both indoors and out. Survey all areas before allowing children to enter. Ensure all activities have adequate supervision. [See guide #14 for more on supervising.] 5. "Use methods of guidance that build the child’s self-respect." 6. "Help a child set standards based on his/her own past performance, rather than on comparison with peers." Developmentally appropriate practice (DAP) requires that we meet children where they are and help them to meet challenging but achievable goals. What one child is capable of may be very different from what another can do, so avoid comparing one child to another. Instead, point out the progress the child is making: "Last week you couldn't walk all the way across the balance beam. Today you did it!" Avoid even subtle ways of using comparisons to influence behavior ("Susan is sitting quietly") and discourage competition between children, too. Encourage cooperative play and helping others, and you will be improving children's behavior and self-esteem and developing a sense of community, too. 8. "Time directions and suggestions for maximum effectiveness." Timing is important when guiding children’s behavior. Children should be given a chance to work things out for themselves, but not get too frustrated or upset. Remember DAP - goals should be challenging but achievable. One consideration of timing your guidance is to notice the child's emotional state. When they are really upset they are not ready to learn. Save your guidance for a few minutes, and help them calm down and feel connected first. Another timing tip is to set consequences that follow the behavior as soon as possible. Having to leave the block area after deliberately knocking over another child's structure would be an example of an immediate consequence. 
If you can't time it that way, at least connect the consequence to the behavior. Having to clean up a large mess of one's making is a logical consequence; sitting out at playtime later in the day is not. [See guide #11 for more about following up on limits.]

9. "Observe the individual ways children use art media, explore the materials yourself, but avoid making models for children to copy." You may also explore the media yourself. Avoid judgements of children's creations, even positive ones. Instead of saying you like a child's painting when asked, point out something you notice about the painting ("You used lots of red and blue") or about the child ("Your smile is telling me you are happy with it").

10. "Give the child the minimum of help in order that s/he may have the maximum chance to grow in independence." Allowing children to do things themselves helps them to develop self-help skills. Even if they are struggling, they may not want our assistance, and we can show respect by honoring their wishes. If a child asks for help, scaffold his or her learning by offering the least amount of help he or she needs to do the task. Over time, as children become more confident and skilled, they will need less and less help from adults. One note about this guide: it's important to recognize cultural differences in doing for others; doing something the other person is capable of may be seen by some as a way of strengthening their relationship.

11. "Make your directions effective by reinforcing them when necessary."

12. "Learn to foresee and prevent rather than 'mop up' after difficulty." There is an old saying, "an ounce of prevention is worth a pound of cure." When guiding young children's behavior, anticipating and preventing problems is usually the best strategy by far. Be especially sensitive to those children who experience difficulties getting along with others. Entering a group of children at play is an incredibly difficult skill for some, and they benefit from us giving suggestions ahead of time rather than waiting until they encounter a problem. Socially successful children observe the play and find a way to join in without disrupting it. You can help other children do this as well. The better you know the children, the better you can anticipate their actions - and guide them toward success.

13. "Clearly define and consistently maintain limits when they are necessary."

14. "Use the most strategic positions for supervising." Always be alert to the entire environment; supervise the children as if you are the only adult around. Get into the habit of positioning yourself where you can see as much as possible, and move about to check on areas that are difficult to see at all times. For safety you must be able to observe all the children. Getting down to the children's level is ideal for supervising and for allowing the children to approach you. Of course, if there are two or more adults, position yourselves in different areas so all may be seen.

15. "Increase your own awareness by observing and taking notes."

I hope you find these 15 guides as useful as I did when I was a student and beginning teacher. I wish you well as you use positive techniques to guide your children's behavior! Coming up in future posts: Handling and preventing challenging behaviors - stay tuned!

I'm Diane Goyette, a Child Development Specialist, Trainer, Consultant and Keynote Speaker. I'm excited to share my blog!
Permutations and Combinations - Fundamental Principle of Counting: If an event can occur in m different ways, following which another event can occur in n different ways, then the total number of occurrences of the events in the given order is m × n. This is called the fundamental principle of counting. For example, to count the words that can be formed from 5 different letters, there are as many words as there are ways of filling 5 vacant places by the 5 letters. The first place can…
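A minimal Python sketch of the counting principle described above; the counts m = 3, n = 4 and the letters "ABCDE" are arbitrary illustrative choices, not values from the original example.

```python
from itertools import permutations
from math import factorial

# Fundamental principle of counting: if one event can occur in m ways and a
# second event can then occur in n ways, the pair can occur in m * n ways.
m, n = 3, 4
print(m * n)  # 12 ordered outcomes

# Filling 5 vacant places with 5 distinct letters: 5 choices for the first
# place, 4 for the second, and so on, i.e. 5! arrangements in total.
letters = "ABCDE"
print(factorial(5))                          # 120
print(len(list(permutations(letters, 5))))   # 120, enumerated explicitly
```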
04 Jan The Language of Humility

Honorifics and keigo in Japan—the language of politeness

Different from the major Western languages, and a further refinement of the systems used in other Asian languages, Japanese has an extensive, complicated system of honorifics—keigo—to explicitly express politeness, humility and formality. Relationships are seldom equal in Japan, and the grammar employed in any given context is dependent upon a complex combination of factors such as age, gender, job and experience of both the person speaking and the person spoken to. Simply put, speaking to one of higher position requires a polite form of speech, while speaking to one lower dictates a plainer form. Intent also plays a role—when asking for a favour humble language is expected, and previous favours done or owed dictate a requisite humility in the language spoken. Strangers, even when not familiar with rank or position, also will usually speak to one another politely in Japanese, using a neutral language middle-ground if a difference in status is not immediately apparent.

Women generally speak a more polite style of language than men, and use it in a broader range of circumstances. Interestingly, in Heian Japan, a period approximately one thousand years ago, not just language but handwriting was gender-specific—women were confined to the hiragana script, with its rounder, so-called "feminine" edges.

Other Asian languages do employ honorifics—for example Chinese, Korean, Vietnamese, Thai, Burmese and Javanese—all with exalted terms for others and humble terms for self, but the Japanese system is by far the most complex, a simple sentence capable of being expressed in more than twenty different ways, dependent on the context of speaker and spoken to. Unlike other languages, Japanese honorifics alter the level of respect or humility based upon context as well as the person spoken to or about. For example, when talking about a company president inside the company, exalted terms are used, but referring to the same person outside the company requires humble language. The relativity of keigo is in sharp contrast to the Korean system of absolute honorifics, where the same register is used regardless of the context or relationship of those speaking. Translated verbatim into Japanese, the Korean usage comes across as extremely presumptuous; the perfectly acceptable "Our Mr. Company-President" in Korean is totally inappropriate if used "out-of-group," or outside the company, in Japanese.

Keigo is not learned in Japan until the teens, a time when one is expected to begin to learn to speak "politely." This is partly due to the complexity of the language and its honorific forms, although no doubt some would suggest that the rudeness of the young is a universal trait. New employees are frequently sent on courses by employers to refine their use of honorifics, and it is not uncommon for even university graduates to have not completely mastered all the polite forms of the Japanese language.

In recent years some Japanese companies, in the face of a long economic slump, have attempted to abandon keigo in favour of a more open, hopefully more competitive culture; parents often no longer emphasise honorific language to their children, and most schools no longer expect its use in the classroom. The result is that many young people in Japan today have a poor understanding of honorifics, and feel little compulsion to use them.
No doubt ardent writer-patriot and master of the Japanese language Yukio Mishima, along with every other long-dead champion of old Japan, will be spinning in his grave.

- Policing manners: In Yokohama, a "Smile-Manner-Squadron" has been charged with bringing back the standards of "old Japan"—politely encouraging the young to give up their seats to those more needy on the city's overcrowded trains.
- Mono no aware: Beauty in Japan: Meaning literally "a sensitivity to things," mono no aware is a concept coined to describe the essence of Japanese culture, and is the central artistic imperative in Japan to this day.
- The Most Shocking Ending in All Literature: Biography of author and master of the Japanese language Yukio Mishima.
3.1 Spanish Exploration and Colonial Society

In their outposts at St. Augustine and Santa Fe, the Spanish never found the fabled mountains of gold they sought. They did find many native people to convert to Catholicism, but their zeal nearly cost them the colony of Santa Fe, which they lost for twelve years after the Pueblo Revolt. In truth, the grand dreams of wealth, conversion, and a social order based on Spanish control never came to pass as Spain envisioned them.

3.2 Colonial Rivalries: Dutch and French Colonial Ambitions

The French and Dutch established colonies in the northeastern part of North America: the Dutch in present-day New York, and the French in present-day Canada. Both colonies were primarily trading posts for furs. While they failed to attract many colonists from their respective home countries, these outposts nonetheless intensified imperial rivalries in North America. Both the Dutch and the French relied on native peoples to harvest the pelts that proved profitable in Europe.

3.3 English Settlements in America

The English came late to colonization of the Americas, establishing stable settlements in the 1600s after several unsuccessful attempts in the 1500s. After Roanoke Colony failed in 1587, the English found more success with the founding of Jamestown in 1607 and Plymouth in 1620. The two colonies were very different in origin. The Virginia Company of London founded Jamestown with the express purpose of making money for its investors, while Puritans founded Plymouth to practice their own brand of Protestantism without interference. Both colonies battled difficult circumstances, including poor relationships with neighboring Native American tribes. Conflicts flared repeatedly in the Chesapeake Bay tobacco colonies and in New England, where a massive uprising against the English in 1675 to 1676—King Philip's War—nearly succeeded in driving the intruders back to the sea.

3.4 The Impact of Colonization

The development of the Atlantic slave trade forever changed the course of European settlement in the Americas. Other transatlantic travelers, including diseases, goods, plants, animals, and even ideas like the concept of private land ownership, further influenced life in America during the sixteenth and seventeenth centuries. The exchange of pelts for European goods including copper kettles, knives, and guns played a significant role in changing the material cultures of native peoples. During the seventeenth century, native peoples grew increasingly dependent on European trade items. At the same time, many native inhabitants died of European diseases, while survivors adopted new ways of living with their new neighbors.
The first thing you need to know about a TDS reading is that it's not a direct measurement of total dissolved solids. Rather, it's derived from the electrical conductivity of water: when dissolvable elements are present in water, they increase its conductivity. The second thing you need to know is that TDS isn't the same as salinity. Salinity is the amount of dissolved salts in water, whereas TDS covers all dissolved solids and is reported in parts per million (ppm). As mentioned above, a TDS water measurement gauges how much ionized solid material is present in water by using EC meters rather than by evaporating the solution and weighing the residue. (A brief conversion sketch follows these numbered points.)

1. TDS can be healthy or unhealthy. TDS is an abbreviation for total dissolved solids. It's a measurement of the amount of minerals and metals in water, including salt. When many people think of TDS, they think of water contaminants, such as sewage and runoff. But TDS is not universally safe or unsafe in nature. The U.S. Environmental Protection Agency (EPA) stipulates the maximum level of TDS allowed in safe drinking water as 500 parts per million (ppm). That said, just because a water sample has a TDS value below 500 ppm, that doesn't mean it's safe to consume. Although TDS testers can gauge the amount of elements dissolved in water, TDS testing alone cannot identify what those elements are. For this reason, TDS testing often serves as part of a more robust water-quality monitoring strategy that checks for other factors as well, including temperature, conductivity, salinity, and pH.

2. Not all TDS testers are created equal. The total dissolved solids (TDS) test is a common method for measuring the total amount of solids in water. TDS is measured in ppm (parts per million): the higher the TDS, the more minerals are dissolved in your water. High TDS is often associated with poor-quality water and can lead to mineral build-up on fixtures and appliances. Although any element that's dissolved in water will have an electrical charge, not all TDS testers are engineered to account for elements that are poor conductors. Elements such as oils and some pharmaceutical chemicals can be poor electrical conductors. If your TDS tester isn't capable of detecting very low EC, it's possible you're not seeing the full picture. Before interpreting TDS readings as "safe," make sure to look into the EC sensitivity of your tester.

3. TDS testers can be used to identify hard water. If you've ever used hard water—water with elevated levels of minerals, typically magnesium and calcium—then you've experienced water with an elevated TDS level. Significantly hard water leaves hard, crusty mineral deposits in drains, showers, sinks, toilets, and more. It can have an unpalatable taste, cause skin irritation and dryness, erode pipes and water-dependent appliances, clog drains, and make it more difficult to clean clothes. TDS (total dissolved solids) testers measure the "hardness" or "softness" of water and help homeowners plan accordingly. A TDS tester is extremely useful for anyone who uses well water or municipal water that contains high levels of minerals.

4. TDS testing has many applications. TDS is a measurement of the total dissolved solids in water, which includes minerals, salts, and other substances. The term "total" refers to the fact that all of these substances are dissolved in the water. TDS is a common measurement for drinking water and wastewater, but it can also be used for other types of liquids such as food, pharmaceuticals, and industrial waste.
5. High mineral TDS levels are responsible for creating limestone deposits around hot springs. The term TDS is short for total dissolved solids. It refers to the amount of minerals and salts in water, as well as any other compounds that are dissolved in it. When you hear about how much TDS is in a glass of water, what you're really hearing is the amount of dissolved material found in the water. In the United States, drinking water typically ranges from 0 mg/L up to the recommended maximum of 500 mg/L (milligrams per litre). The more minerals there are in the water, the higher its TDS will be.
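Because TDS meters infer ppm from conductivity, as the points above note, here is a minimal sketch of the usual conversion, assuming a generic calibration factor; the 0.5–0.7 range and the function name are illustrative and not tied to any particular meter.

```python
def ec_to_tds_ppm(ec_microsiemens_per_cm: float, factor: float = 0.5) -> float:
    """Estimate TDS (ppm) from electrical conductivity (uS/cm).

    TDS meters do not weigh dissolved solids; they multiply the measured
    conductivity by a calibration factor (commonly about 0.5-0.7, depending
    on which salts dominate the sample).
    """
    return ec_microsiemens_per_cm * factor

# Example: a reading of 800 uS/cm with a 0.5 factor reports ~400 ppm, which is
# below the EPA's 500 ppm secondary standard for drinking water, but it says
# nothing about *which* solids are present.
print(ec_to_tds_ppm(800))        # 400.0
print(ec_to_tds_ppm(800, 0.7))   # 560.0 with a higher calibration factor
```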
Birds are vertebrates with feathers, modified for flight and for active metabolism. Birds are a monophyletic lineage, evolved once from a common ancestor, and all birds are related through that common origin. There are a few kinds of birds that don't fly, but their ancestors did, and these birds have secondarily lost the ability to fly. Modern birds have traits related to hot metabolism and to flight. There are about 30 orders of birds, about 180 families, and about 2,000 genera with 10,000 species. Most of them don't live in Michigan, though there are about 400 species that do.
From Farm to Market: Fruit Ripening

Fruit has a brief window where it is perfectly ripe. If farmers waited until every piece of fruit was ripe before harvesting, farming would be more labor-intensive as farmers rushed to pick ripe fruits. Prices might crash due to a short-term glut of fruit on the market. To ensure a steady supply and demand, keep prices competitive, and reduce food waste, farmers use artificial ripening procedures. One method for ripening fruit after harvest involves ripening chambers. Ripening chambers using ethylene, a natural plant hormone, enable the fruit to be harvested, stored, and transported to where it will be marketed and consumed. While ethylene ripening chambers are beneficial, they are not without risks.

How Ethylene Ripening Chambers Work

While there are other ways to artificially ripen fruit in ripening chambers, ethylene has become a favorite, since it occurs naturally in fruit. Ethylene is a natural hormone found in plants. Fruits begin to ripen when exposed to ethylene, whether the exposure occurs naturally or artificially. In ethylene ripening chambers, unripe fruits are laid out, and the chamber is sealed. Ethylene gas is then piped into the sealed chamber. As the fruit is exposed to ethylene, the fruit "respires," which involves the intake of oxygen and the emission of carbon dioxide. For the ripened fruit to have the right color and flavor, the ripening should occur in a controlled atmosphere in which the temperature, humidity, ethylene, oxygen, and CO2 concentrations are maintained at optimum levels. However, there is a risk of combustion from the ethylene gas, as well as decreased levels of oxygen and increased levels of carbon dioxide inside the chamber.

How Oxygen/Carbon Dioxide and LEL Combustible Monitors Protect Employees

Low oxygen levels cause respiratory distress. If oxygen levels drop below the safe threshold for breathing, which could happen in the event of an ethylene gas leak, employees could suffocate. Suffocation is also a danger when there is too much carbon dioxide in the air. Ethylene gas used in ripening chambers would be hazardous if an employee were to enter the chamber before determining that oxygen and carbon dioxide were at safe levels. A dual oxygen/carbon dioxide (O2/CO2) monitor detects the levels of oxygen and carbon dioxide within the chamber and sounds an alarm should the oxygen level fall to an OSHA action level or the carbon dioxide rise to an unsafe level. By checking the monitor's display, an employee will know when it is safe to enter the chamber. PureAire Monitoring Systems has developed its dual O2/CO2 monitor with zirconium oxide and non-dispersive infrared ("NDIR") sensor cells. The cells are unaffected by changing barometric pressure, storms, temperatures, and humidity, ensuring reliable performance. Once installed, the dual O2/CO2 monitor needs no maintenance or calibration.

Ethylene is a highly flammable and combustible gas. If the gas lines used to pipe ethylene into the ripening chambers were to develop a leak, the chamber could fill with ethylene and reach combustible levels. A combustible gas monitor, which takes continuous readings of combustible gases, would warn employees of an ethylene leak within the chamber. PureAire Monitoring Systems' Air Check LEL combustible gas monitor continuously monitors for failed sensor cells and communication line breaks. The Air Check LEL gas monitor is housed in an explosion-proof enclosure. If a leak or system error should occur, an alarm will immediately alert employees.
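As a rough sketch of the kind of threshold logic such monitors apply: the setpoints below are common reference values (19.5% O2 is OSHA's oxygen-deficient threshold; 5,000 ppm CO2 is the 8-hour exposure limit), and the code is illustrative only, not PureAire firmware or an actual product API.

```python
# Illustrative alarm setpoints - real installations set these per site policy.
O2_LOW_ALARM_PERCENT = 19.5
CO2_HIGH_ALARM_PPM = 5000

def chamber_entry_safe(o2_percent: float, co2_ppm: float) -> bool:
    """Return True only if both readings fall inside the assumed safe band."""
    if o2_percent < O2_LOW_ALARM_PERCENT:
        print(f"ALARM: oxygen at {o2_percent}% - below {O2_LOW_ALARM_PERCENT}%")
        return False
    if co2_ppm > CO2_HIGH_ALARM_PPM:
        print(f"ALARM: CO2 at {co2_ppm} ppm - above {CO2_HIGH_ALARM_PPM} ppm")
        return False
    return True

print(chamber_entry_safe(20.9, 450))    # normal air -> True
print(chamber_entry_safe(18.0, 9000))   # post-ripening atmosphere -> False
```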
To learn about PureAire Monitoring Systems’ dual O2/CO2 monitors or the Air Check LEL Combustible monitor, please visit www.pureairemonitoring.com.
A domain name is the human-readable name that identifies a website to its intended users. The Domain Name System (DNS) is a decentralized and hierarchical naming scheme for computers, internet services, or other online resources associated with the Internet or a private local network. It associates different information with domain names uniquely assigned to each of the involved entities. In simple terms, domain names are unique keywords or labels given to a domain, an internet service provider or a website, that uniquely identify the particular computer or other entity which owns and operates the domain.

An IP address is a series of numbers used, together with a short name, to identify a specific computer or server. In simple terms, the IP address is a string of numbers which uniquely identifies a specific computer or server in the network. Similarly, a domain name is a string of words or alphanumeric characters identifying a specific internet service provider, domain name, website or computer.

A name server is an important part of the Domain Name System. It is an application that dynamically determines or identifies the IP addresses assigned to an individual or entity. Name servers can be network operated or centralized, with server centers being the most common. Name servers may serve as gateways to the domain system, through which clients connect to domain systems via the internet or through other communications technologies such as fax. Some popular name servers are dyldap. It serves as a directory for email addresses, and also for the domain names themselves. It is one of the protocols used to locate email servers for other protocols such as SMTP and IMAP.

The DNS protocol is used to translate domain names to IP addresses and vice versa, and sometimes it is used to determine which subnet names a server should use for a specific computer system. Some of the popular IP address sub-types are: primary DNS, secondary DNS, top DNS, local DNS, global DNS, virtual hosting DNS and namespaced IP addresses. These domain name servers are authoritative for the primary DNS, which is the domain on which all other sub-names depend. If another domain is specified during the bind request, the primary DNS will check the specified domain and, if it does not contain the requested domain name, it will return a list of available domains. This list is known as the root name servers.

There are different types of domain name registrars, including the following: ICANN, DNS banks, shared DNS, autonomous system organizations (ASO), dedicated server registrars and last-resort registrars. The main advantage of using these registrars is that they provide flexibility, control and security for the user. They are also able to provide guaranteed quality of service and a reliable location.

An IP address has a specific meaning only when it has been specified during the binding process. Even if the name server is authoritative for some sub-name, if it is resolved after the query result, the name server is authoritative for the whole domain resolved after that. Name resolution using the DNS root zone is done by checking the IP addresses against the zone records and then checking whether they point to an IP address or not. If the resolved names do not point to any IP address, it will return a zone-specific error message. Apart from the DNS, another service used to discover information about a domain name is WHOIS.
The WHOIS service was introduced to make information about domain names, extensions and registrars easily accessible to the public. It contains database entries for registered domain names, descriptions of terms, and information regarding the registrant. The WHOIS service is supported by sub-protocols and a query language (QL) that enable client computers to interact with the WHOIS database.
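For a concrete sense of forward name resolution (domain name to IP addresses), here is a minimal sketch using Python's standard library resolver; the hostname is just an illustrative example.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses the system resolver reports for a hostname."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address is
    # the first element of sockaddr for both IPv4 and IPv6.
    return sorted({info[4][0] for info in infos})

# "example.com" is only an illustrative hostname.
print(resolve("example.com"))
```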
Have you ever had a dedicated African-American history unit in February—Black History Month—as part of your schedule? Black History Month brings a yearly spotlight to Black accomplishments, creativity, and history that often get overlooked. With all the strife of 2020 and the significance of the Black Lives Matter movement, celebrating Black History Month 2021 is a greater priority than ever.

February denotes the beginning of Black History Month, a federally recognized celebration of the contributions African Americans have made to the United States and an opportunity to reflect on the struggle for racial equity. Black History Month has become perhaps the most celebrated cultural heritage month on the American calendar. Schools and organizations offer Black-history-themed meals and talks. At the same time, major brands roll out clothing, TV specials, and consumer content, often tone-deaf, especially when introduced without context.

Why is Black History Month Celebrated?

Black History Month actually began as Negro History Week in 1926. Author and historian Carter G. Woodson—now recognized as the father of Black history—campaigned tirelessly for the public acknowledgment of Black stories and perspectives. Woodson believed deeply that equality was only possible with the acknowledgment and understanding of a race's history, and he devoted his life to African-American historical research. Woodson was the second African American to earn a Ph.D. at Harvard University. He recognized that the American education system offered very little information about African Americans' achievements, and he established the Association for the Study of Negro Life and History, now called the Association for the Study of African American Life and History. Woodson set up the Journal of Negro History in 1916 and the Negro History Bulletin in 1937 as a way of giving Black scholars a place to publish their research and findings.

Why is Black History Month in February?

Woodson picked the second week of February to coincide with the birthdays of Frederick Douglass, an acclaimed abolitionist who escaped from slavery, and President Abraham Lincoln, who officially abolished slavery. The first day of February is National Freedom Day, the anniversary of the signing of the Thirteenth Amendment, which abolished slavery in 1865. Also, Richard Wright, who had been enslaved and became a civil rights advocate, campaigned for the day's celebration. As indicated by the Association for the Study of African American Life and History, Woodson built Negro History Week around customary days of commemorating the Black past. He was asking the public to broaden their study of Black history, not to create another custom. In doing so, he increased his odds of success.

Significance of the celebration of Black History Month

Woodson believed it was essential for young African Americans to understand and be proud of their heritage. Equally, Black history is significant because of the contributions that African Americans have made to build the United States into what it is today. In our world today, African Americans have contributed to every part of society, from the economy and health to invention, education, activism, music, and sports. Without that history, there are holes in the story that can be told about the past; it also erases the persistence and resistance of the minority communities that have consistently pushed the United States forward toward greater racial equity.
While a month highlighting Black history and contributions is valuable, it is important to recognize that Black history is American history.

Black History Month Theme 2021

Every year, a theme is selected by the Association for the Study of African American Life and History. For 2021, the theme is The Black Family: Representation, Identity, and Diversity, and it explores the African diaspora.

Observing Black History Month in the workplace: best practices

Race in the workplace can be a delicate subject, and many organizations attempt to be colorblind in a misguided effort to establish equity. In reality, when organizations minimize demographic differences, it increases underrepresented employees' perception of bias from their white colleagues and decreases their engagement at work. Black History Month best practice: don't be colorblind. Your workers should be able to openly discuss, embrace, and be proud of their cultural and ethnic backgrounds. Embrace your differences!

What Black History Month Honors

Black History Month was created to focus attention on the contributions of African Americans to the United States. It honors all Black individuals across American history, from the enslaved people first brought over from Africa in the early seventeenth century to African Americans living in the United States today.

Black History is American History

When learning about the passage of civil rights laws and other triumphs, learners may grapple with the ideal of equality versus the reality that America is a long way from fair. Schools and neighborhoods are segregated, and African Americans still fare worse than their white peers in terms of health, wealth, and incarceration rates. Black History Month helps people see how the failure to recognize past truths reinforces a status quo built on structures rooted in oppression. Policy changes alone will not add up to equality. Policies haven't ended institutional and foundational prejudice, and they never will until all lives are truly valued. During Black History Month, individuals and students learn, consistently and across all disciplines, about the achievements, experiences, and perspectives of Black people.

Recognizing Intersectional Black Identity

Black people must understand the value of their identity; like other marginalized groups, Black individuals have identities that intersect, creating complex lived experiences and intensifying the impacts of oppressive systems. Recognizing these overlapping identities helps show how they inspire and shape the culture, activism, and consciousness of Black communities. Intersecting identities lend empathy and an appreciation for every human experience.

Celebrating Black History Month with Other Countries

Since the first Negro History Week in 1926, different nations have joined the United States in celebrating Black people and their contributions to history and culture, including Canada, the United Kingdom, Germany, and the Netherlands. Black History Month continues to serve us well, in part because Woodson's creation is as much about today as it is about the past. February is a short month, so if you want to carry out Black History Month activities seriously and thoughtfully, start today!
Born: February 12, 1809
Died: April 15, 1865

Abraham Lincoln was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy.

Lincoln was born into poverty in a log cabin and was raised on the frontier, primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the Kansas–Nebraska Act. He re-entered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the Union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union.

As the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern traitors. Anti-war Democrats (called "Copperheads") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, and democracy.

Lincoln scrutinized the strategy and tactics of the war effort, including the selection of generals and the naval blockade of the South's trade. He suspended habeas corpus, and he averted British intervention by defusing the Trent Affair. He engineered the end of slavery with his Emancipation Proclamation and his order that the Army protect and recruit former slaves. He also encouraged border states to outlaw slavery, and promoted the Thirteenth Amendment to the United States Constitution, which outlawed slavery across the country.

Lincoln managed his own successful re-election campaign. He sought to heal the war-torn nation through reconciliation. On April 14, 1865, just days after the war's end at Appomattox, Lincoln was attending a play at Ford's Theatre with his wife Mary when he was assassinated by Confederate sympathizer John Wilkes Booth. His marriage had produced four sons, two of whom preceded him in death, with severe emotional impact upon him and Mary. Lincoln is remembered as the martyr hero of the United States, and he is consistently ranked as one of the greatest presidents in American history.
Severe trauma, or prolonged or repeated exposure to traumatic events, can have a wide range of effects on personality, identity, mood, emotional regulation, and interpersonal relationships.

- Having difficulty expressing and controlling emotions (e.g. anger outbursts) is very common.
- It's common to have difficulty calming down when upset. Consequently, some children and adolescents start to consume alcohol or drugs to calm down.
- Some children and adolescents harm themselves, or start to consume alcohol or drugs, to calm down and try to manage their emotions.
- Try to calm yourself by doing a simple breathing exercise (counting to 3 while breathing in, counting to 5 while breathing out).
- Try to find helpful strategies to manage your emotions (sport, using a punching bag, yoga, etc.). Be creative.
- You can also try to reinterpret the triggering situation by asking yourself how someone else could see the situation and how they would resolve it differently.

Beliefs about oneself

- Feeling helpless and thinking that you have no control over what happens in your life is common.
- Feeling ashamed and guilty (even in situations for which you are not responsible), inferior to others, or worthless, is common.
- Try to talk to your friends and family about it. Talking about it often feels liberating and thus helps.
- Try to think of positive examples in your life to correct these negative beliefs. Write them down.
- It's common to have trust issues, which can lead to disturbed relationships with other people or to cutting yourself off from others. Try to talk to your friends and family about it and let them know what you find difficult.
- It's common to have difficulty getting in contact with other people, or to avoid conflicts.
- Being more vulnerable to abuse or exploitation sometimes happens because of difficulty setting boundaries. Try to watch out for such situations and seek help from adults if something like this happens to you.
- Try to pursue hobbies with others or to meet up with friends to stay socially involved.
- If you did get in contact with other people, or when you managed to settle a conflict, reward yourself, e.g. by going to the movies, meeting friends, etc.
Galaxies are collections of stars, gas, and dust, combined with some unknown form of dark matter, all bound together by gravity. The visible parts come in a variety of sizes, ranging from a few thousand light-years with a billion stars, to 100,000 light-years with a trillion stars. Our own Milky Way galaxy contains about 200 billion stars.

Types of Galaxies

The invisible parts of galaxies are known to exist only because of their influence on the motions of the visible parts. Stars and gas rotate around galaxy centers too fast to be gravitationally bound by their own mass, so dark matter has to be present to hold it together. Scientists do not yet know the size of the dark matter halos of galaxies; they might extend over ten times the extent of the visible galaxy. What we see in our telescopes as a giant galaxy of stars may be likened to the glowing hearth in the center of a big dark house.

Imagine viewing a galaxy through a small telescope, as pioneering astronomers William and Caroline Herschel and Charles Messier did in the late eighteenth century. You would see mostly a dull yellow color from countless stars similar to the Sun, all blurred together by the shimmering Earth atmosphere. This light comes from stars that formed when the universe was only a tenth of its present age, several billion years before Earth existed. American astronomer Edwin P. Hubble used a larger telescope starting in the 1920s and saw a wide variety of galaxy shapes. He classified them into elliptical, with a smooth texture; disk-like with spirals; and everything else, which he called irregular.

Elliptical galaxies are three-dimensional objects that range from spheres to elongated spheroids like footballs. Some may have developed from slowly rotating hydrogen clouds that formed stars in their first billion years. Others may have formed from the merger of two or more smaller galaxies. Most ellipticals have very little gas left that can form new stars, although in some there is a small amount of star formation within gas acquired during recent mergers with other galaxies.

Spiral galaxies, which include the Milky Way, formed from faster-spinning clouds of hydrogen gas. Theoretical models suggest they got this spin by interacting with neighboring galaxies early in the universe. The center of a spiral galaxy is a three-dimensional bulge of old stars, surrounded by a spinning disk flattened to a pancake shape. Hubble classified spiral galaxies according to the tightness of the spirals that wind around the center, and the relative size of the disk and bulge. Galaxies with big bulges tend to have more tightly wrapped spirals; they are designated type Sa. Galaxies with progressively smaller bulges and more open arms are designated Sb and Sc. Barred galaxies are similar but have long central barlike patterns of stars; they are designated SBa, SBb, and SBc, while intermediate bar strengths are designated SAB.

Type Sa galaxies rotate at a nearly constant speed of some 300 kilometers per second (186 miles per second) from the edge of the bulge to the far outer disk. Sc galaxies have a rotation speed that increases more gradually from center to edge, to typically 150 kilometers per second (93 miles per second). The rotation rate and the star formation rate depend only on the average density. Sa galaxies, which are high density, converted their gas into stars so quickly that they have very little gas left for star formation today. Sc galaxies have more gas left over and still form an average of a few new stars each year.
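To see why rotation speeds like those just quoted point to unseen mass, here is a minimal sketch assuming a flat rotation curve; the 220 km/s speed and the radii are illustrative assumptions, not figures from this article.

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
LIGHT_YEAR = 9.461e15  # m

def enclosed_mass(v_m_per_s: float, r_m: float) -> float:
    """Mass needed inside radius r to hold a circular speed v: M = v^2 r / G."""
    return v_m_per_s**2 * r_m / G

v = 220e3  # assumed flat rotation speed of 220 km/s, roughly Milky Way-like
for r_ly in (50_000, 150_000):
    m = enclosed_mass(v, r_ly * LIGHT_YEAR)
    print(f"r = {r_ly:>7,} ly  ->  ~{m / M_SUN:.1e} solar masses enclosed")

# If the rotation speed stays flat out to radii where few stars shine, the
# implied enclosed mass keeps growing with r - the usual argument for
# extended dark matter halos.
```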
Some galaxies have extremely concentrated gas near their centers, sometimes in a ring. Here the star formation rate may be higher, so these galaxies are called starbursts. The pinwheel structures of spiral galaxies result from a concentration of stars and gas in wavelike patterns that are driven by gravity and rotation. Bright stars form in the concentrated gas, highlighting spiral arms with a bluish color. Theoretical models and computer simulations match the observed spiral properties. Some galaxies have two long symmetric arms that give them a "grand design." These arms are waves of compression and rarefaction that ripple through a disk and organize the stars and gas into the spiral shape. These galaxies change shape slowly, on a timescale of perhaps ten rotations, which is a few billion years. Other galaxies have more chaotic, patchy arms that look like fleece on a sheep; these are called flocculent galaxies. The patchy arms are regions of star formation with no concentration of old stars. Computer simulations suggest that each flocculent arm lasts only about 100 million years.

Irregular galaxies are the most common type. They are typically less than one-tenth the mass of the Milky Way and have irregular shapes because their small sizes make it difficult for spiral patterns to develop. They also have large reservoirs of gas, leading to new star formation. The varied ages of current stars indicate that their past star formation rates were highly nonuniform. The dynamical processes affecting irregulars are not easily understood. Their low densities and small sizes may make them susceptible to environmental effects such as collisions with larger galaxies or intergalactic gas clouds. Some irregulars are found in the debris of interacting galaxies and may have formed there. Some small galaxies have elliptical shapes, contain very little gas, and show no new star formation. It is not clear how they formed. The internal structures of irregulars and dwarf ellipticals are quite different, as are their locations inside clusters of galaxies (the irregulars tend to be in the outer parts). Thus it is not likely that irregulars simply evolve into dwarf ellipticals as they age.

Active Galaxies, Black Holes, and Quasars

In the 1960s, Dutch astronomer Maarten Schmidt made spectroscopic observations of an object that appeared to be a star but emitted strong radio radiation, which is uncharacteristic of stars. He found that the normal spectral lines emitted by atoms were shifted to much longer wavelengths than they have on Earth. He proposed that this redshift was the result of rapid motion away from Earth, caused by the cosmological expansion of the universe discovered in the 1920s by Hubble. The velocity was so large that the object had to be very far away. Such objects were dubbed quasi-stellar objects, now called quasars. Several thousand have been found. With the Hubble Space Telescope, astronomers have recently discovered that many quasars are the bright centers of galaxies, some of which are interacting. They are so far away that their spatial extents cannot be resolved through the shimmering atmosphere. Other galaxies also show the unusually strong radio and infrared emissions seen in quasars; these are called active galaxies.

The energy sources for quasars and active galaxies are most likely black holes with masses of a billion suns. Observers sometimes note that black holes are surrounded by rapidly spinning disks of gas.
Theory predicts that these disks accrete onto the holes because of friction. Friction also heats up the disk so much that it emits X rays. Near the black hole, magnetic and hydrodynamic processes can accelerate some of the gas in the perpendicular direction, forming jets of matter that race far out into intergalactic space at nearly the speed of light. Nearby galaxies, including the Milky Way, have black holes in their centers too, but they tend to be only one thousand to one million times as massive as the Sun. Active spiral galaxies are called Seyferts, named after American astronomer Carl Seyfert. Their spectral lines differ depending on their orientation, and so are divided into types I and II. The lines tend to be broader, indicating more rapid motions, if Seyfert galaxies are viewed nearly face-on. Active elliptical galaxies are called BL Lac objects (blazars) if their jets are viewed end-on; they look very different, having giant radio lobes, if their jets are viewed from the side. These radio lobes can extend for hundreds of millions of light-years from the galaxy centers.

Galaxies generally formed in groups and clusters, so most galaxies have neighbors. The Milky Way is in a small group with another large spiral galaxy (Andromeda, or Messier 31), a smaller spiral (Messier 33), two prominent irregulars (the Large and Small Magellanic Clouds), and two dozen tiny galaxies. In contrast, the spiral galaxy Messier 100 is in a very large cluster, Virgo, which has at least 1,000 galaxies. With so many neighbors, galaxies regularly pass by each other and sometimes merge together, leading to violent gas compression and star formation. In dense cluster centers, galaxies merge into giant ellipticals that can be 10 to 100 times as massive as the Milky Way. There is a higher proportion of elliptical and fast-rotating spiral galaxies in dense clusters than in small groups. Presumably the dense environments of clusters led to the formation of denser galaxies.

The Milky Way Galaxy

In the 1700s the philosophers Thomas Wright, Immanuel Kant, and Johann Heinrich Lambert speculated that our galaxy has a flattened shape that makes the bright band of stars called the Milky Way. Because English physicist and mathematician Isaac Newton (1642-1727) showed that objects with mass will attract each other by gravity, they supposed that our galaxy disk must be spinning in order to avoid collapse. In the early 1800s William Herschel counted stars in different directions. The extent of the Milky Way seemed to be about the same in all directions, so the Sun appeared to be near the center.

In the 1900s American astronomer Harlow Shapley studied the distribution of globular clusters in our galaxy. Globular clusters are dense clusters of stars with masses of around 100,000 Suns. These stars are mostly lower in mass than the Sun and formed when the Milky Way was young. Other galaxies have globular clusters too. The Milky Way has about 100 globular clusters, whereas giant elliptical galaxies are surrounded by thousands of globulars. Shapley's observations led to an unexpected result because he saw that the clusters appear mostly in one part of the sky, in a spherical distribution around some distant point. He inferred that the Sun is near the edge of the Milky Way—not near its center as Herschel had thought. Shapley estimated the distance to clusters using variable stars. Stars that have finished converting hydrogen into helium in their cores change their internal structures as the helium begins to ignite.
For a short time, they become unstable and oscillate, changing their size and brightness periodically; they are then known as variable stars. American astronomer Henrietta Leavitt (1868-1921) discovered that less massive, intrinsically fainter stars vary their light faster than higher mass, intrinsically brighter stars. This discovery was very important because it enabled astronomers to determine the distance to a star based on its period and apparent brightness. Much of what we know today about the size and age of the universe comes from observations of variable stars. Shapley applied Leavitt's law to the variable stars in globular clusters. He estimated that the Milky Way was more than 100,000 light-years across, several times the previously accepted value. He made an understandable mistake in doing this because no one realized at the time that there are two different types of variable stars with different period-brightness relations: the so-called RR Lyrae stars in globular clusters are fainter for a given period than the younger Cepheid variables.

The Discovery of Galaxies

In the 1920s astronomers could not agree on the size of the Milky Way or on the existence of other galaxies beyond. Several lines of conflicting evidence emerged. Shapley noted that nebulous objects tended to be everywhere except in the Milky Way plane. He reasoned that there should be no special arrangement around our disk if the objects were all far from it, so this peculiar distribution made him think they were close. Actually the objects are distant galaxies, and dust in the Milky Way obscures them. The distance uncertainty was finally settled in the 1930s when Hubble discovered a Cepheid variable star in the Andromeda galaxy. He showed from the period-brightness relationship that Andromeda is far outside our own galaxy.

Galaxy investigations will continue to be exciting in the coming decades, as new space observatories, such as the Next Generation Space Telescope, and new ground-based observatories with flexible mirrors that compensate for the shimmering atmosphere, probe the most distant regions of the universe. Scientists will see galaxies in the process of formation by observing light that left them when the universe was young. We should also see quasars and other peculiar objects with much greater clarity, leading to some understanding of the formation of nuclear black holes.

Debra Meloy Elmegreen and Bruce G. Elmegreen

When Galaxies Collide

Collisions between galaxies can form spectacular distortions and bursts of star formation. Sometimes bridges of gas and stars get pulled out between two galaxies.
In head-on collisions, one galaxy can penetrate another and form a ring. Interactions can create bars in galaxy centers and initiate spiral waves that make grand design structure. Close encounters can also strip gas from disks, which then streams through the cluster and interacts with other gas to make X rays.
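The period-brightness method that Shapley and Hubble relied on can be made concrete with a short worked example. The sketch below is illustrative only and is not part of the encyclopedia entry: the calibration coefficients and the sample star are assumed, approximate modern values, chosen simply to show how a Cepheid's pulsation period and apparent brightness translate into a distance through the distance modulus.

```python
# Illustrative sketch (not from the entry): how a period-luminosity relation plus the
# distance modulus turns a Cepheid's pulsation period and apparent brightness into a distance.
# The calibration coefficients below are approximate modern values, used only as an example.
import math

def absolute_magnitude_from_period(period_days):
    # One commonly quoted Cepheid calibration: M_V ~ -2.81*log10(P) - 1.43
    return -2.81 * math.log10(period_days) - 1.43

def distance_parsecs(apparent_mag, absolute_mag):
    # Distance modulus: m - M = 5*log10(d / 10 pc)
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A hypothetical Cepheid resembling those Hubble found in Andromeda:
period = 31.4        # days
apparent_mag = 18.6  # very faint, as Hubble's variables were

M = absolute_magnitude_from_period(period)
d_pc = distance_parsecs(apparent_mag, M)
print(f"absolute magnitude ~ {M:.2f}, distance ~ {d_pc/1e6:.2f} million parsecs "
      f"(~{d_pc*3.26/1e6:.1f} million light-years)")
```

With these illustrative numbers the star comes out at roughly 0.7 million parsecs, a little over two million light-years, far outside the Milky Way, which is the kind of result that settled the debate.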
Reposted from 3 Quarks Daily: Michael Gove (remember him?), when he was England’s Secretary of State for Education, told teachers that what students need is “a rooting in the basic scientific principles, Newton’s laws of thermodynamics and Boyle’s law.” Never have I seen so many major errors expressed in so few words. But the wise learn from everyone, so let us see what we can learn here from Gove.

From the top: Newton’s laws. Gove most probably meant Newton’s Laws of Motion, but he may also have been thinking of Newton’s Law (note singular) of Gravity. It was by combining all four of these that Newton explained the hitherto mysterious phenomena of lunar and planetary motion, and related these to the motion of falling bodies on Earth; an intellectual achievement not equalled until Einstein’s General Theory of Relativity. In Newton’s physics, the laws of motion are three in number:

1) If no force is acting on it, a body will carry on moving at the same speed in a straight line.
2) If a force is acting on it, the body will undergo acceleration, according to the equation Force = mass x acceleration.
3) Action and reaction are equal and opposite.

So what does all this mean? In particular, what do scientists mean by “acceleration”? Acceleration is rate of change of velocity. Velocity is not quite the same thing as speed; it is speed in a particular direction. So the First Law just says that if there’s no force, there’ll be no acceleration, no change in velocity, and the body will carry on moving in the same direction at the same speed. And, very importantly, if a body changes direction, that is a kind of acceleration, even if it keeps on going at the same speed. For example, if something is going round in circles, there must be a force (properly called centripetal force, and often confused with the so-called centrifugal force) that keeps it accelerating inwards, and stops it from going straight off at a tangent.

Then what about the heavenly bodies, which travel in curves, pretty close to circles, although Kepler’s more accurate measurements had already shown by Newton’s time that the curves are actually ellipses? The moon, for example. The moon goes round the Earth, without flying off at a tangent. So the Earth must be exerting a force on the moon. And finally, the Third Law. If the Earth is tugging on the moon, then the moon is tugging equally hard on the Earth. We say that the moon goes round the Earth, but it is more accurate to say that Earth and moon both rotate around their common centre of gravity. All of this describes the motion of single bodies. Thermodynamics, as we shall see, only comes into play when we have very large numbers of separate objects.

The other thing that Gove might have meant is Newton’s Inverse Square Law of gravity, which tells us just how fast gravity decreases with distance. If, for instance, we could move the Earth to twice its present distance from the Sun, the Sun’s gravitational pull on it would drop to a quarter of its present value. Now here is the really beautiful bit. We can measure (Galileo had already measured) how fast falling bodies here on Earth accelerate under gravity. Knowing how far we are from the centre of the Earth, and how far away the moon is, we can work out from the Inverse Square Law how strong the Earth’s gravity is at that distance, and then, from Newton’s Second Law, how fast the moon ought to be accelerating towards the Earth. And when we do this calculation, we find that this exactly matches the amount of acceleration needed to hold the moon in its orbit going round the Earth once every lunar month. Any decent present-day physics student should be able to do this calculation in minutes.
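For the curious, here is that back-of-envelope calculation sketched in a few lines of Python. It is my illustration rather than part of the original post, and the inputs are rounded textbook values.

```python
# Back-of-envelope check: does the inverse square law, applied to the measured surface
# value of g, reproduce the Moon's orbital acceleration?
import math

g_surface = 9.81            # m/s^2, acceleration of falling bodies at Earth's surface
r_earth = 6.371e6           # m, Earth's mean radius (distance of the surface from Earth's centre)
r_moon = 3.844e8            # m, mean Earth-Moon distance
T = 27.32 * 86400           # s, sidereal lunar month

# Inverse square law: gravity weakens with the square of the distance from Earth's centre
g_at_moon = g_surface * (r_earth / r_moon) ** 2

# Acceleration needed to keep the Moon on a (nearly) circular orbit of radius r_moon and period T
a_centripetal = 4 * math.pi ** 2 * r_moon / T ** 2

print(f"g predicted at the Moon's distance: {g_at_moon:.5f} m/s^2")
print(f"centripetal acceleration of the Moon: {a_centripetal:.5f} m/s^2")
```

Both numbers come out at about 0.0027 m/s², which is the agreement Newton was looking for.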
For Newton to do it for the first time involved some rather more impressive intellectual feats, such as clarifying the concepts of force, speed, velocity and acceleration, formulating the laws I’ve referred to, and inventing calculus. But what about the laws of thermodynamics? These weren’t discovered until the 19th century, the century of the steam engine. People usually talk about the three laws of thermodynamics, although there is actually another one called the Zeroth Law, because people only really noticed they had been assuming it long after they had formulated the others. (This very boring law says that if two things are in thermal equilibrium with a third thing, they must be in thermal equilibrium with each other. Otherwise, we could transform heat into work by making it go round in circles.) The First Law of Thermodynamics is, simply, the conservation of energy. That’s all kinds of energy added up together, including for example heat energy, light energy, electrical energy, and the “kinetic energy” that things have because they’re moving. One very important example of the conservation of energy is what happens inside a heat engine, be it an old-fashioned steam engine, an internal combustion engine, or the turbine of a nuclear power station. Here, heat is converted into other forms of energy, such as mechanical or electrical. This is all far beyond anything Newton could have imagined. Newton wrote in terms of force, rather than energy, and he had been dead for over a century before people realized that the different forms of energy include heat. There are many ways of expressing the Second Law, usually involving rather technical language, but the basic idea is always the same; things tend to get more spread out over time, and won’t get less spread out unless you do some work to make them. (One common formulation is that things tend to get more disordered over time, but I don’t like that one, because I’m not quite sure how you define the amount of disorder, whereas there are exact mathematical methods for describing how spread out things are.) For example, let a drop of food dye fall into a glass full of water. Wait, and you will see the dye spread through the water. Keep on waiting, and you will never see it separating out again as a separate drop. You can force it to, if you can make a very fine filter that lets the water through while retaining the dye, but it always takes work to do this. To be precise, you would be working against osmotic pressure, something your kidneys are doing all the time as they concentrate your urine. This sounds a long way from steam engines, but it isn’t. Usable energy (electrical or kinetic, say) is much less spread out than heat energy, and so the Second Law limits how efficiently heat can ever be converted into more useful forms. The Second Law also involves a radical, and very surprising, departure from Newton’s scheme of things. Newton’s world is timeless. Things happen over time, but you would see the same kinds of things if you ran the video backwards. We can use Newton’s physics to describe the motion of planets, but it could equally well describe these motions if they were all exactly reversed. Now we have a paradox. Every single event taking place in the dye/water mixture can be described in terms of interactions between particles, and every such interaction can, as in Newton’s physics, be equally well described going forwards or backwards. To use the technical term, each individual interaction is reversible. 
But the overall process is irreversible; you can’t go back again. You cannot unscramble eggs. Why not? In the end, it comes down to statistics. There are more ways of being spread out than there are of being restricted. There are more ways of moving dye molecules from high to low concentration regions than there are of moving them back again, simply because there are more dye molecules in the former than there are in the latter. There is an excellent video illustration of this effect, using sheep, by the Princeton-based educator Aatish Bhatia.

The Third Law is more complicated, and was not formulated until the early 20th century. It enables us to compare the spread-out-ness of heat energy in different chemical substances, and hence to predict which way chemical reactions tend to go. We can excuse Gove for not knowing about the Third Law, but the first two, as C. P. Snow pointed out a generation ago, should be part of the furniture of any educated mind. So if you don’t immediately realize that Newton’s laws and the laws of thermodynamics belong to different stages of technology, the age of sail as opposed to the age of steam, and to different levels of scientific understanding, the individual and macroscopic as opposed to the statistical and submicroscopic, then you don’t know what you’re talking about. Neither the science, nor its social and economic context.

[Images: right, a fluyt, a typical ocean-going vessel of Newton’s time; below left, the Great Western, the first trans-Atlantic steamship, designed by Isambard Kingdom Brunel, on her maiden voyage.]

(Disclosure: I taught Boyle’s Law for over 40 years, and it gets three index entries in my book, From Stars to Stalagmites.) Bottom line: Boyle’s Law is not basic. It is a secondary consequence of the Kinetic Theory of Gases, which is basic. The difference is enormous, and matters. Anyone who thinks that Boyle’s Law is a principle doesn’t know what a principle is. (So a leading Westminster politician doesn’t know what a principle is? That figures.)

Mathematically, the Law is simply stated, which may be why Mr Gove thinks it is basic: volume is inversely proportional to pressure, which gives you a nice simple equation, as in the graph on the right (P x V = a constant), that even a Cabinet Minister can understand. But on its own, it is of no educational value whatsoever. It only acquires value if you put it in its context, but this appeal to context implies a perspective on education beyond his comprehension. Now to what is basic: the fundamental processes that make gases behave as Boyle discovered. His Law states that if you double the pressure on a sample of gas, you will halve the volume. He thought this was because the molecules of gas repel each other, so it takes more pressure to push them closer together, and Newton even put this idea on a mathematical footing, by suggesting an inverse square law for repulsion, rather like his Inverse Square Law for gravitational attraction. They were wrong. The Law is now explained using the Kinetic Theory of Gases. This describes a gas, as shown on the right, as a whole lot of molecules, of such small volume compared to their container that we can think of them as points, each wandering around doing their own thing, and, from time to time, bouncing off the walls. It is the impact of these bounces that gives rise to pressure. If you push the same number of molecules (at the same temperature) into half the volume, each area of wall will get twice as many bounces per second, and so will experience twice the pressure.
Pressure x volume remains constant; hence Boyle’s Law. Actually, Boyle’s Law isn’t even true. Simple kinetic theory neglects the fact that gas molecules attract each other a little, making the pressure less than what the theory tells you it ought to be. And if we compress the gas into a very small volume, we can no longer ignore the volume taken up by the actual molecules themselves.

So what does teaching Boyle’s Law achieve? Firstly, a bit of elementary algebra that gives clear answers, and that can be used to bully students if, as so often happens, they meet it in science before they have been adequately prepared in their maths classes. This, I suspect, is the aspect that Gove finds particularly appealing. Secondly, some rather nice experiments involving balancing weights on top of sealed-off syringes. Thirdly, insight into how to use a mathematical model and, at a more advanced level, how to allow for the fact that real gases do not exactly meet its assumptions. Fourthly, a good example of how the practice of science depends on the technology of the society that produces it. In this case, seventeenth century improvements in glassmaking made it possible to construct tubes of uniform cross-section, which are needed to compare volumes of gas accurately. Fifthly … but that’s enough to be going on with. Further elaboration would, ironically, lead us on to introductory thermodynamics. Ironically, given the interview that started this discussion. The one thing it does not achieve is the inculcation of a fundamental principle.

There are mistakes like thinking that Shakespeare, not Marlowe, wrote Edward II. There are mistakes like thinking that Shakespeare wrote War and Peace. And finally, there are mistakes like thinking that Shakespeare wrote War and Peace, that this is basic to our understanding of literature, and that English teachers need to make sure that their pupils know this. Then-Education Secretary Gove’s remarks about science teaching fall into this last category. Such ignorance of basic science (and education) at the highest levels of government is laughable. But it is not funny.

1] Ben Zoma, Mishnah, Chapters of the Fathers, 4a. “Chapters of the Fathers” may also be interpreted to mean “Fundamental Principles”.
2] It is often said that Einstein’s famous equation, E = mc², means that we can turn mass into energy. That puts it back to front. The equation is really telling us that energy itself has mass.
3] There are lots of situations (steam condensing to make water, living things growing, or indeed urine becoming more concentrated in the kidney) where a system becomes less spread out, but this change is always accompanied by something in the surroundings, usually heat energy, becoming more spread out to compensate.

Newton as painted by Godfrey Kneller, via Wikipedia. Gove image via Daily Telegraph, under headline “Michael Gove’s wife takes a swing at ageing Education Secretary”. Solar system image from NASA. Steam turbine blade, Siemens, via Wikipedia. Dye diffusing in water from Royal Society of Chemistry. Fluyt image from Pirate King website. Great Western on maiden voyage, 1838, by unknown artist, via Wikipedia. Boyle’s Law curve from Krishnavedala’s replot of Boyle’s own data, via Wikipedia. Kinetic theory image via Chinese University of Hong Kong.

The news that Michael Gove wants to lead the country sent me back to what I wrote about him almost exactly 3 years ago. He was, of course, talking through his hat, and we all do that from time to time.
But he did this while telling the rest of us, in his then capacity of Education Secretary, what to do and how to teach. Now it may not matter that Michael Gove has only a limited grasp of physics (actually, I think it does, when the physics of climate change underlie the most important single challenge facing us), but it matters enormously that a would-be future Prime Minister considers his ignorance a qualification. Anyway, here is the gist of what I wrote back then, which is suddenly attracting a surprising number of hits, given its age: The Education Secretary said “What [students] need is a rooting in the basic scientific principles, Newton’s laws of thermodynamics and Boyle’s law.” [reported here]. He has been widely criticized for this (e.g. here and here), but it’s still worth discussing exactly why what he said is so appallingly wrong, on at least four separate counts. In the unlikely event that Mr. Gove ever reads this, he may learn something. Muddling up the laws of motion with the laws of thermodynamics is bad enough. Muddling up an almost incidental observation, like Boyle’s Law, is even worse, especially when this muddle comes from someone in charge of our educational system [well, not mine actually; I’m glad to say I live in Scotland], and in the very act of his telling teachers and examiners what is, and what is not, important. Okay, from the top. Newton’s laws; Gove probably meant (if he meant anything) Newton’s laws of motion, but he may also have been thinking of Newton’s law (note singular) of gravity. [I went on to summarise both Newton’s laws, and Newton’s law, and to explain how the combination of these explained the hitherto mysterious phenomenon of planetary motion and related it to the motion of falling bodies on Earth; an intellectual achievement not equalled until Einstein’s General Theory of Relativity] But what about the laws of thermodynamics? These weren’t discovered until the 19th century, the century of the steam engine… [I briefly described them] If you don’t immediately realize that Newton’s laws and the laws of thermodynamics belong to different stages of technology, the age of sail as opposed to the age of steam, and to different levels of scientific understanding, the individual and macroscopic as opposed to the statistical and submicroscopic, then you don’t know what you’re talking about. Gove’s blunder has been compared to confusing Shakespeare with Dickens. It is far, far worse than that. It is – I am at a loss for an adequate simile. All I can say is that it is as bad as confusing Newton’s laws with the laws of thermodynamics, and I can’t say worse than that. And regarding Gove’s description of Boyle’s Law as “basic”, I had this to say: He [Gove] has been justly mocked for confusing Newton’s laws with the laws of thermodynamics. But the kind of ignorance involved in describing Boyle’s Law as a “basic scientific principle” is far more damaging. Disclosure: I taught Boyle’s Law for over 40 years, and it gets three index entries in my book, From Stars to Stalagmites. Bottom line: Boyle’s Law is not basic. It is a secondary consequence of the kinetic theory of gases, which is basic. The difference is enormous, and matters. Anyone who thinks that Boyle’s Law is a principle doesn’t know what a principle is. (So Gove doesn’t know what a principle is? That figures.) 
Mathematically, the Law is simply stated, which may be why Mr Gove thinks it is basic: volume is inversely proportional to pressure, which gives you a nice simple equation (P x V = a constant) that even a Cabinet Minister can understand. But on its own, it is of no educational value whatsoever. It only acquires value if you put it in its context [in the kinetic theory of gases], but this involves a concept of education that seems to be beyond his understanding… Educationally, context is everything, the key to understanding and to making that understanding worthwhile. A person who decries the study of context is unfit for involvement with education. Even at Cabinet level. And, I would now add, completely unfit for making major decisions in these interesting times.

Image: Gove claiming that EU regulations prevent us from keeping out terrorists. Dylan Martinez, via Daily Telegraph.

Readers in England in particular, please write to your MP in support of the BHA campaign to combat Creationism, including Creationism in publicly funded schools; details here. The rest of this post is an explanation of why, shockingly, such action is necessary. In post-principle politics, it would be naive to suggest that this or perhaps any feasible alternative Government is really interested in the merits. The Creationists are a coherent constituency, who make their voices heard. Defenders of scientific reality (regardless of their position on religious matters) must do likewise. Dr Evan Harris assures us, and he should know, that 20 letters to an MP are a lot (Glasgow Skeptics 2011). So the readership of this column, alone, is enough to make a real contribution. Do it. And ask your friends to do likewise.

The school “will retain its right to censor papers, under agreed conditions.”

Yesodey Hatorah (Charedi Jewish) Senior Girls School blacked out questions about evolution on pupils’ science exams in 2013. One wonders how this was even possible, given that exam papers are supposed to be sealed until opened at the specified time in the presence of the pupils. However, when the relevant Examination Board, OCR, investigated, they were satisfied that no students had received an unfair advantage, and took no action. The Board now tells Ofqual, the government agency responsible for the integrity of examinations, that it intends “to come to an agreement with the centres concerned which will … respect their need to do this in view of their religious beliefs.” And OCR’s chief executive says the case has “significantly wider implications and could apply to other faith schools.”

It gets worse; or perhaps it doesn’t. The school now says that it does teach evolution, but in Jewish Studies, that “there are minute elements within the curriculum which are considered culturally and halachically [in terms of Jewish law] questionable” (evolution a minute element!), that “This system has successfully been in place within the charedi schools throughout England for many years,” and that “we (the school) have now come to an agreement with OCR to ensure that the school will retain its right to censor papers, under agreed conditions.” The latest word, however, is that this agreement, and Ofqual’s acquiescence, may be unravelling under scrutiny, illustrating the importance of public awareness and response.
Creationist Noah’s Ark Zoo Farm claims 15,000 school visitors annually and boasts of Government body award

Noah’s Ark Zoo Farm, near Bristol, which claims to be visited by 15,000 schoolchildren annually, promulgates the view that Noah’s Ark is historic (and indeed, pre-historic), displays posters arguing that apes and humans are too distinct to share a common ancestor, and suggesting how the different kinds of animal could have been housed in the Ark, which it regards as historical (Professor Alice Roberts reported on her own visit last December; I have discussed the Zoo Farm’s reaction to her account). The giraffes, for instance, would according to one poster have been housed in the highest part of the vessel, next to the T. Rex (Hayley Stevens, private communication). This Zoo Farm recently received an award from the Council for Learning Outside the Curriculum, which justified itself by referring to “education that challenges assumptions and allows them to experience a range of viewpoints; giving them the tools needed to be proactive in their own learning and develop skills to enable them to make well informed decisions.” Connoisseurs of creationism will recognise this as a variant of the “teach the controversy” argument, which advocates presenting creationism and real science as alternatives both worthy of consideration, and inviting schoolchildren to choose between them.

“We do not expect creationism, intelligent design and similar ideas to be taught as valid scientific theories in any state funded school.”

Evolution will become part of the National Curriculum in 2014. However, that curriculum is not binding on Academies or Free Schools. The Government assures us that this is not a problem, because all schools need to prepare for external exams, and these exams, of course, include evolution. Exams that the schools have now been openly invited to censor. There is supposedly clear guidance for state-funded schools in England. Michael Gove, Education Secretary, has declared himself “crystal clear that teaching creationism is at odds with scientific fact”, and official guidance to Free School applicants states “We would expect to see evolution and its foundation topics fully included in any science curriculum. We do not expect creationism, intelligent design and similar ideas to be taught as valid scientific theories in any state funded school.” The reality however is that what are clearly creationist establishments do get government funding. Creationist preschools, to which the guidance does not apply, can and do receive public money through nursery vouchers, while being run by organisations such as ACE (see below) that openly teach rigid biblical creationism along with even more rigid gender roles. BHA knows of 67 nursery schools that are run by Creationist or other organizations that openly reject the basics of biology. Some of these directly teach Adam-and-Eve history as fact that must be believed, and Government funding to these nursery schools may also be indirectly underwriting primary and secondary schools run by the same organizations.

“We will teach creation as a scientific theory”

In addition, a number of Academies and Free Schools have been licensed despite clear warning signals. Grindon Hall Christian School, formerly private, was licensed to receive public funding in 2012, despite a record of teaching creationism, and a website Creation Policy, hastily deleted after it received public attention, which stated “We will teach creation as a scientific theory”.
Newark School of Enterprise, until recently expected to open in 2014, is a thinly disguised relabelling of Everyday Champions Church School, which was originally denied licensing because of its obvious links to a creationist church. (Last month, it was announced that the Government had withdrawn support for the school on other grounds.) Ibrahim Hewitt, of the Association of Muslim Schools, has said that his members’ schools, including six state-funded ones, taught children about Darwin, because they had to, but they also taught a different, Koranic view. The ill-fated al-Madinah School originally specified “Darwinism” as un-Koranic on its website, but under “curriculum” now says only “We are committed to providing a broad and balanced curriculum for all our pupils. Further information will be available in due course.”

In the private sector, we have Christian Schools Trust (CST), with 42 schools. Some of these are applying for “Free Schools” status; so far unsuccessfully, but Tyndale Community School, which has been approved, is run by Oxfordshire Community Churches, which also runs the CST Kings School in Witney. CST schools teach Genesis as historical fact, with the Fall as the source of all evil, and discuss evolution in such a way as to make it seem incredible. According to the Ph.D. thesis of Sylvia Baker, founder and core team member of CST, 75% of students end up believing in Noah’s ark. Dr Baker, author of Bone of Contention and other creationist works, is also directly linked to Genesis Agendum, a “creation science” website, and language in her style appears in the related WorldAroundUs “virtual museum”, which claims to show that evolution and old Earth geology are outdated scientific paradigms in the process of crumbling (for a detailed analysis of the museum’s arguments, see here, where I describe it as a “museum of horrors”). Since 2008, CST and the Association of Muslim Schools have shared their own special inspectorate, of which Sylvia Baker is a board member. So the foxes placed in charge of the hen house have under two successive Governments been entrusted with the task of evaluating their own stewardship.

In an even grosser scandal, NARIC, the National Academic Recognition Information Centre, has approved the ICCE advanced certificate, based on Accelerated Christian Education (ACE), as equivalent to A-level. ACE has claimed, and in the US still does claim, that Nessie is evidence for the persistence of dinosaurs, and teaches that evolution has been scientifically proven false, and that those who accept its “impossible claims” do so in order to reject God. This in a text that prepares students for a certificate that NARIC would have us accept as preparation for the study of biology at university. And NARIC is the body that provides information on qualifications on behalf of the UK Government.

The ACE curriculum’s straw man version of evolution

In all these cases, the actual offence is compounded by official complacency or collusion. I can only guess at why this is allowed to happen, but among relevant factors may be official concern with procedures rather than outcomes, scientific illiteracy among decision-makers, free market forces (the exam boards, after all, are competing for the schools’ business), misplaced respect for differences, and electoral calculation. Religious zealots form an organised political pressure group, while their reality-orientated co-religionists are far too slow to condemn them.
Ironically, these co-religionists have even more to lose than the rest of us, as their institutions are subverted from inside, and their faith brought into disrepute. In response, those of us who oppose the forces of endarkenment must become recognised as a constituency, not necessarily in any formal sense, but in the sense that politicians are aware of the depth of our concerns. Numbers are increasingly on our side, since young people are more sceptical than their elders, and Humanists, secularists, Skeptics, and even geeks are our natural allies. And so, on this issue, are liberal-minded believers from all faiths. There is need for coordinated public pressure, through teachers’ organisations, other educational bodies and learned societies, publicity and protests after specific cases revealed, and campaigns such as the BHA letter-writing campaign that is the subject of this post. So here, once more, is the BHA link: Use it. For other posts on the issues discussed here, as they apply in England and Scotland, see Evolution censored from exam questions in publicly funded English schools, with government permission; PhD Thesis of Sylvia Baker, founder of “Christian” (i.e. Creationist) Schools Trust; Noah’s Ark Zoo Farm Responds to Criticism; ACE Infantile creationist burblings rated equivalent to UK A-level (school leaving; University entrance) exams; and Young Earth Creationist books handed out in a Scottish state school. Poster displayed at Noah’s Ark Zoo Farm, image by Pip through Wikipedia Commons. This post is based on a talk I gave to the Conway Hall Ethical Society on March 16, 2014. I would like to say that Michael Gove shows a knowledge of what counts as basic science that is some 300+ years out of date, but that would be too kind. Gove said there had been previous attempts to make science relevant, by linking it to contemporary concerns such as climate change or food scares. But he said: “What [students] need is a rooting in the basic scientific principles, Newton’s laws of thermodynamics and Boyle’s law.” [Times interview, reported here] As many readers will know, but the Education Secretary clearly doesn’t, Newton’s laws describe the motion of individual particles. Thermodynamics is intrinsically statistical, and was developed over a century after Newton’s death. Boyle’s Law is not a basic scientific principle, although it is a corollary of the basic principles followed by (ideal) gases. And here we have someone ignorant of these elementary facts, in a position of enormous power, telling the schools how to teach, and the examination boards how to examine. And in this same interview, he says he wants schools to form chains and brands, like businesses. Satire falls silent.
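To close the Boyle’s Law thread above with some numbers, here is a minimal sketch of the kinetic-theory picture. It is my own illustration, not part of any of the posts quoted, and the particle count, temperature and molecular mass are assumed round figures: pressure is computed from molecular impacts for the same sample of gas at a volume V and at V/2.

```python
# Pressure as the result of molecular impacts, computed for the same sample of nitrogen
# at a volume V and at V/2. Halve the volume, double the pressure, and P*V stays put:
# Boyle's Law as a consequence of kinetic theory, not a principle in its own right.
import random
import math

random.seed(1)

k_B = 1.380649e-23      # J/K, Boltzmann constant
T = 300.0               # K, room temperature
m = 4.65e-26            # kg, mass of one N2 molecule
N = 2.5e22              # number of molecules (roughly 1 litre's worth at 1 atm)

# Sample molecular velocity components from the Maxwell-Boltzmann distribution
sigma = math.sqrt(k_B * T / m)
samples = 100_000
mean_v2 = sum(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3))
              for _ in range(samples)) / samples

def pressure(volume_m3):
    # Kinetic theory: P = N m <v^2> / (3 V)
    return N * m * mean_v2 / (3.0 * volume_m3)

V1 = 1.0e-3             # 1 litre in m^3
V2 = V1 / 2

for V in (V1, V2):
    P = pressure(V)
    print(f"V = {V*1000:.1f} L  ->  P = {P/1000:.1f} kPa,  P*V = {P*V:.2f} J")
```

Halving the volume doubles the computed pressure while P x V stays essentially constant, which is Boyle’s Law emerging from the statistics of molecular bounces.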
Written by: Kimberly White

The World Bank has released a new report highlighting the impacts poor water quality has on economies and health. Quality Unknown: The Invisible Water Crisis shows how a combination of chemicals, sewage, bacteria, and plastics can remove oxygen from water supplies and transform water into a source of poison for ecosystems and people. According to the report, a lack of clean water can limit economic growth by as much as one-third. As pollution increases, downstream regions face adverse impacts on health, ecosystems, and agriculture, causing a drop in GDP. A key contributor to poor water quality is nitrogen. Frequently applied as an agricultural fertilizer, nitrogen makes its way into rivers, lakes, and oceans, where it is converted into nitrates. Excess levels of nitrates are harmful to aquatic ecosystems. High concentrations of nitrogen and phosphorus allow cyanobacteria to grow in large numbers, resulting in a harmful algal bloom. NOAA reports that some algal blooms produce toxins that can kill fish, mammals, and birds, and can cause human illness or death. In 2018, the red tide algal bloom in Florida made international headlines. Hundreds of aquatic animals such as sea turtles, dolphins, sharks, and manatees were reported dead and washed up on Florida beaches. Early exposure of children to nitrates can affect their growth and brain development, harming both their health and their adult earning potential. The report shared that every additional kilogram of nitrogen fertilizer runoff per hectare can increase the level of childhood stunting by as much as 19%. Adult earning potential can fall by as much as 2%. The report also highlights current and upcoming challenges with salinity in water and soil. As more intense droughts, storm surges, and rising water extraction occur due to climate change, agricultural yields will decrease. The World Bank reports that each year, the world loses enough food to saline water to feed 170 million people. Salinity poses a threat not only to agriculture but also to human health. While salt is most commonly linked to hypertension, the most acute impacts are seen when exposure to saline water occurs during pregnancy and infancy. A rise in drinking water salinity is linked to high levels of infant and neonatal death. According to the World Bank report, around 3% of infant deaths in the coastal subdivisions of Bangladesh can be attributed to increased drinking water salinity; in some subdivisions, that share rises to 20%. Maternal exposure to high levels of salt during pregnancy leads to a higher risk of preeclampsia and gestational hypertension. In Bangladesh, women living within 20km of the coastline are 1.3 times more likely to miscarry than women who live inland. In the report, the World Bank calls for immediate global, national, and local-level attention to the dangers faced in developed and developing countries. “Clean water is a key factor for economic growth. Deteriorating water quality is stalling economic growth, worsening health conditions, reducing food production, and exacerbating poverty in many countries,” said World Bank Group President David Malpass.
“Their governments must take urgent actions to help tackle water pollution so that countries can grow faster in equitable and environmentally sustainable ways.” The report provides a set of actions that countries can take to improve water quality, including implementing environmental policies and standards, effective enforcement systems, accurate monitoring of pollution loads, water treatment infrastructure supported with incentives for private investment, and reliable information disclosure to households to catalyze citizen engagement. Header Image Credit: Tom Archer/NASA
Learn German A1 Level From Scratch From An Experienced German Teacher! The course starts by introducing you to the German alphabet and works its way up to German grammar, vocabulary, and conversation. Apart from the 35 grammar lectures, this course also includes 17 animated videos which will make it easier for you to learn basic German conversations as well as the vocabulary. It is an interactive course with a lot of varied input and a very experienced and motivated German teacher who will lead you through the entire course. This course also includes 2 videos which will show and prepare you for the speaking and writing parts of the A1 German language certification. There is also an intermediate and a final exam to test what you have learned so far. You will also learn about the German school system, German culture, food, sights, and valuable tips on how to learn languages, and there is a dictation as well.
- Become Fully Competent In German A1 Level
- Master The Basics Of The German Language
- Learn The Grammar & Vocabulary Of German Language
- Learn The Basics Of Conversation In German
- Learn German Pronunciation, Speaking, & Writing
- Be Able To Confidently Introduce Yourself In German
- Be Able To Make An Appointment In German
- Be Able To Order Food In A Restaurant In German
- Learn To Count & Say Numbers In German
- Learn To Read & Write In German
- Learn what the speaking and writing parts of the A1 German language certificate look like and how to approach them
You will learn the following: The Alphabet | Conjugation Of Regular Verbs | The Nominative | Numbers | Gender Rules | Plural | Irregular Verbs | Tips On How To Learn A Language More Efficiently | Compound Words | The Word “es” | Formal Salutation | The Accusative | Word Order Of Main Clauses | Separable And Inseparable Verbs | Modal Verbs | The Dative | Negation | How To Form Questions | Conjunctions | The Imperative | Possessive Determiners | Demonstrative Pronouns | Indefinite Pronouns | Temporal Adverbs | Perfect Tense (Perfekt) | Introduction To The Simple Past (Präteritum) | Introduction To The Comparative | Indefinite Pronouns | Dictations
Pronunciation Rules | How To Introduce Yourself | How To Make Appointments | Using Public Transport | Looking For Apartments | Asking For / Giving Directions | How To Say The Time And Date | Doctor’s Visit | How To Order Food In A Restaurant
Colors | Family | Body | Clothing | At Home | Food & Beverages | Animals | Professions | Weather | Leisure Time Activities | Emotions & Adjectives | Countries & Nations | Means Of Transport
If you are not familiar with the Common European Framework of Reference for Languages, here is a brief overview: A1 – Complete beginner, A2 – Elementary, B1 – Intermediate, B2 – Upper intermediate, C1 – Advanced, C2 – Proficient.
I hope you join me inside the course and look forward to having you as one of my students. Join my course now!
Who this course is for:
- People Who Are Interested In Learning German
- People Who Want To Acquire A German A1 Level
- Because this is an absolute beginner’s course, you don’t have to know even one German word to attend this course.
Last Updated 3/2021
The Brain and Memory

In today’s episode, I will provide you with basic information about your brain and memory. When we talk about “memory,” we mean the ability to store and recall information from our brain. There are multiple types of memory:

Sensory memory: Sensory memory stores input from your senses, like sight, smell, or sound, for a very short time after the stimulus has disappeared. Since there is so much information coming in through your senses all the time (ever tried shutting your ears?), your brain discards this information quickly, retaining it for three seconds at most.

Short-term memory: According to Miller’s law, short-term memory holds seven, plus or minus two, pieces of information for up to 30 seconds. If your attention wanders, this information will be lost.

Long-term memory: There are no known limits to how much and how long information can be stored in long-term memory. This is where you must store information that you would like to keep and actively recall.

• Explicit long-term memory refers to storing information that you consciously experienced or learned. This type of memory can be further classified as “episodic” (things that have a time and place, such as when and where you ate your last muffin) and “semantic” (facts and general knowledge, such as the capital of Japan).

• Implicit memory is concerned with unconscious learning. The motor skills you develop when you learn to ride a bike or play table tennis are an example of implicit memory. When you ride a bike, you don’t have to consciously recall how to do it; you just ride along with your implicit memory.

All the techniques you will learn in this course help store information in your explicit long-term memory.

How Does Remembering Work, Anyway?

The memory process consists of three stages: acquisition, consolidation, and recall. Acquisition is the introduction of new information to the brain. Consolidation is the process in which this information becomes stable and connected in long-term memory. Recall is the ability to access this stored information.

Where Is Information Stored?

The brain is an amazing organ, and thanks to neuroplasticity, it can reorganize and form new neural pathways throughout your entire life. Memories are not stored in one specific area, but rather are distributed across the brain, where they are stored as groups of neurons that fire together in the same pattern created by the original experience. A single memory must often be reconstructed by firing many neurons from different areas of the brain. Short-term memories are processed in the prefrontal cortex, which is also responsible for other complex cognitive functions, such as decision making. Explicit long-term memories are stored in the hippocampus, neocortex, and amygdala. The hippocampus stores episodic memories, the neocortex stores semantic memories, and the amygdala connects memories to emotions. Memories connected to strong emotions like love and fear are harder to forget, as they are strengthened by the neural connections to the amygdala. This is why post-traumatic stress disorder (PTSD) is so serious and difficult to treat. Implicit long-term memories are stored in the basal ganglia and the cerebellum. The basal ganglia are especially involved with movement and sequences of motor activity (e.g., learning to ride a bike). The cerebellum stores memory of fine motor skills, such as holding a pencil or typing on a keyboard.

This episode was a crash course on what memory is and where it is stored.
Tomorrow, we’ll start with the first basic techniques for memorizing or, to use the jargon from this episode, for filling our explicit, semantic long-term memory with information. Get your mind ready!
Course material 2010–11 This course is not taken by NST or PPST students. Lecturer: Dr I.J. Wassell No. of lectures and practical classes: 11 + 7 This course is a prerequisite for Operating Systems and Computer Design (Part IB). The aims of this course are to present the principles of combinational and sequential digital logic design and optimisation at a gate level. The use of transistors for building gates is also introduced. - Introduction. Semiconductors to computers. Logic variables. Examples of simple logic. Logic gates. Boolean algebra. De Morgan’s theorem. - Logic minimisation. Truth tables and normal forms. Karnaugh maps. - Number representation. Unsigned binary numbers. Octal and hexadecimal numbers. Negative numbers and 2’s complement. BCD and character codes. Binary adders. - Combinational logic design: further considerations. Multilevel logic. Gate propagation delay. An introduction to timing diagrams. Hazards and hazard elimination. Fast carry generation. Other ways to implement combinational logic. - Introduction to practical classes. Prototyping box. Breadboard and Dual in line (DIL) packages. Wiring. Use of oscilloscope. - Sequential logic. Memory elements. RS latch. Transparent D latch. Master-slave D flip-flop. T and JK flip-flops. Setup and hold times. - Sequential logic. Counters: Ripple and synchronous. Shift registers. - Synchronous State Machines. Moore and Mealy finite state machines (FSMs). Reset and self starting. State transition diagrams. - Further state machines. State assignment: sequential, sliding, shift register, one hot. Implementation of FSMs. - Circuits. Solving non-linear circuits. Potential divider. N-channel MOSFET. N-MOS inverter. N-MOS logic. CMOS logic. Logic families. Noise margin. [2 lectures] At the end of the course students should - understand the relationships between combination logic and boolean algebra, and between sequential logic and finite state machines; - be able to design and minimise combinational logic; - appreciate tradeoffs in complexity and speed of combinational designs; - understand how state can be stored in a digital logic circuit; - know how to design a simple finite state machine from a specification and be able to implement this in gates and edge triggered flip-flops; - understand how to use MOS transistors. * Harris, D.M. & Harris, S.L. (2007). Digital design and computer architecture. Morgan Kaufmann. Katz, R.H. (2004). Contemporary logic design. Benjamin/Cummings. The 1994 edition is more than sufficient. Hayes, J.P. (1993). Introduction to digital logic design. Addison-Wesley. Books for reference: Horowitz, P. & Hill, W. (1989). The art of electronics. Cambridge University Press (2nd ed.) (more analog). Weste, N.H.E. & Harris, D. (2005). CMOS VLSI Design - a circuits and systems perspective. Addison-Wesley (3rd ed.). Mead, C. & Conway, L. (1980). Introduction to VLSI systems. Addison-Wesley. Crowe, J. & Hayes-Gill, B. (1998). Introduction to digital electronics. Butterworth-Heinemann. Gibson, J.R. (1992). Electronic logic circuits. Butterworth-Heinemann.
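To give a concrete flavour of two of the syllabus items above (Boolean algebra with De Morgan’s theorem, and binary adders), here is a small sketch in Python. It is illustrative only and not part of the official course materials: it checks De Morgan’s theorem over the full truth table and then chains 1-bit full adders into a ripple-carry adder.

```python
# Illustrative sketch only -- not part of the official course handout.
from itertools import product

# De Morgan: NOT(a AND b) == (NOT a) OR (NOT b), for every row of the truth table
for a, b in product([0, 1], repeat=2):
    lhs = 1 - (a & b)
    rhs = (1 - a) | (1 - b)
    assert lhs == rhs, "De Morgan's theorem should hold for all inputs"

def full_adder(a, b, cin):
    """1-bit full adder: sum = a XOR b XOR cin, carry out by majority."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_carry_add(x, y, width=4):
    """Add two unsigned integers by chaining full adders, least significant bit first."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

total, carry_out = ripple_carry_add(0b1011, 0b0110)   # 11 + 6
print(f"1011 + 0110 = {total:04b} with carry out {carry_out}")  # 0001 with carry 1, i.e. 17
```

The same full-adder equations (sum = a XOR b XOR carry-in, carry out by majority) are what the course implements directly in gates, before moving on to fast carry generation.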
Throughout China’s long past, no animal has affected its history as greatly as the horse. Ever since its domestication in north-eastern China around 5,000 years ago, the horse has been an integral figure in the creation and survival of the Middle Kingdom. Its significance has been such that as early as the Shang Dynasty (ca. 1600-1100 BC), horses were entombed with their owners for the afterlife. During the Western Zhou Dynasty (ca. 1100-771 BC) military strength was measured by the number of war chariots available to a particular kingdom. As the military significance of the horse increased, so did its role in court recreational activities. Social status and power were displayed by the number of horses owned and their appearance in lavish public displays. “Dancing” dressage horses delighted emperors in court ceremonies as early as the Han Dynasty, reaching their peak with the elaborate performances of the Tang Dynasty. During the Tang Dynasty, both polo and hunting from horseback became fashionable for both sexes. From this period, female court attendants on horses appear in art and in tomb sculpture. China created three of the most important innovations in equestrian history: the horse collar, the stirrup and a reliable and effective harnessing system based on the breast strap. One of the great paradoxes of Chinese history is that despite the horse’s significance to the survival of the empire, domestic horse-breeding programs were rarely successful. As a result, China was forced to import horses from its nomadic neighbours throughout most of the imperial period. Silk had been traded for horses during the Han Dynasty, notably under Emperor Wu (157-87 BC). Tea became the commodity of trade during the Song Dynasty (960-1279), and so began the history of ‘Tea for Horses’ markets. Tea production was controlled by China, which attempted to maintain the prices of tea at an artificially high level in order to acquire more horses.
Last updated: August 14, 2018 Wind Speed Scatter Plot - Grade Level: - Middle School: Sixth Grade through Eighth Grade - Lesson Duration: - 90 Minutes - State Standards: - NC.8.SP.1 Construct & interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Investigate & describe patterns such as clustering, outliers, positive or negative association, linear association - Additional Standards: - NC.8.SP.1 Investigate & describe patterns such as clustering, outliers, positive or negative association, linear association and nonlinear association. How does plotting data on a scatter plot help to determine if a relationship exists between bivariate data? The student will be able to: a) construct a scatter plot, b) interpret the constructed scatter plot to determine if a relationship exists, c) if a relationship exists, construct a line of correlation and determine if the relationship is positive or negative. The Wright brothers chose the Kitty Hawk location for many reasons. Based upon their journal entries they had three requirements for choosing a location for their test flights: seclusion, sand, and wind. When they wrote to the Weather Bureau they wanted to gain data about wind speeds around the United States. Kitty Hawk, North Carolina provided the wind speed, isolation, and geographic terrain to give the Wright brothers the necessities to conduct their test flights. Prior to the Wright Brothers National Memorial visit, the teacher should lead a discussion about the features that the Wright brothers needed when choosing a location. The following website can be used to help guide this discussion. Prior to the visit, also discuss the differences between an altimeter (measuring altitude) vs. an anemometer (measuring wind speed). Anemometer, altimeter app or device, graph paper, and a pencil. To download materials list and full lesson plan click the lesson materials link below. Students will construct an anemometer and take wind speed measurements at various heights at Wright Brothers National Memorial. They will then plot these coordinates on a coordinate plane in order to construct a scatter plot. Finally, they will look to see if a positive or negative relationship exists between the data points. Students will be placed in groups of 3-4. Each group will utilize the tools to build an anemometer in order to conduct their tests at various sites around the park. Station #1 (20-30 minutes): Each group will be given the following tools: a stopwatch, 5 small cups, 3 wooden dowels, scissors, duct tape, a single hole punch, and a water bottle (more than one group can share the scissors, duct tape, and hole punch). Each group will construct an anemometer using the following steps. If a technological device is available, a video tutorial is also linked below: Steps for Anemometer Construction: Step 1: In 4 cups punch a single hole at the top of the cup. On the 5th cup punch 4 evenly spaced holes around the rim of the cup. Then using the scissors cut a small opening in the bottom of this cup. Step 2: Take two of the dowels and put them through the holes of the 5th (center) cup so that they cross through the middle. Using the holes punched earlier place the remaining four cups on the ends of the wooden dowels. Step 3: Secure each of the cups with duct tape to the dowels. Place the third wooden dowel through the bottom hole of the center cup and secure the three dowels with duct tape in the center. 
Also, secure the bottom dowel with duct tape to the bottom of the center cup. Step 4: Place the center dowel into the water bottle and take it to the desired location in order to gauge wind speed. Using a marker, draw an “X” on one of the outer cups.
How to Calculate Wind Speed using an Anemometer: Using a stopwatch, count how many rotations the anemometer makes in 30 seconds by counting how many times the “X” passes. Each rotation counts as one mile per hour. Instructional Video: https://www.youtube.com/watch?v=w65F-ZyMw-c
Station #2 (20 minutes): Each group will visit various sites around the Wright Brothers National Memorial and record the altitude of their location (using the altimeter) and the wind speed using their self-constructed anemometer. They will organize their data in a t-chart and determine which of altitude and wind speed represents the independent (x) variable and which the dependent (y) variable.
Station #3 (10-20 minutes): Each group will use the graph paper to create a graph and plot the points collected from their data. Students should then determine if a relationship exists between the bivariate data. If so, they will then determine if a positive or negative relationship exists.
Anemometer - An anemometer is a device used for measuring the speed of wind, and is also a common weather station instrument. The term is derived from the Greek word anemos, which means wind, and is used to describe any wind speed instrument used in meteorology.
Scatter plot - A graph in which the values of two variables are plotted along two axes, the pattern of the resulting points revealing any correlation present.
Line of correlation - When two sets of data are strongly linked together, we say they have a high correlation. The word correlation is made of co- (meaning "together") and relation. Correlation is positive when the values increase together, and negative when one value decreases as the other increases.
Bivariate data - Bivariate data deals with two variables that can change and are compared to find relationships. If one variable is influencing another variable, then you will have bivariate data that has an independent and a dependent variable.
After the visit, students should compare their data in order to determine the validity of their findings. As a group, the teacher should lead a discussion about whether their findings substantiate the Weather Bureau’s records for wind speed at Kitty Hawk, North Carolina in 1903.
- Why do you think that the Wright brothers chose to start their flight from ground level and not on top of Kill Devil Hill?
- What advantages did the wind speed give the Wright brothers in terms of lift?
- Why was it necessary to extend the wing length in order to gain adequate lift?
- Before their successful flight, Wilbur and Orville had numerous failed attempts to achieve flight. What factors do you feel contributed to these failed attempts?
After data has been collected and graphed, students should plot a line of best fit on the graph and then determine the equation of the line. Using the equation of the line, they should predict the wind speed at various heights.
Suggested reading: The Wright Brothers, David McCullough
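For teachers who want a quick way to check a group’s results, here is a short illustrative sketch; it is not part of the NPS lesson plan, and the data points below are invented. It computes the correlation coefficient and the line of best fit from recorded (altitude, wind speed) pairs, which is the same judgement students make by eye on their scatter plots.

```python
# Illustrative sketch: decide whether the association between altitude and wind speed
# is positive or negative, and compute the line of best fit. The data are made up.
data = [          # (altitude in feet, wind speed in mph)
    (10, 9), (25, 11), (40, 12), (60, 15), (90, 16), (120, 19),
]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

sxy = sum((x - mean_x) * (y - mean_y) for x, y in data)
sxx = sum((x - mean_x) ** 2 for x, _ in data)
syy = sum((y - mean_y) ** 2 for _, y in data)

slope = sxy / sxx                      # mph gained per foot of altitude
intercept = mean_y - slope * mean_x    # predicted wind speed at ground level
r = sxy / (sxx * syy) ** 0.5           # correlation coefficient, between -1 and +1

print(f"best-fit line: speed = {slope:.3f} * altitude + {intercept:.1f}")
print(f"correlation r = {r:.2f}  ->  {'positive' if r > 0 else 'negative'} association")
```

A positive r indicates a positive association, and the slope and intercept give the equation of the line that students can then use to predict wind speed at other heights, as in the extension activity.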
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. April 29, 1998 Explanation: Giant spinning clouds of gas, similar to Earth's tornadoes, have been found on the Sun. Solar tornadoes, however, can be larger than the entire Earth, and sustain wind gusts over 1000 times stronger than their Earth counterparts. The SOHO spacecraft has found that solar tornadoes start low in the Sun's atmosphere and spiral outwards, gathering speed as they enter the Solar System. Earthlings have more to fear from Earth's own weather phenomena, though, because the high speed particles that result from solar tornadoes are easily stopped by the Earth's thick atmosphere. Earthlings may have much to learn from solar tornadoes, including details of how the solar wind and corona are powered, and how to better predict future solar particle storms that could damage sensitive satellites.
Activity 5: Practice Activity time: 20 minutes Materials for Activity - Newsprint, markers, and tape - Participants' journals, and writing instruments - Participants' clipboards with anklets (Workshop 1, Activity 3, Practice) - Beads, and waterproof markers and/or other decorations - Extra clipboards and string/hemp, and scissors Preparation for Activity - If needed, read instructions for making the anklets in the Before You Start section of the program Introduction and in Workshop 1, Decision Making. - Retrieve participants' clipboards with anklets, and participants' journals if these are also kept on-site. - Write on newsprint, and post: - When was a time that you practiced respect? - What made it possible for you to practice respect in this instance? - Have you ever experienced circumstances where it would have been helpful for you to show more respect? - What are the areas of your life now where you could practice respect to help you be the person you want to be? Description of Activity Participants understand how the use of respect affects their lives. Invite youth to take five minutes to journal, using the questions on newsprint as prompts, or to draw or meditate on the questions. Additional prompts you may add, while the group journals: - Are there individuals or groups of individuals that make respect a challenge? If so, what would help you to feel at least basic respect for them? - Are your friends respectful to others or do they like to put people down? If they frequently insult and disrespect others, how does this make you feel? What should you do about it? - How do fortune and fame influence whether or not we respect someone? Who does society or our peers tell us to respect and why? - On what do you base your level of respect? Do you respect someone more because they behave in virtuous ways? Do you respect someone more because they attain high levels of success in their chosen fields? Does the high school athlete being recruited by several colleges get more respect than the second string player or does the first chair violinist deserve more respect than the last chair player? - What does our first Principle ("The inherent worth and dignity of every person") say about the importance of respect to us as people of faith? - Is respect necessary for true feelings of compassion? After five minutes, ask participants to stop. Invite volunteers to share journal writing, to their level of comfort. When sharing is complete or after ten minutes, distribute participants' clipboards, new beads (one per youth), and decorating materials. Invite youth to take the next five minutes to decorate a bead while reflecting on their personal experiences with respect. Remind them that the bead will act as a reminder to use the virtue of respect. As participants finish, have them add this bead to the anklet they started in Workshop 1. If any participant missed Workshop 1, provide them with a clipboard, hemp, a bead for their name bead, and instruction to begin their anklet. Collect journals, clipboards, and anklet-making materials, and store for the next workshop.
A low-cost catalyst to produce hydrogen, which could make clean energy cheaper Scientists at the Indian Institute of Science (IISc) have developed an inexpensive catalyst, sodium cobalt metaphosphate, to produce hydrogen from water. Hydrogen is a clean energy source used in fuel cells. Sleek vehicles that run on hydrogen fuel cells have just started entering the market. The IISc scientists’ invention could be a major step in making fuel cells more affordable. Typically, ruthenium and platinum are the catalysts used to generate hydrogen from water. These catalysts are quite expensive, but without them the reaction would be slow. Sodium cobalt metaphosphate, which the IISc team has developed, is just as effective a catalyst as ruthenium and platinum, but costs only a fraction as much. “The material cost [of the sodium cobalt metaphosphate catalyst] is over two hundred times less than the current state-of-the-art ruthenium dioxide catalyst, and the reaction rate is also faster,” Ritambhara Gond, a doctoral candidate at the Materials Research Centre at IISc, said in a press release. According to a paper co-authored by Gond, published this March in the journal Angewandte Chemie, the high reaction rate could be due to a “chemical couple” between the catalyst and the carbon used in the reaction. Gond says that the catalyst can also be used over multiple cycles because it shows “high stability” even after the reaction. A gel to protect farmers from pesticides Researchers at the Institute for Stem Cell Science and Regenerative Medicine (InStem) in Kodigehalli have developed a gel that can protect farmers from the side effects of organophosphate pesticides. Organophosphate pesticides are very commonly used in India. While effective at killing insects, they can also harm humans and even cause death. Farmers are exposed to these pesticides through the skin, and many report experiencing pain after spraying them. According to a paper by the InStem team, published last October in the journal Science Advances, these pesticides inhibit the enzyme acetylcholinesterase, which causes the neurotransmitter acetylcholine to accumulate. This in turn can lead to neurological disorders, paralysis, and suffocation. A common way to prevent exposure is to use protective equipment like suits and gloves, but these are rarely used in India because they are uncomfortable and inconvenient in the local climate. Hence the researchers wanted to develop a chemical that would deactivate pesticides on the skin before they entered the body. They developed a ‘poly-Oxime gel’ that can be applied to the skin. Lab tests showed that rats to which the gel was applied did not show any symptoms of poisoning when exposed to the pesticides, but rats exposed to the pesticides without the gel died. “At present, we are conducting extensive safety studies in animals which will be completed in four months. Subsequently we plan a pilot study in humans to demonstrate the efficacy of the gel,” Dr Praveen Kumar Vemula, a senior member of the research team, was quoted as saying in India Science Wire. Materials that are superconductive at room temperature IISc researchers have found that gold films with silver nanoparticles embedded in them display superconductive properties at room temperature. Superconductors are substances that conduct electricity with no resistance to the flow of electrons below a certain temperature called the critical temperature.
The critical temperature for most conventional metallic superconductors is notoriously low – around -253°C – making their practical use difficult. Superconductors are used to make very powerful electromagnets, which find application in Maglev trains, particle accelerators, Magnetic Resonance Imaging (MRI) machines and so on. The IISc team tested 125 samples of thin films of silver nanoparticles embedded in a gold matrix, and found 10 of these to be superconductive. According to their paper, published this May on arxiv.org, more of the samples displayed superconductivity when exposure to oxygen was minimised. The team also found that increasing the concentration of silver and reducing the exposure of the film to the atmosphere increased the temperature at which superconductivity was exhibited. The material displayed superconductivity at temperatures as high as 77°C – over 300 degrees higher than the critical temperature of conventional metallic superconductors! “If this [result] is correct, it would be the greatest work done in India since the discovery of the Raman effect,” physicist and IISc professor T V Ramakrishnan was quoted as saying in The Hindu. A potential natural cure for pancreatic cancer A team of researchers at InStem have found that Urolithin A (Uro A), a compound derived from pomegranates, reduces the size of pancreatic ductal adenocarcinoma (PDAC) tumors. PDAC is the seventh leading cause of cancer-related deaths worldwide, and the third leading cause in the United States. Often diagnosed at an advanced stage, only nine percent of patients survive five years beyond diagnosis. The InStem team’s paper, published this February in the journal Molecular Cancer Therapeutics, says that traditional chemotherapy involving the drug gemcitabine has not been very effective against PDAC. The paper states that the team based its research on the fact that there is an inverse relationship between berry consumption (pomegranates, strawberries, black raspberries) and PDAC occurrence. The correlation is likely due to chemicals named ellagitannins in berries. In the body, ellagitannins are broken down to form ellagic acid. Ellagic acid is further metabolised to form Urolithins A, B, and C. Of these, Urolithin A is responsible for the antioxidant and anti-inflammatory properties of ellagitannins; it blocks the oncogenic pathways in PDAC cells. The team genetically engineered mice that would develop PDAC and would mimic the effects of the disease in humans. The mice developed an invasive form of the cancer within four and a half weeks, and were given treatment at four weeks. The mice that were given Uro A had a higher overall survival rate compared to those that weren’t treated. These breakthroughs show that Bengaluru scientists are making their mark in the global scientific community.
The first step of a speech recognition system is feature extraction. MFCC (Mel-frequency cepstral coefficients) is a feature that describes the envelope of the short-term power spectrum and is widely used in speech recognition systems.

I. Mel filter

Each speech signal is divided into several frames, and each frame corresponds to a spectrum (computed by FFT) that describes the relationship between frequency and signal energy. A Mel filter bank is a set of band-pass filters. On the Mel-frequency axis the passbands all have the same width, but on the Hertz axis the filters are narrow and densely spaced at low frequencies and wide and sparsely spaced at high frequencies. This simulates the non-linear perception of the human ear, which discriminates better between low frequencies than between high frequencies. The relationship between Hertz frequency f and Mel frequency is mel(f) = 2595 * log10(1 + f / 700), or equivalently 1125 * ln(1 + f / 700), which is the form used in the code below.

Assuming there are M band-pass filters Hm(k), 0 <= m < M, on the Mel spectrum, each with centre frequency f(m), the transfer function of each filter is triangular: it rises linearly from f(m - 1) to a peak of 1 at f(m) and falls linearly back to zero at f(m + 1). The following figure shows a Mel filter bank on the Hertz axis with 24 band-pass filters.

Features of MFCC

MFCC coefficient extraction steps:
(1) split the speech signal into frames
(2) compute the power spectrum of each frame via the Fourier transform
(3) pass the short-time power spectrum through the Mel filter bank
(4) take the logarithm of the filter-bank outputs
(5) apply the DCT to the log filter-bank outputs
(6) keep the 2nd to 13th cepstral coefficients as the features of the short-term speech signal

import wave

import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import dct


def read(data_path):
    '''Read a speech signal from a WAV file.'''
    f = wave.open(data_path, 'rb')
    nchannels, sampwidth, framerate, nframes = f.getparams()[:4]  # channels, sample width, sampling rate, samples
    str_data = f.readframes(nframes)  # raw audio as a byte string
    f.close()
    wavedata = np.frombuffer(str_data, dtype=np.short)     # bytes -> 16-bit integers
    wavedata = wavedata * 1.0 / np.max(np.abs(wavedata))   # normalise the amplitude to [-1, 1]
    return wavedata, nframes, framerate


def enframe(data, win, inc):
    '''Split the speech data into overlapping frames.
    Input:  data (1-D array): speech signal
            win (int or array): window length, or window coefficients
            inc (int): hop size (how far the window moves each step)
    Output: f (2-D array): one frame per row
    '''
    nx = len(data)
    try:
        nwin = len(win)
    except TypeError:
        nwin = 1
    wlen = win if nwin == 1 else nwin
    nf = int(np.fix((nx - wlen) / inc) + 1)        # number of frames
    starts = inc * np.arange(nf).reshape(-1, 1)    # start index of each frame
    offsets = np.arange(wlen).reshape(1, -1)       # offsets within a frame
    return data[starts + offsets]                  # (nf, wlen) matrix of samples


def point_check(wavedata, win, inc):
    '''Endpoint (voice activity) detection.
    Input:  wavedata (1-D array): original speech signal
    Output: the frames between the detected start point and end point
    '''
    # 1. Short-time zero-crossing rate
    FrameTemp1 = enframe(wavedata[0:-1], win, inc)
    FrameTemp2 = enframe(wavedata[1:], win, inc)
    signs = (FrameTemp1 * FrameTemp2) < 0            # adjacent samples with different signs cross zero
    diffs = np.abs(FrameTemp1 - FrameTemp2) > 0.01   # ignore tiny fluctuations around zero
    zcr = (signs & diffs).sum(axis=1)
    # 2. Short-time energy
    amp = np.abs(enframe(wavedata, win, inc)).sum(axis=1)
    # Thresholds
    ZcrLow = max([round(np.mean(zcr) * 0.1), 3])     # low zero-crossing-rate threshold
    ZcrHigh = max([round(max(zcr) * 0.1), 5])        # high zero-crossing-rate threshold
    AmpLow = min([min(amp) * 10, np.mean(amp) * 0.2, max(amp) * 0.1])   # low energy threshold
    AmpHigh = max([min(amp) * 10, np.mean(amp) * 0.2, max(amp) * 0.1])  # high energy threshold
    # Endpoint detection
    MaxSilence = 8    # longest tolerated gap inside speech (frames)
    MinAudio = 16     # shortest accepted speech segment (frames)
    Status = 0        # 0: silence, 1: transition, 2: speech, 3: finished
    HoldTime = 0      # speech duration
    SilenceTime = 0   # gap duration
    StartPoint = 0
    print('Start endpoint detection')
    for n in range(len(zcr)):
        if Status == 0 or Status == 1:
            if amp[n] > AmpHigh or zcr[n] > ZcrHigh:
                StartPoint = n - HoldTime
                Status = 2
                HoldTime = HoldTime + 1
                SilenceTime = 0
            elif amp[n] > AmpLow or zcr[n] > ZcrLow:
                Status = 1
                HoldTime = HoldTime + 1
            else:
                Status = 0
                HoldTime = 0
        elif Status == 2:
            if amp[n] > AmpLow or zcr[n] > ZcrLow:
                HoldTime = HoldTime + 1
            else:
                SilenceTime = SilenceTime + 1
                if SilenceTime < MaxSilence:
                    HoldTime = HoldTime + 1
                elif (HoldTime - SilenceTime) < MinAudio:
                    Status = 0
                    HoldTime = 0
                    SilenceTime = 0
                else:
                    Status = 3
        elif Status == 3:
            break
    HoldTime = HoldTime - SilenceTime
    EndPoint = StartPoint + HoldTime
    return FrameTemp1[StartPoint:EndPoint]


def mfcc(FrameK, framerate, win):
    '''Extract MFCC features.
    Input:  FrameK (2-D array): framed speech signal
            framerate: sampling rate
            win: frame length (number of FFT points)
    '''
    # Mel filter bank
    mel_bank, w2 = mel_filter(24, win, framerate, 0, 0.5)
    FrameK = FrameK.T
    # Power spectrum of each frame
    S = abs(np.fft.fft(FrameK, axis=0)) ** 2
    # Pass the power spectrum through the filter bank
    P = np.dot(mel_bank, S[0:w2, :])
    # Take the logarithm
    logP = np.log(P)
    # DCT of the log filter-bank outputs; keep cepstral coefficients 2 to 13
    num_ceps = 12
    D = dct(logP, type=2, axis=0, norm='ortho')[1:(num_ceps + 1), :]
    return S, mel_bank, P, logP, D


def mel_filter(M, N, fs, l, h):
    '''Build a triangular Mel filter bank.
    Input:  M (int): number of filters
            N (int): number of FFT points
            fs (int): sampling rate
            l (float): low-frequency factor
            h (float): high-frequency factor
    Output: melbank (2-D array): the filter bank, and w2 = N / 2 + 1
    '''
    fl = fs * l                        # lowest frequency covered by the filter bank
    fh = fs * h                        # highest frequency covered by the filter bank
    bl = 1125 * np.log(1 + fl / 700)   # convert the band edges to Mel
    bh = 1125 * np.log(1 + fh / 700)
    B = bh - bl                        # bandwidth on the Mel scale
    y = np.linspace(0, B, M + 2)       # M + 2 points equally spaced in Mel
    print('Mel-spaced points', y)
    Fb = 700 * (np.exp(y / 1125) - 1)  # convert the Mel points back to Hz
    print(Fb)
    w2 = int(N / 2 + 1)
    df = fs / N
    freq = [int(n * df) for n in range(0, w2)]  # frequency of each FFT bin
    print(freq)
    melbank = np.zeros((M, w2))
    for k in range(1, M + 1):
        f1, f0, f2 = Fb[k - 1], Fb[k], Fb[k + 1]  # left edge, centre, right edge
        n1, n0, n2 = np.floor(f1 / df), np.floor(f0 / df), np.floor(f2 / df)
        for i in range(1, w2):
            if n1 <= i <= n0:
                melbank[k - 1, i] = (i - n1) / (n0 - n1)   # rising slope
            if n0 <= i <= n2:
                melbank[k - 1, i] = (n2 - i) / (n2 - n0)   # falling slope
        plt.plot(freq, melbank[k - 1, :])
    plt.show()
    return melbank, w2


if __name__ == '__main__':
    data_path = 'audio_data.wav'
    win = 256
    inc = 80
    wavedata, nframes, framerate = read(data_path)
    FrameK = point_check(wavedata, win, inc)
    S, mel_bank, P, logP, D = mfcc(FrameK, framerate, win)

The above is the whole content of this article. I hope it will help you in your study.
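As a quick sanity check of the Hz-to-Mel mapping used in the code above, the short snippet below (standalone, with arbitrarily chosen test frequencies) converts a few frequencies to the Mel scale and back again:

import numpy as np

def hz_to_mel(f):
    # the same 1125 * ln(1 + f / 700) form used by mel_filter above
    return 1125.0 * np.log(1.0 + np.asarray(f, dtype=float) / 700.0)

def mel_to_hz(m):
    # inverse mapping
    return 700.0 * (np.exp(np.asarray(m, dtype=float) / 1125.0) - 1.0)

freqs = np.array([100.0, 500.0, 1000.0, 4000.0, 8000.0])
mels = hz_to_mel(freqs)
print(mels)             # Mel values grow more slowly than Hz at high frequencies
print(mel_to_hz(mels))  # the round trip recovers the original frequencies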
Birds as pollinators Essay Pollination, whereby pollen grains (male) are transferred to the ovule (female) of a plant, is an irreplaceable step in the reproduction of seed plants. Most plant fruits are unable to develop without pollination taking place and many beautiful flower varieties would die out if not pollinated. Bees and insects are the most common pollinators, but bats and birds are known to do their share in this vital activity. The agent moving the pollen, whether it is moths, bees, bats, wind or birds, is called the “pollinator” and the plant providing the pollen is called the “polliniser”. Biotic pollination is the term used when pollination is aided by a pollinator. When this is carried out by birds, the term used is Ornithophily. Hummingbirds, spider hunters, sunbirds, honeycreepers and honeyeaters are the most common pollinator bird species. Plants that make use of pollination by birds commonly have bright red, orange or yellow flowers and very little scent. This is because birds have a keen sense of sight for colour, but generally little or no sense of smell. Bird pollinated flowers produce copious amounts of nectar to attract and feed the birds that are performing the pollination, as well as having pollen that is usually large and sticky to cling to the feathers of the bird. Hummingbirds are small birds which are found only in the Americas. Their ability to hover in mid-air by flapping their wings up to eighty times per second, plus their long curved beaks and a love for sweet nectar, makes them perfect pollinators. Hummingbirds burn up a tremendous amount of energy as they dart about from flower to flower and so they are attracted to the flowers that will give them something in return for their pollinating efforts. The flowers they are particularly fond of include shrimp plants, verbenas, bee balm, honeysuckles, fuchsias, hibiscus and bromeliads. Sunbirds and spider hunters feed mainly on nectar, although when feeding young, they often also eat insects. Sunbird species can take nectar while hovering, but usually perch to feed. Their long curved beaks and brush-tipped tubular tongues make these birds particularly suited to feeding on and pollinating tubular flowers. Honeyeaters resemble hummingbirds in many ways, but are not capable of lengthy hovering flight. Honeyeaters quickly flit from perch to perch, stretching or hanging upside down in order to reach the nectar with their highly developed brush-tipped tongue, while at the same time serving as a pollinator. Birds are not known for pollinating food growing crops, but this does not mean that they are not important. If it were not for the assistance of our feathered friends, many plant species would be in danger of extinction. Attained from: http://www. birds. com/blog/the-important-role-of-birds-in-pollination/ on 20th Nov, 2012. Globally, bird-pollinated plants can be separated into two groups, one consisting of species pollinated by specialist nectarivores, and the other of plants pollinated by occasional nectarivores. There are marked differences in nectar properties among the two groups, implying that there has been pollinator-mediated selection on these traits. This raises the possibility that variation in bird assemblages among populations of a plant species could lead to the evolution of intraspecific variation in floral traits. We examined this hypothesis in Kniphofia linearifolia, a common and widespread plant in southern Africa. 
Although bees are common visitors to flowers of this species, exclusion of birds from inflorescences led to significant reductions in seed set, indicating that the species is primarily bird-pollinated. We showed that bird pollinator assemblages differ markedly between five different populations of K. inearifolia, and that variation in flower morphology and nectar properties between these populations are associated with the dominant guild of bird visitors at each population. We identified two distinct morphotypes, based on corolla length, nectar volume and nectar concentration, which reflect the bird assemblages found in each type. Further work is needed to establish if a natural geographic mosaic of bird assemblages are the ultimate cause of differentiation in floral traits in this species Authors: Brown, Mark1 [email protected] ac. za Downs, Colleen1 Johnson, Steven1 Source: Plant Systematics & Evolution; Jul2011, Vol. 294 Issue 3/4, p199-206, 8p, 1 Color Photograph, 3 Charts, 3 Graphs, 1 Map Attained from: http://web. ebscohost. com/ehost/detail? sid=1fe8f789-9d74-4021-8fda-9059204eb8a8%40sessionmgr115&vid=1&hid=123&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d# db=a9h&AN=61844127 on 20th Nov, 2012 Birds That Pollinate Flowers Many people in North and South America think of the hummingbird when they think of a bird that pollinates flowers. However, there are over 2,000 species of birds that pollinate flowers, and hummingbird species are just some of the bird pollinators. Other birds that pollinate flowers include the Hawaiian honeycreeper, certain parrot species in New Guinea, tropical sunbirds and the Australian honeyeater. What Flowers Do Birds Pollinate? Different flowers suit different pollinators. Birds, bees, beetles and butterflies all pollinate flowers, and the flowers and the pollinators suit each other. Flowers that birds can pollinate tend to look similar. They tend to be long, tubular or cup-shaped flowers like honeysuckles. This shape allows a bird to reach into the flower and pollinate it when it places its beak into the flower to look for nectar. Bird-pollinated flowers are often bright colours like red, yellow or orange. Bright red and pink flowers are particularly attractive to birds. Think of the columbine, many honeysuckles and the fuschia in the hanging planter. These plants are very attractive to bird pollinators. The nectar is deep within the flower so that the bird needs to probe the flower with its beak. While it probes the flower, it collects the pollen on its head and back. Birds look to flowers for nectar, and the pollination is what flowers get out of the deal. Since birds don’t smell very well, flowers that attract birds do not need to have a scent, although some of them may be scented. Species of Flowers That Attract Birds * If you are creating a flower garden for the birds, what species of flowers should you grow in your garden? Grow honeysuckles on trellises and up existing plants. These plants add a beautiful scent and attract birds with their nectar. Add clematis to the trellis for its large and beautiful flowers that also attract bird pollinators. Place fuschias in planter boxes and hanging baskets. Grow columbines at the base of trees and shrubs in the partial shade. Impatiens and phloxes also attract birds and create a lovely cottage garden look. Looking for a shrub to plant for the birds? Butterfly bush attracts both bird and butterfly pollinators. Azaleas come in vibrant colours and will also attract avian pollinators. 
Attained from: Flowers Pollinated by Birds | eHow. com http://www. ehow. com/list_6502456_flowers-pollinated-birds. html#ixzz2ClZEkoi4 on 20th Nov, 2012 Pollination and Plant Families Some plants, such as pine and grass, are wind-pollinated, so their reproductive strategy is to produce large amounts of pollen in hopes that some makes it to the female. Many other plants depend on animals to spread their pollen. In that case, the animal involved is called a pollinator. Not all animals can pollinate all plants, but certain types of animals such as birds, butterflies, moths, bees, beetles, wasps, bats, and flies, typically pollinate certain types of plants. This is a mutualistic relationship where both the plant and the pollinator(s) benefit each other. Some plants are very specific with respect to what animal is able to serve as a pollinator, and have special modifications (special shape, etc. ) to attract that pollinator or exclude other would-be pollinators. Others plants are more general and are more attractive to a wider variety of pollinators, but the risk here is that the pollen may not get to the “right” species if the pollinator visits a different type of flower next. | | | | There are special cases where a plant species and a species of pollinator are totally dependent on each other. These cases are examples of coevolution, the joint evolution of a plant and its animal pollinator. In coevolution, each of the species involved serves as a source of natural selective pressure on the other. A more formal definition for coevolution is “the mutual evolutionary influence between two species. ” One example of this type of coevolution would be the yucca plant and the yucca moth. The female moth lays her eggs in the flowers, simultaneously pollinating the plant, and the caterpillars develop within the seeds in the ovary of the plant. For the plant, the loss of a few seeds to caterpillars is a price worth paying to insure pollination. The yucca moth is the only animal that is the right size and shape to pollinate yucca flowers. | | In order to make use of animal pollinators, plants must: 1. supply some reward, frequently food, for the pollinator, 2. advertise the presence of this to attract visitors, and 3. Have a way of putting pollen on the pollinator so it is transferred to the next plant/flower. | | The “reward” is not always food (nectar). There is a tropical orchid with a flower that looks and smells like the female of a certain species of wasp. Males of this species emerge one week before the females, but these orchids are already blooming. The male wasps smell the orchids, “think” they’ve found a female, and try to copulate. The texture of the flowers is such that they “feel” like a female wasp, but the poor males just can’t get it to work, leave to find a more cooperative mate, and end up transferring pollen instead. The adaptations exhibited by any given flower depend on the type of pollinator the flower is designed to attract. Various pollinators have differing adaptations and means of gathering pollen and/or whatever nectar, etc. flower has to offer. | Bees don’t see red, but do see blue, yellow, and ultraviolet. Thus, bee-pollinated flowers are mostly yellow (some blue) with ultraviolet nectar guides or “landing patterns. ” The flowers typically have a delicate, sweet scent, which the bees can smell. Usually the nectar is at the end of some type of small, narrow floral tube which is the right length to fit the tongue of the particular species bee that pollinates that plant. 
Bee-pollinated flowers typically have a sturdy, irregular shape with some type of specifically-designed landing platform. An example of this is snapdragons, where only a bee of just the right size and weight is able to trigger the flower to open, while all others (which are too small or too heavy) are excluded. Typically, pollen sticks to the “fur” of a bee or else the bees collect the pollen in specially-modified areas on their legs. | | | | Butterflies are diurnal and have good vision but a weak sense of smell. They can see red. Butterfly-pollinated flowers are brightly-coloured (even red) but odourless. These flowers are often in clusters and/or are designed to provide a landing platform. Butterflies typically walk around on a flower cluster, probing the blossoms with their tongues. Examples of butterfly-pollinated flowers would be many members of the plant family Compositae, where many small flowers are arranged into a flat-topped head, and other plants, such as the milkweeds, where the flowers occur in large clusters. The individual flowers are typically tubular with a tube of suitable length for butterflies. | | Most moths are nocturnal and have a good sense of smell. Moth-pollinated flowers typically are white or pale colours so they will be at least somewhat visible on a moonlit night. Often, moth-pollinated flowers may only be open at night. They typically use a strong, sweet perfume to advertise their presence in the darkness, and typically this odour is only exuded at night (evolutionarily, it doesn’t make sense to waste energy producing attractant in the daytime when it is useless). Moths are hover-feeders, so these flowers have deep tubes to precisely match the length of a specific moth’s tongue. One famous story relates that Charles Darwin found an extra ordinarily long, tubular flower in South America and predicted that someday, someone would find a moth with a tongue of matching length. After much searching, around a hundred years later, indeed, this moth was found. More recently, a flower with an even-longer tube was found on Madagascar, and Dr. Gene Kritsky out at Mt. St. Joe has been interested in trying to find the “missing” moth that goes with it. In moth-pollinated flowers, the petals are flat or bent back so the moth can get in, and hover close to the flower. | Birds, especially hummingbirds, have good eyes and seem to be especially attracted to red. However, birds have a poor sense of smell (yes, it is OK to carefully put fallen babies back into the nest — the parents cannot smell your scent). Bird-pollinated flowers are brightly-coloured, especially red but lack odour. Their petals are recurved to be out of the way. Hummingbirds are hover-feeders, so the flowers are designed to dust the birds head/back with pollen as the bird probes the flower for nectar. Flowers such as Columbine, red Salvia, and Fuchsia are favourite nectar sources for hummingbirds. | | Bats are nocturnal with a good sense of smell. While many bats depend on echolocation rather than sight to navigate, those species which serve as pollinators do have good vision. Also, bats which pollinate flowers have long, bristly tongues to lap up nectar and pollen. Since these flowers are open at night, they are white or light-coloured so they’ll be visible in moonlight. Bat-pollinated flowers have a musty smell like the smell of bats. These flowers are large and sturdy to withstand insertion of the bat’s head as it licks nectar and pollen. Flies are attracted to rotting meat. 
Thus, fly-pollinated flowers may be nondescript or brownish red in colour, and typically have a strong, “bad,” rotten sort of smell. Attained from: http://biology.clc.uc.edu/courses/bio106/pollinat.htm on 20th Nov, 2012
A bishop is a piece in the strategy board game of chess. Each player begins the game with two bishops. One starts between the king's knight and the king, the other between the queen's knight and the queen. In algebraic notation the starting squares are c1 and f1 for white's bishops, and c8 and f8 for black's bishops. The bishops may be differentiated according to which wing they begin on, i.e. the "king's bishop" and "queen's bishop" respectively. However, after a bishop has moved several times, it may be difficult to remember where it came from. Therefore it is more common to refer to the "light-squared" and "dark-squared" bishops, as each always remains on either the white or black squares. The bishop has no restrictions in distance for each move, but is limited to diagonal movement, forward and backward. Bishops cannot jump over other pieces. As with most pieces, a bishop captures by occupying the square on which an enemy piece sits. Because the bishop has access to only thirty-two squares of the board, it is rather weaker than the rook to which all sixty-four squares of the board are accessible. Furthermore, a rook on an empty board always attacks fourteen squares, whereas a bishop attacks only seven to thirteen depending on how near it is to the center. A rook is generally worth about two pawns more than a bishop. Bishops are approximately equal in strength to knights. Bishops gain in relative strength as more and more pieces are traded, and lines open up on which they can operate. When the board is empty, a bishop can operate on both wings simultaneously, whereas a knight takes several moves to hop across. In an open endgame, a pair of bishops is decidedly superior to a bishop and a knight or two knights. A player possessing a pair of bishops has a strategic weapon in the form of a long-term threat to trade down to an advantageous endgame. On the other hand, in the early going a bishop may be hemmed in by pawns of both players, and thus be inferior to a knight which can hop over obstacles. Furthermore, on a crowded board a knight has many opportunities to fork two enemy pieces. While it is technically possible for a bishop to fork, practical opportunities are rare. A bishop which has trouble finding a good square for development in the center may be fianchettoed, for example pawn g2-g3 and bishop f1-g2. This forms a strong defense for the castled king on g1 and the bishop can often exert pressure on the long diagonal h1-a8. After a fianchetto, however, the bishop should not be given up lightly, because then the holes around the king can easily prove fatal. A player with only one bishop should generally place his pawns on squares of the color that the bishop can't move to. This allows the player to control squares of both colors, allows the bishop to move freely among the pawns, and helps fix enemy pawns on squares on which they can be attacked. A bishop which is impeded by friendly pawns is sometimes disparagingly called a "tall pawn", or more simply, a "bad bishop". In endgames where each side has only one bishop, and the bishops are on squares of opposite colors, a draw becomes more likely. Each side tends to gain control of squares of one color, and a deadlock results. However, if the queens are still on the board, the opposite colored bishops may make the game less drawish, because each side has an extra attacker on squares the other player has trouble defending. In endgames with same-colored bishops, even a minute advantage may be enough to win.
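The geometry described above is easy to verify with a few lines of code. The sketch below is plain Python with no chess library; the 0-7 coordinate convention and function name are assumptions made purely for illustration. It generates the squares a bishop attacks from a given square and reproduces the seven-to-thirteen range mentioned earlier:

def bishop_attacks(file, rank, blockers=frozenset()):
    """Squares a bishop on (file, rank) attacks, using 0-7 coordinates.
    Movement stops at the first occupied square in each diagonal direction;
    that square is included, since it could be captured if it holds an enemy piece."""
    squares = []
    for df, dr in ((1, 1), (1, -1), (-1, 1), (-1, -1)):  # the four diagonals
        f, r = file + df, rank + dr
        while 0 <= f < 8 and 0 <= r < 8:
            squares.append((f, r))
            if (f, r) in blockers:   # bishops cannot jump over pieces
                break
            f, r = f + df, r + dr
    return squares

# On an empty board a corner bishop reaches 7 squares, a central one 13.
print(len(bishop_attacks(0, 0)))   # 7  (a1)
print(len(bishop_attacks(3, 3)))   # 13 (d4)
# A piece sitting on one of the diagonals cuts the range short.
print(len(bishop_attacks(3, 3, blockers={(5, 5)})))  # 11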
Nagoya, Japan - Professor Takashi Yoshimura and colleagues of the Institute of Transformative Bio-Molecules (WPI-ITbM) of Nagoya University have finally found the missing piece in how birds sense light by identifying a deep brain photoreceptor in Japanese quails, in which the receptor directly responds to light and controls seasonal breeding activity. Although it has been known for over 100 years that vertebrates apart from mammals detect light deep inside their brains, the true nature of the key photoreceptor has remained to be a mystery up until now. This study led by Professor Yoshimura has revealed that nerve cells existing deep inside the brains of quails, called cerebrospinal fluid (CSF)-contacting neurons, respond directly to light. His studies also showed that these neurons are involved in detecting the arrival of spring and thus regulates breeding activities in birds. The study published online on July 7, 2014 in Current Biology is expected to contribute to the improvement of production of animals along with the deepening of our understanding on the evolution of eyes and photoreceptors. Many organisms apart from those living in the tropics use the changes in the length of day (photoperiod) as their calendars to adapt to seasonal changes in the environment. In order to adapt, animals change their physiology and behavior, such as growth, metabolism, immune function and reproductive activity. "The mechanism of seasonal reproduction has been the focus of extensive studies, which is regulated by photoperiod" says Professor Yoshimura, who led the study, "small mammals and birds tend to breed during the spring and summer when the climate is warm and when there is sufficient food to feed their young offspring," he continues. In order to breed during this particular season, the animals are actually sensing the changes in the seasons based on changes in day length. "We have chosen quails as our targets, as they show rapid and robust photoperiodic responses. They are in the same pheasant family as the roosters and exhibit similar characteristics. It is also worth noting that Toyohashi near Nagoya is the number one producer of quails in Japan," explains Professor Yoshimura. The reproductive organs of quails remain small in size throughout the year and only develop during the short breeding season, becoming more than 100 times its usual size in just two weeks. In most mammals including humans, eyes are the exclusive photoreceptor organs. Rhodopsin and rhodopsin family proteins in our eyes detect light and without our eyes, we are unable to detect light. On the other hand, vertebrates apart from mammals receive light directly inside their brains and sense the changes in day length. Therefore, birds for example, are able to detect light even when their eyes are blindfolded. Although this fact has been known for many years, the photoreceptor that undertakes this role had not yet been clarified. "We had already revealed in previous studies reported in 2010 (PNAS) that a photoreceptive protein, Opsin-5 exists in the quail's hypothalamus in the brain," says Professor Yoshimura. This Opsin-5 protein was expressed in the CSF-contacting neurons, which protrudes towards the third ventricle of the brain. "However, there was no direct evidence to show that the CSF-contacting neurons were detecting light directly and we decided to look into this," says Professor Yoshimura. 
Yoshimura's group has used the patch-clamp technique for brain slices in order to investigate the light responses (action potential) of the CSF-contacting neurons. As a result, it was found that the cells were activated upon irradiation of light. "Even when the activities of neurotransmitters were inhibited, the CSF-contacting neurons' response towards light did not diminish, suggesting that they were directly responding to the light," says Professor Yoshimura excitedly. In addition, when the RNA interference method was used to inhibit the activity of the Opsin-5 protein expressed in the CSF-contacting neurons, the secretion of the thyroid-stimulating hormone from the pars tuberalis of the pituitary gland was inhibited. The thyroid-stimulating hormone, so-called the "spring calling hormone" stimulates another hormone, which triggers spring breeding in birds. "We have been able to show that the CSF-contacting neurons directly respond to light and are the key photoreceptors that control breeding activity in animals, which is what many biologists have been looking for over 100 years," elaborates Professor Yoshimura. There have been many theories on the role of CSF-contacting neurons in response to light. "Our studies have revealed that these neurons are actually the photoreceptors working deep inside the bird's brain. As eyes are generated as a protrusion of the third ventricle, CSF-contacting neurons expressing Opsin-5, can be considered as an ancestral organ, which shares the same origin as the visual cells of the eyes. Opsin-5 also exists in humans and we believe that this research will contribute to learning how animals regulate their biological clocks and to find effective bio-molecules that can control the sensing of seasons," says Professor Yoshimura. Professor Yoshimura's quest to clarify how animals measure the length of time continues. This article "Intrinsic photosensitivity of a deep brain photoreceptor" by Yusuke Nakane, Tsuyoshi Shimmura, Hideki Abe and Takashi Yoshimura is published online on July 7, 2014 in Current Biology, Volume 24, Issue 13, Pages R596-597. The World Premier International Research Center Initiative (WPI) for the Institute of Transformative Bio-Molecules (ITbM) at Nagoya University in Japan is committed to advance the integration of synthetic chemistry, plant/animal biology and theoretical science, all of which are traditionally strong fields in the university. As part of the Japanese science ministry's MEXT program, the ITbM aims to develop transformative bio-molecules, innovative functional molecules capable of bringing about fundamental change to biological science and technology. Research at the ITbM is carried out in a "Mix-Lab" style, where international young researchers from multidisciplinary fields work together side-by-side in the same lab. Through these endeavors, the ITbM will create "transformative bio-molecules" that will dramatically change the way of research in chemistry, biology and other related fields to solve urgent problems, such as environmental issues, food production and medical technology that have a significant impact on the society.
When a person is sick, a doctor makes a diagnosis based on the unique needs of the individual. The doctor seeks to strike the right balance of treatments in order to bring the patient to a place of wholeness and wellness. The doctor begins this process by asking the question, “Exactly what is wrong with my patient?” The answer to this question is the diagnosis, which will drive and dictate what will be prescribed. When a parent or caregiver analyzes a child’s reading problems, he or she is much like a doctor making a diagnosis. Adults caring for children have valuable knowledge about how the child learns and can evaluate needs with specificity. The child may need interventions, basic support, enrichment or all of the above. To determine what is needed, one must consider what is (and what is not) working for the reader. Once the diagnosis is made, the appropriate prescription can be written, and the problem can be corrected. The specific nature of reading problems can be revealed by taking a look at the common causes of reading difficulties. Listed here are the types of reading difficulties defined under Step 1: Diagnose. Click on these links to learn more about these common reading problems: Child Needs Glasses? Child Needs Vision Therapy? Child is Not Ready to Read? Child Needs Phonics Instruction? Child Needs Sight Word Instruction? Child Needs Fluency Instruction? Child Needs Comprehension Help? Child Needs Schema Building? Child Needs Appropriate Reading Material? Go to Step 2: Prescribe to write a prescription for addressing your child's reading problems. Go to Step 3: Correct to learn about reading approaches that will correct your child's reading problems. Remember that parents and caregivers spend great amounts of time with their children and, therefore, can have a tremendous impact on how their children learn. Don’t shy away from getting involved. You can help correct problems in reading. You can partner with your child’s teacher, school personnel and other professionals to help your child overcome reading difficulty. Use this website to help you get started on facilitating this valuable partnership.
Did you know that methane is a more potent greenhouse gas than CO2? Although methane breaks down faster than CO2, it is 28-36 times more effective at trapping heat. Livestock are accountable for emitting 44 percent of all human-caused methane. As per a report titled “Livestock’s Long Shadow” published in 2006, 18 percent of greenhouse gasses are emitted by cattle, and methane accounts for most of that. In layman’s terms, cattle are bigger culprits for releasing greenhouse gasses than cars, planes, and all other forms of transportation combined. Considering the need to reduce methane emissions from livestock, Australian scientists, in a big breakthrough, discovered that if dried seaweed is added to sheep and cattle feed, methane emissions could be cut by more than 70 percent. This discovery could help in cutting a huge chunk of the 3.1 gigatons of methane that these animals release each year in burps and farts. According to Rocky De Nys, a professor of aquaculture at James Cook University, “We have results already with whole sheep; we know that if Asparagopsis is fed to sheep at 2 percent of their diet, they produce between 50 and 70 percent less methane over a 72-day period continuously, so there is already a well-established precedent.” As agriculture researcher Michael Battaglia from Australia’s CSIRO explained, this seaweed species contains a compound called bromoform (CHBr3), which blocks methane production by reacting with vitamin B12 at a later stage of the process. This chemical reaction disrupts the enzymes that gut microbes use to produce methane as a byproduct. Let’s hope this bright idea saves Mother Earth from these dangerous burps and farts.
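As a rough back-of-the-envelope illustration, the snippet below simply plugs in the figures quoted above; it is arithmetic on the article's numbers, not a climate model, and it keeps the methane and CO2-equivalent figures separate rather than combining them:

# Figures quoted in the text above
methane_gt_per_year = 3.1      # gigatons of methane attributed to livestock each year
gwp_low, gwp_high = 28, 36     # methane traps 28-36 times more heat than CO2
reduction = 0.70               # seaweed feed additive could cut emissions by more than 70%

avoided_methane = methane_gt_per_year * reduction
print(f"Methane avoided: about {avoided_methane:.1f} gigatons per year")
print(f"Each avoided ton of methane is roughly {gwp_low}-{gwp_high} tons of CO2-equivalent")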
Scientists at the University of Aberdeen have made a major breakthrough towards the mechanism of high-temperature superconductivity. Results from studies of a crystal structure of a new chemical compound containing copper and ruthenium have provided valuable insight into the mechanism of high-temperature superconductivity. The new results have shown for the first time that the mechanism of high-temperature superconductivity (when there is no resistance to the flow of electrical current) is actually coupled to the crystal lattice. This is extremely exciting as this new discovery could lead to a breakthrough in the theory of high-temperature superconductivity, which has been puzzling scientists for nearly 20 years, according to a study published in this week’s issue of Nature. A metal consists of a lattice of atoms. Electrons can dissociate from these atoms and travel through the lattice making the metal a conductor of electrical current. The atoms within the metal are actually vibrating. The electrons, which are travelling through the lattice, can collide with the vibrating atoms and this results in a reduction of the electrical current. This is known as electrical resistance. Superconductors are so-called because they do not exhibit electrical resistance and hence do not suffer from losses in electrical current. Dr Abbie Mclaughlin, RSE Fellow, Department of Chemistry, is leading the research and explains: “We are interested in the chemistry of materials that show fascinating physical properties which may be important in the technologies of the future. We are particularly interested in synthesising new layered materials which have an interesting property, such as magnetism, in one layer and another property, such as superconductivity, in another layer. It is then possible to observe how the two different phenomena compete with one another which can in itself lead to the observation of novel physics.” Unlike normal conductors a superconductor exhibits zero electrical resistance. Unfortunately, superconductivity is only observed at very low temperatures. Currently, the record temperature at which superconductivity is observed is -113 °C. Practical applications include superconducting magnets for MRI scanners and magnetic levitation trains. Chemical compounds that superconduct at temperatures >-238°C are known as “high-temperature superconductors”. There is currently no complete theory for high-temperature superconductivity. However it is thought that once a final theory has been established it will be possible to design new superconducting materials which show no electrical resistance at higher temperatures – with the possibly at room temperature. Dr Mclaughlin continued: “This would lead to a plethora of technological possibilities such as high performance electric motors. At the same time there would be a huge conservation in energy by using superconducting power cables to transmit electricity to consumers. At present a considerable amount of energy is lost due to electrical resistance. ” Energy losses due to electrical resistance in the transmitting power cables were reported at 7.4% in the UK in 1998. At the same time this new chemical compound being developed by Dr Mclaughlin and collaborators - a copper and ruthenium containing oxide material - also exhibits a variety of other useful properties. Firstly, a phenomenon known as negative magnetoresistance, has been observed at low temperature. 
Materials exhibiting this property show a large increase in electronic conductivity on application of a magnetic field and are currently used in memory storage devices in computers. At low temperatures this new compound also exhibits a property known as negative thermal expansion. Most materials expand as they are heated but this is not the case for this new compound. Negative thermal expansion is extremely unusual and practical applications can be found in areas ranging from electronics to dentistry. Dr Mclaughlin added: “This collaborative research project aims to shed more light on the theory of high-temperature superconductivity. We also hope to learn more about the mechanism behind the large negative magnetoresistances and negative thermal expansion observed in this material and hopefully design new materials which could then be used in practical applications at room temperature.” The collaborative research project involves Professor Paul Attfield from the Centre for Science at Extreme Conditions and School of Chemistry, University of Edinburgh, and Dr Falak Sher from the Department of Chemistry, University of Cambridge. They bring together a unique blend of materials chemistry to develop and study new materials with fascinating properties. For a full copy of the paper which is published in this week’s edition of Nature, please visit: www.nature.com/nature/journal/v436/n7052/abs/nature03828.html Explore further: Clarifying the role of magnetism in high-temperature superconductors
How to Detect and Correct Formula Errors in Excel 2016 Excel 2016 offers several ways to correct errors in formulas. You can correct them one at a time, run the error checker, and trace cell references, as explained here. By the way, if you want to see formulas in cells instead of formula results, go to the Formulas tab and click the Show Formulas button or press Ctrl+’ (apostrophe). Sometimes seeing formulas this way helps to detect formula errors. Correcting errors one at a time When Excel detects what it thinks is a formula that has been entered incorrectly, a small green triangle appears in the upper-left corner of the cell where you entered the formula. And if the error is especially egregious, an error message, a cryptic three- or four-letter display preceded by a pound sign (#), appears in the cell. The table explains common error messages.
|Message|What Went Wrong|
|#DIV/0!|You tried to divide a number by a zero (0) or an empty cell.|
|#NAME?|You used a cell range name in the formula, but the name isn't defined. Sometimes this error occurs because you type the name incorrectly.|
|#N/A|The formula refers to an empty cell, so no data is available for computing the formula. Sometimes people enter N/A in a cell as a placeholder to signal the fact that data isn't entered yet. Revise the formula or enter a number or formula in the empty cell.|
|#NULL!|The formula refers to a cell range that Excel can't understand. Make sure that the range is entered correctly.|
|#NUM!|An argument you use in your formula is invalid.|
|#REF!|The cell or range of cells that the formula refers to aren't there.|
|#VALUE!|The formula includes a function that was used incorrectly, takes an invalid argument, or is misspelled. Make sure that the function uses the right argument and is spelled correctly.|
To find out more about a formula error and perhaps correct it, select the cell with the green triangle and click the Error button. This small button appears beside a cell with a formula error after you click the cell, as shown here. The drop-down list on the Error button offers opportunities for correcting formula errors and finding out more about them. Running the error checker Another way to tackle formula errors is to run the error checker. When the checker encounters what it thinks is an error, the Error Checking dialog box tells you what the error is, as shown. To run the error checker, go to the Formulas tab and click the Error Checking button (you may have to click the Formula Auditing button first, depending on the size of your screen). If you see clearly what the error is, click the Edit in Formula Bar button, repair the error in the Formula bar, and click the Resume button in the dialog box (you find this button at the top of the dialog box). If the error isn’t one that really needs correcting, either click the Ignore Error button or click the Next button to send the error checker in search of the next error in your worksheet. Tracing cell references In a complex worksheet in which formulas are piled on top of one another and the results of some formulas are computed into other formulas, it helps to be able to trace cell references. By tracing cell references, you can see how the data in a cell figures into a formula in another cell; or, if the cell contains a formula, you can see which cells the formula gathers data from to make its computation. You can get a better idea of how your worksheet is constructed, and in so doing, find structural errors more easily.
The following figure shows how cell tracers describe the relationships between cells. A cell tracer is a blue arrow that shows the relationships between cells used in formulas. You can trace two types of relationships: Tracing precedents: Select a cell with a formula in it and trace the formula’s precedents to find out which cells are computed to produce the results of the formula. Trace precedents when you want to find out where a formula gets its computation data. Cell tracer arrows point from the referenced cells to the cell with the formula results in it. To trace precedents, go to the Formulas tab and click the Trace Precedents button (you may have to click the Formula Auditing button first, depending on the size of your screen). Tracing dependents: Select a cell and trace its dependents to find out which cells contain formulas that use data from the cell you selected. Cell tracer arrows point from the cell you selected to cells with formula results in them. Trace dependents when you want to find out how the data in a cell contributes to formulas elsewhere in the worksheet. The cell you select can contain a constant value or a formula in its own right (and contribute its results to another formula). To trace dependents, go to the Formulas tab and click the Trace Dependents button (you may have to click the Formula Auditing button first, depending on the size of your screen).Tracing the relationships between cells. To remove the cell tracer arrows from a worksheet, go to the Formulas tab and click the Remove Arrows button. You can open the drop-down list on this button and choose Remove Precedent Arrows or Remove Dependent Arrows to remove only cell-precedent or cell-dependent tracer arrows.
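Excel's error checker and cell tracers work inside the application, but the same ideas can be sketched programmatically. The first snippet below uses the third-party openpyxl package to scan a saved workbook for cells whose cached result is one of the error codes listed in the table above; the file name "report.xlsx" is a placeholder, not something from this article, and the scan only works if the workbook was last saved with calculated results.

from openpyxl import load_workbook

ERROR_CODES = {"#DIV/0!", "#NAME?", "#N/A", "#NULL!", "#NUM!", "#REF!", "#VALUE!"}

# data_only=True asks openpyxl for the values Excel last calculated,
# so error results come back as strings such as "#DIV/0!".
wb = load_workbook("report.xlsx", data_only=True)
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            if isinstance(cell.value, str) and cell.value in ERROR_CODES:
                print(f"{ws.title}!{cell.coordinate}: {cell.value}")

The second snippet is a rough, deliberately simplified way to list a formula's precedents: it loads the workbook without cached values so that each cell's value is the formula text itself, then pulls out anything that looks like a plain cell reference. A real tracer would also have to handle ranges, sheet-qualified references, and defined names, which this regular expression ignores.

import re
from openpyxl import load_workbook

CELL_REF = re.compile(r"\$?([A-Z]{1,3})\$?([0-9]+)")

# Without data_only, cell.value holds the formula string (e.g. "=SUM(A1:B2)").
wb = load_workbook("report.xlsx")
ws = wb.active
for row in ws.iter_rows():
    for cell in row:
        if isinstance(cell.value, str) and cell.value.startswith("="):
            precedents = sorted({f"{c}{r}" for c, r in CELL_REF.findall(cell.value)})
            print(f"{cell.coordinate} depends on: {', '.join(precedents) or 'no direct cell references'}")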
Fundamentals of Transportation/About This book is aimed at undergraduate civil engineering students, though the material may provide a useful review for practitioners and graduate students in transportation. Typically, this would be for an Introduction to Transportation course, which might be taken by most students in their sophomore or junior year. Often this is the first engineering course students take, which requires a switch in thinking from simply solving given problems to formulating the problem mathematically before solving it, i.e. from straight-forward calculation often found in undergraduate Calculus to vaguer word problems more reflective of the real world. How an idea becomes a road The plot of this textbook can be thought of as "How an idea becomes a road". The book begins with the generation of ideas. This is followed by the analysis of ideas, first determining the origin and destination of a transportation facility (usually a road), then the required width of the facility to accommodate demand, and finally the design of the road in terms of curvature. As such the book is divided into three main parts: planning, operations, and design, which correspond to the three main sets of practitioners within the transportation engineering community: transportation planners, traffic engineers, and highway engineers. Other topics, such as pavement design, and bridge design, are beyond the scope of this work. Similarly transit operations and railway engineering are also large topics beyond the scope of this book. Each page is roughly the notes from one fifty-minute lecture.
Researchers from the University of Washington, under Dr. Hannele Ruohola-Baker, are looking into ways stem cells keep their surroundings optimal for self-renewal. It is a fascinating process: Stem cells require these niches – nest-like microenvironments made up of regulatory cells — in order to self-renew. Stem cells can divide and turn into many types of new cells. The niches help regulate the amount and kinds of new cells produced to meet current demands. The niches also help maintain a supply of stem cells for later use. Inside your body, for example, there are separate niches for stem cells that will become blood, for cells that will become skin, and so on. Niches are places where your stem cells can replenish themselves and your tissue cells throughout your lifetime… Inside the fruit fly ovary are structures called germarium which contain tiny cradles made of cap cells that nurture stem cells. Each such cradle contains two to three stem cells preparing to become fly eggs that are cuddled in a niche composed of three to six cap cells. Cap cells adhere to stem cells and this close contact may allow cap cells to play critical roles in communicating with stem cells. The research team looked at a kind of signaling that usually depends on direct contact between cells, called the Notch pathway. The Notch protein is like a trigger poking out of a cell that can activate a mechanism inside the cell. When this trigger is pulled by proteins, called Delta and Serrate, from another cell, proteins are freed inside the cell to travel to the cell nucleus and turn on various genes. According to Ruohula-Baker, the Notch pathway plays an important role in many stem-cell niches, including those in the blood system, gut, breasts, and muscles. However, in many cases it hasn’t been clear which cells send and which ones receive the signaling protein. The UW researchers analyzed the role of the Notch signaling pathway in both the stem cells and the cap cells. They found that either an increased production of Delta protein in the stem cells, or the presence of activated Notch protein in niche cells, resulted in up to 10 times the normal number of niche cells. These extra niche cells in turn resulted in a larger population of stem cells. On the other hand, when stem cells don’t produce functional Delta protein, they cease to be stem cells and soon leave the niche. The researchers also found that the receiving end for the Notch pathway, the trigger, is required in the niche cells, making them receivers of signals, not just senders. Work by other scientists had shown that TCF-beta signaling from niche cells is required to maintain active stem cells. “Our study now shows that stem cells use the Notch pathway to signal to neighboring cells to maintain an active niche, and in turn, the niche induces and maintains the fate of the stem cells,” Ruohola-Baker noted. “This is a first indication of a dialogue taking place between the stem cells and the niche that supports them. It is tempting to speculate that maybe multiple potential niches exist for stem cells in our bodies that can be turned on to action when signaling stem cells are in the neighborhood. It may very well be that the power of cancer cells to spread comes from this natural ability of stem cells to make a home when in a hospitable environment. We all need a home, and stem cells with their strong survival instinct are active homebuilders.”
Leukemia or leukaemia (Greek leukos λευκός, "white"; aima αἷμα, "blood") is a cancer of the blood or bone marrow and is characterized by an abnormal proliferation (production by multiplication) of blood cells, usually white blood cells (leukocytes). Leukemia is a broad term covering a spectrum of diseases. In turn, it is part of the even broader group of diseases called hematological neoplasms.

Classification of Leukemia

Leukemia is clinically and pathologically subdivided into several large groups. The first division is between its acute and chronic forms:
- Acute leukemia is characterized by the rapid increase of immature blood cells. This crowding makes the bone marrow unable to produce healthy blood cells. Acute forms of leukemia can occur in children and young adults. (In fact, it is a more common cause of death for children in the US than any other type of malignant disease.) Immediate treatment is required in acute leukemias due to the rapid progression and accumulation of the malignant cells, which then spill over into the bloodstream and spread to other organs of the body. Central nervous system (CNS) involvement is uncommon, although the disease can occasionally cause cranial nerve palsies.
- Chronic leukemia is distinguished by the excessive buildup of relatively mature, but still abnormal, blood cells. Typically taking months or years to progress, the cells are produced at a much higher rate than normal cells, resulting in many abnormal white blood cells in the blood. Chronic leukemia mostly occurs in older people, but can theoretically occur in any age group. Whereas acute leukemia must be treated immediately, chronic forms are sometimes monitored for some time before treatment to ensure maximum effectiveness of therapy.

Additionally, the diseases are subdivided according to which kind of blood cell is affected. This split divides leukemias into lymphoblastic or lymphocytic leukemias and myeloid or myelogenous leukemias:
- In lymphoblastic or lymphocytic leukemias, the cancerous change took place in a type of marrow cell that normally goes on to form lymphocytes.
- In myeloid or myelogenous leukemias, the cancerous change took place in a type of marrow cell that normally goes on to form red cells, some types of white cells, and platelets.

Combining these two classifications provides a total of four main categories:

|Cell type||Acute||Chronic|
|Lymphocytic||Acute lymphocytic leukemia (ALL)||Chronic lymphocytic leukemia (CLL)|
|Myelogenous (also "myeloid" or "nonlymphocytic")||Acute myelogenous leukemia (AML)||Chronic myelogenous leukemia (CML)|

Within these main categories, there are typically several subcategories. Finally, hairy cell leukemia is usually considered to be outside of this classification scheme.
- Acute lymphoblastic leukemia (ALL) is the most common type of leukemia in young children. This disease also affects adults, especially those age 65 and older. Standard treatments involve chemotherapy and radiation. The survival rates vary by age: 85% in children and 50% in adults.
- Chronic lymphocytic leukemia (CLL) most often affects adults over the age of 55. It sometimes occurs in younger adults, but it almost never affects children. Two-thirds of affected people are men. The five-year survival rate is 75%. It is incurable, but there are many effective treatments.
- Acute myelogenous leukemia (AML) occurs more commonly in adults than in children, and more commonly in men than women. AML is treated with chemotherapy. The five-year survival rate is 40%.
- Chronic myelogenous leukemia (CML) occurs mainly in adults.
A very small number of children also develop this disease. Treatment is with imatinib (Gleevec) or other drugs. The five-year survival rate is 90%. - Hairy cell leukemia (HCL) is sometimes considered a subset of CLL, but does not fit neatly into this pattern. About 80% of affected people are adult men. There are no reported cases in young children. HCL is incurable, but easily treatable. Survival is 96% to 100% at ten years. Symptoms of Leukemia Damage to the bone marrow, by way of displacing the normal bone marrow cells with higher numbers of immature white blood cells, results in a lack of blood platelets, which are important in the blood clotting process. This means people with leukemia may become bruised, bleed excessively, or develop pinprick bleeds (petechiae). White blood cells, which are involved in fighting pathogens, may be suppressed or dysfunctional. This could cause the patient’s immune system to be unable to fight off a simple infection or to start attacking other body cells. Finally, the red blood cell deficiency leads to anemia, which may cause dyspnea. All symptoms can be attributed to other diseases. Some other related symptoms: - Fever, chills, night sweats and other flu-like symptoms - Weakness and fatigue - Swollen or bleeding gums - Neurological symptoms (headaches) - Enlarged liver and spleen - Frequent infection - Bone pain - Joint pain - Swollen tonsils - Unintentional weight loss The word leukemia, which means ‘white blood’, is derived from the disease’s namesake high white blood cell counts that most leukemia patients have before treatment. The high number of white blood cells are apparent when a blood sample is viewed under a microscope. Frequently, these extra white blood cells are immature or dysfunctional. The excessive number of cells can also interfere with the level of other cells, causing a harmful imbalance in the blood count. Some leukemia patients do not have high white blood cell counts visible during a regular blood count. This less-common condition is called aleukemia. The bone marrow still contains cancerous white blood cells which disrupt the normal production of blood cells. However, the leukemic cells are staying in the marrow instead of entering the bloodstream, where they would be visible in a blood test. For an aleukemic patient, the white blood cell counts in the bloodstream can be normal or low. Aleukemia can occur in any of the four major types of leukemia, and is particularly common in hairy cell leukemia. Diagnosis of Leukemia Diagnosis requires blood tests to look for an abnormal number of white blood cells, and a bone marrow examination to look for abnormal numbers or forms of cells in the bone marrow. Causes and risk factors of Leukemia There is no single known cause for all of the different types of leukemia. The different leukemias likely have different causes, and very little is certain about what causes them. Researchers have strong suspicions about four possible causes: - natural or artificial ionizing radiation - certain kinds of chemicals - some viruses - genetic predispositions Leukemia, like other cancers, results from somatic mutations in the DNA which activate oncogenes or deactivate tumor suppressor genes, and disrupt the regulation of cell death, differentiation or division. These mutations may occur spontaneously or as a result of exposure to radiation or carcinogenic substances and are likely to be influenced by genetic factors. 
Cohort and case-control studies have linked exposure to petrochemicals, such as benzene, and hair dyes to the development of some forms of leukemia. Viruses have also been linked to some forms of leukemia. For example, certain cases of ALL are associated with viral infections by either the human immunodeficiency virus or human T-lymphotropic virus (HTLV-1 and -2, causing adult T-cell leukemia/lymphoma). However, a CNN Health report says children may be offered limited protection against leukemia by exposure to certain germs. Fanconi anemia is also a risk factor for developing acute myelogenous leukemia. Until the cause or causes of leukemia are found, there is no way to prevent the disease. Even when the causes become known, they may not be readily controllable, such as naturally occurring background radiation, and therefore not especially helpful for prevention purposes. Treatment options for leukemia by type Acute lymphoblastic leukemia (ALL) Management of ALL focuses on control of bone marrow and systemic (whole-body) disease. Additionally, treatment must prevent leukemic cells from spreading to other sites, particularly the central nervous system (CNS). In general, ALL treatment is divided into several phases: - Induction chemotherapy to bring about bone marrow remission. For adults, standard induction plans include prednisone, vincristine, and an anthracycline drug; other drug plans may include L-asparaginase or cyclophosphamide. For children with low-risk ALL, standard therapy usually consists of three drugs (prednisone, L-asparaginase, and vincristine) for the first month of treatment. - Consolidation therapy to eliminate any remaining leukemia cells. This typically requires one to three months in adults and four to eight months in children. Patients with low- to average-risk ALL receive therapy with antimetabolite drugs such as methotrexate and 6-mercaptopurine (6-MP). High-risk patients receive higher drug doses of these drugs, plus additional drugs. - CNS prophylaxis (preventive therapy) to stop the cancer from spreading to the brain and nervous system. Standard prophylaxis may include radiation of the head and/or drugs delivered directly into the spine. - Maintenance treatments with chemotherapeutic drugs to prevent disease recurrence once remission has been achieved. Maintenance therapy usually involves lower drug doses, and may continue for two years. - Alternatively, allogeneic bone marrow transplantation may be appropriate for high-risk or relapsed patients. Chronic lymphocytic leukemia (CLL) Decision to treat Hematologists base CLL treatment upon both the stage and symptoms of the individual patient. A large group of CLL patients have low-grade disease, which does not benefit from treatment. Individuals with CLL-related complications or more advanced disease often benefit from treatment. In general, the indications for treatment are: - falling hemoglobin or platelet count - progression to a later stage of disease - painful, disease-related overgrowth of lymph nodes or spleen - an increase in the rate of lymphocyte production Typical treatment approach CLL is probably incurable by present treatments. The primary chemotherapeutic plan is combination chemotherapy with chlorambucil or cyclophosphamide, plus a corticosteroid such as prednisone or prednisolone. The use of a corticosteroid has the additional benefit of suppressing some related autoimmune diseases, such as immunohemolytic anemia or immune-mediated thrombocytopenia. 
In resistant cases, single-agent treatments with nucleoside drugs such as fludarabine, pentostatin, or cladribine may be successful. Younger patients may consider allogeneic or autologous bone marrow transplantation. Acute myelogenous leukemia (AML) Many different anti-cancer drugs are effective for the treatment of AML. Treatments vary somewhat according to the age of the patient and according to the specific subtype of AML. Overall, the strategy is to control bone marrow and systemic (whole-body) disease, while offering specific treatment for the central nervous system (CNS), if involved. In general, most oncologists rely on combinations of drugs for the initial, induction phase of chemotherapy. Such combination chemotherapy usually offers the benefits of early remission and a lower risk of disease resistance. Consolidation and maintenance treatments are intended to prevent disease recurrence. Consolidation treatment often entails a repetition of induction chemotherapy or the intensification chemotherapy with additional drugs. By contrast, maintenance treatment involves drug doses that are lower than those administered during the induction phase. Chronic myelogenous leukemia (CML) There are many possible treatments for CML, but the standard of care for newly diagnosed patients is imatinib (Gleevec) therapy. Compared to most anti-cancer drugs, it has relatively few side effects and can be taken orally at home. With this drug, more than 90% of patients will be able to keep the disease in check for at least five years, so that CML becomes a chronic, manageable condition. In a more advanced, uncontrolled state, when the patient cannot tolerate imatinib, or if the patient wishes to attempt a permanent cure, then an allogeneic bone marrow transplantation may be performed. This procedure involves high-dose chemotherapy and radiation followed by infusion of bone marrow from a compatible donor. Approximately 30% of patients die from this procedure. Hairy cell leukemia (HCL) Decision to treat Patients with hairy cell leukemia who are symptom-free typically do not receive immediate treatment. Treatment is generally considered necessary when the patient shows signs and symptoms such as low blood cell counts (e.g., infection-fighting neutrophil count below 1.0 K/µL), frequent infections, unexplained bruises, anemia, or fatigue that is significant enough to disrupt the patient’s everyday life. Typical treatment approach Patients who need treatment usually receive either one week of cladribine, given daily by intravenous infusion or a simple injection under the skin, or six months of pentostatin, given every four weeks by intravenous infusion. In most cases, one round of treatment will produce a prolonged remission. Other treatments include rituximab infusion or self-injection with Interferon-alpha. In limited cases, the patient may benefit from splenectomy (removal of the spleen). These treatments are not typically given as the first treatment because their success rates are lower than cladribine or pentostatin. Homeopathy Treatment for Leukemia Keywords: homeopathy, homeopathic, treatment, cure, remedy, remedies, medicine Homeopathy treats the person as a whole. It means that homeopathic treatment focuses on the patient as a person, as well as his pathological condition. 
The homeopathic medicines are selected after a full individualizing examination and case analysis, which includes the medical history of the patient, physical and mental constitution, family history, presenting symptoms, underlying pathology, possible causative factors, etc. A miasmatic tendency (predisposition/susceptibility) is also often taken into account for the treatment of chronic conditions. A homeopathy doctor tries to treat more than just the presenting symptoms. The focus is usually on what caused the disease condition and why this particular patient is sick in this particular way. The disease diagnosis is important, but in homeopathy the cause of disease is not probed only to the level of bacteria and viruses. Other factors, such as the mental, emotional and physical stress that could predispose a person to illness, are also looked for. Nowadays, even modern medicine considers a large number of diseases to be psychosomatic. The correct homeopathy remedy tries to correct this disease predisposition. The focus is not on curing the disease but on curing the person who is sick and restoring health. If a disease pathology is not very advanced, homeopathic remedies offer a hope of cure, but even in incurable cases the quality of life can be greatly improved with homeopathic medicines. The homeopathic remedies (medicines) given below indicate the therapeutic affinity, but this is not a complete and definite guide to the homeopathy treatment of this condition. The symptoms listed against each homeopathic remedy may not be directly related to this disease, because in homeopathy general symptoms and constitutional indications are also taken into account for selecting a remedy. To study any of the following remedies in more detail, please visit the Materia Medica section at www.kaisrani.com. None of these medicines should be taken without professional advice and guidance.

Homeopathy Remedies for Leukemia: Acet-ac., acon., aran., ars., ars-i., bar-i., bar-m., bry., calc., calc-p., carb-s., carb-v., carc., cean., chin., chin-s., con., cortiso., crot-h., ferr-pic., ip., kali-p., merc., nat-a., nat-m., nat-p., nat-s., nux-v., op., phos., pic-ac., sulfa., sulph., syph., thuj., tub., x-ray.

Significant research into the causes, diagnosis, treatment, and prognosis of leukemia is being done. Hundreds of clinical trials are being planned or conducted at any given time. Studies may focus on effective means of treatment, better ways of treating the disease, improving the quality of life for patients, or appropriate care in remission or after cures. As of 1998, it is estimated that each year approximately 30,800 individuals will be diagnosed with leukemia in the United States and 21,700 individuals will die of the disease. This represents about 2% of all forms of cancer.

- Harrison's Principles of Internal Medicine, 16th Edition, Chapter 97. Malignancies of Lymphoid Cells. Clinical Features, Treatment, and Prognosis of Specific Lymphoid Malignancies.
- Finding Cancer Statistics » Cancer Stat Fact Sheets » Chronic Lymphocytic Leukemia. National Cancer Institute.
- Colvin GA, Elfenbein GJ (2003). "The latest treatment advances for acute myelogenous leukemia". Med Health R I 86 (8): 243–6. PMID 14582219.
- Patients with Chronic Myelogenous Leukemia Continue to Do Well on Imatinib at 5-Year Follow-Up. Medscape Medical News, 2006.
- Updated Results of Tyrosine Kinase Inhibitors in CML. ASCO 2006 Conference Summaries.
- Else M, Ruchlemer R, Osuji N, et al (2005).
"Long remissions in hairy cell leukemia with purine analogs: a report of 219 patients with a median follow-up of 12.5 years". Cancer 104 (11): 2442–8. doi:10.1002/cncr.21447. PMID 16245328.
- Fausel C (October 2007). "Targeted chronic myeloid leukemia therapy: seeking a cure". J Manag Care Pharm 13 (8 Suppl A): 8–12. PMID 17970609.
- "Trends in leukemia incidence and survival in the United States (1973-1998)".
C Input and Output

What is C Input and Output?

In this tutorial, you are going to learn about input and output in C. The C language provides mechanisms to take input from users and to display output. Input refers to reading a value from the user and storing it in memory; output, on the other hand, displays the values stored in memory. C provides various built-in functions for supplying input data to a program and for obtaining the output results from the program. In this tutorial, you will learn about the built-in functions that read input from the user and display output on the screen, along with their respective header files.

scanf() and printf() functions

The scanf() and printf() functions are declared in the header file stdio.h; they are used for taking input from the user and displaying output on the screen, respectively.

int a;
printf("Enter a value that you want to display: ");
scanf("%d", &a);
printf("You entered: %d", a);

The code above illustrates a simple program that takes input from the user and displays it. The scanf() function reads input from the user according to the type specified by the format specifier and stores it at the address passed to it. The printf() function prints the arguments that are passed to it on the console screen.

You might be wondering about the purpose of "%d" inside the printf() and scanf() functions. These are format specifiers. Format specifiers are used while reading input and displaying output; they tell the compiler what type of data is stored in the variables and what type of data is to be printed. The table below lists some of the most commonly used format specifiers in C and their uses.

|%d||Used for scanning and printing a signed decimal integer.|
|%f||Used for scanning and printing a floating point number.|
|%s||Used for scanning and printing a character string.|
|%c||Used for scanning and printing a single character.|
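Below is a minimal, self-contained sketch that puts the pieces above together: it reads two values with scanf() using the %d and %f specifiers and echoes them back with printf(). The variable names and prompt strings are illustrative only, not part of any standard example.

#include <stdio.h>

int main(void)
{
    int a;      /* will hold a signed decimal integer, read with %d */
    float b;    /* will hold a floating point number, read with %f  */

    printf("Enter an integer: ");
    if (scanf("%d", &a) != 1) {   /* scanf needs the address of the variable (&a) */
        printf("Invalid input.\n");
        return 1;
    }

    printf("Enter a floating point number: ");
    if (scanf("%f", &b) != 1) {
        printf("Invalid input.\n");
        return 1;
    }

    /* The format specifiers in printf must match the argument types. */
    printf("You entered: %d and %f\n", a, b);
    return 0;
}

Note that scanf() takes the address of each variable (written with the & operator) so it can store the converted value there, while printf() takes the values themselves; checking the return value of scanf() is a simple way to catch input that does not match the specifier.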
Horses have particularly big, strong teeth, much larger in proportion to their size than human teeth. This is due to their diets, which require efficient mastication (chewing) to break food up so the digestive juices can process the nutrients and they can be absorbed by the horse. Domesticated horses need regular dental care to maintain proper chewing and comfort in the mouth. The upper jaw is larger than the lower, which makes the teeth wear on a slant. Constant grinding of food can cause the back teeth to become very sharp and irritate the inner cheeks. Also, hooks can form on the front and back of the rows of molars which, if not removed, can result in the horse not being able to close its mouth. The vet can file or rasp the sharp edges off. This might need to be done twice a year, and therefore a regular series of checkups should be done every six to twelve months. As the teeth wear down throughout the horse's life, the pattern on the surface of the incisors gradually changes, giving a fairly accurate idea of the horse's age. The teeth also become more triangular as a horse gets older, giving another clue to its age. The teeth continue to erupt from their sockets throughout the horse's life. The length of the crown in the gum shortens and the roots develop with age, and only a small amount of tooth is left by the time a horse becomes elderly.
Libraries such as Parsec use the combinator pattern, where complex structures are built by defining a small set of very simple 'primitives', and a set of 'combinators' for combining them into more complicated structures. It's somewhat similar to the Composition pattern found in object-oriented programming. In the case of Parsec, the library provides a set of extremely simple (almost trivial) parsers, and ways to combine small parsers into bigger parsers. Many other libraries and programs use the same ideas to build other structures:
- Parsec builds parsers out of smaller parsers.
- The School of Expression (SOE) graphics library builds pictures out of individual shapes.
- The SOE book also mentions a library to build music out of individual notes and rests.
- Another textbook describes building financial contracts.
- Software transactional memory builds big transactions out of smaller ones.
- The Haskell IO system itself builds whole programs out of small I/O actions using >>= and return.
Colossus, statue that is considerably larger than life-size. They are known from ancient Egypt, Mesopotamia, India, China, and Japan. The Egyptian sphinx (c. 2550 bc) that survives at al-Jīzah, for example, is 240 feet (73 m) long; and the Daibutsu (Great Buddha; ad 1252) at Kamakura, Japan, is 37 feet (11.4 m) high. The ancient Greeks made a number of colossi that are presently known purely through historical texts and echoes in figurines and coins, such as the archaic Apollo of Delos and Phidias’ chryselephantine (gold and ivory) figure of Athena Parthenos. Chares’ statue of Helios in Rhodes was considered one of the Seven Wonders of the World. More than 100 feet (30 m) high, it took 12 years to complete. The Romans also erected large statues; Pliny reports, for example, that Zenodorus made a 106-foot (32-metre) colossus of Nero. Colossal sculpture continued through the European Middle Ages and the Renaissance, as evidenced by the “St. Christopher” at Notre-Dame de Paris (28 feet [8.5 m]) and Michelangelo’s “David.” Among the many modern examples are the “Christ of the Andes,” by Mateo Alonso, between Argentina and Chile (26 feet [7.9 m] high), and the Statue of Liberty, by the French sculptor Frédéric-Auguste Bartholdi, in New York Harbor (about 305 feet [93 m] high).
Throughout the nineteenth century, Native Americans were treated with little respect by the United States government. This was the time when the United States wanted to expand and grow quickly as a nation, and to achieve this goal the Native Americans were "pushed" westward. It was a memorable and difficult time in the Natives' history. The US government made many treaties with the Native Americans, bringing big changes to the Indian nation. Native Americans wanted to live peacefully with the white men, but the result of the treaties and agreements was not quite peaceful. In this essay I will explain why and how the Native Americans were treated by the United States government, in which ways the treaties were broken, and how the Native nation was affected by the events of the nineteenth century. I will focus mostly on the Cherokee Indians. During the 1750s and 1760s there were several conflicts between the British and French nations. This Great War for Empire, or the Seven Years' War, took place partly in the Carolinas, where it was known as the Cherokee War between 1756 and 1763. Europeans were struggling for North America in the eighteenth century, and each of them controlled a territory in America: Florida was controlled by the Spanish, Canada and Louisiana were occupied by the French, and the British held the Atlantic seaboard. The Europeans wanted to persuade the Indians to assist them in the fight for North America; the British and the French in particular competed for Cherokee allegiance. The Cherokees ended up assisting the English at the beginning of the Seven Years' War. As a result, the Cherokees were continually attacked by the French allies: the Choctaw and the Iroquois. Because of these attacks, the Cherokees asked the British to protect their families and homes by building forts. In 1756 Governor Glen of South Carolina agreed to build two forts for the Cherokees: the first, built on the Savannah River, was Fort Prince George, and the second was Fort Loudoun, built in eastern Tennessee. A third fort was built in northern Tennessee by the Virginians, and it was called "the Virginia Fort". Virginia settlers attacked the Cherokees on their way to Chota and killed some of them in a brutal manner. Then they took the scalps to Governor Dinwiddie. This attack was a mistake made by the Virginians, so Dinwiddie and the Virginians who had killed the Indians apologized for their action. The Cherokee leader, Ada-gal'kala, also sent apologies to the governors because of the Virginia and North Carolina incidents. Thus, in November of 1758 in Charlestown, party chiefs met with the governor and some officials, and "peace was officially declared". After the Seven Years' War, hunger and disease reduced the Cherokee population to one-half of its former size. Through many treaties and agreements with the Europeans, the Indians lost and sold the majority of their lands and were moved westward. Many Europeans married Cherokee women and created mixed-blood families. The conclusion of the French and Indian War led toward the American Revolution; the tensions began in 1763. In 1765 the British Parliament imposed direct taxes on the British American colonies. These taxes helped pay for the troops in North America after the Seven Years' War, but the colonists had sent no representatives to the British Parliament, and they considered the taxation a violation of their rights.
This marked the beginning of the American Revolution. The Cherokees allied with the British in the American Revolution for several reasons. One reason was that the British stopped their colonists from settling beyond the Appalachian Mountains, and the Indians regarded this "as an effort by the Crown to prevent mistreatment of native peoples". In 1776 the Cherokees attacked the frontiers of Georgia, Virginia, and the Carolinas. The American soldiers had not forgotten that the Cherokees had come very close to winning the Seven Years' War, so they wanted to avenge it. The Cherokees wanted to recapture their land, which had been taken by white settlers through unfair treaties. The American Revolution started in 1776, and it was an opportunity for the Cherokees to regain their land. While the Cherokees were siding with the British, the commander of the North Carolina troops, General Griffith Rutherford, attacked the Middle Towns of the Cherokee nation. Soldiers killed every man and woman in their path or took them as prisoners, and about thirty Cherokee towns were left without any supplies. This was known as the Cherokee Campaign. The American Revolution ended with peace agreements with the Cherokee Indians, and they gave up all their lands east of the Appalachians. Between 1776 and 1794 there were several treaties, campaigns, and frontier battles involving the Cherokees, during and after the American Revolution, against the American frontiersmen. This period was called the Chickamauga Wars, a guerrilla-style conflict. In November 1794 the Treaty of Tellico Blockhouse was signed, and this meant the end of the Chickamauga Wars. The blockhouse operated until 1807, and its purpose was to keep the peace between the nearby Overhill Cherokee towns and the Euro-American settlers. In 1827 they proposed a written constitution, which was adopted by the Cherokee National Council; this was the creation of the Cherokee republic. According to this constitutional convention, the Cherokee tribe and the whites should pursue peace in terms of self-government. The Cherokee republic had a great effect on the US government, putting it in crisis. The Cherokees had created a state within a state, which meant a violation of federal US law. Therefore, the Indian lands were opened to white settlers by allowing the state governments to promote the removal of all Indian nations to the west of the Mississippi River. Throughout decades of treaties and negotiations, the Cherokees faced many challenges and disputes over land with the US government. After the "civilization" program, "many Cherokees who opposed peaceful relations with the United States moved west into present-day Texas and Arkansas". Other Cherokees made peace with white Americans and began to live alongside them. There were Cherokee traditionalists from the East who actively opposed assimilation with white people. In the late 18th and early 19th centuries the Cherokees were going through a time of rebirth and renewal. After the American Revolution the Cherokees were confronted with economic depression. They gave up their homes, villages, towns, and hunting grounds to white Americans. Many Cherokees adopted the customs, beliefs, and lifestyles of white Americans; they deeply assimilated white culture because in this way they hoped to survive as a nation in their homeland. In 1819 Georgia appealed to the U.S. government to remove the Cherokee from Georgia lands.
When the appeal failed, attempts were made to purchase the territory. Meanwhile, in 1820 the Cherokee established a governmental system modeled on that of the United States, with an elected principal chief, a senate, and a house of representatives. Because of this system, the Cherokee were included as one of the so-called Five Civilized Tribes. The other four tribes were the Chickasaw, Choctaw, Creek, and Seminole. In 1832 the Supreme Court of the United States ruled that the Georgia legislation was unconstitutional; federal authorities, following Jackson's policy of Native American removal, ignored the decision. About five hundred leading Cherokee agreed in 1835 to cede the tribal territory in exchange for $5,700,000 and land in Indian Territory (now Oklahoma). Their action was repudiated by more than nine-tenths of the tribe, and several members of the group were later assassinated. In 1838 federal troops began physically evicting the Cherokee. Approximately one thousand escaped to the North Carolina mountains, purchased land, and incorporated in that state; they were the ancestors of the present-day Eastern Band. Most of the tribe, including the Western Band, was driven west about eight hundred miles in a forced march known as the Trail of Tears. During the warfare of the eighteenth century and in the early nineteenth century, besides Cherokee culture and lifestyle, Cherokee politics also changed. The Europeans urgently wanted Cherokee warriors in their military campaigns. To win them over, the Europeans offered and gave gifts to the Cherokees: guns, ammunition, tools, cloth, and other goods. Fighting together and sharing goods with each other, the Europeans and the Cherokees formed mixed-blood families, taking the first steps toward changing Cherokee culture. Europeans recognized Cherokee leaders as chiefs, and step by step they began to exercise European power within Cherokee society. In time the Cherokee Indians signed many treaties and fought in many wars alongside the British and other Europeans because they hoped to gain protection in exchange. Some of the Cherokees gave up their freedom and independence for the protection of their homeland and families.
Post-traumatic stress disorder (PTSD) is an anxiety disorder that can develop after exposure to a terrifying event or ordeal in which grave physical harm occurred or was threatened. Traumatic events that can trigger PTSD include violent personal assaults such as rape or mugging, natural or human-caused disasters, accidents, or military combat. PTSD can be extremely disabling. Military troops who served in the Vietnam and Gulf Wars; rescue workers involved in the aftermath of disasters like the terrorist attacks on New York City and Washington, D.C.; survivors of the Oklahoma City bombing; survivors of accidents, rape, physical and sexual abuse, and other crimes; immigrants fleeing violence in their countries; survivors of the 1994 California earthquake, the 1997 North and South Dakota floods, and hurricanes Hugo and Andrew; and people who witness traumatic events are among those at risk for developing PTSD. Families of victims can also develop the disorder. Fortunately, through research supported by the National Institute of Mental Health (NIMH) and the Department of Veterans Affairs (VA), effective treatments have been developed to help people with PTSD. Research is also helping scientists better understand the condition and how it affects the brain and the rest of the body. What Are the Symptoms of PTSD? Many people with PTSD repeatedly re-experience the ordeal in the form of flashback episodes, memories, nightmares, or frightening thoughts, especially when they are exposed to events or objects reminiscent of the trauma. Anniversaries of the event can also trigger symptoms. People with PTSD also experience emotional numbness and sleep disturbances, depression, anxiety, and irritability or outbursts of anger. Feelings of intense guilt are also common. Most people with PTSD try to avoid any reminders or thoughts of the ordeal. PTSD is diagnosed when symptoms last more than 1 month. How Common Is PTSD? About 3.6 percent of U.S. adults ages 18 to 54 (5.2 million people) have PTSD during the course of a given year. About 30 percent of the men and women who have spent time in war zones experience PTSD. One million war veterans developed PTSD after serving in Vietnam. PTSD has also been detected among veterans of the Persian Gulf War, with some estimates running as high as 8 percent. When Does PTSD First Occur? PTSD can develop at any age, including in childhood. Symptoms typically begin within 3 months of a traumatic event, although occasionally they do not begin until years later. Once PTSD occurs, the severity and duration of the illness varies. Some people recover within 6 months, while others suffer much longer. What Treatments Are Available for PTSD? Research has demonstrated the effectiveness of cognitive-behavioral therapy, group therapy, and exposure therapy, in which the patient gradually and repeatedly relives the frightening experience under controlled conditions to help him or her work through the trauma. Studies have also shown that medications help ease associated symptoms of depression and anxiety and help promote sleep. Scientists are attempting to determine which treatments work best for which type of trauma. Some studies show that giving people an opportunity to talk about their experiences very soon after a catastrophic event may reduce some of the symptoms of PTSD. A study of 12,000 schoolchildren who lived through a hurricane in Hawaii found that those who got counseling early on were doing much better 2 years later than those who did not. 
Do Other Illnesses Tend to Accompany PTSD? Co-occurring depression, alcohol or other substance abuse, or another anxiety disorder are not uncommon. The likelihood of treatment success is increased when these other conditions are appropriately identified and treated as well. Headaches, gastrointestinal complaints, immune system problems, dizziness, chest pain, or discomfort in other parts of the body are common. Often, doctors treat the symptoms without being aware that they stem from PTSD. NIMH encourages primary care providers to ask patients about experiences with violence, recent losses, and traumatic events, especially if symptoms keep recurring. When PTSD is diagnosed, referral to a mental health professional who has had experience treating people with the disorder is recommended. Who Is Most Likely to Develop PTSD? People who have suffered abuse as children or who have had other previous traumatic experiences are more likely to develop the disorder. Research is continuing to pinpoint other factors that may lead to PTSD. It used to be believed that people who tend to be emotionally numb after a trauma were showing a healthy response, but now some researchers suspect that people who experience this emotional distancing may be more prone to PTSD. What Are Scientists Learning From Research? NIMH and the VA sponsor a wide range of basic, clinical, and genetic studies of PTSD. In addition, NIMH has a special funding mechanism, called RAPID Grants, that allows researchers to immediately visit the scenes of disasters, such as plane crashes or floods and hurricanes, to study the acute effects of the event and the effectiveness of early intervention. Studies in animals and humans have focused on pinpointing the specific brain areas and circuits involved in anxiety and fear, which are important for understanding anxiety disorders such as PTSD. Fear, an emotion that evolved to deal with danger, causes an automatic, rapid protective response in many systems of the body. It has been found that the body's fear response is coordinated by a small structure deep inside the brain, called the amygdala. The amygdala, although relatively small, is a very complicated structure, and recent research suggests that different anxiety disorders may be associated with abnormal activation of the amygdala. The following are also recent research findings: In brain imaging studies, researchers have found that the hippocampus—a part of the brain critical to memory and emotion—appears to be different in cases of PTSD. Scientists are investigating whether this is related to short-term memory problems. Changes in the hippocampus are thought to be responsible for intrusive memories and flashbacks that occur in people with this disorder. People with PTSD tend to have abnormal levels of key hormones involved in response to stress. Some studies have shown that cortisol levels are lower than normal and epinephrine and norepinephrine are higher than normal. When people are in danger, they produce high levels of natural opiates, which can temporarily mask pain. Scientists have found that people with PTSD continue to produce those higher levels even after the danger has passed; this may lead to the blunted emotions associated with the condition. Research to understand the neurotransmitter systems involved in memories of emotionally charged events may lead to discovery of medications or psychosocial interventions that, if given early, could block the development of PTSD symptoms. Publication No. 
OM-99 4157 (Revised) Printed September 1999
Clothing has assumed a major economic role in human society ever since we began to wear animal skins. Along with the provision of food, shelter and fuels, the manufacture of clothing contributes to human environmental impact worldwide. So as I contemplated my closet recently, I asked myself whether cotton vs. linen shirts, or nylon or denim pants, left a greater footprint on the environment. Good data exist for the energy consumption to produce a given weight of fabric. But after that, the analysis is fraught with difficulty. A given weight of thread can be woven, thick or thin, into a vastly different yardage of fabric, which, of course, determines how much is used in any particular garment. Further processing, such as dyeing the material and incorporation of silver nanoparticles and anti-wrinkle- or flame-retardant chemicals requires additional energy inputs. Post-purchase activities, such as how often a garment must be washed or ironed should also factor into the analysis. Linen requires much less energy to manufacture than the same amount of cotton fabric, but a linen garment wrinkles easily, requires dry cleaning, and must be ironed frequently. Synthetic fabrics such as nylon or polyester are made from fossil fuels (petroleum), but they may result in lower carbon dioxide emissions to the atmosphere than the production of cotton. The analysis of environmental footprint must extend to water use, especially for those materials that are grown as crops. It must also consider organic farming and the impact of genetically-modified crop varieties on nature. Given the vast diversity of fibers, and multiple options for their weaving, dyeing, and additive compounds, I found a highly confounded answer to the simple question: which fabric is best for nature? Here I focus on six fabrics: three synthetic fabrics produced from fossil fuels (polyester, acrylic and nylon) and three derived from plant fibers (cotton, linen and rayon). The following statistics emerge based on the environmental impact to produce various fibers: - The energy use to produce a kilogram of polyester (125 MegaJoules), nylon (130 MJ), or acrylic (175 MJ) is greater than that used to produce rayon (71 MJ), cotton (60 MJ) or linen (10 MJ). - Emissions of greenhouse gases differ slightly from energy use, inasmuch as different sources of energy are involved with different fabrics (natural gas vs. electricity) and fabrics differ in the source of their embodied energy (photosynthesis vs. petroleum). For instance polyester is associated with the lowest emissions of CO2 (2.8 kgCO2/kg fiber) and cotton (6 kgCO2/kg fiber) moves up the list above acrylic (5 kgCO2/kg fiber). - The water use to produce a kilogram of conventionally-grown cotton (22,000 L) is astoundingly greater than that for rayon (640 L), linen (214 L), or polyester (62 L). If irrigation water is pumped, then the energy costs of cotton (# 1) are much higher. - The land use associated with the production of cotton is greater than for rayon, and essentially zero for polyester or acrylic. Organic cotton is estimated to require 30% more land area than conventional cotton, which has a higher yield per area. - Growth of cotton requires a number of pesticides in the field and chemicals for post-harvest processing of the fiber. In the United States about 85% of the planted cotton contains genes that help it to resist insect attack (i.e., GMO varieties, such as Bt cotton). 
Considering these various metrics, several studies conclude that rayon and linen may be the best compromise. Dress for success. Cay, A. 2018. Energy consumption and energy saving potential in clothing industry. Energy 159: 74-85. Muthu, S.S., Y. Li, J.Y. Hu and P.Y. Mok. 2012. Quantification of environmental impact and ecological sustainability for textile fibres. Ecological Indicators 13: 66-74. Shen, L., E. Worrell and M.K. Patel. 2010. Environmental impact assessment of man-made cellulose fibres. Resources Conservation and Recycling 55: 260-274. Van der Velden, M.M., M.K. Patel and J.G. Vogtlander. 2014. LCA benchmarking study on textiles made of cotton, polyester, nylon, acryl or elastane. International Journal of Life Cycle Assessment 19: 331-356.
Health is a state of total well-being—physical, mental and social—helping us both survive and thrive in our everyday lives. Health promotion, then, encourages us to embrace this idea of well-being and in the process increase our control over how we experience everyday life. It is therefore less about preventing disease than about helping us manage our life situation, whatever it may be, and reach our full potential. But how does it work? Effective health promotion strikes a balance between personal choice and social responsibility, between people and their environments. In other words, it does not put the onus for good health on the individual alone. This multi-lens perspective can be applied in a variety of settings, such as workplaces, neighbourhoods, cities, schools or campuses, to help promote ways we can improve our experiences in our everyday environments. Health promotion may also be applied to common but complex human behaviours such as substance use. Like food, sex and other “feel good” things in life, psychoactive substances (or drugs) change the way we feel. And just as food and sex help humans survive and thrive but can also get us into trouble—with our health, our relationships, our sense of self-worth—substance use has both benefits and the potential to lead us down an unhappy, unhealthy path. As a complex human behaviour, substance use requires that we look at it from a broad perspective that considers many factors, not just personal ones about wanting relief or to feel good. Supply reduction: interventions that restrict access to a substance (particularly for populations considered vulnerable to harm) Demand reduction: services to reduce the number of individuals who use substances, the amount they use or the frequency of use Harm reduction: interventions that seek to reduce the harmful consequences even when use remains unchanged Traditionally, the substance use field has focused simply on substance use and ways to measure, prevent and treat negative consequences. This has led to a continuum of laws, policies and services that runs from restricting supply to reducing demand and, for some, continuing on to harm reduction. Various versions of this simple continuum have been used over time, all of them beginning with a focus on a disease or harm that must be avoided. While this may seem completely sensible at first glance, it makes less sense when considering that many people use psychoactive substances to promote physical, mental, emotional, social and/ or spiritual well-being. In other words, people use substances to promote health, yet substance use services focus on how drug use detracts from health. Health promotion begins from a fundamentally different focus. Rather than primarily seeking to protect people from disease or harm, it seeks to enable people to increase control over their health whether they are using substances or not. Human experience is complex. Helping people understand that complexity, and giving them skills to manage it, helps make them actors (rather than victims) in their own lives. That said, no one is completely autonomous. Our choices and behaviours are influenced by a variety of factors, including biology, physical and social environments and events throughout our life course. These factors interact in complex ways to create unique sets of opportunities and constraints for each of us. 
Institutional and community cultures, as well as family and societal values, all influence our behaviour and the impact that behaviour might have on our total health. Since many people use drugs often or in part to promote health and well-being, health promotion along these lines involves helping people manage their substance use in a way that maximizes benefit and minimizes harm. (Indeed, this is how we address other risky behaviours in our everyday lives, including driving and participating in sports.) It means giving attention to the full picture—the substances, the environments in which they are used and in which people live, and the individuals who use those substances and shape the environments. Substances: regulate supply to ensure the quality of substances and enact appropriate restrictions Environments: promote social and physical contexts that encourage moderation and are stimulating and safe Individuals: increase health capacity and resilience and develop active responsible citizens Caffeine, alcohol and other psychoactive drugs tap into the wiring system of the human brain and influence the way nerve cells send, receive or process information. This has led some researchers to categorize drugs according to the type of effect they have on the central nervous system, though some may fit in more than one group. Depressants decrease heart rate, breathing and mental processing—for example, alcohol and heroin Stimulants increase heart rate, breathing and mental processing—for example, caffeine, tobacco or cocaine Hallucinogens make things look, sound or feel different than normal—for example, magic mushrooms or LSD People have been using a wide variety of psychoactive (or mind-altering) drugs for thousands of years to celebrate successes and to help deal with grief and sadness, to mark rites of passage and to pursue spiritual insight. Indeed, drug use is deeply embedded in our cultural fabric. To feel good To feel better To do better Curiosity or new experiences But the use of drugs involves risk. And risk can be associated with significant harm. Some of the harms relate to the short-term intoxicating properties of psychoactive drugs. These harms tend to be acute or immediate (e.g., injuries from car accidents, death from overdose). Other harms relate to chronic conditions (e.g., heart disease, cancers that emerge from longer term use). These vary depending on characteristics of the drug itself or the mode in which it is taken. For example, much of the chronic harm related to tobacco is from inhaling the smoke rather than from the drug (nicotine) itself. The reasons we use a drug influence our pattern of use and risk of harmful consequences. If it is out of curiosity or another fleeting motive, only occasional or experimental use may follow. If the motive is strong and enduring (e.g., relieving a chronic sleep or mental health problem), then more long-lasting and intense substance use may follow. Motives for intense short-term use (e.g., to fit in, have fun or alleviate temporary stress) may result in risky behaviour with high potential for acute harm. Certain places, times and activities also influence our substance use patterns and likelihood of experiencing harm. Unsupervised teen drinking, for example, tends to be a particularly high-risk activity. Being in a situation of social conflict or frustration while under the influence of alcohol or antianxiety drugs (e.g., benzodiazepines) can increase the likelihood of a conflict escalating to violence. 
And using drugs before or while driving, boating or hiking on dangerous terrain increases the risk of injury. The overall social and cultural context surrounding our drug use is often more significant than we think. Consider, for example, the economic availability factor of different drugs: the cheaper and more available they are, the more likely they are to be used. Community norms also influence individual behaviour, and the degree of connection to family, friends and the wider community impact how much, how often, when, where and how we use different substances. Personal factors, including our physical and mental health status, also affect our likelihood of using drugs in risky ways. If we struggle with anxiety or depression, for example, we may try to feel better by drinking alcohol. In some cases, difficult life experiences (e.g., physical, sexual or emotional abuse) may impact our physical or mental health as well as contribute directly to risky drug use. There is also evidence that genetic inheritance and personality or temperament may have an impact. For example, people with a tendency toward sensation-seeking are at higher risk of harm. It goes without saying that certain things about a drug itself—its chemical composition and purity, the amount, frequency of use, method of consuming or administering it—influence the degree of risk and type of harm we might experience. Depressant drugs such as alcohol or heroin have elevated risks related to overdose, for example, whereas heavy use of stimulants can lead to psychotic behaviour. Another case in point: injecting concentrated forms of cocaine is much more risky than chewing coca leaves even though the same drug is involved. When our brain is repeatedly exposed to a drug, it may respond by making several adaptations to re-balance itself. But this balancing act comes at a price. Our brain may become less responsive to a particular chemical so that natural “feel good” sources—exercise, food, sex, fun hobbies, and so on—no longer provide any significant pleasure and we begin to feel flat, lifeless and depressed As a result, we may feel we need to use drugs just to feel normal and sometimes may need to take larger and larger amounts. Changes in the brain can also lead to impairment of our cognitive motor functioning. Conditioning is another side effect of repeated drug use. It can lead us to link things in the environment with our drug experience. Exposure to those cues can later trigger powerful cravings. For example, we may associate drinking coffee with smoking, with one psychoactive substance triggering use of another. Or we might associate the end of a work day with going out for beer. Our minds and bodies can become so adapted to the pattern that we may struggle or be uncomfortable when we break the routine. A common perception in our culture is that some drugs are intrinsically dangerous and possess the power to control human behaviour. According to this notion, a person takes a drug until, one day, the drug takes the person. Once this shift occurs, the person is characterized as “addicted” and powerless to control their substance use. A convenient image that too often comes to mind when we think about addiction is a person who is overwhelmed by their substance use, unemployed, homeless and disconnected from family and friends. But how accurate can this stereotype be? Many of us know people who seem unable to control their drinking, drug use or other behaviour. 
We may, in fact, feel powerless ourselves in certain circumstances or at certain times. Does this feeling of powerlessness mean the drug or some other force is actually controlling us? If so, what are we to make of people who suddenly quit using a substance after years of habitual use? Many people, for example, successfully quit smoking simply by deciding one day not to buy any more cigarettes. A more compassionate and logical perspective on substance use places the focus on the person rather than the drug. It considers the context and reasons why we start and continue to use drugs in the first place. From a “person first” point of view, risky and harmful substance use may be seen as a coping or adaptive response to a situation or condition. Using this approach can help us better explain reallife situations that do not fit neatly into a one-dimensional view of “addiction.” For instance, it helps us understand how some people who inject drugs to cope with trauma can and do continue to work and maintain close relationships. Or how some people use alcohol in ways that might be damaging their physical health while at the same time helping them to build or maintain business and social relationships. One of the best reasons for adopting a “person first” perspective on substance use involves the issue of belonging and our calling as humans to reach out to others when we can. When we look at people as having a disease or being possessed by a power we do not understand, we tend to regard them as “broken” or “alien” and not like us. We label them as an “alcoholic” or “addict,” someone controlled by a substance. But when we adopt a more balanced view which takes into account a range of human factors—from biological to environmental—we see instead a “thinking and feeling human being” who uses particular substances within certain contexts and for specific reasons. In other words, we see someone much more like us. We can begin to understand why some people may feel a sense of dependence on a substance—their only known means to cope—and why they may be reluctant to give it up. “The question is,” said Humpty Dumpty, “which is to be master—that’s all.” Keeping the focus on the person rather than the drug helps us in reaching out to a person who may appear to be “controlled” by their substance use and barely surviving. It also offers a way to support a well-functioning person who regularly uses drugs in harmful ways. In both cases, we affirm self-efficacy rather than seeing a person who use substances as a victim or inferior or somehow less human than others. One way to visualize substance use from a health promotion perspective is to consider a “frogs in a pond” scenario. If the frogs in a pond started behaving strangely, our first reaction would not be to punish them or even to treat them. Instinctively, we would wonder what was happening in the pond—in the soil or water, or among the pond creatures— that was affecting the frogs. This same ecological approach is necessary when we are thinking and talking about people and their relationships with substances, especially in our society where alcohol and other drug use is not only common and largely acceptable but often encouraged and rewarded. We need to keep in mind “the pond”—all of the factors that can contribute to a person’s choices about alcohol and other drugs. All of us—our children, parents, friends, neighbours and coworkers— are influenced by a unique set of opportunities and constraints related to our biology, relationships and environment. 
These influences interact in different ways in each one of us. Indeed, we are complex beings and our behaviours are complex too. Substance use is only one example of a complex behaviour that requires a look at “the pond.” Food and sex also fit this picture. Just as our eating and sexual behaviours are not only about food and sexuality, substance use is not just about substances. To illustrate this point, consider how people drink or don’t drink alcohol for a variety of reasons that have little to do with alcohol itself. Young people, for example, are influenced by the attitudes and behaviours of the key people in their lives, particularly their parents. And young and old alike in our culture are likely to find themselves in situations where they have to make decisions about whether to accept offers to drink or not and, if so, how much, how often, when, where, with whom and so on. These seemingly simple decisions may be based on too many socio-ecological factors to count. Using a socio-ecological model helps us step back and look at the whole picture or the “ecosystem” in which people function. It highlights that each of us is influenced by a unique set of opportunities and constraints shaped by a complex interaction of biological, social and environmental factors that play out over our life course. In other words, it draws attention to the range of influences—from personal characteristics to broad social factors—that shape our behaviours, including those related to substance use. While our personal role—the role of the individual—is always critical, the factors that influence health and wellness in ourselves and our community go far beyond individual choices or even individual capacities. For instance, the risk and protective factors that impact resilience, our ability to rise above or bounce back from adversity, do not reside only within ourselves. Many of the most important factors relate to our relationships (e.g., family, friends) and aspects of our community environment (e.g., norms, availability of alcohol and other drugs). If we think of substance use within a socio-ecological frame, it takes the focus away from the substances. It involves attention to the health behaviours and skills of individuals seeking to manage their lives. But it also includes attention to the environments in which those behaviours and skills play out. A socio-ecological orientation provides a way to reflect on how individual, societal and environmental factors influence and feed back on one another. Many of the things that influence us interact with one another. So, under some conditions, a factor might have a different influence on us than it would under other conditions. For example, a chronically stressful family environment may influence the development of ineffective coping strategies and compromise the learning of healthy habits by children, which may in turn feed into their risky use of alcohol. However, community norms that promote moderation may mitigate risky alcohol consumption, and a mentor program may provide young people with an opportunity to learn positive coping strategies and healthy habits. But it can work the other way too. In a community where norms encourage risky drinking and where supports for individuals are absent, the outcomes for young people and their community may be very different. 
An individual with poorly developed coping strategies may function quite well in a comfortable environment but suddenly become angry when confronted with normal demands in a situation that feels threatening. For example, a program in which clients are asked sensitive questions in a public space is more likely to experience confrontations than a program in which the same questions are explored in a comfortable private environment. The resulting behaviour is not just a matter of individual capacity. Environmental factors—institutional structures, policies and practices—influence immediate behaviours and can contribute to the development of future capacity. The effects of biological, social and environmental factors play out over the life course. For example, the younger a person is when they start using drugs excessively or regularly, the more likely they are to experience harms or develop problematic substance use later in life. Similarly, people who experience repeated trauma early in life are more likely to experience a wide range of problems later on. Life transitions (e.g., entering high school) can also increase vulnerability, while secure attachment and access to supportive resources in early childhood can help us face challenges later in life. Environments that encourage and support young people to make healthy choices can help to build individual capacity. So, for example, a school with clear expectations and restorative practices for dealing with students who break the rules will likely graduate a high proportion of resilient students with the knowledge and skills needed to thrive in life. On the other hand, overly regulated school environments may achieve short-term compliance but are less likely to build in young people the self-management capacity they need to survive and thrive in adulthood. Our communities are social ecosystems where a variety of factors interact to influence the health of the environment and the people who live within it. Therefore, improving the health of our communities involves influencing our health actions, enhancing our health capacities and ensuring health opportunities for all individuals and institutions that make up our communities. An obvious way we can work together to improve the health and well-being of our communities is to collectively recognize substance use as a complex human behaviour and then quickly move beyond this acceptance to focus on what really matters—managing risk and harm related to substance use. Managing risk and harm is both an individual and a social responsibility. When used with care and in the right context, many psychoactive drugs can be beneficial. That is, the positive impact may outweigh the risks involved. When not used with care or in the wrong contexts, the risks can quickly outweigh the benefits. Managing risk and reducing harm—whether it involves substance use or other common but risky human behaviours—require examination of the reasons or motivations for the behaviour and assessment of the risk and protective factors in play. Individuals can engage in self-assessment and seek to maintain moderation (not too much, not too often) in their use of substances. Communities can contribute to reducing the risks and harms related to drug use by promoting a culture of inclusivity and responsibility among citizens, and by addressing the social and economic conditions that might lead to risky drug use. Words have the potential to affect how we feel about ourselves and how we view other people.
Anyone who has responsibility within a community (a family, a school, a social housing program, a drop-in centre) may have opportunities to help end the discriminatory views behind some of the terms we use, and to shape language that helps us speak clearly and promotes inclusion rather than exclusion. Where do we begin? Use simple, general language. Whenever possible, use broad language (e.g., substance use, substance-related harm). This does not label individuals and does not introduce emotionally loaded judgments. Narrower language (e.g., substance use disorders) can be appropriately used when clearly required in the context. Limit the use of negative language. Terms like “substance abuse” have moral overtones. Abuse connotes an action where there is an abuser and a victim. Substances cannot be victims and it is not clear who is experiencing the abuse. But the term suggests moral culpability of the person using the substance, and this is inaccurate and unhelpful. Terms such as “problematic substance use,” while less judgmental, may still constrain the discussion and force attention toward the negative when more balanced language would be more useful. So, for example, saying that problematic substance use by adults may influence the behaviour of young people fails to draw attention to the fact that any pattern of substance use may influence young people (some positively, others negatively). People have been using psychoactive substances for centuries to promote health and well-being. Yet these same substances have caused—or have the potential to cause—harm to both individuals and communities. Therefore, health promotion must revolve around helping people manage their substance use as safely as possible in order for the approach to be meaningful and successful. Ultimately, the goal of health promotion is healthy people in healthy communities. In a healthy community, a high proportion of people are engaged in health-promoting actions, such as following low-risk drinking guidelines, avoiding smoking and adopting safer use techniques. Promoting health actions directly might involve a variety of motivational strategies and social marketing campaigns. However, attention must also be given to building the capacity of people to engage in healthy actions. This requires a focus on health literacy to increase the number of people who have the knowledge and skills necessary to manage their personal health effectively and who are equipped to help others in the community. But healthy action requires more than knowledge and skills. It is not enough to teach people how to be healthy if the social or economic conditions in which they live undermine their ability or motivation to engage in health actions. The third, and probably most important, element of a healthy community is a focus on health opportunity. This requires attention to social justice and health equity. It means advocating for policies and practices that acknowledge the complex circumstances that impact on people’s actions and abilities. It means seeking to create environments free from childhood trauma and other factors that increase the likelihood of substance use problems in youth and adulthood. It means promoting social connectedness that increases meaningful opportunities and reduces isolation and anti-social behaviour. From a health promotion perspective, drug education should be more about developing health literacy (the knowledge and skills needed to manage substance use) than about lifestyle marketing.
Prevention programs should focus on preventing harmful patterns of use rather than on drug use per se. And substance use treatment services should be designed to empower individuals to select their own goals and the services that meet their individual needs and develop their personal skills. Across all services, the focus should be on developing individual and community capacities, giving adequate attention to both healthy public policy and community action, rather than on preventing or “fixing” problems that many of us mistakenly believe belong to the “other people” in society.
Syndactyly, in which two or more fingers are fused together, is a common birth defect. Surgical correction involves cutting the tissue that connects the fingers, then grafting skin from another part of the body. (The procedure is more complicated if bones are also fused.) Surgery can usually provide a full range of motion and a fairly normal appearance, although the color of the grafted skin may be slightly different from the rest of the hand. Other common congenital defects include short, missing, or deformed fingers, immobile tendons, and abnormal nerves or blood vessels. In most cases, these defects can be treated surgically and significant improvement can be expected. Syndactyly requires surgical intervention. Full-term infants can be scheduled for elective surgical procedures as early as 5 or 6 months of age. Surgery before this age can increase anesthetic risks. Prior to that time, no intervention is generally necessary if there are no problems. If there is an associated paronychia, which can occur with complex syndactyly, the parents are given instructions to wash the child's hands thoroughly with soap and water and to apply a topical antibacterial solution or ointment. Oral antibiotics are given when indicated. The timing of surgery is variable. However, the more fingers that are involved and the more complex the syndactyly, the earlier release should be performed. Early release can prevent the malrotation and angulation that develop from differential growth rates of the involved fingers. In persons with complex syndactyly, the author performs the first release of the border digits when the individual is approximately 6 months old. This approach is used because differential growth rates are observed, particularly between the small finger and ring finger or between the thumb and index finger. Prolonged syndactyly between these digits can cause permanent deformities. If more than one syndactyly is present in the same hand, simultaneous surgical release can be performed, provided only one side of the involved fingers is released. For example, in a 4-finger syndactyly involving the index, long, ring, and small fingers, the index finger can be released from the long finger, and the small finger can be released from the ring finger, leaving a central syndactyly involving the long and ring fingers (see Images 27-28). If both hands are involved, bilateral releases can be performed at one operative setting. Perform bilateral releases whenever feasible to reduce the number of surgeries and the associated risks. Postoperative bilateral immobilization of the upper extremities is well tolerated in the child who is younger than 18 months. The increasingly active child who is older than 18 months has a difficult time with bilateral immobilization. Therefore, in children older than 18 months, any procedures must be staged unilaterally. The remaining syndactyly between the long finger and ring finger can be released approximately 6 months later (see Images 29-30). In an individual with isolated central syndactyly between the long finger and ring finger, the release need not be accomplished until the second year of life because of similar growth rates between the long finger and ring finger. It is preferable to complete all major reconstructions before a child reaches school age.
Even without ears, oysters are “clamming up” when they hear too much noise in the ocean. In response to sounds similar to cargo ships, oysters slam their shells closed, seemingly to protect their soft bodies, according to a study published Wednesday in PLOS ONE. Oysters are filter feeders, so noise pollution in the ocean may stunt growth and reduce water quality, the scientists argue. Ocean noise pollution is a known problem for many marine mammals, which use their hearing for survival tasks like navigation and finding food. But little is known about how sound affects invertebrates, which account for the largest number of animals in the sea. A few years ago in a bustling Spanish port, University of Bordeaux physiologist Jean-Charles Massabuau came across an underwater filmmaker. As a large cargo ship crossed the water, the filmmaker surfaced and said, “Wow, I never heard such a noisy spot,” Massabuau recalled. Man-made sounds such as offshore drilling, seismic testing for deep-sea oil, and even the hum from that Spanish cargo ship permeate the ocean at ever-increasing levels. Massabuau’s research involves learning how changes in light, temperature or salinity affect oysters. So after his exchange with the diver, Massabuau wondered, “Can the oysters hear it?” Back in the lab, his team affixed accelerometers to thirty-two oysters to detect when their shells were open or closed. An oyster’s shell position is linked to its well-being. An open shell indicates a relaxed state, while shutting is a marker for stress. Massabuau lowered the animals into two tanks replete with food, currents and seawater pumped from the Bay of Arcachon, France. With an underwater speaker in one of the tanks, he played a variety of sounds, including low frequencies below 200 Hertz that are typically produced by cargo ships. Massabuau found that the oysters rapidly closed their shells at sound frequencies between 10 and 1,000 Hertz. He likens an oyster’s reflexive shutting to the sharp shrug that humans do when startled by an unpleasant sound. “They are aware of the cargo ships,” Massabuau said. “What is for sure is that they can hear. The animals can hear these frequencies.” Many marine organisms can detect vibrations like ones produced by predators. But most definitions of hearing require an organ capable of sound perception, said University of Hull marine biologist Mike Elliott. Oysters don’t have ears like humans, but hair cells similar to ones in the inner ear are found on the gills. These cells sense vibrations, Massabuau said, so whether people call it “hearing” or “sensing sound vibrations” makes little difference to him. Elliott, who was not involved in the study, has conducted research similar to Massabuau’s, but with hermit crabs and mussels. Elliott said when these animals become stressed and hide inside their shells, they stop feeding and breathing, “and sooner or later they start suffering.” But Elliott said it remains unclear if sound pollution can harm these organisms in the long run. “It is quite a big leap from detecting a response [to sound] to if the animal is being harmed by it,” Elliott said. “The big challenge is converting this into a response that denotes harm to the organism.” Massabuau agreed. His lab is investigating whether chronic exposure to unnatural sounds can disturb the growth rates of oysters. In a study now in the process of publication, he reports signs of slowed growth, an indicator of poor health.
What is erythroderma?
Erythroderma is the term used to describe intense and usually widespread reddening of the skin due to inflammatory skin disease. It often precedes or is associated with exfoliation (skin peeling off in scales or layers), when it may also be known as exfoliative dermatitis (ED). Idiopathic erythroderma is sometimes called the ‘red man syndrome’.
Who gets erythroderma and what is the cause?
Erythroderma is rare. It can arise at any age and in people of all races. It is about 3 times more common in males than in females. Most patients have a pre-existing skin disease or a systemic condition known to be associated with erythroderma. About 30% of cases of erythroderma are idiopathic. Erythrodermic atopic dermatitis most often affects children and young adults, but other forms of erythroderma are more common in middle-aged and elderly people. The most common skin conditions to cause erythroderma are:
- Drug eruption — with numerous diverse drugs implicated (list of drugs)
- Dermatitis, especially atopic dermatitis
- Psoriasis, especially after withdrawal of systemic steroids or other treatment
- Pityriasis rubra pilaris
Other skin diseases that less frequently cause erythroderma include:
- Other forms of dermatitis: contact dermatitis (allergic or irritant), stasis dermatitis (venous eczema) and, in babies, seborrhoeic dermatitis or staphylococcal scalded skin syndrome
- Blistering diseases including pemphigus and bullous pemphigoid
- Sézary syndrome (the erythrodermic form of cutaneous T-cell lymphoma)
- Several very rare congenital ichthyotic conditions
Erythroderma may also be a symptom or sign of a systemic disease. These may include:
- Haematological malignancies, eg lymphoma, leukaemia
- Internal malignancies, eg carcinoma of rectum, lung, fallopian tubes, colon
- Graft-versus-host disease
- HIV infection
It is not known why some skin diseases in some people progress to erythroderma. The pathogenesis is complicated, involving keratinocytes and lymphocytes, and their interaction with adhesion molecules and cytokines. The result is a dramatic increase in turnover of epidermal cells.
What are the clinical features of erythroderma?
Erythroderma is often preceded by a morbilliform (measles-like) eruption, dermatitis, or plaque psoriasis. Generalised erythema can develop quite rapidly in acute erythroderma, or more gradually over weeks to months in chronic erythroderma.
Signs and symptoms of erythroderma
Generalised erythema and oedema affect 90% or more of the skin surface.
- The skin feels warm to the touch.
- Itch is usually troublesome, and is sometimes intolerable. Rubbing and scratching leads to lichenification.
- Eyelid swelling may result in ectropion.
- Scaling begins 2-6 days after the onset of erythema, as fine flakes or large sheets.
- Thick scaling may develop on the scalp, with varying degrees of hair loss including complete baldness.
- Palms and soles may develop yellowish, diffuse keratoderma.
- Nails become dull, ridged, and thickened or develop onycholysis and may shed (onychomadesis).
- Lymph nodes become swollen (generalised dermatopathic lymphadenopathy).
Clues may be present as to the underlying cause.
- Serous ooze, resulting in clothes and dressings sticking to the skin and an unpleasant smell, is characteristic of atopic erythroderma.
- Persistence of circumscribed scaly plaques in certain sites such as elbows and knees suggests psoriasis.
- Islands of sparing, follicular prominence and an orange hue to the keratoderma are typical of pityriasis rubra pilaris.
- Subungual hyperkeratosis, crusting on palms and soles, and burrows are indicative of crusted scabies.
- Sparing of abdominal creases (deck chair sign) is typical of papuloerythroderma of Ofuji.
Systemic symptoms may be due to the erythroderma or to its cause.
- Lymphadenopathy, hepatosplenomegaly, liver dysfunction and fever may suggest a drug hypersensitivity syndrome or malignancy.
- Leg oedema may be due to inflamed skin, high-output cardiac failure and/or hypoalbuminaemia.
Complications of erythroderma
Erythroderma often results in acute and chronic local and systemic complications. The patient is unwell with fever and temperature dysregulation, and loses a great deal of fluid by transpiration through the skin.
- Heat loss leads to hypothermia.
- Fluid loss leads to electrolyte abnormalities and dehydration.
- Red skin leads to high-output heart failure.
- Secondary skin infection may occur (impetigo, cellulitis).
- General unwellness can lead to pneumonia.
- Hypoalbuminaemia from protein loss and increased metabolic rate causes oedema.
- Longstanding erythroderma may result in pigmentary changes (brown and/or white skin patches).
How is erythroderma diagnosed?
The blood count may show anaemia, white cell count abnormalities, and eosinophilia. Marked eosinophilia should raise suspicion of lymphoma.
- A finding of more than 20% circulating Sézary cells suggests Sézary syndrome.
- C-reactive protein may or may not be elevated.
- Serum proteins may reveal hypoalbuminaemia, and liver function tests may be abnormal.
- Polyclonal gamma globulins are common, and raised immunoglobulin E (IgE) is typical of idiopathic erythroderma.
Skin biopsies from several sites may be taken if the cause is unknown. They tend to show nonspecific inflammation on histopathology; however, diagnostic features may be present. Direct immunofluorescence is of benefit if an autoimmune blistering disease or connective tissue disease is considered.
What is the treatment for erythroderma?
Erythroderma is potentially serious, even life-threatening, and most patients require hospitalisation for monitoring and to restore fluid and electrolyte balance, circulatory status and body temperature. The following general measures apply:
- Discontinue all unnecessary medications
- Monitor fluid balance and body temperature
- Maintain skin moisture with wet wraps, other types of wet dressings, emollients and mild topical steroids
- Antibiotics are prescribed for bacterial infection
- Antihistamines may reduce severe itch and can provide some sedation
How can erythroderma be prevented?
In most cases, erythroderma cannot be prevented. People with a known drug allergy should be made aware that they should avoid the drug permanently and, if their reaction was severe, wear a drug alert bracelet. All medical records should be updated if there is an adverse reaction to a medication, and referred to whenever a new drug is started. Patients with severe skin diseases should be informed if they are at known risk of erythroderma. They should be educated about the risks of discontinuing their medication.
What is the outlook for erythroderma?
Prognosis of erythroderma depends on the underlying disease process. If the cause can be removed or corrected, prognosis is generally good. If erythroderma is the result of a generalised spread of a primary skin disorder such as psoriasis or dermatitis, it usually clears with appropriate treatment of the skin disease but may recur at any time. The course of idiopathic erythroderma is unpredictable.
It may persist for a long time with periods of acute exacerbation.
Crustaceans are the main component of the yellow-crowned night-heron’s diet (2) (3), which includes marsh, mud, swimming, land and beach crabs, as well as crayfish, fish, aquatic invertebrates, mussels and leeches. In drier areas, this species may also take terrestrial arthropods, lizards, small snakes (2), mice, rabbits and young birds (3). Hunting is usually done individually (3), with most activity occurring during the night, or at dusk and dawn (2), especially during the breeding season (4). This species forages by slowly stalking its prey until it is close enough to attack. The individual will lunge towards its prey and capture it within the bill, consequently swallowing it whole, or shaking, crushing or spearing it into smaller pieces (3) (4). The shape and size of the bill varies between each subspecies, and it has been suggested that different populations have evolved their specific characteristics due to the prey availability within their habitat (2). The courting routine of the male yellow-crowned night-heron involves display flights and neck stretching, which a receptive female may copy (3). Once the pair bond is formed it is thought to last for one breeding season (4). In the northern parts of its range, the female yellow-crowned night-heron lays eggs between March and June, while eggs are laid later in the year in the south, usually between August and October. Small colonies of nesting birds are common, although this species may also nest alone. The nest is usually built in a bush or tree. The outer layer of the nest is made of sticks, and lined inside with thin twigs, roots, grass or leaves (2). Both sexes contribute to the construction of the nest, with the female staying on the nest site while the male collects sticks (3) (4). As the nest is nearing completion, the female also gathers sticks (4), which are usually taken directly from trees, rather than from the ground (3) (4). The nest is generally complete after around 11 days, and between 2 and 8 eggs are laid by the female shortly afterwards. Females from northern populations generally have a larger clutch than those in the south (3). Both sexes incubate the eggs (3), which hatch after 21 to 25 days (2). Once the eggs have hatched, the male and female share brooding responsibilities (4). The young fledge the nest around 25 days after hatching (2). All subspecies of the yellow-crowned night-heron are sedentary, except for the nominate subspecies. Nyctanassa violacea violacea migrates from its northern breeding grounds in September to overwinter in Central America and the Caribbean, returning to the north to breed in March (2).
True / False Flags
I'm still pretty busy, but yesterday I spent the whole day doing this DIY project for my practicum, so I just wanted to share what I made with you. I had to teach a class of 6th graders and I wanted to amp it up a bit because the topic wasn't very interesting. Therefore, I decided to create these true / false flags for the students to use during a typical true/false activity. I read some statements connected to the text we were working on and they had to raise a "T" flag if a statement was correct and an "F" flag if it was false. They were ecstatic when they saw the flags, so if you have some free time, try spending it on creating them; your students will love you and you'll be able to reuse them for sure. So, you'll need the following:
- some long and thick straws in two colors
- some paper in two colors
Cut the straws in half. The easiest and fastest way to do that is to measure the first one and then just use the already-cut one as a stencil. Cut the paper in half, so that you get two long pieces. You can also use the first one as a stencil. Fold the paper in half, put some glue in the middle, and put a straw on it. Put some more glue on one side of the paper, fold the rest over and stick it together. Write "T" on the flags of one color and "F" on the other. And that's it! I hope this helps and I would like to see your recreations if you try it out. :)
According to the Centigrade temperature scale, now known as Celsius, water freezes at zero degrees and boils at 100 degrees. According to the Fahrenheit scale, water freezes at 32 degrees and boils at 212 degrees. To convert Celsius to Fahrenheit, multiply by nine, divide by five and add 32. To convert Fahrenheit to Celsius, subtract 32, multiply by five and divide by nine. Celsius is universally used in the sciences. Anders Celsius and Daniel Gabriel Fahrenheit both developed temperature scales in the early 1700s. Soon after, Jean-Pierre Christin modified the boiling and freezing points in Celsius' scale and named it Centigrade. The name was changed to Celsius in 1948 as part of a global harmonization initiative. As of 2015, Celsius is the more common scale worldwide, with the notable exception of the United States, which uses Fahrenheit.
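As a quick check of the rules just described, here is a minimal Python sketch (the function names are only illustrative) applying both conversions to the reference points quoted above.

```python
# A minimal sketch of the two conversion rules described above.

def celsius_to_fahrenheit(c: float) -> float:
    """Multiply by nine, divide by five and add 32."""
    return c * 9 / 5 + 32

def fahrenheit_to_celsius(f: float) -> float:
    """Subtract 32, multiply by five and divide by nine."""
    return (f - 32) * 5 / 9

print(celsius_to_fahrenheit(0))    # 32.0  (water freezes)
print(celsius_to_fahrenheit(100))  # 212.0 (water boils)
print(fahrenheit_to_celsius(212))  # 100.0 (back again)
```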
Forests are essential for existence on Earth. Beyond giving us physical resources like timber, they provide clean air, safer water and animal habitats whilst counteracting climate change. Indigenous people living in them rely on them even more. Yet sadly, we continue to put all of this at risk, with loss occurring at the rate of 50 football fields a minute. Responsible forestry is crucial in order to prevent the damage caused by illegal deforestation, relentless demand for goods derived from forests' finite supplies and inadequate forest management practices. It is important to learn about our forests because cultivating connections with nature can become inspiration for looking after our natural heritage. Knowledge about the effects humans are having – especially those endangering forests' future – could be transformative. Nearly every single aspect of daily life involves taking from this world, and that rarely includes giving back to an equivalent degree. One of the challenges with encouraging more sustainability-focused thinking, which turns attitudes into action, seems to be that we do not get to see or consider the consequences of particular actions. After all, how are you going to care about something if it feels altogether too separate from your usual concerns? Whilst certain forests within South America, Asia and Africa are notably the most affected, the first step to making a difference internationally should perhaps be finding out what to do locally. Afforestation programs, and drives that raise awareness about the scientific mechanisms by which forests promote balance, are probably a good start. For the good of our planet, let us ensure forests, and those who call these beautiful places home, have a sustainable future ahead of them. Keenly observing natural heritage and learning about it by experience provides educational experiences for visitors, groups, families, schools and colleges. If we integrate the principles of conservation into tourism at national parks and biodiversity reserves, the effort to spread awareness will be amplified considerably. Altering science textbooks to explain the mechanism of the water cycle in a manner that talks about consequences rather than just ‘processes’ might also be a way to ignite young minds towards striving for a more sustainable future.
Searching for enlightenment, cultures from the ancient Minoans to Native Americans placed this potent symbol in their holy places. Later Christians installed labyrinths on cathedral floors. As medieval pilgrims traced the maze, they meditated on their path, as seekers still do today. This Minoan Labyrinth, the oldest labyrinth design known in our world, comes from the Mediterranean island of Crete. Its function was to hold the Minotaur, a mythical creature that was half man and half bull and was eventually killed by the Athenian hero Theseus.
Presentation transcript: English II (Figurative Language)

Figurative language is language that uses words or expressions with a meaning that is different from the literal interpretation. It uses exaggerations or alterations to make a particular point, not just stating the facts. Examples: imagery, simile, metaphor, symbolism, allegory, allusion.

Imagery: the use of vivid language to represent objects, actions, or ideas. Used to create an image or spark a memory by stimulating one of the five senses. Examples: "Muddy banks covered in tangled water plants lead to large rocks that increase in size" -- used to describe the setting in The Hunger Games. "The Radley place jutted into a sharp curve beyond our house... The house was low, was once white with a deep front porch and green shutters, but had long ago darkened to the color of the slate-grey yard around it. Rain-rotted shingles drooped over the eaves of the veranda; oak trees kept the sun away." -- description of the Radley house in TKaM.

Simile: a figure of speech that makes a comparison between two unlike things using the word like or as. Examples: "She looked and smelled like a peppermint drop." -- Scout describing Miss Caroline. "The Radley Place fascinated Dill. In spite of our warnings and explanations it drew him as the moon draws water…" -- Scout describing Dill’s obsession with Boo Radley’s house.

Metaphor: a figure of speech that makes a comparison between two unrelated things without using like or as. Examples: "Then I heard Atticus cough. I held my breath. Sometimes when we made a midnight pilgrimage to the bathroom we would find him reading." -- compares a trip to the bathroom with a pilgrimage. "I had never thought about it, but summer was Dill by the fish pool smoking string, Dill’s eyes alive with complicated plans to make Boo Radley emerge; summer was the swiftness with which Dill would reach up and kiss me when Jem was not looking…" -- compares summertime to Scout’s relationship with Dill.

Symbolism: the use of symbols to signify ideas and qualities by giving them symbolic meanings that are different from their literal sense. Examples: in The Hunger Games, the mockingjay is a symbol for rebellion; in TKaM, the mockingbird is a symbol for goodness and innocence.

Allegory: a literary representation in which a literary work actually has a deeper meaning; sometimes the actions, events, people and things on the surface level of the story actually represent ideas. Example: Atticus shooting the rabid dog (Chapter 10). The dog itself symbolizes racism. Atticus's willingness to shoot the dog parallels his willingness to take on Tom Robinson's case. The dog is described as being just as dangerous dead as alive. So, too, is the racism in the town. While Atticus may attack that racism in court, no matter what the outcome of the trial, the racism is still rampant, still dangerous whether dead (an acquittal) or alive (a conviction).

Allusion: an indirect reference to a famous person, place, event, or literary work. Examples from Chapter 1: Andrew Jackson: 7th President of the United States (1829-1837). Disturbance between the North and the South: the Civil War (1861-1865). Dracula: the 1931 film version of the famous vampire story. John Wesley: founder of the Methodist Church. Merlin: King Arthur's adviser, prophet and magician. Mobile: a city in southwest Alabama. Nothing to fear but fear itself: an allusion to President Franklin D.
Roosevelt's first Inaugural Address.

Decide whether the following is an example of a simile, metaphor, symbolism, allegory, or allusion.

On Valentine’s Day, many boys turn into Romeos. "Romeo" is a reference to Shakespeare’s Romeo, a passionate lover of Juliet, in "Romeo and Juliet".

The murderer’s eyes were as black as coal as he stared down his next victim. The color of the murderer’s eyes is being compared to the color of coal using "as".

On the surface, C.S. Lewis's The Chronicles of Narnia: The Lion, the Witch, and the Wardrobe is a novel about four children who visit an enchanted land and meet a talking lion. However, the lion is actually a symbol for Jesus Christ, and the character of Edmund is a symbol for Judas. Through these symbols, a classic story of betrayal is told. The Chronicles of Narnia: The Lion, the Witch, and the Wardrobe seems, at its basic level, like an everyday child’s story. However, the characters and plot line have a deeper figurative meaning.

Time is a thief that steals our youth, our memories, and, finally, our breath. Time is being compared to a thief that steals things without using ‘like’ or ‘as’.

"All the world's a stage, and all the men and women merely players; they have their exits and their entrances; and one man in his time plays many parts". What are the stage and actors? William Shakespeare, As You Like It. Shakespeare is using the idea of a play and actors as symbols for people living their day-to-day lives. BONUS! Because this quote literally says one thing but figuratively means another, it can also be considered what?

Write your name, date, and period. Answer the following questions with complete sentences. What are the 6 types of figurative language we discussed today? What is an example of a SYMBOL from everyday life? Why do authors use figurative language in their writing?
Micro-sized machines operate under very different conditions than their macro-sized counterparts. The high surface-area-to-mass ratio of tiny motors means they require a constant driving force to keep them going. In the past, researchers have relied on asymmetric chemical reactions on the surface of the motors to supply the force. For example, Janus motors are spherical particles coated with a different material on each side. One of the sides is typically made of a catalyst like platinum, which speeds up the reaction that converts hydrogen peroxide into water and oxygen. When the Janus motor is immersed in hydrogen peroxide, oxygen bubbles form more quickly on the platinum side, pushing the sphere forward. [Image: a possible application of chemical micromotors. Credit: Daigo Yamamoto/Doshisha] Researchers from Doshisha University in Kyoto, Japan, have now discovered, however, that two-sided materials aren't necessary to make micromotors move. The researchers placed tiny spheres made only of platinum in hydrogen peroxide and observed the particles' movement through a microscope. Although the individual spheres bounced about randomly, the researchers noticed that clumps of particles began to exhibit regular motions. The clumps shaped like teardrops moved forward, those that resembled windmills started to spin, and the boomerang-shaped clumps traveled in a circle. After creating a theoretical model of the forces at work, the researchers realized they could explain the regular motions by the asymmetrical drag generated by the different shapes. The researchers envision combining their new type of motors with existing motors to create easily controllable machines with a versatile range of motions. Micro- and nano-sized machines may one day ferry drugs around the body or help control chemical reactions, but the Japanese team also sees a more fundamental reason to study such tiny systems. "Micromotors may be used not only as a power source for micromachines and microfactories, but may also give us significant insight regarding mysterious living phenomenon," said Daigo Yamamoto, a researcher in the Molecular Chemical Engineering Laboratory at Doshisha University and an author on the paper that describes the new motors. Source: American Institute of Physics
Are you, like many of us, feeling helpless about the impending climate crisis? Maybe actively participating in scientific efforts to make sense of the changing climate, so we can find ways to stop its destruction, can give your mind a little bit of peace. Old Weather is a project with the main goal of studying a wide range of climate phenomena in order to understand their impact on the global environment. They do this by looking at weather observations. According to Kevin Wood, the Lead Investigator of the Old Weather Arctic Project, information about the weather in the past is crucial to understanding what is happening to our climate in the present and what will happen to it in the future. Old Weather's information about the history of our climate is collected by investigating sea journeys as old as 150 years. Old Weather has amassed a large number of logbooks from ships, mostly military and whaling vessels, dating back to the 19th century. These logbooks, along with logistical details about the journeys themselves, also contain detailed descriptions of the weather and sea-ice conditions of their time. This is valuable data for the scientists at Old Weather, especially because the logs belong to ships that patrolled the Arctic. The area is now a matter of major concern in terms of climate destruction. Old Weather takes this data and plugs it into computer models specifically made to improve climate projections based on past conditions. Information about what the weather and sea-ice conditions were like in the past is not only crucial for climate scientists to make sense of the present and predict the future of our climate, but it’s also extremely valuable for historians seeking to better understand the context in which world-changing historical events occurred. Old Weather, however, has one big obstacle in their pursuit: the fact that the data they are trying to recover all comes from old ships' logs means that all of it is handwritten by hundreds of different people. For the scientists, this means that they cannot automate the transcription of these ships' logs, because computers are not able to decipher human handwriting accurately. This is where concerned global citizens came in. Old Weather works with volunteers from all over the world to get the handwritten information from old ships' logs into computer models. People decipher the information in these old logbooks and deliver it to the scientists at Old Weather. Anyone who is comfortable with the English language can sign up to become a citizen scientist with Old Weather. So, if you want to actively contribute to climate science fighting the destruction of our living planet, you can do so by volunteering with Old Weather. As a bonus, you'll get to read original historical documents detailing epic adventures of sailing the seas.
Everyone knows that it’s past time to prune when a large branch drops onto their lawn or car. But many people wonder exactly when a tree first needs to be pruned. When it’s 50 years old? 20? 10? Pruning serves many purposes – and eliminating dead branches is just one of them. For a young tree, pruning offers the opportunity to correct many of the issues that will cause problems later on – and even lead to an early demise. One example: co-dominant stems. A co-dominant stem simply means the tree splits into 2 or more vertical stems (trunks). While for some trees, such as birch, multiple stems growing from the base are common and natural, for most trees this is undesirable. When stems are competing in a tree, there is inherent weakness. In a mature tree, the double trunk may not be strong enough to support the heavy canopy. Tree failure is common in this situation. When we see this developing in a young tree, we have the opportunity to choose one trunk and prune away the other wood, leading to a safer, healthier tree. Even in a mature tree, when it’s too late to eliminate one of the stems, the crown can be reduced to lessen the weight load supported by the weaker stems. When arborists look at a tree, we look at the “scaffold”: the arrangement of stem and branches. We want to see strong unions between each branch and the stem. Usually, each branch is joined to the trunk through a “branch collar” a ring of strong wood that supports the developing branch. The union between stem and branch should be smooth, like the letter “U.” If branches or stems are too close together, as in the case of a co-dominant stem, this smooth, strong union is unable to form correctly. Instead we see a “V” between the two rivals. This V zone between the stems or branches is vulnerable to splitting (see the picture). With periodic pruning throughout the life of a tree, we can create a strong, balanced scaffold as the tree develops, which leads to a healthier and safer tree. As trees mature, we want them to grow with a certain symmetry. This isn’t just because it is pleasing to the eye: symmetry means the weight of branches and leaves will be balanced, making the tree more stable. An arborist will also remove branches that are crossing or competing for the same space. This eliminates damage from branches rubbing into each other, and also opens up more space among the branches. This space increases airflow beneath the canopy which creates a less hospitable climate for insects and disease to spread. Pruning a tree regularly throughout its life will actually reduce problems later on. It’s similar to seeing a dentist: periodic checkups and minor dental work throughout your life are preferable to procrastinating until you need a root canal or lose a tooth. There are, of course, many other reasons to prune: to keep a tree from impinging on a house or wires, or to open up a view, to name a couple. Although pruning can be done at any time of year, I feel winter is the best time. Winter creates a more sanitary environment (no bugs!), and of course, it’s easier to get a clear view of a tree without its leaves.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. Click here for a complete list of Reading Like a Historian lessons, and click here for a complete list of materials available in Spanish.
Mucociliary clearance is the way in which our sinuses clean themselves out. This occurs on a daily basis. Mucus is produced by the cells on the mucous membranes that line the nose. These cells have little fingers (cilia) that sweep mucus out of the sinuses and out of the nose. The mucus is then pushed to the back of the nose, swallowed and destroyed. Mucociliary clearance occurs in the lungs as well as in the nose and sinuses. It works 24 hours a day, 365 days a year. There is no commonly used test to see how well mucociliary clearance works. When mucociliary clearance works, harmful bacteria and viruses are removed from the nose. When it does not work, bacteria or viruses can grow and start an infection. When infection occurs, swelling in the nose and sinuses develops, and this causes headache, congestion and drainage. The sinuses are cavities, or caves, that each have a single narrowed opening. If the opening swells shut (from allergies, infection or an irritant), mucociliary clearance will not be able to clean out the sinuses. Debris, bacteria and viruses become trapped. This is thought to be how most sinus infections start. Mucociliary clearance can be impaired or stopped by certain conditions. Bacterial or viral infections damage the cells that perform the clearance; the damage can be temporary or permanent. Smoke, chemicals or other irritants will also stop clearance. Dehydration or dusty conditions affect the mucus itself: it becomes thick and sticky or is simply not produced. When the clearance stops, infection, headaches, congestion and drainage start.
Honey is what bees make from nectar in order that it will keep; as such it contains what is present in the nectar minus some water. In fact it is much more complicated than that. Honey is a complex mixture whose components include vitamins and minerals. Approximately 17-20% of honey is water. One of the most important steps in the process is to reduce the water content to below about 20%, because this has the effect of making honey indigestible to micro-organisms. The amount of water the bees can remove depends on:
- the amount of water present in the nectar in the first place;
- the weather;
- the strength of the colony.
The amount of water in a nectar varies with the species of plant and with the weather. For instance, dandelion nectar has a higher sugar content than that of apple, which produces a nectar that is ‘runny’ by comparison. Superimposed on this is the weather effect. Many flowers tend to point their faces to the sun; this makes them very conspicuous to insects, but if it rains then the flower fills with water and the nectar is diluted. When ripening honey, bees fan at the entrance to pull air through the hive and over the honey to evaporate water from its surface. The warmer and drier the air, the faster the ripening. The amount of air passing over the honey also speeds evaporation, and this last factor is related to the number of bees available for fanning. So if the weather is cold and wet and/or colony strength is down, then the job is much more difficult and the honey may contain more water than is desirable. So far, honey contains what was in the nectar minus a quantity of water. Of the solids in honey, that is, if all the water were to be removed, 95-99.9% is sugar of one sort or another, from the simple to the complex. To understand the complexity, and the simplicity, of sugars and the actions of enzymes it is useful to know a little about carbohydrate chemistry.
2.1. Carbohydrate chemistry
Simple sugars are called monosaccharides. There are very many different species of monosaccharides, but what they all have in common is that they are built from a number of carbohydrate sub-units. The basic formula of a carbohydrate is CH2O or H-C-OH. Each molecule of carbohydrate may be thought of as one of the vertebrae which, linked together, make up the spine of a simple sugar and to which other chemical appendages may, or may not, be attached. Monosaccharides can have a spine of three, four, five, six or seven individual carbohydrates and are known as triose, tetrose, pentose, hexose or heptose monosaccharides respectively. Glucose and fructose are hexose monosaccharides and both are very common in honey. In Figure 1 below, each C represents a carbon atom, H is a hydrogen atom and O is an oxygen atom. The black lines between them represent the bonds holding the individual atoms together as a molecule. It is important to understand that the atoms illustrated here are not welded permanently into place – it is more as if they are held together by forces similar to magnetism. Figure 1. Carbohydrate skeletons of 4 monosaccharides. There are two ways in which each of the above molecules can assemble, so that two forms of each exist which are essentially the mirror image of each other – rather like a pair of gloves – and they are known as the L and D forms. D is for ‘dexter’, which is Latin for right; L is for ‘læve’, Latin for left – although these refer to the direction in which they rotate polarised light and not their handedness.
Sugar molecules, such as those above, in a solution such as in honey, tend to curl up into rings with an oxygen atom forming the clasp, so to speak. There are two ways they can do this too, and they are known as the α and β configurations (see Figure 2 below). A solution of pure glucose will contain a mixture of the α and β forms. Figure 2. α- and β-D-glucose. There are many ways the different forms of the various monosaccharides can link up, but a molecule made of two linked monosaccharides is always termed a disaccharide. Examples of disaccharides would be:
- sucrose, which is made of α-D-glucose and β-D-fructose (see Figure 3 below);
- maltose, which is made up of two α-D-glucose molecules.
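To make the quantities and classifications above concrete, here is a small illustrative Python sketch. The 18% water and 97% sugar-of-solids figures are simply example values picked from the ranges quoted in the text, and the function and variable names are hypothetical.

```python
# Illustrative only: example values chosen from the ranges given in the text
# (17-20% water; 95-99.9% of the solids are sugars of one sort or another).

def sugar_per_100_g(water_fraction: float = 0.18, sugar_fraction_of_solids: float = 0.97) -> float:
    """Approximate grams of sugar in 100 g of honey."""
    solids = 100 * (1 - water_fraction)       # everything that is not water
    return solids * sugar_fraction_of_solids  # most of the solids are sugars

print(round(sugar_per_100_g(), 1))  # ~79.5 g of sugars per 100 g of honey

# The classification used above: monosaccharides named by the length of their
# carbohydrate "spine", and disaccharides as pairs of linked monosaccharides.
spine_length_names = {3: "triose", 4: "tetrose", 5: "pentose", 6: "hexose", 7: "heptose"}
disaccharides = {
    "sucrose": ("alpha-D-glucose", "beta-D-fructose"),
    "maltose": ("alpha-D-glucose", "alpha-D-glucose"),
}
print(spine_length_names[6])     # glucose and fructose are both hexoses
print(disaccharides["sucrose"])  # ('alpha-D-glucose', 'beta-D-fructose')
```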
Ecological succession is the orderly process of establishing and developing a community. It occurs over time and ends when a stable community is established in the area.
The Succession Steps
Let's take as an example a completely uninhabited region, like a bare rock. The set of conditions for plants and animals to survive or settle in this environment is very unfavorable:
- Direct lighting causes high temperatures;
- The absence of soil makes it difficult for plants to take root;
- Rainwater does not settle and quickly evaporates.
Living beings able to settle in such an environment must be well adapted and undemanding. These are the lichens (an association of cyanobacteria with fungi), which can survive with only water, light and a small amount of mineral salts. This characterizes the formation of a pioneer community, or ecesis. Because they are the first organisms to settle, lichens are called "pioneer organisms." The metabolic activity of lichens slowly changes the initial conditions of the region. Lichens produce organic acids that gradually erode the rock, forming the first layers of soil. Layer upon layer, the lichens form an organic carpet that enriches the soil, leaving it moist and rich in mineral salts. From then on, the now less unfavorable conditions allow the appearance of small plants, such as bryophytes (mosses), which need only a small amount of nutrients to develop and reach the reproductive stage. New and constant modifications follow one another, allowing the appearance of larger plants such as ferns and shrubs. Small animals like insects and mollusks also begin to appear. In this way, step after step, the pioneer community evolves, until the speed of the process begins to gradually decrease, reaching a point of equilibrium in which the ecological succession reaches its maximum development compatible with the physical conditions of the place (soil, climate, etc.). This community is the final step in the succession process, known as the climax community. Each intermediate step between the pioneer community and the climax is called a sere.
The characteristics of a climax community
By observing the process of ecological succession we can identify a progressive increase in species diversity and in total biomass. Food webs and food chains become increasingly complex and new niches are constantly forming. The stability of a climax community is largely associated with increased species variety and the complexity of feeding relationships. This is because having a complex and multidirectional food web makes it easier to circumvent the instability caused by the disappearance of a particular species. Simpler communities have fewer food choices and are therefore more unstable. It is easy to imagine this instability when we observe how an agricultural monoculture is susceptible to pest attack. Although total biomass and biodiversity are higher in the climax community, there are important differences in primary productivity. Gross productivity (total organic matter produced) in climax communities is large and higher than in predecessor communities. However, net productivity is close to zero, since all organic matter that is produced is consumed by the community itself. This is why a climax community is stable, meaning it is no longer expanding. In pioneer and intermediate (seral) communities, there is a surplus of organic matter (net productivity) that is used to drive the ecological succession forward.
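The productivity argument can be written as a simple balance: net productivity equals gross productivity minus community respiration. The sketch below uses made-up illustrative numbers in arbitrary units, chosen only to show the contrast between a developing community and a climax community.

```python
# Made-up numbers in arbitrary units (e.g. tonnes of organic matter per hectare per year),
# chosen only to illustrate the contrast described in the text.

def net_productivity(gross_productivity: float, community_respiration: float) -> float:
    """Organic matter produced minus organic matter consumed by the community."""
    return gross_productivity - community_respiration

# Developing (pioneer/seral) community: production outpaces consumption,
# so there is a surplus that fuels further succession.
print(net_productivity(gross_productivity=20.0, community_respiration=12.0))   # 8.0

# Climax community: gross productivity is high, but the community consumes
# nearly all of it, so net productivity is close to zero and biomass is stable.
print(net_productivity(gross_productivity=30.0, community_respiration=29.5))   # 0.5
```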
Expected ecosystem trends throughout (primary) succession:

| Attribute | Early (pioneer) stages | Mature (climax) stages |
| --- | --- | --- |
| Physical conditions | variable and unpredictable | constant or predictably variable |
| Mechanisms determining population size | abiotic, density-independent | biotic, density-dependent |
| Life cycles | short, simple | long, complex |
| Growth | fast, with high mortality | slower, with lower mortality |
| Stratification (spatial heterogeneity) | low | high |
| Species diversity (richness) | low | high |
| Species diversity (evenness) | low | high |
| Total organic matter | small | large |
| GPP / R (gross production / respiration) | greater than 1 | close to 1 |
| GPP / B (gross production / biomass) | high | low |
| Nutrient exchange between organisms and environment | rapid | slow |
| Role of detritus in nutrient regeneration | unimportant | important |
| Potential for exploitation by humans | high | low |
| Ability to resist exploitation | poor | good |
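To make the productivity argument concrete, here is a minimal Python sketch of the relation net productivity = gross productivity - community respiration. The function name and the stage figures are hypothetical, chosen only to illustrate the pioneer-to-climax trend; they are not measurements from any real ecosystem.

```python
# Illustrative sketch of the productivity relationship described above:
#   net primary productivity (NPP) = gross primary productivity (GPP) - community respiration (R)
# All numbers are hypothetical, chosen only to show the pioneer-to-climax trend.

def net_productivity(gpp: float, respiration: float) -> float:
    """Net productivity left over after the community's own consumption."""
    return gpp - respiration

# (stage, GPP, R) in arbitrary units of organic matter per area per year
stages = [
    ("pioneer",      100.0,  40.0),  # large surplus -> community keeps expanding
    ("intermediate", 300.0, 220.0),  # surplus shrinks as the web of consumers grows
    ("climax",       500.0, 495.0),  # almost everything produced is consumed -> NPP near zero
]

for name, gpp, resp in stages:
    npp = net_productivity(gpp, resp)
    print(f"{name:12s}  GPP={gpp:6.1f}  R={resp:6.1f}  NPP={npp:6.1f}  GPP/R={gpp / resp:4.2f}")
```

Note how GPP/R approaches 1 and NPP approaches zero at the climax stage, matching the trend summarized in the table above.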
| IC Number | IC Name |
| --- | --- |
| 74LS14 | Hex Schmitt-Trigger Inverters |

The 74LS14 has standard (totem-pole) outputs; hex inverters with open-collector outputs are available as the 74LS05. An open collector is a common type of output found on many integrated circuits (ICs): it behaves like a switch that is either connected to ground or left disconnected, so an external pull-up resistor is normally needed to produce a logic-high level.

A NOT gate is a digital circuit with a single input and a single output. The output of a NOT gate is the logical inversion of its input; for this reason, the NOT gate is also called an inverter. Logic NOT gates are available as standard digital circuits for producing this logical function.

The standard NOT gate symbol is a triangle pointing to the right with a small circle at its output. This circle is known as the "inversion bubble". Logic NOT gates produce the complement of their input signal and are so called because when their input signal is HIGH their output will NOT be HIGH, and likewise, when their input signal is LOW their output will NOT be LOW. Because they are single-input devices, logic NOT gates are not normally classed as "decision-making" devices.

Boolean expression: Y = A' ("if A is true, then Y is false"). The NOT gate produces an output Y that is the complement of its input A.
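To make the inverter's behaviour concrete, here is a minimal Python sketch of the truth table implied by Y = A'. It is only an illustrative software model (the function name not_gate is my own); it is not vendor code or a timing-accurate simulation of the 74LS14.

```python
# Illustrative software model of a logic NOT gate (inverter): Y = A'.
# This only demonstrates the truth table; it does not model any specific IC.

def not_gate(a: int) -> int:
    """Return the logical inversion of a single-bit input (0 -> 1, 1 -> 0)."""
    return 0 if a else 1

print("A | Y = A'")
for a in (0, 1):
    print(f"{a} |   {not_gate(a)}")
```

Running it prints 0 -> 1 and 1 -> 0, the complement relationship stated by the Boolean expression.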
Social-Emotional Skills for Preschoolers: Part 5

We’ve made it to the last of this five-part series on social-emotional skills for preschoolers! Today I’m going to focus on the up-and-down emotional world of preschoolers. During this time in their development, preschoolers are learning how to express what they are feeling and what to do with those emotions – what a big job! Here are some ideas to help you support your preschooler in this process:

- Honor your child’s emotions, while being clear about inappropriate behaviors.
- Help your child identify the physical reactions our body has to different emotions: “butterflies” in our tummy when we are nervous, feeling flushed or tense when we are angry, etc.
- Help your child build emotional vocabulary – the words to express the feelings. Start with happy, sad, and mad and expand to frustrated, angry, nervous, excited, proud, disappointed, etc.
- Use puppets and dolls to role-play everyday situations and ways to handle various emotions.
- Help your child find ways to relax and calm down – deep breaths, tightening and relaxing muscles, counting slowly, humming a song, hugging a stuffed animal, etc.
- Sing If You’re Happy and You Know It, choosing a new feeling for each verse.
- Help your child write and illustrate a book about feelings. Try a pattern like, “When I feel sad, I like to hug my mommy.” Write and draw about a different emotion on each page.
- Use your reading time to talk about how the characters are feeling. Ask questions like, “Have you ever felt that way?” or “What would you do if you felt frustrated?”

Books that are good springboards for talking about feelings:

- Today I Feel Silly: And Other Moods That Make My Day by Jamie Lee Curtis
- My Many Colored Days by Dr. Seuss
- Feelings to Share from A to Z by Todd & Peggy Snow
- The Way I Feel by Janan Cain
- Lots of Feelings by Shelley Rotner (This one has great photographs of children’s faces.)

Thanks for joining me for this whole series on social-emotional skills for preschoolers! Now, back to some art or something… :)