"Dinosaur!" the 5-year-old child of a friend exclaimed upon seeing a great blue heron for the first time. There is something primitive about the large heron, especially one in flight. The oversized wings flap deliberately as it slowly makes its way to new and distant fishing holes. They are affectionately referred to as "pterodactyls" by many birders.
Real pterodactyls, although not dinosaurs, were flying reptiles in the age of dinosaurs. They are usually portrayed as large, slow, flying animals with wings of skin stretched between an elongated fourth finger and the body. The face was long, reminiscent of the large bill of the heron, and the head of some was decorated with a crest.
While some species matched this image, pterodactyls were as diverse as modern birds, occupying many of the niches now filled by living birds. They ranged in size from a small sparrow to impressive flying machines with 50-foot wingspans. Some pterodactyls apparently plucked fish from the ocean like frigatebirds. Others caught flying insects like swallows. One species unearthed in Brazil had hundreds of fine teeth that were apparently used to filter plankton from the water just as flamingos use the comb-like projections inside their bill. But I digress.
Great blue herons are nothing like pterodactyls. Still, the ponderous flight that recalls the long-extinct reptiles is tremendously important to the heron. The wings appear oversized for the weight of the bird. Somewhat smaller wings would support them just fine and even allow them to fly faster. The large surface area of heron wings creates a lot of drag, and that slows them down.
To better understand their need for large wings, consider the flight of a loon. Loons have ridiculously tiny wings. Tiny wings create little drag, and loons do fly fast. They have to. Tiny wings also create little lift, and to generate enough for flight, a loon must run as fast as it can over the surface before it has enough speed for its meager wings to haul it into the air. But living on big lakes and the ocean, they can be forgiven. Space and grace are not issues.
Landing requires a similar expanse. A loon must touch down at high speeds, skidding over the surface of the water. If a loon tried to fly slowly, it would simply fall out of the sky.
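The loon's predicament follows from the standard lift equation, L = 1/2 x (air density) x (speed squared) x (wing area) x (lift coefficient): in steady flight, lift must equal weight, so shrinking the wings forces the required airspeed up. A rough Python sketch of that tradeoff (the function name and the default density and lift-coefficient values are illustrative textbook assumptions, not measured loon or heron data):

```python
import math

def required_airspeed(weight_newtons, wing_area_m2,
                      lift_coefficient=1.0, air_density=1.225):
    # Steady flight requires lift = weight:
    #   W = 0.5 * rho * v^2 * S * CL  ->  v = sqrt(2W / (rho * S * CL))
    # Smaller wing area S forces a higher airspeed v.
    return math.sqrt(2.0 * weight_newtons
                     / (air_density * wing_area_m2 * lift_coefficient))
```

Halving the wing area multiplies the required speed by the square root of two, about 1.4, which is why the small-winged loon must take off and land fast while the broad-winged heron can float in slowly.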
Loon-like wings for a heron would never work. A heron's long toothpick legs — which permit them to wade into deep waters — are nearly as fragile as they look. A compromise, like the wings of a hawk or a goose, still wouldn't work. Even at moderate speeds, landing can be disastrous. The oversized wings permit a heron to land as softly as a feather, protecting the delicate legs. Better to get there safe than fast.
The oversized wings also help when it's time to nest. Great blue herons nest high in trees in colonies called "heronries." Landing on small limbs is an art form for all birds but especially for the heron with those stilt-like legs. Slow is good. You can see for yourself before the trees leaf out at a heronry along the Rogue River on Table Rock Road, just past the entrance to TouVelle State Park. Twenty pairs or more nest high among the cottonwoods.
Stewart Janes is a biology professor at Southern Oregon University. He can be reached at [email protected].
****RAINBOWS/COLOR SCIENCE, CLOUDS, SPRING WEATHER, & SUN/MOON/STARS****
Are you looking for a science resource that includes 4 weeks full of lesson plans and activities as well as cross-curricular content? Well, you're in luck!! The Science of March is here to help you!!!
In this JAM-PACKED resource, you will find four weeks' worth of science lesson plans, including cross-curricular activities for ELA and math. The Science of March covers RAINBOWS/COLOR SCIENCE, CLOUDS, SPRING WEATHER, and SUN/MOON/STARS, and includes lesson plans, experiments, non-fiction texts, science notebook activities, vocabulary resources, and MORE!
The Science of March is over 400 pages in length and includes everything from visual plans to pacing calendars and everything in between. If you're looking for something to simplify your planning, this is the resource for you!
This resource was created with K-2 classrooms in mind, but the games and activities could be easily adapted to meet the needs of a 3rd grade classroom as well. The science skills covered in this resource include 5 senses, matter & energy, force/motion/energy, earth & space, scientific investigation and research, and organisms and the environment (Common Core and TEKS compatible).
Check out our other exciting science resources below!
The Science of January
The Science of February
The Science of April
The Science of May
The Science of Summer (Science Sampler)
The Science of August
The Science of September
The Science of October
The Science of November
The Science of December
We also offer half-year bundles available HERE:
Science of the Month Bundle - AUG.-DEC.
Science of the Month Bundle - JAN.-MAY
And you can purchase the ENTIRE YEARLONG BUNDLE HERE:
The Science of...YEARLONG BUNDLE
Looking for MORE March themed resources? Check out what teachers are saying about these!
All About the Weather
Weather Literacy Centers
Weather Math Centers
March Mega Pack of Activities
Work on Writing Center Activities - March
Write the Room Math & Literacy Activities - March
March Literacy Centers
March Math Centers
March NO PREP Printable Math & Literacy Activities
Flower Friends Writing Craftivity
The Science of March
Cara Carroll & Abby Mullins
Visit my blog The First Grade Parade
Follow me on Facebook
Find me on Instagram
Archeologists at Work
Hard-working Yosemite National Park archeologists hike streamsides and climb steep mountainsides to find archeological sites. Approximately 1,500 sites have been identified, yet only about 10% of the park's large acreage has been surveyed. Thousands of new sites--marked by obsidian artifacts, tree blazes, house pits, privies and foundations--are yet to be discovered and documented within Yosemite. On any given day, archeologists arrive at work to gather tools, which could include compasses, clipboards, GPS devices, trowels, screens or sieves, and digital cameras. Archeologists locate, document, study and manage the archeological resources.
At a glance, here is what archeologists do:
Survey: Archeological survey crew members spread out approximately 20 meters apart and survey the landscape for anything that catches their well-trained eyes. Archeologists document the exact locations of objects, often through GPS to help plot sites. They also take digital pictures of the sites, artifacts and features for the reports. Yosemite archeologists survey the Yosemite backcountry, working with wilderness restoration, firefighters and trail crews to manage their work and avoid potential disturbances to archeological sites.
Monitor: Legislation requires archeologists to be consulted whenever ground-disturbing work could damage archeological sites. Construction provides archeologists with an opportunity, if necessary, to sift through displaced soils for artifacts, and archeologists have authority to halt construction to protect resources. American Indian representatives often work alongside archeologists to monitor and retrieve information for their tribes and to educate archeologists on traditional cultural values. Archeological sites on federal property are revisited regularly in order to monitor natural and human-caused threats. Common natural disturbances include erosion, fire, pest infestation, vegetation growth and tree fall. Human-caused disturbances include camping and campfire building, social trailing, off-road vehicle traffic, theft and looting, and vandalism. It's understood that some necessary park operations might threaten archeological sites. These include road and trail building, fire suppression and control, utility construction and operation, waste removal, wilderness restoration, and exotic plant removal. Park archeologists work closely with work crews to avoid these potential impacts.
Excavate: Excavating is a systematic technique that can reveal material remains buried in the layers of soil accumulated through the years. Some sites in Yosemite Valley are six feet deep as a result of thousands of years of habitation and changing environments. An excavation in 2008 in Yosemite Valley recovered approximately 50 stone tools, called pestles, which were used for processing acorns. This represents one of the largest deposits of pestles ever found in the Sierra Nevada.
Research: Research and archiving allow archeologists to examine results, to test new hypotheses or to analyze materials in a different way. Researchers can use objects collected and stored in the Yosemite Museum for this. Obsidian, for example, has different translucency depending on where its original volcanic source is located. And obsidian from different sources is found in different parts of the park. In the northern locations of the park, obsidian is usually more transparent, indicating that it came from the Bodie Hills area. More opaque obsidian found in the southern part of the park is from the Casa Diablo area. Both of these obsidian sources are located in the Long Valley, east of the Sierra Nevada. To pinpoint the exact source of a piece of obsidian, archeologists use a technique called X-ray fluorescence, or geochemical sourcing. Another analysis technique is called obsidian hydration dating. The technique measures the microscopic amount of water absorbed by the surface of the obsidian flake, which tells archeologists how long ago that piece was modified. Research can go beyond tangible items to the intangible. Archeologists and anthropologists speak with living descendants of the park’s native people to understand traditional native lifeways. Yosemite National Park maintains a consultative relationship with the seven associated local Indian tribes. Also consulted, at times, are descendants of the cultural groups that are associated with other historic resources of Yosemite.
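Obsidian hydration dating is often modeled with a simple diffusion law: the hydration rim grows with the square root of time, so rim thickness squared equals a rate constant times the age. A minimal Python sketch of that model (the function name and the example numbers are illustrative assumptions; real laboratories calibrate the rate for temperature and for each obsidian source, which is why geochemical sourcing comes first):

```python
def hydration_age_years(rim_microns, rate_um2_per_1000yr):
    # Simple diffusion model: x^2 = k * t, with rim thickness x in
    # microns and hydration rate k in square microns per 1,000 years.
    # Solving for t gives the time since the surface was last worked.
    return (rim_microns ** 2) / rate_um2_per_1000yr * 1000.0
```

Under these assumptions, a 2-micron rim on obsidian hydrating at 10 square microns per 1,000 years would suggest the flake was modified roughly 400 years ago.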
Did You Know?
Giant sequoias are a fire-adapted species. Their bark is fire resistant, and fire helps open the sequoia cone and scatter the tiny seeds. Fire also clears forest debris from the mineral soil, providing a nutrient-rich seed bed and clearing competing species.
The distinctive verse form of Old Germanic poetry, including Old English. It employed a long line divided by a caesura into two balanced half‐lines, each with a given number of stressed syllables (usually two) and a variable number of unstressed syllables. These half‐lines are linked by alliteration between both (sometimes one) of the stressed syllables in the first half and the first (and sometimes the second) stressed syllable in the second half. In Old English, the lines were normally unrhymed and not organized in stanzas, although some works of the later Middle English alliterative revival used both stanzaic patterns and rhyme. This metre was the standard form of verse in English until the 11th century, and was still important in the 14th, but declined under the influence of French syllabic verse. W. H. Auden revived its use in The Age of Anxiety (1948). These lines from the 14th‐century poem Piers Plowman illustrate the alliterative metre: "Al for love of oure Lord livede wel straite, / In hope for to have hevene‐riche blisse." See also accentual verse.
August 16, 2013
Prehistoric Fossil Discovery Reveals Details About Earth’s Most Successful Mammal Lineage
Brett Smith for redOrbit.com - Your Universe Online
The 160 million-year-old fossil of a newly described species has revealed new details about the most successful mammalian lineage in Earth’s history. Multituberculates were a group of extremely diverse rodent-like mammals, ranging from tree dwellers to fastidious burrowers. They existed for about 120 million years before being out-competed into extinction by more modern mammals in the Oligocene epoch around 35 million years ago.
According to a report published in the journal Science, a fossil of the species being called Rugosodon eurasiaticus shows evidence of traits that allowed multituberculates to flourish, such as teeth that allowed for eating both plants and animals and ankle joints that made for easy rotation.
"The later multituberculates of the Cretaceous [era] and the Paleocene [epoch] are extremely functionally diverse: Some could jump, some could burrow, others could climb trees and many more lived on the ground," explained study co-author Zhe-Xi Luo, a biologist at the University of Chicago. "The tree-climbing multituberculates and the jumping multituberculates had the most interesting ankle bones, capable of 'hyper-back-rotation' of the hind feet."
Luo noted that these highly mobile ankle joints are normally associated with animals that are exclusively tree-dwellers.
"What is surprising about this discovery is that these ankle features were already present in Rugosodon—a land-dwelling mammal," he said.
Study researchers said R. eurasiaticus, which resembled a small rat or a chipmunk, could eat a variety of foods. The animal’s fossilized teeth supported a 2012 study of tooth types, which found that multituberculates subsisted on a carnivorous diet for much of their existence and later converted to an herbivorous one. The evolution of both the multituberculates’ diet and their ankle structure most likely led to their proliferation on Earth for around 100 million years, study researchers said.
The R. eurasiaticus fossil discovered by Yuan and his team was found in ancient lake sediments in eastern China, suggesting that the animal may have lived on the lake’s shoreline. Based on an analysis of the evolved ankle joints and teeth of this early multituberculate, the researchers said that such adaptations probably arose very early in the evolution of the order. These early adaptations set the stage for the major diversification of multituberculates that ensued.
The discovery of the fossil also expands the distribution of some multituberculates from Europe to Asia during the Late Jurassic period, the researchers said.
"This new fossil from eastern China is very similar to the Late Jurassic fossil teeth of multituberculates from Portugal in western Europe," Luo said. "This suggests that Rugosodon and its closely related multituberculates had a broad paleogeographic distribution and dispersals back-and-forth across the entire Eurasian continent."
While multituberculates became extinct during the Oligocene epoch, around 35 million years ago, other land vertebrates grew to larger sizes than in previous epochs, many taking on the appearances we are familiar with today. Animals such as horses, rhinoceroses and camels learned how to run during the Oligocene and quickly began to dominate plains around the Earth. Marine life also began to take on a modern appearance, with the emergence of bivalves and whales.
Images, real and virtual
Real images are those where light actually converges, whereas virtual images are locations from which light only appears to have diverged. Real images occur only for objects placed outside the focal length of a convex lens. A real image is illustrated below. Ray tracing gives the position of the image: draw one ray parallel to the optical axis, which the lens bends through the focal point, and a second ray through the center of the lens (this ray is not bent by the lens). The intersection of the two rays gives the position of the image. Note that in this illustration the real image is inverted and larger than the object.
The position of the image can be found through the thin-lens equation: 1/o + 1/i = 1/f. Here, o and i are the distances of the object and image respectively, as measured from the lens. The focal length f is positive for a convex lens. A positive image distance corresponds to a real image, just as it did for the case of mirrors. However, for a lens, a positive image distance implies that the image is located on the opposite side of the lens from the object.
Virtual images are formed by concave lenses or by placing an object inside the focal length of a convex lens. The ray-tracing exercise is repeated for the case of a virtual image.
In this case the virtual image is upright and reduced. The same formula for the image and object distances applies here as well; in this case the focal length is negative, so the solution for the image distance will also be negative. Virtual images can also be produced by a convex lens when the object is placed inside the focal length. In that case, the virtual image will be upright and enlarged, as it will be farther from the lens than the object.
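The relationships above can be checked numerically. A minimal Python sketch of the thin-lens equation, using the sign convention described in the text (positive f for a convex lens, negative image distance for a virtual image; the function names are mine, not a standard API):

```python
def image_distance(object_distance, focal_length):
    # Thin-lens equation: 1/o + 1/i = 1/f  ->  i = 1 / (1/f - 1/o).
    # A positive result is a real image; a negative result is virtual.
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

def magnification(object_distance, image_dist):
    # Negative magnification means the image is inverted.
    return -image_dist / object_distance
```

For a convex lens with f = 10 and an object at o = 30, this gives i = 15 and magnification -0.5 (a real, inverted image); moving the object to o = 5, inside the focal length, gives i = -10 and magnification 2 (a virtual, upright, enlarged image), matching the behavior described above.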
Science Fair Project Encyclopedia
Socialism is a concept, an ideology and a collection of party-based political movements that have evolved and branched over time. Initially, it was based on the organized working class, with the purpose of building a classless society. But eventually, it increasingly concentrated on social reforms within modern democracies. This concept and the term Socialist also refer to a group of ideologies, an economic system, or a state that exists or has existed. See Definitions of Socialism
In Marxist theory, it also refers to the society that would succeed capitalism, and in some cases develop further into communism. Marxism and communism are both very specific branches of socialism. The two do not represent socialism as a whole.
In modern socialist theory, the goal is to create a democratic society in which a responsible citizenry and a sympathetic government form the backbone of an ideal welfare state.
The word dates back at least to the early nineteenth century. It was first used, self-referentially, in the English language in 1827 to refer to followers of Robert Owen. In France, again self-referentially, it was used in 1832 to refer to followers of the doctrines of Saint-Simon and thereafter by Pierre Leroux and J. Regnaud in l'Encyclopédie nouvelle. Use of the word spread widely and has been used differently in different times and places, both by various individuals and groups that consider themselves socialist and by their opponents. While there is wide variation between socialist groups, nearly all would agree that they are bound together by a common history rooted originally in nineteenth and twentieth-century struggles by industrial and agricultural workers, operating according to principles of solidarity and advocating an egalitarian society, with an economics that would, in their view, serve the broad populace rather than a favored few.
An ideology or a group of ideologies
According to Marxists (most notably Friedrich Engels), socialist models and ideas are said to be traceable to the dawn of human social history, being an inherent feature of human nature and early human social models. During the Enlightenment in the 18th century, revolutionary thinkers and writers such as the Marquis de Condorcet, Voltaire, Rousseau, Diderot, abbé de Mably, and Morelly provided the intellectual and ideological expression of the discontented social layers in French society. This included even the bourgeoisie, at that time kept out of political power by the ancien régime, but also the "popular" classes among whom socialism would later take root.
The earliest modern socialist groups were the so-called utopian socialists, who shared characteristics such as focusing on general welfare rather than individualism, on co-operation rather than competition, and on producers of wealth rather than on political leaders and structures. They did not think in terms of class struggle, but argued that the wealthy should join with the poor in building a new society. Class struggle, the challenge to private property and the accompanying notions of the special role of the proletariat in the revolution find their earliest origins in the Conspiracy of Equals of Babeuf, an unsuccessful actor in the French Revolution. Later, these ideas were greatly developed by the Marxist branch of socialism.
Elie Halevy claims that the term "socialism" was coined independently by two groups advocating different ways of organizing society and economics: the Saint-Simonians, and most likely Pierre Leroux, in the years 1831-33, and the followers of Robert Owen, around 1835. By the time of the Revolution of 1848 there were a variety of competing "socialisms", ranging from the socialism of Charles Fourier to the self-described "scientific" socialism of Karl Marx and Friedrich Engels.
Depending on the context, the term socialism may refer either to these ideologies or any of their many lineal descendants. While these cover a very broad range of views, they have in common a belief that feudal and capitalist societies are run for the benefit of a small economic elite and that society should be run for the common good. "Socialist" ideologies tend to emphasize economic cooperation over economic competition; virtually all envision some sort of economic planning (many, but by no means all, favor central planning). All advocate placing at least some of the means of production -- and at least some of the distribution of goods and services -- into collective or cooperative ownership.
Historically, the ideology of socialism grew up hand in hand with the rise of organized labor. In many parts of the world, the two are still strongly associated with one another; in other parts, they have become two very distinct movements.
Branches of Socialism
See main article Branches of Socialism
Other ideologies including the word "Socialism"
Because adherents of particular branches sometimes consider theirs the only valid form of socialism, ongoing efforts to distance socialism from certain of its branches remain quite active.
The German National Socialists (Nazis) claimed to be "socialist" much like any branch of Socialism, but some scholars argue that the term "socialism" in "national socialism" did not meaningfully extend beyond propaganda purposes, and that, in practice, the Nazis allowed (friendly) capitalists to thrive while liquidating socialists everywhere else (including from within their own party in the Night of the Long Knives). Unlike 'national socialists,' many socialists who consider themselves nationalist reject the racialist theories and totalitarianism of the Nazis, though racial tolerance is not necessarily a socialist ideal. (see:Socialism and Nazism).
Another party who employs socialist in its name but is viewed by some as being not genuinely socialist, is the Arab Socialist Ba'ath Party which rules Syria and also ruled Iraq under Saddam Hussein. It claims a tradition of secular, non-Marxist socialism, but most political theorists (as well as nearly all other socialists) argue that, in fact, it persecutes socialists (who wish to redistribute wealth more equally in the country) while promoting capitalists from within the dominant minority ethnic group that controls the Party and, decisively, the Syrian armed forces.
For a discussion of the controversial views of one philosopher of history who sees a close, though antagonistic, relationship between the left and the right descendants of Hegelianism, see Eric Voegelin.
Various Catholic clerical parties have at times referred to themselves as "Christian Socialists." Two examples are the Christian Social Party of Karl Lueger in Austria before and after World War I, and the contemporary Christian Social Union in Bavaria. Most other socialists would consider these two parties to be "socialist" in name only. However, there are other individuals and groups, past and present, that are clearly both Christian and Socialist, such as Frederick Denison Maurice, author of The Kingdom of Christ (1838), or the contemporary Christian Socialist Movement (UK) (CSM), affiliated with the British Labour Party. (See main article Christian socialism; see also Christian left and social gospel)
A note on usage
Some groups (see above) have called themselves socialist while holding views that most socialists consider antithetical to socialism. The term has also been used by some politicians on the political right as an epithet for certain individuals who do not consider themselves to be socialists and policies that are not considered socialist by their proponents (e.g. referring to all publicly funded medicine as "socialized medicine" or to the United States Democratic Party as "socialist"). This article touches only briefly on those peripheral issues.
What distinguishes the various types of socialism
There are a few questions that point out some of the big differences among socialisms:
- Do advocates of this ideology say that socialism should come about through revolution (e.g. Leninism, Trotskyism, Maoism, revolutionary Marxism) or through reform (e.g. Fabianism, reformist Marxism), or do they view both as possible (e.g. Syndicalism, various Marxisms) or do they fail to address the question of how a socialist society would be achieved (e.g. utopian socialisms)?
- Do they advocate centralized state control of the socialized sectors of the economy (e.g. Stalinism), or control of those sectors by workers' councils (e.g. syndicalism, left and council communism, Anarcho-communism)? This question is usually referred to by socialists in terms of "ownership of the means of production." None of the social democratic parties of Europe advocate total state ownership of the means of production in their contemporary demands and popular press, but most contain language and ideas in their platform which state that in the event the capitalists fail to meet up to their end of the social contract, the workers have the legitimate historical basis to assume or seize total control of the means of production, should those conditions ever arise in the future. Almost all Social-Democratic parties hold that state control of certain sectors of the economy is vital for the general public interest.
- Do they advocate that the power of the workers' councils should itself constitute the basis of a socialist state (coupled with direct democracy and the widespread use of referendums), or do they hold that socialism entails the existence of a legislative body administered by people who would be elected in a representative democracy? In other words, through what legal and political apparatus will the workers maintain and further develop the socialization of the means of production?
- Do they advocate total or near-total socialization of the economy (e.g. revolutionary Marxism, Leninism, Stalinism, Trotskyism, Left and Council Communism, anarcho-syndicalism and syndicalism), or a mixed market economy (e.g. Bernsteinism, reformism, reformist Marxism)? Mixed economies, in turn, can range anywhere from those developed by the social democratic governments that have periodically governed Northern and Western European countries, to the inclusion of small cooperatives in the planned economy of Yugoslavia under Josip Broz Tito. In a related, but not identical, question: do they advocate a fairer society within the bounds of capitalism (e.g. most social democrats) or the total overthrow of the capitalist system (most Marxists)?
- Did the ideology arise largely as a philosophical construct (e.g. libertarian socialism), or in the heat of a revolution (e.g. early Marxism, Leninism), or as the product of a ruling party (e.g. Castroism, Stalinism), or as the product of a party or other group contending for political power in a democratic society (e.g. social democracy).
- Does the ideology systematically say that "bourgeois liberties" (such as those guaranteed by the U.S. First Amendment or the Charter of Fundamental Rights of the European Union) are to be preserved (or even enhanced) in a socialist society (e.g. social democracy, democratic socialism), or are undesirable (e.g. Maoism), or have they held different opinions at different times (e.g. Marx and Engels), or is this a dividing point within the ideology (e.g. different strains of Trotskyism)?
- Does their critique of the existing system center on the ownership of the means of production (e.g. Marxism), on the nature of mass and equitable distribution (e.g. most forms of utopian socialism), or on opposition to industrialism as well as capitalism (common where socialism intersects green politics)? Utopian Socialists, like Robert Owen and Saint-Simon argued, though not from exactly the same perspective, that the injustice and widespread poverty of the societies they lived in were a problem of distribution of the goods created. Marxian Socialists, on the other hand, determined that the root of the injustice is based not in the function of distribution of goods already created, but rather in the fact that the ownership of the means of production is in private hands. Also, Marxian Socialists maintain, in contrast to the Utopian Socialists, that the root of injustice is not in how goods (commodities) are distributed, but for whose economic benefit are they produced and sold.
- Which governments does the ideology regard as practicing or moving toward socialism and which does the ideology not regard as doing so? For example, in the era of the Soviet Union, western socialists were bitterly divided as to whether the Soviet Union was basically socialist, moving toward socialism, or inherently un-socialist and, in fact, inimical to true socialism. Similarly, today the government of the People's Republic of China claims to be socialist and refers to its own approach as "Socialism with Chinese characteristics," but most other self-identified socialists consider China to be essentially capitalist, albeit with a still large (but gradually shrinking) state sector. The Chinese leadership concurs with most of the usual critiques against a command economy, and many of their actions to manage what they call a socialist economy have been determined by this opinion.
Note also that while many would say that socialism is defined by state ownership and state planning of the means of production and economic life, a certain degree of such state ownership and planning is common in economies that would almost universally be considered capitalist. In Canada, Crown Corporations are responsible for various sectors of the economy deemed to be of strategic importance to the people (for example power generation). In the U.S., a semi-private central bank with close ties to the federal government, the Federal Reserve, regulates lending rates, serving as a "bank of banks." Also, governments in capitalist nations typically run the post office, libraries, national parks, highways, and (in the case of the US) NASA. Interestingly, though, the federal government's monopoly on space travel from U.S. take-off sites is itself a thing of the past -- as of 2004 (see Ansari X Prize) private capital is entering even that field.
State, provincial, and local governments within a capitalist system can operate and own power companies and other utilities, parks, mass transit including rail and airports, hospitals and other medical facilities, and public schools (often including a number of universities). Capitalist governments also frequently subsidize or otherwise influence (though do not own) various sectors of the economy, such as automotive, weapons, oil (petrol), aerospace, and agriculture.
In the post-World War II political lexicon, this sort of (limited) economic state planning became integral to stabilization of the global economy, and has come to be known as Keynesian economics, after John Maynard Keynes.
Conversely, Chinese economic reform under Deng Xiaoping has been characterized by decreasing state ownership of the economy, the replacement of central planning mechanisms with market-based ones that are also used in Western capitalist nations, and even going as far as removing governmental social welfare services that are commonly found in most capitalist nations. However, because the legitimacy of the Communist Party of China is based on the premise that China has already made a transition to socialism, the government insists that it is a socialist government. Very few outside China would support this claim.
An economic system
See main article - Social economy
As in the realm of ideology, there is no single consensus on what it means for a particular economic system to be "socialist". However, all socialists agree that a socialist economy must be run for the benefit of the vast majority of the people rather than for a small aristocratic, plutocratic, or capitalist class. In the mid-nineteenth century, when socialism first arose, many political ideologies of the day were frank in supporting the interests of elite classes. Today, in a world where many countries offer a broader electoral franchise, such open support for the wealthy would be the equivalent of political suicide. Therefore, most ideologies claim to support the greatest good for the greatest number, something that was once advocated only by socialists. Still, even today, socialism stands out by being particularly forthright in advocating direct pursuit of working class interests, even at the expense of what other ideologies consider the legitimate property rights of the wealthy classes.
Most socialists argue that socialism also entails democratic control of the economy, although they differ vastly over the appropriate institutions of that democracy and over whether control should be centralized or highly dispersed. Similarly, they differ over the extent to which a socialist economy could involve markets, and among those who believe that it could, there is a further dividing line on whether markets should apply only to consumer goods or, in some cases, to the means of production themselves (factory and farm equipment, for example). For consumer goods, this is simply a question of efficient distribution; for the means of production, this is a question of ownership of the economy, and therefore of control over it.
Many non-socialists use the expression "socialist economy" (or "socialization" of a sector of the economy) almost exclusively to refer to centralized control under government aegis: for example, consider the use of the term "socialized medicine" in the US by opponents of single-payer health care.
There is general agreement among socialists and non-socialists that a socialist economy would not include private or estate ownership of large enterprises; there is less agreement on whether any such enterprises would be owned by society at large or (at least in some cases) owned cooperatively by their own workers. Among the few self-described socialists who dispute these principles are the leadership of the Communist Party of China, who claim to remain socialist, even while the continuing Chinese economic reform explicitly includes the concept of privately-owned large enterprises competing on an equal basis with publicly-owned ones. The adoption by China of this essential characteristic of capitalism is a principal reason why, outside of mainland China, few people (socialists or otherwise) consider present-day mainland China and its ruling party to be, in any meaningful sense, socialist.
It has been claimed, both by socialists and non-socialists, that the former Soviet Union and the Eastern Bloc had socialist economies, as the means of production were owned almost entirely by the state and the bulk of the economy was centrally controlled by the Communist Party acting through the state. However, many other socialists object to that label, because the people in those countries had little or no control over the government, and therefore they had little or no control over the economy. The aforementioned socialists argue that these societies were essentially oligarchies; some would call them state-capitalist, Stalinist, or as some Trotskyists would say, "degenerated workers states". Trotskyists contend that Stalinist economies fulfilled one criterion of a socialist economy, in that the economy was controlled by the state, but not the other criterion, that the state must be in turn democratically controlled by the workers. Many non-Marxist socialists would agree with the general outline of this argument, while perhaps dissenting from the statement that state control of the economy is one of the criteria of socialism. Further, many socialists would argue that the Soviet Union and its satellite states merely replaced a capitalist ruling class with a new ruling class, the coordinator class or nomenklatura, who played an extremely analogous role to the former capitalists, by managing the economy for their own benefit, or at least attempting to do so.
During the Cold War, a common term used by the Soviet Union and its allies to refer to their own economies was "actually existing socialism" (presumably as against any number of theoretically possible socialisms, but carrying an implicit statement that their economy was, in fact, socialist). Another similarly used term was (and is) "real socialism." Typically, when these terms were or are used by anyone outside of the particular parties that ruled these countries (or the parties who supported them in other countries), they are placed in scare quotes and used with at least mild irony.
A state that exists, or has existed, or may exist
Most past and present states ruled by parties of Communist orientation called (or call) themselves "socialist." However, in the western world they were usually all referred to as "Communist states." Once again, whether these states were socialist or not was (and is) disputed, with the large majority of today's socialists (including many, perhaps most, communists) contending that they were not socialist, for reasons directly analogous to those just discussed in the section above (regarding the "socialist" economy).
A libertarian socialist society emerged in 1930s Spain during the civil war. See Anarchism in Spain.
There are also some who dispute whether it is appropriate to refer to any state, past, present, future, or hypothetical as "socialist," preferring to reserve that word for an economy or even a society, but not a state.
Socialism as transition from Capitalism
Although Marxists and other socialists generally use the word "socialism" in the senses described above, there is also another specifically Marxist use of the term that is worth noting. Karl Marx, in his exposition of historical materialism (his Hegelian model of history) saw socialism as a phase of human society that would follow capitalism and precede communism. Marx is by no means clear about the expected characteristics of such a society, but he is consistent in his belief in the eventual triumph of revolutionary socialism over capitalism, and then, its eventual transformation into communism.
According to Marx, the socialist society will be controlled by the working class (the proletariat), whose familiarity with large, collective undertakings will be reflected in the character of this society. It will be a "dictatorship of the proletariat", in the sense that it is contrasted with the existing dictatorship of the bourgeoisie (i.e. capitalism). It is worth noting in this context that Marx was not necessarily advocating or predicting "dictatorship" in the sense that word is commonly used today; he was only referring to what class would be dominant. While Leninist dictatorship is arguably consistent with this vision, so is workers' democracy, analogous to bourgeois democracy. In addition, note that most Marxist models of socialism involve the abolition of the so-called "exploitation of man by man" which is presumed to exist in capitalist society. This would mean abolishing class distinctions, therefore making "the proletariat" a universal term synonymous with "the people".
Marx saw socialism (the "dictatorship of proletariat", as explained above) as a transitional phase, ultimately to be replaced by a classless communist society in which the existing forms of government would no longer be needed. According to Engels, the state was destined to eventually "wither away", as the representative democracy of socialism slowly turned into the direct democracy of socialism, and economic life would be re-organised on a basis of freedom and equality. In holding this classless non-state as the ultimate goal, Marx expressed an ideal not far from that of anarchism.
This definition of socialism is particularly important in understanding the official ideology of the People's Republic of China. The Communist Party of China states that class struggle has already pushed China into the socialist phase of social development. Because of this and Deng Xiaoping's theory of seeking truth from facts, any economic policy which "works" is automatically classified as a socialist policy, and hence there are no constraints on what "socialism with Chinese characteristics" can look like.
Socialism and the mixed economy
As remarked above, some self-described socialists, especially those who identify as social democrats, but also including (for example) the reform-oriented Euro-communists (Marxists, but by no means Leninists), advocate a mixed economy rather than a complete re-working of existing capitalist economies along socialist lines. These views also extend to many who would not describe themselves as "socialists."
In the most moderate formulation of such a mixed economy, collective ownership is typically limited to control of natural resources and public utilities. The rationale for prioritizing these is that natural resources are a common patrimony and that (all or some) public utilities are natural monopolies.
Others would extend a socialist approach within a mixed economy to what they deem to be essential industries to prevent certain capitalists from having a stranglehold on society, or to prevent massive concentrations of wealth which result in a power imbalance (including disproportionate bargaining leverage). There is also often a rationale of national defense or national sovereignty. Thus, many otherwise capitalist countries have, at least at times, nationalized such industries as steel, automobiles, or airplanes. In the U.S., for example, President Harry S. Truman nationalized the steel mills during the Korean War. They soon returned to private ownership by order of the U.S. Supreme Court, however.
All socialist thinkers argue that unrestrained free market economics would generally result in profits for a few at the expense of the many. Communists, in particular, are adamantly opposed to any compromise with capitalism, claiming that any economic system that permits the private accumulation of wealth is inherently unjust and allows capitalists (those who own and control capital) to compel behavior from individuals because of their need to survive (see: labor theory of value). As noted several times above, this is disputed by the contemporary Communist Party of China, making China (if it is regarded as socialist or communist) an inevitable exception to much of what follows here.
While few self-described communists support any scheme upholding private ownership of the means of production (except, perhaps, as a temporary disposition on the way to something purer, and again noting the contemporary Chinese exception), other socialists are split over this, arguing over whether to only moderate the workings of market capitalism to produce a more equitable distribution of wealth, or whether to expropriate the entire owning class to guarantee this distribution. Many socialists acknowledge the extreme complexity of designing other appropriate non-market mechanisms to identify demand, especially for non-essential goods. Some have put forward models of market socialism where markets exist, but an owning class does not.
In practice, many aspects of the socialist worldview and socialist policy have been integrated with capitalism in many European countries and in other parts of the world (especially in the industrialized "first world"). Social democracy typically involves state ownership of some corporations (considered strategically important to the people) and participation in ownership of the means of production by workers. This can include profit sharing and worker representation on decision-making boards of corporations (a measure in force in Germany, for instance). Some inherently capitalist measures, such as stock ownership for workers or stock options would, however, also fit the description. Social services are important in social democracies. Such services include social welfare for the disadvantaged and unemployment insurance.
Likewise, market economies in the United States and other capitalist countries have integrated some aspects of socialist economic planning. Democratic countries typically place legal limits on the centralization of capital through anti-trust laws and limits on monopolies, though the extent to which these laws are actually enforced has to do with the balance of power between the actually existing or emerging monopoly firms, as well as political ties between government and some corporations (crony capitalism). Ownership of stock has become common for middle class workers, both in companies they work for and in other companies (see mutual fund). Labor market pressures (see labor economics) and regulations have encouraged profit sharing. Social welfare and unemployment insurance are mandated by law in the US, UK, Canada and other market economies. There is a lively debate today as to whether the world is moving closer to or farther away from "socialism", as defined by different people. Another component of this debate is whether or not these developments are to be encouraged.
Opposition and criticisms of socialism; arguments for and against
See also - Criticisms of Socialism
A number of thinkers, economists and historians have raised some issues with socialist theory. These individuals include Milton Friedman, Ayn Rand, Ludwig von Mises, Friedrich Hayek, and Joshua Muravchik, to name a few. Most of their objections and critiques seem directed more at a centrally planned economy (not a part of all proposed socialisms), some at socialism and Marxism in general, but because these distinctions are relatively difficult to tease out of their writings, it is probably useful to take them up in a single context.
These objections and critiques usually fall into the following categories:
- Tendency Toward Genocide
- Profits and Losses
- Private Property Rights
References and further reading
- Friedrich Engels, The Origin Of The Family, Private Property And The State, Zurich, 1884
- Elie Halevy, Histoire du Socialisme Européen. Paris, Gallimard, 1937
- Market Socialism: the debate among socialists, ed. Bertell Ollman (1998) ISBN 0415919673
- G.D.H. Cole, History of Socialist Thought, in 7 volumes, Macmillan and St. Martin's Press (1965), Palgrave Macmillan (2003 reprint); 7 volumes, hardcover, 3160 pages, ISBN 140390264X
- James Weinstein, Long Detour: The History and Future of the American Left, Westview Press, 2003, hardcover, 272 pages, ISBN 0813341043
- Leo Panitch, Renewing Socialism: Democracy, Strategy, and Imagination, ISBN 0813398215
- Michael Harrington, Socialism, New York: Bantam, 1972
- Edmund Wilson, To the Finland Station: A Study in the Writing and Acting of History, Garden City, NY: Doubleday, 1940.
- Albert Fried, Ronald Sanders, eds., Socialist Thought: A Documentary History, Garden City, NY: Doubleday Anchor, 1964.
- Communist state
- History of socialism
- Libertarian socialism
- List of socialists
- Socialist economics
- Laissez-faire capitalism
- For the governments of the USSR, the PRC, and others, see: Communist state,
- For information on mainstream political parties using the term "Socialist", see Social democracy and Democratic socialism,
- Socialist International
- Ludwig von Mises Institute
- Google Directory collection of critical articles
- Revolutionary Left, active forum with theoretical discussions of different revolutionary socialist ideologies
- Socialism by Robert Heilbroner
- Socialism/Antisocialism - The transformation from socialism to statism
- Capitalism is a Society of Wolves by Fidel Castro: criticism of capitalism in support of socialism
- Failure of Socialism and Lessons for America
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. |
When a family arrives at the exhibition in the Children's Wing, it is difficult to know where to start. The exhibition is intended for children, but calls on the whole family to roll up their sleeves and get to work.
The exhibition invites us to operate simple machinery and to check its efficiency.
Simple machines, like the crane, the screw, and the pulley, demonstrate a simple principle, "mechanical advantage": one can gain distance at the expense of force, or force at the expense of distance.
The giant spoon in the "balls and forces" exhibit, the door handle and the crane for lifting weights - all use this principle, as do the pulley-blocks on the deck of a ship and in the exhibit "Pulley Elevator".
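The trade-off behind all of these exhibits can be put in numbers. The short Python sketch below (illustrative only, with made-up values; it is not part of the exhibition) computes the effort force needed to balance a load with an ideal, frictionless lever.

```python
# Illustrative sketch of mechanical advantage, using an ideal lever as an example.

def lever_effort_force(load_n: float, load_arm_m: float, effort_arm_m: float) -> float:
    """Force needed on the effort arm to balance a load on an ideal, frictionless lever."""
    # Mechanical advantage = effort arm / load arm, so effort = load / advantage.
    return load_n * load_arm_m / effort_arm_m

# Lifting a 100 N load: a long effort arm trades distance for force.
print(lever_effort_force(100.0, 0.5, 2.0))  # 25.0 N, but the hand moves 4x as far
```

With a 2 m effort arm against a 0.5 m load arm, a quarter of the load's weight suffices, at the cost of moving the hand four times farther than the load rises.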
The exhibition has a number of focal points: a small "infinite room," a sort of kaleidoscope into which one can enter, and a wall of lenses through which one can look at what is happening in the exhibition area without being seen.
In the center of the space is the "balls and forces" exhibit - the centerpiece of the exhibition. Here, with the help of various machines, the children raise balls at a number of operating stations.
The various discovery environments are accompanied by written explanations: discovery sentences that the children said or can say when operating the exhibit. For example:
- "The crane in the exhibit is like a giant spoon. A spoon is also a crane." (balls and forces).
- "You can turn the propeller without touching it" (one cogwheel turns another).
- "Each corner has a different angle, and the number of reflections in it is different" (small infinite room).
Water: Substance and Solvent
Water, present in the geological environment as solid, liquid, vapor and supercritical fluid, is one of the most important components of Earth's fluid envelope. Without water and its special properties, Earth would be a vastly different planet in many ways. Water plays critical roles in atmospheric processes, including the absorption of radiant energy and the redistribution of energy through weather mechanisms, thus fundamentally influencing climate. In the lithosphere, water serves as an essential solvent and transport medium for many of the elements, a modifier of rock physical properties, and a lubricant for tectonic processes. The biosphere also depends on water for its solvent properties and, in photosynthesis, as an electron-donating nutrient. |
Farmers who need to control the destructive European corn borer (Ostrinia nubilalis) may soon be able to distinguish it from look-alike species by simply scanning an image of its wing into a computer and pecking a few keys. A technique developed by Polish scientists marks the first time that measurements of key structural features in the wing have been used to identify the borer, potentially a major advance in controlling the pest.
The method was developed by Lukasz Przybylowicz, Michal Pniak, and Adam Tofilski, and it is described in an article in the Journal of Economic Entomology.
The European corn borer is a prime pest of corn but also impacts more than 200 other crops, by some estimates causing up to $2 billion in damage annually in the United States alone. Most farmers are not able to identify adult corn borers or distinguish them from other species.
The identification method developed by the scientists focuses on the arrangement of veins in the wings of the moths, applying a technique known as geometric morphometry. Essentially, it examines and compares the geometry of an organism's structures -- in other words, where its parts are positioned in relation to one another. Computerized statistical analysis is key to attaining results.
The researchers selected nine points -- called "landmarks" -- at junctions of veins in the central part of the wing. Landmarks, such as where veins join, are a common feature among species. A mass of geometrical information based on coordinates of the landmarks was then entered into software used for identification, and when the shape of wing venation was compared, significant differences were seen between species. The accuracy of the test was 97 percent.
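As a rough illustration of the geometric-morphometry idea, the following Python code is a hypothetical sketch (not the authors' software): it normalizes a set of wing-landmark coordinates for position and size, then classifies a wing by its nearest reference shape. Real morphometric pipelines also align rotation (Procrustes superimposition) and apply statistical discriminant analysis, which this toy version omits.

```python
# Hypothetical sketch: each wing is reduced to a fixed set of (x, y) landmark
# coordinates, normalized for position and size, and compared to reference
# shapes by summed squared distance between corresponding landmarks.
import math

def normalize(landmarks):
    """Center landmarks on their centroid and scale to unit centroid size."""
    n = len(landmarks)
    cx = sum(x for x, _ in landmarks) / n
    cy = sum(y for _, y in landmarks) / n
    centered = [(x - cx, y - cy) for x, y in landmarks]
    size = math.sqrt(sum(x * x + y * y for x, y in centered))
    return [(x / size, y / size) for x, y in centered]

def shape_distance(a, b):
    """Sum of squared distances between corresponding normalized landmarks."""
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(normalize(a), normalize(b)))

def classify(wing, references):
    """Return the species name whose reference shape is nearest to the wing."""
    return min(references, key=lambda species: shape_distance(wing, references[species]))
```

Because the coordinates are normalized first, two wings of different sizes but the same venation geometry compare as identical, which is the core of the landmark approach.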
Before farmers can be sure of results, the scientists note, the results "should be confirmed by further studies." Once they are done, the researchers say "this method can be used by farmers to identify this pest and apply control measures at optimal time."
The full article is available at http://dx.
The Journal of Economic Entomology is published by the Entomological Society of America, the largest organization in the world serving the professional and scientific needs of entomologists and people in related disciplines. Founded in 1889, ESA today has nearly 7,000 members affiliated with educational institutions, health agencies, private industry, and government. Members are researchers, teachers, extension service personnel, administrators, marketing representatives, research technicians, consultants, students, and hobbyists. For more information, visit http://www. |
The sound heard in the above video is the alarm that sounds over the Aamjiwnaang First Nation Reserve to warn of chemical spills and toxic releases. The tragic flaw in this warning system is that there is rarely any follow-up report: no announcement of what was released nor how much and where, not even an “all’s clear” signal. The alarm sounds at least once a week.
Aamjiwnaang — located just outside of the city of Sarnia in Southwestern Ontario — is a reserve of certain bands of Anishinaabe people, who are better known to some as the Chippewa or Ojibwa(y) of North America. Aamjiwnaang is also located in an area known as Chemical Valley. Home to 40% of Canada’s petrochemical industry, the Chemical Valley region experiences some of the worst air pollution in Canada. The Aamjiwnaang Reserve, which is surrounded by industry on three sides, bears the brunt of this pollution.

Another challenge faced by this community is the lack of legal protection from actions such as the dumping of toxic chemicals within the reserve. This situation arises from the way in which environmental legislation is divided between the provincial and federal levels. Under current law, while provinces are responsible for most of the regulation of pollution within their borders, they have little to no power regarding environmental issues on a First Nations reserve.

There is a huge risk of pollution of people’s bodies and the land on which they live that comes with being surrounded by petrochemical plants and related facilities. There are over 100 spills in Chemical Valley every single year — spills into the air and water. For generations, many of the First Nations people used the St. Clair River as an additional source of water, but it has now become a toxic chemical stew that cannot be used for human purposes. The constant repetition of chemical spills from the local industrial facilities is most likely a major factor in a number of health concerns for the inhabitants of Aamjiwnaang. Research has shown that toxins of the kind present in the water can cause eye and skin irritations, central nervous system disorders, and respiratory problems such as asthma. Many are known carcinogens that can cause leukemia and other cancers.
They can affect blood and bone marrow, leading to anemia, bleeding, and immunosuppression, and can be corrosive to the digestive system, causing esophagitis and gastritis. However, the most notable health concern at Aamjiwnaang that might well be the result of pollution from surrounding industry is the change in ratio of male to female births. The male birth rate has lowered significantly due to effects that, while not entirely understood, have been associated in other locations with the kinds of toxins that occur as by-products of petrochemical facilities. A sad truth about the pollution at Aamjiwnaang is that it is but one chapter in a long story of discrimination and marginalization of First Peoples throughout Canadian history. Through various discriminatory government policies, Anishinaabek (plural of Anishinaabe) have been forced or tricked into giving up land for two centuries, with one result being the location of petrochemical industries all around, and even in between, many parts of the Reserve. The Canadian government created systems to facilitate the manipulation of First Nations peoples; the industries that moved onto Aamjiwnaang’s former land in turn took full advantage of that and continue to do so today, as they get away with violation after violation that is allowed to happen, it seems, only because it happens on the Reserve.
Despite the overwhelming shadow cast by Chemical Valley, there are people working to improve the situation. Two such people are Ron Plain and Ada Lockridge, Aamjiwnaang band members who are currently involved in a lawsuit against the Ontario Ministry of Environment (MOE) and Suncor Corporation. They are waging this lawsuit because they believe it is a basic human right to step outside one’s home and not breathe air harmful to one’s health. From a legal standpoint, their claim is that certain actions taken by Suncor and permitted by the MOE violate the rights of area residents — both on the Aamjiwnaang reserve and within the city of Sarnia — under the Canadian Charter of Rights and Freedoms. Ron, Ada, and their Ecojustice legal team have a long road ahead of them, but their case is a strong voice with potentially transformative effects in this battle for environmental justice.
For another perspective on Aamjiwnaang’s circumstances, read this poem by Australian philosopher Glenn Albrecht: Life Out of Balance |
The Eastern Ganga Dynasty ruled most parts of southeast India during the 11th century. Their capital was known by the name Kalinganagar, which is the modern Srimukhalingam in Srikakulam District of Andhra Pradesh.
During their reign (1076–1435), a new style of temple architecture came into being, commonly called Indo-Aryan architecture. This dynasty was founded by King Anantavarman Chodaganga Deva (1077–1147). He was a religious person and a patron of art and literature. He is credited with having built the famous Jagannath Temple of Puri in Orissa.
King Anantavarman Chodaganga Deva was succeeded by a long line of illustrious rulers. Among them was Narasimha I (1238–1264), who built the famous Sun Temple of Konark in Orissa. The rulers of the Eastern Ganga dynasty defended their kingdom from the constant attacks of Muslim rulers. The kingdom prospered through trade and commerce, and the wealth was mostly used in the construction of temples. The rule of the dynasty came to an end under the reign of King Bhanudeva IV in the early 15th century. |
Single-bubble sonoluminescence occurs when an acoustically trapped and periodically driven gas bubble collapses so strongly that the energy focusing at collapse leads to light emission. Detailed experiments have demonstrated the unique properties of this system: the spectrum of the emitted light tends to peak in the ultraviolet and depends strongly on the type of gas dissolved in the liquid; small amounts of trace noble gases or other impurities can dramatically change the amount of light emission, which is also affected by small changes in other operating parameters (mainly forcing pressure, dissolved gas concentration, and liquid temperature). This article reviews experimental and theoretical efforts to understand this phenomenon. The currently available information favors a description of sonoluminescence caused by adiabatic heating of the bubble at collapse, leading to partial ionization of the gas inside the bubble and to thermal emission such as bremsstrahlung. After a brief historical review, the authors survey the major areas of research: Section II describes the classical theory of bubble dynamics, as developed by Rayleigh, Plesset, Prosperetti, and others, while Sec. III describes research on the gas dynamics inside the bubble. Shock waves inside the bubble do not seem to play a prominent role in the process. Section IV discusses the hydrodynamic and chemical stability of the bubble. Stable single-bubble sonoluminescence requires that the bubble be shape stable and diffusively stable, and, together with an energy focusing condition, this fixes the parameter space where light emission occurs. Section V describes experiments and models addressing the origin of the light emission. The final section presents an overview of what is known, and outlines some directions for future research.
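The bubble dynamics underlying this picture can be sketched numerically. The Python fragment below integrates a stripped-down Rayleigh-Plesset equation (polytropic gas law, no viscosity or surface tension) with assumed, illustrative parameters (a 5 µm bubble in water, with an optional acoustic drive); it is a toy sketch, not one of the detailed models reviewed in the article.

```python
# Toy sketch of the Rayleigh-Plesset equation for the bubble radius R(t):
#   R*R'' + (3/2)*R'^2 = (1/rho) * (p_gas(R) - p0 - p_drive(t))
# with polytropic gas pressure p_gas = p0 * (R0/R)**(3*gamma).
# Viscosity and surface tension are omitted; all parameter values are assumed.
import math

def simulate_bubble(R0=5e-6, p0=101325.0, rho=998.0, gamma=1.4,
                    p_amp=0.0, freq=26.5e3, dt=1e-9, steps=1000):
    """Integrate the simplified Rayleigh-Plesset ODE with explicit Euler; return final R."""
    R, Rdot, t = R0, 0.0, 0.0
    for _ in range(steps):
        p_gas = p0 * (R0 / R) ** (3.0 * gamma)          # polytropic gas pressure
        p_drive = p_amp * math.sin(2.0 * math.pi * freq * t)  # acoustic forcing
        Rddot = ((p_gas - p0 - p_drive) / rho - 1.5 * Rdot ** 2) / R
        Rdot += Rddot * dt
        R += Rdot * dt
        t += dt
    return R
```

With no acoustic driving (`p_amp = 0`), the gas pressure balances the ambient pressure and the bubble stays at its equilibrium radius; a nonzero drive sets the radius oscillating, the regime in which the violent collapses relevant to sonoluminescence arise.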
- Published 13 May 2002
© 2002 The American Physical Society |
The flashcards below were created by a user on FreezingBlue Flashcards.
What was the German Revolution in November 1918?
- "Kiel Mutiny": German navy (which had been inactive for most of the war) triggered a revolution through mutinies.
- Sailors refused orders to attack the superior British navy in the closing days of the war.
- Joined by protesters and workers.
- First Workers' and Soldiers' Council: established in Kiel; it supported Marxism and democracy, as well as the removal of the Kaiser and an end to the war.
Who was Ebert and what position did he gain on 9th November 1918?
- On the 9th of November, the day of the Kaiser's abdication, Prince Max von Baden also stepped down, handing over the government to Friedrich Ebert, who was the leader of the Social Democratic Party. Ebert headed the new provisional government and became chancellor.
How was it that Germany was declared a republic against Ebert’s wishes?
- Ebert wanted there to be an elected assembly to finally decide whether Germany would become a republic. However, before this could occur, Philipp Scheidemann addressed the nation from a balcony of the Reichstag, stating: "The old and the rotten – the monarchy – has broken down. Long live the German republic!"
When did Ebert become president?
6th Feb 1919
Which were the most popular parties?
- Ebert's Social Democratic Party, with 165 seats, and the Centre Party, with 91 seats out of 423. After a coalition was formed around the Social Democratic Party, it held 329 seats.
Which groups opposed the Weimar republic?
Left-wing politicians, anti-democratic groups, and (to an extent) the German army.
Why did the government find it hard to establish order in Germany?
- There was trouble establishing order in Germany due to the unrest caused by hyperinflation and a feeling of betrayal towards the government for agreeing to the Treaty of Versailles; there were many counter-revolutionaries wanting to overthrow the Weimar republic, and the German economy was in a disastrous state after the war. There were six governments between 1924 and 1928, which had no real stability. Many groups within Germany resented the republic, and the constitution gave the states and the army too much power. There were many threats to the government from those who resented it for signing the ToV.
- After Germany failed to pay a war repayment as the ToV dictated, the French invaded Germany (the occupation of the Ruhr).
List three putsches.
- Spartacist Revolt: 30 December 1918 – the Spartacists declared themselves the Communist Party of Germany. A number of key buildings in Berlin were seized, and the Ebert government was declared deposed. It was unsuccessful: Ebert used the Freikorps (ex-WWI soldiers who helped keep order) to stop the rebellion and kill communists. The leaders were murdered. Using the Freikorps made him look weak – they could turn on him at any moment.
- Kapp Putsch: March 1920, a right-wing revolt led by Kapp in an attempt to initiate a military coup. Soldiers followed this revolt, and the government had to halt it with the help of loyal soldiers and street gangs. Kapp received a six-month prison sentence, as many felt it was important to respect the social hierarchy.
- Munich/Beer Hall Putsch: November 1923, Hitler and the Nazis attempted to take control of Munich by force. The revolt failed because neither the Bavarian army nor von Kahr, the right-wing leader of Bavaria, was prepared to support it.
Who were the spartacists? What were their aims?
- The Spartacists, led by Karl Liebknecht and Rosa Luxemburg, were revolutionaries. They believed in Marxism and wanted to stage a revolution like the one seen in Russia in 1917. Through their newspaper, The Red Flag, they urged a revolution in Germany to overthrow the Ebert government. Just as the second revolution in Russia saw the communists victorious, they hoped a second revolution in Germany would have the same result.
How did the Treaty of Versailles undermine the Weimar Republic?
- As the previous government had promised victory in WWI, Germans felt betrayed and shocked at their defeat, which culminated in the ToV. While the government had no choice but to accept the harsh terms of the treaty, it outraged German citizens. They coined a name for it – “diktat”, meaning dictated peace – and accused the Weimar government of “stabbing them in the back”. The people of Germany began to look elsewhere for better politicians to rule Germany than the Social Democrats.
Who were the ‘November criminals’?
- Because the German government didn’t want to claim responsibility for the war – they had promised a victory – they placed the blame on specific minorities of German society. These included Jews, pacifists and socialists, who were said to have been against the war, to have therefore caused Germany’s defeat, and to be solely responsible for it.
What were the ‘freikorps’ and what were their aims?
- The Free Corps: volunteer military units formed in 1918, made up mainly of ex-soldiers. They independently organised themselves into their own military groups under former officers. They were opposed to the extremists of the left and were used to crush the Spartacist uprising in 1919. Ebert repeatedly used their services to “put down” revolts and revolutionary acts, as seen with the Spartacists and the Bavarian uprising. Making use of the Freikorps made Ebert look weak, as they could turn on him at any moment and were thus politically unreliable. The Freikorps saw the extreme left as an immediate danger and would do anything to stop them from taking power – hence their agreement to assist the Ebert government.
Who were the Reichswehr? What was the Ebert–Groener pact and what were its implications?
- The Reichswehr were the German army during the rule of the Weimar Republic. General Groener, Ludendorff’s successor, feared the extreme left as well as the disintegration of military discipline and thus said that the army was prepared and ‘at Ebert’s disposal’ in return for Ebert maintaining order in the army.
- Its implications were that the army, despite having little sympathy for the new republic, was prepared to support it – a moderate left-wing government – against elements that threatened republicanism.
- It also had the long-term effect of allowing the German army to retain its influence and become a key political force within the new republic.
Who was General Hans von Seeckt? In what ways
did he try to preserve the German army following the end of the war and the
imposition of the Treaty of Versailles?
- General Hans von Seeckt was the commander of the German army from 1920 to 1926. He, like many others, had little faith in the republic and believed that the army’s loyalty was to the nation. He was determined to preserve the army’s position, but knew that for this to happen the army had to work with the new government. The army supported the Policy of Fulfilment, and by the late 1920s it had a significantly increased influence within government, especially once Field Marshal Paul von Hindenburg became president of Germany in 1925. Von Seeckt also wanted to overcome the restrictions the Treaty of Versailles imposed on the German army. He did this in several ways:
- The restriction on the number of German officers was overcome by giving many officers civilian titles and placing them within government agencies.
- Soldiers maintained their role by joining the police force.
- He used the reduced size to his advantage: he made recruitment for the army more rigorous and selective, trying to create an army of leaders. Soldiers were also trained for ranks above their own, so they would be ready when the time came.
What was the Kapp putsch? What did the Reichswehr
do to protect the Weimar government at this time? How was the Kapp putsch defeated?
- In 1920 extreme right-wing elements, who had never supported the new government, tried to overthrow the elected government in what came to be known as the Kapp putsch. Its immediate cause was the government’s attempt to carry out the military clauses of the ToV, which caused deep resentment. A part of this was to reduce the size of the Freikorps, which had helped put down the Spartacists in 1919; when the government ordered their dissolution, General von Lüttwitz refused to obey. He organised a march on Berlin with other right-wing officers and civilians. Around 12,000 men marched.
- Since the army was a right-wing organisation, it did not resist the putsch.
- The putschists were successful for only a few days, owing to the defiance of the working class. There was a general strike, which was very effective and paralysed the city, as a show of support for the legal government.
- The right-wingers left the city and the legal government returned.
It was significant because:
- It was the first attempt by the radical right to seize power.
- It exposed the weakness of the government, whose president and elected leaders were forced to leave the capital.
- It showed the growing power of the German army: the army was
- becoming a state within a state, able and prepared to follow its own policies
- regardless of the elected government.
When was the reparations settlement announced?
What sum did Germany have to pay to the victorious allies?
- It was decided in 1921 that Germany was to pay 132,000 million (132 billion) gold marks as reparations for the war, to be paid annually in cash or in resources such as coal and iron ore. It caused political crises and saw the fall of the government as a consequence. A coalition government was quickly formed from the Socialists, the German Democratic Party, and the Centre Party.
Between which two nations was the Treaty of
Rapallo made? When was it signed and what were its provisions? In particular, what
did the Reichswehr gain from the treaty?
- The Treaty of Rapallo was made in 1922 between Germany and Russia – “Germany re-establishing relations with its old foe”. It included the setting up of military cooperation with the Soviet Union and the negotiation of military training facilities on Soviet soil for German officers and men. Pilots were trained for the new German air force at gliding and aviation clubs within Germany and at Lipetsk, an airbase south-east of Moscow. The Reichswehr essentially gained training.
What was the Ruhr crisis?
- French invasion of the Ruhr: in January 1923, French and Belgian troops marched into and occupied Germany’s industrial Ruhr region as a result of Germany’s non-compliance with the terms of the Treaty of Versailles. The Ruhr occupation would last more than two and a half years, though some evidence suggests the Poincaré government had been plotting to occupy the Ruhr since 1919. France had its own sizeable war debts to meet and was beginning to feel short-changed by the terms of Versailles. And there was much to be gained by occupying the Ruhr, which housed three quarters of Germany’s steel and coal production.
- The Ruhr occupation was achieved swiftly. Once French and
- Belgian troops had crossed the border, they sealed off the Ruhr from the rest
- of Germany and began marching 150,000 civilians and non-essential workers out
- of the area. German industrial workers remained in the Ruhr and in some cases
- were prevented from leaving. By July, the French had set up an exclusion zone,
- restricting traffic in and out of the Ruhr. Across Germany there were
- press reports, most of them exaggerated if not entirely fictional, of French
- soldiers executing or beating German workers and civilians in the
- Ruhr. The occupiers also began confiscating raw materials and manufactured
- goods, which were loaded onto railway carts to be shipped back to France and
- Belgium – payment in kind for the missed reparations instalments.
What was the beer hall putsch?
- Beer Hall Putsch: a right-wing party in Bavaria led by Adolf Hitler (the Nazis) took the opportunity amid the political unrest of 1923 (the threat of separatism) to try and seize control of the Bavarian government. This took place on 8 November 1923, coming to be known as the Beer Hall Putsch, and was intended as the first step in taking Berlin. It failed due to a lack of support: neither the Bavarian army nor von Kahr, the right-wing leader of Bavaria, supported it. During the march in Munich, gunfire halted the Nazis, killing sixteen of them; Hitler was arrested.
Who was Gustav Stresemann and what contribution
did he make to the solution of the crises in 1923? What was the policy of fulfilment?
- One of the few outstanding politicians of the Weimar Republic. He supported the monarchy, but after the republic was proclaimed he accepted the new political situation and became one of its true champions. When the National Liberal Party split into the German Democratic Party and the National People’s Party, he formed his own conservative group, the German People’s Party. He ended the policy of passive resistance. The Stresemann government was given special emergency powers by the Reichstag to deal with the problems facing the country; this was the Enabling Act, legal under Section 76 of the constitution. Two days later, the government tackled the problem of hyperinflation by introducing a new currency, the Rentenmark, which was replaced by the Reichsmark in 1924. The government also carried out long-overdue economic reforms: the budget was balanced, government expenditure was cut, particularly after the ending of passive resistance, and new taxes were introduced.
- The Policy of Fulfilment was the name given to the policy of the Weimar government of the early 1920s to meet, or fulfil, the terms of the Treaty of Versailles imposed on Germany by the Allies.
What was the Dawes plan and what benefits did it bring to Germany?
- The Dawes Plan was announced in April 1924. Germany restarted reparations payments, beginning with 1,000 million marks in 1925 and rising to 2,500 million marks per year within the next five years. The plan was introduced by a committee led by the American banker Charles Dawes to adjust Germany's reparations payments to its capacity to pay, and it significantly stabilised the German economy.
Who signed the Treaty of Locarno in 1925? What
were the provisions of the Treaty and how did it change Germany’s status in
the international community?
- The Treaty was signed by Germany, France, Belgium, Italy and Britain in October 1925. It guaranteed the borders between Belgium and Germany and between France and Germany. It marked a change in European relations, as for the first time Germany had been treated like an equal. The French and Germans felt secure, as neither country would occupy the other again.
What was the Young Plan?
- Stresemann introduced the Young Plan in 1929. It followed the earlier work of the Dawes Plan and set out to revise the issue of reparations. Through this plan, the total was reduced from 132,000 million marks to 37,000 million marks.
The Galileo spacecraft, launched in 1989 with the ultimate destination of Jupiter, carried a number of scientific instruments on board to study the solar system en route, including a radiometer and ultraviolet, extreme-ultraviolet, and near-infrared spectrometers, which record light outside the visible range. Upon arrival at Jupiter in 1995, Galileo released a probe that plunged into the planet’s fiery atmosphere, transmitting vital scientific data before it was destroyed.
NASA’s Cassini spacecraft set out toward Saturn and Saturn’s moon Titan in October 1997. Cassini reached Jupiter at the end of the year 2000 and is scheduled to reach Saturn in 2004. After reaching Saturn, it should release a probe into Titan’s atmosphere.
Other Solar System Missions
Aside from the planets and their moons, space missions have focused on a variety of other solar system objects. The Sun, whose energy affects all other bodies in the solar system, has been the focus of many missions. Between and beyond the orbits of the planets, innumerable smaller bodies—asteroids and comets—also orbit the Sun. All of these celestial objects hold mysteries, and spacecraft have been launched to unlock their secrets.
A number of the earliest satellites were launched to study the Sun. Most of these were Earth-orbiting satellites. The Soviet satellite Sputnik 2, launched in 1957 to become the second satellite in space, carried instruments to detect ultraviolet and X-ray radiation from the Sun. Several of the satellites in the U.S. Pioneer series of the late 1950s through the 1970s gathered data on the Sun and its effects on the interplanetary environment. A series of Earth-orbiting U.S. satellites, known as the Orbiting Solar Observatories (OSO), studied the Sun’s ultraviolet, X-ray, and gamma-ray radiation through an entire cycle of rising and falling solar activity from 1962 to 1978. Helios 2, a solar probe created by the United States and West Germany, was launched into a solar orbit in 1976 and ventured within 43 million km (27 million mi) of the Sun. The U.S. Solar Maximum Mission spacecraft was designed to monitor solar flares and other solar activity during the period when sunspots were especially frequent. After suffering mechanical problems, in 1984 it became the first satellite to be repaired by astronauts aboard the space shuttle. The satellite Yohkoh, a joint effort of Japan, the United States, and Britain, was launched in 1991 to study high-energy radiation from solar flares. The Ulysses mission was created by NASA and the European Space Agency. Launched in 1990, the spacecraft used a gravitational assist from the planet Jupiter to fly over the poles of the Sun. The European Space Agency launched the Solar and Heliospheric Observatory (SOHO) in 1995 to study the Sun’s internal structure, as well as its outer atmosphere (the corona), and the solar wind, the stream of subatomic particles emitted by the Sun.
Asteroids are chunks of rock that vary in size from dust grains to tiny worlds, the largest of which is more than a third the size of Earth’s Moon. These rocky bodies, composed of debris left over from the formation of the solar system, are among the latest solar system objects to be visited by spacecraft. The first such encounter was made by the Galileo spacecraft, which passed through the solar system’s main asteroid belt on its way to Jupiter. Galileo flew within 1,600 km (1,000 mi) of the asteroid Gaspra on October 29, 1991. Galileo’s images clearly showed Gaspra's irregular shape and a surface covered with impact craters. On August 28, 1993, Galileo passed close by the asteroid 243 Ida and discovered that it is orbited by another, smaller asteroid, subsequently named Dactyl. Ida is the first asteroid known to possess its own moon. On June 27, 1997, the Near-Earth Asteroid Rendezvous (NEAR) spacecraft flew past asteroid 253 Mathilde. NEAR reached the asteroid 433 Eros and became the first spacecraft to orbit an asteroid in February 2000. The United States launched the spacecraft Deep Space 1 (DS1) in 1998 to prepare for 21st-century missions within the solar system and beyond. In July 1999 DS1 flew by the small asteroid 9969 Braille and discovered that it is composed of the same type of material as the much larger asteroid 4 Vesta. Braille may be a broken piece of Vesta, or it may have simply formed at the same time and place as Vesta in the early solar system.
Comets are icy wanderers that populate the solar system’s outermost reaches. These “dirty snowballs” are chunks of frozen gases and dust. When a comet ventures into the inner solar system, some of its ices evaporate. The comet forms tails of dust and ionized gas, and many have been spectacular sights. Because they may contain the raw materials that formed the solar system, comets hold special fascination for astronomers. Although several comets have been observed by a variety of space-borne instruments, only a couple have been visited by spacecraft. The most famous comet of all, Halley’s Comet, made its most recent passage through the inner solar system in 1986. In March 1986 five separate spacecraft flew past Halley, including the USSR’s Vega 1 and Vega 2 probes, the Giotto spacecraft of the European Space Agency, and Japan’s Sakigake and Suisei probes. These encounters produced valuable data on the composition of the comet’s gas and dust tails and its solid nucleus. Vega 1 and 2 returned the first close-up views ever taken of a comet’s nucleus, followed by more detailed images from Giotto. Giotto went on to make a close passage to Comet P/Grigg-Skjellerup on July 10, 1992.
Piloted spaceflight presents even greater challenges than unpiloted missions. Nonetheless, the United States and the USSR made piloted flights the focus of their Cold War space race, knowing that astronauts and cosmonauts put a face on space exploration, enhancing its impact on the general public. The history of piloted spaceflight started with relatively simple missions, based in part on the technology developed for early unpiloted spacecraft. Longer and more complicated missions followed, crowned by the ambitious and successful U.S. Apollo missions to the Moon. Since the Apollo program, piloted spaceflight has focused on extended missions aboard spacecraft in Earth orbit. These missions have placed an emphasis on scientific experimentation and work in space.
Vostok and Mercury
At the beginning of the 1960s, the United States and the USSR were competing to put the first human in space. The Soviets achieved that milestone on April 12, 1961, when a 27-year-old pilot named Yuri Gagarin made a single orbit of Earth in a spacecraft called Vostok (East). Gagarin’s Vostok was launched by an R-7 booster, the same kind of rocket that had launched Sputnik. Although the Soviets portrayed Gagarin’s 108-minute flight as flawless, historians have since learned that Vostok experienced a malfunction that caused it to tumble during the minutes before its reentry into the atmosphere. However, Gagarin parachuted to the ground unharmed after ejecting from the descending Vostok.
On May 5, 1961, the United States entered the era of piloted spaceflight with the mission of Alan Shepard. Shepard was launched by a Redstone booster on a 15-minute “hop” in a Mercury spacecraft named Freedom 7. Shepard’s flight purposely did not attain the necessary velocity to go into orbit. In February 1962, John Glenn became the first American to orbit Earth, logging five hours in space. His Mercury spacecraft, called Friendship 7, had been borne aloft by a powerful Atlas booster rocket. After his historic mission, the charismatic Glenn was celebrated as a national hero.
The Soviets followed Gagarin’s flight with five more Vostok missions, including a flight of almost five days by Valery Bykovsky and the first spaceflight by a woman, Valentina Tereshkova, both in June 1963. By contrast, the longest of the six piloted Mercury flights was the 34-hour mission flown by Gordon Cooper in May 1963.
By today’s standards, Vostok and Mercury were simple spacecraft, though they were considered advanced at the time. Both were designed for the basic mission of keeping a single pilot alive in the vacuum of space and providing a safe means of return to Earth. Both were equipped with small thrusters that allowed the pilot to change the craft’s orientation in space. There was no provision, however, for altering the craft's orbit—that capability would have to wait for the next generation of spacecraft. Compared to Mercury, Vostok was both roomier and more massive, weighing 2,500 kg (5,500 lb)—a reflection of the greater lifting power of the R-7 compared with the U.S. Redstone and Atlas rockets.
Using the Very Long Baseline Array, astronomers have managed to capture an image of a black hole firing two gigantic pockets of ionized gas at almost a quarter the speed of light. The resulting cosmic explosion produces as much energy in an hour as our sun emits in five years.
The Very Long Baseline Array is a set of 10 radio telescopes that spans 5,000 miles from Mauna Kea in Hawaii to St. Croix in the U.S. Virgin Islands. It provides astronomers with the sharpest vision of any telescope on Earth or in space. The black hole in question has been designated H1743-322, and it is situated near the center of our galaxy, approximately 28,000 light-years from Earth in the constellation Scorpius.
Gregory Sivakoff from the University of Alberta states that if our eyes were as sharp as the VLBA, we would be able to see a person on the moon. The findings were presented on January 10th of this year at a meeting of the American Astronomical Society.
A Sun-like star orbits H1743, and the black hole periodically siphons matter from its companion. The gas and dust are incorporated into a large disk, which slowly spirals around the black hole’s event horizon. Researchers haven’t yet discovered exactly how the process works, but the disk constantly emits large jets of plasma, which spew out in opposite directions. Occasionally, these jets recede and are followed by an enormous bullet-like burst of gases.
Sivakoff and his team detected a lump of material, which they hypothesized to be a blob of ionized gas, spiraling its way toward the black hole’s center. The lump produced X-ray flickers known as quasi-periodic oscillations, which disappeared before the jets waned. A few days later, the team detected the bullet-like ejection of gases.
Weak Acid – Strong Base Titration
A titration is the combination of two chemicals that will react. Often, we will titrate to the endpoint with a standardized solution (called the titrant) in order to find the concentration of an unknown solution. For example, we might titrate an unknown acid with a titrant of known concentration, like titrating HCl of unknown concentration with NaOH of known concentration. We know a neutralization occurs in an acid-base titration and an equivalence point is reached when the reactants are stoichiometrically related.
HCl(aq) + NaOH(aq) → H2O(l) + NaCl(aq) (1)
Since this reaction has a 1:1 mole ratio, at the equivalence point we know the moles of HCl and NaOH that reacted will be equal. (There may be more than one equivalence point for a diprotic or triprotic acid, or for a base that produces more than one hydroxide ion.) Ideally, we choose an indicator that changes color at the equivalence point so that the observed endpoint will coincide with the equivalence point.
Titration is a volumetric technique based on a precise measurement of the volume of a solution of known molarity (the standard solution, or titrant), which enables one to calculate the number of moles of titrant reacted and to proceed with solution-stoichiometry calculations to obtain the quantitative data needed about an unknown solution (the analyte). We will perform this titration slightly differently: we will begin with a sample of a weak acid and add strong base (see equation 2), measuring the pH as we go. From these data we will create a titration curve.
HC2H3O2(aq) + NaOH(aq) → H2O(l) + NaC2H3O2(aq) (2)
A titration curve (Figure 1) monitors a titration as it proceeds with added titrant (base in this case) by graphing pH vs volume of titrant added. The curve shows several very important features that give us information. We can find the equivalence point, when the added titrant neutralizes the analyte. As you can see, this will sometimes be a neutral pH (Figure 1, i and ii) and sometimes not (Figure 1, iii). The equivalence point is at the inflection point of the curve, where we can calculate the moles of titrant added to find the moles of analyte.
Figure 1: Titration curves of (i) strong acid with a strong base, (ii) strong base with a strong acid, and (iii) a weak base with a strong acid. From Chemistry: A Molecular Approach, 5th Edition. Nivaldo Tro, Pearson Education, Inc.
Let’s say, in the titration of a weak base with a strong acid, we were titrating 100. mL of ammonia (weak base) with 0.100M HCl (strong acid). We see from the graph (Figure 1, iii) that the equivalence point occurs after 25 mL of acid has been added. In order to find the original concentration of the base we would do the following calculations.
Moles of acid added at equivalence point = V(acid) × M(acid) (3)
Moles acid = (0.025L)(0.100M)
Moles acid = 0.0025 moles
At the equivalence point we know
Initial moles unknown base = moles acid added
moles base = 0.0025 moles
And the original base concentration would be
M(base) = 0.0025 mol ÷ 0.100 L = 0.025 M NH3
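The worked ammonia example above can be checked numerically. The following Python sketch just restates equation (3) and the 1:1 stoichiometry; all the numbers come from the example itself:

```python
# Numerical check of the worked ammonia example (a sketch):
# 100. mL of NH3 titrated with 0.100 M HCl, equivalence at 25 mL of acid.
v_acid = 0.025    # L of HCl added at the equivalence point
m_acid = 0.100    # mol/L of the HCl titrant
v_base = 0.100    # L of the original ammonia sample

moles_acid = v_acid * m_acid     # equation (3)
moles_base = moles_acid          # 1:1 stoichiometry at the equivalence point
m_base = moles_base / v_base     # original base concentration

print(f"{moles_acid:.4f} mol acid -> [NH3] = {m_base:.3f} M")
```

This reproduces the 0.0025 mol of acid and the 0.025 M NH3 found above.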
Because this is a weak base, the pH at the equivalence point is slightly acidic. At the equivalence point all of the weak base will be reacted and converted to the conjugate acid. Consequently the presence of the weak acid will make the solution acidic. The same reasoning applies to the titration of a weak acid with a strong base and the conjugate weak base that is present at the equivalence point.
We also see the half equivalence point (labeled in Figure 1, iii) which is in the buffer region and is where the solution contains an equimolar amount of a weak base analyte and its conjugate weak acid. At this point
pH = pKa (of the conjugate acid) (4)
pOH = pKb (of the weak base) (5)
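Relations (4) and (5) follow from the Henderson–Hasselbalch equation. A small sketch makes the half-equivalence case concrete; the pKa of 4.74 below is the familiar value for acetic acid (from Ka = 1.8 x 10^-5) and is used only for illustration:

```python
import math

# Henderson-Hasselbalch: pH = pKa + log10([conjugate base]/[weak acid]).
# At the half-equivalence point the two concentrations are equal,
# the log term is zero, and pH = pKa exactly.
def buffer_ph(pka, base_conc, acid_conc):
    return pka + math.log10(base_conc / acid_conc)

pka_acetic = 4.74                          # illustrative value
print(buffer_ph(pka_acetic, 0.05, 0.05))   # half-equivalence: prints 4.74
```

Changing the ratio away from 1:1 shifts the pH above or below pKa, which is exactly the behavior of the buffer region of the curve.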
There are several different types of calculations (Figure 2) we must do at different times of the titration (different regions of the graph). Here we will see the titration of a weak acid with a strong base.
Figure 2: Calculation regions of a weak acid/strong base titration.
Initial pH, weak acid only region (no titrant added)
In this region there is only the initial substance (weak acid in Figure 2) before any base is added. The pH will be the pH of the weak acid solution.
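As a sketch of this first region's calculation (not a substitute for the ICE table you will show), the usual x << c approximation reduces the weak-acid equilibrium to [H3O+] = sqrt(Ka·c):

```python
import math

# Initial pH of a weak-acid solution (sketch): Ka = x^2 / (c - x);
# under the small-x approximation, x = [H3O+] = sqrt(Ka * c).
def initial_ph(ka, c):
    h = math.sqrt(ka * c)
    return -math.log10(h)

# e.g. 0.10 M acetic acid with Ka = 1.8e-5
print(round(initial_ph(1.8e-5, 0.10), 2))   # prints 2.87
```

The approximation is valid here because x is much smaller than 0.10 M; for very dilute or very strong weak acids the full quadratic would be needed.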
Buffer region (before equivalence point)
Here we have a mix of weak acid and conjugate base. We would determine the pH with an SCF table (stoichiometry) followed by an ICE table, or with the Henderson–Hasselbalch equation.
Equivalence point
At this point all of the weak acid has just been neutralized, leaving its conjugate base. We would do an SCF table (stoichiometry) and then an ICE table, since we are left with a weak base.
After the equivalence point (excess strong base)
All the acid was neutralized to the conjugate base at the equivalence point, but this weak base is insignificant compared to the excess strong base (titrant). We calculate the amount of excess base with an SCF table (stoichiometry) and then find pH or pOH from that concentration.
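The excess-base bookkeeping can also be sketched numerically. The quantities below (0.0050 mol of acid, 0.0090 mol of NaOH, 140 mL total volume) are illustrative assumptions, not measured lab values:

```python
import math

# pH past the equivalence point (sketch): the leftover strong base
# dominates, so pOH comes straight from the excess OH- concentration.
def ph_after_equivalence(mol_acid, mol_naoh, total_volume_l):
    excess_oh = mol_naoh - mol_acid          # SCF table: OH- remaining
    poh = -math.log10(excess_oh / total_volume_l)
    return 14.00 - poh                       # at 25 C, pH + pOH = 14.00

print(round(ph_after_equivalence(0.0050, 0.0090, 0.140), 2))   # prints 12.46
```

Note that the total volume grows as titrant is added, so the excess moles must be divided by the combined volume, not the original sample volume.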
In addition to using the pH meter to measure the pH, we will also use indicator paper. Indicator paper is paper impregnated with an indicator or a mixture of indicators that change color depending on their degree of protonation. At low pH, the indicator (a weak acid) retains its proton; as the pH increases, the indicator loses the proton. The key to these molecules is that the weak acid form is a different color than the conjugate base form. Litmus paper is an example of indicator paper, but we will use universal indicator paper.
Prelab: Show all your CALCULATIONS in each problem
Figure 3: A Titration curve for the titration of 20. mL of acetic acid with NaOH.
1) Predict (using an ICE table) the pH of 50. mL of 0.10 M HC2H3O2(aq), abbreviated as HA(aq) in the rest of this text (Ka = 1.8 x 10^-5). This is the initial pH on the titration curve if this solution is titrated with NaOH(aq). Use an arrow to point to and label this point on the curve above (Figure 3).
2) Predict the pH of a solution if 10. mL of 0.10M NaOH is added to the solution in #1. In order to solve this problem, you must calculate how many moles of NaOH are being added, then react it at 100% (SCF table) to find out how much HA(aq) and A-1(aq) is present after the reaction, and finally calculate the pH using the Henderson-Hasselbalch equation or by using the Ka (ICE table and be aware that there is a new volume). Use an arrow to point to and label this point on the curve above (Figure 3).
3) Predict the pH of a solution if 15.0 mL of 0.10M NaOH is added to the solution in #1. In order to solve this problem, you must calculate how many moles of NaOH are being added, then react it at 100% (SCF table) to find out how much HA and A-1 is present, and finally calculate the pH (Henderson Hasselbalch or ICE table). Use an arrow to point to and label this point on the curve above (Figure 3).
4) Predict the pH of a solution if a total of 50. mL of 0.10M NaOH is added to the solution in #1. In order to solve this problem, you must calculate how many moles of NaOH are being added, then react it at 100% (SCF table) to find out how much HA and A-1 is present, if any. Once you know what species are present in the solution at this point, you will have to write the corresponding base dissociation of that species with water, set up the appropriate ICE table, and calculate the pH of the solution using the Kb of the conjugate base of the weak acid initially present in the solution (described in question 1). Use an arrow to point to and label this point on the curve above (Figure 3).
5) Predict the pH of a solution if a total of 90. mL of 0.10M NaOH is added to the solution in #1. In order to solve this problem, you must calculate how many moles of NaOH are being added, then react it at 100% (SCF table) to find out how much excess NaOH is present, and finally calculate the pH. Use an arrow to point to and label this point on the curve above (Figure 3).
Safety: Wear safety goggles.
Chemical Disposal: All chemicals in this experiment can be disposed of in the sink. pH paper can be disposed of in the trash.
0.1 M NaOH
250 mL beaker
Glass stirring rod
Calibrating the volume of a drop of solution:
- Fill a 10 mL graduated cylinder with approximately 2 mL of water. Carefully record the volume to the appropriate number of significant figures. Use a plastic disposable pipet from the kit to add 25 drops of water, being careful to make each drop consistent. Measure and record the final volume. Repeat, adding an additional 25 drops, and measure and record the final volume. Repeat a third time, adding another 25 drops, and measure and record the final volume.
Volume readings to record: 0 drops (original volume), 25 drops, 50 drops, 75 drops
- Calculate the volume of 1 drop of solution (volume change ÷ 25 drops) for each trial and report the average. Show your work and report your answer here.
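The drop-volume arithmetic from the calibration step can be sketched as follows; the graduated-cylinder readings below are hypothetical, not expected values:

```python
# Average drop volume from the calibration step (sketch):
# each trial adds 25 drops, so one drop = (final - initial reading) / 25.
readings_ml = [2.0, 3.2, 4.5, 5.7]   # hypothetical cylinder readings

drop_volumes = [(after - before) / 25
                for before, after in zip(readings_ml, readings_ml[1:])]
average = sum(drop_volumes) / len(drop_volumes)
print(f"average drop volume: {average:.3f} mL")
```

Averaging the three trials smooths out the drop-to-drop variation you will see with a disposable pipet.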
- Measure approximately 5 mL of vinegar in the graduated cylinder. It does not have to be exactly 5 mL, but you must record the exact volume and add this to the 250 mL beaker. Add approximately 50 mL of water to the beaker and mix with the stirring rod.
- Measure and record the pH of the solution with the pH meter. This is your initial pH. Measure and record the pH of the solution with pH paper. Dip your glass rod in the solution, remove it and touch it to a piece of pH paper. Compare the color of the paper with the chart on the container. Dispose of the paper in the garbage.
- Measure 5.0 mL of 0.1M NaOH in the 10 mL graduated cylinder. Record the volume in your data table. If it is not precisely 5.0 mL record the precise amount. Add it to the solution of vinegar (described in step 3 above) and stir the mixture thoroughly.
- Measure and record the pH with both the pH meter and the pH paper.
- Continue adding NaOH and measuring the pH by adding 2 more 5.0 mL aliquots (samples) of NaOH. Stir, measure the pH with the pH meter and the pH paper, and record your results after each addition.
- Measure 1.0 mL of NaOH with the pipet and add it to the solution. Then stir, measure the pH with the pH meter and the pH paper, and record your results.
The graduated pipet is marked to measure 0.25, 0.50, 0.75, and 1.0 mL. When you use the pipet, be sure the tip of it is below the level of the liquid when you are sucking it up and be sure there are no air bubbles in the pipet. Suck up the solution so it is above the 1.0 mL line, remove it from the solution, and squirt out the solution so it is at the line. Make this amount as precise as possible.
- Continue adding 1.0 mL aliquots of NaOH until the pH is close to 6.0. Stir, measure the pH with the pH meter and the pH paper, and record your results after each addition. Go to the next step when the pH is at or above 6.0.
- Begin adding the NaOH dropwise, 10 drops at a time. Record the number of drops you add in the first column of the data table. Using your average drop volume, calculate the volume you added and record that to the volume column in the data table. Stir, measure the pH, and record your results.
- Continue this addition of 10 drops at a time, stirring, measuring and recording, until the pH is greater than 10.0.
- Add 5 more 1.0 mL 0.1 M NaOH aliquots (measured with the pipet). Stir, measure the pH with the meter and the paper, and record your results after each addition.
- Add 1 more 5.0 mL 0.1M NaOH aliquot. Stir, measure the pH, and record your results.
Data table columns (recorded for each addition): drops added (when appropriate); volume of base added this step; total volume of base added.
- Draw a titration curve for the titration on the attached graph paper, or in Excel as a scatter plot with smooth lines and markers (see Figure 1 and Figure 2). Make the graph as large as possible.
- Label the following regions on the titration curve: Weak Acid, Buffer, Equivalence point, Strong Base.
- Label the titration curve with the species (HC2H3O2, C2H3O2−, NaOH) present in each of the regions you labeled in question 2.
- Why are the calculations done differently in the different regions you labeled in Question 2? Be specific by referring to the species present in each region.
- What is the volume of base added to reach the equivalence point (the inflection point of the curve)? ___________
- What is the initial concentration of the Acetic Acid solution (the vinegar you began with)? Keep in mind that for a monoprotic acid the moles of titrant at the equivalence point equal the moles of analyte. Using this information and the initial vinegar volume, you can calculate the concentration.
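The equivalence-point calculation described above reduces to a one-line formula. The numbers below are illustrative, not results from this experiment.

```python
def acid_concentration(c_base, v_base_eq_ml, v_acid_ml):
    """Monoprotic acid: moles of base at equivalence = moles of acid,
    so C_acid = C_base * V_eq / V_acid (volumes in the same units)."""
    return c_base * v_base_eq_ml / v_acid_ml

# Illustrative: 0.1 M NaOH, 25.0 mL delivered to reach equivalence,
# 5.0 mL of vinegar in the beaker at the start.
c_vinegar = acid_concentration(0.1, 25.0, 5.0)  # molarity of the vinegar
```

Note that the water added for dilution does not appear in the formula: it changes the pH readings but not the number of moles of acid present.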
- What is the volume of base added and pH of the half titration point? _______________
- How does this pH compare to the pKa of acetic acid? Is this what you would expect?
- Since you know the pH at the equivalence point, you know the H+ concentration and can use an ICE table to calculate the initial concentration of acetic acid a different way. What was the initial concentration of acetic acid in the vinegar solution using this calculation?
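The ICE-table route in the question above can be sketched numerically. This assumes Ka = 1.8×10⁻⁵ for acetic acid at 25 °C; the pH and volumes below are illustrative placeholders, not measured values.

```python
KW, KA = 1e-14, 1.8e-5          # water autoionization constant; acetic acid Ka
KB = KW / KA                    # Kb of the acetate ion

def acetate_conc_from_ph(ph_eq):
    """At the equivalence point only acetate remains. Its hydrolysis
    (ICE table: C2H3O2- + H2O <-> HC2H3O2 + OH-) gives
    Kb = [OH-]^2 / (C - [OH-]),  so  C = [OH-]^2 / Kb + [OH-]."""
    oh = 10 ** (ph_eq - 14)     # [OH-] from pOH = 14 - pH
    return oh * oh / KB + oh

# Illustrative: pH 8.7 at equivalence, 80 mL total volume, 5.0 mL vinegar.
c_at_equivalence = acetate_conc_from_ph(8.7)
c_initial = c_at_equivalence * 80.0 / 5.0   # undo the dilution
```

The final line scales the acetate concentration at the equivalence point back to the original vinegar volume, since everything was diluted during the titration.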
- Do your calculated initial concentrations (questions 6 and 9) agree? Why or why not? If they are different, which do you think is more accurate and why?
- Compare the results of the pH readings from the pH strips with those from the pH meter. Why might they be similar or different? Which tool is better for creating a titration curve, and why?
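For comparison with the experimental curve drawn in the questions above, the buffer region of a weak acid/strong base titration can be sketched with the Henderson-Hasselbalch equation. The pKa and mole amounts below are assumed for illustration.

```python
import math

def buffer_ph(pka, moles_acid, moles_base_added):
    """Henderson-Hasselbalch in the buffer region (before equivalence):
    pH = pKa + log10([A-]/[HA]), where [A-] equals the base added so far."""
    a_minus = moles_base_added
    ha = moles_acid - moles_base_added
    return pka + math.log10(a_minus / ha)

PKA = 4.74        # acetic acid, assumed for illustration
N_ACID = 0.0025   # mol of acid (e.g. 5 mL of 0.5 M vinegar)

# pH at several fractions of the way to the equivalence point
curve = [(f, buffer_ph(PKA, N_ACID, f * N_ACID)) for f in (0.1, 0.25, 0.5, 0.75, 0.9)]
```

At the half-titration point (f = 0.5) the computed pH equals the pKa exactly, which is the relationship probed in question 8.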
You are given a boolean function of three variables defined by its truth table. You need to find an expression of minimum length that is equal to this function. The expression may consist of the variables x, y and z, the operations & (AND), | (OR) and ! (NOT), and parentheses.
If more than one expression of minimum length exists, you should find the lexicographically smallest one.
Operations have standard priority. NOT has the highest priority, then AND goes, and OR has the lowest priority. The expression should satisfy the following grammar:
E ::= E '|' T | T
T ::= T '&' F | F
F ::= '!' F | '(' E ')' | 'x' | 'y' | 'z'
The first line contains one integer n — the number of functions in the input (1 ≤ n ≤ 10 000).
The following n lines contain descriptions of functions, the i-th of them contains a string of length 8 that consists of digits 0 and 1 — the truth table of the i-th function. The digit at position j (0 ≤ j < 8) equals the value of the function at the corresponding assignment of x, y and z.
You should output n lines, the i-th line should contain the expression of minimum length which equals to the i-th function. If there is more than one such expression, output the lexicographically smallest of them. Expressions should satisfy the given grammar and shouldn't contain white spaces.
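A small helper for checking a candidate expression against a truth table is sketched below. The bit-to-variable mapping (x as the high bit of j) is an assumption and should be checked against the formulas in the original statement.

```python
def truth_table(expr):
    """Evaluate expr (built from !, &, |, parentheses and x, y, z) on all
    eight assignments and return the 8-character truth-table string.
    Assumed bit order: for position j, x = (j>>2)&1, y = (j>>1)&1, z = j&1."""
    # Map the grammar's operators onto Python's boolean operators; the
    # precedences (! over & over |) line up with not/and/or.
    py = expr.replace('!', ' not ').replace('&', ' and ').replace('|', ' or ')
    bits = []
    for j in range(8):
        env = {'x': bool(j >> 2 & 1), 'y': bool(j >> 1 & 1), 'z': bool(j & 1)}
        bits.append('1' if eval(py, {}, env) else '0')
    return ''.join(bits)
```

A full solution would search expressions in order of length (e.g. shortest-path relaxation over the 256 possible truth tables, one state per grammar nonterminal) and use a checker like this to validate candidates.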
Taylor-Couette Flow between Rotating Cylinders
Created using ANSYS 18.1
A viscous fluid fills the gap between two concentric cylinders of radii a and b, which rotate at constant angular velocities. The diagram below shows the two cylinders and their respective angular velocities. In this problem the outer cylinder is held stationary, but the angular velocity of the inner wall must be calculated to produce the Taylor-Couette phenomenon. Find the velocity vectors that are characteristic of Taylor-Couette flow.
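The inner-wall speed needed to trigger Taylor vortices can be estimated before setting up the simulation. This sketch uses the small-gap approximation with a stationary outer cylinder, where instability sets in near a critical Taylor number of about 1708; the Taylor-number convention and the fluid properties below are assumptions, not values from the tutorial.

```python
import math

TA_CRIT = 1708.0  # critical Taylor number (small gap, outer cylinder at rest)

def critical_omega(a, b, nu):
    """Smallest inner-cylinder angular velocity (rad/s) for Taylor vortices,
    using the convention Ta = omega^2 * a * (b - a)^3 / nu^2."""
    d = b - a  # gap width
    return math.sqrt(TA_CRIT * nu * nu / (a * d ** 3))

# Illustrative: water (nu ~ 1e-6 m^2/s), a = 40 mm, b = 45 mm
omega_c = critical_omega(0.040, 0.045, 1e-6)
```

Running the CFD case slightly above this angular velocity should produce the characteristic counter-rotating vortex pairs; well below it, the flow stays in the laminar circular-Couette regime.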
Lower Primary (ages 6 to 9)
Mental growth, reasoning and imagination.
A period of intellectual growth and mental independence.
At this age, children begin searching for moral order and develop their conscience or sense of right and wrong – every child becomes a social explorer, developing their imagination and reasoning skills.
Children at the lower primary level begin to show a genuine interest in others, whether it is within their local community or in a more global sense of awareness. They develop a very strong sense of justice and perceived fairness and following the rules becomes very important.
Nurturing social explorers in the second plane of development.
Dr Maria Montessori described this phase as a ‘calm phase of uniform growth.’
Montessori based her educational philosophy on the idea that children develop through a series of four planes. At this age (6-9), children enter the second plane of development, which is a period when children crave intellectual independence while developing ethics and social responsibility.
Want to know more about Montessori’s planes of development? Check out our glossary.
Exploring cosmic education through great stories.
Cosmic education is a cornerstone of the Montessori Philosophy and, at its core, tells the story of the interconnectedness of all things, and describes the role of education as comprehensive, holistic and purposeful.
Through the five Great Stories, the students are presented with a holistic vision of knowledge, drawing on material from the various disciplines. They learn about the creation of the universe through these stories, which integrate the studies of astronomy, chemistry, biology, geography, and history.
- First Great Story: Coming of the Universe and the Earth
- Second Great Story: Coming of Life
- Third Great Story: Coming of Human Beings
- Fourth Great Story: Communication in Signs
- Fifth Great Story: The Story of Numbers
“By offering the child the story of the universe, we give [them] something a thousand times more infinite and mysterious to reconstruct with [their] imagination, a drama no fable can reveal.”
– Dr Maria Montessori
Exploring the big picture in the prepared environment
Using unique Montessori learning materials, our children expand their understanding of language and grammar and continue to develop their mastery of mathematics.
In this space, we undertake more abstract work, allowing the children to look at the ‘big picture’ – the world and their place in it. The child’s work moves from concrete to abstract, calibrated to the child’s level of development. Lessons are given after careful observation of the child’s interest and ability level and finely tuned to meet the child’s need for meaningful and interesting effort.
From study to independent research.
Through the study of the fundamental needs of all humans, the children become conscious of the interconnected yet diverse nature of humanity.
The children are encouraged and assisted to undertake a more detailed, research-based exploration of areas that interest them, with organised excursions bringing the world to their attention.
The Montessori National Curriculum is approved by the Australian Curriculum, Assessment and Reporting Authority (ACARA).
The primary academic curriculum covers:
- Language (English & German)
- Mathematics, Geometry and Measurement
- Cosmic Education (Humanities and Social Sciences, Science, and Technologies)
- Health and Physical Education
“The senses, being explorers of the world, open the way to knowledge.”
– Dr Maria Montessori |
Respiratory diseases such as allergies and asthma are a serious public health burden around the world. The World Health Organization (WHO) estimates that more than 300 million people worldwide have asthma and that millions are affected by allergies. Each year, thousands of people die of asthma. The prevalence of allergies and asthma is growing, and there is a continued need for new and more effective therapies.
Spring is often associated with allergies, although any change of season or weather condition can actually trigger allergies and asthma. For many people, a drop in temperature is enough to spark such respiratory disorders. What happens is that when humidity drops, the air becomes drier. Because dry air is more likely to irritate a hypersensitive respiratory system, people with asthma usually suffer from an attack when humidity is low.
While most people think of allergy and asthma as disorders caused by outside factors, such as weather changes and dry air, the underlying problem is actually innate. Both allergy and asthma develop when a particular part of the immune system is out of balance. The immune system is naturally designed to protect the body against infections and to keep it healthy. Allergy arises when the immune system mistakes something that is normally harmless for a threat. Because the immune system perceives the substance as foreign and harmful, it mounts a strong inflammatory response in an attempt to “protect” the body from it. This inflammatory response is exactly what produces the symptoms of allergy, such as runny nose, coughing, hives, itching, rashes, and puffy eyes. In asthma, this inflammatory process manifests as shortness of breath, chest tightness, wheezing, or cough.
If seasonal allergies and asthma are affecting your ability to take pleasure in the outdoors, then it’s best to seek professional help from a certified allergist. A health care expert can prescribe medication for allergies and asthma, as well as provide education on medication and therapies so that you know exactly how and why they work. Taking medications may not be enough in a number of cases, which is why it’s also important that you become well equipped about the proactive ways to avoid asthma or allergy flare-ups. Below are some useful tips on how you can keep clear of asthma and allergic episodes.
1) Know what Triggers your Asthma and Allergies
Try to determine what triggers your asthma or allergies. Talk to a medical professional about emergency medications in case of an episode. Once you know what medications to take, be sure to know when and how to take them.
2) Good Housekeeping
It’s very important to keep your house clean every day. However, since cleaning puts dust into the air, ask another person without asthma or allergies to do the task. If you can’t find somebody else to clean, be sure to wear a dust mask. Try to keep clutter down by storing your belongings in plastic containers or boxes instead of keeping them in piles. Remember, clutter kept in stacks collects dust and makes cleaning a lot harder.
3) Control Household Pests
Dust mites are a common trigger of allergies and asthma. To avoid exposure to dust mites, use zippered plastic mattress covers and special anti-allergy pillow covers beneath sheets and pillowcases. Also, don’t forget to wash beddings in hot water every week. Temperatures above 130 degrees Fahrenheit are effective in killing dust mites.
Rodents and cockroaches can also trigger asthma and allergies. To keep these pests away, store food in tightly sealed containers, empty your garbage often, clean up crumbs and spills immediately, wash dirty dishes right away, and seal cracks where these pests hide or enter your home.
4) Avoid Having Pets inside the Home
Furry pets like dogs and cats can cause asthma and allergy episodes because of dander. It is best to either not have pets at all or keep them outside the house. If you do have pets inside, make sure to keep them out of the bedroom and off upholstered furniture.
Millions of people suffer from allergies and asthma. Asthma is among the major reasons why children miss school and end up in the hospital. It causes wheezing, coughing, difficulty breathing, and tightening of the chest. Allergies, on the other hand, cause uncomfortable symptoms such as runny nose, itching, and rash. Both disorders make it difficult for sufferers to breathe. The good news is, both of these conditions can be controlled as long as you know what triggers them and how you can avoid those triggers.
What is “Dialogue”?
Dialogue can be defined as a discussion between two or more people or groups, especially one directed toward exploration of a particular subject or resolution of a problem.
Why is it important?
- Engage across difference of perspective and identity
- Foster intergroup community through a diversity of identities
- Explore personal experience and societal issues
- Provide tools for navigating difference on and off campus
In our hyper-polarized society, we need deeper understanding of and engagement with those who are different from ourselves. The Center for Civic Engagement is dedicated to modeling equitable, respectful dialogue. This is done primarily through two programs: Hot Topics and Civic Strolls.
Hot Topics is a program that gathers students of diverse perspectives, identities, and backgrounds who want to better understand the perspective, values, and actions of people who differ from them. In each event, we intentionally recruit participants to have diversity of experience and perspective. An optional component is a spicy challenge where students answer questions while starting with a mild salsa that increases to very hot by the end of our time. Discussion Topics for past and future Hot Topics: Politics, Religion, Racial Identity, Personality type, Gender Identity, Gun Control/Rights, Death Penalty, Abortion, Marijuana Legalization, Sex Work, and more!
Civic Strolls is a weekly program where students can gather to discuss and explore what it means to be an active citizen. |
To increase our use of solar energy, we need to create more efficient, stable, and cost-effective solar cells. What if we could use an inkjet printer to fabricate a solar cell?
A solar cell is nothing but a light emitting diode (LED) operating in reverse. While an LED converts electrical energy into light energy, a solar cell converts light energy into electrical energy, taking advantage of a phenomenon called the photovoltaic effect. Its discoverer, Edmond Becquerel, found that when some materials absorb light, an electric voltage is created within the material, even without another energy source present.
It works like this: in a semiconducting material, a photon excites negatively charged electrons into the conduction band, so named because electrons can freely move when excited there. Semiconducting materials come in two main types—n-type, which have some “extra” electrons, and p-type, which are missing some electrons. These “missing” electrons are called holes. The real magic happens where n-type and p-type semiconductors are in contact with one another, however. When sunlight is absorbed at the p-n interface (Fig. 1), its energy excites electrons enough for them to enter the conduction band. The movement of negative charges and positive charges in opposite directions produces an electric current. This is how silicon solar cells generate the electricity that powers nearly all aspects of our lives today.
The first solar cells were installed on New York City rooftops in 1884 and had an energy conversion efficiency of only 1-2%. By the 1950s, solar cells were ready for commercial production and boasted a still-underwhelming efficiency of 6%. Since then, the main goal of solar cell research has been to continuously improve efficiency while also lowering the cost of the materials needed for solar panels. This is not an easy task: commercial solar cells still have a maximum efficiency of only 18-20%. Solar cells produced in the lab have achieved up to 45% efficiency, but the methods used to produce these cells are too expensive to be applied to mass production.
A new type of solar cell uses a class of materials called perovskites. Perovskites have a special crystal structure (Fig. 2), with chemical formula CsPbX3, where X is a halogen element like chlorine (Cl), bromine (Br), or iodine (I). Our team fabricates perovskite films of different thicknesses—one, two, or three layers—and analyzes their properties to decide which are the best candidates for use in solar cells.
The method we use to produce these perovskite solar cells is surprisingly simple and cheap. We start with emptied refillable printer ink cartridges and fill them with the desired perovskite solutions. The substrates on which we will deposit the perovskites are mounted on a CD, which is inserted into the printer. The printer’s original CD printing software is then used to print out a “colored” image, with the perovskite solution corresponding to each of the original ink colors printed on the substrate in place of that color (Fig. 3). Using this method, multiple perovskite films can be printed at the same time, and the films can even be reprinted for a multilayer design. This allows us a great deal of flexibility to alter the thickness, fluorescence, and other film properties that will affect its performance in a solar cell.
Once we have printed the perovskite films, we study their electric transport properties—specifically, how charge is stored (capacitance) and how charge flows (current) as we alter the voltage applied to the material. This allows us to identify the perovskite formulas with the most desirable properties for use in solar cells, and we can then work to optimize these perovskite films for energy conversion efficiency and lifespan of the solar cell.
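The current-voltage analysis described above feeds directly into the standard figures of merit for a solar cell. The sketch below shows how the maximum power point, fill factor, and efficiency might be extracted from a measured I-V sweep; the sweep values and incident power are illustrative, not actual lab data.

```python
def iv_metrics(volts, amps, p_in_watts):
    """Max power point, fill factor and efficiency from an I-V sweep.
    Assumes the sweep runs from V = 0 (short circuit) to I = 0 (open circuit)."""
    isc = amps[0]                                 # short-circuit current, at V = 0
    voc = volts[-1]                               # open-circuit voltage, where I = 0
    p_max = max(v * i for v, i in zip(volts, amps))
    ff = p_max / (voc * isc)                      # fill factor
    eff = p_max / p_in_watts                      # power conversion efficiency
    return p_max, ff, eff

# Illustrative sweep for a small test cell under 10 W of incident light
V = [0.0, 0.2, 0.4, 0.5]
I = [2.0, 1.8, 1.0, 0.0]
p_max, ff, eff = iv_metrics(V, I, p_in_watts=10.0)
```

A real sweep would use many more voltage steps, but the extraction logic is the same: the fill factor measures how "square" the I-V curve is, and a higher fill factor means the perovskite film converts light to usable power more effectively.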
Because of their low cost, inkjet-printed solar cells are a very promising technique to improve the availability of solar energy in the future. Learn more about the process in the video below!
Fig. 1 (Click to enlarge) The solar cell functions as a p-n junction. When sunlight is absorbed at the p-n interface, an electron-hole pair is formed, creating an electric field that forces the electrons to move towards the "n" region and the holes towards the "p" region. The electric current in the external circuit flows from the positive terminal to the negative terminal, and the ammeter (labeled "A") measures the current. Diagram courtesy of Victor Sabirianov.
Fig. 2 (Click to enlarge) The crystalline structure of the perovskite CsPbBr3. Diagram courtesy of Ian Evans.
Venture into the icy unknown as you chisel through three ice crystals to discover a polar bear, a penguin, and a walrus, then use the excavation guide to learn about each inhabitant of the Arctic. Simply soak the clay ice crystals in water, then carefully chisel away the sedimentary material to reveal the chilly creature inside.
• Each ice crystal contains a different arctic animal
• Helps promote an early interest in science for kids
• A great activity to share with a friend
• 3 ice crystals, 1 chiseling tool and instructions
Age Recommendation: Ages 4 and up |
Course Title: Life Skills Training Department: Special Education
Grade Level: 9-12
Time Per Day/Week: 42 minutes per day Length of Course: Year
Primary Resources: Unique Learning Curriculum, and Brigance Transition Skills Inventories
Units of Study:
Unit 1: Vocational
Unit 2: Daily Living
Unit 3: Personal Life
Unit 4: Community lessons
Curriculum-Based Assessments: Unique Learning Curriculum which contains checklists of skills being taught and reinforced with repetition.
Standardized Assessments: Brigance Transition Skills Inventory
Description of Course:
This Life Skills Training course uses the Transition Passport, which is a part of the Unique Learning Curriculum. Transition Passport includes vocational, daily living, personal life, and community lessons. These lessons provide tools that create valuable resources to plan for future educational, vocational, and adult living outcomes. The Life Skills Training Course also uses the Brigance Transition Skills Inventory, which provides criterion-referenced assessments to support transition planning for post-secondary, employment, and independent living skills. |
Unexpected discovery enables simple conversion of waste glycerol into methanol, closing the sustainability loop for biodiesel
In the transesterification process of biodiesel production, each molecule of vegetable oil (a triglyceride) is split into three fatty-acid chains. At each break, a hydrogen atom from methanol is substituted for the link to the adjacent carbon atom. Biodiesel production, however, leads to the formation of large quantities of crude glycerol – around 10% of the mass of biodiesel created – which is generally uneconomical to refine. Researchers are seeking ways to convert this waste product into something useful, and some efforts have focused on the dehydration reaction to acrolein – used as a herbicide and polymer precursor. This reaction is usually acid-catalysed, but researchers at Cardiff University considered that it could also be base-catalysed and investigated this using magnesium oxide. Much to their surprise, they found that the main product was not acrolein but methanol: 'We made [lead author Muhammad Haider] do an awful lot of experiments to convince us that this was real!' says principal investigator Graham Hutchings.
Catalysis chemist Edman Tsang of the University of Oxford describes the result as 'a bit of a shock'. The accumulation of glycerol in biodiesel production 'is an environmental problem', he says, and groups including his own are investigating various ways to solve it. 'This looks like a very simple and straightforward way.' He cautions, however, that the long-term stability of the catalyst needs investigation, and worries that the large number of different minor products could make purification of the methanol expensive. |
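The "around 10%" glycerol figure quoted above can be sanity-checked with a simple mass balance over the transesterification reaction. This sketch assumes triolein as a representative triglyceride; real vegetable oils are mixtures, so the exact fraction varies.

```python
# Mass balance for: triglyceride + 3 CH3OH -> 3 methyl esters (biodiesel) + glycerol
M_TRIGLYCERIDE = 885.4   # g/mol, triolein (assumed representative of vegetable oil)
M_METHANOL = 32.04       # g/mol
M_GLYCEROL = 92.09       # g/mol

# Conservation of mass: biodiesel mass = reactants in minus glycerol out
m_biodiesel = M_TRIGLYCERIDE + 3 * M_METHANOL - M_GLYCEROL
glycerol_per_biodiesel = M_GLYCEROL / m_biodiesel
```

The ratio comes out close to 0.10, consistent with the roughly 10% of biodiesel mass cited in the article.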
Some people’s joints are more flexible than those of the general population. In these people, joints such as the wrist, knee, ankle, and hip may bend and flex more than normal. Flexibility can be a feature noticed from childhood, or it can be subtle enough that it is recognized only when special attention is paid. The ability of the joints to move beyond the normal range is called hypermobility. If the number of flexible joints exceeds a certain threshold, it is called diffuse joint hypermobility.
If the flexibility or hypermobility of the joints does not cause any discomfort to the person, it is not considered a medical problem. For most people, being flexible does not cause complaints. It can even be an advantage in some sports like gymnastics, dancing and playing a musical instrument. But sometimes flexible joints can be associated with joint and muscle pain, frequent sprains or even dislocations of the joints. The presence of such complaints together with flexibility is defined as “hypermobility syndrome”. Another name is “Ehlers-Danlos syndrome type 3”.
To reiterate, widespread flexibility (hypermobility) in the joints and the fact that this situation causes complaints such as pain and susceptibility to sprains indicates joint hypermobility syndrome.
Who Gets Hypermobility Syndrome?
Elasticity may be caused by genetic differences in the collagen fibers that make up the connective tissue, or by the bones being able to move more because of a shallow joint socket. A shallow joint socket causes excessive movement in joints such as the hips and shoulders, while diffuse hypermobility is more often due to differences in the collagen fibers.
Hypermobility may show familial inheritance. Women are generally more flexible than men. The collagen fibers forming the ligaments become more tightly bound to each other with advancing age, which is why older people are stiffer than the young. Some people with hypermobile joints may find that as they get older, they can’t stretch as easily as they used to. Joint range of motion can be increased to a certain extent with exercise and training. For example, with yoga movements, the joints can be stretched.
Hypermobility may also develop as part of another disease. The joints of people with Down syndrome are often flexible. This feature is also seen in Marfan syndrome and in other types of Ehlers-Danlos syndrome.
There may be increased muscle pain, especially after exercise or heavy physical activity. Since the ligaments that hold the joints together are loose, more work falls on the muscles, which can lead to pain and injury. If there is damage to the joint, edema may occur and limitation of movement may develop until it heals. Foot and ankle pain are also common. Frequent ankle sprains may occur. Flat feet may also accompany it. Complaints increase with standing for a long time. Looseness of the ligaments holding the spine can cause neck, back and low back pain. If hypermobile joints are overstretched, the risk of dislocation is higher than normal, especially in the shoulder and kneecaps.
Organs other than the skeletal muscles and joints may also be affected. Weakness in the muscles that move the bowels can lead to constipation, bloating and pain. Reflux (the escape of stomach acid into the esophagus) can occur. Weakness of the pelvic floor muscles can cause the urinary leakage problem called stress incontinence. Heart valves can also be affected by connective tissue elasticity; sometimes this causes no complaints, but in some cases it can cause symptoms such as chest pain and palpitations. Blood pressure may be lower than normal, and standing up quickly may trigger a drop in blood pressure, palpitations, and fainting.
How Is It Diagnosed?
The diagnosis of hypermobility syndrome is made by questioning the complaints (taking a medical history) and physical examination. The Beighton score provides an assessment of the flexibility of the thumb/wrist, little finger, elbow, waist, and knee with some standard movements. A high Beighton score indicates hypermobility. There may also be hypermobility in different joints such as shoulders, neck, jaw, back, hip, ankle and foot that are not examined with this test. Along with a high Beighton score, pain in flexible joints, previous dislocations, injuries in structures such as muscles and tendons around the joint, and skin flexibility support the hypermobility syndrome.
Beighton score can range from 0 to 9:
- (1) Being able to touch the ground with the palms by bending from the waist without bending the knees (one point)
- (2) More than 10 degrees of elbow backward flexion (hyperextension) (one point each side)
- (3) Able to bend backwards more than 10 degrees (hyperextension) of the knees (one point for each side at each joint)
- (4) Ability to flex the thumb until it touches the forearm (one point per side)
- (5) Ability to bend the little finger backward more than 90 degrees (1 point for each side)
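The five maneuvers above translate directly into a small scoring function. This is a sketch for illustration, not a diagnostic tool.

```python
def beighton_score(palms_to_floor, elbows, knees, thumbs, little_fingers):
    """Beighton score (0-9). `palms_to_floor` is a single bool (1 point);
    the other arguments are (left, right) bool pairs (1 point per side)."""
    score = int(palms_to_floor)
    for left, right in (elbows, knees, thumbs, little_fingers):
        score += int(left) + int(right)
    return score

# Illustrative: palms reach the floor, both elbows and both thumbs hypermobile
s = beighton_score(True, (True, True), (False, False), (True, True), (False, False))
```

With this example input the score is 5, which would satisfy the "Beighton score of 4 or higher" major criterion described below.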
The Brighton criteria can be used when deciding whether a person with flexible joints has joint hypermobility syndrome:

Major criteria:
- Beighton score of 4 or higher
- Joint pain affecting 4 or more joints and lasting longer than 3 months

Minor criteria:
- Beighton score of 1, 2 or 3
- Back pain or pain in one to three joints lasting more than 3 months, or spinal spondylosis, spondylolysis, spondylolisthesis
- Dislocation of more than one joint, or more than one dislocation of a single joint
- 3 or more soft tissue problems (e.g. tendinitis, bursitis)
- Long, slender body
- Excessively supple skin, stretch marks, thin skin, or abnormal scarring
- Drooping eyelids, myopia
- Varicose veins, hernia, uterine or rectal prolapse
- Mitral valve prolapse
Diagnosis requires any of the following:
- 2 major criteria
- 1 major and 2 minor criteria
- 4 minor criteria
- 2 minor criteria and a first-degree relative diagnosed with the syndrome
Joint flexibility is not something that can be treated or changed. It is a structural feature of the body. However, if there are complaints due to excessive flexibility, these problems can be controlled with good planning of activities and physical therapy.
It is helpful to do muscle strengthening exercises in a controlled manner, without overdoing them. Smaller weights can be used, and exercises should be done regularly. For some people, careful stretching may also be beneficial. It is recommended to learn the exercises under the supervision of a physiotherapist. If any movement causes pain, the activity should not be continued until the cause of the pain is determined. Swimming is a safe and beneficial sport in which muscles can be developed without overloading the joints.
Various splints (wristbands, etc.) can be used to protect the joints while working or doing sports. This should be considered especially for joints that have dislocated before. Insoles that support the arch of the foot may be recommended. If pain is severe, your doctor may prescribe pain medication for short-term use. Acupuncture and various physical therapy methods can also be effective in reducing complaints. Following healthy dietary recommendations can support normal development of connective tissue.
For decades, RP-1, liquid hydrogen, solid rocket, and hypergolic fuels were the propellants of choice for rocket manufacturers. In recent years, though, liquid methane has become an attractive alternative for rocket propulsion.
In rocket propulsion, liquid methane is a cryogenic fuel used to power orbital rockets. Its high Specific Impulse, availability as natural gas, low carbon footprint, and ability to be manufactured on other celestial bodies make it an attractive alternative to traditional rocket propellants.
For more than half a century, rocket propellants like RP-1 and hydrogen were mixed and combusted with liquid oxygen (LOX) in combustion chambers of orbital launch vehicles to power their first and upper stages.
Solid rocket propellants are also commonly used to help these large rockets push through Earth’s thick atmosphere and escape its gravity at launch, while hypergolic fuels are primarily used to power the Reaction Control Systems of spacecraft.
(Learn more about the different types of fuel orbital rockets use and their various advantages and drawbacks in this article.)
However, with the establishment of private space agencies after the turn of the century and a strong push to establish interplanetary travel and reach Mars within the next few decades, liquid methane is gaining popularity as the primary propellant to achieve this goal.
Private companies like SpaceX and Blue Origin are already developing rocket engines that run exclusively on this fuel type with their Raptor and BE-4 engines, respectively. As the following sections will describe, there is good reason for this focus on liquid methane.
What is Liquid Methane?
In its normal state, methane (CH4) is a naturally occurring gas found in abundance within soil and rock sediments below the Earth’s surface. The majority of it is created by the decay and breakdown of organic matter beneath the Earth’s surface at high temperatures.
Like RP-1 propellant (which is a highly refined form of kerosene), methane is a hydrocarbon, meaning it is an organic compound consisting entirely of hydrogen and carbon. It is the simplest hydrocarbon, consisting of one carbon atom and four hydrogen atoms.
(It is its simple make-up that gives methane so many advantages over kerosene, which can consist of multiple chains of carbon and hydrogen combinations, which will be illustrated in upcoming sections of this article.)
Liquid methane is a cryogenic fuel, which means the gas has to be cooled to temperatures of -162° Celsius (-260° Fahrenheit) or below to turn into a liquid.
In rocket propulsion, liquid methane is used as fuel to power orbital launch vehicles. Its high Specific Impulse, low carbon footprint, and ability to be manufactured on other celestial bodies make it an attractive alternative to traditional rocket propellants.
Like all other fuels used in orbital rockets, liquid methane needs an oxidizer to combust. This comes in the form of liquid oxygen (LOX). The fuel and oxidizer are mixed in the combustion chamber, where they combust to form the hot gases that propel the spacecraft.
The following sections will highlight the specific advantages of using liquid methane as rocket propellant, as well as some of the drawbacks. However, one first needs to establish how this fuel is produced in the first place.
How Liquid Methane Is Made
Although natural gas can be found close to underground crude oil or coal reserves, deeper deposits often contain a much purer form of methane (CH4) that does not require as much refinement to remove unwanted compounds.
The gas that is captured underground in natural gas fields and below the ocean floor in pockets of sediment and rock formations is extracted by drilling vertical and horizontal wells to allow the gas to escape, after which it is brought to the surface.
From the wells, the gas is typically transported via a network of pipelines to processing plants, where water vapor and non-hydrocarbon compounds like helium, nitrogen, and carbon dioxide are removed to produce a pure form of natural gas (methane).
To turn it into a liquid, the methane gas is cooled to temperatures of -162° Celsius (-260° Fahrenheit) and below, which is the boiling point of methane.
Advantages Of Using Liquid Methane For Orbital Rockets
Compared to more traditional rocket propellants, liquid methane offers several advantages but also a few drawbacks. Some of the main advantages of using liquid methane include:
- Simpler And Cheaper To Produce
- Little To No Coking And Other Forms Of Residue Buildup
- Environmentally Friendly
- Higher Specific Impulse Than RP-1
- Can Be Produced On Other Celestial Bodies
- Smaller Fuel Tanks Required Compared To Hydrogen
- No Additional Compounds Needed To Keep Fuel Tanks Pressurized
- Allows Rocket Engines To Run At Higher Pressures
1) Simpler And Cheaper To Produce
As described in the previous section, liquid methane does require a certain degree of refinement to remove any unwanted compounds and refrigeration to produce the final cryogenic propellant.
However, the process is far simpler and cheaper than the numerous complex steps involved in the production of RP-1 propellant or liquid hydrogen, which are also both more expensive.
(Learn more about RP-1 propellant, what it is, and its different advantages and drawbacks in this article.)
In a typical orbital rocket, of which at least 85% of its mass consists of liquid propellant, keeping the cost of the fuel down is crucial and always has to be taken into consideration.
2) Little To No Coking And Other Forms Of Residue Buildup
In the description of liquid methane, it was highlighted that methane is a hydrocarbon, but not just any hydrocarbon. Methane is the simplest type of hydrocarbon, consisting of only one carbon atom bonded to four hydrogen atoms.
This is in sharp contrast with the long chains of carbon and surrounding hydrogen atoms that make up RP-1 propellant, the fuel still used in the first stages of the majority of modern launch vehicles. (One chain can be up to 20 carbons in length.)
These complex long chains of molecules mean RP-1 never burns completely. Instead, it breaks down and produces soot and other residue buildups, commonly referred to as coking, in rocket engines. This has a number of adverse effects on orbital rockets.
Residue buildup within rocket engines can clog up rocket engines and reduce performance and reliability. It also makes reusing an orbital launch vehicle much more difficult since the coking and resulting damage results in a more complex and expensive refurbishing process.
Methane’s simple makeup means that when it burns, it burns completely and leaves no residue buildup within the rocket engine. This not only makes the engine perform more efficiently and reliably but also makes refurbishing the craft easier and less expensive.
3) Environmentally Friendly
RP-1 propellant does not only cause coking and other types of residue buildup. Its exhaust plumes also contain large amounts of carbon dioxide, soot, nitrogen oxides, sulfur compounds, and carbon monoxide. All of which contribute to air pollution.
In contrast, due to its ability to burn almost completely and its high hydrogen content, the exhaust plumes produced by the combustion of liquid methane primarily consist of water, some carbon dioxide, and small amounts of nitrogen oxides.
This makes methane one of the cleanest burning rocket propellants currently available, with only hydrogen capable of producing more environmentally friendly exhaust products.
4) Higher Specific Impulse Than RP-1
More than 85% of an orbital rocket’s mass consists of fuel since it takes an incredible amount of propellant to provide enough thrust to allow a large launch vehicle to push through Earth’s thick atmosphere and break free from its gravity to reach orbit.
As a result, one of the Holy Grails of rocket propulsion is how efficiently a rocket can burn its fuel. Specific Impulse is the term used to describe this efficiency and is typically measured in seconds. It is essentially the rocket equivalent of the automotive “miles per gallon.”
(Going into a detailed discussion about Specific Impulse falls beyond the scope of this article, but you can learn more about it in the following article about nuclear propulsion.)
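As a rough numeric companion to that definition: specific impulse in seconds is simply effective exhaust velocity divided by standard gravity, g0 ≈ 9.80665 m/s². A minimal sketch (the engine figures used are the rounded vacuum values quoted in this article):

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(exhaust_velocity_mps: float) -> float:
    """Convert effective exhaust velocity (m/s) to specific impulse (s)."""
    return exhaust_velocity_mps / G0

def exhaust_velocity(isp_seconds: float) -> float:
    """Convert specific impulse (s) back to effective exhaust velocity (m/s)."""
    return isp_seconds * G0

# Illustrative, rounded vacuum-Isp figures quoted in this article:
for fuel, isp in [("LH2 (RS-25)", 452), ("LCH4 (Raptor)", 350), ("RP-1 (Merlin)", 311)]:
    print(f"{fuel}: Isp {isp} s -> ve ~ {exhaust_velocity(isp):.0f} m/s")
```

This shows why a higher Isp matters: every extra second of specific impulse is roughly 10 m/s of extra effective exhaust velocity for the same propellant flow.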
The Specific Impulse of any rocket engine is determined to a large degree by the type of fuel it uses. To date, liquid hydrogen has proven to be the most fuel-efficient propellant for launch vehicles and is commonly used in the upper stages of many orbital rockets.
However, its low density means hydrogen requires much larger fuel tanks than other liquid propellants. This adds to the mass and size of the rocket, something rocket engineers are always trying to avoid or keep to a minimum.
Methane does not have the same efficiency (Specific Impulse) as hydrogen, but it has a greater density, requiring smaller fuel tanks for essentially the same amount of fuel. What also counts in its favor is that liquid methane has a higher Specific Impulse than RP-1.
To illustrate this point, one can look at the Specific Impulse generated by modern examples of rocket engines running on each fuel type:
- Liquid Hydrogen: 366 – 452 seconds (Space Shuttle/SLS RS-25 engine)
- Liquid Methane: 330 – 350 seconds (SpaceX Raptor engine)
- RP-1 Propellant: 282 – 311 seconds (SpaceX Merlin engine)
(Credit: Everyday Astronaut)
From this comparison, it is clear that liquid methane is not as energy-efficient as liquid hydrogen but significantly more efficient than RP-1. Combined with the smaller fuel tank requirements than hydrogen, one can start to see part of the appeal of using this fuel type.
However, as the following section will illustrate, the advantages of using liquid methane as a rocket propellant go far beyond its fuel efficiency and volume.
5) Can Be Produced On Other Celestial Bodies
In 2017, NASA launched its Artemis Program with the aim of returning humans to the Moon and establishing a base for further exploration of the Solar System, including Mars, a project that is also the main focus of private aerospace companies like SpaceX.
Taking all the fuel required for such a long trip will be impossible for any orbital rocket. Instead, scientists are looking to produce the fuel needed for the spacecraft on the planned destinations themselves, which is where the real advantage of liquid methane comes in.
Theoretically, methane can be produced on Mars. The planet’s atmosphere consists of 95% carbon dioxide and a substantial amount of water below its surface and on its poles. Through a process called the Sabatier Reaction, they can be used to produce methane.
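The Sabatier reaction referred to above is CO2 + 4 H2 → CH4 + 2 H2O. As a back-of-the-envelope sketch of the mass balance (molar masses are rounded and the function name is ours, purely for illustration):

```python
# Approximate molar masses in g/mol
M = {"CO2": 44.01, "H2": 2.016, "CH4": 16.04, "H2O": 18.02}

def sabatier_products(kg_co2: float) -> dict:
    """Stoichiometric masses (kg) for CO2 + 4 H2 -> CH4 + 2 H2O,
    given a mass of CO2 feedstock."""
    mol_co2 = kg_co2 * 1000 / M["CO2"]  # moles of CO2
    return {
        "H2_needed": 4 * mol_co2 * M["H2"] / 1000,
        "CH4_made":  1 * mol_co2 * M["CH4"] / 1000,
        "H2O_made":  2 * mol_co2 * M["H2O"] / 1000,
    }

print(sabatier_products(100.0))  # products from 100 kg of Martian CO2
```

Note that the hydrogen input would itself come from electrolyzing Martian water ice, which is why the presence of subsurface water matters as much as the CO2 atmosphere.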
If a production facility generating methane can be established on Mars, it will not only help to make interplanetary travel a more realistic endeavor but also make it sustainable. If successfully implemented, Mars can also be used as a base for further exploration.
To do this, spacecraft need to be powered by liquid methane, which is why companies like SpaceX and Blue Origin are investing so many resources in developing methane-powered rockets which can take advantage of the possibility of off-planet fuel production.
6) Smaller Fuel Tanks Required Compared To Hydrogen
As highlighted in the section on Specific Impulse, liquid methane does not match hydrogen’s high Specific Impulse and, as a result, is not as energy-efficient.
However, it is also much denser, which means it requires smaller fuel tanks than hydrogen for the same amount of fuel, which not only brings the launch vehicle’s overall mass down but also allows it to be smaller.
(Combined with its higher Specific Impulse than RP-1 propellant, this makes liquid methane an attractive option, even though RP-1’s higher density means the latter requires even smaller fuel tanks.)
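To put rough numbers on the tank-size comparison, one can divide propellant mass by liquid density. The densities below are ballpark values near each propellant’s boiling point, not precise figures:

```python
# Rough liquid densities (kg/m^3) near boiling point -- approximate values
DENSITY = {"LH2": 71, "LCH4": 423, "RP-1": 810}

def tank_volume_m3(propellant: str, mass_kg: float) -> float:
    """Volume needed to store a given propellant mass, ignoring ullage
    space, insulation, and tank structure."""
    return mass_kg / DENSITY[propellant]

mass = 100_000  # 100 t of fuel
for fuel in DENSITY:
    print(f"{fuel}: {tank_volume_m3(fuel, mass):.0f} m^3")
```

Even with these rough numbers, hydrogen needs roughly six times the tank volume of methane for the same fuel mass, while RP-1 needs about half the volume of methane.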
7) No Additional Compounds Needed To Keep Fuel Tanks Pressurized
All propellant tanks in an orbital rocket need to be pressurized and stay pressurized to allow continuous and consistent propellant flow and maintain the structural integrity of the tanks.
Typically, a light gas like helium placed in smaller tanks is used, which is released in the fuel tanks in a controlled manner to maintain the correct pressure. However, this adds to the complexity of the propellant tank structure and, again, adds to the mass of the rocket.
Methane tanks, by contrast, can be pressurized by a gaseous version of the same fuel: the liquid methane is warmed in the launch vehicle’s engine and the resulting methane gas is used to keep the tanks pressurized, a process called autogenous pressurization.
This makes the pressurization of propellant tanks in methane-fueled rockets a lot simpler, reducing possible complications, and bringing the overall mass of the vehicle down.
8) Allows Rocket Engines To Run At Higher Pressures
As already stated, liquid methane has a higher Specific Impulse than RP-1 propellant due to its lower density. When the two fuels are burned at the same pressure inside a combustion chamber, methane offers around a five percent increase in performance over RP-1.
However, liquid methane can be burned at much higher pressures than its kerosene-based counterpart. (SpaceX’s Raptor engine is designed to run at pressures of up to 300 bar.) The increased pressure can result in a performance gain of approximately twenty percent.
Disadvantages Of Using Liquid Methane For Orbital Rockets
Despite the numerous advantages of liquid methane as a rocket propellant, it also has several drawbacks. The most notable disadvantages of using methane include:
- Does Not Produce The Same Amount Of Thrust As RP-1
- Larger Fuel Tanks Required Due To Lower Density Than RP-1
- Lower Specific Impulse Than Hydrogen
1) Does Not Produce The Same Amount Of Thrust As RP-1
Methane has a lower density than RP-1 propellant, allowing rocket engines to achieve higher exhaust velocities, which increases their Specific Impulse. However, due to the smaller molecular mass of the fuel, it does not create the same amount of thrust as RP-1.
RP-1’s higher thrust (as a result of the fuel’s larger mass per volume and resulting increased density) is crucial in a launch vehicle’s first-stage boosters to allow the rocket to push through Earth’s thick atmosphere & escape the planet’s gravitational forces to reach space.
2) Larger Fuel Tanks Required Than RP-1
One of the advantages methane has over hydrogen is that it is much denser, which means it requires smaller fuel tanks for the same amount of fuel, bringing the launch vehicle’s overall mass down and allowing it to be smaller.
However, the fuel is not nearly as dense as RP-1 propellant, which means the latter still has a clear advantage when it comes to tank size and can use smaller fuel tanks for the same amount of fuel (in mass). This is an important consideration when a rocket’s mass is crucial.
(This was one of the reasons why engineers only used hydrogen, which is even less dense than methane, for the 2nd and 3rd stages of the Saturn V rocket while using RP-1 for the first stage. They simply couldn’t practically make the first stage’s fuel tanks any larger.)
3) Lower Specific Impulse Than Hydrogen
As mentioned in an earlier section, liquid methane has a significantly higher Specific Impulse than RP-1 propellant, which makes it a more energy-efficient fuel. However, when it comes to fuel efficiency, no liquid propellant can match or even come close to liquid hydrogen.
Liquid methane has a Specific Impulse of 330 – 350 seconds compared to hydrogen’s 366 – 452 seconds, clearly showing hydrogen’s superiority in this regard.
This helps to explain why most launch vehicles like the Saturn V, Atlas, Delta, and Ariane V rockets use hydrogen to propel their upper stages. Liquid methane simply does not provide any clear advantage when it comes to fuel efficiency.
(Learn more about liquid hydrogen, what it is, as well as the different advantages and drawbacks of this fuel in this article.)
Until recently, liquid methane provided no clear advantage over liquid hydrogen or RP-1 propellant as rocket fuel. It is more energy-efficient than RP-1 but not as much as hydrogen. It requires smaller fuel tanks than hydrogen but still larger ones than RP-1.
However, ever since organizations like NASA and SpaceX indicated their intent to make a concerted effort to reach Mars within the next few decades, methane has become a very attractive proposition as the fuel of choice for this objective.
Combined with numerous other advantages the fuel provides, it is clear why liquid methane is receiving so much attention and why several new rocket engines are being developed running on this fuel type. |
President Abraham Lincoln signed the Emancipation Proclamation on January 1, 1863 – exactly 150 years ago.
There is no greater evil in U.S. history than slavery, with the possible exception of the genocidal policies against Native Americans.
But there are other evils that exist today from which we could be emancipated. It is therefore worth reading the proclamation in its entirety (reprinted below) – for what it says, and for what it doesn’t say — and learning of the history behind it.
The United States government was declaring free the three million or so slaves who resided in the Confederacy (in other words, beyond the Union’s control), not the roughly 425,000 slaves who lived in the four “border states” — Delaware, Kentucky, Maryland, and Missouri — nor some 300,000 more slaves who lived in counties in Confederate states that were under Union control. Legally, the President was invoking the powers the United States Constitution gave to the chief executive in times of war or rebellion, and he felt he did not have the constitutional authority to override slavery in the loyal states. Politically, the representatives of those states feared that freeing the slaves in them would drive those states into the Confederacy.
It was also a practical maneuver, for the Proclamation also directed the United States Army to allow black men into their ranks. Ultimately, almost 200,000 of them served in the Union Army.
Lincoln had written the Proclamation in July, 1862, and read it to his Cabinet, a meeting celebrated by a famous painting by Francis Bicknell Carpenter in 1864, which hangs today in the U.S. Capitol over the west staircase in the Senate wing.
Lincoln waited to make the Emancipation Proclamation official (on the advice of his Secretary of State, William Henry Seward) until the Union had won a major battle. This happened with the Battle of Antietam.
So-called Radical Republicans were afraid that President Lincoln wasn’t sufficiently committed to the freeing of the slaves and might retract the Emancipation Proclamation. They viewed him as conservative and dilatory. The timing of the proclamation was another example of Lincoln’s “exceptionally sensitive grasp of the limits set by public opinion,” writes historian Doris Kearns Goodwin in “Team of Rivals: The Political Genius of Abraham Lincoln” (the book that inspired the current movie “Lincoln,” which depicts the events leading up to the ratification of the 13th Amendment abolishing slavery everywhere in the U.S. for good, which occurred in December 1865, nearly three years after the Emancipation Proclamation). Lincoln himself wrote: “It is my conviction that, had the proclamation been issued even six months earlier than it was, public sentiment would not have sustained it.”
But once written, Lincoln never had any intention of retracting the Proclamation. “I never, in my life, felt more certain that I was doing right, than I do in signing this paper. If my name ever goes into history it will be for this act, and my whole soul is in it.”
New York Public Library’s Schomburg Center for Research in Black Culture, Exhibition Hall,
515 Malcolm X Boulevard
To commemorate the 150th Anniversary of the Emancipation Proclamation, the Schomburg presents 80 pre– and post– Civil War era photographs of enslaved and free black women, men, and children. The images record the presence of black soldiers and black workers in the American South and help the 21st century viewer reimagine a landscape of black people’s desire to be active in their own emancipation.
In Washington D.C., The National Archives will commemorate the 150th Anniversary of the Emancipation Proclamation with a free special display of the original document January 1, 2013. The Emancipation Proclamation is displayed only for a limited time each year because of its fragility, which can be made worse by exposure to light, and the need to preserve it for future generations.
January 1, 1863
By the President of the United States of America:
Whereas, on the twenty-second day of September, in the year of our Lord one thousand eight hundred and sixty-two, a proclamation was issued by the President of the United States, containing, among other things, the following, to wit:
“That on the first day of January, in the year of our Lord one thousand eight hundred and sixty-three, all persons held as slaves within any State or designated part of a State, the people whereof shall then be in rebellion against the United States, shall be then, thenceforward, and forever free; and the Executive Government of the United States, including the military and naval authority thereof, will recognize and maintain the freedom of such persons, and will do no act or acts to repress such persons, or any of them, in any efforts they may make for their actual freedom.
“That the Executive will, on the first day of January aforesaid, by proclamation, designate the States and parts of States, if any, in which the people thereof, respectively, shall then be in rebellion against the United States; and the fact that any State, or the people thereof, shall on that day be, in good faith, represented in the Congress of the United States by members chosen thereto at elections wherein a majority of the qualified voters of such State shall have participated, shall, in the absence of strong countervailing testimony, be deemed conclusive evidence that such State, and the people thereof, are not then in rebellion against the United States.”
Now, therefore I, Abraham Lincoln, President of the United States, by virtue of the power in me vested as Commander-in-Chief, of the Army and Navy of the United States in time of actual armed rebellion against the authority and government of the United States, and as a fit and necessary war measure for suppressing said rebellion, do, on this first day of January, in the year of our Lord one thousand eight hundred and sixty-three, and in accordance with my purpose so to do publicly proclaimed for the full period of one hundred days, from the day first above mentioned, order and designate as the States and parts of States wherein the people thereof respectively, are this day in rebellion against the United States, the following, to wit:
Arkansas, Texas, Louisiana, (except the Parishes of St. Bernard, Plaquemines, Jefferson, St. John, St. Charles, St. James Ascension, Assumption, Terrebonne, Lafourche, St. Mary, St. Martin, and Orleans, including the City of New Orleans) Mississippi, Alabama, Florida, Georgia, South Carolina, North Carolina, and Virginia, (except the forty-eight counties designated as West Virginia, and also the counties of Berkley, Accomac, Northampton, Elizabeth City, York, Princess Ann, and Norfolk, including the cities of Norfolk and Portsmouth[)], and which excepted parts, are for the present, left precisely as if this proclamation were not issued.
And by virtue of the power, and for the purpose aforesaid, I do order and declare that all persons held as slaves within said designated States, and parts of States, are, and henceforward shall be free; and that the Executive government of the United States, including the military and naval authorities thereof, will recognize and maintain the freedom of said persons.
And I hereby enjoin upon the people so declared to be free to abstain from all violence, unless in necessary self-defence; and I recommend to them that, in all cases when allowed, they labor faithfully for reasonable wages.
And I further declare and make known, that such persons of suitable condition, will be received into the armed service of the United States to garrison forts, positions, stations, and other places, and to man vessels of all sorts in said service.
And upon this act, sincerely believed to be an act of justice, warranted by the Constitution, upon military necessity, I invoke the considerate judgment of mankind, and the gracious favor of Almighty God.
In witness whereof, I have hereunto set my hand and caused the seal of the United States to be affixed.
Done at the City of Washington, this first day of January, in the year of our Lord one thousand eight hundred and sixty three, and of the Independence of the United States of America the eighty-seventh.
By the President: ABRAHAM LINCOLN
WILLIAM H. SEWARD, Secretary of State.
Update: Historian Eric Foner writes in an Op-Ed today of how the Emancipation Proclamation emancipated Lincoln himself in a way — marked a turning point in his views:
“A military order, whose constitutional legitimacy rested on the president’s war powers, the proclamation often disappoints those who read it. It is dull and legalistic; it contains no soaring language enunciating the rights of man. Only at the last minute, at the urging of Treasury Secretary Salmon P. Chase, an abolitionist, did Lincoln add a conclusion declaring the proclamation an “act of justice.”
“Nonetheless, the proclamation marked a dramatic transformation in the nature of the Civil War and in Lincoln’s own approach to the problem of slavery. No longer did he seek the consent of slave holders. The proclamation was immediate, not gradual, contained no mention of compensation for owners, and made no reference to colonization.” |
Engineering Acoustics/Attenuation of Sound Waves
When sound travels through a medium, its intensity diminishes with distance. This weakening in the energy of the wave results from two basic causes, scattering and absorption; their combined effect is called attenuation. For small distances or short times the effects of attenuation can usually be ignored, but over larger distances and times they must be taken into account. So far in our discussions, sound has only been dissipated by the spreading of the wave, as with spherical and cylindrical waves. That dissipation, however, is a geometric effect of energy being spread over an increasing area, not an actual loss of total energy.
Types of Attenuation
As mentioned above, attenuation is caused by both absorption and scattering. Absorption is generally caused by the media. This can be due to energy loss by both viscosity and heat conduction. Attenuation due to absorption is important when the volume of the material is large. Scattering, the second cause of attenuation, is important when the volume is small or in cases of thin ducts and porous materials.
Viscosity and Heat Conduction
Whenever there is relative motion between particles in a medium, such as in wave propagation, energy conversion occurs due to stress from viscous forces between particles of the medium. The energy lost is converted to heat. Because of this, the intensity of a sound wave decreases more rapidly than the inverse square of distance. Viscosity in gases depends mainly on temperature: as temperature increases, so do the viscous forces.
Boundary Layer Losses
A special type of absorption occurs when a sound wave travels over a boundary, such as a fluid flowing over a solid surface. In such a situation, the fluid in immediate contact with the surface must be at rest. Subsequent layers of fluid will have a velocity that increases as the distance from the solid surface increases such as in the figure below.
The velocity gradient causes an internal stress associated with viscosity that leads to a loss of momentum. This loss of momentum leads to a decrease in the amplitude of the wave close to the surface. The region over which the velocity of the fluid rises from zero at the surface to its nominal value is called the acoustic boundary layer. The thickness of the acoustic boundary layer due to viscosity can be expressed as

\(\delta_{\mathrm{visc}} = \sqrt{\dfrac{2\eta}{\rho_0\,\omega}}\)

where \(\eta\) is the shear viscosity coefficient, \(\rho_0\) the equilibrium fluid density, and \(\omega\) the angular frequency. An ideal fluid would have no boundary layer thickness, since \(\eta = 0\).
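As a quick sanity check on the boundary-layer thickness discussed above — assuming the standard expression δ = √(2η/(ρ₀ω)) and approximate property values for air at room temperature — a short sketch:

```python
import math

def boundary_layer_thickness(eta: float, rho: float, f_hz: float) -> float:
    """Viscous acoustic boundary-layer thickness, delta = sqrt(2*eta/(rho*omega))."""
    omega = 2 * math.pi * f_hz
    return math.sqrt(2 * eta / (rho * omega))

# Approximate values for air at ~20 C: eta ~ 1.81e-5 Pa.s, rho ~ 1.21 kg/m^3
delta = boundary_layer_thickness(eta=1.81e-5, rho=1.21, f_hz=1000)
print(f"delta at 1 kHz in air ~ {delta * 1e6:.0f} micrometres")
```

The layer is tens of micrometres thick at audio frequencies, which is why boundary-layer absorption only matters in narrow ducts and porous materials, where a large fraction of the fluid sits within a boundary layer.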
Attenuation can also occur by a process called relaxation. One of the basic assumptions prior to this discussion on attenuation was that the pressure of a fluid depends only on the instantaneous values of density and temperature, and not on the rate of change of these variables. However, whenever a change occurs, equilibrium is upset and the medium adjusts until a new local equilibrium is achieved. This does not occur instantaneously, so pressure and density vary in the medium. The time it takes to achieve this new equilibrium is called the relaxation time, \(\tau\). As a consequence the speed of sound increases from an initial value to a maximum as frequency increases. Again, the losses associated with relaxation are due to mechanical energy being transformed into heat.
Modeling of Losses
The following is done for a plane wave. Losses can be introduced by using a complex wave number

\(\hat{k} = k - j\alpha\)

which, when substituted into the time solution \(p = A\,e^{j(\omega t - \hat{k}x)}\), yields

\(p = A\,e^{-\alpha x}\,e^{j(\omega t - kx)}\)
with a new term \(e^{-\alpha x}\), which results from the use of a complex wave number. Note the negative sign preceding \(\alpha\), denoting an exponential decay in amplitude with increasing values of \(x\).
\(\alpha\) is known as the absorption coefficient, with units of nepers per unit distance, and is related to the phase speed. The absorption coefficient is frequency dependent and is generally proportional to the square of the sound frequency. However, its relationship does vary when considering the different absorption mechanisms, as shown below.
The velocity of the particles can be expressed as

\(u = \dfrac{A}{\rho_0 c}\,e^{-\alpha x}\,e^{j(\omega t - kx)}\)

so that, to first order in \(\alpha\), the impedance for this traveling wave is

\(z = \dfrac{p}{u} \approx \rho_0 c\)
From this we can see that the intensity of an attenuated wave decreases with distance as

\(I(x) = I_0\,e^{-2\alpha x}\)

i.e., the intensity falls off twice as fast, in nepers, as the pressure amplitude.
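Assuming the usual plane-wave result that pressure amplitude decays as e^(−αx) and intensity as e^(−2αx), with α in nepers per metre, the decay and the neper-to-decibel conversion can be sketched as:

```python
import math

def attenuated_intensity(I0: float, alpha: float, x: float) -> float:
    """Plane-wave intensity after distance x (m), I = I0 * exp(-2*alpha*x),
    with alpha in Np/m."""
    return I0 * math.exp(-2 * alpha * x)

def alpha_db_per_m(alpha_np_per_m: float) -> float:
    """Convert an absorption coefficient from Np/m to dB/m (1 Np ~ 8.686 dB)."""
    return alpha_np_per_m * 20 / math.log(10)

# With alpha = 0.01 Np/m, intensity halves roughly every 35 m:
print(attenuated_intensity(1.0, alpha=0.01, x=50))
```

The factor of 2 in the exponent is the key point: intensity goes as the square of pressure amplitude, so its decay rate in nepers is double that of the amplitude.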
Whereas DNA can survive for millennia, RNA is short-lived. This can make total RNA extraction tricky, because RNA is prone to degradation by ubiquitous enzymes called RNases.
Therefore, isolation of total RNA from cells and tissues requires a method that will efficiently isolate the RNA from the samples while also minimizing RNA degradation.
What is total RNA?
Total RNA, as you might expect, is all the RNA molecules found inside a cell. This includes:
Messenger RNA (mRNA): long protein-coding messenger RNA transcripts, which serve as the instantaneous readout of cellular gene expression under particular conditions
MicroRNA (miRNA): small noncoding RNA molecules, many of which are involved in regulating and silencing gene expression.
Ribosomal RNA (rRNA): a key component of ribosomes and critical for protein synthesis.
Transfer RNA (tRNA): another critical component for protein synthesis. These RNA molecules transport amino acids to the ribosome and base pair with the mRNA to ensure the correct amino acid is added to the protein being synthesized.
What Are RNases?
Ribonucleases (RNases) are enzymes that degrade RNA. These enzymes are very problematic when isolating or working with RNA because they are ubiquitous (found everywhere), difficult to eliminate, and very destructive. Therefore you need to ensure you are working RNase-free when extracting total RNA.
Isolating total RNA, minus the RNases using TRIzol®
Fortunately, there are ways to inactivate RNases and prevent your precious RNA samples from being destroyed. One method to extract RNA while also inhibiting RNases is the Guanidinium thiocyanate-phenol-chloroform extraction method first applied for RNA extraction by Piotr Chomczynski and Nicoletta Sacchi.
TRIzol® is a monophasic mixture of phenol and guanidine isothiocyanate commonly used for RNA extraction and is a powerful protein denaturant that breaks down protein cell components and inactivates all enzymes, including RNases.
TRIzol extraction typically uses acidic phenol–chloroform to confine total RNA in a clear aqueous phase while proteins and cell debris end up in the pink organic layer. RNA can be recovered by precipitation with isopropanol, washed, and then redissolved in water. TRIzol is a brand name, and many other suppliers offer equivalent reagents under their own names.
It is also possible to make your own phenol and guanidine isothiocyanate mixture.
Protocol for total RNA extraction from Cells and Tissues Using TRIzol
1. Cell Lysis
If you are isolating RNA from tissues, you will need to homogenize the sample first in 1 ml of TRIzol reagent per 50 to 100 mg of tissue using a homogenizer. The sample volume should not exceed 10% of the TRIzol volume.
If you are isolating RNA from adherent cells grown in culture, rinse the cells with ice-cold PBS and lyse the cells directly in the culture dish or flask by adding 1 ml of TRIzol reagent per 10 cm² of growth area and scraping with a cell scraper or pipette tip.
Pass the cell lysate several times through a pipette and vortex thoroughly. If your culture is composed of suspension cells, spin the cells down to remove the old media, wash them in PBS, and lyse with 1 ml of TRIzol per up to 10 million cells by pipetting up and down several times.
2. Incubation and Phenol–Chloroform Separation
Incubate the homogenized sample for 5 minutes at room temperature to dissociate nucleoprotein complexes.
Add 0.2 volume of chloroform per 1 volume of TRIzol reagent. Cap the tubes securely and vortex samples vigorously for 15 seconds.
Incubate samples at room temperature for 5 minutes.
Centrifuge the samples at no more than 12,000 x g for 15 minutes at 4°C. The mixture will separate into three phases (Figure 1).
- Lower red phenol–chloroform (organic) phase.
- Interphase.
- Upper aqueous phase containing the RNA.
3. RNA Precipitation
Carefully transfer the upper aqueous phase without disturbing the interphase into a fresh tube. The volume of the aqueous phase is usually about 60% of the TRIzol volume used in step 1.
Use 0.5 ml of isopropanol per 1 ml of TRIzol to precipitate the RNA from the aqueous phase.
Incubate samples at room temperature for 10 minutes and centrifuge at not more than 12,000 x g for 10 minutes at 4°C. The RNA precipitate will form a pellet on the side or bottom of the tube, which can be hard to see by eye.
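The reagent ratios in steps 2 and 3 all scale with the TRIzol volume used in step 1: 0.2 volumes of chloroform, 0.5 volumes of isopropanol, and roughly 60% of the volume recovered as aqueous phase. A small helper sketch — the function name is ours, not part of any kit protocol:

```python
def trizol_volumes(trizol_ml: float) -> dict:
    """Reagent volumes scaled to the TRIzol volume used in step 1
    (0.2 volumes chloroform, 0.5 volumes isopropanol, per the protocol)."""
    return {
        "chloroform_ml": 0.2 * trizol_ml,
        "isopropanol_ml": 0.5 * trizol_ml,
        "expected_aqueous_ml": 0.6 * trizol_ml,  # ~60% of TRIzol volume
    }

print(trizol_volumes(1.0))  # volumes for a standard 1 ml TRIzol lysis
```

Scaling everything from the TRIzol volume rather than from the sample volume keeps the phase separation consistent across sample sizes.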
4. RNA Wash and Resuspension
Remove the supernatant and wash the RNA pellet once with 75% ethanol.
Mix the samples by vortexing and centrifuge at no more than 8,000 x g for 5 minutes at 4°C. Repeat the above washing procedure and remove all leftover ethanol.
Air-dry or vacuum-dry the RNA pellet for 5-10 minutes, but do not heat it or centrifuge it under vacuum. Also, avoid overdrying the RNA, or the pellet will be hard to redissolve.
Dissolve the RNA in DEPC-treated water by passing the solution a few times through a pipette tip.
5. Determining RNA Concentration and Purity
Once you have redissolved your sample, determine its concentration and purity by taking OD measurements at 260 nm and 280 nm. The A260/A280 ratio should be above 1.6.
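The OD readings convert to a concentration using the standard relationship that an A260 of 1.0 corresponds to roughly 40 µg/ml of single-stranded RNA. A hedged sketch (function names are ours; the 1.6 threshold is the one quoted above):

```python
def rna_concentration_ug_per_ml(a260, dilution_factor=1.0):
    """Single-stranded RNA: an A260 of 1.0 corresponds to ~40 ug/ml."""
    return a260 * 40.0 * dilution_factor

def purity_ratio(a260, a280):
    """A260/A280 ratio; values above ~1.6 suggest acceptable purity here."""
    return a260 / a280

# A260 = 0.25 measured on a 1:10 dilution -> 100 ug/ml in the original sample
conc = rna_concentration_ug_per_ml(0.25, dilution_factor=10)
# A260/A280 of 0.50/0.27 is ~1.85, above the 1.6 threshold quoted above
ratio = purity_ratio(0.50, 0.27)
```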
Benefits of TRIzol for Total RNA Extraction
TRIzol has many benefits for total RNA extraction, including:
- Denaturing of RNases.
- Extraction of total RNA, including small molecular weight RNA such as miRNA.
- High-quality RNA.
- Relatively simple to use.
- Allows simultaneous extraction of DNA, protein, and RNA from a sample.
Disadvantages of using TRIzol
TRIzol use in total RNA extraction has some limitations.
- The reagent can be costly compared to other traditional methods of RNA extraction.
- Uses hazardous chemicals.
- Extracted RNA can be contaminated with phenol and other contaminants when removing the aqueous layer.
- Can be time-consuming to perform and has a steep learning curve.
Alternative methods to TRIzol for RNA Extraction
The dangerous nature and steep learning curve of TRIzol total RNA extraction have resulted in a market for safer, simpler alternatives for RNA extraction that still provide high-quality RNA.
RNA Extraction Kits
Many commercial kits are available that don’t use TRIzol or other hazardous chemicals. These use spin columns or magnetic beads to capture RNA, which can then be eluted. However, they might not be suited to all applications, so check the user manual before purchasing to confirm that the kit:
- Works with your sample type.
- Isolates RNA you are interested in.
If you are extracting RNA from specialist tissue samples (e.g., blood, fibrous tissue, plant tissue), check to see if there is a dedicated kit available. Manufacturers have created and optimized kits for extracting RNA from a range of tissues and source materials.
If you are looking to analyze miRNA or other small molecular weight RNA species, be aware that many RNA isolation kits, especially those using spin columns, may not successfully isolate these small molecules. However, kits designed especially for miRNA and other small RNAs are available.
Companies that provide RNA extraction kits include:
- ThermoFisher Scientific
- Zymo Research
- Agilent Technologies
While RNA extraction kits offer a handy alternative, they can be expensive and if you isolate RNA from various tissues you may need multiple kits.
There are kits that can be used in combination with TRIzol extraction by trapping the RNA following the precipitation step to make subsequent washes and elutions easier and faster. This may be an option if you already have samples stored in TRIzol.
Old-school methods for Total RNA Extraction
If you don’t want to use commercial kits but still want to avoid the cost of TRIzol (and alternatives), consider going back to the old-school methods of total RNA extraction.
Below we share a TRIzol-free method of total RNA extraction from yeast (which can be used for other sources as well) from Vicki Doronina. Note that this method still uses hazardous chemicals and should therefore be carefully considered before use.
This total RNA extraction protocol uses a mix of phenol and some salts. All that is required is some Tris, SDS, and a phenol–chloroform mix. Vicki has never used this protocol on non-yeast cells, but she is almost sure it can be applied to any cell type after the homogenization step in the RNA buffer. Changing the buffer pH from neutral to acidic—pH 4.5—will allow you to isolate aminoacylated tRNAs as well.
Phenol–chloroform RNA Extraction Protocol
1. Grow 25–100 ml of cells to OD600 = 0.25–0.5 (You don’t even need a spectrophotometer for this).
2. Spin cells, wash them in 1 mL dH2O, and transfer to a screw-cap tube. You can snap-freeze pellet at this stage.
3. To 1 volume of cold RNA buffer, add SDS to a final concentration of 0.5% (1/40 volume of 20% SDS).
4. Resuspend frozen pellet in 200 µl cold SDS/RNA buffer.
5. Add 1 volume of phenol–chloroform, then add beads up to the level of the solution.
6. Break the cells open (usually 30 sec at the maximum Ribolyser setting / 1 min on ice / 30 sec again), although your routine breakage procedure can be just as effective.
7. Fill tube with RNA buffer (no SDS added), vortex, and spin 5 min keeping cold.
8. You will see the phenol–chloroform fraction at the bottom of the tube, a white layer of debris, and the buffer layer on top. Take the top layer and transfer it to a fresh RNase-free Eppendorf tube.
9. Extract the aqueous fraction with phenol–chloroform twice, shaking for 5 min each time. Add 0.9–1 volume of isopropanol, mix, and spin at room temperature for 15–20 min. Because the buffer contains a lot of salt, no additional sodium acetate is necessary.
(If 1/10 volume of 3M sodium acetate is accidentally added, get rid of the salts by dissolving the dried pellet in 600 µl RNA buffer, incubating 5 min with shaking, adding 600 µl of isopropanol, and spinning 10–15 min. After this, go to step 10.)
10. Wash the pellet with 70% ethanol, air dry, resuspend in 30-50 µl dH2O or TE, and measure the OD260/280 ratio.
RNA buffer (50 ml)
- 100 mM EDTA pH 8.0 (10 mL 0.5M stock)
- 100 mM NaCl (1 mL 5M stock)
- 50 mM Tris-HCl pH 8.0 (2.5 mL 1M stock)
- 36.5 mL dH2O
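The stock volumes in this recipe can be verified with the usual dilution relation C1V1 = C2V2. A quick sketch (the function name is ours) reproducing the numbers above, plus the 0.5% SDS arithmetic from step 3:

```python
def stock_volume_ml(final_mM, stock_M, total_ml):
    """C1*V1 = C2*V2  ->  V1 = C2*V2/C1 (final conc in mM, stock in M)."""
    return final_mM * total_ml / (stock_M * 1000.0)

total = 50.0
edta = stock_volume_ml(100, 0.5, total)  # 10.0 ml of 0.5 M EDTA stock
nacl = stock_volume_ml(100, 5.0, total)  # 1.0 ml of 5 M NaCl stock
tris = stock_volume_ml(50, 1.0, total)   # 2.5 ml of 1 M Tris-HCl stock
water = total - (edta + nacl + tris)     # 36.5 ml dH2O, as in the recipe

# Step 3 of the protocol: 1/40 volume of 20% SDS gives 0.5% final
sds_final_percent = 20.0 / 40.0
```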
Total RNA Extraction Summarized
There are many methods available for total RNA extraction from a range of samples, from ‘gold-standard’ TRIzol to commercially available spin kits and old-school phenol–chloroform extraction. Each method of RNA isolation has its pros and cons in cost, use of hazardous chemicals, and suitability, all of which should be carefully considered before selecting one.
Which total RNA extraction method do you use? Leave a comment below.
Originally published March 13, 2013. Reviewed and updated February 2022. |
B Day- Roland
Today we had the roots quiz (#2) and then MJ guided the class through some information on “The Song of Roland”.
Please re-watch this video as needed:
HW: Finish reading “The Song of Roland” and focus on the details of the adventure. You should annotate it because it will be a while before we get back to it.
A Day- Group 2 (China)
Today, due to the science fair on Friday, we have started the second group, China. Kristen, David, and Thong have shown great flexibility and I really appreciate them being able to start today. We will have the roots quiz on Friday (in a short block).
HW: Do the following questions on a sheet of paper:
- Choose one of Confucius’ sayings that you think relates directly to life today. Explain your opinion in a paragraph that contains at least 5 sentences.
- In passage 68, the writer speaks of competing in a “spirit of play”. What do you think he means?
- Do you think “Mulberry on the Lowland” tells more about the speaker or more about her lord? Explain your answer.
HW: Writing a mood poem: Use “Still Night Thoughts” as a model to write a four-line poem that expresses a mood. Create the mood by choosing a central image, and include at least one simile or metaphor that helps create the mood of the poem. This poem should aim for simplicity, and it should be typed, single-spaced, and printed.
Thursday/Friday: More Crazy
B Day- Africa (Group 2)
HW: Finish the paragraph on “Anansi the Spider” that you started in class– is he a hero, a deceiver, or a fool? You should include 2 examples for support and write 7+ sentences.
HW: Write your own creation myth- it should emphasize existing cultural values/customs and explain the natural world. Include at least 2 cultural values (UNDERLINE THESE) and include at least 3 things to explain in the natural world (CIRCLE THESE). This should be at least 2 paragraphs. Page 627 can help you generate ideas.
HW: Answer questions 2 and 6 on page 631.
HW: Write a short comic that ties into the scenario in question 6. (2-3 panels). |
Triazine generally refers to a six-membered heterocyclic compound containing 3 nitrogen atoms, and comes in three isomeric structures: 1,2,3-triazine, 1,2,4-triazine, and 1,3,5-triazine. Triazines conform to the 4n+2 rule (Hückel's rule) and possess 6 delocalized π electrons. All the atoms of a triazine lie in the same plane, giving an aromaticity similar to that of a benzene ring. These characteristics give triazines strong electronic conductivity, good spatial symmetry, and easily modified structures. Introducing a triazine unit into the molecular framework as the core of a conjugated molecule can not only improve the electron injection and transport properties of the molecule, but also improve its thermal stability. Therefore, triazines are often used in the field of optoelectronic devices as electron transport materials, host materials, and light-emitting materials.
Figure 1. Chemical structures of triazines
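The 6-electron count above can be sanity-checked against Hückel's 4n+2 rule. A toy sketch (function name is ours, purely illustrative):

```python
def is_huckel_aromatic_count(pi_electrons):
    """Hückel's rule: a planar, cyclic, fully conjugated ring is aromatic
    when it holds 4n + 2 pi electrons for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# Triazines (like benzene) carry 6 delocalized pi electrons: 4*1 + 2, so n = 1
assert is_huckel_aromatic_count(6)
# 4 pi electrons (e.g. cyclobutadiene) fails the rule (antiaromatic)
assert not is_huckel_aromatic_count(4)
```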
- Host material: When preparing light-emitting devices, phosphorescent materials need to be doped into the host material to prevent quenching. The host material is generally required to have a higher triplet energy level. Triazine derivatives contain nitrogen atoms with high electronegativity, have strong electron-withdrawing properties, and have high triplet energy levels. Therefore, triazine derivatives can be used as ambipolar transport host materials to prepare phosphorescent devices with high external quantum efficiency.
- Electron transport materials: Electron transport materials require that the material structure be an electron-deficient system with strong electron accepting ability. Triazine materials have strong electron withdrawing ability, large electron affinity and good thermal stability, and are widely used in the field of electronic devices as electron transport materials.
- Light-emitting materials: Triazine materials have excellent thermal stability. Through reasonable design, triazines light-emitting material with high quantum yield, good thermal stability and excellent processing performance can be obtained. In particular, triazine materials can also be used as delayed fluorescent materials to prepare organic light-emitting diode devices, which greatly improves the external quantum yield of light-emitting devices.
With the arrival of spring comes longer days, warmer temperatures – and insects. An increase in insect activity in the spring is just as inevitable as flowers blooming.
Insects surge in the spring time, which often confuses people because they think insects “die off” in the winter. Then where do all the spring insects come from? It depends, in part, upon what really happened to them during the winter.
How Do Insects Handle Winter?
Insect activity decreases drastically during winter because insects are ectothermic, which people commonly refer to as being “cold-blooded.” The body temperature of ectothermic creatures depends on its external environment.
When winter comes, insects do one of four things – they migrate, stay active, find a warm location, or go dormant. Insects that migrate, like monarch butterflies and green darner dragonflies, head to warmer climates just as geese and other snowbirds do.
Surprisingly, some bees actually stay active by huddling in the hive and forming a “winter cluster” around the queen. Each bee then shivers and flutters its wings to stay active and generate warmth. As temperatures drop, the cluster becomes tighter to conserve warmth. Worker bees rotate positions, so no single bee gets too cold by being on the outermost part of the cluster. Honey provides the nutritional energy bees need to shiver and ensure the hive’s survival.
Other insects survive the winter much like we do – by hiding in warm locations. While people commonly talk about mosquitoes “dying off” in the fall and winter, in reality they find spots, like inside wall cavities, where they stay warm. Elm leaf beetles will do the same or hide in attics. Wasps and yellow jackets like to nest in eaves and overhangs during cold weather.
Hibernation versus Diapause
The fourth category of insects, those that go dormant, is the largest. It’s how most insects survive the winter.
While commonly referred to as “hibernation,” the correct term for dormant insects is “diapause.” Hibernation is only for warm-blooded animals and is a stage of inactivity that occurs only in winter.
Diapause is a dormant stage of development that can occur in any climate that isn’t conducive to an insect’s survival. While winter is a common cause, it’s not the only one. During diapause, an insect’s activities and growth are suspended and their metabolic rate stays just above what is needed to live. Before entering diapause, insects will typically search for some kind of shelter such as burrowing into the ground, hiding in tree trunks, under rocks, etc.
For some insects, like termites, what they do during winter depends on the exact species and location. Subterranean termites, for example, dig into the soil below the frost line to stay warm in winter. Drywood termites burrow into dry wood for safety. Termites that have gotten into a heated home can be active year-round. Regardless of the type of termite or where they spent the winter, termites tend to swarm in spring which might be how a homeowner realizes termites invaded their property.
Regardless of which method insects use to survive the winter, when spring arrives, insects become prominent and emerge from hiding places or return to active phase. This surge is why you see so many insects in the spring. Insects don’t actually “die off” in the winter. In fact, mild and short winters can contribute to larger insect populations from a longer breeding season.
So, it’s not your imagination that you see more insects in spring nor is it that you simply notice them because you’re outside more. Insects of all types are active in much greater quantities in the spring as they emerge from diapause or hiding places in your home. That’s no reason to let insects bug you, though.
Call Arrow To Handle Spring Insect Infestations
If termites, carpenter ants, carpenter bees or other insects are invading your home or business this spring, you need the pest control experts at Arrow Exterminating to help you. Arrow will help identify if you have a problem, and then we’ll explain our plan to stop your infestation. To get started, contact Arrow today. |
Technology Level: Classroom Based.
Audience: Ages 8-10
Duration: 90 minutes
By designing their own pet students will learn that different animals have certain requirements to stay healthy and happy and need care throughout their lives.
- Board for brainstorming ideas
- Paper, pencils, paint, cardboard, fabric, boxes, papier-mâché
|1.Introduction (5 mins)|
|1.1 Invent A Pet (5 mins)||
Students are asked to design a new pet for a selected group of people (can use a number of scenarios) Examples:
Students should brainstorm features that their invented pet could have including the pet’s needs and care instructions.
|2. Body(60 mins)||
Students individually draw the pet to suit the allocated group of people and provide relevant information regarding the pet’s needs and care instructions.
|2.1 Plan Your Pet(15 mins)||
As a class, devise a plan to ensure students cover all aspects. Consider the points which need to be included such as size, exercise requirements, fur/feathers/fins, (allow students to brainstorm). Record on board for students to revisit while planning their pet. Students draw a detailed picture, labeling their animal’s attributes.
|2.2 Produce Your Pet (45 mins)||
Students produce their animals using materials such as fabrics, boxes, papier-mâché, etc. (anything you like or can get your hands on). As well as their designed animal, students also need to provide a detailed list of needs and care instructions for their animals.
Points for students to consider:
|3. Conclusion(10 mins)||
|3.1 Share Your Invention(10 mins)||
Students take turns to show their creation to the class and explain what their pet is like and why they would like to own this type of pet. They include the special care and needs of their pet.
Discuss with class why these pets would be suitable for the specific needs in the scenario.
|Key Learning area||Curriculum link|
VA 2.1 Students make images and objects by selecting and manipulating elements and additional concepts
VA 4.2 Students make and display images and objects, considering purposes and audiences.
|English – Essential Learnings||
Speaking & Learning
LL 1.1 Students discuss their thinking about needs of living things
LL 2.3 Students make links between different features of the environment and the specific needs of living things
|Technology||TP 2.2 Students generate design ideas, acknowledge the design ideas of others and communicate their design ideas using annotated drawings that identify basic design features.| |
How Does Middlebury Interactive Languages Elementary Spanish 1 Work?
As always, I’m glad you asked! The Elementary Spanish Level 1 course is aimed at the younger students, from K through to Grade 2. It contains twelve units, although the twelfth unit is a review of the previous 11. Each unit focuses on one particular subject so that by the end of the course the children have a working knowledge of the following areas of Spanish:
Each unit has six lessons. The first lesson of each unit follows a similar pattern including an Introduction to the topic; Exploration of topic; A story based around the topic; A story recap; Further exploration; Practice and a Speaking Lab. The following pictures show the contents of lesson 1 in the unit about food:
The next lesson continues with lots of practice, a review of the same story as well as activities which go with the story (I’ve shown lesson 2 of the Topic Food):
The four lessons which follow are made up of lots of reviews and fun activities, such as:
- Putting the story into order:
- Exploring the topic (in this case school):
- Exploring phrases linked to the topic (in this case the body):
- Quizzes to check listening and understanding skills:
- Speaking Labs:
and much, much more.
The lessons are colourful, fun and absolutely kept the attention of my eight year old daughter. She was also able to download and print colouring sheets, the stories which were printed in both English and Spanish as well as a variety of worksheets:
How Did We Use Elementary Spanish 1?
A8 used this program completely and utterly by herself. She taught herself to print out all the printables and created a Spanish folder for herself. A8 is very motivated to learn when something interests her and this absolutely grabbed her attention from day 1. This meant that she begged, almost from the moment of waking, to do her Spanish. I was frequently thrown off the computer for A8 to work at this Spanish course. Although I had planned to do it with her a few times a week, she ended up doing it daily for about an hour at a time! Needless to say that apart from a few speaking sessions, she has completely finished this course.
What Did We Think of Elementary Spanish?
This has been one of A8’s favourite curriculum choices this year. I basically logged her in on day one, went through the how-to page with her and let her at it. And boy did she go at it! She shared everything she was learning with anybody who would listen (and I am certain to a few who weren’t). Daddy couldn’t even step through the front door on his return from work, before she accosted him with a torrent of Spanish words 🙂
She carried her self-created Spanish folder around the house as if it were a teddy bear, stroking it every so often…..
I am certain when I ask her to redo the whole course again next term, there will be no groans of protest. In fact, I would wager it will be her idea to redo it. And how often can you say that about a curriculum?
I can’t do anything but highly recommend this program, and A8 seconds the recommendation 🙂
Find Middlebury Interactive Languages at their Social Media Links:
Twitter: https://twitter.com/MiddInteractive @middinteractive |
Given the small number of original Rabbinic sources that discuss Hanukkah, there are an overwhelming number of questions that arise. We shall present some of these questions, and see if answering them can give us a deeper understanding of the important role Chanukah has in the development of Judaism in modern times.
1. The Beit Yosef (Orach Chaim, 670) asks the classic question about Chanukah. “Why did they make Chanukah eight days? There was enough oil in the flask [to burn] for one night, so it turns out that a miracle happened for only seven nights!” Since the oil burned naturally for one of the days, we should celebrate Chanukah for only seven days.
2. There are two Halachic principles which seem to render the miracle completely unnecessary. “Ones Rachmana patrei” states that the Torah exempts one who is a victim of circumstances beyond his control. “Tuma hutra betzibur” states that if the entire community is impure, service is permitted in that impure state. The lack of pure oil after the Beit Hamikdash was rededicated was certainly beyond the control of the Jews. And since it was a communal situation, even the defiled oil could have been used. Why should G-d have performed a supernatural miracle, altering the order of creation, under such circumstances?
3. There were ten miracles on a daily basis in the Beit Hamikdash (Avot 5:5) many of them much more striking than this one. Yet we don’t find any commemoration of them. And Mishna and Gemara tell us of many other great miracles throughout this period, none of which led to days of commemoration. What was so special about the miracle of the oil that made it deserving of such prominence?
4. The major decrees of the Greeks were to prohibit observance of the Shabbath, Brith Milah (Circumcision), and Kiddush Hachodesh (court sanctification of a new month based on the new moon). Why did these three specific Mitzvot bother the Jews more than the other 610 Mitzvot?
5. They also prohibited the study and dissemination of Torah Sheb’al Peh (oral Torah) while elevating the Torah Shebichtav (written Torah) by having it translated into Greek. Why did they make such a striking distinction?
The texts dealing with Hanukkah are themselves a source for additional questions.
We are taught in Megilat Ta’anit (Ch. 9) of the events leading up to the Chanukah miracle. The Rabbis ask: “Why did they make Chanukah eight days? The other Chanukahs (referring to the consecrations of the Mishkan (Tabernacle) built by Moshe Rabbeinu, and of the first Temple built by Shlomo) were seven days!? The Chashmonaim entered the Heichal (Sanctuary of the Temple), built the Altar and plastered it, fixed the service vessels, and were occupied with [the Heichal] (misaskim bo) for eight days.”
It sounds like the actual work needed to take only a short time, yet they prolonged the process, almost artificially, for eight days! What were the Chashmonaim occupied with for eight days that couldn’t have been accomplished in less time, and what are the Rabbis telling us by emphasizing it?
In the Midrash on the second verse in the Torah (Breishit 1:2; Breishit Rabba 2:4) the Rabbis teach us that the forces which would exile the Jewish people throughout history existed from the time of the creation process. V’Haretz hayta tohu – zu Malchut Bavel…: “The earth was desolate,” this is the Babylonian kingdom…; VaVohu – zu Malchut Madai…: “and chaotic” is the kingdom of Persia…; V’Choshech – zu Malchut Yavan, sehechshicha eineihem shel Yisrael b’gzeiroteihem…: “and darkness” is the kingdom of Greece, which darkened the eyes of the Jews with their decrees.
Calling the Greek kingdom one of darkness is particularly difficult to understand. Greek ideology loved and worshipped wisdom. They were the most enlightened society until that time, with most of our western thought, culture, and intellectual and academic disciplines having developed from it. Chazal respected wisdom, and they teach us: Chochma BaGoyim ta’amin, wisdom can be found among non-Jewish nations. The Greeks themselves appreciated the wisdom of the Torah. That was one of the reasons they wanted the Torah translated into Greek — so they could better understand it. It is therefore strange that from among all the kingdoms, the Rabbis chose to call such an enlightened society “choshech,” darkness.
We will begin our discussion from this last question, which will enable us to gain a deeper understanding of the conflict between Jewish and Greek ideology.
Noach had three children, Shem, Cham, and Yefeth, the forefathers of the world cultures. Shem in Hebrew means “name,” which represents the essence of something, the pnimiyut, the internal reality. The ancestor of Greece was Yefet, from the word yofi, representing the external beauty.
The Greeks believed in nature, and they worshipped it. They placed primary importance on externals: strength; the physical body; the majority subjugating the minority; survival of the fittest; what you see is what you get. Their ideology required man to operate within the laws of nature, to try to dominate nature, and when necessary to pay the required homage to the gods of nature, gods imbued with observable human characteristics, lusts and limitations. Only what was observable on the outside counted, not what was hidden inside. Even their wisdom was based on what man, exclusively through his natural human intellect, could figure out and understand. Chazal appropriately called that “Chochma Chitzonit,” exterior wisdom.
The Jews believed in the existence of an inner dimension of reality, pnimiyut, which was itself not observable but which was the essence of all that was observable. Everything that exists is an outward revelation of this inner reality. The source of this inner reality is the Divine, and every aspect of creation is an outward revelation of G-d, whether it be nature, Torah or man himself. (For example, the physical body reflects the number of positive (248) and negative (365) commandments of the Torah. Modern discoveries in quantum mechanics reflect a physical world working on an atomic level in ways that are similar to the metaphysical world painted for us by Chazal and the Kabbalists.)
Our ancestor was Shem. The ancestor of Greece was Yefeth (Breishit 10:4). The fundamental conflict between Israel and Greece is embodied in the names of our ancestors: The pnimiyut of Shem or the chitzoniyut of Yefet; the inner dimension, or what appears obvious on the surface; the hidden essence or “what you see is what you get.”
This dialectic encompasses the world, nature, and even the Torah. The Torah itself has an exterior dimension, the Written Torah, which is accessible to all nations. This is how the Bible has become the basis of three major religions. But there is also an inner hidden dimension, the Oral Torah, and this is where the hidden Divine aspects of Torah reside. The Oral Torah can be likened to the “personality” of the Torah, the essence of the Torah. This dimension is accessible only through a combination of man’s intense intellectual struggle coupled with Divine inspiration. Torah Sh’bichtav, the Written Torah, has no real impact on a person when it is studied only on its surface level without its inner dimension, which is why the non-Jewish world can have the Bible and be so little influenced by it. Yet this is exactly the kind of Torah the Greeks believed in, a wisdom that need not change the essence of the person, that need not bring with it any obligations, that has no inner effect. Torah was treated as any other wisdom, and they had it translated into Greek to show that even the Torah could be part of their curriculum. The Jews had no monopoly on it. From the perspective of wisdom and intellect, the Greeks appeared correct, and the Jews were a threat to this limited perspective. The Greek defense was to usurp the Written Torah for themselves, and eradicate the concept of Torah Sheb’al Peh, an Oral Torah.
As we say in Al Hanissim: Lehashkicham Toratechah (To make them forget YOUR Torah) uLeha’aviram M’CHUKEI Retzonecha (and to make them transgress YOUR statutes). Chukim, statutes, are the Torah laws which defy rational explanation, reflecting the hidden inner dimension that exists in the Torah. This is exactly the dimension of Torah that the Greeks were trying to eradicate, for this drove home the fact that it was G-d’s Torah; that there was wisdom that transcended man’s own wisdom, and that there were laws that were not accessible to man’s understanding. If Judaism has a conflict with Western culture in the twentieth century, it is with the blatant superficiality and emphasis on externals that pervades. But this is a natural extension of the perspective that says that the only reality is one that we can see and figure out for ourselves.
Torah, viewed only with its exterior dimension, is another way to enrich life. Jews view Torah, with its inner, hidden dimensions, as life itself.
Greek Spirituality vs. Jewish Spirituality
The Greeks believed that the only reality is the physical reality of nature, and that nature was an absolute. If there is a drought, it is the result of natural cycles, and man has to wait out these natural cycles. If calamities befall the world, we search for geopolitical, economic, social, or psychological factors to explain them. G-d has no input in the world after its creation, and it is propelled by fixed forces.
The Jews believed that there is an ongoing relationship between G-d and man, and that the laws of nature are related to a spiritual reality. These two ideas are embodied in Shabbath and in Kiddush HaChodesh, sanctification of the New Moon. Shabbath, the seventh day, imbues the six days of creation with a Kedusha, an INTERNAL spiritual reality which the Greeks denied could exist. And Shabbat embodied a Brith, a covenant, between G-d and the Jewish people, testifying to a unique relationship that existed on an ongoing basis between them. Kiddush HaChodesh manifests man’s influence over the spiritual process. Without man’s input, there are holidays with no holiness. Man can actually create (hidden) spiritual reality.
Are We Prisoners of Nature?
The Greeks believed that man is a product of nature and was controlled by it. His physical drives and lusts were an integral part of his essence, and they controlled him. Brith Mila represented Judaism’s conviction of man’s ability to transcend his natural lusts and instincts, to control and elevate them. Man is the unification of the physical body with an inner soul. There was a “pnimiyut,” an inner dimension, to the external shell.
This uniquely Jewish concept of man having the ability to transcend his nature is represented by the number eight.
One of the most frequently occurring numbers that we encounter is the number seven. It is the number of days of creation of the world, the days of the week, the days of Sukkot and Pesach, the weeks in the Omer cycle, the number of years in Shmittah and Yovel cycles, the number of days the Torah considers a woman a Niddah, the number of days required for ritual purification. It is a number very much tied to cycles in nature. It is also the number of Mitzvot non-Jews have, and 70 was the number of cows (representing the 70 nations) which were sacrificed on Sukkoth, a holiday of seven days, and in which non-Jews could have a part. When Bila’am brought sacrifices in preparation for cursing the Jews, he brought seven cows and seven rams on seven altars (Bamidbar Ch. 23). It is a number very much associated with universalism as well as the totality of material creation.
The Maharal elaborates on this with the illustration of the six directions in the three-dimensional physical world, plus the center point, which itself has no dimension but is the anchor and the essence of the six directions. This gives a total of seven points, with the seventh representing the spiritual dimension that exists within nature. This spiritual dimension is a property of the natural world, and is not something unique to Jews, as we find even non-Jews searching for meaning, for a spiritual significance in their lives.
The number eight, on the other hand, represents a dimension transcending nature. This dimension is reserved exclusively for the Jews. We find the number eight in Brit Mila, the eternal covenant of membership of the Jewish people. Shavuoth, the day the Torah was given to the Jewish people, is on the day following the seventh week of seven days, and is considered like the eighth day of Pesach, paralleling Shmini Atzereth as the eighth day of Sukkoth (Ramban Vayikra 23:36; Maharal Ner Mitzvah). Shmini Atzereth, following the seven days of Sukkoth, is designated as a private celebration for the Jews with G-d (Yalkut Shimoni 782, Bamidbar Ch. 29.) In the Beit Hamikdash an animal could only be brought as a sacrifice from the eighth day, after it has been with its mother through one natural cycle of seven days. The number eight is found in things that are unique to the Jewish people and in things which transcend the order of nature.
The Spiritual and Physical United
The Psalm of the day for Chanukah is Tehilim Ch. 30, Mizmor Shir Channukat HaBayit LeDavid… Yet the Psalm seems to have nothing to do with Beit David, the Beit HaMikdash, or its consecration. It is a description of the trials and tribulations buffeting man during the vicissitudes of human life. What makes this appropriate for Channukat HaBayit, the consecration of the Temple?
The Mishkan and the Beit Hamikdash are the meeting places between infinite G-d who descends to manifest his presence in the finite world, and finite man who strives to elevate himself to the heights of an infinite G-d. It is the most tangible manifestation of the concept of “chibur elyon v’tachton,” the unification of the transcendent spiritual world with the material physical world.
But the challenge of a Jew is to reveal that unification in the ongoing functioning of the world, in nature, and in man himself.
The rising and setting sun, the rainfall, the birth of a baby, and all the daily events which we take for granted as “nature” are in fact as miraculous as a one-day quantity of oil burning for eight days. To answer the classic question of the Beit Yosef, we can understand the eight days of Chanukah as our declaration and as a revelation of the existence of Divine reality in every aspect of nature, an identity between the one day for which the oil burnt naturally and the seven days when the Menora burnt with no natural explanation. The days of miraculous burning were made possible through the recognition of that inner reality of the natural burning, a reality that truly exists only because of the unification of the Divine with physical matter. This is a reality not apparent when one looks only at the surface, limited to observable nature, represented by the number seven.
So when the Chashmonaim entered the defiled sanctuary, they occupied themselves with repairing it for a full eight days. Eight days were not necessary for the physical-level work that needed to be done. But eight days were necessary to anchor the concepts of an inner reality and spiritual transcendence, so crucial at a time when the world was in the process of adopting a culture that denied anything beyond the natural and observable.
While the lack of pure oil was a circumstance beyond their control, in the inner world of the Divine there are no excuses such as being a “victim of circumstances.” Every circumstance is another opportunity to reveal, in some way, the inner Divine reality that encompasses all creation. Purity and holiness are elements of an inner reality. Oil that is tahor, pure, and oil that is tamei, defiled, look the same. The difference lies only in their hidden essence. In this case, lighting pure oil wasn’t simply optional, it was a necessity. Nothing less than pure oil could serve to highlight the Jewish emphasis on internal reality, in opposition to the emphasis on the external dimension. What you see is not necessarily what you get. It’s what’s inside that counts.
We live in a culture of blatant superficiality with an emphasis on externals. This reflects an existence which is limited to the dimensions of nature, based on “seven.” Even our Judaism and Torah study can be limited to that external dimension. These things can be meaningful, they can enrich our lives, but if they lack the internal soul and essence, we have lost their uniquely Jewish dimension, based on “eight,” which the Chashmonaim fought so valiantly to preserve in the still-ongoing battle with Greek culture. |
Although researchers have studied the effects of aircraft noise on wild and domestic animals for many years, accurate descriptions of the noise levels received by the animals in many of the studies were not verified. Recent technology has allowed for the miniaturization of much of the hardware for noise monitoring devices, making it feasible to build a noise monitor small enough to fit on a large animal collar. An animal noise monitor (ANM) was designed and built to capture A- and C-weighted noise levels above a programmable threshold, distinguishing aircraft noise from other sound sources. The device, weighing only 350 g, also captures onset rate, Leq, and gross movements of the animal via accelerometers after a noise event, while fitting on a large animal collar. The ANMs were also designed to function as stand-alone, weatherproof units for up to 6 months. The ANMs have already been field tested under natural weather conditions at stationary locations by Peregrine Falcon aeries around Fairbanks, Alaska. The ANMs will be tested on penned animals to determine their reliability and accuracy.
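The Leq metric mentioned above is an energy average of sound levels, not an arithmetic average. A minimal sketch of the computation in Python (the sample values are illustrative, not data from the ANM):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level: the constant level carrying
    the same acoustic energy as the varying measured levels."""
    if not levels_db:
        raise ValueError("need at least one sample")
    # Convert each dB sample to relative energy, average, convert back.
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

print(leq([60, 60, 60]))        # a constant 60 dB signal gives an Leq of 60 dB
print(round(leq([50, 90]), 1))  # loud events dominate: about 87 dB, not 70
```

Because the averaging is done over acoustic energy, a single loud overflight dominates the result, which is why Leq is a useful summary of intermittent aircraft noise.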
The cissoid 1) can be constructed as follows:
Given a circle C with diameter OA and a tangent l through A. Now, draw lines m through O,
cutting circle C in a point Q and line l in a point R. Then the cissoid is the set of points P
for which OP = QR.
The construction is due to the Greek scholar Diocles (about 160 BC) 2).
He used the cissoid to solve the Delian problem, dealing with the duplication of the
cube. He did not use the name cissoid himself. The name of the curve, meaning 'ivy-shaped', is found for the first time in the
writings of the Greek Geminus (about 50 BC). Because
of Diocles' earlier work on the curve, his name was later attached to it: the cissoid of Diocles.
Roberval and Fermat constructed the tangent of the cissoid (1634):
from a given point there are either one or three tangents.
In 1658 Huygens and Wallis showed that the area between the curve and its asymptote is three times the area of the generating circle (3π/4 for this curve).
The cissoid of Diocles is a special case of the generalized cissoid, where line l and circle C have
been substituted by arbitrary curves C1 and C2.
The curve, having one cusp and one asymptote, has Cartesian equation:
x³ + xy² − y² = 0, or equivalently y² = x³/(1 − x)
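The construction can be checked numerically against this equation. Placing O at the origin and A = (1, 0), the tangent l is the line x = 1; a line through O with slope t meets the circle at Q and the tangent at R, and P = R - Q then satisfies the equation above (this coordinate normalization is an assumption made for the check):

```python
# Circle with diameter OA: center (1/2, 0), radius 1/2, i.e. x^2 + y^2 = x.
# A line y = t*x through O meets the circle again at Q and the tangent x = 1 at R.
for t in [0.5, 1.0, 2.0]:
    qx, qy = 1 / (1 + t**2), t / (1 + t**2)  # second intersection Q
    rx, ry = 1.0, t                          # R on the tangent line
    px, py = rx - qx, ry - qy                # P on line m with OP = QR
    # P should satisfy the Cartesian equation y^2 = x^3 / (1 - x)
    assert abs(py**2 - px**3 / (1 - px)) < 1e-12
print("construction points satisfy y^2 = x^3/(1-x)")
```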
Another generalization of the curve is the ophiuride.
Some relationships with other curves are:
In the special case that construction line l passes through the center of the circle,
the curve is a strophoid.
And near the cusp the curve approaches the form of a semicubic parabola.
1) In Italian: cissoide di Diocle.
2) Waerden 1950 |
The sun emits three kinds of radiation: UVC, UVA, and UVB. UVA and UVB radiation pose the biggest threat to us because, unlike UVC radiation, they are not fully absorbed by the ozone layer. UVB radiation affects the upper layer of skin, called the epidermis, and is what causes sunburn. It is most severe when the sun is at its brightest, especially during the summer, when most of us are exposed to the sun. UVA radiation has been shown to be a huge contributor to sun damage, as it penetrates deeper into the skin than UVB radiation does. Sunscreen is needed to lessen the damage from these types of radiation, and here is why:
The Ozone layer is weakening
The ozone layer is a part of the Earth’s stratosphere that absorbs UVC radiation and some UVB radiation. With the ozone layer diminishing, the Earth’s surface is being exposed to more and more radiation, making it more important than ever to protect your skin.
Reduces the risk of developing skin cancer.
Although this is something we might all know, it is something that needs to be reiterated. Sun damage is one of the major causes of skin cancer. Regular use of SPF15 sunscreen can lower your chances of developing squamous cell carcinoma by around 40 percent and melanoma by around 50 percent. Your skin can also develop wart-like lesions which, although not carcinogenic, can be unpleasant.
Prevent sunburns and other sun damage.
Not only is sunburn an eyesore, it can also be painful and can lead to permanent skin damage, such as wrinkling and an increased risk of melanoma. There is a misconception that people who are darker in skin tone or don’t burn easily do not need to wear sunscreen. This is untrue: sun damage affects all skin types, and therefore protection is necessary for everyone.
Prevent Premature Aging.
Continued sun exposure can cause your skin to age prematurely with the appearance of fine lines, moles, wrinkles, and sagging. Collagen is a structural component of your skin that gives it its firmness and elasticity. As we age, we naturally produce less collagen, leading to wrinkles and dry skin. UV radiation penetrates the dermis, the middle layer of our skin, and produces excess elastin. The body then tries to break this down and in the process breaks down collagen as well. Continued exposure causes the collagen to break down faster than in normal aging, causing discolored, leathery, and saggy skin.
A symbol of morality in politics
Abraham Lincoln began his presidency with no intention of abolishing slavery. Within two years, he had changed his mind. Associate Professor of History Michael Vorenberg looks at the legacy of the Emancipation Proclamation.
St. Martin’s Press recently published The Emancipation Proclamation: A Brief History with Documents by Associate Professor of History Michael Vorenberg.
In a summary that appeared in Times Higher Education (London) on February 4, a reviewer noted that only a rapid reversal of intention by President Abraham Lincoln made slavery-ending legislation possible at that time. “In his 1861 inaugural address,” the reviewer wrote, “Lincoln vowed not to interfere with slavery. Yet two years later he signed the Emancipation Proclamation, setting the stage for national emancipation. This volume reveals the complexity of the process by which African Americans gained freedom and explores the struggle over its meaning.”
As President’s Day approached, Michael Vorenberg spoke with Today at Brown about the proclamation’s significance and legacy.
If one looks for a moment in time that defines a larger, world-shaking transformation, the issuing of the Emancipation Proclamation is one of those moments. The power of it was not so much in its legal as its symbolic power. It signaled that the United States, the most successful democratic experiment of the day, would no longer countenance slavery. For the slaves and for all African Americans, it signaled as nothing had done before that they had a stake in this democracy and that the commitment they already had made to it was to be honored.
To whom is it most important? Least important?
It was most important to the slaves themselves, most of whom learned of it immediately, despite great efforts by slave owners and Confederate authorities to keep the information from them. It was probably least important to those who supported the Confederacy who had assumed – wrongly – that Abraham Lincoln had always intended to issue such a Proclamation.
What lessons have we learned, and failed to learn, from Lincoln?
We learn from Lincoln that politics can make a difference, that there is a place for morality in politics, and that powerful people – even the most powerful office-holder in the country – can learn from others, can be changed by events, and can be influenced by ordinary people. For Lincoln, the ordinary people who influenced him the most were the slaves who took steps to secure their own freedom and African Americans generally who fought for a country, the United States, that had denied them so much.
Has President Obama ever commented on the Emancipation Proclamation? If so, what has he had to say about it?
On Martin Luther King day this year (2010), President Obama hung a copy of the proclamation in the Oval Office. Eventually it will go to the Lincoln Bedroom, the site where the original document was signed. At the ceremony during which the proclamation was hung near a statue of Dr. King, President Obama said nothing about the document. Instead he praised the achievements of the elderly, distinguished African Americans attending the ceremony.
I believe that this event represents the only time in his presidency when Barack Obama might be said to have commented on the proclamation. Obviously, the symbolic power of the proclamation on that day was more important than its actual words. But the proclamation was no empty symbol on that day. It signaled that the transition of African Americans from slavery to freedom is not some dull event from a history textbook but rather a living legacy.
In your opinion, what might American life today be like if the Emancipation Proclamation never happened?
It is hard to imagine that slavery could have survived the Civil War even if the proclamation had never been issued. So I don’t think that the imagined absence of the Proclamation in history translates into the possibility that racialized, chattel slavery would still be in the United States today.
But without the proclamation, the war would not have had the same moral force that it now has in American memory. Without the proclamation, Lincoln would not have become such an iconic figure of worldwide significance. Without the proclamation, white Americans – northern whites in particular – could not enjoy what Robert Penn Warren disdainfully called a “treasury of virtue,” that sense that “we” fought a war to end slavery, and thus all past and future injustices are absolved. For African Americans, the absence of the proclamation would mean the absence of a single, simple, and living symbol of a struggle against slavery that began more than two hundred years before the outbreak of the Civil War.
Common Core Standards: ELA
1. Text Types and Purposes: Write arguments to support claims in an analysis of substantive topics or texts, using valid reasoning and relevant and sufficient evidence.
While Mom might be able to get away with “Because I said so!” as a logical explanation for her side of an argument, most other people need to provide some kind of proof to back up what they want to say. This standard explains just what constitutes a good written argument—from claim to proof.
First, the writer must make a claim. Whether it’s stating that vampires in stories reflect the culture’s fear of people who are different or stating that the conch shell in William Golding’s Lord of the Flies represents civilization and order, the claim tells the reader the author’s belief or opinion. A claim must be something someone can disagree with. No one would argue with the “claim” that gravity makes objects fall down, so there is no point writing an argument to prove it.
Second, this claim must analyze or look at a substantive topic or text. In other words, the claim the writer is making must be about something important. So, a claim examining what vampires symbolize in stories is in, while a claim arguing whether or not vampires sparkle is out!
Third, the writer must create arguments using valid reasoning. This means that the argument must follow logic and present a reason that anyone could be convinced by. For example, if you claim that the conch stands for civilization because your brother says so, that is not valid. It doesn’t matter just how smart and wonderful your brother might be, this reasoning is not going to convince people. You must give reasons based on evidence from the book, and your reasons must follow logical thought processes.
Finally, there has to be evidence that is relevant and sufficient. Relevant evidence is connected to your argument somehow. For your claim that vampires represent a culture’s fears, quoting Bram Stoker’s Dracula is relevant, but quoting an article about Robert Pattinson’s favorite restaurant is not. Sufficient evidence means that you have enough proof of what you are saying. There are plenty of instances in Lord of the Flies when the conch is used to bring everyone together—exploring those instances would be sufficient evidence.
1. In the case of Goldilocks vs. The Three Bears, it is necessary to prove which party is at fault for Goldilocks’ Post-Traumatic Stress Disorder following her visit to the home of The Three Bears.
Goldilocks' PTSD is a result of her own actions, and The Three Bears bear no responsibility for her trauma. (This is the claim. Notice that the claim reflects a belief that someone could disagree with.)

Her inability to return to work picking flowers in the woods is the result of her own actions in the home of The Three Bears. (This shows that the topic is substantive. Goldilocks has lost work because of her problems—which is a serious issue.)

On the day in question, Goldilocks chose to enter the abode of the Bear family despite the fact that she was uninvited, unwelcome, and no one was home. In short, she committed the felony of breaking and entering. From there, she systematically destroyed the possessions of The Three Bears. In particular, Baby Bear suffered the greatest losses—his porridge, his favorite chair, and the security of his own bed. (This evidence is relevant. It makes it clear that Goldilocks harmed The Three Bears through her illegal entry, which means their frightening her is justified—and is not their fault.)

No one is disputing that Goldilocks was frightened by the return of the bears: “At the sound of the Baby Bear's voice the little girl awoke with a start. She sat up and glanced about her. Then she sprang out of bed, and dashed down the stairs and out of the house as fast as her legs would carry her.” Clearly, the Bears’ appearing suddenly while she slept was enough to scare her silly. But as she was sleeping in another family’s house after having eaten their breakfast and ruined their chairs, it is her own fault that she was in a position to be so frightened. (Between the quotation from the story and the description of the losses suffered by Baby Bear, there is sufficient or enough evidence that Goldilocks had no expectation of calm in another person’s house.)

If Goldilocks can no longer perform her duties as a flower-gatherer, it is because she is guilty of trespassing. (The reasoning for arguing that Goldilocks has only herself to blame is valid.)
2. In this example, identify the claim. Explain how it is substantive. How is the reasoning valid? What relevant evidence is used? Is there sufficient evidence of the Bears’ responsibility?
The Three Bears are entirely to blame for Goldilocks’ trauma and subsequent inability to enter the woods to pick flowers. The little girl only wanted to eat and rest before she tried to find her way home again, and by frightening her after a day full of ordeals, the Bears traumatized a minor in their own home.
On the day in question, the Bears left their home unlocked and their breakfast open on the table in order to go for a walk. By leaving the doors unlocked, the Bears lost any expectation of home privacy.
When Goldilocks happened upon their house, she was already exhausted and scared from being lost in the woods. “The little girl went up to the door and knocked. There was no answer. She knocked again. Still no answer. And so she opened the door and went in. She was very tired and hungry.” Clearly, the little girl knew that she should announce her presence before entering. It was not her fault that the Bears were not at home to answer. Finding the door unlocked, the little girl did the only thing she could—she went in and made herself comfortable.
When the Bears returned to find their breakfast eaten, their chairs moved and broken, and their beds rumpled and slept in, they should not have been surprised. They had left their door open. By then scaring the worn out little girl sleeping in Baby Bear’s bed, they committed an atrocious breach of hospitality.
Goldilocks’ current mental state is completely the fault of the Three Bears. If they did not want any visitors in their home, they should not have left it unlocked. For that matter, they should not have left the house at all!
In the year 1909, a little-known astronomer named Vesto Slipher began a series of painstaking observations at the Lowell Observatory in Flagstaff, Arizona, in the US. The observatory had been built primarily to look for evidence of Martian canals, but Slipher had set his sights well beyond the Red Planet and its putative inhabitants. His interest was in the nature of fuzzy patches of light called nebulae. Were they gas clouds in the Milky Way, or far-flung galaxies in their own right? Slipher carefully measured the colour quality of the glowing patches, and found that the fainter they were, the redder their light. Through the lens of history, we can now see that this discovery marked the beginning of cosmology as a proper science.
Slipher’s “red shift” came to the attention of astronomer Edwin Hubble, who understood that the reddening effect implied that the objects were rushing away from us at great speed. Using a more powerful telescope, he confirmed that most of the nebulae were in fact distant galaxies, a finding reported in the New York Times on 23 November 1924; by 1929 he had gone on to show that the entire universe is expanding. It was one of the most momentous scientific pronouncements of all time.
It took several more decades, however, before the modern big bang theory became established, according to which the universe was propelled on its path of expansion from an explosive origin 13.8 billion years ago. The intense heat of the primordial explosion still exists as a fading afterglow, filling all space with a sea of microwaves. This cosmic microwave background, or CMB, was detected by accident in 1964 by two radio engineers. It was immediately apparent that this was the big bang’s smoking gun, and that, etched into the structure of the CMB, lay vital clues about the origin and nature of the universe.
In November 1989, NASA launched the satellite COBE (Cosmic Background Explorer) to map the remnant primordial heat in detail. A few weeks later, NASA released the first heat map of the universe – a colour-coded palette of amorphous splodges indicating slightly hotter and colder patches of the sky. The golden age of cosmology had begun.
Over the three decades since, the CMB has been data-mined to enormous precision, first using COBE’s results, then those of other instruments, the most recent of which is the European Space Agency’s Planck satellite. Piecing together the CMB observations with those from powerful ground-based telescopes, astronomers and physicists have been able to construct the Great Story of the Universe from the first split second to today, in extraordinary detail. During my career, cosmology has gone from being a speculative backwater to a precision science.
In spite of this ringing success, some ugly cracks have started to appear in the cosmic facade. If theorists are to be believed, those tell-tale splodges in the CMB carry a faint echo of what the universe was doing a mere billion-trillion-trillionth of a second after the big bang, an era known as the inflationary phase, when the universe abruptly leapt in size by an enormous factor, as if it had taken a sudden deep breath. Quantum effects during inflation imprinted slight fluctuations in density and temperature on the nascent cosmos, sowing the seeds of what was to eventually evolve into the large-scale structure of the universe – galaxies and clusters of galaxies. The splodges in the CMB are evidently fossils from the edge of time itself, writ large and frozen in the sky.
The laws of quantum physics neatly explain the characteristic patterning observed by COBE and its successors, but there are a couple of discrepancies. The most glaring concerns a large patch in the Southern Hemisphere constellation of Eridanus which is weirdly much cooler than it should be based on statistical fluctuations. It looks like something has taken a giant bite out of the universe, leaving a supervoid. The Eridanus cold patch has led to some imaginative speculation. Could it be a blemish left by another universe bumping into our own? Might it be a portal into a region beyond the known universe? Or some sort of matter-destroying “bubble”?
Another fly in the cosmic ointment concerns the rate at which the universe is expanding, known as Hubble’s constant. For decades astronomers sharply disagreed over its measured value, until a few years ago they settled on a compromise. Just as the dust was settling on this vexed issue, a new way to measure Hubble’s constant, using the splodges in the CMB, gave an answer seriously out of whack – about 10% smaller than the agreed number. Because the inferred age of the universe hinges on the value of Hubble’s constant, the implication is that 13.8 billion years is now an underestimate.
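The link between Hubble’s constant and the age estimate can be seen from the Hubble time, 1/H0, which sets the age scale of the universe. A rough sketch (the true age also depends on the universe’s matter and energy content, which this deliberately ignores):

```python
MPC_IN_KM = 3.0857e19  # kilometres in one megaparsec
GYR_IN_S = 3.1557e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Rough age scale of the universe, 1/H0, in billions of years."""
    h0_per_s = h0_km_s_mpc / MPC_IN_KM  # convert km/s/Mpc to 1/s
    return (1 / h0_per_s) / GYR_IN_S

# The two discordant measurements: roughly 73 (local) vs 67 (CMB-inferred).
print(round(hubble_time_gyr(73), 1))  # the larger value gives a younger age scale
print(round(hubble_time_gyr(67), 1))  # the smaller value implies an older universe
```

This is why a CMB-derived Hubble constant about 10% smaller pushes the inferred age upward, making 13.8 billion years look like an underestimate.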
Dark matters of dispute
Next on the list of unanswered questions is the nature of dark matter and dark energy. Astronomers are certain that the stuff of which you, me and the stars are made is but a tiny percentage of all there is.
Fully five times as much matter is in some unknown form that doesn’t seem to interact noticeably with normal matter, except for the gravitational tug it exerts. The smart money is on some sort of weakly interacting heavy subatomic particle, legions of which must be passing through us all the time without causing a shudder. The race is on to try to detect the occasional fleeting passage of a dark matter particle, or perhaps to create one in giant accelerator machines like the Large Hadron Collider at CERN in Switzerland.
Even if a dark matter particle is nailed in the near future, it still leaves unanswered the nature of the stuff that makes up three-quarters of the mass of the universe, the thing known as dark energy. It is not really matter in the normal sense of the word. Rather, the best way to envisage dark energy is the energy of empty space (which is why we can’t see it).
The idea that space itself might have energy goes back to 1917, when Einstein realised that if the energy of space isn’t strictly zero then space would be self-repulsive; that is, it would possess an intrinsic propensity to expand, faster and faster. In effect, space energy is a form of anti-gravity. It wasn’t taken seriously until the 1990s when, lo and behold, astronomers (including Brian Schmidt, now the Vice-Chancellor of ANU) found that the expansion of the universe is speeding up. Dark energy would do the trick nicely.
Not everyone is happy about invoking Einstein’s anti-gravity to explain the accelerating expansion. Part of the problem is that the amount of energy in, say, a million cubic kilometres of empty space is entirely arbitrary. It is, however, an exceedingly tiny number: astronomers measure it at just about enough energy to boil a kettle, if it could be harnessed. But why that particular number and not some other?
Appeals to quantum physics to derive a value for dark energy fail spectacularly. One estimate is out by about 120 powers of ten! Maybe space is permeated by a new sort of field that produces just the right amount of cosmic repulsion, but so far all we have to show is a lot of different models and calculations and nothing definitive.
The question “What is dark energy?” is high on the list of outstanding problems in fundamental science.
This excerpt is republished online from Cosmos Magazine issue 92, which went on sale on Thursday 2 September 2021.
To read more about the cosmological mysteries puzzling physicists, subscribe today and get access to our quarterly magazine in print or digital, plus access to all back issues of Cosmos magazine. |
In the ongoing war to defeat antibiotic resistance, a new study has identified a protein that acts as a “membrane vacuum cleaner” — an attribute that means it could serve as a new target for antibiotics. The research indicates that the process of purging the outer membrane of gram-negative bacteria of specific lipids (which requires a particular protein) might be a vulnerability drugs could target. More specifically, drugs aimed at the protein the researchers identified could make existing antibiotics more effective, or even decrease the virulence of many common bacteria such as E. coli.
Gram-negative bacteria have two membranes — one inner and one outer. This new research implicates the outer rather than the inner membrane. The outer membrane is an asymmetrical bilayer composed of inner and outer leaflets. The inner leaflet is made up of phospholipids, and the outer leaflet is made up of mostly lipopolysaccharides, which create a sugar-coated surface that efficiently excludes hydrophobic molecules and resists antibiotics — as well as other compounds that might endanger the bacteria.
However, the outer leaflet requires a cleaning system, because phospholipids from the inner leaflet accumulate in it, creating “islands” that render the outer membrane more permeable to toxic compounds. This, in turn, makes the entire bacterium more vulnerable.
The asymmetry and permeability barrier of the outer membrane must be restored in order to keep the bacterium healthy, which means those phospholipid molecules must be removed. This is the job of the maintenance of lipid asymmetry (Mla) system, which most Gram-negative bacteria have. The focus of the recent research is the MlaA protein, a component of the Mla system.
Newcastle University Professor of Membrane Protein Structural Biology and lead author Bert van den Berg explained in a press release: “Our three-dimensional structures and functional data show that MlaA forms a donut in the inner leaflet of the outer membrane. This binds phospholipids from the outer leaflet and removes these via the central channel, somewhat similar to a vacuum cleaner.”
The researchers plan to continue to study the MlaA protein as a target for antibiotics. This work is essential, as the development of new drugs is being outpaced by antibiotic resistance. As such, many researchers have pivoted to focusing on the bacteria themselves: in space, in nature, and even at the nanoscale for quantum effects. Researchers are also working to attack antibiotic resistance at the chemical and molecular level, searching for the genetic roots of resistance, using CRISPR, and otherwise preventing expression of genes that enable resistance. The issue itself is at a crisis point, according to authorities like the World Health Organization, the Centers for Disease Control, and the United Nations.
This new research will aid in our ongoing fight against this critical issue. Professor van den Berg commented in the release, “Our study illuminates a fundamental and important process in Gram-negative bacteria and is a starting point to determine whether the Mla system of Gram-negative pathogens could be targeted by drugs to decrease bacterial virulence, and to make various antibiotics more effective.” |
Is there a really important reason why PHP uses all those dollar signs?
It's the syntax. $x is a variable; $$x is a variable variable. Normally you don't use these, but the feature is there. What it does is evaluate the original (single-dollared) variable as a string, and use it as a variable name. So if you have this:
$a = 'foobar';
$x = 'a';
echo $$x;
...it will print "foobar" - $x is 'a', so $$x resolves to $a.
Variable variables are loosely comparable to pointers in C, only that you use strings to point to variables instead of binary pointer values (and that memory management is taken care of by the language).
The C++ equivalent of the above would be:
string a = "foobar";
string* x = &a;
cout << *x;
Only that PHP uses strings to reference variables, so you can construct variable references by name, as strings. |
Applications for five-axis machining fall into two distinct categories. The first is related to machining very complex shapes, as is required in aerospace and the mold industry. When machining 3D shapes, it is often advantageous (if not necessary) to keep the cutting tool perpendicular to the surface being machined. This is possible through the articulation of three linear axes (X, Y and Z) and two rotary axes (commonly A and B). Since this kind of work tends to be so complex, a CAM system is required to create the CNC program.
A second application for five-axis machining (the one we address in this column) is not nearly as complex. Many workpieces have surfaces to be machined that are not at right angles. The five axes (again, X, Y, Z, A and B) are simply required to expose the surface to the spindle for machining. Once exposed, machining will occur in a relatively simple plane. Most five-axis programmers in this category would agree that you need a CAM system to create programs for this application, even for simple operations such as drilling, due to the unusual surface angles involved.
You probably know that all simple three-axis CNC machining center controls allow you to specify plane selection commands. G17 is commonly used to specify the X/Y plane, G18 for X/Z and G19 for Y/Z. However, these commands require that the surfaces to be machined be in 90-degree increments. These commands are helpful with right angle heads and allow many of the same programming features that are used in the X/Y plane to be used in X/Z and Y/Z planes.
While these simple plane selection commands are very helpful for three-axis machining centers, they do little to help five-axis machining center programmers. Again, surfaces that must be machined are often not at simple right angles to one another. The 3D coordinate conversion allows the programmer to define a plane for any surface in which machining is to take place. In essence, it allows variable plane selection. As with G17, G18 and G19, once a plane is defined, almost all programming features allowed in the X/Y plane will be available in the plane being defined. This can dramatically simplify the programming of five-axis machining centers. In some applications, programmers are actually able to develop programs manually. Again, all machining is taking place in a simple two-direction plane.
Controls vary when it comes to exactly how you define the plane in which machining is to be done. One popular control manufacturer uses a G68 command for this purpose. In this command an X, Y and Z value specify a point through which the plane will be passing. Additionally, an I, J or K value specifies the axis that the plane will be rotated about (I for X, J for Y, K for Z). An R word in the command specifies the angle of rotation for the plane (from normal). The command N020 G68 X0 Y0 Z0 I0 J1 K0 R40.0, for example, will define a plane that passes through the program 0 point. It will be rotated at 40 degrees about the Y axis. Note that this is still a relatively simple plane. By adding a second G68 command we could continue defining the plane to be used for machining, rotating it in two directions. Again, any surface to be machined on the workpiece can be defined in this manner, regardless of how complex the plane.
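As a rough illustration of the math behind such a coordinate rotation — sketched here in Python rather than G-code, and using a textbook sign convention that will not match every control — rotating a point about a Y-parallel axis through a pivot looks like this:

```python
import math

def rotate_about_y(point, pivot, angle_deg):
    """Rotate a 3D point about an axis parallel to Y passing through `pivot`.

    Illustrative only: real controls apply their own sign and
    orientation conventions for plane rotation commands.
    """
    a = math.radians(angle_deg)
    # Translate so the pivot is at the origin.
    x, y, z = (p - q for p, q in zip(point, pivot))
    # Standard rotation about the Y axis (Y coordinate is unchanged).
    xr = x * math.cos(a) + z * math.sin(a)
    zr = -x * math.sin(a) + z * math.cos(a)
    # Translate back.
    return (xr + pivot[0], y + pivot[1], zr + pivot[2])

# A point one unit along X, rotated 40 degrees about Y through the program zero point:
print(rotate_about_y((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 40.0))
```

Chaining two such rotations (as with a second G68 command) is just a second function call on the result, which is how a compound plane would be built up.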
If you're going to be purchasing a five-axis machining center for work in this category, we'd urge you to look into this program simplifying feature.
This Solar Kit Lesson #9 - Properties of Solar Radiation: Reflection, Transmission, and Absorption lesson plan is summarized below.
Middle school science stars observe and record data on the solar radiation reflected off or transmitted through various materials. They predict properties for various materials, and test their predictions by touch. This lesson becomes practical as the gleaned knowledge is applied to making consumer choices when it comes to characteristics of a car. Comprehensive resources are provided for you in this writeup, including background information, materials and procedures, lab sheets for learners, and review questions. |
Toileting (or using the potty) is one of the most basic physical needs of young children. It is also one of the most difficult topics of communication among parents, child care providers, and health care professionals when asked to determine the "right" age a child should be able to successfully and consistently use the toilet.
Most agree that the methods used to potty train can have major emotional effects on children. The entire process-from diapering infants to teaching toddlers and preschoolers about using the toilet-should be a positive one. Often, and for many reasons, toilet learning becomes an unnecessary struggle for control between adults and children. Many families feel pressured to potty train children by age two because of strict child care program policies, the overall inconvenience of diapering, or urging from their pediatricians, early childhood columnists, researchers, other family members, friends, etc.
The fact is that the ability to control bladder and bowel functions is as individual as each child. Some two-year-olds are fully potty trained, and some are not. But those that aren't should not be made to feel bad about it. There are also many cultural differences in handling potty training, therefore it is important that families and program staff sensitively and effectively communicate regarding these issues.
The purpose of toilet learning is to help children gain control of their body functions. If a child is ready, the process can provide a sense of success and achievement. Here are some helpful hints on determining when young children are ready to begin the potty training process and suggestions on how to positively achieve that task.
Ready, set, go!
Children are most likely ready to begin toilet learning when they:
- show a preference for clean diapers-a preference adults can encourage by frequent diaper changing and by praising children when they come to you for a change.
- understand when they have eliminated and know the meaning of terms for body functions. For example, "wet," "pee," "poop," and "b.m." are words commonly used by children to describe bladder and bowel functions.
- indicate that they need to use the potty by squatting, pacing, holding their private parts, or passing gas.
- show that they have some ability to hold it for a short period of time by going off by themselves for privacy when filling the diaper or staying dry during naps.

Become a cheerleader

There may be times during the learning process when children accidentally go in their diapers or training pants. This can be very distressing and may cause them to feel sad-especially if they have been successfully using the chair for some period of time. When this happens, change the diaper without admonition-...
Lesson 4 of 7
Objective: SWBAT explain that in rotations, all points are rotated through congruent angles with congruent radii.
Activating Prior Knowledge
Where We've Been: Students have been doing activities similar to the one for today's lesson. So far, we've defined translations in terms of congruent line segments with the same slope, and we've defined reflections in terms of segments and their perpendicular bisector.
Where We're Going: Students will be using the definitions we've developed to verify that a particular transformation has occurred or to solve problems related to transformations.
Since we have been doing this type of activity for the past couple of lessons, students should be ready to jump right in at the beginning of class. I do need to make sure that they haven't forgotten how to use a protractor to measure and create angles before we start, though. So the task for this section of the lesson is:
Use a protractor to create a 72 degree angle. Then exchange papers with a neighbor so that they can measure your angle and verify that it measures 72 degrees. Then exchange again with a new person and do the same thing. (MP6)
To begin, I pass out the Defining Rotations Activity resource and students get right to work on it. The task is intended to be self-guided. My job is to coach and facilitate.
A couple of foreseeable issues:
- When students measure their angles they should draw the rays of the angles lightly, preferably in four different colors, so that the graphs don't get overly convoluted.
- Regarding item #4, some students may not come up with the formula right away. This is ok...they can move on to #5 and find the distances using the distance formula if they like. Hopefully, after a few times doing it this way, these students will start to recognize some regularity in their calculations and see that there is an easier way. Then they can come back and complete #4. This is a good time to have a conversation about MP8 as I explain in the video.
Once students have completed the activity I’ll want to make sure they have the important concepts as I intended. In this section of the lesson I recap while observing student body language and class conversation informally.
First, we discuss how we know that the transformation in this activity was a rotation.
Next I explain (and write on the whiteboard) that in order to define a particular rotation precisely, we need to specify three things:
1. The center of rotation
2. The angle of rotation
3. The direction of rotation (by convention, positive angles of rotation indicate counterclockwise rotation and negative angles indicate clockwise)
Having established that, I speak precisely about the rotation that took place in the activity:
It was a 90 degree rotation counterclockwise about the origin. I point out that we could have determined this just by recognizing the rule (x,y) ---> (-y,x) [which students have learned in an earlier lesson on transformations as functions].
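For teachers who want a quick numerical check of this rule, a few lines of Python (an illustrative sketch, not part of the activity materials) confirm both properties the activity develops: the image point stays the same distance from the center of rotation, and the two radii are perpendicular.

```python
import math

def rotate90_ccw(p):
    # The rule (x, y) -> (-y, x): a 90-degree counterclockwise
    # rotation about the origin.
    x, y = p
    return (-y, x)

pre = (3.0, 1.0)          # an arbitrary pre-image point
img = rotate90_ccw(pre)   # its image under the rotation

# Distance from the center of rotation (the origin) is preserved...
assert math.isclose(math.hypot(*pre), math.hypot(*img))

# ...and the angle between the two radii is 90 degrees
# (the dot product of the two position vectors is zero).
assert pre[0] * img[0] + pre[1] * img[1] == 0.0

print(img)  # (-1.0, 3.0)
```

Any other pre-image point would pass the same two checks, which is exactly the content of items #3 and #5 in the activity.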
Directing students' attention to item #3, I point out how every angle formed by a point on the pre-image, the center of rotation as vertex, and the corresponding point on the image was 90 degrees. That means that every point on the pre-image got rotated 90 degrees.
Moving our attention to item #5, I point out how we found that the distance from any point on the pre-image to the center of rotation was the same as the distance between its corresponding point on the image and the origin. I emphasize (and write on the whiteboard) how this means that after a rotation, all points will still be their original distance from the center of rotation.
Finally I explain how in a rotation each point on the pre-image travels on a path called an arc, which is part of a circle's circumference. How much of the circle's circumference the point travels is determined by the angle of rotation (360 degrees is a full circle so the 90 degree rotation in this example was a quarter of a circle.) and the radius of the circle is the distance between the original point and the center of rotation.
I check for understanding by asking "Do all points in a rotation travel the same distance? Explain." I have students do a Think-Pair-Share to answer this question. When we share out after the Think-Pair-Share, I make sure that everyone goes away knowing (and writing in their notes) that all points move through the same angle but do not travel the same distance. Points originally farther away from the center of rotation travel along bigger circles (therefore farther), and points that are originally closer to the center of rotation travel along smaller circles. As a final check, I ask for volunteers to answer the question "What do you suppose would happen to a point that was originally on the center of rotation?"
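The key idea behind the check for understanding — equal angles, unequal distances — can also be made concrete with the arc-length formula s = r·θ (θ in radians). This supplementary sketch is not part of the lesson materials:

```python
import math

def arc_length(radius, angle_deg):
    # Distance a point travels during a rotation: s = r * theta,
    # with theta converted from degrees to radians.
    return radius * math.radians(angle_deg)

# In a 90-degree rotation, every point sweeps the same angle, but a
# point 2 units from the center travels twice as far as a point
# 1 unit from the center:
print(arc_length(1, 90))  # a quarter of a unit circle's circumference
print(arc_length(2, 90))
```

A point on the center of rotation has radius 0, so it travels a distance of 0 — which answers the final volunteer question.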
After the check for understanding, if at least 85% of the students seem to "get it", it's time to put the concepts into action by rotating a figure around a specific center of rotation using a specified angle of rotation. For that, I use the Define Rotations Independent Practice Task, which I explain and demonstrate in the Independent Task video. |
Few Pakistanis know what the Higgs boson is and even fewer realise that some of the earliest theoretical groundwork that led to this discovery was laid by Pakistan’s only Nobel laureate, Dr Abdus Salam.
The Higgs boson is a subatomic particle whose existence was confirmed by the European Organisation for Nuclear Research (known by its French acronym, CERN) on July 4. The discovery of the particle provides the last remaining bit of empirical evidence necessary for the Standard Model of physics, which seeks to explain the existence of all forces in the universe except gravity.
In the 1950s, physicists were aware of four different types of forces in the universe: gravity, the electromagnetic force, the force responsible for radioactive decay (the weak nuclear force), and the force that keeps the nucleus of the atom together (the strong nuclear force). The Standard Model can offer an integrated explanation for the latter three of those forces. Its origins lay in the discovery in 1960 by American physicist Sheldon Glashow that the weak nuclear force and the electromagnetic force are manifestations of the same underlying force.
Among the many discoveries that later solidified the Standard Model of physics was work done in 1967 by Dr Abdus Salam and American physicist Steven Weinberg, who incorporated the Higgs mechanism into Glashow's theory, giving the "electroweak theory" its current form. But Dr Salam's contributions to particle physics do not end there. Collaborating with Indian physicist Jogesh Pati, he proposed the Pati-Salam model in 1974, which further moved forward the theoretical underpinnings of the Standard Model.
It was for this body of work that Salam, along with Weinberg and Glashow, was awarded the Nobel Prize for physics in 1979.
While this work in theoretical physics may seem obscure and with little practical application, the tools created by physicists engaged in this research are ones we all live with today. For instance, in order to assist the thousands of physicists around the world collaborating on this project, scientists at CERN developed the World Wide Web. The need to crunch massive amounts of data led to the development of what is now known as cloud computing.
Research like this does not come cheap: it cost the Europeans about $10 billion to build the Large Hadron Collider, the atom-smashing machine that allowed for the discovery of the Higgs boson. But the economic payoffs for any country that invests in them seem to be several orders of magnitude higher, making it well worth it. Imagine: the thousands of internet companies – worth trillions of dollars – would not exist, were it not for the innate curiosity of particle physicists seeking what seems an outlandish goal: one theory that explains everything in the universe.
It is this curiosity to seek out the truth through empirical evidence, to seek explanations for the inexplicable, to unmask the unknown, to venture into the uncharted, that forms the basis for the fundamental drive of moving humanity forward. It is at the frontier of discovery that the future is born, and new industries and new avenues of wealth created, allowing millions – even billions – to lead better lives than they did before.
A Pakistani was at the fore of this frontier of discovery in the 1960s and 1970s. But rather than encourage and celebrate his magnificent achievement, he was maligned and sidelined for his faith. An ironic fact: most physicists are staunch atheists but Salam was one of the few firm believers in God.
Published in The Express Tribune, July 6th, 2012. |
During Communication Technology, students worked in groups to build simple cup & string communicators. These communicators were used to reinforce the lesson on the Components of a Communication System as each of the components could be identified using this simple communication device and allowed students to have some fun while learning a very important concept.
After the information on House Construction was presented, students worked in groups to sketch each of the 3 major subsystems of the house (Floor, Walls, & Roof). Once the sketches were complete, each group created one of the subsystems using materials available in class (shoe boxes, drinking straws, cardboard, construction paper, chenille stems, etc.). Once the subsystem was complete, each group used an iPad to video themselves describing the subsystem they had created and uploaded the completed video to Google Classroom. This activity allowed students to work collaboratively and also enabled them to use technology to demonstrate what they had learned as well as gave them practical experience using Google Classroom which will be used extensively in high school.
For Bridge Construction, students applied what they had learned about bridge construction by working in groups to create a bridge using no more than 100 popsicle sticks and wood glue. The groups were given a set of criteria that the bridge should meet but were allowed to use their creativity and imagination to create a bridge that they thought would be strong enough to support at least one text book and hopefully break last year's record of 24 text books (67.2 pounds). Three groups came very close with their bridges holding 23 books but last year's record is safe for another year. |
Nuclear ventriculography is a test that uses radioactive materials called tracers to show the heart chambers. The procedure is noninvasive. The instruments DO NOT directly touch the heart.
How the Test is Performed
The test is done while you are resting.
The health care provider will inject a radioactive material called technetium into your vein. This substance attaches to red blood cells and passes through the heart.
The red blood cells inside the heart that carry the material form an image that a special camera can pick up. These scanners trace the substance as it moves through the heart area. The camera is timed with an electrocardiogram. A computer then processes the images to make it appear as if the heart is moving.
How to Prepare for the Test
You may be told not to eat or drink for several hours before the test.
How the Test will Feel
You may feel a brief sting or pinch when the IV is inserted into your vein. Most often, a vein in the arm is used. You may have trouble staying still during the test.
Why the Test is Performed
The test will show how well the blood is pumping through different parts of the heart.
Normal results show that the heart squeezing function is normal. The test can check the overall squeezing strength of the heart (ejection fraction). A normal value is 50% to 55% or above.
The test also can check the motion of different parts of the heart. If one part of the heart is moving poorly while the others move well, it may mean that there has been damage to that part of the heart.
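The ejection fraction mentioned above is a simple ratio of volumes. As an illustration only — using hypothetical volumes, not patient data — it is derived from the heart's end-diastolic and end-systolic volumes like this:

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%) = (stroke volume / end-diastolic volume) * 100,
    where stroke volume = end-diastolic volume - end-systolic volume.
    """
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Hypothetical example: 120 mL in the ventricle at end-diastole,
# 50 mL remaining at end-systole.
ef = ejection_fraction(120.0, 50.0)
print(round(ef, 1))  # 58.3 -- within the normal range (50% to 55% or above)
```

In practice the scan's computer performs this calculation from the imaged blood-pool volumes; the sketch only shows the arithmetic behind the reported number.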
What Abnormal Results Mean
Abnormal results may be due to:
- Blockages in the coronary arteries (coronary artery disease)
- Heart valve disease
- Other cardiac disorders that weaken the heart (reduced pumping function)
- Past heart attack (myocardial infarction)
The test may also be performed for:
Nuclear imaging tests carry a very low risk. Exposure to the radioisotope delivers a small amount of radiation. This amount is safe for people who DO NOT have nuclear imaging tests often.
Cardiac blood pooling imaging; Heart scan - nuclear; Radionuclide ventriculography (RNV); Multiple gate acquisition scan (MUGA); Nuclear cardiology; Cardiomyopathy - nuclear ventriculography
Kramer CM, Beller GA, Hagspiel KD. Noninvasive cardiac imaging. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2016:chap 56.
Udelson JE, Dilsizian V, Bonow RO. Nuclear cardiology. In: Bonow RO, Mann DL, Zipes DP, Libby P, Braunwald E, eds. Braunwald's Heart Disease: A Textbook of Cardiovascular Medicine. 10th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 16.
Review Date 5/5/2016
Updated by: Michael A. Chen, MD, PhD, Associate Professor of Medicine, Division of Cardiology, Harborview Medical Center, University of Washington Medical School, Seattle, WA. Also reviewed by David Zieve, MD, MHA, Isla Ogilvie, PhD, and the A.D.A.M. Editorial team. |
A new paper in Current Anthropology suggests that Neanderthals were as gifted at hunting as modern humans.
CURRENT ANTHROPOLOGY Volume 47, Number 1, February 2006
Ahead of the Game
Middle and Upper Palaeolithic Hunting Behaviors in the Southern Caucasus
by Daniel S. Adler, Guy Bar-Oz, Anna Belfer-Cohen, and Ofer Bar-Yosef
Over the past several decades a variety of models have been proposed to explain perceived behavioral and cognitive differences between Neanderthals and modern humans. A key element in many of these models and one often used as a proxy for behavioral "modernity" is the frequency and nature of hunting among Palaeolithic populations. Here new archaeological data from Ortvale Klde, a late Middle–early Upper Palaeolithic rockshelter in the Georgian Republic, are considered, and zooarchaeological methods are applied to the study of faunal acquisition patterns to test whether they changed significantly from the Middle to the Upper Palaeolithic. The analyses demonstrate that Neanderthals and modern humans practiced largely identical hunting tactics and that the two populations were equally and independently capable of acquiring and exploiting critical biogeographical information pertaining to resource availability and animal behavior. Like lithic techno-typological traditions, hunting behaviors are poor proxies for major behavioral differences between Neanderthals and modern humans, a conclusion that has important implications for debates surrounding the Middle–Upper Palaeolithic transition and what features constitute "modern" behavior. The proposition is advanced that developments in the social realm of Upper Palaeolithic societies allowed the replacement of Neanderthals in the Caucasus with little temporal or spatial overlap and that this process was widespread beyond traditional topographic and biogeographical barriers to Neanderthal mobility. |
Large inequities in health exist between indigenous and non-indigenous populations worldwide. This health divide has also been demonstrated in India, where indigenous groups are officially classified as scheduled tribes (STs). India has one of the largest tribal populations in the world. Tribal communities in general, and primitive tribal groups in particular, are highly disease prone, and their misery is compounded by poverty, illiteracy, ignorance of the causes of disease, a hostile environment, poor sanitation, lack of safe drinking water, blind beliefs, etc. As per the estimates of the National Family Health Survey-3 (NFHS-3), the likelihood of having received care from a doctor is lowest for ST mothers (only 32.8%, compared to 50.2% for India as a whole). While many strategies have been attempted over the years to address some of the economic, social, and physical factors preventing the tribal population from gaining access to healthcare services, the ultimate outcome has remained far below expectations. Considering that these ST groups are culturally and economically heterogeneous, the methods to tackle their health problems should not only be integrated and multi-fold, but also as specific to the individual groups as feasible. Recommended measures include strengthening the existing human resources, bringing health services within the reach of remote populations, promoting health awareness, facilitating community participation using innovative strategies, bringing about a change in the behavior of health care providers, empowering ethnic groups through administrative reforms, and finally ensuring the sustainability of all of the above measures.
Why extremism is relevant
Teachers play an important role in keeping children and young people safe. They are in a key position to protect them from the dangers of extremist narratives.
You do an invaluable job in protecting students from drug abuse, gangs, neglect and sexual exploitation. Radicalisation has a similarly devastating effect on young people, families and communities. Helping to protect students from extremist and radicalising influences is an important part of your overall safeguarding role.
Children and young people are particularly vulnerable to radicalisation. Many teenagers look for answers to questions about identity, faith and belonging, and are in search of adventure and excitement. Extremist groups, whether Islamist, far-right or other, claim to offer the answers and promise vulnerable young people a sense of identity. Though instances are rare, even very young children may be exposed to extremism, both inside and outside the home, or online.
Many young people also spend a lot of time online which exposes them to additional risks. Extremist groups’ use of internet and social media has become a prolific way for them to spread their ideology.
It’s important to ensure your classroom is a safe space, where ideas and controversial issues can be discussed freely and openly. Encouraging such activities will help students challenge extremist arguments, by equipping them with skills and knowledge to explore political and social issues critically, to weigh evidence, debate and make reasoned arguments. |
Encourage your child to build simple CVC (consonant-vowel-consonant) words from letters.
Gather all the letters of the alphabet using magnetic letters, foam letters, blocks or write each letter of the alphabet on an index card: a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y and z. Separate the vowels and consonants. Have your child pick a vowel and try to form CVC (consonant-vowel-consonant) words by sandwiching the vowel between two consonants, such as choosing the letter a and forming: cat, pan, tap, etc. Vary the sessions, concentrating on a specific vowel each time. |
Holocaust: The Scorching Inferno of Prejudice
The history of mankind is overwhelmingly comprised of examples of "man's inhumanity to man." Yet, the rationale for its study is to enlighten "humanity" to prevent future tragedies like the Holocaust. This abominable period in history is aptly named because it was a notoriously planned genocide that resulted in the murders of six million Jews, along with five million other human beings, due to Nazi-generated prejudice, an "irrational hatred or suspicion of a specific group, race, or religion" (Webster 872). With genocide appallingly evident in countries like Bosnia, the people who appear to have benefited the most from the study of history are blood-thirsty dictators. They have learned how easily the propaganda of prejudice can camouflage their heinous quest for power by duping the ignorant into scapegoating others for economic misfortunes, easily igniting the flames of hatred into the black chaos of war.
Therefore, propaganda effectively employs man's own fallacious prejudice to divert him from more effective problem-solving. For centuries, Christians have scapegoated Jews, even though Christ, ironically a Jew, preached tolerance. Although abused and segregated by prejudice, European Jews managed to survive and even to prosper for centuries. However, the relatively high-level status some achieved, despite all obstacles, instead of disproving absurd allegations of inferiority, only sparked the envious flames of prejudice.
The Nazis cleverly used pre-existing anti-Semitism to fuel their rise to power. From the Middle Ages, Jews were even outrageously accused of Satanic world domination (Bauer 19). Nomadic Jews, forbidden land ownership, became usurers, promoting further animosity-generated prejudice, as did the Crusades, which set a precedent for the mass murder of Jews in western German towns in 1196 (Bauer 19). By the late 1800's, prejudice against Jews extended beyond religion to race (Bauer 43). In 1878, German racist, Wilhelm Marr, had coined the more "antiseptic" term, "anti-Semitism," to replace more blatantly pejorative labels (Bauer 43). Moreover, Duhring's Social Darwinism "rationalized" the racial symbol of "blood" to stereotype genetic inferiority (Bauer 41), while German "scholar," Paul de LaGarde directed Germans to "trample this usurious vermin to death" (Bauer 42-43). In 1854, Count de Gobineau indulged in prejudicial "groundbreaking" with his contention that tall, blond, blue-eyed "Aryans" were "the superior race," in marked contrast to the "Dark" or "Bad" Jews (Bauer 42). German historian, Treitschke, even made the grossly scapegoating remark, "Jews are our misfortune" (Bauer 47), a slogan later borrowed by the popular Nazi newspaper, "Der Stuermer," known for carrying on the German tradition of grotesquely caricaturing Jews to promote racism (Guttman 1420-1421).
German political parties, like the Christian Social Workers' Party in 1879, even carried on a tradition of promulgating prejudice to gain votes (Bauer 47). Setting the stage for Nazi scapegoating of Jews during the depression, Socialistic Anarchists, like Proudhon and Foumier, labeled Jews, like Rothschilde, as initiators of the evils of capitalism (Bauer, 41). Yet, optimistic Jews fervently hoped that the "Jewish problem" had been settled when they were finally granted legal rights (Bauer 39), blissfully unaware of the smoldering approach of the Nazi "Final Solution" of judenrein or Jewish genocide.
So Hitler took advantage of human foibles to slowly fan the embers of pre-existing prejudice, a "smokescreen" to mask his reduction of fundamental human rights to ashes. Niemoller, a German World War I hero, clergyman, and ironically Nazi prisoner in 1937, founded the Pastors' Emergency League to protest Nazi inhumanity. He was credited with the truism that the Nazis dominated Germany because there was no opposition from individuals who were not, themselves, victimized by Nazi prejudice, resulting in their own ultimate oppression (Bauer 134-137). Ironically, Hitler, himself, would exemplify prejudicial pragmatism when designating his Arab allies "honorary Aryans" (Bauer. 44). Once made Chancellor of Germany in 1933, Hitler quickly "legalized" anti-Semitism by banning all Jews from government service (Dawidowicz 77). The black clouds of prejudice thickened as Dachau was quickly opened two months later, initially to "smother" political dissidents. Nazi Action Committees were ordered to systemically achieve such objectives as the universal boycott of Jewish businesses on April 1,1933, effectively hastening the 1933 panic emigration of German Jews (Arad 34), while Hitler gradually desensitized Germans into accepting his legalized sterilization (1933) and "euthanasia" (1939) of "undesirables" (Bauer 208). Meanwhile, Reich Propaganda Minister, Goebbels fanatically burned books contrary to Nazi ideology in May of 1933 and eliminated dissident teachers to promote the blackness of prejudicial ignorance (Bauer 95-96). From 1933-1935, innocent Jewish children were progressively thwarted from obtaining an education, while being humiliated by proliferating "No Jews" signs, posted even on playgrounds (Bauer 179).
Yet, it was the blatantly racist Nuremberg Laws of 1935, ironically reputed to promote German "honor," which set the stage for near Jewish genocide. Under the Reich Citizenship Law, Jews lost their hard-won German citizenship, while the Law for the Protection of German Blood and Honor prohibited marriages between Jews and "Germans" (Holocaust 839). The political rights of the German Jew were, therefore, systematically eliminated. Diverted by sacrificial fires exterminating the outrageously scapegoated, mainstream Germans ignored the oppressive darkness enveloping them with the approaching Nazi-generated war, ignited by the prejudice of nationalism. Only gradually did Germans confront the black oppression generated by condoning prejudice, forever exemplified in the anti-Jewish rioting and looting during the infamous Kristallnacht pogrom in 1938 (Bauer 109).
Although in 1937, Pope Pius XI condemned racism to protect "Jewish-Catholic" converts, he refrained from opposing Nazi anti-Semitic policy (Bauer 136), as the black cloud of mass extermination was advancing. To this end, Jews were identified by cards, as well as by the middle names of "Israel" and "Sara" on all official documents in 1938. Banned from public schools, universities, and even their own homes in 1939, Jews were more readily segregated by Stars of David and arm bands (Bauer 146). The "Final Solution" began as Polish Jews were being imprisoned in 1940, and by 1941, tens of thousands of Jews were being "gassed" in Chelmno (Bauer 208-209), while another "killing center," Birkenau, was opened in 1942 to house deported Jews (Bauer 206). Nazism became synonymous with mass murder when approximately 1,790,000 Jews were executed in Belzec, Sobibor, Treblinka, and Majdanek (Bauer 209). Auschwitz inmates were even forced on a "death march" as late as January of 1945 to German concentration camps, just before Soviet liberation and World War II's end (Bauer 160). Perhaps, even then, Nazis resorted to the immature tactic of scapegoating their victims for their well-deserved defeat.
If it is incomprehensible how much dictators can pathologically distort reality, it is even more appalling how much their "subjects" can be brainwashed into meek adherence. Amazingly, human beings seem to be more inspired to save endangered species of animals than their own species. It is a veritable disgrace that international law, and a tribunal to enforce it, had not been established after World War I, a war which had already set a precedent for large-scale mass murder (Bauer 58). Only a globally enforceable legal system, responding decisively, can serve to deter any dictator who even hints at "ethnic cleansing". In 1945, the Nuremberg Trial was established to set a precedent for conviction of "crimes against humanity" (Stadtler 197). In addition, the United Nations was created in response to World War II atrocities, but it has not reacted quickly enough to such modern abominations as Bosnia's attempted genocide, Algeria's and Rwanda's mass murders, and Iraqi weapons of mass destruction, let alone bringing human rights criminals to trial.
Therefore, an intense scrutiny of the Holocaust and the insidious corrosion of prejudice, which erodes the very infrastructure of human society, should be required for all mankind. Robert Jackson, Chief Prosecutor of the Nuremberg Trial, was correct: "These wrongs have been so calculated, so malignant, so devastating, that civilization cannot tolerate their being ignored, because it cannot survive being repeated" (Powell 53). A critical study of Mein Kampf might also alert mankind to Hitler's truism: "With the help of a skillful and continuous application of propaganda, it is possible to make the people conceive even heaven as hell" (United States 175).
The Jewish people have performed a monumental service to mankind in keeping the gruesome issue of genocide painfully alive by establishing a Museum of Tolerance and Holocaust Museums to promote humanity. All human beings need to be continuously reminded of the full ramifications of the embers of prejudice, that so easily ignite to produce the blackened ashes of human destruction, destroying peace, the only light under which mankind flourishes. Witness the aims currently professed by incendiary groups like the Ku Klux Klan and German neo-Nazis (Cowell 26A). The real antidote to the vicious venom of prejudice is to destroy the blight of ignorance so that mankind will not be duped into surrendering fundamental moral responsibility to dictators, but instead, will fully support international efforts to thwart and hold dictators fully accountable for any acts of "world treason." As a Holocaust survivor and distant cousin (who prefers anonymity) fervently believes, ethical people should be at least as vociferous and articulate in promoting human rights as those who would seek to destroy them.
Arad, Yitzhak, Yisrael Gutman and Abraham Margaliot, eds. Documents on the
Holocaust: Selected Sources on the Destruction of the Jews of Germany and Austria, Poland, and the Soviet Union. Jerusalem: Yad Vashem in cooperation with the Anti-Defamation League and KRAV Publishing, 1981.
Bauer, Yehuda. A History of the Holocaust. New York: Franklin Watts, 1982.
Cowell, Alan. "Neo-Nazis Seek Room for 'Real Power.'" Sun-Sentinel 8 Feb. 1998,
Broward-Metro Edition: 26A.
Dawidowicz, Lucy S. The War Against the Jews 1933-1945. New York:
Bantam Books, 1979.
Guttman, Israel, ed. "Der Stuermer." Encyclopedia of the Holocaust, Vol. 3.
New York: Macmillan Publishing Company, 1990: 1420-1421.
"Holocaust." Encyclopedia Judaica. 1978.
"Prejudice." Webster's II New College Dictionary. 1995.
Powell, Bill. "Lessons of Nuremberg." Newsweek 6 November 1995: 52-55.
Stadtler, Bea. The Holocaust: A History of Courage and Resistance. West Orange:
Behrman House, 1973.
United States. Chief of Counsel for Prosecution of Axis Criminality. Nazi Conspiracy
and Aggression: Opinion and Judgment. Washington: GPO, 1947.
Nitrous oxide, commonly known as laughing gas, is now the dominant ozone-depleting substance emitted by humans – and is likely to remain so throughout the century, a new study suggests.
Researchers suggest use of the compound – which is produced by the breakdown of nitrogen in fertilisers and sewage treatment plants – should be reduced to avoid thinning the protective ozone layer that blankets the Earth.
The ozone layer shields Earth from the sun’s ultraviolet rays, which increase the risk of cancer and threaten crops and aquatic life.
Human-produced chemicals called chlorofluorocarbons (CFCs) made headlines in the 1980s when it became clear they were eating a hole in the ozone layer above Earth’s polar regions. An international treaty called the Montreal Protocol regulated production of CFCs and certain other ozone-depleting gases in 1987, and they were phased out completely by 1996.
Since then, Earth’s ozone – both the polar hole and the atmospheric layer around the whole planet – has been on the mend. But the emission of nitrous oxide, which is not regulated by the Montreal Protocol, could reverse those gains – and could even make the situation worse.
“Right now, nitrous oxide is the most important ozone-depleting gas that is emitted,” says A. R. Ravishankara of the US National Oceanic and Atmospheric Administration, lead author of the new research. “It will continue to be so unless something is done.”
Nitrous oxide is also a heat-trapping greenhouse gas in the league of methane or carbon dioxide, so regulating it would also be good for the climate, he says.
Nitrous oxide (N2O) is produced naturally when nitrogen in soil or water is eaten by bacteria. It rises into the stratosphere, where most of it is broken down into harmless molecules of nitrogen and oxygen by the sun’s rays.
But some of it remains, and can survive for hundreds of years. The compound reacts with high-energy oxygen atoms to produce a deadlier compound, nitric oxide (NO). This then goes on to destroy ozone, a molecule made up of three oxygen atoms.
Nitrous oxide has no effect on the hole in the ozone layer, Ravishankara points out, but it makes the global layer thinner.
This chemical process has been known since the 1970s, when scientists were worried about the environmental effects of flying supersonic planes, which emit ozone-depleting nitrogen oxides.
Ravishankara and his colleagues are the first to put hard numbers on the role of nitrous oxide in ozone depletion.
To do so, they modelled the atmosphere and the chemical reactions that take place inside it. They found that nitrous oxide's potential to deplete ozone is comparable to that of the hydrochlorofluorocarbons (HCFCs), the substances that replaced CFCs but are also in the process of being phased out.
But although the depletion potential is roughly equivalent, nitrous oxide could have a more damaging effect because it is much more abundant. Global human emissions of N2O are roughly 10 million tonnes per year, compared to slightly more than 1 million tonnes from all CFCs at the peak of their emissions.
On the rise
Scientists say humans’ role in producing the harmful gas has largely been overlooked. Thanks to fossil fuel combustion, which produces the gas, as well as nitrogen-based fertilisers, sewage treatment plants and other industrial processes that involve nitrogen, about one-third of the nitrous oxide emitted per year is anthropogenic.
Although supersonic transport never took off, current N2O emissions destroy as much ozone as flying 500 such planes a day. Emission levels have increased by 0.25 per cent a year since pre-industrial times.
“Nitrous oxide is kind of the forgotten gas,” says Don Wuebbles of the University of Illinois at Urbana-Champaign, who invented the method of quantifying a chemical’s ozone-depletion potential but was not involved in this work. “It was always thought of as a natural thing. People have forgotten that it’s been increasing.”
And as CFC levels abate, nitrous oxide could become even more powerful. Nitrogen and chlorine compounds counteract each other's effects on ozone – the more chlorine there is, the less effective nitrogen becomes at destroying ozone, and vice versa. As CFCs are purged from the atmosphere, nitrous oxide will become 50 per cent more potent than it was before, Ravishankara says.
“People were expecting that ozone was just going to recover from the results of human activities that resulted in CFCs,” Wuebbles says. “Nitrous oxide could prevent that from happening.”
Journal reference: Science (DOI:10.1126/science.1176985)
9. As the ribosome moves on by two codons, the next round of protein synthesis is initiated by the attachment of a new ribosome. Thus, at any one time, a single mRNA is found attached to many ribosomes, each bearing a polypeptide of different length (the shortest polypeptide at the 5′ end of the mRNA and the longest at the 3′ end); such a complex is called a polysome.
Fig. 8.15 Peptide bond formation in growing polypeptide.
10. Ultimately, the A-site of the ribosome is occupied by a termination codon (UAA, UAG or UGA) at the 3′ end of the mRNA, which is not recognized by any tRNA. Thus, the termination of protein synthesis is helped by the release factors RF1, RF2 and RF3 (in eukaryotes, eRF1), which release the newly synthesized polypeptide chain from the P-site (Fig. 8.16).
Translation: Making Protein Synthesis Possible
The process of synthesis of proteins from mRNA (translation of the language of nucleic acids into the language of proteins) is called translation. There are 20 different types of amino acids, which constitute various proteins, and these amino acids themselves cannot recognize their respective codons in the mRNA. Different amino acids are carried by their specific tRNA molecules to the site of protein synthesis (the mRNA). There are about 55 types of tRNA molecules available in the cytoplasm, so one amino acid may have more than one tRNA.
The nucleotide sequences that are present in mRNA and contain the information for proteins are known as exons. Thus, the HnRNA produced after transcription is considerably longer than the mRNA. Most of the extra nucleotide sequences, including introns, are cleaved out by snRNPs (small nuclear ribonucleoproteins, or "snurps"). Moreover, after removal of the extra nucleotides from the 3′ end of the HnRNA, a poly-A tail is added, which is required for the stability of the mRNA. Similarly, after removal of extra nucleotides from the 5′ end, a cap of 7-methyl guanosine (7mG) is added, which is required for the translation process. The production and processing of HnRNA occur in the nucleus, from where the mRNA escapes into the cytoplasm through nuclear pores for the translation process (Figs. 8.11 and 8.12).
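The processing steps described above — splicing out introns, then adding the 5′ cap and 3′ poly-A tail — can be sketched in a few lines of Python. This is purely illustrative: the sequence and intron coordinates are invented, and the cap and tail are represented as simple string markers.

```python
# Hedged sketch of hnRNA -> mature mRNA processing: splice out introns,
# then add a 5' cap marker and a 3' poly-A tail. Coordinates are made up.

def mature_mrna(hnrna: str, introns: list[tuple[int, int]]) -> str:
    """Remove (start, end) intron slices, then cap and polyadenylate."""
    keep = []
    pos = 0
    for start, end in sorted(introns):
        keep.append(hnrna[pos:start])  # exon preceding this intron
        pos = end
    keep.append(hnrna[pos:])           # final exon after the last intron
    spliced = "".join(keep)
    # "7mG-" stands in for the 7-methyl guanosine cap; ten A's for the tail.
    return "7mG-" + spliced + "A" * 10

hn = "AUGGCCUUUGGGAAACCC"
print(mature_mrna(hn, introns=[(6, 12)]))  # removes the "UUUGGG" slice
```

A real spliceosome recognizes introns by sequence signals rather than fixed coordinates, so the coordinate list here is only a stand-in for that recognition step.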
Now two things happen. First, the transfer RNA carrying a methionine attaches itself to the AUG codon by pairing its anti-codon bases with the complementary bases on the messenger RNA. Second, the bigger part of the ribosome attaches to the complex as well.
RNA acts as the information bridge between DNA and protein
The process of synthesis of RNAs (mRNA, tRNA and rRNA) from DNA by the enzyme RNA polymerase is known as transcription. At the time of transcription, the RNA polymerase binds to double-stranded DNA (a gene) at a particular site (in prokaryotes known as the promoter site) and, after unwinding of the two strands of DNA by rotation of the DNA, it starts copying one of the two strands, known as the template strand (antisense or non-coding strand). The other strand of the DNA, which is not copied for RNA synthesis but whose sequence matches the RNA produced, is known as the coding strand (sense strand) (Fig. 8.6).
In the diagram, the anti-codon is for the amino acid methionine. The messenger RNA code for methionine is AUG. If you look at the code in the anti-codon for methionine, it is UAC. That is exactly complementary to AUG. The U in the anti-codon will pair with the A in the messenger RNA; the A in the anti-codon pairs with the U in the mRNA; and the C in the anti-codon pairs with the G in the mRNA.
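The base-pairing rule just described is simple enough to express in code. The sketch below (the helper name is my own) maps each mRNA base to its pairing partner; note that, by convention, a real anti-codon is written 3′→5′ relative to the codon, while this function simply reports the base-by-base complement.

```python
# Minimal sketch of codon/anti-codon pairing: A pairs with U, G with C.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon: str) -> str:
    """Return the base-pairing partner for each base of an mRNA codon."""
    return "".join(RNA_COMPLEMENT[base] for base in codon)

print(anticodon("AUG"))  # methionine codon -> "UAC", as in the text
```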
At the 3' end of every transfer RNA molecule, the chain ends with the sequence of bases C C A. Remember that the bases in RNA and DNA are attached to a backbone of alternating phosphate and sugar groups. At the very end of the chain is the -OH group on the 3' carbon of a ribose ring.
Transfer RNA is a short bit of RNA containing about 80 or so bases. These are mostly the same bases as in messenger RNA (A, U, G and C), but it also contains some modified bases which won't concern us at this level.
Are your citrus leaves turning green with yellow veins in winter?
When gardeners see this colour change in their citrus leaves, they often wonder if this is due to a nutrient deficiency, and if so, what they can do to fix it.
The abnormal yellowing of leaf tissue is called chlorosis, which is caused by a lack of the green pigment chlorophyll which is essential for photosynthesis.
Leaf chlorosis can be caused by various factors, plant nutrient deficiencies being one possible cause. Other possible causes include poor drainage, damaged or compacted roots and high (alkaline) soil pH.
Yellow Vein Chlorosis
When your citrus tree's leaves display yellow veins while the rest of the leaf remains a normal green colour, the condition is referred to as yellow vein chlorosis.
Usually, yellow vein chlorosis occurs during the autumn and winter period due to reduced nitrogen uptake by the roots in low soil temperatures. Citrus tree nitrogen uptake is generally lowest during dormancy; it increases during flowering and reaches its peak during fruit set. No matter how much nitrogen fertilizer is present in the soil, it is less available to the citrus tree in cold weather, and as such the tree displays the signs of nitrogen deficiency.
Nitrogen is classed as a mobile nutrient, which means that plants and trees can move it from one part of their structure (leaves and branches, etc.) to another, away from places where it’s no longer needed and into new growth.
Since the nitrogen in the soil is less available in the cold seasons, citrus trees will mobilize nitrogen reserves from older tissues, redirecting them during the spring flush into new leaves and flowers. When part of the nitrogen of the leaf is translocated back into the tree because of inadequate nutrition, the result is yellow-vein chlorosis.
When It’s Not Nutrient Deficiency
Keep in mind that yellow vein chlorosis can also be caused by girdling of branches, roots, or the tree trunk itself. Girdling (ring-barking) is the removal of a strip of bark right around a branch or trunk of a woody plant.
Look for obvious signs of physical damage to branches or the base of the trunk at soil level. Bark may be eaten by pest animals such as rodents (rats, rabbits, etc.) or damaged with powered gardening equipment such as line trimmers. Damage to the roots may occur due to root rot from waterlogged soils. Check drainage during the wet seasons and keep mulch away from the base of the trunk to prevent collar rot.
How do you distinguish whether the problem is related to girdling or cold weather nutrient deficiency? Unlike nutrient deficiency related yellow vein chlorosis, this type of damage will also cause leaf drop, fruit drop, dieback, and possibly the eventual death of the tree.
Treating Yellow Vein Chlorosis
If your citrus tree has cold weather induced yellow-vein chlorosis, what can you do?
Ideally, nothing! If you’ve been feeding your citrus at the right times of the year, typically at the start of spring and autumn at the very minimum (or as per the feeding directions on the fertilizer packaging) and you’re using a balanced fertilizer, then the tree will take care of itself when the weather warms up and it can better access the nitrogen available in the soil. Let Nature do the work, that’s the Permaculture approach!
If you’re obsessing about doing something, then you can use a foliar fertilizer which is sprayed on the leaves, where it’s absorbed directly. It’s the quickest method of getting nutrients into plants. Foliar nutrition is useful in condition where the tree’s ability to take up nutrients is decreased and it needs the extra nutrition, such as prolonged periods of drought, wet conditions or cold weather.
A rebreather is a type of breathing set that provides a breathing gas containing oxygen and recycles exhaled gas. This recycling reduces the volume of breathing gas used, making a rebreather lighter and more compact than an open-circuit breathing set for the same duration in environments where humans cannot safely breathe from the atmosphere. In the armed forces it is sometimes called "CCUBA" (Closed Circuit Underwater Breathing Apparatus).
Rebreather technology is used in many environments:
- Underwater - where it is sometimes known as "closed circuit scuba" or "semi closed scuba", or CCUBA = "closed circuit underwater breathing apparatus", as opposed to Aqua-Lung-type equipment, which is known as "open circuit scuba".
- Mine rescue and in industry - where poisonous gases may be present or oxygen may be absent.
- Space suits - outer space is, for all intents and purposes, a vacuum where there is no oxygen to support life.
- Hospital anaesthesia breathing systems - to supply controlled proportions of gases to patients without letting anaesthetic gas get into the atmosphere that the staff breathe.
- Submarines and hyperbaric oxygen therapy chambers - where the gas in the habitat must remain safe. Here the rebreather is big and is connected to the air in the habitat.
This article is mainly about diving rebreathers.
As a person breathes, the body consumes oxygen and makes carbon dioxide. A person with an open-circuit breathing set typically only uses about a quarter of the oxygen in the air that is breathed in. The rest is breathed out along with nitrogen and carbon dioxide.
With a rebreather, the exhaled gas is not discharged to waste. The rebreather recovers the exhaled gas for re-use. It absorbs the carbon dioxide, which would otherwise accumulate and cause carbon dioxide poisoning, and it adds oxygen to replace what was consumed. Thus, the gas in the rebreather's circuit remains breathable and supports life processes. Nearly always, the oxygen comes from a gas cylinder, and the carbon dioxide is absorbed in a canister full of an absorbent chemical designed for diving applications, such as Sofnolime, Dragersorb or Sodasorb. Some systems also use a prepackaged Reactive Plastic Curtain (RPC) based cartridge; a common brand name for these RPC cartridges is ExtendAir. These absorbents may contain small amounts of soda lime, but are generally less toxic. Pure oxygen is not considered safe for recreational diving below 6 metres, so recreational rebreathers also have a diluent cylinder to reduce the percentage of oxygen breathed and enable the set to be used at greater depths.
History of rebreathers
- See also: Timeline of underwater technology
Around 1620 in England, Cornelius Drebbel made an early oar-powered submarine. Records show that, to re-oxygenate the air inside it, he likely generated oxygen by heating saltpetre (sodium or potassium nitrate) in a metal pan to make it emit oxygen. That would turn the saltpetre into sodium or potassium oxide or hydroxide, which would tend to absorb carbon dioxide from the surrounding air. That may explain why Drebbel's men were not affected by carbon dioxide build-up as much as would be expected. If so, he accidentally made a crude rebreather nearly three centuries before Fleuss and Davis.
The first certainly known closed-circuit breathing device using stored oxygen and absorption of carbon dioxide by an absorbent (here caustic soda) was invented by Henry Fleuss in 1879 to rescue mineworkers who were trapped by water.
The Davis Escape Set was the first rebreather which was practical for use and produced in quantity. It was designed about 1900 in Britain for escape from sunken submarines. Various industrial oxygen rebreathers (e.g. the Siebe Gorman Salvus and the Siebe Gorman Proto) were descended from it, and Draeger rebreathers were being used for mine rescue by 1907.
The first known systematic use of rebreathers for diving was by Italian sport spearfishers in the 1930s. This practice came to the attention of the Italian Navy, which developed its frogman unit, a force that had a big effect in World War II.
In World War II, captured Italian frogmen's rebreathers influenced the design of British frogmen's rebreathers. Many British frogmen's breathing sets used pilot's oxygen cylinders recovered from shot-down German Luftwaffe planes. Those first breathing sets may have been modified Davis Submarine Escape Sets; their fullface masks were the type intended for the Siebe Gorman Salvus. In later operations different designs were used, leading to a fullface mask with one big face window; one version had a flip-up single window for both eyes to let the user bring binoculars to his eyes when on the surface. The divers wore bulky thick diving suits called Sladen suits. Early British frogmen's rebreathers had rectangular breathing bags on the chest like Italian frogmen's rebreathers; later British rebreathers had a square recess in the top so the bag could extend further up onto the shoulders, and in front a rubber collar that was clamped around the absorbent canister.
US Navy rebreathers were developed by Dr. Christian J. Lambertsen in the early 1940s for underwater warfare. Dr. Lambertsen, who currently works at the University of Pennsylvania, is considered by the US Navy as "the father of the Frogmen". Information about the early history of US frogmen's rebreathers is scarce, because the many available photographs of UDT men in training and operations rarely show a rebreather, perhaps a result of wartime secrecy.
Innovations in recreational diving rebreather technology
Over the past ten or fifteen years rebreather technology has advanced considerably often driven by the growing market in recreational diving equipment. Innovations include:
- The electronic, fully closed circuit rebreather itself - use of electronics and electro-galvanic fuel cells to monitor oxygen concentration within the loop and maintain a certain partial pressure of oxygen
- Automatic diluent valves - these inject diluent gas into the loop when the loop pressure falls below the limit at which the diver can comfortably breathe.
- Dive/surface valves or bailout valves - a device in the mouthpiece on the loop which connects to a bailout demand valve and can be switched to provide gas from either the loop or the demand valve without the diver taking the mouthpiece out of his or her mouth; an important safety device if carbon dioxide poisoning occurs.
- Integrated decompression computers - these allow divers to take advantage of the decompression benefits provided by the ideal mix in the loop of a fully closed circuit rebreather. By monitoring the oxygen content of the mix they can work out the inert gas content and generate a schedule of decompression stops.
- Carbon dioxide scrubber life monitoring systems - temperature sensors monitor the progress of the reaction of the soda lime and provide an indication of when the scrubber will be exhausted.
Advantages of rebreather diving
The main advantage of the rebreather over other breathing equipment is the rebreather's economical use of gas. With "open circuit" scuba, the entire breath is expelled into the surrounding water when the diver exhales. A breath inhaled from an open circuit scuba system whose cylinder(s) are filled with ordinary air is about 21% oxygen. When that breath is exhaled back into the surrounding environment, it has an oxygen level in the range of 15 to 16% when the diver is at atmospheric pressure. This leaves the available oxygen utilization at about 25%; the remaining 75% is lost.
At depth, the advantage of a rebreather is even more marked. The amount of CO2 in exhaled gas is not a constant percentage, but a constant partial pressure of about 0.04 bar. The amount of oxygen used from each breath is also about the same, so as the ambient pressure increases (as a result of going deeper), the percentage of oxygen used from each breath drops. At 30 m (100 ft), a diver's exhaled breath contains about 20% oxygen and about 1% CO2.
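The arithmetic behind these figures can be sketched as follows. Assuming, per the text, that each exhaled breath carries a fixed CO2 partial pressure of about 0.04 bar, and that the oxygen removed per breath likewise corresponds to a fixed ~0.05 bar at any depth (an assumption derived from the surface figures of 21% inhaled and ~16% exhaled), the exhaled percentages at depth follow directly from the ambient pressure. These are illustrative numbers, not dive-planning values.

```python
# Illustrative gas-economy arithmetic; figures come from the text above.
O2_DROP_BAR = 0.05   # assumed O2 partial-pressure drop per breath (~21% -> ~16% at 1 bar)
CO2_RISE_BAR = 0.04  # CO2 partial pressure in exhaled gas (constant with depth)

def ambient_pressure_bar(depth_m: float) -> float:
    """Absolute pressure: 1 bar at the surface plus ~1 bar per 10 m of seawater."""
    return 1.0 + depth_m / 10.0

def exhaled_fractions(depth_m: float) -> tuple[float, float]:
    """Return (O2 %, CO2 %) of exhaled gas for a diver breathing air at depth."""
    p = ambient_pressure_bar(depth_m)
    o2 = 21.0 - 100.0 * O2_DROP_BAR / p   # fractional drop shrinks as pressure grows
    co2 = 100.0 * CO2_RISE_BAR / p
    return o2, co2

for depth in (0, 30):
    o2, co2 = exhaled_fractions(depth)
    print(f"{depth:>3} m: exhaled O2 ~{o2:.1f}%, CO2 ~{co2:.1f}%")
```

At the surface this reproduces the ~16% O2 figure, and at 30 m it gives roughly 20% O2 and 1% CO2, matching the numbers quoted in the text.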
Long or deep dives using open circuit equipment may not be feasible as there are limits to the number and weight of diving cylinders the diver can carry. The economy of gas consumption is also useful when the gas mix being breathed contains expensive gases, such as helium. In normal use only oxygen is consumed: small volumes of expensive inert gases can be reused for many dives.
Rebreathers produce far fewer bubbles and make less noise than open-circuit scuba; this can conceal military divers and allow divers engaged in marine biology and underwater photography to avoid alarming marine animals and thereby get closer to them. The electronic fully closed circuit rebreather is able to minimise the proportion of inert gases in the breathing mix, and therefore minimise the decompression requirements of the diver, by maintaining a specific and relatively high oxygen partial pressure at all depths. The breathing gas in a rebreather is warmer and more moist than the dry, cold gas from open-circuit equipment, making it more comfortable to breathe on long dives and causing less dehydration in the diver.
Parts of a rebreather
Although there are several design variations of diving rebreather, all types have a gas-tight loop that the diver inhales from and exhales into. The loop consists of components sealed together. The diver breathes through a mouthpiece or a fullface mask (or with industrial breathing sets, sometimes a mouth-and-nose mask). This is connected to one or more tubes bringing inhaled gas and exhaled gas between the diver and a counterlung or breathing bag. This holds gas when it is not in the diver's lungs. The loop also includes a scrubber containing carbon dioxide absorbent to remove from the loop the carbon dioxide exhaled by the diver. Attached to the loop there will be at least one valve allowing injection of gases, such as oxygen and perhaps a diluting gas, into the loop. There may be valves allowing venting of gas from the loop.
Most modern rebreathers have a twin hose mouthpiece or breathing mask where the direction of flow of gas through the loop is controlled by one-way valves. Some have a single pendulum hose, where the inhaled and exhaled gas passes through the same tube in opposite directions. The mouthpiece often has a valve letting the diver take the mouthpiece from the mouth while underwater or floating on the surface without water getting into the loop. Many rebreathers have "water traps" in the counterlungs, to stop large volumes of water from entering the loop if the diver removes the mouthpiece underwater without closing the valve, or if the diver's lips get slack letting water leak in.
Carbon dioxide scrubber
The exhaled gases are forced through the chemical scrubber which removes the carbon dioxide from the gas mixture and leaves the oxygen and other gases available for re-breathing. The active ingredient of the scrubber is often soda lime. The carbon dioxide passing through the scrubber absorbent is removed when it reacts with the absorbent in the canister; this chemical reaction is exothermic. This reaction occurs along a "front" which is a cross section of the canister, of the unreacted soda lime that is exposed to carbon dioxide-laden gas. This front moves through the scrubber canister, from the gas input end to the gas output end, as the reaction consumes the active ingredients. However, this front would be a wide zone, because the carbon dioxide in the gas going through the canister needs time to reach the surface of a grain of absorbent, and then time to penetrate to the middle of each grain of absorbent as the outside of the grain becomes exhausted. In larger environments, such as recompression chambers, a fan is used to pass gas through the canister.
The term "break through" means the failure of the "scrubber" to continue removing carbon dioxide from the exhaled gas mix. There are several ways that the scrubber may fail or become less efficient:
- Complete consumption of the active ingredient ("break through").
- The scrubber canister has been incorrectly packed or configured. This allows the exhaled gas to bypass the absorbent. In a rebreather, the soda lime must be packed tightly so that all exhaled gas comes into close contact with the granules of soda lime, and the loop is designed to avoid any spaces or gaps between the soda lime and the loop walls that would let gas avoid contact with the absorbent. If any of the seals, such as O-rings, or spacers that prevent bypassing of the scrubber are not cleaned, lubricated or fitted properly, the scrubber will be less efficient, or outside water or gas may get into the circuit.
- When the gas mix is under pressure caused by depth, the inside of the canister is more crowded by other gas molecules (oxygen or diluent) and the carbon dioxide molecules are not so free to move around to reach the absorbent. In deep diving with a nitrox or other gas-mixture rebreather, the scrubber needs to be bigger than is needed for a shallow-water or industrial oxygen rebreather, because of this effect. Among British naval rebreather divers, this type of carbon dioxide poisoning was called shallow water blackout.
- A "caustic cocktail" - Soda lime is caustic and can cause burns to the eyes and skin. A caustic cocktail is a mixture of water and soda lime that occurs when the scrubber floods. It gives rise to a chalky taste, which should prompt the diver to switch to an alternative source of breathing gas and rinse his or her mouth out with water. Many modern diving rebreather absorbents are designed not to produce a caustic cocktail if they get wet.
- An indicating dye in the soda lime. It changes the colour of the soda lime after the active ingredient is consumed. For example, a rebreather absorbent called "Protosorb" supplied by Siebe Gorman had a red dye, which was said to go white when the absorbent was exhausted. With a transparent canister, this may be able to show the position of the reaction "front". This is useful in dry open environments, but is not useful on diving equipment, where:
- A transparent canister would likely be brittle and easily cracked by knocks.
- Opening the canister to look inside would flood it with water or let unbreathable outside gas into the circuit.
- The canister is usually out of sight of the user, e.g. inside the breathing bag or inside a backpack box.
- Temperature monitoring. As the reaction between carbon dioxide and soda lime is exothermic, temperature sensors, most likely digital, along the length of the scrubber can be used to measure the position of the front and therefore the life of the scrubber.
- Diver training. Divers are trained to monitor and plan the exposure time of the soda lime in the scrubber and replace it within the recommended time limit. At present, there is no effective technology for detecting the end of the life of the scrubber or a dangerous increase in the concentration of carbon dioxide causing carbon dioxide poisoning. The diver must monitor the exposure of the scrubber and replace it when necessary.
- Carbon dioxide gas sensors exist, but they are not sensitive enough to be used in a rebreather - the scrubber "break through" occurs quite suddenly and the diver shows symptoms before the sensor indicates a dangerous build-up of carbon dioxide. Even if a sensitive carbon dioxide sensor is developed, it may not be useful as the primary tool for monitoring scrubber life when underwater, because mixed gas rebreathers allow very long dives where long decompression stops may be needed: knowing that the rebreather will begin to deliver a poisonous breathing gas in five minutes may not be useful to a diver needing to carry out an hour or more of decompression stops.
In rebreather diving, the typical effective duration of the scrubber will be half an hour to several hours of breathing, depending on the granularity and composition of the soda lime, the ambient temperature, the design of the rebreather, and the size of the canister. In some dry open environments, such as a recompression chamber or a hospital, it may be possible to put fresh absorbent in the canister when break through occurs.
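The duration range quoted above can be roughed out from the canister size. The sketch below is a simplified estimate; the absorbent capacity (~0.12 L of CO2 per gram of soda lime) and the CO2 production rate (~1.6 L/min at moderate work) are illustrative assumptions, not manufacturer data, and real performance falls further in cold water. It is a sketch of the arithmetic, not a dive-planning tool.

```python
# Rough scrubber-duration estimate. Capacity and CO2-production figures
# are illustrative assumptions, not manufacturer data; real performance
# drops in cold water. A sketch of the arithmetic, not a planning tool.

def scrubber_duration_min(absorbent_g, capacity_l_per_g, co2_l_per_min):
    """Idealised minutes of use before the absorbent is exhausted."""
    return absorbent_g * capacity_l_per_g / co2_l_per_min

# Assumed: 2.5 kg of soda lime, ~0.12 L CO2 absorbed per gram,
# ~1.6 L/min CO2 production at moderate work.
minutes = scrubber_duration_min(2500, 0.12, 1.6)
print(f"{minutes:.0f} min (~{minutes / 60:.1f} h)")
```

The result (a little over three hours) lands in the middle of the "half an hour to several hours" range given above.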
Controlling the mix
A basic need with a rebreather is to keep the amount of oxygen in the mix, known technically as the partial pressure of oxygen or ppO2, from getting too low (causing anoxia or hypoxia) or too high (causing oxygen toxicity).
With humans, the urge to breathe is caused by a build-up of carbon dioxide rather than by lack of oxygen. When using a rebreather, as the oxygen in the circuit is used up, the resulting carbon dioxide is removed from the breathing gas by the scrubber, suppressing this natural warning. If not enough new oxygen is being added, and the oxygen in the circuit is far from 100% pure, the proportion of oxygen may become too low to support life even though plenty of gas seems to be in circuit. The resulting serious hypoxia causes sudden blackout with little or no warning. This makes hypoxia a deadly problem for rebreather divers; it was sometimes called "shallow water blackout".
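The hypoxia hazard follows directly from the relationship ppO2 = oxygen fraction × ambient pressure. A minimal sketch; the figures and thresholds in the comments are illustrative, not training-agency limits:

```python
# Loop oxygen partial pressure: ppO2 = FO2 x ambient pressure.
# The hypoxic threshold implied in the comments (~0.16 bar) is an
# illustrative figure, not an agency limit.

def ppo2(fo2, depth_m):
    """ppO2 in bar for oxygen fraction fo2 at depth_m in seawater."""
    ambient_bar = 1.0 + depth_m / 10.0  # roughly 1 bar per 10 m of seawater
    return fo2 * ambient_bar

# A loop that has drifted to only 10% oxygen is still breathable at 30 m:
print(ppo2(0.10, 30))  # 0.4 bar
# The same loop at the surface is hypoxic, with blackout the likely result:
print(ppo2(0.10, 0))   # 0.1 bar
```

This is also why ascent is the dangerous phase: a loop mix that supports consciousness at depth can become hypoxic as the ambient pressure falls.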
In many rebreathers the diver can control the gas mix and volume in the loop manually by injecting each of the different available gases to the loop and by venting the loop. The loop often has a pressure relief valve preventing the "hamster cheek" effect on the diver caused by over-pressure of the loop.
In some early rebreathers the diver had to manually open and close the valve to the oxygen cylinder to refill the counter-lung each time. In others the oxygen flow is kept constant by a pressure-reducing flow valve like the valves on blowtorch cylinders; the set also has a manual on/off valve called a bypass. In some modern rebreathers, the pressure in the breathing bag controls the oxygen flow like the demand valve in open-circuit scuba; for example, trying to breathe in from an empty bag makes the cylinder release more gas. Most modern closed-circuit rebreathers have electro-galvanic fuel cell sensors and onboard electronics, which monitor the ppO2, injecting more oxygen if necessary or issuing an audible warning to the diver if the ppO2 reaches dangerously high or low levels.
Underwater, the position of the breathing bag, on the chest, over the shoulders, or on the back, has an effect on the ease of breathing. The design of the rebreather also affects the swimming diver's streamlining and thus ease of swimming.
For use out of water, this does not matter so much: for example, in an industrial version of the Siebe Gorman Salvus the breathing bag hangs down by the left hip.
Some diving rebreather sets include a bailout regulator allowing the user to bail out onto open circuit using the diluent tank. This lets the diver ascend on a separate gas supply. The majority of rebreather trainers also teach students to carry an open-circuit scuba cylinder and regulator as a separate bailout source. Bailout is a key area of discussion in rebreather diving: as depth increases, the bailout strategy becomes a crucial part of planning, particularly for technical diving.
Many rebreathers have their main parts in a hard backpack casing. This casing needs venting to let surrounding water or air in and out, to allow for volume changes as the breathing bag inflates and deflates. In a diving rebreather this needs fairly large holes, including a hole at the bottom to drain the water out when the diver comes out of the water. The SEFA, which is used for mine rescue, is completely sealed to keep grit and stones out of its workings, except for a large vent panel covered with metal mesh and holes for the oxygen cylinder's on/off valve and the cylinder pressure gauge. Underwater the casing also serves for streamlining, e.g. in the IDA71 and Cis-Lunar.
Main rebreather design variants
Oxygen rebreather
This is the oldest type of rebreather and was commonly used by navies from the early twentieth century. The only gas that it supplies is oxygen. As pure oxygen is toxic when inhaled at pressure, oxygen rebreathers are limited to a depth of 6 meters (20 feet); some say 9 meters (30 feet). Oxygen rebreathers are also sometimes used when decompressing from a deep open-circuit dive, as breathing pure oxygen makes the nitrogen diffuse out of the blood more rapidly.
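The 6-meter limit follows from the same partial-pressure arithmetic: on pure oxygen the loop ppO2 equals the ambient pressure. A sketch, using 1.6 bar as an illustrative oxygen-toxicity ceiling (operational limits vary between navies and agencies):

```python
# On pure oxygen (FO2 = 1.0) the loop ppO2 equals the ambient pressure,
# so it climbs quickly with depth. 1.6 bar is used here only as an
# illustrative toxicity ceiling; operational limits vary.

def ppo2_pure_oxygen(depth_m):
    """ppO2 in bar on a pure-oxygen loop at depth_m in seawater."""
    return 1.0 * (1.0 + depth_m / 10.0)

print(ppo2_pure_oxygen(6))  # 1.6 bar: the commonly quoted 6 m limit
print(ppo2_pure_oxygen(9))  # 1.9 bar: the more permissive 9 m figure
```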
In some rebreathers, e.g. the Siebe Gorman Salvus, the oxygen cylinder has two first stages in parallel. One is constant flow; the other is a plain on-off valve called a bypass; both feed into the same exit pipe which feeds the breathing bag. In the Salvus there is no second stage and the gas is turned on and off at the cylinder. Some simple oxygen rebreathers had no constant-flow valve, but only the bypass, and the diver had to operate the valve at intervals to refill the breathing bag as he used the oxygen.
Semi-closed circuit rebreather
Military and recreational divers use these because they provide good underwater duration with fairly simple and cheap equipment. Semi-closed circuit equipment generally supplies one breathing gas such as air or nitrox or trimix. The gas is injected at a constant rate. Excess gas is constantly vented from the loop in small volumes.
The diver must fill the cylinders with gas mix that has a maximum operating depth that is safe for the depth of the dive being planned. As the amount of oxygen required by the diver increases with work rate, the oxygen injection rate must be carefully chosen and controlled to prevent either oxygen toxicity or unconsciousness in the diver due to hypoxia.
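Choosing a mix with a safe maximum operating depth (MOD) can be sketched as follows. The 1.4 bar ceiling is an illustrative working limit; actual limits differ by agency and application:

```python
# Maximum operating depth (MOD): the depth at which a mix's ppO2 reaches
# a chosen ceiling. 1.4 bar is an illustrative working limit.

def mod_metres(fo2, ppo2_max=1.4):
    """Depth in metres of seawater at which fo2 reaches ppo2_max."""
    return 10.0 * (ppo2_max / fo2 - 1.0)

print(round(mod_metres(0.32), 1))  # EAN32: about 33.8 m
print(round(mod_metres(0.21), 1))  # air: about 56.7 m
```

Note that with a ceiling of 1.6 bar and pure oxygen (fo2 = 1.0) the formula returns 6 m, matching the oxygen-rebreather depth limit given earlier.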
Fully closed circuit rebreather
Military, photographic and recreational divers use these because they allow long dives and produce no bubbles. Closed circuit rebreathers generally supply two breathing gases to the loop: one is pure oxygen and the other is a diluent or diluting gas such as air, nitrox or trimix.
The major task of the fully closed circuit rebreather is to control the oxygen concentration, known as the oxygen partial pressure, in the loop and to warn the diver if it is becoming dangerously low or high. The concentration of oxygen in the loop depends on two factors: depth and the proportion of oxygen in the mix. Too low a concentration of oxygen results in hypoxia, leading to sudden unconsciousness and ultimately death when the oxygen is exhausted. Too high a concentration of oxygen results in oxygen toxicity, a condition causing convulsions which, if they occur underwater, can make the diver spit out the mouthpiece and can lead to drowning.
In fully automatic closed-circuit systems, a mechanism injects oxygen into the loop when it detects that the partial pressure of oxygen in the loop has fallen below the required level. Often this mechanism is electrical and relies on oxygen sensitive electro-galvanic fuel cells called ppO2 meters to measure the concentration of oxygen in the loop.
The diver may be able to manually control the mixture by adding diluent gas or oxygen. Adding diluent can prevent the loop's gas mixture becoming too oxygen rich. Manually adding oxygen is risky as additional small volumes of oxygen in the loop can easily raise the partial pressure of oxygen to dangerous levels.
Rebreathers whose absorbent releases oxygen
There have been a few rebreather designs (e.g. the Oxylite) which had an absorbent canister filled with potassium superoxide, which gives off oxygen as it absorbs carbon dioxide: 4KO2 + 2CO2 = 2K2CO3 + 3O2; it had a very small oxygen cylinder to fill the loop at the start of the dive. This system is dangerous because of the explosively hot reaction that happens if water gets on the potassium superoxide. The Russian IDA71 military and naval rebreather was designed to be run in this mode or as an ordinary rebreather.
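The equation above balances (4 K, 12 O and 2 C on each side) and implies that 1.5 mol of oxygen is released per mole of carbon dioxide absorbed. A quick check with approximate molar masses:

```python
# Stoichiometry check for 4 KO2 + 2 CO2 -> 2 K2CO3 + 3 O2:
# 3 mol O2 released per 2 mol CO2 absorbed, i.e. 1.5 mol O2 per mol CO2.
# Molar masses are approximate.

M_CO2, M_O2 = 44.01, 32.0  # g/mol

def o2_released_g(co2_absorbed_g):
    """Grams of O2 given off per grams of CO2 absorbed by the superoxide."""
    mol_co2 = co2_absorbed_g / M_CO2
    return mol_co2 * 1.5 * M_O2

print(o2_released_g(44.01))  # 48.0 g of O2 per mole (44 g) of CO2
```

Since a diver produces roughly one mole of CO2 per mole of O2 consumed, the reaction returns more oxygen than the diver uses, which is why only a small starting cylinder is needed.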
Rebreathers which store their oxygen as liquid oxygen
If used underwater, the liquid-oxygen tank must be well insulated against heat coming in from the water. As a result, industrial sets of this type may not be suitable for diving, and diving sets of this type may not be suitable for use out of water. They include these types:
- Aerorlox. See http://www.healeyhero.co.uk/rescue/glossary/aerorlox.htm .
- Cryogenic rebreather: see below.
There have been plans for a "cryogenic rebreather". It has a tank of liquid oxygen and no absorbent canister. The carbon dioxide is frozen out in a "snow box" by the cold produced as the liquid oxygen expands to gas as the oxygen is used and is replaced from the oxygen tank.
Such a rebreather called the S-1000 was built around or soon after 1960 by Sub-Marine Systems Corporation. It had a duration of 6 hours and a maximum dive depth of 200 meters of salt water. Its ppO2 could be set to anything from 0.2 bar to 2 bar without electronics, by controlling the temperature of the liquid oxygen, thus controlling the equilibrium pressure of oxygen gas above the liquid. The diluent could be either liquid nitrogen or helium depending on the depth of the dive. The set could freeze out 230 grams of carbon dioxide per hour from the loop, corresponding to an oxygen consumption of 2 liters per minute. If oxygen was consumed faster (high workload), a regular scrubber was needed. See:
- Fischel H., Closed circuit cryogenic SCUBA, "Equipment for the working diver" 1970 symposium, Washington, DC, USA. Marine Technology Society 1970:229-244.
- Cushman, L., Cryogenic Rebreather, Skin Diver magazine, June 1969, and reprinted in Aqua Corps magazine, N7, 28, 79.
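The S-1000 figures above can be cross-checked. Assuming a respiratory quotient near 1 (about one mole of CO2 produced per mole of O2 consumed) and an ideal-gas molar volume of 22.4 L/mol, 230 g/h of frozen-out CO2 should indeed correspond to roughly 2 L/min of oxygen consumption:

```python
# Cross-check of the S-1000 figures, assuming a respiratory quotient
# near 1 and an ideal-gas molar volume of 22.4 L/mol.

M_CO2 = 44.01                    # g/mol, approximate
mol_co2_per_h = 230 / M_CO2      # about 5.23 mol/h of CO2 frozen out
o2_l_per_min = mol_co2_per_h * 22.4 / 60

print(round(o2_l_per_min, 2))    # about 1.95 L/min, consistent with the text
```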
- In the Siebe Gorman Proto the absorbent was in a flexible-walled compartment in the bottom of the breathing bag and not in a canister.
- An experimental drysuit (with built-in hood and full-face mask) and rebreather combination has been described in which the drysuit acts as the breathing bag, like an old Draeger standard diving suit variant which had a rebreather pack attached.
- Some British naval rebreathers (e.g. the Siebe Gorman CDBA) had a backpack weight pouch instead of the diver having a separate weight belt.
Risks and precautions with rebreather diving
Many diver training organizations teach the "diluent flush" technique as a safe way to restore the mix in the loop to a level of oxygen that is neither too high nor too low. It only works when the partial pressure of oxygen in the diluent alone would not cause hypoxia or hyperoxia, such as when using a normoxic diluent and observing the diluent's maximum operating depth. The technique involves simultaneously venting the loop and injecting diluent. This flushes out the old mix and replaces it with a known proportion of oxygen from the diluent.
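The effect of a full flush can be estimated: once the loop has been completely flushed, its oxygen fraction equals the diluent's, so the resulting ppO2 is known. A sketch with illustrative numbers, assuming air as the diluent:

```python
# After a complete diluent flush, the loop's oxygen fraction equals the
# diluent's, so the resulting ppO2 is known. Air diluent (21% oxygen) is
# assumed here purely for illustration.

def ppo2_after_flush(diluent_fo2, depth_m):
    """Loop ppO2 in bar immediately after a full flush with diluent."""
    return diluent_fo2 * (1.0 + depth_m / 10.0)

# At 30 m an air flush restores a known, safe ppO2:
print(round(ppo2_after_flush(0.21, 30), 2))  # 0.84 bar
# At 60 m the same flush is already close to common ppO2 ceilings,
# one reason the diluent's own operating range must be respected:
print(round(ppo2_after_flush(0.21, 60), 2))  # 1.47 bar
```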
Divers using oxygen rebreathers are advised to flush the system when they start the dive, to get surplus nitrogen out of the system.
In addition to the other diving disorders suffered by divers, rebreather divers are also more susceptible to:
- Sudden blackout due to hypoxia caused by too low a partial pressure of oxygen in the loop. A particular problem when using a closed circuit rebreather is the drop in ambient pressure caused by the ascent phase of the dive, which reduces the partial pressure of oxygen to hypoxic levels leading to what is sometimes called deep water blackout.
- Seizures due to oxygen toxicity caused by too high a partial pressure of oxygen in the loop. This can be caused by the rise in ambient pressure caused by the descent phase of the dive, which raises the partial pressure of oxygen to hyperoxic levels. In fully closed circuit equipment, aging oxygen sensors may become "current limited" and fail to measure high partial pressures of oxygen resulting in dangerously high oxygen levels.
- Disorientation, panic, headache, and hyperventilation due to excess of carbon dioxide caused by incorrect configuration, failure or inefficiency of the scrubber. The scrubber must be configured so that no exhaled gas can bypass it; it must be packed and sealed correctly. Another problem is the diver producing carbon dioxide faster than the absorbent can handle, for example, during hard work or fast swimming. The solution to this is to slow down and let the absorbent catch up. The scrubber efficiency may be reduced at depth where the increased concentration of other gas molecules, due to pressure, stops all the carbon dioxide molecules reaching the active ingredient of the scrubber.
- The rebreather diver must keep breathing in and out all the time, to keep the exhaled gas flowing over the carbon dioxide absorbent, so the absorbent can work all the time. Divers need to lose any air conservation habits that may have been developed while diving with open-circuit scuba. In closed circuit rebreathers, this also has the advantage of mixing the gases preventing oxygen-rich and oxygen-lean spaces developing within the loop, which may give inaccurate readings to the oxygen control system.
- "Caustic cocktail" in the loop if water comes into contact with the soda lime used in the carbon dioxide scrubber. The diver is normally alerted to this by a chalky taste in the mouth. A safe response is to bail out to "open circuit" and rinse the mouth out.
When compared with Aqua-Lungs, rebreathers have some disadvantages including expense, complexity of operation and maintenance and fewer failsafes. A malfunctioning rebreather can supply a gas mixture which cannot sustain life. Various rebreathers try to solve these problems by monitoring the system with electronics, sensors and alarm systems. Many very competent divers have died using rebreathers in accidents, which are often put down to operator error. Rebreathers are generally considered safer in extreme conditions such as deep dives (75m = 246 feet or more) or overhead environments, as they reduce the risk of running out of breathable gas.
The bailout requirement of rebreather diving can sometimes also require a rebreather diver to carry almost as much bulk of cylinders as an open-circuit diver, so that the diver can complete the necessary decompression stops if the rebreather fails completely. Some rebreather divers prefer not to carry enough bailout for a safe open-circuit ascent, relying instead on the rebreather in the belief that an irrecoverable rebreather failure is very unlikely. This practice is known as alpinism or alpinist diving and is widely criticised because of the extremely high risk of death if the rebreather fails.
Some makes of rebreather
- The Davis Submerged Escape Apparatus was the first or nearly the first rebreather to be made in quantity.
- The "Universal" rebreather was a long-dive derivative of the Davis Submerged Escape Apparatus, intended to be used with the Sladen Suit; see that article for more information.
- Military rebreathers (VIPER and SIVA) made by Carleton Life Support and the Viper E made by Carleton and Juergensen Defense Corporation
- Russian IDA71 military and naval rebreather
- CDBA = "Clearance Diver's Breathing Apparatus":
- In the British Navy the Carleton CDBA is (as at June 2007) planned to be superseded by the CDLSE ("Clearance Divers' Life Support Equipment") made by Divex in Aberdeen in Scotland. It is an electronic closed circuit rebreather allowing diving to 60 meters (197 feet).
- Siebe Gorman Salvus
- The Savox was made by Siebe Gorman. It was an oxygen rebreather with a use duration of 45 minutes. It was worn on the front of the body and had no hard casing.
- The Blackett's Aerophor is a nitrox semi-closed-circuit rebreather with liquid gas storage made in England from 1910 onwards for use in mine rescue and other industrial uses.
- SEFA is a make of industrial oxygen rebreather with 2 hours duration on a filling.
- SDBA is a type of frogman's oxygen rebreather. It has a nitrox variant called ONBA.
- FROGS (Full Range Oxygen Gas System) is a make of frogman's oxygen rebreather which has been used in France since 15 October 2002. It is made by the diving gear maker Aqualung.
- Some military rebreathers (for example the US Navy MK-25 and the MK-16 mixed-gas rebreather), and the Phibian CCS50 and CCS100 rebreathers, were developed by Oceanic. (Stuart Clough of Undersea Technologies developed the Phibian's electronics package.)
- The current US Navy Mark 16 Mod 2 (Explosive Ordnance Disposal) and Mark 16 Mod 3 (Naval Special Warfare) units use the Juergensen Defense Corporation Mark V Control System.
- The KISS line of manually operated closed circuit rebreathers designed by Gordon Smith of Jetsam Technologies
Other information sources
- Rebreatherworld, The largest online rebreather forum and community.
- Page in Russian describing the images at the next link
- Links to images of old and modern Russian rebreathers
- 100 Dollar Rebreather - Rebreather built from a hot water bottle
- Richard Pyle's rebreather page
- British Sub-Aqua Club - BSAC Technical Diving Resource Centre
- Some photos of various Soviet-Russian rebreathers (text in Russian)
- Diver Dave's site. It has many detailed photographs of rebreathers and their components.
- www.rebreathers.it: English and Italian language versions
- Rebreather Articles: English and French articles about rebreathers (Cedric Verdier)
- Rebreather Scuba Diving - information on rebreathers, including a rebreather library and a large set of rebreather forums
- Shallow Water Blackout
- Images of LAR-6 and LAR-7 and FGT II and LAR V rebreathers, and other combat frogman's kit
- Teknosofen homepage - General rebreather theory and rebreather tear-downs
- The Rebreather Site, including long lists of types of rebreathers
- DIRrebreather a website dedicated to rebreather in the DIR philosophy
- TMIShop.com a website dedicated to rebreather information dissemination
Surface-only (industrial) rebreather manufacturers
- BioPak 240R Revolution - its maker claims a 4-hour-duration rebreather
Diving rebreather manufacturers
- Narked at 90 - Rebreather controllers, safety devices, upgrades, bespoke parts and components. Winners of the DEMA Innovation award 2007.
- Ambient Pressure Diving - maker of the Inspiration and Evolution rebreathers.
- Analytical Industries - manufacturer of oxygen sensors for rebreathers.
- Carleton Life Support Technologies - manufacturer of the VIPER and SIVA military rebreathers.
- CCR2000 for CCR2000 rebreathers.
- Cis-Lunar - made closed-circuit automatic rebreathers, now operated by Juergensen Marine.
- Closed Circuit Research Ltd - manufacturer of the Ouroboros rebreather.
- Divematics - maker of the Shadow Pac II rebreather.
- Dive Rite - technical SCUBA gear pioneer established in 1984. Manufacturer of the O2ptima FX closed circuit rebreather.
- Divex Ltd - manufacturer of several military semi-closed and closed circuit rebreathers.
- Draeger Safety - maker of various semi-closed circuit rebreathers.
- Halcyon - maker of a semi-closed circuit rebreather.
- Jetsam - maker of the KISS rebreather.
- Laguna Research Inc. manufacturer of rebreather controller and monitoring systems.
- Megalodon & Mini Meg - The Megalodon Expedition class rebreather.
- O.M.G. Italy - manufacturer of the AZIMUTH and many military rebreathers.
- Rebreatherlab - Manufacturers of the Pelagian rebreather.
- Rebreather US - The Juergensen Marine Hammerhead Electronic System.
- Rebreathers Australia - maker of the Abyss and Stingray closed circuit rebreathers.
- Siebe Gorman, see also Siebe Gorman. Important in diving history, but now closed down.
- Shearwater Research - Rebreather monitors, controllers, and computers.
- Steam Machines - Prism rebreathers.
- Submatix Rebreather - manufacturer of the Submatix SCR 100 ST.
- Subsea Systems - manufacturer of rebreather electronics.
- Teledyne Analytical Instruments - manufacturer of oxygen sensors for rebreathers.
Yiddish (ייִדיש yidish; literally, "Jewish") is a language spoken by Ashkenazi Jews. It developed from medieval High German dialects, with many loan words from other European languages and from Hebrew, and it is written in the Hebrew alphabet. When the modern state of Israel was created in 1948, a choice was made between Yiddish and Hebrew as its national language; Hebrew won out.
American English has many loanwords from Yiddish, for example bagel, kvetch, borscht and chutzpah, to name just a few. Yiddish was also spoken by many Jewish American comedians, such as the Three Stooges and the Marx Brothers, who often used it as humorous "gibberish" when assuming another identity.
Some Yiddish songs have been introduced successfully into American culture, the most popular being "Bei Mir Bist Du Shein".
Origin and History of Yiddish
The Galut (exile) of the Jews began after the destruction of the Second Temple by the Roman general Titus in 70 A.D. Captives were triumphantly displayed in Rome, and Jews dispersed into the lands along the Rhine Valley, known in Hebrew as Ashkenaz, which is now Germany. There they learned the language of the land, which was developing into modern German; the Jews called their own variety of this early German Yiddish. From the Rhine, many of the "Ashkenazis" moved (or were moved) to Eastern Europe, and many later fled from there to America, Israel, Latin America and elsewhere, learning new languages but also keeping their old language, not Hebrew, but Yiddish. This shared language, together with a common religion, fostered unity and brotherhood.
(Last Updated on : 11/12/2012)
The Sepoy Mutiny of 1857, also known as the Indian Rebellion of 1857, was a rebellion of the native foot soldiers of British-colonised India against the British Empire. It is considered the first movement against the British rulers, and it sparked the Indian freedom struggle.
Origin of Sepoy Mutiny
The Sepoy Mutiny, or the Revolt of 1857, was one of many uprisings in the struggle for independence. By 1857 India had come under the complete control of the British, and a constant urge and spirit of freedom had begun to build among the natives of India. The mutiny, which had been brewing since long before March 1857 in Kolkata, is known as the first war of independence against the British. Though a widespread movement, it was ultimately unsuccessful and had run its course by 1858. It started in Meerut and eventually spread to Delhi.
Areas of Sepoy Mutiny
The Sepoy Mutiny of 1857 started as a rebellion of soldiers, or sepoys, of the British East India Company's army on the 10th of May 1857, in the town of Meerut, and soon ignited other mutinies and civilian uprisings, mostly in the upper Gangetic Plain and central India, with the major hostilities confined to present-day Uttar Pradesh, northern Madhya Pradesh, and the Delhi region. The insurgents speedily captured large portions of the North-Western Provinces and Oudh, including Delhi, where they set up the Mughal ruler, Bahadur Shah Zafar, as Emperor of Hindustan.
Other regions of Company-controlled India (the Bengal province, the Bombay Presidency, and the Madras Presidency) remained calm for the larger part. In Punjab, only recently annexed by the British East India Company, the Sikh princes backed the Company by furnishing both soldiers and support. The large princely states of Hyderabad, Travancore, and Jammu and Kashmir, as well as the smaller ones of Rajputana, did not participate in the rebellion, and served, in Governor General Lord Canning's words, as "breakwaters in a storm" for the Company.
Uprising of British Indian Army in Sepoy Mutiny
A sepoy of the 34th Infantry at Barrackpur rebelled by firing at an officer, in protest against the use of the new cartridges. He was arrested, and on 8 April he was hanged. This was followed by repeated outbreaks of revolt at Kolkata, and Europeans were in a state of prolonged panic. In April 1857, Indian soldiers of a cavalry unit in Meerut refused to use the new cartridges, which ultimately led to their arrest and imprisonment.
Although the mutiny started among the native soldiers, others aggrieved by British rule also joined hands. An appeal was forwarded to the 82-year-old Mughal Emperor Bahadur Shah Zafar to lead the revolt, and he was proclaimed the Emperor of India.
Delhi was seized by the sepoys on 12th May 1857, and the palace and the city were occupied. The old Mughal Emperor Bahadur Shah II was persuaded to lend support to the anti-British activities and was proclaimed the Emperor of India. In spite of strong resistance from the sepoys, the British recaptured the city on 20th September. The emperor was exiled for life to Rangoon (Burma), where he died in 1862 at the age of eighty-seven. This was the end of the once mighty Mughal Dynasty in India.
The sepoys captured Kanpur on 5th June 1857. Nana Sahib, the adopted son of Peshwa Baji Rao, was proclaimed the Peshwa. He led the revolt in Kanpur along with Tantia Tope, his able and experienced lieutenant.
In June 1857 the British general defeated Nana Sahib. Though Nana Sahib and Tantia Tope recaptured Kanpur in November 1857, they could not hold it for long, as it was reoccupied by General Campbell on 6th December 1857.
The sepoys rebelled in Awadh soon after the events in Meerut, and the rebellion broke out at Lucknow on 4th June. The rebels' initial attempts were unsuccessful. Begum Hazrat Mahal, who was acting as a regent for her son, besieged the British Residency along with the rebels, killing Sir Henry during the siege. The fighting continued till the end of the year, and the rebels were defeated in November 1857. In March 1858 the city was finally recaptured by the British after three weeks of fierce fighting.
Jhansi became a centre of rebellion when war broke out. Lakshmi Bai led the rebellion against the British and was proclaimed the ruler of the state. The British army besieged Jhansi. However, the mutiny failed due to lack of strong leadership and proper coordination.
Causes of Sepoy Mutiny
The Indians' customary daily life saw their vision of the future gradually shattered by continuous English intrusion. The East India Company had initially come to India with a very different intention, which changed in due course. The first uprisings of 1857 were thus well justified. Numerous political, social, economic, religious and, above all, military causes led to the heroic attempt of the Sepoy Mutiny taking the form it did.
Effects of Sepoy Mutiny
The Sepoy Mutiny affected every Indian in different ways, and even the British living in England. Many people became divided into pro-British and anti-British groups and sects. The most ruthless immediate effect was that thousands of native army-men were slaughtered mercilessly; however, the British authorities in London largely justified these killings in the press.
Echinopsis is a large genus of cacti native to South America, sometimes known as hedgehog cactus, sea-urchin cactus or Easter lily cactus. One small species, E. chamaecereus, is known as the peanut cactus. The roughly 128 species range from large and treelike types to small globose cacti. The name derives from the Greek echinos (hedgehog or sea urchin) and opsis (appearance), a reference to these plants' dense coverings of spines.
Echinopsis is distinguished from Echinocactus by the length of the flower tube, from Cereus by the form and size of their stems, and from both in the position on the stem occupied by the flowers. They are remarkable for the great size, length of tube, and beauty of their flowers, which, borne upon generally small and dumpy stems, appear much larger and more attractive than would be expected.
Echinopsis species are native to South America (Argentina, Chile, Bolivia, Peru, Brazil, Ecuador, Paraguay and Uruguay). They grow only in situations where the soil is sandy or gravelly, or on the sides of hills in the crevices of rocks.
The growing and resting seasons for Echinopsis are the same as for Echinocactus. Research by J. Smith (former Curator at the Royal Botanic Gardens, Kew) showed that species like the Chilean Echinopsis cristata and its Mexican relatives thrive if potted in light loam, with a little leaf mould and a few nodules of limestone. The limestone keeps the soil open; it is important that the soil should be well drained. In winter, water must be given very sparingly, and the atmosphere should be dry; the temperature need not exceed 10°C during the night, and in very cold weather it may be allowed to fall to 5°C, provided a higher temperature of 14°C is maintained during the day. In spring, the plants should receive the full influence of the increasing warmth of the sun; and during hot weather, they will be benefited by frequent spraying overhead, which should be done in the evening. The soil should never be saturated, as the soft fibrous roots will rot if kept wet for any length of time.
None of the species need to be grafted to grow freely and remain healthy, as the stems are all robust enough and of sufficient size to take care of themselves. The only danger is in keeping the plants too moist in winter, for although a little water now and again keeps the stems fresh and green, it deprives them of that rest which is essential to the development of their large, beautiful flowers in summer.
Studies in the 1970s and 1980s resulted in several formerly separate genera being absorbed into Echinopsis:
- Acantholobivia Backeb.
- Acanthopetalus Y.Itô
- Andenea Fric (nom. inval.)
- Aureilobivia Fric (nom. inval.)
- Chamaecereus Britton & Rose
- Chamaelobivia Y.Itô (nom. inval.)
- Cinnabarinea Fric ex F.Ritter
- Echinolobivia Y.Itô (nom. inval.)
- Echinonyctanthus Lem.
- Furiolobivia Y.Itô (nom. inval.)
- Helianthocereus Backeb.
- Heterolobivia Y.Itô (nom. inval.)
- Hymenorebulobivia Fric (nom. inval.)
- Hymenorebutia Fric ex Buining
- Leucostele Backeb.
- Lobirebutia Fric (nom. inval.)
- Lobivia Britton & Rose
- Lobiviopsis Fric (nom. inval.)
- Megalobivia Y.Itô (nom. inval.)
- Mesechinopsis Y.Itô
- Neolobivia Y.Itô
- Pilopsis Y.Itô (nom. inval.)
- Pseudolobivia (Backeb.) Backeb.
- Rebulobivia Fric (nom. inval.)
- Salpingolobivia Y.Itô
- Scoparebutia Fric & Kreuz. ex Buining
- Setiechinopsis (Backeb.) de Haas
- Soehrensia Backeb.
- Trichocereus (A.Berger) Riccob.
Like several other taxonomic changes in Cactaceae, this one has not been universally accepted. Amateur and professional growers still use names like Echinopsis (in the older sense), Lobivia, Setiechinopsis and Trichocereus, although many of the others listed above fell out of common usage long before the change.
Changing the genus name necessitated using some different specific epithets to avoid creating duplicate names. Thus both Echinopsis bridgesii and Trichocereus bridgesii previously existed. These are very different plants: Echinopsis bridgesii is a short clumping cactus, whereas Trichocereus bridgesii is a tall columnar cactus similar to E. (or T.) pachanoi. Under the new classification, Trichocereus bridgesii becomes Echinopsis lageniformis.
- Note: some of the species listed below may be synonyms, subspecies, or varieties of others.
Many hybrids exist, mostly between similar species but also between more distinct ones, such as the cross between E. pachanoi and E. eyriesii which was sold under the name "Trichopsis pachaniesii" by Sacred Succulents.
Echinopsis huascha Botanical Garden Meran
- Edward F. Anderson, The Cactus Family (Timber Press, 2001) ISBN 0-88192-498-9, pp. 255–286
- K. Trout, Trout's Notes on San Pedro & related Trichocereus species (Sacred Cacti 3rd ed. Part B) (Moksha Press, 2005) ISBN 0-9770876-0-3
- Cactus Culture for Amateurs by W. Watson (1889)
- SucculentCity: Plant Profiles, Photographs & Cultivation Data
- Growing Trichocereus species (Plot55.com)
- Kuentz: Echinopsis (in French)
- Lohmueller: Echinopsis
- Mattslandscape: Echinopsis hybrids growing culture |
When you are given a real world problem that must be solved, you may be given various pieces of the equation. If you are given the slope and the y-intercept, then you have it made.
You have all the information you need, and you can create your graph or write an equation in slope intercept form very easily.
However, most times it's not that easy and we are forced to really understand the problem and decipher what we are given. It could be slope and the y-intercept, but it could also be slope and one point or it could be just two points.
If you are given slope and a point, then it becomes a little trickier to write an equation. Although you have the slope, you need the y-intercept.
You have enough information to find the y-intercept, but it requires a few more steps. Let's look at an example.
That was a little tougher only because we needed to add the extra step of finding the y-intercept. Since you are so awesome at solving equations, I'm sure this wasn't too painful.
Now let's look at a real world applications of this skill. Here you will have to read the problem and figure out the slope and the point that is given. The slope is going to be your "rate" and the point will be two numbers that are related in some way.
Ok, so if you are given slope and a point, then you need to substitute for m (slope), x, and y and then solve for b! Once you have m (slope) and b (y-intercept), you can write an equation in slope intercept form.
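The substitute-and-solve step can be sketched in a few lines of Python. The slope and point below are made-up numbers, not taken from any particular worked example:

```python
def slope_intercept_from_point(m, x1, y1):
    """Given slope m and a point (x1, y1) on the line, find b.

    Substitute the point into y = m*x + b and solve for b:
        y1 = m*x1 + b  =>  b = y1 - m*x1
    """
    b = y1 - m * x1
    return b

# Example: slope 2, passing through the point (3, 10)
b = slope_intercept_from_point(2, 3, 10)
equation = f"y = {2}x + {b}"   # "y = 2x + 4"
```

With b in hand, writing the equation in slope intercept form is just filling in m and b.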
Now you are ready to solve real world problems given two points. It's not that hard - I promise.
Copyright © 2009-2015 Karin Hutchinson ALL RIGHTS RESERVED |
Art History Program | Student Learning Outcomes and Goals
Basic Concepts and Foundations
- Develop a vocabulary to describe and analyze course material.
- Understand and recognize objects of art and visual culture and assess them in relation to their historical and cultural contexts.
- Understand prerequisites and modalities of artistic production and training.
- Understand the merit of studying art and visual culture in historical perspective.
- Recognize the way in which art and visual culture relate to the human experience on individual and collective levels.
- Understand their own cultural environment within a global context.
- Appreciate art and material culture of different traditions.
- Realize the interconnectedness of world art while recognizing that one’s own view of the world may not be universally shared and that others may have profoundly different perspectives.
- Develop the ability to perceive works of art from more than one cultural viewpoint.
Research Methods, Writing and Presentation Skills
- Develop appropriate methods to research, analyze and discuss course material in written assignments and class presentations. |
What is an allergy?
The difference between allergies and intolerances is that allergies tend to be hereditary and for life. Many children and babies suffer from allergies to tree nuts, peanuts, egg and dairy. However, in many instances, they will eventually grow out of these.
How are allergies detected?
By measuring the blood’s Immunoglobulin E (IgE) level. IgE antibodies are produced by the immune system to help combat allergens. If you are allergic to a specific food or non-food item, your body will start to react immediately upon contact.
What are typical allergic reactions?
These can include localized swelling (e.g., of the throat or tongue), a rash, or difficulty breathing.
Common allergens include:
What is a food intolerance?
An intolerance, unlike an allergy, may change according to an individual’s diet or lifestyle. With intolerances, dietary changes give you the ability to reduce and even eliminate them.
Symptoms include diarrhea, bloating and stomach cramps, and problems digesting particular food types such as lactose.
Intolerances are due to a number of factors, most commonly because:
- The body lacks the vital digestive enzyme(s) needed to digest the specific food properly or efficiently and extract nutrients from it.
- A sensitivity resulting from over-consumption or overexposure.
Experts in food intolerance and allergy testing
We have been active in this field of research for over ten years and our laboratories have now completed over 65,000 tests. We believe our test is currently the best available.
– Testing of over 600 items in a single test
– Non-invasive testing – with samples of just a single strand of hair needed
– Cost effective with a range of tests to suit you or your family |
(Sandipan Dey, 14 August 2016)
- In this article, a mathematical model for the growth of a sunflower (shown below) will be described (reference: the video lectures of Prof. Jeffrey R Chesnov from Coursera Course on Fibonacci numbers).
- New florets are created close to the center.
- Florets move radially out with constant speed as the sunflower grows.
- Each new floret is rotated through a constant angle before moving radially.
- Denote the rotation angle by 2πα, with 0 < α < 1.
- With Φ = (1+√5)/2, the golden ratio, the most irrational of the irrational numbers, and using α = 1/Φ², the following model of the sunflower growth is obtained, as can be seen from the following animation in R.
- In our model 2πα is chosen to be the golden angle, since α is then very difficult to approximate by a rational number.
- The model contains 34 anti-clockwise and 21 clockwise spirals, which are Fibonacci numbers, since the golden angle can be represented by the continued fraction
- α = 1 − ψ = ψ² = 1/Φ² = 1/(1 + Φ) = [0; 2,1,1,1,1,1,1,…], where ψ = 1/Φ.
- Then we can prove that .
- Proof by induction (on n) |
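The growth rules above can be sketched in a few lines of Python. This is only an illustration of the model, not the original R animation, and the floret count and radial speed are arbitrary choices:

```python
import math

PHI = (1 + math.sqrt(5)) / 2        # golden ratio
ALPHA = 1 / PHI**2                  # fraction of a full turn, ~0.382
GOLDEN_ANGLE = 2 * math.pi * ALPHA  # ~137.5 degrees

def floret_positions(n_florets, speed=1.0):
    """Snapshot of the sunflower after n_florets have been created.

    Floret k was rotated k golden angles at birth and has moved
    radially outward at constant speed ever since, so the oldest
    florets sit farthest from the center.
    """
    pts = []
    for k in range(n_florets):
        theta = k * GOLDEN_ANGLE      # constant rotation per new floret
        r = speed * (n_florets - k)   # constant radial speed since birth
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

pts = floret_positions(300)
```

Plotting `pts` (e.g. with matplotlib) shows the familiar interleaved clockwise and anti-clockwise spirals; the spiral counts come out as Fibonacci numbers because the convergents of the continued fraction of α have Fibonacci denominators.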
Some images of Ancient Egyptian gods and goddesses show them with a human body and the head of a bird or an animal. Bastet was the Goddess of Protection, of joy, love, pleasure and pregnant women. In Egyptian mythology, the sacred cat is the animal incarnation of the goddess Bast or Bastet. She was the protector of women and childbirth, as well as a loving goddess who enjoyed music and dance.
Anubis guided the dead to the next life via the court of Osiris in the Underworld, and he was the one who looked after the mummification process. Ra was the most important god. He was the lord of all the gods. He was usually shown in human form with a falcon head, crowned with the sun disc encircled by a sacred cobra.
Ra sailed across the heavens in a boat called the 'Barque of Millions of Years'. At the end of each day Ra was thought to die and sailed on his night voyage through the Underworld, leaving the Moon to light the world above.
The Myth of Osiris and Isis. Ancient Egyptian Gods - discover more. Egyptian Gods Discover more about Egyptian gods. Click on a god's name or symbol name for picture and information or story. Click on the name of the god you want on the left hand side of the page.
What are pyramids and mummies? When people died, they were mummified — this process took a long time, but it prepared them for the afterlife. The Egyptians believed that there were many gods who oversaw different parts of life on earth and life after death. It was important that someone was able to reach the afterlife and the god Osiris, so mummification was taken very seriously.
During the mummification process, the internal organs were put into containers called canopic jars. The pharaohs — the kings and queens of Egypt — were thought to be gods themselves. Some were buried in elaborate tombs called pyramids, though some were buried in underground tombs in the Valley of the Kings.
The pyramids at Giza are the biggest that we can see today, but we have found around 80 pyramids from Ancient Egypt.
Pyramids took a long time to build, so work would start on them while the pharaoh was still alive. The Egyptians used their knowledge of maths to build pyramids that were shaped well and positioned properly. Inside pyramids, there were different chambers that held things the king would need in the afterlife.
We learned about how mummies were made, and what Egyptians believed about the afterlife, because of discoveries by archaeologists. The timeline ran roughly as follows:
- Pyramids started to be built.
- The Great Pyramid of Giza was built.
- The other pyramids in Giza were built, as well as the Sphinx.
- Mummies of pharaohs were buried in the Valley of the Kings.
- King Rameses II ruled.
- Cleopatra VII ruled; she was the last pharaoh.
- The Rosetta Stone was found.
Kings and queens in Egypt were called pharaohs. When pharaohs died, they would be buried in decorated tombs.
These tombs would sometimes be inside a pyramid, which has four sides shaped like triangles. The largest pyramid in Egypt is the Great Pyramid of Giza. It might have taken more than 20 years to build! It was a tomb for King Khufu. King Tutankhamen (also called King Tut, for short) died when he was very young.
Tombs would be covered by a decorated stone called a stela — it had information about the person buried inside. Some Egyptian gods had animal heads, which would have something to do with the certain kind of power that the god had. For instance, the god Khnum had the head of a ram, because he was a mighty fighter.
The Egyptians invented mummification, which is a process of preserving a body. There was a lot involved in mummifying, and it was all very important because people believed it helped them get to the afterlife. It was also expensive, so only people who could afford it were mummified.
Not just humans were mummified — archeologists have found mummies of animals such as cats, dogs and even bulls and crocodiles too. This was done to please the gods. The Great Sphinx is a huge stone sculpture near the pyramids in Giza. It has the head of a person with the body of a lion.
During mummification, the internal organs were put into four canopic jars, each with a different top and a special purpose: The jar for the intestines had a falcon head, and was called Qebehsenuf. The jar for the stomach had the head of a desert dog, and was called Duamutef. The jar for the lungs had the head of an ape, and was called Hapy. The jar for the liver had a human head, and was called Imsety.
The story of ancient Egypt has survived for thousands of years. Egypt was one of the greatest civilizations of the past. The monuments and tombs of their Pharaohs continue to stand intact today, some 4, years later!
There were over 2, names of gods in Ancient Egypt. Animals were chosen to represent the powers of the god.
A process of artificial preservation, called mummification, was developed by the ancient Egyptians. Mummification was a complicated process.
The Egyptians believed that when they died, they would make a journey to another world where they would lead a new life. They would need all the things they had used when they were alive, so their families would put those things in their graves. |
Gastroesophageal reflux disease (GERD) and laryngopharyngeal reflux disease (LPRD) make up the last branch of chronic airway-digestive inflammatory disease (CAID). When we swallow, food and liquids travel through the esophagus and land in the stomach, where stomach acids help the digestive process. Within the esophagus are two constricting muscles: the lower esophageal sphincter and the upper esophageal sphincter. During normal swallowing, these rings of muscle open and close at the precise moment that food passes through the esophagus. When the lower esophageal sphincter is not functioning properly, there is a backflow of stomach acid into the esophagus. This acid flow irritates the esophagus causing heartburn, a painful, burning sensation in the chest. If this happens, it can be a sign of GERD. Additionally, recent research reveals that the stomach enzyme pepsin causes damage to the airway - digestive membranes when it refluxes.
Those of us who experience GERD have a clear picture of how it feels. Typically, the profile of a GERD patient is someone who is sedentary; is slightly overweight; and has a history of burping, heartburn, and stomach pain, usually associated with meals. GERD sufferers
often regurgitate their food, particularly at night, which can cause chronic coughing. This symptom is usually a clue that the GERD complications are reaching beyond the esophagus. When the upper esophageal sphincter doesn’t function correctly, acid that has already flowed back into the esophagus enters the throat and voice box. When this happens, acidic material contacts the sensitive tissue at the back of the throat and even the back of the nasal airway - causing heartburn, sore throat, phlegm, postnasal drip, cough, choking, hoarseness, and/or CAID. This is known as LPRD. GERD and LPRD can occur separately or together.
Other symptoms of LPRD include a bitter taste in the back of the throat, which commonly occurs in the morning upon awakening, and the sensation of a lump or something stuck in the throat, which does not go away despite multiple swallowing attempts. Some adults may also experience a burning sensation in the throat. Others may find themselves with ear pain caused by inflammation of the throat; laryngitis caused by the inflammation of the voice box from the reflux; gingivitis, which is irritation and inflammation of the gums as a result of the acids burning the membranes around the teeth; nasal obstruction caused by the inflammation of the nasal membranes, resulting in nasal swelling; noisy breathing (stridor) caused by the inflammation and swelling around the airway; or bad breath (halitosis).
More than half of LPRD sufferers do not experience heartburn: The stomach acid does not stay in the esophagus long enough to irritate it and cause these symptoms. In LPRD, most of the damage to the esophagus and/or the throat is caused by reflux that happens without you ever knowing it. The frequency and the contact time of the acid with mucous membranes in the pharynx is much greater than the contact with the esophagus. Compared to the esophagus, the mucous membranes in the voice box and the back of the throat are significantly more sensitive to the effects of stomach acids. Acid that passes quickly through the food pipe does not have a chance to irritate the area for long. However, acid that pools in the throat and voice box will cause prolonged irritation, resulting in the symptoms of LPRD. For this
reason, LPRD is often referred to as “silent GERD,” and can be very difficult to diagnose.
Irritation in the voice box can lead to a laryngospasm, which means that the voice box contracts. This can be very scary: It feels as if you are going to choke or suffocate. Some doctors believe that laryngospasm, GERD, and panic attacks are related. Worst of all, acids can enter the trachea and the lungs, where they can be even more damaging. These tissues can become irritated, leading to bronchitis and asthma. In patients who have bad asthma, this irritation can set off status asthmaticus, in which the lungs tighten up and you cannot get air. Status asthmaticus is rarely caused by reflux; but if it occurs, it can be life threatening.
Finally, 10-15 percent of patients who have chronic GERD can end up with histologic changes of the esophageal lining. This occurs in the lower esophagus. This disease is called Barrett’s esophagus: In rare instances, the acids cause the normal lining of the esophagus to be replaced by the type of lining that is found in the stomach or intestine. When this membrane is continuously subjected to refluxing acid, esophageal cancer can develop. A gastroenterologist can make this diagnosis with an outpatient endoscopy and biopsy of the esophagus. For people found to have these changes, close monitoring of the esophagus is necessary. Furthermore, the refluxing acids can potentially cause ulcers in the esophagus. Although this is also rare, these ulcers can hemorrhage and perforate. For all of these reasons, it is very important to control severe reflux (4). |
Kritima Lamichhane - Caviness 5th
Sally, John, and Jake were playing with darts in their back yard. They were standing in a triangle, 8 feet apart from each other, and had a cylinder-shaped pole with a balloon on top as their target. They needed to place the balloon target in the center of the triangle so that it was an equal distance from each friend. Where should they put their target so that it is equidistant from the three friends?
Picture Depiction of where Sally, John, & Jake are Standing
Point of Concurrency shown with all Labels
What each color represents:
Blue - the equilateral triangle
Pink - arcs created by the 3 sides of the triangle
Green - lines of intersection that show the circumcenter
Orange - Circle that proves the points are equidistant
Purple - lines connecting vertices to the circumcenter
Midpoint, Slope, & Equation of the Lines
Steps to solve the problem:
I started off by drawing the graph and labeling the 3 points, each representing one of the kids. I connected the points and created an equilateral triangle (4 inches on each side; each inch represented 2 feet). Then I set my compass a little wider than half of a side and, placing the tip of the compass on point A, I drew an arc. Keeping the compass at the same width, I drew arcs from point B. Then I drew a line through the intersections of the two arcs. Then I drew an arc from point C, and drew a line through the intersections made by the arcs from points B and C. Where the two lines intersected was the circumcenter.
As shown, I opened the compass from the circumcenter to one of the points and drew a circle; starting from that point, the circle hit the other vertices as well. That proves that the circumcenter is equidistant from each vertex. That spot (0, 1) is where the kids should place their pole with the balloon that they are all trying to aim for. They will each be throwing their darts the same distance if they place their target in the circumcenter. That way it is fair when the kids aim for the balloon. |
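The construction can also be checked numerically. The Python sketch below computes the circumcenter directly as the intersection of the perpendicular bisectors and confirms it is equidistant from all three vertices; the coordinates are chosen here for convenience, not the graph used above:

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of triangle abc: the point where the
    perpendicular bisectors of the three sides intersect."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# An equilateral triangle with 8-foot sides, one kid at each vertex:
A, B, C = (-4.0, 0.0), (4.0, 0.0), (0.0, 4 * math.sqrt(3))
center = circumcenter(A, B, C)
dists = [math.dist(center, p) for p in (A, B, C)]
# All three distances equal the circumradius, 8 / sqrt(3), about 4.62 feet.
```

Because the three distances come out identical, the balloon placed at this point is a fair target for all three kids.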
The brightest supernovae ever seen have been discovered. Supernovae are cosmic explosions much bigger than basic novae. Although they may burn for months or only a few weeks, during this time a supernova can emit as much energy as the Sun is expected to release over its whole life span. Now, it appears that there are a couple more to put down in the record books. Astronomers associated with the Supernova Legacy Survey have found two of the brightest, and also most distant, supernovae ever recorded. They are each located at least 10 billion light-years away and are over a hundred times more radiant than any regular supernova.
Supernovae are typically caused by massive stars collapsing into black holes or neutron stars, and in these two cases, they have scientists shaking their heads. According to the astronomers, the ordinary mechanism for producing supernovae is unable to explain why these two supernovae are so exceptionally bright. They were so bright when they were discovered in 2006 and 2007, respectively, that astrophysicists were not completely sure what they even were.
The research study's lead author, D. Andrew Howell, who is also a staff scientist at the Las Cumbres Observatory Global Telescope Network, stated that at first his team had no idea what they were looking at: whether the objects were supernovae, and whether they were in Earth's own galaxy or a very distant one. He added that he showed the findings at a space conference, and everybody there was also perplexed. No one guessed that they were faraway supernovae, because they would have had to be mind-numbingly large given the amount of energy each was producing. It was believed to be impossible.
One of the supernovae was given the "catchy" name "SNLS-06D4eu." It is so bright that it has prompted a totally new class of supernovae called "superluminous supernovae." These belong to a special class of supernovae which do not contain any hydrogen, yet are the brightest supernovae ever discovered.
The newest study found that the supernovae are probably powered by the formation of a magnetar, an extraordinarily magnetized neutron star spinning at least hundreds of times per second. Magnetars are extremely dense, with the mass of the Sun crammed into a star about the size of a city. Though magnetars have been proposed as the source of these types of supernovae, this study is the first to match observations with what such an explosion could look like.
The supernovae were so far away that the ultraviolet light released during the explosions was stretched by the universe's expansion until it increased in wavelength to the part of the spectrum that shows up in telescopes on Earth. This explains why astronomers were at first mystified by the observations: they had never seen a supernova so far into the ultraviolet before.
The supernovae burst when the universe was 4 billion years old, yet their light is just reaching us now. These events happened before the Sun even existed; a different star existed in our neighborhood, died, and its gas cloud ended up helping create the Sun and Earth.
These supernovae are dinosaurs, basically extinct in this day and age, but they happened more frequently during the early times of the universe. Astronomers are lucky to be able to use telescopes to look back in time and study such fossil light. Scientists hope to discover many more of these kinds of bright supernovae through ongoing studies and research.
By Kimberly Ruble |
Understanding the Difference Between Muscle and Fat Reigate
Have you ever wondered if muscle weighs more than fat? This is a common question among people who are trying to lose weight or improve their body composition. The truth is, muscle and fat are two different types of tissue with different properties. In this article, we’ll take a closer look at the difference between muscle and fat and answer the question once and for all.
Muscle and Fat: What’s the Difference?
Muscle is a type of tissue that is responsible for movement and support of the body. It is made up of long, thin cells called muscle fibers that contract and relax to produce movement. There are three types of muscle tissue in the body: skeletal, smooth, and cardiac.
Fat, on the other hand, is a type of tissue that stores energy in the form of triglycerides. It provides insulation and protection for the body’s organs, and also plays a role in hormone regulation. There are two types of fat in the body: subcutaneous fat, which is located just beneath the skin, and visceral fat, which is located deep within the abdomen and around the organs.
Muscle vs. Fat: Which Weighs More? Reigate
The answer to the question of whether muscle weighs more than fat is both yes and no. A pound of muscle weighs exactly the same as a pound of fat. However, muscle is denser than fat, which means it takes up less space in the body: the volume of a pound of muscle is smaller than the volume of a pound of fat. This is why someone who is muscular may weigh more than someone who is not, but still appear leaner and more toned.
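The density difference can be made concrete with a rough calculation. The density figures below are approximate textbook values (exact numbers vary by source) and are used only for illustration:

```python
# Approximate tissue densities in g/cm^3 (assumed round figures):
MUSCLE_DENSITY = 1.06
FAT_DENSITY = 0.92

GRAMS_PER_POUND = 453.6

# Volume occupied by one pound of each tissue:
muscle_vol = GRAMS_PER_POUND / MUSCLE_DENSITY   # ~428 cm^3
fat_vol = GRAMS_PER_POUND / FAT_DENSITY         # ~493 cm^3

# The same weight of fat takes up roughly 15% more space than muscle.
ratio = fat_vol / muscle_vol
```

So two people of the same weight can look quite different: the one carrying more muscle simply occupies less volume.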
Advantages of Muscle vs. Fat Reigate
Now that we know the difference between muscle and fat, let’s take a look at some of the advantages of having more muscle and less fat:
- Increased calorie burn: Having more muscle can help you burn more calories at rest and during the day, since muscle tissue burns more calories than fat tissue does.
- Improved body composition: Having more muscle and less fat can improve your overall appearance and make you look leaner and more toned.
- Better health: Excess fat, especially visceral fat, is associated with a higher risk of health problems such as diabetes, heart disease, and cancer. Having more muscle and less fat can help reduce this risk.
In conclusion, muscle and fat are two different types of tissue with different properties. Muscle tissue is denser than fat tissue, so a given volume of muscle weighs more than the same volume of fat, and a given weight of muscle takes up less space in the body. Having more muscle and less fat can have numerous benefits for both your appearance and your health. If you’re looking to improve your body composition, consider incorporating strength training into your exercise routine to build more muscle and reduce fat.
At Lipo Sculpt Reigate, we understand the importance of a healthy body composition. That’s why we offer a range of non-surgical body contouring treatments to help you achieve your goals. Contact Lipo Sculpt Reigate to learn more about our services and how we can help you achieve the body you’ve always wanted. |
Aims and Objectives of TFP
Total factor productivity (TFP) is the portion of output that is not explained by the amount of inputs used in production. TFP plays an important role in growth, fluctuations, and development: its level is determined by how efficiently inputs are utilized in production.
Over the last few decades, TFP growth in Singapore was very low, even though the educational attainment of the population increased, investment in research and development rose, and capital inflows grew. The Singapore case is therefore unusual: according to one study, TFP growth in Hong Kong and other small open economies is higher than in Singapore. To investigate this issue, this paper calculates TFP growth in Singapore and in other countries as well.
This paper covers the definition of TFP and the Solow residual, the aims and objectives of TFP, the background of both concepts, the difference between A and K & L, examples of the importance of A and the reasons why K & L alone fail to explain growth, and some ideas from Paul Krugman's articles. The paper ends with a brief conclusion.
The main objective of TFP is to measure productivity growth in order to judge the development trend of production units. Measuring TFP is important for any country seeking to understand the growth of its economy. Note that productivity studies may cover the whole economy or particular regions of a country. To measure growth, experts use growth accounting techniques, index numbers, and sometimes distance functions. Some other aims and objectives of TFP are stated below:
- TFP helps in promoting viable food production, with a focus on agricultural income, agricultural productivity, and price stability.
- TFP helps in managing natural resources and climate action, with a focus on greenhouse gas emissions, biodiversity, soil, and water.
- TFP also helps in balancing territorial development, with a focus on rural employment, growth, and poverty in rural areas.
TFP growth is the difference between the growth of output and the growth of a combination of all factor inputs, generally labor and capital. An improvement in TFP reflects output gains due to more efficient use of resources or the adoption of new technology. In Singapore, this approach is used to calculate growth by employing a production function to decompose output growth into the contributions of the primary resources. A weighting scheme is employed to aggregate the contributions of the primary factors. The difference between actual GDP growth and the portion accounted for by the factors of production then measures TFP growth. Like most of the literature, the Singapore studies assume a transcendental logarithmic (translog) production function.
The Solow Residual:
Under this approach, a growth accounting exercise breaks down the growth of output into the growth of the factors of production, such as capital and labor, and the growth of the efficiency with which these factors are utilized. This measure of efficiency is Total Factor Productivity. For policy purposes it matters whether growth of output comes from factor accumulation or from increases in TFP. The growth accounting framework was set up by Robert M. Solow (1957), who considered a neoclassical production function of the form:

Y_t = A_t F(K_t, L_t)
Background of TFP and Solow Residual
As peer this formula where is considered as output, is determined as the stock of physical capital, is the labor force and represents the TFP, which actually state the Hicks neutral way. After making some transformations, this equation is considered in the terms of the growth rates related to these variables.
TFP, also known as multi-factor productivity, is a variable used to measure total output growth relative to the traditionally measured growth of a company's labor and capital inputs. TFP is calculated by dividing output by a weighted average of labor and capital input; the standard weighting is 0.7 for labor and 0.3 for capital. If all inputs are accounted for, TFP can be taken as a measure of long-term technological change or technological dynamism in the economy.
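As a rough sketch of that calculation (the sample growth rates below are illustrative assumptions, not published Singapore figures), TFP growth can be computed as the residual left after subtracting weighted input growth from output growth, using the 0.7/0.3 weights mentioned above:

```python
# Solow residual: the part of output growth not explained by input growth.
# Weights follow the standard 0.7 (labor) / 0.3 (capital) convention above.

def tfp_growth(output_growth, labor_growth, capital_growth,
               labor_weight=0.7, capital_weight=0.3):
    """Return TFP growth (percentage points) as the Solow residual."""
    weighted_inputs = labor_weight * labor_growth + capital_weight * capital_growth
    return output_growth - weighted_inputs

# Hypothetical year: output grows 6%, labor 2%, capital 5%.
residual = tfp_growth(6.0, 2.0, 5.0)  # 6.0 - (1.4 + 1.5) = approx. 3.1
```

If all output growth were explained by the inputs, the residual would be zero; a positive residual is read as efficiency or technology gains.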
TFP growth, then, is the difference between the growth of output and the growth of the combination of all factor inputs, generally labor and capital. TFP reflects the contribution to output of more efficient use of resources or of the adoption of new production technologies.
In Singapore, this approach is used to estimate TFP growth by employing a production function to first decompose output growth into the contributions of the primary resources, labor and capital. A weighting scheme is applied to aggregate the primary factors, and the difference between real GDP growth and the contribution of the factors of production is taken as the TFP estimate.
After reviewing various studies, Paul Krugman stated that all technological change is TFP; however, the current controversy over technological change versus factor inputs suggests that the disagreement is not substantive but largely a matter of data and of definitional or conceptual problems. On that basis, the conclusion drawn from Krugman's assumption is most probably wrong.
Krugman further believes that the economic growth rates of Asian countries will taper off significantly, well before convergence with today's world economic leaders. Krugman identifies three factors behind the rapid growth of the Asian NICs: first, the transition of labor from rural work to industry; second, the education of these workers; and third, a catching-up effect in the capital stock. What he argues is critically lacking is the ability to innovate in technology. Krugman states growth accounting in the following form:
Economic Growth = Increases in Labor + Increases in Capital + Changes in Total Factor Productivity
It is not possible to measure TFP directly. Instead, it is obtained as the effect on total output that is not caused by the measured inputs. In the Cobb-Douglas production function, the variable A denotes total factor productivity:

Y = A × K^α × L^β

In this equation, total output is represented by Y, capital input is denoted by K, and labor input is represented by L; α (alpha) and β (beta) are the two inputs' respective shares of output. Any increase in K and L will increase output, but because of the law of diminishing returns, increased use of inputs alone will not keep increasing output in the long run. The quantity of inputs used in production therefore does not completely determine the amount of output produced; how effectively the factors of production are used also matters. TFP is less tangible than the capital and labor inputs, encompassing factors ranging from technology to human capital and organizational practices.
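Rearranging the Cobb-Douglas relation gives the level of A directly, and taking growth rates gives the residual form used in growth accounting (a standard derivation, not specific to any one country's data):

```latex
Y = A K^{\alpha} L^{\beta}
\;\;\Longrightarrow\;\;
A = \frac{Y}{K^{\alpha} L^{\beta}},
\qquad
\frac{\Delta A}{A} \approx \frac{\Delta Y}{Y} - \alpha\,\frac{\Delta K}{K} - \beta\,\frac{\Delta L}{L}
```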
TFP can also be used to measure competitiveness: a country with higher TFP is more competitive. TFP is generally considered a main vehicle for driving economic growth, and if Singapore increases its TFP it can obtain higher output from the same resources and thereby drive economic growth (OECD, 1997).
TFP is chiefly used to analyze growth and to address issues of growth and productivity. Thirtle, Lin and Piesse (2003) examined the effect of TFP on the incidence of poverty in LDCs, measured as the percentage of the population surviving on less than US$1 per day. Their regression analysis shows that growth in agricultural productivity substantially reduces poverty, whereas growth in industrial production does not. They use their empirical findings to argue that investment in agricultural R&D substantially affects poverty reduction in Singapore, and that it is an extremely profitable investment.
In China, economic growth was rapid while productivity growth was slow; with the passage of time, however, such as over 2002-2007, productivity growth became rapid alongside economic growth. Even so, this growth was slower than that of Japan.
An assessment of China's TFP growth in relation to sustainable development found that China achieved 8.9 percent annual GDP growth over the period 1980-2012.
The two biggest subcomponents of TFP are technology growth and efficiency. Technology growth has special features, such as positive externalities and non-rivalry, which strengthen its position as a driver of economic growth. TFP is generally considered the real driver of growth in the economy, and this study reveals that while labor and capital are important contributors, they contribute less to the sustainable growth of the economy than other factors. It is therefore clear that there is a strong connection between TFP and conversion efficiency, which makes K and L comparatively less effective.
The results of one study of Singapore support the view that physical capital accumulation has been the dominant factor in the country's per-capita output growth. The results for Japan and China, on the other hand, show that the TFPG contribution is not negligible relative to the factors K and L.
This paper has set out the definition of TFP and outlined the methodology used to measure it in Singapore. The methodology is based on a growth-accounting approach, under which improvement in the quality of factor inputs was weak in the early 1990s. The paper also concludes that the pattern of TFP growth in Singapore requires some adjustment, as it is becoming more robust over time. Here, TFP growth was computed without any adjustment for the quality of factor inputs; in future, it will be necessary to measure TFP growth with the effects of changes in factor-input quality removed. Such quality changes are important in Singapore's case, for example through the rising educational attainment of the labor force.
Economic Research, ‘Total Factor Productivity’, <https://www.frbsf.org/economic-research/indicators-data/total-factor-productivity-tfp/>, Accessed on 7th July 2017.
Digi Library, ‘Literature Review’, <https://digi.library.tu.ac.th/thesis/ec/0116/10CHAPTER_3.pdf>, Accessed on 7th July 2017.
OECD, ‘Measurement of EU agricultural total factor productivity growth’, <https://www.oecd.org/tad/events/Session%203%20Koen%20MONDELAERS%20PPT.pdf>, Accessed on 7th July 2017.
ThoughtCo, ‘The Meaning of Total Factor Productivity’, <https://www.thoughtco.com/total-factor-productivity-definition-1147262>, Accessed on 7th July 2017.
Diego Comin, (2006). ‘Total Factor Productivity’, <https://www.people.hbs.edu/dcomin/def.pdf>, Accessed on 7th July 2017.
Colorado, ‘Paul Krugman - "Pacific Myths"’, <https://www.colorado.edu/economics/courses/econ2020/section15/krugman.html>, Accessed on 7th July 2017.
Boundless, ‘Total Factor Productivity’, <https://www.boundless.com/economics/textbooks/boundless-economics-textbook/inputs-to-production-labor-natural-resources-and-technology-14/capital-productivity-and-technology-82/total-factor-productivity-316-12413/>, Accessed on 7th July 2017.
OECD, (1997). ‘Total Factor Productivity Growth in Singapore: Methodology and Trends’, <https://www.oecd.org/std/na/2666910.pdf>, Accessed on 7th July 2017.
Statistics, ‘International Comparison of Productivity Growth in China, Japan and South Korea’, <https://www.statistics.gov.hk/wsc/CPS104-P13-S.pdf>, Accessed on 7th July 2017.
DSS, ‘National Accounts’, <https://www.singstat.gov.sg/statistics/browse-by-theme/glossary/national-accounts>, Accessed on 7th July 2017.
My Assignment Help. (2022). Total Factor Productivity (TFP) And Solow Residual. Retrieved from https://myassignmenthelp.com/free-samples/law303-law-of-business-organisations/total-factor-productivity-file-H8FCCF.html.
The rolled steel sections laid end to end in two parallel lines over sleepers to form a railway track are known as rails. They work as continuous girders carrying the axle loads of trains, and they provide a hard and smooth surface for the movement of trains.
What are the functions of rails?
a. To provide a hard, strong, and smooth surface for the movement of trains with minimum tractive resistance.
b. To bear the stresses developed in the track due to heavy wheel loads, lateral and braking forces, and variations of temperature.
c. To transmit the axle loads of trains to the sleepers, consequently reducing the pressure on the ballast and formation.
Composition of rail steel: The composition of the steel used in the manufacture of rails is given below:
a. For ordinary rails: high-carbon steel having the following composition is used:
Carbon (C) – 0.55 to 0.68 %
Manganese (Mn) – 0.65 to 0.90 %
Silicon (Si) – 0.05 to 0.30 %
Sulphur (S) – 0.05 % or below
Phosphorus (P) – 0.05 % or below
b. For rails on points and crossings: medium-carbon steel having the following composition is used:
Carbon (C) – 0.5 to 0.6 %
Manganese (Mn) – 0.95 to 1.25 %
Silicon (Si) – 0.05 to 0.20 %
Sulphur (S) – 0.06 % or below
Phosphorus (P) – 0.06 % or below
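As an illustrative sketch (the function name and the sample ladle analysis are hypothetical; the limits are the ordinary-rail ranges quoted above), such composition ranges can be encoded and checked programmatically:

```python
# Check a steel composition (% by weight) against the ordinary-rail limits:
# C 0.55-0.68, Mn 0.65-0.90, Si 0.05-0.30, S <= 0.05, P <= 0.05.

ORDINARY_RAIL_SPEC = {
    "C":  (0.55, 0.68),
    "Mn": (0.65, 0.90),
    "Si": (0.05, 0.30),
    "S":  (0.00, 0.05),
    "P":  (0.00, 0.05),
}

def within_spec(composition, spec):
    """True if every element lies inside its allowed [low, high] range."""
    return all(low <= composition.get(element, 0.0) <= high
               for element, (low, high) in spec.items())

sample = {"C": 0.60, "Mn": 0.80, "Si": 0.20, "S": 0.04, "P": 0.03}
print(within_spec(sample, ORDINARY_RAIL_SPEC))  # True
```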
The rails used in the construction of railway track can be divided into the following three types:
1. Double headed rails (D.H. rails)
2. Bull-headed rails (B.H. rails)
3. Flat footed rails (F.F. rails)
Bull-headed rails: Rail sections whose head is of larger dimensions than their foot are known as bull-headed rails.
These rails overcome the defect of double-headed rails: the metal in their foot is just sufficient to bear the stresses caused by the moving wheel loads.
These rails also require chairs for holding them in position.
B.H. rails are commonly used on Indian railways, especially for constructing points and crossings.
- B.H. rails keep better alignment due to the provision of chairs for holding them in position.
- These rails facilitate easy manufacture of points and crossings
- These rails, with a larger bearing on sleepers due to provisions of chairs, provide longer life to wooden sleepers and also greater stability to the track.
- These rails can be removed and replaced quickly; hence, renewal of the track is easy.
- B.H. rails require costly fastenings
- These rails have less strength and stiffness
- The track formed with these rails requires more attention and thus involves more maintenance
What is Track Alignment? What are the requirements of a good alignment?
Track Alignment: It is the position and the direction given to the center lines of the railway track on the ground.
The vertical alignment includes changes in gradient and vertical curves. The horizontal alignment includes the straight path, its width, any deviations in width and curves.
Proper alignment of a new railway track is essential, as improper alignment results in capital loss due to the higher initial cost of construction and recurring loss in maintenance, and it is very difficult to change the alignment later.
Requirement for a good Alignment
- The length of the track should be as short as possible
- The construction cost should be minimum
- The maintenance cost should be minimum
- The transport cost should be minimum
- It should have an easy gradient
- It should pass through areas of scenic beauty, making for a comfortable and pleasant railway journey
- It should connect important places
- It should pass through important cities and industrial areas.
What are the different types of surveys to be carried out in the case of a new railway project? Explain the traffic survey in detail.
The following four surveys should be conducted to fix the good alignment:
a) Traffic survey
b) Reconnaissance survey
c) Preliminary survey
d) Location survey
Traffic Survey: Before starting any big railway project, the future earnings should be considered. Earnings depend directly upon present traffic and traffic in the immediate future, both passenger and goods traffic.
To know the potential of available traffic in the route, a traffic survey is conducted. In the traffic survey the following information should be collected:
1) Population of the cities, towns, and villages lying within 15 km on both sides of the track
2) Approximate number of passengers who will use trains
3) Position of local industry
4) Future prospects for the development of industry and trade centres
5) Nature and volume of exports and imports and the centres of their destination
6) Locations of railway stations to get more business
7) General character of land and communities
After collecting the above information, the cost of the railway per kilometer is calculated from the previous experience.
Then the approximate expenditure, the operating cost, and the revenue to be earned are calculated.
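The arithmetic can be sketched as below; every figure is hypothetical and only illustrates how the survey data feed the estimate:

```python
# Back-of-envelope viability check from traffic-survey data.
# All figures are hypothetical (units: millions of a local currency).

length_km        = 120
cost_per_km      = 2.0                         # from previous experience
capital_cost     = length_km * cost_per_km     # 240.0

annual_revenue   = 40.0                        # from passenger and goods traffic
annual_operating = 15.0

annual_surplus = annual_revenue - annual_operating      # 25.0
simple_return  = 100.0 * annual_surplus / capital_cost  # about 10.4 % per year
print(round(simple_return, 1))
```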
In this type of survey the following maps are prepared for the study:
1. Topographical maps
2. Agricultural maps
3. Industrial maps
What is coning of wheels? What are its advantages?
The art of providing an outward slope of 1 in 20 to the treads of the wheels of rolling stock is known as coning of wheels.
The wheels of rolling stock are provided with flanges on the inner side of rails forming the track.
The function of providing wheel flanges on the inner side is to prevent lateral slipping of the wheels over the rail in a track.
The distance (B) between the inner faces of the flanges is kept somewhat less than the gauge distance (G), so as to prevent the flanges from rubbing against the inner faces of the rails and causing wear and tear of both the rails and the wheels, as illustrated in Fig. 4.1.
If the treads of the wheels were kept flat, the wheel axle could move laterally by a distance equal to the gap between the running (inner) face of the rail and the inner face of the flange.
The wheels would then damage the inner faces of the rails. To check this lateral movement, which causes damage to the running faces of the rails and inconvenience to the passengers, an outward slope of 1 in 20 is provided to the treads of the wheels; this is known as coning of wheels.
The main object of coning of wheels is to prevent the lateral movement of trains.
On straight portions of the track, if the axle moves towards one rail, the diameter of the wheel rim over that rail increases, whereas it decreases over the opposite rail; since both wheels must cover equal distances, the coning of wheels prevents further lateral movement and the axle returns to its original position. Coning of wheels thus ensures the smooth movement of trains without causing excessive wear of the inner faces of the rails or inconvenience to passengers.
On curves, the outer wheels of the train have to travel a greater distance than the inner wheels. Under the effect of centrifugal forces, the axle moves towards the outer rail.
Due to the coning of wheels, the effective diameter of the wheel on the outer rail increases, while that on the inner rail decreases, as illustrated. This helps the outer wheels to cover a greater distance than the inner wheels without any slip.
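This geometry can be checked numerically. In the sketch below, the gauge, curve radius, and wheel radius are assumed example values; it finds the small lateral shift at which a 1-in-20 cone lets a rigid axle roll around the curve without slip:

```python
# With coning of 1 in n, a lateral shift y changes the rolling radii to
# r + y/n (outer wheel) and r - y/n (inner wheel). Rolling without slip
# on a curve requires (r + y/n) / (r - y/n) = (R + g/2) / (R - g/2).

n = 20       # coning: 1 in 20
r = 0.5      # nominal wheel-tread radius in metres (assumed)
R = 500.0    # curve radius in metres (assumed)
g = 1.676    # broad gauge in metres

k = (R + g / 2) / (R - g / 2)   # required ratio of rolling radii
y = n * r * (k - 1) / (k + 1)   # lateral shift that achieves the ratio

print(round(y * 1000, 1))       # lateral shift in mm (about 16.8)
```

A shift of only a few millimetres is enough, which is why the gentle 1-in-20 cone suffices on ordinary curves.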
This two-hour field study teaches the reproductive functions in the life cycle of flowering plants, focusing on the changes we can observe as an individual flower develops into the seed-holding structure called a fruit. Students investigate flower and fruit parts and look carefully for the specific stages of change as a flower begins this gradual, often overlooked transformation from bud to flower to fruit. Participants observe, describe, and collect evidence of these life cycle developments in the Arnold Arboretum landscape.
At the beginning of the field study, students share what they already know about flowers, their function, and flower parts. Using various visuals, students learn the vocabulary they will be using throughout the field study. Students are challenged to discover all the stages of a flower’s transformation from bud to fruit during their explorations in the landscape.
Throughout their visit to the Arboretum, students encounter and interact with a large variety of flowers, examining flower parts closely and identifying the stages of change. They observe and dissect a flower, looking for and naming distinct flower parts that play a role in the flower’s function as seed maker: petals, pistil, and stamen. Students test their predictions regarding a flower’s stage of development by cutting open a bud (containing flower parts) or fruit (containing seeds) to see what is inside, using this evidence to sequence the stages. Students may also observe pollinators and examine pollen more closely, and even help fertilize a flower by transferring pollen using “bee’s legs.”
Students have opportunities to record their observations through drawings and challenge each other to correctly sequence photos of flowers, both observed in the landscape and novel. Students return to class with their recording sheets and collected flowers for further discovery.
If you are a Boston Public School teacher and would like to register for a program, email Nancy Sableski or call 617.384.5239.
MA Science Standards correlations:
- 3-LS-1-1 Use simple graphical representations to show that different types of organisms have unique and diverse life cycles. Describe that all organisms have birth, growth, reproduction, and death in common, but there are a variety of ways in which these happen.
- 3-LS4-5(MA) Provide evidence to support a claim that the survival of a population is dependent upon reproduction.
- 4-LS1-1 Construct an argument that animals and plants have internal and external structures that support their survival, growth, behavior, and reproduction. |
Mental health is as important to a child's safety and wellbeing as their physical health. It can impact on all aspects of their life, including their educational attainment, relationships and physical wellbeing. Mental health can also change over time, to varying degrees of seriousness, and for different reasons.
Everyone has mental health, just as we all have physical health and emotional health; mental health is just one of the types of health that make up who we are as individuals. When we talk about mental health, we're talking about our mental wellbeing: our emotions, our thoughts and feelings, our ability to solve problems and overcome difficulties, our social connections, and our understanding of the world around us.
One critical aspect of a child having good mental health and wellbeing is knowing that they are loved – loved for the unique and precious individual they are. We therefore need to help children understand their feelings and emotions, by using emotional language and by giving them an emotional vocabulary to help them understand their own and others' feelings.
The school has accessed the Department for Education funding which has been used to train a Senior Mental Health Lead. The headteacher is the senior mental health lead and is responsible for creating a whole-school approach to supporting mental health and wellbeing as well as creating an open culture in which staff, children and parents alike can discuss their mental health and wellbeing.
Our school is also part of the BERT award, which strives to promote, protect and improve children and young people's mental health and wellbeing.
Here are some of the practical strategies we use to support children’s emotional wellbeing:
We also have a strong Personal, Social, Health Education programme. The table below shows some of the topics that we explore with the children.
The food a child eats in their early years can influence their dietary habits later in life, so it’s important to instil good habits and a healthy relationship with food from an early age. Eating a well-balanced diet can improve mood, provide more energy and help you think more clearly. The food groups that make up this balanced diet are protein foods like fish, meat and eggs, starchy foods supplying carbohydrates, fruits and vegetables and milk and dairy foods. At school the children have access to fresh fruit each day and milk in nursery. Every child is entitled to a free healthy meal when they start primary school.
Physical activity can have an immediate and long-term impact on cognitive skills, attitudes, behaviour and concentration - all of which are important factors in academic achievement. We have an active PE curriculum and are Gold award winners for our work with Travel Smart.
Good personal hygiene habits help children present an attractive appearance to the world, which in turn influences how they are perceived and treated by others. Good hygiene habits are encouraged daily at school and children learn about what they can do at home to ensure their hygiene habits are good.
A regular teeth-cleaning routine is essential for good dental health. Children should brush at least twice daily for about 2 minutes with fluoride toothpaste. Good oral and dental hygiene can help prevent bad breath, tooth decay and gum disease. Children should visit a dentist as soon as their first teeth appear. Children are able to meet health professionals as part of our curriculum offer so they learn about people who help us in our community.
Sleep is as important to our health as eating, drinking and breathing. It allows our bodies to repair themselves and our brains to consolidate our memories and process information. Poor sleep is linked to physical problems such as a weakened immune system and mental health problems such as anxiety. Our school Pastoral Team can offer advice and support in this area should you need it. |
ICYMI: Our Brains May Be Hotter Than We Thought
- Published20 Jul 2022
- Author Tristan Rivera
- Source BrainFacts/SfN
What is the normal temperature of the human brain? Previous consensus placed brain temperature in line with our body temperature: both at around 98.5°F. But a study published June 13 in Brain found that the temperature of our brains is slightly hotter than our bodies — and it varies throughout the day, across brain regions, and over the course of our lives. This newfound understanding could help guide future treatments for traumatic brain injuries (TBIs), as patients with moderate to severe TBIs may receive clinical interventions based on monitoring brain temperatures that deviate from the ‘norm’.
Researchers looked at two patient groups to gather data about brain temperatures. The first group included 74 brain injured patients previously recorded from an E.U.-wide TBI database. Then, they recruited another patient group comprised of 40 healthy adults. The team measured participants’ brain temperatures using magnetic resonance spectroscopy (MRS). This technique uses MRI machines to measure the temperature of different parts of the brain.
Researchers found that oral body temperatures in the healthy patient group tended to be around 99.5°F, while their brain temperatures averaged around 101.3°F, a difference of about 2°F between the brain and body. Brain temperature varied by time of day and was lowest (about 1.6°F lower) at night, just before bed. Temperatures in healthy participants tended to increase a bit with age. Regions closer to the brain’s core also tended to be warmer than regions measured closer to the cortex. Women in the post-ovulation (luteal) phase of the menstrual cycle had slightly higher brain temperatures than men — around a 0.7°F difference between the groups. Altogether, the researchers hypothesized that daily brain temperature variation better distinguishes temperature function or dysfunction than absolute brain temperature does.
Big Picture: This study notes that TBI patients lacking a daily rhythm in brain temperature had a 21 times greater chance of dying in intensive care, hinting that maybe new methods should be developed for temperature monitoring and management. These findings come from relatively small samples of people and will need further confirmation before impacting the clinic. Regardless, the study has us question something we’ve taken as a fundamental fact for some time — reminding us to keep asking questions about the things we think we know about brains.
Read more: Brains can be hotter than the rest of our bodies, especially in women. New Scientist
More Top Stories
- Our brains welcome and host immune cells to help maintain their health. Nature
- Novice astronauts have increased space around their brain’s blood vessels when returning home from their first mission. NSF
- This optical illusion may deceive your brain and cause your pupils to dilate. The New York Times
- A neuron population in the hypothalamus was responsible for inducing fever in mice. New Scientist
- Exploratory studies suggest stuttering can be quelled by TMS. Science
- Collective neuroscience aims to center animal research around allied evolution to be a part of a social group. Aeon
- Today’s cannabis is hitting teens harder — raising abuse and sickness potential as THC levels rise. The New York Times
- Patients’ implicit bias to their doctor’s gender or race may impact their placebo response. STAT News
- Colour Pale green, black, red, or cloudy white
- Size From 4 mm to 8 mm long
- Also known as Plant lice
- Description Tiny, soft bodied, pear-shaped pests. Some have wings, others do not.
How to identify Aphids
Aphids are tiny insects measuring 4 to 8 mm in length with soft, pear shaped bodies. Their colour can vary from pale green, black, red, or cloudy white. Depending on the season, these pear-shaped pests may be winged or wingless.
Signs of an infestation
Aphids often cause leaves to spot, yellow, curl, or wilt. Galls may also form on plant stems and branches. Check the undersides of leaves, the tips of branches, or new plant growth to find aphids. In addition, gardeners can look on the soil under infested plants for aphids’ cast-off skins, which look like small white flakes.
Adding plants that repel the pests, such as coriander, basil, catnip, chives, and dill, is another way to help protect at-risk gardens. Sweet alyssum, yarrow, or herbs in the carrot family can also attract helpful bugs like lady beetles, lacewings, and flies that eat aphids. For the most effective and reliable solutions, contact local pest professionals.
There are appropriately registered and labeled insecticides for controlling these pests. If they have invaded an indoor structure, you should consult a professional, licensed pest control provider to control the infestation. For outdoor plant infestations, you can consult your local garden centre.
How to prevent Aphids from invading
- Inspect plants and cut flowers prior to bringing them indoors
- Check the undersides of leaves
- Inspect branches and new growth periodically
- Look on the soil for cast-off skins
- Repair door and window screens
- Install weather stripping
- Add plants that repel aphids
Habitat, Diet, and Life Cycle
Aphids can be found throughout North America. While they attack many plant species, common hosts include the coniferous and deciduous trees that grow across Canada. Aphids live on or near the plants they eat and collect under leaves and sheltered areas.
Aphids feed on plant sap. They pierce leaves and stem surfaces and suck up the sap (juice) from the plants. They can feed on any part of a plant but they prefer new plant growths. As they feed, aphids produce a sweet, sticky substance known as honeydew which attracts ants and other insects to the plant; it can also cause fungal growth on the plant surfaces, which is detrimental to the plant. Since they often feed in large groups, aphid infestations can quickly destroy gardens and flowerbeds.
Aphids have a complex reproductive cycle. The first aphids of the year hatch from eggs during the spring. This generation of females reproduces through self-cloning until their host plant is crowded. The next generation develops with wings and flies off to find another plant. At the end of summer, all aphids return to their original host to lay eggs for the next year. While they may only live for one to six weeks, each female aphid can produce as many as 12 young per day.
Commonly Asked Questions
How worried should I be about aphids?
Aphids wreak more damage on cultivated plants than any other insect, causing stunted growth, low crop yields, and even plant death. They often feed in large groups, so an aphid infestation can quickly destroy gardens and flower beds.
As well, when aphids feed, they produce honeydew, a sweet, sticky substance which attracts ants and other insects to the plant, and can also cause fungal growth detrimental to the plant. This honeydew is also smeared on sidewalks, cars, or other objects under their feeding area.
Aphids can reproduce in huge numbers, through a complex reproductive cycle that begins in spring, when a generation of wingless female aphids hatch from eggs on a host plant. They self-clone until the plant is crowded. The next generation develops wings and spreads to other plants.
At the end of summer, all aphids return to their original host plant to lay eggs for the following year. While female aphids may only live for up to six weeks, each female aphid can produce as many as 12 young per day, or more than 500 young in her lifetime.
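The lifetime figure is consistent with simple arithmetic, taking the six-week lifespan and 12 young per day quoted above as upper bounds:

```python
# Upper-bound lifetime output of a single female aphid.
young_per_day  = 12
lifespan_weeks = 6
days_per_week  = 7

lifetime_young = young_per_day * lifespan_weeks * days_per_week
print(lifetime_young)  # 504, i.e. "more than 500 young"
```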
To tackle aphids, try adding plants that repel them, like coriander, basil, catnip, chives, and dill. Sweet alyssum, yarrow, or herbs in the carrot family can also attract helpful bugs like lady beetles, lacewings, and flies that eat aphids.
However, if your aphid infestation is not eliminated, you may need the help of a professional pest control service to effectively manage and control them using various strategies.
Other pests related to Aphids
A scientist and General Election candidate has prepared for the heat of the campaign trail by helping to discover the cause of rising sea temperatures in the Arctic Ocean.
Prof Rippeth discovered that warm water from the Atlantic Ocean is flowing into the cold Arctic and melting floating sea ice, which the team believes may have played a role in recent weather extremes in the UK, US and Europe.
He made the discovery while working as part of the National Oceanography Centre, Liverpool's TEA-COSI (The Environment of the Arctic: Climate, Ocean and Sea Ice) project team.
The results of the research, which were published this week in journal Nature Geoscience, show that tidal flows in the Arctic are causing deep, warm water – originating from the Gulf Stream – to mix with cold, fresh water lying above, which contributes to melting the floating sea ice.
He said: “Our oceans are not made up of one body of water, but contain waters of different temperatures and salinity, lying in different ‘layers’, so the Arctic Ocean is a bit like a jam sandwich, where the ‘bread’ is the cold water layers above and below the ‘jam’, which is the warm, salty water that enters the Arctic from the Atlantic.
“Sea ice floating on the surface of the ocean is insulated from the heat of the Atlantic layer by the top slice of cold polar water.
“We studied the warm body of water from the Atlantic that represents the largest oceanic input of heat into the Arctic – it is four degrees Celsius warmer than the surrounding water, and it is the warmest it has been in nearly two thousand years.”
“These mixing hotspots may then grow into other areas of the Arctic Ocean with steep sea bed slopes, resulting in further sea ice retreat.
“We know that the Arctic is already warming faster than the rest of the planet, and other research conducted in the past few years is pointing to the impact of Arctic warming on mid-latitude weather, so the Arctic may have had a role in recent weather extremes in the US, UK and Europe.
“Therefore the importance of the discovery of this new mechanism for moving heat up towards the Arctic ocean surface lies in its potential to further enhance Arctic warming.” |
Download a comprehensive list of BC Ministry of Education prescribed learning outcomes (PLOs) that may be addressed with this resource package. While these lessons were designed for secondary students, most modules and activities are easily adaptable for upper intermediate students.
Module 1: Introduction to Climate Justice
This lesson introduces students to the concept of climate justice in the context of global climate change. Students will consider both the causes and effects of climate change through a fairness and equity lens. Using figures, quotes and videos, this lesson invites students to discuss and act on how BC can reduce its carbon emissions while becoming more socially just.
Module 2: Reimagining our Food System
This lesson explores climate change and our food systems, how climate change may affect food production in BC and elsewhere, and social justice issues, such as vulnerability to hunger and migrant farm labour. Students will consider actions that individuals, schools and governments can take to make BC’s food systems more fair, sustainable and resilient to climate change.
Module 3: Transportation Transformation
Students will unpack the advantages and disadvantages of owning a car and how it relates to both greenhouse gas emissions and social equity. They will reflect on how community design encourages or discourages car use, and imagine what we can do to better facilitate walking, biking and public transit options, create more complete communities and improve quality of life.
Module 4: Rethinking Waste
This lesson explores our culture of consumption and how it produces waste, both solid waste and airborne emissions like greenhouse gases. Students will reflect on what gets thrown away over the course of a day and what items in their lives are “designed for the dump.” Students will consider how to move beyond recycling and composting to reducing and eliminating wasteful consumption, and how their school can take action.
Module 5: Fracking Town Hall
This lesson wrestles with the challenges of fossil fuel extraction and the bigger picture context of the push for a BC-based liquefied natural gas (LNG) industry. Students will use video resources and graphics to learn about the fracking process, then explore why fracking is a contentious issue in BC. Through a town hall simulation, students will take on the roles of key stakeholders surrounding a proposed fracking operation in the fictional town of Mountainhead. Looking at the issue through the lenses of the economy, environment and human health, students will deepen their understanding of different sides of the debate.
Module 6: Green Industrial Revolution
Students will explore the history of resource development in BC and BC’s current carbon crossroads. Using the Climate Justice Project mini-documentary Town at the End of the Road, students will reflect on the decline of forestry in Mackenzie, BC, and consider how the sector can be re-imagined as part of a green economy. Using highly engaging infographics, students will explore the possibilities and advantages of investing in green buildings and energy efficiency retrofits, new transportation infrastructure, and clean energy and conservation initiatives.
Module 7: Imagining the Future We Want
Students will reflect on our current time of ecological crisis and social inequity. Then, using a storytelling exercise, students will talk to their descendants seven generations in the future and discuss the challenges we face today and imagine how we can move towards a better future. Through these exercises, students will explore themes of intergenerational justice and cultivate stories of action and hope.
Module 8: Challenges to Change
This lesson explores the essential elements of successful social change movements. Drawing from the Story of Stuff’s The Story of Change video, students will contemplate what is holding us back from achieving climate justice in BC and what can move us forward. The lesson ends by looking at “Youth4Tap,” a BC student-driven initiative aimed at eliminating the sale of bottled water from their school and installing new water refill stations on campus. |
Chimborazo Province in Ecuador is a major agricultural center. Changes in freshwater availability and agricultural sustainability affect the province, with important cultural and economic implications for the region.
In September 2023, researchers from the University of Georgia Franklin College of Arts and Sciences Department of Geography traveled to collaborate with local Ecuadorian scientists on an investigation of past environmental conditions in the nearby Andes Mountains. Their goal was to assess regional hydroclimatic variability—changes through time not only in weather and climate, but also the associated ecology and water resources. The work is a component of a formal collaboration between UGA and Escuela Superior Politecnica de Chimborazo (ESPOCH), a science and engineering university in Riobamba, Ecuador.
Geography Professor David Porinchu led the team in collecting sediment and water samples from two lakes within Sangay National Park, providing a snapshot of regional hydroclimatic conditions over the past 5,000 years. Paleoenvironmental records such as sediment cores—like the 30-foot core extracted from Laguna Kuyuk—provide scientists and resource managers with information from a wider range of conditions and scenarios than is available from modern data alone.
Since returning, the UGA team has begun analyzing samples for a suite of physical, chemical, and biological characteristics. Their first priorities are to obtain X-ray images and X-ray fluorescence chemistry scans of the intact core and to establish dates from as many locations along it as possible.
Subsequent analyses will utilize small subsections of the cores to understand changes in various time intervals. Results from this initial study will be used to help design broader, regional-scale investigations aimed at unravelling the effects of climate change on natural resources and the largely indigenous populations they support. |
Exophthalmos is a Greek word that means bulging or protruding eyeball or eyeballs.
Proptosis is also a term sometimes used to describe a bulging eyeball or eyeballs.
Depending on what is causing bulging eyes, you may also have other associated symptoms.
For example, if exophthalmos is caused by a thyroid-related condition, such as Graves' disease, as well as bulging, your eyes may also be:
Your vision may also be affected – for example, you may have double vision or some loss of vision.
Exophthalmos can be caused by many different conditions. It is important that the underlying cause is identified so appropriate treatment can be given.
Conditions that affect the thyroid gland are a common cause of exophthalmos. The thyroid is a small gland at the base of the throat that controls your metabolism (the rate at which your body uses energy).
A thyroid condition that affects the eyes is known as thyroid eye disease or thyroid orbitopathy.
An overactive thyroid gland can sometimes lead to thyroid eye disease and symptoms such as puffy, swollen eyes, tearing and bulging eyeballs.
Exophthalmos is sometimes related to tumours that develop in the eyes. For example, a capillary haemangioma is a type of tumour that can develop in the eye cavity during childhood. It can sometimes cause exophthalmos.
A pleomorphic lacrimal gland adenoma is a painless, slow-growing eye tumour that can develop in adults in their 50s. In some cases, it can also cause exophthalmos.
Exophthalmos is often easy to recognise from the appearance of the eyeballs, which clearly bulge or protrude from the sockets, exposing most of the whites of the eyes.
How much the eyeball bulges, the direction it protrudes and other associated symptoms will often provide clues about the underlying cause. However, further tests will be needed to confirm a diagnosis.
Your doctor may refer you to an ophthalmologist (a specialist in diagnosing and treating eye conditions). The ophthalmologist will check how well you are able to move your eyes. They may also use an instrument called an exophthalmometer to measure how far your eyeball protrudes.
You may also have a blood test or a thyroid function test to check your thyroid gland is working properly.
Treatment for exophthalmos will depend on the underlying cause.
If a thyroid problem is causing your eyes to bulge, treatment to stop your thyroid gland producing excess amounts of thyroid hormones may be recommended.
This can be achieved using medication, such as thionamides, or by having radioiodine treatment, where a chemical called radioactive iodine is swallowed, which shrinks your thyroid gland.
The use of corticosteroids (medication that contains manmade versions of the hormone cortisol) can help reduce painful eye inflammation.
If your eyes are dry, sore and inflamed, a lubricant, such as artificial tears, may also be prescribed to moisten your eyes and relieve irritation.
Read more about treating exophthalmos.
In very severe cases of exophthalmos, you may not be able to close your eyes properly. This can damage your cornea (the transparent tissue that covers the front of your eyeball) by causing it to dry out.
If your cornea becomes very dry, an infection or ulcers (open sores) may develop. If left untreated, these could damage your vision.
Other possible complications of exophthalmos include conjunctivitis (inflammation of the lining of the eye) and optic atrophy (deterioration of the optic nerve).
As long as the underlying cause of exophthalmos is identified at an early stage, it can usually be successfully treated.
After treatment, any pain, redness, swelling or irritability will usually settle down after a few months, although in some cases it may take longer.
If exophthalmos is caused by thyroid eye disease, your eyes may not go back to normal. In up to one in 20 people, thyroid eye disease may get worse, resulting in double vision or some degree of visual impairment.
Statue of Chiron, the wisest of all the centaurs and the teacher mentor of the ancient Greek hero Achilles (Sculptor: Anna Hyatt Huntington, 1936)
According to Greek mythology, centaurs were monstrous creatures, with a human head, arms and upper body and the body and legs of a horse. Centaurs, male and female, lived on mountains and in forests. They fed on raw flesh and could not drink wine without getting drunk. Centaurs went about in herds, and the males were notoriously prone to carrying off mortal women.
They represent man’s animal nature. At the same time, like any other creature with a human head and an animal body, they symbolise the union of the ‘spiritual’ and the ‘material’ world, of conscious and unconscious human behaviour. As half man, half horse, they embody the conflict between primal nature (wild behaviour) and civilized nature (the educated, refined behaviour represented by Chiron).
The most realistic view is that the Centaurs were a Greek tribe in Thessaly during the Bronze Age. They were excellent horsemen, which is why they passed into legend as half man, half horse creatures.
It is no coincidence that the ancient Greeks depicted the Centauromachy on the south metopes of the Parthenon, the temple of the goddess Athena (goddess of wisdom and the arts). They simply wanted to signify the struggle between the forces of civilization, philosophy, science, order and justice on the one hand, and irrational chaos and barbarism on the other.
Birmingham researchers have, as part of a global collaboration, confirmed a major prediction of Albert Einstein’s 1915 theory of general relativity through the detection of gravitational waves.
Professor Alberto Vecchio and Professor Andreas Freise have been at the forefront of developing a new field of gravitational wave astronomy. Alongside their colleagues at Cardiff and Glasgow Universities, they have developed and built instrumentation for Advanced LIGO, and pioneered the techniques that have allowed them to extract the properties of the sources from gravitational wave signatures.
Dr Kat Grover (University of Birmingham Outreach Officer), who recently received her doctorate in gravitational waves, met with Professors Andreas Freise and Alberto Vecchio to discuss gravitational waves and what this means for the future of astronomy and astrophysics.
Gravitational Waves Detected
Kat: What has been discovered?
“LIGO observed gravitational waves from two black holes that orbited each other and then merged to form a bigger black hole. The final black hole has a mass about 60 times that of our Sun. This event occurred around a billion light years away from Earth. The merger was extremely energetic (for a fraction of a second, the event released 50 times more energy in gravitational waves than all the stars in the entire Universe emit in light), but by the time the waves reached us, they were so weak that the change in the length of LIGO's arms was less than a 1000th of the diameter of the core of an atom.”
Answers by Alberto Vecchio, Professor of Astrophysics
and Andreas Freise, Professor of Experimental Physics
Kat: What does this mean for general relativity?
“The measured signal matched the waveform predictions of Einstein’s theory; we've never tested the theory in such extreme conditions before, so it has passed its toughest test!”
Kat: What does it mean for astrophysics?
“This tells us that binary black holes do exist. It also tells us that they form, evolve and die during a period shorter than the age of the Universe. We’ve never seen binary black holes before. We've never found black holes of this mass before. It looks like these mergers should be common enough that we will see more in future observations with LIGO. Then we can start to understand exactly what is out there and how these binaries are made.”
Kat: What are “binary black holes”?
“Most stars have a companion and orbit around each other as the Earth orbits around the Sun. A binary black hole is a system in which two black holes orbit around each other.”
Background to gravitational wave astronomy
Kat: What is Einstein's theory of general relativity?
“General relativity is our best theory of gravity. In general relativity, gravity can be thought of as the effect of the curvature of spacetime. Massive objects bend space and time; the curvature in spacetime changes how things move.”
Kat: What are gravitational waves?
“Gravitational waves are ripples in spacetime. When objects move, the curvature of spacetime changes and these changes move outwards (like ripples on a pond) as gravitational waves. A gravitational wave is a stretch and squash of space and so can be found by measuring the change in length between two objects.”
Kat: What is spacetime?
“In our everyday lives we think of three-dimensional space (up/down, left/right, forward/back) and time as completely separate things. But Einstein’s theory of special relativity showed that the three spatial dimensions plus time are actually just part of the same thing: the four dimensions of spacetime.
“In general relativity, Einstein went further. Not only are space and time part of the same thing, but they are both warped by mass or energy, causing a curved spacetime. Things like to move along the shortest route available; when spacetime is flat, this looks like a straight line. But when spacetime is warped, the shortest route might not look straight anymore. For example, when you are flying over the curved earth, your aeroplane’s flight path will look curved, even if you are going “straight” from A to B. We can see and measure the effect of curved spacetime; for example, the sun’s mass curves spacetime so the Earth moves in a circular orbit around the sun.”
Kat: What does curvature of spacetime mean?
“It is hard to imagine a four-dimensional spacetime, let alone what a curved version of this looks like, so we often simplify this by thinking of an example in two dimensions. We can imagine a two-dimensional spacetime as a rubber sheet; dropping a heavy object on the sheet will bend and distort the sheet. In a similar way, mass or energy distorts spacetime around it.”
Kat: What are black holes?
“Black holes are the regions of strongest gravity in the Universe. They are where the curvature of spacetime is so steep that all paths lead inwards. Eventually nothing can climb up the curvature no matter how fast it goes; even light, the fastest thing in the universe, can’t escape if it gets too close to a black hole.”
Kat: What does detecting gravitational waves mean?
“Einstein first predicted gravitational waves 100 years ago. We have some good evidence they exist from watching binary pulsars (which won the 1993 Nobel Prize). We see the orbit of the binary shrink by the amount predicted by gravitational waves emission, but we don't see the waves themselves. Measuring the waves themselves would be the final piece of evidence for the predictions of Einstein's general relativity.”
Kat: What are “binary pulsars”?
“Neutron stars are old, dead stars that have collapsed down into extremely dense objects: roughly the mass of our Sun compressed into about the size of a city. Pulsars are rotating neutron stars which emit a beam of radiation. As the pulsar rotates, the beam of radiation sweeps across the Earth like a cosmic lighthouse. A binary pulsar is one in which a pulsar orbits another star or, sometimes, another pulsar.”
Kat: What can we learn from gravitational waves?
“Gravitational waves are a new way of observing the Universe. Astronomy traditionally uses light to explore the cosmos, but there are lots of things you can miss because a lot of the universe is dark, including black holes. One source of gravitational waves is two dense objects (like black holes or neutron stars) in orbit around each other.”
Kat: What is LIGO?
“The Laser Interferometer Gravitational-Wave Observatory (LIGO) is made up of two gravitational-wave detectors in the USA, designed and operated by Caltech and MIT. In addition, the LIGO Scientific Collaboration, with around 1,000 scientists from across the world, provides crucial support for LIGO science, from instrument development to data analysis and astronomy. One LIGO observatory is located in Livingston, Louisiana and the other in Hanford, Washington. Each observatory contains an enormous, extremely sensitive laser ruler. We bounce lasers along two 4-kilometre long paths, or “arms”, which are at right angles to each other, and then compare the length of each path. A gravitational wave can change the length of the arms, but the effect is extremely small (one part in 1,000,000,000,000,000,000,000 for the strongest waves), so the instruments need to be extremely sensitive, which became possible using completely new technologies and a new interferometer concept.”
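The scale of that sensitivity can be checked with a line of arithmetic. This is a rough sketch using the arm length and strain quoted above; the diameter of an atomic nucleus is an assumed typical value, not a figure from the interview:

```python
# Rough scale check of the strain sensitivity quoted above.
arm_length_m = 4_000        # each LIGO arm is 4 km (from the text)
strain = 1e-21              # "one part in 1,000,000,000,000,000,000,000"
nucleus_diameter_m = 5e-15  # assumed typical diameter of an atomic nucleus

delta_length_m = arm_length_m * strain
print(delta_length_m)                        # 4e-18 m
print(delta_length_m / nucleus_diameter_m)   # ~0.0008, well under a 1000th of a nucleus
```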
Kat: What is the future for gravitational-wave science?
“LIGO has just finished its first observations using its new “advanced” sensitivity. It will slowly be improved over the next five years, making it even more sensitive. Next year it should be joined by Virgo, a detector in Italy. Another detector, KAGRA, is being built underground in Japan, and there is a plan to put a LIGO detector in India. Plans for a network of third-generation observatories, such as the Einstein Telescope, are under way. Improving the worldwide network of detectors will help us to measure the properties of the signals, especially helping us figure out the position in the sky of the source of the waves. At the same time, Pulsar Timing Arrays are taking data to observe giant black holes at the centres of galaxies.
“Further in the future, there will be a space-based mission called eLISA. This will be much bigger (100 times the size of the Earth) and look for gravitational waves from much more massive objects.”
There are also resources here: http://www.ligo.org/science/faq.php |
WALT: identify the meanings of words in context.
1. Watch the video lesson.
2. Open the 'Vocab Day 25.1.21' file.
3. Use a dictionary or an online dictionary (link below) to find the definitions.
4. Type your definitions into the document.
5. Submit your work.
Tuesday 26th January 2021
WALT: retrieve information and identify key details.
1. Watch today's lesson video.
2. Answer the questions.
3. Submit your work.
Wednesday 27th January 2021
WALT: Summarise the main ideas from more than one paragraph
1. Read through the PDF presentation
2. Answer the questions
3. Submit your learning
Thursday 28th January 2021
1. Read through 'Stone Age'
2. Choose your challenge (*, ** or ***)
3. Answer the comprehension questions
4. Use the answer sheet to mark your own work
5. Submit your learning
Friday 29th January 2021
Use this time for individual reading. |
Radio is a way to send electromagnetic signals over a long distance, to deliver information from one place to another. A machine that sends radio signals is called a transmitter, while a machine that "picks up" the signals is called a receiver or antenna. A machine that does both jobs is a "transceiver". When radio signals are sent out to many receivers at the same time, it is called a broadcast.
Television also uses radio signals to send pictures and sound. Radio signals can start engines moving so that gates open on their own from a distance (see Radio control). Radio signals can be used to lock and unlock the doors in a car from a distance.
History of radio
Many people worked to make radio possible. After James Clerk Maxwell predicted them, Heinrich Rudolf Hertz in Germany first showed that radio waves exist. Guglielmo Marconi in Italy made radio into a practical tool of telegraphy, used mainly by ships at sea. He is sometimes said to have invented radio. Later inventors learned to transmit voices, which led to broadcasting of news, music and entertainment.
Uses of radio
Radio was first created as a way to send telegraph messages between two people without wires, but soon two-way radio brought voice communication, including walkie-talkies and eventually mobile phones.
Now an important use is to broadcast music, news and entertainment, including "talk radio". Radio shows were used before there were TV programs. In the 1930s the US President started sending a weekly message about the country to the American people. Companies that make and send radio programming are called radio stations. These are sometimes run by governments, and sometimes by private companies, which make money by broadcasting advertisements. Other radio stations are supported by local communities; these are called community radio stations. In the early days, manufacturing companies would pay to broadcast complete stories on the radio. These were often plays or dramas. Because companies that made soap often paid for them, these were called "soap operas".
Radio waves are still used to send messages between people. Talking to someone over a radio is different from "talk radio". Citizens band radio and amateur radio use specific radios to talk back and forth. Policemen, firemen and other people who help in emergencies use a radio emergency communication system to communicate (talk to each other). It is like a mobile phone (which also uses radio signals), but the distance reached is shorter and both people must use the same kind of radio.
Microwaves have an even higher frequency and a shorter wavelength. They are also used to transmit television and radio programs, and for other purposes. Communications satellites relay microwaves around the world.
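The link between frequency and wavelength is the speed of light: c = frequency × wavelength. A small sketch, with illustrative example frequencies not taken from the text:

```python
# Frequency and wavelength are linked by c = f * wavelength.
# The example frequencies below are illustrative.
C = 299_792_458.0  # speed of light in m/s

def wavelength_m(frequency_hz: float) -> float:
    return C / frequency_hz

print(round(wavelength_m(10e6), 1))   # shortwave, 10 MHz: 30.0 m
print(round(wavelength_m(100e6), 1))  # FM broadcast, 100 MHz: 3.0 m
print(round(wavelength_m(10e9), 2))   # microwave, 10 GHz: 0.03 m (3 cm)
```

Higher frequency means shorter wavelength, which is why microwaves behave so differently from long-range shortwave signals.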
A radio receiver does not need to be directly in view of the transmitter to receive programme signals. Low frequency radio waves can bend around hills by diffraction, although repeater stations are often used to improve the quality of the signals.
Shortwave radio frequencies are also reflected from an electrically charged layer of the upper atmosphere, called the ionosphere. The waves can bounce between the ionosphere and the Earth to reach receivers that are not in the line of sight because of the curvature of the Earth's surface. They can reach very far, sometimes around the world. |
How many ways have you used electricity today?
In the modern world, electricity is an essential part of day-to-day life. In fact, it is probably impossible to count all the ways we use electricity. From the moment we wake up we use electricity to toast our bread, listen to the radio or refrigerate our orange juice. Electricity powers the lights in the classrooms and offices where we work. The clothes we wear, even the cars we drive, are made by machines that use electricity.
Where does electricity come from?
To see where electricity comes from all we need to do is look inside an aluminum wire. The problem is what we are looking for is too small to see. But, if you could look past the protective covering, past the aluminum wire’s shiny surface, you would see that the wire is made up of tiny particles. These are atoms, the basic building blocks from which everything in the universe is made.
If you could look closely at an atom you would see that the atom itself is made up of even smaller particles. Some of these particles are called electrons. Usually, electrons spin around the centre, or nucleus, of the atom. However, sometimes electrons are knocked out of the outer orbit of an atom. These electrons become “free” electrons.
All materials normally have free electrons that are capable of moving from atom to atom. Some materials, such as metal, contain a great number of free electrons and are called conductors. Conductors are capable of carrying electric current. Other materials, such as wood or rubber, have few free electrons and are called insulators.
If free electrons in a conductor can be made to jump in the same direction at the same time then a stream, or current, of electrons is produced. This is an electric current. In an electrified wire, the free electrons are jumping between atoms, creating an electric current from 1 end to the other. But, how can the electrons jump in the same direction at the same time? By using magnets.
Surrounding the end of every magnet are invisible lines of force called magnetic fields. If you move a straight wire through a magnetic field, the force will push the free electrons from 1 atom to another, creating electric current. If you move several coils of wire quickly and continuously through the field of a powerful magnet, a great quantity of electric current can be produced.
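The picture above of coils moving through a magnetic field is Faraday's law of induction. As a sketch, the voltage induced in a coil rotating in a uniform field peaks at N × B × A × ω; all numbers below are made-up illustrative values, not specifications of any real generator:

```python
import math

# Sketch of Faraday's law for a coil rotating in a uniform magnetic field:
# induced EMF(t) = N * B * A * omega * sin(omega * t).
# All values below are made-up illustrative numbers.
N = 100                   # turns of wire in the coil
B = 1.2                   # magnetic field strength, in tesla
A = 0.5                   # coil area, in square metres
omega = 2 * math.pi * 60  # rotation rate for 60 Hz AC, in rad/s

peak_emf = N * B * A * omega
print(round(peak_emf))  # ~22619 volts at the crest of each AC cycle
```

More turns, a stronger magnet, a bigger coil, or faster rotation all raise the voltage, which is why utility generators are so large.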
How does Manitoba Hydro produce electricity?
We use machines called generators to produce electricity. In a generator, a huge electromagnet, or rotor, is rotated inside a cylinder, called a stator, containing coils and coils of electric wires. Some rotors are 12 metres across and weigh as much as 8 railway cars, nearly 380 tonnes. A great deal of energy is needed to rotate something that size. Manitoba Hydro uses the province’s abundant supply of water.
Electricity generated using waterpower is called hydroelectricity. A hydroelectric generating station uses the natural force of a river as energy. The same water flow or current that pushes a floating canoe down a river can also turn a generator’s rotor.
Typically, there are 2 components to a generating station. A powerhouse which houses the generators and a spillway that allows any water not being used to bypass the powerhouse.
At the heart of a hydroelectric generating station is the turbine runner. Looking like a giant propeller, some turbine runners are nearly 8 metres across. Attached to the rotor by a 5-metre shaft, the turbine runner converts the physical energy of the water into the mechanical energy that drives the generator.
Water flows into a station’s powerhouse through the intake and enters into the scroll case. The scroll case is a spiral area surrounding the turbine. The spiral shape gives the incoming water the spiral movement which pushes the blades of the turbine. As the turbine is turned, the attached rotor also spins, generating electricity. The potential energy of the river is converted into the mechanical energy of a generator which produces electric energy. Just 1 of the 10 generators at the Limestone generating station can produce 133 million watts or 133 megawatts of electricity. That’s enough to supply power to over 12,000 homes.
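Those two figures imply a simple per-home allocation. A rough sanity check using only the numbers quoted above:

```python
# Implied supply per home from the figures quoted above:
# one Limestone generator produces 133 MW, enough for over 12,000 homes.
generator_output_w = 133e6
homes_supplied = 12_000

watts_per_home = generator_output_w / homes_supplied
print(round(watts_per_home))  # ~11083 W, roughly 11 kW available per home
```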
When the natural flow of a river is adequate, a run-of-river plant is built. The run-of-river design reduces the need for a large reservoir of water, or forebay, behind the station. Instead, the water flowing into a generating station upstream is used immediately, not stored for later use. The Limestone generating station located on the Nelson River is an example of a run-of-river design.
When the natural flow of water is inconsistent or inadequate, a more extensive network of dams is constructed to create a large forebay to provide for times when the river’s water level is low. The dam also creates a head of water, or waterfall, to ensure the water has enough force to spin the turbines. The Grand Rapids generating station on the Saskatchewan River is an example of a station that uses a water reservoir.
When you plug a toaster or a stereo into a wall socket, electricity is instantly there to toast your bread or play music. But have you ever wondered how that electricity gets from a hydroelectric generating station to the socket in your wall? For the answer, we need to take another look at those electrons in our aluminum wire.
Remember, magnets passing over a wire or coil of wire push electrons, causing them to jump between atoms. As the electrons jump, they transfer a charge to the next atom. As the next atom receives the charge, its electron jumps. The magnets trigger a chain reaction that moves down the wire. The electric energy travels down the wire because aluminum is a conductor: it conducts the electricity. Manitoba Hydro has an extensive system of wires of varying sizes that conduct electricity throughout the province and to your house. But that’s only part of the answer.
In Manitoba, nearly 70% of our electricity is produced by hydroelectric generating stations on the Nelson River in northern Manitoba. So, we must transmit the renewable hydroelectric power they generate about 1,000 km to southern Manitoba, where most people live and work, and where most businesses are located.
But, electricity doesn’t travel long distances easily. It needs help.
Manitoba Hydro uses high voltage direct current (HVDC) technology to transmit electricity from the north more efficiently. Direct current (DC) is electric current that flows in one direction only. It’s the type of power produced by batteries used in cameras, flashlights and cars. The electricity in your home is alternating current (AC), electric current which reverses direction approximately 60 times a second. The advantage of DC is that the power loss over long distances is considerably less than AC.
A higher voltage is used with DC transmission to increase energy transmission and reduce losses. To explain why, let’s compare electricity flowing through a wire and water flowing through a pipe. Just as great quantities of water can be moved through a large diameter pipe, a great quantity of electricity can be moved through a large diameter wire. Great quantities of water can also be moved through a small diameter pipe, such as a garden hose, by increasing the pressure. Similarly, electricity can be moved in greater quantities through a small diameter wire by increasing the voltage.
We built 3 HVDC transmission lines, known as Bipole I, Bipole II, and Bipole III, to bring electricity from the north. In your home, the electricity you use has 120 volts AC of force. The electricity travelling on the HVDC lines has 500,000 volts or 500 kV of force.
Let’s say the electricity in your house has the same force as a baseball pitched towards you at 100 km per hour. The force of the electricity on the HVDC line would be over 4,000 times more powerful.
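The water-pipe analogy can be sketched numerically. For a fixed delivered power P = V × I, raising the voltage lowers the current, and resistive line loss grows as I² × R. The 10-ohm line resistance below is a made-up illustrative value; the 25 kV generator voltage and 500 kV HVDC voltage come from the text:

```python
# Why transmit at high voltage? For a fixed delivered power P = V * I,
# raising V lowers the current I, and resistive line loss is I^2 * R.
# The 10-ohm resistance is hypothetical; the voltages are from the text.

def line_loss_w(power_w, voltage_v, resistance_ohm):
    current_a = power_w / voltage_v        # I = P / V
    return current_a ** 2 * resistance_ohm

P = 133_000_000                            # one generator's output, in watts
R = 10                                     # hypothetical line resistance

loss_at_25kv = line_loss_w(P, 25_000, R)
loss_at_500kv = line_loss_w(P, 500_000, R)
print(loss_at_25kv / loss_at_500kv)        # 400.0 -- a (500/25)^2 reduction
```

Stepping the voltage up by a factor of 20 cuts resistive losses by a factor of 400, which is why the conversion to 500 kV is worth the extra equipment.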
Hydroelectric generating stations produce AC electricity that has about 25 kV of force. It must be converted to DC and transmitted at an even higher voltage to reduce the power losses experienced over long distances. This conversion is done at the Henday, Radisson, and Keewatinohk converter stations located near Gillam, Manitoba.
Once the electricity has been converted from AC to DC, it travels south to the Dorsey and Riel converter stations outside Winnipeg. At Dorsey and Riel, the electricity is converted back to AC to be delivered to your home, since refrigerators, TVs, computers, and other appliances run on AC electricity. From Dorsey and Riel, AC transmission lines supply other areas of southern Manitoba, plus the northern U.S. states, Saskatchewan, and northwestern Ontario.
The high-voltage AC lines transport the electricity to substations located throughout the province. These substations contain equipment used to transform voltages to lower levels, switch the current in a line on or off, and analyze and measure electricity.
The transformation of electricity from high voltage to low voltage is done using the same principle as generation. The magnetic field of a coil of wire carrying an alternating, or fluctuating current, can cause a fluctuating current in a second coil. In a transformer, 2 separate coils of wire are wrapped around a magnetic iron core. The electricity in the first coil of wire creates a fluctuation in the magnetic field of the iron core. That fluctuation then passes through the iron core, electrifying the second coil of wire. If the second coil of wire has half as many turns the electricity will have half the voltage. If the second coil has twice the number of turns, then the voltage will be doubled.
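The turns-ratio rule above can be sketched as a tiny calculation for an ideal transformer. The 240 V input and the turn counts are hypothetical values chosen only to illustrate the halving and doubling described in the text:

```python
# Ideal transformer: secondary voltage scales with the turns ratio.
# Half as many turns -> half the voltage; twice the turns -> double it.

def secondary_voltage(primary_v, primary_turns, secondary_turns):
    return primary_v * secondary_turns / primary_turns

print(secondary_voltage(240, 100, 50))   # 120.0 (step-down: half the turns)
print(secondary_voltage(240, 100, 200))  # 480.0 (step-up: twice the turns)
```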
From the substations, electricity runs through overhead lines or underground cables to transformers which complete the voltage reduction. These transformers are located near the top of hydro poles for overhead lines or at ground level for underground service.
From the pole, electricity travels through a wire into your home, going first to the meter and the main switch. The wires then lead to a distribution panel. From there, circuits inside the walls lead to the power outlets and light fixtures.
Electricity today and tomorrow: alternative sources
Next time you are boiling water, put a lid on the top of the pot. As the water boils, it expands and turns into steam. The pressure of the expanding steam will eventually shake or raise the lid. A thermal generating station uses this same energy to turn turbines that drive electric generators. The fuel used to heat the water can be coal, oil, natural gas or a nuclear energy source.
We maintain 2 small thermal generating stations, in Brandon and Selkirk. The thermal stations are used to help meet power demand during times of low water flows or to provide extra electricity during periods of high demand, particularly in winter.
Unlike hydroelectric generating stations, thermal generating stations can be built almost anywhere. However, the major disadvantage of thermal stations is that fossil fuel, like the natural gas used by Manitoba Hydro, is not self-renewing like water power.
When you’re outside on a windy day, you can feel the wind push against your body. That push can also spin blades on a wind turbine which produces electricity.
Though not practical in all locations, wind generators are a good idea in those areas where they can be used because wind, like water, is a renewable resource. However, wind generators have 2 main drawbacks. First, they are expensive. Second, not all locations have consistent strong winds.
Biomass is a general term used to describe organic or living matter such as wood. Biomass generation means burning organic matter rather than fossil fuels to create electricity. Potential biomass fuels include residue from the forestry and agricultural industries. Everything from rice hulls to coffee grounds could be burned to create steam.
Plants are able to create food from the light of the sun. This process is called photosynthesis: the word photo means light, and the word synthesis means to put together. We can also use the sun’s light to make electricity. Panels made from silicon are able to convert sunlight to electricity through the photovoltaic process. Voltaic refers to electricity, after the scientist Alessandro Volta.
Photovoltaic (PV) panels can be used to power everything from calculators to appliances in your home. One of the advantages of solar energy is that it doesn’t need fuel. PV panels are expensive to set up and operate. Solar energy in Manitoba is more expensive than our low-cost and renewable hydroelectric power and may increase your carbon footprint.
If you blow up a toy balloon and then let go of the neck, the balloon will shoot away. The force that pushes the balloon, expanding air, is the same force that drives a gas combustion turbine.
A combustion turbine looks and operates something like a jet engine. In a combustion turbine, fuel such as natural gas is mixed with compressed air and combusted. The gases produced during combustion are hot and under pressure. In most combustion turbines, the combustion gases can reach up to 1,300°C. The super-hot, high pressure gases are pushed into the turbine section where they are allowed to expand and apply pressure across the blades of a rotating turbine that drives an electrical generator.
Over the last decade, combustion turbines have gained importance as an electric generation option. In fact, we operate 2 natural gas combustion turbines at the Brandon generating station. |
Earth Day Activities that will teach kids all about the environment and the importance of recycling
Earth Day is a special day for us around here because it also happens to be our oldest daughter’s birthday! We have even incorporated the environmentally friendly Earth Day concept into her birthday party celebrations.
As a family, I wouldn’t call us the absolute best at going green, but we do try to do our part by teaching our children the importance of sustainable living, conservation, and recycling to reduce our carbon footprint.
We do things like recycling everything from the house that can be recycled, using cloth napkins instead of paper, gardening and growing our own veggies, and shopping second-hand and donating instead of always buying brand new.
For Earth Day we enjoy taking a nature walk to gain an appreciation for the world we live in and doing a beach clean-up or neighborhood trash clean-up. You’d be amazed how many bags you can fill with small trash items that you find around the roads.
We have found some other great Earth Day activities that involve using recycled materials to create crafts and art, and learn about the environment and the importance of recycling. Check out our list and try one of these activities at your house or in the classroom.
Earth Day Activities
Learn about Air, Land and Water pollution with a simple observation activity. Or learn about the water cycle with your LEGOs. You can dive deep into learning about the layers of the earth with this amazing layers of the earth bowl science fair project.
Create artwork out of recyclable materials and focus the lesson on Earth Day: Egg Carton Tree | Plastic Water Bottle Flowers | Newspaper Earth Craft | Upcycled Sun Catchers | Recycled Newspaper Garland | Fairy House Night Light | Tin Can Shelf
Help out the animals in your neighborhood with a recycled milk carton bird feeder.
Send a gift to a neighbor with a recycled soup can flower pot.
Use Magazine and Paper scraps to make an eco bead bracelet.
Teach hands-on recycling and make your own paper.
Spread some wild flower love with homemade seed paper – this is so cute to pass out to your friends or neighbors to plant too!
35 Ways to be Eco Friendly is a great place to start for more Earth Day lessons.
Do you have a favorite activity for Earth Day? I’d love to hear all about it!
Earthquake: any sudden shaking of the ground caused by the passage of seismic waves through Earth’s rocks. The earth is a fascinating body with a lot going on within it, much more than the human eye can see, and an earthquake is one such phenomenon, often enveloped in mystery and fear. Imagine that you’re sitting in school when the ground begins to shake: that’s what happens when an earthquake occurs.

How Do Earthquakes Happen?
Earthquakes usually occur on the edges of large sections of the Earth’s crust called tectonic plates. These plates slowly move over long periods of time. Sometimes the edges, which are called fault lines, get stuck while the plates keep moving. Pressure slowly starts to build up where the edges are stuck and, once the pressure gets strong enough, the plates will suddenly move, causing an earthquake. When one block suddenly slips and moves relative to the other along a fault, the energy released creates vibrations called seismic waves that radiate up through the crust to the earth’s surface, causing the ground to shake.

More generally, tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance; earthquakes occur where two plates push against each other and do not slide along each other smoothly. Earthquakes can also, more rarely, occur in the middle of tectonic plates. Although most of these earthquakes are too small to feel, some have been large enough to cause moderate structural damage and personal injury, such as the magnitude 5.6 earthquake that struck near Prague, Oklahoma, in 2011, and probably the magnitude 5.8 earthquake that struck near Pawnee, Oklahoma, in 2016.

[Figure: a simplified cartoon of the crust (brown), mantle (orange), and core (liquid in light gray, solid in dark gray) of the earth. Public domain.]

How Often Do Earthquakes Occur?
Earth is an active place, and earthquakes are always happening somewhere. The National Earthquake Information Center locates about 12,000 to 14,000 earthquakes each year. Earthquakes around magnitude 7 were frequent in the late 1960s and early 1970s. A mainshock is typically followed by aftershocks: smaller earthquakes that occur afterwards in the same place. Depending on the size of the mainshock, aftershocks can continue for weeks, months, and even years.

Earthquakes can strike any location at any time, but history shows they occur in the same general patterns year after year, principally in three large zones of the earth. The world’s greatest earthquake belt, the circum-Pacific seismic belt, is found along the rim of the Pacific Ocean, where about 81 percent of our planet’s largest earthquakes occur. A second belt, running from the Mediterranean region across southern Asia, is thought to account for about 17 percent of the largest and most destructive earthquakes. Beyond these belts, there are many earthquake-prone areas across the world.

Scientists aren’t yet able to predict earthquakes, but people living near fault lines can help protect themselves by living in earthquake-resistant housing and practicing earthquake drills. An earthquake damages buildings and land, causes tsunamis, and has many other disastrous effects; earthquakes are the most deadly of all natural disasters.
Annealing is a process by which the properties of steel are enhanced to meet machinability requirements. It involves heating the steel slightly above its critical temperature (723 degrees Celsius) and allowing it to cool down very slowly. There are 5 types of annealing: Full Annealing, Process Annealing, Stress Relief Annealing, Spheroidise Annealing, and Isothermal Annealing.
The following are some of the advantages of annealing.
· It softens the steel.
· It enhances and improves the machinability of steel.
· It increases the ductility of steel
· It enhances the toughness of steel
· It improves the homogeneity in steel
· The grain size of the steel is refined a lot by annealing
· It prepares the steel for further heat treatment.
With the following data, accurately classify the steel annealing classes using the attributes given.
The evaluation of this dataset is done using Area Under the ROC curve (AUC). |
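As a sketch of how the AUC metric works, here is a minimal pure-Python implementation based on the Mann-Whitney U statistic. The labels and scores below are made-up toy values, not taken from the annealing dataset:

```python
def auc_score(y_true, y_score):
    """Area Under the ROC Curve via the Mann-Whitney U statistic.

    Equals the probability that a randomly chosen positive example
    receives a higher score than a randomly chosen negative one
    (ties count as half a win).
    """
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy check: two negatives, two positives, one misranked pair.
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In practice a library implementation (such as scikit-learn's `roc_auc_score`) would be used, but the O(n²) version above shows exactly what the score measures.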
The HIV epidemic is one of the most serious to affect humanity. Regular oral health care is essential for health, especially for people with HIV. Inadequate oral health care can undermine HIV treatment regimens and diminish quality of life. People who are HIV-positive need comprehensive and individualized oral health care, yet large numbers of people with HIV have unmet needs for oral health care.
Oral health programmes initiated by the WHO can make important contributions to the early diagnosis, prevention and treatment of the disease. Many studies have demonstrated that about 45% of persons who are HIV-positive have an oral fungal, bacterial or viral infection, which often occurs early in the course of the disease. Oral lesions strongly associated with HIV infection include pseudomembranous oral candidiasis, oral hairy leukoplakia, HIV gingivitis and periodontitis, Kaposi sarcoma, non-Hodgkin lymphoma and xerostomia, commonly known as dry mouth, brought about by a decreased level of salivary flow. Saliva is very important as it contains electrolytes, proteins, enzymes and antibacterial compounds. Decreased saliva production means the mouth cannot wash away the sugars and acids produced by bacteria, which causes tooth decay.
If the HIV patient is a child, thrush becomes a common problem of the mouth. Thrush can be treated with medicated mouth rinses. The medications given to children with HIV are usually sugary, so after taking them the child should rinse their mouth with water to avoid tooth decay. Otherwise, decay becomes a big problem for the child, leading to pain, infection, chewing difficulties and malnutrition.
People with HIV suffer more damaging cases of periodontal disease. Periodontal disease is an extreme inflammatory process that affects the tissues and the structures of the bone that supports the teeth. It can happen to anyone, but it mostly occurs in people with low immunity. There are two types of periodontal disease: Necrotizing Ulcerative Periodontitis (NUP) and Linear Gingival Erythema (LGE).
NUP is an indicator of severe immune system compromise. It is characterized by severe pain, bleeding, premature tooth loss and halitosis. Linear Gingival Erythema is characterized by a red band that appears along the gum line, which causes bleeding and a lot of pain.
We know the importance of teaching our kids about their physical health. We teach them to take their vitamins, brush their teeth, and live a life that will keep them physically healthy. However, it’s equally important to teach our kids about mental health. Here are a few ways to get started.
Make All Feelings Allowed
When a child is crying or yelling, it’s easy to dismiss the response as overly dramatic. However, that’s a mistake. Instead, teach kids that all feelings are valid, even if all reactions to those feelings are not. Help your kids learn to name their emotions so they have a vocabulary to use when they are in distress.
Don’t equate anger with being crazy or crying with wanting attention. Simply ask your child what they are feeling and help them learn to express it in a healthy way. This goes a long way in teaching your kids that they have feelings, but that their feelings do not define them. Feeling anxious or angry does not define who they are; it’s just something they experience.
Stop the Stigma
If you grew up in a household where mental health issues were stigmatized, end that cycle now. Give your kids a safe place to express what they are going through without fearing judgment. Make discussions about anxiety and depression a normal part of health talks, and let your child know that suffering from mental health problems is not a failure.
It’s also a good idea to point out what it might look like if someone is struggling with mental health issues. This will help your child evaluate their own feelings and also help them look out for others who are having problems and need help.
Get a Professional Involved
Taking care of mental health is important, so get a professional involved early. A trained psychologist can help your child deal with mental health issues and teach them how to deal with problems as they arise. Seeing a professional can also help children who have trouble truly understanding the importance of their mental wellbeing. Another adult can often help a child understand and listen when they won’t take advice from a parent.
Dr. Ramani Durvasula offers advice on how to care for your mental health and how to teach your kids to do the same. Advice from a professional can help the entire family learn more together.
Model Healthy Behavior
If you want your kids to develop habits that boost their mental health, model the behavior for them. Make sure to let your kids hear you using positive self-talk instead of berating yourself when you make a mistake. Teach your kids to meditate, and let them see you prioritize your mental health.
Tell your kids that you work out, get enough sleep, and choose a healthy diet to help you alleviate mental health problems. It’s also a good idea to tell them when you seek help for mental health. If you take medication or go to a therapist, don’t hide that information from your children. Normalize talking about and prioritizing mental wellbeing by example.
Take a Holistic Approach
Teach your kids that their mental health can be affected by a variety of factors. This will help them figure out what tools to have ready to use when they are having a challenging time. Though it’s not always possible to pull yourself out of a mental health slump without professional help, there are a few habits that might help the low times feel a bit less extreme.
Teach your kids to eat a balanced diet, prioritize quality sleep, and move their bodies every day. Yoga, meditation, and journaling are also great ways to help kids keep big feelings in check. Teach your kids that our bodies and brains work together to keep us physically and mentally functional.
Teach your kids about mental health early to help them combat problems in the future. |
Can electricity be stored for future use?
Energy generated at a power generation station is not stored. The most common way of storing energy for electrical consumption is with batteries, but you’re not really storing electrical energy; you are storing chemical energy.
Energy comes in two basic forms: potential and kinetic.
- Potential Energy is any type of stored energy. It can be chemical, nuclear, gravitational, or mechanical.
- Kinetic Energy is found in movement.
- Thermal energy and temperature: heat is thermal energy, and in solids it can be transferred by conduction. Heat is passed along from the hotter end of an object to the cold end by the vibrating particles in the solid.
- While the sun does emit ultraviolet radiation, the majority of solar energy comes in the form of "light" and "heat," in the visible and infrared regions of the electromagnetic spectrum.
- The kinetic energy of electrons flowing between atoms is electricity. Electromagnetic (EM) radiation is a form of energy that is all around us and takes many forms, such as radio waves, microwaves, infrared waves, light, ultraviolet rays, X-rays and gamma rays. Light from the Sun is all around us.
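The two basic forms can be illustrated with a short sketch. The 1,000 kg of water and 30 m height are hypothetical values, chosen to echo the hydroelectric examples earlier:

```python
# The two basic forms of energy, with hypothetical example values.

def kinetic_energy(mass_kg, velocity_m_s):
    """Kinetic energy of motion: KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_m_s ** 2

def gravitational_pe(mass_kg, height_m, g=9.81):
    """Stored energy of position: PE = m * g * h, in joules."""
    return mass_kg * g * height_m

# Water at the top of a 30 m head stores potential energy; as it
# falls toward a turbine, that stored energy becomes kinetic energy.
print(gravitational_pe(1000, 30))  # 294300.0 joules
```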
9 Ways to Store Energy on the Grid
- Compressed Air Energy Storage (CAES)
- High-Speed Flywheels.
- Pumped Hydro.
- Rail Energy Storage.
- Solid Electrochemical Batteries.
- Flow Batteries.
- Molten Salt Storage.
Electrical energy is stored during times when production (especially from intermittent renewable sources such as wind power, tidal power, and solar power) exceeds consumption, and returned to the grid when production falls below consumption.

Energy storage plays an important role in this balancing act and helps to create a more flexible and reliable grid system. For example, when there is more supply than demand, such as during the night when low-cost power plants continue to operate, the excess electricity generation can be used to power storage devices.

Kinetic energy is motion: of waves, electrons, atoms, molecules, substances, and objects. Potential energy is stored energy and the energy of position (gravitational energy); there are several forms of potential energy. Electrical energy is the movement of electrical charges.
- Solar-powered photovoltaic (PV) panels convert the sun's rays into electricity by exciting electrons in silicon cells using the photons of light from the sun. This electricity can then be used to supply renewable energy to your home or business.
- It starts as stored fuel (coal, oil or nuclear) and is then converted to electricity or steam. Power plants don’t store the output energy. Electricity is sent along transmission lines as it is generated; it is not stored.
- This stored energy is released whenever these chemical bonds are broken in metabolic processes such as cellular respiration. Cellular respiration is the process by which the chemical energy of "food" molecules is released and partially captured in the form of ATP.
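The photovoltaic conversion mentioned above can be turned into a rough output estimate. All the numbers here are hypothetical assumptions: a 1.6 m² panel, 20% efficiency, 4 peak-sun-hours per day, and the standard 1,000 W/m² test irradiance:

```python
# Rough daily PV output estimate. Every value here is a hypothetical
# assumption chosen only to illustrate the calculation.

def pv_daily_energy_kwh(area_m2, efficiency, peak_sun_hours,
                        irradiance_w_m2=1000):
    """Energy (kWh/day) = area * irradiance * efficiency * hours / 1000."""
    return area_m2 * irradiance_w_m2 * efficiency * peak_sun_hours / 1000

print(pv_daily_energy_kwh(1.6, 0.20, 4))  # about 1.28 kWh per day
```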
Updated: 6th October 2018 |
Exclusionary language is embedded in the English language and undermines
the ability of schools to educate students with diverse cultural
backgrounds. By exclusionary language we mean language and messages that
describe someone as not being something -“non-white”- and results in
the excluded listener feeling “othered,” “less than,” or “inadequate”.
In this training, participants will discuss the forms of exclusionary
language and messages that are prevalent in educational settings, and
discuss appropriate personal and professional responses to its use.
- Explore how language can be used to marginalize people and groups.
- Explore examples of Non-ness in your personal environment/communities and identify how it impacts the people around you.
- Explore the concept of White Privilege, the implications it has for relationships and individuals’ experiences, and the impact it has on marginalized people and groups.
- Develop and actively practice the RIR protocol to address situations that affect marginalized groups. |
At the Ontario eSecondary School, we are committed to supporting students who have or require Individual Education Plans.
What is an Individual Education Plan (IEP)?
An IEP is:
- a written plan describing the special education program and/or services required by a student;
- based on assessments that show the student’s strengths and needs that affect their ability to learn;
- a description of the key features of the program and/or services
- not a daily plan or outline of everything that will be taught to the student. The special education program may include accommodations, modifications, and/or an alternative curriculum.
What are accommodations?
- Accommodations are teaching strategies and supports that are necessary for some students with special needs to allow them to achieve learning expectations and demonstrate their learning.
- Accommodations are split into three categories: instructional, environmental, and assessment. They do not change the curriculum expectations for the grade.
- Specifically for OES, accommodations will be implemented for the instructional elements and assessment tasks.
What is the process for developing and reviewing an IEP?
- IEPs are developed by schools when assessments indicate that a student needs special education programming and/or services.
- The IEP remains in place for as long as the special education programming and/or services are required.
- The development of the IEP should be a collaborative process. During the development phase, schools are required to seek feedback through consultation with parents and the student (if possible), so that there is an opportunity to share information with the classroom teacher and/or special education resource teacher to help guide the development of the IEP.
- Gathering information – the IEP team may gather information by observing the student, completing assessments, and reviewing documentation.
- Setting the direction – typically one teacher is given the primary responsibility for coordinating the development of the IEP, working with the IEP team.
- Developing the IEP – the IEP must be completed within 30 school days of the beginning of the school year or semester, or within 30 days of a student beginning a special education program.
- Parent Consultation – letters, surveys, and interviews with parents as well as having them review the draft IEP to learn about their child’s strengths, needs and learning goals from their perspective.
- Implement the IEP – At the OES we will do our best to implement the program and services outlined in the student’s IEP. More specifically, accommodations will assist during final assessments.
- Review and update the IEP – The IEP should reflect any adjustments to learning expectations, teaching strategies, and assessment methods as new assessments provide additional information on the student’s strengths and needs.
Accommodations for Exams
- During the final exam, students on an IEP may be provided accommodations. These may include, but are not limited to, use of assistive technology, extra time (no longer than double time), scribe, reading of the questions, breaks during the exam. Each student will be provided support on an individual basis so accommodations may vary based on previous IEP information and proper documentation.
For further information on how we can support you, contact [email protected] |
Sources of Islamic Law
Muslims believe the Qur'an to be the direct words of Allah, as revealed to and transmitted by the Prophet Muhammad. All sources of Islamic law must be in essential agreement with the Qur'an, the most fundamental source of Islamic knowledge. When the Qur'an itself does not speak directly or in detail about a certain subject, Muslims only then turn to alternative sources of Islamic law.
Sunnah is the traditions or known practices of the Prophet Muhammad, many of which have been recorded in the volumes of Hadith literature. These sources include many things that he said, did, or agreed to -- and he lived his life according to the Qur'an, putting the Qur'an into practice in his own life.
During his lifetime, the Prophet's family and companions observed him and shared with others exactly what they had seen in his words and behaviors -- i.e. how he performed ablutions, how he prayed, and how he performed many other acts of worship. People also asked the Prophet directly for rulings on various matters, and he would pronounce his judgment. All of these details were passed on and recorded, to be referred to in future legal rulings. Many issues concerning personal conduct, community and family relations, political matters, etc. were addressed during the time of the Prophet, decided by him, and recorded. The Sunnah can thus clarify details of what is stated generally in the Qur'an.
In situations when Muslims have not been able to find a specific legal ruling in the Qur'an or Sunnah, the consensus of the community is sought (or at least the consensus of the legal scholars within the community). The Prophet Muhammad once said that his community (i.e. the Muslim community) would never agree on an error.
In cases when something needs a legal ruling, but has not been clearly addressed in the other sources, judges may use analogy, reasoning, and legal precedent to decide new case law. This is often the case when a general principle can be applied to new situations. (See the article Smoking in Islam for an example of this process at work.)
Copyright © 1988-2012 irfi.org. All Rights Reserved. Disclaimer |
Polio is a contagious disease caused by a virus that can lead to nerve damage and paralysis in the most severe cases.
What is Pediatric Polio?
Polio is caused by the polio virus, which lives in an infected person’s throat or intestines. The disease can be mild – causing no symptoms – or it can be severe – leading to permanent paralysis.
Polio is highly contagious and can be spread through bodily fluids. Children diagnosed with polio may experience post-polio syndrome later in life. Because of the polio vaccine, the disease has been eradicated in the United States, so it's especially important for adults or children traveling out of the country to be vaccinated.
What are the different types of Pediatric Polio?
The two types of polio are:
- Nonparalytic – causes flu-like symptoms that last up to 10 days.
- Paralytic – occurs in about 1 percent of polio cases and is classified as:
- Bulbar – affects the brainstem, which can impair breathing, swallowing and other vital functions
- Bulbospinal – affects both the brainstem and spinal cord
- Spinal – affects the spinal cord and paralyzes the legs
What are the signs and symptoms of Pediatric Polio?
Some children will show no signs of the disease. But one in four will show flu-like symptoms.
These symptoms can last up to 10 days.
For those with paralytic polio, the virus begins to attack the nervous system and can cause muscle weakness and paralysis.
Children infected with polio can develop post-polio syndrome as adults, with symptoms including:
- Fatigue (mental and physical)
- Joint pain
- Muscle weakness
What are the causes of Pediatric Polio?
The poliovirus causes polio. The virus lives in an infected person’s throat or intestines. It can be passed on through droplets from sneezing or coughing. It can also be spread through the infected person’s feces.
The virus can live in an infected person for several weeks. Once someone is infected, they may show symptoms in one to two weeks. |
When faced with extreme weather incidents, whether it's flooding, hurricanes, wildfires or severe thunderstorms, it can be challenging to pinpoint the human toll of global climate change. A new study in the journal Science Advances, however, attempts to put some hard numbers on the crisis by extrapolating how many residents of U.S. cities would die from heat-related causes should temperatures continue to increase.
If average temperatures rise by 3 degrees Celsius, or 5.4 degrees Fahrenheit, above preindustrial temperatures, during any one particularly hot year New York City can expect 5,800 people to die from heat. Los Angeles will see 2,500 die and Miami will see 2,300. Even San Francisco, of which it's been said "The coldest winter I ever spent was a summer in San Francisco," could see 328 heat-related deaths. But the research also shows that if action is taken to limit warming, thousands of lives in cities across the U.S. could be saved.
For the study, researchers looked at temperature and heat mortality data from 15 U.S. cities between 1987 and 2000. Using computer models, they simulated various warming scenarios, figuring out how many Americans would die in each city based on global average temperature increases of 1.5, 2 and 3 degrees Celsius during a year that was the warmest in the past 30 years. (We're already more than a third of the way there, having passed 1 degree Celsius over preindustrial temperatures in 2015.) They found that almost all cities involved would see deaths rise, with the totals depending on their regional climate, population and other factors.
But according to the models, if warming was limited to 1.5 degrees Celsius, the goal set forth in the Paris Climate Agreement, it would save upwards of 2,720 lives during years experiencing extreme heat.
“Reducing emissions would lead to a smaller increase in heat-related deaths, assuming no additional actions to adapt to higher temperatures,” co-author Kristie Ebi of the University of Washington tells Oliver Milman at The Guardian. “Climate change, driven by greenhouse gas emissions, is affecting our health, our economy and our ecosystems. This study adds to the body of evidence of the harms that could come without rapid and significant reductions in our greenhouse gas emissions.”
“At the path we are on, toward 3 degrees Celsius warming, we get into temperatures that people have not previously experienced,” co-author Peter Frumhoff, chief climate scientist at the Union of Concerned Scientists tells Bob Berwyn at Inside Climate News. “The core point is, across these cities, thousands of deaths can be avoided by keeping temperatures within the Paris target.”
While most predictions about the effects of climate change have been fairly general, the authors say in a press release that calculating actual death tolls in specific cities changes the narrative.
“We are no longer counting the impact of climate change in terms of degrees of global warming, but rather in terms of number of lives lost,” co-lead author Dann Mitchell from the University of Bristol says. “Our study brings together a wide range of physical and social complexities to show just how human lives could be impacted if we do not cut carbon emissions.”
Berwyn reports that calculating potential heat-related mortality for other cities around the world is difficult since reliable health data is unavailable. But a recent study looking at Europe found that if temperatures increase by 2 degrees Celsius, there will be 132,000 additional deaths on the continent.
While thousands of heat-related deaths in American cities are attention-grabbing, they pale in comparison to the impacts that may already be occurring due to climate change. A report from the Lancet released late last year found that in 2017 alone 153 billion work hours were lost due to extreme heat and hundreds of millions of vulnerable people experienced heat waves. Changes in heat and rainfall have caused diseases transmitted by mosquitoes or water to become 10 percent more infectious than they were in 1950. The same factors are damaging crops and reducing their overall nutrition, leading to three straight years of rising global hunger after decades of improvements. All of those problems are expected to increase along with temperatures.
The impacts on health aren’t all caused by heat and weather disruption either. The World Health Organization released a report last year showing fossil fuel pollution currently causes more than a million preventable deaths annually and contributes to countless cases of asthma, lung disease, heart disease and stroke. According to the study, the health benefits of moving to cleaner energy would be worth double the costs of cutting those emissions.
Berwyn reports that deaths from extreme heat, especially in the United States, are preventable, since heat waves can be forecast and mitigated. Many cities already have heat action plans, including projects like providing air conditioning for seniors and other vulnerable populations. But Julie Arrighi, a climate expert with the International Red Cross Red Crescent Climate Centre, says many of those plans need to be scaled up to meet predicted future temperatures. And in the Global South, which will bear the brunt of the heat, urgent action is needed to help city dwellers prepare for a future full of record-breaking temperatures.
The seas of Titan communicate with each other
Measuring liquid seas and lakes on Saturn’s moon to an accuracy of just 30 centimetres reveals surprising finds. Richard A Lovett reports.
Drawing on data taken by NASA’s Cassini spacecraft as recently as five months before its final plunge into Saturn’s atmosphere, scientists have found that the large northern hemisphere seas of the planet’s moon, Titan, all lie at the same elevation, just like the Earth’s oceans.
What this means, says Alex Hayes, a planetary scientist at Cornell University, US, is that these seas are somehow in “communication” with each other, either via aboveground channels or through subsurface flows that serve to equalise their levels. Unlike Earth’s seas, however, they are composed of liquid methane, ethane and nitrogen at an approximate temperature of minus 180 degrees Celsius.
Hayes and his colleagues report their analysis in the journal Geophysical Research Letters.
The find was made possible by a companion study, in the same journal, led by Cornell University graduate student Paul Corlies, who stitched together radar mapping data from dozens of flybys over the course of more than a decade to create the most complete possible topographic map of the surface of Titan.
It’s a difficult process, says Hayes, because each flyby produced a separate dataset focusing on a single noodle-like swath of Titan’s surface. It was easy to determine the relative topography along any given one of these strips, but comparing the elevations of features on different strips was more difficult.
There are enough strips, however, that they cross each other at numerous points recorded during different flybys. “That allowed them to compare features,” says Hayes.
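The idea behind using crossings can be illustrated with a toy sketch (this is not the authors' actual pipeline; the data and point names are invented). Each swath gives only relative elevations along its own track, but where two swaths cross they measure the same spot, so the difference there reveals the constant offset between their reference levels:

```python
# Hypothetical swath data: point id -> relative elevation in metres.
# "x1" is a point where the two swaths cross (same spot on the surface,
# measured against each swath's own arbitrary datum).
swath_a = {"p1": 120.0, "p2": 135.5, "x1": 128.0}
swath_b = {"q1": 310.0, "x1": 298.0, "q2": 305.5}

def align_swaths(ref, other, crossings):
    """Shift `other` onto `ref`'s datum using shared crossing points.

    At each crossing, both swaths observe the same surface point, so
    ref[c] - other[c] estimates the constant offset between the datums.
    Averaging over several crossings smooths out measurement noise.
    """
    offsets = [ref[c] - other[c] for c in crossings]
    offset = sum(offsets) / len(offsets)
    return {point: height + offset for point, height in other.items()}

aligned_b = align_swaths(swath_a, swath_b, crossings=["x1"])
# After alignment, the crossing point agrees in both swaths:
assert abs(aligned_b["x1"] - swath_a["x1"]) < 1e-9
```

With many swaths and many crossings, the same principle becomes a large least-squares adjustment, which is presumably why stitching more than a decade of flybys into one topographic map was so laborious.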
Ultimately, Corlies says, he was able to map about 9% of Titan’s surface elevations to a resolution of about 35 metres, and another 25 to 30% at resolutions of about 100 to 200 metres. Although that still leaves big gaps, one area that was extensively covered was the north polar region, where scientists had discovered large seas, roughly the size of North America’s Great Lakes, as well as numerous smaller lakes.
One initial finding was that the mapped parts of Titan have about the same altitude range as Australia — about 2300 metres from lowest point to highest.
But more interesting was the fact that the images from the flat, highly reflective surfaces of the seas and lakes gave much stronger signals than the adjoining solid surfaces. “The seas are perfectly flat mirrors most of the time,” Hayes says. “Over the seas, we can obtain accuracies on the order of about 30 to 50 centimetres.”
Using that, Hayes says, it was possible to determine that the seas had the same sea level, except for small differences accounted for by variations in Titan’s gravitational field. (Similar variations in Earth’s gravitational field affect the levels of our own oceans.) This almost certainly means that the seas are linked in some way, presumably via a combination of surface and underground liquid that keeps them at the same level relative to gravity, akin to an earthly water table.
But that just applies to the large seas. Titan’s smaller lakes can be hundreds of metres higher, with a diversity of elevations. But within drainage basins, Hayes says, the lakes appear to share a common elevation, implying that they too share fluid via groundwater flow – “or ground methane in this case,” Hayes says. “Which means there’s a reservoir of liquid we don’t see on the surface.”
Finally, he notes, the small lakes are surrounded by high, steep bluffs. “They look like you took a cookie cutter to the surface and made little cut-outs,” he says.
The most likely explanation, he adds, is that the lakes are in collapsed pits where something dissolved out the underlying material, causing it to cave in and create the lake. On Earth, such features, called karst, can be found in limestone deposits such as Australia’s Nullarbor Plain, but steep-walled topography around Titan’s lakes, sometimes forming 100-metre high rims, is sufficiently different that Hayes and his team aren’t sure what caused it. “We punted and said we’re leaving it to the modellers,” he says.
Looking at lakes and seas is just one of many ways in which the new Titan topographic map can be used. For example, says Corlies, climate modellers can use it to improve their models of Titan’s atmospheric processes.
“Right now most of the models assume a flat surface,” he says, “but we know that on Earth, mountains have a huge effect.”
It can also be used to help understand Titan’s internal structure, he explains, including the question of whether it has a subsurface ocean.
Francis Nimmo, a planetary scientist at the University of California, Santa Cruz, US, who was not a member of either team, says Hayes’ study is an example of how much can be learned about a planetary body once one has a good topographical map.
For example, he says, in the outer solar system we know that on some worlds big impact basins are deep, while on others, impact basins of similar size are much shallower. “The difference arises because some satellites experienced ancient heating events that allowed the ice to flow and the impact basins to fill in, while others didn’t,” he says. “There’s no way we could make that kind of analysis without having topographic data.”
And in this case, the precision of the data is remarkable. “Think about that,” says Hayes. “You’re measuring the elevation of a liquid surface on a moon of Saturn using a spacecraft that launched in 1997, to an accuracy of 30 to 50 centimetres. That’s phenomenal.” |