The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals for . The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term. The remainder term has an exact expression in terms of the periodized Bernoulli functions . The Bernoulli polynomials may be defined recursively by and, for , The periodized Bernoulli functions are defined as where denotes the largest integer less than or equal to , so that always lies in the interval . With this notation, the remainder term equals When , it can be shown that where denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials . The bound is achieved for even when is zero. The term may be omitted for odd but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated as

Low-order cases
The Bernoulli numbers from B1 to B7 are 1/2, 1/6, 0, −1/30, 0, 1/42, 0. Therefore the low-order cases of the Euler–Maclaurin formula are:

Applications

The Basel problem
The Basel problem is to determine the sum 1 + 1/4 + 1/9 + 1/16 + ⋯, the sum of the reciprocals of the squares of the natural numbers. Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π²/6, which he proved in the same year.

Sums involving a polynomial
If is a polynomial and is big enough, then the remainder term vanishes. For instance, if , we can choose to obtain, after simplification,

Approximation of integrals
The formula provides a means of approximating a finite integral. Let be the endpoints of the interval of integration. Fix , the number of points to use in the approximation, and denote the corresponding step size by . Set , so that and . Then:

This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some , depending upon and , such that the terms past order increase rapidly. Thus, the remainder term generally demands close attention. The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation.

Asymptotic expansion of sums
In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is where and are integers. Often the expansion remains valid even after taking the limits or or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example, Here the left-hand side is equal to , namely the first-order polygamma function defined by ; the gamma function is equal to when is a positive integer. This results in an asymptotic expansion
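The displayed equations in this section did not survive extraction. For reference, a commonly quoted form of the Euler–Maclaurin formula (a standard statement, not a reconstruction of the exact expressions that originally appeared here) is

\[
\sum_{i=m}^{n} f(i) \;=\; \int_{m}^{n} f(x)\,dx \;+\; \frac{f(m)+f(n)}{2} \;+\; \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) \;+\; R_p,
\]

and the Basel sum referred to above is

\[
\sum_{n=1}^{\infty} \frac{1}{n^2} \;=\; \frac{\pi^2}{6}.
\]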
statement is true leads us to conclude that the statement is false. This is a contradiction, so the option of the statement being true is not possible. This leaves the second option: that it is false. If we assume the statement is false and that Epimenides is lying about all Cretans being liars, then there must exist at least one Cretan who is honest. This does not lead to a contradiction since it is not required that this Cretan be Epimenides. This means that Epimenides can say the false statement that all Cretans are liars while knowing at least one honest Cretan and lying about this particular Cretan. Hence, from the assumption that the statement is false, it does not follow that the statement is true. So we can avoid a paradox as seeing the statement "all Cretans are liars" as a false statement, which is made by a lying Cretan, Epimenides. The mistake made by Thomas Fowler (and many other people) above is to think that the negation of "all Cretans are liars" is "all Cretans are honest" (a paradox) when in fact the negation is "there exists a Cretan who is honest", or "not all Cretans are liars". The Epimenides paradox can be slightly modified as to not allow the kind of solution described above, as it was in the first paradox of Eubulides but instead leading to a non-avoidable self-contradiction. Paradoxical versions of the Epimenides problem are closely related to a class of more difficult logical problems, including the liar paradox, Socratic paradox and the Burali-Forti paradox, all of which have self-reference in common with Epimenides. The Epimenides paradox is usually classified as a variation on the liar paradox, and sometimes the two are not distinguished. The study of self-reference led to important developments in logic and mathematics in the twentieth century. In other words, it is not a paradox once one realizes "All Cretans are liars" being untrue only means "Not all Cretans are liars" instead of the assumption that "All Cretans are honest". Perhaps better put, for "All Cretans are liars" to be a true statement, it does not mean that all Cretans must lie all the time. In fact, Cretans could tell the truth quite often, but still all be liars in the sense that liars are people prone to deception for dishonest gain. Considering that "All Cretans are liars" has been seen as a paradox only since the 19th century, this seems to resolve the alleged paradox. If 'all Cretans are continuous liars' is actually true, then asking a Cretan
if they are honest would always elicit the dishonest answer 'yes'. So arguably the original proposition is not so much paradoxical as invalid. A contextual reading of the contradiction may also provide an answer to the paradox. The original phrase, "The Cretans, always liars, evil beasts, idle bellies!" asserts not an intrinsic paradox but an opinion of the Cretans held by Epimenides: a stereotyping of his people, not intended as an absolute statement about the people as a whole, but rather a claim about their position regarding their religious beliefs and socio-cultural attitudes. Within the context of his poem the phrase is specific to a certain belief, a context that Callimachus repeats in his poem regarding Zeus. Further, a more pointed answer to the paradox is simply that to be a liar is to state falsehoods; nothing in the statement asserts that everything said is false, only that the Cretans are "always" lying. This is not an absolute statement of fact, and thus we cannot conclude that there is a true contradiction made by Epimenides with this statement.
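The point about negation can also be written formally (the notation here is added for clarity and is not in the original text). Writing \(L(x)\) for "x is a liar" and letting x range over Cretans,

\[
\neg\,\forall x\, L(x) \;\equiv\; \exists x\, \neg L(x),
\]

so the denial of "all Cretans are liars" is only "some Cretan is not a liar", not "every Cretan is honest".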
was merely a water pump, with the engine being transported to the fire by horses. In modern usage, the term engine typically describes devices, like steam engines and internal combustion engines, that burn or otherwise consume fuel to perform mechanical work by exerting a torque or linear force (usually in the form of thrust). Devices converting heat energy into motion are commonly referred to simply as engines. Examples of engines which exert a torque include the familiar automobile gasoline and diesel engines, as well as turboshafts. Examples of engines which produce thrust include turbofans and rockets. When the internal combustion engine was invented, the term motor was initially used to distinguish it from the steam engine—which was in wide use at the time, powering locomotives and other vehicles such as steam rollers. The term motor derives from the Latin verb which means 'to set in motion', or 'maintain motion'. Thus a motor is a device that imparts motion. Motor and engine are interchangeable in standard English. In some engineering jargons, the two words have different meanings, in which engine is a device that burns or otherwise consumes fuel, changing its chemical composition, and a motor is a device driven by electricity, air, or hydraulic pressure, which does not change the chemical composition of its energy source. However, rocketry uses the term rocket motor, even though they consume fuel. A heat engine may also serve as a prime mover—a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by an internal combustion engine may make use of various motors and pumps, but ultimately all such devices derive their power from the engine. Another way of looking at it is that a motor receives power from an external source, and then converts it into mechanical energy, while an engine creates power from pressure (derived directly from the explosive force of combustion or other chemical reaction, or secondarily from the action of some such force on other substances such as air, water, or steam). History Antiquity Simple machines, such as the club and oar (examples of the lever), are prehistoric. More complex engines using human power, animal power, water power, wind power and even steam power date back to antiquity. Human power was focused by the use of simple engines, such as the capstan, windlass or treadmill, and with ropes, pulleys, and block and tackle arrangements; this power was transmitted usually with the forces multiplied and the speed reduced. These were used in cranes and aboard ships in Ancient Greece, as well as in mines, water pumps and siege engines in Ancient Rome. The writers of those times, including Vitruvius, Frontinus and Pliny the Elder, treat these engines as commonplace, so their invention may be more ancient. By the 1st century AD, cattle and horses were used in mills, driving machines similar to those powered by humans in earlier times. According to Strabo, a water-powered mill was built in Kaberia of the kingdom of Mithridates during the 1st century BC. Use of water wheels in mills spread throughout the Roman Empire over the next few centuries. Some were quite complex, with aqueducts, dams, and sluices to maintain and channel the water, along with systems of gears, or toothed-wheels made of wood and metal to regulate the speed of rotation. More sophisticated small devices, such as the Antikythera Mechanism used complex trains of gears and dials to act as calendars or predict astronomical events. 
In a poem by Ausonius in the 4th century AD, he mentions a stone-cutting saw powered by water. Hero of Alexandria is credited with many such wind and steam powered machines in the 1st century AD, including the Aeolipile and the vending machine, often these machines were associated with worship, such as animated altars and automated temple doors. Medieval Medieval Muslim engineers employed gears in mills and water-raising machines, and used dams as a source of water power to provide additional power to watermills and water-raising machines. In the medieval Islamic world, such advances made it possible to mechanize many industrial tasks previously carried out by manual labour. In 1206, al-Jazari employed a crank-conrod system for two of his water-raising machines. A rudimentary steam turbine device was described by Taqi al-Din in 1551 and by Giovanni Branca in 1629. In the 13th century, the solid rocket motor was invented in China. Driven by gunpowder, this simplest form of internal combustion engine was unable to deliver sustained power, but was useful for propelling weaponry at high speeds towards enemies in battle and for fireworks. After invention, this innovation spread throughout Europe. Industrial Revolution The Watt steam engine was the first type of steam engine to make use of steam at a pressure just above atmospheric to drive the piston helped by a partial vacuum. Improving on the design of the 1712 Newcomen steam engine, the Watt steam engine, developed sporadically from 1763 to 1775, was a great step in the development of the steam engine. Offering a dramatic increase in fuel efficiency, James Watt's design became synonymous with steam engines, due in no small part to his business partner, Matthew Boulton. It enabled rapid development of efficient semi-automated factories on a previously unimaginable scale in places where waterpower was not available. Later development led to steam locomotives and great expansion of railway transportation. As for internal combustion piston engines, these were tested in France in 1807 by de Rivaz and independently, by the Niépce brothers. They were theoretically advanced by Carnot in 1824. In 1853–57 Eugenio Barsanti and Felice Matteucci invented and patented an engine using the free-piston principle that was possibly the first 4-cycle engine. The invention of an internal combustion engine which was later commercially successful was made during 1860 by Etienne Lenoir. In 1877 the Otto cycle was capable of giving a far higher power to weight ratio than steam engines and worked much better for many transportation applications such as cars and aircraft. Automobiles The first commercially successful automobile, created by Karl Benz, added to the interest in light and powerful engines. The lightweight gasoline internal combustion engine, operating on a four-stroke Otto cycle, has been the most successful for light automobiles, while the more efficient Diesel engine is used for trucks and buses. However, in recent years, turbo Diesel engines have become increasingly popular, especially outside of the United States, even for quite small cars. Horizontally opposed pistons In 1896, Karl Benz was granted a patent for his design of the first engine with horizontally opposed pistons. His design created an engine in which the corresponding pistons move in horizontal cylinders and reach top dead center simultaneously, thus automatically balancing each other with respect to their individual momentum. 
Engines of this design are often referred to as flat engines because of their shape and lower profile. They were used in the Volkswagen Beetle, the Citroën 2CV, some Porsche and Subaru cars, many BMW and Honda motorcycles, and propeller aircraft engines. Advancement Continuance of the use of the internal combustion engine for automobiles is partly due to the improvement of engine control systems (onboard computers providing engine management processes, and electronically controlled fuel injection). Forced air induction by turbocharging and supercharging have increased power outputs and engine efficiencies. Similar changes have been applied to smaller diesel engines giving them almost the same power characteristics as gasoline engines. This is especially evident with the popularity of smaller diesel engine propelled cars in Europe. Larger diesel engines are still often used in trucks and heavy machinery, although they require special machining not available in most factories. Diesel engines produce lower hydrocarbon and emissions, but greater particulate and pollution, than gasoline engines. Diesel engines are also 40% more fuel efficient than comparable gasoline engines. Increasing power In the first half of the 20th century, a trend of increasing engine power occurred, particularly in the U.S models. Design changes incorporated all known methods of increasing engine capacity, including increasing the pressure in the cylinders to improve efficiency, increasing the size of the engine, and increasing the rate at which the engine produces work. The higher forces and pressures created by these changes created engine vibration and size problems that led to stiffer, more compact engines with V and opposed cylinder layouts replacing longer straight-line arrangements. Combustion efficiency Optimal combustion efficiency in passenger vehicles is reached with a coolant temperature of around . Engine configuration Earlier automobile engine development produced a much larger range of engines than is in common use today. Engines have ranged from 1- to 16-cylinder designs with corresponding differences in overall size, weight, engine displacement, and cylinder bores. Four cylinders and power ratings from 19 to 120 hp (14 to 90 kW) were followed in a majority of the models. Several three-cylinder, two-stroke-cycle models were built while most engines had straight or in-line cylinders. There were several V-type models and horizontally opposed two- and four-cylinder makes too. Overhead camshafts were frequently employed. The smaller engines were commonly air-cooled and located at the rear of the vehicle; compression ratios were relatively low. The 1970s and 1980s saw an increased interest in improved fuel economy, which caused a return to smaller V-6 and four-cylinder
engine is perhaps the most common example of a chemical heat engine, in which heat from the combustion of a fuel causes rapid pressurisation of the gaseous combustion products in the combustion chamber, causing them to expand and drive a piston, which turns a crankshaft. Unlike internal combustion engines, a reaction engine (such as a jet engine) produces thrust by expelling reaction mass, in accordance with Newton's third law of motion. Apart from heat engines, electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and clockwork motors in wind-up toys use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces and ultimately motion (a chemical engine, but not a heat engine). Chemical heat engines which employ air (ambient atmospheric gas) as a part of the fuel reaction are regarded as airbreathing engines. Chemical heat engines designed to operate outside of Earth's atmosphere (e.g. rockets, deeply submerged submarines) need to carry an additional fuel component called the oxidizer (although there exist super-oxidizers suitable for use in rockets, such as fluorine, a more powerful oxidant than oxygen itself); or the application needs to obtain heat by non-chemical means, such as by means of nuclear reactions.

All chemically fueled heat engines emit exhaust gases. The cleanest engines emit water only. Strict zero-emissions generally means zero emissions other than water and water vapour. Only heat engines which combust pure hydrogen (fuel) and pure oxygen (oxidizer) achieve zero-emission by a strict definition (in practice, one type of rocket engine). If hydrogen is burnt in combination with air (all airbreathing engines), a side reaction occurs between atmospheric oxygen and atmospheric nitrogen, resulting in small emissions of NOx, which is adverse even in small quantities. If a hydrocarbon (such as alcohol or gasoline) is burnt as fuel, large quantities of CO2, a potent greenhouse gas, are emitted. Hydrogen and oxygen from air can be reacted into water by a fuel cell without side production of NOx, but this is an electrochemical engine, not a heat engine.

Terminology
The word engine derives from Old French engin, from the Latin ingenium, the root of the word ingenious. Pre-industrial weapons of war, such as catapults, trebuchets and battering rams, were called siege engines, and knowledge of how to construct them was often treated as a military secret. The word gin, as in cotton gin, is short for engine. Most mechanical devices invented during the industrial revolution were described as engines—the steam engine being a notable example. However, the original steam engines, such as those by Thomas Savery, were not mechanical engines but pumps. In this manner, a fire engine in its original form was merely a water pump, with the engine being transported to the fire by horses.
Werner, Prime Minister of Luxembourg. The decision to form the Economic and Monetary Union of the European Union (EMU) was accepted and later became part of the Maastricht Treaty (the Treaty on European Union).

Processes in the European EMU
The EMU involves four main activities. The first responsibility is implementing an effective monetary policy for the euro area with price stability. There is a group of economists whose only role is studying how to improve the monetary policy while maintaining price stability. They conduct research, and their results are presented to the leaders of the EMU. Thereafter, the role of the leaders is to find a suitable way to implement the economists' work into their countries' policies. Maintaining price
stability is a long-term goal for all states in the EU, due to the effects it might have on the Euro as a currency. Secondly, the EMU must coordinate economic and fiscal policies in EU countries. They must find an equilibrium between the implementation of monetary and fiscal policies. They will advise countries to have greater coordination, even if that means having countries tightly coupled with looser monetary and tighter fiscal policy. Not coordinating the monetary market could result in risking an unpredictable situation. The EMU also deliberates on a mixed policy option, which has been shown to be beneficial in some empirical studies. Thirdly, the EMU ensures that the single market runs smoothly. The member countries respect the decisions made by the EMU and ensure that their actions will be in favor of a stable market. Finally, regulations of the EMU aid in supervising and monitoring financial institutions. There is an imperative need for all members of the EMU to act in unison. Therefore, the EMU has to have institutions supervising all the member states to protect the main aim of the EMU.

Roles of national governments
The economic roles of nations within the EMU are to:
control fiscal policy that concerns government budgets
control tax policies that determine how income is raised
control structural policies that determine pension systems, labor, and capital-market regulations

List of economic and monetary unions
Economic and Monetary Union of the European Union (EMU) (1999/2002) with the Euro for the Eurozone members
de facto the OECS Eastern Caribbean Currency Union with the East Caribbean dollar in the CSME (2006)
de facto Switzerland–Liechtenstein
Proposed
Previous EMUs
Monetary union of the Belgium–Luxembourg Economic Union (1922–2002), superseded by the European EMU.

See also
North American Union and North American Currency Union (Amero)
Pacific Union (one proposal for Australian dollar)
its administrative and budgetary documents in its public documents register. The discharge process for the 2010 budget required additional clarifications. In February 2012, the European Parliament's Committee on Budgetary Control published a draft report identifying areas of concern in the use of funds for the 2010 budget, such as a 26% budget increase from 2009 to 2010, to €50 600 000, and questioned whether the principles of maximum competition and value-for-money were honored in hiring, as well as possible fictitious employees. The EEA's Executive Director refuted allegations of irregularities in a public hearing. On 27 March 2012 Members of the European Parliament (MEPs) voted on the report and commended the cooperation between the Agency and NGOs working in the environmental area. On 23 October 2012, the European Parliament voted and granted the discharge to the European Environment Agency for its 2010 budget.

Executive directors

International cooperation
In addition to its 32 members and six Balkan cooperating countries, the EEA also cooperates and fosters partnerships with its neighbours and other countries and regions, mostly in the context of the European Neighbourhood Policy:
EaP states: Belarus, Ukraine, Moldova, Armenia, Azerbaijan, Georgia
UfM states: Algeria, Egypt, Israel, Jordan, Lebanon, Libya, Morocco, Palestinian Authority, Syria, Tunisia
other ENPI states: Russia
Central Asia states: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, Uzbekistan
Additionally the EEA cooperates with multiple international organizations and the corresponding agencies of the following countries:
United States of America (Environmental Protection Agency)
Canada (Environment Canada)

Official languages
The 26 official languages used by the EEA are: Bulgarian, Czech, Croatian, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Hungarian, Icelandic, Italian, Lithuanian, Latvian, Maltese, Dutch, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovene, Swedish and Turkish.

See also
Agencies of the European Union
Citizen Science, cleanup projects that people can take part in
EU environmental policy
List of atmospheric dispersion models
List of environmental organizations
Confederation of European Environmental Engineering Societies
Coordination of Information on the Environment
European Agency for Safety and Health at Work
Environment Agency
policy, and to inform the general public.

Organization
The EEA was established by the European Economic Community (EEC) Regulation 1210/1990 (amended by EEC Regulation 933/1999 and EC Regulation 401/2009) and became operational in 1994, headquartered in Copenhagen, Denmark. The agency is governed by a management board composed of representatives of the governments of its 32 member states, a European Commission representative and two scientists appointed by the European Parliament, assisted by its Scientific Committee. The current Executive Director of the agency is Professor Hans Bruyninckx, who has been appointed for a five-year term. He is the successor of Professor Jacqueline McGlade.

Member countries
The member states of the European Union are members; however other states may become members of it by means of agreements concluded between them and the EU. It was the first EU body to open its membership to the 13 candidate countries (pre-2004 enlargement). The EEA has 32 member countries and six cooperating countries. The 32 member countries include the 27 European Union member states together with Iceland, Liechtenstein, Norway, Switzerland and Turkey. Since Brexit in 2020, the UK is no longer a member of the EU and therefore no longer a member country of the EEA. The six Western Balkan countries are cooperating countries: Albania, Bosnia and Herzegovina, Montenegro, North Macedonia, Serbia as well as Kosovo under the UN Security Council Resolution 1244/99. These cooperation activities are integrated into Eionet and are supported by the EU under the "Instrument for Pre-Accession Assistance". The EEA is an active member of the EPA Network.

Reports, data and knowledge
The European Environment Agency (EEA) produces assessments based on quality-assured data on a wide range of issues from biodiversity, air quality and transport to climate change. These assessments are closely linked to the European Union's environment policies and legislation and help monitor progress in some areas and indicate areas where additional efforts are needed. As required in its founding regulation, the EEA publishes its flagship report, the State and Outlook of Europe's environment (SOER), which is an integrated assessment analysing trends, progress to targets and the outlook for the mid- to long-term. The EEA shares this information, including the datasets used in its assessments, through its main website and a number of thematic information platforms such as the Biodiversity Information System for Europe (BISE) and Climate-ADAPT. The Climate-ADAPT knowledge platform presents information and data on expected climatic changes, the vulnerability of regions and sectors, adaptation case studies, adaptation options, adaptation planning tools, and EU policy.

European Nature Information System
The European Nature Information System (EUNIS) provides access to the publicly available data in the EUNIS database for species, habitat types and protected sites across Europe. It is part of the European Biodiversity data centre (BDC), and is maintained by the EEA. The database contains data on species, habitat types and designated sites from the framework of Natura 2000, from material compiled by the European Topic Centre on Biological Diversity mentioned in relevant international conventions and in the IUCN Red Lists,
an electric motor instead of an internal combustion engine
Electric car, a type of electric vehicle
Electronvolt (eV), in physics, a unit of energy
Evolution-Data Optimized, a telecommunications standard for the wireless transmission of data through radio signals
Expected value, the mean of a random variable's probability distribution
Exposure value, a combination of shutter speed and aperture in photography
Extended Validation Certificate, a type of X.509 Certificate used in securing computer communications
Extracellular vesicle, a membrane-bound vesicle
Stereo-4, also known as EV (Electro-Voice), a quadraphonic sound system developed in 1970
Exploration vessel (E/V), a type of marine vessel
Other uses
Land of Ev, a fictional country in the Oz books of L. Frank Baum and his successors
Eesti Vabariik, Estonian for Republic of Estonia
Eingetragener Verein ("registered association"; e. V.), a legal status for a registered voluntary association in Germany and Austria
Enterprise Village, an educational program co-managed by the Stavros Institute in Pinellas County, Florida
Era vulgaris, Latin for the Common Era
EuroVelo, a network of long-distance cycling routes in Europe
See also
EV1 (disambiguation)
EVS
element of a list can be accessed in constant time. The first element of a list is called the head of the list. The remainder of a list when its head has been removed is called the tail of the list. Maps Maps contain a variable number of key-value associations. The syntax is#{Key1=>Value1,...,KeyN=>ValueN}. Two forms of syntactic sugar are provided: Strings Strings are written as doubly quoted lists of characters. This is syntactic sugar for a list of the integer Unicode code points for the characters in the string. Thus, for example, the string "cat" is shorthand for [99,97,116]. Records Records provide a convenient way for associating a tag with each of the elements in a tuple. This allows one to refer to an element of a tuple by name and not by position. A pre-compiler takes the record definition and replaces it with the appropriate tuple reference. Erlang has no method to define classes, although there are external libraries available. "Let it crash" coding style Erlang is designed with a mechanism that makes it easy for external processes to monitor for crashes (or hardware failures), rather than an in-process mechanism like exception handling used in many other programming languages. Crashes are reported like other messages, which is the only way processes can communicate with each other, and subprocesses can be spawned cheaply. The "let it crash" philosophy prefers that a process be completely restarted rather than trying to recover from a serious failure. Though it still requires handling of errors, this philosophy results in less code devoted to defensive programming where error-handling code is highly contextual and specific. Supervisor trees A typical Erlang application is written in the form of a supervisor tree. This architecture is based on a hierarchy of processes in which the top level process is known as a "supervisor". The supervisor then spawns multiple child processes that act either as workers or more, lower level supervisors. Such hierarchies can exist to arbitrary depths and have proven to provide a highly scalable and fault-tolerant environment within which application functionality can be implemented. Within a supervisor tree, all supervisor processes are responsible for managing the lifecycle of their child processes, and this includes handling situations in which those child processes crash. Any process can become a supervisor by first spawning a child process, then calling erlang:monitor/2 on that process. If the monitored process then crashes, the supervisor will receive a message containing a tuple whose first member is the atom 'DOWN'. The supervisor is responsible firstly for listening for such messages and secondly, for taking the appropriate action to correct the error condition. Concurrency and distribution orientation Erlang's main strength is support for concurrency. It has a small but powerful set of primitives to create processes and communicate among them. Erlang is conceptually similar to the language occam, though it recasts the ideas of communicating sequential processes (CSP) in a functional framework and uses asynchronous message passing. Processes are the primary means to structure an Erlang application. They are neither operating system processes nor threads, but lightweight processes that are scheduled by BEAM. Like operating system processes (but unlike operating system threads), they share no state with each other. The estimated minimal overhead for each is 300 words. Thus, many processes can be created without degrading performance. 
In 2005, a benchmark with 20 million processes was successfully performed with 64-bit Erlang on a machine with 16 GB random-access memory (RAM; total 800 bytes/process). Erlang has supported symmetric multiprocessing since release R11B of May 2006. While threads need external library support in most languages, Erlang provides language-level features to create and manage processes with the goal of simplifying concurrent programming. Though all concurrency is explicit in Erlang, processes communicate using message passing instead of shared variables, which removes the need for explicit locks (a locking scheme is still used internally by the VM). Inter-process communication works via a shared-nothing asynchronous message passing system: every process has a "mailbox", a queue of messages that have been sent by other processes and not yet consumed. A process uses the receive primitive to retrieve messages that match desired patterns. A message-handling routine tests messages in turn against each pattern, until one of them matches. When the message is consumed and removed from the mailbox the process resumes execution. A message may comprise any Erlang structure, including primitives (integers, floats, characters, atoms), tuples, lists, and functions. The code example below shows the built-in support for distributed processes:

% Create a process and invoke the function web:start_server(Port, MaxConnections)
ServerProcess = spawn(web, start_server, [Port, MaxConnections]),

% Create a remote process and invoke the function
% web:start_server(Port, MaxConnections) on machine RemoteNode
RemoteProcess = spawn(RemoteNode, web, start_server, [Port, MaxConnections]),

% Send a message to ServerProcess (asynchronously). The message consists of a tuple
% with the atom "pause" and the number "10".
ServerProcess ! {pause, 10},

% Receive messages sent to this process
receive
        a_message -> do_something;
        {data, DataContent} -> handle(DataContent);
        {hello, Text} -> io:format("Got hello message: ~s", [Text]);
        {goodbye, Text} -> io:format("Got goodbye message: ~s", [Text])
end.

As the example shows, processes may be created on remote nodes, and communication with them is transparent in the sense that communication with remote processes works exactly as communication with local processes. Concurrency supports the primary method of error-handling in Erlang. When a process crashes, it neatly exits and sends a message to the controlling process which can then take action, such as starting a new process that takes over the old process's task.

Implementation
The official reference implementation of Erlang uses BEAM. BEAM is included in the official distribution of Erlang, called Erlang/OTP. BEAM executes bytecode which is converted to threaded code at load time. It also includes a native code compiler on most platforms, developed by the High Performance
Erlang, using list comprehension:

%% qsort:qsort(List)
%% Sort a list of items
-module(qsort).     % This is the file 'qsort.erl'
-export([qsort/1]). % A function 'qsort' with 1 parameter is exported (no type, no name)

qsort([]) -> []; % If the list [] is empty, return an empty list (nothing to sort)
qsort([Pivot|Rest]) ->
    % Compose recursively a list with 'Front' for all elements that should be before 'Pivot'
    % then 'Pivot' then 'Back' for all elements that should be after 'Pivot'
    qsort([Front || Front <- Rest, Front < Pivot]) ++ [Pivot] ++ qsort([Back || Back <- Rest, Back >= Pivot]).

The above example recursively invokes the function qsort until nothing remains to be sorted. The expression [Front || Front <- Rest, Front < Pivot] is a list comprehension, meaning "Construct a list of elements Front such that Front is a member of Rest, and Front is less than Pivot." ++ is the list concatenation operator. A comparison function can be used for more complicated structures for the sake of readability. The following code would sort lists according to length:

% This is file 'listsort.erl' (the compiler is made this way)
-module(listsort).
% Export 'by_length' with 1 parameter (don't care about the type and name)
-export([by_length/1]).

by_length(Lists) -> % Use 'qsort/2' and provides an anonymous function as a parameter
    qsort(Lists, fun(A,B) -> length(A) < length(B) end).

qsort([], _) -> []; % If list is empty, return an empty list (ignore the second parameter)
qsort([Pivot|Rest], Smaller) ->
    % Partition list with 'Smaller' elements in front of 'Pivot' and not-'Smaller' elements
    % after 'Pivot' and sort the sublists.
    qsort([X || X <- Rest, Smaller(X,Pivot)], Smaller) ++ [Pivot] ++ qsort([Y || Y <- Rest, not(Smaller(Y, Pivot))], Smaller).

A Pivot is taken from the first parameter given to qsort() and the rest of Lists is named Rest. Note that the expression [X || X <- Rest, Smaller(X,Pivot)] is no different in form from [Front || Front <- Rest, Front < Pivot] (in the previous example) except for the use of a comparison function in the last part, saying "Construct a list of elements X such that X is a member of Rest, and Smaller is true", with Smaller being defined earlier as

fun(A,B) -> length(A) < length(B) end

The anonymous function is named Smaller in the parameter list of the second definition of qsort so that it can be referenced by that name within that function. It is not named in the first definition of qsort, which deals with the base case of an empty list and thus has no need of this function, let alone a name for it.

Data types
Erlang has eight primitive data types:

Integers
Integers are written as sequences of decimal digits, for example, 12, 12375 and -23427 are integers. Integer arithmetic is exact and only limited by available memory on the machine. (This is called arbitrary-precision arithmetic.)

Atoms
Atoms are used within a program to denote distinguished values. They are written as strings of consecutive alphanumeric characters, the first character being lowercase. Atoms can contain any character if they are enclosed within single quotes and an escape convention exists which allows any character to be used within an atom. Atoms are never garbage collected and should be used with caution, especially if using dynamic atom generation.

Floats
Floating point numbers use the IEEE 754 64-bit representation.

References
References are globally unique symbols whose only property is that they can be compared for equality. They are created by evaluating the Erlang primitive make_ref().
Binaries
A binary is a sequence of bytes. Binaries provide a space-efficient way of storing binary data. Erlang primitives exist for composing and decomposing binaries and for efficient input/output of binaries.

Pids
Pid is short for process identifier; a Pid is created by the Erlang primitive spawn(...). Pids are references to Erlang processes.

Ports
Ports are used to communicate with the external world. Ports are created with the built-in function open_port. Messages can be sent to and received from ports, but these messages must obey the so-called "port protocol."

Funs
Funs are function closures. Funs are created by expressions of the form: fun(...) -> ... end.

And three compound data types:

Tuples
Tuples are containers for a fixed number of Erlang data types. The syntax {D1,D2,...,Dn} denotes a tuple whose arguments are D1, D2, ... Dn. The arguments can be primitive data types or compound data types. Any element of a tuple can be accessed in constant time.

Lists
Lists are containers for a variable number of Erlang data types. The syntax [Dh|Dt] denotes a list whose first element is Dh, and whose remaining elements are the list Dt. The syntax [] denotes an empty list. The syntax [D1,D2,..,Dn] is short for [D1|[D2|..|[Dn|[]]]]. The first element of a list can be accessed in constant time.
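As a quick, self-contained illustration of the compound types and funs just described (this example is added here and is not from the original text; the module name types_demo is arbitrary):

-module(types_demo).
-export([demo/0]).

% Build one value of each compound type described above and return them in a tuple.
demo() ->
    Point = {point, 1.5, 2.5},           % tuple: a fixed number of elements, tagged with an atom
    [Head | Tail] = [1, 2, 3],           % list: Head is bound to 1, Tail to [2,3]
    Add = fun(A, B) -> A + B end,        % fun: a function closure created with fun ... end
    Bin = <<1, 2, 3>>,                   % binary: a space-efficient sequence of bytes
    {Point, Head, Tail, Add(2, 40), byte_size(Bin)}.

Calling types_demo:demo() from the shell should return {{point,1.5,2.5},1,[2,3],42,3}.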
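The supervision pattern described earlier (a parent process monitoring a child with erlang:monitor/2 and reacting to the 'DOWN' message) can be sketched in a few lines. This is an illustrative sketch, not code from the original text; the module name restarter and the crash trigger are made up:

-module(restarter).
-export([start/0, child_loop/0]).

% Spawn a child, monitor it, and restart it whenever it goes down.
start() ->
    Pid = spawn(?MODULE, child_loop, []),
    Ref = erlang:monitor(process, Pid),        % the monitor call described above
    receive
        {'DOWN', Ref, process, Pid, Reason} -> % delivered when the monitored child exits
            io:format("child exited: ~p, restarting~n", [Reason]),
            start()                            % "let it crash": start a fresh child
    end.

% A child process that crashes on demand.
child_loop() ->
    receive
        crash  -> exit(boom);                  % simulate a serious failure
        _Other -> child_loop()
    end.

If the child is sent the atom crash, it exits with reason boom; the 'DOWN' message then arrives at the parent, which spawns a replacement rather than trying to recover the failed process.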
collection (GCC) and Open Watcom compilers are supported. Alternatively, Euphoria programs may be bound with the interpreter to create stand-alone executables. A number of graphical user interface (GUI) libraries are supported including Win32lib and wrappers for wxWidgets, GTK+ and IUP. Euphoria has a simple built-in database and wrappers for a variety of other databases. Overview The Euphoria language is a general purpose procedural language that focuses on simplicity, legibility, rapid development and performance via several means. Simplicity – It uses just four built-in data types (see below) and implements automatic garbage collection. Legibility – The syntax favors simple English keywords over the use of punctuation to delineate constructs. Rapid development – An interpreter encourages prototyping and incremental development. Performance – An efficient reference-counting garbage collector correctly handles cyclic references. History Developed as a personal project to invent a programming language from scratch, Euphoria was created by Robert Craig on an Atari Mega-ST. Many design ideas for the language came from Craig's Master's thesis in computer science at the University of Toronto. Craig's thesis was heavily influenced by the work of John Backus on functional programming (FP) languages. Craig ported his original Atari implementation to the 16-bit DOS platform and Euphoria was first released, version 1.0, in July 1993 under a proprietary licence. The original Atari implementation is described by Craig as "primitive" and has not been publicly released. Euphoria continued to be developed and released by Craig via his company Rapid Deployment Software (RDS) and website rapideuphoria.com. In October 2006 RDS released version 3 of Euphoria and announced that henceforth Euphoria would be freely distributed under an open-source software licence. RDS continued to develop Euphoria, culminating with the release of version 3.1.1 in August, 2007. Subsequently, RDS ceased unilateral development of Euphoria and the openEuphoria Group took over ongoing development. The openEuphoria Group released version 4 in December, 2010 along with a new logo and mascot for the openEuphoria project. Version 3.1.1 remains an important milestone release, being the last version of Euphoria which supports the DOS platform. Euphoria is an acronym for End-User Programming with Hierarchical Objects for Robust Interpreted Applications although there is some suspicion that this is a backronym. The Euphoria interpreter was originally written in C. With the release of version 2.5 in November 2004 the Euphoria interpreter was split into two parts: a front-end parser, and a back-end interpreter. The front-end is now written in Euphoria (and used with the Euphoria-to-C translator and the Binder). The main back-end and run time library are written in C. Features Euphoria was conceived and developed with the following design
system Low-level memory handling Straightforward wrapping of (or access to) C libraries Execution modes Interpreter C translator (E2C) for standalone executables or dynamic linking Bytecode compiler and interpreter (shrouder) The Binder binds the Euphoria source code to the interpreter to create an executable. A read–eval–print loop (REPL) version is on the openEuphoria roadmap. Use Euphoria is designed to readily facilitate handling of dynamic sets of data of varying types and is particularly useful for string and image processing. Euphoria has been used in artificial intelligence experiments, the study of mathematics, for teaching programming, and to implement fonts involving thousands of characters. A large part of the Euphoria interpreter is written in Euphoria. Data types Euphoria has two basic data types: Atom – A number, implemented as a 31-bit signed integer or a 64-bit IEEE floating-point. Euphoria dynamically changes between integer and floating point representation according to the current value. Sequence – A vector (array) with zero or more elements. Each element may be an atom or another sequence. The number of elements in a sequence is not fixed (i.e., the size of the vector/array does not have to be declared). The program may add or remove elements as needed during run-time. Memory allocation-deallocation is automatically handled by reference counting. Individual elements are referenced using an index value enclosed in square brackets. The first element in a sequence has an index of one [1]. Elements inside embedded sequences are referenced by additional bracketed index values, thus X[3][2] refers to the second element contained in the sequence that is the third element of X. Each element of a sequence is an object type (see below). Euphoria has two additional data types predefined: Integer – An atom, restricted to 31-bit signed integer values in the range -1073741824 to 1073741823 (-2^30 to 2^30-1). Integer data types are more efficient than the atom data types, but cannot contain the same range of values. Characters are stored as integers, e.g., coding ASCII-'A' is exactly the same as coding 65. Object – A generic datatype which may contain any of the above (i.e., atom, sequence or integer) and which may be changed to another type during run-time. There is no character string data type. Strings are represented by a sequence of integer values. However, because literal strings are so commonly used in programming, Euphoria interprets double-quote enclosed characters as a sequence of integers. Thus "ABC" is seen as if the coder had written: {'A', 'B', 'C'} which is the same as: {65, 66, 67} Hello, World!
puts(1, "Hello, World!\n")
Examples Program comments start with a double hyphen -- and go through the end of line. The following code looks for an old item in a group of items. If found, it removes it by concatenating all the elements before it with all the elements after it. Note that the first element in a sequence has the index one [1] and that $ refers to the length (i.e., total number of elements) of the sequence.
global function delete_item( object old, sequence group )
    integer pos
    -- Code begins --
    pos = find( old, group )
    if pos > 0 then
        group = group[1 .. pos-1] & group[pos+1 .. $]
    end if
    return group
end function
The following modification to the above example replaces an old item with a new item. As the variables old and new have been defined as objects, they could be atoms or sequences.
Type checking is not needed as the function will work with any sequence of data of any type and needs no external libraries.
global function replace_item( object old, object new, sequence group )
    integer pos
    -- Code begins --
    pos = find( old, group )
    if pos > 0 then
        group[pos] = new
    end if
    return group
end function
Furthermore, no pointers are involved and subscripts are automatically checked. Thus the function cannot access memory out-of-bounds. There is no need to allocate or deallocate memory explicitly
was equal to the internal energy gained by the water through friction with the paddle. In the International System of Units (SI), the unit of energy is the joule, named after Joule. It is a derived unit. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However energy is also expressed in many other units not part of the SI, such as ergs, calories, British Thermal Units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce. Scientific use Classical mechanics In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept. Work, a function of energy, is force times distance. This says that the work () is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball. The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics. Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction). Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law. Chemistry In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. 
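For reference, the displayed formulas omitted from the classical-mechanics discussion above take their standard forms: the work done by a force F along a path C is

W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{s},

the Lagrangian is the kinetic energy minus the potential energy, L = T - V, and for many common (conservative) systems the Hamiltonian is the total energy, H = T + V. These are the conventional definitions, supplied here because the equations themselves are missing from the text.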
Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e−E/kT; that is, the probability that a molecule has energy greater than or equal to E at a given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy. Biology In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is mostly stored in molecular oxygen and can be unlocked by reactions with molecules of substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy. Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and high-energy compounds like oxygen and ATP. Carbohydrates, lipids, and proteins can release the energy of oxygen, which is utilized by living organisms as an electron acceptor. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action. All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals.
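The Boltzmann population factor and the Arrhenius rate law mentioned above are conventionally written as

P(E' \ge E) \propto e^{-E/kT}, \qquad k_{\mathrm{rate}}(T) = A\, e^{-E_a/(RT)},

where k is Boltzmann's constant, E_a is the activation energy per mole, R is the gas constant, and A is a reaction-specific prefactor. These are the standard forms, supplied here for reference; they do not appear explicitly in the text.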
The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria
C6H12O6 + 6O2 -> 6CO2 + 6H2O
C57H110O6 + (81 1/2) O2 -> 57CO2 + 55H2O
and some of the energy is used to convert ADP into ATP: The rest of the chemical energy of O2 and the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
Daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat. Earth sciences In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy. Sunlight is the main input to Earth's energy budget, which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example when) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement.
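As a rough check of the kinetic- and potential-energy figures quoted above (the sprinter's mass and top speed are illustrative assumptions, not values given in the text):

E_k = \tfrac{1}{2} m v^2 \approx \tfrac{1}{2}\,(80\ \mathrm{kg})\,(10\ \mathrm{m/s})^2 = 4\ \mathrm{kJ}, \qquad E_p = m g h = (150\ \mathrm{kg})\,(9.8\ \mathrm{m/s^2})\,(2\ \mathrm{m}) \approx 2.9\ \mathrm{kJ} \approx 3\ \mathrm{kJ}.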
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms). Cosmology In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. Quantum mechanics In quantum mechanics, energy is defined in terms of the energy operator (Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic)
wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons. Relativity When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body: E0 = m0c², where m0 is the rest mass of the body, c is the speed of light in vacuum, and E0 is the rest energy. For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction.
The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons. In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation. Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws. In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts). Transformation Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work). Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy. There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces. Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. 
Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever. Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following: Ep,initial + Ek,initial = Ep,final + Ek,final. The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = (1/2)mv² (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal. Conservation of energy and mass in transformation Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass-energy equivalence. The formula E = mc², derived by Albert Einstein (1905), quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information). Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c² is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics.
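For scale, the rest energy of the 1 kg example quoted above works out as follows (using the standard values c ≈ 2.998×10^8 m/s and 1 megaton of TNT ≈ 4.184×10^15 J):

E = mc^2 = (1\ \mathrm{kg})\,(2.998\times 10^{8}\ \mathrm{m/s})^2 \approx 9.0\times 10^{16}\ \mathrm{J} \approx 21\ \mathrm{Mt\ TNT}.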
Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws. Reversible and non-reversible transformations Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot
Huygens published his treatise in 1657 (see Huygens (1657)), "De ratiociniis in ludo aleæ", on probability theory, just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability. In the foreword to his treatise, Huygens wrote: During his visit to France in 1655, Huygens learned about de Méré's Problem. From his correspondence with Carcavine a year later (in 1656), he realized his method was essentially the same as Pascal's. Therefore, he knew about Pascal's priority in this subject before his book went to press in 1657. In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables. Etymology Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly: Notations The use of the letter E to denote expected value goes back to W. A. Whitworth in 1901. The symbol has become popular since then for English writers. In German, stands for "Erwartungswert", in Spanish for "Esperanza matemática", and in French for "Espérance mathématique". When "E" is used to denote expected value, authors use a variety of stylization: the expectation operator can be stylized as (upright), (italic), or (in blackboard bold), while a variety of bracket notations (such as , , and ) are all used. Another popular notation is , whereas , , and are commonly used in physics, and in Russian-language literature. Definition As discussed below, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language. Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector . It is defined component by component, as . Similarly, one may define the expected value of a random matrix with components by . Random variables with finitely many outcomes Consider a random variable with a finite list of possible outcomes, each of which (respectively) has probability of occurring. The expectation of is defined as Since the probabilities must satisfy , it is natural to interpret as a weighted average of the values, with weights given by their probabilities . In the special case that all possible outcomes are equiprobable (that is, ), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others. Examples Let represent the outcome of a roll of a fair six-sided die. More specifically, will be the number of pips showing on the top face of the die after the toss. The possible values for are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of is If one rolls the die many times and computes the average (arithmetic mean) of the results, then as the number of rolls grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers. The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge.
As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be That is, the expected value to be won from a $1 bet is −$. Random variables with countably many outcomes Informally, the expectation of a random variable with a countable set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that where are the possible outcomes of the random variable and are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context. However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem illustrates that the value of certain infinite sums fundamentally depends on the order in which the summands are given. Since the outcomes of a random variable are not naturally given in a particular order, this creates a difficulty in giving a general formulation of the notion of expected value. For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which (as a theorem of mathematical analysis) implies that the infinite sum is a finite number which does not depend on the ordering of the summands. In the alternative case that the infinite sum does not converge absolutely, it is then said that the random variable does not have finite expectation. Examples Suppose and for , where (with being the natural logarithm) is the scale factor such that the probabilities sum to 1. Then, using the direct definition for non-negative random variables, we have Random variables with density Now consider a random variable which has a probability density function given by a function on the real number line. This means that the probability of taking on a value in any given open interval is given by the integral of over that interval. The expectation of is then given by the integral A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by many various authors. Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of is given by the Cauchy distribution , so that . It is
straightforward to compute in this case that The limit of this expression as and does not exist: if the limits are taken so that , then the limit is zero, while if the constraint is taken, then the limit is . To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of for more general random variables . Arbitrary random variables All definitions of the expected value may be expressed in the language of measure theory. In general, if is a real-valued random variable defined on a probability space , then the expected value of , denoted by , is defined as the Lebesgue integral Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory,
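Worked versions of the examples above, for reference (the roulette figures use the American wheel's 38 pockets and the 35-to-1 straight-up payoff stated earlier; the Cauchy computation assumes the standard Cauchy density f(x) = 1/(π(1 + x²)), which the text does not spell out):

\mathrm{E}[\text{die}] = \frac{1+2+3+4+5+6}{6} = 3.5, \qquad \mathrm{E}[\text{roulette bet}] = 35\cdot\frac{1}{38} + (-1)\cdot\frac{37}{38} = -\frac{2}{38} \approx -\$0.053,

\int_{-a}^{b} \frac{x}{\pi(1+x^{2})}\,\mathrm{d}x = \frac{1}{2\pi}\,\ln\frac{1+b^{2}}{1+a^{2}},

and the last expression has no single limit as a and b tend to infinity independently: it is 0 when a = b, but, for example, ln(2)/π when b = 2a, which is the ambiguity the text describes.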
Carbon arc lamp Xenon arc lamp, used in many camera flashes, stroboscopes and digital cinema projectors Mercury-xenon arc lamp Ultra-high-performance lamp, an ultra-high-pressure mercury-vapor arc lamp used in many video projectors Gas-discharge lamp, a light source that generates light by sending an electric discharge through an ionized gas Fluorescent lamp Compact fluorescent lamp, a fluorescent lamp designed to replace an incandescent lamp Neon lamp Mercury-vapor lamp Sodium-vapor lamp Sulfur lamp Metal-halide lamp High-intensity discharge lamp Electrodeless lamp, a gas discharge lamp in which the power is transferred into the lamp via electromagnetic fields or radio waves Plasma lamp Different types of lights have vastly differing efficacies and colors of light. (Color temperature is defined as the temperature of a black body emitting a similar spectrum; these spectra are quite different from those of black bodies.) The most efficient source of electric light is the low-pressure sodium lamp. It produces, for all practical purposes, a monochromatic orange-yellow light, which gives a similarly monochromatic perception of any illuminated scene. For this reason, it is generally reserved for outdoor public lighting applications. Low-pressure sodium lights are favoured for public lighting by astronomers, since the light pollution that they generate can be easily filtered, contrary to broadband or continuous spectra. Incandescent light bulb The modern incandescent light bulb, with a coiled filament of tungsten and commercialized in the 1920s, developed from the carbon filament lamp introduced about 1880. Less than 3% of the input energy is converted into usable light. Nearly all of the input energy ends up as heat that, in warm climates, must then be removed from the building by ventilation or air conditioning, often resulting in more energy consumption. In colder climates where heating and lighting are required during the cold and dark winter months, the heat byproduct has some value. Incandescent bulbs are being phased out in many countries due to their low energy efficiency. As well as bulbs for normal illumination, there is a very wide range, including low voltage, low-power types often used as components in equipment, but now largely displaced by LEDs. Halogen lamp Halogen lamps are usually much smaller than standard incandescent lamps, because for successful operation a bulb temperature over 200 °C is generally necessary. For this reason, most have a bulb of fused silica (quartz) or aluminosilicate glass. This is often sealed inside an additional layer of glass. The outer glass is a safety precaution, to reduce ultraviolet emission and to contain hot glass shards should the inner envelope explode during operation. Oily residue from fingerprints may cause a hot quartz envelope to shatter due to excessive heat buildup at the contamination site. The risk of burns or fire is also greater with bare bulbs, leading to their prohibition in some places, unless enclosed by the luminaire. Those designed for 12- or 24-volt operation have compact filaments, useful for good optical control. Also, they have higher efficacies (lumens per watt) and better lives than non-halogen types. The light output remains almost constant throughout their life. Fluorescent lamp Fluorescent lamps consist of a glass tube that contains mercury vapour or argon under low pressure. Electricity flowing through the tube causes the gases to give off ultraviolet energy.
The insides of the tubes are coated with phosphors that give off visible light when struck by ultraviolet photons. They have much higher efficiency than incandescent lamps. For the same amount of light generated, they typically use around one-quarter to one-third the power of an incandescent. The typical luminous efficacy of fluorescent lighting systems is 50–100 lumens per watt, several times the efficacy of incandescent bulbs with comparable light output. Fluorescent lamp fixtures are more costly than incandescent lamps, because they require a ballast to regulate the current through the lamp, but the lower energy cost typically offsets the higher initial cost. Compact fluorescent lamps are available in the same popular sizes as incandescent lamps and are used as an energy-saving alternative in homes. Because they contain mercury, many fluorescent lamps are classified as hazardous waste. The United States Environmental Protection Agency recommends that fluorescent lamps be segregated from general waste for recycling or safe disposal, and some jurisdictions require recycling of them. LED lamp The solid-state light-emitting diode (LED) has been popular as an indicator light in consumer electronics and professional audio gear since the 1970s. In the 2000s, efficacy and output have risen to the point where LEDs are now being used in lighting applications such as car headlights and brake lights, in flashlights and bicycle lights, as well as in decorative applications, such as holiday lighting. Indicator LEDs are known for their extremely long life, up to 100,000 hours, but lighting LEDs are operated much less conservatively, and consequently have shorter lives. LED technology is useful for lighting designers, because of its low power consumption, low heat generation, instantaneous on/off control, and in the case of single color LEDs, continuity of color throughout the life of the diode and relatively low cost of manufacture. LED lifetime depends strongly on the temperature of the diode. Operating an LED lamp in conditions that increase the internal temperature can greatly shorten the lamp's life. Carbon arc lamp Carbon arc lamps consist of two carbon rod electrodes in open air, supplied by a current-limiting ballast. The electric arc is struck by touching the rod tips then separating them. The ensuing arc produces a white-hot plasma between the rod tips. These lamps have higher efficacy than filament lamps, but the carbon rods are short-lived and require constant adjustment in use, as the intense heat of the arc erodes them. The lamps produce significant ultraviolet output, they require ventilation when used indoors, and due to their intensity they need protection from direct sight. Invented by Humphry Davy around 1805, the carbon arc was the first
These lamps have higher efficacy than filament lamps, but the carbon rods are short-lived and require constant adjustment in use, as the intense heat of the arc erodes them. The lamps produce significant ultraviolet output; they require ventilation when used indoors, and due to their intensity they need protection from direct sight. Invented by Humphry Davy around 1805, the carbon arc was the first practical electric light. It was used commercially beginning in the 1870s for large-building and street lighting until it was superseded in the early 20th century by the incandescent light. Carbon arc lamps operate at high power and produce high intensity white light. They also are a point source of light. They remained in use in limited applications that required these properties, such as movie projectors, stage lighting, and searchlights, until after World War II. Discharge lamp A discharge lamp has a glass or silica envelope containing two metal electrodes separated by a gas. Gases used include neon, argon, xenon, sodium, metal halide, and mercury. The core operating principle is much the same as the carbon arc lamp, but the term "arc lamp" normally refers to carbon arc lamps, with more modern types of gas discharge lamp normally called discharge lamps. With some discharge lamps, very high voltage is used to strike the arc. This requires an electrical circuit called an igniter, which is part of the electrical ballast circuitry. After the arc is struck, the internal resistance of the lamp drops to a low level, and the ballast limits the current to the operating current. Without a ballast, excess current would flow, causing rapid destruction of the lamp.
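The efficacy figures quoted above translate directly into power draw: watts = lumens divided by lumens per watt. A minimal sketch, assuming an illustrative 60 lm/W fluorescent system and an illustrative 15 lm/W incandescent bulb (neither figure is taken from the text), reproduces the "one-quarter to one-third the power" comparison:

```python
# Minimal sketch: electrical power needed for a given light output at a given
# luminous efficacy. The 60 lm/W and 15 lm/W values are illustrative
# assumptions, not figures taken from the text above.

def power_watts(lumens: float, efficacy_lm_per_w: float) -> float:
    """Electrical power (W) required to produce `lumens` at the given efficacy."""
    return lumens / efficacy_lm_per_w

target_lumens = 900.0                                # roughly a traditional "60 W" bulb's output
p_fluorescent = power_watts(target_lumens, 60.0)     # assumed mid-range fluorescent system
p_incandescent = power_watts(target_lumens, 15.0)    # assumed typical incandescent

print(f"fluorescent:  {p_fluorescent:.0f} W")        # 15 W
print(f"incandescent: {p_incandescent:.0f} W")       # 60 W
print(f"ratio: {p_fluorescent / p_incandescent:.2f}")  # 0.25, i.e. about one-quarter
```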
exploration of Mars, an impact crater on Mars was named in his honor after his death. In a Paris Review interview, Ray Bradbury said of Burroughs that "Edgar Rice Burroughs never would have looked upon himself as a social mover and shaker with social obligations. But as it turns out – and I love to say it because it upsets everyone terribly – Burroughs is probably the most influential writer in the entire history of the world." Bradbury continued that "By giving romance and adventure to a whole generation of boys, Burroughs caused them to go out and decide to become special." In Something of Myself (published posthumously in 1937) Rudyard Kipling wrote: "My Jungle Books begat Zoos of [imitators]. But the genius of all the genii was one who wrote a series called Tarzan of the Apes. I read it, but regret I never saw it on the films, where it rages most successfully. He had 'jazzed' the motif of the Jungle Books and, I imagine, had thoroughly enjoyed himself. He was reported to have said that he wanted to find out how bad a book he could write and 'get away with', which is a legitimate ambition." By 1963, Floyd C. Gale of Galaxy Science Fiction wrote when discussing reprints of several Burroughs novels by Ace Books, "an entire generation has grown up inexplicably Burroughs-less". He stated that most of the author's books had been out of print for years and that only the "occasional laughable Tarzan film" reminded the public of his fiction. Gale reported his surprise that after two decades his books were again available, with Canaveral Press, Dover Publications, and Ballantine Books also reprinting them. Few critical books have been written about Burroughs. From an academic standpoint, the most helpful are Erling Holtsmark's two books: Tarzan and Tradition and Edgar Rice Burroughs; Stan Galloway's The Teenage Tarzan: A Literary Analysis of Edgar Rice Burroughs' Jungle Tales of Tarzan; and Richard Lupoff's two books: Master of Adventure: Edgar Rice Burroughs and Barsoom: Edgar Rice Burroughs and the Martian Vision. Galloway was identified by James Edwin Gunn as "one of the half-dozen finest Burroughs scholars in the world"; Galloway called Holtsmark his "most important predecessor." Burroughs strongly supported eugenics and scientific racism. His views held that English nobles made up a particular heritable elite among Anglo-Saxons. Tarzan was meant to reflect this, with him being born to English nobles and then adopted by talking apes (the Mangani). They express eugenicist views themselves, but Tarzan is permitted to live despite being deemed "unfit" in comparison, and grows up to surpass not only them but black Africans, whom Burroughs clearly presents as inherently inferior, even not wholly human. In one Tarzan story, he finds an ancient civilization where eugenics has been practiced for over 2,000 years, with the result that it is free of all crime. Criminal behavior is held to be entirely hereditary, with the solution having been to kill not only criminals but also their families. Lost on Venus, a later novel, presents a similar utopia where forced sterilization is practiced and the "unfit" killed. Burroughs explicitly supported such ideas in his unpublished nonfiction essay I See A New Race. Additionally, his Pirate Blood, which is not speculative fiction and remained unpublished until after his death, portrayed the characters as victims of their hereditary criminal traits (one a descendant of the corsair Jean Lafitte, another from the Jukes family). 
These views have been compared with Nazi eugenics (though noting that they were popular and common at the time), with his Lost on Venus being released the same year the Nazis took power (in 1933). Selected works Barsoom series A Princess of Mars (1912) The Gods of Mars (1913) The Warlord of Mars (1914) Thuvia, Maid of Mars (1916) The Chessmen of Mars (1922) The Master Mind of Mars (1927) A Fighting Man of Mars (1930) Swords of Mars (1934) Synthetic Men of Mars (1939) Llana of Gathol (1941) John Carter of Mars (1964, two stories from 1940 and 1943) Tarzan series Tarzan of the Apes (1912) The Return of Tarzan (1913) The Beasts of Tarzan (1914) The Son of Tarzan (1915) Tarzan and the Jewels of Opar (1916) Jungle Tales of Tarzan (stories 1916–1917) Tarzan the Untamed (1919) Tarzan the Terrible (1921) Tarzan and the Golden Lion (1922) Tarzan and the Ant Men (1924) Tarzan, Lord of the Jungle (1927) Tarzan and the Lost Empire (1928) Tarzan at the Earth's Core (1929) Tarzan the Invincible (1930) Tarzan Triumphant (1931) Tarzan and the City of Gold (1932) Tarzan and the Lion Man (1933) Tarzan and the Leopard Men (1932) Tarzan's Quest (1935) Tarzan the Magnificent (1936) Tarzan and the Forbidden City (1938) Tarzan and the Foreign Legion (1947, written in 1944) Tarzan and the Tarzan Twins (1963, collects 1927 and 1936 children's books) Tarzan and the Madman (1964, written in 1940) Tarzan and the Castaways (1965, stories from 1940 to 1941) Tarzan: The Lost Adventure (1995, rewritten version of 1946 fragment, completed by Joe R. Lansdale) Pellucidar series At the Earth's Core (1914) Pellucidar (1915) Tanar of Pellucidar (1929) Back to the Stone Age (1937) Land of Terror (1944, written in 1939) Savage Pellucidar (1963, stories from 1942) Tarzan at the Earth's Core (1929) Venus series Pirates of Venus (1932) Lost on Venus (1933) Carson of Venus (1938) Escape on Venus (1946, stories from 1941 to 1942) The Wizard of Venus (1970, written in 1941) Caspak series The Land That Time Forgot (1918) The People That Time Forgot (1918) Out of Time's Abyss (1918) Moon series Part I: The Moon Maid (1923, serialized in Argosy, May 5 – June 2, 1923) Part II: The Moon Men (1925, serialized in Argosy, February 21 – March 14, 1925) Part III: The Red Hawk (1925 serialized in Argosy, September 5–19, 1925) These three texts have been published by various houses in one or two volumes. Adding to the confusion, some editions have the original (significantly longer) introduction to Part I from the first publication as a magazine serial, and others have the shorter version from the first book publication, which included all three parts under the title The Moon Maid. Mucker series The Mucker (1914) The Return of the Mucker (1916) The Oakdale Affair (1918) Other science fiction The Monster Men (1913) The Lost Continent (1916; a.k.a. Beyond Thirty) The Resurrection of Jimber-Jaw (1937) Beyond the Farthest Star (1942) Jungle adventure novels The Cave Girl (1913, revised 1917) The Eternal Lover (1914, rev. 1915; aka The Eternal Savage) The Man-Eater (1915) The Lad and the Lion (1917) Jungle Girl (1931; aka Land of the Hidden Men) Western novels The Bandit of Hell's Bend (1924) The War Chief (1927) Apache Devil (1933) The Deputy Sheriff of Comanche County (1940) Historical novels The Outlaw of Torn (1914) I am a Barbarian (1967; written in 1941) Other works Minidoka: 937th Earl of One Mile Series M (1998; written in 1903) The Mad King
Massachusetts, and then the Michigan Military Academy. Graduating in 1895, and failing the entrance exam for the United States Military Academy at West Point, he became an enlisted soldier with the 7th U.S. Cavalry in Fort Grant, Arizona Territory. After being diagnosed with a heart problem and thus ineligible to serve, he was discharged in 1897. After his discharge Burroughs worked a number of different jobs. During the Chicago influenza epidemic of 1891, he spent half a year at his brother's ranch on the Raft River in Idaho, as a cowboy, drifted somewhat afterward, then worked at his father's Chicago battery factory in 1899, marrying his childhood sweetheart, Emma Hulbert (1876–1944), in January 1900. In 1903, Burroughs joined his brothers, Yale graduates George and Harry, who were, by then, prominent Pocatello area ranchers in southern Idaho, and partners in the Sweetser-Burroughs Mining Company, where he took on managing their ill-fated Snake River gold dredge, a classic bucket-line dredge. The Burroughs brothers were also the sixth cousins, once removed, of famed miner Kate Rice, a brilliant and statuesque Maths professor who, in 1914, became the first female prospector in the Canadian North. Journalist and publisher C. Allen Thorndike Rice was also his third cousin. When the new mine proved unsuccessful, the brothers secured for Burroughs a position with the Oregon Short Line Railroad in Salt Lake City. Burroughs resigned from the railroad in October 1904. Author By 1911, after seven years of low wages as a pencil-sharpener wholesaler, Burroughs began to write fiction. By this time, Emma and he had two children, Joan (1908–1972), and Hulbert (1909–1991). During this period, he had copious spare time and began reading pulp-fiction magazines. In 1929, he recalled thinking that In 1913, Burroughs and Emma had their third and last child, John Coleman Burroughs (1913–1979), later known for his illustrations of his father's books. In the 1920s, Burroughs became a pilot, purchased a Security Airster S-1, and encouraged his family to learn to fly. Daughter Joan married Tarzan film actor, James Pierce, starring with her husband, as the voice of Jane, during 1932–1934 for the Tarzan radio series. The pair were wed for more than forty years, until her death in 1972. Burroughs divorced Emma in 1934 and, in 1935, married the former actress Florence Gilbert Dearholt, who was the former wife of his friend (who was then himself remarrying), Ashton Dearholt, with whom he had co-founded Burroughs-Tarzan Enterprises while filming The New Adventures of Tarzan. Burroughs adopted the Dearholts' two children. He and Florence divorced in 1942. Burroughs was in his late 60s and was in Honolulu at the time of the Japanese attack on Pearl Harbor. Despite his age, he applied for and received permission to become a war correspondent, becoming one of the oldest U.S. war correspondents during World War II. This period of his life is mentioned in William Brinkley's bestselling novel Don't Go Near the Water. Death After the war ended, Burroughs moved back to Encino, California, where after many health problems, he died of a heart attack on March 19, 1950, having written almost 80 novels. He is buried in Tarzana, California, US. When he died, he was believed to have been the writer who had made the most from films, earning over $2 million in royalties from 27 Tarzan pictures. The Science Fiction Hall of Fame inducted Burroughs in 2003. 
Literary career Aiming his work at the pulps—under the name "Norman Bean" to protect his reputation—Burroughs had his first story, Under the Moons of Mars, serialized by Frank Munsey in the February to July 1912 issues of The All-Story. Under the Moons of Mars inaugurated the Barsoom series and earned Burroughs ($ today). It was first published as a book by A. C. McClurg of Chicago in 1917, entitled A Princess of Mars, after three Barsoom sequels had appeared as serials and McClurg had published the first four serial Tarzan novels as books. Burroughs soon took up writing full-time, and by the time the run of Under the Moons of Mars had finished, he had completed two novels, including Tarzan of the Apes, published from October 1912 and one of his most successful series. Burroughs also wrote popular science fiction and fantasy stories involving adventurers from Earth transported to various planets (notably Barsoom, Burroughs's fictional name for Mars, and Amtor, his fictional name for Venus), lost islands (Caspak), and into the interior of the Hollow Earth in his Pellucidar stories. He also wrote Westerns and historical romances. Besides those published in All-Story, many of his stories were published in The Argosy magazine. Tarzan was a cultural sensation when introduced. Burroughs was determined to capitalize on Tarzan's popularity in every way possible. He planned to exploit Tarzan through several different media including a syndicated Tarzan comic strip,
other medieval architecture, made detailed drawings and watercolours, which he sometimes sold at a high price to members of the Court. On May 3, 1834, at age twenty, he married Élisabeth Templier, and in the same year he was named an associate professor of ornamental decoration at the Royal School of Decorative Arts, which gave him a more regular income. His first pupils there included Léon Gaucherel. With the money from the sale of his drawings and paintings, Viollet-le-Duc and his wife set off on a long tour of the monuments of Italy, visiting Rome, Venice, Florence and other sites, drawing and painting. His reaction to the Leaning Tower of Pisa was characteristic: "It was extremely disagreeable to see", he wrote, "it would have been infinitely better if it had been straight." In 1838, he presented several of his drawings at the Paris Salon, and began making a travel book, Picturesque and romantic images of the old France, for which, between 1838 and 1844, he made nearly three hundred engravings. First architectural restorations In October 1838, with the recommendation of Achille Leclère, the architect with whom he had trained, he was named deputy inspector of the enlargement of the Hôtel Soubise, the new home of the French National Archives. His uncle, Delécluze, then recommended him to the new Commission of Historic Monuments of France, led by Prosper Mérimée, who had just published a book on medieval French monuments. Though he was just twenty-four years old and had no degree in architecture, he was asked to go to Narbonne to propose a plan for the completion of the cathedral there. He made his first plan, which included not only the completion but also the restoration of the oldest parts of the structure. His first project was rejected by the local authorities as too ambitious and too expensive. His next project was a restoration of the Vézelay Abbey, the church of a Benedictine monastery founded in the 12th century to house the reputed relics of Mary Magdalene. The church had been sacked by the Huguenots in 1569, and during the French Revolution, the facade and statuary on the facade were destroyed. The vaults of the roof were weakened, and many of the stones had been carried off for other projects. When Mérimée visited to inspect the structure he heard stones falling around him. In February 1840 Mérimée gave Viollet-le-Duc the mission of restoring and reconstructing the church so it would not collapse, while "respecting exactly in his project of restoration all the ancient dispositions of the church". The task was all the more difficult because up until that time no scientific studies had been made of medieval building techniques, and there were no schools of restoration. He had no plans for the original building to work from. Viollet-le-Duc had to discover the flaws of construction that had caused the building to start to collapse in the first place and to construct a more solid and stable structure. He lightened the roof and built new arches to stabilize the structure, and slightly changed the shape of the vaults and arches. He was criticized for these modifications in the 1960s, though, as his defenders argued, without them the roof would have collapsed under its own weight. Mérimée's deputy, Lenormant, inspected the construction and reported to Mérimée: "The young Leduc seems entirely worthy of your confidence. 
He needed a magnificent audacity to take charge of such a desperate enterprise; it's certain that he arrived just in time, and if we had waited only ten years the church would have been a pile of stones." Sainte-Chapelle and Amboise Viollet-le-Duc's work at Vézelay led to a series of larger projects. In 1840, in collaboration with his friend the architect Jean-Baptiste Lassus, he began the restoration of Sainte-Chapelle in Paris, which had been turned into a storage depot after the Revolution. His role in this project was relatively minor, with Lassus taking the lead. In February 1843, King Louis Philippe sent him to the Château of Amboise, to restore the stained glass windows in the chapel holding the tomb of Leonardo da Vinci. The windows were unfortunately destroyed in 1940 during World War II. In 1843, Mérimée took Viollet-le-Duc with him to Burgundy and the south of France, on one of his long inspection tours of possible monuments. The two men shared the same passion for the gothic. Viollet-le-Duc made drawings of the buildings and wrote detailed accounts of each site, illustrated with his drawings, which were published in architectural journals. These articles were later turned into books; he became the most prominent academic scholar on French medieval architecture. Notre-Dame de Paris In 1844, with the backing of Mérimée, Viollet-le-Duc, just thirty years old, and Lassus, then thirty-seven, won a competition for the restoration of Notre-Dame Cathedral. Their project involved primarily the facade, where many of the statues over the portals had been beheaded or smashed during the Revolution. They proposed two major changes to the interior: rebuilding two of the bays to their original medieval height of four storeys, and removing the marble neoclassical structures and decoration which had been added to the choir during the reign of Louis XIV. Mérimée warned them to be careful: "In such a project, one cannot act with too much prudence or discretion...A restoration may be more disastrous for a monument than the ravages of centuries." The Commission on Historical Monuments approved most of Viollet-le-Duc's plans, but rejected his proposal to remove the choir built under Louis XIV. Viollet-le-Duc himself turned down a proposal to add two new spires atop the towers, arguing that such a monument "would be remarkable but would not be Notre-Dame de Paris". Instead, he proposed to rebuild the original medieval spire and bell tower over the transept, which had been removed in 1786 because it was unstable in the wind. Once the project was approved, Viollet-le-Duc made drawings and photographs of the existing decorative elements; then they were removed and a stream of sculptors began making new statues of saints, gargoyles, chimeras and other architectural elements in a workshop he established, working from his drawings and photographs of similar works in other cathedrals of the same period. Other craftsmen made stained glass windows in Gothic grisaille patterns designed by Viollet-le-Duc to replace the destroyed medieval windows in the chapels of the ground floor of the nave of the cathedral. He also designed a new treasury in the Gothic style to serve as the museum of the cathedral, replacing the residence of the Archbishop, which had been destroyed in a riot in 1831. The bells in the two towers had been taken out in 1791 and melted down to make cannons. Viollet-le-Duc had new bells cast for the north tower and a new structure built inside to support them. 
Viollet-le-Duc and Lassus also rebuilt the sacristy, on the south side of the church, which had been built in 1756, but had been burned by rioters during the July Revolution of 1830. The new spire was completed, taller and more strongly built to withstand the weather; it was decorated with statues of the apostles, and the face of Saint Thomas bore a noticeable resemblance to Viollet-le-Duc. The spire was destroyed on 15 April 2019, as a result of the Notre-Dame de Paris fire. Saint Denis and Amiens The restoration of Notre-Dame continued in this slow and methodical manner for twenty-five years. When not engaged in Paris, Viollet-le-Duc continued his long tours into the French provinces, inspecting, drawing, and making recommendations, and checking the progress of more than twenty different restoration projects that were under his control, including seven in Burgundy alone. His new projects included the Basilica of Saint-Sernin, Toulouse, and the Basilica of Saint-Denis just outside Paris. Saint-Denis had undergone a restoration by a different architect, Francois Debret, who had rebuilt one of the two towers. However, in 1846, the new tower, overloaded with masonry, began to crack, and Viollet-le-Duc was called in. He found no way the building could be saved; he had to oversee the demolition of the tower, saving the stones. He concentrated on restoring the interior of the church, and was able to substantially restore the original burial chamber of the Kings of France. In May 1849, he was named the architect for the restoration of Amiens Cathedral, one of the largest in France, which had been built over many centuries in a variety of different styles. He wrote, "his goal should be to save in each part of the monument its own character, and yet to make it so that the united parts don't conflict with each other; and that can be maintained in a state that is durable and simple." For his restorations of churches and cathedrals, Viollet-le-Duc designed not only architecture, but new altars and furnishings. His new furnishings were installed in the sacristy of Notre-Dame, and his neo-Gothic altar was placed in the restored Cathedral of Clermont-Ferrand. Thanks largely to Viollet-le-Duc, the neo-Gothic became the standard style for church furnishing throughout France. Imperial projects: Carcassonne, Vincennes and Pierrefonds The French coup d'état of 1851 had transformed France from a republic to an empire and brought Napoleon III to power. The coup accelerated some of Viollet-le-Duc's projects. His patron and supervisor, Prosper Mérimée, had introduced the Emperor to the new Empress, and brought Viollet-le-Duc into close contact with the Emperor. The Emperor married the Empress Eugénie in Notre-Dame, and the government voted additional funds to advance the restoration. He moved forward with the slow work of restoration of the Cathedral of Reims and Cathedral of Amiens. In Amiens, he cleared the interior of the French classical decoration added under Louis XIV, and proposed to make it resolutely Gothic. He gave the Emperor and Empress a tour of his project in September 1853; the Empress immediately offered to pay two-thirds of the cost of the restoration. In the same year he undertook the restoration of the Château de Vincennes, long occupied by the military, along with its chapel, similar to Sainte-Chapelle. A devotee of the pure Gothic, he described the chapel as "one of the finest specimens of Gothic in decline". 
In November 1853, he provided the costs and plans for the medieval ramparts of Carcassonne, which he had first begun planning in 1849. The first fortifications had been built by the Visigoths; on top of these, in the Middle Ages Louis XI and then Philip the Bold had built a formidable series of towers, galleries, walls, gates and interlocking defences that resisted all sieges until 1355. The fortifications were largely intact, since the surroundings of the city were still a military defensive zone in the 19th century, but the towers were without tops and a large number of structures had been built up against the old walls. Once he obtained funding and made his plans, he began demolishing all structures which had been attached to ramparts over the centuries, and restored the gates, walls and towers to their original form, including the defence platforms, roofs on the towers and shelters for archers that would have been used during a siege. He found many of the original mountings for weapons still in place. To accompany his work, he published a detailed history of the city and its fortifications, with his drawings. Carcassonne became the best example of medieval military architecture in France, and also an important tourist attraction. Napoleon III provided additional funding for the continued restoration of Notre-Dame. Viollet-le-Duc was also to replace the great bestiary of mythical beasts and animals which had decorated the cathedral in the 18th century. In 1856, using examples from other medieval churches and debris from Notre-Dame as his model, his workshop produced dragons, chimeras, grotesques, and gargoyles, as well as an assortment of picturesque pinnacles and fleurons. He engaged in a new project for restoration of the Cathedral of Clermont-Ferrand, a project which continued for ten years. He also undertook an unusual project for Napoleon III; the design and construction of six railway coaches with neo-Gothic interior décor for the Emperor and his entourage. Two of the cars still exist; the salon of honour car, with a fresco on the ceiling, is at the Château de Compiègne, and the dining car, with a massive golden eagle as the centrepiece of the décor, is at the Railroad Museum of Mulhouse. Napoleon III asked Viollet-le-Duc if he could restore a medieval chateau for the Emperor's own use near Compiègne, where the Emperor traditionally passed September and October. Viollet-le-Duc first studied a restoration of the Château de Coucy, which had the highest medieval tower in France, later destroyed. When this proved too complicated, he settled upon Château de Pierrefonds, a castle begun by Louis of Orleans in 1396, then dismantled in 1617 after several sieges by Louis XIII of France. Napoleon bought the ruin for 5000 francs in 1812, and Mérimée declared it an historic monument in 1848. In 1857 Viollet-le-Duc began designing an entirely new chateau on the ruins. This structure was not designed to recreate anything exactly that had existed, but a castle which recaptured the spirit of the gothic, with lavish neo-gothic decoration and 19th-century comforts. While most of his attention was devoted to restorations, Viollet-le-Duc designed and built a number of private residences and new buildings in Paris. He also participated in the most important competition of the period, for the new Paris Opera. There were one hundred seventy-one projects proposed in the original competition, presented the 1855 Paris Universal Exposition. 
A jury of noted architects narrowed it down to five, including projects from Viollet-le-Duc, Charles Rohault de Fleury and Charles Garnier, age thirty-five. The favorites of the Emperor and Empress were de Fleury and Viollet-le-Duc, but both were eliminated in the next round. Viollet-le-Duc was not a good loser, and he dismissed Garnier's style. Garnier wrote of his rival in 1869: "Monsieur Viollet-le-Duc has produced much, but his best works without doubt are his restorations...One hesitates to appreciate his personal works. You cannot find any personality in them, only compromise. He is broken by archeology and crushed by the weight of the past. If it is difficult to learn, it is even more difficult to forget." Napoleon III called upon Viollet-le-Duc for a wide variety of archeological and architectural tasks. When he wished to put up a monument to mark the Battle of Alesia, where Julius Caesar defeated the Gauls, a battle whose actual site was disputed by historians, he asked Viollet-le-Duc to locate the exact battlefield. Viollet-le-Duc conducted excavations at various purported sites, and finally found vestiges of the walls that Caesar had built. He also designed the metal frame for the six-metre-high statue that would be placed on the site. He later designed a similar frame for a much larger statue, the Statue of Liberty, but died before that statue was finished. End of the Empire and of Restoration In 1863, Viollet-le-Duc was named a professor at the École des Beaux-Arts, the school where he had refused to become a student, and the fortress of neoclassical Beaux-Arts architecture. This launched him on a new academic career as an architectural theorist, where he would have as much influence as he did as an architect of restorations. There was much resistance from the traditional faculty, but he attracted two hundred students to his course, who applauded his lecture at the end. He had already published the first volumes of his first major work, A Reasoned Dictionary of French Architecture. This series eventually included ten volumes, published between 1854 and 1868. But while he had many supporters, the faculty and many of the students were strongly against him. His critics complained that, aside from having little formal architectural training himself, he had only built a handful of new buildings. He tired of the confrontations and resigned on 16 May 1863, and continued his writing and teaching outside the Beaux-Arts. In the beginning of 1864, he celebrated the conclusion of his most important project, the restoration of Notre-Dame. In January of the same year he completed the first phase of the restoration of the Cathedral of Saint Sernin in Toulouse, one of the landmarks of French Romanesque architecture. Napoleon III invited Viollet-le-Duc to study possible restorations overseas, including in Algeria, Corsica, and in Mexico, where Napoleon had installed a new Emperor, Maximilien, under French sponsorship. He also saw the consecration of the third church that he had designed, the neo-Gothic Church of Saint-Denis de l'Estree, in the Paris suburb of Saint-Denis. Between 1866 and 1870, his major project was the ongoing transformation of Pierrefonds from a ruin into a royal residence. His plans for the metal framework he had designed for Pierrefonds were displayed at the Paris Universal Exposition of 1867. He also completed the tenth and final volume of his monumental dictionary of medieval architecture. 
He also began a new area of study, researching the geology and geography of the region around Mont Blanc in the Alps. While on his mapping
withstand the weather; it was decorated with statues of the apostles, and the face of Saint Thomas bore a noticeable resemblance to Viollet-le-Duc. The spire was destroyed on 15 April 2019, as a result of the Notre-Dame de Paris fire. Saint Denis and Amiens The restoration of Notre-Dame continued in this slow and methodical manner for twenty-five years. When not engaged in Paris, Viollet-le-Duc continued his long tours into the French provinces, inspecting, drawing, and making recommendations, and checking the progress of more than twenty different restoration projects that were under his control, including seven in Burgundy alone. His new projects included the Basilica of Saint-Sernin, Toulouse, and the Basilica of Saint-Denis just outside Paris. Saint-Denis had undergone a restoration by a different architect, Francois Debret, who had rebuilt one of the two towers. However, in 1846, the new tower, overloaded with masonry, began to crack, and Viollet-le-Duc was called in. He found no way the building could be saved; he had to oversee the demolition of the tower, saving the stones. He concentrated on restoring the interior of the church, and was able to substantially restore the original burial chamber of the Kings of France. In May 1849, he was named the architect for the restoration of Amiens Cathedral, one of the largest in France, which had been built over many centuries in a variety of different styles. He wrote, "his goal should be to save in each part of the monument its own character, and yet to make it so that the united parts don't conflict with each other; and that can be maintained in a state that is durable and simple." For his restorations of churches and cathedrals, Viollet-le-Duc designed not only architecture, but new altars and furnishings. His new furnishings were installed in the sacristy of Notre-Dame, and his neo-Gothic altar was placed in the restored Cathedral of Clermont-Ferrand. Thanks largely to Viollet-le-Duc, the neo-Gothic became the standard style for church furnishing throughout France. Imperial projects: Carcassonne, Vincennes and Pierrefonds The French coup d'état of 1851 had transformed France from a republic to an empire and brought Napoleon III to power. The coup accelerated some of Viollet-le-Duc's projects. His patron and supervisor, Prosper Mérimée, had introduced the Emperor to the new Empress, and brought Viollet-le-Duc into close contact with the Emperor. The Emperor married the Empress Eugénie in Notre-Dame, and the government voted additional funds to advance the restoration. He moved forward with the slow work of restoration of the Cathedral of Reims and Cathedral of Amiens. In Amiens, he cleared the interior of the French classical decoration added under Louis XIV, and proposed to make it resolutely Gothic. He gave the Emperor and Empress a tour of his project in September 1853; the Empress immediately offered to pay two-thirds of the cost of the restoration. In the same year he undertook the restoration of the Château de Vincennes, long occupied by the military, along with its chapel, similar to Sainte-Chapelle. A devotee of the pure Gothic, he described the chapel as "one of the finest specimens of Gothic in decline". In November 1853, he provided the costs and plans for the medieval ramparts of Carcassonne, which he had first begun planning in 1849. 
The first fortifications had been built by the Visigoths; on top of these, in the Middle Ages Louis XI and then Philip the Bold had built a formidable series of towers, galleries, walls, gates and interlocking defences that resisted all sieges until 1355. The fortifications were largely intact, since the surroundings of the city were still a military defensive zone in the 19th century, but the towers were without tops and a large number of structures had been built up against the old walls. Once he obtained funding and made his plans, he began demolishing all structures which had been attached to ramparts over the centuries, and restored the gates, walls and towers to their original form, including the defence platforms, roofs on the towers and shelters for archers that would have been used during a siege. He found many of the original mountings for weapons still in place. To accompany his work, he published a detailed history of the city and its fortifications, with his drawings. Carcassonne became the best example of medieval military architecture in France, and also an important tourist attraction. Napoleon III provided additional funding for the continued restoration of Notre-Dame. Viollet-le-Duc was also to replace the great bestiary of mythical beasts and animals which had decorated the cathedral in the 18th century. In 1856, using examples from other medieval churches and debris from Notre-Dame as his model, his workshop produced dragons, chimeras, grotesques, and gargoyles, as well as an assortment of picturesque pinnacles and fleurons. He engaged in a new project for restoration of the Cathedral of Clermont-Ferrand, a project which continued for ten years. He also undertook an unusual project for Napoleon III; the design and construction of six railway coaches with neo-Gothic interior décor for the Emperor and his entourage. Two of the cars still exist; the salon of honour car, with a fresco on the ceiling, is at the Château de Compiègne, and the dining car, with a massive golden eagle as the centrepiece of the décor, is at the Railroad Museum of Mulhouse. Napoleon III asked Viollet-le-Duc if he could restore a medieval chateau for the Emperor's own use near Compiègne, where the Emperor traditionally passed September and October. Viollet-le-Duc first studied a restoration of the Château de Coucy, which had the highest medieval tower in France, later destroyed. When this proved too complicated, he settled upon Château de Pierrefonds, a castle begun by Louis of Orleans in 1396, then dismantled in 1617 after several sieges by Louis XIII of France. Napoleon bought the ruin for 5000 francs in 1812, and Mérimée declared it an historic monument in 1848. In 1857 Viollet-le-Duc began designing an entirely new chateau on the ruins. This structure was not designed to recreate anything exactly that had existed, but a castle which recaptured the spirit of the gothic, with lavish neo-gothic decoration and 19th-century comforts. While most of his attention was devoted to restorations, Viollet-le-Duc designed and built a number of private residences and new buildings in Paris. He also participated in the most important competition of the period, for the new Paris Opera. There were one hundred seventy-one projects proposed in the original competition, presented the 1855 Paris Universal Exposition. A jury of noted architects narrowed it down to five, including projects from Viollet-le-Duc, Charles Rohault de Fleury and Charles Garnier, age thirty-five. 
The favorites of the Emperor and Empress were de Fleury and Viollet-le-Duc, but both were eliminated in the next round. Viollet-le-Duc was not a good loser, and he dismissed Garnier's style. Garnier wrote of his rival in 1869: "Monsieur Viollet-le-Duc has produced much, but his best works without doubt are his restorations...One hesitates to appreciate his personal works. You cannot find any personality in them, only compromise. He is broken by archeology and crushed by the weight of the past. If it is difficult to learn, it is even more difficult to forget." Napoleon III called upon Viollet-le-Duc for a wide variety of archeological and architectural tasks. When he wished to put up a monument to mark the Battle of Alesia, where Julius Caesar defeated the Gauls, a battle whose actual site was disputed by historians, he asked Viollet-le-Duc to locate the exact battlefield. Viollet-le-Duc conducted excavations at various purported sites, and finally found vestiges of the walls that Caesar had built. He also designed the metal frame for the six-metre-high statue that would be placed on the site. He later designed a similar frame for a much larger statue, the Statue of Liberty, but died before that statue was finished. End of the Empire and of Restoration In 1863, Viollet-le-Duc was named a professor at the École des Beaux-Arts, the school where he had refused to become a student, and the fortress of neoclassical Beaux-Arts architecture. This launched him on a new academic career as an architectural theorist, where he would have as much influence as he did as an architect of restorations. There was much resistance from the traditional faculty, but he attracted two hundred students to his course, who applauded his lecture at the end. He had already published the first volumes of his first major work, A Reasoned Dictionary of French Architecture. This series eventually included ten volumes, published between 1854 and 1868. But while he had many supporters, the faculty and many of the students were strongly against him. His critics complained that, aside from having little formal architectural training himself, he had only built a handful of new buildings. He tired of the confrontations and resigned on 16 May 1863, and continued his writing and teaching outside the Beaux-Arts. In the beginning of 1864, he celebrated the conclusion of his most important project, the restoration of Notre-Dame. In January of the same year he completed the first phase of the restoration of the Cathedral of Saint Sernin in Toulouse, one of the landmarks of French Romanesque architecture. Napoleon III invited Viollet-le-Duc to study possible restorations overseas, including in Algeria, Corsica, and in Mexico, where Napoleon had installed a new Emperor, Maximilien, under French sponsorship. He also saw the consecration of the third church that he had designed, the neo-Gothic Church of Saint-Denis de l'Estree, in the Paris suburb of Saint-Denis. Between 1866 and 1870, his major project was the ongoing transformation of Pierrefonds from a ruin into a royal residence. His plans for the metal framework he had designed for Pierrefonds were displayed at the Paris Universal Exposition of 1867. He also completed the tenth and final volume of his monumental dictionary of medieval architecture. He also began a new area of study, researching the geology and geography of the region around Mont Blanc in the Alps. 
While on his mapping excursion in the Alps in July 1870, he learned that war had been declared between Prussia and France. As the Franco-Prussian War commenced, Viollet-le-Duc hurried back to Paris, and offered his services as a military engineer; he was put into service as a colonel of engineers, preparing the defenses of Paris. In September, the Emperor was captured at the Battle of Sedan, a new Republican government took power, and the Empress Eugénie fled into exile, as Germans marched as far as Paris and put it under siege. At the same time, on September 23, Viollet-le-Duc's primary patron and supporter, Prosper Mérimée, died peacefully in the south of France. Viollet-le-Duc supervised the construction of new defensive works outside Paris. On 14 December 1870, he wrote in his journal, "Disorganization is everywhere. The officers have no confidence in the troops, and the troops have no confidence in the officers. Each day, new orders and new projects which contravene those of the day before." He fought with the French army against the Germans at Buzenval on 24 January 1871. The battle was lost, and the French capitulated on 28 January. Viollet-le-Duc wrote to his wife on February 28, "I don't know what will become of me, but I do not want to return any more to administration. I am disgusted by it forever, and want nothing more than to pass the years that remain to me in study and in the most modest possible life." In May 1871 he left his home in Paris just before national guardsmen arrived to draft him into the armed force of the Paris Commune. He returned to Pierrefonds, where he had a small apartment. Always the scholar, he wrote a detailed study of the effectiveness and deficiencies of the fortifications of Paris during the siege. He returned to the city shortly after the Commune was suppressed in May, 1871, and saw the ruins of most of the public buildings of the city, burned by the Commune in its last days. He received his only commission from the new government of the French Third Republic; Jules Simon, the new Minister of Culture and Public Instruction, asked him to design a plaque to be placed before Notre-Dame to honor the hostages killed by the Paris Commune in its final days. The new government of the French Third Republic made little use of his expertise in the restoration of the major government buildings which had been burned by the Paris Commune, including the Tuileries Palace, the Palace of the Legion of Honor, the Palais Royale, the library of the Louvre, the Ministry of Justice and the Ministry of Finance. The only reconstruction on which he was consulted was that of the Hotel de Ville. The writer Edmond de Goncourt called for leaving the ruin of the Hotel de Ville exactly as it was, "a ruin of a magical palace, A marvel of the picturesque. The country should not condemn it without appeal to restoration by Viollet-le-Duc." The government asked Viollet-le-Duc to organize a competition. He presented two options; to either restore the building to its original state, with its historic interior; or to demolish it and build a new city hall. In July 1872 the government decided to preserve the Renaissance facade, but otherwise to completely demolish and rebuild the building. Later life – author and theorist In his later years he devoted most of his time to writing about architectural history. He consulted on many of his earlier projects around France, which were still underway. 
He also continued his explorations of the Alps around Mount Blanc, making a detailed map and a series of thirty-two drawings of the alpine scenery. He passed by Lausanne, where he was asked to prepare a plan for the restoration of the cathedral, which he did. In 1872, he completed the second volume of his major theoretical work, Entretiens sur l'architecture. In his Entretiens sur l'architecture he concentrated in particular on the use of iron and other new materials, and the importance of designing buildings whose architecture was adapted to their function, rather than to a particular style. The book was translated into English in 1881 and won a large following in the United States. The Chicago architect Louis Sullivan, one of the inventors of the skyscraper, often invoked the phrase, "Form follows function." The Lausanne cathedral was his final major restoration project; it was rebuilt following his plans between 1873 and 1876. Work continued after his death. His reconstruction of the bell tower was later criticized; he eliminated the original octagonal base and added a new spire, which rested on the walls, and not on the vaulting, like the original spire. He also added new decoration, crowning the spire at mid-height with gables, another original element, and removing the original tiles. He was also criticized for
bleeding into the skin, heart murmur, feeling tired, and low red blood cells. Complications may include valvular insufficiency, heart failure, stroke, and kidney failure. The cause is typically a bacterial infection and less commonly a fungal infection. Risk factors include valvular heart disease including rheumatic disease, congenital heart disease, artificial valves, hemodialysis, intravenous drug use, and electronic pacemakers. The bacteria most commonly involved are streptococci or staphylococci. Diagnosis is suspected based on symptoms and supported by blood cultures or ultrasound. The usefulness of antibiotics following dental procedures for prevention is unclear. Some recommend them in those at high risk. Treatment is generally with intravenous antibiotics. The choice of antibiotics is based on the blood cultures. Occasionally heart surgery is required. The number of people affected is about 5 per 100,000 per year. Rates, however, vary between regions of the world. Males are affected more often than females. The risk of death among those infected is about 25%. Without treatment it is almost universally fatal. Non-infective endocarditis Nonbacterial thrombotic endocarditis (NBTE) is most commonly found on previously undamaged valves. As opposed to infective endocarditis, the vegetations in NBTE are small,
presence of endocarditis-causing microorganisms. Signs and symptoms include fever, chills, sweating, malaise, weakness, anorexia, weight loss, splenomegaly, flu-like feeling, cardiac murmur, heart failure, petechiae (red spots on the skin), Osler's nodes (subcutaneous nodules found on hands and feet), Janeway lesions (nodular lesions on palms and soles), and Roth's spots (retinal hemorrhages). Infective endocarditis Infective endocarditis is an infection of the inner surface of the heart, usually the valves. Symptoms may include fever, small areas of bleeding into the skin, heart murmur, feeling tired, and low red blood cells. Complications may include valvular insufficiency, heart failure, stroke, and kidney failure. The cause is typically a bacterial infection and less commonly a fungal infection. Risk factors include valvular heart disease including rheumatic disease, congenital heart disease, artificial valves, hemodialysis, intravenous drug use, and electronic pacemakers. The bacteria most commonly involved are streptococci or staphylococci. Diagnosis is suspected based on symptoms and supported by blood cultures or ultrasound. The usefulness of antibiotics following dental procedures for prevention is unclear. Some recommend them in those at high risk. Treatment is generally with intravenous antibiotics. The choice of antibiotics is based on the blood
the equality involving sums of four fourth powers; this however is not a counterexample because no term is isolated on one side of the equation. He also provided a complete solution to the four cubes problem as in Plato's number or the taxicab number 1729. The general solution of the equation is where and are any integers. Counterexamples Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for . This was published in a paper comprising just two sentences. A total of three primitive (that is, in which the summands do not all have a common factor) counterexamples are known: (Lander & Parkin, 1966), (Scher & Seidl, 1996), and (Frye, 2004). In 1988, Noam Elkies published a method to construct an infinite sequence of counterexamples for the case. His smallest counterexample was . A particular case of Elkies' solutions can be reduced to the identity where . This is an elliptic curve with a rational point at . From this initial rational point, one can compute an infinite collection of others. Substituting into the identity and removing common factors gives the numerical example cited above. In 1988, Roger Frye found the smallest possible counterexample for by a direct computer search using techniques suggested by Elkies. This solution is the only one with values of the variables below 1,000,000. Generalizations In 1967, L. J. Lander, T. R. Parkin, and John Selfridge conjectured that if , where are positive integers for all and , then . In the special case
of the equation. He also provided a complete solution to the four cubes problem as in Plato's number or the taxicab number 1729. The general solution of the equation is where and are any integers. Counterexamples Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for . This was published in a paper comprising just two sentences. A total of three primitive (that is, in which the summands do not all have a common factor) counterexamples are known: (Lander & Parkin, 1966), (Scher & Seidl, 1996), and (Frye, 2004). In 1988, Noam Elkies published a method to construct an infinite sequence of counterexamples for the case. His smallest counterexample was . A particular case of Elkies' solutions can be reduced to the identity where . This is an elliptic curve with a rational point at . From this initial rational point, one can compute an infinite collection of others. Substituting into the identity and removing common factors gives the numerical example cited above. In 1988, Roger Frye found the smallest possible counterexample for by a direct computer search using techniques suggested by Elkies. This solution is the only
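The counterexamples discussed above can be checked with exact integer arithmetic. Below is a minimal sketch; the specific identities used are the well-known published ones (Lander and Parkin's four fifth powers from 1966, and Frye's three fourth powers from 1988), quoted here for illustration rather than taken from the text:

```python
# Minimal sketch: verify counterexamples to Euler's sum of powers conjecture
# using exact integer arithmetic. The identities below are the well-known
# published ones (Lander & Parkin 1966 for k = 5; Frye 1988 for k = 4).

def is_counterexample(terms: list[int], total: int, k: int) -> bool:
    """True if fewer than k k-th powers sum to total**k, contradicting the conjecture."""
    return len(terms) < k and sum(t ** k for t in terms) == total ** k

# Lander & Parkin (1966): four fifth powers summing to a fifth power.
print(is_counterexample([27, 84, 110, 133], 144, 5))          # True

# Frye (1988): three fourth powers summing to a fourth power.
print(is_counterexample([95800, 217519, 414560], 422481, 4))  # True
```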
them if they obey. Moses comes down from the mountain and writes down God's words, and the people agree to keep them. God calls Moses up the mountain again, where he remains for forty days and forty nights, at the conclusion of which he returns, bearing the set of stone tablets. God gives Moses instructions for the construction of the tabernacle so that God may dwell permanently among his chosen people, along with instructions for the priestly vestments, the altar and its appurtenances, procedures for ordination of priests, and the daily sacrifice offerings. Aaron becomes the first hereditary high priest. God gives Moses the two tablets of stone containing the words of the ten commandments, written with the "finger of God". While Moses is with God, Aaron casts a golden calf, which the people worship. God informs Moses of their apostasy and threatens to kill them all, but relents when Moses pleads for them. Moses comes down from the mountain, smashes the stone tablets in anger, and commands the Levites to massacre the unfaithful Israelites. God commands Moses to construct two new tablets. Moses ascends the mountain again, where God dictates the Ten Commandments for Moses to write on the tablets. Moses descends from the mountain with a transformed face; from that time onwards he must hide his face with a veil. Moses assembles the Hebrews and repeats to them the commandments he has received from God, which are to keep the Sabbath and to construct the Tabernacle. The Israelites do as they are commanded. From that time God dwells in the Tabernacle and orders the travels of the Hebrews. Composition Authorship Jewish and Christian tradition viewed Moses as the author of Exodus and the entire Torah, but by the end of the 19th century the increasing awareness of discrepancies, inconsistencies, repetitions and other features of the Pentateuch had led scholars to abandon this idea. In approximate round dates, the process which produced Exodus and the Pentateuch probably began around 600 BCE when existing oral and written traditions were brought together to form books recognizable as those we know, reaching their final form as unchangeable sacred texts around 400 BCE. Sources Although patent mythical elements are not so prominent in Exodus as in Genesis, ancient legends may have an influence on the book's form or content: for example, the story of the infant Moses's salvation from the Nile is argued to be based on an earlier legend of king Sargon of Akkad, while the story of the parting of the Red Sea may trade on Mesopotamian creation mythology. Similarly, the Covenant Code (the law code in Exodus 20:22–23:33) has some similarities in both content and structure with the Laws of Hammurabi. These influences serve to reinforce the conclusion that the Book of Exodus originated in the exiled Jewish community of 6th-century BCE Babylon, but not all the sources are Mesopotamian: the story of Moses's flight to Midian following the murder of the Egyptian overseer may draw on the Egyptian Story of Sinuhe. Themes Salvation Biblical scholars describe the Bible's theologically-motivated history writing as "salvation history", meaning a history of God's saving actions that give identity to Israel – the promise of offspring and land to the ancestors, the Exodus from Egypt (in which God saves Israel from slavery), the wilderness wandering, the revelation at Sinai, and the hope for the future life in the promised land. 
Theophany A theophany is a manifestation (appearance) of a god – in the Bible, an appearance of the God of Israel, accompanied by storms – the earth trembles, the mountains quake, the heavens pour rain, thunder peals and lightning flashes. The theophany in Exodus begins "the third day" from their arrival at Sinai in chapter 19: Yahweh and the people meet at the mountain, God appears in the storm and converses with Moses, giving him the Ten Commandments while the people listen. The theophany is therefore a public experience of divine law. The second half of Exodus marks the point at which, and describes the process through which, God's theophany becomes a permanent presence for Israel via the Tabernacle. That so much of the book (chapters 25–31, 35–40) describes the plans of the Tabernacle demonstrates the importance it played
of Aviv at the head of the Hebrew calendar, and instructs the Israelites to take a lamb on the 10th day of the month, sacrifice the lamb on the 14th day, daub its blood on their mezuzot—doorposts and lintels, and to observe the Passover meal that night, during the full moon. The 10th plague then comes that night, causing the death of all Egyptian firstborn sons, and prompting Pharaoh to command a final pursuit of the Israelites through the Red Sea as they escape Egypt. God assists the Israelite exodus by parting the sea and allowing the Israelites to pass through, before drowning Pharaoh's forces. As desert life proves arduous, the Israelites complain and long for Egypt, but God miraculously provides manna for them to eat and water to drink. The Israelites arrive at the mountain of God, where Moses's father-in-law Jethro visits Moses; at his suggestion, Moses appoints judges over Israel. God asks whether they will agree to be his people. They accept. The people gather at the foot of the mountain, and with thunder and lightning, fire and clouds of smoke, the sound of trumpets, and the trembling of the mountain, God appears on the peak, and the people see the cloud and hear the voice (or possibly sound) of God. God tells Moses to ascend the mountain. God pronounces the Ten Commandments (the Ethical Decalogue) in the hearing of all Israel. Moses goes up the mountain into the presence of God, who pronounces the Covenant Code of ritual and civil law and promises Canaan to them if they obey. Moses comes down from the mountain and writes down God's words, and the people agree to keep them. God calls Moses up the mountain again, where he remains for forty days and forty nights, at the conclusion of which he returns, bearing the set of stone tablets. God gives Moses instructions for the construction of the tabernacle so that God may dwell permanently among his chosen people, along with instructions for the priestly vestments, the altar and its appurtenances, procedures for ordination of priests, and the daily sacrifice offerings. Aaron becomes the first hereditary high priest. God gives Moses the two tablets of stone containing the words of the ten commandments, written with the "finger of God". While Moses is with God, Aaron casts a golden calf, which the people worship. God informs Moses of their apostasy and threatens to kill them all, but relents when Moses pleads for them. Moses comes down from the mountain, smashes the stone tablets in anger, and commands the Levites to massacre the unfaithful Israelites. God commands Moses to construct two new tablets. Moses ascends the mountain again, where God dictates the Ten Commandments for Moses to write on the tablets. Moses descends from the mountain with a transformed face; from that time onwards he must hide his face with a veil. Moses assembles the Hebrews and repeats to them the commandments he has received from God, which are to keep the Sabbath and to construct the Tabernacle. The Israelites do as they are commanded. From that time God dwells in the Tabernacle and orders the travels of the Hebrews. Composition Authorship Jewish and Christian tradition viewed Moses as the author of Exodus and the entire Torah, but by the end of the 19th century the increasing awareness of discrepancies, inconsistencies, repetitions and other features of the Pentateuch had led scholars to abandon this idea. 
In approximate round dates, the process which produced Exodus and the Pentateuch probably began around 600 BCE when existing oral and written traditions were brought together to form books recognizable as those we know, reaching their final form as unchangeable sacred texts around 400 BCE. Sources Although patent mythical elements are not so prominent in Exodus as in Genesis, ancient legends may have an influence on the book's form or content: for example, the story of the infant Moses's salvation from the Nile is argued to be based on an earlier legend of king Sargon of Akkad, while the story of the parting of the Red Sea may trade on
equipment, guitar amplifiers and some microwave devices. The first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic and peripherals. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The MOSFET (MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment. Types of circuits Circuits and components can be divided into two groups: analog and digital. A particular device may consist of circuitry that has one or the other or a mix of the two types. An important electronic technique in both analog and digital electronics involves the use of feedback. Among many other things, this allows very linear amplifiers to be made with high gain, and enables digital circuits such as registers, computers and oscillators to be built. Analog circuits Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage or current as opposed to discrete levels as in digital circuits. The number of different analog circuits so far devised is huge, especially because a 'circuit' can be defined as anything from a single component to systems containing thousands of components. Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators. One rarely finds modern circuits that are entirely analog; these days, analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called "mixed signal" rather than analog or digital. Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output. In fact, many digital circuits are actually implemented as variations of analog circuits similar to this example – after all, all aspects of the real physical world are essentially analog, so digital effects are only realized by constraining analog behaviour. Digital circuits Digital circuits are electric circuits based on a number of discrete voltage levels.
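The idea that digital behaviour is obtained by constraining a continuous analog quantity to discrete levels can be made concrete with a small software model. The following Python sketch is illustrative only and is not drawn from the text above; the threshold and hysteresis voltages are arbitrary assumptions. It models an ideal comparator, and a Schmitt trigger (listed among the digital building blocks below) whose hysteresis makes the output robust to noise near the threshold.

```python
# Minimal sketch (not from the source text): how a comparator constrains a
# continuous analog voltage to one of two logic levels. Threshold values are
# illustrative assumptions.

def comparator(v_in: float, v_ref: float = 1.65) -> int:
    """Ideal comparator: logic 1 when the input exceeds the reference voltage."""
    return 1 if v_in > v_ref else 0

def schmitt_trigger(samples, v_low: float = 1.2, v_high: float = 2.1):
    """Comparator with hysteresis: the switching threshold depends on the
    current output state, which rejects noise near the threshold."""
    state = 0
    out = []
    for v in samples:
        if state == 0 and v > v_high:
            state = 1
        elif state == 1 and v < v_low:
            state = 0
        out.append(state)
    return out

if __name__ == "__main__":
    analog = [0.2, 1.0, 1.7, 2.3, 1.9, 1.1, 0.4]   # continuous-valued samples
    print([comparator(v) for v in analog])          # [0, 0, 1, 1, 1, 0, 0]
    print(schmitt_trigger(analog))                  # [0, 0, 0, 1, 1, 0, 0]
```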
Digital circuits are the most common physical representation of Boolean algebra and are the basis of all digital computers. To most engineers, the terms "digital circuit", "digital system" and "logic" are interchangeable in the context of digital circuits. Most digital circuits use a binary system with two voltage levels labelled "0" and "1". Often logic "0" will be a lower voltage and referred to as "Low" while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate their design. The definition of the levels as "0" or "1" is arbitrary. Ternary (with three states) logic has been studied, and some prototype computers made. Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits. Digital signal processors are another example. Building blocks: Metal-oxide-semiconductor field-effect transistor (MOSFET) Logic gates Adders Flip-flops Counters Registers Multiplexers Schmitt triggers Highly integrated devices: Memory chip Microprocessors Microcontrollers Application-specific integrated circuit (ASIC) Digital signal processor (DSP) Field-programmable gate array (FPGA) Field-programmable analog array (FPAA) System on chip (SOC) Heat dissipation and thermal management Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy. Noise Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated; thermal noise can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise, cannot be removed as they are due to limitations in physical properties. Electronics theory Mathematical methods are integral to the study of electronics. To become proficient in electronics it is also necessary to become proficient in the mathematics of circuit analysis. Circuit analysis is the study of methods of solving generally linear systems for unknown variables such as the voltage at a certain node or the current through a certain branch of a network. A common analytical tool for this is the SPICE circuit simulator. Also important to electronics is the study and understanding of electromagnetic field theory.
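As an illustration of circuit analysis as the solution of a linear system, the following sketch applies Kirchhoff's current law to a two-node resistor network and solves for the unknown node voltages. The component values and topology are arbitrary assumptions, not taken from the text; the point is only that nodal analysis of the kind a SPICE-like simulator performs reduces to solving G·v = i.

```python
# Minimal sketch (illustrative values): nodal analysis as a linear system.
import numpy as np

# A 10 V source feeds node 1 through R1; R2 joins node 1 to node 2; R3 returns
# node 2 to ground. Kirchhoff's current law at each node gives two equations.
Vs, R1, R2, R3 = 10.0, 1e3, 2e3, 3e3

G = np.array([
    [1/R1 + 1/R2, -1/R2],          # net conductance seen at node 1
    [-1/R2,        1/R2 + 1/R3],   # net conductance seen at node 2
])
i = np.array([Vs / R1, 0.0])       # source current injected at node 1

v = np.linalg.solve(G, i)
print(f"V1 = {v[0]:.3f} V, V2 = {v[1]:.3f} V")   # expected: V1 ≈ 8.333 V, V2 = 5.000 V
```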
Electronics lab Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice. Computer-aided design (CAD) Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others. Packaging methods Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2) – characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for the European Union. Electronic systems design Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user. Mounting options Electrical components are generally mounted in the following ways: Through-hole (sometimes referred to as 'Pin-Through-Hole') Surface mount Chassis mount Rack mount LGA/BGA/PGA socket Electronics industry The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over as of 2018. The largest industry sector is e-commerce, which generated over in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET).
Station for about four years (1860–64), and explored parts of the interior of the South Island, which he wrote about in his A First Year in Canterbury Settlement (1863). The novel is one of the first to explore ideas of artificial intelligence, as influenced by Darwin's recently published On the Origin of Species (1859) and the machines developed out of the Industrial Revolution (late 18th to early 19th centuries). Specifically, it concerns itself, in the three-chapter "Book of the Machines", with the potentially dangerous ideas of machine consciousness and self-replicating machines. Content The greater part of the book consists of a description of Erewhon. The nature of this nation is intended to be ambiguous. At first glance, Erewhon appears to be a Utopia, yet it soon becomes clear that this is far from the case. Yet for all the failings of Erewhon, it is also clearly not a dystopia, such as that depicted in George Orwell's Nineteen Eighty-Four (1949). As a satirical utopia, Erewhon has sometimes been compared to Gulliver's Travels (1726), a classic novel by Jonathan Swift; the image of Utopia in this latter case also bears strong parallels with the self-view of the British Empire at the time. It can also be compared to the William Morris novel, News from Nowhere (1890). Erewhon satirises various aspects of Victorian society, including criminal punishment, religion and anthropocentrism. For example, according to Erewhonian law, offenders are treated as if they were ill, whereas ill people are looked upon as criminals. Another feature of Erewhon is the absence of machines; this is due to the widely shared perception by the Erewhonians that they are potentially dangerous. The Book of the Machines Butler developed the three chapters of Erewhon that make up "The Book of the Machines" from a number of articles that he had contributed to The Press, which had just begun publication in Christchurch, New Zealand, beginning with "Darwin among the Machines" (1863). Butler was the first to write about the possibility that machines might develop consciousness by natural selection ("Darwin among the Machines" is reprinted in the Notebooks of Samuel Butler at Project Gutenberg). Many dismissed this as a joke; but, in his preface to the second edition, Butler wrote, "I regret that reviewers have in some cases been inclined to treat the chapters on Machines as an attempt to reduce Mr Darwin's theory to an absurdity. Nothing could be further from my intention, and few things would be more distasteful to me than any attempt to laugh at Mr Darwin." Characters Higgs—The narrator who informs the reader of the nature
the Erewhonians. Her name is an anagram of Grundy (from Mrs. Grundy, a character in Thomas Morton's play Speed the Plough). Reception In a 1945 broadcast, George Orwell praised the book and said that when Butler wrote Erewhon it needed "imagination of a very high order to see that machinery could be dangerous as well as useful." He recommended the novel, though not its sequel, Erewhon Revisited. Influence and legacy Deleuze and Guattari The French philosopher Gilles Deleuze used ideas from Butler's book at various points in the development of his philosophy of difference. In Difference and Repetition (1968), Deleuze refers to what he calls "Ideas" as "Erewhon". "Ideas are not concepts", he argues, but rather "a form of eternally positive differential multiplicity, distinguished from the identity of concepts." "Erewhon" refers to the "nomadic distributions" that pertain to simulacra, which "are not universals like the categories, nor are they the hic et nunc or nowhere, the diversity to which categories apply in representation." "Erewhon", in this reading, is "not only a disguised no-where but a rearranged now-here." In his collaboration with Félix Guattari, Anti-Oedipus (1972), Deleuze draws on Butler's "The Book of the Machines" to "go beyond" the "usual polemic between vitalism and mechanism" as it relates to their concept of "desiring-machines". Other uses C. S. Lewis alludes to the book in his essay, The Humanitarian Theory of Punishment in the posthumously published collection, God in the Dock (1970). Aldous Huxley alludes to the book in his novel Island (1962) as does Agatha Christie in Death on the Nile (1937). In 1994, a group of ex-Yugoslavian writers in Amsterdam, who had established the PEN centre of Yugoslav Writers in Exile, published a single issue of a literary journal, Erewhon. The New Zealand sound art organisation the Audio Foundation published in 2012 an anthology edited by Bruce Russell, named Erewhon Calling after Butler's book. In 2014, New Zealand artist Gavin Hipkins released his first feature film, titled Erewhon and based on Butler's book. It premiered at the New Zealand International Film Festival and the Edinburgh Art Festival. In "Smile", the second episode of the 2017 season of Doctor Who, the Doctor and Bill explore a spaceship named Erehwon. Despite the slightly different spelling, the episode writer Frank Cottrell-Boyce confirmed that this was a reference to Butler's novel. Karl Popper's book The Open Society and Its Enemies reproduces on its first page the following quotation from Butler: "It will be seen . . . that the Erewhonians are a meek and long-suffering people easily led by the nose, and quick to offer up common sense at the shrine of logic, when a philosopher arises among them who carries them away... by convincing them that their existing institutions are not based on the strictest principles of morality". 'Erewhon' is the unofficial name US astronauts gave Regan Station, a military space station in David Brin's 1990 novel Earth. 'The Butlerian Jihad' is the name of the crusade to wipe out 'thinking machines' in the novel Dune by Frank Herbert. 'Erewhon' is the name of a Los Angeles-based natural foods grocery store originally founded in Boston in 1966. 'Erewhon' is also the name of an independent speculative fiction publishing company founded in 2018 by Liz Gorinsky.
See also Rangitata River – the location of the Erewhon sheep station named by Butler who was the first white settler in the area and lived at the Mesopotamia Sheep Station Nacirema - another piece of satirical writing with a similar backwards pun References "Mesopotamia Station", Newton, P. (1960) "Early Canterbury Runs", Acland, L. G. D. (1946) "Samuel Butler of Mesopotamia", Maling, P. B. (1960) "The Cradle
referred to as ectopic. Most ectopias are congenital, but some may happen later in life. Examples Ectopic ACTH syndrome, in which ACTH is secreted outside the pituitary, most often by a small-cell lung carcinoma. Ectopic calcification, a pathologic deposition of calcium salts in tissues or bone growth in soft tissues Cerebellar tonsillar ectopia, also known as Chiari malformation, a herniation of the brain through the foramen magnum, which may be congenital or caused by trauma. Ectopic cilia, a hair growing where it is not supposed to be, commonly an eyelash arising from an abnormal spot on the eyelid (see also distichia) Ectopia cordis, the displacement of the heart outside the body during fetal development Ectopic enamel, a tooth abnormality, where enamel is found in an unusual location, such as at the root of a tooth Ectopic expression, the expression of a gene in an abnormal place in
crystalline lens of the eye Neuronal ectopia Ectopic pancreas, displacement of pancreatic tissue in the body with no connection, anatomical or vascular, to the pancreas Ectopic recombination, the recombination between sequences (like leu2 sequences) present at different genomic locations Renal ectopia, a kidney located in an abnormal position; in crossed renal ectopia both kidneys lie on the same side of the body Ectopic testis, a testis that has moved to an unusual location Ectopic thymus, where thymus tissue is found in an abnormal location Ectopic thyroid, where the entire thyroid or parts of it are located elsewhere in the body Ectopic tooth Ectopic ureter, where the ureter terminates somewhere other than the urinary bladder Ectopia vesicae, a congenital anomaly in which
which contrasts with that of the hippocampal neurons, which usually encode information about specific places. This suggests that EC encodes general properties about current contexts that are then used by hippocampus to create unique representations from combinations of these properties. Research generally highlights a useful distinction in which the medial entorhinal cortex (MEC) mainly supports processing of space, whereas the lateral entorhinal cortex (LEC) mainly supports the processing of time. The MEC exhibits a strong ~8 Hz rhythmic neural activity known as theta. Alterations in the neural activity across the brain region result in an observed "traveling wave" phenomenon across the MEC long axis, similar to that of the hippocampus, due to asymmetric theta oscillations. The underlying cause of these phase shifts and their waveform changes is unknown. Individual variation in the volume of EC is linked to taste perception. People with a larger EC in the left hemisphere found quinine, the source of bitterness in tonic water, less bitter. Clinical significance Alzheimer's disease The entorhinal cortex is the first area of the brain to be affected in Alzheimer's disease; a recent functional magnetic resonance imaging study has localised the area to the lateral entorhinal cortex. Lopez et al. have shown, in a multimodal study, that there are differences in the volume of the left entorhinal cortex between progressing (to Alzheimer's disease) and stable mild cognitive impairment patients. These authors also found that the volume of the left entorhinal cortex inversely correlates with the level of alpha band phase synchronization between the right anterior cingulate and temporo-occipital regions. In 2012, neuroscientists at UCLA conducted an experiment in which seven epilepsy patients, who already had electrodes implanted in their brains, played a virtual taxi video game, allowing the researchers to monitor neuronal activity whenever memories were being formed. When the researchers stimulated the nerve fibers of each patient's entorhinal cortex during learning, the patients were able to navigate various routes and recognize landmarks more quickly. This signified an improvement in the patients' spatial memory. Effect of aerobic exercise A study found that, regardless of gender, young adults who have greater aerobic fitness also have greater volume of their entorhinal cortex. It suggests that aerobic exercise may have a positive effect on the medial temporal lobe memory system (which includes the entorhinal cortex) in healthy young adults. This also suggests that exercise training, when designed to increase aerobic fitness, might have a positive effect on the brain in healthy young adults. References External links NIF Search - Entorhinal Cortex via the Neuroscience Information Framework For delineating the Entorhinal cortex, see Desikan RS, Ségonne F, Fischl B, Quinn BT, Dickerson BC, Blacker D, Buckner RL, Dale
is an area of the brain's allocortex, located in the medial temporal lobe, whose functions include being a widespread network hub for memory, navigation, and the perception of time. The EC is the main interface between the hippocampus and neocortex. The EC-hippocampus system plays an important role in declarative (autobiographical/episodic/semantic) memories and in particular spatial memories including memory formation, memory consolidation, and memory optimization in sleep. The EC is also responsible for the pre-processing (familiarity) of the input signals in the reflex nictitating membrane response of classical trace conditioning; the association of impulses from the eye and the ear occurs in the entorhinal cortex. Structure In rodents, the EC is located at the caudal end of the temporal lobe. In primates it is located at the rostral end of the temporal lobe and stretches dorsolaterally. It is usually divided into medial and lateral regions with three bands with distinct properties and connectivity running perpendicular across the whole area. A distinguishing characteristic of the EC is the lack of cell bodies where layer IV should be; this layer is called the Lamina dissecans. Connections The superficial layers – layers II and III – of EC project to the dentate gyrus and hippocampus: Layer II projects primarily to dentate gyrus and hippocampal region CA3; layer III projects primarily to hippocampal region CA1 and the subiculum. These layers receive input from other cortical areas, especially associational, perirhinal, and parahippocampal cortices, as well as prefrontal cortex. EC as a whole, therefore, receives highly processed input from every sensory modality, as well as input relating to ongoing cognitive processes, though it should be stressed that, within EC, this information remains at least partially segregated. The deep layers, especially layer V, receive one of the three main outputs of the hippocampus and, in turn, reciprocate connections from other cortical areas that project to superficial EC. The rodent entorhinal cortex shows a modular organization, with different properties and connections in different areas. Brodmann's areas Brodmann area 28 is known as the "area entorhinalis" Brodmann area 34 is known as the "area entorhinalis dorsalis" Function Neuron information processing In 2005, it was discovered that entorhinal cortex contains a neural map of the spatial environment in rats. In 2014, John O'Keefe, May-Britt Moser and Edvard Moser received the Nobel Prize in Physiology or Medicine, partly because of this discovery. In rodents, neurons in the lateral entorhinal cortex exhibit little spatial selectivity, whereas neurons of the medial entorhinal cortex (MEC), exhibit multiple "place fields" that are arranged in a hexagonal pattern, and are, therefore, called "grid cells". These fields and spacing between fields increase from the dorso-lateral MEA
It was agreed by all European evolutionists that all vertebrates looked very similar at an early stage, in what was thought of as a common ideal type, but there was a continuing debate from the 1820s between the Romantic recapitulation theory that human embryos developed through stages of the forms of all the major groups of adult animals, literally manifesting a sequence of organisms on a linear chain of being, and Karl Ernst von Baer's opposing view, stated in von Baer's laws of embryology, that the early general forms diverged into four major groups of specialised forms without ever resembling the adult of another species, showing affinity to an archetype but no relation to other types or any transmutation of species. By the time Haeckel was teaching he was able to use a textbook with woodcut illustrations written by his own teacher Albert von Kölliker, which purported to explain human development while also using other mammalian embryos to claim a coherent sequence. Despite the significance to ideas of transformism, this was not really polite enough for the new popular science writing, and was a matter for medical institutions and for experts who could make their own comparisons.
Darwin, Naturphilosophie and Lamarck Darwin's On the Origin of Species, which made a powerful impression on Haeckel when he read it in 1864, was very cautious about the possibility of ever reconstructing the history of life, but did include a section reinterpreting von Baer's embryology and revolutionising the field of study, concluding that "Embryology rises greatly in interest, when we thus look at the embryo as a picture, more or less obscured, of the common parent-form of each great class of animals." It mentioned von Baer's 1828 anecdote (misattributing it to Louis Agassiz) that at an early stage embryos were so similar that it could be impossible to tell whether an unlabelled specimen was of a mammal, a bird, or of a reptile, and Darwin's own research using embryonic stages of barnacles to show that they are crustaceans, while cautioning against the idea that one organism or embryonic stage is "higher" or "lower", or more or less evolved. Haeckel disregarded such caution, and in a year wrote his massive and ambitious Generelle Morphologie, published in 1866, presenting a revolutionary new synthesis of Darwin's ideas with the German tradition of Naturphilosophie going back to Goethe and with the progressive evolutionism of Lamarck in what he called Darwinismus. He used morphology to reconstruct the evolutionary history of life, in the absence of fossil evidence using embryology as evidence of ancestral relationships. He invented new terms, including ontogeny and phylogeny, to present his evolutionised recapitulation theory that "ontogeny recapitulated phylogeny". The two massive volumes sold poorly, and were heavy going: with his limited understanding of German, Darwin found them impossible to read. Haeckel's publisher turned down a proposal for a "strictly scholarly and objective" second edition. Embryological drawings Haeckel's aim was a reformed morphology with evolution as the organising principle of a cosmic synthesis unifying science, religion, and art. He was giving successful "popular lectures" on his ideas to students and townspeople in Jena, in an approach pioneered by his teacher Rudolf Virchow. To meet his publisher's need for a popular work he used a student's transcript of his lectures as the basis of his Natürliche Schöpfungsgeschichte of 1868, presenting a comprehensive presentation of evolution. In the Spring of that year he drew figures for the book, synthesising his views of specimens in Jena and published pictures to represent types. After publication he told a colleague that the images "are completely exact, partly copied from nature, partly assembled from all illustrations of these early stages that have hitherto become known". There were various styles of embryological drawings at that time, ranging from more schematic representations to "naturalistic" illustrations of specific specimens. Haeckel believed privately that his figures were both exact and synthetic, and in public asserted that they were schematic like most figures used in teaching. The images were reworked to match in size and orientation, and though displaying Haeckel's own views of essential features, they support von Baer's concept that vertebrate embryos begin similarly and then diverge. Relating different images on a grid conveyed a powerful evolutionary message. As a book for the general public, it followed the common practice of not citing sources. 
The book sold very well, and while some anatomical experts hostile to Haeckel's evolutionary views expressed some private concerns that certain figures had been drawn rather freely, the figures showed what they already knew about similarities in embryos. The first published concerns came from Ludwig Rütimeyer, a professor of zoology and comparative anatomy at the University of Basel who had placed fossil mammals in an evolutionary lineage early in the 1860s and had been sent a complimentary copy. At the end of 1868 his review in the Archiv für Anthropologie wondered about the claim that the work was "popular and scholarly", doubting whether the second was true, and expressed horror about such public discussion of man's place in nature with illustrations such as the evolutionary trees being shown to non-experts. Though he made no suggestion that embryo illustrations should be directly based on specimens, to him the subject demanded the utmost "scrupulosity and conscientiousness" and an artist must "not arbitrarily model or generalise his originals for speculative purposes" which he considered proved by comparison with works by other authors. In particular, "one and the same, moreover incorrectly interpreted woodcut, is presented to the reader three times in a row and with three different captions as [the] embryo of the dog, the chick, [and] the turtle". He accused Haeckel of "playing fast and loose with the public and with science", and failing to live up to the obligation to the truth of every serious researcher. Haeckel responded with angry accusations of bowing to religious prejudice, but in the second (1870) edition changed the duplicated embryo images to a single image captioned "embryo of a mammal or bird". Duplication using galvanoplastic stereotypes (clichés) was a common technique in textbooks, but not on the same page to represent different eggs or embryos. In 1891 Haeckel made the excuse that this "extremely rash foolishness" had occurred in undue haste but was "bona fide", and since repetition of incidental details was obvious on close inspection, it is unlikely to have been intentional deception. The revised 1870 second edition of 1,500 copies attracted more attention, being quickly followed by further revised editions with larger print runs as the book became a prominent part of the optimistic, nationalist, anticlerical "culture of progress" in Otto von Bismarck's new German Empire. The similarity of early vertebrate embryos became common knowledge, and the illustrations were praised by experts such as Michael Foster of the University of Cambridge. In the introduction to his 1871 The Descent of Man, and Selection in Relation to Sex, Darwin gave particular praise to Haeckel, writing that if Natürliche Schöpfungsgeschichte "had appeared before my essay had been written, I should probably never have completed it". The first chapter included an illustration: "As some of my readers may never have seen a drawing of an embryo, I have given one of man and another of a dog, at about the same early stage of development, carefully copied from two works of undoubted accuracy" with a footnote citing the sources and noting that "Häckel has also given analogous drawings in his Schöpfungsgeschichte." The fifth edition of Haeckel's book appeared in 1874, with its frontispiece a heroic portrait of Haeckel himself, replacing the previous controversial image of the heads of apes and humans. 
Controversy Later in 1874, Haeckel's simplified embryology textbook Anthropogenie made the subject into a battleground over Darwinism aligned with Bismarck's Kulturkampf ("culture struggle") against the Catholic Church. Haeckel took particular care over the illustrations, changing to the leading zoological publisher Wilhelm Engelmann of Leipzig and obtaining from them use of illustrations from their other textbooks as well as preparing his own drawings including a dramatic double page illustration showing "early", "somewhat later" and "still later" stages of 8 different vertebrates. Though Haeckel's views had attracted continuing controversy, there had been little dispute about the embryos and he had many expert supporters, but Wilhelm His revived the earlier criticisms and introduced new attacks on the 1874 illustrations. Others joined in: both expert anatomists and Catholic priests and supporters were politically opposed to Haeckel's views. While it has been widely claimed that Haeckel was charged with fraud by five professors and convicted by a university court at Jena, there does not appear to be an independently verifiable source for this claim. Recent analyses (Richardson 1998, Richardson and Keuck 2002) have found that some of the criticisms of Haeckel's embryo drawings were legitimate, but others were unfounded. There were multiple versions of the embryo drawings, and Haeckel rejected the claims of fraud. It was later said that "there is evidence of sleight of hand" on both sides of the feud between Haeckel and Wilhelm His. Robert J. Richards, in a paper published in 2008, defends the case for Haeckel, shedding doubt against the fraud accusations based on the material used for comparison with what Haeckel could access at the time. Awards and honors Haeckel was elected as a member to the American Philosophical Society in 1885. He was awarded the title of Excellency by Kaiser Wilhelm II in 1907 and the Linnean Society of London's prestigious Darwin-Wallace Medal in 1908. In the United States, Mount Haeckel, a summit in the Eastern Sierra Nevada, overlooking the Evolution Basin, is named in his honour, as is another Mount Haeckel, a summit in New Zealand; and the asteroid 12323 Haeckel. In Jena he is remembered with a monument at Herrenberg (erected in 1969), an exhibition at Ernst-Haeckel-Haus, and at the Jena Phyletic Museum, which continues to teach about evolution and share his work to this day. The ratfish, Harriotta haeckeli is named in his honor. The research vessel Ernst Haeckel is named in his honor. In 1981, a botanical journal called Ernstia was started being published in the
and in 1910 he withdrew from the Evangelical Church of Prussia. On the occasion of his 80th birthday celebration he was presented with a two-volume work entitled Was wir Ernst Haeckel verdanken (What We Owe to Ernst Haeckel), edited at the request of the German Monistenbund by Heinrich Schmidt of Jena. Haeckel's wife, Agnes, died in 1915, and he became substantially frailer, breaking his leg and arm. He sold his "Villa Medusa" in Jena in 1918 to the Carl Zeiss foundation, which preserved his library. Haeckel died on 9 August 1919. Haeckel became the most famous proponent of Monism in Germany. Politics Haeckel's affinity for the German Romantic movement, coupled with his acceptance of a form of Lamarckism, influenced his political beliefs. Rather than being a strict Darwinian, Haeckel believed that the characteristics of an organism were acquired through interactions with the environment and that ontogeny reflected phylogeny. He saw the social sciences as instances of "applied biology", and that phrase was picked up and used for Nazi propaganda. In 1906 Haeckel was among the founders of the Monist League (Deutscher Monistenbund), which took a stance against philosophical materialism and promoted a "natural Weltanschauung". This organization lasted until 1933 and included such notable members as Wilhelm Ostwald, Georg von Arco (1869–1940), Helene Stöcker and Walter Arthur Berendsohn. He was the first person to use the term "first world war". However, Haeckel's books were banned by the Nazi Party, which rejected Monism and Haeckel's freedom of thought. Moreover, Haeckel had often overtly recognized the great contribution of educated Jews to German culture. Research Haeckel was a zoologist, an accomplished artist and illustrator, and later a professor of comparative anatomy. Although Haeckel's ideas are important to the history of evolutionary theory, and although he was a competent invertebrate anatomist most famous for his work on radiolaria, many speculative concepts that he championed are now considered incorrect. For example, Haeckel described and named hypothetical ancestral microorganisms that have never been found. He was one of the first to consider psychology as a branch of physiology. He also proposed the kingdom Protista in 1866. His chief interests lay in evolution and life development processes in general, including development of nonrandom form, which culminated in the beautifully illustrated Kunstformen der Natur (Art forms of nature). Haeckel did not support natural selection, rather believing in Lamarckism. Haeckel advanced a version of the earlier recapitulation theory previously set out by Étienne Serres in the 1820s and supported by followers of Étienne Geoffroy Saint-Hilaire including Robert Edmond Grant. It proposed a link between ontogeny (development of form) and phylogeny (evolutionary descent), summed up by Haeckel in the phrase "ontogeny recapitulates phylogeny". His concept of recapitulation has been refuted in the form he gave it (now called "strong recapitulation"), in favour of the ideas first advanced by Karl Ernst von Baer. The strong recapitulation hypothesis views ontogeny as repeating forms of adult ancestors, while weak recapitulation means that what is repeated (and built upon) is the ancestral embryonic development process.
Haeckel supported the theory with embryo drawings that have since been shown to be oversimplified and in part inaccurate, and the theory is now considered an oversimplification of quite complicated relationships, however comparison of embryos remains a powerful way to demonstrate that all animals are related. Haeckel introduced the concept of heterochrony, the change in timing of embryonic development over the course of evolution. Haeckel was a flamboyant figure, who sometimes took great, non-scientific leaps from available evidence. For example, at the time when Darwin published On the Origin of Species by Means of Natural Selection (1859), Haeckel postulated that evidence of human evolution would be found in the Dutch East Indies (now Indonesia). At that time, no remains of human ancestors had yet been identified. He described these theoretical remains in great detail and even named the as-yet unfound species, Pithecanthropus alalus, and instructed his students such as Richard and Oskar Hertwig to go and find it. One student did find some remains: a Dutchman named Eugène Dubois searched the East Indies from 1887 to 1895, discovering the remains of Java Man in 1891, consisting of a skullcap, thighbone, and a few teeth. These remains are among the oldest hominid remains ever found. Dubois classified Java Man with Haeckel's Pithecanthropus label, though they were later reclassified as Homo erectus. Some scientists of the day suggested Dubois' Java Man as a potential intermediate form between modern humans and the common ancestor we share with the other great apes. The current consensus of anthropologists is that the direct ancestors of modern humans were African populations of Homo erectus (possibly Homo ergaster), rather than the Asian populations exemplified by Java Man and Peking Man. (Ironically, a new human species, Homo floresiensis, a dwarf human type, has recently been discovered in the island of Flores). Polygenism and racial theory The creationist polygenism of Samuel George Morton and Louis Agassiz, which presented human races as separately created species, was rejected by Charles Darwin, who argued for the monogenesis of the human species and the African origin of modern humans. In contrast to most of Darwin's supporters, Haeckel put forward a doctrine of evolutionary polygenism based on the ideas of the linguist August Schleicher, in which several different language groups had arisen separately from speechless prehuman Urmenschen (), which themselves had evolved from simian ancestors. These separate languages had completed the transition from animals to man, and under the influence of each main branch of languages, humans had evolved – in a kind of Lamarckian use-inheritance – as separate species, which could be subdivided into races. From this, Haeckel drew the implication that languages with the most potential yield the human races with the most potential, led by the Semitic and Indo-Germanic groups, with Berber, Jewish, Greco-Roman and Germanic varieties to the fore. As Haeckel stated: Haeckel's view can be seen as a forerunner of the views of Carleton Coon, who also believed that human races evolved independently and in parallel with each other. These ideas eventually fell from favour. Haeckel also applied the hypothesis of polygenism to the modern diversity of human groups. 
He became a key figure in social darwinism and a leading proponent of scientific racism. Haeckel divided human beings into ten races, of which the Caucasian was the highest and the primitives were doomed to extinction. In his view, 'Negroes' were savages and Whites were the most civilised: for instance, he claimed that '[t]he Negro' had stronger and more freely movable toes than any other race, which, he argued, was evidence of their being less evolved, and which led him to compare them to 'four-handed' apes. In his Ontogeny and Phylogeny, Harvard paleontologist Stephen Jay Gould wrote: "[Haeckel's] evolutionary racism; his call to the German people for racial purity and unflinching devotion to a 'just' state; his belief that harsh, inexorable laws of evolution ruled human civilization and nature alike, conferring upon favored races the right to dominate others ... all contributed to the rise of Nazism." In his introduction to the Nazi party ideologue Alfred Rosenberg's 1930 book The Myth of the Twentieth Century, Peter Peel affirms that Rosenberg had indeed read Haeckel. In the same line of thought, historian Daniel Gasman states that Haeckel's ideology stimulated the birth of Fascist ideology in Italy and France. However, Robert J. Richards notes: "Haeckel, on his travels to Ceylon and Indonesia, often formed closer and more intimate relations with natives, even members of the untouchable classes, than with the European colonials." and says the Nazis rejected Haeckel, since he opposed antisemitism, while supporting ideas they disliked (for instance atheism, feminism, internationalism, pacifism etc.). Asia hypothesis Haeckel claimed the origin of humanity was to be found in Asia: he believed that Hindustan (Indian subcontinent) was the actual location where the first humans had evolved. Haeckel argued that humans were closely related to the primates of Southeast Asia and rejected Darwin's hypothesis of an African origin. Haeckel later claimed that the missing link was to be found on the lost continent of Lemuria located in the Indian Ocean. He believed that Lemuria was the home of the first humans and that Asia was the home of many of the earliest primates; he thus held that Asia was the cradle of hominid evolution. Haeckel also claimed that Lemuria connected Asia and Africa, which allowed the migration of humans to the rest of the world. In Haeckel's book The History of Creation (1884) he included migration routes which he thought the first humans had used to spread out of Lemuria. Embryology and recapitulation theory When Haeckel was a student in the 1850s he showed great interest in embryology, attending the rather unpopular lectures twice and in his notes sketched the visual aids: textbooks had few illustrations, and large format plates were used to show students how to see the tiny forms under a reflecting microscope, with the translucent tissues seen against a black background. Developmental series were used to show stages within a species, but inconsistent views and stages made it even more difficult to compare different species.
changed over time as the study of evolution has progressed. In the 19th century, it was used to describe the belief that organisms deliberately improved themselves through progressive inherited change (orthogenesis). The teleological belief went on to include cultural evolution and social evolution. In the 1970s the term Neo-Evolutionism was used to describe the idea "that human beings sought to preserve a familiar style of life unless change was forced on them by factors that were beyond their control". The term is most often used by creationists to describe adherence to the scientific consensus on evolution as equivalent to a secular religion. The term is very seldom used within the scientific community, since the scientific position on evolution is
accepted by the overwhelming majority of scientists. Because evolutionary biology is the default scientific position, it is assumed that "scientists" or "biologists" are "evolutionists" unless specifically noted otherwise. In the creation–evolution controversy, creationists often call those who accept the validity of the modern evolutionary synthesis "evolutionists" and the theory itself "evolutionism". 19th-century teleological use Before its use to describe biological evolution, the term "evolution" was originally used to refer to any orderly sequence of events with the outcome somehow contained at the start. The first five editions of Darwin's On the Origin of Species used the word "evolved", but the word "evolution" was only used in its sixth edition in 1872. By then, Herbert Spencer had already developed, in 1862, the concept that organisms strive to evolve due to an internal "driving force" (orthogenesis). Edward B. Tylor and Lewis H Morgan brought the term "evolution" to anthropology, though they tended toward the older pre-Spencerian definition, helping to form the concept of unilineal (social) evolution used during the later part of what Trigger calls the Antiquarianism–Imperial Synthesis period (c. 1770 – c. 1900). The term evolutionism subsequently came to be used
formulas in order to reduce logic to arithmetic. The Entscheidungsproblem is related to Hilbert's tenth problem, which asks for an algorithm to decide whether Diophantine equations have a solution. The non-existence of such an algorithm, established by the work of Yuri Matiyasevich, Julia Robinson, Martin Davis, and Hilary Putnam, with the final piece of the proof in 1970, also implies a negative answer to the Entscheidungsproblem. Some first-order theories are algorithmically decidable; examples of this include Presburger arithmetic, real closed fields and static type systems of many programming languages. The general first-order theory of the natural numbers expressed in Peano's axioms cannot be decided with an algorithm, however. Practical decision procedures Having practical decision procedures for classes of logical formulas is of considerable interest for program verification and circuit verification. Pure Boolean logical formulas are usually decided using SAT-solving techniques based on the DPLL algorithm (see the sketch after the references below). Conjunctive formulas over linear real or rational arithmetic can be decided using the simplex algorithm; formulas in linear integer arithmetic (Presburger arithmetic) can be decided using Cooper's algorithm or William Pugh's Omega test. Formulas with negations, conjunctions and disjunctions combine the difficulties of satisfiability testing with that of decision of conjunctions; they are generally decided nowadays using SMT-solving techniques, which combine SAT-solving with decision procedures for conjunctions and propagation techniques. Real polynomial arithmetic, also known as the theory of real closed fields, is decidable; this is the Tarski–Seidenberg theorem, which has been implemented in computers by using the cylindrical algebraic decomposition. See also Decidability (logic) Automated theorem proving Hilbert's second problem Oracle machine Turing's proof Notes References David Hilbert and Wilhelm Ackermann (1928). Grundzüge der theoretischen Logik (Principles of Mathematical Logic). Springer-Verlag. Alonzo Church, "An unsolvable problem of elementary number theory", American Journal of Mathematics, 58 (1936), pp 345–363 Alonzo Church, "A note on the Entscheidungsproblem", Journal of Symbolic Logic, 1 (1936), pp 40–41. Martin Davis, 2000, Engines of Logic, W.W. Norton & Company, London, pbk. Alan Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Series 2, 42 (1936–7), pp 230–265. Online versions: from journal website, from Turing Digital Archive, from abelard.org. Errata appeared in Series 2, 43 (1937), pp 544–546.
Martin Davis, "The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Problems And Computable Functions", Raven Press, New York, 1965. Turing's paper is #3 in this volume. Papers include those by Gödel, Church, Rosser, Kleene, and Post. Andrew Hodges, Alan Turing: The Enigma, Simon and Schuster, New York, 1983. Alan M. Turing's biography. Cf Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof. Robert Soare, "Computability and recursion", Bull. Symbolic Logic 2 (1996), no. 3, 284–321. Stephen Toulmin, "Fall of a Genius", a book review of "Alan Turing: The Enigma by Andrew Hodges", in The New York Review of Books, 19 January 1984, p. 3ff. Alfred North Whitehead and Bertrand Russell, Principia Mathematica to *56, Cambridge at the University Press, 1962.
Re: the problem of paradoxes, the authors discuss the problem of a set not being an object in any of its "determining functions".
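As a minimal illustration of the DPLL-style SAT solving mentioned under practical decision procedures above, the following sketch decides the satisfiability of a small propositional formula in conjunctive normal form. It is written in Python; the clause encoding (a list of integer literals per clause, negative integers for negated variables), the helper simplify, and the function names are illustrative assumptions, not part of any standard library.

    def dpll(clauses, assignment=None):
        # Very small DPLL sketch: unit propagation plus branching.
        if assignment is None:
            assignment = {}
        clauses = simplify(clauses, assignment)
        if clauses is None:            # a clause became empty: conflict
            return None
        if not clauses:                # all clauses satisfied
            return assignment
        for clause in clauses:         # unit propagation
            if len(clause) == 1:
                lit = clause[0]
                return dpll(clauses, {**assignment, abs(lit): lit > 0})
        var = abs(clauses[0][0])       # branch on the first unassigned variable
        for value in (True, False):
            result = dpll(clauses, {**assignment, var: value})
            if result is not None:
                return result
        return None

    def simplify(clauses, assignment):
        # Drop satisfied clauses and false literals under the partial assignment.
        out = []
        for clause in clauses:
            new_clause, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    new_clause.append(lit)
            if satisfied:
                continue
            if not new_clause:
                return None            # empty clause: current assignment fails
            out.append(new_clause)
        return out

    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    print(dpll([[1, -2], [2, 3], [-1, -3]]))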
servant, Ratleic, to Rome with an end to find relics for the new building. Once in Rome, Ratleic robbed a catacomb of the bones of the Martyrs Marcellinus and Peter and had them translated to Michelstadt. Once there, the relics made it known they were unhappy with their new tomb and thus had to be moved again to Mulinheim. Once established there, they proved to be miracle workers. Although unsure as to why these saints should choose such a "sinner" as their patron, Einhard nonetheless set about ensuring they continued to receive a resting place fitting of their honour. Between 831 and 834 he founded a Benedictine Monastery and, after the death of his wife, served as its Abbot until his own death in 840. Local lore Local lore from Seligenstadt portrays Einhard as the lover of Emma, one of Charlemagne's daughters, and has the couple elope from court. Charlemagne found them at Seligenstadt (then called Obermühlheim) and forgave them. This account is used to explain the name "Seligenstadt" by folk etymology. Einhard and his wife were originally buried in one sarcophagus in the choir of the church in Seligenstadt, but in 1810 the sarcophagus was presented by the Grand Duke of Hesse to the count of Erbach, who claims descent from Einhard as the husband of Imma, the reputed daughter of Charlemagne. The count put it in the famous chapel of his castle at Erbach in the Odenwald. Works The most famous of Einhard's works is his biography of Charlemagne, the Vita Karoli Magni, "The Life of Charlemagne" (c. 817–836), which provides much direct information about Charlemagne's life and character, written sometime between 817 and 830. In composing this he relied heavily upon the Royal Frankish Annals. Einhard's literary model was the classical work of the Roman historian Suetonius, the Lives of the Caesars, though it is important to stress that the work is very much Einhard's own, that is
(nutritor) and to whom he was a debtor "in life and death". The work thus contains an understandable degree of bias, Einhard taking care to exculpate Charlemagne in some matters, not mention others, and to gloss over certain issues which would be of embarrassment to Charlemagne, such as the morality of his daughters; by contrast, other issues are curiously not glossed over, like his concubines. Einhard is also responsible for three other extant works: a collection of letters, On the Translations and the Miracles of SS. Marcellinus and Petrus, and On the Adoration of the Cross. The latter dates from ca. 830 and was not rediscovered until 1885, when Ernst Dümmler identified a text in a manuscript in Vienna as the missing Libellus de adoranda cruce, which Einhard had dedicated to his pupil Lupus Servatus. See also Royal Frankish Annals References Bibliography Tischler, Matthias M. (2001) Einharts Vita Karoli. Studien zur Entstehung, Überlieferung und Rezeption (MGH. Schriften 48, I–II), Hanover: Hahn. . External links Holland, Arthur William (1911). "Einhard". In Chisholm, Hugh (ed.). Encyclopædia Britannica. 9. (11th ed.). Cambridge University Press. pp. 134–135. Schlager, Patricius (1909). "Einhard". In Catholic Encyclopedia. 5. New York: Robert Appleton Company. pp. 82–83. Vita Karoli Magni—Einhard's Life of Charlemagne, Latin text at The Latin Library , translated by Einhard-Preis Literature prize awarded by the Einhard-Foundation of Seligenstadt to authors for writing an outstanding biography Opera Omnia by Migne Patrologia Latina with analytical indexes Einhardi vita Karoli Magni in Bibliotheca Augustana The Einhard Way from Michelstadt to Seligenstadt Home page of the Einhard Foundation at Seligenstadt Home page of the Einhard Society, Seligenstadt 770s
C–O–C bonds has a low barrier. Their flexibility and low polarity are manifested in their physical properties; they tend to be less rigid (lower melting point) and more volatile (lower boiling point) than the corresponding amides. The pKa of the alpha-hydrogens on esters is around 25. Many esters have the potential for conformational isomerism, but they tend to adopt an s-cis (or Z) conformation rather than the s-trans (or E) alternative, due to a combination of hyperconjugation and dipole minimization effects. The preference for the Z conformation is influenced by the nature of the substituents and solvent, if present. Lactones with small rings are restricted to the s-trans (i.e. E) conformation due to their cyclic structure. Physical properties and characterization Esters are more polar than ethers but less polar than alcohols. They participate in hydrogen bonds as hydrogen-bond acceptors, but cannot act as hydrogen-bond donors, unlike their parent alcohols. This ability to participate in hydrogen bonding confers some water-solubility. Because of their lack of hydrogen-bond-donating ability, esters do not self-associate. Consequently, esters are more volatile than carboxylic acids of similar molecular weight. Characterization and analysis Esters are generally identified by gas chromatography, taking advantage of their volatility. IR spectra for esters feature an intense sharp band in the range 1730–1750 cm−1 assigned to νC=O. This peak changes depending on the functional groups attached to the carbonyl. For example, a benzene ring or double bond in conjugation with the carbonyl will bring the wavenumber down about 30 cm−1. Applications and occurrence Esters are widespread in nature and are widely used in industry. In nature, fats are in general triesters derived from glycerol and fatty acids. Esters are responsible for the aroma of many fruits, including apples, durians, pears, bananas, pineapples, and strawberries (McGee, Harold. On Food and Cooking. 2003, Scribner, New York). Several billion kilograms of polyesters are produced industrially annually, important products being polyethylene terephthalate, acrylate esters, and cellulose acetate. Preparation Esterification is the general name for a chemical reaction in which two reactants (typically an alcohol and an acid) form an ester as the reaction product. Esters are common in organic chemistry and biological materials, and often have a pleasant characteristic, fruity odor. This leads to their extensive use in the fragrance and flavor industry. Ester bonds are also found in many polymers. Esterification of carboxylic acids with alcohols The classic synthesis is the Fischer esterification, which involves treating a carboxylic acid with an alcohol in the presence of a dehydrating agent: RCO2H + R′OH ⇌ RCO2R′ + H2O The equilibrium constant for such reactions is about 5 for typical esters, e.g., ethyl acetate. The reaction is slow in the absence of a catalyst. Sulfuric acid is a typical catalyst for this reaction. Many other acids are also used, such as polymeric sulfonic acids. Since esterification is highly reversible, the yield of the ester can be improved using Le Chatelier's principle: Using the alcohol in large excess (i.e., as a solvent). Using a dehydrating agent: sulfuric acid not only catalyzes the reaction but sequesters water (a reaction product). Other drying agents such as molecular sieves are also effective.
Removal of water by physical means such as distillation as a low-boiling azeotrope with toluene, in conjunction with a Dean-Stark apparatus.
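As a rough numerical illustration of the equilibrium argument above, the following sketch estimates how the conversion of the acid changes as the alcohol is used in increasing excess. It is plain Python; the function name is illustrative, and the value K ≈ 5 is the figure quoted earlier for typical esters such as ethyl acetate.

    def esterification_conversion(K=5.0, alcohol_excess=1.0):
        # Fischer esterification: acid + alcohol <=> ester + water, starting
        # from 1 mol acid and `alcohol_excess` mol alcohol.
        # Solve K = x*x / ((1 - x) * (alcohol_excess - x)) for the conversion x
        # by bisection (the quotient increases monotonically in x).
        lo, hi = 0.0, min(1.0, alcohol_excess)
        for _ in range(100):
            x = (lo + hi) / 2
            q = (x * x) / ((1 - x) * (alcohol_excess - x))
            if q < K:
                lo = x
            else:
                hi = x
        return x

    for excess in (1.0, 2.0, 10.0):
        print(excess, round(esterification_conversion(alcohol_excess=excess), 3))
    # roughly 0.69, 0.87, 0.98: excess alcohol pushes the equilibrium toward
    # the ester, as Le Chatelier's principle predicts.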
Reagents are known that drive the dehydration of mixtures of alcohols and carboxylic acids. One example is the Steglich esterification, which is a method of forming esters under mild conditions. The method is popular in peptide synthesis, where the substrates are sensitive to harsh conditions like high heat. DCC (dicyclohexylcarbodiimide) is used to activate the carboxylic acid to further reaction. 4-Dimethylaminopyridine (DMAP) is used as an acyl-transfer catalyst. Another method for the dehydration of mixtures of alcohols and carboxylic acids is the Mitsunobu reaction: RCO2H + R′OH + P(C6H5)3 + R2N2 → RCO2R′ + OP(C6H5)3 + R2N2H2 Carboxylic acids can be esterified using diazomethane: RCO2H + CH2N2 → RCO2CH3 + N2 Using diazomethane, mixtures of carboxylic acids can be converted to their methyl esters in near quantitative yields, e.g., for analysis by gas chromatography. The method is useful in specialized organic synthetic operations but is considered too hazardous and expensive for large-scale applications. Esterification of carboxylic acids with epoxides Carboxylic acids are esterified by treatment with epoxides, giving β-hydroxyesters: RCO2H + RCHCH2O → RCO2CH2CH(OH)R This reaction is employed in the production of vinyl ester resins from acrylic acid. Alcoholysis of acyl chlorides and acid anhydrides Alcohols react with acyl chlorides and acid anhydrides to give esters: RCOCl + R′OH → RCO2R′ + HCl (RCO)2O + R′OH → RCO2R′ + RCO2H The reactions are irreversible, simplifying work-up. Since acyl chlorides and acid anhydrides also react with water, anhydrous conditions are preferred. The analogous acylations of amines to give amides are less sensitive because amines are stronger nucleophiles and react more rapidly than does water. This method is employed only for laboratory-scale procedures, as it is expensive. Alkylation of carboxylate salts Although not widely employed for esterifications, salts of carboxylate anions can be alkylated with alkyl halides to give esters. In the case that an alkyl chloride is used, an iodide salt can catalyze the reaction (Finkelstein reaction). The carboxylate salt is often generated in situ. In difficult cases, the silver carboxylate may be used, since the silver ion coordinates to the halide aiding its departure and improving the reaction rate. This reaction can suffer from anion availability problems and, therefore, can benefit from the addition of phase transfer catalysts or highly polar aprotic solvents such as DMF. Transesterification Transesterification, which involves changing one ester into another one, is widely practiced: RCO2R′ + CH3OH → RCO2CH3 + R′OH Like hydrolysis, transesterification is catalysed by acids and bases. The reaction is widely used for degrading triglycerides, e.g. in the production of fatty acid esters and alcohols. Poly(ethylene terephthalate) is produced by the transesterification of dimethyl terephthalate and ethylene glycol: (C6H4)(CO2CH3)2 + 2 C2H4(OH)2 → {(C6H4)(CO2)2(C2H4)}n + 2 CH3OH A subset of transesterification is the alcoholysis of diketene. This reaction affords acetoacetic esters (β-ketoesters). (CH2CO)2 + ROH → CH3C(O)CH2CO2R Carbonylation Alkenes undergo "hydroesterification" in the presence of metal carbonyl catalysts. Esters of propanoic acid are produced commercially by this method: C2H4 + ROH + CO → C2H5CO2R A preparation of methyl propionate is one illustrative example. C2H4 + CO + MeOH → MeO2CCH2CH3 The carbonylation of methanol yields methyl formate, which is the main commercial source of formic acid.
The reaction is catalyzed by sodium methoxide: CH3OH + CO → CH3O2CH Addition of carboxylic acids to alkenes and alkynes In the presence of palladium-based catalysts, ethylene, acetic acid, and oxygen react to give vinyl acetate: C2H4 + CH3CO2H + O2 → C2H3O2CCH3 + H2O Direct routes to this same ester are not possible because vinyl alcohol is unstable. Carboxylic acids also add across alkynes to give the same products. Silicotungstic acid is used to manufacture ethyl acetate by the alkylation of acetic acid by ethylene: C2H4 + CH3CO2H → CH3CO2C2H5 From aldehydes The Tishchenko reaction involves disproportionation of an aldehyde in the presence of an anhydrous base to give an ester. Catalysts are aluminium alkoxides or sodium alkoxides. Benzaldehyde reacts with sodium benzyloxide (generated from sodium and benzyl alcohol) to generate benzyl benzoate. The method is used in the production of ethyl acetate from acetaldehyde. Other methods Favorskii rearrangement of α-haloketones
worms of the genus Riftia, which get nutrition from their endosymbiotic bacteria. The most common examples of obligate endosymbioses are mitochondria and chloroplasts. Some human parasites, e.g. Wuchereria bancrofti and Mansonella perstans, thrive in their intermediate insect hosts because of an obligate endosymbiosis with Wolbachia spp. They can both be eliminated from hosts by treatments that target this bacterium. However, not all endosymbioses are obligate and some endosymbioses can be harmful to either of the organisms involved. Two major types of organelle in eukaryotic cells, mitochondria and plastids such as chloroplasts, are considered to be bacterial endosymbionts. This process is commonly referred to as symbiogenesis. Symbiogenesis and organelles Symbiogenesis explains the origins of eukaryotes, whose cells contain two major kinds of organelle: mitochondria and chloroplasts. The theory proposes that these organelles evolved from certain types of bacteria that eukaryotic cells engulfed through phagocytosis. These cells and the bacteria trapped inside them entered an endosymbiotic relationship, meaning that the bacteria took up residence and began living exclusively within the eukaryotic cells. Numerous insect species have endosymbionts at different stages of symbiogenesis. A common theme of symbiogenesis involves the reduction of the genome to only essential genes for the host and symbiont collective genome. A remarkable example of this is the fractionation of the Hodgkinia genome of Magicicada cicadas. Because the cicada life cycle takes years underground, natural selection on endosymbiont populations is relaxed for many bacterial generations. This allows the symbiont genomes to diversify within the host for years with only punctuated periods of selection when the cicadas reproduce. As a result, the ancestral Hodgkinia genome has split into three groups of primary endosymbiont, each encoding only a fraction of the essential genes for the symbiosis. The host now requires all three sub-groups of symbiont, each with degraded genomes lacking most essential genes for bacterial viability. Bacterial endosymbionts of invertebrates The best-studied examples of endosymbiosis are known from invertebrates. These symbioses affect organisms with global impact, including Symbiodinium of corals, or Wolbachia of insects. Many insect agricultural pests and human disease vectors have intimate relationships with primary endosymbionts. Endosymbionts of insects Scientists classify insect endosymbionts in two broad categories, 'Primary' and 'Secondary'. Primary endosymbionts (sometimes referred to as P-endosymbionts) have been associated with their insect hosts for many millions of years (from 10 to several hundred million years in some cases). They form obligate associations (see below), and display cospeciation with their insect hosts. Secondary endosymbionts exhibit a more recently developed association, are sometimes horizontally transferred between hosts, live in the hemolymph of the insects (not specialized bacteriocytes, see below), and are not obligate. Primary endosymbionts Among primary endosymbionts of insects, the best-studied are the pea aphid (Acyrthosiphon pisum) and its endosymbiont Buchnera sp. APS, the tsetse fly Glossina morsitans morsitans and its endosymbiont Wigglesworthia glossinidia brevipalpis and the endosymbiotic protists in lower termites. 
As with endosymbiosis in other insects, the symbiosis is obligate in that neither the bacteria nor the insect is viable without the other. Scientists have been unable to cultivate the bacteria in lab conditions outside of the insect. With special nutritionally-enhanced diets, the insects can survive, but are unhealthy, and at best survive only a few generations. In some insect groups, these endosymbionts live in specialized insect cells called bacteriocytes (also called mycetocytes), and are maternally-transmitted, i.e. the mother transmits her endosymbionts to her offspring. In some cases, the bacteria are transmitted in the egg, as in Buchnera; in others like Wigglesworthia, they are transmitted via milk to the developing insect embryo. In termites, the endosymbionts reside within the hindguts and are transmitted through trophallaxis among colony members. The primary endosymbionts are thought to help the host either by providing nutrients that the host cannot obtain itself or by metabolizing insect waste products into safer forms. For example, the putative primary role of Buchnera is to synthesize essential amino acids that the aphid cannot acquire from its natural diet of plant sap. Likewise, the primary role of Wigglesworthia, it is presumed, is to synthesize vitamins that the tsetse fly does not get from the blood that it eats. In lower termites, the endosymbiotic protists play a major role in the digestion of lignocellulosic materials that constitute a bulk of the termites' diet. Bacteria benefit from the reduced exposure to predators and competition from other bacterial species, the ample supply of nutrients and relative environmental stability inside the host. Genome sequencing reveals that obligate bacterial endosymbionts of insects have among the smallest of known bacterial genomes and have lost many genes that are commonly found in closely related bacteria. Several theories have been put forth to explain the loss of genes. It is presumed that some of these genes are not needed in the environment of the host insect cell. A complementary theory suggests that the relatively small numbers of bacteria inside each insect decrease the efficiency of natural selection in 'purging' deleterious mutations and small mutations from the population, resulting in a loss of genes over many millions of years. Research in which a parallel phylogeny of bacteria and insects was inferred supports the belief that the primary endosymbionts are transferred only vertically (i.e., from the mother), and not horizontally (i.e., by escaping the host and entering a new host). Attacking obligate bacterial endosymbionts may present a way to control their insect hosts, many of which are pests or carriers of human disease. For example, aphids are crop pests and the tsetse fly carries the organism Trypanosoma brucei that causes African sleeping sickness. Other motivations for their study involve understanding the origins of symbioses in general, as a proxy for understanding e.g. how chloroplasts or mitochondria came to be obligate symbionts of eukaryotes or plants. Secondary endosymbionts The pea aphid (Acyrthosiphon pisum) is known to contain at least three secondary endosymbionts, Hamiltonella defensa, Regiella insecticola, and Serratia symbiotica. Hamiltonella defensa defends its aphid host from parasitoid wasps. This defensive symbiosis improves the survival of aphids, which have lost some elements of the insect immune response. 
One of the best-understood defensive symbionts is the spiral bacteria Spiroplasma poulsonii. Spiroplasma sp. can be reproductive manipulators, but also defensive symbionts of Drosophila flies. In Drosophila neotestacea, S. poulsonii has spread across North America owing to its ability to defend its fly host against nematode parasites. This defence is mediated by toxins called "ribosome-inactivating proteins" that attack the molecular machinery of invading parasites. These Spiroplasma toxins represent one of the first examples of a defensive symbiosis with a mechanistic understanding for defensive symbiosis between an insect endosymbiont and its host. Sodalis glossinidius is a secondary endosymbiont of tsetse flies that lives inter- and intracellularly in various host tissues, including the midgut and hemolymph. Phylogenetic studies have not indicated a correlation between evolution of Sodalis and tsetse. Unlike tsetse's primary symbiont Wigglesworthia, though, Sodalis has been cultured in vitro. Many other insects have secondary endosymbionts not reviewed here. Endosymbionts of ants Bacteriocyte-associated symbionts The most well studied endosymbiont of ants are bacteria of the genus Blochmannia, which are the primary endosymbiont of Camponotus ants. In 2018 a new ant-associated symbiont was discovered in Cardiocondyla ants. This symbiont was named Candidatus Westeberhardia Cardiocondylae and it is also believed to be a primary symbiont. Endosymbionts of marine invertebrates Extracellular endosymbionts are also represented in all four extant classes of Echinodermata (Crinoidea, Ophiuroidea, Echinoidea, and Holothuroidea). Little is known of the nature of the association (mode of infection, transmission, metabolic requirements, etc.) but phylogenetic analysis indicates that these symbionts belong to the alpha group of the class Proteobacteria, relating them to Rhizobium and Thiobacillus. Other studies indicate that these subcuticular bacteria may be both abundant within their hosts and widely distributed among the Echinoderms in general. Some marine oligochaeta (e.g., Olavius algarvensis and Inanidrillus spp.) have
obligate extracellular endosymbionts that fill the entire body of their host. These marine worms are nutritionally dependent on their symbiotic chemoautotrophic bacteria, lacking any digestive or excretory system (no gut, mouth, or nephridia). The sea slug Elysia chlorotica lives in an endosymbiotic relationship with the alga Vaucheria litorea, and the jellyfish Mastigias have a similar relationship with an alga. Dinoflagellate endosymbionts Dinoflagellate endosymbionts of the genus Symbiodinium, commonly known as zooxanthellae, are found in corals, mollusks (esp. giant clams, the Tridacna), sponges, and foraminifera. These endosymbionts drive the formation of coral reefs by capturing sunlight and providing their hosts with energy for carbonate deposition. Previously thought to be a single species, molecular phylogenetic evidence over the past couple decades has shown there to be great diversity in Symbiodinium. In some cases, there is specificity between host and Symbiodinium clade. More often, however, there is an ecological distribution of Symbiodinium, the symbionts switching between hosts with apparent ease. When reefs become environmentally stressed, this distribution of symbionts is related to the observed pattern of coral bleaching and recovery. Thus, the distribution of Symbiodinium on coral reefs and its role in coral bleaching presents one of the most complex and interesting current problems in reef ecology. Endosymbionts of phytoplankton In marine environments, bacterial endosymbionts have more recently been discovered. These endosymbiotic relationships are especially prevalent in oligotrophic or nutrient-poor regions of the ocean like that of the North Atlantic. In these oligotrophic waters, cell growth of larger phytoplankton like that of diatoms is limited by low nitrate concentrations.
Endosymbiotic bacteria fix nitrogen for their diatom hosts and in turn receive organic carbon from photosynthesis. These symbioses play an important role in global carbon cycling in oligotrophic regions. One known symbiosis between the diatom Hemiaulus spp. and the cyanobacterium Richelia intracellularis has been found in the North Atlantic, Mediterranean, and Pacific Ocean. The Richelia endosymbiont is found within the diatom frustule of Hemiaulus spp., and has a reduced genome, likely losing genes related to pathways the host now provides. Research by Foster et al. (2011) measured nitrogen fixation by the cyanobacterial symbiont Richelia intracellularis well above intracellular requirements, and found the cyanobacterium was likely fixing excess nitrogen for Hemiaulus host cells. Additionally, both host and symbiont cell growth were much greater than free-living Richelia intracellularis or symbiont-free Hemiaulus spp. The Hemiaulus-Richelia symbiosis is not obligatory especially
of the natural exponential as , it is computationally and conceptually convenient to reduce the study of exponential functions to this particular one. The natural exponential is hence denoted by or The former notation is commonly used for simpler exponents, while the latter is preferred when the exponent is a complicated expression. For real numbers and , a function of the form is also an exponential function, since it can be rewritten as Formal definition The real exponential function can be characterized in a variety of equivalent ways. It is commonly defined by the following power series: Since the radius of convergence of this power series is infinite, this definition is, in fact, applicable to all complex numbers (see for the extension of to the complex plane). The constant can then be defined as The term-by-term differentiation of this power series reveals that for all real , leading to another common characterization of as the unique solution of the differential equation satisfying the initial condition Based on this characterization, the chain rule shows that its inverse function, the natural logarithm, satisfies for or This relationship leads to a less common definition of the real exponential function as the solution to the equation By way of the binomial theorem and the power series definition, the exponential function can also be defined as the following limit: It can be shown that every continuous, nonzero solution of the functional equation is an exponential function, with Overview The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this observation that led Jacob Bernoulli in 1683 to the number now known as . Later, in 1697, Johann Bernoulli studied the calculus of the exponential function. If a principal amount of 1 earns interest at an annual rate of compounded monthly, then the interest earned each month is times the current value, so each month the total value is multiplied by , and the value at the end of the year is . If instead interest is compounded daily, this becomes . Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function, first given by Leonhard Euler. This is one of a number of characterizations of the exponential function; others involve series or differential equations. From any of these definitions it can be shown that the exponential function obeys the basic exponentiation identity, which justifies the notation for . The derivative (rate of change) of the exponential function is the exponential function itself. More generally, a function with a rate of change proportional to the function itself (rather than equal to it) is expressible in terms of the exponential function. This function property leads to exponential growth or exponential decay. The exponential function extends to an entire function on the complex plane. Euler's formula relates its values at purely imaginary arguments to trigonometric functions. The exponential function also has analogues for which the argument is a matrix, or even an element of a Banach algebra or a Lie algebra. Derivatives and differential equations The importance of the exponential function in mathematics and the sciences stems mainly from its property as the unique function which is equal to its derivative and is equal to 1 when . 
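These characterizations agree numerically. The short sketch below, in plain Python using only the standard math module (the helper names are illustrative), compares a truncated power series, the compound-interest limit (1 + x/n)^n, and the library exponential at a sample point.

    import math

    def exp_series(x, terms=30):
        # truncated power series: sum of x**k / k! for k = 0 .. terms-1
        total, term = 0.0, 1.0
        for k in range(terms):
            total += term
            term *= x / (k + 1)
        return total

    def exp_limit(x, n=10**6):
        # compound-interest limit: (1 + x/n)**n for large n
        return (1.0 + x / n) ** n

    x = 1.0
    print(exp_series(x))   # about 2.718281828459045
    print(exp_limit(x))    # about 2.7182805, converging slowly as n grows
    print(math.exp(x))     # 2.718281828459045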
That is, Functions of the form for constant are the only functions that are equal to their derivative (by the Picard–Lindelöf theorem). Other ways of saying the same thing include: The slope of the graph at any point is the height of the function at that point. The rate of increase of the function at is equal to the value of the function at . The function solves the differential equation . is a fixed point of derivative as a functional. If a variable's growth or decay rate is proportional to its size—as is the case in unlimited population growth (see Malthusian catastrophe), continuously compounded interest, or radioactive decay—then the variable can be written as a constant times an exponential function of time. Explicitly for any real constant , a function satisfies if and only if for some constant . The constant k is called the decay constant, disintegration constant, rate constant, or transformation constant. Furthermore, for any differentiable function , we find, by the chain rule: Continued fractions for A continued fraction for can be obtained via an identity of Euler: The following generalized continued fraction for converges more quickly: or, by applying the substitution : with a special case for : This formula also converges, though more slowly, for . For example: Complex plane As in the real case, the exponential function can be defined on the complex plane in several equivalent forms. The most common definition of the complex exponential function parallels the power series definition for real arguments, where the real variable is replaced by a complex one: Alternatively, the complex exponential function may defined by modelling the limit definition for real arguments, but with the real variable replaced by a complex one: For the power series definition, term-wise multiplication of two copies of this power series in the Cauchy sense, permitted by Mertens' theorem, shows that the defining multiplicative property of exponential functions continues to hold for all complex arguments: The definition of the complex exponential function in turn leads to the appropriate definitions extending the trigonometric functions to complex arguments. In particular, when ( real), the series definition yields the expansion In this expansion, the rearrangement of the terms into real and imaginary parts is justified
by the absolute convergence of the series. The real and imaginary parts of the above expression in fact correspond to the series expansions of and , respectively. This correspondence provides motivation for cosine and sine for all complex arguments in terms of and the equivalent power series: The functions , , and so defined have infinite radii of convergence by the ratio test and are therefore entire functions (that is, holomorphic on ).
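The correspondence between the complex exponential and the trigonometric series can be checked numerically. The following sketch, in plain Python with the standard cmath module (variable names are illustrative), confirms at a sample angle that exp(iθ) has real part cos θ and imaginary part sin θ, and that adding 2πi to the argument leaves the value unchanged.

    import cmath, math

    theta = 0.75                              # arbitrary sample angle in radians
    z = cmath.exp(1j * theta)
    print(z)                                  # approx (0.731689 + 0.681639j)
    print(math.cos(theta), math.sin(theta))   # same real and imaginary parts
    print(cmath.exp(1j * theta + 2j * math.pi))  # same value again, up to rounding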
The range of the exponential function is , while the ranges of the complex sine and cosine functions are both in its entirety, in accord with Picard's theorem, which asserts that the range of a nonconstant entire function is either all of , or excluding one lacunary value. These definitions for the exponential and trigonometric functions lead trivially to Euler's formula: We could alternatively define the complex exponential function based on this relationship. If , where and are both real, then we could define its exponential as where , , and on the right-hand side of the definition sign are to be interpreted as functions of a real variable, previously defined by other means. For , the relationship holds, so that for real and maps the real line (mod ) to the unit circle in the complex plane. Moreover, going from to , the curve defined by traces a segment of the unit circle of length starting from in the complex plane and going counterclockwise. Based on these observations and the fact that the measure of an angle in radians is the arc length on the unit circle subtended by the angle, it is easy to see that, restricted to real arguments, the sine and cosine functions as defined above coincide with the sine and cosine functions as introduced in elementary mathematics via geometric notions. The complex exponential function is periodic with period and holds for all . When its domain is extended from the real line to the complex plane, the exponential function retains the following properties: Extending the natural logarithm to complex arguments yields the complex logarithm , which is a multivalued function. We can then define a more general exponentiation: for all complex numbers and . This is also a multivalued function, even when is real. This distinction is problematic, as the multivalued functions and are easily confused with their single-valued equivalents when substituting a real number for . The rule about multiplying exponents for the case of positive real numbers must be modified in a multivalued context: See failure of power and logarithm identities for more about problems with combining powers. The exponential function maps any line in the complex plane to a logarithmic spiral in the complex plane with the center at the origin. Two special cases exist: when the original line is parallel to the real axis, the resulting spiral never closes in on itself; when the original line is parallel to the imaginary axis, the resulting spiral is a circle of some radius. Considering the complex exponential function as a function involving four real variables: the graph of the exponential function is a two-dimensional surface curving through four dimensions. Starting with a color-coded portion of the domain, the following are depictions of the graph as variously projected into two or three dimensions. The second image shows how the domain complex plane is mapped into the range complex plane: zero is mapped to 1 the real axis is mapped to the positive real axis the imaginary axis is wrapped around the unit circle at a constant angular rate values with negative real parts are mapped inside the unit circle values with positive real parts are mapped outside of the unit circle values with a constant real part are mapped to circles centered at zero values with a constant imaginary part are mapped to rays extending from zero The third and fourth images show how the graph in the second image extends into one of the other two dimensions not shown in the second image. 
The third image shows the graph extended along the real axis. It shows the graph is a surface of revolution about the axis of the graph of the real exponential function, producing a horn or funnel shape. The fourth image shows the graph extended along the
forces attacked the Venetians in the Morea. To Vienna it was clear that the Turks intended to attack Hungary and undo the whole Karlowitz settlement of 1699. After the Porte rejected an offer of mediation in April 1716, Charles VI despatched Eugene to Hungary to lead his relatively small but professional army. Of all Eugene's wars this was the one in which he exercised most direct control; it was also a war which, for the most part, Austria fought and won on her own.Eugene left Vienna in early June 1716 with a field army of between 80,000 and 90,000 men. By early August 1716 the Ottoman Turks, some 200,000 men under the sultan's son-in-law, the Grand Vizier Damat Ali Pasha, were marching from Belgrade towards Eugene's position west of the fortress of Petrovaradin on the north bank of the Danube. The Grand Vizier had intended to seize the fortress; but Eugene gave him no chance to do so. After resisting calls for caution and forgoing a council of war, the Prince decided to attack immediately on the morning of 5 August with approximately 70,000 men. The Turkish janissaries had some initial success, but after an Imperial cavalry attack on their flank, Ali Pasha's forces fell into confusion. Although the Imperials lost almost 5,000 dead or wounded, the Turks, who retreated in disorder to Belgrade, seem to have lost double that amount, including the Grand Vizier himself who had entered the mêlée and subsequently died of his wounds. Eugene proceeded to take the Banat fortress of Timișoara (Temeswar in German) in mid-October 1716 (thus ending 164 years of Turkish rule), before turning his attention to the next campaign and to what he considered the main goal of the war, Belgrade. Situated at the confluence of the Rivers Danube and Sava, Belgrade held a garrison of 30,000 men under Serasker Mustapha Pasha. Imperial troops besieged the place in mid-June 1717, and by the end of July large parts of the city had been destroyed by artillery fire. By the first days of August, however, a huge Turkish field army (150,000–200,000 strong), under the new Grand Vizier Hacı Halil Pasha had arrived on the plateau east of the city to relieve the garrison. News spread through Europe of Eugene's imminent destruction; but he had no intention of lifting the siege. With his men suffering from dysentery, and continuous bombardment from the plateau, Eugene, aware that a decisive victory alone could extricate his army, decided to attack the relief force. On the morning of 16 August, 40,000 Imperial troops marched through the fog, caught the Turks unaware, and routed Halil Pasha's army; a week later Belgrade surrendered, effectively bringing an end to the war. The victory was the crowning point of Eugene's military career and had confirmed him as the leading European general. His ability to snatch victory at the moment of defeat had shown the Prince at his best. The principal objectives of the war had been achieved: the task Eugene had begun at Zenta was complete, and the Karlowitz settlement secured. By the terms of the Treaty of Passarowitz, signed on 21 July 1718, the Turks surrendered the Banat of Temeswar, along with Belgrade and most of Serbia, although they regained the Morea from the Venetians. The war had dispelled the immediate Turkish threat to Hungary and was a triumph for the Empire and for Eugene personally. Quadruple Alliance While Eugene fought the Turks in the east, unresolved issues following the Utrecht/Rastatt settlements led to hostilities between the Emperor and Philip V of Spain in the west. 
Charles VI had refused to recognise Philip V as King of Spain, a title which he himself claimed; in return, Philip V had refused to renounce his claims to Naples, Milan, and the Netherlands, all of which had transferred to the House of Austria following the Spanish Succession war. Philip V was roused by his influential wife, Elisabeth Farnese, daughter of the Hereditary Prince of Parma, who personally held dynastic claims in the name of her son, Don Charles, to the duchies of Tuscany, Parma and Piacenza. Representatives from a newly formed Anglo-French alliance—who were desirous of European peace for their own dynastic securities and trade opportunities—called on both parties to recognise each other's sovereignty. Yet Philip V remained intractable, and on 22 August 1717 his chief minister, Alberoni, effected the invasion of Austrian Sardinia in what seemed like the beginning of the reconquest of Spain's former Italian empire. Eugene returned to Vienna from his recent victory at Belgrade (before the conclusion of the Turkish war) determined to prevent an escalation of the conflict, complaining that, "two wars cannot be waged with one army"; only reluctantly did the Prince release some troops from the Balkans for the Italian campaign. Rejecting all diplomatic overtures Philip V unleashed another assault in June 1718, this time against Savoyard Sicily as a preliminary to attacking the Italian mainland. Realising that only the British fleet could prevent further Spanish landings, and that pro-Spanish groups in France might push the regent, Duke of Orléans, into war against Austria, Charles VI had no option but to sign the Quadruple Alliance on 2 August 1718, and formally renounce his claim to Spain. Despite the Spanish fleet's destruction off Cape Passaro, Philip V and Elisabeth remained resolute, and rejected the treaty. Although Eugene could have gone south after the conclusion of the Turkish war, he chose instead to conduct operations from Vienna; but Austria's military effort in Sicily proved derisory, and Eugene's chosen commanders, Zum Jungen, and later Count Mercy, performed poorly. It was only from pressure exerted by the French army advancing into the Basque provinces of northern Spain in April 1719, and the British Navy's attacks on the Spanish fleet and shipping, that compelled Philip V and Elisabeth to dismiss Alberoni and join the Quadruple Alliance on 25 January 1720. Nevertheless, the Spanish attacks had strained Charles VI's government, causing tension between the Emperor and his Spanish Council on the one hand, and the conference, headed by Eugene, on the other. Despite Charles VI's own personal ambitions in the Mediterranean it was clear to the Emperor that Eugene had put the safeguarding of his conquests in Hungary before everything else, and that military failure in Sicily also had to rest on Eugene. Consequently, the Prince's influence over the Emperor declined considerably. Later life (1721–36) Governor-General of the Southern Netherlands Eugene had become governor of the Southern Netherlands—then the Austrian Netherlands—in June 1716, but he was an absent ruler, directing policy from Vienna through his chosen representative the Marquis of Prié. Prié proved unpopular with the local population and the guilds who, following the Barrier Treaty of 1715, were obliged to meet the financial demands of the administration and the Dutch barrier garrisons; with Eugene's backing and encouragement, civil disturbances in Antwerp and Brussels were forcibly suppressed. 
After displeasing the Emperor over his initial opposition to the formation of the Ostend Company, Prié also lost the support of the native nobility from within his own council of state in Brussels, particularly from the Marquis de Mérode-Westerloo. One of Eugene's former favourites, General Bonneval, also joined the noblemen in opposition to Prié, further undermining the Prince. When Prié's position became untenable, Eugene felt compelled to resign his post as governor of the Southern Netherlands on 16 November 1724. As compensation, Charles VI conferred on him the honorary position as vicar-general of Italy, worth 140,000 gulden a year, and an estate at Siebenbrunn in Lower Austria said to be worth double that amount. But his resignation distressed him, and to compound his concerns Eugene caught a severe bout of influenza that Christmas, marking the beginning of permanent bronchitis and acute infections every winter for the remaining twelve years of his life. 'Cold war' The 1720s saw rapidly changing alliances between the European powers and almost constant diplomatic confrontation, largely over unsolved issues regarding the Quadruple Alliance. The Emperor and the Spanish King continued to use each other's titles, and Charles VI still refused to remove the remaining legal obstacles to Don Charles' eventual succession to the duchies of Parma and Tuscany. Yet in a surprise move Spain and Austria moved closer with the signing of the Treaty of Vienna in April/May 1725. In response Britain, France, and Prussia joined together in the Alliance of Hanover to counter the danger to Europe of an Austro-Spanish hegemony. For the next three years there was the continual threat of war between the Hanover Treaty powers and the Austro-Spanish bloc. From 1726 Eugene gradually began to regain his political influence. With his many contacts throughout Europe Eugene, backed by Gundaker Starhemberg and Count Schönborn, the Imperial vice-chancellor, managed to secure powerful allies and strengthen the Emperor's position—his skill in managing the vast secret diplomatic network over the coming years was the main reason why Charles VI once again came to depend upon him. In August 1726 Russia acceded to the Austro-Spanish alliance, and in October Frederick William of Prussia followed suit by defecting from the Allies with the signing of a mutual defensive treaty with the Emperor. Despite the conclusion of the brief Anglo-Spanish conflict, war between the European powers persisted throughout 1727–28. In 1729 Elisabeth Farnese abandoned the Austro-Spanish alliance. Realizing that Charles VI could not be drawn into the marriage pact she wanted, Elisabeth concluded that the best way to secure her son's succession to Parma and Tuscany now lay with Britain and France. To Eugene it was 'an event that which is seldom to be found in history'. Following the Prince's determined lead to resist all pressure, Charles VI sent troops into Italy to prevent the entry of Spanish garrisons into the contested duchies. By the beginning of 1730 Eugene, who had remained bellicose throughout the whole period, was again in control of Austrian policy. In Britain there now emerged a new political re-alignment as the Anglo-French entente became increasingly defunct. Believing that a resurgent France now posed the greatest danger to their security British ministers, headed by Robert Walpole, moved to reform the Anglo-Austrian alliance, leading to the signing of the Second Treaty of Vienna on 16 March 1731. 
Eugene had been the Austrian minister most responsible for the alliance, believing once again it would provide security against France and Spain. The treaty compelled Charles VI to sacrifice the Ostend Company and accept, unequivocally, the accession of Don Charles to Parma and Tuscany. In return King George II as King of Great Britain and Elector of Hanover guaranteed the Pragmatic Sanction, the device to secure the rights of the Emperor's daughter, Maria Theresa, to the entire Habsburg inheritance. It was largely through Eugene's diplomacy that in January 1732 the Imperial diet also guaranteed the Pragmatic Sanction which, together with the Treaties with Britain, Russia, and Prussia, marked the culmination of the Prince's diplomacy. But the Treaty of Vienna had infuriated the court of King Louis XV: the French had been ignored and the Pragmatic Sanction guaranteed, thus increasing Habsburg influence and confirming Austria's vast territorial size. The Emperor also intended Maria Theresa to marry Francis Stephen of Lorraine which would present an unacceptable threat on France's border. By the beginning of 1733 the French army was ready for war: all that was needed was the excuse. War of the Polish Succession In 1733 the Polish King and Elector of Saxony, Augustus the Strong, died. There were two candidates for his successor: first, Stanisław Leszczyński, the father-in-law of Louis XV; second, the Elector of Saxony's son, Augustus, supported by Russia, Austria, and Prussia. The Polish succession had afforded Louis XV's chief minister, Fleury, the opportunity to attack Austria and take Lorraine from Francis Stephen. In order to gain Spanish support France backed the succession of Elisabeth Farnese's sons to further Italian lands. Eugene entered the War of the Polish Succession as President of the Imperial War Council and commander-in-chief of the army, but he was severely handicapped by the quality of his troops and the shortage of funds; now in his seventies, the Prince was also burdened by rapidly declining physical and mental powers. France declared war on Austria on 10 October 1733, but without the funds from the Maritime Powers—who, despite the Vienna treaty, remained neutral throughout the war—Austria could not hire the necessary troops to wage an offensive campaign. "The danger to the monarchy," wrote Eugene to the Emperor in October, "cannot be exaggerated". By the end of the year Franco-Spanish forces had seized Lorraine and Milan; by early 1734 Spanish troops had taken Sicily. Eugene took command on the Rhine in April 1734, but vastly outnumbered he was forced onto the defensive. In June Eugene set out to relieve Philippsburg, yet his former drive and energy was now gone. Accompanying Eugene was a young Frederick the Great, sent by his father to learn the art of war. Frederick gained considerable knowledge from Eugene, recalling in later life his great debt to his Austrian mentor, but the Prussian prince was aghast at Eugene's condition, writing later, "his body was still there but his soul had gone." Eugene conducted another cautious campaign in 1735, once again pursuing a sensible defensive strategy on limited resources; but his short-term memory was by now practically non-existent, and his political influence disappeared completely—Gundaker Starhemberg and Johann Christoph von Bartenstein now dominated the conference in his place. 
Fortunately for Charles VI, Fleury was determined to limit the scope of the war, and in October 1735 he granted generous peace preliminaries to the Emperor.

Later years and death
Eugene returned to Vienna from the War of the Polish Succession in October 1735, weak and feeble; when Maria Theresa and Francis Stephen married in February 1736, Eugene was too ill to attend. After playing cards at Countess Batthyány's until nine on the evening of 20 April, he returned home to the Stadtpalais, where his attendant offered him his prescribed medicine, which Eugene declined. When his servants arrived to wake him the next morning, 21 April 1736, they found that Prince Eugene had died quietly during the night. It has been said that on the same morning the great lion in his menagerie was also found dead. Eugene's heart was buried with the ashes of his ancestors in Turin, in the mausoleum of the Superga. His remains were carried in a long procession to St. Stephen's Cathedral, where his embalmed body was buried in the Kreuzkapelle. It is said that the emperor himself attended as a mourner without anybody's knowledge. The Prince's niece Maria Anna Victoria, whom he had never met, inherited Eugene's immense possessions. Within a few years she sold off the palaces, the country estates and the art collection of a man who had become one of the wealthiest in Europe, after arriving in Vienna as a refugee with empty pockets.

Personal life
In what has been interpreted as a sign that he considered himself French by birth, Italian by dynastic extraction, and German-Austrian by allegiance, Eugene of Savoy signed himself using the trilingual form Eugenio (in Italian) Von (in German) Savoye (in French). EVS was sometimes used as an abbreviation. Eugene never married; he was reported to have said that a woman was a hindrance in war and that a soldier should never marry. Winston Churchill, in his biography of the 1st Duke of Marlborough, described Eugene as a misogynist; because of this he was called "Mars without Venus". During the last 20 years of his life Eugène had a relationship with one woman, the Hungarian Countess Eleonore Batthyány-Strattmann, widowed daughter of Theodor von Strattman. Much of their acquaintance remains speculative, since Eugene left no personal papers: only letters of war, diplomacy and politics. Eugène and Eleonore were constant companions, meeting for dinner, receptions and card games almost every day until his death; although they lived apart, most foreign diplomats assumed that Eleonore was his long-time mistress. It is not known precisely when their relationship began, but his acquisition of a property in Hungary after the Battle of Zenta, near Rechnitz Castle, made them neighbours. In the years immediately following the War of the Spanish Succession she began to be mentioned regularly in diplomatic correspondence as "Eugen's Egeria", and within a few years she was referred to as his constant companion and his mistress. When asked if she and the Prince would marry, Countess Batthyány replied: "I love him too well for that; I would rather have a bad reputation than deprive him of his". In spite of the lack of clear evidence, rumours that he was homosexual dated back to his teenage years. The origin of those rumours was Elizabeth Charlotte, Duchess of Orléans, the famous Versailles gossipmonger known as "Madame".
The Duchess wrote about young Eugene's alleged antics with lackeys and pages and that he was refused an ecclesiastical benefice due to his "depravity". Eugene's biographer, historian Helmut Oehler, reported the Duchess's remarks but credited them to Elizabeth's personal resentment against the Prince. Eugene, aware of the malicious rumours, mocked them in his memoirs, calling them "the invented anecdotes from the gallery of Versailles". Whether or not Eugene had homosexual relationships in his youth, the Duchess's remarks about him were made years later, and only after Eugene had severely humiliated the armies of her brother-in-law, the King of France. From the time Eugene left France at the age of nineteen until his death at the age of seventy-two, there were no further claims of homosexuality. Being one of the richest and most celebrated men of his age certainly created enmity: jealousy and spite pursued Eugene from the battlefields to Vienna. His old subordinate Guido Starhemberg in particular was an incessant and rancorous detractor of Eugene's fame, and became known at the court of Vienna, according to Montesquieu, as Eugene's main rival. In a letter to a friend, Johann Matthias von der Schulenburg, another bitter rival, who had previously served under him during the War of the Spanish Succession but whose ambition to obtain command in the Austrian army had been foiled by Eugene, wrote that the prince "has no idea but to fight whenever the opportunity offers; he thinks that nothing equals the name of Imperialists, before whom all should bend the knee. He loves la petite débauche et la p---- above all things." That last sentence, written in French with a word intentionally censored, prompted speculation by some. For the writer Curt Riess it was "a testament to sodomy"; according to Eugene's foremost biographer, the German historian Max Braubach, "la p..." meant fornication, i.e. whoring. While Governor-General of the Southern Netherlands, Eugene was known to be a regular at an exclusive brothel on Amsterdam's Prinsengracht, whose keeper was known as Madame Therese. Eugene once famously brought the English consul in Amsterdam with him. A drawing by Cornelis Troost, kept at the Rijksmuseum, the national museum of the Netherlands, depicts a scene in which Prince Eugene had "the 'available' women parade in review, just as he did his own troops". According to the museum, Troost based his drawing on an anecdote circulating at the time. Eugene's other friends, such as the papal nuncio Passionei, who delivered Prince Eugene's funeral oration, made up for the family he lacked. For his only surviving nephew, Emmanuel, the son of his brother Louis Thomas, Eugene arranged a marriage with one of the daughters of Prince Liechtenstein, but Emmanuel died of smallpox in 1729. With the death of Emmanuel's son in 1734, no close male relatives remained to succeed the Prince. His closest relative, therefore, was Princess Maria Anna Victoria of Savoy, the unmarried daughter of his eldest brother Louis Thomas, count of Soissons, a niece whom Eugene had never met and had made no effort to meet.

Patron of the arts
Eugene's rewards for his victories, his share of booty, his revenues from his abbeys in Savoy, and a steady income from his Imperial offices and governorships enabled him to contribute to the landscape of Baroque architecture. Eugene spent most of his life in Vienna at his Winter Palace, the Stadtpalais, built by Fischer von Erlach.
The palace acted as his official residence and home, but for reasons that remain speculative the Prince's association with Fischer ended before the building was complete, favouring instead Johann Lukas von Hildebrandt as his chief architect. Eugene first employed Hildebrandt to finish the Stadtpalais before commissioning him to prepare plans for a palace (Savoy Castle) on his Danubian island at Ráckeve. Begun in 1701 the single-story building took twenty years to complete; yet, probably because of the Rákóczi revolt, the Prince seems to have visited it only once—after the siege of Belgrade in 1717. Of more importance was the grandiose complex of the two Belvedere palaces in Vienna. The single-storey Lower Belvedere, with its exotic gardens and zoo, was completed in 1716. The Upper Belvedere, completed between 1720 and 1722, is a more substantial building; with sparkling white stucco walls and copper roof, it became a wonder of Europe. Eugene and Hildebrandt also converted an existing structure on his Marchfeld estate into a country seat, the Schlosshof, situated between the Rivers Danube and Morava. The building, completed in 1729, was far less elaborate than his other projects but it was strong enough to serve as a fortress in case of need. Eugene spent much of his spare time there in his last years accommodating large hunting parties. In the years following the Peace of Rastatt Eugene became acquainted with a large number of scholarly men. Given his position and responsiveness, they were keen to meet him: few could exist without patronage and this was probably the main reason for Gottfried Leibniz's association with him in 1714.Eugene also befriended the French writer Jean-Baptiste Rousseau who, by 1716, was receiving financial support from Eugene. Rousseau stayed on attached to the Prince's household, probably helping in the library, until he left for the Netherlands in 1722. Another acquaintance, Montesquieu, already famous for his Persian Letters when he arrived in Vienna in 1728, favourably recalled his time spent at the Prince's table. Nevertheless, Eugene had no literary pretensions of his own, and was not tempted like Maurice de Saxe or Marshal Villars to write his memoirs or books on the art of war. He did, however, become a collector on the grandest scale: his picture galleries were filled with 16th- and 17th-century Italian, Dutch and Flemish art; his library at the Stadtpalais crammed with over 15,000 books, 237 manuscripts as well as a huge collection of prints (of particular interest were books on natural history and geography). "It is hardly believable," wrote Rousseau, "that a man who carries on his shoulders the burden of almost all the affairs of Europe … should find as much time to read as though he had nothing else to do." At Eugene's death his possessions and estates, except those in Hungary which the crown reclaimed, went to his niece, Princess Maria Anna Victoria, who at once decided to sell everything. The artwork was bought by Charles Emmanuel III of Sardinia. Eugene's library, prints and drawings were purchased by the Emperor in 1737 and have since passed into Austrian national collections. Historical reputation and legacy Napoleon considered Eugene one of the seven greatest commanders of history. Although later military critics have disagreed with that assessment, Eugene was undoubtedly the greatest Austrian general. He was no military innovator, but he had the ability to make an inadequate system work. 
He was equally adept as an organizer, strategist, and tactician, believing in the primacy of battle and his ability to seize the opportune moment to launch a successful attack. "The important thing," wrote Maurice de Saxe in his Reveries, "is to see the opportunity and to know how to use it. Prince Eugene possessed this quality which is the greatest in the art of war and which is the test of the most elevated genius." This fluidity was key to his battlefield successes in Italy and in his wars against the Turks. Nevertheless, in the Low Countries, particularly after the battle of Oudenarde in 1708, Eugene, like his cousin Louis of Baden, tended to play safe and become bogged down in a conservative strategy of sieges and defending supply lines. After the attempt on Toulon in 1707, he also became very wary of combined land/sea operations. To historian Derek McKay the main criticism of him as a general is his legacy—he left no school of officers nor an army able to function without him. Eugene was a disciplinarian—when ordinary soldiers disobeyed orders he was prepared to shoot them himself—but he rejected blind brutality, writing "you should only be harsh when, as often happens, kindness proves useless". On the battlefield Eugene demanded courage in his subordinates, and expected his men to fight where and when he wanted; his criteria for promotion were based primarily on obedience to orders and courage on the battlefield rather than social position. On the whole, his men responded because he was willing to push himself as hard as them. His position as President of the Imperial War Council proved less successful. Following the long period of peace after the Austro-Turkish War, the idea of creating a separate field army or providing garrison troops with effective training for them to be turned into such an army quickly was never considered by Eugene. By the time of the War of the Polish Succession, therefore, the Austrians were outclassed by a better prepared French force. For this Eugene was largely to blame—in his view (unlike the drilling and manoeuvres carried out by the Prussians which to Eugene seemed irrelevant to real warfare) the time to create actual fighting men was when war came. Although Frederick the Great had been struck by the muddle of the Austrian army and its poor organisation during the Polish Succession war, he later amended his initial harsh judgements. "If I understand anything of my trade," commented Frederick in 1758, "especially in the more difficult aspects, I owe that advantage to Prince Eugene. From him I learnt to hold grand objectives constantly in view, and direct all my resources to those ends." To historian Christopher Duffy it was this awareness of the 'grand strategy' that was Eugene's legacy to Frederick. To his responsibilities, Eugene attached his own personal values—physical courage, loyalty to his sovereign, honesty, self-control in all things—and he expected these qualities from his commanders. Eugene's approach was dictatorial, but he was willing to co-operate with someone he regarded as his equal, such as Baden or Marlborough. Yet the contrast to his co-commander of the Spanish Succession war was stark. "Marlborough," wrote Churchill, "was the model husband and father, concerned with building up a home, founding a family, and gathering a fortune to sustain it"; whereas Eugene, the bachelor, was "disdainful of money, content with his bright sword and his lifelong animosities against Louis XIV". 
The result was an austere figure, inspiring respect and admiration rather than affection.

Memorials

Places and monuments
A huge equestrian statue in the centre of Vienna commemorates Eugene's achievements. It is inscribed on one side, 'To the wise counsellor of three Emperors', and on the other, 'To the glorious conqueror of Austria's enemies'.
Prinz-Eugen-Kapelle, a chapel at the northern corner of St. Stephen's Cathedral in Vienna.
Prinz-Eugen-Straße, a street in Vienna in use since 1890. Until 1911 a street in Döbling was also named Prinz-Eugen-Straße; since then the street has connected Schwarzenbergplatz with the Wiedner Gürtel, leading past the Belvedere Palace.

Warships
Several ships have been named in Eugene's honour:
SMS Prinz Eugen, an Austro-Hungarian battleship of WWI launched in 1912
SMS Prinz Eugen, an Austro-Hungarian ironclad warship built in 1870
HMS Prince Eugene, a Royal Navy monitor
Eugenio di Savoia, an Italian light cruiser
Prinz Eugen (later USS Prinz Eugen), a World War II German heavy cruiser

Other
7th SS Volunteer Mountain Division Prinz Eugen, a German mountain infantry division of the Waffen-SS. It was formed in 1941 from Volksdeutsche volunteers and conscripts from the Banat, Independent State of Croatia, Hungary and Romania. It was initially named SS-Freiwilligen-Division Prinz Eugen (SS-Volunteer Division Prinz Eugen).
Prinz Eugen von Savoyen Prize, a prize awarded by the University of Vienna during the Nazi era rewarding "ethnic German culture".

Arms
Ancestry

See also
Prinz Eugen, der edle Ritter
20 euro Baroque commemorative coin
7th SS Volunteer Mountain Division Prinz Eugen
Louis William, Margrave of Baden-Baden
Leopold I, Holy Roman Emperor

References
Citations
Bibliography
Lediard, Thomas (1736). The Life of John, Duke of Marlborough. 3 Volumes. London.
Saxe, Maurice de (2007 [1757]). Reveries on the Art of War. Dover Publications Inc.
Chandler, David G. (1990). The Art of Warfare in the Age of Marlborough. Spellmount Limited.
Childs, John (2003). Warfare in
Eugene applied directly to Louis XIV for command of a company in French service, but the King—who had shown no compassion for Olympia's children since her disgrace—refused him out of hand. "The request was modest, not so the petitioner," he remarked. "No one else ever presumed to stare me out so insolently." Whatever the case, Louis XIV's choice would cost him dearly twenty years later, for it would be precisely Eugene, in collaboration with the Duke of Marlborough, who would defeat the French army at Blenheim, a decisive battle which checked French military supremacy and political power. Denied a military career in France, Eugene decided to seek service abroad. One of Eugene's brothers, Louis Julius, had entered Imperial service the previous year, but he had been immediately killed fighting the Ottoman Turks in 1683. When news of his death reached Paris, Eugene decided to travel to Austria in the hope of taking over his brother's command. It was not an unnatural decision: his cousin, Louis of Baden, was already a leading general in the Imperial army, as was a more distant cousin, Maximilian II Emanuel, Elector of Bavaria. On the night of 26 July 1683, Eugene left Paris and headed east. Years later, in his memoirs, Eugene recalled his early years in France: Great Turkish War By May 1683, the Ottoman threat to Emperor Leopold I's capital, Vienna, was very evident. The Grand Vizier, Kara Mustafa Pasha—encouraged by Imre Thököly's Magyar rebellion—had invaded Hungary with between 100,000 and 200,000 men; within two months approximately 90,000 were beneath Vienna's walls. With the 'Turks at the gates', the Emperor fled for the safe refuge of Passau up the Danube, a more distant and secure part of his dominion. It was at Leopold I's camp that Eugene arrived in mid-August. Although Eugene was not of Austrian extraction, he did have Habsburg antecedents. His grandfather, Thomas Francis, founder of the Carignano line of the House of Savoy, was the son of Catherine Michelle—a daughter of Philip II of Spain—and the great-grandson of the Emperor Charles V. But of more immediate consequence to Leopold I was the fact that Eugene was the second cousin of Victor Amadeus, the Duke of Savoy, a connection that the Emperor hoped might prove useful in any future confrontation with France. These ties, together with his ascetic manner and appearance (a positive advantage to him at the sombre court of Leopold I), ensured the refugee from the hated French king a warm welcome at Passau, and a position in Imperial service. Though French was his favored language, he communicated with Leopold in Italian, as the Emperor (though he knew it perfectly) disliked French. But Eugene also had a reasonable command of German, which he understood very easily, something that helped him much in the military. Eugene was in no doubt where his new allegiance lay, this loyalty was immediately put to the test. By September, the Imperial forces under the Duke of Lorraine, together with a powerful Polish army under King John III Sobieski, were poised to strike the Sultan's army. On the morning of 12 September, the Christian forces drew up in line of battle on the south-eastern slopes of the Vienna Woods, looking down on the massed enemy camp. The day-long Battle of Vienna resulted in the lifting of the 60-day siege, and the Sultan's forces were routed and in retreat. 
Serving under Baden, as a twenty-year-old volunteer, Eugene distinguished himself in the battle, earning commendation from Lorraine and the Emperor; he later received the nomination for the colonelcy and was awarded the Kufstein regiment of dragoons by Leopold I. Holy League In March 1684, Leopold I formed the Holy League with Poland and Venice to counter the Ottoman threat. For the next two years, Eugene continued to perform with distinction on campaign and establish himself as a dedicated, professional soldier; by the end of 1685, still only 22 years old, he was made a Major-General. Little is known of Eugene's life during these early campaigns. Contemporary observers make only passing comments of his actions, and his own surviving correspondence, largely to his cousin Victor Amadeus, are typically reticent about his own feelings and experiences. Nevertheless, it is clear that Baden was impressed with Eugene's qualities—"This young man will, with time, occupy the place of those whom the world regards as great leaders of armies." In June 1686, the Duke of Lorraine besieged Buda (Budapest), the centre of the Ottoman occupation in Hungary. After resisting for 78 days, the city fell on 2 September, and Turkish resistance collapsed throughout the region as far away as Transylvania and Serbia. Further success followed in 1687, where, commanding a cavalry brigade, Eugene made an important contribution to the victory at the Battle of Mohács on 12 August. Such was the scale of their defeat that the Ottoman army mutinied—a revolt which spread to Constantinople. The Grand Vizier, Suluieman Pasha, was executed and Sultan Mehmed IV, deposed. Once again, Eugene's courage earned him recognition from his superiors, who granted him the honour of personally conveying the news of victory to the Emperor in Vienna. For his services, Eugene was promoted to Lieutenant-General in November 1687. He was also gaining wider recognition. King Charles II of Spain bestowed upon him the Order of the Golden Fleece, while his cousin, Victor Amadeus, provided him with money and two profitable abbeys in Piedmont. Eugene's military career suffered a temporary setback in 1688 when, on 6 September, the Prince suffered a severe wound to his knee by a musket ball during the Siege of Belgrade, and did not return to active service until January 1689. Interlude in the west: Nine Years' War Just as Belgrade was falling to Imperial forces under Max Emmanuel in the east, French troops in the west were crossing the Rhine into the Holy Roman Empire. Louis XIV had hoped that a show of force would lead to a quick resolution to his dynastic and territorial disputes with the princes of the Empire along his eastern border, but his intimidatory moves only strengthened German resolve, and in May 1689, Leopold I and the Dutch signed an offensive compact aimed at repelling French aggression. The Nine Years' War was professionally and personally frustrating for the Prince. Initially fighting on the Rhine with Max Emmanuel—receiving a slight head wound at the Siege of Mainz in 1689—Eugene subsequently transferred himself to Piedmont after Victor Amadeus joined the Alliance against France in 1690. Promoted to general of cavalry, he arrived in Turin with his friend the Prince of Commercy; but it proved an inauspicious start. Against Eugene's advice, Amadeus insisted on engaging the French at Staffarda and suffered a serious defeat—only Eugene's handling of the Savoyard cavalry in retreat saved his cousin from disaster. 
Eugene remained unimpressed with the men and their commanders throughout the war in Italy. "The enemy would long ago have been beaten," he wrote to Vienna, "if everyone had done their duty." So contemptuous was he of the Imperial commander, Count Caraffa, he threatened to leave Imperial service. In Vienna, Eugene's attitude was dismissed as the arrogance of a young upstart, but so impressed was the Emperor by his passion for the Imperial cause, he promoted him to Field-Marshal in 1693. When Caraffa's replacement, Count Caprara, was himself transferred in 1694, it seemed that Eugene's chance for command and decisive action had finally arrived. But Amadeus, doubtful of victory and now more fearful of Habsburg influence in Italy than he was of French, had begun secret dealings with Louis XIV aimed at extricating himself from the war. By 1696, the deal was done, and Amadeus transferred his troops and his loyalty to the enemy. Eugene was never to fully trust his cousin again; although he continued to pay due reverence to the Duke as head of his family, their relationship would forever after remain strained. Military honours in Italy undoubtedly belonged to the French commander Marshal Catinat, but Eugene, the one Allied general determined on action and decisive results, did well to emerge from the Nine Years' War with an enhanced reputation. With the signing of the Treaty of Ryswick in September/October 1697, the desultory war in the west was finally brought to an inconclusive end, and Leopold I could once again devote all his martial energies into defeating the Ottoman Turks in the east. Battle of Zenta The distractions of the war against Louis XIV had enabled the Turks to recapture Belgrade in 1690. In August 1691, the Austrians, under Louis of Baden, regained the advantage by heavily defeating the Turks at the Battle of Slankamen on the Danube, securing Habsburg possession of Hungary and Transylvania. When Baden was transferred west to fight the French in 1692, his successors, first Caprara, then from 1696, Frederick Augustus, the Elector of Saxony, proved incapable of delivering the final blow. On the advice of the President of the Imperial War Council, Rüdiger Starhemberg, thirty-four-year old Eugene was offered supreme command of Imperial forces in April 1697. This was Eugene's first truly independent command—no longer need he suffer under the excessively cautious generalship of Caprara and Caraffa, or be thwarted by the deviations of Victor Amadeus. But on joining his army, he found it in a state of 'indescribable misery'. Confident and self-assured, the Prince of Savoy (ably assisted by Commercy and Guido Starhemberg) set about restoring order and discipline. Leopold I had warned Eugene that "he should act with extreme caution, forgo all risks and avoid engaging the enemy unless he has overwhelming strength and is practically certain of being completely victorious", but when the Imperial commander learnt of Sultan Mustafa II's march on Transylvania, Eugene abandoned all ideas of a defensive campaign and moved to intercept the Turks as they crossed the River Tisza at Zenta on 11 September 1697. It was late in the day before the Imperial army struck. The Turkish cavalry had already crossed the river so Eugene decided to attack immediately, arranging his men in a half-moon formation. The vigour of the assault wrought terror and confusion amongst the Turks, and by nightfall, the battle was won. 
For the loss of some 2,000 dead and wounded, Eugene had inflicted an overwhelming defeat upon the enemy with approximately 25,000 Turks killed—including the Grand Vizier, Elmas Mehmed Pasha, the vizirs of Adana, Anatolia, and Bosnia, plus more than thirty aghas of the Janissaries, sipahis, and silihdars, as well as seven horsetails (symbols of high authority), 100 pieces of heavy artillery, 423 banners, and the revered seal which the sultan always entrusted to the grand vizir on an important campaign, Eugene had annihilated the Turkish army and brought to an end the War of the Holy League. Although the Ottomans lacked western organisation and training, the Savoyard prince had revealed his tactical skill, his capacity for bold decision, and his ability to inspire his men to excel in battle against a dangerous foe. After a brief terror-raid into Ottoman-held Bosnia, culminating in the sack of Sarajevo, Eugene returned to Vienna in November to a triumphal reception. His victory at Zenta had turned him into a European hero, and with victory came reward. Land in Hungary, given him by the Emperor, yielded a good income, enabling the Prince to cultivate his newly acquired tastes in art and architecture (see below); but for all his new-found wealth and property, he was, nevertheless, without personal ties or family commitments. Of his four brothers, only one was still alive at this time. His fourth brother, Emmanuel, had died aged 14 in 1676; his third, Louis Julius (already mentioned) had died on active service in 1683, and his second brother, Philippe, died of smallpox in 1693. Eugene's remaining brother, Louis Thomas—ostracised for incurring the displeasure of Louis XIV—travelled Europe in search of a career, before arriving in Vienna in 1699. With Eugene's help, Louis found employment in the Imperial army, only to be killed in action against the French in 1702. Of Eugene's sisters, the youngest had died in childhood. The other two, Marie Jeanne-Baptiste and Louise Philiberte, led dissolute lives. Expelled from France, Marie joined her mother in Brussels, before eloping with a renegade priest to Geneva, living with him unhappily until her premature death in 1705. Of Louise, little is known after her early salacious life in Paris, but in due course, she lived for a time in a convent in Savoy before her death in 1726. The Battle of Zenta proved to be the decisive victory in the long war against the Turks. With Leopold I's interests now focused on Spain and the imminent death of Charles II, the Emperor terminated the conflict with the Sultan, and signed the Treaty of Karlowitz on 26 January 1699. Middle life (1700–20) War of the Spanish Succession With the death of the infirm and childless Charles II of Spain on 1 November 1700, the succession of the Spanish throne and subsequent control over her empire once again embroiled Europe in war—the War of the Spanish Succession. On his deathbed Charles II had bequeathed the entire Spanish inheritance to Louis XIV's grandson, Philip, Duke of Anjou. This threatened to unite the Spanish and French kingdoms under the House of Bourbon—something unacceptable to England, the Dutch Republic, and Leopold I, who had himself a claim to the Spanish throne. From the beginning, the Emperor had refused to accept the will of Charles II, and he did not wait for England and the Dutch Republic to begin hostilities. Before a new Grand Alliance could be concluded Leopold I prepared to send an expedition to seize the Spanish lands in Italy. 
Eugene crossed the Alps with some 30,000 men in May/June 1701. After a series of brilliant manoeuvres the Imperial commander defeated Catinat at the Battle of Carpi on 9 July. "I have warned you that you are dealing with an enterprising young prince," wrote Louis XIV to his commander, "he does not tie himself down to the rules of war." On 1 September Eugene defeated Catinat's successor, Marshal Villeroi, at the Battle of Chiari, in a clash as destructive as any in the Italian theatre. But as so often throughout his career the Prince faced war on two fronts—the enemy in the field and the government in Vienna. Starved of supplies, money, and men, Eugene was forced into unconventional means against the vastly superior enemy. During a daring raid on Cremona on the night of 31 January/1 February 1702 Eugene captured the French commander-in-chief. Yet the coup was less successful than hoped: Cremona remained in French hands, and the Duke of Vendôme, whose talents far exceeded Villeroi's, became the theatre's new commander. Villeroi's capture caused a sensation in Europe and had a galvanising effect on English public opinion. "The surprise at Cremona," wrote the diarist John Evelyn, "… was the great discourse of this week"; but appeals for succour from Vienna remained unheeded, forcing Eugene to seek battle and gain a 'lucky hit'. The resulting Battle of Luzzara on 15 August proved inconclusive. Although Eugene's forces inflicted double the number of casualties on the French the battle settled little except to deter Vendôme trying an all-out assault on Imperial forces that year, enabling Eugene to hold on the south of the Alps. With his army rotting away, and personally grieving for his long-standing friend Prince Commercy who had died at Luzzara, Eugene returned to Vienna in January 1703. President of the Imperial War Council Eugene's European reputation was growing (Cremona and Luzzara had been celebrated as victories throughout the Allied capitals), yet because of the condition and morale of his troops the 1702 campaign had not been a success. Austria itself was now facing the direct threat of invasion from across the border in Bavaria where the state's Elector, Maximilian Emanuel, had declared for the Bourbons in August the previous year. Meanwhile, in Hungary a small-scale revolt had broken out in May and was fast gaining momentum. With the monarchy at the point of complete financial breakdown Leopold I was at last persuaded to change the government. At the end of June 1703 Gundaker Starhemberg replaced Gotthard Salaburg as President of the Treasury, and Prince Eugene succeeded Henry Mansfeld as the new President of the Imperial War Council (Hofkriegsratspräsident). As head of the war council Eugene was now part of the Emperor's inner circle, and the first president since Montecuccoli to remain an active commander. Immediate steps were taken to improve efficiency within the army: encouragement and, where possible, money, was sent to the commanders in the field; promotion and honours were distributed according to service rather than influence; and discipline improved. But the Austrian monarchy faced severe peril on several fronts in 1703: by June the Duke of Villars had reinforced the Elector of Bavaria on the Danube thus posing a direct threat to Vienna, while Vendôme remained at the head of a large army in northern Italy opposing Guido Starhemberg's weak Imperial force. 
Of equal alarm was Francis II Rákóczi's revolt which, by the end of the year, had reached as far as Moravia and Lower Austria. Blenheim Dissension between Villars and the Elector of Bavaria had prevented an assault on Vienna in 1703, but in the Courts of Versailles and Madrid, ministers confidently anticipated the city's fall. The Imperial ambassador in London, Count Wratislaw, had pressed for Anglo-Dutch assistance on the Danube as early as February 1703, but the crisis in southern Europe seemed remote from the Court of St. James's where colonial and commercial considerations were more to the fore of men's minds. Only a handful of statesmen in England or the Dutch Republic realised the true implications of Austria's peril; foremost amongst these was the English Captain-General, the Duke of Marlborough. By early 1704 Marlborough had resolved to march south and rescue the situation in southern Germany and on the Danube, personally requesting the presence of Eugene on campaign so as to have "a supporter of his zeal and experience". The Allied commanders met for the first time at the small village of Mundelsheim on 10 June, and immediately formed a close rapport—the two men becoming, in the words of Thomas Lediard, 'Twin constellations in glory'. This professional and personal bond ensured mutual support on the battlefield, enabling many successes during the Spanish Succession war. The first of these victories, and the most celebrated, came on 13 August 1704 at the Battle of Blenheim. Eugene commanded the right wing of the Allied army, holding the Elector of Bavaria's and Marshal Marsin's superior forces, while Marlborough broke through the Marshal Tallard's center, inflicting over 30,000 casualties. The battle proved decisive: Vienna was saved and Bavaria was knocked out of the war. Both Allied commanders were full of praise for each other's performance. Eugene's holding operation, and his pressure for action leading up to the battle, proved crucial for the Allied success. In Europe Blenheim is regarded as much a victory for Eugene as it is for Marlborough, a sentiment echoed by Sir Winston Churchill (Marlborough's descendant and biographer), who pays tribute to "the glory of Prince Eugene, whose fire and spirit had exhorted the wonderful exertions of his troops." France now faced the real danger of invasion, but Leopold I in Vienna was still under severe strain: Rákóczi's revolt was a major threat; and Guido Starhemberg and Victor Amadeus (who had once again switched loyalties and rejoined the Grand Alliance in 1703) had been unable to halt the French under Vendôme in northern Italy. Only Amadeus' capital, Turin, held on. Turin and Toulon Eugene returned to Italy in April 1705, but his attempts to move west towards Turin were thwarted by Vendôme's skillful manoeuvres. Lacking boats and bridging materials, and with desertion and sickness rife within his army, the outnumbered Imperial commander was helpless. Leopold I's assurances of money and men had proved illusory, but desperate appeals from Amadeus and criticism from Vienna goaded the Prince into action, resulting in the Imperialists' bloody defeat at the Battle of Cassano on 16 August. Following Leopold I's death and the accession of Joseph I to the Imperial throne in May 1705, Eugene began to receive the personal backing he desired. Joseph I proved to be a strong supporter of Eugene's supremacy in military affairs; he was the most effective emperor the Prince served and the one he was happiest under. 
Promising support, Joseph I persuaded Eugene to return to Italy and restore Habsburg honour. The Imperial commander arrived in theatre in mid-April 1706, just in time to organise an orderly retreat of what was left of Count Reventlow's inferior army following his defeat by Vendôme at the Battle of Calcinato on 19 April. Vendôme now prepared to defend the lines along the river Adige, determined to keep Eugene cooped to the east while the Marquis of La Feuillade threatened Turin. Feigning attacks along the Adige, Eugene descended south across the river Po in mid-July, outmanoeuvring the French commander and gaining a favourable position from which he could at last move west towards Piedmont and relieve Savoy's capital. Events elsewhere now had major consequences for the war in Italy. With Villeroi's crushing defeat by Marlborough at the Battle of Ramillies
which hangs in the Harvard Law School. In a 1992 opinion, Justice Antonin Scalia described the portrait of Taney, made two years after Taney's infamous decision in Dred Scott v. Sandford, as showing Taney "in black, sitting in a shadowed red armchair, left hand resting upon a pad of paper in his lap, right hand hanging limply, almost lifelessly, beside the inner arm of the chair. He sits facing the viewer and staring straight out. There seems to be on his face, and in his deep-set eyes, an expression of profound sadness and disillusionment." Leutze also executed other portraits, including one of fellow painter William Morris Hunt. That portrait was owned by Hunt's brother Leavitt Hunt, a New York attorney and sometime Vermont resident, and was shown at an exhibition devoted to William Morris Hunt's work at the Museum of Fine Arts, Boston in 1878. In 1860 Leutze was commissioned by the U.S. Congress to decorate a stairway in the Capitol Building in Washington, DC, for which he painted a large composition, Westward the Course of Empire Takes Its Way, which is also commonly known as Westward Ho!. Late in life, he became a member of the National Academy of Design. He was also a member of the Union League Club of New York, which has a number of his paintings. At age 52, he died in Washington, D.C. of heat stroke. He was interred at Glenwood Cemetery. At the time of his death, a painting, The Emancipation of the Slaves, was in preparation. Leutze's portraits are known for their artistic quality and their patriotic romanticism. Washington Crossing the Delaware firmly ranks among the American national iconography. Gallery of works Footnotes References Additional References: Wierich, Jochen. Grand Themes: Emanuel Leutze, "Washington Crossing the Delaware," and American History Painting (Penn State University Press; 2012) 240 pages; Argues that the painting was a touchstone for debates over history painting at a time of intense sectionalism. Irre, Heidrun. Emanuel Gottlob Leutze: Von der Rems zum Delaware, einhorn-Verlag+Druck GmbH, Schwäbisch Gmünd 2016, https://einhornverlag.de/shop/buecher/von-der-rems-zum-delaware/ New International Encyclopedia External links Leutze Gallery at MuseumSyndicate
Union. A companion picture, Columbus in Chains, procured him the gold medal of the Brussels Art Exhibition, and was subsequently purchased by the Art Union in New York; it was the basis of the 1893 $2 Columbian Issue stamp. In 1845, after a tour in Italy, he returned to Düsseldorf, marrying Juliane Lottner and making his home there for 14 years. During his years in Düsseldorf, he was a resource for visiting Americans: he found them places to live and work, provided introductions, and gave them emotional and even financial support. For many years, he was the president of the Düsseldorf Artists' Association; in 1848, he was an early promoter of the "Malkasten" art association; and in 1857, he led the call for a gathering of artists which originated the founding of the Allgemeine deutsche Kunstgenossenschaft. A strong supporter of Europe's Revolutions of 1848, Leutze decided to paint an image that would encourage Europe's liberal reformers with the example of the American Revolution. Using American tourists and art students as models and assistants, Leutze finished a first version of Washington Crossing the Delaware in 1850. Just after it was completed, the first version was damaged by fire in his studio, subsequently restored, and acquired by the Kunsthalle Bremen. On September 5, 1942, during World War II, it was destroyed in a bombing raid by the Allied forces. The second painting, a replica of the first, only larger, was ordered in 1850 by the Parisian art trader Adolphe Goupil for his New York branch and placed on exhibition on Broadway in October 1851. It is now owned by the Metropolitan Museum of Art in New York. In 1854, Leutze finished his depiction of the Battle of Monmouth, "Washington rallying the troops at Monmouth," commissioned by an important patron, the banker David Leavitt of New York City and Great Barrington, Massachusetts. New York City and Washington, D.C. In 1859, Leutze returned to the United States and opened a studio in New York City. He divided his time between New York City and Washington, D.C.
arguments in this satire. Of higher literary value is the didactic and satirical Buch von der Tugend und Weisheit (1550), a collection of forty-nine fables in which Alberus embodies his views on the relations of Church and State. His satire is incisive, but in a scholarly and humanistic way; it does not appeal to popular passions with the fierce directness which enabled the master of Catholic satire, Thomas Murner, to inflict such telling blows. Several of Alberus's hymns, all of which show the influence of his master Luther, have been retained in the German Protestant hymnal. After Luther's death, Alberus was for a time
a German humanist, Lutheran reformer, and poet. Life He was born in the village of Bruchenbrücken (now part of Friedberg, Hesse) about the year 1500. Although his father Tilemann Alber was a schoolmaster, his early education was neglected. Ultimately in 1518, he found his way to the University of Wittenberg, where he studied theology. He had the good fortune to attract the attention of Martin Luther and Philipp Melanchthon, and subsequently became one of Luther's most active helpers in the Protestant Reformation. Not only did he fight for the Protestant cause as a preacher and theologian, but he was almost the only member of Luther's party who was able to confront the Roman Catholics with the weapon of literary satire. In 1542 he published a prose satire to which Luther wrote the preface, Der Barfusser Monche Eulenspiegel und Alkoran, a parodic adaptation of the Liber conformitatum of the Franciscan Bartolommeo Rinonico of Pisa, in which the Franciscan order is held up to ridicule. This drew reactions from Catholic
Optimizations Philippe McLean and R. Nigel Horspool in their paper "A Faster Earley Parser" combine Earley parsing with LR parsing and achieve an improvement in an order of magnitude. See also CYK algorithm Context-free grammar Parsing algorithms Citations Other reference materials Implementations C, C++ 'Yet Another Earley Parser (YAEP)' – C/C++ libraries 'C Earley Parser' – an Earley parser C Haskell 'Earley' – an Earley parser DSL in Haskell Java – a Java implementation of the Earley algorithm PEN – a Java library that implements the Earley algorithm Pep – a Java library that implements the Earley algorithm and provides charts and parse trees as parsing artifacts digitalheir/java-probabilistic-earley-parser - a Java library that implements the probabilistic Earley algorithm, which is useful to determine the most likely parse tree from an ambiguous sentence C# coonsta/earley - An Earley parser in C# patrickhuber/pliant - An Earley parser that integrates the improvements adopted by Marpa and demonstrates Elizabeth Scott's tree building algorithm. ellisonch/CFGLib - Probabilistic Context Free Grammar (PCFG) Library for C# (Earley + SPPF, CYK) JavaScript Nearley – an Earley parser that's starting to integrate the improvements that Marpa adopted A Pint-sized Earley Parser – a toy parser (with annotated pseudocode) to demonstrate Elizabeth Scott's technique for building the shared packed parse forest lagodiuk/earley-parser-js – a tiny JavaScript implementation of Earley parser (including generation of the parsing-forest) digitalheir/probabilistic-earley-parser-javascript - JavaScript implementation of the probabilistic Earley parser OCaml Simple Earley - An implementation of a simple Earley-like parsing algorithm, with documentation. Perl Marpa::R2 – a Perl module.
Marpa is an Earley's algorithm that includes the improvements made by Joop Leo, and by Aycock and Horspool. Parse::Earley – a Perl module implementing Jay Earley's original algorithm Python Lark – an object-oriented, procedural implementation of an Earley parser, that outputs a SPPF. NLTK – a Python toolkit with an Earley parser Spark – an object-oriented little language framework for Python implementing an Earley
of states to process, with the operation to be performed depending on what kind of state it is. The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top-level rule and n the input length; otherwise it rejects.

Pseudocode
Adapted from Speech and Language Processing by Daniel Jurafsky and James H. Martin:

DECLARE ARRAY S;

function INIT(words)
    S ← CREATE_ARRAY(LENGTH(words) + 1)
    for k ← from 0 to LENGTH(words) do
        S[k] ← EMPTY_ORDERED_SET

function EARLEY_PARSE(words, grammar)
    INIT(words)
    ADD_TO_SET((γ → •S, 0), S[0])

    for k ← from 0 to LENGTH(words) do
        for each state in S[k] do  // S[k] can expand during this loop
            if not FINISHED(state) then
                if NEXT_ELEMENT_OF(state) is a nonterminal then
                    PREDICTOR(state, k, grammar)  // non_terminal
                else do
                    SCANNER(state, k, words)      // terminal
            else do
                COMPLETER(state, k)
        end
    end
    return chart

procedure PREDICTOR((A → α•Bβ, j), k, grammar)
    for each (B → γ) in GRAMMAR_RULES_FOR(B, grammar) do
        ADD_TO_SET((B → •γ, k), S[k])
    end

procedure SCANNER((A → α•aβ, j), k, words)
    if a ⊂ PARTS_OF_SPEECH(words[k]) then
        ADD_TO_SET((A → αa•β, j), S[k+1])
    end

procedure COMPLETER((B → γ•, x), k)
    for each (A → α•Bβ, j) in S[x] do
        ADD_TO_SET((A → αB•β, j), S[k])
    end

Example
Consider the following simple grammar for arithmetic expressions:

    <P> ::= <S>      # the start rule
    <S> ::= <S> "+" <M> | <M>
    <M> ::= <M> "*" <T> | <T>
    <T> ::= "1" | "2" | "3" | "4"

With the input:

    2 + 3 * 4

This is the sequence of state sets:

The state (P → S •, 0) represents a completed parse. This state also appears in S(3) and S(1), since "2" and "2 + 3" are complete sentences of the grammar.

Constructing the parse forest
Earley's dissertation briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. But Tomita noticed that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb. Another method is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest. Predicted items have a null SPPF pointer. The scanner creates an SPPF node representing the non-terminal it is scanning. Then when the scanner or completer advance an item, they add a derivation whose children are the node from the item whose dot was advanced, and the one for the new symbol that was advanced over (the non-terminal or completed item). SPPF nodes are never labeled with a completed LR(0) item: instead they are labelled with the symbol that is produced so that all derivations are combined under one node regardless of which alternative production they come from.
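To make the shared packed parse forest representation just described a little more concrete, the following is a minimal Python sketch of an SPPF node store. The class and method names (SPPFNode, SPPFBuilder, add_family) are illustrative assumptions rather than any library's API, and only the node structure is modelled, not the full construction during parsing.

class SPPFNode:
    # A node labelled (s, i, j): symbol or dotted rule s, deriving input positions i..j.
    def __init__(self, label):
        self.label = label
        self.families = []            # each family is one derivation (a tuple of children)

    def add_family(self, children):
        # Record one derivation unless an identical one is already present.
        if children not in self.families:
            self.families.append(children)

class SPPFBuilder:
    # Keeps nodes unique per label, as the construction requires.
    def __init__(self):
        self.nodes = {}

    def node(self, label):
        if label not in self.nodes:
            self.nodes[label] = SPPFNode(label)
        return self.nodes[label]

# For the ambiguous grammar S -> SS | b and the input "bbb", the node for
# (S, 0, 3) ends up with two families, one per way of splitting the string,
# while shared sub-derivations such as (S, 0, 1) are reused rather than duplicated.
forest = SPPFBuilder()
root = forest.node(("S", 0, 3))
root.add_family((forest.node(("S", 0, 1)), forest.node(("S", 1, 3))))
root.add_family((forest.node(("S", 0, 2)), forest.node(("S", 2, 3))))
print(len(root.families))             # 2: both derivations of "bbb" share one root node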
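Returning to the pseudocode and the arithmetic example above, a minimal Python sketch of the recognizer might look as follows. It is a direct transcription of the PREDICTOR/SCANNER/COMPLETER scheme under the simplifying assumption that every terminal is a literal token (so the scanner compares tokens directly instead of consulting parts of speech); the function and variable names are illustrative, not taken from any existing library.

GRAMMAR = {
    "P": [["S"]],                      # the start rule
    "S": [["S", "+", "M"], ["M"]],
    "M": [["M", "*", "T"], ["T"]],
    "T": [["1"], ["2"], ["3"], ["4"]],
}

def earley_recognize(words, grammar=GRAMMAR, start="P"):
    # An item is (head, body, dot, origin); S[k] is the state set for input position k.
    S = [[] for _ in range(len(words) + 1)]
    for production in grammar[start]:
        S[0].append((start, tuple(production), 0, 0))
    for k in range(len(words) + 1):
        i = 0
        while i < len(S[k]):                       # S[k] can grow while being processed
            head, body, dot, origin = S[k][i]
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:                 # PREDICTOR: next symbol is a nonterminal
                    for production in grammar[sym]:
                        item = (sym, tuple(production), 0, k)
                        if item not in S[k]:
                            S[k].append(item)
                elif k < len(words) and words[k] == sym:   # SCANNER: next symbol matches the input token
                    item = (head, body, dot + 1, origin)
                    if item not in S[k + 1]:
                        S[k + 1].append(item)
            else:                                  # COMPLETER: item is finished, advance its parents
                for h, b, d, o in S[origin]:
                    if d < len(b) and b[d] == head:
                        item = (h, b, d + 1, o)
                        if item not in S[k]:
                            S[k].append(item)
            i += 1
    # Accept if a fully recognised start rule spans the whole input.
    return any((start, tuple(p), len(p), 0) in S[len(words)] for p in grammar[start])

print(earley_recognize("2 + 3 * 4".split()))       # True
print(earley_recognize("2 + + 4".split()))         # False

Run on the input 2 + 3 * 4, this sketch accepts because the completed item (P → S •, 0) ends up in the final state set, matching the example above.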
rural areas. Coffee is also a large part of Ethiopian culture and cuisine. After every meal, a coffee ceremony is enacted and coffee is served. Restrictions of certain meats Ethiopian Orthodox Christians, Ethiopian Jews and Ethiopian Muslims avoid eating pork or shellfish, for religious reasons. Pork is considered unclean in Ethiopian Orthodox Christianity, Judaism and Islam. Many Ethiopians abstain from eating certain meats, and mostly eat vegetarian and vegan foods. Traditional ingredients Berbere, a combination of powdered chili pepper and other spices (cardamom, fenugreek, coriander, cloves, ginger, nutmeg, cumin and allspice) is an important ingredient used to add flavor to many varied dishes like chicken stews and baked fish dishes. Also essential is niter kibbeh, a clarified butter infused with ginger, garlic, and several spices. Mitmita (, ) is a powdered seasoning mix used in Ethiopian cuisine. It is orange-red in color and contains ground birdseye chili peppers (piri piri), cardamom seed, cloves and salt. It occasionally has other spices including cinnamon, cumin and ginger. In their adherence to strict fasting, Ethiopian cooks have developed a rich array of cooking oil sources—besides sesame and safflower—for use as a substitute for animal fats which are forbidden during fasting periods. Ethiopian cuisine also uses nug (also spelled noog, also known as "niger seed"). Dishes Amhara dishes Wat Wat begins with a large amount of chopped red onion, which is simmered or sauteed in a pot. Once the onions have softened, niter kebbeh (or, in the case of vegan dishes, vegetable oil) is added. Following this, berbere is added to make a spicy keiy wat or keyyih tsebhi. Turmeric is used instead of berbere for a milder alicha wat or both spices are omitted when making vegetable stews, such as atkilt wat. Meat such as beef (ሥጋ, səga), chicken (ዶሮ, doro or derho), fish (ዓሣ, asa), goat or lamb (በግ, beg or beggi) is also added. Legumes such as split peas (ክክ, kək or kikki) and lentils (ምስር, məsər or birsin); or vegetables such as potatoes (ድንች, Dənəch), carrots and chard (ቆስጣ) are also used instead in vegan dishes. Each variation is named by appending the main ingredient to the type of wat (e.g. ). However, the word keiy is usually not necessary, as the spicy variety is assumed when it is omitted (e.g. doro wat). The term is sometimes used to refer to all vegetable dishes, but a more specific name can also be used (as in , which translates to "potatoes and carrots stew"; but the word atkilt is usually omitted when using the more specific term). Tibs Meat along with vegetables are sautéed to make tibs (also tebs, t'ibs, tibbs, etc., Ge'ez: ጥብስ ṭïbs). Tibs is served in a variety of manners, and can range from hot to mild or contain little to no vegetables. There are many variations of the delicacy, depending on type, size or shape of the cuts of meat used. Beef, mutton, and goat are the most common meats used in the preparation of tibs. The mid-18th-century European visitor to Ethiopia describes tibs as a portion of grilled meat served "to pay a particular compliment or show especial respect to someone." It may still be seen this way; today the dish is prepared to commemorate special events and holidays. Kinche (Qinch'e) Kinche (Qinch’e), a porridge, is a very common Ethiopian breakfast or supper. It is incredibly simple, inexpensive, and nutritious. It is made from cracked wheat, Ethiopian oats, barley or a mixture of those. It can be boiled in either milk or water with a little salt. 
The flavor of kinche comes from the nit'ir qibe, which is a spiced butter.

Oromo dishes
The Oromos' cuisine consists of various vegetable or meat side dishes and entrées. As part of a long-established custom, practice, or belief, people do not eat pork in Oromia.
Foon Waaddii – minced roasted meat, specially seasoned
Anchotte – a common dish in the western part of Oromia
Baduu – the liquid remaining after milk has been curdled and strained (cheese)
Maarqaa – a porridge-like dish made from wheat, honey, milk, chili and spices
Chechebsaa – shredded biddena stir-fried with chili powder and cheese
Qoocco – also known as kocho (not the Gurage type of kocho but a different kind); a common dish in the western part of Oromia
Itto – comprises all sorts of vegetables (tomato, potato, ginger, garlic) and meat (lamb)
Chukkoo – also known as Micira; a sweet whole-grain dish seasoned with butter and spices
Chororsaa – a common dish in the western part of Oromia
Hulbata – a slow-cooked thick stew of fenugreek seed powder, potato, and lamb rib or loin chops seasoned with chili, garlic and tomato spices, served on top of biddena; mostly cooked in the East Hararghe Zone and West Hararghe Zone of Oromia
Dokkee – a common dish throughout Oromia state
Qince – similar to Maarqaa but made from shredded grains as opposed to flour
Qorso (Akayi) – eaten as snacks in Oromia state
Dadhii – a drink made from honey
Hanida/Haneed – a slow-roasted lamb dish usually served with rice
Shitney/Shatta sauce – a mixture of herbs and peppers used as a side for hanida
Farsho – a beer-like drink made from barley
Buna – coffee

Gurage dishes
Kitfo
Another distinctively Ethiopian dish is kitfo (frequently spelled ketfo). It consists of raw (or rare) beef mince marinated in mitmita (Ge'ez: ሚጥሚጣ mīṭmīṭā, a very spicy chili powder similar to berbere) and niter kibbeh. Gored gored is very similar to kitfo, but uses cubed rather than ground beef. Ayibe is a cottage cheese that is mild and crumbly. It is much closer in texture to crumbled feta. Although not quite pressed, the whey has been drained and squeezed out. It is often served as a side dish to soften the effect of very spicy food. It has little to no distinct taste of its own. However, when served separately, ayibe is often mixed with a variety of mild or hot spices typical of Gurage cuisine.
Gomen kitfo
Gomen kitfo is another typical Gurage dish. Collard greens (ጎመን gōmen) are boiled, dried and then finely chopped and served with butter, chili and spices. It is a dish specially prepared for the occasion of Meskel, a very popular holiday marking the discovery of the True Cross. It is served along with ayibe or sometimes even kitfo in this tradition.

Sidama dishes
Wassa
The enset plant (called wesse in the Sidamo language) is central to Sidama cuisine, and after grinding and fermenting the root to produce wassa, it is used in the preparation of several foods. An enset flatbread is used similarly to injera to eat wats made from beef, mushrooms, beans, gomen, or pumpkin. Borasaame is a cooked mixture of wassa and butter sometimes eaten with Ethiopian mustard greens and/or beans. It is traditionally eaten by hand
Jews became more and more frustrated with corruption, injustice and poverty. It continued into the 60s, four years before James was killed. War broke out with Rome and would lead to the destruction of Jerusalem and the scattering of the people. The epistle is renowned for exhortations on fighting poverty and caring for the poor in practical ways (James 1:26–27; 2:1–4; 2:14–19; 5:1–6), standing up for the oppressed (James 2:1–4; 5:1–6) and not being "like the world" in the way one responds to evil in the world (James 1:26–27; 2:11; 3:13–18; 4:1–10). Worldly wisdom is rejected and people are exhorted to embrace heavenly wisdom, which includes peacemaking and pursuing righteousness and justice (James 3:13–18). This approach sees the epistle as a real letter with a real immediate purpose: to encourage Christian Jews not to revert to violence in their response to injustice and poverty but to stay focused on doing good, staying holy and to embrace the wisdom of heaven, not that of the world. Doctrine Justification The epistle contains the following famous passage concerning salvation and justification: This passage has been contrasted with the teachings of Paul the Apostle on justification. Some scholars even believe that the passage is a response to Paul. One issue in the debate is the meaning of the Greek word (, 'render righteous or such as he ought to be'), with some among the participants taking the view that James is responding to a misunderstanding of Paul. Roman Catholicism and Eastern Orthodoxy have historically argued that the passage disproves the doctrine of justification by faith alone (sola fide). The early (and many modern) Protestants resolve the apparent conflict between James and Paul regarding faith and works in alternate ways from the Catholics and Orthodox: According to Ben Witherington III, differences exist between the Apostle Paul and James, but both used the law of Moses, the teachings of Jesus and other Jewish and non-Jewish sources, and "Paul was not anti-law any more than James was a legalist". A more recent article suggests that the current confusion regarding the Epistle of James about faith and works resulted from Augustine of Hippo's anti-Donatist polemic in the early fifth century. This approach reconciles the views of Paul and James on faith and works. Anointing of the Sick The epistle is also the chief biblical text for the Anointing of the Sick. James wrote: G. A. Wells suggested that the passage was evidence of late authorship of the epistle, on the grounds that the healing of the sick being done through an official body of presbyters (elders) indicated a considerable development of ecclesiastical organisation "whereas in Paul's day to heal and work miracles pertained to believers indiscriminately (I Corinthians, XII:9)." Works, deeds and care for the poor James and the M Source material in Matthew are unique in the canon in their stand against the rejection of works and deeds. According to Sanders, traditional Christian theology wrongly divested the term "works" of its ethical grounding, part of the effort to characterize Judaism as legalistic. However, for James and for all Jews, faith is alive only through Torah observance. In other words, belief demonstrates itself through practice and manifestation. For James, claims about belief are empty, unless they are alive in action, works and deeds. Torah observance James is unique in the canon by its explicit and wholehearted support of Torah-observance (the Law). 
According to Bibliowicz, not only is this text a unique view into the milieu of the Jewish founders; its inclusion in the canon also signals that as canonization began (fourth century onward), Torah observance among believers in Jesus was still authoritative. According to modern scholarship, James, Q, Matthew, the Didache, and the pseudo-Clementine literature reflect a similar ethos and ethical perspective, and stand on, or assume, Torah observance. James's call to Torah observance (James 1:22–27) ensures salvation (James 2:12–13, 14–26). Hartin is supportive of the focus on Torah observance, concludes that these texts support faith through action, and sees them as reflecting the milieu of the Jewish followers of Jesus. Hub van de Sandt sees Matthew's and James' Torah observance reflected in a similar use of the Jewish Two Ways theme, which is detectable in the Didache too (Didache 3:1–6). McKnight thinks that Torah observance is at the heart of James's ethics. A strong message against those advocating the rejection of Torah observance characterizes, and emanates from, this tradition: "Some have attempted, while I am still alive, to transform my words by certain various interpretations, in order to teach the dissolution of the law; as though I myself were of such a mind, but did not freely proclaim it, which God forbid! For such a thing were to act in opposition to the law of God which was spoken by Moses, and was borne witness to by our Lord in respect of its eternal continuance; for thus he spoke: 'The heavens and the earth shall pass away, but one jot or one tittle shall in no wise pass away from the law.'" James seems to propose a more radical and demanding interpretation of the law than mainstream Judaism. According to Painter, there is nothing in James to suggest any relaxation of the demands of the law. "No doubt James takes for granted his readers' observance of the whole law, while focusing his attention on its moral demands." Canonicity The Epistle of James was first explicitly referred to and quoted by Origen of Alexandria, and possibly somewhat earlier by Irenaeus of Lyons, although it was not mentioned by Tertullian, who was writing at the end of the second century. The Epistle of James was included among the twenty-seven New Testament books first listed by Athanasius of Alexandria in his Thirty-Ninth Festal Epistle (AD 367) and was confirmed as a canonical epistle of the New Testament by a series of councils in the fourth century. In the first centuries of the Church the authenticity of the Epistle was doubted by some, including Theodore of Mopsuestia in the mid-fifth century. Because of the silence of several of the western churches regarding it, Eusebius classes it among the Antilegomena or contested writings (Historia ecclesiae, 3.25; 2.23). Gaius Marius Victorinus, in his commentary on the Epistle to the Galatians, openly questioned whether the teachings of James were heretical. Its late recognition in the Church, especially in the West, may be explained by the fact that it was written for or by Jewish Christians, and therefore not widely circulated among the Gentile Churches. There is some indication that a few groups distrusted the book because of its doctrine. In Reformation times a few theologians, most notably Martin Luther in his early ministry, argued that this epistle should not be part of the canonical New Testament. Martin Luther's description of the Epistle of James varies. 
In some cases, Luther argues that it was not written by an apostle; but in other cases, he describes James as the work of an apostle. He even cites it as authoritative teaching from God and describes James as "a good book, because it
of James has attracted increasing scholarly interest due to a surge in the quest for the historical James, his role within the Jesus movement, his beliefs, and his relationships and views. This James revival is also associated with an increasing level of awareness of the Jewish grounding of both the epistle and the early Jesus movement. Authorship The debate about the authorship of James is inconclusive and shadows debates about Christology, and about historical accuracy. According to Robert J. Foster, "there is little consensus as to the genre, structure, dating, and authorship of the book of James." There are four "commonly espoused" views concerning authorship and dating of the Epistle of James: the letter was written by James before the Pauline epistles, the letter was written by James after the Pauline epistles, the letter is pseudonymous, the letter comprises material originally from James but reworked by a later editor. The writer refers to himself only as "James, a servant of God and of the Lord Jesus Christ". Jesus had two apostles named James: James, the son of Zebedee and James, the son of Alphaeus, but it is unlikely that either of these wrote the letter. According to the Book of Acts, James, the brother of John, was killed by Herod Agrippa I. James, the son of Alphaeus is a more viable candidate for authorship, although he is not prominent in the scriptural record, and relatively little is known about him. Hippolytus, writing in the early third century, asserted in his work On the 12 Apostles: The similarity of his alleged martyrdom to the stoning of James the Just has led some scholars, such as Robert Eisenman and James Tabor, to assume that these "two Jameses" were one and the same. This identification of James of Alphaeus with James the Just (as well as James the Less) has long been asserted, as evidenced by their conflation in Jacobus de Voragine's medieval hagiography the Golden Legend. Some have said the authorship of this epistle points to James, the brother of Jesus, to whom Jesus evidently had made a special appearance after his resurrection described in the New Testament as this James was prominent among the disciples. James the brother of Jesus was not a follower of Jesus before Jesus died according to John 7:2-5, which states that during Jesus' life "not even his brothers believed in him". From the middle of the 3rd century, patristic authors cited the epistle as written by James, the brother of Jesus and a leader of the Jerusalem church. If the letter is of pseudonymous authorship (i.e. not written by an apostle but by someone else), this implies that the person named "James" is respected and doubtless well known. Moreover, this James, brother of Jesus, is honored by the epistle written and distributed after the lifetime of James, the brother of Jesus. While not numbered among the Twelve Apostles unless he is identified as James the Less, James was nonetheless a very important figure: Paul the Apostle described him as "the brother of the Lord" in Galatians 1:19 and as one of the three "pillars of the Church" in Galatians 2:9. "There is no doubt that James became a much more important person in the early Christian movement than a casual reader of the New Testament is likely to imagine." The James believers are acquainted with, emerges from Galatians 1–2; 1 Corinthians 15-17 and Acts 12,15,21. 
Accounts about James are also extant in Josephus, Eusebius, Origen, the Gospel of Thomas, the Apocalypses of James, the Gospel of the Hebrews and the Pseudo-Clementine literature, most of which cast him as righteous and as the undisputed leader of the Jewish camp. "His influence is central and palpable in Jerusalem and in Antioch, despite the fact that he did not minister at Antioch. Although we are dependent on sources dominated by the Pauline perspective… the role and influence of James overshadow all others at Antioch." John Calvin and others suggested that the author was James, son of Alphaeus, who is referred to as James the Less. The Protestant reformer Martin Luther denied it was the work of an apostle and termed it an "epistle of straw". The Holy Tradition of the Eastern Orthodox Church teaches that the Book of James was "written not by either of the apostles, but by the 'brother of the Lord' who was the first bishop of the Church in Jerusalem." Arguments for a pseudepigraphon There is a majority view that the Epistle of James is pseudonymous. Most scholars consider the epistle to be a pseudepigraphon because of these factors: The author introduces himself merely as "a servant of God and of the Lord Jesus Christ" without invoking any special family relationship to Jesus, and Jesus is scarcely mentioned elsewhere in the book. The cultured Greek of the Epistle, it is contended, could not have been written by a Jerusalem Jew. Some scholars argue for a primitive version of the letter composed by James and then later polished by another writer. Some see parallels between James and 1 Peter, 1 Clement, and the Shepherd of Hermas and take this to reflect the socio-economic situation Christians were dealing with in the late 1st or early 2nd century; it thus could have been written anywhere in the Empire where Christians spoke Greek. There are some scholars who
Canonical status The letter of Jude was one of the disputed books of the biblical canon. Eusebius doubted its authenticity, although he acknowledged that it was read in many churches. The links between the Epistle and 2 Peter and its use of the biblical apocrypha raised concern: Saint Jerome wrote that the book was "rejected by many" since it quotes the Book of Enoch. The epistle only spread among Christian circles comparatively late, raising concerns that it had not really been written by an apostle, but rather by a later figure. Despite the concerns above, the Epistle of Jude was admitted to the canon of the New Testament, with most churches accepting it by the end of the second century. Clement of Alexandria, Tertullian, and the Muratorian canon considered the letter canonical. The first historical record of doubts as to authorship is found in the writings of Origen of Alexandria, who spoke of the doubts held by some, albeit not by him. Eusebius classified it with the "disputed writings, the antilegomena." The letter was eventually accepted as part of the biblical canon by later Church Fathers such as Athanasius of Alexandria and the Synods of Laodicea (c. 363) and Carthage (c. 397). Content Jude urges his readers to defend the deposit of Christ's doctrine that had been closed by the time he wrote his epistle, and to remember the words of the apostles spoken somewhat before. He warns about false teachers who use grace as a pretext for wantonness. Jude then asks the reader to recall how, even after the Lord saved his own people out of the land of Egypt, he did not hesitate to destroy those who fell into unbelief, much as he punished the angels who fell from their original exalted status, and Sodom and Gomorrah. He also paraphrases (verse 9) an incident, apparently from the Testament of Moses that has since been lost, about Satan and Michael the Archangel quarreling over the body of Moses. Continuing the analogy from Israel's history, he says that the false teachers have followed in the way of Cain, have rushed after reward into the error of Balaam, and have perished in the rebellion of Korach. He describes in vivid terms the opponents he warns of, calling them "clouds without rain", "trees without fruit", "foaming waves of the sea", and "wandering stars". He exhorts believers to remember the words spoken by the Apostles, using language similar to the second epistle of Peter to answer concerns that the Lord seemed to tarry, "In the last time there will be scoffers, indulging their own ungodly lusts," and to keep themselves in God's love, before delivering a doxology to God. Jude quotes directly from 1 Enoch, a widely distributed work among the Old Testament Pseudepigrapha, citing a section of 1 Enoch 1:8 that is based on Deuteronomy 33:2. Style and audience The Epistle of Jude is a brief book. It is one of the shortest books of the New Testament, consisting of just 1 chapter of 25 verses, and almost the shortest book in the Bible, the shortest being the Book of Obadiah. It may have been composed as an encyclical letter, that is, one not directed to the members of one church in particular but intended rather to be circulated and read in all churches. While addressed to the Christian Church as a whole, the references to Old Testament figures such as Michael, Cain, and Korah's sons; the Book of Enoch quotation; and the invocation of James, the head of the church of Jerusalem, suggest a Jewish Christian main audience that would be familiar with Enochian literature and revere James. 
The wording and syntax of this epistle in its original Greek demonstrate that the author was capable and fluent. The epistle's style is combative, impassioned, and rushed. Many examples of evildoers and warnings about their fates are given in rapid succession. The epistle concludes with a doxology, which Peter H. Davids considers one of the highest in quality contained in the Bible. Identity of the opponents The epistle fiercely condemns the opponents it warns of and declares that God will judge and punish them, despite their being a part of the Christian community. However, the exact nature of these opponents has been of continuing interest to both theologians and historians, as the epistle does not describe them in any more detail than calling them corrupt and ungodly. Several theories have been proposed. The most specific verse describing the opponents is verse 8: Rejecting "authority" (κυριοτητα, kyriotēta; alternate translations include "dominion" or "lordship") could mean several things. The most direct would be rejection of civil or ecclesiastical authority: the opponents were ignoring guidance from leaders. Martin Luther and Jean Calvin agreed with this interpretation, and it is the most common one. Another possibility is that this specifically referred to rejecting the authority of Jesus or God, which would agree with verse 4 and reinforce the claim that these opponents are not true Christians. A third possibility is that this is the singular of kyriotētes (Dominions), a class of angels. This would fit with the final part of the sentence, "heap abuse on celestial beings", but it is unusual that the singular is used. However, manuscripts of Jude vary, and some, such as the Codex Sinaiticus, do use the plural form. "Heap abuse on celestial beings" is also a relevant statement, as it stands in some tension with the works of Paul the Apostle as well as the Epistle to the Hebrews. Paul's undisputed works indicate that believers are already on the same level as angels, that all existing powers are subject to Christ, and that believers are the future judges of angels. Later writings attributed to Paul such as Colossians and Ephesians go even further, with Colossians decrying the alleged worship of angels. A hypothesis is thus that the author may have been attacking forms of Pauline Christianity that were not suitably deferential to angels in their opinion. "Rejecting authority" may be a reference to Paul's preaching that gentiles did not need to comply with Jewish Law. As James was known to be a major figure among Jewish Christians, this might indicate tension between the more Jewish strands of early Christianity represented by James and Jude set against Paul's message to the gentiles. However, the line about "heap abuse on celestial beings" might have essentially been just another insult, in which case this entire line of thought is rendered moot. The inherent vagueness of the epistle means that the identities of these opponents may well never be known. Similarity to 2 Peter Part of Jude is very similar to 2 Peter (mainly 2 Peter chapter 2); so much so that most scholars agree that either one letter used the other directly, or they both drew on a common source. Comparing the Greek text portions of 2 Peter 2:1–3:3 (426 words) to Jude 4–18 (311 words) results in 80 words in common and 7 words of substituted synonyms. 
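The word-count comparison above can be illustrated in outline with a simple vocabulary check. The following Python sketch is illustrative only and is not drawn from the study cited: the variables peter_text and jude_text are placeholders for the Greek of 2 Peter 2:1–3:3 and Jude 4–18 (not reproduced here), and the tokenization is naive, with none of the lemmatization or synonym matching a real comparison would require.

import re

def word_forms(text):
    # Lower-case the passage and split it into word tokens (naive tokenizer).
    return set(re.findall(r"\w+", text.lower()))

def shared_vocabulary(text_a, text_b):
    # Word forms that occur in both passages.
    return word_forms(text_a) & word_forms(text_b)

peter_text = "..."  # placeholder for the Greek of 2 Peter 2:1-3:3
jude_text = "..."   # placeholder for the Greek of Jude 4-18
print(len(shared_vocabulary(peter_text, jude_text)))  # count of shared word forms
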
Because this epistle is much shorter than 2 Peter, and due to various stylistic details, some scholars consider Jude the source for the similar passages of 2 Peter. However, other writers, arguing that Jude 18 quotes 2 Peter 3:3 as past tense, consider Jude to have come after 2 Peter. Some scholars who consider Jude to predate 2 Peter note that the latter appears to quote the former but omits the reference to the non-canonical book of Enoch. References to other books The Epistle of Jude references at least three other books, with two (Book of Zechariah & 2 Peter) being canonical in all churches and the other (Book of Enoch) non-canonical in most churches. Verse 9
refers to a dispute between Michael the Archangel and the devil about the body of Moses. Some interpreters understand this reference to be an allusion to the events described in Zechariah 3:1–2. The classical theologian Origen, as well as Clement of Alexandria, Didymus the Blind, and others, attributes this reference to the non-canonical Assumption of Moses. However, no extant copies of the Assumption of Moses contain this story, leading most scholars to conclude the section covering this dispute has been lost - perhaps a
conceivable subject, from poetry to astronomy, from dogmatic theology to mysticism. His best known works are: A manual of theology in 4 vols, Theologia eclectica, moralis et scholastica (Augsburg, 1752; revised by Pope Benedict XIV for the 1753 edition published at Bologna) A defence of Catholic doctrine, entitled Demonstratio critica religionis Catholicae (Augsburg, 1751) A work on indulgences, which has often been criticized by Protestant writers, De Origine, Progressu, Valore, et Fructu Indulgentiorum (Augsburg, 1735) A treatise on mysticism, De Revelationibus et Visionibus, etc. (2 vols, 1744) The astronomical work Nova philosophiae planetarum et artis criticae systemata (Nuremberg, 1723). The list of his other works, including his three erudite contributions to the question of authorship of the Imitatio Christi, will be found in C. Toussaint's scholarly article in Alfred Vacant's Dictionnaire de theologie (1900, cols 1115-1117).
Bishops' Conference of the Old Catholic Church with regard to ordinations by Arnold Mathew, an episcopal ordination is for service within a specific Christian church, and an ordination ceremony that concerns only the individual himself does not make him truly a bishop. The Holy See has not commented on the validity of this theory, but has declared with regard to ordinations of this kind carried out, for example, by Emmanuel Milingo, that the Church "does not recognize and does not intend to recognize in the future those ordinations or any of the ordinations derived from them and therefore the canonical state of the alleged bishops remains that in which they were before the ordination conferred by Mr Milingo". Other theologians, notably those of the Eastern Orthodox Church, dispute the notion that such ordinations have effect, a notion that opens up the possibility of valid but irregular consecrations proliferating outside the structures of the "official" denominations. A Catholic ordained to the episcopacy without a mandate from the Pope is automatically excommunicated and is thereby forbidden to celebrate the sacraments, according to canon law. Eastern Orthodox Vlassios Pheidas, on an official Church of Greece site, uses the canonical language of the Orthodox tradition, to describe the conditions in ecclesial praxis when sacraments, including Holy Orders, are real, valid, and efficacious. He notes language is itself part of the ecclesiological problem. This applies to the validity and efficacy of the ordination of bishops and the other sacraments, not only of the Independent Catholic Churches, but also of all other Christian churches, including the Roman Catholic Church, Oriental Orthodoxy and the Assyrian Church of the East. Anglican Anglican bishop Colin Buchanan, in the Historical Dictionary of Anglicanism, says that the Anglican Communion has held an Augustinian view of orders, by which "the validity of Episcopal ordinations (to whichever order) is based solely upon the historic succession in which the ordaining bishop stands, irrespective of their contemporary ecclesial context". He describes the circumstances of Archbishop Matthew Parker's consecration as one of the reasons why this theory is "generally held". Parker was chosen by Queen Elizabeth I of England to be the first Church of England Archbishop of Canterbury after the death of the previous office holder, Cardinal Reginald Pole, the last Roman Catholic Archbishop of Canterbury. Buchanan notes the Roman Catholic Church also focuses on issues of intention and not just breaks in historical succession. He does not explain whether intention has an ecclesiological role, for Anglicans, in conferring or receiving sacraments. History According to Buchanan, "the real rise of the problem" happened in the 19th century, in the "wake of the Anglo-Catholic movement", "through mischievous activities of a tiny number of independently acting bishops". They exist worldwide, he writes, "mostly without congregations", and "many in different stages of delusion and fantasy, not least in the Episcopal titles they confer on themselves"; "the distinguishing mark" to "specifically identif[y]" an episcopus vagans is "the lack of a true see or the lack of a real church life to oversee". 
Paul Halsall, on the Internet History Sourcebooks Project, did not list a single church edifice of independent bishops, in a 1996–1998 New York City building architecture survey of religious communities, which maintain bishops claiming apostolic succession and claim cathedral status but noted there "are now literally hundreds of these '', of lesser or greater spiritual probity. They seem to have a tendency to call living room sanctuaries 'cathedrals';" those buildings were not perceived as cultural symbols and did not meet the survey criteria. David V. Barrett wrote, in A Brief Guide to Secret Religions, that "one hallmark of such bishops is that they often collect as many lineages as they can to strengthen their Episcopal legitimacy—at least in their own eyes" and their groups have more clergy than members. Many episcopi vagantes claim succession from the Old Catholic See of Utrecht, or from Eastern Orthodox, Oriental Orthodox, or Eastern Catholic Churches. A few others derive their orders from Roman Catholic bishops who have consecrated their own bishops after disputes with the Holy See. Barrett wrote that leaders "of some esoteric movements, are also priests or bishops in small non-mainstream Christian Churches"; he explains, this type of "independent or autocephalous" group has "little in common with the Church it developed from, the Old Catholic Church, and even less in common with the Roman Catholic Church" but still claims its authority from Apostolic succession. Many, if not most, episcopi vagantes are associated with Independent Catholic Churches. They may be very liberal or very conservative. Episcopi vagantes may also include some conservative "Continuing Anglicans" who have broken with the Anglican Communion over various issues such as Prayer Book revision, the ordination of women and the ordination of unmarried, non-celibate individuals (including homosexuals).
Buchanan writes that, based on the criteria of having "a true see" or having "a real church life to oversee", the bishops of most forms of the Continuing Anglican movement are not necessarily classified as vagantes, but "are always in danger of becoming such". Particular consecrations Arnold Mathew, according to Buchanan, "lapsed into the vagaries of an " Stephen Edmonds, in the Oxford Dictionary of National Biography, wrote that in 1910 Mathew's wife separated from him; that same year, he declared himself and his church seceded from the Union of Utrecht. Within a few months, on 2 November 1911, he was excommunicated by the Roman Catholic Church; sued The Times for libel based on the words "pseudo-bishop" used to describe him in the newspaper's translation from the Latin text ""; and lost his case in 1913. Henry R.T. Brandreth wrote, in Episcopi Vagantes and the Anglican Church, "[o]ne of the most regrettable features of Mathew's episcopate was the founding of the Order of Corporate Reunion (OCR) in 1908. This claimed to be a revival of Frederick George Lee's movement, but was in fact unconnected with it." Brandreth thought it "seems still to exist in a shadowy underground way" in 1947, but disconnected. Colin Holden, in Ritualist on a Tricycle, places Mathew and his into perspective; he wrote that Mathew was an , lived in a cottage provided for him, and performed his conditional acts, sometimes called, according to Holden, "bedroom ordinations", in his cottage. Mathew questioned the validity of Anglican ordinations and became involved with the OCR, in 1911 according to Edmonds, and he openly advertised his offer to reordain Anglican clergy who requested it. This angered the Church of England. In 1912, D. J. Scannell O'Neill wrote in The Fortnightly Review that London "seems to have more than her due share of bishops" and enumerated what he referred to as "these hireling shepherds". He also announced that one of them, Mathew, had revived the OCR and published The Torch, a monthly review advocating the reconstruction of Western Christianity and reunion with Eastern Christianity. The Torch stated "that the ordinations of the Church of England are not recognized by any church claiming to be Catholic", so the promoters involved Mathew to conditionally ordain group members who were "clergy of the Established Church" and would "sign a profession of the Catholic Faith". It stipulated that Mathew's services were not a system of simony and were given without simoniac expectations. The group sought to enroll "earnest-minded Catholics who sincerely desire to help forward the work of [c]orporate [r]eunion with the Holy See". Nigel Yates, in Anglican Ritualism in Victorian Britain, 1830-1910, described it as "an even more bizarre scheme to promote a Catholic Uniate Church in Britain" than Lee and Ambrose Lisle March Phillipps de Lisle's Association for the Promotion of the Unity of Christendom. O'Neill editorialized that the "most charitable construction to be placed on this latest move of Mathew is that he is not mentally sound. 
Being an Irishman, it is strange that he has not sufficient humor to see the absurdity of falling away from the Catholic Church in order to assist others to unite with the Holy See." Edmonds reports that "anything between 4 and 265 was suggested" as to how many took up his offer of reordination. When it declared devoid of canonical effect the consecration ceremony conducted by Archbishop Pierre Martin Ngô Đình Thục for the Carmelite Order of the Holy Face group at midnight of 31 December 1975, the
successful and in 1841, at the age of 29, he moved his family to Suffolk, where he bought a barley and coal merchants business in Snape, constructing Snape Maltings, a fine range of buildings for malting barley. The Garretts lived in a square Georgian house opposite the church in Aldeburgh until 1852. Newson's malting business expanded and more children were born, Edmund (1840), Alice (1842), Agnes (1845), Millicent (1847), who was to become a leader in the constitutional campaign for women's suffrage, Sam (1850), Josephine (1853) and George (1854). By 1850, Newson was a prosperous businessman and was able to build Alde House, a mansion on a hill behind Aldeburgh. A "by-product of the industrial revolution", Garrett grew up in an atmosphere of "triumphant economic pioneering" and the Garrett children were to grow up to become achievers in the professional classes of late-Victorian England. Elizabeth was encouraged to take an interest in local politics and, contrary to practices at the time, was allowed the freedom to explore the town with its nearby salt-marshes, beach and the small port of Slaughden with its boatbuilders' yards and sailmakers' lofts. Early education There was no school in Aldeburgh so Garrett learned the three Rs from her mother. When she was 10 years old, a governess, Miss Edgeworth, a poor gentlewoman, was employed to educate Garrett and her sister. Mornings were spent in the schoolroom; there were regimented afternoon walks; educating the young ladies continued at mealtimes when Edgeworth ate with the family; at night, the governess slept in a curtained off area in the girls' bedroom. Garrett despised her governess and sought to outwit the teacher in the classroom. When Garrett was 13 and her sister 15, they were sent to a private school, the Boarding School for Ladies in Blackheath, London, which was run by the step aunts of the poet Robert Browning. There, English literature, French, Italian and German as well as deportment, were taught. Later in life, Garrett recalled the stupidity of her teachers there, though her schooling there did help establish a love of reading. Her main complaint about the school was the lack of science and mathematics instruction. Her reading matter included Tennyson, Wordsworth, Milton, Coleridge, Trollope, Thackeray and George Eliot. Elizabeth and Louie were known as "the bathing Garretts", as their father had insisted they be allowed a hot bath once a week. However, they made what were to be lifelong friends there. When they finished in 1851, they were sent on a short tour abroad, ending with a memorable visit to the Great Exhibition in Hyde Park, London. After this formal education, Garrett spent the next nine years tending to domestic duties, but she continued to study Latin and arithmetic in the mornings and also read widely. Her sister Millicent recalled Garrett's weekly lectures, "Talks on Things in General", when her younger siblings would gather while she discussed politics and current affairs from Garibaldi to Macaulay's History of England. In 1854, when she was eighteen, Garrett and her sister went on a long visit to their school friends, Jane and Anne Crow, in Gateshead where she met Emily Davies, the early feminist and future co-founder of Girton College, Cambridge. Davies was to be a lifelong friend and confidante, always ready to give sound advice during the important decisions of Garrett's career. 
It may have been in the English Woman's Journal, first issued in 1858, that Garrett first read of Elizabeth Blackwell, who had become the first female doctor in the United States in 1849. When Blackwell visited London in 1859, Garrett travelled to the capital. By then, her sister Louie was married and living in London. Garrett joined the Society for Promoting the Employment of Women, which organised Blackwell's lectures on "Medicine as a Profession for Ladies" and set up a private meeting between Garrett and the doctor. It is said that during a visit to Alde House around 1860, one evening while sitting by the fireside, Garrett and Davies selected careers for advancing the frontiers of women's rights; Garrett was to open the medical profession to women, Davies the doors to a university education for women, while 13-year-old Millicent was allocated politics and votes for women. At first Newson was opposed to the radical idea of his daughter becoming a physician but came round and agreed to do all in his power, both financially and otherwise, to support Garrett. Medical education After an initial unsuccessful visit to leading doctors in Harley Street, Garrett decided to first spend six months as a surgery nurse at Middlesex Hospital, London in August 1860. On proving to be a good nurse, she was allowed to attend an outpatients' clinic, then her first operation. She unsuccessfully attempted to enroll in the hospital's Medical School but was allowed to attend private tuition in Latin, Greek and materia medica with the hospital's apothecary, while continuing her work as a nurse. She also employed a tutor to study anatomy and physiology three evenings a week. Eventually she was allowed into the dissecting room and the chemistry lectures. Gradually, Garrett became an unwelcome presence among the male students, who in 1861 presented a memorial to the school against her admittance as a fellow student, despite the support she enjoyed from the administration. She was obliged to leave the Middlesex Hospital but she did so with an honours certificate in chemistry and materia medica. Garrett then applied to several medical schools, including Oxford, Cambridge, Glasgow, Edinburgh, St Andrews and the Royal College of Surgeons, all of which refused her admittance. A companion to her in this struggle was the lesser known Dr. Sophia Jex-Blake. While both are considered "outstanding" medical figures of the late 19th century, Garrett was able to obtain her credentials by way of a "side door" through a loophole in admissions at the Worshipful Society of Apothecaries. Having privately obtained a certificate in anatomy and physiology, she was admitted in 1862 by the Society of Apothecaries who, as a condition of their charter, could not legally exclude her on account of her sex. She was the only woman in the Apothecaries Hall who sat the exam that year and among the 51 gentlemen candidates was William Heath Strange, who went on to found the Hampstead General Hospital, which was on the site now occupied by the Royal Free Hospital. She continued her battle to qualify by studying privately with various professors, including some at the University of St Andrews, the Edinburgh Royal Maternity and the London Hospital Medical School. 
In 1865, she finally took her exam and obtained a licence (LSA) from the Society of Apothecaries to practise medicine, becoming the first woman qualified in Britain to do so openly (the earlier Dr James Barry was born and raised female but presented as male from the age of 20 and lived his adult life as a man). On the day, three out of seven candidates passed the exam, Garrett with the highest marks. The Society of Apothecaries immediately amended its regulations to prevent other women from obtaining a licence, meaning that Jex-Blake could not follow the same path; the new rule made privately educated women ineligible for examination. It was not until 1876 that a new Medical Act (39 and 40 Vict., Ch. 41) was passed, allowing British medical authorities to license all qualified applicants regardless of gender. Career Though she was now a licentiate of the Society of Apothecaries, as a woman, Garrett could not take up a medical post in any hospital. So in late 1865, Garrett opened her own practice at 20 Upper Berkeley Street, London. At first patients were scarce, but the practice gradually grew. After six months in practice, she wished to open an outpatients dispensary, to enable poor women to obtain medical help from a qualified practitioner of their own gender.
In 1865, there was an outbreak of cholera in Britain, affecting both rich and poor, and in their panic, some people forgot any prejudices
the amount being carried away, erosion occurs. When the upcurrent amount of sediment is greater, sand or gravel banks will tend to form as a result of deposition. These banks may slowly migrate along the coast in the direction of the longshore drift, alternately protecting and exposing parts of the coastline. Where there is a bend in the coastline, quite often a buildup of eroded material occurs, forming a long narrow bank (a spit). Armoured beaches and submerged offshore sandbanks may also protect parts of a coastline from erosion. Over the years, as the shoals gradually shift, the erosion may be redirected to attack different parts of the shore. Erosion of a coastal surface, followed by a fall in sea level, can produce a distinctive landform called a raised beach. Chemical erosion Chemical erosion is the loss of matter in a landscape in the form of solutes. Chemical erosion is usually calculated from the solutes found in streams. Anders Rapp pioneered the study of chemical erosion in his work about Kärkevagge, published in 1960. The formation of sinkholes and other features of karst topography is an example of extreme chemical erosion. Glaciers Glaciers erode predominantly by three different processes: abrasion/scouring, plucking, and ice thrusting. In the abrasion process, debris in the basal ice scrapes along the bed, polishing and gouging the underlying rocks, similar to sandpaper on wood. Scientists have shown that, in addition to the role temperature plays in valley-deepening, other glaciological processes, such as erosion, also control cross-valley variations. In a homogeneous bedrock erosion pattern, a curved channel cross-section beneath the ice is created. Though the glacier continues to incise vertically, the shape of the channel beneath the ice eventually remains constant, reaching a U-shaped parabolic steady-state shape as is now seen in glaciated valleys. Scientists also provide a numerical estimate of the time required for the ultimate formation of a steady-state U-shaped valley: approximately 100,000 years. In a weak bedrock erosion pattern (where the bedrock contains material more erodible than the surrounding rocks), by contrast, the amount of overdeepening is limited because ice velocities and erosion rates are reduced. Glaciers can also cause pieces of bedrock to crack off in the process of plucking. In ice thrusting, the glacier freezes to its bed, then as it surges forward, it moves large sheets of frozen sediment at the base along with the glacier. This method produced some of the many thousands of lake basins that dot the edge of the Canadian Shield. Differences in the height of mountain ranges are not only the result of tectonic forces, such as rock uplift, but also of local climate variations. Scientists use global analysis of topography to show that glacial erosion controls the maximum height of mountains, as the relief between mountain peaks and the snow line is generally confined to altitudes of less than 1500 m. Glaciers worldwide erode mountains so effectively that the term glacial buzzsaw has become widely used to describe the limiting effect of glaciers on the height of mountain ranges. As mountains grow higher, they generally allow for more glacial activity (especially in the accumulation zone above the glacial equilibrium line altitude), which increases erosion rates, removing mass faster than isostatic rebound can add to the mountain. This provides a good example of a negative feedback loop. 
Ongoing research is showing that while glaciers tend to decrease mountain size, in some areas, glaciers can actually reduce the rate of erosion, acting as a glacial armor. Ice can not only erode mountains but also protect them from erosion. Depending on glacier regime, even steep alpine lands can be preserved through time with the help of ice. Scientists have supported this theory by sampling eight summits of northwestern Svalbard using ¹⁰Be and ²⁶Al, showing that northwestern Svalbard transformed from a glacier-erosion state under relatively mild glacial-maximum temperatures, to a glacier-armor state occupied by cold-based, protective ice during much colder glacial-maximum temperatures as the Quaternary ice age progressed. These processes, combined with erosion and transport by the water network beneath the glacier, leave behind glacial landforms such as moraines, drumlins, ground moraine (till), kames, kame deltas, moulins, and glacial erratics in their wake, typically at the terminus or during glacier retreat. The best-developed glacial valley morphology appears to be restricted to landscapes with low rock uplift rates (less than or equal to 2 mm per year) and high relief, leading to long turnover times. Where rock uplift rates exceed 2 mm per year, glacial valley morphology has generally been significantly modified in postglacial time. The interplay of glacial erosion and tectonic forcing governs the morphologic impact of glaciations on active orogens, both by influencing their height and by altering the patterns of erosion during subsequent glacial periods via a link between rock uplift and valley cross-sectional shape. Floods At extremely high flows, kolks, or vortices, are formed by large volumes of rapidly rushing water. Kolks cause extreme local erosion, plucking bedrock and creating pothole-type geographical features called rock-cut basins. Examples can be seen in the flood regions that resulted from glacial Lake Missoula, which created the channeled scablands in the Columbia Basin region of eastern Washington. Wind erosion Wind erosion is a major geomorphological force, especially in arid and semi-arid regions. It is also a major source of land degradation, evaporation, desertification, harmful airborne dust, and crop damage, especially after being increased far above natural rates by human activities such as deforestation, urbanization, and agriculture. Wind erosion is of two primary varieties: deflation, where the wind picks up and carries away loose particles; and abrasion, where surfaces are worn down as they are struck by airborne particles carried by wind. Deflation is divided into three categories: (1) surface creep, where larger, heavier particles slide or roll along the ground; (2) saltation, where particles are lifted a short height into the air, and bounce and saltate across the surface of the soil; and (3) suspension, where very small and light particles are lifted into the air by the wind, and are often carried for long distances. Saltation is responsible for the majority (50-70%) of wind erosion, followed by suspension (30-40%), and then surface creep (5-25%). Wind erosion is much more severe in arid areas and during times of drought. For example, in the Great Plains, it is estimated that soil loss due to wind erosion can be as much as
6100 times greater in drought years than in wet years. Mass movement Mass movement is the downward and outward movement of rock and sediments on a sloped surface, mainly due to the force of gravity. Mass movement is an important part of the erosional process and is often the first stage in the breakdown and transport of weathered materials in mountainous areas. It moves material from higher elevations to lower elevations where other eroding agents such as streams and glaciers can then pick up the material and move it to even lower elevations. Mass-movement processes occur continuously on all slopes; some act very slowly, while others occur very suddenly, often with disastrous results. Any perceptible down-slope movement of rock or sediment is often referred to in general terms as a landslide.
However, landslides can be classified in a much more detailed way that reflects the mechanisms responsible for the movement and the velocity at which the movement occurs. One of the visible topographical manifestations of a very slow form of such activity is a scree slope. Slumping happens on steep hillsides, occurring along distinct fracture zones, often within materials like clay that, once released, may move quite rapidly downhill. Slumps often show a spoon-shaped isostatic depression, in which the material has begun to slide downhill. In some cases, the slump is caused by water beneath the slope weakening it. In many cases it is simply the result of poor engineering along highways where it is a regular occurrence. Surface creep is the slow movement of soil and rock debris by gravity, which is usually not perceptible except through extended observation. However, the term can also describe the rolling of dislodged soil particles by wind along the soil surface. Factors affecting erosion rates Climate The amount and intensity of precipitation is the main climatic factor governing soil erosion by water. The relationship is particularly strong if heavy rainfall occurs at times when, or in locations where, the soil's surface is not well protected by vegetation. This might be during periods when agricultural activities leave the soil bare, or in semi-arid regions where vegetation is naturally sparse. Wind erosion requires strong winds, particularly during times of drought when vegetation is sparse and soil is dry (and so is more erodible). Other climatic factors such as average temperature and temperature range may also affect erosion, via their effects on vegetation and soil properties. In general, given similar vegetation and ecosystems, areas with more precipitation (especially high-intensity rainfall), more wind, or more storms are expected to have more erosion. In some areas of the world (e.g. the mid-western USA), rainfall intensity is the primary determinant of erosivity, with higher-intensity rainfall generally resulting in more soil erosion by water. The size and velocity of raindrops are also important factors. Larger and higher-velocity raindrops have greater kinetic energy, and thus their impact will displace soil particles by larger distances than smaller, slower-moving raindrops. In other regions of the world (e.g. western Europe), runoff and erosion result from relatively low intensities of stratiform rainfall falling onto previously saturated soil. In such situations, rainfall amount rather than intensity is the main factor determining the severity of soil erosion by water. According to climate change projections, erosivity will increase significantly in Europe, and soil erosion may increase by 13-22.5% by 2050. In Taiwan, where typhoon frequency increased significantly in the 21st century, a strong link has been drawn between the increase in storm frequency and the increase in sediment load in rivers and reservoirs, highlighting the impacts climate change can have on erosion. Vegetative cover Vegetation acts as an interface between the atmosphere and the soil. It increases the permeability of the soil to rainwater, thus decreasing runoff. It shelters the soil from winds, which results in decreased wind erosion, as well as advantageous changes in microclimate.
The roots of the plants bind the soil together, and interweave with other roots, forming a more solid mass that is less susceptible to both water and wind erosion. The removal of vegetation increases the rate of surface erosion. Topography The topography of the land determines the velocity at which surface runoff will flow, which in turn determines the erosivity of the runoff. Longer, steeper slopes (especially those without adequate vegetative cover) are more susceptible to very high rates of erosion during heavy rains than shorter, less steep slopes. Steeper terrain is also more prone to mudslides, landslides, and other forms of gravitational erosion processes. Tectonics Tectonic processes control rates and distributions of erosion at the Earth's surface. If the tectonic action causes part of the Earth's surface (e.g., a mountain range) to be raised or lowered relative to surrounding areas, this must necessarily change the gradient of the land surface. Because erosion rates are almost always sensitive to the local slope (see above), this will change the rates of erosion in the uplifted area. Active tectonics also brings fresh, unweathered rock towards the surface, where it is exposed to the action of erosion. However, erosion can also affect tectonic processes. The removal by erosion of large amounts of rock from a particular region, and its deposition elsewhere, can result in a lightening of the load on the lower crust and mantle. Because tectonic processes are driven by gradients in the stress field developed in the crust, this unloading can in turn cause tectonic or isostatic uplift in the region. In some cases, it has been hypothesised that these twin feedbacks can act to
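One standard way of combining the climatic, vegetative, and topographic factors discussed above is the Revised Universal Soil Loss Equation (RUSLE), which is not cited in the text but follows the same logic: annual soil loss is modelled as the product of a rainfall erosivity factor, a soil erodibility factor, a slope length-steepness factor, a cover factor, and a support practice factor. The sketch below is purely illustrative, and all factor values are made-up placeholders rather than measured data.

```python
# Illustrative sketch only: the RUSLE combines the erosion factors discussed
# above multiplicatively. None of the numbers below are measured values.

def rusle_soil_loss(R, K, LS, C, P):
    """Estimated annual soil loss A (t/ha/yr) from multiplicative RUSLE factors.

    R  -- rainfall-runoff erosivity (climate)
    K  -- soil erodibility
    LS -- slope length and steepness (topography)
    C  -- cover management (vegetation; lower = better protected)
    P  -- support practices (e.g. contouring; 1.0 = none)
    """
    return R * K * LS * C * P

if __name__ == "__main__":
    bare_steep = rusle_soil_loss(R=900, K=0.3, LS=2.5, C=0.45, P=1.0)
    vegetated_gentle = rusle_soil_loss(R=900, K=0.3, LS=0.4, C=0.05, P=1.0)
    print(f"bare, steep slope:       {bare_steep:.1f} t/ha/yr")
    print(f"vegetated, gentle slope: {vegetated_gentle:.1f} t/ha/yr")
```

The multiplicative form makes the qualitative point from the text explicit: reducing the cover factor (denser vegetation) or the slope factor (gentler, shorter slopes) scales the predicted loss down directly, regardless of the rainfall erosivity.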
dimension are isomorphic. Therefore, in many cases, it is possible to work with a specific Euclidean space, which is generally the real -space equipped with the dot product. An isomorphism from a Euclidean space to associates with each point an -tuple of real numbers which locate that point in the Euclidean space and are called the Cartesian coordinates of that point. Definition History of the definition Euclidean space was introduced by ancient Greeks as an abstraction of our physical space. Their great innovation, appearing in Euclid's Elements, was to build and prove all geometry by starting from a few very basic properties, which are abstracted from the physical world, and cannot be mathematically proved because of the lack of more basic tools. These properties are called postulates, or axioms in modern language. This way of defining Euclidean space is still in use under the name of synthetic geometry. In 1637, René Descartes introduced Cartesian coordinates and showed that they allow geometric problems to be reduced to algebraic computations with numbers. This reduction of geometry to algebra was a major change of point of view, as, until then, the real numbers were defined in terms of lengths and distances. Euclidean geometry was not applied in spaces of more than three dimensions until the 19th century. Ludwig Schläfli generalized Euclidean geometry to spaces of n dimensions using both synthetic and algebraic methods, and discovered all of the regular polytopes (higher-dimensional analogues of the Platonic solids) that exist in Euclidean spaces of any number of dimensions. Despite the wide use of Descartes' approach, which was called analytic geometry, the definition of Euclidean space remained unchanged until the end of the 19th century. The introduction of abstract vector spaces allowed their use in defining Euclidean spaces with a purely algebraic definition. This new definition has been shown to be equivalent to the classical definition in terms of geometric axioms. It is this algebraic definition that is now most often used for introducing Euclidean spaces. Motivation of the modern definition One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angles. For example, there are two fundamental operations (referred to as motions) on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation around a fixed point in the plane, in which all points in the plane turn around that fixed point through the same angle. One of the basic tenets of Euclidean geometry is that two figures (usually considered as subsets) of the plane should be considered equivalent (congruent) if one can be transformed into the other by some sequence of translations, rotations and reflections (see below). In order to make all of this mathematically precise, the theory must clearly define what a Euclidean space is, and the related notions of distance, angle, translation, and rotation. Even when used in physical theories, Euclidean space is an abstraction detached from actual physical locations, specific reference frames, measurement instruments, and so on. A purely mathematical definition of Euclidean space also ignores questions of units of length and other physical dimensions: the distance in a "mathematical" space is a number, not something expressed in inches or metres.
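As a concrete, purely illustrative sketch of these two motions, the following code represents points of the plane by Cartesian coordinate pairs and checks that a translation and a rotation about the origin both leave the distance between two points unchanged, which is what qualifies them as congruence-preserving motions.

```python
# Illustrative sketch of the plane motions mentioned above, using Cartesian
# coordinates: a translation shifts every point by the same vector, a rotation
# about the origin turns every point through the same angle, and both preserve
# distances between points.
import math

def translate(p, v):
    return (p[0] + v[0], p[1] + v[1])

def rotate(p, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def distance(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

a, b = (1.0, 0.0), (0.0, 2.0)
d_before = distance(a, b)
d_translated = distance(translate(a, (3, -1)), translate(b, (3, -1)))
d_rotated = distance(rotate(a, math.pi / 3), rotate(b, math.pi / 3))
print(d_before, d_translated, d_rotated)   # all three agree: motions are isometries
```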
The standard way to mathematically define a Euclidean space, as carried out in the remainder of this article, is to define a Euclidean space as a set of points on which acts a real vector space, the space of translations which is equipped with an inner product. The action of translations makes the space an affine space, and this allows defining lines, planes, subspaces, dimension, and parallelism. The inner product allows defining distance and angles. The set of -tuples of real numbers equipped with the dot product is a Euclidean space of dimension . Conversely, the choice of a point called the origin and an orthonormal basis of the space of translations is equivalent to defining an isomorphism between a Euclidean space of dimension and viewed as a Euclidean space. It follows that everything that can be said about a Euclidean space can also be said about Therefore, many authors, especially at an elementary level, call the standard Euclidean space of dimension , or simply the Euclidean space of dimension . A reason for introducing such an abstract definition of Euclidean spaces, and for working with it instead of is that it is often preferable to work in a coordinate-free and origin-free manner (that is, without choosing a preferred basis and a preferred origin). Another reason is that there is neither an origin nor any basis in the physical world. Technical definition A Euclidean vector space is a finite-dimensional inner product space over the real numbers. A Euclidean space is an affine space over the reals such that the associated vector space is a Euclidean vector space. Euclidean spaces are sometimes called Euclidean affine spaces to distinguish them from Euclidean vector spaces. If is a Euclidean space, its associated vector space is often denoted The dimension of a Euclidean space is the dimension of its associated vector space. The elements of are called points and are commonly denoted by capital letters. The elements of are called Euclidean vectors or free vectors. They are also called translations, although, properly speaking, a translation is the geometric transformation resulting from the action of a Euclidean vector on the Euclidean space. The action of a translation on a point provides a point that is denoted . This action satisfies (The second in the left-hand side is a vector addition; all others denote an action of a vector on a point. This notation is not ambiguous, as, for distinguishing between the two meanings of , it suffices to look at the nature of its left argument.) The fact that the action is free and transitive means that for every pair of points there is exactly one vector such that . This vector is denoted or As previously explained, some of the basic properties of Euclidean spaces result from the structure of an affine space. They are described in and its subsections. The properties resulting from the inner product are explained in and its subsections. Prototypical examples For any vector space, the addition acts freely and transitively on the vector space itself. Thus a Euclidean vector space can be viewed as a Euclidean space that has itself as associated vector space. A typical case of Euclidean vector space is viewed as a vector space equipped with the dot product as an inner product. The importance of this particular example of Euclidean space lies in the fact that every Euclidean space is isomorphic to it.
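A minimal sketch of this point/vector distinction, purely illustrative and not any established library's API: points and translation vectors are separate types, a vector acts on a point to give a point, and subtracting two points recovers the unique vector carrying one to the other.

```python
# Illustrative sketch of the affine structure described above: a vector acts
# on a point to give a point, and the difference of two points is the unique
# vector carrying one to the other.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    x: float
    y: float
    def __add__(self, other):            # vector + vector -> vector
        return Vector(self.x + other.x, self.y + other.y)

@dataclass(frozen=True)
class Point:
    x: float
    y: float
    def __add__(self, v: Vector):        # point + vector -> point (the action)
        return Point(self.x + v.x, self.y + v.y)
    def __sub__(self, other):            # point - point -> vector
        return Vector(self.x - other.x, self.y - other.y)

P, Q = Point(1, 2), Point(4, 0)
v = Q - P                               # the unique vector with P + v == Q
assert P + v == Q
w = Vector(0.5, 0.5)
assert (P + v) + w == P + (v + w)       # the action is compatible with vector addition
```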
More precisely, given a Euclidean space of dimension , the choice of a point, called an origin and an orthonormal basis of defines an isomorphism of Euclidean spaces from to As every Euclidean space of dimension is isomorphic to it, the Euclidean space is sometimes called the standard Euclidean space of dimension . Affine structure Some basic properties of Euclidean spaces depend only on the fact that a Euclidean space is an affine space. They are called affine properties and include the concepts of lines, subspaces, and parallelism, which are detailed in the next subsections. Subspaces Let be a Euclidean space and its associated vector space. A flat, Euclidean subspace or affine subspace of is a subset of such that is a linear subspace of A Euclidean subspace is a Euclidean space with as associated vector space. This linear subspace is called the direction of . If is a point of then Conversely, if is a point of and is a linear subspace of then is a Euclidean subspace of direction . A Euclidean vector space (that is, a Euclidean space such that ) has two sorts of subspaces: its Euclidean subspaces and its linear subspaces. Linear subspaces are Euclidean subspaces and a Euclidean subspace is a linear subspace if and only if it contains the zero vector. Lines and segments In a Euclidean space, a line is a Euclidean subspace of dimension one. Since a vector space of dimension one is spanned by any nonzero vector a line is a set of the form where and are two distinct points. It follows that there is exactly one line that passes through (contains) two distinct points. This implies that two distinct lines intersect in at most one point. A more symmetric representation of the line passing through and is where is an arbitrary point (not necessarily on the line). In a Euclidean vector space, the zero vector is usually chosen for ; this allows simplifying the preceding formula into A standard convention allows using this formula in every Euclidean space, see . The line segment, or simply segment, joining the points and is the subset of the points such that in the preceding formulas. It is denoted or ; that is Parallelism Two subspaces and of the same dimension in a Euclidean space are parallel if they have the same direction. Equivalently, they are parallel if there is a translation vector that maps one to the other: Given a point and a subspace , there exists exactly one subspace that contains and is parallel to , which is In the case where is a line (subspace of dimension one), this property is Playfair's axiom. It follows that in a Euclidean plane, two lines either meet in one point or are parallel. The concept of parallel subspaces has been extended to subspaces of different dimensions: two subspaces are parallel if the direction of one of them is contained in the direction of the other. Metric structure The vector space associated to a Euclidean space is an inner product space. This implies a symmetric bilinear form that is positive definite (that is is always positive for ). The inner product of a Euclidean space is often called dot product and denoted . This is especially the case when a Cartesian coordinate system has been chosen, as, in this case, the inner product of two vectors is the dot product of their coordinate vectors. For this reason, and for historical reasons, the dot notation is more commonly used than the bracket notation for the inner product of Euclidean spaces. This article will follow this usage; that is will be denoted in the remainder of this article.
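Before turning to the metric structure, the affine notions just described can be sketched in coordinates (illustrative only, with hypothetical helper names): the line through distinct points P and Q consists of the points P + t(Q - P) for real t, the segment joining them corresponds to 0 <= t <= 1, and two lines are parallel exactly when their direction vectors are proportional.

```python
# Illustrative sketch of lines, segments, and parallelism in coordinates.

def point_on_line(p, q, t):
    """P + t*(Q - P), written with plain coordinate tuples."""
    return tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))

def on_segment(p, q, x, eps=1e-9):
    """True if x lies on the segment joining p and q (2D case)."""
    d = (q[0] - p[0], q[1] - p[1])
    w = (x[0] - p[0], x[1] - p[1])
    cross = d[0] * w[1] - d[1] * w[0]          # collinearity test
    if abs(cross) > eps:
        return False
    t = (w[0] * d[0] + w[1] * d[1]) / (d[0] ** 2 + d[1] ** 2)
    return -eps <= t <= 1 + eps

def parallel(d1, d2, eps=1e-9):
    """Two 2D direction vectors span the same line iff their cross product is 0."""
    return abs(d1[0] * d2[1] - d1[1] * d2[0]) < eps

P, Q = (0.0, 0.0), (2.0, 1.0)
assert on_segment(P, Q, point_on_line(P, Q, 0.25))
assert not on_segment(P, Q, point_on_line(P, Q, 1.5))   # on the line, not the segment
assert parallel((2, 1), (4, 2)) and not parallel((2, 1), (1, 2))
```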
The Euclidean norm of a vector is The inner product and the norm allow expressing and proving all metric and topological properties of Euclidean geometry. The next subsections describe the most fundamental ones. In these subsections, denotes an arbitrary Euclidean space, and denotes its vector space of translations. Distance and length The distance (more precisely the Euclidean distance) between two points of a Euclidean space is the norm of the translation vector that maps one point to the other; that is The length of a segment is the distance between its endpoints. It is often denoted . The distance is a metric, as it is positive definite, symmetric, and satisfies the triangle inequality Moreover, the equality is true if and only if belongs to the segment . This inequality means that the length of any edge of a triangle is smaller than the sum of the lengths of the other edges. This is the origin of the term triangle inequality. With the Euclidean distance, every Euclidean space is a complete metric space. Orthogonality Two nonzero vectors and of are perpendicular or orthogonal if their inner product is zero: Two linear subspaces of are orthogonal if every nonzero vector of the first one is perpendicular to every nonzero vector of the second one. This implies that the intersection of the linear subspaces is reduced to the zero vector. Two lines, and more generally two Euclidean subspaces, are orthogonal if their directions are orthogonal. Two orthogonal lines that intersect are said to be perpendicular. Two segments and that share a common endpoint are perpendicular or form a right angle if the vectors and are orthogonal. If and form a right angle, one has This is the Pythagorean theorem. Its proof is easy in this context, as, expressing this in terms of the inner product, one has, using bilinearity and symmetry of the inner product: Angle The (non-oriented) angle between two nonzero vectors and in is where is the principal value of the arccosine function. By the Cauchy–Schwarz inequality, the argument of the arccosine is in the interval . Therefore is real, and (or if angles are measured in degrees). Angles are not useful in a Euclidean line, as they can be only 0 or . In an oriented Euclidean plane, one can define the oriented angle of two vectors. The oriented angle of two vectors and is then the opposite of the oriented angle of and . In this case, the angle of two vectors can have any value modulo an integer multiple of . In particular, a reflex angle equals the negative angle . The angle of two vectors does not change if they are multiplied by positive numbers. More precisely, if and are two vectors, and and are real numbers, then If , , and are three points in a Euclidean space, the angle of the segments and is the angle of the vectors and As the multiplication of vectors by positive numbers does not change the angle, the angle of two half-lines with initial point can be defined: it is the angle of the segments and , where and are arbitrary points, one on each half-line. Although this is less used, one can define similarly the angle of segments or half-lines that do not share an initial point. The angle of two lines is defined as follows. If is the angle of two segments, one on each line, the angle of any two other segments, one on each line, is either or . One of these angles is in the interval , and the other is in . The non-oriented angle of the two lines is the one in the interval . In an oriented Euclidean plane,
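A short illustrative sketch of these metric notions (not any particular library's API): it computes the Euclidean norm, the distance between points, and the non-oriented angle via the arccosine of the normalized inner product, clamping the ratio to [-1, 1] to guard against floating-point round-off (the Cauchy-Schwarz inequality guarantees it lies in that interval exactly), and it checks orthogonality together with the Pythagorean theorem numerically.

```python
# Illustrative sketch of norm, distance, angle, and orthogonality in R^n.
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def distance(p, q):
    return norm(tuple(qi - pi for pi, qi in zip(p, q)))

def angle(u, v):
    """Non-oriented angle in [0, pi] between two nonzero vectors."""
    c = dot(u, v) / (norm(u) * norm(v))
    return math.acos(max(-1.0, min(1.0, c)))   # clamp against round-off

u, v = (3.0, 0.0), (0.0, 4.0)
assert dot(u, v) == 0.0                                   # orthogonal vectors
assert math.isclose(norm((u[0] + v[0], u[1] + v[1])) ** 2,
                    norm(u) ** 2 + norm(v) ** 2)          # Pythagorean theorem
print(math.degrees(angle(u, v)))                          # 90.0
print(distance((1.0, 2.0), (4.0, 6.0)))                   # 5.0
```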
muralist, illustrator, and painter. He flourished at the beginning of what is now referred to as the "golden age" of illustration, and is best known for his drawings and paintings of Shakespearean and Victorian subjects, as well as for his painting of Edward VII's coronation. His most famous set of murals, The Quest and Achievement of the Holy Grail, adorns the Boston Public Library. Biography Abbey was born in Philadelphia in 1852. He studied art at the Pennsylvania Academy of the Fine Arts under Christian Schuessele. Abbey began as an illustrator, producing numerous illustrations and sketches for such magazines as Harper's Weekly (1871–1874) and Scribner's Magazine. His illustrations began appearing in Harper's Weekly before Abbey was twenty years old. He moved to New York City in 1871. His illustrations were strongly influenced by French and German black and white art. He also illustrated several best-selling books, including Christmas Stories by Charles Dickens (1875), Selections from the Poetry of Robert Herrick (1882), and She Stoops to Conquer by Oliver Goldsmith (1887). Abbey also illustrated a four-volume set of The Comedies of Shakespeare for Harper & Brothers in 1896. He moved to England in 1878, at the request of his employers, to gather material for illustrations of the poems of Robert Herrick, published in 1882, and he settled permanently there in 1883. In 1883, he was elected to the Royal Institute of Painters in Water-Colours. About this time, he was appraised critically by the American writer S.G.W. Benjamin. He also created illustrations for Goldsmith's She Stoops to Conquer (1887), for a volume of Old Songs (1889), and for the comedies (and a few of the tragedies) of Shakespeare. Among his water-colours are "The Evil Eye" (1877), "The Rose in October" (1879), "An Old Song" (1886), "The Visitors" (1890), and "The Jongleur" (1892). Possibly his best known pastels are "Beatrice", "Phyllis", and "Two Noble Kinsmen". In 1890 he made his first appearance with an oil painting, "A May Day Morn", at the Royal Academy in London. He exhibited "Richard, Duke of Gloucester, and the Lady Anne" there in 1896, and in that year was elected A.R.A., becoming a full member in 1898. He received a gold medal at the Pan-American Exposition in 1901, and in the next year he was chosen to paint the coronation of King Edward VII. It was the official painting of the occasion and, hence, resides at Buckingham Palace. He was offered a knighthood in 1907 but is said to have declined it. Friendly with other expatriate American artists, he summered at Broadway, Worcestershire, England, where he painted and vacationed alongside John Singer Sargent at the home of Francis Davis Millet. He completed murals for the Boston Public Library in the 1890s. The frieze for the Library was titled "The Quest and Achievement of the Holy Grail". It took Abbey eleven years to complete this series of murals in his studio in England. In 1897 he received the honorary degree of A.M. from Yale University. In 1904 he painted a mural for the Royal Exchange, London, Reconciliation of the Skinners & Merchant Taylors' Companies by Lord Mayor Billesden, 1484. Pennsylvania State Capitol In 1908–09, Abbey began an ambitious program of murals and other artworks for the newly completed Pennsylvania State Capitol in Harrisburg, Pennsylvania.
These included allegorical medallion murals representing Science, Art, Justice, and Religion for the dome of the Rotunda, four large lunette murals beneath the dome, and multiple works for the House and Senate Chambers. For the Senate
Royal Bavarian Society and the Société Nationale des Beaux-Arts, and was made a chevalier of the French Legion of Honour. He was a prolific illustrator, and his attention to detail, including historical accuracy, influenced successive generations of illustrators. In 1890, Edwin married Gertrude Mead, the daughter of a wealthy New York merchant. Mrs Abbey encouraged her husband to secure more ambitious commissions. The couple, who married when both were in their forties, remained childless. After her husband's death, Gertrude was active in preserving his legacy, writing about his work and giving her substantial collection and archive to Yale. Edwin had been a keen supporter of the newly founded British School at Rome (BSR), so, in his memory, she donated £6000 to assist in building the artists' studio block and, in 1926, founded the Incorporated Edwin Austin Abbey Memorial Scholarships. The scholarships were established to enable British and American painters to pursue their practice. Recipients of Abbey funding – Scholars and, more recently, Fellows – devote their scholarship to working in the studios at the BSR, where there has, ever since, been at least one Abbey-funded artist in residence. Previous award holders include Stephen Farthing, Chantal Joffe and Spartacus Chetwynd. The Abbey Fellowships (formerly 'Awards') were established in their present form in 1990, and the Abbey studios also host the BSR's other Fine Art residencies, such as the Derek Hill Foundation Scholarship and the Sainsbury Scholarship in Painting and Drawing. A bust of Edwin Abbey, by Sir Thomas Brock, stands in the courtyard of the BSR. Edwin also left bequests of his works to the Metropolitan Museum of Art in New York, to the Museum of Fine Arts, Boston and to the National Gallery in London. Abbey is buried in the churchyard of Old St Andrew's Church in Kingsbury, London. His grave is Grade II listed. Works by Abbey Bibliography Dickens, C. - Christmas Stories, Harper & Brothers, 1875 Longfellow, H. W. - The Poetical Works, Houghton, 1880-1883 Herrick, R. - Selections from the Poetry of Robert Herrick, Harper & Brothers, 1882 Black, W. - Judith Shakespeare, Harper & Brothers, 1884 Boughton, G. H. - Sketching Rambles in Holland, Macmillan 1885 Sheridan, R. B. - Comedies, Chatto & Windus, London, 1885 Goldsmith, O. - She Stoops to Conquer, Harper & Brothers, 1887 Abbey, E. A. - Old Songs, Harper & Brothers, 1888 ----- The Quiet Life, Harper & Brothers, 1890 Shakespeare, W. - The Comedies, Harper & Brothers, 1896 Goldsmith, O. - The Deserted Village, Harper & Brothers, 1902 Stevens, L. O. - King Arthur Stories, Houghton 1908
has not eliminated many harmful conditions and nonadaptive characteristics that appear among older adults, such as Alzheimer disease. If it were a disease that killed 20-year-olds instead of 70-year-olds, it might have been one that natural selection could have eliminated ages ago. Thus, unaided by evolutionary pressures against nonadaptive conditions, modern humans suffer the aches, pains, and infirmities of aging, and, as the benefits of evolutionary selection decrease with age, the need for modern technological interventions against nonadaptive conditions increases. Social psychology As humans are a highly social species, there are many adaptive problems associated with navigating the social world (e.g., maintaining allies, managing status hierarchies, interacting with outgroup members, coordinating social activities, collective decision-making). Researchers in the emerging field of evolutionary social psychology have made many discoveries pertaining to topics traditionally studied by social psychologists, including person perception, social cognition, attitudes, altruism, emotions, group dynamics, leadership, motivation, prejudice, intergroup relations, and cross-cultural differences. When endeavouring to solve a problem, humans show determination from an early age, while chimpanzees have no comparable facial expression. Researchers suspect the human expression of determination evolved because, when a human is determinedly working on a problem, other people will frequently help. Abnormal psychology Adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies between physiological and psychological dysfunctions. Prominent theorists and evolutionary psychiatrists include Michael T. McGuire, Anthony Stevens, and Randolph M. Nesse. They, and others, suggest that mental disorders are due to the interactive effects of both nature and nurture, and often have multiple contributing causes. Evolutionary psychologists have suggested that schizophrenia and bipolar disorder may reflect a side-effect of genes with fitness benefits, such as increased creativity. (Some individuals with bipolar disorder are especially creative during their manic phases, and the close relatives of people with schizophrenia have been found to be more likely to have creative professions.) A 1994 report by the American Psychiatric Association found that people suffered from schizophrenia at roughly the same rate in Western and non-Western cultures, and in industrialized and pastoral societies, suggesting that schizophrenia is neither a disease of civilization nor an arbitrary social invention. Sociopathy may represent an evolutionarily stable strategy, by which a small number of people who cheat on social contracts benefit in a society consisting mostly of non-sociopaths. Mild depression may be an adaptive response to withdraw from, and re-evaluate, situations that have led to disadvantageous outcomes (the "analytical rumination hypothesis") (see Evolutionary approaches to depression). Some of these speculations have yet to be developed into fully testable hypotheses, and a great deal of research is required to confirm their validity. Antisocial and criminal behavior Evolutionary psychology has been applied to explain criminal or otherwise immoral behavior as being adaptive or related to adaptive behaviors.
Males are generally more aggressive than females, who are more selective of their partners because of the far greater effort they have to contribute to pregnancy and child-rearing. Males being more aggressive is hypothesized to stem from the more intense reproductive competition that they face. Males of low status may be especially vulnerable to being childless. It may have been evolutionarily advantageous to engage in highly risky and violently aggressive behavior to increase their status and therefore reproductive success. This may explain why males are generally involved in more crimes, and why low status and being unmarried are associated with criminality. Furthermore, competition over females is argued to have been particularly intensive in late adolescence and young adulthood, which is theorized to explain why crime rates are particularly high during this period. Some sociologists have underlined differential exposure to androgens as the cause of these behaviors, notably Lee Ellis in his evolutionary neuroandrogenic (ENA) theory. Many conflicts that result in harm and death involve status, reputation, and seemingly trivial insults. Steven Pinker in his book The Blank Slate argues that in non-state societies without a police force it was very important to have a credible deterrent against aggression. Therefore, it was important to be perceived as having a credible reputation for retaliation, resulting in humans developing instincts for revenge as well as for protecting reputation ("honor"). Pinker argues that the development of the state and the police have dramatically reduced the level of violence compared to the ancestral environment. Whenever the state breaks down, which can happen very locally, such as in poor areas of a city, humans again organize in groups for protection and aggression, and concepts such as violent revenge and protecting honor again become extremely important. Rape is theorized to be a reproductive strategy that facilitates the propagation of the rapist's progeny. Such a strategy may be adopted by men who otherwise are unlikely to be appealing to women and therefore cannot form legitimate relationships, or by high-status men preying on socially vulnerable women who are unlikely to retaliate, in order to increase their reproductive success even further. The sociobiological theories of rape are highly controversial, as traditional theories typically do not consider rape to be a behavioral adaptation, and objections to this theory are made on ethical, religious, and political, as well as scientific, grounds. Psychology of religion Adaptationist perspectives on religious belief suggest that, like all behavior, religious behaviors are a product of the human brain. As with all other organ functions, cognition's functional structure has been argued to have a genetic foundation, and is therefore subject to the effects of natural selection and sexual selection. Like other organs and tissues, this functional structure should be universally shared amongst humans and should have solved important problems of survival and reproduction in ancestral environments. However, evolutionary psychologists remain divided on whether religious belief is more likely a consequence of evolved psychological adaptations, or a byproduct of other cognitive adaptations. Coalitional psychology Coalitional psychology is an approach to explain political behaviors between different coalitions and the conditionality of these behaviors from an evolutionary psychological perspective.
This approach assumes that since human beings appeared on the earth, they have evolved to live in groups instead of as individuals, to achieve benefits such as more mating opportunities and increased status. Human beings thus naturally think and act in a way that manages and negotiates group dynamics. Coalitional psychology offers falsifiable ex ante predictions by positing five hypotheses on how these psychological adaptations operate: Humans represent groups as a special category of individual, unstable and with a short shadow of the future. Political entrepreneurs strategically manipulate the coalitional environment, often appealing to emotional devices such as "outrage" to inspire collective action. Relative gains dominate relations with enemies, whereas absolute gains characterize relations with allies. Coalitional size and male physical strength will positively predict individual support for aggressive foreign policies. Individuals with children, particularly women, will differ in their support for aggressive foreign policies from those without progeny. Reception and criticism Critics of evolutionary psychology accuse it of promoting genetic determinism, panadaptationism (the idea that all behaviors and anatomical features are adaptations), unfalsifiable hypotheses, distal or ultimate explanations of behavior when proximate explanations are superior, and malevolent political or moral ideas. Ethical implications Critics have argued that evolutionary psychology might be used to justify existing social hierarchies and reactionary policies. It has also been suggested by critics that evolutionary psychologists' theories and interpretations of empirical data rely heavily on ideological assumptions about race and gender. In response to such criticism, evolutionary psychologists often caution against committing the naturalistic fallacy – the assumption that "what is natural" is necessarily a moral good. However, their caution against committing the naturalistic fallacy has been criticized as a means to stifle legitimate ethical discussions. Contradictions in models Some criticisms of evolutionary psychology point at contradictions between different aspects of adaptive scenarios posited by evolutionary psychology. One example is the evolutionary psychology model of extended social groups selecting for modern human brains. The claimed contradiction is that the synaptic function of modern human brains requires high amounts of many specific essential nutrients, so that a transition to higher requirements for the same essential nutrients, shared by all individuals in a population, would decrease the possibility of forming large groups, because bottleneck foods with rare essential nutrients would cap group sizes. As additional arguments against big brains promoting social networking, it is mentioned that some insects have societies with different ranks for each individual and that monkeys remain socially functional after the removal of most of the brain. The model of males as both providers and protectors is criticized for the impossibility of being in two places at once: the male cannot both protect his family at home and be out hunting at the same time.
In the case of the claim that a provider male could buy protection service for his family from other males by bartering food that he had hunted, critics point to the fact that the most valuable food (the food that contained the rarest essential nutrients) would be different in different ecologies, being vegetable in some geographical areas and animal in others. This would make it impossible for hunting styles relying on physical strength or risk-taking to be universally of similar value in bartered food, and would instead make it inevitable that in some parts of Africa, food gathered with no need for major physical strength would be the most valuable to barter for protection. Critics also point at a contradiction between evolutionary psychology's claim that men needed to be more sexually visual than women, so that men could assess women's fertility faster than women needed to assess men's genes, and its claim that male sexual jealousy guards against infidelity: it would be pointless for a male to assess female fertility quickly if he also needed to assess the risk of there being a jealous male mate and, in that case, his chances of defeating him before mating (there is no point in assessing one necessary condition faster than another necessary condition can possibly be assessed). Standard social science model Evolutionary psychology has been entangled in the larger philosophical and social science controversies related to the debate on nature versus nurture. Evolutionary psychologists typically contrast evolutionary psychology with what they call the standard social science model (SSSM). They characterize the SSSM as the "blank slate", "relativist", "social constructionist", and "cultural determinist" perspective that they say dominated the social sciences throughout the 20th century and assumed that the mind was shaped almost entirely by culture. Critics have argued that evolutionary psychologists created a false dichotomy between their own view and the caricature of the SSSM. Other critics regard the SSSM as a rhetorical device or a straw man and suggest that the scientists whom evolutionary psychologists associate with the SSSM did not believe that the mind was a blank slate devoid of any natural predispositions. Reductionism and determinism Some critics view evolutionary psychology as a form of genetic reductionism and genetic determinism, a common critique being that evolutionary psychology does not address the complexity of individual development and experience and fails to explain the influence of genes on behavior in individual cases. Evolutionary psychologists respond that they are working within a nature-nurture interactionist framework that acknowledges that many psychological adaptations are facultative (sensitive to environmental variations during individual development). The discipline is generally not focused on proximate analyses of behavior, but rather its focus is on the study of distal/ultimate causality (the evolution of psychological adaptations). The field of behavioral genetics is focused on the study of the proximate influence of genes on behavior. Testability of hypotheses A frequent critique of the discipline is that the hypotheses of evolutionary psychology are frequently arbitrary and difficult or impossible to adequately test, thus questioning its status as an actual scientific discipline, for example because many current traits probably evolved to serve different functions than they do now.
Thus, because there is a potentially infinite number of alternative explanations for why a trait evolved, critics contend that it is impossible to determine the exact explanation. While evolutionary psychology hypotheses are difficult to test, evolutionary psychologists assert that it is not impossible. Part of the critique of the scientific base of evolutionary psychology includes a critique of the concept of the Environment of Evolutionary Adaptedness (EEA). Some critics have argued that researchers know so little about the environment in which Homo sapiens evolved that explaining specific traits as an adaptation to that environment becomes highly speculative. Evolutionary psychologists respond that they do know many things about this environment, including the facts that present-day humans' ancestors were hunter-gatherers, that they generally lived in small tribes, etc. Edward Hagen argues that the environments of the human past were not radically different from the present in the sense that those of the Carboniferous or Jurassic periods were, and that the animal and plant taxa of the era were similar to those of the modern world, as was the geology and ecology. Hagen argues that few would deny that other organs evolved in the EEA (for example, lungs evolving in an oxygen-rich atmosphere), yet critics question whether or not the brain's EEA is truly knowable, which he argues constitutes selective scepticism. Hagen also argues that most evolutionary psychology research is based on the fact that females can get pregnant and males cannot, which he observes was also true in the EEA. John Alcock describes this as the "No Time Machine Argument", as critics argue that since it is not possible to travel back in time to the EEA, it cannot be determined what was going on there and thus what was adaptive. Alcock argues that present-day evidence allows researchers to be reasonably confident about the conditions of the EEA and that the fact that so many human behaviours are adaptive in the current environment is evidence that the ancestral environment of humans had much in common with the present one, as these behaviours would have evolved in the ancestral environment. Thus Alcock concludes that researchers can make predictions on the adaptive value of traits. Similarly, Dominic Murphy argues that alternative explanations cannot just be put forward but instead need their own evidence and predictions; if one explanation makes predictions that the others cannot, it is reasonable to have confidence in that explanation. In addition, Murphy argues that other historical sciences also make predictions about modern phenomena to come up with explanations about past phenomena: for example, cosmologists look for evidence for what we would expect to see in the modern day if the Big Bang were true, while geologists make predictions about modern phenomena to determine if an asteroid wiped out the dinosaurs. Murphy argues that if other historical disciplines can conduct tests without a time machine, then the onus is on the critics to show why evolutionary psychology is untestable if other historical disciplines are not, as "methods should be judged across the board, not singled out for ridicule in one context." Modularity of mind Evolutionary psychologists generally presume that, like the body, the mind is made up of many evolved modular adaptations, although there is some disagreement within the discipline regarding the degree of general plasticity, or "generality," of some modules.
It has been suggested that modularity evolves because, compared to non-modular networks, it would have conferred an advantage in terms of fitness and because connection costs are lower. In contrast, some academics argue that it is unnecessary to posit the existence of highly domain-specific modules, and suggest that the neural anatomy of the brain supports a model based on more domain-general faculties and processes. Moreover, empirical support for the domain-specific theory stems almost entirely from performance on variations of the Wason selection task, which is extremely limited in scope as it tests only one subtype of deductive reasoning. Cultural rather than genetic development of cognitive tools Cecilia Heyes has argued that the picture presented by some evolutionary psychology of the human mind as a collection of cognitive instincts – organs of thought shaped by genetic evolution over very long time periods – does not fit research results. She posits instead that humans have cognitive gadgets – "special-purpose organs of thought" – built in the course of development through social interaction. Similar criticisms are articulated by Subrena E. Smith of the University of New Hampshire. Response by evolutionary psychologists Evolutionary psychologists have addressed many of their critics (see, for example, books by Segerstråle (2000), Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond, Barkow (2005), Missing the Revolution: Darwinism for Social Scientists, and Alcock (2001), The Triumph of Sociobiology). Among their rebuttals are that some criticisms are straw men, are based on an incorrect nature versus nurture dichotomy, are based on misunderstandings of the discipline, etc. Robert Kurzban suggested that "...critics of the field, when they err, are not slightly missing the mark. Their confusion is deep and profound. It's not like they are marksmen who can't quite hit the center of the target; they're holding the gun backwards."
See also
Affective neuroscience
Behavioural genetics
Biocultural evolution
Biosocial criminology
Collective unconscious
Cognitive neuroscience
Cultural neuroscience
Darwinian Happiness
Darwinian literary studies
Deep social mind
Dunbar's number
Evolution of the brain
List of evolutionary psychologists
Evolutionary origin of religions
Evolutionary psychiatry
Evolutionary psychology and culture
Molecular evolution
Primate cognition
Hominid intelligence
Human ethology
Great ape language
Chimpanzee intelligence
Cooperative eye hypothesis
Id, ego, and superego
Intersubjectivity
Mirror neuron
Noogenesis
Origin of language
Origin of speech
Ovulatory shift hypothesis
Primate empathy
Shadow (psychology)
Simulation theory of empathy
Theory of mind
Neuroethology
Paleolithic diet
Paleolithic lifestyle
r/K selection theory
Social neuroscience
Sociobiology
Universal Darwinism
Notes
References
Barkow, J., Cosmides, L. & Tooby, J. (1992). The adapted mind: Evolutionary psychology and the generation of culture. Oxford: Oxford University Press.
Buss, D. M. (1994). The evolution of desire: Strategies of human mating. New York: Basic Books.
Confer, Easton, Fleischman, Goetz, Lewis, Perilloux & Buss (2010). Evolutionary Psychology. American Psychologist.
Durrant, R., & Ellis, B. J. (2003). Evolutionary Psychology. In M. Gallagher & R. J. Nelson (Eds.), Comprehensive Handbook of Psychology, Volume Three: Biological Psychology (pp. 1–33). New York: Wiley & Sons.
Gaulin, Steven J. C. and Donald H. McBurney (2003). Evolutionary psychology. Prentice Hall.
Nesse, R. M. (2000). Tinbergen's Four Questions Organized.
Schacter, Daniel L., Daniel Wegner and Daniel Gilbert (2007). Psychology. Worth Publishers.
Tooby, J. & Cosmides, L. (2005). Conceptual foundations of evolutionary psychology. In D. M. Buss (Ed.), The Handbook of Evolutionary Psychology (pp. 5–67). Hoboken, NJ: Wiley.
Further reading
Heylighen, F. (2012). "Evolutionary Psychology", in: A. Michalos (ed.): Encyclopedia of Quality of Life Research (Springer, Berlin).
Gerhard Medicus (2015). Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB.
Oikkonen, Venla: Gender, Sexuality and Reproduction in Evolutionary Narratives. London: Routledge, 2013.
External links
PsychTable.org – collaborative effort to catalog human psychological adaptations
What Is Evolutionary Psychology? by Clinical Evolutionary Psychologist Dale Glaebach
Evolutionary Psychology – Approaches in Psychology
Academic societies
Human Behavior and Evolution Society; an international society dedicated to using evolutionary theory to study human nature
The International Society for Human Ethology; promotes ethological perspectives on the study of humans worldwide
European Human Behaviour and Evolution Association; an interdisciplinary society that supports the activities of European researchers with an interest in evolutionary accounts of human cognition, behavior and society
The Association for Politics and the Life Sciences; an international and interdisciplinary association of scholars, scientists, and policymakers concerned with evolutionary, genetic, and ecological knowledge and its bearing on political behavior, public policy and ethics
Society for Evolutionary Analysis in Law; a scholarly association dedicated to fostering interdisciplinary exploration of issues at the intersection of law, biology, and evolutionary theory
The New England Institute
psychologists say that animals from fiddler crabs to humans use eyesight for collision avoidance, suggesting that vision is basically for directing action, not providing knowledge. Building and maintaining sense organs is metabolically expensive, so these organs evolve only when they improve an organism's fitness. More than half the brain is devoted to processing sensory information, and the brain itself consumes roughly one-fourth of one's metabolic resources, so the senses must provide exceptional benefits to fitness. Perception accurately mirrors the world; animals get useful, accurate information through their senses. Scientists who study perception and sensation have long understood the human senses as adaptations to their surrounding worlds. Depth perception consists of processing over half a dozen visual cues, each of which is based on a regularity of the physical world. Vision evolved to respond to the narrow range of electromagnetic energy that is plentiful and that does not pass through objects. Sound waves go around corners and interact with obstacles, creating a complex pattern that includes useful information about the sources of and distances to objects. Larger animals naturally make lower-pitched sounds as a consequence of their size. The range over which an animal hears, on the other hand, is determined by adaptation. Homing pigeons, for example, can hear the very low-pitched sound (infrasound) that carries great distances, even though most smaller animals detect higher-pitched sounds. Taste and smell respond to chemicals in the environment that are thought to have been significant for fitness in the environment of evolutionary adaptedness. For example, salt and sugar were apparently both valuable to the human or pre-human inhabitants of the environment of evolutionary adaptedness, so present-day humans have an intrinsic hunger for salty and sweet tastes. The sense of touch is actually many senses, including pressure, heat, cold, tickle, and pain. Pain, while unpleasant, is adaptive. An important adaptation for senses is range shifting, by which the organism becomes temporarily more or less sensitive to sensation. For example, one's eyes automatically adjust to dim or bright ambient light. Sensory abilities of different organisms often coevolve, as is the case with the hearing of echolocating bats and that of the moths that have evolved to respond to the sounds that the bats make. Evolutionary psychologists contend that perception demonstrates the principle of modularity, with specialized mechanisms handling particular perception tasks. For example, people with damage to a particular part of the brain suffer from the specific defect of not being able to recognize faces (prosopagnosia). Evolutionary psychology suggests that this indicates a so-called face-reading module. Learning and facultative adaptations In evolutionary psychology, learning is said to be accomplished through evolved capacities, specifically facultative adaptations. Facultative adaptations express themselves differently depending on input from the environment. Sometimes the input comes during development and helps shape that development. For example, migrating birds learn to orient themselves by the stars during a critical period in their maturation. Evolutionary psychologists believe that humans also learn language along an evolved program, also with critical periods. The input can also come during daily tasks, helping the organism cope with changing environmental conditions. 
For example, animals evolved Pavlovian conditioning in order to solve problems about causal relationships. Animals accomplish learning tasks most easily when those tasks resemble problems that they faced in their evolutionary past, such as a rat learning where to find food or water. Learning capacities sometimes demonstrate differences between the sexes. In many animal species, for example, males can solve spatial problems faster and more accurately than females, due to the effects of male hormones during development. The same might be true of humans. Emotion and motivation Motivations direct and energize behavior, while emotions provide the affective component to motivation, positive or negative. In the early 1970s, Paul Ekman and colleagues began a line of research which suggests that many emotions are universal. He found evidence that humans share at least five basic emotions: fear, sadness, happiness, anger, and disgust. Social emotions evidently evolved to motivate social behaviors that were adaptive in the environment of evolutionary adaptedness. For example, spite seems to work against the individual but it can establish an individual's reputation as someone to be feared. Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status. Motivation has a neurobiological basis in the reward system of the brain. Recently, it has been suggested that reward systems may evolve in such a way that there may be an inherent or unavoidable trade-off in the motivational system for activities of short versus long duration. Cognition Cognition refers to internal representations of the world and internal information processing. From an evolutionary psychology perspective, cognition is not "general purpose," but uses heuristics, or strategies, that generally increase the likelihood of solving problems that the ancestors of present-day humans routinely faced. For example, present-day humans are far more likely to solve logic problems that involve detecting cheating (a common problem given humans' social nature) than the same logic problems put in purely abstract terms. Since the ancestors of present-day humans did not encounter truly random events, present-day humans may be cognitively predisposed to incorrectly identify patterns in random sequences. The gambler's fallacy is one example of this. Gamblers may falsely believe that they have hit a "lucky streak" even when each outcome is actually random and independent of previous trials. Most people believe that if a fair coin has been flipped 9 times and heads appears each time, then on the tenth flip there is a greater than 50% chance of getting tails. Humans find it far easier to make diagnoses or predictions using frequency data than when the same information is presented as probabilities or percentages, presumably because the ancestors of present-day humans lived in relatively small tribes (usually with fewer than 150 people) where frequency information was more readily available. Personality Evolutionary psychology is primarily interested in finding commonalities between people, or basic human psychological nature. From an evolutionary perspective, the fact that people have fundamental differences in personality traits initially presents something of a puzzle. (Note: The field of behavioral genetics is concerned with statistically partitioning differences between people into genetic and environmental sources of variance.
However, understanding the concept of heritability can be tricky – heritability refers only to the differences between people, never the degree to which the traits of an individual are due to environmental or genetic factors, since traits are always a complex interweaving of both.) Personality traits are conceptualized by evolutionary psychologists as normal variation around an optimum, as the result of frequency-dependent selection (behavioral polymorphisms), or as facultative adaptations. Like variability in height, some personality traits may simply reflect inter-individual variability around a general optimum. Or, personality traits may represent different genetically predisposed "behavioral morphs" – alternate behavioral strategies that depend on the frequency of competing behavioral strategies in the population. For example, if most of the population is generally trusting and gullible, the behavioral morph of being a "cheater" (or, in the extreme case, a sociopath) may be advantageous. Finally, like many other psychological adaptations, personality traits may be facultative – sensitive to typical variations in the social environment, especially during early development. For example, later-born children are more likely than firstborns to be rebellious, less conscientious and more open to new experiences, which may be advantageous to them given their particular niche in family structure. It is important to note that shared environmental influences do play a role in personality and are not always of less importance than genetic factors. However, shared environmental influences often decrease to near zero after adolescence but do not completely disappear. Language According to Steven Pinker, who builds on the work of Noam Chomsky, the universal human ability to learn to talk between the ages of 1 and 4, basically without training, suggests that language acquisition is a distinctly human psychological adaptation (see, in particular, Pinker's The Language Instinct). Pinker and Bloom (1990) argue that language as a mental faculty shares many likenesses with the complex organs of the body, which suggests that, like these organs, language has evolved as an adaptation, since this is the only known mechanism by which such complex organs can develop. Pinker follows Chomsky in arguing that the fact that children can learn any human language with no explicit instruction suggests that language, including most of grammar, is basically innate and that it only needs to be activated by interaction. Chomsky himself does not believe language to have evolved as an adaptation, but suggests that it likely evolved as a byproduct of some other adaptation, a so-called spandrel. But Pinker and Bloom argue that the organic nature of language strongly suggests that it has an adaptational origin. Evolutionary psychologists hold that the FOXP2 gene may well be associated with the evolution of human language. In the 1980s, psycholinguist Myrna Gopnik identified a dominant gene that causes language impairment in the KE family of Britain. This gene turned out to be a mutation of the FOXP2 gene. Humans have a unique allele of this gene, which has otherwise been closely conserved through most of mammalian evolutionary history. This unique allele seems to have first appeared between 100 and 200 thousand years ago, and it is now all but universal in humans. However, the once-popular idea that FOXP2 is a 'grammar gene' or that it triggered the emergence of language in Homo sapiens is now widely discredited.
Currently, several competing theories about the evolutionary origin of language coexist, none of them having achieved a general consensus. Researchers of language acquisition in primates and humans, such as Michael Tomasello and Talmy Givón, argue that the innatist framework has understated the role of imitation in learning and that it is not at all necessary to posit the existence of an innate grammar module to explain human language acquisition. Tomasello argues that studies of how children and primates actually acquire communicative skills suggest that humans learn complex behavior through experience, so that instead of a module specifically dedicated to language acquisition, language is acquired by the same cognitive mechanisms that are used to acquire all other kinds of socially transmitted behavior. On the issue of whether language is best seen as having evolved as an adaptation or as a spandrel, evolutionary biologist W. Tecumseh Fitch, following Stephen J. Gould, argues that it is unwarranted to assume that every aspect of language is an adaptation, or that language as a whole is an adaptation. He criticizes some strands of evolutionary psychology for suggesting a pan-adaptationist view of evolution, and dismisses Pinker and Bloom's question of whether "Language has evolved as an adaptation" as being misleading. He argues instead that from a biological viewpoint the evolutionary origins of language are best conceptualized as the probable result of a convergence of many separate adaptations into a complex system. A similar argument is made by Terrence Deacon, who in The Symbolic Species argues that the different features of language have co-evolved with the evolution of the mind and that the ability to use symbolic communication is integrated in all other cognitive processes. If the theory that language could have evolved as a single adaptation is accepted, the question becomes which of its many functions has been the basis of adaptation. Several evolutionary hypotheses have been posited: that language evolved for the purpose of social grooming, that it evolved as a way to show mating potential, or that it evolved to form social contracts. Evolutionary psychologists recognize that these theories are all speculative and that much more evidence is required to understand how language might have been selectively adapted. Mating Given that sexual reproduction is the means by which genes are propagated into future generations, sexual selection plays a large role in human evolution. Human mating, then, is of interest to evolutionary psychologists who aim to investigate evolved mechanisms to attract and secure mates. Several lines of research have stemmed from this interest, such as studies of mate selection, mate poaching, mate retention, mating preferences, and conflict between the sexes. In 1972, Robert Trivers published an influential paper on sex differences that is now referred to as parental investment theory. The size difference of gametes (anisogamy) is the fundamental, defining difference between males (small gametes – sperm) and females (large gametes – ova). Trivers noted that anisogamy typically results in different levels of parental investment between the sexes, with females initially investing more. Trivers proposed that this difference in parental investment leads to the sexual selection of different reproductive strategies between the sexes and to sexual conflict.
For example, he suggested that the sex that invests less in offspring will generally compete for access to the higher-investing sex to increase their inclusive fitness (also see Bateman's principle). Trivers posited that differential parental investment led to the evolution of sexual dimorphisms in mate choice, intra- and inter-sexual reproductive competition, and courtship displays. In mammals, including humans, females make a much larger parental investment than males (i.e. gestation followed by childbirth and lactation). Parental investment theory is a branch of life history theory. Buss and Schmitt's (1993) Sexual Strategies Theory proposed that, due to differential parental investment, humans have evolved sexually dimorphic adaptations related to "sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment." Their Strategic Interference Theory suggested that conflict between the sexes occurs when the preferred reproductive strategies of one sex interfere with those of the other sex, resulting in the activation of emotional responses such as anger or jealousy. Women are generally more selective when choosing mates, especially under long-term mating conditions. However, under some circumstances, short-term mating can provide benefits to women as well, such as fertility insurance, trading up to better genes, reducing the risk of inbreeding, and insurance protection of their offspring. Due to male paternity insecurity, sex differences have been found in the domain of sexual jealousy: females generally react more adversely to emotional infidelity, while males react more to sexual infidelity. This particular pattern is predicted because the costs involved in mating for each sex are distinct. Women, on average, should prefer a mate who can offer resources (e.g., financial support, commitment); thus, a woman risks losing such resources with a mate who commits emotional infidelity. Men, on the other hand, are never certain of the genetic paternity of their children because they do not bear the offspring themselves ("paternity insecurity"). This suggests that for men sexual infidelity would generally be more aversive than emotional infidelity because investing resources in another man's offspring does not lead to the propagation of their own genes. Another line of research examines women's mate preferences across the ovulatory cycle. The theoretical underpinning of this research is that ancestral women would have evolved mechanisms to select mates with certain traits depending on their hormonal status. Known as the ovulatory shift hypothesis, the theory posits that, during the ovulatory phase of a woman's cycle (approximately days 10–15), a woman who mated with a male with high genetic quality would have been more likely, on average, to produce and bear a healthy offspring than a woman who mated with a male with low genetic quality. These putative preferences are predicted to be especially apparent for short-term mating domains because a potential male mate would only be offering genes to a potential offspring. This hypothesis allows researchers to examine whether women select mates who have characteristics that indicate high genetic quality during the high fertility phase of their ovulatory cycles. Indeed, studies have shown that women's preferences vary across the ovulatory cycle.
In particular, Haselton and Miller (2006) showed that highly fertile women prefer creative but poor men as short-term mates. Creativity may be a proxy for good genes. Research by Gangestad et al. (2004) indicates that highly fertile women prefer men who display social presence and intrasexual competition; these traits may act as cues that would help women predict which men may have, or would be able to acquire, resources. Parenting Reproduction is always costly for women, and can also be for men. Individuals are limited in the degree to which they can devote time and resources to producing and raising their young, and such expenditure may also be detrimental to their future condition, survival and further reproductive output. Parental investment is any parental expenditure (time, energy, etc.) that benefits one offspring at a cost to parents' ability to invest in other components of fitness (Clutton-Brock 1991: 9; Trivers 1972). Components of fitness (Beatty 1992) include the well-being of existing offspring, parents' future reproduction, and inclusive fitness through aid to kin (Hamilton, 1964). Parental investment theory is a branch of life history theory. Robert Trivers' theory of parental investment predicts that the sex making the largest investment in lactation, nurturing and protecting offspring will be more discriminating in mating and that the sex that invests less in offspring will compete for access to the higher-investing sex (see Bateman's principle). Sex differences in parental effort are important in determining the strength of sexual selection. The benefits of parental investment to the offspring are large and are associated with effects on condition, growth, survival and, ultimately, on the reproductive success of the offspring. However, these benefits can come at the cost of the parent's ability to reproduce in the future, e.g. through the increased risk of injury when defending offspring against predators, the loss of mating opportunities whilst rearing offspring, and an increase in the time to the next reproduction. Overall, parents are selected to maximize the difference between the benefits and the costs, and parental care will likely evolve when the benefits exceed the costs. The Cinderella effect is the allegation that stepchildren are physically, emotionally or sexually abused, neglected, murdered, or otherwise mistreated by their stepparents at significantly higher rates than children are by their genetic parents. It takes its name from the fairy tale character Cinderella, who in the story was cruelly mistreated by her stepmother and stepsisters. Daly and Wilson (1996) noted: "Evolutionary thinking led to the discovery of the most important risk factor for child homicide – the presence of a stepparent. Parental efforts and investments are valuable resources, and selection favors those parental psyches that allocate effort effectively to promote fitness. The adaptive problems that challenge parental decision-making include both the accurate identification of one's offspring and the allocation of one's resources among them with sensitivity to their needs and abilities to convert parental investment into fitness increments…. Stepchildren were seldom or never so valuable to one's expected fitness as one's own offspring would be, and those parental psyches that were easily parasitized by just any appealing youngster must always have incurred a selective disadvantage" (Daly & Wilson, 1996, pp. 64–65).
However, they note that not all stepparents will "want" to abuse their partner's children, nor is genetic parenthood any insurance against abuse. They see step-parental care as primarily "mating effort" towards the genetic parent. Family and kin Inclusive fitness is the sum of an organism's classical fitness (how many of its own offspring it produces and supports) and the number of equivalents of its own offspring it can add to the population by supporting others. The first component is called classical fitness by Hamilton (1964). From the gene's point of view, evolutionary success ultimately depends on leaving behind the maximum number of copies of itself in the population. Until 1964, it was generally believed that genes only achieved this by causing the individual to leave the maximum number of viable offspring. However, in 1964 W. D. Hamilton proved mathematically that, because close relatives of an organism share some identical genes, a gene can also increase its evolutionary success by promoting the reproduction and survival of these related or otherwise similar individuals. Hamilton concluded that this leads natural selection to favor organisms that would behave in ways that maximize their inclusive fitness. It is also true that natural selection favors behavior that maximizes personal fitness. Hamilton's rule describes mathematically whether or not a gene for altruistic behavior will spread in a population: rb > c, where c is the reproductive cost to the altruist, b is the reproductive benefit to the recipient of the altruistic behavior, and r is the probability, above the population average, of the individuals sharing an altruistic gene – commonly viewed as the "degree of relatedness". The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes) that influences an organism's behavior to be helpful and protective of relatives and their offspring, this behavior also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. Altruists may also have some way to recognize altruistic behavior in unrelated individuals and be inclined to support them. As Dawkins points out in The Selfish Gene (Chapter 6) and The Extended Phenotype, this must be distinguished from the green-beard effect. Although it is generally true that humans tend to be more altruistic toward their kin than toward non-kin, the relevant proximate mechanisms that mediate this cooperation have been debated (see kin recognition), with some arguing that kin status is determined primarily via social and cultural factors (such as co-residence, maternal association of sibs, etc.), while others have argued that kin recognition can also be mediated by biological factors such as facial resemblance and immunogenetic similarity of the major histocompatibility complex (MHC). For a discussion of the interaction of these social and biological kin recognition factors see Lieberman, Tooby, and Cosmides (2007). Whatever the proximate mechanisms of kin recognition, there is substantial evidence that humans act generally more altruistically to close genetic kin compared to genetic non-kin. Interactions with non-kin / reciprocity Although interactions with non-kin are generally less altruistic compared to those with kin, cooperation can be maintained with non-kin via mutually beneficial reciprocity, as was proposed by Robert Trivers.
If there are repeated encounters between the same two players in an evolutionary game in which each of them can choose either to "cooperate" or "defect," then a strategy of mutual cooperation may be favored even if it pays each player, in the short term, to defect when the other cooperates. Direct reciprocity can lead to the evolution of cooperation only if the probability, w, of another encounter between the same two individuals exceeds the cost-to-benefit ratio of the altruistic act: w > c/b. Reciprocity can also be indirect if information about previous interactions is shared. Reputation allows evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient: studies show that people who are more helpful are more likely to receive help. The calculations of indirect reciprocity are complicated and only a tiny fraction of this universe has been uncovered, but again a simple rule has emerged. Indirect reciprocity can only promote cooperation if the probability, q, of knowing someone's reputation exceeds the cost-to-benefit ratio of the altruistic act: q > c/b. One important problem with this explanation is that individuals may be able to evolve the capacity to obscure their reputation, reducing the probability, q, that it will be known. Trivers argues that friendship and various social emotions evolved in order to manage reciprocity. Liking and disliking, he says, evolved to help present-day humans' ancestors form coalitions with others who reciprocated and to exclude those who did not reciprocate. Moral indignation may have evolved to prevent one's altruism from being exploited by cheaters, and gratitude may have motivated present-day humans' ancestors to reciprocate appropriately after benefiting from others' altruism. Likewise, present-day humans feel guilty when they fail to reciprocate. These social motivations match what evolutionary psychologists expect to see in adaptations that evolved to maximize the benefits and minimize the drawbacks of reciprocity. Evolutionary psychologists say that humans have psychological adaptations that evolved specifically to help us identify nonreciprocators, commonly referred to as "cheaters." In 1993, Robert Frank and his associates found that participants in a prisoner's dilemma scenario were often able to predict whether their partners would "cheat," based on a half-hour of unstructured social interaction. In a 1996 experiment, for example, Linda Mealey and her colleagues found that people were better at remembering faces when those faces were associated with stories about those individuals cheating (such as embezzling money from a church). Strong reciprocity (or "tribal reciprocity") Humans may have an evolved set of psychological adaptations that predispose them to be more cooperative than would otherwise be expected with members of their tribal in-group, and nastier to members of tribal out-groups. These adaptations may have been a consequence of tribal warfare. Humans may also have predispositions for "altruistic punishment" – to punish in-group members who violate in-group rules, even when this altruistic behavior cannot be justified in terms of helping those you are related to (kin selection), cooperating with those who you will interact with again (direct reciprocity), or cooperating to better your reputation with others (indirect reciprocity).
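The three threshold conditions discussed above – Hamilton's rule (rb > c) for kin altruism, w > c/b for direct reciprocity, and q > c/b for indirect reciprocity – can be checked mechanically. The following Python sketch is purely illustrative and is not drawn from the literature cited here; the numerical values chosen for the benefit b, the cost c, the relatedness r, and the probabilities w and q are hypothetical examples.

def hamiltons_rule(r, b, c):
    # Kin altruism can spread when relatedness times benefit exceeds cost: r*b > c.
    return r * b > c

def direct_reciprocity(w, b, c):
    # Cooperation via direct reciprocity requires the probability of a repeat
    # encounter to exceed the cost-to-benefit ratio: w > c/b.
    return w > c / b

def indirect_reciprocity(q, b, c):
    # Cooperation via indirect reciprocity requires the probability of knowing
    # the recipient's reputation to exceed the cost-to-benefit ratio: q > c/b.
    return q > c / b

# Hypothetical example: a benefit of 4 fitness units to the recipient at a cost of 1 to the actor.
b, c = 4.0, 1.0
print(hamiltons_rule(r=0.5, b=b, c=c))        # True: full siblings (r = 0.5), since 0.5 * 4 > 1
print(hamiltons_rule(r=0.125, b=b, c=c))      # False: first cousins (r = 0.125), since 0.125 * 4 < 1
print(direct_reciprocity(w=0.4, b=b, c=c))    # True: 0.4 > 1/4
print(indirect_reciprocity(q=0.1, b=b, c=c))  # False: 0.1 < 1/4

In each case the inequality only indicates when the cooperative strategy can be favored by selection; it says nothing about the proximate psychological mechanisms, such as the social emotions discussed above, through which the behavior is actually produced.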
Evolutionary psychology and culture Though evolutionary psychology has traditionally focused on individual-level behaviors, determined by species-typical psychological adaptations, considerable work has been done on how these adaptations shape and, ultimately, govern culture (Tooby and Cosmides, 1989). Tooby and Cosmides (1989) argued that the mind consists of many domain-specific psychological adaptations, some of which may constrain what cultural material is learned or taught. As opposed to a domain-general cultural acquisition program, where an individual passively receives culturally-transmitted material from the group, Tooby and Cosmides (1989), among others, argue that: "the psyche evolved to generate adaptive rather than repetitive behavior, and hence critically analyzes the behavior of those surrounding it in highly structured and patterned ways, to be used as a rich (but by no means the only) source of information out of which to construct a 'private culture' or individually tailored adaptive system; in consequence, this system may or may not mirror the behavior of others in any given respect." (Tooby and Cosmides 1989). In psychology sub-fields Developmental psychology According to Paul Baltes, the benefits granted by evolutionary selection decrease with age. Natural selection has not eliminated many harmful conditions and nonadaptive characteristics that appear among older adults, such as Alzheimer's disease. If it were a disease that killed 20-year-olds instead of 70-year-olds, it might have been a disease that natural selection could have eliminated ages ago. Thus, unaided by evolutionary pressures against nonadaptive conditions, modern humans suffer the aches, pains, and infirmities of aging, and as the benefits of evolutionary selection decrease with age, the need for modern technological mediums against non-adaptive conditions increases. Social psychology As humans are a highly social species, there are many adaptive problems associated with navigating the social world (e.g., maintaining allies, managing status hierarchies, interacting with outgroup members, coordinating social activities, collective decision-making). Researchers in the emerging field of evolutionary social psychology have made many discoveries pertaining to topics traditionally studied by social psychologists, including person perception, social cognition, attitudes, altruism, emotions, group dynamics, leadership, motivation, prejudice, intergroup relations, and cross-cultural differences. When endeavouring to solve a problem, humans at an early age show determination, while chimpanzees have no comparable facial expression. Researchers suspect that the human expression of determination evolved because, when a human is determinedly working on a problem, other people will frequently help. Abnormal psychology Adaptationist hypotheses regarding the etiology of psychological disorders are often based on analogies between physiological and psychological dysfunctions, as noted in the table below. Prominent theorists and evolutionary psychiatrists include Michael T. McGuire, Anthony Stevens, and Randolph M. Nesse. They, and others, suggest that mental disorders are due to the interactive effects
into the Italo-Dalmatian languages (sometimes grouped with Eastern Romance), including the Tuscan-derived Italian and numerous local Romance languages in Italy as well as Dalmatian, and the Western Romance languages. The Western Romance languages in turn separate into the Gallo-Romance languages, including French and its varieties (Langues d'oïl), the Rhaeto-Romance languages and the Gallo-Italic languages; the Occitano-Romance languages, grouped with either Gallo-Romance or East Iberian, including Occitan, Catalan and Aragonese; and finally the West Iberian languages (Spanish-Portuguese), including the Astur-Leonese languages, Galician-Portuguese, and Castilian. Slavic Slavic languages are spoken in large areas of Southern, Central and Eastern Europe. An estimated 250 million Europeans are native speakers of Slavic languages, the largest groups being Russian (c. 110 million in European Russia and adjacent parts of Eastern Europe, Russian forming the largest linguistic community in Europe), Polish (c. 45 million), Ukrainian (c. 40 million), Serbo-Croatian (c. 21 million), Czech (c. 11 million), Bulgarian (c. 9 million), Slovak (c. 5 million), Belarusian and Slovene (c. 3 million each) and Macedonian (c. 2 million). Phylogenetically, Slavic is divided into three subgroups: West Slavic includes Polish, Czech, Slovak, Lower Sorbian, Upper Sorbian, Silesian and Kashubian. East Slavic includes Russian, Ukrainian, Belarusian, and Rusyn. South Slavic is divided into Southeast Slavic and Southwest Slavic groups: Southwest Slavic languages include Serbo-Croatian and Slovene, each with numerous distinctive dialects. Serbo-Croatian boasts four distinct national standards, Bosnian, Croatian, Montenegrin and Serbian, all based on the Shtokavian dialect; Southeast Slavic languages include Bulgarian, Macedonian and Old Church Slavonic (a liturgical language). Others Greek (c. 13 million) is the official language of Greece and Cyprus, and there are Greek-speaking enclaves in Albania, Bulgaria, Italy, North Macedonia, Romania, Georgia, Ukraine, Lebanon, Egypt, Israel, Jordan, and Turkey, and in Greek communities around the world. Dialects of modern Greek that originate from Attic Greek (through Koine and then Medieval Greek) are Cappadocian, Pontic, Cretan, Cypriot, Katharevousa, and Yevanic. Italiot Greek is, debatably, a Doric dialect of Greek. It is spoken in southern Italy only, in the southern Calabria region (as Grecanic) and in the Salento region (as Griko). It was studied by the German linguist Gerhard Rohlfs during the 1930s and 1950s. Tsakonian is a Doric dialect of the Greek language spoken in the lower Arcadia region of the Peloponnese around the village of Leonidio. The Baltic languages are spoken in Lithuania (Lithuanian (c. 3 million), Samogitian) and Latvia (Latvian (c. 2 million), Latgalian). Samogitian and Latgalian are usually considered to be dialects of Lithuanian and Latvian respectively. There are also several extinct Baltic languages, including Galindian, Curonian, Old Prussian, Selonian, Semigallian and Sudovian. Albanian (c. 5 million) has two major dialects, Tosk Albanian and Gheg Albanian. It is spoken in Albania and Kosovo, neighboring North Macedonia, Serbia, Greece, Italy, and Montenegro. It is also widely spoken in the Albanian diaspora. Armenian (c. 7 million) has two major forms, Western Armenian and Eastern Armenian. It is spoken in Armenia, Artsakh and Georgia (Samtskhe-Javakheti), as well as in Russia, France, Italy, Turkey, Greece, and Cyprus. It is also widely spoken in the Armenian Diaspora.
There are six living Celtic languages, spoken in areas of northwestern Europe dubbed the "Celtic nations". All six are members of the Insular Celtic family, which in turn is divided into:
Brittonic family: Welsh (Wales, 700,000), Cornish (Cornwall, 500) and Breton (Brittany, 200,000)
Goidelic family: Irish (Ireland, 2,000,000), Scottish Gaelic (Scotland, 50,000), and Manx (Isle of Man, 1,800)
Continental Celtic languages had previously been spoken across Europe from Iberia and Gaul to Asia Minor, but became extinct in the first millennium AD. The Indo-Aryan languages have one major representation: Romani (c. 1.5 million speakers), introduced in Europe during the late medieval period. Lacking a nation state, Romani is spoken as a minority language throughout Europe. The Iranian languages in Europe are natively represented in the North Caucasus, notably with Ossetian (c. 600,000). Non-Indo-European languages Turkic Oghuz languages in Europe include Turkish, spoken in East Thrace and by immigrant communities; Azerbaijani, spoken in Northeast Azerbaijan and parts of Southern Russia; and Gagauz, spoken in Gagauzia. Kipchak languages in Europe include Karaim, Crimean Tatar and Krymchak, which are spoken mainly in Crimea; Tatar, which is spoken in Tatarstan; Bashkir, which is spoken in Bashkortostan; Karachay-Balkar, which is spoken in the North Caucasus; and Kazakh, which is spoken in Northwest Kazakhstan. Oghur languages were historically indigenous to much of Eastern Europe; however, most of them are extinct today, with the exception of Chuvash, which is spoken in Chuvashia. Uralic Uralic is native to northern Eurasia. Finno-Ugric groups the Uralic languages other than Samoyedic. Finnic languages include Finnish (c. 5 million), Estonian (c. 1 million) and Mari (c. 400,000). The Sami languages (c. 30,000) are closely related to Finnic. The Ugric languages are represented in Europe with the Hungarian language (c. 13 million), historically introduced with the Hungarian conquest of the Carpathian Basin of the 9th century. The Samoyedic Nenets language is spoken in Nenets Autonomous Okrug of Russia, located in the far northeastern corner of Europe (as delimited by the Ural Mountains). Others The Basque language (or Euskara, 750,000) is a language isolate and the ancestral language of the Basque people who inhabit the Basque Country, a region in the western Pyrenees mountains mostly in northeastern Spain and partly in southwestern France of about 3 million inhabitants, where it is spoken fluently by about 750,000 and understood by more than 1.5 million people. Basque is directly related to ancient Aquitanian, and it is likely that an early form of the Basque language was present in Western Europe before the arrival of the Indo-European languages in the area in the Bronze Age. North Caucasian languages is a geographical blanket term for two unrelated language families spoken chiefly in the north Caucasus and Turkey—the Northwest Caucasian family (including Abkhaz and Circassian) and the Northeast Caucasian family, spoken mainly in the border area of the southern Russian Federation (including Dagestan, Chechnya, and Ingushetia). Kalmyk is a Mongolic language, spoken in the Republic of Kalmykia, part of the Russian Federation. Its speakers entered the Volga region in the early 17th century. Maltese (c. 500,000) is a Semitic language with Romance and Germanic influences, spoken in Malta. It is based on Sicilian Arabic, with influences from Sicilian, Italian, French and, more recently, English.
It is unique in that it is the only Semitic language whose standard form is written in Latin script. It is also the second smallest official language of the EU in terms of speakers, and the only official Semitic language within the EU. Cypriot Maronite Arabic (also known as Cypriot Arabic) is a variety of Arabic spoken by Maronites in Cyprus. Most speakers live in Nicosia, but others are in the communities of Kormakiti and Lemesos. Brought to the island by Maronites fleeing Lebanon over 700 years ago, this variety of Arabic has been influenced by Greek in both phonology and vocabulary, while retaining certain unusually archaic features in other respects. Sign languages Several dozen manual languages exist across Europe, with the most widespread sign language family being the Francosign languages, whose languages are found in countries from Iberia to the Balkans and the Baltics. Accurate historical information on sign and tactile languages is difficult to come by, with folk histories noting the existence of signing communities across Europe hundreds of years ago. British Sign Language (BSL) and French Sign Language (LSF) are probably the oldest confirmed sign languages in continuous use. Alongside German Sign Language (DGS), these three have, according to Ethnologue, the largest numbers of signers, though very few institutions collect adequate statistics on contemporary signing populations, making reliable data hard to find. Notably, few European sign languages have overt connections with the local majority/oral languages, aside from standard language contact and borrowing, meaning that, grammatically, the sign languages and the oral languages of Europe are quite distinct from one another. Due to (visual/aural) modality differences, most sign languages are named for the larger ethnic nation in which they are used, plus the words "sign language", rendering what is signed across much of France, Wallonia and Romandy as French Sign Language, or LSF, for langue des signes française. Recognition of non-oral languages varies widely from region to region: some countries afford legal recognition, in some cases even official status at the state level, whereas in others sign languages continue to be actively suppressed. The major sign linguistic families are:
Francosign languages, such as LSF, Irish SL, Austrian Sign Language (ÖGS), Eesti Viipekeel, and probably both Catalan and Valencian Sign Languages.
Danish Sign languages, such as DTS, Icelandic Taknmal, Faroese Taknmal, and NTS.
Austro-Hungarian Sign descendants, including the sub-families descended from both (separately) the Yugoslav Sign Language and Russian Sign Language, such as Macedonian Sign Language and HZJ, or LGK and Ukrainian Sign Language (USL).
Banzsl languages, such as BSL and Northern Ireland Sign Language (NISL).
Swedish Sign family, such as SSL, Viittomakieli, FinnSSL, and Portuguese Sign Language (LGP), all of which may be descended from Old BSL.
Germanosign languages, such as DGS and Polish Sign Language (PJM).
Isolate languages, such as Albanian Sign Language, Armenian Sign Language, Caucasian Sign Language, Spanish Sign Language (LSE), Turkish Sign Language (TİD), and perhaps Ghardaia Sign Language.
History of standardization Language and identity, standardization processes In the Middle Ages the two most important defining elements of Europe were Christianitas and Latinitas. The earliest dictionaries were glossaries: more or less structured lists of lexical pairs (in alphabetical order or according to conceptual fields).
The Latin-German (Latin-Bavarian) Abrogans was among the first. A new wave of lexicography can be seen from the late 15th century onwards (after the introduction of the printing press, with the growing interest in standardisation of languages). The concept of the nation state began to emerge in the early modern period. Nations adopted particular dialects as their national language. This, together with improved communications, led to official efforts to standardise the national language, and a number of language academies were established: 1582 Accademia della Crusca in Florence, 1617 Fruchtbringende Gesellschaft in Weimar, 1635 Académie française in Paris, 1713 Real Academia Española in Madrid. Language became increasingly linked to nation as opposed to culture, and was also used to promote religious and ethnic identity: e.g. different Bible translations in the same language for Catholics and Protestants. The first languages whose standardisation was promoted included Italian (questione della lingua: Modern Tuscan/Florentine vs. Old Tuscan/Florentine vs. Venetian → Modern Florentine + archaic Tuscan + Upper Italian), French (the standard is based on Parisian), English (the standard is based on the London dialect) and (High) German (based on the dialects of the chancellery of Meissen in Saxony, Middle German, and the chancellery of Prague in Bohemia ("Common German")). But several other nations also began to develop a standard variety in the 16th century. Lingua franca Europe has had a number of languages that were considered linguae francae over some ranges for some periods according to some historians. Typically in the rise of a national language the new language becomes a lingua franca to peoples in the range of the future nation until the consolidation and unification phases. If the nation becomes internationally influential, its language may become a lingua franca among nations that speak their own national languages. Europe has had no lingua franca ranging over its entire territory spoken by all or most of its populations during any historical period. Some linguae francae of past and present over some of its regions for some of its populations are: Classical Greek and then Koine Greek in the Mediterranean Basin from the Athenian Empire to the Eastern Roman Empire, being replaced by Modern Greek. Koine Greek and Modern Greek, in the Eastern Roman or Byzantine Empire and other parts of the Balkans south of the Jireček Line. Vulgar Latin and Late Latin among the uneducated and educated populations respectively of the Roman Empire and the states that followed it in the same range no later than 900 AD; Medieval Latin and Renaissance Latin among the educated populations of western, northern, central and part of eastern Europe until the rise of the national languages in that range, beginning with
This, together with improved communications, led to official efforts to standardise the national language, and a number of language academies were established: 1582 Accademia della Crusca in Florence, 1617 Fruchtbringende Gesellschaft in Weimar, 1635 Académie française in Paris, 1713 Real Academia Española in Madrid. Language became increasingly linked to nation as opposed to culture, and was also used to promote religious and ethnic identity: e.g. different Bible translations in the same language for Catholics and Protestants. The first languages whose standardisation was promoted included Italian (questione della lingua: Modern Tuscan/Florentine vs. Old Tuscan/Florentine vs. Venetian → Modern Florentine + archaic Tuscan + Upper Italian), French (the standard is based on Parisian), English (the standard is based on the London dialect) and (High) German (based on the dialects of the chancellery of Meissen in Saxony, Middle German, and the chancellery of Prague in Bohemia ("Common German")). But several other nations also began to develop a standard variety in the 16th century. Lingua franca Europe has had a number of languages that were considered linguae francae over some ranges for some periods according to some historians. Typically in the rise of a national language the new language becomes a lingua franca to peoples in the range of the future nation until the consolidation and unification phases. If the nation becomes internationally influential, its language may become a lingua franca among nations that speak their own national languages. Europe has had no lingua franca ranging over its entire territory spoken by all or most of its populations during any historical period. Some linguae francae of past and present over some of its regions for some of its populations are: Classical Greek and then Koine Greek in the Mediterranean Basin from the Athenian Empire to the Eastern Roman Empire, being replaced by Modern Greek. Koine Greek and Modern Greek, in the Eastern Roman or Byzantine Empire and other parts of the Balkans south of the Jireček Line. Vulgar Latin and Late Latin among the uneducated and educated populations respectively of the Roman Empire and the states that followed it in the same range no later than 900 AD; Medieval Latin and Renaissance Latin among the educated populations of western, northern, central and part of eastern Europe until the rise of the national languages in that range, beginning with the first language academy in Italy in 1582/83; new Latin written only in scholarly and scientific contexts by a small minority of the educated population at scattered locations over all of Europe; ecclesiastical Latin, in spoken and written contexts of liturgy and church administration only, over the range of the Roman Catholic Church. Lingua Franca or Sabir, the original of the name, an Italian-based pidgin language of mixed origins used by maritime commercial interests around the Mediterranean in the Middle Ages and early Modern Age. Old French in continental western European countries and in the Crusader states. Czech, mainly during the reign of Holy Roman
meeting of the deans and the rector. The service organizations provide services to the inhabitants of the university campus. Examples of these organizations include the housing organization, the ICT organization and the Communication Expertise Center (which does external communications, including to the press). Each service organization is headed by an organization head. Both for the departments and the service organizations, the staff (and students) are involved with the running of the body. For that reason both types of bodies have advisory councils which have advisory and co-decision authorities. TU/e Holding B.V. Over the past two decades, the TU/e has increasingly developed commercial interests and off-campus ties. These include commercial agreements and contracts directly between the university and external companies, but also interests in spinoff companies. In order to manage these kinds of contractual obligations the university started the TU/e Holding B.V. in 1997. The Holding is a limited company, dedicated to the commercial exploitation of scientific knowledge. Service organizations There university is more than just the departments, research bodies and the students. There are several ancillary activities necessary to the running of the university, activities that cross the boundaries and interests of the different departments. These activities are carried out by the universities' service organizations. The university has the following service organizations: Academics Rankings Eindhoven is currently (2018) ranked between 51 and 141 in the world (the university itself provides a survey), and a top ten technical university in Europe. In a 2003 European Commission report, TU/e was ranked as third among European research universities (after Cambridge and Oxford, at equality with TU Munich and thus making it the highest ranked Technical University in Europe), based on the impact of its scientific research. In 2011 Academic Ranking of World Universities (ARWU) rankings, TU/e was placed at the 52-75 bucket internationally in Engineering/Technology and Computer Science ( ENG ) category and at 34th place internationally in the Computer Science subject field. Education The scientific departments (or faculties; Dutch: faculteiten) are the primary vehicles for teaching and research in the university. They employ the majority of the academic staff, are responsible for teaching and sponsor the research schools and institutions. The departments also offer PhD programs (Dutch: promotiefase) whereby a qualified master may earn a PhD Unlike in anglo-saxon countries these are not educational programs, however; rather, a person working towards obtaining the PhD is a research employee of the university. The TU/e has nine departments: Biomedical Engineering Built Environment Electrical Engineering Industrial Design Chemical Engineering and Chemistry Industrial Engineering & Innovation Sciences (formerly Technology Management) Applied Physics Mechanical Engineering Mathematics and Computer Science Honors programs The university offers honors programs aimed at both bachelor and master students. At the bachelor level it consists of intensive study within eight possible areas or tracks. At the master level it consists of personal leadership and professional development components, over and above the normal masters study. 
Postgraduate doctorate of engineering (PDEng) In 1986, the university started a number of programs for a postgraduate doctorate of engineering (PDEng) together with two other Dutch technological universities (TU Delft and University of Twente). These programs are managed by the Stan Ackermans Institute on behalf of the 4TU Federation. Each program is two years in length. Ten programs are available at the TU/e: Automotive Systems Design Clinical Informatics Data Science Healthcare Systems Design Information and Communication Technology Process and Product Design Qualified Medical Engineer Smart Buildings and Cities Software Technology User-System Interaction Nationally, more than 3,500 students have earned the postgraduate PDEng degree through this program. On 13 February, Ravi Thakkar was awarded 3000th PDEng diploma at TU/e Other educational programs The university hosts a number of other educational programs that are in some way related to the main educational programs. These include the teacher's program and an MBA program. Eindhoven School of Education: Teacher's education for masters, to get their higher education teaching certificate. Also does research into educational sciences and innovation in education. TIAS School for Business and Society: A shared MBA program with the University of Tilburg, for university graduates. HBO minor program: Bachelor programs for students of HBO universities (four-year bachelor programs), to allow them access to university master programs. Research The TU/e participates in a large number of research institutes which balance in different ways between pure science and applied science research. Some of these institutes are bound strictly to the university, others combine research across different universities. Top in research partnerships with industry The TU/e is among the world's ten best-performing research universities in terms of research cooperation with industry in 2011 (Number 1 in 2009). Ten to 20 percent of the scientific publications of these ten universities in the period 2006–2008 were the result of partnerships with researchers in industry. As well as TU/e and Delft University of Technology, the top 10 also includes two universities in Japan (Tokyo Institute of Technology and Keio University in Tokyo), two in Sweden (CTH Chalmers University of Technology and KTH Royal Institute of Technology in Stockholm), and one each in Denmark (DTU Technical University of Denmark in Lyngby), Finland (University of Helsinki), Norway (Norwegian University of Science and Technology in Trondheim) and the USA (Rensselaer Polytechnic Institute in Troy, New York). Admissions and costs Admissions The admission process is similar to other universities in the Netherlands, especially other 4TU institutions. The university provides various infographics to explain the process in their website. Bachelors Some bachelors have a numerus fixus, while others do not. This may differ from year to year. Masters Due to an agreement, students that have graduated from another 4TU institution may qualify for direct admission. Costs Fees at the TU/e differ between students, according to the following table from the official website: Scholarships Students from countries of the European Economic Area (EEA) may be eligible for a grant or loan from the Dutch government. Graduate TU/e offers a small number of graduate scholarships. Some have requirements in terms of study focus, while others are available to all students. 
However, in order to qualify one must have already been accepted at the university. Off-campus activities The TU/e plays a central role in the academic, economic and social life of Eindhoven and the surrounding region. In addition the university maintains relations with institutions far beyond that region as well and participates in national and international events (sometimes through the student body). Economic and research motor The TU/e is enormously important to the economy of the Eindhoven region, as well as the wider areas of BrabantStad and the Samenwerkingsverband Regio Eindhoven. It provides highly skilled labor for the local knowledge economy and is a knowledge and research partner for technology companies in the area. The historic basis for the university's role as an economy and research motor was the interaction with Philips. The university was founded primarily to address the need of Philips for local personnel with academic levels of education in electronics, physics, chemistry and later computer science. Later that interest spread to DAF and Royal Dutch Shell (which became the primary employer for graduates of the chemistry department). There was also a synergy with these companies in that senior personnel were hired from them to form the academic staff of the university (which led to the Eindhoven joke that the university trains the engineers and Philips trains the professors). Changing economic times and business strategies changed the relationship during the 1980s and 1990s. As Philips started moving away from the region, its importance to the region and the university decreased. A struggle for economic survival forced the university to seek closer ties with the city and region of Eindhoven in the 1989–1995 period, resulting in the creation of the Brainport initiative to draw high tech business and industry to the region. The university started expending more effort in knowledge valorisation, in incubating technology startups, in providing direct knowledge support for local technology companies. Also the academic interests of the research shifted with the times, with more effort going into energy efficiency research, green technologies, and other areas of interest driven by social relevance (the call for better technology in the medical field, for example, led to cooperation with the Catharina Hospital and the University of Maastricht medical department and finally the creation of the Biomedical Technology department). The TU/e is host (and in some cases also commissioner) of a number of highly successful research schools, including the ESI and the DPI. These research institutes are a source of high-tech knowledge for high-tech companies in the area, such as ASML, NXP and FEI. The university also plays a large role as knowledge and personnel supplier to other companies in the High Tech Campus Eindhoven and helps incubate startups through the Eindhoven Twinning Center. It is also a knowledge supporter of the automotive industry in the Helmond region. In the extended region, the TU/e is part of the backbone of the Eindhoven-Leuven-Aachen triangle. This economic cooperation agreement between three cities in three countries has created one of the most innovative regions in the European Union (measured in terms of money invested in technology and knowledge economy); the agreement is based on the cooperative triangle that connects the three technical universities in those cities. 
Eindhoven Energy Institute As of the summer of 2010, the TU/e is host to the Eindhoven Energy Institute (EEI). The EEI is a virtual research institute (meaning that it doesn't have any actual offices or facilities), which manages and coordinates the activities of a large number of groups and subinstitutes in
The current president is Robert-Jan Smits, the former Director-General of Research and Innovation at the European Commission. The rector magnificus The rector magnificus is the only member of the EB whose membership is mandated by law. The law allows the university to appoint a rector in any way, but the university statutes determine that the rector magnificus must be an active professor at the university (and must have been that before being appointed rector); in practice the rector is always a former department dean. The rector is the voice of the academic staff in the EB and guards the academic interests of the university in the EB. The current rector magnificus is Frank Baaijens. The vice president The third member is a "tie-breaker" member of the EB. The post is open to anybody (but generally not filled by an academic staff member). The current vice president is Nicole Ummelen. The secretary The secretary is not a member of the EB, but a university staff member that does secretarial work for the EB, keeping the minutes and records and taking care of communication between the EB and the university. The EB secretary is usually the secretary for the entire university. The current secretary is Susanne van Weelden. Oversight of the executive board There are two bodies that supervise the Executive Board: The Supervisory Board is an external board of five people appointed by the Minister of Education (one member is appointed, based on a nomination by the University Council). This Board provides external oversight of the running of the university, including changing of the statutes, the budget, and other strategic decisions. The University Council is a council of 18 people, half of whom are elected from the university staff (academic and otherwise) and half from the student body. The University Council is informed of the running of the university by the Executive Board at least twice a year and may advise the EB as it sees fit. It guards against discrimination within the university. And the council must agree to changes in the management structure. The Council membership is open to all students and personnel, except those persons who are in the Supervisory Board, the Executive Board or who are the University Secretary. Departments and service organizations Most of the work at the university is done in the departments and the service organizations. The departments take care of most of the research and education at the university; each one is run by its professors, headed by the dean. The deans are all members of the executive deliberation meeting, which is a regular meeting of the deans and the rector.
of the two energies of covalent bonds of the same molecules, and there is additional energy that comes from ionic factors, i.e. polar character of the bond. The geometric mean is approximately equal to the arithmetic mean—which is applied in the first formula above—when the energies are of a similar value, e.g., except for the highly electropositive elements, where there is a larger difference of two dissociation energies; the geometric mean is more accurate and almost always gives positive excess energy, due to ionic bonding. The square root of this excess energy, Pauling notes, is approximately additive, and hence one can introduce the electronegativity. Thus, it is this semi-empirical formula for bond energy that underlies the concept of Pauling electronegativity. The formulas are approximate, but this rough approximation is in fact relatively good and gives the right intuition, with the notion of the polarity of the bond and some theoretical grounding in quantum mechanics. The electronegativities are then determined to best fit the data. In more complex compounds, there is an additional error since electronegativity depends on the molecular environment of an atom. Also, the energy estimate can be only used for single, not for multiple bonds. The energy of the formation of a molecule containing only single bonds can subsequently be approximated from an electronegativity table and depends on the constituents and sum of squares of differences of electronegativities of all pairs of bonded atoms. Such a formula for estimating energy typically has a relative error of an order of 10% but can be used to get a rough qualitative idea and understanding of a molecule. Mulliken electronegativity Robert S. Mulliken proposed that the arithmetic mean of the first ionization energy (Ei) and the electron affinity (Eea) should be a measure of the tendency of an atom to attract electrons. As this definition is not dependent on an arbitrary relative scale, it has also been termed absolute electronegativity, with the units of kilojoules per mole or electronvolts. However, it is more usual to use a linear transformation to transform these absolute values into values that resemble the more familiar Pauling values. For ionization energies and electron affinities in electronvolts, and for energies in kilojoules per mole, The Mulliken electronegativity can only be calculated for an element for which the electron affinity is known, fifty-seven elements as of 2006. The Mulliken electronegativity of an atom is sometimes said to be the negative of the chemical potential. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is possible to show that the Mulliken chemical potential is a finite difference approximation of the electronic energy with respect to the number of electrons., i.e., Allred–Rochow electronegativity A. Louis Allred and Eugene G. Rochow considered that electronegativity should be related to the charge experienced by an electron on the "surface" of an atom: The higher the charge per unit area of atomic surface the greater the tendency of that atom to attract electrons. The effective nuclear charge, Zeff, experienced by valence electrons can be estimated using Slater's rules, while the surface area of an atom in a molecule can be taken to be proportional to the square of the covalent radius, rcov. When rcov is expressed in picometres, Sanderson electronegativity equalization R.T. 
Sanderson has also noted the relationship between Mulliken electronegativity and atomic size, and has proposed a method of calculation based on the reciprocal of the atomic volume. With a knowledge of bond lengths, Sanderson's model allows the estimation of bond energies in a wide range of compounds. Sanderson's model has also been used to calculate molecular geometry, s-electrons energy, NMR spin-spin constants and other parameters for organic compounds. This work underlies the concept of electronegativity equalization, which suggests that electrons distribute themselves around a molecule to minimize or to equalize the Mulliken electronegativity. This behavior is analogous to the equalization of chemical potential in macroscopic thermodynamics. Allen electronegativity Perhaps the simplest definition of electronegativity is that of Leland C. Allen, who has proposed that it is related to the average energy of the valence electrons in a free atom, where εs,p are the one-electron energies of s- and p-electrons in the free atom and ns,p are the number of s- and p-electrons in the valence shell. It is usual to apply a scaling factor, 1.75×10−3 for energies expressed in kilojoules per mole or 0.169 for energies measured in electronvolts, to give values that are numerically similar to Pauling electronegativities. The one-electron energies can be determined directly from spectroscopic data, and so electronegativities calculated by this method are sometimes referred to as spectroscopic electronegativities. The necessary data are available for almost all elements, and this method allows the estimation of electronegativities for elements that cannot be treated by the other methods, e.g. francium, which has an Allen electronegativity of 0.67. However, it is not clear what should be considered to be valence electrons for the d- and f-block elements, which leads to an ambiguity for their electronegativities calculated by the Allen method. In this scale neon has the highest electronegativity of all elements, followed by fluorine, helium, and oxygen. Correlation of electronegativity with other properties The wide variety of methods of calculation of electronegativities, which all give results that correlate well with one another, is one indication of the number of chemical properties that might be affected by electronegativity. The most obvious application of electronegativities is in the discussion of bond polarity, for which the concept was introduced by Pauling. In general, the greater the difference in electronegativity between two atoms the more polar the bond that will be formed between them, with the atom having the higher electronegativity being at the negative end of the dipole. Pauling proposed an equation to relate the "ionic character" of a bond to the difference in electronegativity of the two atoms, although this has fallen somewhat into disuse. Several correlations have been shown between infrared stretching frequencies of certain bonds and the electronegativities of the atoms involved: however, this is not surprising as such stretching frequencies depend in part on bond strength, which enters into the calculation of Pauling electronegativities. More convincing are the correlations between electronegativity and chemical shifts in NMR spectroscopy or isomer shifts in Mössbauer spectroscopy (see figure). 
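For readers who want to experiment with these scales, the short sketch below contrasts the raw Mulliken value (the arithmetic mean of ionization energy and electron affinity, before any rescaling to Pauling-like units) with the Allen configuration-energy average, using the 0.169 scaling factor quoted above for energies in electronvolts. It is only an illustrative sketch: the function names are ad hoc and the numbers passed in the example calls are placeholders, not values taken from this article.

def mulliken_raw(ionization_energy_ev, electron_affinity_ev):
    # Raw (absolute) Mulliken electronegativity in eV: the arithmetic
    # mean of the first ionization energy and the electron affinity.
    # A further linear rescaling is usually applied to bring the result
    # onto a Pauling-like scale.
    return (ionization_energy_ev + electron_affinity_ev) / 2

def allen(eps_s_ev, eps_p_ev, n_s, n_p, scale=0.169):
    # Allen "spectroscopic" electronegativity: the average one-electron
    # energy of the valence s- and p-electrons (taken as magnitudes),
    # multiplied by the scaling factor quoted in the text (0.169 for
    # energies in eV) to give Pauling-like numbers.
    average_energy = (n_s * abs(eps_s_ev) + n_p * abs(eps_p_ev)) / (n_s + n_p)
    return scale * average_energy

# Purely illustrative placeholder inputs (not data from this article):
print(mulliken_raw(10.0, 1.0))     # raw Mulliken value in eV
print(allen(-18.0, -9.0, 2, 3))    # Allen value in Pauling-like units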
Both these measurements depend on the s-electron density at the nucleus, and so are a good indication that the different measures of electronegativity really are describing "the ability of an atom in a molecule to attract electrons to itself". Trends in electronegativity Periodic trends In general, electronegativity increases on passing from left to right along a period and decreases on descending a group. Hence, fluorine is the most electronegative of the elements (not counting noble gases), whereas caesium is the least electronegative, at least of those elements for which substantial data is available. This would lead one to believe that caesium fluoride is the compound whose bonding features the most ionic character. There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon, respectively, because of the d-block contraction. Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity (see Allred-Rochow electronegativity, Sanderson electronegativity above). The anomalously high electronegativity of lead, in particular when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state with a Pauling value of 1.87 instead of the +4 state. Variation of electronegativity with oxidation number In inorganic chemistry, it is common to consider a single value of electronegativity to be valid for most "normal" situations. While this approach has the advantage of simplicity, it is clear that the electronegativity of an element is not an invariable atomic property and, in particular, increases with the oxidation state of the element. Allred used the Pauling method to calculate separate electronegativities for different oxidation states of the handful of elements (including tin and lead) for which sufficient data were available.
the more "pull" it will have on electrons) and the number and location of other electrons in the atomic shells (the more electrons an atom has, the farther from the nucleus the valence electrons will be, and as a result, the less positive charge they will experience—both because of their increased distance from the nucleus and because the other electrons in the lower energy core orbitals will act to shield the valence electrons from the positively charged nucleus). The term "electronegativity" was introduced by Jöns Jacob Berzelius in 1811, though the concept was known before that and was studied by many chemists including Avogadro. In spite of its long history, an accurate scale of electronegativity was not developed until 1932, when Linus Pauling proposed an electronegativity scale which depends on bond energies, as a development of valence bond theory. It has been shown to correlate with a number of other chemical properties. Electronegativity cannot be directly measured and must be calculated from other atomic or molecular properties. Several methods of calculation have been proposed, and although there may be small differences in the numerical values of the electronegativity, all methods show the same periodic trends between elements. The most commonly used method of calculation is that originally proposed by Linus Pauling. This gives a dimensionless quantity, commonly referred to as the Pauling scale (χr), on a relative scale running from 0.79 to 3.98 (hydrogen = 2.20). When other methods of calculation are used, it is conventional (although not obligatory) to quote the results on a scale that covers the same range of numerical values: this is known as an electronegativity in Pauling units. As it is usually calculated, electronegativity is not a property of an atom alone, but rather a property of an atom in a molecule. Even so, the electronegativity of an atom is strongly correlated with the first ionization energy, and negatively correlated with the electron affinity. It is to be expected that the electronegativity of an element will vary with its chemical environment, but it is usually considered to be a transferable property, that is to say that similar values will be valid in a variety of situations. Caesium is the least electronegative element (0.79); fluorine is the most (3.98). Methods of calculation Pauling electronegativity Pauling first proposed the concept of electronegativity in 1932 to explain why the covalent bond between two different atoms (A–B) is stronger than the average of the A–A and the B–B bonds. According to valence bond theory, of which Pauling was a notable proponent, this "additional stabilization" of the heteronuclear bond is due to the contribution of ionic canonical forms to the bonding. The difference in electronegativity between atoms A and B is given by: where the dissociation energies, Ed, of the A–B, A–A and B–B bonds are expressed in electronvolts, the factor (eV)− being included to ensure a dimensionless result. Hence, the difference in Pauling electronegativity between hydrogen and bromine is 0.73 (dissociation energies: H–Br, 3.79 eV; H–H, 4.52 eV; Br–Br 2.00 eV) As only differences in electronegativity are defined, it is necessary to choose an arbitrary reference point in order to construct a scale. Hydrogen was chosen as the reference, as it forms covalent bonds with a large variety of elements: its electronegativity was fixed first at 2.1, later revised to 2.20. 
It is also necessary to decide which of the two elements is the more electronegative (equivalent to choosing one of the two possible signs for the square root). This is usually done using "chemical intuition": in the above example, hydrogen bromide dissolves in water to form H+ and Br− ions, so it may be assumed that bromine is more electronegative than hydrogen. However, in principle, since the same electronegativities should be obtained for any two bonding compounds, the data are in fact overdetermined, and the signs are unique once a reference point is fixed (usually, for H or F). To calculate Pauling electronegativity for an element, it is necessary to have data on the dissociation energies of at least two types of covalent bonds formed by that element. A. L. Allred updated Pauling's original values in 1961 to take account of the greater availability of thermodynamic data, and it is these "revised Pauling" values of the electronegativity that are most often used. The essential point of Pauling electronegativity is that there is an underlying, quite accurate, semi-empirical formula for dissociation energies, namely: or sometimes, a more accurate fit This is an approximate equation but holds with good accuracy. Pauling obtained it by noting that a bond can be approximately represented as a quantum mechanical superposition of
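Read in the other direction, the same defining relation gives the rough bond-energy estimate discussed earlier in the article: the heteronuclear dissociation energy is approximately the mean of the two homonuclear energies plus the squared electronegativity difference, in electronvolts. The sketch below simply inverts the formula; plugging in the hydrogen and bromine values quoted above recovers the 3.79 eV H–Br energy. It is a rough approximation only, of the kind the article describes as having a relative error on the order of 10%.

def estimated_bond_energy(e_aa, e_bb, chi_difference):
    # Approximate A-B dissociation energy in eV: the arithmetic mean of
    # the A-A and B-B energies plus the ionic stabilisation term
    # (chi_A - chi_B)**2.
    return (e_aa + e_bb) / 2 + chi_difference ** 2

# H-H 4.52 eV, Br-Br 2.00 eV, electronegativity difference 0.73
print(round(estimated_bond_energy(4.52, 2.00, 0.73), 2))   # about 3.79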
from the Charter. On the other hand, Ireland has not been able to sign the Charter on behalf of the Irish language (although a minority language) as it is defined as the first official language of the state. The United Kingdom has ratified the Charter in respect to (among other languages) Welsh in Wales, Scots and Gaelic in Scotland, and Irish in Northern Ireland. France, although a signatory, has been constitutionally blocked from ratifying the Charter in respect to the languages of France. The charter provides many actions state parties can take to protect and promote historical regional and minority languages. There are two levels of protection—all signatories must apply the lower level of protection to qualifying languages. Signatories may further declare that a qualifying language or languages will benefit from the higher level of protection, which lists a range of actions from which states must agree to undertake at least 35. Protections Countries can ratify the charter in respect of its minority languages based on Part II or Part III of the charter, which contain varying principles. Countries can treat languages differently under the charter, for example, in the United Kingdom, the Welsh language is ratified under the general Part II principles as well as the more specific Part III commitments, while the Cornish language is ratified only under Part II. Part II Part II of the Charter details eight main principles and objectives upon which States must base their policies and legislation. They are seen as a framework for the preservation of the languages concerned. Recognition of regional or minority languages as an expression of cultural wealth. Respect for the geographical area of each regional or minority language. The need for resolute action to promote such languages. The facilitation and/or encouragement of the use of such languages, in speech and writing, in public and private life. The provision of appropriate forms and means for the teaching and study of such languages at all appropriate stages. The promotion of relevant transnational exchanges. The prohibition of all forms of unjustified distinction, exclusion, restriction or preference relating to the use of a regional or minority language and intended to discourage or endanger its maintenance or development. The promotion by states of mutual understanding between all the country's linguistic groups. Part III Part III details comprehensive rules, across a number of sectors, by which states agree to abide. Each language to which Part III of the Charter is applied must be named specifically by the government. States must select at least thirty-five of the undertakings in respect to each language. Many provisions contain several options, of varying degrees of stringency, one of which has to be chosen "according to the situation of each language". The areas from
(thus excluding languages used by recent immigrants from other states, see immigrant languages), which significantly differ from the majority or official language (thus excluding what the state party wishes to consider as mere local dialects of the official or majority language) and that either have a territorial basis (and are therefore traditionally spoken by populations of regions or areas within the State) or are used by linguistic minorities within the State as a whole (thereby including such languages as Yiddish, Romani and Lemko, which are used over a wide geographic area). Some states, such as Ukraine and Sweden, have tied the status of minority language to the recognized national minorities, which are defined by ethnic, cultural and/or religious criteria, thereby circumventing the Charter's notion of linguistic minority. Languages that are official within regions, provinces or federal units within a State (for example Catalan in Spain) are not classified as official languages of the State and may therefore benefit from the Charter.
7 to 9 August 1643, there were some popular demonstrations in London – both for and against war. They were protesting at Westminster. A peace demonstration by London women, which turned violent, was suppressed; the women were beaten and fired upon with live ammunition, leaving several dead. Many were arrested and incarcerated in Bridewell and other prisons. After these August events, the Venetian ambassador in England reported to the doge that the London government took considerable measures to stifle dissent. In general, the early part of the war went well for the Royalists. The turning point came in the late summer and early autumn of 1643, when the Earl of Essex's army forced the king to raise the Siege of Gloucester and then brushed the Royalists aside at the First Battle of Newbury (20 September 1643), to return triumphantly to London. Parliamentarian forces led by the Earl of Manchester besieged the port of King's Lynn, Norfolk, which under Sir Hamon L'Estrange held out until September. Other forces won the Battle of Winceby, giving them control of Lincoln. Political manoeuvring to gain an advantage in numbers led Charles to negotiate a ceasefire in Ireland, freeing up English troops to fight on the Royalist side in England, while Parliament offered concessions to the Scots in return for aid and assistance. Helped by the Scots, Parliament won at Marston Moor (2 July 1644), gaining York and the north of England. Cromwell's conduct in the battle proved decisive, and showed his potential as a political and as an important military leader. The defeat at the Battle of Lostwithiel in Cornwall, however, marked a serious reverse for Parliament in the south-west of England. Subsequent fighting around Newbury (27 October 1644), though tactically indecisive, strategically gave another check to Parliament. In 1645, Parliament reaffirmed its determination to fight the war to a finish. It passed the Self-denying Ordinance, by which all members of either House of Parliament laid down their commands and re-organized its main forces into the New Model Army, under the command of Sir Thomas Fairfax, with Cromwell as his second-in-command and Lieutenant-General of Horse. In two decisive engagements – the Battle of Naseby on 14 June and the Battle of Langport on 10 July – the Parliamentarians effectively destroyed Charles's armies. In the remains of his English realm, Charles tried to recover a stable base of support by consolidating the Midlands. He began to form an axis between Oxford and Newark-on-Trent in Nottinghamshire. These towns had become fortresses and showed more reliable loyalty to him than others. He took Leicester, which lies between them, but found his resources exhausted. Having little opportunity to replenish them, in May 1646 he sought shelter with a Presbyterian Scottish army at Southwell in Nottinghamshire. Charles was eventually handed over to the English Parliament by the Scots and imprisoned. This marked the end of the First English Civil War. Interbellum The end of the First Civil War, in 1646, left a partial power vacuum in which any combination of the three English factions, Royalists, Independents of the New Model Army ("the Army"), and Presbyterians of the English Parliament, as well as the Scottish Parliament allied with the Scottish Presbyterians (the "Kirk"), could prove strong enough to dominate the rest. 
Armed political Royalism was at an end, but despite being a prisoner, Charles I was considered by himself and his opponents (almost to the last) as necessary to ensure the success of whichever group could come to terms with him. Thus he passed successively into the hands of the Scots, the Parliament and the Army. The King attempted to reverse the verdict of arms by "coquetting" with each in turn. On 3 June 1647, Cornet George Joyce of Thomas Fairfax's horse seized the King for the Army, after which the English Presbyterians and the Scots began to prepare for a fresh civil war, less than two years after the conclusion of the first, this time against "Independency", as embodied in the Army. After making use of the Army's sword, its opponents attempted to disband it, to send it on foreign service and to cut off its arrears of pay. The result was that the Army leadership was exasperated beyond control, and, remembering not merely their grievances but also the principle for which the Army had fought, it soon became the most powerful political force in the realm. From 1646 to 1648 the breach between Army and Parliament widened day by day until finally the Presbyterian party, combined with the Scots and the remaining Royalists, felt itself strong enough to begin a Second Civil War. Second English Civil War (1648–1649) Charles I took advantage of the deflection of attention away from himself to negotiate on 28 December 1647 a secret treaty with the Scots, again promising church reform. Under the agreement, called the "Engagement", the Scots undertook to invade England on Charles's behalf and restore him to the throne. A series of Royalist uprisings throughout England and a Scottish invasion occurred in the summer of 1648. Forces loyal to Parliament put down most of those in England after little more than a skirmish, but uprisings in Kent, Essex and Cumberland, the rebellion in Wales, and the Scottish invasion involved pitched battles and prolonged sieges. In the spring of 1648, unpaid Parliamentarian troops in Wales changed sides. Colonel Thomas Horton defeated the Royalist rebels at the Battle of St Fagans (8 May) and the rebel leaders surrendered to Cromwell on 11 July after a protracted two-month siege of Pembroke. Sir Thomas Fairfax defeated a Royalist uprising in Kent at the Battle of Maidstone on 1 June. Fairfax, after his success at Maidstone and the pacification of Kent, turned north to reduce Essex, where, under an ardent, experienced and popular leader, Sir Charles Lucas, the Royalists had taken up arms in great numbers. Fairfax soon drove the enemy into Colchester, but his first attack on the town met with a repulse and he had to settle down to a long siege. In the North of England, Major-General John Lambert fought a successful campaign against several Royalist uprisings, the largest being that of Sir Marmaduke Langdale in Cumberland. Thanks to Lambert's successes, the Scottish commander, the Duke of Hamilton, had to take a western route through Carlisle in his pro-Royalist Scottish invasion of England. The Parliamentarians under Cromwell engaged the Scots at the Battle of Preston (17–19 August). The battle took place largely at Walton-le-Dale near Preston, Lancashire, and resulted in a victory for Cromwell's troops over the Royalists and Scots commanded by Hamilton. This victory marked the end of the Second English Civil War. 
Nearly all the Royalists who had fought in the First Civil War had given their word not to bear arms against Parliament, and many, like Lord Astley, were therefore bound by oath not to take any part in the second conflict. So the victors in the Second Civil War showed little mercy to those who had brought war into the land again. On the evening of the surrender of Colchester, Parliamentarians had Sir Charles Lucas and Sir George Lisle shot. Parliamentary authorities sentenced the leaders of the Welsh rebels, Major-General Rowland Laugharne, Colonel John Poyer and Colonel Rice Powel to death, but executed only Poyer (25 April 1649), having selected him by lot. Of five prominent Royalist peers who had fallen into Parliamentary hands, three – the Duke of Hamilton, the Earl of Holland, and Lord Capel, one of the Colchester prisoners and a man of high character – were beheaded at Westminster on 9 March. Trial of Charles I for treason Charles's secret pacts and encouragement of supporters to break their parole caused Parliament to debate whether to return the King to power at all. Those who still supported Charles's place on the throne, such as the army leader and moderate Fairfax, tried again to negotiate with him. The Army, furious that Parliament continued to countenance Charles as a ruler, then marched on Parliament and conducted "Pride's Purge" (named after the commanding officer of the operation, Thomas Pride) in December 1648. Troops arrested 45 members and kept 146 out of the chamber. They allowed only 75 members in, and then only at the Army's bidding. This Rump Parliament received orders to set up, in the name of the people of England, a High Court of Justice for the trial of Charles I for treason. Fairfax, a constitutional monarchist, declined to have anything to do with the trial. He resigned as head of the army, so clearing Cromwell's road to power. At the end of the trial the 59 Commissioners (judges) found Charles I guilty of high treason as a "tyrant, traitor, murderer and public enemy". His beheading took place on a scaffold in front of the Banqueting House of the Palace of Whitehall on 30 January 1649. After the Restoration in 1660, nine of the surviving regicides not living in exile were executed and most others sentenced to life imprisonment. After the regicide, Charles, Prince of Wales as the eldest son was publicly proclaimed King Charles II in the Royal Square of St. Helier, Jersey, on 17 February 1649 (after a first such proclamation in Edinburgh on 5 February 1649). It took longer for the news to reach the trans-Atlantic colonies, with the Somers Isles (also known as Bermuda) becoming the first to proclaim Charles II King on 5 July 1649. Third English Civil War (1649–1651) Ireland Ireland had undergone continual war since the rebellion of 1641, with most of the island controlled by the Irish Confederates. Increasingly threatened by the armies of the English Parliament after Charles I's arrest in 1648, the Confederates signed a treaty of alliance with the English Royalists. The joint Royalist and Confederate forces under the Duke of Ormonde tried to eliminate the Parliamentary army holding Dublin by laying siege, but their opponents routed them at the Battle of Rathmines (2 August 1649). As the former Member of Parliament Admiral Robert Blake blockaded Prince Rupert's fleet in Kinsale, Cromwell could land at Dublin on 15 August 1649 with an army to quell the Royalist alliance. 
Cromwell's suppression of the Royalists in Ireland in 1649 is still remembered by many Irish people. After the Siege of Drogheda, the massacre of nearly 3,500 people – around 2,700 Royalist soldiers and 700 others, including civilians, prisoners and Catholic priests (Cromwell claimed all had carried arms) – became one of the historical memories that has driven Irish-English and Catholic-Protestant strife during the last three centuries. The Parliamentarian conquest of Ireland ground on for another four years until 1653, when the last Irish Confederate and Royalist troops surrendered. In the wake of the conquest, the victors confiscated almost all Irish Catholic-owned land and distributed it to Parliament's creditors, to Parliamentary soldiers who served in Ireland, and to English who had settled there before the war. Scotland The execution of Charles I altered the dynamics of the Civil War in Scotland, which had raged between Royalists and Covenanters since 1644. By 1649, the struggle had left the Royalists there in disarray and their erstwhile leader, the Marquess of Montrose, had gone into exile. At first, Charles II encouraged Montrose to raise a Highland army to fight on the Royalist side. However, when the Scottish Covenanters (who did not agree with the execution of Charles I and who feared for the future of Presbyterianism under the new Commonwealth) offered him the crown of Scotland, Charles abandoned Montrose to his enemies. However, Montrose, who had raised a mercenary force in Norway, had already landed and could not abandon the fight. He did not succeed in raising many Highland clans and the Covenanters defeated his army at the Battle of Carbisdale in Ross-shire on 27 April 1650. The victors captured Montrose shortly afterwards and took him to Edinburgh. On 20 May the Scottish Parliament sentenced him to death and had him hanged the next day. Charles II landed in Scotland at Garmouth in Morayshire on 23 June 1650 and signed the 1638 National Covenant and the 1643 Solemn League and Covenant shortly after coming ashore. With his original Scottish Royalist followers and his new Covenanter allies, Charles II became the greatest threat facing the new English republic. In response to the threat, Cromwell left some of his lieutenants in Ireland to continue the suppression of the Irish Royalists and returned to England. He arrived in Scotland on 22 July 1650 and proceeded to lay siege to Edinburgh. By the end of August, disease and a shortage of supplies had reduced his army, and he had to order a retreat towards his base at Dunbar. A Scottish army under the command of David Leslie tried to block the retreat, but Cromwell defeated them at the Battle of Dunbar on 3 September. Cromwell's army then took Edinburgh, and by the end of the year his army had occupied much of southern Scotland. In July 1651, Cromwell's forces crossed the Firth of Forth into Fife and defeated the Scots at the Battle of Inverkeithing (20 July 1651). The New Model Army advanced towards Perth, which allowed Charles, at the head of the Scottish army, to move south into England. Cromwell followed Charles into England, leaving George Monck to finish the campaign in Scotland. Monck took Stirling on 14 August and Dundee on 1 September. The next year, 1652, saw a mopping up of the remnants of Royalist resistance, and under the terms of the "Tender of Union", the Scots received 30 seats in a united Parliament in London, with General Monck as the military governor of Scotland. 
England Although Cromwell's New Model Army had defeated a Scottish army at Dunbar, Cromwell could not prevent Charles II from marching from Scotland deep into England at the head of another Royalist army. They marched to the west of England where English Royalist sympathies were strongest, but although some English Royalists joined the army, they were far fewer in number than Charles and his Scottish supporters had hoped. Cromwell finally engaged and defeated the new Scottish king at Worcester on 3 September 1651. Immediate aftermath After the Royalist defeat at Worcester, Charles II escaped via safe houses and an oak tree to France, and Parliament was left in de facto control of England. Resistance continued for a time in Ireland and Scotland, but with the pacification of England, resistance elsewhere did not threaten the military supremacy of the New Model Army and its Parliamentary paymasters. Political control During the Wars, the Parliamentarians established a number of successive committees to oversee the war effort. The first was the Committee of Safety, set up in July 1642. After the Anglo-Scottish alliance against the Royalists, the Committee of Both Kingdoms replaced the Committee of Safety between 1644 and 1648. Parliament dissolved the Committee of Both Kingdoms when the alliance ended, but its English members continued to meet as the Derby House Committee. A second Committee of Safety then replaced it. Episcopacy During the English Civil War, the role of bishops as wielders of political power and upholders of the established church became a matter of heated political controversy. John Calvin of Geneva had formulated a doctrine of Presbyterianism, which held that the offices of presbyter and episkopos in the New Testament were identical; he rejected the doctrine of apostolic succession. Calvin's follower John Knox brought Presbyterianism to Scotland when the Scottish church was reformed in 1560. In practice, Presbyterianism meant that committees of lay elders had a substantial voice in church government, as opposed to merely being subjects to a ruling hierarchy. This vision of at least partial democracy in ecclesiology paralleled the struggles between Parliament and the King. A body within the Puritan movement in the Church of England sought to abolish the office of bishop and remake the Church of England along Presbyterian lines. The Martin Marprelate tracts (1588–1589), applying the pejorative name of prelacy to the church hierarchy, attacked the office of bishop with satire that deeply offended Elizabeth I and her Archbishop of Canterbury John Whitgift. The vestments controversy also related to this movement, seeking further reductions in church ceremony, and labelling the use of elaborate vestments as "unedifying" and even idolatrous. King James I, reacting against the perceived contumacy of his Presbyterian Scottish subjects, adopted "No Bishop, no King" as a slogan; he tied the hierarchical authority of the bishop to the absolute authority he sought as King, and viewed attacks on the authority of the bishops as attacks on his authority. Matters came to a head when Charles I appointed William Laud as Archbishop of Canterbury; Laud aggressively attacked the Presbyterian movement and sought to impose the full Book of Common Prayer. The controversy eventually led to Laud's impeachment for treason by a bill of attainder in 1645 and subsequent execution. 
Charles also attempted to impose episcopacy on Scotland; the Scots' violent rejection of bishops and liturgical worship sparked the Bishops' Wars in 1639–1640. During the height of Puritan power under the Commonwealth and the Protectorate, episcopacy was formally abolished in the Church of England on 9 October 1646. The Church of England remained Presbyterian until the Restoration of the monarchy. English overseas possessions During the English Civil War, the English overseas possessions became highly involved. In the Channel Islands, the island of Jersey and Castle Cornet in Guernsey supported the King until a surrender with honour in December 1651. Although the newer, Puritan settlements in North America, notably Massachusetts, were dominated by Parliamentarians, the older colonies sided with the Crown. Friction between Royalists and Puritans in Maryland came to a head in the Battle of the Severn. The Virginia Company's settlements, Bermuda and Virginia, as well as Antigua and Barbados, were conspicuous in their loyalty to the Crown. Bermuda's Independent Puritans were expelled, settling the Bahamas under William Sayle as the Eleutheran Adventurers. Parliament passed An Act for prohibiting Trade with the Barbadoes, Virginia, Bermuda and Antego in October 1650. The Act also authorised Parliamentary privateers to act against English vessels trading with the rebellious colonies. The Parliament began assembling a fleet to invade the Royalist colonies, but many of the English islands in the Caribbean were captured by the Dutch and French in 1651 during the Second Anglo-Dutch War. Far to the North, Bermuda's regiment of Militia and its coastal batteries prepared to resist an invasion that never came. Built up inside the natural defence of a nearly impassable barrier reef, to fend off the might of Spain, these defences would have been a formidable obstacle for the Parliamentary fleet sent in 1651 under the command of Admiral Sir George Ayscue to subdue the trans-Atlantic colonies, but after the fall of Barbados the Bermudians made a separate peace that respected the internal status quo. The Parliament of Bermuda avoided the Parliament of England's fate during The Protectorate, becoming one of the oldest continuous legislatures in the world. Virginia's population swelled with Cavaliers during and after the English Civil War. Even so, Virginia Puritan Richard Bennett was made Governor answering to Cromwell in 1652, followed by two more nominal "Commonwealth Governors". The loyalty of Virginia's Cavaliers to the Crown was rewarded after the 1660 Restoration of the Monarchy when Charles II dubbed it the Old Dominion. Casualties Figures for casualties during this period are unreliable, but some attempt has been made to provide rough estimates. In England, a conservative estimate is that roughly 100,000 people died from war-related disease during the three civil wars. Historical records count 84,830 combat dead from the wars themselves. Counting in accidents and the two Bishops' wars, an estimate of 190,000 dead is achieved, out of a total population of about five million. It is estimated that from 1638 to 1651, 15–20% of all adult males in England and Wales served in the military, and around 4% of the total population died from war-related causes, compared to 2.23% in World War I. As was typical for the era, most combat deaths occurred in minor skirmishes rather than large pitched battles. 
There were a total of 645 engagements throughout the wars; 588 of these involved fewer than 250 casualties in total, with these 588 accounting for 39,838 fatalities (average count of less than 68) or nearly half of the conflict's combat deaths. There were only 9 major pitched battles (at least 1,000 fatalities) which in total accounted for 15% of casualties. An anecdotal example of perception of high casualties in England is to be found in the posthumously published writing (generally titled The History of Myddle), by a Shropshire man, Richard Gough (lived 1635–1723) of Myddle near Shrewsbury, who, writing in about 1701, commented of men from his rural home parish who joined the Royalist forces: "And out of these three townes [sic - ie townships], Myddle, Marton and Newton, there went noe less than twenty men, of which number thirteen were kill'd in the warrs". After listing those he recalled did not return home, four of whose exact fates were unknown, he concluded: "And if soe many dyed out of these 3 townes [townships] wee may reasonably guess that many thousands dyed in England in that warre." Figures for Scotland are less reliable and should be treated with caution. Casualties include the deaths of prisoners-of-war in conditions that accelerated their deaths, with estimates of 10,000 prisoners not surviving or not returning home (8,000 captured during and immediately after the Battle of Worcester were deported to New England, Bermuda and the West Indies to work for landowners as indentured labourers). There are no figures to calculate how many died from war-related diseases, but if the same ratio of disease to battle deaths from English figures is applied to the Scottish figures, a not unreasonable estimate of 60,000 people is achieved, from a population of about one million. Figures for Ireland are described as "miracles of conjecture". Certainly the devastation inflicted on Ireland was massive, with the best estimate provided by Sir William Petty, the father of English demography. Petty estimated that 112,000 Protestants and 504,000 Catholics were killed through plague, war and famine, giving an estimated total of 616,000 dead, out of a pre-war population of about one and a half million. Although Petty's figures are the best available, they are still acknowledged as tentative; they do not include an estimated 40,000 driven into exile, some of whom served as soldiers in European continental armies, while others were sold as indentured servants to New England and the West Indies. Many of those sold to landowners in New England eventually prospered, but many sold to landowners in the West Indies were worked to death. These estimates indicate that England suffered a 4 percent loss of population, Scotland a loss of 6 percent, while Ireland suffered a loss of 41 percent of its population. Putting these numbers into the context of other catastrophes helps to understand the devastation of Ireland in particular. The Great Famine of 1845–1852 resulted in a loss of 16 percent of the population, while during the Soviet famine and Holodomor of 1932–33 the population of the Soviet Ukraine fell by 14 percent. Popular gains Ordinary people took advantage of the dislocation of civil society in the 1640s to gain personal advantages. The contemporary guild democracy movement won its greatest successes among London's transport workers. Rural communities seized timber and other resources on the sequestrated estates of Royalists and Catholics, and on the estates of the royal family and church hierarchy. 
Some communities improved their conditions of tenure on such estates. The old status quo began a retrenchment after the end of the First Civil War in 1646, and more especially after the Restoration in 1660, but some gains were long-term. The democratic element introduced into the watermen's company in 1642, for example, survived with vicissitudes until 1827. Aftermath The wars left England, Scotland, and Ireland among the few countries in Europe without a monarch. In the wake of victory, many of the ideals (and many idealists) became sidelined. The republican government of the Commonwealth of England ruled England (and later all of Scotland and Ireland) from 1649 to 1653 and from 1659 to 1660. Between the two periods, and due to in-fighting among various factions in Parliament, Oliver Cromwell ruled over the Protectorate as Lord Protector (effectively a military dictator) until his death in 1658. On Oliver Cromwell's death, his son Richard became Lord Protector, but the Army had little confidence in him. After seven months the Army removed Richard, and in May 1659 it re-installed the Rump. However, military force shortly afterward dissolved this as well. After the second dissolution of the Rump, in October 1659, the prospect of a total descent into anarchy loomed as the Army's pretense of unity finally dissolved into factions. Into this atmosphere General George Monck, Governor of Scotland under the Cromwells, marched south with his army from Scotland. On 4 April 1660, in the Declaration of Breda, Charles II made known the conditions of his acceptance of the Crown of England. Monck organised the Convention Parliament, which met for the first time on 25 April 1660. On 8 May 1660, it declared that Charles II had reigned as the lawful monarch since the execution of Charles I in January 1649. Charles returned from exile on 23 May 1660. On 29 May 1660, the populace in London acclaimed him as king. His coronation took place at Westminster Abbey on 23 April 1661. These events became known as the Restoration. Although the monarchy was restored, it was still only with the consent of Parliament. So the civil wars effectively set England and Scotland on course towards a parliamentary monarchy form of government. The outcome of this system was that the future Kingdom of Great Britain, formed in 1707 under the Acts of Union, managed to forestall the kind of revolution typical of European republican movements which generally resulted in total abolition of their monarchies. Thus the United Kingdom was spared the wave of revolutions that occurred in Europe in the 1840s. Specifically, future monarchs became wary of pushing Parliament too hard, and Parliament effectively chose the line of royal succession in 1688 with the Glorious Revolution. Historical interpretations In the early decades of the 20th century, the Whig school was the dominant theoretical view. It explained the Civil War as resulting from centuries of struggle between Parliament (notably the House of Commons) and the Monarchy, with Parliament defending the traditional rights of Englishmen, while the Stuart monarchy continually attempted to expand its right to dictate law arbitrarily. The major Whig historian, S. R. Gardiner, popularised the idea that the English Civil War was a "Puritan Revolution", which challenged the repressive Stuart Church and prepared the way for religious toleration. So Puritanism was seen as the natural ally of a people preserving their traditional rights against arbitrary monarchical power. 
The Whig view was challenged and largely superseded by the Marxist school, which became popular in the 1940s and, in the work of historians such as Christopher Hill, saw the English Civil War as a bourgeois revolution. In the 1970s, revisionist historians challenged both the Whig and the Marxist theories, notably in the 1973 anthology The Origins of the English Civil War (Conrad Russell ed.). These historians focused on the minutiae of the years immediately before the civil war, returning to the contingency-based historiography of Clarendon's History of the Rebellion and Civil Wars in England. This, it was claimed, demonstrated that patterns of war allegiance did not fit either Whig or Marxist theories. Parliament was not inherently progressive, nor the events of 1640 a precursor for the Glorious Revolution. Many members of the bourgeoisie fought for the King, while many landed aristocrats supported Parliament. From the 1990s, a number of historians replaced the historical title "English Civil War" with "Wars of the Three Kingdoms" and "British Civil Wars", positing that the civil war in England cannot be understood apart from events in other parts of Britain and Ireland. King Charles I remains crucial, not just as King of England, but through his relationship with the peoples of his other realms. For example, the wars began when Charles forced an Anglican Prayer Book upon Scotland, and when this was met with resistance from the Covenanters, he needed an army to impose his will. This need for military funds forced Charles I to call an English Parliament, which was not willing to grant the needed revenue unless he addressed their grievances. By the early 1640s, Charles was left in a state of near-permanent crisis management, confounded by the demands of the various factions. For example, Charles finally made terms with the Covenanters in August 1641, but although this might have weakened the position of the English Parliament, the Irish Rebellion of 1641 broke out in October 1641, largely negating the political advantage he had obtained by relieving himself of the cost of the Scottish invasion. Hobbes' Behemoth Thomas Hobbes gave a much earlier historical account of the English Civil War in his Behemoth, written in 1668 and published in 1681. He assessed the causes of the war to be the conflicting political doctrines of the time. Behemoth offered a uniquely historical and philosophical approach to naming the catalysts for the war. It also attempted to explain why Charles I could not hold his throne and maintain peace in his kingdom. Hobbes analysed in turn the following aspects of English thought during the war: the opinions of divinity and politics that spurred rebellion.
In algebraic notation the multiplication symbol is usually omitted when a coefficient is used. For example, 3 × x² is written as 3x², and 2 × x × y may be written 2xy. Usually terms with the highest power (exponent) are written on the left, for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²). Likewise when the exponent (power) is one (e.g. 3x¹ is written 3x). When the exponent is zero, the result is always 1 (e.g. x⁰ is always rewritten to 1). However 0⁰, being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents. Alternative notation Other types of notation are used in algebraic expressions when the required formatting is not available, or cannot be implied, such as where only letters and symbols are available. As an illustration of this, while exponents are usually formatted using superscripts, e.g., x², in plain text, in the TeX mark-up language, and in some programming languages such as Lua, the caret symbol "^" represents exponentiation, so x² is written as "x^2". In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x² is written as "x**2". Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used, for example, 3x is written "3*x". Concepts Variables Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons. Variables may represent numbers whose values are not yet known. For example, if the temperature of the current day, C, is 20 degrees higher than the temperature of the previous day, P, then the problem can be described algebraically as C = P + 20. Variables allow one to describe general problems, without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to 60 × 5 = 300 seconds. A more general (algebraic) description may state that the number of seconds, s = 60 × m, where m is the number of minutes. Variables allow one to describe mathematical relationships between quantities that may vary. For example, the relationship between the circumference, c, and diameter, d, of a circle is described by π = c/d. Variables allow one to describe some mathematical properties. For example, a basic property of addition is commutativity which states that the order of numbers being added together does not matter. Commutativity is stated algebraically as (a + b) = (b + a). Simplifying expressions Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example, Added terms are simplified using coefficients. For example, x + x + x can be simplified as 3x (where 3 is a numerical coefficient). Multiplied terms are simplified using exponents. For example, x × x × x is represented as x³. Like terms are added together, for example, 2x² + 3ab − x² + ab is written as x² + 4ab, because the terms containing x² are added together, and the terms containing ab are added together. Brackets can be "multiplied out", using the distributive property. For example, x(2x + 3) can be written as (x × 2x) + (x × 3), which can be written as 2x² + 3x. Expressions can be factored. For example, 6x⁵ + 3x², by dividing both terms by 3x², can be written as 3x²(2x³ + 1). Equations An equation states that two expressions are equal using the symbol for equality, = (the equals sign). One of the best-known equations describes Pythagoras' law relating the length of the sides of a right angle triangle: a² + b² = c². This equation states that c², representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b. An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. x² − 1 = 8 is true only for x = 3 and x = −3. The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving. Another type of equation is inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are: a > b, where > represents 'greater than', and a < b, where < represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped. Properties of equality By definition, equality is an equivalence relation, meaning it has the properties (a) reflexive (i.e. b = b), (b) symmetric (i.e. if a = b then b = a), (c) transitive (i.e. if a = b and b = c then a = c). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties: if a = b and c = d then a + c = b + d and ac = bd; if a = b then a + c = b + c and ac = bc; more generally, for any function f, if a = b then f(a) = f(b). 
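The simplification rules above can be experimented with in any computer algebra system. The following short sketch uses Python's SymPy library (a choice made here purely for illustration; it is not mentioned in the text above) to reproduce the examples of combining like terms, expanding brackets, factoring, and distinguishing an identity from a conditional equation.

```python
from sympy import symbols, simplify, expand, factor, solveset, Eq, S

x, a, b = symbols('x a b')

# Like terms are combined automatically: x + x + x becomes 3*x
print(simplify(x + x + x))                  # 3*x

# "Multiplying out" brackets uses the distributive property
print(expand(x*(2*x + 3)))                  # 2*x**2 + 3*x

# Factoring reverses the expansion
print(factor(6*x**5 + 3*x**2))              # 3*x**2*(2*x**3 + 1)

# An identity such as (a + b) = (b + a) holds for all values,
# so the difference of its two sides simplifies to zero
print(simplify((a + b) - (b + a)))          # 0

# A conditional equation holds only for particular values
print(solveset(Eq(x**2 - 1, 8), x, domain=S.Reals))   # {-3, 3}
```

Note that the double-asterisk exponent notation (x**2) in the code is exactly the plain-text convention for exponents described above.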
Properties of inequality The relations less than (<) and greater than (>) have the property of transitivity: If a < b and b < c then a < c; If a < b and c < d then a + c < b + d; If a < b and c > 0 then ac < bc; If a < b and c < 0 then bc < ac. By reversing the inequation, < and > can be swapped, for example: a < b is equivalent to b > a. Substitution Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for a in the expression a × 5 makes a new expression 3 × 5 with meaning 15. Substituting the terms of a statement makes a new statement. When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence, definitions can be made in symbolic terms and interpreted through substitution: if a² := a × a is meant as the definition of a² as the product of a with itself, substituting 3 for a informs the reader of this statement that 3² means 3 × 3 = 9. Often it's not known whether the statement is true independently of the values of the terms. And, substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement x + 1 = 0, if x is substituted with 1, this implies 1 + 1 = 2 = 0, which is false, which implies that if x + 1 = 0 then x cannot be 1. If x and y are integers, rationals, or real numbers, then xy = 0 implies x = 0 or y = 0. Consider abc = 0. Then, substituting a for x and bc for y, we learn a = 0 or bc = 0. Then we can substitute again, letting x = b and y = c, to show that if bc = 0 then b = 0 or c = 0. Therefore, if abc = 0, then a = 0 or (b = 0 or c = 0), so abc = 0 implies a = 0 or b = 0 or c = 0. If the original fact were stated as "ab = 0 implies a = 0 or b = 0", then when saying "consider abc = 0," we would have a conflict of terms when substituting. Yet the above logic is still valid to show that if abc = 0 then a = 0 or b = 0 or c = 0 if, instead of letting a = a and b = bc, one substitutes a for a and bc for b (and with bc = 0, substituting b for a and c for b). This shows that substituting for the terms in a statement isn't always the same as letting the terms from the statement equal the substituted terms. In this situation it's clear that if we substitute an expression a into the a term of the original equation, the a substituted does not refer to the a in the statement "ab = 0 implies a = 0 or b = 0." Solving algebraic equations The following sections lay out examples of some of the types of algebraic equations that may be encountered. Linear equations with one variable Linear equations are so called because, when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider: Problem in words: If you double the age of a child and add 4, the resulting answer is 12. How old is the child? Equivalent equation: 2x + 4 = 12, where x represents the child's age. To solve this kind of equation, the technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable. This problem and its solution are as follows: subtracting 4 from both sides gives 2x = 8, and dividing both sides by 2 gives x = 4. In words: the child is 4 years old. The general form of a linear equation with one variable can be written as ax + b = c. Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by x = (c − b)/a. 
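As a small illustration of the procedure just described, the general solution x = (c − b)/a can be written as a few lines of code. This is only a sketch added for clarity; the function name and error handling are not part of the original text.

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x, mirroring the steps in the text:
    subtract b from both sides, then divide both sides by a."""
    if a == 0:
        raise ValueError("a must be non-zero for the equation to be linear in x")
    return (c - b) / a

# The child's-age problem, 2x + 4 = 12:
print(solve_linear(2, 4, 12))   # 4.0 -> the child is 4 years old
```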
Linear equations with two variables A linear equation with two variables has many (i.e. an infinite number of) solutions. For example: Problem in words: A father is 22 years older than his son. How old are they? Equivalent equation: y = x + 22, where y is the father's age and x is the son's age. That cannot be worked out by itself. If the son's age was made known, then there would no longer be two unknowns (variables). The problem then becomes a linear equation with just one variable, that can then be solved as described above.
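To make the "infinitely many solutions" point concrete, the following snippet (illustrative only; the sample ages are not taken from the original text) lists a few solution pairs of y = x + 22 and then shows how fixing the son's age pins down the father's age.

```python
# y = x + 22, with x the son's age and y the father's age.
# On its own, the equation has infinitely many solution pairs:
for x in (1, 10, 30):
    print(f"son {x}, father {x + 22}")

# Once the son's age is known, only one unknown remains, so the
# equation reduces to a linear equation in a single variable:
x = 12
print(x + 22)   # 34 -> the father is 34 when the son is 12
```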
when a coefficient is used. For example, is written as , and may be written . Usually terms with the highest power (exponent), are written on the left, for example, is written to the left of . When a coefficient is one, it is usually omitted (e.g. is written ). Likewise when the exponent (power) is one, (e.g. is written ). When the exponent is zero, the result is always 1 (e.g. is always rewritten to ). However , being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents. Alternative notation Other types of notation are used in algebraic expressions when the required formatting is not available, or can not be implied, such as where only letters and symbols are available. As an illustration of this, while exponents are usually formatted using superscripts, e.g., , in plain text, and in the TeX mark-up language, the caret symbol "^" represents exponentiation, so is written as "x^2"., as well as some programming languages such as Lua. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so is written as "x**2". Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used, for example, is written "3*x". Concepts Variables Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons. Variables may represent numbers whose values are not yet known. For example, if the temperature of the current day, C, is 20 degrees higher than the temperature of the previous day, P, then the problem can be described algebraically as . Variables allow one to describe general problems, without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to seconds. A more general (algebraic) description may state that the number of seconds, , where m is the number of minutes. Variables allow one to describe mathematical relationships between quantities that may vary. For example, the relationship between the circumference, c, and diameter, d, of a circle is described by . Variables allow one to describe some mathematical properties. For example, a basic property of addition is commutativity which states that the order of numbers being added together does not matter. Commutativity is stated algebraically as . Simplifying expressions Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example, Added terms are simplified using coefficients. For example, can be simplified as (where 3 is a numerical coefficient). Multiplied terms are simplified using exponents. For example, is represented as Like terms are added together, for example, is written as , because the terms containing are added together, and, the terms containing are added together. Brackets can be "multiplied out", using the distributive property. For example, can be written as which can be written as Expressions can be factored. For example, , by dividing both terms by can be written as Equations An equation states that two expressions are equal using the symbol for equality, (the equals sign). 
Equations

An equation states that two expressions are equal using the symbol for equality, = (the equals sign). One of the best-known equations describes Pythagoras' theorem, relating the lengths of the sides of a right-angled triangle:

a² + b² = c²

This equation states that c², representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides, whose lengths are represented by a and b.

An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. x² − 1 = 8 is true only for x = 3 and x = −3. The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving.

Another type of equation is an inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are >, which represents 'greater than', and <, which represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped.

Properties of equality

By definition, equality is an equivalence relation, meaning it has the properties (a) reflexive (i.e. b = b), (b) symmetric (i.e. if a = b then b = a), and (c) transitive (i.e. if a = b and b = c then a = c). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first, and the statement will remain true.
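As a small numerical spot-check (plain Python, added here for illustration and not taken from the original text), the snippet below distinguishes a conditional equation from an identity and shows the sign flip when an inequality is multiplied by a negative number; the particular values are arbitrary.

```python
# Conditional equation: x² − 1 = 8 holds only for particular values of x.
for x in (-3, 0, 3):
    print(x, x**2 - 1 == 8)   # True for x = -3 and x = 3, False for x = 0

# Identity: a + b = b + a holds for every tested choice of a and b.
print(all(a + b == b + a for a in range(-5, 6) for b in range(-5, 6)))  # True

# Multiplying both sides of an inequality by a negative number flips its direction.
a, b = 2, 5                   # a < b
print(a < b, -3*a > -3*b)     # True True  (the '<' has become '>')
```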
Event-related potential, a measured brain response
Exposure and response prevention, a treatment method in behavioral therapy
People
Erp (Pict), father of Drest I of the Picts
Places
Erp, Ariège, a village in southern France
Erp (Germany), a village in Germany
Erp, Netherlands, a town in the Netherlands
Politics and economics
Effective rate of protection of tariffs
Ejército Revolucionario del Pueblo (disambiguation) (Spanish for People's Revolutionary Army), in several Latin American countries
Equity risk
which he had used since his time as a writer for the Harvard Lampoon. Two mysteries remain about the poem: whether Casey and Mudville were based on a real person or place, and, if so, their actual identities. On March 31, 2007, Katie Zezima of The New York Times wrote an article called "In 'Casey' Rhubarb, 2 Cities Cry 'Foul!'" on the competing claims of two towns to such renown: Stockton, California, and Holliston, Massachusetts. On the possible model for Casey, Thayer dismissed the notion that any single living baseball player was an influence. However, late-1880s Boston star Mike "King" Kelly likely served as a model for Casey's baseball situations. Besides being a native of a town close to Boston, Thayer, as a San Francisco Examiner baseball reporter in the off-season of 1887–88, covered exhibition games featuring Kelly. During November 1887, some of his reportage about a Kelly at-bat has the same ring as Casey's famous at-bat in the poem. A 2004 book by Howard W. Rosenberg, Cap Anson 2: The Theatrical and Kingly Mike Kelly: U.S. Team Sport's First Media Sensation and Baseball's Original Casey at the Bat, reprints a 1905 Thayer letter to a Baltimore scribe who was asking about the poem's roots. In the letter, Thayer named Kelly (d. 1894) as having shown "impudence" in claiming to have inspired it. Rosenberg argues that if Thayer still felt offended, he may have later denied Kelly as an influence. Kelly had also performed as a vaudeville actor, and recited the poem dozens of times. The first public performance of the poem was on August 14, 1888, by actor De Wolf Hopper, on Thayer's 25th birthday. Thayer recited
the poem at a Harvard
C/US) Stewart Conn (born 1936, S) Paul Conneally (born 1959, E) Karen Connelly (born 1969, C) Kevin Connolly (born 1962, C) Susan Connolly (born 1956, Ir) Robert Conquest (1917–2015, E) Tony Conran (1931–2013, W) Henry Constable (1562–1613, E/F) David Constantine (born 1944, E) Eliza Cook (1818–1889, E) Elizabeth Cook-Lynn (born 1930, US) Sophie Cooke (born 1976, S) Ina Coolbrith (1841–1928, US) Dennis Cooley (born 1944, C) Clark Coolidge (born 1939, US) Afua Cooper (born 1957, J/C) Thomas Cooper (1805–1892, E) Jack Cope (1913–1991, SA) Wendy Cope (born 1945, E) Judith Copithorne (born 1939, C) Robert Copland (fl. 1508–1547, E) A. E. Coppard (1878–1957, E) Julia Copus (born 1969, E) Richard Corbet (1582–1635, E) Cid Corman (1924–2004, US) Alfred Corn (born 1943, US) Adam Cornford (born 1950, E) Frances Cornford (1886–1960, E) Francis M. Cornford (1874–1943, E) John Cornford (1915–1936, E) Joe Corrie (1894–1968, S) Gregory Corso (1930–2001, US) Jayne Cortez (1936–2012, US) William Johnson Cory (1823–1892, E) Louisa Stuart Costello (1799–1877, Ir/F) Charles Cotton (1630–1687, E) Anna Couani (born 1948, A) Anne Ross Cousin (1824–1906, E/S) Dani Couture (born 1978, C) Thomas Cowherd (1817–1907, C) Abraham Cowley (1618–1667, E) Hannah Cowley (1743–1809, E) Malcolm Cowley (1898–1989, US) Dorothy Cowlin (1911–2010, E) William Cowper (1731–1800, E) George Crabbe (1754–1832, E) Christine Craig (born 1943, J) Helen Craik (c. 1751–1825, S/E) Hart Crane (1899–1932, US) Stephen Crane (1871–1900, US) Richard Crashaw (1613–1649, E) Isabella Valancy Crawford (1846–1887, C) Robert Crawford (1868–1930, A) Robert Crawford (born 1959, S) Richard Crawley (1840–1893, W/E) Morri Creech (born 1970, US) Robert Creeley (1926–2006, US) Caroline de Crespigny (1797–1861, E/G) Walter D'Arcy Cresswell (1896–1960, NZ) Louise Crisp (born 1957, A) Ann Batten Cristall (1769–1848, E) Andy Croft (born 1956, E) Julian Croft (born 1941, A) Alison Croggon (born 1962, A) Jeremy Cronin (born 1949, SA) M. T. C. Cronin (born 1963, A) Lynn Crosbie (born 1963, C) Camilla Dufour Crosland (1812–1895, E) Zora Cross (1890–1964, A) Aleister Crowley (1875–1947, E) Andrew Crozier (1943–2008, E) Lorna Crozier (Lorna Uher, born 1948, C) Helen Cruickshank (1886–1975, S) Michael Crummey (born 1965, C) Julie Crysler (living, C) Catherine Ann Cullen (living, Ir) Countee Cullen (1903–1946, US) Nancy Jo Cullen (living, C) Patrick Cullinan (1932–2011, SA) E. E. Cummings (1894–1962, US) Gary Cummiskey (born 1963, SA) Allan Cunningham (1784–1842, S/E) J. V. Cunningham (1911–1985, US) John Cunningham (1729–1773, Ir/E) Allen Curnow (1911–2001, NZ) Margaret Curran (1887–1962, A) Jen Currin (living, US/C) Tony Curtis (born 1946, W) Tony Curtis (born 1955, Ir) James Cuthbertson (1851–1910, A) Ivor Cutler (1923–2006, S) Lidija Cvetkovic (born 1967, A) Kayla Czaga (born 1989, C) D Da–Do H.D. (Hilda Doolittle, 1886–1961, US) Cyril Dabydeen (living, Gu/C) David Dabydeen (born 1955, Gu) Kalli Dakos (living, C) Victor Daley (1858–1905, A) Mary Dalton (born 1950, C) Pádraig J. Daly (born 1943, Ir) Raymond Garfield Dandridge (1882/1883–1930, US) Joseph A. Dandurand (living, C) Achmat Dangor (born 1948, SA) Samuel Daniel (1562–1619, E) David Daniels (1933–2008, US) Jeffrey Daniels (living, US) George Darley (1795–1846, Ir) Tina Darragh (born 1950, US) Keki N. 
Daruwalla (born 1937, In) Erasmus Darwin (1731–1802, E) Elizabeth Daryush (1887–1977, E) Robert von Dassanowsky (Robert Dassanowsky) (born 1965, US) Beverley Daurio (born 1953, C) William Davenant (1606–1668, E) Guy Davenport (1927–2005, US) Frank Davey (born 1940, C) Donald Davidson (1893–1968, US) John Davidson (1857–1909, S/E) Lucretia Maria Davidson (1808–1825, US) Michael Davidson (born 1944, US) Donald Davie (1922–1995, E) Alan Davies (born 1951, US) Deborah Kay Davies (living, W) Hugh Sykes Davies (1909–1984, E) Idris Davies (1905–1953, W) John Davies (1569–1626, E) W. H. Davies (1871–1940, W) Nicholas Flood Davin (1840–1901, C) Olive Dehn (1914–2007, E) Beatrice Deloitte Davis (1909–1992, A) Jon Davis (living, US) Norma Davis (1905–1945, A) Tanya Davis (living, C) Thomas Osborne Davis (1814–1845, Ir) Edward Davison (1898–1970, S/US) Peter Davison (1928–2004, US) Bruce Dawe (1930–2020, A) Kwame Dawes (born 1962, US) Tom Dawe (born 1940, C) Jeffery Day (1896–1918, E) Sarah Day (born 1958, A) Cecil Day-Lewis (1904–1972, E) Adriana de Barros (born 1976, C) Somerset de Chair (1911–1995, E) Jean Louis De Esque (1879–1956, US) Madeline DeFrees (1919–2015, US) Celia de Fréine (born 1948, Ir) Ingrid de Kok (born 1951, SA) Walter de la Mare (1873–1956, E) Christine De Luca (born 1947, S) Sadiqa de Meijer (born 1977, C) Edward de Vere, 17th Earl of Oxford (1550–1604, E) Phillippa Yaa de Villiers (born 1966, SA) James Deahl (born 1945, C) Dulcie Deamer (1890–1972, A) John F. Deane (born 1943, Ir) Joel Deane (born 1969, A) Patrick Deeley (born 1953, Ir) Madeline DeFrees (1919–2015, US) Thomas Dekker (1575–1641, E) Greg Delanty (born 1958, Ir/US) Kris Demeanor (living, C) Barry Dempster (born 1952, C) Joe Denham (living, C) John Denham (1615–1669, E) C. J. Dennis (1876–1938, A) John Dennison (born 1978, NZ) Tory Dent (1958–2005, US) Enid Derham (1882–1941, A) Thomas Dermody (1775–1802, Ir) Toi Derricotte (born 1941, US) Heather Derr-Smith (born 1971, US) Michelle Desbarats (living, C) Babette Deutsch (1895–1982, US) James Devaney (1890–1976, A) Mary Deverell (1731–1805, E) Denis Devlin (1908–1959, Ir) George E. Dewar (1895–1969, NZ) Christopher Dewdney (born 1951, C] Imtiaz Dharker (born 1954, P/W) Pier Giorgio Di Cicco (1949–2019, C) Mary di Michele (born 1949, C) Diane di Prima (1934–2020, US) Ann Diamond (living, C) Natalie Diaz (born 1978, US) Anne Dick (died 1741, S) Jennifer K Dick (born 1970, US) James Dickey (1923–1997, US) Adam Dickinson (living, C) Emily Dickinson (1830–1886, US) Matthew Dickman (born 1975, US) Michael Dickman (born 1975, US) Robert Dickson (1944–2007, C) Peter Didsbury (born 1946, E) Modikwe Dikobe (1913 – unknown year, SA) Des Dillon (living, S) John Dillon (1851–1927, Ir) B. R. Dionysius (born 1969, A) Ray DiPalma (1943–2016, US) Thomas M. 
Disch (born 1940, US) Chitra Banerjee Divakaruni (born 1956, In/US) Isobel Dixon (born 1969, SA/E) Sarah Dixon (1671–1765, E) William Hepworth Dixon (1821–1879, E) Angifi Dladla (born 1950, SA) Tim Dlugos (1950–1990, US) Kildare Dobbs (1923–2013, C) Henry Austin Dobson (1840–1921, E) Rosemary Dobson (1920–2012, A) Stephen Dobyns (born 1941, US) Jeramy Dodds (born 1974, C) Robert Dodsley (1703–1764, E) Pete Doherty (born 1979, E) Digby Mackworth Dolben (1848–1867, E) Joe Dolce (born 1947, US/A) Don Domanski (born 1950, C) Magie Dominic (born 1944, C) Jeffery Donaldson (living, C) John Donaldson (Jon Inglis, 1921–1989, E) John Donne (1572–1631, E) David Donnell (born 1939, C) Timothy Donnelly (born 1969, US) Gerard Donovan (born 1959, Ir/E) Theo Dorgan (born 1953, Ir) Ed Dorn (1929–1999, US) Catherine Ann Dorset (1752–1834, E) Candas Jane Dorsey (born 1952, C) Mark Doty (born 1953, US) Clive Doucet (born 1946, C) Sarah Doudney (1841–1926, E) Lucy Dougan (born 1966, A) Charles Montagu Doughty (1843–1926, E) Lord Alfred Douglas (1870–1945, E) Alice May Douglas (1865–1943, US) Gavin Douglas (c. 1474–1522, S) George Brisbane Scott Douglas (1856–1935, S) Keith Douglas (1920–1944, E) Orville Lloyd Douglas (born 1976, C) Rita Dove (born 1952, US) Basil Dowling (1910–2000, NZ) Finuala Dowling (born 1962, SA) Gordon Downie (1964–2017, C) Ellen Mary Patrick Downing (1828–1869, Ir) Ernest Dowson (1867–1900, E) Francis Hastings Doyle (1810–1888, E) Kirby Doyle (1932–2003, US) Dr–Dy Michael Dransfield (1948–1973, A) Jane Draycott (born 1954, E) Michael Drayton (1563–1631, E) John Swanwick Drennan (1809–1893, Ir) William Drennan (1754–1820, Ir) Adam Drinan (also Joseph Macleod, 1903–1984, E) John Drinkwater (1882–1937, E) William Drummond of Hawthornden (1585–1649, S) William Henry Drummond (1854–1907, C) John Dryden (1631–1700, E) W. E. B. Du Bois (1868–1963, US) I. D. du Plessis (1900–1981, SA) Klara du Plessis (living, SA/C) Norman Dubie (born 1945, US) Stephen Duck (c. 1705–1756, E) Louis Dudek (1918–2001, S) Carol Ann Duffy (born 1955, S) Charles Gavan Duffy (1816–1903, Ir/A) Maureen Duffy (born 1933, E) Alan Dugan (1923–2003, J/US) Michael Dugan (1947–2006, A) Sasha Dugdale (born 1974, E) Eileen Duggan (1894–1972, NZ) Laurie Duggan (born 1949, A) Jas H. Duke (1939–1992, A) Richard Duke (1658–1711, E) Tug Dumbly (Geoff Forrester, living, A) Marilyn Dumont (born 1955, C) Paul Laurence Dunbar (1872–1906, US) William Dunbar (1459 or 1460 – c. 1530, S) Andrew Duncan (born 1956, E) Robert Duncan (1919–1988, US) Camille Dungy (born 1972, US) Helen Dunmore (1952–2017, E) Douglas Dunn (born 1942, S) Max Dunn (died 1963, A) Stephen Dunn (1939–2021, US) Joe Dunthorne (born 1982, W) Paul Durcan (born 1944, Ir) Lawrence Durrell (1912–1990, E) Anne Dutton (1692–1765, E) Geoffrey Dutton (1922–1998, A) Stuart Dybek (born 1942, US) Edward Dyer (1543–1607, E) John Dyer (1699–1758, W) Bob Dylan (born 1941, US) Edward Dyson (1865–1931, A) E Joan Adeney Easdale (1913–1998, E) Evelyn Eaton (1902–1983, C) Richard Eberhart (1904–2005, US) Emily Eden (1797–1869, E) Helen Parry Eden (1885–1960, E) Stephen Edgar (born 1951, A) Lauris Edmond (1924–2000, NZ) Russell Edson (1928–2014, US) Richard Edwardes (c. 1523–1566, E) Dic Edwards (born 1953, W) Jonathan Edwards (born 1979, W) Rhian Edwards (living, W/E) Helen Merrill Egerton (1866-1951, C) Terry Ehret (born 1955, US) Vic Elias (1948–2006, US/C) Anne Elder (1918–1976, A) George Eliot (Mary Anne Evans, 1819–1880, E) T. S. Eliot (1888–1965, US/E) Elizabeth F. 
Ellet (1818–1877, US) Charlotte Elliot (1839–1880, S) David Elliott (1923–1999, C) Jean Elliot (1727–1805, S) Ebenezer Elliott (1781–1849, E) George Ellis (1753–1815, E) Royston Ellis (born 1941, E) Chris Else (born 1942, NZ) Rebecca Elson (1960–1999, C) Crispin Elsted (living, C) Claudia Emerson (1957–2014, US) Ralph Waldo Emerson (1803–1882, US) Chris Emery (born 1963, E) William Empson (1906–1984, E) Paul Engle (1908–1991, US) John Ennis (born 1944, Ir) Karen Enns (living, C) D. J. Enright (1920–2002, E) Riemke Ensing (born 1939, NZ) Theodore Enslin (1925–2011, US) Louise Erdrich (born 1954, US) Ralph Erskine (1685–1752, S) Clayton Eshleman (1935–2021, US) Martín Espada (born 1957, US) Ramabai Espinet (born 1948, T) Jill Alexander Essbaum (born 1971, US) Maggie Estep (1963–2014, US) George Etherege (1635–1691, E) Michael Estok (1939–1989, C) Jerry Estrin (1947–1993, US) Anne Evans (1820–1870, E) Christine Evans (born 1943, E/W) George Essex Evans (1863–1909, A) Margiad Evans (Peggy Whistler, 1909–1958, E) Mari Evans (1919–2017, US) Sebastian Evans (1830–1909, E) William Everson (Brother Antoninus, 1912–1994, US) Gavin Ewart (1916–1995, E) John K. Ewers (1904–1978, A) Elisabeth Eybers (1915–2007, SA/Nt) F Frederick William Faber (1814–1863, E) Diane Fahey (born 1945, A) Ruth Fainlight (born 1931, US/E) Kingsley Fairbridge (1885–1924, SA) A. R. D. Fairburn (1904–1957, NZ) Maria and Harriet Falconar (born c. 1771–1774, E or S) William Falconer (1732–1769, S) Padraic Fallon (1905–1974, Ir) Catherine Maria Fanshawe (1765–1834, E) U. A. Fanthorpe (1929–1909, E) Patricia Fargnoli (born 1937, US) Eleanor Farjeon (1881–1965, E) Fiona Farrell (born 1947, NZ) John Farrell (1851–1904, A) John Farrell (1968–2010, US) Michael Farrell (born 1965, A) Katie Farris (born 1983, US) Margaretta Faugères (1771–1801, US) Jessie Redmon Fauset (1882–1961, US) Brian Fawcett (born 1944, C) Elaine Feeney (living, Ir) Elaine Feinstein (1930–2019, E) Alison Fell (born 1944, S) Charles Fenerty (c. 1821–1892, C) Elijah Fenton (1683–1730, E) James Fenton (born 1931, NI) James Fenton (born 1949, E) Richard Fenton (1747–1821, W) Gus Ferguson (1940–2020, SA) Samuel Ferguson (1810–1886, Ir) Robert Fergusson (1750–1774, S) Lawrence Ferlinghetti (1919–2021, US) Ferron (Deborah Foisy, born 1952, C) George Fetherling (born 1949, C) Michael Field (Katherine Bradley, 1846–1914, and Edith Cooper, 1862–1913, E) Henry Fielding (1707–1754, E) Connie Fife (1961–2017, C) Anne Finch, Countess of Winchilsea (1661–1720, E) Annie Finch (born 1956, US) Peter Finch (living, W) Robert Finch (1900–1995, C) Ian Hamilton Finlay (1925–2006, S) Joan Finnigan (1925–2007, C) Jon Paul Fiorentino (living, C) Catherine Fisher (born 1957, W) Roy Fisher (1930–2017, E) Edward FitzGerald (1809–1883, E) Judith Fitzgerald (1952–2015, C) R. D. Fitzgerald (1902–1987, A) Robert Fitzgerald (1910–1985, US) Richard FitzPatrick (1748–1813, Ir/E) Roderick Flanagan (1828–1862, A) James Elroy Flecker (1884–1915, E) Marjorie Fleming (1803–1811, S) Giles Fletcher (c. 1586–1623, E) Giles Fletcher, the Elder (c. 1548–1611, E) John Fletcher (1579–1625, E) John Gould Fletcher (1886–1950, US) Phineas Fletcher (1582–1650, E) Maria De Fleury (c. 1754 – c. 1794, E) F. S. Flint (1885–1960, E) Alice Flowerdew (1759–1830, E) Lionel Fogarty (born 1958, A) Jack Foley (born 1940, US) Mary Hannay Foott (1846–1918, A) John Forbes (1950–1998, A) Carolyn Forché (born 1950, US) Ford Madox Ford (1873–1939, E) John Ford (1586–1639, E) John M. 
Ford (1957–2006, US) Robert Ford (1915–1998, C) Mabel Forrest (1872–1935, A) William Forrest (fl. 1581, E) Veronica Forrest-Thomson (1947–1975, S) Gary Jeshel Forrester (born 1946, NZ) William Forster (1818–1882, A) John Foulcher (born 1952, A) Ellen Thorneycroft Fowler (1860–1920, E) William Fowler (c. 1560–1612, S) Kate Fox (born 1975, E) Len Fox (1905–2004, A) Janet Frame (1924–2004, NZ) Ruth France (1913–1968, NZ) Matthew Francis (born 1956, E/W) Robert Francis (1901–1987, US) George Sutherland Fraser (1915–1980, S) Gregory Fraser (living, US) Raymond Fraser (1941–2018, C) Benjamin Frater (1979–2007, A) Brentley Frazer (born 1972, A) Grace Beacham Freeman (1916–2002, US) John Freeman (1880–1929, E) Nicholas Freeston (1907–1978, E) Patrick Friesen (born 1946, C) Robert Frost (1874–1963, US) Gwen Frostic (1906–2001, US) Gene Frumkin (1928–2007, US) Mark Frutkin (born 1948, US/C) Sheila Meiring Fugard (born 1932, SA) Ethel Romig Fuller (1883–1965, US) John Fuller (born 1937, E) Roy Fuller (1912–1991, E) Mary Eliza Fullerton (1868–1946, A) Alice Fulton (born 1952, US) Graham Fulton (born 1959, S) Robin Fulton (born 1937, S) Ulpian Fulwell (1545/1546 – before 1586, E) Richard Furness (1791–1857, E) G Ga–Go Frances Dana Barker Gage (1808–1884, US) Dunstan Gale (fl. 1596, E) Kate Gale (living, US) James Galvin (born 1951, US) Patrick Galvin (1927–2011, Ir) Forrest Gander (born 1956, US) Robert Garioch (1909–1981, S) Hamlin Garland (1860–1940, US) Raymond Garlick (1926–2011, W) Richard Garnett (1835–1906, E) Jean Garrigue (1912–1972, US) Samuel Garth (1661–1719, E) George Gascoigne (1525–1577, E) David Gascoyne (1916–2001, E/F) Bill Gaston (born 1953, C) John Gay (1685–1732, E) Ross Gay (born 1974, US) William Gay (1865–1897, S/A) Alexander Geddes (1737–1802, S) Leon Gellert (1892–1977, A) W. R. P. George (1912–2006, W) Dan Gerber (born 1940, US) Amy Gerstler (born 1956, US) Marty Gervais (living, C) Charles Ghigna (born 1946, US) Monk Gibbon (1896–1987, Ir) Reginald Gibbons (born 1947, US) Stella Gibbons (1902–1989, E) Ivy Gibbs (c. 1886–1966, NZ) Kahlil Gibran (1883–1931, L/US) G. H. Gibson (Ironbark, 1846–1921, A) Wilfrid Wilson Gibson (1878–1962, E) Elsa Gidlow (1898–1986, C) Angus Morrison Gidney (1803–1882, C) Gerry Gilbert (1936–2009, C) Jack Gilbert (1925–2012, US) Kevin Gilbert (1933–1993, A) W. S. Gilbert (1836–1911, E) Ellen Gilchrist (born 1935, US) George Gilfillan (1813–1878, S) Charlotte Perkins Gilman (1860–1935, US) Mary Gilmore (1865–1962, A) Allen Ginsberg (1926–1997, US) Dana Gioia (born 1950, US) Nikki Giovanni (born 1943, US) Jesse Glass (born 1954, US/Jp) John Glassco (1909–1981, C) Madeline Gleason (1903–1979, US) Duncan Glen (1933–2008, S) William Glen (1789–1826, S) Lorri Neilsen Glenn (living, C) Denis Glover (1912–1980, NZ) Louise Glück (born 1943, US) Rumer Godden (1907–1998, In/E) Patricia Goedicke (1931–2006, US) Oliver St. John Gogarty (1878–1957, Ir) Albert Goldbarth (born 1948, US) Kenneth Goldsmith (born 1961, US) Oliver Goldsmith (1728–1774, Ir/E) Oliver Goldsmith (1794–1861, C) Peter Goldsworthy (born 1951, A) Leona Gom (born 1946, C) W. T. Goodge (1862–1909, A) Lorna Goodison (born 1947, J) Paul Goodman (1911–1972, US) Barnabe Googe (1540–1594, E) Adam Lindsay Gordon (1833–1870, A) Katherine L. Gordon (living, C) Robert Gordon of Straloch (1580–1661, S) Hedwig Gorski (born 1949, US) Edmund Gosse (1849–1928, E) Phyllis Gotlieb (1926–2009, C) Keith Gottschalk (born 1946, SA) Alan Gould (born 1949, A) Nora Gould (living, C) John Gower (c. 
1330–1408, E) Susan Goyette (born 1964, C) Gr–Gy James Graham, 1st Marquess of Montrose (1612–1650, S) Jorie Graham (born 1950, US) Neile Graham (born 1958, C) Robert Cunninghame Graham of Gartmore (1735–1797, S) W. S. Graham (1918–1986, S) James Grahame (1765–1811, S) Mark Granier (born 1957, E/Ir) Paul Grano (1894–1975, A) Alex Grant (living, US) Richard Graves (1715–1804, E) Richard Harry Graves (1897–1971, A) Robert Graves (1895–1985, E) Alexander Gray (1882–1968, S) Catherine Gray, Lady Manners (1766–1852, Ir) Kathryn Gray (born 1973, W) Maxwell Gray (Mary Gleed Tuttiett, 1846–1923, E) Robert Gray (born 1945, A) Stephen Gray (born 1941, SA) Thomas Gray (1716–1771, E) Dorothy Auchterlonie Green (1915–1991, A) H. M. Green (1881–1962, A) Paula Green (born 1955, NZ) Richard Greene (born 1961, C) Robert Greene (1558–1592, E) Lavinia Greenlaw (born 1962, E) Gavin Greenlees (1930–1983, A) Leslie Greentree (living, C) Dora Greenwell (1821–1882, E) Jane Greer (born 1953, US) Linda Gregg (1942–2019, US) Horace Gregory (1898–1982, US) Andrew Greig (born 1951, S) Eamon Grennan (born 1941, Ir) H. W. Gretton (1914–1983, NZ) Fulke Greville, 1st Baron Brooke (1554–1628, E) Gerald Griffin (1803–1840, Ir) Sarah Maria Griffin (living, Ir) Susan Griffin (born 1943, US) Bill Griffiths (1948–2007, E) Bryn Griffiths (living, W/E) Jane Griffiths (born 1970, E) Geoffrey Grigson (1905–1985, E) Nicholas Grimald (1519–1562, E) Angelina Weld Grimké (1880–1958, US) Charlotte Forten Grimké (1837–1914, US) Eliza Griswold (born 1973, US) Rufus Wilmot Griswold (1815–1857, US) Philip Gross (born 1952, E) Paul Groves (born 1947, E/W) Bertha Jane Grundy (Mrs. Leith Adams, 1837–1912, E) Jeff Guess (born 1948, A) Barbara Guest (1920–2006, US) Edgar Guest (1881–1959, US) Paul Guest (living, US) Malcolm Guite (born 1957, E) Arthur Guiterman (1871–1943, US) Genni Gunn (born 1949, C) Thom Gunn (1929–2004, E/US) Kristjana Gunnars (born 1948, C) Lee Gurga (born 1949, US) Ivor Gurney (1890–1937, E) Ralph Gustafson (1909–1995, C) Mafika Gwala (1946–2014, SA) Cyril Gwynn (1897–1988, W/A) Stephen Gwynn (1864–1950, Ir) Beth Gylys (born 1964, US) Brion Gysin (1916–1986, C/E) H Ha–He William Habington (1605–1654, E) Marilyn Hacker (born 1942, US) John Haines (1924–2011, US) Paul Haines (1933–2003, US/C) Helen Hajnoczky (born 1985, C) Thomas Gordon Hake (1809–1895, E) Sarah Josepha Hale (1788–1879, US) Bernadette Hall (born 1945, NZ) Donald Hall (1928–2018, US) Megan Hall (born 1972, SA) Phil Hall (born 1953, C) Radclyffe Hall (1880–1943, E) Rodney Hall (born 1935, A) Arthur Hallam (1811–1833, E) Alan Halsey (born 1949, W/E) Michael Hamburger (1924–2007, E) Ian Hamilton (1938–2001, E) Jane Eaton Hamilton (born 1954, C) Janet Hamilton (1795–1873, S) Philip Hammial (born 1937, A) Robert Gavin Hampson (born 1948, E) Susan Hampton (born 1949, A) Sophie Hannah (born 1971, E) Kerry Hardie (born 1951, NI) Thomas Hardy (1840–1928, E) Lesbia Harford (1891–1927, A) Joy Harjo (born 1951, US) William Harmon (born 1938, US) Frances Harper (1825–1911, US) Michael S. 
Harper (1938–2016 US) Charles Harpur (1813–1868, A) Alice Harriman (1861–1925, US) Edward Harrington (1895–1966, A) Claire Harris (1937–2018, C) Joseph Harris (1773–1825, W) Max Harris (1921–1995, A) Michael Harris (born 1944, C) Robert Harris (1951–1993, A) Wilson Harris (1921–2018, Gu/E) Jennifer Harrison (born 1955, A) Jim Harrison (1937–2016, US) Martin Harrison (1949–2014, A) Richard Harrison (poet) (living, C) Tony Harrison (born 1937, E) Les Harrop (born 1948, E/A) Molly Harrower (1906–1999, S) J. S. Harry (1939–2015, A) Carla Harryman (born 1952, US) David Harsent (born 1942, E) Kevin Hart (born 1954, A) Paul Hartal (born 1936, Is/C) Anne Le Marquand Hartigan (living, Ir) Jill Hartman (born 1974, C) Sadakichi Hartmann (1867–1944, US) Michael Hartnett (1941–1999, Ir) Diana Hartog (born 1942, C) William Hart-Smith (1911–1990, NZ) F. W. Harvey (1888–1957, E) Elisabeth Harvor (living, C) Gwen Harwood (1920–1995, A) Lee Harwood (1939–2015, E) Alamgir Hashmi (born 1951, E) J. H. Haslam (1874–1969, NZ) Nicholas Hasluck (born 1942, A) Robert Hass (born 1941, US) Katherine Hastings (living, US) Ann Hatton (1764–1838, W) Stephen Hawes (died 1523, E) Robert Stephen Hawker (1803–1875, E) Kathleen Hawkins (1883–1981, NZ) George Campbell Hay (1915–1984, S) Gilbert Hay (born c. 1403, S) Myfanwy Haycock (1913–1963, W/E) Robert Hayden (1913–1980, US) William Hayley (1745–1820, E) Robert Hayman (1575–1629, Nf) Tony Haynes (born 1960, US) Joel Hayward (born 1964, NZ) Eliza Haywood (c. 1693–1756, E) H.D. (Hilda Doolittle, 1886–1961, E) Randolph Healy (born 1956, Ir) Seamus Heaney (1939–2013, Ir) Josephine D. Heard (1861 – c. 1921, US) John Heath-Stubbs (1918–2006, E) Charles Heavysege (1816–1876, C) James Hebblethwaite (1857–1921, A) Anthony Hecht (1923–2004, US) Jennifer Michael Hecht (born 1965, US) John Hegley (born 1953, E) Wilfrid Heighington (1897–1945, C) Steven Heighton (born 1961, C) Anita Heiss (born 1968, A) Lyn Hejinian (born 1941, US) Jill Hellyer (1925–2012, A) David Helwig (1938–2018, C) Maggie Helwig (born 1961, C) Felicia Hemans (1793–1835, E) Kris Hemensley (born 1946, A) Essex Hemphill (1957–1995, US) Brian Henderson (born 1948, C) Hamish Henderson (1919–2002, S) Philip Henderson (1906–1977, E) Thomas William Heney (1862–1928, A) John Henley (1692–1756, E) William Ernest Henley (1849–1903, E) Adrian Henri (1932–2000, E) Paul Henry (born 1959, W) Robert Henryson (fl. 1460–1500, S) Thomas Nicoll Hepburn (wrote as Gabriel Setoun, 1861–1930, S) Dorothea Herbert (c. 1767–1829, Ir) Edward Herbert, 1st Baron Herbert of Cherbury (1582–1648, E/W) George Herbert (1593–1632, W) Mary Herbert, Countess of Pembroke (Mary Sidney, 1561–1621, E) W. N. Herbert (born 1961, S) Robert Herrick (1591–1674, E) Steven Herrick (born 1958, A) Benjamin Hertwig (living, C) Thomas Kibble Hervey (1799–1959, E) Phoebe Hesketh (1909–2005, E) Paul Hetherington (born 1958, A) William Maxwell Hetherington (1803–1865, S) Dorothy Hewett (1923–2002, A) John Hewitt (1907–1987, NI) Maurice Hewlett (1861–1923, E) William Heyen (born 1940, US) Thomas Heywood (c. 1570s – 1650, E) Hi–Hu Bob Hicok (born 1960, US) Dick Higgins (1938–1998, US) F. R. 
Higgins (1896–1941, Ir) Kevin Higgins (born 1967, Ir) Rita Ann Higgins (born 1955, Ir) Colleen Higgs (born 1962, SA) Charles Higham (1931–2012, A) Scott Hightower (born 1952, US) Conrad Hilberry (1928–2017, US) Fiona Hile (living, A) Barry Hill (born 1943, A) Edward Hill (1843–1923, US) Geoffrey Hill (1932–2016, E/US) Robert Hilles (born 1951, C) Richard Hillman (born 1964, A) Ellen Hinsey (born 1960, US) Jane Hirshfield (born 1953, US) George Hitchcock (1914–2010, US) H. L. Hix (born 1960, US) Thomas Hoccleve (c. 1368–1426, E) Philip Hodgins (1959–1995, A) Ralph Hodgson (1871–1962, E/US) W. N. Hodgson (1893–1916, E) Barbara Hofland (1770–1844, E) Michael Hofmann (born 1957, G/US) James Hogg (1770–1835, S) David Holbrook (1923–2011, E) Susan Holbrook (living, C) Thomas Holcroft (1745–1809, E) Clive Holden (living, C) Margaret Holford (1778–1852, E) Abraham Holland (died 1626, E) Barbara Holland (1933–2010, US) Hugh Holland (1569–1633, W) Jane Holland (born 1966, E) John Holland (1794–1872, E) Norah M. Holland (1876-1925, C) Sarah Holland-Batt (born 1982, A) John Hollander (1929–2013, US) Matthew Hollis (born 1971, E) Anselm Hollo (1934–2013, US) Nancy Holmes (born 1959, C) Oliver Wendell Holmes, Sr. (1809–1894, US) Thomas Hood (1798–1845, E) Cornelia Hoogland (living, C) Ellen Sturgis Hooper (1812–1848, US) Hilda Mary Hooke (1898–1978, C) Harry Hooton (1908–1961, A) A. D. Hope (1907–2000, A) Christopher Hope (born 1944, SA) Gerard Manley Hopkins (1844–1889, E) Leah Horlick (living, C) Sean Horlor (born 1981, C) Frances Horovitz (1938–1983, E) Michael Horovitz (1935–2021, E) Peter Horn (1934–2019, SA) George Moses Horton (1797–1884, US) Allan Kolski Horwitz (born 1952, SA) Sylvester Houédard (1924–1992, Gy) Karen Houle (living, C) Joan Houlihan (living, US) A. E. Housman (1859–1936, E) Edward Howard (1793–1841, E) Henry Howard, Earl of Surrey (1517–1547, E) Liz Howard (living, C) Richard Howard (born 1929, US) Robert Guy Howarth (1906–1974, A) Fanny Howe (born 1940, US) George Howe (1769–1821, A) Julia Ward Howe (1819–1910, US) Susan Howe (born 1937, US) Ada Verdun Howell (1902–1981, A) Anthony Howell (born 1945, E) Harry Howith (1934–2014, C) Mary Howitt (1799–1888, E) Richard Howitt (1799–1869, E) William Howitt (1792–1879, E) Francis Hubert (died 1629, E) Thomas Hudson (d. c. 1605, S) Frieda Hughes (born 1960, A) Langston Hughes (1902–1967, US) Richard Hughes (1900–1976, E/W) Ted Hughes (1930–1998, E) Richard Hugo (1923–1982, US) Coral Hull (born 1965, A) Lynda Hull (1954–1994, US) T. E. Hulme (1883–1917, E) Alexander Hume (c. 1560–1609, S) Anna Hume (fl. 1644, S) David Hume of Godscroft (1558–1629, S) Barry Humphries (born 1934, A/E) Emyr Humphreys (1919–2020, W) Helen Humphreys (born 1961, C) Leigh Hunt (1784–1859, E) Sam Hunt (born 1946, NZ) Aislinn Hunter (living, C) Al Hunter (living, C) Bruce Hunter (born 1952, C) Catherine Hunter (born 1957, C) Rex Hunter (1889–1960, NZ) Constance Hunting (1925–2006, US) Cynthia Huntington (born 1952, US) Chris Hutchinson (born 1972, C) Pearse Hutchinson (1927–2012, Ir) William Hutton (1723–1815, E) Aldous Huxley (1894–1963, E) Douglas Smith Huyghue (1816–1891, C/A) Douglas Hyde (1860–1949, Ir) Robin Hyde (pen name of Iris Wilkinson; 1906–1939, NZ) Helen von Kolnitz Hyer (1896–1983, US) Maureen Hynes (living, C) I John Imlah (1799–1846, S) Rex Ingamells (1913–1955, A) Jean Ingelow (1820–1897, E) P. 
Inman (born 1947, US) Susan Ioannou (born 1944, C) Valentin Iremonger (1918–1991, Ir) Eric Irvin (1908–1993, A) Frances Itani (born 1942, C) Helen Ivory (born 1969, E) J Alan Jackson (born 1938, S) Violet Jacob (1863–1946, S) Josephine Jacobsen (1908–2003, US) Richard Jago (1715–1781, E) James I of Scotland (1394–1437, S) James VI and I (1566–1625, S/E) Alan James (living, SA) Clive James (1939–2019, A) John James (1939–2018, W/E) Maria James (1793–1868, W/US) Kathleen Jamie (born 1962, S) Robert Alan Jamieson (born 1958, S) Patricia Janus (1932–2006, US) Mark Jarman (born 1952, US) Lisa Jarnot (born 1967, US) Randall Jarrell (1914–1965, US) Alan Jefferies (born 1957, A) Robinson Jeffers (1887–1962, US) Rod Jellema (1927–1918, US) Jemeni (Joanne Gairy, born 1976, Gd/C) Graham Jenkin (born 1938, A) John Jenkins (born 1949, A) Joseph Jenkins (1818–1898, W/A) Mike Jenkins (born 1953, W) Nigel Jenkins (1949–2014, W) Elizabeth Jennings (1926–2001, E) Kate Jennings (1948–2021, A) Wopko Jensma (1939–1993 or after, SA) Sydney Jephcott (1864–1951, A) Paulette Jiles (born 1943, US/C) Liesl Jobson (living, SA) Rita Joe (1932–2007, C) Reg Johanson (born 1968, C) Edmund John (1883–1917, E) Godfrey John (living, W) E. Pauline Johnson (1861–1913, C) Fenton Johnson (born 1953, US) Georgia Douglas Johnson (1880–1966, US) Helene Johnson (1906–1995, US) James Weldon Johnson (1871–1938, US) Linton Kwesi Johnson (born 1952, J) Lionel Johnson (1867–1902, E) Samuel Johnson (1709–1784, E) Sarah Johnson (born 1980, SA) George Benson Johnston (1913–2004, C) Martin Johnston (1947–1990, A) Amanda Jones (1835–1914, US) D. G. Jones (1929–2016, C) David Jones (1895–1974, E) Ebenezer Jones (1820–1860, E) El Jones (living, C) Emma Jones (born 1977, A) Evan Jones (born 1931, A) Glyn Jones (1905–1995, W) Jack Jones (born 1992, W) Jill Jones (born 1951, A) John Joseph Jones (1930–2000, A) Patrick Jones (born 1965, W) Rae Desmond Jones (1941–2017, A) Richard Jones (living, US) Terry Jones (1942–2020, W/E) Erica Jong (born 1942, US) Ben Jonson (1573–1637, E) Julie Joosten (born 1980, US/C) John Jordan (1930–1988, Ir) June Jordan (1936–2002, J/US) Anthony Joseph (born 1966, T/E) Eve Joseph (born 1953, C) Jenny Joseph (1932–2018, E) Danilo Jovanovitch (1919–2015, A) James Joyce (1882–1941, Ir/I) Trevor Joyce (born 1947, Ir) Frank Judge (living, US) Donald Justice (1925–2004, US) A. M. Juster (born 1956, US) K Jim Kacian (born 1953, US) Aryan Kaganof (born 1964, SA) Chester Kallman (1921–1975, US) Surjeet Kalsey (living, C) Smaro Kamboureli (living, C) Ilya Kaminsky (born 1977, US) Julie Kane (born 1952, US) Adeena Karasick (born 1965, C/US) Mary Karr (born 1955, US) Julia Kasdorf (born 1962, US) Laura Kasischke (born 1961, US) Bob Kaufman (1925–1986, US) Shirley Kaufman (1923–2016, US) Rupi Kaur (born 1992, C) Patrick Kavanagh (1904–1967, Ir) Jackie Kay (born 1961, S) Jayne Fenton Keane (living, A) Lionel Kearns (born 1937, C) Annie Keary (1825–1879, E) Diane Keating (living, C) John Keats (1795–1821, E) John Keble (1792–1866, E) Janice Kulyk Keefer (born 1952, C) Weldon Kees (1914–1955, US) Nancy Keesing (1923–1993, A) Antigone Kefala (born 1935, A) Christopher Kelen (born 1958, A) S. K. Kelen (born 1956, A) Anne Kellas (born 1951, SA/A) Isabella Kelly (1759–1857, S/E) M. T. Kelly (born 1946, C) Arthur Kelton (died c. 
1550, E/W) Penn Kemp (born 1944, C) Henry Kendall (1839–1882, A) Francis Kenna (1865–1932, A) Cate Kennedy (born 1963, A) Geoffrey Studdert Kennedy ("Woodbine Willy", 1883–1929, E) Leo Kennedy (1907–2000, C) Miranda Kennedy (born 1975, US) Walter Kennedy (c. 1455 – c. 1508, S) X. J. Kennedy (born 1929, US) Jean Kent (born 1951, A) Jane Kenyon (1947–1995, US) Robert Kirkland Kernighan (1854–1926, C) Jack Kerouac (1922–1969, US) Sidney Keyes (1922–1943, E) Keorapetse Kgositsile (1938–2018, SA/US) Mimi Khalvati (born 1944, E) Charles Kickham (1828–1882, Ir) Anne Killigrew (1660–1685, E) Joyce Kilmer (1886–1918, US) Arthur Henry King (1910–2000, E/US) Henry King (1592–1669, E) William King (1663–1712, E) Charles Kingsley (1819–1875, E) Barbara Kingsolver (born 1955, US) Galway Kinnell (1927–2014, US) John Kinsella (born 1963, A) Thomas Kinsella (born 1928, Ir) Rudyard Kipling (1865–1936, E) Olga Kirsch (1924–1997, SA/Is) Roy Kiyooka (1926–1994, C) Carolyn Kizer (1925–1914, US) Barbara Klar (born 1966, C) Sarah Klassen (born 1932, C) A. M. Klein (1909–1972, C) August Kleinzahler (born 1949, US) Etheridge Knight (1931–1991, US) Stephen Knight (born 1960, W/E) Raymond Knister (1899–1932, C) Kenneth Koch (1925–2002, US) Ruth Ellen Kocher (born 1965, US) Joy Kogawa (born 1935, C) komninos (born 1950, A) Yusef Komunyakaa (born 1947, US) Ted Kooser (born 1939, US) Shane Koyczan (born 1976, C) Rustum Kozain (born 1966, SA) Rudi Krausmann (1933–2019, A) Ruth Krauss (1901–1993, US) Carolyn Kreiter-Foronda (born 1946, US) Uys Krige (1910–1987, SA) Robert Kroetsch (1927–2011, C) Antjie Krog (born 1952, SA) Anton Robert Krueger (born 1971, SA) Marilyn Krysl (born 1942, US) Anatoly Kudryavitsky (born 1954, Ir) Abhay Kumar (born 1980, In) Mazisi Kunene (1930–2006, SA) Tuli Kupferberg (1923–2010, US) Maxine Kumin (1925–2014, US) Stanley Kunitz (1905–2006, US) Frank Kuppner (born 1951, S) Stephen Kuusisto (born 1955, US) Morris Kyffin (c. 1555–1598, W/E) Joanne Kyger (1934–2017, US) Francis Kynaston (1587–1642, E) L La–Ln John La Rose (1927–2006, J/E) Sonnet L'Abbé (born 1973, C) Edward A. Lacey (1938–1995, C) Mike Ladd (born 1959, A) Ben Ladouceur (born 1987, C) Nick Laird (born 1975, NI) David Lake (1929–2016, A) Philip Lamantia (1927–2005, US) Kendrick Lamar (born 1987, US) Charles Lamb (1775–1834, E) Archibald Lampman (1861–1899, C) Tim Lander (born 1938, C) Letitia Elizabeth Landon (1802–1838, E) Walter Savage Landor (1775–1864, E) M. Travis Lane (born 1934, US/C) Patrick Lane (1939–2019, C) Andrew Lang (1844–1912, S) D. L. Lang (born 1983, US) William Langland (c. 1332 – c. 1386, E) Eve Langley (1904–1974, A) Emilia Lanier (1569–1645, E) Sidney Lanier (1842–1881, US) Lucy Larcom (1824–1893, US) Rebecca Hammond Lard (1772–1855, US) Bruce Larkin (born 1957, US) Philip Larkin (1922–1985, E) Evelyn Lau (born 1971, C) James Laughlin (1914–1997, US) Ann Lauterbach (born 1942, US) Dorianne Laux (born 1952, US) Emily Lawless (1845–1913, Ir) Anthony Lawrence (born 1957, A) D. H. Lawrence (1885–1930, E) Henry Lawson (1867–1922, A) Louisa Lawson (1848–1920, A) Robert Lax (1915–2000, US) Layamon (late 12th – early 13th c., E) Irving Layton (1912–2006, C) Emma Lazarus (1849–1887, US) Augustus Asplet Le Gros (1840–1877, Je) Bronwyn Lea (living, A) Mary Leapor (1722–1746, E) Edward Lear (1812–1888, E) Lesley Lebkowicz (born 1946, A) Francis Ledwidge (1887–1917, Ir) David Lee (born 1944, US) Dennis Lee (born 1939, C) John B. 
Lee (born 1951, C) Muna Lee (1895–1965, US) Lily Alice Lefevre (1854–1938, C) Joy Leftow (born 1949, US) Sylvia Legris (born 1960, C) Ursula
1550, E/W) Penn Kemp (born 1944, C) Henry Kendall (1839–1882, A) Francis Kenna (1865–1932, A) Cate Kennedy (born 1963, A) Geoffrey Studdert Kennedy ("Woodbine Willy", 1883–1929, E) Leo Kennedy (1907–2000, C) Miranda Kennedy (born 1975, US) Walter Kennedy (c. 1455 – c. 1508, S) X. J. Kennedy (born 1929, US) Jean Kent (born 1951, A) Jane Kenyon (1947–1995, US) Robert Kirkland Kernighan (1854–1926, C) Jack Kerouac (1922–1969, US) Sidney Keyes (1922–1943, E) Keorapetse Kgositsile (1938–2018, SA/US) Mimi Khalvati (born 1944, E) Charles Kickham (1828–1882, Ir) Anne Killigrew (1660–1685, E) Joyce Kilmer (1886–1918, US) Arthur Henry King (1910–2000, E/US) Henry King (1592–1669, E) William King (1663–1712, E) Charles Kingsley (1819–1875, E) Barbara Kingsolver (born 1955, US) Galway Kinnell (1927–2014, US) John Kinsella (born 1963, A) Thomas Kinsella (born 1928, Ir) Rudyard Kipling (1865–1936, E) Olga Kirsch (1924–1997, SA/Is) Roy Kiyooka (1926–1994, C) Carolyn Kizer (1925–1914, US) Barbara Klar (born 1966, C) Sarah Klassen (born 1932, C) A. M. Klein (1909–1972, C) August Kleinzahler (born 1949, US) Etheridge Knight (1931–1991, US) Stephen Knight (born 1960, W/E) Raymond Knister (1899–1932, C) Kenneth Koch (1925–2002, US) Ruth Ellen Kocher (born 1965, US) Joy Kogawa (born 1935, C) komninos (born 1950, A) Yusef Komunyakaa (born 1947, US) Ted Kooser (born 1939, US) Shane Koyczan (born 1976, C) Rustum Kozain (born 1966, SA) Rudi Krausmann (1933–2019, A) Ruth Krauss (1901–1993, US) Carolyn Kreiter-Foronda (born 1946, US) Uys Krige (1910–1987, SA) Robert Kroetsch (1927–2011, C) Antjie Krog (born 1952, SA) Anton Robert Krueger (born 1971, SA) Marilyn Krysl (born 1942, US) Anatoly Kudryavitsky (born 1954, Ir) Abhay Kumar (born 1980, In) Mazisi Kunene (1930–2006, SA) Tuli Kupferberg (1923–2010, US) Maxine Kumin (1925–2014, US) Stanley Kunitz (1905–2006, US) Frank Kuppner (born 1951, S) Stephen Kuusisto (born 1955, US) Morris Kyffin (c. 1555–1598, W/E) Joanne Kyger (1934–2017, US) Francis Kynaston (1587–1642, E) L La–Ln John La Rose (1927–2006, J/E) Sonnet L'Abbé (born 1973, C) Edward A. Lacey (1938–1995, C) Mike Ladd (born 1959, A) Ben Ladouceur (born 1987, C) Nick Laird (born 1975, NI) David Lake (1929–2016, A) Philip Lamantia (1927–2005, US) Kendrick Lamar (born 1987, US) Charles Lamb (1775–1834, E) Archibald Lampman (1861–1899, C) Tim Lander (born 1938, C) Letitia Elizabeth Landon (1802–1838, E) Walter Savage Landor (1775–1864, E) M. Travis Lane (born 1934, US/C) Patrick Lane (1939–2019, C) Andrew Lang (1844–1912, S) D. L. Lang (born 1983, US) William Langland (c. 1332 – c. 1386, E) Eve Langley (1904–1974, A) Emilia Lanier (1569–1645, E) Sidney Lanier (1842–1881, US) Lucy Larcom (1824–1893, US) Rebecca Hammond Lard (1772–1855, US) Bruce Larkin (born 1957, US) Philip Larkin (1922–1985, E) Evelyn Lau (born 1971, C) James Laughlin (1914–1997, US) Ann Lauterbach (born 1942, US) Dorianne Laux (born 1952, US) Emily Lawless (1845–1913, Ir) Anthony Lawrence (born 1957, A) D. H. Lawrence (1885–1930, E) Henry Lawson (1867–1922, A) Louisa Lawson (1848–1920, A) Robert Lax (1915–2000, US) Layamon (late 12th – early 13th c., E) Irving Layton (1912–2006, C) Emma Lazarus (1849–1887, US) Augustus Asplet Le Gros (1840–1877, Je) Bronwyn Lea (living, A) Mary Leapor (1722–1746, E) Edward Lear (1812–1888, E) Lesley Lebkowicz (born 1946, A) Francis Ledwidge (1887–1917, Ir) David Lee (born 1944, US) Dennis Lee (born 1939, C) John B. 
Lee (born 1951, C) Muna Lee (1895–1965, US) Lily Alice Lefevre (1854–1938, C) Joy Leftow (born 1949, US) Sylvia Legris (born 1960, C) Ursula K. Le Guin (1929–2018, US) David Lehman (born 1948, US) Geoffrey Lehmann (born 1940, A) Brad Leithauser (born 1953, US) Mark Lemon (1809–1870, E) Sue Lenier (born 1957, E) Charlotte Lennox (c. 1730–1804, S/E) John Lent (living, C) John Leonard (born 1965, A) Tom Leonard (1944–2018, S) William Ellery Leonard (1876–1944, US) Douglas LePan (1914–1998, C) Ben Lerner (born 1979, US) Alex Leslie (living, C) Rika Lesser (born 1953, US) Lilian Leveridge (1879–1953, C) Denise Levertov (1923–1997, E/US) Dana Levin (born 1965, US) Philip Levine (1928–2015, US) Larry Levis (1946–1996, US) D. A. Levy (1942–1968, US) William Levy (1939–2019, US/Nt) Emma Lew (born 1962, A) Oswald LeWinter (1931–2013, US) Alun Lewis (1915–1944, W) C. S. Lewis (1898–1963, Ir/E) Gwyneth Lewis (born 1959, W) J. Patrick Lewis (born 1942, US) Wyndham Lewis (1882–1957, E) Anne Ley (c. 1599–1641, E) Tim Liardet (born 1959, E) Isabella Lickbarrow (1784–1847, E) James Liddy (1934–2008, Ir) Tim Lilburn (born 1950, C) Charles Lillard (1944–1997, C) Kate Lilley (born 1960, A) Tao Lin (born 1983, US) Ada Limón (born 1976, US) Jack Lindeman (living, US) Eddie Linden (born 1935, S/E) Anne Morrow Lindbergh (1906–2001, US) Jack Lindsay (1900–1990, A/E) Maurice Lindsay (1918–2009, S) Sarah Lindsay (born 1958, US) Vachel Lindsay (1879–1931, US) Jessie Litchfield (1883–1956, A) Dorothy Livesay (1909–1996, C) Billie Livingston (living, C) Douglas Livingstone (1932–1996, SA) Lo–Ly Douglas Lochhead (1922–2011, C) Liz Lochhead (born 1947, S) Terry Locke (born 1946, NZ) Thomas Lodge (1556–1625, E) John Logan (1748–1788, S/E) Christopher Logue (1926–2011, E) James Longenbach (living, US) Henry Wadsworth Longfellow (1807–1882, US) Michael Longley (born 1939, NI) John Longmuir (1803–1883, S) Audre Lorde (1934–1992, US) LindaAnn Loschiavo (living, US) Jennifer LoveGrove (living, C) Richard Lovelace (1618–1658, E) Henry Lovelich (fl. mid-15th c., E) Samuel Lover (1797–1868, Ir/E) Amy Lowell (1874–1925, US) James Russell Lowell (1819–1891, US) Maria White Lowell (1821–1853, US) Robert Lowell (1917–1977, US) Pat Lowther (1935–1975, C) Mina Loy (1882–1966, E/US) Edward Lucie-Smith (born 1933, E) Fitz Hugh Ludlow (1836–1870, US) Tatjana Lukić (1959–2008, A) Suzanne Lummis (living, US) Laura Lush (born 1959, C) Richard Lush (born 1934, C) Thomas Lux (1946–2017, US) John Lydgate (1370–1450, E) John Lyly (1553–1606, E) Michael Lynch (1944–1991, US/C) David Lyndsay (c. 1490 – c. 1555, S) P. H. B. 
Lyon (1893–1986, E) Henry Francis Lyte (1793–1847, S) George Lyttelton Lord Lyttelton (1709–1773, E) M Ma–Mi Rozena Maart (born 1962, SA/C) Lindiwe Mabuza (born 1938, US/SA) Frederick Macartney (1887–1980, A) Thomas Babington Macaulay (1800–1859, E) George MacBeth (1932–1992, S) Norman MacCaig (1910–1996, S) Denis Florence MacCarthy (1817–1882, Ir) Karen Mac Cormack (born 1956, C/US) Hugh MacDiarmid (1892–1978, S) Donagh MacDonagh (1912–1968, Ir) Thomas MacDonagh (1878–1916, Ir) Allan MacDonald (1859–1905, S) Elizabeth Roberts MacDonald (1864-1922, C) George Macdonald (1824–1905, S) Hugh MacDonald (born 1945, C) Wilson MacDonald (1880–1967, C) Patrick MacDonogh (1902–1961, Ir) Gwendolyn MacEwen (1941–1987, C) Seán Mac Falls (born 1957, Ir) Walter Scott MacFarlane (1896–1979, C) Patrick MacGill (1889–1963, Ir) Alasdair Alpin MacGregor (1899–1970, S) Ronald Campbell Macfie (1867–1931, S) James Pittendrigh Macgillivray (1856–1938, S) Thomas MacGreevy (1893–1967, Ir) Arthur Machen (1863–1947, W/E) Tom MacInnes (1867–1951, C) Louise Mack (1870–1935, A) John William Mackail (1859–1945, S) John Macken (c. 1784–1823, Ir) Lachlan Mackinnon (born 1956, S) Compton Mackenzie (1883–1972, S) Kenneth Mackenzie (Seaforth Mackenzie, 1913–1955, A) Archibald MacLeish (1892–1982, US) Dorothea Mackellar (1885–1968, A) Don Maclennan (1929–2009, SA) Joseph Macleod (1903–1984, E) Nathaniel Mackey (born 1947, US) Don Maclennan (1929–2009, SA) Jackson Mac Low (1922–2004, US) Louis MacNeice (1907–1963, Ir/E) Kevin MacNeil (living, S) Hector Macneill (1746–1818, S) Lachlan Mackinnon (born 1956, S) Alasdair Maclean (1926–1994, S) Archibald MacLeish (1892–1982, US) Andrea MacPherson (living, C) James Macpherson (1736–1796, S) Jay Macpherson (1931–2012, C) Barry MacSweeney (1948–2000, E) Haki R. Madhubuti (born 1942, US) John Gillespie Magee, Jr. (1922–1941, C) Wes Magee (born 1939, S) Jayanta Mahapatra (born 1928, In) Sitakant Mahapatra (born 1937, In) Mzi Mahola (born 1949, SA) Derek Mahon (born 1941, NI) Jennifer Maiden (born 1949, A) Keith Maillard (born 1942, US/C) Charles Mair (1838–1827, C) Alice Major (born 1949, C) Clarence Major (born 1936, US) Robert Majzels (born 1950, C) Taylor Mali (born 1965, US) David Mallet (c. 1705–1765, S) Thomas Malory (c. 1415–1471, E) David Malouf (born 1934, A) Kim Maltman (born 1951, C) Eli Mandel (1922–1992, C) Tom Mandel (born 1942, US) Ahdri Zhina Mandiela (born 1953, J/C) James Clarence Mangan (1803–1849, Ir) Bill Manhire (born 1946, NZ) David Manicom (born 1960, C) John Manifold (1915–1985, A) Leonard Mann (1895–1981, A) Emily Manning (1845–1890, A) Frederic Manning (1882–1935, A) Maurice Manning (born 1966, US) Ruth Manning-Sanders (1886–1988, W) Robert Mannyng (1269–1340, E) Chris Mansell (born 1953, A) Peter Manson (born 1969, S) Lee Maracle (1950–2021, C) Blaine Marchand (born 1949, C) Morton Marcus (1936–2009, US) Paul Mariani (born 1940, US) E. A. Markham (1939–2008, Mo/E) Edwin Markham (1852–1940, US) Nicole Markotic (born 1962, C) Daphne Marlatt (born 1942, C) Christopher Marlowe (1564–1593, E) Don Marquis (1878–1937, US) Edward Garrard Marsh (1783–1862, E) Tom Marshall (1938–1993, C) John Marston (1576–1634, E) Garth Martens (living, C) Camille Martin (born 1956, C) David Martin (1915–1997, E/A) Philip Martin (1931–2005, A) Theodore Martin (1816–1909, S) Sid Marty (born 1944, C) Andrew Marvell (1621–1678, E) John Masefield (1878–1967, E) Lebogang Mashile (born 1979, SA) R. A. K. 
Mason (1905–1971, NZ) Edgar Lee Masters (1868–1950, US) John Mateer (born 1971, A) Ray Mathew (1929–2002, A) Robin Mathews (born 1931, C) Roland Mathias (1915–2007, W) Cleopatra Mathis (born 1947, US) Don Mattera (born 1935, SA) James Matthews (born 1929, SA) Glyn Maxwell (born 1962, E) Bernadette Mayer (born 1945, US) Micheline Maylor (born 1970, C) Seymour Mayne (born 1944, C) Chandra Mayor (born 1973, C) Ben Mazer (born 1964, US) Mzwakhe Mbuli (born 1959, SA) James McAuley (1917–1976, A) Robert McBride (c. 1811/1812–1895, C) Neil McBride (1861–1942, Ir) Ian McBryde (born 1953, A) Brian McCabe (born 1951, S) Steven McCabe (living, C) Steve McCaffery (born 1947, C) Julia McCarthy (living, C) Susan McCaslin (born 1947, C) J. D. McClatchy (1945–2018, US) Kim McClenaghan (born 1974, SA/E) Michael McClure (born 1932, US) Kathleen McConnell (Kathy Mac, living, C) David McCooey (born 1967, A) George Gordon McCrae (1833–1927, A) John McCrae (1872–1918, C) Shane McCrae (born 1975, US) Kathleen McCracken (born 1960, C) John McCrae (1872–1918, C) Ronald McCuaig (1908–1993, A) Matthew McDiarmid (1914–1996, S) Nan McDonald (1921–1974, A) Roger McDonald (born 1941, A) Roy McDonald (1937–2018, C) Walt McDonald (born 1934, US) David McFadden (1940–2018, C) Hugh McFadden (living, Ir) David McGimpsey (living, C) Phyllis McGinley (1905–1978, US) Elvis McGonagall (living, S) William McGonagall (1825–1902, S) Roger McGough (born 1937, E) Michelle McGrane (born 1974, Z/SA) Campbell McGrath (born 1962, US) Thomas McGrath (1916–1990, US) Wendy McGrath (living, C) Medbh McGuckian (born 1950, NI) Heather McHugh (born 1948, US) William McIlvanney (1936–2015, S) Nadine McInnis (born 1957, C) James McIntyre (1828–1906, S/C) Claude McKay (1889–1948, J/US) Don McKay (born 1942, C) Barry McKinnon (born 1944, C) Rod McKuen (1933–2015, US) Greg McLaren (born 1967, A) Isaac McLellan (1806–1899, US) John McLellan (early 19th century, E) Brendan McLeod (born 1979, C) Nigel McLoughlin (born 1968, NI) Rhyll McMaster (born 1947, A) Susan McMaster (born 1950, C) James L. McMichael (born 1939, US) Ian McMillan (born 1956, E) Eugene McNamara (1930–2016, US/C) Anthony McNeill (1941–1996, J) Andrew McNeillie (born 1946, W/E) Hollie McNish (born 1984, E) Bernard McNulty (1842–1892, US) Steve McOrmond (living, C) Dionyse McTair (born 1950, T) Máighréad Medbh (born 1959, Ir] Thomas Medwin (1788–1869, E) Paula Meehan (born 1955, Ir) Peter Meinke (born 1932, US) Mary Melfi (born 1951, C) Elizabeth Melville (c. 1578 – c. 1640, S) Herman Melville (1819–1891, US) Christopher Meredith (born 1955, W) George Meredith (1828–1909, E) Louisa Anne Meredith (1812–1895, E/A) James Merrill (1926–1995, US) Stuart Merrill (1863–1915, US) Iman Mersal (born 1966, C) Thomas Merton (1915–1968, US) W. S. Merwin (1927–2019, US) Sarah Messer (born 1966, US) Joan Metelerkamp (born 1956, SA) Charlotte Mew (1869–1928, E) Bruce Meyer (born 1957, C) Alice Meynell (1847–1922, E) Viola Meynell (1885–1956, E) James Lionel Michael (1824–1868, E/A) Anne Michaels (born 1958, C) William Julius Mickle (1734–1788, S) Marianne Micros (living, C) Christopher Middleton (c. 1560–1628, E) Christopher Middleton (born 1926, E) Richard Barham Middleton (1882–1911, E) Thomas Middleton (1580–1627, E) Roy Miki (born 1942, C) Dorothy Miles (1931–1993, W/US) Josephine Miles (1911–1985, US) Jennifer Militello (living, US) Edna St. 
Vincent Millay (1892–1950, US) Alice Duer Miller (1874–1942, US) Jane Miller (born 1949, US) Joaquin Miller (1837–1913, US) Leslie Adrienne Miller (born 1956, US) Ruth Miller (1919–1969, SA) Thomas Miller (1807–1874, E) Vassar Miller (1924–1998, US) John Millett (1921–2019, A) Robert Millhouse (1788–1839, E) Alice Milligan (1865–1953, Ir/NI) Spike Milligan (1918–2002, E/Ir) Kenneth G. Mills (1923–2004, C) Roswell George Mills (1896–1966, C) John Milton (1608–1674, E) Robert Minhinnick (born 1952, W) Matthew Minicucci (born 1981, US) Gary Miranda (born 1939, US) Sudesh Mishra (living, A) Adrian Mitchell (1932–2008, E) Paul Mitchell (born 1968, A) Silas Weir Mitchell (1829–1914, US) Stephen Mitchell (born 1943, US) Waddie Mitchell (born 1950, US) Naomi Mitchison (1897–1999, S) Amitabh Mitra (living, SA) Ange Mlinko (born 1960, US) Mo–Mu David Macbeth Moir (1798–1851, S) Anis Mojgani (born 1977, US) John Mole (born 1941, E) Natalia Molebatsi (living, SA) Dorothy Molloy (1942–2004, Ir) Geraldine Monk (born 1952, E) Harold Monro (1879–1932, E) Harriet Monroe (1860–1936, US) Charles Montagu, 1st Earl of Halifax (1661–1715, E) John Montague (1929–2016, Ir) Lady Mary Wortley Montagu (1689–1762, E) Lenore Montanaro (born 1990, US) Alexander Montgomerie (c. 1550–1598, S) James Montgomery (1771–1854, E) Lucy Maud Montgomery (L. M. Montgomery, 1874–1942, C) Marion E. Moodie (1867–1958, C) Susanna Moodie (1803–1885, E/C) Kobus Moolman (living, SA) Jacob McArthur Mooney (born 1983, C) Alan Moore (born 1960, Ir) Marianne Moore (1887–1972, US) Merrill Moore (1903–1957, US) Ruth Moore (1903–1989, US) T. Inglis Moore (1901–1978, A) Thomas Moore (1779–1852, Ir/E) Thomas Sturge Moore (1870–1944, E) Dom Moraes (1938–2004, In) Barbara Moraff (born 1939, US) Cherríe Moraga (born 1952, US) Edythe Morahan de Lauzon (fl. early 20th c., C) Pamela Mordecai (born 1942, J/C) Hannah More (1745–1833, E) Dwayne Morgan (born 1974, C) Edwin Morgan (1920–2010, S) J. O. Morgan (born 1978, S) Jeffrey Morgan (living, C) Mal Morgan (1936–1999, A) Robin Morgan (born 1941, US) Lorin Morgan-Richards (born 1975, US) A. F. Moritz (born 1947, US/C) Mervyn Morris (born 1937, J) Sharon Morris (living, W/E) William Morris (1834–1896, E) David R. Morrison (1941–2012, S) Jim Morrison (1943–1971, US) Morrissey (born 1959, E) Kim Morrissey (Janice Dales, living, C) Garry Thomas Morse (living, C) Viggo Mortensen (born 1958, US/De) Colin Morton (born 1948, C) Frank Morton (1869–1923, A) Twm Morys (born 1961, W) Daniel David Moses (born 1952, C) Howard Moss (1922–1987, US) Thylias Moss (born 1954, US) Isabella Motadinyane (1963–2003, SA) William Motherwell (1797–1835, S) Andrew Motion (born 1952, E) Seitlhamo Motsapi (born 1966, SA) Casey Motsisi (1932–1977, SA) Eric Mottram (1924–1995, E) Erín Moure (born 1955, C) Oswald Mbuyiseni Mtshali (born 1940, SA) Ian Mudie (1911–1976, A) Mudrooroo (Colin Thomas Johnson, 1938–2019, A) Lisel Mueller (1924–2020, US) Micere Githae Mugo (born 1942, K/Z) Edwin Muir (1887–1959, S/E) Paul Muldoon (born 1951, Ir/US) Wendy Mulford (born 1941, W/E) Harryette Mullen (born 1953, US) Laura Mullen (born 1958, US) Anthony Munday (c. 
1560–1633, E) Jane Munro (born 1943, C) Sachiko Murakami (born 1980, C) William Murdoch (1823–1887, S/C) Edwin Greenslade Murphy (Dryblower, 1866–1939, A) Hayden Murphy (born 1945, Ir) Richard Murphy (1927–2018, Ir/SLk) Sheila Murphy (born 1951, US) Charles Murray (1864–1941, S) George Murray (born 1971, C) Joan Murray (born 1945, US) Les Murray (1938–2019, A) David Musgrave (born 1965, A) Susan Musgrave (born 1951, C) Togara Muzanenhamo (born 1975, Z) N Vladimir Nabokov (1899–1977, RE/US) Constance Naden (1858–1889, E) Sarojini Naidu (1879–1949, In) Carolina Nairne (1766–1845, S) Sydney Elliott Napier (1870–1940, A) Akhtar Naraghi (living, C) Ogden Nash (1902–1971, US) Roger Nash (born 1942, E/C) Thomas Nashe (1567–1601, E) John Neal (1793–1876, US) Charles Neaves (1800–1876, S) Henry Neele (1798–1828, E) Lyle Neff (born 1969, C) John Neihardt (1881–1973, US) William Neill (1922–2010, S) Philip Neilsen (living, A) Shaw Neilson (1872–1942, A) Alice Dunbar Nelson (1875–1935, US) Holly Nelson (living, US/C) Marilyn Nelson (born 1946, US) Howard Nemerov (1920–1991, US) Kenn Nesbitt (born 1962, US) W. H. New (born 1938, C) Henry Newbolt (1862–1938, E) John Newlove (1938–2003, C) John Henry Newman (1801–1890, E) Kate Newmann (born 1965, NI/Ir) William Newton (1750–1830, E) Aimee Nezhukumatathil (born 1974, US) Nuala Ní Chonchúir (born 1970, Ir) Eiléan Ní Chuilleanáin (born 1942, Ir) Nuala Ní Dhomhnaill (born 1952, Ir) Ailbhe Ní Ghearbhuigh (born 1984, Ir) Doireann Ní Ghríofa (born 1981, Ir) Nicholas of Guildford (12th or 13th c., E) Barrie Phillip Nichol (bpNichol, 1944–1988, C) Grace Nichols (born 1950, Gu/E) Robert Nichols (1893–1944, E) Cecily Nicholson (living, C) Norman Nicholson (1914–1987, E) Lorine Niedecker (1903–1970, US) Emilia Nielsen (living, C) Hume Nisbet (1849–1923, A/S) Christopher Nolan (1965–2009, Ir) Oodgeroo Noonuccal (1920–1993, A) Leslie Norris (1921–2006, W/US) Harry Northup (born 1940, US) Arthur Nortje (1942–1970, SA) Caroline Norton (1808–1877, E) Alice Notley (born 1945, US) Alden Nowlan (1933–1983, C) Alfred Noyes (1880–1958, E) Jeff Nuttall (1933–2004, E) Naomi Shihab Nye (born 1952, US) Robert Nye (1939–2016, E) O Joyce Carol Oates (born 1938, US) John O'Brien (Patrick Joseph Hartigan, 1878–1952, A) Sean O'Brien (born 1952, E) Patrick O'Connell (1944–2005, C) Mark O'Connor (born 1945, A) Philip O'Connor (1916–1998, E) Mary O'Donnell (born 1954, Ir) Bernard O'Donoghue (born 1945, Ir) Gregory O'Donoghue (1951–2005, Ir) Bernard O'Dowd (1866–1953, A) Dennis O'Driscoll (born 1954, Ir) Ernest O'Ferrall (1881–1925, A) Ron Offen (1930–2010, US) William Henry Ogilvie (1869–1963, S) Frank O'Hara (1926–1966, US) John Bernard O'Hara (1862–1927, A) Theodore O'Hara (1820–1867, US) Pixie O'Harris (1903–1991, A) Sharon Olds (born 1942, US) Alexandra Oliver (living, C) Mary Oliver (1935–2019, US) Redell Olsen (born 1971, E) Charles Olson (1910–1970, US) Sheree-Lee Olson (born 1954, C) Nessa O'Mahony (living, Ir) Michael Ondaatje (born 1943, SLk/C) Heather O'Neill (born 1973, C) Henrietta O'Neill (1758–1793, Ir) Mary Devenport O'Neill (1879–1976, Ir) George Oppen (1908–1984, US) Mary Oppen (1908–1990, US) Antoine Ó Raifteiri (1784–1835, Ir) Edward Otho Cresap Ord, II (1858–1923, US) Dowell O'Reilly (1865–1923, A) Peter Orlovsky (1933–2010, US) John Ormond (1923–1990, W) Frank Ormsby (born 1947, NI) Gregory Orr (born 1947, US) Arthur O'Shaughnessy (1844–1881, E) Micheal O'Siadhail (born 1947, Ir) Alicia Ostriker (born 1937, US) Maggie O'Sullivan (born 1951, E) Seumas O'Sullivan 
(1879–1958, Ir) Niyi Osundare (born 1947, Ni/US) Alice Oswald (born 1966, E) John Oswald (died 1793, S) Eoghan Ó Tuairisc (1919–1982, Ir) Richard Outram (1930–2005, C) Ouyang Yu (歐陽昱; born 1955, A) Catherine Owen living, C) Jan Owen (born 1940, A) Wilfred Owen (1893–1918, E) P Susan Paddon (living, C) Ruth Padel (born 1947, E) Ron Padgett (born 1942, US) Isabel Pagan (c. 1740–1821, S) Geoff Page (born 1940, A) P. K. Page (1916–2010, C) Janet Paisley (1948–2018, S) Grace Paley (1922–2007, E) Francis Turner Palgrave (1824–1897, E) Michael Palmer (born 1943, US) Nettie Palmer (1885–1964, A) Vance Palmer (1885–1959, A) Sylvia Pankhurst (1882–1960, E) William Williams Pantycelyn (W) Aristides Paradissis (1923–2006, A) Arleen Paré (born 1946, C) Dorothy Parker (1893–1967, US) Amy Parkinson (1855-1938, C) Thomas Parnell (1670–1718, Ir/E) Robert Parry (1540–1612, W) Lisa Pasold (living, C) John Pass (born 1947, C) Linda Pastan (born 1932, US) Kenneth Patchen (1911–1972, US) Banjo Paterson (1864–1941, A) Don Paterson (born 1963, S) Coventry Patmore (1823–1896, E) Brian Patten (born 1946, E) Philip Kevin Paul (living, C) Tom Paulin (born 1949, NI/E) Ricardo Pau-Llosa (born 1954, Cu) James Payn (1830–1898, E/S) Molly Peacock (born 1947, US/C) Thomas Love Peacock (1785–1866, E) Patrick Pearse (1879–1916, Ir) Soraya Peerbaye (living, C) Pearl Poet (14th c., E) Patrick Pearse (1879–1916, Ir) James Larkin Pearson (1879–1981, US) Neil Peart (1952–2020, C) Kathleen Peirce (born 1956, US) J. D. C. Pellow (1890–1960, E) Nathan Penlington (living, W/E) Anne Penny (1729–1784, W/E) Hilary Douglas Clark Pepler (1878–1951, E) Sam Pereira (born 1949, US) Lucia Perillo (1958–2016, US) Grace Perry (1927–1987, A) Lenrie Peters (1932–2009, Ga) Robert Peters (1924–2014, US) Pascale Petit (born 1953, W) Mario Petrucci (born 1958, E) W. T. Pfefferle (born 1962, C) M. NourbeSe Philip (born 1947, T/C) Ambrose Philips (1674–1749, E) Katherine Philips (1631/1632–1664, E/W) Ben Phillips (born 1947, C) Eden Phillpotts (1862–1960, E) Alison Pick (born 1975, C) Tom Pickard (born 1946, E) Leah Lakshmi Piepzna-Samarasinha (born 1975, US/C) Marge Piercy (born 1936, US) Laetitia Pilkington (c. 1709–1750, Ir/E) Mary Pilkington (1761–1839, E) Sarah Pinder (living, C) Percy Edward Pinkerton (1855–1946, E) Robert Pinsky (born 1940, US) George Pirie (1799–1870, C) Christopher Pitt (1699–1748, E) Marie Pitt (1869–1948, A) Ruth Pitter (1897–1992, E) Al Pittman (1940–2001, C) Marjorie Pizer (1920–2016, A) Sylvia Plath (1932–1963, US/E) William Plomer (1903–1973, SA/E) Edward Plunkett, 18th Baron of Dunsany (1878–1957, Ir/E) Joseph Plunkett (1887–1916, Ir) Edgar Allan Poe (1809–1849, US) Emily Pohl-Weary (born 1973, C) Craig Poile (living, C) Suman Pokhrel (born 1967, Ne) Marcella Polain (born 1958, A) Margaret Steuart Pollard (1904–1996, E) Edward Pollock (1823–1858, US) Robert Pollok (c. 1798–1827, S) John Pomfret (1667–1702, E) Marie Ponsot (1921–2019, US) John Pook (born 1942, W/F) Sandy Pool (living, C) Marie Ponsot (1921–2019, US) Alexander Pope (1688–1744, E) Judith Pordon (born 1954, US) Anna Maria Porter (1780–1832, E) Dorothy Porter (1954–2008, A) Hal Porter (1911–1984, A) Peter Porter (1929–2010, A) Rochelle Potkar (born 1979, In) Robert Potter (1721–1804, E) Charles Potts (born 1943, US) Ezra Pound (1885–1972, US) B. W. Powe (born 1955, C) Craig Powell (born 1940, A) Winthrop Mackworth Praed (1802–1839, E) Claire Pratt (1921–1995, C) E. J. 
Pratt (1882–1964, C) Jack Prelutsky (born 1940, US) Karen Press (born 1956, SA) Thomas Preston (1537–1598, E) Ron Pretty (born 1940, A) Frank Prewett (1893–1962, C) Nancy Price (1880–1970, E) Richard Price (born 1966, S/E) Robert Priest (born 1951, C) F. T. Prince (1912–2003, E) Thomas Pringle (1789–1834, S/SA) Matthew Prior (1664–1721, E) Pauline Prior-Pitt (living, S) Adelaide Anne Procter (1825–1864, E) Bryan Procter (1787–1874, E) Kevin Prufer (born 1969, US) J. H. Prynne (born 1936, E) Sheenagh Pugh (born 1950, W/E) Al Purdy (1918–2000, C) Q Andy Quan (born 1969, C/A) Francis Quarles (1592–1644, E) Peter Quennell (1905–1993, E) Sina Queyras (living, C) Roderic Quinn (1867–1949, A) R Ra–Ri William Radice (born 1951, E) Kenneth Radu (born 1945, C) Sam Ragan (1915–1996, US) Jennifer Rahim (born 1963, T) Craig Raine (born 1944, E) Kathleen Raine (1908–2003, US) Carl Rakosi (1903–2004, US) Walter Raleigh (1552 or 1554–1618, E) James Ralph (1705–1762, US/E) Raymond Ramcharitar (living, T) Lesego Rampolokeng (born 1965, S) Allan Ramsay (1686–1758, S) Theodore Harding Rand (1835–1900, C) Dudley Randall (1914–2000, US) Julia Randall (1924–2005, US) Thomas Randolph (1605–1635, E) Jennifer Rankin (1941–1979, A) Claudia Rankine (born 1963, J) John
Caliburnus to be derivative of a lost Old Welsh text in which (Old Welsh ) had not yet been lenited to (Middle Welsh or ). In the late 15th/early 16th-century Middle Cornish play Beunans Ke, Arthur's sword is called Calesvol, which is etymologically an exact Middle Cornish cognate of the Welsh Caledfwlch. It is unclear if the name was borrowed from the Welsh (if so, it must have been an early loan, for phonological reasons), or represents an early, pan-Brittonic traditional name for Arthur's sword. In Old French sources this then became Escalibor, Excalibor, and finally the familiar Excalibur. Geoffrey Gaimar, in his Old French L'Estoire des Engleis (1134-1140), mentions Arthur and his sword: "this Constantine was the nephew of Arthur, who had the sword Caliburc" (""). In Wace's Roman de Brut (c. 1150–1155), an Old French translation and versification of Geoffrey's Historia, the sword is called Calabrum, Callibourc, Chalabrun, and Calabrun (with variant spellings such as Chalabrum, Calibore, Callibor, Caliborne, Calliborc, and Escaliborc, found in various manuscripts of the Brut). In Chrétien de Troyes' late 12th-century Old French Perceval, Arthur's knight Gawain carries the sword Escalibor and it is stated, "for at his belt hung Escalibor, the finest sword that there was, which sliced through iron as through wood" (). This statement was probably picked up by the author of the Estoire Merlin, or Vulgate Merlin, where the author (who was fond of fanciful folk etymologies) asserts that Escalibor "is a Hebrew name which means in French 'cuts iron, steel, and wood (; note that the word for "steel" here, achier, also means "blade" or "sword" and comes from medieval Latin , a derivative of "sharp", so there is no direct connection with Latin in this etymology). It is from this fanciful etymological musing that Thomas Malory got the notion that Excalibur meant "cut steel" (the name of it,' said the lady, 'is Excalibur, that is as moche to say, as Cut stele). The sword in the stone and the sword in the lake In Arthurian romance, a number of explanations are given for Arthur's possession of Excalibur. In Robert de Boron's Merlin, the first tale to mention the "sword in the stone" motif c. 1200, Arthur obtained the British throne by pulling a sword from an anvil sitting atop a stone that appeared in a churchyard on Christmas Eve. In this account, as foretold by Merlin, the act could not be performed except by "the true king", meaning the divinely appointed king or true heir of Uther Pendragon. The scene is set by different authors at either London (Londinium) or generally in Logres, and might have been inspired by a miracle attributed to the 11th-century Bishop Wulfstan of Worcester. As Malory related in his most famous English-language version of the Arthurian tales, the 15th-century Le Morte d'Arthur: "Whoso pulleth out this sword of this stone and anvil, is rightwise king born of all England." After many of the gathered nobles try and fail to complete Merlin's challenge, the teenage Arthur (who up to this point had believed himself to be son of Sir Ector, not Uther's son, and went there as Sir Kay's squire) does this feat effortlessly by accident and then repeats it publicly. The identity of this sword as Excalibur is made explicit in the Prose Merlin, a part of the Lancelot-Grail cycle of French romances (the Vulgate Cycle). Eventually, in the cycle's finale Vulgate Mort Artu, when Arthur is at the brink of death, he orders Griflet to cast Excalibur into the enchanted lake. 
After two failed attempts (as he felt such a great sword should not be thrown away), Griflet finally complies with the wounded king's request and a hand emerges from the lake to catch it. This tale becomes attached to Bedivere, instead of Griflet, in Malory and the English tradition. However, in the Post-Vulgate Cycle (and consequently Malory), Arthur breaks the Sword from the Stone while in combat against King Pellinore very early in his reign. On Merlin's advice, he then goes with him to be given Excalibur by a Lady of the Lake in exchange for a later boon for her (some time later, she arrives at Arthur's court to demand the head of Balin). Malory records both versions of the legend in his Le Morte d'Arthur, naming each sword as Excalibur. Other roles and attributes In Welsh legends, Arthur's sword is known as Caledfwlch. In Culhwch and Olwen, it is one of Arthur's most valuable possessions and is used by Arthur's warrior Llenlleawg the Irishman to kill the Irish king Diwrnach while stealing his magical cauldron. Though not named as Caledfwlch, Arthur's sword is described vividly in The Dream of Rhonabwy, one of the tales associated with the Mabinogion (as translated by Jeffrey Gantz): "Then they heard Cadwr Earl of Cornwall being summoned, and saw him rise with Arthur's sword in his hand, with a design of two chimeras on the golden hilt; when the sword was unsheathed what was seen from the mouths of the two chimeras was like two flames of fire, so dreadful that it was not easy for anyone to look." Geoffrey's Historia is the first non-Welsh source to speak of the sword. Geoffrey says the sword was forged in Avalon and Latinises the name "Caledfwlch" as Caliburnus. When his influential pseudo-history made it to Continental Europe, writers altered the name further until it finally took on the popular form Excalibur (various spellings in the medieval Arthurian romance and chronicle tradition include: Calabrun, Calabrum, Calibourne, Callibourc, Calliborc, Calibourch, Escaliborc, and Escalibor). The legend was expanded upon in the Vulgate Cycle and in the Post-Vulgate Cycle which emerged in its wake. Both included the Prose Merlin, but the Post-Vulgate authors left out the Merlin continuation from the earlier cycle, choosing to add an original account of Arthur's early days including a new origin for Excalibur.
right away, in the second measure, is a characteristic of the eight-bar blues." In the following examples each box represents a 'bar' of music (the specific time signature is not relevant). The chord in the box is played for the full bar. If two chords are in the box they are each played for half a bar, etc. The chords are represented as scale degrees in Roman numeral analysis. Roman numerals are used so the musician may understand the progression of the chords regardless of the key it is played in. {|class="wikitable" style="text-align:left; width:250px;" |+Eight-bar blues |width=25%|I |width=25%|V7 |width=25%|IV7 |width=25%|IV7 |- | I || V7 IV7 ||I ||V7 |- |} "Worried Life Blues" (probably the most common eight bar blues progression): {|class="wikitable" style="text-align:left; width:250px;" |width=25%|I |width=25%|I |width=25%|IV |width=25%|IV |- | I ||V ||I IV||I V |- |} "Heartbreak Hotel" (variation with the I on the first half): {|class="wikitable" style="text-align:left; width:250px;" |width=25%|I |width=25%|I |width=25%|I |width=25%|I |- | IV ||IV ||V ||I |- |} J. B. Lenoir's "Slow Down" and "Key to the Highway" (variation with the V at bar 2): {|class="wikitable" style="text-align:left; width:250px;" |width=25%|I7 |width=25%|V7 |width=25%|IV7 |width=25%|IV7 |- |I7 ||V7 ||I7 ||V7 |- |} "Get a Haircut" by George Thorogood (simple progression): {|class="wikitable" style="text-align:left; width:250px;" |width=25%|I |width=25%|I |width=25%|I |width=25%|I |- | IV || IV ||V ||V |- |} Jimmy Rogers' "Walkin' By Myself" (somewhat unorthodox example of the form): {|class="wikitable" style="text-align:left; width:250px;" |width=25%|I7 |width=25%|I7 |width=25%|I7 |width=25%|I7 |- |IV7 ||V7 ||I7 ||V7 |- |} Howlin Wolf's version of "Sitting on Top of the World" is actually a 9 bar blues that adds an extra "V" chord
at the end of the progression. The song uses movement between major and dominant 7th and major and minor fourth: {|class="wikitable" style="text-align:left; width:250px;" |width=25%|I |width=25%|I7 |width=25%|IV |width=25%|iv |- |I7 ||V ||I7 IV ||I7 V |- |} The first four bar progression used by Wolf is also used in Nina Simone's 1965 version of "Trouble in Mind", but with
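Because the tables above give each progression as key-independent scale degrees, any of them can be spelled out mechanically once a key is fixed. The short Python sketch below is only an illustration (the key of E, the helper names, and the plain chord symbols are assumptions, not something stated in the text); it renders the "Worried Life Blues" progression from the table above.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Semitone offsets, from the tonic, of the scale degrees used in the tables above.
DEGREE_SEMITONES = {"I": 0, "IV": 5, "V": 7}

def chord_for(degree, key):
    """Turn a Roman-numeral degree (optionally ending in '7') into a chord name."""
    seventh = degree.endswith("7")
    root = degree[:-1] if seventh else degree
    root_index = (NOTES.index(key) + DEGREE_SEMITONES[root]) % 12
    return NOTES[root_index] + ("7" if seventh else "")

# "Worried Life Blues" from the table above; a tuple means two chords share one bar.
WORRIED_LIFE = ["I", "I", "IV", "IV", "I", "V", ("I", "IV"), ("I", "V")]

def render(progression, key):
    bars = []
    for bar in progression:
        chords = bar if isinstance(bar, tuple) else (bar,)
        bars.append(" ".join(chord_for(c, key) for c in chords))
    return " | ".join(bars)

# In the key of E this prints: E | E | A | A | E | B | E A | E B
print(render(WORRIED_LIFE, "E"))

Minor degrees such as the iv in the Howlin' Wolf example would need an extra case, since this sketch only handles the major I, IV and V used in "Worried Life Blues".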
egg-laying mammals also known as spiny anteaters. Echidna may also refer to: Echidna (mythology), monster in Greek mythology and namesake of the mammal (42355) Typhon I Echidna, the natural satellite of the asteroid 42355 Typhon ECHIDNA, high-resolution neutron powder diffractometer at Australia's research reactor OPAL Echidna (Re:Zero), a character in the light novel series Re:Zero − Starting Life in Another World Echidna, character in the video game The Bouncer Taxonomic genera Echidna (fish), a genus of moray eels Echidna, a junior homonym referring to the mammals commonly known as echidnas Echidna, junior homonym for a genus
John Wilson, concerning prime numbers; it was later proven rigorously by Lagrange. In Proprietates Algebraicarum Curvarum (1772) Waring reissued in a much revised form the first four chapters of the second part of Miscellanea Analytica. He devoted himself to the classification of higher plane curves, improving results obtained by Isaac Newton, James Stirling, Leonhard Euler, and Gabriel Cramer. In 1794 he published a few copies of a philosophical work entitled An Essay on the Principles of Human Knowledge, which were circulated among his friends. Waring's mathematical style is highly analytical. In fact he criticised those British mathematicians who adhered too strictly to geometry. It is indicative that he was one of the subscribers of John Landen's Residual Analysis (1764), one of the works in which the tradition of the Newtonian fluxional calculus was more severely criticised. In the preface of Meditationes Analyticae Waring showed a good knowledge of continental mathematicians such as Alexis Clairaut, Jean le Rond d'Alembert, and Euler. He lamented the fact that in Great Britain mathematics was cultivated with less interest than on the continent, and clearly desired to be considered as highly as the great names in continental mathematics—there is no doubt that he was reading their work at a level never reached by any other eighteenth-century British mathematician. Most notably, at the end of chapter three of Meditationes Analyticae Waring presents some partial fluxional equations (partial differential equations in Leibnizian terminology); such equations are a mathematical instrument of great importance in the study of continuous bodies which was almost completely neglected in Britain before Waring's researches. One of the most interesting results in Meditationes Analyticae is a test for the convergence of series generally attributed to d'Alembert (the 'ratio test'). The theory of convergence of series (the object of which is to establish when the summation of an infinite number of terms can be said to have a finite 'sum') was not much advanced in the eighteenth century. Waring's work was known both in Britain and on the continent, but it is difficult to evaluate his impact on the development of mathematics. His work on algebraic equations contained in Miscellanea Analytica was translated into Italian by Vincenzo Riccati in 1770. Waring's style is not systematic and his exposition is often obscure. It seems that he never lectured and did not habitually correspond with other mathematicians. After Jérôme Lalande in 1796 observed, in Notice sur la vie de Condorcet, that in 1764 there was not a single first-rate analyst in England, Waring's reply, published after his death as 'Original letter of Dr Waring' in the Monthly Magazine, stated that he had given 'somewhere between three and four hundred new propositions of one kind or another'. Death During his last years he sank into a deep religious melancholy, and a violent cold caused his death, in Plealey, on 15 August 1798. He was
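For the reader's reference, the two results named in this passage can be stated in modern notation; the formulations below are the standard textbook versions, not Waring's own eighteenth-century phrasing.

% Wilson's theorem, published by Waring, credited to John Wilson, and later proved by Lagrange:
\[
  p \text{ is prime} \iff (p-1)! \equiv -1 \pmod{p}.
\]
% The ratio test appearing in Meditationes Analyticae, generally attributed to d'Alembert:
\[
  L = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|, \qquad
  L < 1 \Rightarrow \sum_n a_n \text{ converges absolutely}, \qquad
  L > 1 \Rightarrow \sum_n a_n \text{ diverges}.
\]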
prosperous farming couple. He received his early education in Shrewsbury School under a Mr Hotchkin and was admitted as a sizar at Magdalene College, Cambridge, on 24 March 1753, being also Millington exhibitioner. His extraordinary talent for mathematics was recognised from his early years in Cambridge. In 1757 he graduated BA as senior wrangler and on 24 April 1758 was elected to a fellowship at Magdalene. He belonged to the Hyson Club, whose members included William Paley. Career At the end of 1759 Waring published the first chapter of Miscellanea Analytica. On 28 January the next year he was appointed Lucasian professor of mathematics, one of the highest positions in Cambridge. William Samuel Powell, then tutor in St John's College, Cambridge opposed Waring's election and instead supported the candidacy of William Ludlam. In the polemic with Powell, Waring was backed by John Wilson. In fact Waring was very young and did not hold the MA, necessary for qualifying for the Lucasian chair, but this was granted him in 1760 by royal mandate. In 1762 he published the full Miscellanea Analytica, mainly devoted to the theory of numbers and algebraic equations. In 1763 he was elected to the Royal Society. He was awarded its Copley Medal in 1784 but withdrew from the society in 1795, after he had reached sixty, 'on account of [his] age'. Waring was also a member of the academies of sciences of Göttingen and Bologna. In 1767 he took an MD degree, but his activity in medicine was quite limited. He carried out dissections with Richard Watson, professor of chemistry and later bishop of Llandaff. From about 1770 he was physician at Addenbrooke's Hospital at Cambridge, and he also practised at St Ives, Huntingdonshire, where he lived for some years after 1767. His career as a physician was not very successful since he was seriously short-sighted and a very shy man. Personal life Waring had a younger brother, Humphrey, who obtained a fellowship at Magdalene in 1775. In 1776 Waring
stream of small works which he was able to sell. In due course he left the insurance company to concentrate on his writing, while also working part-time as assistant editor for the weekly Black and White magazine. Eden Phillpotts maintained a steady output of three or four books a year for the next half century. He produced poetry, short stories, novels, plays and mystery tales. Many of his novels were about rural Devon life and some of his plays were distinguished by their effective use of regional dialect. Eden Phillpotts died at his home in Broadclyst near Exeter, Devon, on 29 December 1960. Personal life Phillpotts was for many years the President of the Dartmoor Preservation Association and cared passionately about the conservation of Dartmoor. He was an agnostic and a supporter of the Rationalist Press Association. Phillpotts was a friend of Agatha Christie, who was an admirer of his work and a regular visitor to his home. In her autobiography she expressed gratitude for his early advice on fiction writing and quoted some of it. Jorge Luis Borges was another Phillpotts admirer. Borges mentioned him numerous times, wrote at least two reviews of his novels, and included him in his "Personal Library", a collection of works selected to reflect his personal literary preferences. Philpotts allegedly sexually abused his daughter Adelaide. In a 1976 interview for a book about her father, Adelaide described an incestuous "relationship" with him that she says lasted from the age of five or six until her early thirties, when he remarried. When she herself finally married at the age of 55 her father never forgave her, and never communicated with her again. Writings Phillpotts wrote a great many books with a Dartmoor setting. One of his novels, Widecombe Fair, inspired by an annual fair at the village of Widecombe-in-the-Moor, provided the scenario for his comic play The Farmer's Wife (1916). It went on to become a 1928 silent film of the same name, directed by Alfred Hitchcock. It was followed by a 1941 remake, directed by Norman Lee and Leslie Arliss. It became a BBC TV drama in 1955, directed by Owen Reed. Jan Stewer played Churdles Ash. The BBC had broadcast the play in 1934. He co-wrote several plays with his daughter Adelaide Phillpotts, The Farmer's Wife and Yellow Sands (1926); she later claimed their relationship was incestuous. Eden is best known as the author of many novels, plays and poems about Dartmoor. His Dartmoor cycle of 18 novels and two volumes of short stories still has many avid readers despite the fact that many titles are out of print. Philpotts also wrote a series of novels, each set against the background of a different trade or industry. Titles include: Brunel's Tower (a pottery) and Storm in a Teacup (hand-papermaking). Among his other works is The Grey Room, the plot of which is centred on a haunted room in an English manor house. He also wrote a number of other mystery novels, both under his own name and the pseudonym Harrington Hext. These include: The Thing at Their Heels, The Red Redmaynes, The Monster, The Clue from the Stars, and The Captain's Curio. The Human Boy was a collection of schoolboy stories in the same genre as Rudyard Kipling's Stalky & Co., though different in mood and style. Late in his long writing career he wrote a few books of interest to science fiction and fantasy readers, the most noteworthy being Saurus, which involves an alien reptilian observing human life. Eric Partridge praised the immediacy and impact of his dialect writing. 
Photographs Works Novels The End of a Life (1891) Folly and Fresh Air (1891) A Tiger's Club (1892) A Deal with the Devil (1895) Some Every-day Folks (1895) My Laughing Philosopher (1896) Down Dartmoor Way (1896) Lying Prophets: A Novel (1897) Children of the Mist (1898) Sons of the Morning (1900) The Good Red Earth (1901) The River (1902) Old Delabole (1903) The Golden Fetich (1903) The American Prisoner (1904) The Farm of the Dagger (1904) The Secret Woman (1905) The Poacher's Wife (1906) AKA Daniel Sweetland (1906) The Sinews of War: A Romance of London and the Sea (1906) with Arnold Bennett Doubloons (1906) with Arnold Bennett The Portreeve (1906) The Whirlwind (1907) The Mother (1908) The Virgin in Judgment (1908) AKA A Fight to Finish (1911) The Statue: A Story of International Intrigue and Mystery (1908) with Arnold Bennett The Three Brothers (1909) The Fun of the Fair (1909) The Haven (1909) The Flint Heart: A Fairy Story (1910) The Thief of Virtue (1910) The Beacon (1911) Demeter's Daughter (1911) The Three Knaves (1912) The Forest on the Hill (1912) The Lovers: A Romance (1912) Widecombe Fair (1913) The Joy of Youth (1913) The Old Time Before Them (1913) Faith Tresilion (1914) The Master of Merripit (1914) Brunel's Tower (1915) The Green Alleys: A Comedy (1916) The Banks of Colne: (the Nursery) (1917) The Girl and the Faun (1917) The Spinners (1918) From the Angle of Seventeen (1912) Evander (1919) Storm in a Teacup (1919) Miser's Money (1920) Eudocia (1921) The Grey Room (1921) The Bronze Venus (1921) Orphan Dinah (1920) The Red Redmaynes (1922) Pan and the Twins (1922) Number 87 (1922) The Thing at Their Heels (1923) Cheat-the-boys; a Story of the Devonshire Orchards (1924) Redcliff (1924) The Treasures of Typhon (1924) The Lavender Dragon (1924) Who Killed Diana? (1924) Circé's Island (1924) A Voice from the Dark (1925) The Monster (1925) George Westover (1926) The Marylebone Miser (1926) AKA Jig-Saw (1926) Cornish Droll: A Novel (1926) The Miniature (1926) The Jury (1927) Arachne (1928) Children of Men (1928) The Ring Fence (1928) Tryphena (1929) The Apes (1929) The Three Maidens (1930) Alcyone (a Fairy Story) (1930) "Found Drowned" (1931) A Clue from the Stars (1932) The Broom Squires (1932) Stormbury, A Story of Devon (1932) The Captain's Curio (1933) Bred in the Bone (1933) Witch's Cauldron (1933) Nancy Owlett (1933) Minions of the Moon (1934) Ned of the Caribbees (1934) Portrait of a Gentleman (1934) Mr. Digweed and Mr. Lumb: A Mystery Novel (1934) The Oldest Inhabitant: A Comedy (1934) A Close Call (1936) The Owl of Athene (1936) The White Camel: A Story of Arabia (1936) The Anniversary Murder (1936) The Wife
of Elias: A Mystery Novel (1937) Wood-nymph (1937) Farce in Three Acts (1937) Portrait of a Scoundrel (1938) Saurus (1938) Lycanthrope, the Mystery of Sir William Wolf (1938) Dark Horses (1938) Golden Island (1938) Thorn
political asylum to Julian Assange in November 2010, which he invoked by entering the Ecuadorian embassy in London in June 2012. The asylum was revoked in 2019, following negotiations between the Moreno administration and the British Government. History Both nations are signatories of the Inter-American Treaty of Reciprocal Assistance (the Rio Treaty) of 1947, the Western Hemisphere's regional mutual security treaty. Ecuador shares U.S. concern over increasing narcotrafficking and international terrorism and has energetically condemned terrorist actions, whether directed against government officials or private citizens. The government has maintained Ecuador virtually free of coca production since the mid-1980s and is working to combat money laundering and the transhipment of drugs and chemicals essential to the processing of cocaine. Ecuador and the U.S. agreed in 1999 to a 10-year arrangement whereby U.S. military surveillance aircraft could use the airbase at Manta, Ecuador, as a Forward Operating Location to detect drug trafficking flights through the region. The arrangement expired in 2009; former president Rafael Correa vowed not to renew it, and since then Ecuador has not had any foreign military facilities in the country. In fisheries issues, the United States claims jurisdiction for the management of coastal fisheries up to 200 miles (370 km) from its coast, but excludes highly migratory species; Ecuador, on the other hand, claims a 200-mile (370 km) territorial sea, and imposes license fees and fines on foreign fishing vessels in the area, making no exceptions for catches of migratory species. In the early 1970s, Ecuador seized about 100 foreign-flag vessels (many of them U.S.) and collected fees and fines of more than $6 million. After a drop-off in such seizures for some years, several U.S. tuna boats were again detained and seized in 1980 and 1981. The U.S. Magnuson Fishery Conservation and Management Act then triggered an automatic prohibition of U.S. imports of tuna products from Ecuador. The prohibition was lifted in 1983, and although fundamental differences between U.S. and Ecuadorian legislation still exist, there is no current conflict. In the period since the seizures that triggered the tuna import ban, successive Ecuadorian governments have declared their willingness to explore possible solutions to this problem with mutual respect for the longstanding positions and principles of both sides. The election of Rafael Correa in October 2006 strained relations between the two countries, and relations have since been fraught with tension. Correa has been openly critical of U.S. foreign policy. Relations soured further in April 2011, when Ecuador expelled the U.S. ambassador over a leaked diplomatic cable accusing President Correa of knowingly ignoring police corruption. In retaliation, the Ecuadorian ambassador, Luis Gallegos, was expelled from
the United States. Relations reached an all-time low in 2013, when Ecuador unilaterally pulled out of a preferential trade pact with the United States, claiming the U.S. was using the pact as blackmail over the asylum request of Edward Snowden. The pact offered Ecuador US$23 million, which Ecuador instead offered to the U.S. for human rights training; tariff-free imports had been offered to Ecuador in exchange for drug-elimination efforts. Julian Assange applied for Ecuadorian citizenship
rack (i.e., the middle of the third row), and the two back corner balls, one of which must be a stripe and the other a solid. The cue ball is placed anywhere the breaker desires behind the . Break One person is chosen by some predetermined method (e.g., coin toss, , or win or loss of previous game or match) to shoot first, using the cue ball to the object-ball rack apart. In most leagues it is the breaker's opponent who racks the balls, but in some, players break their own racks. If the breaker fails to make a successful break—usually defined as at least four balls hitting cushions or an object ball being pocketed—then the opponent can opt either to play from the current position or to call for a and either re-break or have the original breaker repeat the break. If the 8 ball is pocketed on the break, then the breaker can choose either to the 8 ball and play from the current position or to re-rack and re-break; but if the cue ball is also pocketed on the break then the opponent is the one who has the choice: either to re-spot the 8 ball and shoot with behind the , accepting the current position, or to re-break or have the breaker re-break. Turn-taking A player (or team) continues to shoot until committing a or failing to legally pocket an object ball (whether or not); thereupon it is the turn of the opposing players. Play alternates in this manner for the remainder of the game. Following a foul, the incoming player has anywhere on the table, unless the foul occurred on the break shot, as noted previously. Selection of the target group The table is "open" at the start of the game, meaning that either player may shoot at any ball. It remains open until one player legally pockets any called ball other than the 8 after the break. That player is assigned the group, or suit, of the pocketed ball – 1–7 (solids), or 9–15 (stripes) – and the other suit is assigned to the opponent. Balls pocketed on the break, or as the result of a foul while the table is still open, are not used to assign the suits. Once the suits are assigned, they remain fixed throughout the game. If any balls from a player's suit are on the table, the player must hit one of them first on every shot; otherwise a foul is called and the turn ends. After all balls from the suit have been pocketed, the player's target becomes the 8 for the remainder of the game. Pocketing the 8 ball Once all of a player's (or team's) group of object balls are pocketed, the player attempts to sink the 8 ball. In order to win the game, the player first designates which pocket the 8 ball will be pocketed into and then successfully pockets the 8 ball into that pocket. If the player knocks the 8 ball off the table, the player loses the game. If the player pockets the 8 ball and commits a foul or pockets it into another pocket than the one designated, the player loses the game. Otherwise (i.e., if the 8 ball is neither pocketed nor knocked off the table), the shooter's turn is simply over, even if a foul occurs. In short, a world-standardized rules game of eight-ball, like a game of nine-ball, is not over until the "" is no longer on the table. The rule has been increasingly adopted by amateur leagues. Winning A player wins the game if that player legally pockets the 8 ball into a designated pocket after all of their object balls have been pocketed. 
Because of this, it is possible for a game to end with only one of the players having shot, which is known as "running the table" or a "denial"; conversely, it is also possible to win a game without taking a shot: this can occur if the opposing player illegally pockets the 8 ball on any shot other than the break (such as sinking the 8 ball in an uncalled pocket, knocking the 8 ball off the table, sinking the 8 ball when the player is not yet on the black ball, or sinking both the 8 ball and the cue ball on a single shot). What happens when the 8 ball is pocketed on the break varies between rule sets; see the Fouls section below for more information. Fouls The general rules of pool apply to eight-ball, such as the requirements that the cue ball not be pocketed and that a cushion be hit by any of the balls after the cue ball has struck an object ball. Fouls specific to eight-ball are: The shooter fails to strike one of their own object balls (or the 8 ball when it is the legal ball) with the cue ball, before other balls are contacted by the cue ball. This excludes "" shots, where the cue ball strikes one of the shooter's and one of the opponent's object balls simultaneously. If an
attempt is made to pocket a ball, and the ball hits the pocket, bounces out and lands on the ground, the ball is placed in the pocket and the game continues. The shooter shoots the black 8 ball without designating the pocket to opposite team members or the match referee in advance. The shooter deliberately pockets the opponent's balls while shooting the 8 ball. On the break shot, no balls are pocketed and fewer than four balls reach the cushions, in which case the incoming player can demand a re-rack and take the break or force the original breaker to re-break, or may take ball-in-hand behind the and shoot the balls as they lie. Variants Blackball The British version of eight-ball, known internationally as blackball, has evolved into a separate game, retaining significant elements of earlier pub versions of the game, with additional influences from English billiards and snooker.
It is popular in amateur competition in the UK, Ireland, Australia and some other countries. The game uses unnumbered, solid-colored object balls, typically red and yellow, with one black 8 ball. They are usually or in diameter, the latter being the same size as the balls used in snooker and English billiards. Tables for blackball pool are long, and feature pockets with rounded cushion openings, like snooker tables. The rules of blackball differ from standard eight-ball in numerous ways, including the handling of fouls, which may give the opponent two shots, racking (the 8 ball, not the apex ball, goes on the spot), selection of which group of balls will be shot by which player, handling of balls and s, and many other details. Internationally, the World Pool-Billiard Association and the World Eightball Pool Federation both publish rules and promote events. The two rule sets differ in some details regarding the penalties for fouls. Chinese eight-ball Chinese eight-ball is eight-ball as played in major Chinese tournaments such as the Chinese Eight-ball World Championship. The rules are essentially the same as standard WPA rules, and the game is played with standard solids-and-stripes balls, but the tables are constructed similarly to snooker tables, with rounded pocket openings, napped cloth, and flat-faced rail cushions, which results in some differences in gameplay approach. The variant arose in the mid-1980s and 1990s as eight-ball gained popularity in China, where snooker was the most popular cue sport at the time. With standard American-style pool tables rare, Chinese players made do with playing eight-ball on small snooker tables. It has since become the most popular cue sport in China, and the major tournaments have some of the largest prize money in pool. Eight-ball rotation The hybrid game eight-ball rotation is a combination of eight-ball and rotation, in which the players must pocket their balls (other than the 8, which remains last) in numerical order. Specifically, the solids player starts by pocketing the 1 ball and ascends to the 7 ball, and the stripes player starts by pocketing the 15 ball and descends to the 9 ball. Backwards eight-ball Backwards eight-ball, also called reverse
recent years beyond government contracting, a sector in which its importance continues to rise (e.g. recent new DFARS rules), in part because EVM can also surface in and help substantiate contract disputes. EVM features Essential features of any EVM implementation include: a project plan that identifies the work to be accomplished; a valuation of planned work, called planned value (PV) or budgeted cost of work scheduled (BCWS); pre-defined "earning rules" (also called metrics) to quantify the accomplishment of work, called earned value (EV) or budgeted cost of work performed (BCWP); actual cost, also known as actual cost of work performed (ACWP); and a plot of cumulative project cost versus time, especially one showing both early-date and late-date curves. EVM implementations for large or complex projects include many more features, such as indicators and forecasts of cost performance (over budget or under budget) and schedule performance (behind schedule or ahead of schedule). However, the most basic requirement of an EVM system is that it quantifies progress using PV and EV. Application example Project A has been approved for a duration of one year with a budget of X. The plan also called for spending 50% of the approved budget, and completing 50% of the work, in the first six months. If, six months after the start of the project, the project manager reports that 50% of the budget has been spent, one might initially conclude that the project is perfectly on plan. In reality, however, this information is not sufficient to support such a conclusion. The project may have spent 50% of the budget while finishing only 25% of the work, which would mean the project is not doing well; or it may have spent 50% of the budget while completing 75% of the work, which would mean the project is doing better than planned. EVM is meant to address such issues. History EVM emerged as a financial analysis specialty in United States Government programs in the 1960s, with the government specifying rules under which contractors must implement an EVM system (EVMS). It has since become a significant branch of project management and cost engineering. Project management research investigating the contribution of EVM to project success suggests a moderately strong positive relationship. Implementations of EVM can be scaled to fit projects of all sizes and complexities. The genesis of EVM occurred in industrial manufacturing at the turn of the 20th century, based largely on the principle of "earned time" popularized by Frank and Lillian Gilbreth, but the concept took root in the United States Department of Defense in the 1960s. The original concept was called PERT/COST, but it was considered overly burdensome (not very adaptable) by contractors who were mandated to use it, and many variations of it began to proliferate among various procurement programs. In 1967, the DoD established a criterion-based approach, using a set of 35 criteria, called the Cost/Schedule Control Systems Criteria (C/SCSC). In the 1970s and early 1980s, a subculture of C/SCSC analysis grew, but the technique was often ignored or even actively resisted by project managers in both government and industry. C/SCSC was often considered a financial control tool that could be delegated to analytical specialists. In 1979, EVM was introduced to the architecture and engineering industry in a "Public Works Magazine" article by David Burstein, a project manager with a national engineering firm.
This technique has been taught ever since as part of the project management training program presented by PSMJ Resources, an international training and consulting firm that specializes in the engineering and architecture industry. In the late 1980s and early 1990s, EVM emerged as a project management methodology to be understood and used by managers and executives, not just EVM specialists. In 1989, EVM leadership was elevated to the Undersecretary of Defense for Acquisition, thus making EVM an element of program management and procurement. In 1991, Secretary of Defense Dick Cheney canceled the Navy A-12 Avenger II Program because of performance problems detected by EVM. This demonstrated conclusively that EVM mattered to secretary-level leadership. In the 1990s, many U.S. Government regulations were eliminated or streamlined. However, EVM not only survived the acquisition reform movement, but became strongly associated with the acquisition reform movement itself. Most notably, from 1995 to 1998, ownership of EVM criteria (reduced to 32) was transferred to industry by adoption of ANSI EIA 748-A standard. The use of EVM expanded beyond the U.S. Department of Defense. It was adopted by the National Aeronautics and Space Administration, United States Department of Energy and other technology-related agencies. Many industrialized nations also began to utilize EVM in their own procurement programs. An overview of EVM was included in the Project Management Institute's first PMBOK Guide in 1987 and was expanded in subsequent editions. In the most recent edition of the PMBOK guide, EVM is listed among the general tools and techniques for processes to control project costs. The construction industry was an early commercial adopter of EVM. Closer integration of EVM with the practice of project management accelerated in the 1990s. In 1999, the Performance Management Association merged with the Project Management Institute (PMI) to become PMI's first college, the College of Performance Management. The United States Office of Management and Budget began to mandate the use of EVM across all government agencies, and, for the first time, for certain internally managed projects (not just for contractors). EVM also received greater attention by publicly traded companies in response to the Sarbanes-Oxley Act of 2002. In Australia, EVM has been codified as standards AS 4817-2003 and AS 4817-2006. Project tracking It is helpful to see an example of project tracking that does not include earned value performance management. Consider a project that has been planned in detail, including a time-phased spend plan for all elements of work. Figure 1 shows the cumulative budget (cost) for this project as a function of time (the blue line, labeled PV). It also shows the cumulative actual cost of the project (red line, labeled AC) through week 8. To those unfamiliar with EVM, it might appear that this project was over budget through week 4 and then under budget from week 6 through week 8. However, what is missing from this chart is any understanding of how much work has been accomplished during the project. If the project was actually completed at week 8, then the project would actually be well under budget and well ahead of schedule. If, on the other hand, the project is only 10% complete at week 8, the project is significantly over budget and behind schedule. A method is needed to measure technical performance objectively and quantitatively, and that is what EVM accomplishes. 
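As a rough sketch of how that measurement works in practice, the following Python fragment computes earned value from percent complete and planned value, and derives the basic cost and schedule variances. All task names and figures here are hypothetical and are not taken from the example above.

```python
# Minimal sketch of the three basic EVM quantities and the variances derived
# from them. All task names and figures are hypothetical.

tasks = [
    # (name, planned_value, fraction_complete)
    ("design", 10_000, 1.00),   # finished
    ("build",  25_000, 0.40),   # in progress
    ("test",   15_000, 0.00),   # not started
]

# Planned value of the work scheduled to date (assume design and half of
# build were scheduled to be done by the status date).
PV = 10_000 + 0.5 * 25_000                     # BCWS = 22,500

# Earned value: each task's planned value weighted by its actual completion.
EV = sum(pv * done for _, pv, done in tasks)   # BCWP = 20,000

AC = 21_000   # actual cost of work performed (ACWP), from the accounting system

cost_variance = EV - AC       # negative: over cost for the work performed
schedule_variance = EV - PV   # negative: behind the planned schedule

print(f"PV={PV:,.0f}  EV={EV:,.0f}  AC={AC:,.0f}")
print(f"CV={cost_variance:,.0f}  SV={schedule_variance:,.0f}")
```

In this toy status report, actual spending (21,000) is below the 22,500 that was planned to have been spent, yet the work actually accomplished is worth only 20,000, so the project is both over cost for the work performed and behind schedule, a distinction that spending data alone cannot reveal.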
Progress measurement sheet Progress can be measured using a measurement sheet and employing various techniques including milestones, weighted steps, value of work done, physical percent complete, earned value, Level of Effort, earn as planned, and more. Progress can be tracked based on any measure – cost, hours, quantities, schedule, directly-input percent complete, and more. Progress can be assessed using fundamental earned value calculations and variance analysis (Planned Cost, Actual Cost, and Earned Value); these calculations can determine where project performance currently is using the estimated project baseline’s cost and schedule information. With EVM Consider the same project, except this time the project plan includes pre-defined methods of quantifying the accomplishment of work. At the end of each week, the project manager identifies every detailed element of work that has been completed, and sums the EV for each of these completed elements. Earned value may be accumulated monthly, weekly, or as progress is made. The Value of Work Done (VOWD) is mainly used in Oil & Gas and is similar to the Actual Cost in Earned Value Management. Earned value (EV) EV is calculated by multiplying %complete of each task (completed or in progress) by its planned value Figure 2 shows the EV curve (in green) along with the PV curve from Figure 1. The chart indicates that technical performance (i.e. progress) started more rapidly than planned, but slowed significantly and fell behind schedule at week 7 and 8. This chart illustrates the schedule performance aspect of EVM. It is complementary to critical path or critical chain schedule management. Figure 3 shows the same EV curve (green) with the actual cost data from Figure 1 (in red). It can be seen that the project was actually under budget, relative to the amount of work accomplished, since the start of the project. This is a much better conclusion than might be
derived from Figure 1. Figure 4 shows all three curves together – which is a typical EVM line chart. The best way to read these three-line charts is to identify the EV curve first, then compare it to PV (for schedule performance) and AC (for cost performance). It can be seen from this illustration that a true understanding of cost performance and schedule performance relies first on measuring technical performance objectively. This is the foundational principle of EVM. Scaling EVM from simple to advanced implementations The foundational principle of EVM, mentioned above, does not depend on the size or complexity of the project. However, the implementations of EVM can vary significantly depending on the circumstances. In many cases, organizations establish an all-or-nothing threshold; projects above the threshold require a full-featured (complex) EVM system and projects below the threshold are exempted. Another approach that is gaining favor is to scale EVM implementation according to the project at hand and skill level of the project team. Simple implementations (emphasizing only technical performance) There are many more small and simple projects than there are large and complex ones, yet historically only the largest and most complex have enjoyed the benefits of EVM. Still, lightweight implementations of EVM are achievable by any person who has basic spreadsheet skills. In fact, spreadsheet implementations are an excellent way to learn basic EVM skills. The first step is to define the work.
This is typically done in a hierarchical arrangement called a work breakdown structure (WBS), although the simplest projects may use a simple list of tasks. In either case, it is important that the WBS or list be comprehensive. It is also important that the elements be mutually exclusive, so that work is easily categorized in one and only one element of work. The most detailed elements of a WBS hierarchy (or the items in a list) are called work packages. Work packages are then often devolved further in the project schedule into tasks or activities. The second step is to assign a value, called planned value (PV), to each work package. For large projects, PV is almost always an allocation of the total project budget, and may be in units of currency (e.g. dollar, euro or naira) or in labor hours, or both. However, in very simple projects, each activity may be assigned a weighted "point value", which might not be a budget number. Assigning weighted values and achieving consensus on all PV quantities yields an important benefit of EVM, because it exposes misunderstandings and miscommunications about the scope of the project, and resolving these differences should always occur as early as possible. Some terminal elements cannot be known (planned) in great detail in advance, and that is expected, because they can be further refined at a later time. The third step is to define "earning rules" for each work package. The simplest method is to apply just one earning rule, such as the 0/100 rule, to all activities. Using the 0/100 rule, no credit is earned for an element of work until it is finished. A related rule is called the 50/50 rule, which means 50% credit is earned when an element of work is started, and the remaining 50% is earned upon completion. Other fixed earning rules such as a 25/75 rule or 20/80 rule are gaining favor, because they assign more weight to finishing work than to starting it, but they also motivate the project team to identify when an element of work is started, which can improve awareness of work-in-progress. These simple earning rules work well for small or simple projects because generally each activity tends to be fairly short in duration; a minimal code sketch of such rules is shown at the end of this subsection. These initial three steps define the minimal amount of planning for simplified EVM. The final step is to execute the project according to the plan and measure progress. When activities are started or finished, EV is accumulated according to the earning rule. This is typically done at regular intervals (e.g. weekly or monthly), but there is no reason why EV cannot be accumulated in near real-time, when work elements are started or completed. In fact, waiting to update EV only once per month (simply because that is when cost data are available) detracts from a primary benefit of using EVM, which is to create a technical performance scoreboard for the project team. In a lightweight implementation such as described here, the project manager has neither accumulated cost nor defined a detailed project schedule network (i.e. using a critical path or critical chain methodology). While such omissions are inappropriate for managing large projects, they are a common and reasonable occurrence in many very small or simple projects. Any project can benefit from using EV alone as a real-time score of progress. One useful result of this very simple approach (without schedule models and actual cost accumulation) is to compare EV curves of similar projects, as illustrated in Figure 5.
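The simple earning rules described above are easy to mechanize in a spreadsheet or a few lines of code. The sketch below is a minimal illustration, assuming hypothetical activities and point values, of how EV accumulates under a 0/100 or 50/50 rule as activities are started and finished.

```python
# Illustrative sketch of simple EVM earning rules (0/100 and 50/50).
# Activity names and point values are hypothetical.

def earned_value(activities, rule="0/100"):
    """Accumulate EV from (planned_value, started, finished) records."""
    ev = 0.0
    for pv, started, finished in activities:
        if finished:
            ev += pv              # full credit on completion
        elif started and rule == "50/50":
            ev += 0.5 * pv        # half credit when work begins
        # under the 0/100 rule, no credit until the activity is finished
    return ev

activities = [
    # (planned value in points, started, finished)
    (10, True, True),     # foundation
    (20, True, False),    # framing, in progress
    (15, False, False),   # roofing, not started
]

print(earned_value(activities, rule="0/100"))   # 10.0
print(earned_value(activities, rule="50/50"))   # 20.0
```

Accumulating EV this way whenever an activity starts or finishes provides the real-time technical-performance scoreboard described above, even without actual-cost data or a schedule network.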
In Figure 5, the progress of three residential construction projects is compared by aligning their starting dates. If the three projects are measured with the same PV valuations, their relative schedule performance can easily be compared. Making earned value schedule metrics concordant with the CPM schedule The actual critical path is ultimately the determining factor of every project's duration. Because earned value schedule metrics take no account of critical path data, big-budget activities that are not on the critical path have the potential to dwarf the impact of performing small-budget critical path activities. This can lead to "gaming" the schedule variance (SV) and schedule performance index (SPI) metrics by ignoring critical path activities in favor of big-budget activities that may have much float. This can sometimes even lead to performing activities out of sequence just to improve the schedule-tracking metrics, which can cause major problems with quality. A simple two-step process has been suggested to fix this: first, create a second earned-value baseline strictly for schedule, with the weighted activities and milestones on the as-late-as-possible dates of the backward pass of the critical path algorithm, where there is no float; second, allow earned-value credit for schedule metrics to be taken no earlier than the reporting period during which the activity is scheduled, unless it is on the project's current critical path. In this way, the distorting aspect of float would be eliminated. There would be no benefit to performing a non-critical activity with much float until it is due in proper sequence. Also, an activity would not generate a negative schedule variance until it had used up its float. Under this method, one way of gaming the schedule metrics would be eliminated. The only way of generating a positive schedule variance (or an SPI over 1.0) would be by completing work on the current critical path ahead of schedule, which is in fact the only way for a project to get ahead of schedule. Advanced implementations (integrating cost, schedule and technical performance) In addition to managing technical and schedule performance, large and complex projects require that cost performance be monitored and reviewed at regular intervals. To measure cost performance, planned value (or BCWS, budgeted cost of work scheduled) and earned value (or BCWP, budgeted cost of work performed) must be in units of currency (the same units in which actual costs are measured). In large implementations, the planned value curve is commonly called a Performance Measurement Baseline (PMB) and may be arranged in control accounts, summary-level planning packages, planning packages and work packages. In large projects, establishing control accounts is the primary method of delegating responsibility and authority to various parts of the performing organization. Control accounts are cells of a responsibility assignment (RACI) matrix, which is the intersection of the project WBS and the organizational breakdown structure (OBS). Control accounts are assigned to Control Account Managers (CAMs). Large projects require more elaborate processes for controlling baseline revisions, more thorough integration with subcontractor EVM systems, and more elaborate management of procured materials. In the United States, the primary standard for full-featured EVM systems is the ANSI/EIA-748A standard, published in May 1998 and reaffirmed in August 2002.
The standard defines 32 criteria for full-featured EVM system compliance. As of 2007, a draft of ANSI/EIA-748B, a revision to the original, is available from ANSI. Other countries have established similar standards. In addition to using BCWS and BCWP, implementations often use the term actual cost of work performed (ACWP) instead of AC. Additional acronyms and formulas include: Budget at completion (BAC) The total planned value (PV or BCWS) at the end of the project. If a project has a management reserve (MR), it is typically not included in the BAC, nor in the performance measurement baseline. Cost variance (CV) CV = EV − AC. A CV greater than 0 is good (under budget). Cost performance index (CPI) CPI = EV / AC. A CPI greater than 1 is favorable (under budget): < 1 means that the cost of completing the work is higher than planned (bad); = 1 means that the cost of completing the work is right on plan (good); > 1 means that the cost of completing the work is less than planned (good or sometimes bad). Having a CPI that is very high (in some cases, very high is only 1.2) may mean that the plan was too conservative, and thus a very high number may in fact not be good, as the CPI is being measured against a poor baseline. Management or the customer may be upset with the planners, as an overly conservative baseline ties up available funds for other purposes, and the baseline is also used for manpower planning. Estimate at completion (EAC) EAC is the manager's projection of the total cost of the project at completion, commonly computed as EAC = BAC / CPI. This formula is based on the assumption that the performance of the project to date (or rather, the deviation of actual performance from the baseline) gives a good indication of performance in the future. In other words, the formula uses statistics of the project to date to predict future results. It therefore has to be used carefully when the future nature of the project is likely to differ from that to date (e.g. performance against the baseline during the design phase may not be a good indication of performance during the construction phase). Estimate to complete (ETC) ETC is the estimate to complete the remaining work of the project. ETC must be based on objective measures of the outstanding work remaining, typically based on the measures or estimates used to create the original planned value (PV) profile, including any adjustments to predict performance based on historical performance, actions being taken to improve performance, or acknowledgement of degraded performance. While ETC = EAC − AC is algebraically correct, ETC should never be computed using either EAC or AC. In the equation EAC = AC + ETC, ETC is the independent variable, EAC is the dependent variable, and AC is fixed based on expenditures to date. ETC should always be reported truthfully to reflect the project team's estimate to complete the outstanding work. If ETC pushes EAC to exceed BAC, then project management skills are employed to either recommend performance improvements or a scope change, but never to force ETC to give the "correct" answer so that EAC = BAC. Managing project activities to keep the project within budget is a human factors activity, not a mathematical
can be assigned to different colours and superimposed on a single colour micrograph displaying simultaneously the properties of the specimen. Some types of detectors used in SEM have analytical capabilities, and can provide several items of data at each pixel. Examples are the energy-dispersive X-ray spectroscopy (EDS) detectors used in elemental analysis and cathodoluminescence microscope (CL) systems that analyse the intensity and spectrum of electron-induced luminescence in (for example) geological specimens. In SEM systems using these detectors, it is common to colour code the signals and superimpose them in a single colour image, so that differences in the distribution of the various components of the specimen can be seen clearly and compared. Optionally, the standard secondary electron image can be merged with the one or more compositional channels, so that the specimen's structure and composition can be compared. Such images can be made while maintaining the full integrity of the original signal, which is not modified in any way. Sample preparation Materials to be viewed under an electron microscope may require processing to produce a suitable sample. The technique required varies depending on the specimen and the analysis required: Chemical fixation – for biological specimens aims to stabilize the specimen's mobile macromolecular structure by chemical crosslinking of proteins with aldehydes such as formaldehyde and glutaraldehyde, and lipids with osmium tetroxide. Negative stain – suspensions containing nanoparticles or fine biological material (such as viruses and bacteria) are briefly mixed with a dilute solution of an electron-opaque solution such as ammonium molybdate, uranyl acetate (or formate), or phosphotungstic acid. This mixture is applied to a suitably coated EM grid, blotted, then allowed to dry. Viewing of this preparation in the TEM should be carried out without delay for best results. The method is important in microbiology for fast but crude morphological identification, but can also be used as the basis for high-resolution 3D reconstruction using EM tomography methodology when carbon films are used for support. Negative staining is also used for observation of nanoparticles. Cryofixation – freezing a specimen so rapidly, in liquid ethane that the water forms vitreous (non-crystalline) ice. This preserves the specimen in a snapshot of its solution state. An entire field called cryo-electron microscopy has branched from this technique. With the development of cryo-electron microscopy of vitreous sections (CEMOVIS), it is now possible to observe samples from virtually any biological specimen close to its native state. Dehydration – or replacement of water with organic solvents such as ethanol or acetone, followed by critical point drying or infiltration with embedding resins. Also freeze drying. Embedding, biological specimens – after dehydration, tissue for observation in the transmission electron microscope is embedded so it can be sectioned ready for viewing. To do this the tissue is passed through a 'transition solvent' such as propylene oxide (epoxypropane) or acetone and then infiltrated with an epoxy resin such as Araldite, Epon, or Durcupan; tissues may also be embedded directly in water-miscible acrylic resin. After the resin has been polymerized (hardened) the sample is thin sectioned (ultrathin sections) and stained – it is then ready for viewing. 
Embedding, materials – after embedding in resin, the specimen is usually ground and polished to a mirror-like finish using ultra-fine abrasives. The polishing process must be performed carefully to minimize scratches and other polishing artifacts that reduce image quality. Metal shadowing – Metal (e.g. platinum) is evaporated from an overhead electrode and applied to the surface of a biological sample at an angle. The surface topography results in variations in the thickness of the metal that are seen as variations in brightness and contrast in the electron microscope image. Replication – A surface shadowed with metal (e.g. platinum, or a mixture of carbon and platinum) at an angle is coated with pure carbon evaporated from carbon electrodes at right angles to the surface. This is followed by removal of the specimen material (e.g. in an acid bath, using enzymes or by mechanical separation) to produce a surface replica that records the surface ultrastructure and can be examined using transmission electron microscopy. Sectioning – produces thin slices of the specimen, semitransparent to electrons. These can be cut on an ultramicrotome with a glass or diamond knife to produce ultra-thin sections about 60–90 nm thick. Disposable glass knives are also used because they can be made in the lab and are much cheaper. Staining – uses heavy metals such as lead, uranium or tungsten to scatter imaging electrons and thus give contrast between different structures, since many (especially biological) materials are nearly "transparent" to electrons (weak phase objects). In biology, specimens can be stained "en bloc" before embedding and also later after sectioning. Typically thin sections are stained for several minutes with an aqueous or alcoholic solution of uranyl acetate followed by aqueous lead citrate. Freeze-fracture or freeze-etch – a preparation method particularly useful for examining lipid membranes and their incorporated proteins in "face on" view. The fresh tissue or cell suspension is frozen rapidly (cryofixation), then fractured by breaking (or by using a microtome) while maintained at liquid nitrogen temperature. The cold fractured surface (sometimes "etched" by increasing the temperature to about −100 °C for several minutes to let some ice sublime) is then shadowed with evaporated platinum or gold at an average angle of 45° in a high vacuum evaporator. The second coat of carbon, evaporated perpendicular to the average surface plane is often performed to improve the stability of the replica coating. The specimen is returned to room temperature and pressure, then the extremely fragile "pre-shadowed" metal replica of the fracture surface is released from the underlying biological material by careful chemical digestion with acids, hypochlorite solution or SDS detergent. The still-floating replica is thoroughly washed free from residual chemicals, carefully fished up on fine grids, dried then viewed in the TEM. Freeze-fracture replica immunogold labeling (FRIL) – the freeze-fracture method has been modified to allow the identification of the components of the fracture face by immunogold labeling. Instead of removing all the underlying tissue of the thawed replica as the final step before viewing in the microscope the tissue thickness is minimized during or after the fracture process. The thin layer of tissue remains bound to the metal replica so it can be immunogold labeled with antibodies to the structures of choice. 
The thin layer of the original specimen on the replica with gold attached allows the identification of structures in the fracture plane. There are also related methods which label the surface of etched cells and other replica labeling variations. Ion beam milling – thins samples until they are transparent to electrons by firing ions (typically argon) at the surface from an angle and sputtering material from the surface. A subclass of this is focused ion beam milling, where gallium ions are used to produce an electron transparent membrane in a specific region of the sample, for example through a device within a microprocessor. Ion beam milling may also be used for cross-section polishing prior to SEM analysis of materials that are difficult to prepare using mechanical polishing. Conductive coating – an ultrathin coating of electrically conducting material, deposited either by high vacuum evaporation or by low vacuum sputter coating of the sample. This is done to prevent the accumulation of static electric fields at the specimen due to the electron irradiation required during imaging. The coating materials include gold, gold/palladium, platinum, tungsten, graphite, etc. Earthing – to avoid electrical charge accumulation on a conductively coated sample, it is usually electrically connected to the metal sample holder. Often an electrically conductive adhesive is used for this purpose. Disadvantages Electron microscopes are expensive to build and maintain, but the capital and running costs of confocal light microscope systems now overlaps with those of basic electron microscopes. Microscopes designed to achieve high resolutions must be housed in stable buildings (sometimes underground) with special services such as magnetic field canceling systems. The samples largely have to be viewed in vacuum, as the molecules that make up air would scatter the electrons. An exception is liquid-phase electron microscopy using either a closed liquid cell or an environmental chamber, for example, in the environmental scanning electron microscope, which allows hydrated samples to be viewed in a low-pressure (up to ) wet environment. Various techniques for in situ electron microscopy of gaseous samples have been developed as well. Scanning electron microscopes operating in conventional high-vacuum mode usually image conductive specimens; therefore non-conductive materials require conductive coating (gold/palladium alloy, carbon, osmium, etc.). The low-voltage mode of modern microscopes makes possible the observation of non-conductive specimens without coating. Non-conductive materials can be imaged also by a variable pressure (or environmental) scanning electron microscope. Small, stable specimens such as carbon nanotubes, diatom frustules and small mineral crystals (asbestos fibres, for example) require no special treatment before being examined in the electron microscope. Samples of hydrated materials, including almost all biological specimens have to be prepared in various ways to stabilize them, reduce their thickness (ultrathin sectioning) and increase their electron optical contrast (staining). These processes may result in artifacts, but these can usually be identified by comparing the results obtained by using radically different specimen preparation methods. Since the 1980s, analysis of cryofixed, vitrified specimens has also become increasingly used by scientists, further confirming the validity of this technique. 
Applications Semiconductor and data storage Circuit edit Defect analysis Failure analysis Biology and life sciences Cryobiology Cryo-electron microscopy Diagnostic electron microscopy Drug research (e.g. antibiotics) Electron tomography Particle analysis Particle detection Protein localization Structural biology Tissue imaging Toxicology Virology (e.g. viral load monitoring) Materials research Device testing and characterization Dynamic materials experiments Electron beam-induced deposition In-situ characterisation Materials qualification Medical research Nanometrology Nanoprototyping Industry Chemical/Petrochemical Direct beam-writing fabrication Food science Forensics Fractography Micro-characterization Mining (mineral liberation analysis) Pharmaceutical QC See also Acronyms in microscopy Electron diffraction Electron energy loss spectroscopy (EELS) Electron microscope images Energy filtered transmission electron microscopy (EFTEM) Environmental scanning electron microscope (ESEM) Field emission
Rodrigues pigeon, Nesoenas rodericana (Rodrigues, Mascarenes, before 1690?) Formerly in Streptopelia. A possible subspecies of the Madagascar turtle dove (N. picturata), this seems not to be the bird observed by Leguat. Introduced rats might have killed it off in the late 17th century. Spotted green pigeon, "Caloenas" maculata (South Pacific or Indian Ocean islands, 1820s) Also known as the Liverpool pigeon, the only known specimen has been in Liverpool's World Museum since 1851, and was probably collected on a Pacific island for Edward Stanley, 13th Earl of Derby. It has been suggested that this bird came from Tahiti based on native lore about a somewhat similar extinct bird called the titi, but this has not been verified. Sulu bleeding-heart, Gallicolumba menagei (Tawitawi, Philippines, late 1990s?) Officially listed as critically endangered. Only known from two specimens taken in 1891. There have been a number of unconfirmed reports from all over the Sulu Archipelago in 1995, however, these reports stated that the bird had suddenly undergone a massive decline, and by now, habitat destruction is almost complete. If not extinct, this species is very rare, but the ongoing civil war prevents comprehensive surveys. Norfolk ground dove, Gallicolumba norfolciensis (Norfolk Island, Southwest Pacific, c. 1800) Tanna ground dove, Gallicolumba ferruginea (Tanna, Vanuatu, late 18th-19th century) Only known from descriptions of two now-lost specimens. Thick-billed ground dove, Gallicolumba salamonis (Makira and Ramos, Solomon Islands, mid-20th century?) Last recorded in 1927, only two specimens exist. Declared extinct in 2005. Choiseul pigeon, Microgoura meeki (Choiseul, Solomon Islands, early 20th century) Red-moustached fruit dove, Ptilinopus mercierii (Nuku Hiva and Hiva Oa, Marquesas, mid-20th century) Two subspecies, the little-known P. m. mercierii of Nuku Hiva (extinct mid-late 19th century) and P. m. tristrami of Hiva Oa. Negros fruit dove, Ptilinopus arcanus (Negros, Philippines, late 20th century?)
Known only from one specimen taken at the only documented sighting in 1953, the validity of this species has been questioned, but no good alternative to distinct species status has been proposed. Officially critically endangered, it might occur on Panay, but no survey has located it. One possible record in 2002 does not seem to have been repeated. Mauritius blue pigeon, Alectroenas nitidissima (Mauritius, Mascarenes, c. 1830s) Farquhar blue pigeon, Alectroenas sp. (Farquhar Group, Seychelles, 19th century) Only known from early reports; possibly a subspecies of the Comoros or Seychelles blue pigeon. Rodrigues grey pigeon, "Alectroenas" rodericana (Rodrigues, Mascarenes, mid-18th century) A mysterious bird of unknown affinities, known from a few bones and, as it seems, two historical reports. Dodo, Raphus cucullatus (Mauritius, Mascarenes, late 17th century) Called Didus ineptus by Linnaeus. A metre-high flightless bird found on Mauritius. Its forest habitat was lost when Dutch settlers moved to the island and the dodo's nests were destroyed by the monkeys, pigs and cats that the Dutch brought with them. The last specimen was killed in 1681, only 80 years after the arrival of the new predators. Rodrigues solitaire, Pezophaps solitaria (Rodrigues, Mascarenes, c. 1730) Psittaciformes Parrots Sinú parakeet, Pyrrhura subandina (Colombia, mid-20th century?) This bird has a very restricted distribution and was last reliably recorded in 1949. It was not found during searches in 2004 and 2006 and seems to be extinct; efforts to find it again continue, but are hampered by the threat of armed conflict. New Caledonian lorikeet, Charmosyna diadema (New Caledonia, Melanesia, mid-20th century?) Officially critically endangered, there have been no reliable reports of this bird since the early 20th century. It is, however, small and inconspicuous. Norfolk kaka, Nestor productus (Norfolk and Philip Islands, SW Pacific, 1851?) Society parakeet, Cyanoramphus ulietanus (Raiatea, Society Islands, late 18th century) Black-fronted parakeet, Cyanoramphus zealandicus (Tahiti, Society Islands, c. 1850) Paradise parrot, Psephotus pulcherrimus (Rockhampton area, Australia, late 1920s) Oceanic eclectus parrot, Eclectus infectus, known from subfossil bones found on Tonga, Vanuatu, and possibly Fiji, may have survived until the 18th century: a bird which seems to be a male Eclectus parrot was drawn in a report on the Tongan island of Vavaʻu by the Malaspina expedition. Also a 19th-century Tongan name ʻāʻā ("parrot") for "a beautiful bird found only at ʻEua" is attested (see here under "kaka"). This seems to refer either E. infectus which in Tonga is only known from Vavaʻu and ʻEua, or the extirpated population of the collared lory which also occurred there. It is possible but unlikely that the species survived on ʻEua until the 19th century. Seychelles parakeet, Psittacula wardi (Seychelles, W Indian Ocean, 1883) Newton's parakeet, Psittacula exsul (Rodrigues, Mascarenes, c. 1875) Mascarene grey parakeet, Psittacula bensoni (Mauritius, possible Réunion as Psittacula cf bensoni). Formerly described as Mauritius grey parrot, Lophopsittacus bensoni. Known from a 1602 sketch by Captain Willem van Westzanen and by subfossil bones described by David Thomas Holyoak in 1973. Might have survived to the mid-18th century. Mascarene parrot, Mascarinus mascarin (Réunion and possibly Mauritius, Mascarenes, 1834?) Last known individual was a captive bird which was alive before 1834. 
Broad-billed parrot, Lophopsittacus mauritianus (Mauritius, Mascarenes, 1680?) May have survived to the late 18th century. Rodrigues parrot, Necropsittacus rodericanus (Rodrigues, Mascarenes, late 18th century) The species N. francicus is fictional, N. borbonicus most likely so. Glaucous macaw, Anodorhynchus glaucus (N Argentina, early 20th century) Officially critically endangered due to persistent rumors of wild birds, but probably extinct. Cuban macaw, Ara tricolor (Cuba,late 19th century) A number of related species have been described from the West Indies, but are not based on good evidence. Several prehistoric forms are now known to have existed in the region, however. Carolina parakeet, Conuropsis carolinensis (SE North America, c. 1930?) Although the date of the last captive bird's death in the Cincinnati Zoo, 1918, is generally given as its extinction date, there are convincing reports of some wild populations persisting until later. Two subspecies, C. c. carolinensis (Carolina parakeet, east and south of the Appalachian range–extinct 1918 or c. 1930) and C. c. ludovicianus (Louisiana parakeet, west of the Appalachian range–extinct c. 1912). Guadeloupe parakeet, Aratinga labati (Guadeloupe, West Indies, late 18th century) Only known from descriptions, the former existence of this bird is likely for biogeographic reasons and because details as described cannot be referred to known species. Martinique amazon, Amazona martinica (Martinique, West Indies, mid-18th century) Guadeloupe amazon, Amazona violacea (Guadeloupe, West Indies, mid-18th century) These extinct amazon parrots were originally described after travelers' descriptions. Their existence is still controversial. Cuculiformes Cuckoos Delalande's coua, Coua delalandei (Madagascar, late 19th century?) Saint Helena cuckoo, Nannococcyx psix (Saint Helena, Atlantic, 18th century) Falconiformes Birds of prey Guadalupe caracara, Caracara lutosa (Guadalupe, E Pacific, 1900 or 1903) Réunion kestrel, Falco duboisi (Réunion, Mascarenes, c. 1700) Strigiformes Typical owls and barn owls. Pernambuco pygmy owl, Glaucidium mooreorum (Pernambuco, Brazil, 2001?) Might still exist, classified as critically endangered. A 2018 BirdLife study citing extinction patterns recommended reclassifying this species as possibly extinct. Réunion owl, Mascarenotus grucheti (Réunion, Mascarenes, late 17th century?) Mauritius owl, Mascarenotus sauzieri (Mauritius, Mascarenes, c. 1850) Rodrigues owl, Mascarenotus murivorus (Rodrigues, Mascarenes, mid-18th century) The preceding three species were variously placed in Bubo, Athene, "Scops" (=Otus), Strix and Tyto before their true affinity was realized. New Caledonian boobook, Ninox cf. novaeseelandiae (New Caledonia, Melanesia) Known only from prehistoric bones, but might still survive. Laughing owl, Sceloglaux albifacies (New Zealand, 1914?) Two subspecies, S. a. albifacies (South Island and Stewart Island, extinct 1914?) and S. a. rufifacies (North Island, extinct c. 1870s?); circumstantial evidence suggests that small remnants survived until the early/mid-20th century. The Puerto Rican barn owl, Tyto cavatica, known from prehistoric remains found in caves of Puerto Rico, West Indies; may still have existed in 1912, given reports of the presence of cave-roosting owls. The Andros Island barn owl, Tyto pollens, known from prehistoric remains found on Andros (Bahamas); may have survived to the 16th century, as indicated by the "chickcharney" legend. Siau scops owl, Otus siaoensis (20th century?) 
Only known from the holotype collected in 1866. Endemic to the small volcanic island of Siau north of Sulawesi in Indonesia; might still survive, as there are ongoing rumors of scops owls at Siau. Caprimulgiformes Caprimulgidae – nightjars and nighthawks Reclusive ground-nesting birds that sally out at night to hunt for large insects and similar prey. They are easily located by the males' song, but this is not given all year. Habitat destruction currently represents the biggest threat, while island populations are threatened by introduced mammalian predators, notably dogs, cats, pigs and mongooses. Jamaican poorwill, Siphonorhis americana (Jamaica, West Indies, late 19th century?) Reports of unidentifiable nightjars from the 1980s in habitat appropriate for S. americana suggest that this cryptic species may still exist. Research into this possibility is currently underway; pending further information, it is classified as critically endangered, possibly extinct. Cuban pauraque, Siphonorhis daiquiri (Cuba, West Indies, prehistoric?) Described from subfossil bones in 1985. There are persistent rumors that this bird, which was never seen alive by scientists, may still survive. Compare Puerto Rican nightjar and preceding. Vaurie's nightjar (Caprimulgus centralasicus) is only known from a single 1929 specimen from Xinjiang, China. It has never been found again, but the validity of this supposed species is seriously disputed; the suggestion that it is merely an immature female of the desert form of the European nightjar has never been refuted. Apodiformes Swifts and hummingbirds Coppery thorntail, Discosura letitiae (Bolivia?) Known only from three trade specimens of unknown origin. Might still exist. Brace's emerald, Chlorostilbon bracei (New Providence, Bahamas, late 19th century) Gould's emerald, Chlorostilbon elegans (Jamaica or northern Bahamas, West Indies, late 19th century) Turquoise-throated puffleg, Eriocnemis godini (Ecuador, 20th century?) Officially classified as critically endangered, possibly extinct. Known only from six pre-1900 specimens, the habitat at the only known site where it occurred has been destroyed. However, the bird's distribution remains unresolved. Coraciiformes Kingfishers and related birds Saint Helena hoopoe, Upupa antaios (Saint Helena, Atlantic, early 16th century) Piciformes Woodpeckers and related birds Bermuda flicker (Colaptes oceanicus) (Bermuda, 17th century?) Known only from fossils found in Bermuda and dated to the Late Pleistocene and the Holocene; however, a 17th century report written by explorer Captain John Smith may refer to this species. Imperial woodpecker, Campephilus imperialis (Mexico, late 20th century) This 60-centimetre-long woodpecker is officially listed as critically endangered, possibly extinct. Occasional unconfirmed reports come up; the most recent was in late 2005. Ivory-billed woodpecker, Campephilus principalis (Southeastern U.S. and Cuba, late 20th century) The American ivory-billed woodpecker (Campephilus principalis principalis) is critically endangered, and considered possibly extinct by some authorities. The Cuban ivory-billed woodpecker (Campephilus principalis bairdii) is generally considered extinct, but a few patches of unsurveyed potential habitat remain. Passeriformes Perching birds Furnariidae – Ovenbirds Cryptic treehunter, Cichlocolaptes mazarbarnetti (E Brazil, 2007) Alagoas foliage-gleaner, Philydor novaesi (E Brazil, 2011) Acanthisittidae – New Zealand "wrens" Lyall's wren, Traversia lyalli (New Zealand, 1895?)
The species famously (but erroneously) claimed to have been made extinct by a single cat named "Tibbles". Bushwren, Xenicus longipes (New Zealand, 1972) Three subspecies: X. l. stokesi (North Island, extinct 1955); X. l. longipes (South Island, extinct 1968); X. l. variabilis (Stewart Island, extinct 1972). Mohoidae – Hawaiian "honeyeaters". Family established in 2008, previously in Meliphagidae. Kioea, Chaetoptila angustipluma (Big Island, Hawaiian Islands, 1860s) Hawaiʻi ʻōʻō, Moho nobilis (Big Island, Hawaiian Islands, 1930s) Oʻahu ʻōʻō, Moho apicalis (Oʻahu, Hawaiian Islands, mid-19th century) Molokaʻi ʻōʻō, Moho bishopi (Molokaʻi and probably Maui, Hawaiian Islands, c. 1910 or 1980s) Kauaʻi ʻōʻō, Moho braccatus (Kauaʻi, Hawaiian Islands, 1987) Meliphagidae – honeyeaters and Australian chats Chatham bellbird, Anthornis melanocephala (Chatham Islands, Southwest Pacific, c. 1910) Sometimes regarded as a subspecies of the New Zealand bellbird, Anthornis melanura. Unconfirmed records exist from the early-mid-1950s. The identity of "Strigiceps leucopogon" (an invalid name), described by Lesson in 1840, is unclear. Apart from the holotype supposedly from "New Holland", a second specimen from the "Himalaya" may have existed (or still exist). Lesson tentatively allied it to the Meliphagidae, and Rothschild felt reminded of the kioea. Acanthizidae – scrubwrens, thornbills, and gerygones Lord Howe gerygone, Gerygone insularis (Lord Howe Island, Southwest Pacific, c. 1930) Pachycephalidae – whistlers, shrike-thrushes, pitohuis and allies Mangarevan whistler, ?Pachycephala gambierana (Mangareva, Gambier Islands, late 19th century?) Tentatively placed here. A mysterious bird of which no specimens exist today. It was initially described as a shrike, then classified as an Eopsalteria "robin" and may actually be an Acrocephalus warbler. Dicruridae – monarch flycatchers and allies Maupiti monarch, Pomarea pomarea (Maupiti, Society Islands, mid-19th century) Eiao monarch, Pomarea fluxa (Eiao, Marquesas, late 1970s) Previously considered a subspecies of the Iphis monarch, this is an early offspring of the Marquesan stock. Nuku Hiva monarch, Pomarea nukuhivae (Nuku Hiva, Marquesas, mid-late 20th century) Previously considered a subspecies of the Marquesas monarch, this is another early offspring of the Marquesan stock. Ua Pou monarch, Pomarea mira (Ua Pou, Marquesas, c. 1986) Previously considered another subspecies of the Marquesas monarch, this was a distinct species most closely related to that bird and the Fatuhiva monarch. Guam flycatcher, Myiagra freycineti (Guam, Marianas, 1983) Oriolidae – Old World orioles and allies North Island piopio, Turnagra tanagra (North Island, New Zealand, c. 1970?) Not reliably recorded since about 1900. South Island piopio, Turnagra capensis (South Island, New Zealand, 1960s?) Two subspecies, T. c. minor from Stephens Island (extinct c. 1897) and the nominate T. c. capensis from the South Island mainland (last specimen taken in 1902, last unconfirmed record in 1963) Callaeidae – New Zealand wattlebirds Huia, Heteralocha acutirostris (North Island, New Zealand, early 20th century) Hirundinidae – swallows and martins White-eyed river martin, Pseudochelidon sirintarae (Thailand, late 1980s?) Officially classified as critically endangered, this enigmatic species is only known from migrating birds and it was last seen in 1986 at its former roost site. Recent unconfirmed reports suggest that it may occur in Cambodia. 
Red Sea cliff swallow, Petrochelidon perdita (Red Sea area, late 20th century?) Known from a single specimen, this enigmatic swallow probably still exists, but the lack of recent records is puzzling. It is alternatively placed in the genus Hirundo. Acrocephalidae – marsh and tree warblers Nightingale reed warbler, Acrocephalus luscinius (Guam, c. 1970's) Aguiguan reed warbler, Acrocephalus nijoi (Aguiguan, Marianas, c. 1997) Mangareva reed warbler, Acrocephalus astrolabii (Marianas?, mid-19th century?) Known from just two specimens found from Mangareva Island in the western Pacific. Pagan reed warbler, Acrocephalus yamashinae (Pagan, Marianas, 1970s) Garrett's reed warbler, Acrocephalus musae (Society Islands, 19th century?) Moorea reed warbler, Acrocephalus longirostris (Moorea, Society Islands, 1980s?) Last reliable sighting was in 1981. Survey in 1986/1987 remained unsuccessful. A photograph of a warbler from Moorea in 1998 or 1999 taken by Philippe Bacchet remains uncertain, as do reports from 2003 and 2010. Muscicapidae – Old World flycatchers and chats Rück's blue flycatcher, Cyornis ruckii (Malaysia or Indochina, 20th century?) An enigmatic bird known from two or four possibly migrant specimens, last recorded in 1918. Might exist in northeast Indochina and might be a subspecies of the Hainan blue flycatcher. Megaluridae – megalurid warblers or grass warblers Chatham fernbird, Bowdleria rufescens (Chatham Islands, New Zealand, c. 1900) Often placed in genus Megalurus, but this is based on an incomplete review of the evidence. Cisticolidae – cisticolas and allies Tana River cisticola, Cisticola restrictus (Kenya, 1970s?) A mysterious bird, found in the Tana River basin in small numbers at various dates, but not since 1972. Probably invalid, based on aberrant or hybrid specimens. An unconfirmed sighting was apparently made in 2007 in the Tana River Delta. Zosteropidae – white-eyes - probably belonging to Timaliidae Marianne white-eye, Zosterops semiflavus (Marianne Island, Seychelles, late 19th century) Lord Howe white-eye, Zosterops strenuus (Lord Howe Island, Southwest Pacific, c. 1918) White-chested white-eye, Zosterops albogularis (Norfolk Island, between 2006 and 2010) Pycnonotidae – bulbuls Rodrigues bulbul, Hypsipetes cowlesi (Rodrigues, Mascarenes, extinction date unknown, 17th century or 18th century might be possible) Known only from subfossil bones. Sylvioidea incertae sedis Aldabra brush warbler, Nesillas aldabrana (Aldabra, Indian Ocean, c. 1984) Rodrigues "babbler" (Rodrigues, Mascarenes, 17th century?) Known from subfossil bones. Provisionally assigned to Timaliidae, but placement highly doubtful. Sturnidae – starlings Kosrae starling, Aplonis corvina (Kosrae, Carolines, mid-19th century) Mysterious starling, Aplonis mavornata (Mauke, Cook Islands, mid-19th century) Tasman starling, Aplonis fusca (Norfolk Island and Lord Howe Island, Southwest Pacific, c. 1923) Two subspecies, A. f. fusca– Norfolk starling (extinct c. 1923); A. fusca hulliana– Lord Howe starling (extinct c. 1919). Pohnpei starling, Aplonis pelzelni (Pohnpei, Micronesia, c. 2000) Only one reliable record since 1956, in 1995, leaves the species' survival seriously in doubt. Bay starling, Aplonis? ulietensis (Raiatea, Society Islands, between 1774 and 1850) Usually called "bay thrush" (Turdus ulietensis); a mysterious bird from Raiatea, now only known from a painting and some descriptions of a (now lost) specimen. 
Its taxonomic position is thus unresolvable at present, although for biogeographic reasons and because of the surviving description, it has been suggested to have been a honeyeater. However, with the discovery of fossils of the prehistorically extinct starling Aplonis diluvialis on neighboring Huahine, it seems likely that this bird also belonged to this genus. Hoopoe starling, Fregilupus varius (Réunion, Mascarenes, 1850s) Tentatively assigned to Sturnidae. Rodrigues starling, Necropsar rodericanus (Rodrigues, Mascarenes, mid-18th century?) Tentatively assigned to Sturnidae. The bird variously described as Necropsar leguati or Orphanopsar leguati and considered to be identical with N. rodericanus (which is only known from subfossil bones) was found to be based on a misidentified albinistic specimen of the Martinique trembler (Cinclocerthia gutturalis) Turdidae – thrushes and relatives Grand Cayman thrush, Turdus ravidus (Grand Cayman, West Indies, late 1940s) Bonin thrush, Zoothera terrestris (Chichi-jima, Ogasawara Islands, c. 1830s) Kāmaʻo, Myadestes myadestinus (Kauaʻi, Hawaiian Islands, 1990s) Olomaʻo, Myadestes lanaiensis (Hawaiian Islands, 1980s?) Officially classified as critically endangered because a possible location on Molokaʻi remains unsurveyed. Three subspecies are known from Oahu (M. l. woahensis, extinct 1850s), Lanaʻi (M. l. lanaiensis, extinct early 1930s), Molokaʻi (M. l. rutha, extinct 1980s?) and a possible fourth subspecies from Maui (extinct before late 19th century). Mimidae – mockingbirds and thrashers Cozumel thrasher, Toxostoma guttatum (Cozumel, Caribbean, early first decade of the 21st century?) It is still unknown whether the tiny population rediscovered in 2004 survived Hurricanes Emily and Wilma in 2005. Unconfirmed records in April 2006 and October and December 2007. Estrildidae– estrildid finches (waxbills, munias, etc. Black-lored waxbill, Estrilda nigriloris (D.R. Congo, Africa, late 20th century?) An enigmatic waxbill not seen since 1950; because part of its habitat is in Upemba National Park, it may survive. Icteridae – grackles Slender-billed grackle, Quiscalus palustris (Mexico, 1910) Parulidae – New World warblers Bachman's warbler, Vermivora bachmanii (southern US, c. 1990?) Officially classified as critically endangered. Semper's warbler, Leucopeza semperi (Saint Lucia, Caribbean, 1970s?) Officially classified as critically endangered. Suitable habitat remains and there have been unconfirmed records within the last decade. Ploceidae – weavers Réunion fody, Foudia delloni Formerly Foudia bruante, which might refer to a color morph of the red fody. Fringillidae – true finches and Hawaiian honeycreepers Bonin grosbeak, Chaunoproctus ferreorostris (Chichi-jima, Ogasawara Islands, 1830s) ʻŌʻū, Psittirostra psittacea (Hawaiian Islands, c. 2000?) Officially classified as critically endangered, this was once the most widespread species of Hawaiian honeycreeper. It has not been reliably recorded since 1987 or 1989. Lanaʻi hookbill, Dysmorodrepanis munroi (Lanaʻi, Hawaiian Islands, 1918) Pila's palila, Loxioides kikuichi (Kauaʻi, Hawaiian Islands), possibly survived to the early 18th century. 
Lesser koa finch, Rhodacanthus flaviceps (Big Island, Hawaiian Islands, 1891) Greater koa finch, Rhodacanthus palmeri (Big Island, Hawaiian Islands, 1896) Kona grosbeak, Psittirostra kona (Big Island, Hawaiian Islands, 1894) Greater ʻamakihi, Hemignathus sagittirostris (Big Island, Hawaiian Islands, 1901) Maui nukupuʻu, Hemignathus affinis (Maui, Hawaiian Islands, 1990s) Kauaʻi nukupuʻu, Hemignathus hanapepe (Kauaʻi, Hawaiian Islands, late 1990s) Oʻahu nukupuʻu, Hemignathus lucidus (Oʻahu, Hawaiian Islands, late 19th century) Hawaiʻi ʻakialoa or lesser ʻakialoa, Akialoa obscurus (Big Island, Hawaiian Islands, 1940) Maui Nui ʻakialoa, Akialoa lanaiensis (Lanaʻi and, prehistorically, probably Maui and Molokaʻi, Hawaiian Islands, 1892) Oʻahu ʻakialoa, Akialoa ellisiana (Oʻahu, Hawaiian Islands, early 20th century) Kauaʻi ʻakialoa, Akialoa stejnegeri (Kauaʻi, Hawaiian Islands, 1969) Kakawahie, Paroreomyza flammea (Molokaʻi, Hawaiian Islands, 1963) Oʻahu ʻalauahio, Paroreomyza maculata (Oʻahu, Hawaiian Islands, early 1990s?) Officially classified as critically endangered. Last reliable record was in 1985, with an unconfirmed sighting in 1990. Maui akepa, Loxops ochraceus (Maui, Hawaiian Islands, 1988) Oʻahu akepa, Loxops wolstenholmei (Oʻahu, Hawaiian Islands, 1900s) ʻUla-ʻai-hawane, Ciridops anna (Big Island, Hawaiian Islands, 1892 or 1937) Black mamo, Drepanis funerea (Molokaʻi, Hawaiian Islands, 1907) Hawaiʻi mamo, Drepanis pacifica (Big Island, Hawaiian Islands, 1898) Laysan honeycreeper, Himatione fraithii (Laysan, Hawaiian Islands, 1923) Poʻo-uli, Melamprosops phaeosoma (Maui, Hawaiian Islands, 2004) Emberizidae – buntings and American sparrows Hooded seedeater, Sporophila melanops (Brazil, 20th century?) Officially classified as critically endangered. It is known only from a single male collected in 1823 and has variously been considered an aberrant yellow-bellied seedeater or a hybrid. Bermuda towhee, Pipilo naufragus. Known by subfossil remains and possibly from a travel report by William Strachey in 1610. Possibly extinct bird subspecies or status unknown Extinction of subspecies is a subject very dependent on guesswork. National and international conservation projects and research publications such as redlists usually focus on species as a whole. Reliable information on the status of threatened subspecies usually has to be assembled piecemeal from published observations, such as regional checklists. Therefore, the following listing contains a high proportion of taxa that may still exist, but are listed here due to any combination of absence of recent records, a known threat such as habitat destruction, or an observed decline. Struthioniformes Ratites and related birds Arabian ostrich, Struthio camelus syriacus (Arabia, 1966) The last record of this ostrich subspecies was a bird found dead in Jordan in 1966. Apterygiformes North Island little spotted kiwi, Apteryx owenii iredalei (North Island, New Zealand, late 19th century) A doubtfully distinct little spotted kiwi subspecies. Casuariiformes King Island emu, Dromaius novaehollandiae minor (King Island, Australia, 1822) A dwarf subspecies of the emu; extinct in the wild c. 1805, the last captive specimen died in 1822 in the Jardin des Plantes. Kangaroo Island emu, Dromaius novaehollandiae baudinianus (Kangaroo Island, Australia, 1827) A dwarf subspecies of the emu; extinct since c. 1827. 
Tasmanian emu, Dromaius novaehollandiae diemenensis (Tasmania, Australia, mid-19th century) A dwarf subspecies of the emu; the last wild bird was collected in 1845. It may have persisted in captivity until 1884. It may be invalid. Tinamiformes Tinamous Magdalena tinamou, Crypturellus (erythropus) saltuarius (Colombia, late 20th century?) Variously considered a red-legged tinamou subspecies or a distinct species, this bird is currently only known with certainty from the 1943 type specimen. An additional specimen exists (or existed), but its present whereabouts are unknown. Recent research suggests that it is still extant, and there was a likely – although as yet unconfirmed – record near the type locality by Colombian ornithologist Oswaldo Cortés in late 2008. Anseriformes Ducks, geese and swans Bering cackling goose, Branta hutchinsii asiatica (Komandorski and Kuril Islands, N Pacific, c. 1914 or 1929) A subspecies of the cackling goose (formerly called the Bering Canada goose (Branta canadensis asiatica)) which is doubtfully distinct from the Aleutian subspecies. Rennell Island teal, Anas gibberifrons remissa (Rennell, Solomon Islands, c. 1959) A doubtfully distinct subspecies of the Sunda teal, which disappeared due to predation on young birds by introduced tilapia (Oreochromis mossambicus). Pink-headed duck, Rhodonessa caryophyllacea (East India, Bangladesh, North Myanmar, 1945?) – a reclassification into the genus Netta is recommended, but not generally accepted. Officially critically endangered; recent surveys have failed to rediscover it, though sightings continue to be reported. Niceforo's pintail, Anas georgica niceforoi (Colombia, 1950s) A yellow-billed pintail subspecies that has not been recorded since the 1950s. Borrero's cinnamon teal, Anas cyanoptera borreroi (Colombia, mid-20th century?) A subspecies of the cinnamon teal known only from a restricted area in the Cordillera Occidental of Colombia, with a couple of records from Ecuador. It was discovered in 1946 and thought to be extinct by
until more material is found, however. Labrador duck, Camptorhynchus labradorius (Northeastern North America, ca. 1878) New Zealand merganser, Mergus australis (New Zealand, Auckland Islands, Southwest Pacific, c.1902) Galliformes Quails and relatives See also Bokaak "bustard" under Gruiformes below The pile-builder megapode, Megapodius molistructor may have survived on New Caledonia to the late 18th century as evidenced by descriptions of the bird named "Tetrao australis" and later "Megapodius andersoni". The Viti Levu scrubfowl, Megapodius amissus of Viti Levu and possibly Kadavu, Fiji, may have survived to the early 19th or even the 20th century as suggested by circumstantial evidence. Raoul Island scrubfowl, Megapodius sp. (Raoul, Kermadec Islands, 1876) A megapode is said to have inhabited Raoul Island until the population was wiped out in a volcanic eruption. It is not clear whether the birds represent a distinct taxon or derive from a prehistoric introduction by Polynesian seafarers. New Zealand quail, Coturnix novaezelandiae (New Zealand, 1875) Himalayan quail, Ophrysia superciliosa (North India, late 19th century?) Officially critically endangered. Not recorded with certainty since 1876, but thorough surveys are still required, and there was a recent set of possible (though unlikely) sightings around Naini Tal in 2003. A little-known native name from Western Nepal probably refers to this bird, but for various reasons, no survey for Ophrysia has ever been conducted in that country, nor is it generally assumed to occur there (due to the native name being overlooked). Charadriiformes Shorebirds, gulls and auks Javan lapwing, Vanellus macropterus (Java, Indonesia, mid-20th century) Officially classified as critically endangered, but as this conspicuous bird has not been recorded since 1940, it is almost certainly extinct. Christmas sandpiper, Prosobonia cancellata (Kiritimati Island, Kiribati, 1850s) Tahiti sandpiper, Prosobonia leucoptera (Tahiti, Society Islands, 19th century) Moorea sandpiper, Prosobonia ellisi (Moorea, Society Islands, 19th century) Doubtfully distinct from P. leucoptera. North Island snipe, Coenocorypha barrierensis (North Island, New Zealand, 1870s) South Island snipe, Coenocorypha iredalei (South and Stewart Islands, New Zealand, 1964) Eskimo curlew, Numenius borealis (Northern North America, late 20th century?) May still exist; officially classified as critically endangered, possibly extinct. Slender-billed curlew, Numenius tenuirostris (Western Siberia, early first decade of the 21st century?) May still exist; officially classified as critically endangered. A few birds were recorded in 2004, following several decades of increasing rarity. There was an unconfirmed sighting in Albania in 2007. A survey to find out whether this bird still exists is currently being undertaken by the RSPB (BirdLife in the UK). Great auk, Pinguinus impennis (Newfoundland, 1852) Canary Islands oystercatcher, Haematopus meadewaldoi (Eastern Canary Islands, E Atlantic, c. 1940?) Later sightings of black oystercatchers off Senegal were not likely to be of this sedentary species, but two records from Tenerife - the last in 1981 - may be. Gruiformes Rails and allies - probably paraphyletic Antillean cave rail, Nesotrochis debooyi, known by pre-Columbian bones from Puerto Rico and the Virgin Islands. Stories of an easy-to-catch bird named carrao heard by Alexander Wetmore in 1912 on Puerto Rico might refer to this species. 
Hawkins' rail, Diaphorapteryx hawkinsi (Chatham Islands, SW Pacific, 19th century) Red rail, Aphanapteryx bonasia (Mauritius, Mascarenes, c. 1700) Rodrigues rail, Erythromachus leguati (Rodrigues, Mascarenes, mid-18th century) Bar-winged rail, Nesoclopeus poecilopterus (Fiji, Polynesia, c. 1980) Dieffenbach's rail, Gallirallus dieffenbachii (Chatham Islands, SW Pacific, mid-19th century) Tahiti rail, Hypotaenidia pacificus (Tahiti, Society Islands, late 18th – 19th century) Wake Island rail, Hypotaenidia wakensis (Wake Island, Micronesia, 1945) Tongatapu rail, Gallirallus hypoleucus (Tongatapu, Tonga, late 18th - 19th century) New Caledonian rail, Gallirallus lafresnanayanus (New Caledonia, Melanesia, c. 1900?) Officially classified as critically endangered, the last records were in 1984 and it seems that all available habitat is overrun by feral pigs and dogs, which prey on this bird. Vava'u rail, Gallirallus cf. vekamatolu (Vava'u, Tonga, early 19th century?) This bird is known only from a drawing by the 1793 Malaspina expedition, apparently depicting a species of Gallirallus. The 'Eua rail, Gallirallus vekamatolu, is known from prehistoric bones found on 'Eua, but this species is almost certainly not G. vekamatolu, as that bird was flightless and hence is unlikely to have settled three distant islands. However, it probably was a close relative. Norfolk Island rail, Gallirallus sp., may be the bird shown on a bad watercolor illustration made around 1800. Chatham rail, Cabalus modestus (Chatham Islands, SW Pacific, c. 1900) Réunion rail or Dubois' wood-rail, Dryolimnas augusti (Réunion, Mascarenes, late 17th century) Ascension crake, Mundia elpenor (Ascension, Island, Atlantic, late 17th century)– formerly Atlantisia Saint Helena crake, Porzana astrictocarpus (Saint Helena, Atlantic, early 16th century) Laysan rail, Porzana palmeri (Laysan Island, Hawaiian Islands, 1944) Hawaiian rail, Porzana sandwichensis (Big Island, Hawaiian Islands, c. 1890) Kosrae crake, Porzana monasa (Kosrae, Carolines, c. mid-late 19th century) Tahiti crake, Porzana nigra (Tahiti, Society Islands, c. 1800) Known only from paintings and descriptions; taxonomic status uncertain, as the material is often believed to refer to the extant spotless crake. Saint Helena swamphen, Aphanocrex podarces (Saint Helena, Atlantic, 16th century)– formerly Atlantisia Lord Howe swamphen, Porphyrio albus (Lord Howe Island, SW Pacific, early 19th century) Réunion swamphen or Oiseau bleu, Porphyrio coerulescens (Réunion, Mascarenes, 18th century) Known only from descriptions. Former existence of a Porphyrio on Réunion is fairly certain, but not proven to date. Marquesas swamphen, Porphyrio paepae (Hiva Oa and Tahuata, Marquesas) May have survived to c. 1900. In the lower right corner of Paul Gauguin's 1902 painting Le Sorcier d'Hiva Oa ou le Marquisien à la cape rouge there is a bird which resembles native descriptions of P. paepae. North Island takahē, Porphyrio mantelli, known from subfossil bones found in New Zealand's North Island; may have survived to 1894 or later. New Caledonian gallinule, Porphyrio kukwiedei from New Caledonia, Melanesia, may have survived into historic times. The native name n'dino is thought to refer to this bird. Samoan woodhen, Gallinula pacifica (Savai'i, Samoa, 1907?) Probably better placed in the genus Pareudiastes, unconfirmed reports from the late 20th century suggest it still survives in small numbers and therefore it is officially classified as critically endangered. 
Makira woodhen, Gallinula silvestris (Makira, Solomon Islands, mid-20th century?) Only known from a single specimen, this rail is probably better placed in its own genus, Edithornis. There are some unconfirmed recent records that suggest it still survives, and thus it is officially classified as critically endangered. Tristan moorhen, Gallinula nesiotis (Tristan da Cunha, Atlantic, late 19th century) Mascarene coot, Fulica newtonii (Mauritius and Réunion, Mascarenes, c. 1700) Fernando de Noronha rail, Rallidae gen. et sp. indet. (Fernando de Noronha, W. Atlantic, 16th century?) A distinct species of rail inhabited Fernando de Noronha island, but it has not been formally described yet. Probably was extant at first Western contact. Tahitian "goose", Rallidae gen. et sp. indet. (Tahiti, late 18th century?) Early travelers to Tahiti reported a "goose" that was found in the mountains. Altogether, a species of rail in the genus Porphyrio seems to be the most likely choice. Bokaak "bustard", Rallidae? gen. et sp. indet. 'Bokaak' An unidentified terrestrial bird is mentioned in an early report from Bokaak in the Marshall Islands. It is described as a "bustard" and may have been a rail or a megapode. In the former case it may have been a vagrant of an extant species; in any case, no bird that could be described as "bustard-like" is found on Bokaak today. Rallidae gen. et sp. indet. 'Amsterdam Island' Unknown rail from Amsterdam Island; one specimen found, but not recovered. Extinct by 1800, or it may have been a vagrant of an extant species. Podicipediformes Grebes Colombian grebe, Podiceps andinus (Bogotá area, Colombia, 1977) Alaotra grebe, Tachybaptus rufolavatus (Lake Alaotra, Madagascar, 1985) Officially declared extinct in 2010, 25 years after the last official sighting. Declined through habitat destruction and hybridization with the little grebe. Disappeared from only known location in the 1980s. Atitlán grebe, Podilymbus gigas (Lake Atitlán, Guatemala, 1989) Cathartiformes "Painted vulture", Sarcoramphus sacra (Florida, United States, late 18th century?) A bird supposedly similar to the king vulture identified by William Bartram on his travels in the 1770s. Skeptics have stated that it is likely based on a misidentification of the northern caracara, although evidence has increasingly shifted towards it being a valid taxon that existed, either as a species in its own right or a subspecies of the king vulture, based on an independent illustration of a near-identical bird made several decades earlier by Eleazar Albin. See King vulture article for discussion. Pelecaniformes Pelicans and related birds Bermuda night heron, Nyctanassa carcinocatactes (Bermuda, West Atlantic, 17th century) Sometimes assigned to the genus Nycticorax. Réunion night heron, Nycticorax duboisi (Réunion, Mascarenes, late 17th century) Mauritius night heron, Nycticorax mauritianus (Mauritius, Mascarenes, c. 1700) Rodrigues night heron, Nycticorax megacephalus (Rodrigues, Mascarenes, mid-18th century) Ascension night heron, Nycticorax olsoni (Ascension Island, Atlantic, late 16th century?) Known only from subfossil bones, but the description of a flightless Ascension bird by André Thévet cannot be identified with anything other than this species. 
New Zealand little bittern, Ixobrychus novaezelandiae (New Zealand, late 19th century) Long considered to be vagrant individuals of the Australian little bittern, bones recovered from Holocene deposits indicate that this was indeed a distinct taxon, but it might not be a separate species. Réunion ibis, Threskiornis solitarius (Réunion, Mascarenes, early 18th century) This species was the basis of the "Réunion solitaire", a supposed relative of the dodo and the Rodrigues solitaire. Given the fact that ibis (but no dodo-like) bones were found on Réunion and that old descriptions match a flightless sacred ibis quite well, the "Réunion solitaire" hypothesis has been refuted. Suliformes Boobies and related birds Spectacled cormorant, Phalacrocorax perspicillatus (Komandorski Islands, North Pacific, c. 1850) Mascarene booby, Papasula sp. (Mauritius and Rodrigues, Mascarenes, mid-19th century) An undescribed booby species that was formerly considered a population of Abbott's booby. Known physically only from subfossil bones, but is likely the bird referred to as a "boeuf" by early settlers. The "boeuf" was last recorded on Rodrigues in 1832 and likely went extinct following the deforestation of the island. Procellariiformes Petrels, shearwaters, albatrosses and storm petrels. Olson's petrel, Bulweria bifax (Saint Helena, Atlantic, early 16th century) Bermuda shearwater, Puffinus parvus (Bermuda, West Atlantic, 16th century) Saint Helena petrel, Pseudobulweria rupinarum (Saint Helena, Atlantic, early 16th century) Jamaican petrel, Pterodroma caribbaea (Jamaica, Caribbean, late 19th century?) Possibly a subspecies of the black-capped petrel; unconfirmed reports suggest it might survive. Officially classified as critically endangered, possibly extinct. Pterodroma cf. leucoptera (Mangareva, Gambier Islands, 20th century?) A wing of a carcass similar to Gould's petrel was recovered on Mangareva in 1922, where it possibly bred. No such birds are known to exist there today. Guadalupe storm petrel, Oceanodroma macrodactyla (Guadalupe, East Pacific, 1910s) Officially critically endangered, possibly extinct, but a thorough survey in 2000 concluded the species was certainly extinct. Imber's petrel, Pterodroma imberi Described from subfossil remains from the Chatham Islands, became apparently extinct in the early 19th century. Sphenisciformes Penguins The Chatham penguin, Eudyptes sp. (Chatham Islands, SW Pacific), is only known from subfossil bones, but a bird kept captive at some time between 1867 and 1872 might refer to this taxon. Columbiformes Pigeons, doves and dodos For the "Réunion solitaire", see Réunion ibis. Saint Helena dove, Dysmoropelia dekarchiskos, possibly survived into the Modern Era. Passenger pigeon, Ectopistes migratorius (Eastern North America, 1914) The passenger pigeon was once among the most common birds in the world, a single flock numbering up to 2.2 billion birds. It was hunted close to extinction for food and sport in the late 19th century. The last individual, Martha, died in the Cincinnati Zoo in 1914. Bonin wood pigeon, Columba versicolor (Nakodo-jima and Chichi-jima, Ogasawara Islands, c. 1890) Ryukyu wood pigeon, Columba jouyi (Okinawa and Daito Islands, Northwest Pacific, late 1930s) Réunion pink pigeon, Nesoenas duboisi (Réunion, Mascarenes, c. 1700) Formerly in Streptopelia. There seems to have been at least another species of pigeon on Réunion (probably an Alectroenas), but bones have not yet been found. It disappeared at the same time. 
was born in Westborough, Massachusetts, on December 8, 1765, the eldest child of Eli Whitney Sr., a prosperous farmer, and his wife Elizabeth Fay, also of Westborough. The younger Eli was known during his lifetime and after his death by the name "Eli Whitney", though he was technically Eli Whitney Jr. His son, born in 1820, also named Eli, was known during his lifetime and afterward by the name "Eli Whitney, Jr." Whitney's mother, Elizabeth Fay, died in 1777, when he was 11. At age 14 he ran a profitable nail manufacturing operation in his father's workshop during the Revolutionary War. Because his stepmother opposed his wish to attend college, Whitney worked as a farm laborer and school teacher to save money. He prepared for Yale at Leicester Academy (now Becker College) and under the tutelage of Rev. Elizur Goodrich of Durham, Connecticut; he entered Yale in the fall of 1789 and graduated Phi Beta Kappa in 1792. Whitney expected to study law but, finding himself short of funds, accepted an offer to go to South Carolina as a private tutor. Instead of reaching his destination, he was convinced to visit Georgia. In the closing years of the 18th century, Georgia was a magnet for New Englanders seeking their fortunes (its Revolutionary-era governor had been Lyman Hall, a migrant from Connecticut). When he initially sailed for South Carolina, among his shipmates were the widow (Catherine Littlefield Greene) and family of the Revolutionary hero Gen. Nathanael Greene of Rhode Island. Mrs. Greene invited Whitney to visit her Georgia plantation, Mulberry Grove. Her plantation manager and husband-to-be was Phineas Miller, another Connecticut migrant and Yale graduate (class of 1785), who would become Whitney's business partner. Career Whitney is most famous for two innovations which came to have significant impacts on the United States in the mid-19th century: the cotton gin (1793) and his advocacy of interchangeable parts. In the South, the cotton gin revolutionized the way cotton was harvested and reinvigorated slavery. Conversely, in the North the adoption of interchangeable parts revolutionized the manufacturing industry, contributing greatly to the U.S. victory in the Civil War. Cotton gin The cotton gin is a mechanical device that removes the seeds from cotton, a process that had previously been extremely labor-intensive. The word gin is short for engine. While staying at Mulberry Grove, Whitney constructed several ingenious household devices which led Mrs. Greene to introduce him to some businessmen who were discussing the desirability of a machine to separate the short staple upland cotton from its seeds, work that was then done by hand at the rate of a pound of lint a day. In a few weeks Whitney produced a model. The cotton gin was a wooden drum stuck with hooks that pulled the cotton fibers through a mesh. The cotton seeds would not fit through the mesh and fell outside. Whitney occasionally told a story wherein he was pondering an improved method of seeding the cotton when he was inspired by observing a cat attempting to pull a chicken through a fence, and able to pull through only some of the feathers. A single cotton gin could generate up to of cleaned cotton daily. This contributed to the economic development of the Southern United States, a prime cotton growing area; some historians believe that this invention allowed the African slavery system in the Southern United States to become more sustainable at a critical point in its development.
Whitney applied for the patent for his cotton gin on October 28, 1793, and received the patent (later numbered as X72) on March 14, 1794, but it was not validated until 1807. Whitney and his partner, Miller, did not intend to sell the gins. Rather, like the proprietors of grist and sawmills, they expected to charge farmers for cleaning their cotton – two-fifths of the value, paid in cotton. Resentment at this scheme, the mechanical simplicity of the device, and the primitive state of patent law made infringement inevitable. Whitney and Miller could not build enough gins to meet demand, so gins from other makers found ready sale. Ultimately, patent infringement lawsuits consumed the profits (one patent, later annulled, was granted in 1796 to Hogden Holmes for a gin which substituted circular saws for the spikes) and their cotton gin company went out of business in 1797. One oft-overlooked point is that there were drawbacks to Whitney's first design. There is significant evidence that the design flaws were solved by his sponsor, Mrs. Greene, but Whitney gave her no public credit or recognition. After validation of the patent, the legislature of South Carolina voted $50,000 for the rights for that state, while North Carolina levied a license tax for five years, from which about $30,000 was realized. There is a claim that Tennessee paid, perhaps, $10,000. While the cotton gin did not earn Whitney the fortune he had hoped for, it did give him fame. It has been argued by some historians that Whitney's cotton gin was an important if unintended cause of the American Civil War. After Whitney's invention, the plantation slavery industry was rejuvenated, eventually culminating in the Civil War. The cotton gin transformed Southern agriculture and the national economy. Southern cotton found ready markets in Europe and in the burgeoning textile mills of New England. Cotton exports from the U.S. boomed after the cotton gin's appearance – from less than in 1793 to by 1810. Cotton was a staple that could be stored for long periods and shipped long distances, unlike most agricultural products. It became the U.S.'s chief export, representing over half the value of U.S. exports from 1820 to 1860. Whitney believed that his cotton gin would reduce the need for enslaved labor and would help hasten the end of southern slavery. Paradoxically, the cotton gin, a labor-saving device, helped preserve and prolong slavery in the United States for another 70 years. Before the 1790s, slave labor was primarily employed in growing rice, tobacco, and indigo, none of which were especially profitable anymore. Neither was cotton, due to the difficulty of
seed removal. But with the invention of the gin, growing cotton with slave labor became highly profitable – the chief source of wealth in the American South, and the basis of frontier settlement from Georgia to Texas. "King Cotton" became a dominant economic force, and slavery was sustained as a key institution of Southern society. Interchangeable parts Eli Whitney has often been incorrectly credited with inventing the idea of interchangeable parts, which he championed for years as a maker of muskets; however, the idea predated Whitney, and Whitney's role in it was one of promotion and popularizing, not invention. Successful implementation of the idea eluded Whitney until near the end of his life, occurring first in others' armories. Attempts at interchangeability of parts can be traced back as far as the Punic Wars through both archaeological remains of boats now in Museo Archeologico Baglio Anselmi and contemporary written accounts. In modern times the idea developed over decades among many people. An early leader was Jean-Baptiste Vaquette de Gribeauval, an 18th-century French artillerist who created a fair amount of standardization of artillery pieces, although not true interchangeability of parts. He inspired others, including Honoré Blanc and Louis de Tousard, to work further on the idea, and on shoulder weapons as well as artillery. In the 19th century these efforts produced the "armory system," or American system of manufacturing. Certain other New Englanders, including Captain John H. Hall and Simeon North, arrived at successful
name in 1929. The story concerns an English woman who lives at Fox Tor farm, and an American captured during the American War of Independence and held at the prison at Princetown on Dartmoor. The heroine's father, Maurice Malherb, is based on Thomas Windeatt. In the novel Malherb is a miscreant who destroys Childe's tomb and beats his servant. He is depicted as
a victim of his own bad temper rather than a sadist. Malherb is introduced as the younger son of a noble family and he builds the Fox Tor house
Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampere's circuital law. Relation to and comparison with other physical fields Because electromagnetism is one of the four fundamental forces of nature, it is useful to compare the electromagnetic field with the gravitational, strong, and weak fields. The word 'force' is sometimes replaced by 'interaction' because modern particle physics models electromagnetism as an exchange of particles known as gauge bosons. Electromagnetic and gravitational fields Sources of electromagnetic fields consist of two types of charge – positive and negative. This contrasts with the sources of the gravitational field, which are masses. Masses are sometimes described as gravitational charges, their important feature being that there are only positive masses and no negative masses. Further, gravity differs from electromagnetism in that positive masses attract other positive masses, whereas like charges in electromagnetism repel each other. The relative strengths and ranges of the four interactions and other information are tabulated below: Applications Static E and M fields and static EM fields When an EM field (see electromagnetic tensor) is not varying in time, it may be seen as a purely electric field, a purely magnetic field, or a mixture of both. However, the general case of a static EM field with both electric and magnetic components present is the case that appears to most observers. Observers who see only an electric or magnetic field component of a static EM field have the other (electric or magnetic) component suppressed because, in that frame, the charges that produce the EM field are immobile. In such cases the other component becomes manifest in other observer frames. A consequence of this is that any case that seems to consist of a "pure" static electric or magnetic field can be converted to an EM field, with both E and M components present, by simply moving the observer into a frame of reference which is moving with respect to the frame in which only the "pure" electric or magnetic field appears. That is, a pure static electric field will show the familiar magnetic field associated with a current in any frame of reference where the charge moves. Likewise, any new motion of a charge in a region that seemed previously to contain only a magnetic field will show that the space now contains an electric field as well, which will be found to produce an additional Lorentz force upon the moving charge. Thus, electrostatics, as well as magnetism and magnetostatics, are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field, and since an EM field with both electric and magnetic components will appear in any other frame, these "simpler" effects are merely a consequence of the observer's frame of reference (the field transformation written out below makes this explicit). The "applications" of all such non-time-varying (static) fields are discussed in the main articles linked in this section. Time-varying EM fields in Maxwell's equations An EM field that varies in time has two "causes" in Maxwell's equations. One is charges and currents (so-called "sources"), and the other cause for an E or M field is a change in the other type of field (this last cause also appears in "free space" very far from currents and charges). 
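To make the frame-dependence described above concrete, the standard special-relativistic transformation of the field components can be written down. This is a textbook result quoted here only as an illustration, not material from the original text; it assumes SI units and an observer moving with speed v along the x-axis relative to the frame in which E and B are measured, with \gamma = 1/\sqrt{1 - v^2/c^2}:

\begin{aligned}
E'_x &= E_x, & E'_y &= \gamma\,(E_y - v B_z), & E'_z &= \gamma\,(E_z + v B_y),\\
B'_x &= B_x, & B'_y &= \gamma\,\Bigl(B_y + \tfrac{v}{c^2} E_z\Bigr), & B'_z &= \gamma\,\Bigl(B_z - \tfrac{v}{c^2} E_y\Bigr).
\end{aligned}

In particular, if B = 0 in the original frame (a "pure" electrostatic field), the moving observer still measures nonzero magnetic components B'_y and B'_z whenever E has components perpendicular to the motion, which is exactly the effect described in the paragraph above.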
An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR), since it radiates from the charges and currents in the source, has no "feedback" effect on them, and is also not affected directly by them in the present time (rather, it is indirectly produced by a sequence of changes in fields radiating out from them in the past). EMR consists of the radiations in the electromagnetic spectrum, including radio waves, microwaves, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles. A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen. A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of "close") will have a dipole characteristic that is dominated by either a changing electric dipole or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field. Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR and around antennas which have the purpose of generating EMR at greater distances. Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as metal detectors and MRI scanner coils at higher frequencies. Sometimes these high-frequency magnetic fields change at radio frequencies without being far-field waves and thus radio waves; see RFID tags. See also near-field communication. Further commercial uses of near-field EM effects may be found in the article on virtual photons, since at the quantum level these fields are represented by these particles. Far-field effects (EMR) in the quantum picture of radiation are represented by ordinary photons. Other Electromagnetic fields can be used to record data on static electricity. Old televisions can be traced with electromagnetic fields. Health and safety The potential effects of electromagnetic fields on human health vary widely depending on the frequency and intensity of the fields. The potential health effects of the very low frequency EMFs surrounding power lines and electrical devices are the subject of ongoing research and a significant amount of public debate. The US National Institute for Occupational Safety and Health (NIOSH) and other US government agencies do not consider EMFs a proven health hazard. NIOSH has issued some cautionary advisories but stresses that the data are currently too limited to draw good conclusions. In 2011, the WHO/International Agency for Research on Cancer (IARC) classified radiofrequency electromagnetic fields as possibly carcinogenic to humans (Group 2B).
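As a rough numerical companion to the near-field/far-field distinction discussed above, the short sketch below estimates where the radiated far field conventionally begins for an antenna of largest dimension D, using the common Fraunhofer criterion 2D²/λ. This is an illustrative sketch only: the criterion applies to antennas large compared with the wavelength, and the function name and example values are assumptions introduced here rather than details from the text.

# Illustrative sketch (assumed values): estimate where the radiating far field
# ("EMR" in the text above) conventionally begins for an antenna of largest
# dimension D, using the Fraunhofer criterion d_far = 2 * D**2 / wavelength.

C = 299_792_458.0  # speed of light in vacuum, m/s

def far_field_distance(antenna_dimension_m: float, frequency_hz: float) -> float:
    """Approximate distance (m) beyond which the field is treated as far field."""
    wavelength = C / frequency_hz
    return 2.0 * antenna_dimension_m ** 2 / wavelength

# Example with made-up numbers: a 1 m dish driven at 2.4 GHz.
print(far_field_distance(1.0, 2.4e9))  # ≈ 16 m; closer than this is near field

Inside that distance the reactive dipole near-field terms described above dominate; well beyond it the field behaves as freely propagating radiation.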
the project requirements. The 1916 Zoning Act forced Lamb to design a structure that incorporated setbacks, resulting in the lower floors being larger than the upper floors. Consequently, the building was designed from the top down, giving it a "pencil"-like shape. The plans were devised within a budget of $50 million and a stipulation that the building be ready for occupancy within 18 months of the start of construction. Design changes The original plan of the building was 50 stories, but was later increased to 60 and then 80 stories. Height restrictions were placed on nearby buildings to ensure that the top fifty floors of the planned 80-story building would have unobstructed views of the city. The New York Times lauded the site's proximity to mass transit, with the Brooklyn–Manhattan Transit's 34th Street station and the Hudson and Manhattan Railroad's 33rd Street terminal one block away, as well as Penn Station two blocks away and the Grand Central Terminal nine blocks away at its closest. It also praised the of proposed floor space near "one of the busiest sections in the world". While plans for the Empire State Building were being finalized, an intense competition in New York for the title of "world's tallest building" was underway. 40 Wall Street (then the Bank of Manhattan Building) and the Chrysler Building in Manhattan both vied for this distinction and were already under construction when work began on the Empire State Building. The "Race into the Sky", as popular media called it at the time, was representative of the country's optimism in the 1920s, fueled by the building boom in major cities. The race was defined by at least five other proposals, although only the Empire State Building would survive the Wall Street Crash of 1929. The 40 Wall Street tower was revised, in April 1929, from to making it the world's tallest. The Chrysler Building added its steel tip to its roof in October 1929, thus bringing it to a height of and greatly exceeding the height of 40 Wall Street. The Chrysler Building's developer, Walter Chrysler, realized that his tower's height would exceed the Empire State Building's as well, having instructed his architect, William Van Alen, to change the Chrysler's original roof from a stubby Romanesque dome to a narrow steel spire. Raskob, wishing to have the Empire State Building be the world's tallest, reviewed the plans and had five floors added as well as a spire; however, the new floors would need to be set back because of projected wind pressure on the extension. On November 18, 1929, Smith acquired a lot at 27–31 West 33rd Street, adding to the width of the proposed office building's site. Two days later, Smith announced the updated plans for the skyscraper. The plans included an observation deck on the 86th-floor roof at a height of , higher than the Chrysler's 71st-floor observation deck. The 1,050-foot Empire State Building would only be taller than the Chrysler Building, and Raskob was afraid that Chrysler might try to "pull a trick like hiding a rod in the spire and then sticking it up at the last minute." The plans were revised one last time in December 1929, to include a 16-story metal "crown" and an additional mooring mast intended for dirigibles. The roof height was now , making it the tallest building in the world by far, even without the antenna. 
The addition of the dirigible station meant that another floor, the now-enclosed 86th floor, would have to be built below the crown; however, unlike the Chrysler's spire, the Empire State's mast would serve a practical purpose. A revised plan was announced to the public in late December 1929, just before the start of construction. The final plan was sketched within two hours, the night before the plan was supposed to be presented to the site's owners in January 1930. The New York Times reported that the spire was facing some "technical problems", but they were "no greater than might be expected under such a novel plan." By this time the blueprints for the building had gone through up to fifteen versions before they were approved. Lamb described the other specifications he was given for the final, approved plan. The contractors were Starrett Brothers and Eken, Paul and William A. Starrett and Andrew J. Eken, who would later construct other New York City buildings such as Stuyvesant Town, Starrett City and Trump Tower. The project was financed primarily by Raskob and Pierre du Pont, while James Farley's General Builders Supply Corporation supplied the building materials. John W. Bowser was the construction superintendent of the project, and the structural engineer of the building was Homer G. Balcom. The tight completion schedule necessitated the commencement of construction even though the design had yet to be finalized. Construction Hotel demolition Demolition of the old Waldorf–Astoria began on October 1, 1929. Stripping the building down was an arduous process, as the hotel had been constructed using more rigid material than earlier buildings had been. Furthermore, the old hotel's granite, wood chips, and "precious" metals such as lead, brass, and zinc were not in high demand, resulting in issues with disposal. Most of the wood was deposited into a woodpile on nearby 30th Street or was burned in a swamp elsewhere. Much of the other material that made up the old hotel, including the granite and bronze, was dumped into the Atlantic Ocean near Sandy Hook, New Jersey. By the time the hotel's demolition started, Raskob had secured the required funding for the construction of the building. The plan was to start construction later that year but, on October 24, the New York Stock Exchange experienced the major and sudden Wall Street Crash, marking the beginning of the decade-long Great Depression. Despite the economic downturn, Raskob refused to cancel the project because of the progress that had been made up to that point. Neither Raskob, who had ceased speculation in the stock market the previous year, nor Smith, who had no stock investments, suffered financially in the crash. However, most of the investors were affected, and as a result, in December 1929, Empire State Inc. obtained a $27.5 million loan from Metropolitan Life Insurance Company so construction could begin. The stock market crash resulted in no demand for new office space; Raskob and Smith nonetheless started construction, as canceling the project would have resulted in greater losses for the investors. Steel structure A structural steel contract was awarded on January 12, 1930, with excavation of the site beginning ten days later on January 22, before the old hotel had been completely demolished. Two twelve-hour shifts, consisting of 300 men each, worked continuously to dig the foundation. Small pier holes were sunk into the ground to house the concrete footings that would support the steelwork. 
Excavation was nearly complete by early March, and construction on the building itself started on March 17, with the builders placing the first steel columns on the completed footings before the rest of the footings had been finished. Around this time, Lamb held a press conference on the building plans. He described the reflective steel panels parallel to the windows, the large-block Indiana Limestone facade that was slightly more expensive than smaller bricks, and the building's vertical lines. Four colossal columns, intended for installation in the center of the building site, were delivered; they would support a combined when the building was finished. The structural steel was pre-ordered and pre-fabricated in anticipation of a revision to the city's building code that would have allowed the Empire State Building's structural steel to carry , up from , thus reducing the amount of steel needed for the building. Although the 18,000-psi regulation had been safely enacted in other cities, Mayor Jimmy Walker did not sign the new codes into law until March 26, 1930, just before construction was due to commence. The first steel framework was installed on April 1, 1930. From there, construction proceeded at a rapid pace; during one stretch of 10 working days, the builders erected fourteen floors. This was made possible through precise coordination of the building's planning, as well as the mass production of common materials such as windows and spandrels. On one occasion, when a supplier could not provide timely delivery of dark Hauteville marble, Starrett switched to using Rose Famosa marble from a German quarry that was purchased specifically to provide the project with sufficient marble. The scale of the project was massive, with trucks carrying "16,000 partition tiles, 5,000 bags of cement, of sand and 300 bags of lime" arriving at the construction site every day. There were also cafes and concession stands on five of the incomplete floors so workers did not have to descend to the ground level to eat lunch. Temporary water taps were also built so workers did not waste time buying water bottles from the ground level. Additionally, carts running on a small railway system transported materials from the basement storage to elevators that brought the carts to the desired floors where they would then be distributed throughout that level using another set of tracks. The of steel ordered for the project was the largest-ever single order of steel at the time, comprising more steel than was ordered for the Chrysler Building and 40 Wall Street combined. According to historian John Tauranac, building materials were sourced from numerous, and distant, sources with "limestone from Indiana, steel girders from Pittsburgh, cement and mortar from upper New York State, marble from Italy, France, and England, wood from northern and Pacific Coast forests, [and] hardware from New England." The facade, too, used a variety of material, most prominently Indiana limestone but also Swedish black granite, terracotta, and brick. By June 20, the skyscraper's supporting steel structure had risen to the 26th floor, and by July 27, half of the steel structure had been completed. Starrett Bros. and Eken endeavored to build one floor a day in order to speed up construction, a goal that they almost reached with their pace of stories per week; prior to this, the fastest pace of construction for a building of similar height had been stories per week. 
While construction progressed, the final designs for the floors were being worked out from the ground up (as opposed to the general design, which had been developed from the roof down). Some of the levels were still undergoing final approval, with several orders placed within an hour of a plan being finalized. On September 10, as steelwork was nearing completion, Smith laid the building's cornerstone during a ceremony attended by thousands. The stone contained a box with contemporary artifacts including the previous day's New York Times, a U.S. currency set containing all denominations of notes and coins minted in 1930, a history of the site and building, and photographs of the people involved in construction. The steel structure was topped out at on September 19, twelve days ahead of schedule and 23 weeks after the start of construction. Workers raised a flag atop the 86th floor to signify this milestone. Completion and scale Afterward, work on the building's interior and crowning mast commenced. The mooring mast topped out on November 21, two months after the steelwork had been completed. Meanwhile, work on the walls and interior was progressing at a quick pace, with exterior walls built up to the 75th floor by the time steelwork had been built to the 95th floor. The majority of the facade was already finished by the middle of November. Because of the building's height, it was deemed infeasible to have many elevators or large elevator cabins, so the builders contracted with the Otis Elevator Company to make 66 cars that could speed at , which represented the largest-ever elevator order at the time. In addition to the time constraints, the builders faced space limitations: construction materials had to be delivered quickly, and trucks needed to drop off these materials without congesting traffic. This was solved by creating a temporary driveway for the trucks between 33rd and 34th Streets, and then storing the materials in the building's first floor and basements. Concrete mixers, brick hoppers, and stone hoists inside the building ensured that materials would be able to ascend quickly and without endangering or inconveniencing the public. At one point, over 200 trucks made material deliveries at the building site every day. A series of relay and erection derricks, placed on platforms erected near the building, lifted the steel from the trucks below and installed the beams at the appropriate locations. The Empire State Building was structurally completed on April 11, 1931, twelve days ahead of schedule and 410 days after construction commenced. Al Smith shot the final rivet, which was made of solid gold. 
Hine's images were used extensively by the media to publish daily press releases. According to the writer Jim Rasenberger, Hine "climbed out onto the steel with the ironworkers and dangled from a derrick cable hundreds of feet above the city to capture, as no one ever had before (or has since), the dizzy work of building skyscrapers". In Rasenberger's words, Hine turned what might have been an assignment of "corporate flak" into "exhilarating art". These images were later organized into their own collection. Onlookers were enraptured by the sheer height at which the steelworkers operated. New York magazine wrote of the steelworkers: "Like little spiders they toiled, spinning a fabric of steel against the sky". Opening and early years The Empire State Building officially opened on May 1, 1931, forty-five days ahead of its projected opening date, and eighteen months from the start of construction. The opening was marked with an event featuring United States President Herbert Hoover, who turned on the building's lights with the ceremonial button push from Washington, D.C. Over 350 guests, including Jimmy Walker, Governor Franklin D. Roosevelt, and Al Smith, attended the opening ceremony and the luncheon that followed on the 86th floor. An account from that day stated that the view from the luncheon was obscured by a fog, with other landmarks such as the Statue of Liberty being "lost in the mist" enveloping New York City. The Empire State Building officially opened the next day. Advertisements for the building's observatories were placed in local newspapers, while nearby hotels also capitalized on the events by releasing advertisements that lauded their proximity to the newly opened building. According to The New York Times, builders and real estate speculators predicted that the Empire State Building would be the world's tallest building "for many years", thus ending the great New York City skyscraper rivalry. At the time, most engineers agreed that it would be difficult to build a building taller than , even with the hardy Manhattan bedrock as a foundation. Technically, it was believed possible to build a tower of up to , but it was deemed uneconomical to do so, especially during the Great Depression. As the tallest building in the world at that time, and the first one to exceed 100 floors, the Empire State Building became an icon of the city and, ultimately, of the nation. In 1932, the Fifth Avenue Association gave the building its 1931 "gold medal" for architectural excellence, signifying that the Empire State had been the best-designed building on Fifth Avenue to open in 1931. A year later, on March 2, 1933, the movie King Kong was released. The movie, which depicted a large stop motion ape named Kong climbing the Empire State Building, made the still-new building into a cinematic icon. Tenants and tourism The Empire State Building's opening coincided with the Great Depression in the United States, and as a result much of its office space was vacant from its opening. In the first year, only 23% of the available space was rented, as compared to the early 1920s, when the average building would have occupancy of 52% upon opening and 90% rented within five years. The lack of renters led New Yorkers to deride the building as the "Empty State Building" or "Smith's Folly". The earliest tenants in the Empire State Building were large companies, banks, and garment industries. 
Jack Brod, one of the building's longest-standing tenants, co-established the Empire Diamond Corporation with his father in the building in mid-1931 and rented space in the building until he died in 2008. Brod recalled that there were only about 20 tenants at the time of opening, including him, and that Al Smith was the only real tenant in the space above his seventh-floor offices. Generally, during the early 1930s, it was rare for more than a single office space to be rented in the building, despite Smith's and Raskob's aggressive marketing efforts in the newspapers and to anyone they knew. The building's lights were continuously left on, even in the unrented spaces, to give the impression of occupancy. This was exacerbated by competition from Rockefeller Center as well as from buildings on 42nd Street, which, when combined with the Empire State Building, resulted in a surplus of office space in a slow market during the 1930s. Aggressive marketing efforts served to reinforce the Empire State Building's status as the world's tallest. The observatory was advertised in local newspapers as well as on railroad tickets. The building became a popular tourist attraction, with one million people each paying one dollar to ride elevators to the observation decks in 1931. In its first year of operation, the observation deck made approximately $2 million in revenue, as much as its owners made in rent that year. By 1936, the observation deck was crowded on a daily basis, with food and drink available for purchase at the top, and by 1944 the building had received its five-millionth visitor. In 1931, NBC took up tenancy, leasing space on the 85th floor for radio broadcasts. From the outset the building was in debt, losing $1 million per year by 1935. Real estate developer Seymour Durst recalled that the building was so underused in 1936 that there was no elevator service above the 45th floor, as the building above the 41st floor was empty except for the NBC offices and the Raskob/Du Pont offices on the 81st floor. Other events Per the original plans, the Empire State Building's spire was intended to be an airship docking station. Raskob and Smith had proposed dirigible ticketing offices and passenger waiting rooms on the 86th floor, while the airships themselves would be tied to the spire at the equivalent of the building's 106th floor. After checking in on the 86th floor, passengers would have taken an elevator up to the 101st floor and then climbed steep ladders to board the airship. The idea, however, was impractical and dangerous due to powerful updrafts caused by the building itself, the wind currents across Manhattan, and the spires of nearby skyscrapers. 
One engine completely penetrated the building and landed in a neighboring block, while the other engine and part of the landing gear plummeted down an elevator shaft. Fourteen people were killed in the incident, but the building escaped severe damage and was reopened two days later. Profitability The Empire State Building only started becoming profitable in the 1950s, when it was finally able to break even for the first time. At the time, mass transit options in the building's vicinity were limited compared to the present day. Despite this challenge, the Empire State Building began to attract renters due to its reputation. A radio antenna was erected on top of the tower starting in 1950, allowing the area's television stations to be broadcast from the building. However, despite the turnaround in the building's fortunes, Raskob listed it for sale in 1951, with a minimum asking price of $50 million. The property was purchased by business partners Roger L. Stevens, Henry Crown, Alfred R. Glancy and Ben Tobin. The sale was brokered by the Charles F. Noyes Company, a prominent real estate firm in upper Manhattan, for $51 million, the highest price paid for a single structure at the time. By this time, the Empire State had been fully leased for several years with a waiting list of parties looking to lease space in the building, according to the Cortland Standard. That same year, six news companies formed a partnership to pay a combined annual fee of $600,000 to use the building's antenna, which was completed in 1953. Crown bought out his partners' ownership stakes in 1954, becoming the sole owner. The following year, the American Society of Civil Engineers named the building one of the "Seven Modern Civil Engineering Wonders". In 1961, Lawrence A. Wien signed a contract to purchase the Empire State Building for $65 million, with Harry B. Helmsley acting as a partner in the building's operating lease. This became the new highest price for a single structure. Over 3,000 people paid $10,000 for one share each in a company called Empire State Building Associates. The company in turn subleased the building to another company headed by Helmsley and Wien, raising $33 million of the funds needed to pay the purchase price. In a separate transaction, the land underneath the building was sold to Prudential Insurance for $29 million. Helmsley, Wien, and Peter Malkin quickly started a program of minor improvement projects, including the first-ever full-building facade refurbishment and window-washing in 1962, the installation of new flood lights on the 72nd floor in 1964, and replacement of the manually operated elevators with automatic units in 1966. The little-used western end of the second floor was used as a storage space until 1964, at which point it received escalators to the first floor as part of its conversion into a highly sought retail area. Loss of "tallest building" title In 1961, the same year that Helmsley, Wien, and Malkin had purchased the Empire State Building, the Port Authority of New York and New Jersey formally backed plans for a new World Trade Center in Lower Manhattan. The plan originally included 66-story twin towers with column-free open spaces. The Empire State's owners and real estate speculators were worried that the twin towers' of office space would create a glut of rentable space in Manhattan as well as take away the Empire State Building's profits from lessees. 
A revision in the World Trade Center's plan brought the twin towers to each or 110 stories, taller than the Empire State. Opponents of the new project included prominent real-estate developer Robert Tishman, as well as Wien's Committee for a Reasonable World Trade Center. In response to Wien's opposition, Port Authority executive director Austin J. Tobin said that Wien was only opposing the project because it would overshadow his Empire State Building as the world's tallest building. The World Trade Center's twin towers started construction in 1966. The following year, the Ostankino Tower succeeded the Empire State Building as the tallest freestanding structure in the world. In 1970, the Empire State surrendered its position as the world's tallest building, when the World Trade Center's still-under-construction North Tower surpassed it, on October 19; the North Tower was topped out on December 23, 1970. In December 1975, the observation deck was opened on the 110th floor of the Twin Towers, significantly higher than the 86th floor observatory on the Empire State Building. The latter was also losing revenue during this period, particularly as a number of broadcast stations had moved to the World Trade Center in 1971, although the Port Authority continued to pay the broadcasting leases for the Empire State until 1984. The Empire State Building was still seen as prestigious, having welcomed its forty-millionth visitor in March 1971. 1980s and 1990s By 1980, there were nearly two million annual visitors, although a building official had previously estimated between 1.5 million and 1.75 million annual visitors. The building received its own ZIP code in May 1980 in a rollout of 63 new postal codes in Manhattan. At the time, its tenants collectively received 35,000 pieces of mail daily. The Empire State Building celebrated its 50th anniversary on May 1, 1981, with a much-publicized, but poorly received, laser light show, as well as an "Empire State Building Week" that ran through to May 8. The New York City Landmarks Preservation Commission voted to make the lobby a city landmark on May 19, 1981, citing the historic nature of the first and second floors, as well as "the fixtures and interior components" of the upper floors. The building became a National Historic Landmark in 1986 in close alignment with the New York City Landmarks report. The Empire State Building was added to the National Register of Historic Places the following year due to its architectural significance. Capital improvements were made to the Empire State Building during the early to mid-1990s at a cost of $55 million. These improvements entailed replacing alarm systems, elevators, windows, and air conditioning; making the observation deck compliant with the Americans with Disabilities Act of 1990 (ADA); and refurbishing the limestone facade. The observatory renovation was added after disability rights groups and the United States Department of Justice filed a lawsuit against the building in 1992, in what was the first lawsuit filed by an organization under the new law. A settlement was reached in 1994, in which the Empire State Building Associates agreed to add ADA-compliant elements, such as new elevators, ramps, and automatic doors, during its ongoing renovation. Prudential sold the land under the building in 1991 for $42 million to a buyer representing hotelier , who was imprisoned at the time in connection with the deadly at the in Tokyo. 
In 1994, Donald Trump entered into a joint-venture agreement with Yokoi, with a shared goal of breaking the Empire State Building's lease on the land in an effort to gain total ownership of the building so that, if successful, the two could reap the potential profits of merging the ownership of the building with the land beneath it. Having secured a half-ownership of the land, Trump devised plans to take ownership of the building itself so he could renovate it, even though Helmsley and Malkin had already started their refurbishment project. He sued Empire State Building Associates in February 1995, claiming that the latter had caused the building to become a "high-rise slum" and a "second-rate, rodent-infested" office tower. Trump had intended to have Empire State Building Associates evicted for violating the terms of their lease, but was denied. This led to Helmsley's companies countersuing Trump in May. This sparked a series of lawsuits and countersuits that lasted several years, partly arising from Trump's desire to obtain the building's master lease by taking it from Empire State Building Associates. Upon Harry Helmsley's death in 1997, the Malkins sued Helmsley's widow, Leona Helmsley, for control of the building. 21st century 2000s Following the destruction of the World Trade Center during the September 11 attacks in 2001, the Empire State Building again became the tallest building in New York City, but was only the second-tallest building in the Americas after the Sears (later Willis) Tower in Chicago. As a result of the attacks, transmissions from nearly all of the city's commercial television and FM radio stations were again broadcast from the Empire State Building. The attacks also led to an increase in security due to persistent terror threats against prominent sites in New York City. In 2002, Trump and Yokoi sold their land claim to the Empire State Building Associates, now headed by Malkin, in a $57.5 million sale. This action merged the building's title and lease for the first time in half a century. Despite the lingering threat posed by the 9/11 attacks, the Empire State Building remained popular with 3.5 million visitors to the observatories in 2004, compared to about 2.8 million in 2003. Even though she maintained her ownership stake in the building until the post-consolidation IPO in October 2013, Leona Helmsley handed over day-to-day operations of the building in 2006 to Peter Malkin's company. In 2008, the building was temporarily "stolen" by the New York Daily News to show how easy it was to transfer the deed on a property, since city clerks were not required to validate the submitted information, as well as to help demonstrate how fraudulent deeds could be used to obtain large mortgages and then have individuals disappear with the money. The paperwork submitted to the city included the names of Fay Wray, the famous star of King Kong, and Willie Sutton, a notorious New York bank robber. The newspaper then transferred the deed back over to the legitimate owners, who at that time were Empire State Land Associates. 2010s Starting in 2009, the building's public areas received a $550 million renovation, with improvements to the air conditioning and waterproofing, renovations to the observation deck and main lobby, and relocation of the gift shop to the 80th floor. About $120 million was spent on improving the energy efficiency of the building, with the goal of reducing energy emissions by 38% within five years. 
For example, all of the windows were refurbished onsite into film-coated "superwindows" which block heat but pass light. Air conditioning operating costs on hot days were reduced, saving $17 million of the project's capital cost immediately and partially funding some of the other retrofits. The Empire State Building won the Leadership in Energy and Environmental Design (LEED) Gold for Existing Buildings rating in September 2011, as well as the World Federation of Great Towers' Excellence in Environment Award for 2010.
The movie, which depicted a large stop motion ape named Kong climbing the Empire State Building, made the still-new building into a cinematic icon. Tenants and tourism The Empire State Building's opening coincided with the Great Depression in the United States, and as a result much of its office space was vacant from its opening. In the first year, only 23% of the available space was rented, as compared to the early 1920s, when the average building would have occupancy of 52% upon opening and 90% rented within five years. The lack of renters led New Yorkers to deride the building as the "Empty State Building" or "Smith's Folly". The earliest tenants in the Empire State Building were large companies, banks, and garment industries. Jack Brod, one of the building's longest resident tenants, co-established the Empire Diamond Corporation with his father in the building in mid-1931 and rented space in the building until he died in 2008. Brod recalled that there were only about 20 tenants at the time of opening, including him, and that Al Smith was the only real tenant in the space above his seventh-floor offices. Generally, during the early 1930s, it was rare for more than a single office space to be rented in the building, despite Smith's and Raskob's aggressive marketing efforts in the newspapers and to anyone they knew. The building's lights were continuously left on, even in the unrented spaces, to give the impression of occupancy. This was exacerbated by competition from Rockefeller Center as well as from buildings on 42nd Street, which, when combined with the Empire State Building, resulted in a surplus of office space in a slow market during the 1930s. Aggressive marketing efforts served to reinforce the Empire State Building's status as the world's tallest. The observatory was advertised in local newspapers as well as on railroad tickets. The building became a popular tourist attraction, with one million people each paying one dollar to ride elevators to the observation decks in 1931. In its first year of operation, the observation deck made approximately $2 million in revenue, as much as its owners made in rent that year. By 1936, the observation deck was crowded on a daily basis, with food and drink available for purchase at the top, and by 1944 the building had received its five-millionth visitor. In 1931, NBC took up tenancy, leasing space on the 85th floor for radio broadcasts. From the outset the building was in debt, losing $1 million per year by 1935. Real estate developer Seymour Durst recalled that the building was so underused in 1936 that there was no elevator service above the 45th floor, as the building above the 41st floor was empty except for the NBC offices and the Raskob/Du Pont offices on the 81st floor. Other events Per the original plans, the Empire State Building's spire was intended to be an airship docking station. Raskob and Smith had proposed dirigible ticketing offices and passenger waiting rooms on the 86th floor, while the airships themselves would be tied to the spire at the equivalent of the building's 106th floor. An elevator would ferry passengers from the 86th to the 101st floor after they had checked in on the 86th floor, after which passengers would have climbed steep ladders to board the airship. The idea, however, was impractical and dangerous due to powerful updrafts caused by the building itself, the wind currents across Manhattan, and the spires of nearby skyscrapers. 
Furthermore, even if the airship were to successfully navigate all these obstacles, its crew would have to jettison some ballast by releasing water onto the streets below in order to maintain stability, and then tie the craft's nose to the spire with no mooring lines securing the tail end of the craft. On September 15, 1931, a small commercial United States Navy airship circled 25 times in winds. The airship then attempted to dock at the mast, but its ballast spilled and the craft was rocked by unpredictable eddies. The near-disaster scuttled plans to turn the building's spire into an airship terminal, although one blimp did manage to make a single newspaper delivery afterward. On July 28, 1945, a B-25 Mitchell bomber crashed into the north side of the Empire State Building, between the 79th and 80th floors. One engine completely penetrated the building and landed in a neighboring block, while the other engine and part of the landing gear plummeted down an elevator shaft. Fourteen people were killed in the incident, but the building escaped severe damage and was reopened two days later. Profitability The Empire State Building only started becoming profitable in the 1950s, when it was finally able to break even for the first time. At the time, mass transit options in the building's vicinity were limited compared to the present day. Despite this challenge, the Empire State Building began to attract renters due to its reputation. A radio antenna was erected on top of the towers starting in 1950, allowing the area's television stations to be broadcast from the building. However, despite the turnaround in the building's fortunes, Raskob listed it for sale in 1951, with a minimum asking price of $50 million. The property was purchased by business partners Roger L. Stevens, Henry Crown, Alfred R. Glancy and Ben Tobin. The sale was brokered by the Charles F. Noyes Company, a prominent real estate firm in upper Manhattan, for $51 million, the highest price paid for a single structure at the time. By this time, the Empire State had been fully leased for several years with a waiting list of parties looking to lease space in the building, according to the Cortland Standard. That same year, six news companies formed a partnership to pay a combined annual fee of $600,000 to use the building's antenna, which was completed in 1953. Crown bought out his partners' ownership stakes in 1954, becoming the sole owner. The following year, the American Society of Civil Engineers named the building one of the "Seven Modern Civil Engineering Wonders". In 1961, Lawrence A. Wien signed a contract to purchase the Empire State Building for $65 million, with Harry B. Helmsley acting as partners in the building's operating lease. This became the new highest price for a single structure. Over 3,000 people paid $10,000 for one share each in a company called Empire State Building Associates. The company in turn subleased the building to another company headed by Helmsley and Wien, raising $33 million of the funds needed to pay the purchase price. In a separate transaction, the land underneath the building was sold to Prudential Insurance for $29 million. Helmsley, Wien, and Peter Malkin quickly started a program of minor improvement projects, including the first-ever full-building facade refurbishment and window-washing in 1962, the installation of new flood lights on the 72nd floor in 1964, and replacement of the manually operated elevators with automatic units in 1966. 
The little-used western end of the second floor was used as a storage space until 1964, at which point it received escalators to the first floor as part of its conversion into a highly sought retail area. Loss of "tallest building" title In 1961, the same year that Helmsley, Wien, and Malkin had purchased the Empire State Building, the Port Authority of New York and New Jersey formally backed plans for a new World Trade Center in Lower Manhattan. The plan originally included 66-story twin towers with column-free open spaces. The Empire State's owners and real estate speculators were worried that the twin towers' of office space would create a glut of rentable space in Manhattan as well as take away the Empire State Building's profits from lessees. A revision in the World Trade Center's plan brought the twin towers to each or 110 stories, taller than the Empire State. Opponents of the new project included prominent real-estate developer Robert Tishman, as well as Wien's Committee for a Reasonable World Trade Center. In response to Wien's opposition, Port Authority executive director Austin J. Tobin said that Wien was only opposing the project because it would overshadow his Empire State Building as the world's tallest building. The World Trade Center's twin towers started construction in 1966. The following year, the Ostankino Tower succeeded the Empire State Building as the tallest freestanding structure in the world. In 1970, the Empire State surrendered its position as the world's tallest building, when the World Trade Center's still-under-construction North Tower surpassed it, on October 19; the North Tower was topped out on December 23, 1970. In December 1975, the observation deck was opened on the 110th floor of the Twin Towers, significantly higher than the 86th floor observatory on the Empire State Building. The latter was also losing revenue during this period, particularly as a number of broadcast stations had moved to the World Trade Center in 1971; although the Port Authority continued to pay the broadcasting leases for the Empire State until 1984. The Empire State Building was still seen as prestigious, having seen its forty-millionth visitor in March 1971. 1980s and 1990s By 1980, there were nearly two million annual visitors, although a building official had previously estimated between 1.5 million and 1.75 million annual visitors. The building received its own ZIP code in May 1980 in a roll out of 63 new postal codes in Manhattan. At the time, its tenants collectively received 35,000 pieces of mail daily. The Empire State Building celebrated its 50th anniversary on May 1, 1981, with a much-publicized, but poorly received, laser light show, as well as an "Empire State Building Week" that ran through to May 8. The New York City Landmarks Preservation Commission voted to make the lobby a city landmark on May 19, 1981, citing the historic nature of the first and second floors, as well as "the fixtures and interior components" of the upper floors. The building became a National Historic Landmark in 1986 in close alignment to the New York City Landmarks report. The Empire State Building was added to the National Register of Historic Places the following year due to its architectural significance. Capital improvements were made to the Empire State Building during the early to mid-1990s at a cost of $55 million. 
These improvements entailed replacing alarm systems, elevators, windows, and air conditioning; making the observation deck compliant with the Americans with Disabilities Act of 1990 (ADA); and refurbishing the limestone facade. The observatory renovation was added after disability rights groups and the United States Department of Justice filed a lawsuit against the building in 1992, in what was the first lawsuit filed by an organization under the new law. A settlement was reached in 1994, in which the Empire State Building Associates agreed to add ADA-compliant elements, such as new elevators, ramps, and automatic doors, during its ongoing renovation. Prudential sold the land under the building in 1991 for $42 million to a buyer representing hotelier Yokoi, who was imprisoned at the time in connection with a deadly hotel fire in Tokyo. In 1994, Donald Trump entered into a joint-venture agreement with Yokoi, with a shared goal of breaking the Empire State Building's lease on the land in an effort to gain total ownership of the building so that, if successful, the two could reap the potential profits of merging the ownership of the building with the land beneath it. Having secured a half-ownership of the land, Trump devised plans to take ownership of the building itself so he could renovate it, even though Helmsley and Malkin had already started their refurbishment project. He sued Empire State Building Associates in February 1995, claiming that the latter had caused the building to become a "high-rise slum" and a "second-rate, rodent-infested" office tower. Trump had intended to have Empire State Building Associates evicted for violating the terms of their lease, but was denied. This led to Helmsley's companies countersuing Trump in May. This sparked a series of lawsuits and countersuits that lasted several years, partly arising from Trump's desire to obtain the building's master lease by taking it from Empire State Building Associates. Upon Harry Helmsley's death in 1997, the Malkins sued Helmsley's widow, Leona Helmsley, for control of the building. 21st century 2000s Following the destruction of the World Trade Center during the September 11 attacks in 2001, the Empire State Building again became the tallest building in New York City, but was only the second-tallest building in the Americas after the Sears (later Willis) Tower in Chicago. As a result of the attacks, transmissions from nearly all of the city's commercial television and FM radio stations were again broadcast from the Empire State Building. The attacks also led to an increase in security due to persistent terror threats against prominent sites in New York City. In 2002, Trump and Yokoi sold their land claim to the Empire State Building Associates, now headed by Malkin, in a $57.5 million sale. This action merged the building's title and lease for the first time in half a century. Despite the lingering threat posed by the 9/11 attacks, the Empire State Building remained popular with 3.5 million visitors to the observatories in 2004, compared to about 2.8 million in 2003. Even though she maintained her ownership stake in the building until the post-consolidation IPO in October 2013, Leona Helmsley handed over day-to-day operations of the building in 2006 to Peter Malkin's company. 
In 2008, the building was temporarily "stolen" by the New York Daily News to show how easy it was to transfer the deed on a property, since city clerks were not required to validate the submitted information, as well as to help demonstrate how fraudulent deeds could be used to obtain large mortgages and then have individuals disappear with the money. The paperwork submitted to the city included the names of Fay Wray, the famous star of King Kong, and Willie Sutton, a notorious New York bank robber. The newspaper then transferred the deed back over to the legitimate owners, who at that time were Empire State Land Associates. 2010s Starting in 2009, the building's public areas received a $550 million renovation, with improvements to the air conditioning and waterproofing, renovations to the observation deck and main lobby, and relocation of the gift shop to the 80th floor. About $120 million was spent on improving the energy efficiency of the building, with the goal of reducing energy emissions by 38% within five years. For example, all of the windows were refurbished onsite into film-coated "superwindows" which block heat but pass light. Air conditioning operating costs on hot days were reduced, saving $17 million of the project's capital cost immediately and partially funding some of the other retrofits. The Empire State Building won the Leadership in Energy and Environmental Design (LEED) Gold for Existing Buildings rating in September 2011, as well as the World Federation of Great Towers' Excellence in Environment Award for 2010. For the LEED Gold certification, the building's energy reduction was considered, as was a large purchase of carbon offsets. Other factors included low-flow bathroom fixtures, green cleaning supplies, and use of recycled paper products. On April 30, 2012, One World Trade Center topped out, taking the Empire State Building's record of tallest in the city. By 2014, the building was owned by the Empire State Realty Trust (ESRT), with Anthony Malkin as chairman, CEO, and president. The ESRT was a public company, having begun trading publicly on the New York Stock Exchange the previous year. In August 2016, the Qatar Investment Authority (QIA) was issued new fully diluted shares equivalent to 9.9% of the trust; this investment gave them partial ownership of the entirety of the ESRT's portfolio, and as a result, partial ownership of the Empire State Building. The trust's president John Kessler called it an "endorsement of the company's irreplaceable assets". The investment has been described by the real-estate magazine The Real Deal as "an unusual move for a sovereign wealth fund", as these funds typically buy direct stakes in buildings rather than real estate companies. Other foreign entities that have a stake in the ESRT include investors from Norway, Japan, and Australia. A renovation of the Empire State Building was commenced in the 2010s to further improve energy efficiency, public areas, and amenities. In August 2018, to improve the flow of visitor traffic, the main visitor's entrance was shifted to 20 West 34th Street as part of a major renovation of the observatory lobby. The new lobby includes several technological features, including large LED panels, digital ticket kiosks in nine languages, and a two-story architectural model of the building surrounded by two metal staircases. The first phase of the renovation, completed in 2019, features an updated exterior lighting system and digital hosts. The new lobby also features free Wi-Fi provided for those waiting. 
A exhibit with nine galleries, opened in July 2019. The 102nd floor observatory, the third phase of the redesign, re-opened to the public on October 12, 2019. That portion of the project included outfitting the space with floor-to-ceiling glass windows and a brand-new glass elevator. The final portion of the renovations to be completed was a new observatory on the 80th floor, which opened on December 2, 2019. In total, the renovation had cost $165 million and taken four years to finish. Design The Empire State Building is tall to its 102nd floor, or including its pinnacle. The building has 86 usable stories; the first through 85th floors contain of commercial and office space, while the 86th story contains an observatory. The remaining 16 stories are part of the Art Deco spire, which is capped by an observatory on the 102nd floor; the spire does not contain any intermediate levels and is used mostly for mechanical purposes. Atop the 102nd story is the pinnacle, much of which is covered by broadcast antennas, and surmounted with a lightning rod. It was the first building to have more than 100 floors. The building has been named one of the Seven Wonders of the Modern World by the American Society of Civil Engineers. The building and its street floor interior are designated landmarks of the New York City Landmarks Preservation Commission, and confirmed by the New York City Board of Estimate. It was designated as a National Historic Landmark in 1986. In 2007, it was first on the AIA's List of America's Favorite Architecture. Form The Empire State Building has a symmetrical massing, or shape, because of its large lot and relatively short base. The five-story base occupies the entire lot, while the 81-story tower above it is set back sharply from the base. There are smaller setbacks on the upper stories, allowing sunlight to illuminate the interiors of the top floors, and positioning these floors away from the noisy streets below. The setbacks are located at the 21st, 25th, 30th, 72nd, 81st, and 85th stories. The setbacks were mandated per the 1916 Zoning Resolution, which was intended to allow sunlight to reach the streets as well. Normally, a building of the Empire State's dimensions would be permitted to build up to 12 stories on the Fifth Avenue side, and up to 17 stories on the 33rd/34th Streets side, before it would have to utilize setbacks. However, with the largest setback being located above the base, the tower stories could contain a uniform shape. According to architectural writer Robert A. M. Stern, the building's form contrasted with the nearly contemporary, similarly designed 500 Fifth Avenue eight blocks north, which had an asymmetrical massing on a smaller lot. Facade The Empire State Building's art deco design is typical of pre–World War II architecture in New York. The facade is clad in Indiana limestone panels sourced from the Empire Mill in Sanders, Indiana, which give the building its signature blonde color. According to official fact sheets, the facade uses of limestone and granite, ten million bricks, and of aluminum and stainless steel. The building also contains 6,514 windows. The main entrance, composed of three sets of metal doors, is at the center of the Fifth Avenue facade's elevation, flanked by molded piers that are topped with eagles. Above the main entrance is a transom, a triple-height transom window with geometric patterns, and the golden letters above the fifth-floor windows. 
There are two entrances each on 33rd and 34th Streets, with modernistic, stainless steel canopies projecting from the entrances there. Above the secondary entrances are triple windows, less elaborate in design than those on Fifth Avenue. The storefronts on the first floor contain aluminum-framed doors and windows within a black granite cladding. The second through fourth stories consist of windows alternating with wide stone piers and narrower stone mullions. The fifth story contains windows alternating with wide and narrow mullions, and is topped by a horizontal stone sill. The facade of the tower stories is split into several vertical bays on each side, with windows projecting slightly from the limestone cladding. The bays are arranged into sets of one, two, or three windows on each floor. The windows in each bay are separated by vertical nickel-chrome steel mullions and connected by horizontal aluminum spandrels on each floor. Structural features The riveted steel frame of the building was originally designed to handle all of the building's gravitational stresses and wind loads. The amount of material used in the building's construction resulted in a very stiff structure when compared to other skyscrapers, with a structural stiffness of versus the Willis Tower's and the John Hancock Center's . A December 1930 feature in Popular Mechanics estimated that a building with the Empire State's dimensions would still stand even if hit with an impact of . Utilities are grouped in a central shaft. On the 6th through 86th stories, the central shaft is surrounded by a main corridor on all four sides. Per the final specifications of the building, the corridor is surrounded in turn by office space deep, maximizing office space at a time before air conditioning became commonplace. Each of the floors has 210 structural columns that pass through it, which provide structural stability but limit the amount of open space on these floors. However, the relative dearth of stone in the building allows for more space overall, with a 1:200 stone-to-building ratio in the Empire State compared to a 1:50 ratio in similar buildings. Interior According to official fact sheets, the Empire State Building weighs and has an internal volume of . The interior required of elevator cable and of electrical wires. It has a total floor area of , and each of the floors in the base cover . This gives the building capacity for 20,000 tenants and 15,000 visitors. The building contains 73 elevators. Its original 64 elevators, built by the Otis Elevator Company, are located in a central core and are of varying heights, with the longest of these elevators reaching from the lobby to the 80th floor. As originally built, there were four "express" elevators that connected the lobby, 80th floor, and several landings in between; the other 60 "local" elevators connected the landings with the floors above these intermediate landings. Of the 64 total elevators, 58 were for passenger use (comprising the four express elevators and 54 local elevators), and eight were for freight deliveries. The elevators were designed to move at . At the time of the skyscraper's construction, their practical speed was limited to per city law, but this limit was removed shortly after the building opened. Additional elevators connect the 80th floor to the six floors above it, as the six extra floors were built after the original 80 stories were approved. 
The elevators were mechanically operated until 2011, when they were replaced with automatic elevators during the $550 million renovation of the building. An additional elevator connects the 86th and 102nd floor observatories, which allows visitors to access the 102nd floor observatory after having their tickets scanned. It also allows employees to access the mechanical floors located between the 87th and 101st floors. The Empire State Building has 73 elevators in all, including service elevators. Lobby The original main lobby is accessed from Fifth Avenue, on the building's east side, and contains an entrance with one set of double doors between a pair of revolving doors. At the top of each doorway is a bronze motif depicting one of three "crafts or industries" used in the building's construction—Electricity, Masonry, and Heating. The lobby contains two tiers of marble, a lighter marble on the top, above the storefronts, and a darker marble on the bottom, flush with the storefronts. There is a pattern of zigzagging terrazzo tiles on the lobby floor, which leads from the entrance on the east to the aluminum relief on the west. The chapel-like three-story-high lobby, which runs parallel to 33rd and 34th Streets, contains storefronts on both its northern and southern sides. These storefronts are framed on each side by tubes of dark "modernistically rounded marble", according to the New York City Landmarks Preservation Commission, and above by a vertical band of grooves set into the marble. Immediately inside the lobby is an airport-style security checkpoint. The side entrances from 33rd and 34th Streets lead to two-story-high corridors around the elevator core, crossed by stainless steel and glass-enclosed bridges at the second floor. The walls on both the northern and southern sides of the lobby house storefronts and escalators to a mezzanine level. At the west end of the lobby is an aluminum relief of the skyscraper as it was originally built (i.e. without the antenna). The relief, which was intended to provide a welcoming effect, contains an embossing of the building's outline, accompanied by what the Landmarks Preservation Commission describes as "the rays of an aluminum sun shining out behind [the building] and mingling with aluminum rays emanating from the spire of the Empire State Building". In the background is a state map of New York with the building's location marked by a "medallion" in the very southeast portion of the outline. A compass is located in the bottom right and a plaque to the building's major developers is on the bottom left. The plaque at the western end of the lobby is located on the eastern interior wall of a one-story tall rectangular-shaped corridor that surrounds the banks of escalators, with a similar design to the lobby. The rectangular-shaped corridor consists of two long hallways on the northern and southern sides of the rectangle, as well as a shorter hallway on the eastern side and another long hallway on the western side. At both ends of the northern and southern corridors, there is a bank of four low-rise elevators in between the corridors. The western side of the rectangular elevator-bank corridor extends north to the 34th Street entrance and south to the 33rd Street entrance. It borders three large storefronts and leads to escalators that go both to the second floor and to the basement. 
Going from west to east, there are secondary entrances to 34th and 33rd Streets from both the northern and southern corridors, respectively, at approximately the two-thirds point of each corridor. Until the 1960s, an art deco mural, inspired by both the sky and the Machine Age, was installed in the lobby ceilings. Subsequent damage to these murals, designed by artist Leif Neandross, resulted in reproductions being installed. Renovations to the lobby in 2009, such as replacing the clock over the information desk in the Fifth Avenue lobby with an anemometer and installing two chandeliers intended to be part of the building when it originally opened, revived much of its original grandeur. The north corridor contained eight illuminated panels created in 1963 by Roy Sparkia and Renée Nemorov, in time for the 1964 World's Fair, depicting the building as the Eighth Wonder of the World alongside the traditional seven. The building's owners installed a series of paintings by the New York artist Kysa Johnson in the concourse level. Johnson later filed a federal lawsuit, in January 2014, under the Visual Artists Rights Act alleging the negligent destruction of the paintings and damage to her reputation as an artist. As part of the building's 2010 renovation, Denise Amses commissioned a work consisting of 15,000 stars and 5,000 circles, superimposed on a etched-glass installation, in the lobby. Above the 102nd floor The final stage of the building was the installation of a hollow mast, a steel shaft fitted with elevators and utilities, above the 86th floor. At the top would be a conical roof and the 102nd-floor docking station. Inside, the elevators would ascend from the 86th floor ticket offices to a 101st-floor waiting room. From there, stairs would lead to the 102nd floor, where passengers would enter the airships. The airships would have been moored to the spire at the equivalent of the building's 106th floor. As constructed, the mast contains four rectangular tiers topped by a cylindrical shaft with a conical pinnacle. On the 102nd floor (formerly the 101st floor), there is a door with stairs ascending to the 103rd floor (formerly the 102nd). This was built as a disembarkation floor for airships tethered to the building's spire, and has a circular balcony outside. It is now an access point to reach the spire for maintenance. The room now contains electrical equipment, but celebrities and dignitaries may also be given permission to take pictures there. Above the 103rd floor, there is a set of stairs and a ladder to reach the spire for maintenance work. The mast's 480 windows were all replaced in 2015. The mast serves as the base of the building's broadcasting antenna. Broadcast stations Broadcasting began at the Empire State Building on December 22, 1931,
In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure "Nordic race" or "Aryan" genetic pool and the eventual elimination of "unfit" races. Many leading British politicians subscribed to the theories of eugenics. Winston Churchill supported the British Eugenics Society and was an honorary vice president for the organization. Churchill believed that eugenics could solve "race deterioration" and reduce crime and poverty. Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward, the English writer G. K. Chesterton, the German-American anthropologist Franz Boas, who argued that advocates of eugenics greatly over-estimate the influence of biology, and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward's 1913 article "Eugenics, Euthenics, and Eudemics", Chesterton's 1917 book Eugenics and Other Evils, and Boas' 1916 article "Eugenics" (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Sutherland identified eugenists as a major obstacle to the eradication and cure of tuberculosis in his 1917 address "Consumption: Its Cause and Cure", and criticism of eugenists and Neo-Malthusians in his 1921 book Birth Control led to a writ for libel from the eugenist Marie Stopes. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben. Other biologists such as J. B. S. Haldane and R. A. Fisher expressed skepticism that sterilization of "defectives" would lead to the disappearance of undesirable genetic traits. Among institutions, the Catholic Church was an opponent of state-enforced sterilizations. Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party. The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii. In this, Pope Pius XI explicitly condemned sterilization laws: "Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason." As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals (such as the playwright G. B. Shaw). Many countries enacted various eugenics policies, including genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide. By 2014, gene selection (rather than "people selection") was made possible through advances in genome editing, leading to what is sometimes called new eugenics, also known as "neo-eugenics", "consumer eugenics", or "liberal eugenics". Eugenics in the United States Anti-miscegenation laws in the United States made it a crime for individuals to wed someone categorized as belonging to a different race. These laws were part of a broader policy of racial segregation in the United States to minimize contact between people of different ethnicities. 
Race laws and practices in the United States were explicitly used as models by the Nazi regime when it developed the Nuremberg Laws, stripping Jewish citizens of their citizenship. Nazism and the decline of eugenics The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and emulated eugenic legislation for the sterilization of "defectives" that had been pioneered in the United States once he took power. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as "degenerate" or "unfit", and therefore led to segregation, institutionalization, sterilization, and even mass murder. The Nazi policy of identifying German citizens deemed mentally or physically unfit and then systematically killing them with poison gas, referred to as the Aktion T4 campaign, is understood by historians to have paved the way for the Holocaust. By the end of World War II, many eugenics laws were abandoned, having become associated with Nazi Germany. H. G. Wells, who had called for "the sterilization of failures" in 1904, stated in his 1940 book The Rights of Man: Or What Are We Fighting For? that among the human rights, which he believed should be available to all people, was "a prohibition on mutilation, sterilization, torture, and any bodily punishment". After World War II, the practice of "imposing measures intended to prevent births within [a national, ethnical, racial or religious] group" fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide. The Charter of Fundamental Rights of the European Union also proclaims "the prohibition of eugenic practices, in particular those aiming at selection of persons". In spite of the decline in discriminatory eugenics laws, some government mandated sterilizations continued into the 21st century. During the ten years President Alberto Fujimori led Peru from 1990 to 2000, 2,000 persons were allegedly involuntarily sterilized. China maintained its one-child policy until 2015 as well as a suite of other eugenics based legislation to reduce population size and manage fertility rates of different populations. In 2007, the United Nations reported coercive sterilizations and hysterectomies in Uzbekistan. During the years 2005 to 2013, nearly one-third of the 144 California prison inmates who were sterilized did not give lawful consent to the operation. Modern eugenics Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, have argued that modern genetics is a back door to eugenics. 
This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products". In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction. Lee Kuan Yew, the founding father of Singapore, promoted eugenics as late as 1983. A proponent of nature over nurture, he stated that "intelligence is 80% nature and 20% nurture", and attributed the successes of his children to genetics. In his speeches, Lee urged highly educated women to have more children, claiming that "social delinquents" would dominate unless their fertility rate increased. In 1984, Singapore began providing financial incentives to highly educated women to encourage them to have more children. In 1985, incentives were significantly reduced after public uproar. In October 2015, the United Nations' International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th century eugenics movements. However, it is still problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the technology. Transhumanism is often associated with eugenics, although most transhumanists holding similar views nonetheless distance themselves from the term "eugenics" (preferring "germinal choice" or "reprogenetics") to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements. Prenatal screening can be considered a form of contemporary eugenics because it may lead to abortions of fetuses with undesirable traits. A system was proposed by California State Senator Nancy Skinner to compensate victims of the well-documented examples of prison sterilizations resulting from California's eugenics programs, but this did not pass by the bill's 2018 deadline in the Legislature. Meanings and types The term eugenics and its modern field of study were first formulated by Francis Galton in 1883, drawing on the recent work of his half-cousin Charles Darwin. Galton published his observations and conclusions in his book Inquiries into Human Faculty and Its Development. The origins of the concept began with certain interpretations of Mendelian inheritance and the theories of August Weismann. The word eugenics is derived from the Greek word eu ("good" or "well") and the suffix -genēs ("born"); Galton intended it to replace the word "stirpiculture", which he had used previously but which had come to be mocked due to its perceived sexual overtones. 
Galton defined eugenics as "the study of all agencies under human control which can improve or impair the racial quality of future generations". Historically, the idea of eugenics has been used to argue for a broad array of practices ranging from prenatal care for mothers deemed genetically desirable to the forced sterilization and murder of those deemed unfit. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today. Edwin Black, journalist and author of War Against the Weak, argues that eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is a cultural choice rather than a matter that can be determined through objective scientific inquiry. The most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics was tainted with scientific racism and pseudoscience. Early eugenicists were mostly concerned with factors of perceived intelligence that often correlated strongly with social class. These included Karl Pearson and Walter Weldon, who worked on this at the University College London. In his lecture "Darwinism, Medical Progress and Eugenics", Pearson claimed that everything concerning eugenics fell into the field of medicine. Eugenic policies have been conceptually divided into two categories. Positive eugenics is aimed at encouraging reproduction among the genetically advantaged; for example, the reproduction of the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning. Negative eugenics aimed to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable". This includes abortions, sterilization, and other methods of family planning. Both positive and negative eugenics can be coercive; in Nazi Germany, for example, abortion was illegal for women deemed by the state to be fit. Controversy over scientific and moral legitimacy Arguments for scientific validity The first major challenge to conventional eugenics based on genetic inheritance was made in 1915 by Thomas Hunt Morgan. He demonstrated the event of genetic mutation occurring outside of inheritance involving the discovery of the hatching of a fruit fly (Drosophila melanogaster) with white eyes from a family with red eyes, demonstrating that major genetic changes occurred outside of inheritance. Additionally, Morgan criticized the view that certain traits, such as intelligence and criminality, were hereditary because these traits were subjective. Despite Morgan's public rejection of eugenics, much of his genetic research was adopted by proponents of eugenics. The heterozygote test is used for the early detection of recessive hereditary diseases, allowing for couples to determine if they are at risk of passing genetic defects to a future child. The goal of the test is to estimate the likelihood of passing the hereditary disease to future descendants. 
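The arithmetic behind such carrier screening can be made concrete with a small sketch (an illustration only, using the textbook autosomal-recessive inheritance model rather than any specific clinical test): if both prospective parents test as heterozygous carriers, each child has a 1-in-4 chance of inheriting two recessive alleles and being affected.

```python
# Sketch: offspring risk for an autosomal recessive condition, given the
# carrier status of both parents as reported by a heterozygote screening test.
# Textbook Mendelian model; the function name and structure are illustrative.

def offspring_affected_probability(parent1_is_carrier: bool, parent2_is_carrier: bool) -> float:
    """Probability that a child inherits two copies of the recessive allele."""
    # A heterozygous carrier passes on the recessive allele with probability 1/2;
    # a non-carrier (assumed homozygous for the normal allele) with probability 0.
    p1 = 0.5 if parent1_is_carrier else 0.0
    p2 = 0.5 if parent2_is_carrier else 0.0
    return p1 * p2

print(offspring_affected_probability(True, True))   # 0.25 -> the classic 1-in-4 risk
print(offspring_affected_probability(True, False))  # 0.0  -> no affected offspring expected
```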
There are examples of eugenic acts that managed to lower the prevalence of recessive diseases, although not influencing the prevalence of heterozygote carriers of those diseases. The elevated prevalence of certain genetically transmitted diseases among the Ashkenazi Jewish population (Tay–Sachs, cystic fibrosis, Canavan's disease, and Gaucher's disease), has been decreased in current populations by the application of genetic screening. Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect. Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects a pleiotropic gene that could possibly be associated with a positive trait. Pekalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence since the two go together. Objections to scientific validity Eugenic policies may lead to a loss of genetic diversity. Further, a culturally-accepted "improvement" of the gene pool may result in extinction, due to increased vulnerability to disease, reduced ability to adapt to environmental change, and other factors that may
standard protocols such as POP or IMAP, or, as is more likely in a large corporate environment, with a proprietary protocol specific to Novell Groupwise, Lotus Notes or Microsoft Exchange Servers. Programs used by users for retrieving, reading, and managing email are called mail user agents (MUAs). When opening an email, it is marked as "read", which typically visibly distinguishes it from "unread" messages on clients' user interfaces. Email clients may allow hiding read emails from the inbox so the user can focus on the unread. Mail can be stored on the client, on the server side, or in both places. Standard formats for mailboxes include Maildir and mbox. Several prominent email clients use their own proprietary format and require conversion software to transfer email between them. Server-side storage is often in a proprietary format but since access is through a standard protocol such as IMAP, moving email from one server to another can be done with any MUA supporting the protocol. Many current email users do not run MTA, MDA or MUA programs themselves, but use a web-based email platform, such as Gmail or Yahoo! Mail, that performs the same tasks. Such webmail interfaces allow users to access their mail with any standard web browser, from any computer, rather than relying on a local email client. Filename extensions Upon reception of email messages, email client applications save messages in operating system files in the file system. Some clients save individual messages as separate files, while others use various database formats, often proprietary, for collective storage. A historical standard of storage is the mbox format. The specific format used is often indicated by special filename extensions: eml Used by many email clients including Novell GroupWise, Microsoft Outlook Express, Lotus notes, Windows Mail, Mozilla Thunderbird, and Postbox. The files contain the email contents as plain text in MIME format, containing the email header and body, including attachments in one or more of several formats. emlx Used by Apple Mail. msg Used by Microsoft Office Outlook and OfficeLogic Groupware. mbx Used by Opera Mail, KMail, and Apple Mail based on the mbox format. Some applications (like Apple Mail) leave attachments encoded in messages for searching while also saving separate copies of the attachments. Others separate attachments from messages and save them in a specific directory. URI scheme mailto The URI scheme, as registered with the IANA, defines the mailto: scheme for SMTP email addresses. Though its use is not strictly defined, URLs of this form are intended to be used to open the new message window of the user's mail client when the URL is activated, with the address as defined by the URL in the To: field. Many clients also support query string parameters for the other email fields, such as its subject line or carbon copy recipients. Types Web-based email Many email providers have a web-based email client (e.g. AOL Mail, Gmail, Outlook.com and Yahoo! Mail). This allows users to log into the email account by using any compatible web browser to send and receive their email. Mail is typically not downloaded to the web client, so can't be read without a current Internet connection. POP3 email servers The Post Office Protocol 3 (POP3) is a mail access protocol used by a client application to read messages from the mail server. Received messages are often deleted from the server. 
POP supports simple download-and-delete requirements for access to remote mailboxes (termed maildrop in the POP RFC's). POP3 allows you to download email messages on your local computer and read them even when you are offline. IMAP email servers The Internet Message Access Protocol (IMAP) provides features to manage a mailbox from multiple devices. Small portable devices like smartphones are increasingly used to check email while traveling and to make brief replies, larger devices with better keyboard access being used to reply at greater length. IMAP shows the headers of messages, the sender and the subject and the device needs to request to download specific messages. Usually, the mail is left in folders in the mail server. MAPI email servers Messaging Application Programming Interface (MAPI) is used by Microsoft Outlook to communicate to Microsoft Exchange Server - and to a range of other email server products such as Axigen Mail Server, Kerio Connect, Scalix, Zimbra, HP OpenMail, IBM Lotus Notes, Zarafa, and Bynari where vendors have added MAPI support to allow their products to be accessed directly via Outlook. Uses Business and organizational use Email has been widely accepted by businesses, governments and non-governmental organizations in the developed world, and it is one of the key parts of an 'e-revolution' in workplace communication (with the other key plank being widespread adoption of highspeed Internet). A sponsored 2010 study on workplace communication found 83% of U.S. knowledge workers felt email was critical to their success and productivity at work. It has some key benefits to business and other organizations, including: Facilitating logistics Much of the business world relies on communications between people who are not physically in the same building, area, or even country; setting up and attending an in-person meeting, telephone call, or conference call can be inconvenient, time-consuming, and costly. Email provides a method of exchanging information between two or more people with no set-up costs and that is generally far less expensive than a physical meeting or phone call. Helping with synchronization With real time communication by meetings or phone calls, participants must work on the same schedule, and each participant must spend the same amount of time in the meeting or call. Email allows asynchrony: each participant may control their schedule independently. Batch processing of incoming emails can improve workflow compared to interrupting calls. Reducing cost Sending an email is much less expensive than sending postal mail, or long distance telephone calls, telex or telegrams. Increasing speed Much faster than most of the alternatives. Creating a "written" record Unlike a telephone or in-person conversation, email by its nature creates a detailed written record of the communication, the identity of the sender(s) and recipient(s) and the date and time the message was sent. In the event of a contract or legal dispute, saved emails can be used to prove that an individual was advised of certain issues, as each email has the date and time recorded on it. Possibility of auto-processing and improved distribution As well pre-processing of customer's orders and/or addressing the person in charge can be realized by automated procedures. Email marketing Email marketing via "opt-in" is often successfully used to send special sales offerings and new product information. 
Depending on the recipient's culture, email sent without permission—such as an "opt-in"—is likely to be viewed as unwelcome "email spam". Personal use Personal computer Many users access their personal emails from friends and family members using a personal computer in their house or apartment. Mobile Email has become used on smartphones and on all types of computers. Mobile "apps" for email increase accessibility to the medium for users who are out of their homes. While in the earliest years of email, users could only access email on desktop computers, in the 2010s, it is possible for users to check their email when they are away from home, whether they are across town or across the world. Alerts can also be sent to the smartphone or other devices to notify them immediately of new messages. This has given email the ability to be used for more frequent communication between users and allowed them to check their email and write messages throughout the day. , there were approximately 1.4 billion email users worldwide and 50 billion non-spam emails that were sent daily. Individuals often check emails on smartphones for both personal and work-related messages. It was found that US adults check their email more than they browse the web or check their Facebook accounts, making email the most popular activity for users to do on their smartphones. 78% of the respondents in the study revealed that they check their email on their phone. It was also found that 30% of consumers use only their smartphone to check their email, and 91% were likely to check their email at least once per day on their smartphone. However, the percentage of consumers using email on a smartphone ranges and differs dramatically across different countries. For example, in comparison to 75% of those consumers in the US who used it, only 17% in India did. Declining use among young people , the number of Americans visiting email web sites had fallen 6 percent after peaking in November 2009. For persons 12 to 17, the number was down 18 percent. Young people preferred instant messaging, texting and social media. Technology writer Matt Richtel said in The New York Times that email was like the VCR, vinyl records and film cameras—no longer cool and something older people do. A 2015 survey of Android users showed that persons 13 to 24 used messaging apps 3.5 times as much as those over 45, and were far less likely to use email. Issues Attachment size limitation Email messages may have one or more attachments, which are additional files that are appended to the email. Typical attachments include Microsoft Word documents, PDF documents, and scanned images of paper documents. In principle, there is no technical restriction on the size or number of attachments. However, in practice, email clients, servers, and Internet service providers implement various limitations on the size of files, or complete email - typically to 25MB or less. Furthermore, due to technical reasons, attachment sizes as seen by these transport systems can differ from what the user sees, which can be confusing to senders when trying to assess whether they can safely send a file by email. Where larger files need to be shared, various file hosting services are available and commonly used. Information overload The ubiquity of email for knowledge workers and "white collar" employees has led to concerns that recipients face an "information overload" in dealing with increasing volumes of email. 
With the growth in mobile devices, by default employees may also receive work-related emails outside of their working day. This can lead to increased stress and decreased satisfaction with work. Some observers even argue it could have a significant negative economic effect, as efforts to read the many emails could reduce productivity. Spam Email "spam" is unsolicited bulk email. The low cost of sending such email meant that, by 2003, up to 30% of total email traffic was spam, and was threatening the usefulness of email as a practical tool. The US CAN-SPAM Act of 2003 and similar laws elsewhere had some impact, and a number of effective anti-spam techniques now largely mitigate the impact of spam by filtering or rejecting it for most users, but the volume sent is still very high—and increasingly consists not of advertisements for products, but malicious content or links. In September 2017, for example, the proportion of spam to legitimate email rose to 59.56%. The percentage of spam email in 2021 is estimated to be 85%. Malware A range of malicious email types exist. These range from various types of email scams, including "social engineering" scams such as advance-fee scam "Nigerian letters", to phishing, email bombardment and email worms. Email spoofing Email spoofing occurs when the email message header is designed to make the message appear to come from a known or trusted source. Email spam and phishing methods typically use spoofing to mislead the recipient about the true message origin. Email spoofing may be done as a prank, or as part of a criminal effort to defraud an individual or organization. An example of a potentially fraudulent email spoofing is if an individual creates an email that appears to be an invoice from a major company, and then sends it to one or more recipients. In some cases, these fraudulent emails incorporate the logo of the purported organization and even the email address may appear legitimate. Email bombing Email bombing is the intentional sending of large volumes of messages to a target address. The overloading of the target email address can render it unusable and can even cause the mail server to crash. Privacy concerns Today it can be important to distinguish between the Internet and internal email systems. Internet email may travel and be stored on networks and computers without the sender's or the recipient's control. During the transit time it is possible that third parties read or even modify the content. Internal mail systems, in which the information never leaves the organizational network, may be more secure, although information technology personnel and others whose function may involve monitoring or managing may be accessing the email of other employees. Email privacy, without some security precautions, can be compromised because: email messages are generally not encrypted. email messages have to go through intermediate computers before reaching their destination, meaning it is relatively easy for others to intercept and read messages. many Internet Service Providers (ISP) store copies of email messages on their mail servers before they are delivered. The backups of these can remain for up to several months on their server, despite deletion from the mailbox. the "Received:"-fields and other information in the email can often identify the sender, preventing anonymous communication. 
web bugs invisibly embedded in HTML content can alert the sender of any email whenever an email is rendered as HTML (some e-mail clients do this when the user reads, or re-reads the e-mail) and from which IP address. It can also reveal whether an email was read on a smartphone or a PC, or Apple Mac device via the user agent string. There are cryptography applications that can serve as a remedy to one or more of the above. For example, Virtual Private Networks or the Tor network can be used to encrypt traffic from the user machine to a safer network while GPG, PGP, SMEmail, or S/MIME can be used for end-to-end message encryption, and SMTP STARTTLS or SMTP over Transport Layer Security/Secure Sockets Layer can be used to encrypt communications for a single mail hop between the SMTP client and the SMTP server. Additionally, many mail user agents do not protect logins and passwords, making them easy to intercept by an attacker. Encrypted authentication schemes such as SASL prevent this. Finally, the attached files share many of the same hazards as those found in peer-to-peer filesharing. Attached files may contain trojans or viruses. Legal contracts It is possible for an exchange of emails to form a binding contract, so users must be careful about what they send through email correspondence. A signature block on an email may be interpreted as satisfying a signature requirement for a contract. Flaming Flaming occurs when a person sends a message (or many messages) with angry or antagonistic content. The term is derived from the use of the word incendiary to describe particularly heated email discussions.
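As a concrete illustration of the single-hop protection mentioned above, the following sketch uses Python's standard smtplib module to submit a message over SMTP with STARTTLS and an authenticated login. The host name, addresses, and password are placeholders, and the snippet protects only the connection from the client to that submission server, not the later hops to the recipient.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"        # placeholder sender
    msg["To"] = "bob@example.org"            # placeholder recipient
    msg["Subject"] = "STARTTLS example"
    msg.set_content("This text is encrypted only on the hop to the submission server.")

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()                    # upgrade the plain connection to TLS
        server.login("alice@example.com", "app-password")  # credentials sent over TLS
        server.send_message(msg)

End-to-end protection of the message body itself would still require something like PGP or S/MIME, as noted above.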
The 1997 book Smileys by David Sanderson included over 650 different emoticons, and James Marshall's online dictionary of emoticons listed over two thousand in the early 2000s. A researcher at Stanford University surveyed the emoticons used in four million Twitter messages and found that the smiling emoticon without a hyphen "nose" was much more common than the original version with the hyphen. Linguist Vyvyan Evans argues that this represents a shift in usage by younger users as a form of covert prestige: rejecting a standard usage in order to demonstrate in-group membership. Inspired by Fahlman's idea of using faces in language, the Loufrani family established The Smiley Company in 1996. Nicolas Loufrani developed hundreds of different emoticons, including 3D versions. His designs were registered at the United States Copyright Office in 1997 and appeared online as .gif files in 1998. These were the first graphical representations of the originally text-based emoticon. He published his icons as well as emoticons created by others, along with their ASCII versions, in an online Smiley Dictionary in the early 2000s. This dictionary included over 3,000 different smileys and was published as a book called Dico Smileys in 2002. Fahlman has stated that he sees emojis as "the remote descendants of this thing I did." The original smileys were sold by Fahlman as non-fungible tokens for $237,500 in 2021. Styles Western Usually, emoticons in Western style have the eyes on the left, followed by the nose and the mouth. The two-character version :) which omits the nose is also very popular. The most basic emoticons are relatively consistent in form, but each of them can be transformed by being rotated (making them tiny ambigrams), with or without a hyphen (nose). There are also many possible variations that give emoticons new definitions, such as changing a character to express a new feeling or to slightly change the mood of the emoticon. For example, :( equals sad and :(( equals very sad. Weeping can be written as :'(. A blush can be expressed as :">. Others include a wink ;), a grin :D, a smug face :->, and a tongue sticking out ;P, such as when blowing a raspberry. An often used combination is <3 for a heart, and </3 for a broken heart. :O is also sometimes used to depict shock. :/ is used to depict melancholy, disappointment, or disapproval. :| is used to depict a neutral face. A broad grin is sometimes shown with crinkled eyes to express further amusement; XD and the addition of further "D" letters can suggest laughter or extreme amusement, e.g. XDDDD. The same is true for X3, but the three represents an animal's mouth. There are other variations, including >:( for anger, or >:D for an evil grin, which can be, again, used in reverse, for an unhappy angry face, in the shape of D:<. =K for vampire teeth, :s for a grimace, and :P for a tongue sticking out can be used to denote a flirting or joking tone, or may imply a second meaning in the sentence preceding them. As computers offer increasing built-in support for non-Western writing systems, it has become possible to use other glyphs to build emoticons. The 'shrug' emoticon, ¯\_(ツ)_/¯, uses the glyph ツ from the Japanese katakana writing system. An equal sign is often used for the eyes in place of the colon, seen as =), without changing the meaning of the emoticon. 
In these instances, the hyphen is almost always either omitted or, occasionally, replaced with an "o" as in =O). In most circles it has become acceptable to omit the hyphen, whether a colon or an equal sign is used for the eyes, but in some areas of usage people still prefer the larger, more traditional emoticon :-) or :^). One linguistic study has indicated that the use of a nose in an emoticon may be related to the user's age, with younger people less likely to use a nose. Similar-looking characters are commonly substituted for one another: for instance, o, O, and 0 can all be used interchangeably, sometimes for subtly different effect or, in some cases, one type of character may look better in a certain font and therefore be preferred over another. It is also common for the user to replace the rounded brackets used for the mouth with other, similar brackets, such as ] instead of ). Some variants are also more common in certain countries due to keyboard layouts. For example, the smiley =) may occur in Scandinavia, where the keys for = and ) are placed right beside each other. However, the :) variant is without a doubt the dominant one in Scandinavia, making the =) version a rarity. Diacritical marks are sometimes used. The letters Ö and Ü can be seen as an emoticon, as the upright version of :O (meaning that one is surprised) and :D (meaning that one is very happy) respectively. Some emoticons may be read right to left instead, and in fact, can only be written using standard ASCII keyboard characters this way round; for example D: which refers to being shocked or anxious, opposite to the large grin of :D. On the Russian-speaking Internet, the right parenthesis ) is used as a smiley. Multiple parentheses )))) are used to express greater happiness, amusement or laughter. It is commonly placed at the end of a sentence. The colon is omitted due to being in a lesser-known position on the ЙЦУКЕН keyboard layout. Japanese (kaomoji) Users from Japan popularized a style of emoticons (顔文字, kaomoji, lit. 'face characters') that can be understood without tilting one's head. This style arose on ASCII NET, an early Japanese online service, in the 1980s. They often include Japanese typography (katakana) in addition to ASCII characters, and in contrast to Western-style emoticons, tend to emphasize the eyes, rather than the mouth. Wakabayashi Yasushi is credited with inventing the original kaomoji in 1986. Similar-looking emoticons were used on the Byte Information Exchange (BIX) around the same time. Whereas Western emoticons were first used by US computer scientists, kaomoji were most commonly used by young girls and fans of Japanese comics (manga). Linguist Ilaria Moschini suggests this is partly due to the kawaii ('cuteness') aesthetic of kaomoji. These emoticons are usually found in a format similar to (*_*). The asterisks indicate the eyes; the central character, commonly an underscore, the mouth; and the parentheses, the outline of the face. Different emotions can be expressed by changing the character representing the eyes: for example, "T" can be used to express crying or sadness: (T_T). T_T may also be used to mean "unimpressed". The emphasis on the eyes in this style is reflected in the common usage of emoticons that use only the eyes, e.g. ^^. Looks of stress are represented by the likes of (x_x), while (-_-;) is a generic emoticon for nervousness, the semicolon representing an anxiety-induced sweat drop (discussed further below). /// can indicate embarrassment by symbolizing blushing. 
Characters like hyphens or periods can replace the underscore; the period is often used for a smaller, "cuter" mouth, or to represent a nose, e.g. (^.^). Alternatively, the mouth/nose can be left out entirely, e.g. (^^). Parentheses are sometimes replaced with braces or square brackets, e.g. {^_^} or [o_0]. Many times, the parentheses are left out completely, e.g. ^^, >.< , o_O, O.O, e_e, or e.e. A quotation mark ", apostrophe ', or semicolon ; can be added to the emoticon to imply apprehension or embarrassment, in the same way that a sweat drop is used in manga and anime. Microsoft IME 2000 (Japanese) or later supports the input of emoticons like the above by enabling the Microsoft IME Spoken Language/Emotion Dictionary. In IME 2007, this support was moved to the Emoticons dictionary. Such dictionaries allow users to call up emoticons by typing words that represent them. Communication software allowing the use of Shift JIS encoded characters rather than just ASCII allowed for the development of more kaomoji using the extended character set including hiragana, katakana, kanji, symbols, Greek and Cyrillic alphabet, such as , (`Д´) or (益). Modern communication software generally utilizes Unicode, which allows for the incorporation of characters from other languages and a variety of symbols into the kaomoji, as in (◕‿◕✿) (❤ω❤) (づ ◕‿◕ )づ (▰˘◡˘▰). Further variations can be produced using Unicode combining characters, as in ٩(͡๏̯͡๏)۶ or ᶘᵒᴥᵒᶅ. Combination of Japanese and Western styles English-language anime forums adopted those Japanese-style emoticons that could be used with the standard ASCII characters available on Western keyboards. Because of this, they are often called "anime style" emoticons in English. They have since seen use in more mainstream venues, including online gaming, instant-messaging, and non-anime-related discussion forums. Emoticons such as <( ^.^ )>, <(^_^<), <(o_o<), <( -'.'- )>, <('.'-^), or (>';..;')> which include the parentheses, mouth or nose, and arms (especially those represented by the inequality signs < or >) also are often referred to as "" in reference to their likeness to Nintendo's video game character Kirby. The parentheses are sometimes dropped when used in the English language context, and the underscore of the mouth may be extended as an intensifier for the emoticon in question, e.g. ^_^ for very happy. The emoticon uses the Eastern style, but incorporates a depiction of the Western "middle-finger flick-off" using a "t" as the arm, hand, and finger. Using a lateral click for the nose such as in is believed to originate from the Finnish image-based message board Ylilauta, and is called a "Lenny face". Another apparently Western invention is the use of emoticons like *,..,* or `;..;´ to indicate vampires or other mythical beasts with fangs. Exposure to both Western and Japanese style emoticons or kaomoji through blogs, instant messaging, and forums featuring a blend of Western and Japanese pop culture has given rise to many emoticons that have an upright viewing format. The parentheses are often dropped, and these emoticons typically only use alphanumeric characters and the most commonly used English punctuation marks. Emoticons such as -O-, -3-, -w-, '_', ;_;, T_T, :>, and .V. are used to convey mixed emotions that are more difficult to convey with traditional emoticons. Characters are sometimes added to emoticons to convey an anime- or manga-styled sweat drop, for example ^_^', !>_<!, <@>_<@>;;, ;O;, and *u*. 
The equals sign can also be used for closed, anime-looking eyes, for example =0=, =3=, =w=, =A=, and =7=. The uwu face (and its variations UwU and OwO), is an emoticon of Japanese origin which denotes a cute expression or emotion felt by the user. In Brazil, sometimes combining characters (accents) are added to emoticons to represent eyebrows, as in ò_ó, ó_ò, õ_o, ù_u, o_Ô, or ( •̀ ᴗ •́ ). 2channel Users of the Japanese discussion board 2channel, in particular, have developed a wide variety of unique emoticons using characters from various scripts, such as Kannada, as in ಠ_ಠ (for a look of disapproval, disbelief, or confusion). These were quickly picked up by 4chan and spread to other Western sites soon after. Some have taken on a
as communication (1982) Carnegie Mellon computer scientist Scott Fahlman is generally credited with the invention of the digital text-based emoticon in 1982. Carnegie Mellon's bulletin board system (BBS) was a forum used by students and teachers for discussing a variety of topics, where jokes often created misunderstandings. In response to the difficulty of conveying humor or sarcasm in plain text, Fahlman proposed colon–hyphen–right bracket as a label for "attempted humor". The use of ASCII symbols, a standard set of codes representing typographical marks, was essential to allow the symbols to be displayed on any computer. Fahlman sent the following message after an incident in which a humorous warning about a mercury spill in an elevator was misunderstood as serious:

    19-Sep-82 11:44 Scott E Fahlman :-)
    From: Scott E Fahlman <Fahlman at Cmu-20c>

    I propose that the following character sequence for joke markers: :-) Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use :-(

Other suggestions on the forum included an asterisk (*) and an ampersand (&), the former meant to represent a person doubled over in laughter, as well as a percent sign (%) and a pound sign (#). Within a few months, the smiley had spread to the ARPANET and Usenet. Many of the proposals that pre-dated Fahlman either drew faces using alphabetic symbols or created digital pictograms. Fahlman took it a step further by suggesting that his emoticon could not only communicate emotion but also replace language; this use of emoticons as a form of communication is why Fahlman, rather than earlier claimants, is generally seen as the creator of emoticons. Later evolution Since the 1990s, "smiley" emoticons (colon, hyphen and bracket) have become integral to digital communications and have inspired a variety of other emoticons, including the "winking" face using a semicolon, the "surprised" face with a letter o in place of a bracket, and a visual representation of the Face with Tears of Joy emoji or the acronym LOL. 
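Since the Western style described above is built from a small set of interchangeable parts (eyes, an optional hyphen "nose", and a mouth), simple pattern matching can pick many of these emoticons out of text. The Python sketch below is only a rough illustration covering a handful of the variants mentioned in this section; the character classes are not an exhaustive inventory.

    import re

    # Eyes (colon, semicolon wink, or equals sign), optional hyphen nose,
    # and a mouth (smile, frown, grin, tongue, slash, or vertical bar).
    EMOTICON = re.compile(r"""
        [:;=]          # eyes
        -?             # optional "nose"
        [)(DPp/|]      # mouth
    """, re.VERBOSE)

    text = "Thanks for the help :-) see you tomorrow ;P well, maybe :/"
    print(EMOTICON.findall(text))   # [':-)', ';P', ':/']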
EPOCH may also refer to: Time-related Any historical era Epoch (astronomy), a moment in time used as a reference for the orbital elements of a celestial body Epoch (computing), a moment from which system time is usually measured Epoch (cosmology) or cosmologic epoch, a phase in the development of the universe since the Big Bang On the geologic time scale, an epoch is a span of time smaller than a period and larger than an age Epoch (race), racial periods in Blavatsky's
game), a 1981 space combat game for the Apple II Epoch (The Brave album), 2016 Epoch (Tycho album), 2016 Epoch, a 2006 album by Rip Slyme Games Epoch Co., a Japanese toy and computer games company Epoch Game Pocket Computer, an early hand-held game console produced by Epoch Co. Epoch (Chrono Trigger), a flying time machine in role-playing game Chrono Trigger Epoch Hunter, a boss in Caverns of Time from the game World of Warcraft Publications Epoch (American magazine), literary magazine of Cornell University Epoch (Russian magazine), literary magazine by Fyodor Dostoyevsky and his brother Mikhail Ha-Tsfira (lit. Epoch), a Hebrew language newspaper published in 1862 and 1874–1931 The Epoch Times, a privately owned Falun Gong-linked newspaper Science and technology EPOCH (chemotherapy), a chemotherapy regimen EPOCh (Extrasolar Planet Observation and Characterization),
among researchers, using the Erdős number as a proxy. For example, Erdős collaboration graphs can tell us how authors cluster, how the number of co-authors per paper evolves over time, or how new theories propagate. Several studies have shown that leading mathematicians tend to have particularly low Erdős numbers. The median Erdős number of Fields Medalists is 3. Only 7,097 (about 5% of mathematicians with a collaboration path) have an Erdős number of 2 or lower. As time passes, the lowest Erdős number that can still be achieved will necessarily increase, as mathematicians with low Erdős numbers die and become unavailable for collaboration. Still, historical figures can have low Erdős numbers. For example, renowned Indian mathematician Srinivasa Ramanujan has an Erdős number of only 3 (through G. H. Hardy, Erdős number 2), even though Paul Erdős was only 7 years old when Ramanujan died. Definition and application in mathematics To be assigned an Erdős number, someone must be a coauthor of a research paper with another person who has a finite Erdős number. Paul Erdős has an Erdős number of zero. Anybody else's Erdős number is k + 1, where k is the lowest Erdős number among their coauthors. The American Mathematical Society provides a free online tool to determine the Erdős number of every mathematical author listed in the Mathematical Reviews catalogue. Erdős wrote around 1,500 mathematical articles in his lifetime, mostly co-written. He had 509 direct collaborators; these are the people with Erdős number 1. The people who have collaborated with them (but not with Erdős himself) have an Erdős number of 2 (12,600 people as of 7 August 2020), those who have collaborated with people who have an Erdős number of 2 (but not with Erdős or anyone with an Erdős number of 1) have an Erdős number of 3, and so forth. A person with no such coauthorship chain connecting to Erdős has an Erdős number of infinity (or an undefined one). Since the death of Paul Erdős, the lowest Erdős number that a new researcher can obtain is 2. There is room for ambiguity over what constitutes a link between two authors. The American Mathematical Society collaboration distance calculator uses data from Mathematical Reviews, which includes most mathematics journals but covers other subjects only in a limited way, and which also includes some non-research publications. The Erdős Number Project web site, by contrast, does not include non-research publications such as elementary textbooks, joint editorships, obituaries, and the like. The "Erdős number of the second kind" restricts assignment of Erdős numbers to papers with only two collaborators. The Erdős number was most likely first defined in print by Casper Goffman, an analyst whose own Erdős number is 2. Goffman published his observations about Erdős' prolific collaboration in a 1969 article entitled "And what is your Erdős number?" See also some comments in an obituary by Michael Golomb. Fields medalists with Erdős number 2 include Atle Selberg, Kunihiko Kodaira, Klaus Roth, Alan Baker, Enrico Bombieri, David Mumford, Charles Fefferman, William Thurston, Shing-Tung Yau, Jean Bourgain, Richard Borcherds, Manjul Bhargava, Jean-Pierre Serre and Terence Tao. There are no Fields medalists with Erdős number 1; however, Endre Szemerédi is an Abel Prize Laureate with Erdős number 1. 
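The definition above amounts to a shortest-path computation in the coauthorship graph, which can be made concrete in a few lines of code. The Python sketch below uses an invented toy graph (the author names other than "Erdos" are placeholders, not real collaboration data); a breadth-first search assigns each reachable author the number k + 1, where k is the smallest number among that author's coauthors.

    from collections import deque

    # Toy coauthorship graph: each author maps to a list of coauthors.
    coauthors = {
        "Erdos": ["A", "B"],
        "A": ["Erdos", "C"],
        "B": ["Erdos", "C"],
        "C": ["A", "B", "D"],
        "D": ["C"],
        "E": [],               # no chain to Erdos: infinite/undefined number
    }

    def erdos_numbers(graph, root="Erdos"):
        """Breadth-first search from Erdos; unreachable authors get no entry."""
        dist = {root: 0}
        queue = deque([root])
        while queue:
            author = queue.popleft()
            for co in graph.get(author, []):
                if co not in dist:
                    dist[co] = dist[author] + 1
                    queue.append(co)
        return dist

    print(erdos_numbers(coauthors))
    # {'Erdos': 0, 'A': 1, 'B': 1, 'C': 2, 'D': 3}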
Most frequent Erdős collaborators While Erdős collaborated with hundreds of co-authors, there were some individuals with whom he co-authored dozens of papers. This is a list of the ten persons who most frequently co-authored with Erdős and their number of papers co-authored with Erdős (i.e. their number of collaborations). Related fields , all Fields Medalists have a finite Erdős number, with values that range between 2 and 6, and a median of 3. In contrast, the median Erdős number across all mathematicians (with a finite Erdős number) is 5, with an extreme value of 13. The table below summarizes the Erdős number statistics for Nobel prize laureates in Physics, Chemistry, Medicine and Economics. The first column counts the number of laureates. The second column counts the number of winners with a finite Erdős number. The third column is the percentage of winners with a finite Erdős number. The remaining columns report the minimum, maximum, average and median Erdős numbers among those laureates. Physics Among the Nobel Prize laureates in Physics, Albert Einstein and Sheldon Glashow have an Erdős number of 2. Nobel Laureates with an Erdős number of 3 include Enrico Fermi, Otto Stern, Wolfgang Pauli, Max Born, Willis E.
linked via Lander and his numerous collaborators. Similarly, collaboration with Gustavus Simmons opened the door for Erdős numbers within the cryptographic research community, and many linguists have finite Erdős numbers, many due to chains of collaboration with such notable scholars as Noam Chomsky (Erdős number 4), William Labov (3), Mark Liberman (3), Geoffrey Pullum (3), or Ivan Sag (4). There are also connections with arts fields. According to Alex Lopez-Ortiz, all the Fields and Nevanlinna prize winners during the three cycles in 1986 to 1994 have Erdős numbers of at most 9. Earlier mathematicians published fewer papers than modern ones, and more rarely published jointly written papers. The earliest person known to have a finite Erdős number is either Antoine Lavoisier (born 1743, Erdős number 13), Richard Dedekind (born 1831, Erdős number 7), or Ferdinand Georg Frobenius (born 1849, Erdős number 3), depending on the standard of publication eligibility. Martin Tompa proposed a directed graph version of the Erdős number problem, by orienting edges of the collaboration graph from the alphabetically earlier author to the alphabetically later author and defining the monotone Erdős number of an author to be the length of a longest path from Erdős to the author in this directed graph. He finds a path of this type of length 12. Also, Michael Barr suggests "rational Erdős numbers", generalizing the idea that a person who has written p joint papers with Erdős should be assigned Erdős number 1/p. From the collaboration multigraph of the second kind (although he also has a way to deal with the case of the first kind)—with one edge between two mathematicians for each joint paper they have produced—form an electrical network with a one-ohm resistor on each edge. The total resistance between two nodes tells how "close" these two nodes are. It has been argued that "for an individual researcher, a measure such as Erdős number captures the structural properties of [the] network whereas the h-index captures the citation impact of the publications," and that "One can be easily convinced that ranking in coauthorship networks should take into account both measures to generate a realistic and acceptable ranking." In 2004 William Tozier, a mathematician with an Erdős number of 4, auctioned off a co-authorship on eBay, hence providing the buyer with an Erdős number of 5. The winning bid of $1031 was posted by a Spanish mathematician, who however did not intend to pay but just placed the bid to stop what he considered a mockery. Variations A number of variations on the concept have been proposed to apply to other fields. The best known is the Bacon number (as in the game Six Degrees of Kevin Bacon), connecting actors to the actor Kevin Bacon by a chain of joint appearances in films. It was created in 1994, 25 years after Goffman's article on the Erdős number. A small number of people are connected to both Erdős and Bacon and thus have an Erdős–Bacon number, which combines the two numbers by taking their sum. One example is the actress-mathematician Danica McKellar, best known for playing Winnie Cooper on the TV series The Wonder Years. Her Erdős number is 4, and her Bacon number is 2. Further extension is possible. For example, the "Erdős–Bacon–Sabbath number" is the sum of the Erdős–Bacon number and the collaborative distance to the band Black Sabbath in terms of singing in public. 
Physicist Stephen Hawking had an Erdős–Bacon–Sabbath number of 8, and actress Natalie Portman has one of 11 (her Erdős number is 5). In chess, the Morphy number describes a player's connection to Paul Morphy, widely considered the greatest chess player of his time and an unofficial World Chess Champion. See also References External links Jerry Grossman, The Erdős Number Project. Contains statistics and a complete list of all mathematicians with
contract. About 20% of French school children attend private schools. Home schooling is permitted in France. Ireland Most schools in the Republic of Ireland are state-aided Catholic parish schools, established under diocesan patronage but with capital costs, teachers' salaries and a fee per head paid to the school. These are given to the school regardless of whether or not it requires its students to pay fees. (Although fee-paying schools are in the minority, there has been much criticism over the state aid they receive. Opponents claim that the aid gives them an unfair advantage.) There is a recent trend towards multi-denominational schools established by parents, and organised as limited companies without share capital. Parents and students are free to choose their own schools. If a school fails to attract students, it immediately loses its fees and eventually loses its teaching posts, and teachers are moved to other schools that are attracting students. The system is perceived to have achieved very successful outcomes for most Irish children. The 1995–97 "Rainbow Coalition" government, containing ministers from parties of the centre right and the left, introduced free third-level education to primary degree level. Critics charge that this has not increased the number of students from economically deprived backgrounds attending university. However, studies have shown that the removal of tuition fees at third level has increased the numbers of students overall and of students from lower socioeconomic backgrounds. Since the economic crisis of 2008 there has been extensive debate regarding the possible reintroduction of third-level fees. Sweden In Sweden, a system of school vouchers (called skolpeng) was introduced in 1992 at primary and secondary school level, enabling free choice among publicly run schools and privately run friskolor ("free schools"). The voucher is paid with public funds from the local municipality (kommun) directly to a school based solely on its number of students. Both public schools and free schools are funded the same way. Free schools can be run by not-for-profit groups as well as by for-profit companies, but may not charge top-up fees or select students other than on a first-come, first-served basis. Over 10% of Swedish pupils were enrolled in free schools in 2008 and the number is growing fast, leading the country to be viewed as a pioneer of the model. Per Unckel, governor of Stockholm and former Minister of Education, has promoted the system, saying "Education is so important that you can't just leave it to one producer, because we know from monopoly systems that they do not fulfill all wishes." The Swedish system has been recommended to Barack Obama by some commentators, including the Pacific Research Institute, which has released a documentary called Not As Good As You Think: Myth of the Middle Class Schools, a movie depicting positive benefits for middle class schools resulting from Sweden's voucher programs. A 2004 study concluded that school results in public schools improved due to the increased competition. However, Per Thulberg, director general of the Swedish National Agency for Education, has said that the system "has not led to better results" and in the 2000s Sweden's ranking in the PISA league tables worsened. Though Rachel Wolf, director of the New Schools Network, has suggested that Sweden's education standards had slipped for reasons other than as a result of free schools. 
A 2015 study was able to show that "an increase in the share of independent school students improves average short- and long-run outcomes, explained primarily by external effects (e.g. school competition)". Hong Kong A voucher system for children three to six years old who attend a non-profit kindergarten was implemented in Hong Kong in 2007. Each child receives HK$13,000 per year. The HK$13,000 subsidy is split into two parts: HK$10,000 subsidizes the school fee, and the remaining HK$3,000 supports kindergarten teachers in pursuing further education and obtaining a certificate in education. There are also some restrictions on the voucher system: parents can only choose non-profit schools with a yearly fee of less than HK$24,000. The government hoped that all kindergarten teachers could obtain an education certificate by the 2011–12 school year, at which point the subsidies were to be adjusted to HK$16,000 for each student, all of which would go toward the school fee. Milton Friedman criticised the system, saying "I do not believe that CE Mr. Tsang's proposal is properly structured." He said that the whole point of a voucher system is to provide a competitive marketplace, so it should not be limited to non-profit kindergartens. After protests by parents with children enrolled in for-profit kindergartens, the program was extended to children in for-profit kindergartens, but only for children enrolled in or before September 2007. The government will also provide a subsidy of up to HK$30,000 to for-profit kindergartens wanting to convert to non-profit status. Pakistan In Pakistani Punjab, the Education Voucher Scheme (EVS) was introduced in 2005 by Dr. Allah Bakhsh Malik, Managing Director and Chief Executive of the Punjab Education Foundation (PEF), targeting urban slums and the poorest of the poor. The initial study was sponsored by the Open Society Institute, New York. Professor Henry M. Levin provided pro bono services for children of poor families from Punjab. To ensure educational justice and integration, the government must ensure that the poorest families have equal access to quality education. The voucher scheme was designed by Teachers College, Columbia University, and the Open Society Institute. It aims to promote freedom of choice, efficiency, equity, and social cohesion. A pilot project was started in 2006 in the urban slums of Sukhnehar, Lahore, where a survey showed that all households lived below the poverty line. Through the EVS, the foundation would deliver education vouchers to every household with children 5–16 years of age. The vouchers would be redeemable against tuition payments at participating private schools. In the pilot stage, 1,053 households were given an opportunity to send their children to a private school of their choice. The EVS makes its partner schools accountable to the parents rather than to the bureaucrats at the Ministry of Education. In the FAS program, every school principal has the choice to admit a student or not. However, in the EVS, a partner school cannot refuse a student if the student has a voucher and the family has chosen that school. The partner schools are also accountable to the PEF: they are subject to periodic reviews of their student learning outcomes, additional private investments, and improvements in working conditions of the teachers. The EVS provides an incentive for parents to send their children to school, and so it has become a source of competition among private schools seeking to join the program.
When it comes to the selection of schools, the following criteria are applied across the board: (i) The fee paid by the PEF to EVS partner schools is PKR 550 to per child per month. Schools charging higher fees can also apply to the program, but they will not be paid more than PKR 1200, and they will not be entitled to charge the difference to students' families. (ii) Total school enrollment should be at least 50 children. (iii) The school should have an adequate infrastructure and a good learning environment. (iv) EVS partner schools should be located within a half-kilometer radius of the residences of voucher holders. However, if the parents prefer a particular school farther away, the PEF will not object, provided that the school fulfills the EVS selection criteria. (v) The PEF advertises to stimulate the interest of potential partner schools. It then gives students at short-listed schools preliminary tests in selected subjects, and conducts physical inspections of these schools. PEF offices display a list of all the EVS partner schools so that parents may consult it and choose a school for their children. By now more than 500,000 students are benefiting from EVS and the program is being scaled up by financing from Government of Punjab. School voucher public policy in the United States In the 1980s, the Reagan administration pushed for vouchers, as did the George W. Bush administration in the initial education-reform proposals leading up to the No Child Left Behind Act. As of December 2016, 14 states had traditional school voucher programs. These states consist of: Arkansas, Florida, Georgia, Indiana, Louisiana, Maine, Maryland, Mississippi, North Carolina, Ohio, Oklahoma, Utah, Vermont, and Wisconsin. The capital of the United States, Washington, D.C., also had operating school voucher programs as of December 2016. When including scholarship tax credits and education savings accounts – two alternatives to vouchers – there are 27 states plus the District of Columbia with private school choice programs. Most of these programs were offered to students in low-income families, low performing schools, or students with disabilities. By 2014, the number participating in either vouchers or tax-credit scholarships increased to 250,000, a 30% increase from 2010, but still a small fraction compared to the 55 million in traditional schools. In 1990, the city of Milwaukee, Wisconsin's public schools were the first to offer vouchers and has nearly 15,000 students using vouchers as of 2011. The program, entitled the Milwaukee Parental Choice Program, originally funded school vouchers for nonreligious, private institutions. It was, however, eventually expanded to include private, religious institutions after it saw success with nonreligious, private institutions. The 2006/07 school year marked the first time in Milwaukee that more than $100 million was paid in vouchers. Twenty-six percent of Milwaukee students will receive public funding to attend schools outside the traditional Milwaukee Public School system. In fact, if the voucher program alone were considered a school district, it would mark the sixth-largest district in Wisconsin. St. Anthony Catholic School, located on Milwaukee's south side, boasts 966 voucher students, meaning that it very likely receives more public money for general school support of a parochial elementary or high school than any before it in American history. 
A 2013 study of Milwaukee's program posited that the use of vouchers increased the probability that a student would graduate from high school, go to college, and stay in college. A 2015 paper published by the National Bureau of Economic Research found that participation in Louisiana's voucher program "substantially reduces academic achievement" although that the result may be reflective of the poor quality of private schools in the program. Recent analysis of the competitive effects of school vouchers in Florida suggests that more competition improves performance in the regular public schools. The largest school voucher program in the United States is Indiana's Indiana Choice Scholarships program. Proponents Proponents of school voucher and education tax credit systems argue that those systems promote free market competition among both private and public schools by allowing parents and students to choose the school where to use the vouchers. This choice available to parents forces schools to perpetually improve in order to maintain enrollment. Thus, proponents argue that a voucher system increases school performance and accountability because it provides consumer sovereignty – allowing individuals to choose what product to buy, as opposed to a bureaucracy. This argument is supported by studies such as "When Schools Compete: The Effects of Vouchers on Florida Public School Achievement" (Manhattan Institute for Policy Research, 2003), which concluded that public schools located near private schools that were eligible to accept voucher students made significantly more improvements than did similar schools not located near eligible private schools. Stanford's Caroline Hoxby, who has researched the systemic effects of school choice, determined that areas with greater residential school choice have consistently higher test scores at a lower per-pupil cost than areas with very few school districts. Hoxby studied the effects of vouchers in Milwaukee and of charter schools in Arizona and Michigan on nearby public schools. Public schools forced to compete made greater test-score gains than schools not faced with such competition, and that the so-called effect of cream skimming did not exist in any of the voucher districts examined. Hoxby's research has found that both private and public schools improved through the use of vouchers. Similarly, it is argued that such competition has helped in higher education, with publicly funded universities directly competing with private universities for tuition money provided by the Government, such as the GI Bill and the Pell Grant in the United States. The Foundation for Educational Choice alleges that a school voucher plan "embodies exactly the same principle as the GI bills that provide for educational benefits to military veterans. The veteran gets a voucher good only for educational expense and he is completely free to choose the school at which he uses it, provided that it satisfies certain standards". The Pell Grant, a need-based aid, like the Voucher, can only be used for authorized school expenses at qualified schools, and, like the Pell, the money follows the student, for use against those authorized expenses (not all expenses are covered). Proponents are encouraged by private school sector growth, as they believe that private schools are typically more efficient at achieving results at a much lower per-pupil cost than public schools. 
A CATO Institute study of public and private school per pupil spending in Phoenix, Los Angeles, D.C., Chicago, New York City, and Houston found that public schools spend 93% more than estimated median private schools. Proponents claim that institutions often are forced to operate more efficiently when they are made to compete and that any resulting job losses in the public sector would be offset by the increased demand for jobs in the private sector. Friedrich von Hayek on the privatizing of education: Other notable supporters include New Jersey Senator Cory Booker, former Governor of South Carolina Mark Sanford, billionaire and American philanthropist John T. Walton, Former Mayor of Baltimore Kurt L. Schmoke, Former Massachusetts Governor Mitt Romney and John McCain. A random survey of 210 Ph.D. holding members of the American Economic Association, found that over two-thirds of economists support giving parents educational vouchers that can be used at government-operated or privately operated schools, and that support is greater if the vouchers are to be used by parents with low-incomes or parents with children in poorly performing schools. Another prominent proponent of the voucher system was Apple co-founder and CEO, Steve Jobs, who said: As a practical matter, proponents note, most U.S. programs only offer poor families the same choice more affluent families already have, by providing them with the means to leave a failing school and attend one where the child can get an education. Because public schools are funded on a per-pupil basis, the money simply follows the child, but the cost to taxpayers is less because the voucher generally is less than the actual cost. In addition, they say, the comparisons of public and private schools on average are meaningless. Vouchers usually are utilized by children in failing schools, so they can hardly be worse off even if the parents fail to choose a better school. Also, focusing on the effect on the public school suggests that is more important than the education of children. Some proponents of school vouchers, including the Sutherland Institute and many supporters of the Utah voucher effort, see it as a remedy for the negative cultural impact caused by under-performing public schools, which falls disproportionately on demographic minorities. During the run-up to the November referendum election, Sutherland issued a controversial publication: Voucher, Vows, & Vexations. Sutherland called the publication an important review of the history of education in Utah, while critics just called it revisionist history. Sutherland then released a companion article in a law journal as part of an academic conference about school choice. EdChoice, founded by Milton and Rose Friedman in 1996, is a non-profit organization that promotes universal school vouchers and other forms of school choice. In defense of vouchers, it cites empirical research showing that students who were randomly assigned to receive vouchers had higher academic outcomes than students who applied for vouchers but lost a random lottery and did not receive them; and that vouchers improve academic outcomes at public schools, reduce racial segregation, deliver better services to special education students, and do not drain money from public schools. EdChoice also argues that education funding should belong to children, not a specific school type or building. 
The purpose of this argument is to urge people to prioritize a student's education and opportunity over improving one particular type of school. EdChoice also emphasizes that if a family chooses a public school, the funds go to that school as well, which would also benefit those who value the public education system. Opponents The main critique of school vouchers and education tax credits is that they put public education in competition with private education, threatening to reduce and reallocate public school funding to private schools. Opponents question the belief that private schools are more efficient. Public school teachers and teacher unions have also fought against school vouchers. In the United States, public school teacher unions, most notably the National Education Association (the largest labor union in the USA), argue that school vouchers erode educational standards and reduce funding, and that giving money to parents who choose to send their child to a
in Alaska on a fireboat. He then worked for almost two years with the Frank Seaman advertising agency as a production assistant and copywriter before returning to New York City in 1924. When The New Yorker was founded in 1925, White submitted manuscripts to it. Katharine Angell, the literary editor, recommended to editor-in-chief and founder Harold Ross that White be hired as a staff writer. However, it took months to convince him to come to a meeting at the office and additional weeks to convince him to work on the premises. Eventually, he agreed to work in the office on Thursdays. White was shy around women, claiming he had "too small a heart, too large a pen." But in 1929, after an affair which led to her divorce, White and Katherine Angell were married. They had a son, Joel White, a naval architect and boat builder, who later owned Brooklin Boat Yard in Brooklin, Maine. Katharine's son from her first marriage, Roger Angell, has spent decades as a fiction editor for The New Yorker and is well known as the magazine's baseball writer. In her foreword to Charlotte's Web, Kate DiCamillo quotes White as saying, "All that I hope to say in books, all that I ever hope to say, is that I love the world." White also loved animals, farms and farming implements, seasons, and weather formats. James Thurber described White as a quiet man who disliked publicity and who, during his time at The New Yorker, would slip out of his office via the fire escape to a nearby branch of Schrafft's to avoid visitors whom he didn't know: Later in life, White had Alzheimer's disease and died on October 1, 1985, at his farm home in North Brooklin, Maine. He is buried in the Brooklin Cemetery beside Katharine, who died in 1977. Career E.B. White published his first article in The New Yorker in 1925, then joined the staff in 1927 and continued to contribute for almost six decades. Best recognized for his essays and unsigned "Notes and Comment" pieces, he gradually became the magazine's most important contributor. From the beginning to the end of his career at The New Yorker, he frequently provided what the magazine calls "Newsbreaks" (short, witty comments on oddly worded printed items from many sources) under various categories such as "Block That Metaphor." He also was a columnist for Harper's Magazine from 1938 to 1943. In 1949, White published Here Is New York, a short book based on an article he had been commissioned to write for Holiday. Editor Ted Patrick approached White about writing the essay telling him it would be fun. "Writing is never 'fun'", replied White. That article reflects the writer's appreciation of a city that provides its residents with both "the gift of loneliness and the gift of privacy." It concludes with a dark note touching on the forces that could destroy the city that he loved. This prescient "love letter" to the city was re-published in 1999 on his centennial with an introduction by his stepson, Roger Angell. In 1959, White edited and updated The Elements of Style. This handbook of grammatical and stylistic guidance for writers of American English was first written and published in 1918 by William Strunk Jr., one of White's professors at Cornell. White's reworking of the book was extremely well received, and later editions followed in 1972, 1979, and 1999. Maira Kalman illustrated an edition in 2005. That same year, a New York composer named Nico Muhly premiered a short opera based on the book. 
The volume is a standard tool for students and writers and remains required reading in many composition classes. The complete history of The Elements of Style is detailed in Mark Garvey's Stylized: A Slightly Obsessive History of Strunk & White's The Elements of Style. In 1978, White won a special Pulitzer Prize citing "his letters, essays and the full body of his work". He also
received the Presidential Medal of Freedom in 1963 and honorary memberships in a variety of literary societies throughout the United States. The 1973 Oscar-nominated Canadian animated short The Family That Dwelt Apart is narrated by White and is based on his short story of the same name. Children's books In the late 1930s, White turned his hand to children's fiction on behalf of a niece, Janice Hart White. His first children's book, Stuart Little, was published in 1945, and Charlotte's Web followed in 1952. Stuart Little initially received a lukewarm welcome from the literary community. However, both books went on to receive high acclaim, and Charlotte's Web won a Newbery Honor from the American Library Association, though it lost out on winning the Newbery Medal to Secret of the Andes by Ann Nolan Clark. White received the Laura Ingalls Wilder Medal from the U.S. professional children's librarians in 1970. It recognized his "substantial and lasting contributions to children's literature." That year, he was also the U.S. nominee and eventual runner-up for the biennial Hans Christian Andersen Award, as he was again in 1976. Also in 1970, White's third children's novel, The Trumpet of the Swan, was published. In 1973 it won the Sequoyah Award from Oklahoma and the William Allen White Award from Kansas, both selected by students voting for their favorite book of the year. In 2012, the School Library Journal sponsored a survey of readers, which identified Charlotte's Web as the best children's novel ("fictional title for readers 9–12" years old). The librarian who conducted it said, "It is impossible to conduct a poll of this sort and expect [White's novel] to be anywhere but #1." Awards and honors 1953 Newbery Honor for Charlotte's Web 1960 American Academy of Arts and Letters Gold Medal 1963 Presidential Medal of Freedom 1970 Laura Ingalls Wilder Award 1971 National Medal for Literature 1977 L.L. Winship/PEN New England Award, Letters of E.B. White 1978 Pulitzer Prize Special Citation for Letters Other The E.B. White Read Aloud Award is given by The Association of Booksellers for Children (ABC) to honor books that its membership feels embody the universal read-aloud standards that E.B. White's works created. Bibliography Books Less than Nothing, or, The Life and Times of Sterling Finny (1927) Ho Hum: Newsbreaks from the New Yorker (1931). Intro by E.B. White, and much of the text as well. Alice Through the Cellophane, John Day (1933) Every Day is Saturday, Harper (1934) Quo Vadimus: or The Case for the Bicycle, Harper (1938) A Subtreasury of American Humor (1941). Co-edited with Katharine S. White. One Man's Meat (1942): A collection of his columns from Harper's Magazine The Wild Flag: Editorials From The New Yorker On Federal World Government And Other Matters (1943) Stuart Little (1945) Here Is New York (1949) Charlotte's Web (1952) The Second Tree from the Corner (1954) The Elements of Style (with William Strunk Jr.)
(1959, republished 1972, 1979, 1999, 2005) The Points of My Compass (1962) The Trumpet of the Swan (1970) Letters of E.B. White (1976) Essays of E.B. White (1977) Poems and Sketches of E.B. White (1981) Writings from "The New Yorker" (1990) In the Words of E.B. White (2011) The Fox of Peapack Farewell to Model T An E.B. White Reader. Edited by William W. Watt and Robert W. Bradford. Essays and reporting References External links "E.B. White, The Art of the Essay No. 1", The Paris Review, Fall 1969 – interview by George Plimpton and Frank H. Crowther (audio-video) miNYstories based on Here Is New York 1899 births 1985 deaths 20th-century American essayists 20th-century American journalists 20th-century American male writers 20th-century American novelists 20th-century American poets 20th-century American short story writers American children's writers American cultural critics
of the original "patriarch" office instituted movement founder Joseph Smith. Early Latter Day Saint movement The first use of the term "evangelist" in Latter Day Saint theology were mainly consistent with how the term is used by Protestants and Catholics. In 1833, Joseph Smith introduced the new office of patriarch, to which he ordained his father. The elder Smith was given the "keys of the patriarchal Priesthood over the kingdom of God on earth", the same power said to be held by the Biblical patriarchs, which included the power to give blessings upon one's posterity. The elder Smith, however, was also called to give patriarchal blessings to the fatherless within the church, and the church as a whole, a calling he passed onto his eldest surviving son Hyrum Smith prior to his death. Hyrum himself was killed in 1844 along with Joseph, resulting in a succession crisis that broke the Latter Day Saint movement into multiple denominations. It is not known who first identified the term "evangelist" with the office of patriarch. However, in an 1835 church publication, W. W. Phelps stated, "[W]ho is not desirous of receiving a father's or an evangelist's blessing? Who can read the ancient patriarchal blessings, recorded in the bible, for the benefit of the church, without a heart filled with joy ... ?" In 1839, Joseph Smith equated an evangelist with the office of patriarch, stating that "an Evangelist is a Patriarch". The necessity of an evangelist in the church organization has been reinforced repeatedly, based on the passage in Ephesians 4:11, which states, "And he gave some, apostles; and some, prophets; and some, evangelists; and some, pastors and teachers". In 1834, while writing what he called the "principles of salvation", prominent early Latter Day Saint Oliver Cowdery stated that: "We do not believe that he ever had a church on earth without revealing himself to that church: consequently, there were apostles, prophets, evangelists, pastors, and teachers, in the
Jane M. Gardner, since 2016). The Church of Jesus Christ (Bickertonite) In The Church of Jesus Christ (Bickertonite), the prescribed duties of an evangelist are to preach the gospel of Jesus Christ to every nation, kindred, language, and people. An evangelist is part of the Quorum of Seventy Evangelists. Quorum of Seventy Evangelists The Quorum of Seventy Evangelists is responsible for management of the International Missionary Programs of the church and assists Regions of the church with their individual Domestic Missionary Programs. The Quorum of Seventy oversees the activities of its Missionary Operating Committees to ensure the fulfilling of Christ’s commandment to take the gospel to the entire world. In 2007, the officers of the Quorum of Seventy Evangelists were: Evangelist Eugene Perri, President Evangelist Alex Gentile, Vice-President Evangelist Jeffrey Giannetti, Secretary The Church of Jesus Christ of Latter-day Saints In The Church of Jesus Christ of Latter-day Saints (LDS Church), an evangelist is considered to be an office of the Melchizedek priesthood. However, the term "evangelist" is rarely used for this position; instead, the church has retained the term "patriarch", the term most commonly used by Joseph Smith. The most prominent reference to the term "evangelist" in the LDS Church's literature is found in its "Articles of Faith", derived from the Wentworth letter—a statement by Smith in 1842 to a Chicago newspaper editor—that the church believes in "the same organization that existed in the primitive church", including "evangelists". Smith taught that "an Evangelist is an Patriarch". Notes References Edwards, Paul M., "RLDS Priesthood: Structure and Process", Dialogue: A Journal of Mormon Thought 17(3) (1984) p. 6. The Church of Jesus Christ (2005). Faith and Doctrine of The Church of Jesus Christ. Bridgewater, Michigan: The Church of Jesus Christ. Valenti, Jerry (1986). "Volume 56", "Welcome to The Church of Jesus Christ". Bridgewater, Michigan: Gospel News, 9. Veazey, Stephen M. (2006). Faith & Beliefs: Sacraments in the Community of Christ (Independence, Missouri: Herald House). 1833 establishments in the United States Community of Christ Latter Day Saint hierarchy Leadership positions in The Church of Jesus Christ (Bickertonite) 1833
is a poetic form used by Greek lyric poets for a variety of themes usually of smaller scale than the epic. Roman poets, particularly Catullus, Propertius, Tibullus, and Ovid, adopted the same form in Latin many years later. As with the English heroic couplet, each pair of lines usually makes sense on its own, while forming part of a larger work. Each couplet consists of a hexameter verse followed by a pentameter verse. The following is a graphic representation of its scansion: – uu | – uu | – uu | – uu | – uu | – x – uu | – uu | – || – uu | – uu | – – is one long syllable, u one short syllable, uu is one long or two short syllables, and x is one long or one short syllable (anceps). The form was felt by the ancients to contrast the rising action of the first verse with a falling quality in the second. The sentiment is summarized in a line from Ovid's Amores I.1.27 Sex mihi surgat opus numeris, in quinque residat—"Let my work rise in six steps, fall back in five." The effect is illustrated by Samuel Taylor Coleridge as: In the hexameter rises the fountain's silvery column, In the pentameter aye falling in melody back. translating Friedrich Schiller, Im Hexameter steigt des Springquells silberne Säule, Im Pentameter drauf fällt sie melodisch herab. Greek origins The elegiac couplet is presumed to be the oldest Greek form of epodic poetry (a form where a later verse is sung in response or comment to a previous one). Scholars, who even in the past did not know who created it, theorize the form was originally used in Ionian dirges, with the name "elegy" derived from the Greek ε, λεγε ε, λεγε—"Woe, cry woe, cry!" Hence, the form was used initially for funeral songs, typically accompanied by an aulos, a double-reed wind instrument. Archilochus expanded use of the form to treat other themes, such as war, travel, and homespun philosophy. Between Archilochus and other imitators, the verse form became a common poetic vehicle for conveying any strong emotion. At the end of the 7th century BCE, Mimnermus of Colophon struck on the innovation of using the verse for erotic poetry. He composed several elegies celebrating his love for the flute girl Nanno, and though fragmentary today, his poetry was clearly influential in the later Roman development of the form. Propertius, to cite one example, notes Plus in amore valet Mimnermi versus Homero—"The verse of Mimnermus is stronger in love than Homer". The form continued to be popular throughout the Greek period and treated a number of different themes. Tyrtaeus composed elegies on a war theme, apparently for a Spartan audience. Theognis of Megara vented himself in couplets as an embittered aristocrat in a time of social change. Popular leaders were writers of elegies—Solon the lawgiver of Athens composed on political and ethical subjects—and even Plato and Aristotle dabbled with the meter. By the Hellenistic period, the Alexandrian school made elegy its favorite and most highly developed form. They preferred the briefer style associated with elegy in contrast to the lengthier epic forms, and made it the singular medium for short epigrams. The founder of this school was Philitas of Cos. He was eclipsed only by the school's most admired exponent, Callimachus; their learned character and intricate art would have a heavy influence on the Romans. Roman elegy Like many Greek forms, elegy was adapted by the Romans for their own literature. 
The fragments of Ennius contain a few couplets, and scattered verses attributed to Roman public figures like Cicero and Julius Caesar also survive, but it is the elegists of the mid-to-late first century BCE who are most commonly associated with the distinctive Roman form of the elegiac couplet. Catullus, the first of these, is an invaluable link between the Alexandrine school and the subsequent elegies of Tibullus, Propertius and Ovid. He shows a familiarity with the usual Alexandrine style of terse epigram and a wealth of mythological learning, as in his 66th poem, a direct translation of Callimachus' Coma Berenices. His 85th poem is famous: Many who read it aloud fail to grasp the metre correctly because of the three elisions. – u u| – –| – u u|– – | – u u| – x Od'et a|mo. Qua|r'id faci|am,
for|tasse re|quiris? – uu | – uu| – || – u u | – u u|– Nescio, | sed fie|ri || senti'et | excruci|or. Cornelius Gallus, an important statesman of this period, was also regarded by the ancients as a great elegist, but, except for a few lines, his work has been lost. Elegy in the Augustan Age The form reached its zenith with the collections of Tibullus and Propertius and several collections of Ovid (the Amores, Heroides, Tristia, and Epistulae ex Ponto).
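The metrical scheme given above is rigid enough to verify mechanically. The following sketch is an illustrative aside rather than anything drawn from the sources summarized here: it checks a marked-up scansion, such as the one quoted for Catullus 85, against the elegiac pattern of a hexameter followed by a pentameter, using '-' for a long syllable, 'u' for a short one, and 'x' for the final anceps, as in the notation above.

```python
import re

# Hexameter: four feet that may each be a dactyl (-uu) or a spondee (--), then a
# dactyl, then a final two-syllable foot whose last syllable is anceps.
HEXAMETER = re.compile(r"^(?:-(?:uu|-)){4}-uu-[-ux]$")

# Pentameter: two hemiepes. In the first half the double shorts may contract to a
# long; in the second half they may not, and the final syllable is anceps.
PENTAMETER = re.compile(r"^-(?:uu|-)-(?:uu|-)--uu-uu[-ux]$")

def normalize(scansion: str) -> str:
    """Keep only the quantity marks, dropping foot bars, caesura marks and spaces."""
    return "".join(ch for ch in scansion.replace("–", "-") if ch in "-ux")

def is_elegiac_couplet(hexameter_line: str, pentameter_line: str) -> bool:
    return bool(HEXAMETER.match(normalize(hexameter_line))
                and PENTAMETER.match(normalize(pentameter_line)))

# The scansion of Catullus 85 quoted above:
odi_et_amo = "- u u| - -| - u u|- - | - u u| - x"
nescio_sed = "- uu | - uu| - || - u u | - u u|-"
print(is_elegiac_couplet(odi_et_amo, nescio_sed))  # True
```

The alternation (?:uu|-) encodes the contraction of the two shorts into a single long that the scheme permits in the hexameter's first four feet and in the first half of the pentameter.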
The vogue of elegy during this time is seen in the so-called 3rd and 4th books of Tibullus. Many poems in these books were clearly not written by Tibullus but by others, perhaps part of a circle under Tibullus' patron Messalla. Notable in this collection are the poems of Sulpicia, among the few surviving works by Classical Latin female poets. Through these poets—and in comparison with the earlier Catullus—it is possible to trace specific characteristics and evolutionary patterns in the Roman form of the verse: The Roman authors often write about their own love affairs. In contrast to their Greek originals, these poets are characters in their own stories and write about love in a highly subjective way. The form began to be applied to new themes beyond the traditional love, loss, and other "strong emotion" verse. Propertius uses it to relate aetiological or "origin" myths such as the origins of Rome (IV.1) and the Temple of Apollo on the Palatine Hill (IV.6). Ovid's Heroides—though at first glance fictitious love letters—are described by Ovid himself as a new literary form, and can be read as character studies of famous heroines from mythology. Ovid's Fasti is
In chronology, an "era" is the highest level for the organization of the measurement of time. A "calendar era" indicates a span of many years which are numbered beginning at a specific reference date (epoch), which often marks the origin of a political state or cosmology, dynasty, ruler, the birth of a leader, or another significant historical or mythological event; it is generally called after its focus accordingly as in "Victorian era". Geological era In large-scale natural science, there is need for another time perspective, independent from human activity, and indeed spanning a far longer period (mainly prehistoric), where "geologic era" refers to well-defined time spans. The next-larger division of geologic time is the eon. The Phanerozoic Eon, for example, is subdivided into eras. There are currently three eras defined in the Phanerozoic; the following table lists them from youngest to oldest (BP is an abbreviation for "before present"). The older Proterozoic and Archean eons are also divided into eras. Cosmological era For periods in the history of the universe, the term "epoch" is typically preferred, but "era" is used e.g. of the "Stelliferous Era". Calendar eras Calendar eras count the years since a particular date (epoch), often one with religious significance. Anno mundi (year of the world) refers to a group of calendar eras based on a calculation of the age of the world, assuming it was created as described in the Book of Genesis. In Jewish religious contexts one of the versions is still used, and many Eastern Orthodox religious calendars used another version until 1728. Hebrew year 5772 AM began at sunset on 28 September 2011 and ended on 16 September 2012. In the Western church, Anno Domini (AD also written CE), counting the years since the birth of Jesus on traditional calculations, was always dominant. The Islamic calendar, which also has variants, counts years from the Hijra or emigration of the Islamic prophet Muhammad from Mecca to Medina, which occurred in 622 AD. The Islamic year is some days shorter than 365; January 2012 fell in 1433 AH ("After Hijra"). For a time ranging from 1872 to the Second World War, the Japanese used the imperial year system (kōki), counting from the year when the legendary Emperor Jimmu founded Japan, which occurred in 660 BC. Many Buddhist calendars count from the death of the Buddha, which according to the most commonly used calculations was in 545–543 BCE or 483 BCE. Dates are given as "BE" for "Buddhist Era"; 2000 AD was 2543 BE in the Thai solar calendar. Other calendar eras of the past counted from political events, such as the Seleucid era and the Ancient Roman ab urbe condita ("AUC"), counting from the foundation of the city. Regnal eras The word era also denotes the units used under a different, more arbitrary system where time is not represented as an endless continuum with a single reference year, but each unit starts counting from one again as if time starts again. The use of regnal years is a rather impractical system, and a challenge for historians if a single piece of the historical chronology is missing, and often reflects the preponderance in public life of an absolute ruler in many ancient cultures. Such traditions sometimes outlive the political power of the throne, and may even be based on mythological events or rulers who may not have existed (for example Rome numbering from the rule of Romulus and Remus). In a manner of speaking the use of the supposed date of the birth of Christ as a base year is a form of an era. In East Asia, each emperor's reign may be subdivided into several reign periods, each being treated as a new era. The name of each was a motto or slogan chosen by the emperor. Different East Asian countries utilized slightly different systems, notably: Chinese eras Japanese era Korean eras Vietnamese eras A similar practice survived in the United Kingdom until quite recently, but only for formal official writings: in daily life the ordinary year A.D. has been used for a long time, but Acts of Parliament were dated according to the
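Several of the calendar eras above differ from the Common Era count only by a fixed offset, which makes the year arithmetic easy to sketch. The snippet below is an illustrative simplification: it ignores where each calendar actually starts its year, historical changes in usage, and proleptic subtleties, and it relies only on the relations stated above, namely BE = CE + 543 for the Thai solar calendar and kōki = CE + 660 for the Japanese imperial year.

```python
# Illustrative year-number conversions between calendar eras that differ from the
# Common Era count by a constant offset. This is a simplification: these calendars
# do not all begin their year on 1 January, and the Islamic (AH) count, being based
# on a shorter year, is not a fixed offset at all and is deliberately omitted.

ERA_OFFSETS = {
    "BE":   543,   # Thai solar Buddhist Era: 2000 CE -> 2543 BE, as stated above
    "koki": 660,   # Japanese imperial year, counted from 660 BC
}

def ce_to_era(year_ce: int, era: str) -> int:
    """Convert a Common Era year number to the given era's year number."""
    return year_ce + ERA_OFFSETS[era]

def era_to_ce(year_era: int, era: str) -> int:
    """Convert an era year number back to a Common Era year number."""
    return year_era - ERA_OFFSETS[era]

print(ce_to_era(2000, "BE"))    # 2543, matching the example in the text
print(ce_to_era(1940, "koki"))  # 2600
print(era_to_ce(2543, "BE"))    # 2000
```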
(11) 24 September 1958 – Shortly after their aircraft had been retrofitted by United States Marine Corps technicians to carry the AIM-9B Sidewinder air-to-air missile, numerous missile-armed RoCAF F-86 Sabres took off and gave chase to a group of PLAAF MiG-17 "Frescos" cruising above them. Because of their superior rate of climb, vertical maneuverability, thrust-to-weight ratio and service ceiling, the Fresco pilots, unaware of the newly installed armament, perceived no danger in this. The Sabre pilots fired their missiles at the MiGs, destroying some; others broke into a dive and entered a horizontal turning engagement with their pursuers, who held the advantage in horizontal turn rate and shot down more of the PRC jets with guns. Pilots Jing-Chuen Chen, Chun-Hsein Fu, Jie-Tsu Hsia, Shu-Yuen Li, Ta-Peng Ma and Hong-Yan Sung shot down one MiG-17 each, Yi-Chiang Chien shot down two, and the pairs Tasi-Chuen Liu with Tang Jie-Min and Hsin-Yung Wang with Yuen-Po Wang each shared in the downing of one MiG. During this engagement one further Fresco sustained notable damage when it was struck by an AIM-9 that did not detonate; it escaped with the intact missile lodged in its airframe, which was extracted after the aircraft returned to its base and reluctantly transferred to the Soviet Union for reverse engineering. 2 October 1958 – Anti-aircraft fire from Kinmen brings down a C-46 Commando, killing all five crewmen. / (4) 10 October 1958 – Over the PRC, four RoCAF F-86F Sabre pilots engage and shoot down four PLAAF MiG-17 "Frescos". As one of the Frescos burns it explodes, throwing debris into one of its attackers and causing heavy damage; the RoCAF pilot ejects, is captured, and is held in detention until his release on 30 June 1959. 29 May 1959 – Above Guangdong, a PLAAF MiG-17 "Fresco" intercepts and shoots down an RoCAF B-17 Flying Fortress, killing all 14 on board. (2) 5 July 1959 – Above the Taiwan Strait, twenty-four PLAAF MiG-17 "Frescos" are engaged by four RoCAF F-86 Sabres, ending in the destruction of two Frescos. 7 October 1959 – Above Beijing, an RoCAF RB-57D piloted by Wang Ying Chin becomes the first aircraft ever shot down by a surface-to-air missile; the pilot dies when his plane is destroyed by an SA-2 Guideline missile. 6 November 1961 – Above Shantung province, an RB-69A Neptune is destroyed by an SA-2 Guideline missile, killing all 13 aboard. 9 September 1962 – Fifteen kilometers south of Nanchang, an RoCAF Lockheed U-2A is shot down by an SA-2 Guideline missile. Pilot Chen Huai Sheng bails out and is captured after landing, but dies some time later in a PRC hospital. 14 June 1963 – Above Nanchang, an RoCAF RB-69A Neptune is shot down by 23 mm NR-23 cannon fire from a PLAAF MiG-17PF "Fresco", killing all 14 crew aboard. 1 November 1963 – Above Jiangxi, an SA-2 Guideline shoots down an RoCAF Lockheed U-2C. Pilot Yeh Chang Yi was returning from an intelligence mission on which he had photographed the Jiayuguan missile test site and the Lanzhou nuclear weapons plant. After detecting that a Guideline had been launched at him he made evasive maneuvers and avoided the first missile, only to be struck moments later by a second, which knocked off his right wing. After bailing out he fell into PRC captivity and was held until 10 November 1982, when he was released into Hong Kong. He was eventually admitted into the United States after ROC officials denied his attempts to be repatriated.
11 June 1964 – Near Yantai on the Shantung Peninsula, a PLAAF MiG-17F "Fresco" and an Ilyushin Il-28 "Beagle" coordinate in the nighttime interception of an RoCAF RB-69A Neptune: the Beagle drops flares to illuminate the target, allowing the fighter to shoot it down with cannon fire. 7 July 1964 – Flying above Fujian, RoCAF pilot Lee Nan Lee is shot down and killed after his Lockheed U-2G is struck by an SA-2 Guideline missile. 18 December 1964 – Above Wenzhou, an RoCAF RF-101A Voodoo piloted by Hsieh Hsiangho is shot down by a People's Liberation Army Naval Air Force Shenyang J-6. He ejects over the sea, is captured by fishermen, and is detained until July 1985. 10 January 1965 – Southwest of Beijing, on a mission to photograph the Paotow uranium enrichment plant with an infrared camera, RoCAF pilot Chang Liyi is shot down after being struck by an SA-2 Guideline missile. He survives the crash with both legs broken; captured, he is held until 10 November 1982, when he is released into Hong Kong. He was eventually admitted into the United States after ROC officials denied his attempts to be repatriated. 18 March 1965 – Above Guangdong near Shantou, a PLAAF MiG-19 "Farmer" piloted by Gao Chang Ji shoots down and kills RoCAF pilot Chang Yupao, flying an RF-101C Voodoo. 10 January 1966 – Above Matsu, a PLAAF MiG-17 "Fresco" shoots down an RoCAF HU-16 Albatross attempting to carry defectors to Taiwan. / (2) 13 January 1967 – Four RoCAF F-104G Starfighters are engaged by twelve PLAAF MiG-19 "Farmers". Two Farmers are claimed shot down by Hu Shih-Lin and one by Bei-Puo Shih. F-104G No. 64-17779, involved in the engagement, does not return and is believed to have been shot down. South African Border War (1966-1990) 22 September 1975 - A South African Aérospatiale SA 330 Puma helicopter is hit by Cuban anti-aircraft fire during Operation Savannah. Two crew members die; the remaining four survive and avoid capture. 4 January 1976 - Another South African Aérospatiale SA 330 Puma helicopter is shot down by friendly fire during Operation Savannah. Both crew members and three passengers die in the crash. 13 March 1976 – A Fokker F-27 Friendship parked on the ground offloading arms at UNITA's Gago Coutinho aerodrome is caught by surprise by a flight of four Cuban Air Force MiG-21FMs. Pilot Rafael Del Pino fires an S-24 unguided rocket, destroying it. 14 March 1979 - A South African Canberra medium bomber crashes after the pilot is killed by enemy fire during an attack on Cahama, south of Ongiva. 6 July 1979 - A South African Dassault Mirage III, ID number 856, is shot down in Cunene, Angola. 18 October 1979 - A South African Atlas Impala MKII is shot down by anti-aircraft fire; the pilot survives and is rescued. 12 September 1980 - A South African Atlas Impala MKII from 8th SAAF Squadron is shot down in Angola; the pilot is declared MIA. 10 October 1980 - A South African Atlas Impala MKII is shot down southwest of Mupa in southern Angola by an SA-7; the pilot, V.P. Lautenslager, is killed by SWAPO rebels. 1 June 1981 - A South African Atlas Impala MKII is shot down in Cuvelai; the pilot dies in the crash. 6 November 1981 – South African Air Force Major Johan Rankin, flying a Mirage F-1CZ, engaged a Cuban MiG-21FM flown by Major Leonel Ponce, downing the MiG with a burst of 30 mm cannon fire.
5 January 1982 - A South African Aérospatiale SA 330 Puma helicopter is downed by small-arms fire that ruptures a hydraulic pipe. The helicopter crashes inverted; all three occupants die. 9 August 1982 - A South African Aérospatiale SA 330 Puma helicopter is hit by 23 mm anti-aircraft fire, causing it to crash inverted; the crew of three and 12 paratroopers are killed. 5 October 1982 – Flying his Mirage F-1CZ, Major Johan Rankin engages two Cuban MiG-21FMs flown by Lieutenants Raciel Marrero Rodríguez and Gilberto Ortiz Pérez over Angola. Rankin downs the lead MiG with 30 mm cannon fire and then launches a Matra 550 missile, taking down the wingman. Cuba contests this claim, reporting that the two pilots returned to their base at Lubango airport with some battle damage. 25 July 1986 - An Angolan Air Force MiG-23ML is shot down near Menongue, Angola. Pilot Captain Jorge González Pérez is killed. 28 October 1987 – UNITA ground fire near Luvuei, Angola shoots down a Cuban MiG-21UM, a two-seat variant of the type. Both Cuban crewmen eject and are captured by UNITA forces. 14 November 1987 - A South African Atlas Impala MKII is shot down by anti-aircraft fire in Cuvelai during a night mission. 20 February 1988 – A South African Dassault Mirage F1 is shot down by an SA-13 fired by Cuban forces during a raid in Cuando Cubango, Angola. 2 March 1988 - A Cuban Air Force MiG-21 piloted by Captain Juan Perez is shot down by friendly anti-aircraft fire near Menongue. 19 March 1988 – A South African Dassault Mirage F1, ID number 223, is shot down by a missile in Longa, north of Cuito Cuanavale, during a night raid. The pilot, Captain Willie Van Coopehagen, dies in the crash. 27 April 1988 - A Cuban Air Force An-26 is shot down by friendly fire from 9K32 Strela-2 (SA-7) missiles and anti-aircraft cannons. 4 May 1988 - A Cuban Air Force MiG-21 piloted by Carlos Rodriguez Perez is shot down by a UNITA missile. Football War (1969) (3) 17 July 1969 – Honduran Air Force Corsair pilot Captain Fernando Soto and his wingman Captain Edgardo Acosta engaged two Salvadoran TF-51D Cavalier Mustang IIs that were attacking another Corsair as it strafed ground targets south of Tegucigalpa. Soto entered a turning engagement with one Mustang and blew off its left wing with three bursts of 20 mm cannon fire, killing pilot Captain Douglas Varela, whose parachute did not fully deploy. Later that day the pair spotted two Salvadoran Goodyear FG-1D Corsairs. They jettisoned their hardpoint stores before climbing and making a diving attack; Soto set one Corsair on fire, only to find its wingman on his tail. The intense dogfight between them ended when Soto entered a split-S, giving him a firing solution that he used to shoot down Captain Guillermo Reynaldo Cortez, who died when his Corsair exploded. The Troubles (Late 1960s–1998) List of attacks on British aircraft during The Troubles British Army Gazelle downing (February 17, 1978) British Army Lynx shootdown (June 23, 1988) British Army Gazelle shootdown (February 11, 1990) British Army Lynx shootdown (March 20, 1994) Yom Kippur War (1973) Ofira Air Battle (6 October 1973) Al Mansoura Air Battle (14 October 1973) Cyprus Conflict (1963-1974) 8 August 1964 – During Turkey's military intervention in the Battle of Tylliria, Turkish pilot Captain Cengiz Topel led a four-fighter flight of the 112th Air Squadron out of Eskişehir Air Base around 17:00 local time for Cyprus.
Topel's F-100 Super Sabre was hit by 40 mm anti-aircraft fire from a Greek Cypriot gun emplacement and shot down as he was strafing the Arion, a Greek Cypriot patrol boat. He was able to eject from his aircraft and parachuted safely onto land. (2) 20 July 1974 – During the first day of the conflict, F-100D 55-3756 of 171.Filo and F-100C 54-2042 of 132.Filo were shot down by Greek Cypriot anti-aircraft fire. (3) 20 July 1974 – During the first day of the Turkish air campaign, three transport planes (C-47 No. 6035, a C-130 of 222.Filo and a C-160 of 221.Filo) were damaged by Greek Cypriot anti-aircraft fire. All three were salvaged but played no further part in the conflict. 20 July 1974 - During the first day of the conflict, RF-84F 52-7327 of 184.Filo was shot down by Greek Cypriot anti-aircraft fire. 20 July 1974 - During the first day of the conflict, a Dornier Do-28D of the Turkish Air Force was shot down north-west of Nicosia. (3) 21 July 1974 – F-100D 55-2825 of 111.Filo, F-100C 54-2083 of 112.Filo and F-104G 64-17783 of 191.Filo were shot down by Turkish Navy destroyers. (2) 22 July 1974 – Turkish F-100D Super Sabres 54-2238 of 172.Filo and 54-22?? of 171.Filo were lost in action on 22 July over Cyprus due to enemy fire. 22 July 1974 – A Turkish F-100C of 171.Filo was lost in a landing accident after returning from a combat sortie over Cyprus. Serial unknown. (2) 22 July 1974 – Two transport aircraft (53-234 and 52-144) were accidentally damaged by Greek Cypriot anti-aircraft fire. They managed to land safely in Crete but played no further part in the conflict. Western Sahara War (1975–1991) 21 January 1976 – A Moroccan Northrop F-5 was shot down by the Polisario Front using SA-7 Strela missiles. 18 February 1978 – A Moroccan Northrop F-5 was shot down by the Polisario Front. 10 October 1978 – A Moroccan Northrop F-5 was shot down by the Polisario Front using SA-7 Strela missiles. 10 February 1979 – A Moroccan Northrop F-5 was shot down by the Polisario Front. 12 October 1981 – A Moroccan Dassault Mirage F1 was shot down by the Polisario Front. (2) 13 October 1981 – A Moroccan Lockheed C-130 Hercules and a Northrop F-5 were shot down by the Polisario Front using Soviet-made missiles. 26 September 1982 – A Moroccan Dassault Mirage F1 was shot down by the Polisario Front. Pilot Ten. Mohamed Hadri was captured. 12 January 1985 – A Moroccan Dassault Mirage F1 was shot down by the Polisario Front near Mansoura Ahmed. (2) 13 January 1985 – Two Moroccan Northrop F-5s were shot down by the Polisario Front near the Algerian border. 21 January 1985 – A Moroccan North American Rockwell OV-10 Bronco was shot down by the Polisario Front using a 9K32 Strela-2 near Dakhla, Western Sahara. 21 August 1987 – A Moroccan Northrop F-5 was shot down by the Polisario Front. Ogaden War (1977–1978) July 1977 - An Ethiopian F-5E is shot down by an SA-7 Grail MANPADS near Gode. July 1977 - An Ethiopian Douglas DC-3 cargo plane from Ethiopian Airlines is shot down. July 1977 - An Ethiopian Douglas C-47 is shot down by a pair of Somali MiG-17s. 24 July 1977 - A Somali MiG-21 is shot down by an Ethiopian F-5E. (2) 25 July 1977 - Two Somali MiG-21s are shot down by Ethiopian F-5Es; another two MiG-21s collide in the same engagement. 26 July 1977 - A Somali MiG-21 is shot down by an Ethiopian F-5E. 19 August 1977 - A Somali MiG-21 is shot down by an Ethiopian F-5E. 21 August 1977 - A Somali MiG-21 is shot down by an Ethiopian F-5E.
(2) 1 September 1977 - Two Somali Mig-21 were shot down by a pair of Ethiopian F-5Es. Kurdish–Turkish conflict (1978–present) 23 February 2008 – a Turkish Army AH-1 Cobra helicopter crashed with PKK militants claiming the downing and posting a video. Turkey confirmed this later in the day, saying that the incident happened "due to an unknown reason". 13 May 2016 – PKK militants shot down a Turkish Army AH-1W SuperCobra using a 9K38 Igla (SA-18 Grouse) MANPADS. In the published video, the missile severed the tail section from the rest of the helicopter, causing it to spin, fragment in midair and crash, killing the two pilots on board. The Turkish government initially claimed that it fell due to technical failure, it later became obvious that it had been shot down. 10 February 2018 – YPG militants shot down a Turkish Air Force TAI/AgustaWestland T129 ATAK over Kırıkhan district of Hatay province killing two soldiers. 12 February 2018 – Syrian Democratic Forces shot down a Turkish Air Force Bayraktar Tactical UAS over Afrin. 18 October 2019 – A Turkish army Sikorsky UH-60 Black Hawk crashed during operations against the SDF near the border city of Ras Al-‘Ayn in Syria's Al-Hasakah Governorate. Chadian-Libyan conflict (1978–1987) 25 January 1984 - A French Air Force SEPECAT Jaguar is shot down by machine-gun fire from GUNT rebels, its pilot is killed. 7 September 1987 - A Libyan Air Force Tupolev Tu-22 with an Eastern German crew was shot down by a MIM-23 Hawk missile fired by French army while trying to bomb N'Djamena airport. Soviet–Afghan War (1979–1989) Salvadoran Civil War (1979–1992) 26 January 1981 – A Aero Commander Operated by Aerolineas del Pacifico that was air-dropping arms and ammunition for rebels was destroyed by the Salvadoran Air Force at a small airstrip killing the co-pilot, The pilot was captured by the army. 11 May 1981 – A Bell UH-1 Iroquois was hit by machinegun fire and crashed. (22) late January 1982 – Battle of Ilopango Airport 17 June 1982 – A MD Helicopters MD 500 was shot down by FMLN. 19 October 1984 – A Cessna O-2A was shot down by FMLN. 12 April 1986 – A Bell UH-1 Iroquois was shot down by FMLN near San Miguel Air Base. 18 November 1989 – A Cessna A-37 Dragonfly was shot down near San Miguel. (6) 17 October 1990 – Six Bell UH-1 Iroquois were destroyed in a FMLN attack. 23 November 1990 – A Cessna A-37 Dragonfly was shot down using a surface to air missile. 2 January 1991 – A Bell UH-1 Iroquois was shot down near Lolotique. 19 December 1991 – A Bell UH-1 Iroquois was shot down by FMLN. Iran–Iraq War (1980–1988) 20 February 1986 – Iranian Air Force Fokker F27 Friendship is shot down by an Iraqi Air Force MiG-23 with a total of 49 killed including crew and passengers. 17 January 1987- An Iraqi MiG-23ML of unit 63FS shot down a F-14A piloted by Assl-e-Davtalab. 19 July 1988- Two Iraqi Dassault Mirage F1 of unit 115FS shot down two F-14A Tomcats by Super 530 missile. Falklands War (1982) Argentine air forces shot down in the Falklands War: total 45 aircraft including 4 helicopters (Sea Harrier 21, Sea Dart missile 7, Sea Wolf missile 4, Stinger missile 2, Sea Cat missile 1, Rapier missile 1, Blowpipe missile 1, combination/gunfire 6, friendly fire 2), April 3 - June 14, 1982 On 4 May 1982, Sea Harrier FRS.1 hit by anti-aircraft fire at Goose Green. (2) On 21 May 1982, a Gazelle AH.1s hit by small-arms fire at San Carlos. On 21 May 1982, a Harrier GR.3 hit by Blowpipe missile at Port Howard. On 27 May 1982, a Harrier GR.3 hit by anti-aircraft fire at Goose Green. 
On 28 May 1982, a Scout AH.1 shot down by a Pucará at Goose Green. On 30 May 1982, a Harrier GR.3 hit by anti-aircraft fire at Stanley. On 1 June 1982, a Sea Harrier FRS.1 hit by a Roland missile at Stanley. On 6 June 1982, a British Army Gazelle friendly fire incident at Bluff Cove. Libyan Gulf of Sidra territorial water dispute (2) 19 August 1981 – Gulf of Sidra incident (1981): two Su-22 Fitters of the Libyan Air Force attempted to intercept two American F-14 Tomcats over the Gulf of Sidra, off the coast of Sirte. Both Su-22s were shot down. 15 April 1986 – One F-111F of the 48th Fighter Wing of the United States Air Force was shot down over Libya by ground fire during the 1986 United States bombing of Libya. (2) 4 January 1989 – 1989 air battle near Tobruk. Sri Lankan Civil War (1983-2009) 13 September 1990 – A Sri Lanka Air Force SIAI-Marchetti SF.260 was shot down near Palaly, killing the pilot. 5 July 1992 – A Sri Lanka Air Force Shaanxi Y-8 was shot down with a surface-to-air missile near Palaly, killing 19. 14 July 1992 – A Sri Lanka Air Force SIAI-Marchetti SF.260 was shot down, killing the pilot. 2 August 1994 – A Sri Lanka Air Force Bell 212 was shot down by small-arms fire. (2) 28 April 1995 – Two Sri Lanka Air Force Hawker Siddeley HS 748s were shot down near Palaly by SA-7 anti-aircraft missiles; the shootdowns cost the lives of 43 in the first aircraft and 52 in the second. 14 July 1995 – A Sri Lanka Air Force FMA IA 58 Pucará was shot down, killing the pilot. (2) 18 November 1995 – A Sri Lanka Air Force Shaanxi Y-8 and a Mil Mi-24 were shot down near Palaly, killing four in the Y-8. 22 November 1995 – A Sri Lanka Air Force Antonov An-32 chartered from Kazakhstan was shot down near Jaffna, killing 63 troops. 22 January 1996 – A Sri Lanka Air Force Mi-17-1V was shot down by the LTTE, killing 34. 19 March 1996 – A Sri Lanka Air Force Mi-24 was shot down off the coast of Mullaitivu, killing seven. 20 July 1996 – A Sri Lanka Air Force Mil Mi-8 was shot down. (2) 10 November 1997 – A Sri Lanka Air Force Mi-24 was shot down, killing two, and a Mil Mi-17 crash-landed after being hit. 7 January 1998 – A Sri Lanka Air Force Mil Mi-17 was hit with an RPG and mortars and was destroyed. 26 June 1998 – A Sri Lanka Air Force Mi-24 was shot down south of Vavuniya, killing four. 17 December 1999 – A Sri Lanka Air Force Mi-24 was shot down near Parantan, killing four. 18 February 2000 – A Sri Lanka Air Force Bell 412 was shot down over Thenmaradchi, killing two. 24 May 2000 – A Sri Lanka Air Force Mi-24 was shot down, killing two. 19 October 2000 – A Sri Lanka Air Force Mi-24 was shot down near Nagar Kovil. 23 October 2000 – A Sri Lanka Air Force Mi-24 was shot down near Trincomalee harbour. 22 October 2007 – Raid on Anuradhapura Air Force Base: one Bell 212 gunship of the Sri Lankan army either suffered a mechanical failure or was shot down during the attack, killing all four of its crew members. (2) 20 February 2009 – 2009 suicide air raid on Colombo: two Zlin Z-143 light aircraft laden with explosives and flown by two LTTE suicide pilots were shot down by anti-aircraft fire. One impacted a building and detonated; the other crashed before it could reach its target. First Nagorno-Karabakh War (1988–1994) 20 November 1991 – 1991 Azerbaijani Mil Mi-8 shootdown. 28 January 1992 – 1992 Azerbaijani Mil Mi-8 shootdown: a civilian Azerbaijani helicopter of Azal airlines is shot down by MANPADS fire from Armenian forces.
3 March 1992 - A Russian Federation Mi-26 cargo helicopter, escorted by a Mi-24 attack helicopter, delivered food to an Armenian village in Polistan. On the way back, while evacuating civilians and wounded, the cargo helicopter was attacked by an Azerbaijani Mi-8; the escort thwarted the attack. However, MANPADS fire from the ground shot down the Mi-26 near the Azerbaijani village of Seidilyar. Of the 50 people on board, 12 were killed. 12 May 1992 - A Russian Federation Mi-26 is shot down by Armenian MANPADS fire in Tavush province, Armenia. Six crewmen died. 8 August 1992 - An Azerbaijani Mi-24 is shot down by Armenian ZU-23-2 anti-aircraft guns; one Armenian 57 mm S-60 gun was destroyed in the same engagement. 20 August 1992 - An Azerbaijani two-seat MiG-25PD is shot down. One of the pilots, Alexander Belichenko, a Ukrainian national, is captured by Armenian authorities and sentenced to death by the Constitutional Court of Armenia; diplomatic negotiations by the presidents of Russia, Armenia and Azerbaijan later secured a pardon for Belichenko and other mercenary pilots flying for Azerbaijan. 4 September 1992 - An Azerbaijani MiG-21 is shot down by Armenian fire; the pilot is captured. 12 September 1992 - An Armenian Mi-24 is shot down by Azerbaijani fire. 18 September 1992 - An Azerbaijani Mi-24 is shot down by Armenian anti-aircraft gunners. 10 October 1992 - An Azerbaijani Su-25 is shot down by Armenian fire at Malibeyli; the pilot was unable to eject and perished. 12 November 1992 - An Armenian Mi-24 is shot down by Azerbaijani fire. 7 December 1992 - An Azerbaijani Mi-24 is shot down by Armenian fire in the Martuni region. 7 December 1992 - An Azerbaijani Su-25 is shot down by Armenian fire in the Martuni region. 13 June 1992 - An Azerbaijani Su-25 piloted by Vagif Gurbanov was shot down. Gurbanov was killed and awarded the title National Hero of Azerbaijan. 15 January 1993 - An Azerbaijani MiG-21 was shot down by Armenian fire. 1 September 1993 - An Azerbaijani Mi-24 was shot down by Armenian fire. 18 January 1994 - An Armenian Su-25 is shot down by Azerbaijani fire. 17 February 1994 - An Azerbaijani MiG-21 is shot down in the Vardenis region of Armenia; the pilot is captured. 17 March 1994 – An Iranian Air Force C-130 was shot down by Armenian forces en route from Moscow to Iran. 23 April 1994 - An Azerbaijani attack on Stepanakert by seven Su-25s ends with one Su-25 shot down by air defences. The Azerbaijani side acknowledged the loss but described it as an accident. Later Nagorno-Karabakh conflict 1994-present 12 September 2011 - A UAV was reportedly shot down by the ARDA over the airspace of the unrecognized Republic of Artsakh. Preliminary investigations carried out by the ARDA determined the model to be a Hermes 450 drone. 12 November 2014 – An Armenian Mil Mi-24 is shot down by Azerbaijani forces, killing the crew of three. 2 April 2016 – During a clash between Azerbaijani and Armenian forces, an Azerbaijani Mil Mi-24 helicopter was shot down by Artsakh Republic forces. The downing was confirmed by the Azerbaijani defense ministry. 21 April 2020 – An Azerbaijani Orbiter-3 UAV was shot down by an Armenian 9K33 Osa missile system over Artsakh. (2) 27 September 2020 – The Defense Ministry of Azerbaijan confirmed the loss of one helicopter but said that the crew survived the crash. In late December 2020 Armenian social media published footage of an Azerbaijani Mi-17 helicopter crashing in Nagorno-Karabakh; the pilot, Lt.
Colonel Ramiz Gasimov, is seen ejecting from the aircraft; he died of his wounds on 22 October 2020, after a period in a coma. During the war Azerbaijan officially acknowledged losing two helicopters. 28 September 2020 – An Azerbaijani Antonov An-2 was shot down by Armenian anti-
aircraft artillery near the town of Martuni, Nagorno-Karabakh. 29 September 2020 – The Armenian Defense Ministry claimed that an Armenian Air Force Su-25 was shot down by a Turkish Air Force F-16, killing the pilot; Turkey denied the event. 4 October 2020 – An Azerbaijani Air Force Su-25 attack aircraft is shot down by Armenian forces while targeting Armenian positions in Fuzuli. The pilot, Col. Zaur Nudiraliyev, died in the crash. Azerbaijani officials acknowledged the loss in December 2020. 19 October 2020 – A Turkish-made Bayraktar TB2 operated by Azerbaijan is reported shot down by Armenian Army air defence weapons over Nagorno-Karabakh. 8 November 2020 – Another Azerbaijani Bayraktar TB2 was shot down by air defence weapons over southeastern Nagorno-Karabakh. 9 November 2020 - A Russian Mi-24 combat helicopter was shot down by Azerbaijani forces near the border with Armenia. Two crewmembers died and a third was wounded. The government of Azerbaijan stated the shootdown was an accident and offered an apology. Gulf War (1990–1991) Iraqi no-fly zones (1991–2003) 20 March 1991 – USAF F-15C vs. IRAF Su-22 – In accordance with the ceasefire, an F-15C shoots down an Iraqi Su-22 with an AIM-9 missile. 27 December 1992 – USAF F-16 vs. IRAF MiG-25 – A MiG-25 crossed into the no-fly zone and an F-16D shot it down with an AIM-120 AMRAAM missile. It is the first kill with an AIM-120, and also the first USAF F-16 kill. 17 January 1993 – USAF F-16 vs. IRAF MiG-23 – A USAF F-16C shoots down a MiG-23 after the MiG locks onto the F-16. (2) 14 April 1994 – UH-60 Black Hawk friendly fire shootdown incident. 23 December 2002 – USAF RQ-1 Predator vs. IRAF MiG-25 – In what was the last aerial victory for the Iraqi Air Force before Operation Iraqi Freedom, an Iraqi MiG-25 shot down an American RQ-1 Predator UAV after the drone had fired a Stinger missile at the Iraqi aircraft. Croatian War of Independence (1991–1995) 1992 European Community Monitor Mission helicopter downing (January 7, 1992) First Abkhazia War (1992-93) - 5 September 1992: A Georgian Army helicopter is shot down by 14.5 mm heavy machine-gun fire from Abkhazian fighters. - 14 December 1992: Georgian forces shot down a Russian Mi-8 helicopter using SA-14 MANPADS, killing 3 crew members and 56 passengers, mostly Russian refugees. - December 1992: Russian and Abkhazian forces shot down a Georgian Mi-8 with an SA-7 or SA-14 missile. - 15 January 1993: Georgian forces shot down a Russian Su-25 ground attack fighter near Tkvarcheli. - 15 January 1993: A Georgian Mi-8 helicopter is shot down in the area as well. - 19 March 1993: A Russian Air Force Su-27S flew to intercept two Georgian Su-25s approaching the Suchumi area; the Russian Su-27 was destroyed by an SA-2 missile. The pilot, Maj. Schipko, was killed. - 3 July 1993: Georgian forces shot down a Russian Su-25 over Suchumi.
- 4 July 1993: Georgian forces shot down two Russian aircraft, a Yak-52 reconnaissance aircraft and a Mi-8T, during the Siege of Tkvarcheli. - 4 July 1993: Russian and Abkhazian forces shot down a Georgian Su-25 with SA-14 fire over Nizhnaya Eshera. - 5 July 1993: Georgian forces reported the loss of a Su-25 to friendly fire. - 7 July 1993: Abkhazians shot down a Georgian Mi-8 evacuating refugees from Suchumi, killing 20 persons. - 30 September 1993: Abkhazians shot down another Georgian Mi-8 near Racaka. - 4 October 1993: Abkhazians shot down another Georgian Mi-8 transporting 60 refugees en route from Abkhazia to Svanetya. - December 1993: Abkhazians shot down a Georgian helicopter, likely a Mi-24, before the OSCE ceasefire. Bosnian War (1992–1995) 3 September 1992 – An Italian Air Force (Aeronautica Militare Italiana) G.222 was shot down while approaching Sarajevo airfield on a United Nations relief mission. It crashed some distance from the airfield; a NATO rescue mission was aborted when two USMC CH-53 helicopters came under small-arms fire. The cause of the crash was determined to be a surface-to-air missile, but it was not clear who fired it. Everyone on board, four Italian crew members and four French passengers, died in the crash. (5) 28 February 1994 – Banja Luka incident. 16 April 1994 – A Sea Harrier of 801 Naval Air Squadron, operating from the aircraft carrier HMS Ark Royal, was brought down by an Igla-1 surface-to-air missile fired by the Army of Republika Srpska while attempting to bomb two Bosnian Serb tanks over Gorazde. The pilot, Lieutenant Nick Richardson, ejected and landed in territory controlled by friendly Bosnian Muslims. 2 June 1995: A USAF F-16 piloted by Captain Scott O'Grady was shot down by a Serb SA-6. The pilot was rescued by Marines seven days after the shootdown. See Mrkonjić Grad incident. 30 August 1995 – One French Air Force Mirage 2000N-K2 was shot down over Bosnia by a heat-seeking 9K38 Igla MANPADS missile fired by air defence units of the Army of Republika Srpska during Operation Deliberate Force. Both pilots were captured by Serb forces. United Nations Operation in Somalia (1992–1995) (2) 3 October 1993 – Battle of Mogadishu (1993). Venezuelan coup d'état attempt (November 1992) (3) 27 November 1992 - Three OV-10 Bronco aircraft flown by rebel pilots were shot down over Caracas, at least one by a loyalist F-16. Aegean dispute (2) On 22 July 1974, during the Turkish invasion of Cyprus, a pair of Greek F-5As intercepted a pair of Turkish F-102s near Agios Efstratios. The aircraft engaged in a dogfight, during which one of the Turkish pilots fired a Falcon missile at one of the F-5As, piloted by Thomas Skampardonis. Skampardonis managed to evade the missile, and the other Greek pilot, Ioannis Dinopoulos, who up to that point had been undetected by the Turks, fired AIM-9B missiles. The first AIM-9 missed its target but the second shot down one of the F-102s. The pilot of the remaining F-102 became disoriented and fled westwards. When he realized his mistake, he turned east towards the Turkish coast but ran out of fuel; he was forced to ditch his aircraft and was fatally injured in the crash. On 18 June 1992, a Greek Mirage F1CG crashed near the island of Agios Efstratios in the northern Aegean during a low-altitude dogfight with two Turkish F-16s. Greek pilot Nikolaos Sialmas was killed in the crash. On 8 February 1995, a Turkish F-16C crashed into the sea after being intercepted by a Greek Mirage F1CG.
The Turkish pilot, Mustafa Yildirim, bailed out and was rescued by a Greek helicopter. After brief hospitalization in Rhodes, the pilot was handed over to the Turkish side. On 27 December 1995, a pair of Greek F-16Cs intercepted a pair of Turkish F-4Es. During the dogfight that followed, one of the Turkish aircraft went into a steep dive and crashed into the sea, killing its pilot, Altug Karaburun. The co-pilot, Ogur Kilar, managed to bail out safely and was rescued by a Greek AB-205 helicopter. He was returned to Turkey after receiving first-aid treatment in Lesbos. On 8 October 1996, seven months after the escalation of the dispute with Turkey over the Imia/Kardak islands, a Greek Mirage 2000 fired an R.550 Magic II missile and shot down a Turkish F-16D over the Aegean Sea. The Turkish pilot died, while the co-pilot ejected and was rescued by Greek forces. In August 2012, after the downing of an RF-4E off the Syrian coast, Turkish Defence Minister İsmet Yılmaz confirmed that the Turkish F-16D had been shot down by a Greek Mirage 2000 with an R.550 Magic II in 1996, after reportedly violating Greek airspace near Chios island. Greece denies that the F-16 was shot down; Athens says that the Turkish pilot reported a control failure. It also claims that the jet violated Greece's airspace, because one of the Turkish pilots was rescued within the Greek flight information region. Both Mirage 2000 pilots reported that the F-16 caught fire and that they saw one parachute. / On 23 May 2006, a Greek F-16 and a Turkish F-16 collided approximately 35 nautical miles south of the island of Rhodes, near the island of Karpathos, during a Turkish reconnaissance flight involving two F-16Cs and an RF-4. Greek pilot Kostas Iliakis was killed, whereas the Turkish pilot Halil İbrahim Özdemir bailed out and was rescued by a cargo ship. Insurgency in Ogaden (1994-2018) 18 July 2006 – An Ethiopian Air Force Mil Mi-8 was shot down by the Ogaden National Liberation Front near Gabo Gabo, killing 26. Cenepa War (1995) 29 January 1995 - A Peruvian Mil Mi-8 was shot down by Ecuadorian forces using a Blowpipe missile between Base Sur and Coangos, killing five. 7 February 1995 - A Peruvian Mil Mi-24 was shot down by Ecuadorian forces using a 9K38 Igla missile at Base Sur, killing three. (3) 11 February 1995 - Two Peruvian Sukhoi Su-22s and a Cessna A-37 Dragonfly were shot down by Ecuadorian Dassault Mirage F1s and IAI Kfirs. 11 February 1995 - An Ecuadorian A-37 was shot down by Peruvian forces using MANPADS. 17 February 1995 - A Peruvian Mil Mi-8 was hit by AAA fire and crash-landed. Eritrean–Ethiopian War (1998-2000) 2 June 1998 - An Ethiopian MiG-23BN was shot down by Eritrean anti-aircraft fire during a bombing run on Asmara International Airport. 6 June 1998 - An Ethiopian MiG-21 was shot down by Eritrean anti-aircraft fire. 6 June 1998 - An Eritrean Aermacchi MB-339 was shot down by Ethiopian fire north of Mekelle. 14 February 1999 - An Ethiopian Mi-24 attack helicopter either crashed or was shot down near Burre. 25 February 1999 - An Eritrean MiG-29 was shot down by an R-73 air-to-air missile fired from a Su-27; the MiG-29 crashed near Badme. 26 February 1999 - An Eritrean MiG-29 was shot down near Badme by an Ethiopian Su-27 piloted by Aster Tolossa. (2) On 26 February 1999, two Ethiopian MiG-21s were shot down by Eritrean MiG-29s. 15 May 2000 - An Ethiopian Mi-35 was shot down by Eritrean ZSU-23 fire while attacking a water tank near Barentu. 16 May 2000 - An Eritrean MiG-29 was shot down by Ethiopian Su-27s.
16 May 2000 - An Eritrean MiG-29 was damaged by an Ethiopian Su-27s and later crash landed at Asmara. NATO bombing of Yugoslavia (1999) (2) 24 March 1999 – two Yugoslav Air Force MiG-29 were shot down by two USAF F-15C with AMRAAM missiles. 24 March 1999 – During Operation Allied Force, Royal Netherlands Air Force F-16AM J-063 flown by Major Peter Tankink shot down one Yugoslavian MiG-29, flown by Lt. Colonel Milutinović, with an AMRAAM missile. The pilot of the stricken jet ejected safely. This marked the first air-to-air kill made by a Dutch fighter since WW2. (2) 26 March 1999 – two Yugoslavian MiG-29 were shot down by two USAF F-15C with AMRAAM missiles. 27 March 1999 – 1999 F-117A shoot-down An American F-117A Nighthawk stealth bomber was shot down over Belgrade by a Soviet made S-125E (NATO: SA-3). The pilot ejected safely and the plane's wreckage was recovered by Serbian special forces. It was the only stealth aircraft to be shot down by a surface to air missile. 2 May 1999 – a USAF F-16CG was shot down over Serbia. It was downed by an S-125 Neva SAM (NATO: SA-3) near Nakucani. Its pilot; Lt. Col David Goldfein, 555th Fighter Squadron commander, managed to eject and was later rescued by a combat search-and-rescue (CSAR) mission. The remains of this aircraft are on display in the Yugoslav Aeronautical Museum, Belgrade International Airport. 4 May 1999 – A lone Yugoslav MiG-29 flown by Lt. Col. Milenko Pavlović attempted to intercept a large NATO formation that was returning to base having just bombed Valjevo (the pilot's home town). It was engaged by a pair of USAF F-16CJs from the 78th Fighter Squadron and shot down with AIM-120, killing the pilot with the falling wreckage also being hit by a Strela 2M fired by the Yugoslav army in error. India–Pakistan military confrontation (1996, 1999 and 2019) 26 August 1996 – During the Siachen conflict over the disputed Siachen Glacier region in Kashmir, an Indian Mi-17 helicopter was shot down by Pakistani forces with a surface to air missile. Four crew members died in the crash. 2 July 1997 – During the Siachen conflict over the disputed Siachen Glacier region in Kashmir, an Indian HAL Cheetah helicopter was shot down by Pakistani forces. Both pilots died in the crash. 27 May 1999 – During the Kargil War in the Kashmir region, one Indian Air Force MiG-27 was lost to an engine problem. Its wingman, flying in a MiG-21 was shot down by a MANPADS while trying to locate the downed MiG-27 pilot. 28 May 1999 - An Indian Air Force strike formation composed by four Mi-17 helicopters came under fire by MANPADS, one was hit and shot down, killing all four on board. 10 August 1999 – Pakistan Naval Air Arm Atlantique shootdown. The Atlantique plane was shot down by an IAF MiG-21 of the 45th Indian Air Force Squadron using a R-60 infrared homing missile. 27 February 2019 – India confirmed that it lost one MiG-21 from the 51st fighter squadron in an air skirmish with the Pakistan Air Force (PAF). 4 March 2019 - Sukhoi Su-30MKI of the Indian Air Force shot down a Pakistani drone in Bikaner, Rajasthan at 11:30 am (local time). Another Pakistan surveillance drone was shot down by SPYDER missile defence system in Gujrat on 26 February 2019. Second Chechen War (1999–2009) Khankala Mi-26 shootdown (August 19, 2002) War in Afghanistan (2001–2021) Iraq War (2003–2011) Chadian Civil War (2005–2010) 28 November 2006 – Chadian Air Force plane was shot down by UFDD rebels near the town of Abeche. 
Rebels also claimed to have shot down a helicopter which was not confirmed by the government. 2006 Lebanon War 12 August 2006 - Hezbollah fighters shot down an Israeli CH-53 Yas'ur with an anti-tank missile, killing five air crew members. Mexican drug war (2006-present) 1 May 2015 – A Mexican airforce Eurocopter EC725 (sometimes incorrectly referred to as a Blackhawk) was shot down by Jalisco New Generation Cartel using RPG-7s. The helicopter crashed, killed eight on board. Russo-Georgian War (2008) (3) 20 April 2008 – Georgian officials claimed a Russian MiG-29 shot down a Georgian Hermes 450 unmanned aerial vehicle and provided video footage from the ill-fated drone showing an apparent MiG-29 launching an air-to-air missile at it. Russia denies that the aircraft was theirs and says they did not have any pilots in the air that day. Abkhazia's administration claimed its own forces shot down the drone with an L-39 aircraft "because it was violating Abkhaz airspace and breaching ceasefire agreements". UN investigation concluded that the video was authentic and that the drone was shot down by a Russian MiG-29 or Su-27 using a R-73 heat seeking missile. 8 August 2008 – The first Russian Air Force loss of the campaign was a Su-25, piloted by Lieutenant Colonel Oleg Terebunsky of the 368th Attack Aviation Regiment. It was shot down over South Ossetia near the Zarsk pass, between Dzhava and Tskhinvali. It was hit by friendly fire, a MANPADS missile fired by South Ossetian militia at around 18:00. Earlier in the day, a flight of four Georgian Air Force Su-25 planes had attacked a Russian army convoy in the same area. This was one of the few missions conducted by Georgia's Su-25s during the brief conflict Georgia believed its aircraft would soon become easy targets for Russian interceptors. The Georgian aircraft returned to their bases and were hidden under camouflage netting to prevent them from being located. 9 August 2008 – a Russian Tu-22M3 was shot down in South Ossettia by a Georgian Buk-M1 surface-to-air-missile system during the Russo-Georgian War. Three of the four crew members were killed, while the co-pilot was taken POW by Georgian forces. 9 August 2008 - A Russian Su-24 was shot down by Georgian air defense forces with an anti-aircraft missile south of Tskhinvali during the morning. Both pilots ejected, but the co-pilot died impacting the ground when his parachute was damaged by fire. The wounded pilot was captured by Georgian forces. This loss was not initially acknowledged by Russia, while verified later by independent sources. The captured pilot, Major Igor Zinov, was shown on Georgian TV while being hospitalized together with the co-pilot of the downed Tu-22MR. 9 August 2008 – A Russian Su-25 piloted by Colonel Sergey Kobylash, commander of the 368th Attack Aviation Regiment, was hit by a Georgian MANPADS during a daylight strafing run on a Georgian military formation south of Tskhinvali, on the Gori-Tskhinvali road at 10:30: after making his initial approach, Kolybash's aircraft was struck by a missile that hit his left engine, destroying it. Not long after, as Kobylash was returning to base at an altitude of 1000 meters, a second MANPADS missile struck his right engine, leaving the plane without thrust. Kobylash was able to glide to Russian controlled territory before ejecting north of Tskhinvali in a South Ossetian village of the Georgian enclave in the Great Liakh gorge, where he was recovered by a Russian combat search and rescue team. 
Shortly after Kobylash was rescued, South Ossetian militants claimed they had downed a Georgian Su-25; however, the Georgian Air Force did not confirm any such loss.
Many of the signs shown above are shared by both Sunni and Shia beliefs, with some exceptions, e.g. Imam Al-Mahdi defeating Al-Masih ad-Dajjal. Concepts and terminology in Shia eschatology include Mi'ad, the Occultation, Al-Yamani, and Sufyani. In Twelver Shia narrations about the last days, the literature largely revolves around Muhammad al-Mahdi, who is considered by many beliefs to be the true twelfth appointed successor to Muhammad. Muhammad al-Mahdi will help mankind against the deception by the Dajjal who will try to get people in to a new world religion which is called "the great deception". Ahmadiyya Ahmadiyya is considered distinct from mainstream Islam. In its writing, the present age has been witness to the evil of man and wrath of God, with war and natural disaster. Ghulam Ahmad is seen as the promised Messiah and the Mahdi, fulfilling Islamic and Biblical prophecies, as well as scriptures of other religions such as Hinduism. His teaching will establish spiritual reform and establish an age of peace. This will continue for a thousand years, and will unify mankind under one faith. Ahmadis believe that despite harsh and strong opposition and discrimination they will eventually be triumphant and their message vindicated both by Muslims and non-Muslims alike. Ahmadis also incorporate the eschatological views from other religions into their doctrine and believe Mirza Ghulam Ahmed falls into this sequence. Baháʼí Faith In the Baháʼí Faith, creation has neither a beginning nor an end; Baháʼís regard the eschatologies of other religions as symbolic. In Baháʼí belief, human time is marked by a series of progressive revelations in which successive messengers or prophets come from God. The coming of each of these messengers is seen as the day of judgment to the adherents of the previous religion, who may choose to accept the new messenger and enter the "heaven" of belief, or denounce the new messenger and enter the "hell" of denial. In this view, the terms "heaven" and "hell" become symbolic terms for a person's spiritual progress and their nearness to or distance from God. In Baháʼí belief, Bahá'u'lláh (1817-1892), the founder of the Baháʼí Faith, was the Second Coming of Christ and also the fulfilment of previous eschatological expectations of Islam and other major religions. The inception of the Baháʼí Faith coincides with Great Disappointment of the Millerite prophesy in 1844. ʻAbdu'l-Bahá taught that Armageddon would begin in 1914, but without a clear indication of its end date. Baháʼís believe that the mass martyrdom anticipated during the End Times had already passed within the historical context of the Baháʼí Faith. Baháʼís expect their faith to be eventually embraced by the masses of the world, ushering in a golden age. Rastafari Rastafari have a unique interpretation of end times, based on the Old Testament and the Book of Revelation. They believe Ethiopian Emperor Haile Selassie I to be God incarnate, the King of kings and Lord of lords mentioned in Revelation 5:5. They saw the crowning of Selassie as the second coming, and the Second Italo-Ethiopian War as fulfillment of Revelation. There is also the expectation that Selassie will return for a day of judgment and bring home the "lost children of Israel", which in Rastafari refers to those taken from Africa through the slave trade. There will then be an era of peace and harmony at Mount Zion in Africa. 
preterism, this was a fulfillment of the prophecies. However, according to Futurists, their destruction in AD 70 put the prophetic timetable on hold. Many such believers therefore anticipated the return of Jews to Israel and the reconstruction of the Temple before the Second Coming could occur. Post-tribulation pre-millennialism A view of the Second Coming of Christ as held by post-tribulational pre-millennialists holds that the Church of Christ will have to undergo great persecution by being present during the great tribulation. Specific prophetic movements In 1843, William Miller made the first of several predictions that the world would end in only a few months. As his predictions did not come true (referred to as the Great Disappointment), followers of Miller went on to found separate groups, the most successful of which is the Seventh-day Adventist Church. Members of the Baháʼí Faith believe Miller's interpretation of signs and dates of the coming of Jesus were, for the most part, correct. They believe the fulfillment of biblical prophecies of the coming of Christ came through a forerunner of their own religion, the Báb. According to the Báb's words, 4 April 1844 was "the first day that the Spirit descended" into his heart. His subsequent declaration to Mullá Husayn-i Bushru'i that he was the "Promised One"—an event now commemorated by Baháʼís as a major holy day—took place on 23 May 1844. It was in October of that year that the Báb embarked on a pilgrimage to Mecca, where he openly declared his claims to the Sharif of Mecca. The first news coverage of these events in the West was in 1845 by The Times, followed by others in 1850 in the United States. The first Baháʼí to come to America was in 1892. Several Baháʼí books and pamphlets make mention of the Millerites, the prophecies used by Miller and the Great Disappointment, most notably William Sears's Thief in the Night. Restorationism (Christian primitivism) End times theology is also significant to restorationist Christian religions, which consider themselves distinct from both Catholicism and Protestantism. Jehovah's Witnesses The eschatology of Jehovah's Witnesses is central to their religious beliefs. They believe Jesus Christ has been ruling in heaven as king since 1914 (a date they believe was prophesied in the Bible) and that after that time a period of cleansing occurred, resulting in God's selection of the Bible Students associated with Charles Taze Russell as his people in 1919. They also believe that the destruction of those who reject the Bible's message and thus willfully refuse to obey God will shortly take place at Armageddon, ensuring that the beginning of the new earthly society will be composed of willing subjects of that kingdom. The religion's doctrines surrounding 1914 are the legacy of a series of emphatic claims regarding the years 1799, 1874, 1878, 1914, 1918 and 1925 made in the Watch Tower Society's publications between 1879 and 1924. Claims about the significance of those years, including the presence of Jesus Christ, the beginning of the "last days", the destruction of worldly governments and the earthly resurrection of Jewish patriarchs, were successively abandoned. In 1922 the society's principal magazine, The Watchtower, described its chronology as "no stronger than its weakest link", but also claimed the chronological relationships to be "of divine origin and divinely corroborated... 
in a class by itself, absolutely and unqualifiedly correct" and "indisputable facts", and repudiation of Russell's teachings was described as "equivalent to a repudiation of the Lord". The Watch Tower Society has acknowledged its early leaders promoted "incomplete, even inaccurate concepts". The Governing Body of Jehovah's Witnesses says that, unlike Old Testament prophets, its interpretations of the Bible are not inspired or infallible. It says that Bible prophecies can be fully understood only after their fulfillment, citing examples of biblical figures who did not understand the meaning of prophecies they received. Watch Tower Society literature often cites Proverbs 4:18, "The path of the righteous ones is like the bright light that is getting lighter and lighter until the day is firmly established" (NWT) to support their view that there would be an increase in knowledge during "the time of the end", and that this increase in knowledge needs adjustments. Watch Tower Society publications also say that unfulfilled expectations are partly due to eagerness for God's Kingdom and that they do not call their core beliefs into question. The Church of Jesus Christ of Latter-day Saints Members of The Church of Jesus Christ of Latter-day Saints (LDS Church) believe there will be a Second Coming of Jesus to the earth at some time in the future. The LDS Church and its leaders do not make any predictions of the date of the Second Coming. According to church doctrine, the true gospel will be taught in all parts of the world prior to the Second Coming. They also believe there will be increasing war, earthquakes, hurricanes, and man-made disasters prior to the Second Coming. Disasters of all kind will happen before Christ comes. Upon the return of Jesus Christ, all people will be resurrected, the righteous in a first resurrection and the unrighteous in a second, later resurrection. Christ shall reign for a period of 1000 years, after which the Final Judgement will occur. Realized eschatology Realized eschatology is a Christian eschatological theory that holds that the eschatological passages in the New Testament do not refer to the future, but instead refer to the ministry of Jesus and his lasting legacy. Islam Muslims believe there are three periods before the Day of Judgment with some debate as to whether the periods could overlap. Sunni Sunnis believe the dead will then stand in a grand assembly, awaiting a scroll detailing their righteous deeds, sinful acts and ultimate judgment. Muhammad will be the first to be resurrected. Punishments will include adhab, or severe pain and embarrassment, and khizy or shame. There will also be a punishment of the grave between death and the resurrection. Several Sunni scholars explain some of the signs metaphorically. The signs of the coming end time are divided into major and minor signs: Following the second period, the third is said to be marked by the ten major signs known as alamatu's-sa'ah al- kubra (The major signs of the end). They are as follows: A huge black cloud of smoke (dukhan) will cover the earth. Three sinkings of the earth, one in the East. One sinking of the earth in the West. One sinking of the earth in Arabia. The false messiah—anti-Christ, Masih ad-Dajjal—shall appear with great powers as a one-eyed man with his right eye blind and deformed like a grape. Although believers will not be deceived, he will claim to be God, to hold the keys to heaven and hell, and will lead many astray. In reality, his heaven is hell, and his hell is heaven. 
The Dajjal will be followed by seventy thousand Jews of Isfahan wearing Persian shawls. The return of Isa (Jesus), from the fourth sky, to kill Dajjal. Ya'jooj and Ma'jooj (Gog and Magog), a Japhetic tribe of vicious beings who had been imprisoned by Dhul-Qarnayn, will break out. They will ravage the earth, drink all the water of Lake Tiberias, and kill all believers in their way. Isa, Imam Al-Mahdi, and the believers with them will go to the top of a mountain and pray for the destruction of Gog and Magog. God eventually will send disease and worms to wipe them out. The sun will rise from the West. The Dabbat al-ard, or Beast of the Earth, will come out of the ground to talk to people. The second blow of the trumpet will be sounded, the dead will return to life, and a fire will come out of Yemen that shall gather all to Mahshar Al Qiy'amah (The Gathering for Judgment). Shia Many of the signs shown above are shared by both Sunni and Shia beliefs, with some exceptions, e.g. Imam Al-Mahdi defeating Al-Masih ad-Dajjal. Concepts and terminology in Shia eschatology include Mi'ad, the Occultation, Al-Yamani, and Sufyani. In Twelver Shia narrations about the last days, the literature largely revolves around Muhammad al-Mahdi, who is considered by many to be the true twelfth appointed successor to Muhammad. Muhammad al-Mahdi will help mankind against the deception of the Dajjal, who will try to draw people into a new world religion called "the great deception". Ahmadiyya Ahmadiyya is considered distinct from mainstream Islam. In its writings, the present age has been witness to the evil of man and the wrath of God, with war and natural disaster. Ghulam Ahmad is seen as the promised Messiah and the Mahdi, fulfilling Islamic and Biblical prophecies, as well as scriptures of other religions such as Hinduism. His teaching will bring about spiritual reform and establish an age of peace. This will continue for a thousand years, and will unify mankind under one faith. Ahmadis believe that despite harsh opposition and discrimination they will eventually be triumphant and their message vindicated by Muslims and non-Muslims alike. Ahmadis also incorporate the eschatological views of other religions into their doctrine and believe that Mirza Ghulam Ahmad falls into this sequence. Baháʼí Faith In the Baháʼí Faith, creation has neither a beginning nor an end; Baháʼís regard the eschatologies of other religions as symbolic. In Baháʼí belief, human time is marked by a series of progressive revelations in which successive messengers or prophets come from God. The coming of each of these messengers is seen as the day of judgment for the adherents of the previous religion, who may choose to accept the new messenger and enter the "heaven" of belief, or denounce the new messenger and enter the "hell" of denial. In this view, the terms "heaven" and "hell" become symbolic terms for a person's spiritual progress and their nearness to or distance from God. In Baháʼí belief, Bahá'u'lláh (1817–1892), the founder of the Baháʼí Faith, was the Second Coming of Christ and also the fulfilment of previous eschatological expectations of Islam and other major religions. The inception of the Baháʼí Faith coincided with the Great Disappointment of the Millerite prophecy in 1844. ʻAbdu'l-Bahá taught that Armageddon would begin in 1914, but without a clear indication of its end date. 
Baháʼís believe that the mass martyrdom anticipated during the End Times had already passed within the historical context of the Baháʼí Faith. Baháʼís expect their faith to be eventually embraced by the masses of the world, ushering in a golden age. Rastafari Rastafari have a unique interpretation of end times, based on the Old Testament and the Book of Revelation. They believe Ethiopian Emperor Haile Selassie I to be God incarnate, the King of kings and Lord of lords mentioned in Revelation 5:5. They saw the crowning of Selassie as the second coming, and the Second Italo-Ethiopian War as fulfillment of Revelation. There is also the expectation that Selassie will return for a day of judgment and bring home the "lost children of Israel", which in Rastafari refers to those taken from Africa through the slave trade. There will then be an era of peace and harmony at Mount Zion in Africa. Cyclic cosmology Hinduism The Vaishnavite tradition links contemporary Hindu eschatology to the figure of Kalki, the tenth and last avatar of Vishnu. Many Hindus believe that before the age draws to a close, Kalki will reincarnate as Shiva and simultaneously dissolve and regenerate the universe. In contrast, Shaivites hold the view that Shiva is incessantly destroying and creating the world. In Hindu eschatology, time is cyclic and consists of kalpas. Each lasts 4.1–8.2 billion years, which is a period of one full day and night for Brahma, who will be alive for 311 trillion, 40 billion years. Within a kalpa there are periods of creation, preservation and decline. After this larger cycle, all of creation will contract to a singularity and then again will expand from that single point, as the ages continue in a religious fractal pattern. Within the current kalpa, there are four epochs that encompass the cycle. They progress from a beginning of complete purity to a descent into total corruption. The last of the four ages is Kali Yuga (which most Hindus believe is the current time), characterized by quarrel, hypocrisy, impiety, violence and decay. The four pillars of dharma will be reduced to one, with truth being all that remains. As written in the Gita: Yadā yadā hi dharmasya glānirbhavati Bhārata Abhyutthānam adharmasya tadātmānam sṛjāmyaham Whenever there is decay of righteousness in Bharata (Aryavarta) And a rise of unrighteousness then I manifest Myself! At this time of chaos, the final avatar, Kalki, endowed with eight superhuman faculties will appear on a white horse. Kalki will amass an army to "establish righteousness upon the earth" and leave "the minds of the people as pure as crystal." At the completion of Kali Yuga, the next Yuga Cycle will begin with a new Satya Yuga, in which all will once again be righteous with the reestablishment of dharma. This, in turn, will be followed by epochs of Treta Yuga, Dvapara Yuga and again another Kali Yuga. This cycle will then repeat till the larger cycle of existence under Brahma returns to the singularity, and a new universe is born. The cycle of birth, growth, decay, and renewal at the individual level finds its echo in the cosmic order, yet is affected by vagaries of divine intervention in Vaishnavite belief. Buddhism There is no classic account of beginning or end in Buddhism; Masao Abe attributes this to the absence of God. History is embedded in the continuing process of samsara or the "beginningless and endless cycles of birth-death-rebirth". Buddhists believe there is an end to things but it is not final because they are bound to be born again. 
However, the writers of Mahayana Buddhist scriptures establish a specific end-time account in Buddhist tradition: this describes the return of Maitreya Buddha, who would bring about an end to the world. This constitutes one of the two major branches of Buddhist eschatology, with the other being the Sermon of the Seven Suns. End time in Buddhism could also involve a cultural eschatology covering "final things", which include the idea that Sakyamuni Buddha's dharma will also come to an end. Maitreya The Buddha described his teachings disappearing five thousand years from when he preached them, corresponding approximately to the year 4300 since he was born in 623 BCE. At this time, knowledge of dharma will be lost as well. The last of his relics will be gathered in Bodh Gaya and cremated. There will be a new era in which the next Buddha Maitreya will appear, but it will be preceded by the degeneration of human society. This will be a period of greed, lust, poverty, ill will, violence, murder, impiety, physical weakness, sexual depravity and societal collapse, and even the Buddha himself will be forgotten. This will be followed by the coming of Maitreya when the teachings of dharma are forgotten. Maitreya was the first Bodhisattva around whom a cult developed, in approximately the 3rd century CE. The earliest known mention of Maitreya occurs in the Cakavatti, or Sihanada Sutta in Digha Nikaya 26 of the Pali Canon. In it, Gautama Buddha predicted his teachings of dharma would be forgotten after 5,000 years. The text then foretells the birth of Maitreya Buddha in the city of Ketumatī in present-day Benares, whose king will be the Cakkavattī Sankha. Sankha will live in the former palace of King Mahāpanadā, and will become a renunciate who follows Maitreya. In Mahayana Buddhism, Maitreya will attain bodhi in seven days, the minimum period, by virtue of his many lifetimes of preparation. Once Buddha, he will rule over the Ketumati Pure Land, an earthly paradise sometimes associated with the Indian city of Varanasi or Benares in present-day Uttar Pradesh. In Mahayana Buddhism, a Buddha presides over a land of purity; for example, Amitabha presides over Sukhavati, more popularly known as the "Western Paradise". A notable teaching he will rediscover is that of the ten non-virtuous deeds: killing, stealing, sexual misconduct, lying, divisive speech, abusive speech, idle speech, covetousness, harmful intent and wrong views. These will be replaced by the ten virtuous deeds, namely the abandonment of each of these practices. Edward Conze in his Buddhist Scriptures (1959) gives an account of Maitreya: Maitreya currently resides in Tushita, but will come to Jambudvipa when needed most as successor to the historic Śākyamuni Buddha. Maitreya will achieve complete enlightenment during his lifetime, and following this reawakening he will bring back the timeless teaching of dharma to this plane and rediscover enlightenment. The Arya Maitreya Mandala, founded in 1933 by Lama Anagarika Govinda, is based on the idea of Maitreya. Maitreya eschatology forms the central canon of the White Lotus Society, a religious and political movement which emerged in Yuan China. It later branched into the Chinese underground criminal organization known as the Triads, which exist today as an international criminal network. Note that no description of Maitreya occurs in any other sutta in the canon, casting doubt as to the authenticity of the scripture. 
In addition, sermons of the Buddha normally are in response to a question, or in a specific context, but this sutta has a beginning and an ending, and its content is quite different from the others. This has led some to conclude that the whole sutta is apocryphal, or tampered with. Sermon of the Seven Suns In his "Sermon of the Seven Suns" in the Pali Canon, the Buddha describes the ultimate fate of the Earth in an apocalypse characterized by the consequent appearance of seven suns in the sky, each causing progressive ruin till the planet is destroyed: The canon goes on to describe the progressive destruction of each sun. The third sun will dry the Ganges River and other rivers, whilst the fourth will cause the lakes to evaporate; the fifth will dry the oceans. Later: The sermon completes with the Earth immersed into an extensive holocaust. The Pali Canon does not indicate when this will happen relative to Maitreya. Norse mythology Norse mythology depicts the end of days as Ragnarök, an Old Norse term translatable as "twilight of the gods". It will be heralded by a devastation known as Fimbulvetr which will seize Midgard in cold and darkness. The sun and moon will disappear from the sky, and poison will fill the air. The dead will rise from the ground and there will be widespread despair. There follows a battle between – on the one hand – the Gods with the Æsir, Vanir and Einherjar, led by Odin, and – on the other hand – forces of Chaos, including the fire giants and jötunn, led by Loki. In the fighting Odin will be swallowed whole by his old nemesis Fenrir. The god Freyr fights Surtr but loses. Víðarr, son of Odin, will then avenge his father by ripping Fenrir's jaws apart and stabbing the wolf in the heart with his spear. The serpent Jörmungandr will open its gaping maw and be met in combat by Thor. Thor, also a son of Odin, will defeat the serpent, only to take nine steps afterwards before collapsing in his own death. After this people will flee their homes as the sun blackens and the earth sinks into the sea. The stars will vanish, steam will rise, and flames will touch the heavens. This conflict will result in the deaths of most of the major Gods and forces of Chaos. Finally, Surtr will fling fire across the nine worlds. The ocean will then completely submerge Midgard. After the cataclysm, the world will resurface new and fertile, and the surviving Gods will meet. Baldr, another son of Odin, will be reborn in the new world, according to Völuspá. The two human survivors, Líf and Lífþrasir, will then repopulate this new earth. No end times Taoism The Taoist faith is not concerned with what came before or after life, knowing only their own being in the Tao. The philosophy is that people come and go, just like mountains, trees and stars, but Tao will go on for time immemorial. Analogies in science and philosophy Researchers in futures studies and transhumanists investigate how the accelerating rate of scientific progress may lead to a "technological singularity" in the future that would profoundly and unpredictably change the course of human history, and result in Homo sapiens no longer being the dominant life form on Earth. Occasionally the term "physical eschatology" is applied to
Further councils recognised as ecumenical by some Eastern Orthodox Eastern Orthodox catechisms teach that there are seven ecumenical councils and there are feast days for seven ecumenical councils. Nonetheless, some Eastern Orthodox consider events like the Council of Constantinople of 879–880, that of Constantinople in 1341–1351 and that of Jerusalem in 1672 to be ecumenical: Council in Trullo (692) debated ritual observance and clerical discipline in different parts of the Christian Church. Fourth Council of Constantinople (Eastern Orthodox) (879–880) restored Photius to the See of Constantinople. This happened after the death of Ignatius and with papal approval. Fifth Council of Constantinople (1341–1351) affirmed hesychastic theology according to Gregory Palamas and condemned Barlaam of Seminara. Synod of Iași (1642) reviewed and amended Peter Mogila's Expositio fidei (Statement of Faith, also known as the Orthodox Confession). Synod of Jerusalem (1672) defined Orthodoxy relative to Catholicism and Protestantism, and defined the Orthodox Biblical canon. Synod of Constantinople (1872) addressed nationalism, or phyletism, within the unity of Orthodoxy. It is unlikely that formal ecumenical recognition will be granted to these councils, despite the acknowledged orthodoxy of their decisions, so that seven are universally recognized among the Eastern Orthodox as ecumenical. The 2016 Pan-Orthodox Council was sometimes referred to as a potential "Eighth Ecumenical Council" following debates on several issues facing Eastern Orthodoxy; however, not all autocephalous churches were represented. Acceptance of the councils Although some Protestants reject the concept of an ecumenical council establishing doctrine for the entire Christian faith, Catholics, Lutherans, Anglicans, Eastern Orthodox and Oriental Orthodox all accept the authority of ecumenical councils in principle. Where they differ is in which councils they accept and what the conditions are for a council to be considered "ecumenical". The relationship of the Papacy to the validity of ecumenical councils is a ground of controversy between Catholicism and the Eastern Orthodox Churches. The Catholic Church holds that recognition by the Pope is an essential element in qualifying a council as ecumenical; Eastern Orthodox view approval by the Bishop of Rome (the Pope) as being roughly equivalent to that of other patriarchs. Some have held that a council is ecumenical only when all five patriarchs of the Pentarchy are represented at it. Others reject this theory in part because there were no patriarchs of Constantinople and Jerusalem at the time of the first ecumenical council. Catholic Church Both the Catholic and Eastern Orthodox churches recognize seven councils in the early centuries of the church, but Catholics also recognize fourteen councils in later times called or confirmed by the Pope. At the urging of German King Sigismund, who was to become Holy Roman Emperor in 1433, the Council of Constance was convoked in 1414 by Antipope John XXIII, one of three claimants to the papal throne, and was reconvened in 1415 by the Roman Pope Gregory XII. The Council of Florence is an example of a council accepted as ecumenical in spite of being rejected by the East, as the Councils of Ephesus and Chalcedon are accepted in spite of being rejected respectively by the Church of the East and Oriental Orthodoxy. The Catholic Church teaches that an ecumenical council is a gathering of the College of Bishops (of which the Bishop of Rome is an essential part) to exercise in a solemn manner its supreme and full power over the whole Church. 
It holds that "there never is an ecumenical council which is not confirmed or at least recognized as such by Peter's successor". Its present canon law requires that an ecumenical council be convoked and presided over, either personally or through a delegate, by the Pope, who is also to decide the agenda; but the church makes no claim that all past ecumenical councils observed these present rules, declaring only that the Pope's confirmation or at least recognition has always been required, and saying that the version of the Nicene Creed adopted at the First Council of Constantinople (381) was accepted by the Church of Rome only seventy years later, in 451. Eastern Orthodox Church The Eastern Orthodox Church accepts seven ecumenical councils, with the disputed Council in Trullo—rejected by Catholics—being incorporated into, and considered as a continuation of, the Third Council of Constantinople. To be considered ecumenical, Orthodox accept a council that meets the condition that it was accepted by the whole church. That it was called together legally is also an important factor. A case in point is the Third Ecumenical Council, where two groups met as duly called for by the emperor, each claiming to be the legitimate council. The Emperor had called for bishops to assemble in the city of Ephesus. Theodosius did not attend but sent his representative Candidian to preside. However, Cyril managed to open the council over Candidian's insistent demands that the bishops disperse until the delegation from Syria could arrive. Cyril was able to completely control the proceedings, completely neutralizing Candidian, who favored Cyril's antagonist, Nestorius. When the pro-Nestorius Antiochene delegation finally arrived, they decided to convene their own council, over which Candidian presided. The proceedings of both councils were reported to the emperor, who decided ultimately to depose Cyril, Memnon and Nestorius. Nonetheless, the Orthodox accept Cyril's group as being the legitimate council because it maintained the same teaching that the church has always taught. Paraphrasing a rule by St Vincent of Lérins, Hasler states Orthodox believe that councils could over-rule or even depose popes. At the Sixth Ecumenical Council, Pope Honorius and Patriarch Sergius were declared heretics. The council anathematized them and declared them tools of the devil and cast them out of the church. It is their position that, since the Seventh Ecumenical Council, there has been no synod or council of the same scope. Local meetings of hierarchs have been called "pan-Orthodox", but these have invariably been simply meetings of local hierarchs of whatever Eastern Orthodox jurisdictions are party to a specific local matter. From this point of view, there has been no fully "pan-Orthodox" (Ecumenical) council since 787. The use of the term "pan-Orthodox" is confusing to those not within Eastern Orthodoxy, and it leads to mistaken impressions that these are ersatz ecumenical councils rather than purely local councils to which nearby Orthodox hierarchs, regardless of jurisdiction, are invited. Others, including 20th-century theologians Metropolitan Hierotheos (Vlachos) of Naupactus, Fr. John S. Romanides, and Fr. George Metallinos (all of whom refer repeatedly to the "Eighth and Ninth Ecumenical Councils"), Fr. 
George Dragas, and the 1848 Encyclical of the Eastern Patriarchs (which refers explicitly to the "Eighth Ecumenical Council" and was signed by the patriarchs of Constantinople, Jerusalem, Antioch, and Alexandria as well as the Holy Synods of the first three), regard other synods beyond the Seventh Ecumenical Council as being ecumenical. From the Eastern Orthodox perspective, a council is accepted as being ecumenical if it is accepted by the Eastern Orthodox church at large—clergy, monks and assembly of believers. Teachings from councils that purport to be ecumenical, but which lack this acceptance by the church at large, are, therefore, not considered ecumenical. Oriental Orthodoxy Oriental Orthodoxy accepts three ecumenical councils, the First Council of Nicaea, the First Council of Constantinople, and the Council of Ephesus. The formulation of the Chalcedonian Creed caused a schism in the Alexandrian and Syriac churches. Reconciliatory efforts between Oriental Orthodox with the Eastern Orthodox and the Catholic Church in the mid- and late 20th century have led to common Christological declarations. The Oriental and Eastern Churches have also been working toward reconciliation as a consequence of the ecumenical movement. The Oriental Orthodox hold that the Dyophysite formula of two natures formulated at the Council of Chalcedon is inferior to the Miaphysite formula of "One Incarnate Nature of God the Word" (Byzantine Greek: Mia physis tou theou logou sarkousomene) and that the proceedings of Chalcedon themselves were motivated by imperial politics. The Alexandrian Church, the main Oriental Orthodox body, also felt unfairly underrepresented at the council following the deposition of their Pope, Dioscorus of Alexandria at the council. Church of the East The Church of the East accepts two ecumenical councils, the First Council of Nicaea and the First Council of Constantinople. It was the formulation of Mary as the Theotokos which caused a schism with the Church of the East, now divided between the Assyrian Church of the East and the Ancient Church of the East, while the Chaldean Catholic Church entered into full communion with Rome in the 16th century. Meetings between Pope John Paul II and the Assyrian Patriarch Mar Dinkha IV led to a common Christological declaration on 11 November 1994 that "the humanity to which the Blessed Virgin Mary gave birth always was that of the Son of God himself". Both sides recognised the legitimacy and rightness, as expressions of the same faith, of the Assyrian Church's liturgical invocation of Mary as "the Mother of Christ our God and Saviour" and the Catholic Church's use of "the Mother of God" and also as "the Mother of Christ". Protestantism Lutheran Churches The Lutheran World Federation, in ecumenical dialogues with the Ecumenical Patriarch of Constantinople,
of the original seven ecumenical councils as recognized in whole or in part were called by an emperor of the Eastern Roman Empire and all were held in the Eastern Roman Empire, a recognition denied to other councils similarly called by an Eastern Roman emperor and held in his territory, in particular the Council of Serdica (343), the Second Council of Ephesus (449) and the Council of Hieria (754), which saw themselves as ecumenical or were intended as such. The First Council of Nicaea (325) repudiated Arianism, declared that Christ is "homoousios with the Father" (of the same substance as the Father), and adopted the original Nicene Creed; fixed Easter date; recognised authority of the sees of Rome, Alexandria and Antioch outside their own civil provinces and granted the see of Jerusalem a position of honour. The First Council of Constantinople (381) repudiated Arianism and Macedonianism, declared that Christ is "born of the Father before all time", revised the Nicene Creed in regard to the Holy Spirit. The Council of Ephesus (431) repudiated Nestorianism, proclaimed the Virgin Mary as the Theotokos ("Birth-giver to God", "God-bearer", "Mother of God"), repudiated Pelagianism, and reaffirmed the Nicene Creed.This and all the following councils in this list are not recognised by all of the Church of the East. The Second Council of Ephesus (449) received Eutyches as orthodox based on his petition outlining his confession of faith. Deposed Theodoret of Cyrrhus and Ibas of Edessa. Condemned Ibas's Letter to "Maris the Persian" (possibly a misunderstood title, indicating as the receiver a certain Catholicus Dadyeshu, bishop of Ardashir/Ctesiphon between 421-56; this same letter later became one of the Three Chapters).Though originally convened as an ecumenical council, this council is not recognised as ecumenical and is denounced as a Robber Council by the Chalcedonians (Catholics, Eastern Orthodox, Protestants). The Council of Chalcedon (451) repudiated the Eutychian doctrine of monophysitism; adopted the Chalcedonian Creed, which described the hypostatic union of the two natures of Christ, human and divine; reinstated those deposed in 449 including Theodoret of Cyrus. Restored Ibas of Edessa to his see and declared him innocent upon reading his letter. Deposed Dioscorus of Alexandria; and elevated the bishoprics of Constantinople and Jerusalem to the status of patriarchates. This is also the last council explicitly recognised by the Anglican Communion. This and all the following councils in this list are rejected by Oriental Orthodox churches. The Second Council of Constantinople (553) repudiated the Three Chapters as Nestorian, condemned Origen of Alexandria, and decreed the Theopaschite Formula. The Third Council of Constantinople (680–681) repudiated Monothelitism and Monoenergism. The Quinisext Council, also called Council in Trullo (692) addressed matters of discipline (in amendment to the 5th and 6th councils).The Ecumenical status of this council was repudiated by the Western churches. The Second Council of Nicaea (787) restored the veneration of icons (condemned at the Council of Hieria, 754) and repudiated iconoclasm. Further councils recognised as ecumenical in the Catholic Church As late as the 11th century, seven councils were recognised as ecumenical in the Catholic Church. 
Then, in the time of Pope Gregory VII (1073–1085), canonists who in the Investiture Controversy quoted the prohibition in canon 22 of the Council of Constantinople of 869–870 against laymen influencing the appointment of prelates elevated this council to the rank of ecumenical council. Only in the 16th century was recognition as ecumenical granted by Catholic scholars to the Councils of the Lateran, of Lyon and those that followed. The following is a list of further councils generally recognised as ecumenical by Catholic theologians: Fourth Council of Constantinople (Catholic) (869–870) deposed Patriarch Photios I of Constantinople as an usurper and reinstated his predecessor Saint Ignatius. Photius had already been declared deposed by the Pope, an act to which the See of Constantinople acquiesced at this council. First Council of the Lateran (1123) addressed investment of bishops and the Holy Roman Emperor's role therein. Second Council of the Lateran (1139) reaffirmed Lateran I and addressed clerical discipline (dress, marriages). Third Council of the Lateran (1179) restricted papal election to the cardinals, condemned simony, and introduced minimum ages for ordination (thirty for bishops). Fourth Council of the Lateran (1215) defined transubstantiation, addressed papal primacy and clerical discipline. First Council of Lyon (1245) proclaimed the deposition of Emperor Frederick II and instituted a levy to support the Holy Land. Second Council of Lyon (1274) attempted reunion with the Eastern churches, approved Franciscan and Dominican orders, a tithe to support crusades, and conclave procedures. Council of Vienne (1311–1312) disbanded the Knights Templar. Council of Pisa (1409) attempted to solve the Great Western Schism.The council is not numbered because it was not convened by a pope and its outcome was repudiated at Constance. Council of Constance (1414–1418) resolved the Great Western Schism and condemned John Hus.The Catholic Church declared invalid the first sessions of the Council of Constance, gathered under the authority of Antipope John XXIII, which included the famous decree Haec Sancta Synodus, which marked the high-water mark of the conciliar movement of reform. Council of Siena (1423–1424) addressed church reform.Not numbered as it was swiftly disbanded. Council of Basel, Ferrara and Florence (1431–1445) addressed church reform and reunion with the Eastern Churches but split into two parties. The fathers remaining at Basel became the apogee of conciliarism. The fathers at Florence achieved union with various Eastern Churches and temporarily with the Eastern Orthodox Church. Fifth Council of the Lateran (1512–1517) addressed church reform. Council of Trent (1545–1563, with interruptions) addressed church reform and repudiated Protestantism, defined the role and canon of Scripture and the seven sacraments, and strengthened clerical discipline and education. Considered the founding event of the Counter-Reformation.Temporarily attended by Lutheran delegates. First Council of the Vatican (1869–1870) defined the Pope's primacy in church governance and his infallibility, repudiated rationalism, materialism and atheism, addressed revelation, interpretation of scripture and the relationship of faith and reason. Second Council of the Vatican (1962–1965) addressed pastoral and disciplinary issues dealing with the Church and its relation to the modern world, including liturgy and ecumenism. 
and the Solar System in August 2018. The official working definition of an exoplanet is now as follows: The IAU noted that this definition could be expected to evolve as knowledge improves. Alternatives The IAU's working definition is not always used. One alternate suggestion is that planets should be distinguished from brown dwarfs on the basis of formation. It is widely thought that giant planets form through core accretion, which may sometimes produce planets with masses above the deuterium fusion threshold; massive planets of that sort may have already been observed. Brown dwarfs form like stars from the direct gravitational collapse of clouds of gas and this formation mechanism also produces objects that are below the limit and can be as low as . Objects in this mass range that orbit their stars with wide separations of hundreds or thousands of AU and have large star/object mass ratios likely formed as brown dwarfs; their atmospheres would likely have a composition more similar to their host star than accretion-formed planets which would contain increased abundances of heavier elements. Most directly imaged planets as of April 2014 are massive and have wide orbits so probably represent the low-mass end of brown dwarf formation. One study suggests that objects above formed through gravitational instability and should not be thought of as planets. Also, the 13-Jupiter-mass cutoff does not have precise physical significance. Deuterium fusion can occur in some objects with a mass below that cutoff. The amount of deuterium fused depends to some extent on the composition of the object. As of 2011 the Extrasolar Planets Encyclopaedia included objects up to 25 Jupiter masses, saying, "The fact that there is no special feature around in the observed mass spectrum reinforces the choice to forget this mass limit". As of 2016 this limit was increased to 60 Jupiter masses based on a study of mass–density relationships. The Exoplanet Data Explorer includes objects up to 24 Jupiter masses with the advisory: "The 13 Jupiter-mass distinction by the IAU Working Group is physically unmotivated for planets with rocky cores, and observationally problematic due to the sin i ambiguity." The NASA Exoplanet Archive includes objects with a mass (or minimum mass) equal to or less than 30 Jupiter masses. Another criterion for separating planets and brown dwarfs, rather than deuterium fusion, formation process or location, is whether the core pressure is dominated by coulomb pressure or electron degeneracy pressure with the dividing line at around 5 Jupiter masses. Nomenclature The convention for designating exoplanets is an extension of the system used for designating multiple-star systems as adopted by the International Astronomical Union (IAU). For exoplanets orbiting a single star, the IAU designation is formed by taking the designated or proper name of its parent star, and adding a lower case letter. Letters are given in order of each planet's discovery around the parent star, so that the first planet discovered in a system is designated "b" (the parent star is considered to be "a") and later planets are given subsequent letters. If several planets in the same system are discovered at the same time, the closest one to the star gets the next letter, followed by the other planets in order of orbital size. A provisional IAU-sanctioned standard exists to accommodate the designation of circumbinary planets. A limited number of exoplanets have IAU-sanctioned proper names. Other naming systems exist. 
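The lettering convention above can be made concrete with a short sketch. The following Python snippet is only an illustration of the rule as described in this section; the star name and planet data are hypothetical examples, not real discoveries. Letters start at "b" (the parent star is implicitly "a"), follow discovery order, and planets announced together are ordered by increasing orbital size.

def assign_designations(star_name, planets):
    # planets: list of (discovery_year, semi_major_axis_in_au) tuples.
    # The parent star is implicitly "a"; planets receive "b", "c", ... in
    # order of discovery, with simultaneous discoveries ordered by
    # increasing orbital size, as described above.
    ordered = sorted(planets, key=lambda p: (p[0], p[1]))
    return [(f"{star_name} {chr(ord('b') + i)}", p) for i, p in enumerate(ordered)]

# Hypothetical system: two planets announced together in 2010, a third found in 2015.
print(assign_designations("Example-1", [(2010, 1.3), (2010, 0.2), (2015, 0.05)]))
# [('Example-1 b', (2010, 0.2)), ('Example-1 c', (2010, 1.3)), ('Example-1 d', (2015, 0.05))]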
History of detection For centuries scientists, philosophers, and science fiction writers suspected that extrasolar planets existed, but there was no way of knowing whether they existed, how common they were, or how similar they might be to the planets of the Solar System. Various detection claims made in the nineteenth century were rejected by astronomers. The first evidence of a possible exoplanet, orbiting Van Maanen 2, was noted in 1917, but was not recognized as such. The astronomer Walter Sydney Adams, who later became director of the Mount Wilson Observatory, produced a spectrum of the star using Mount Wilson's 60-inch telescope. He interpreted the spectrum to be of an F-type main-sequence star, but it is now thought that such a spectrum could be caused by the residue of a nearby exoplanet that had been pulverized into dust by the gravity of the star, the resulting dust then falling onto the star. The first suspected scientific detection of an exoplanet occurred in 1988. Shortly afterwards, the first confirmation of detection came in 1992, with the discovery of several terrestrial-mass planets orbiting the pulsar PSR B1257+12. The first confirmation of an exoplanet orbiting a main-sequence star was made in 1995, when a giant planet was found in a four-day orbit around the nearby star 51 Pegasi. Some exoplanets have been imaged directly by telescopes, but the vast majority have been detected through indirect methods, such as the transit method and the radial-velocity method. In February 2018, researchers using the Chandra X-ray Observatory, combined with a planet detection technique called microlensing, found evidence of planets in a distant galaxy, stating "Some of these exoplanets are as (relatively) small as the moon, while others are as massive as Jupiter. Unlike Earth, most of the exoplanets are not tightly bound to stars, so they're actually wandering through space or loosely orbiting between stars. We can estimate that the number of planets in this [faraway] galaxy is more than a trillion. Early speculations In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. In the eighteenth century, the same possibility was mentioned by Isaac Newton in the "General Scholium" that concludes his Principia. Making a comparison to the Sun's planets, he wrote "And if the fixed stars are the centres of similar systems, they will all be constructed according to a similar design and subject to the dominion of One." In 1952, more than 40 years before the first hot Jupiter was discovered, Otto Struve wrote that there is no compelling reason why planets could not be much closer to their parent star than is the case in the Solar System, and proposed that Doppler spectroscopy and the transit method could detect super-Jupiters in short orbits. Discredited claims Claims of exoplanet detections have been made since the nineteenth century. Some of the earliest involve the binary star 70 Ophiuchi. In 1855 William Stephen Jacob at the East India Company's Madras Observatory reported that orbital anomalies made it "highly probable" that there was a "planetary body" in this system. In the 1890s, Thomas J. J. 
See of the University of Chicago and the United States Naval Observatory stated that the orbital anomalies proved the existence of a dark body in the 70 Ophiuchi system with a 36-year period around one of the stars. However, Forest Ray Moulton published a paper proving that a three-body system with those orbital parameters would be highly unstable. During the 1950s and 1960s, Peter van de Kamp of Swarthmore College made another prominent series of detection claims, this time for planets orbiting Barnard's Star. Astronomers now generally regard all the early reports of detection as erroneous. In 1991 Andrew Lyne, M. Bailes and S. L. Shemar claimed to have discovered a pulsar planet in orbit around PSR 1829-10, using pulsar timing variations. The claim briefly received intense attention, but Lyne and his team soon retracted it. Confirmed discoveries As of , a total of confirmed exoplanets are listed in the Extrasolar Planets Encyclopedia, including a few that were confirmations of controversial claims from the late 1980s. The first published discovery to receive subsequent confirmation was made in 1988 by the Canadian astronomers Bruce Campbell, G. A. H. Walker, and Stephenson Yang of the University of Victoria and the University of British Columbia. Although they were cautious about claiming a planetary detection, their radial-velocity observations suggested that a planet orbits the star Gamma Cephei. Partly because the observations were at the very limits of instrumental capabilities at the time, astronomers remained skeptical for several years about this and other similar observations. It was thought some of the apparent planets might instead have been brown dwarfs, objects intermediate in mass between planets and stars. In 1990, additional observations were published that supported the existence of the planet orbiting Gamma Cephei, but subsequent work in 1992 again raised serious doubts. Finally, in 2003, improved techniques allowed the planet's existence to be confirmed. On 9 January 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR 1257+12. This discovery was confirmed, and is generally considered to be the first definitive detection of exoplanets. Follow-up observations solidified these results, and confirmation of a third planet in 1994 revived the topic in the popular press. These pulsar planets are thought to have formed from the unusual remnants of the supernova that produced the pulsar, in a second round of planet formation, or else to be the remaining rocky cores of gas giants that somehow survived the supernova and then decayed into their current orbits. In the early 1990s, a group of astronomers led by Donald Backer, who were studying what they thought was a binary pulsar (PSR B1620−26 b), determined that a third object was needed to explain the observed Doppler shifts. Within a few years, the gravitational effects of the planet on the orbit of the pulsar and white dwarf had been measured, giving an estimate of the mass of the third object that was too small for it to be a star. The conclusion that the third object was a planet was announced by Stephen Thorsett and his collaborators in 1993. On 6 October 1995, Michel Mayor and Didier Queloz of the University of Geneva announced the first definitive detection of an exoplanet orbiting a main-sequence star, nearby G-type star 51 Pegasi. 
This discovery, made at the Observatoire de Haute-Provence, ushered in the modern era of exoplanetary discovery, and was recognized by a share of the 2019 Nobel Prize in Physics. Technological advances, most notably in high-resolution spectroscopy, led to the rapid detection of
fully phase-dependent, this is not always the case in the near infrared. Temperatures of gas giants decrease over time and with distance from their star. Lowering the temperature increases optical albedo even without clouds. At a sufficiently low temperature, water clouds form, which further increase optical albedo. At even lower temperatures, ammonia clouds form, resulting in the highest albedos at most optical and near-infrared wavelengths. Magnetic field In 2014, a magnetic field around HD 209458 b was inferred from the way hydrogen was evaporating from the planet. It was the first (indirect) detection of a magnetic field on an exoplanet. The magnetic field is estimated to be about one tenth as strong as Jupiter's. Exoplanets' magnetic fields may be detectable through their auroral radio emissions with sufficiently sensitive radio telescopes such as LOFAR. The radio emissions could enable determination of the rotation rate of the interior of an exoplanet, and may yield a more accurate way to measure exoplanet rotation than by examining the motion of clouds. Earth's magnetic field results from its flowing liquid metallic core, but in massive super-Earths with high pressure, different compounds may form which do not match those created under terrestrial conditions. Compounds may form with greater viscosities and high melting temperatures which could prevent the interiors from separating into different layers and so result in undifferentiated coreless mantles. Forms of magnesium oxide such as MgSi3O12 could be a liquid metal at the pressures and temperatures found in super-Earths and could generate a magnetic field in the mantles of super-Earths. Hot Jupiters have been observed to have a larger radius than expected. This could be caused by the interaction between the stellar wind and the planet's magnetosphere creating an electric current through the planet that heats it up, causing it to expand. The more magnetically active a star is, the greater the stellar wind and the larger the electric current, leading to more heating and expansion of the planet. This theory matches the observation that stellar activity is correlated with inflated planetary radii. In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields. Although scientists previously announced that the magnetic fields of close-in exoplanets may cause increased stellar flares and starspots on their host stars, in 2019 this claim was demonstrated to be false in the HD 189733 system. The failure to detect "star-planet interactions" in the well-studied HD 189733 system calls other related claims of the effect into question. In 2019, the surface magnetic field strengths of four hot Jupiters were estimated; they ranged between 20 and 120 gauss, compared with Jupiter's surface magnetic field of 4.3 gauss.
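To put these field strengths in context, the short calculation below (an illustrative sketch, not part of the source text) converts them into the electron-cyclotron frequencies near which auroral radio emission is expected, roughly 2.8 MHz per gauss. The constants are standard physical constants; the function name and the labelled field values simply restate the figures quoted above.

```python
# Illustrative sketch only: electron-cyclotron frequencies implied by the
# magnetic-field strengths quoted above. Auroral radio emission is expected
# near f_ce = e * B / (2 * pi * m_e), roughly 2.8 MHz per gauss.

import math

E_CHARGE = 1.602176634e-19     # electron charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg
GAUSS_TO_TESLA = 1.0e-4

def cyclotron_frequency_mhz(b_gauss: float) -> float:
    """Electron cyclotron frequency in MHz for a magnetic field given in gauss."""
    b_tesla = b_gauss * GAUSS_TO_TESLA
    return E_CHARGE * b_tesla / (2.0 * math.pi * M_ELECTRON) / 1.0e6

for label, b_gauss in [("Jupiter surface field (~4.3 G)", 4.3),
                       ("Hot Jupiter, low estimate (20 G)", 20.0),
                       ("Hot Jupiter, high estimate (120 G)", 120.0)]:
    print(f"{label}: f_ce = {cyclotron_frequency_mhz(b_gauss):.0f} MHz")
```

With these constants, Jupiter's quoted field corresponds to roughly 12 MHz, and the 20 to 120 gauss estimates to roughly 56 to 340 MHz; at least part of that range overlaps the bands covered by low-frequency arrays such as LOFAR.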
Plate tectonics In 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths, with one team concluding that plate tectonics would be episodic or stagnant and the other team concluding that plate tectonics is very likely on super-Earths even if the planet is dry. If super-Earths have more than 80 times as much water as Earth, they become ocean planets with all land completely submerged. However, if there is less water than this limit, then the deep water cycle will move enough water between the oceans and mantle to allow continents to exist. Volcanism Large surface temperature variations on 55 Cancri e have been attributed to possible volcanic activity releasing large clouds of dust which blanket the planet and block thermal emissions. Rings The star 1SWASP J140747.93-394542.6 is orbited by an object that is circled by a ring system much larger than Saturn's rings. However, the mass of the object is not known; it could be a brown dwarf or low-mass star instead of a planet. The brightness of optical images of Fomalhaut b could be due to starlight reflecting off a circumplanetary ring system with a radius between 20 and 40 times Jupiter's radius, about the size of the orbits of the Galilean moons. The rings of the Solar System's gas giants are aligned with their planet's equator. However, for exoplanets that orbit close to their star, tidal forces from the star would lead to the outermost rings of a planet being aligned with the planet's orbital plane around the star. A planet's innermost rings would still be aligned with the planet's equator, so if the planet has a tilted rotational axis, the different alignments between the inner and outer rings would create a warped ring system. Moons In December 2013, a candidate exomoon of a rogue planet was announced. On 3 October 2018, evidence suggesting a large exomoon orbiting Kepler-1625b was reported. Atmospheres Atmospheres have been detected around several exoplanets. The first to be observed was HD 209458 b in 2001. In May 2017, glints of light from Earth, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere. The technology used to determine this may be useful in studying the atmospheres of distant worlds, including those of exoplanets. Comet-like tails KIC 12557548 b is a small rocky planet, very close to its star, that is evaporating and leaving a trailing tail of cloud and dust like a comet. The dust could be ash erupting from volcanoes and escaping due to the small planet's low surface gravity, or it could be from metals that are vaporized by the high temperatures of being so close to the star, with the metal vapor then condensing into dust. In June 2015, scientists reported that the atmosphere of GJ 436 b was evaporating, resulting in a giant cloud around the planet and, due to radiation from the host star, a long trailing tail. Insolation pattern Tidally locked planets in a 1:1 spin-orbit resonance would have their star always shining directly overhead on one spot, which would be hot, while the opposite hemisphere receives no light and is freezing cold. Such a planet could resemble an eyeball with the hotspot being the pupil. Planets with an eccentric orbit could be locked in other resonances. 3:2 and 5:2 resonances would result in a double-eyeball pattern with hotspots in both eastern and western hemispheres. Planets with both an eccentric orbit and a tilted axis of rotation would have more complicated insolation patterns.
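These insolation patterns can be illustrated numerically. The sketch below is not from the source text: the eccentricity of 0.2, the zero obliquity, the point-like star, and the function names are all arbitrary choices made for the example. It averages the stellar flux received at each equatorial longitude of a planet in a p:q spin-orbit resonance over one full resonance cycle.

```python
# Illustrative sketch only: orbit-averaged equatorial insolation versus
# longitude for a planet in a p:q spin-orbit resonance on an eccentric orbit.
# Assumptions: zero obliquity, point-like star, semi-major axis = 1.

import numpy as np

def mean_insolation(p, q, ecc, n_lon=181, n_steps=8000):
    """Time-averaged relative insolation at each equatorial longitude."""
    # Mean anomaly sampled uniformly in time over q orbits (one resonance cycle).
    M = np.linspace(0.0, 2.0 * np.pi * q, n_steps, endpoint=False)
    # Solve Kepler's equation M = E - e*sin(E) by fixed-point iteration.
    E = M.copy()
    for _ in range(60):
        E = M + ecc * np.sin(E)
    # True anomaly and star-planet distance.
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + ecc) * np.sin(E / 2.0),
                          np.sqrt(1.0 - ecc) * np.cos(E / 2.0))
    r = 1.0 - ecc * np.cos(E)
    # The spin angle advances at p/q of the mean motion; longitude 0 faces the
    # star at perihelion, so the substellar longitude is nu minus the spin angle.
    subsolar = nu - (p / q) * M
    lon = np.linspace(-np.pi, np.pi, n_lon)
    cos_zenith = np.cos(lon[:, None] - subsolar[None, :])
    flux = np.clip(cos_zenith, 0.0, None) / r[None, :] ** 2
    return lon, flux.mean(axis=1)

def sample(lon, flux, deg):
    """Average insolation at the grid longitude closest to `deg` degrees."""
    return flux[np.argmin(np.abs(lon - np.radians(deg)))]

for p, q in [(1, 1), (3, 2), (5, 2)]:
    lon, flux = mean_insolation(p, q, ecc=0.2)
    print(f"{p}:{q} resonance, e=0.2 -> mean flux at 0/90/180 deg: "
          f"{sample(lon, flux, 0):.2f} / {sample(lon, flux, 90):.2f} / "
          f"{sample(lon, flux, 180):.2f}")
```

Running it shows the qualitative behaviour described above: in the 1:1 case the far hemisphere receives essentially no light, while the 3:2 and 5:2 cases on an eccentric orbit produce nearly equal maxima at longitudes 0 and 180 degrees, the double-eyeball pattern.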
Habitability As more planets are discovered, the field of exoplanetology continues to grow into a deeper study of extrasolar worlds, and will ultimately tackle the prospect of life on planets beyond the Solar System. At cosmic distances, life can only be detected if it has developed on a planetary scale and has strongly modified the planetary environment, in such a way that the modifications cannot be explained by classical physico-chemical processes (out-of-equilibrium processes). For example, molecular oxygen in the atmosphere of Earth is a result of photosynthesis by living plants and many kinds of microorganisms, so it can be used as an indication of life on exoplanets, although small amounts of oxygen could also be produced by non-biological means. Furthermore, a potentially habitable planet must orbit a stable star at a distance within which planetary-mass objects
had been married before to a man with whom she had two daughters—Helena in 1860 and Lena in 1862. When her first husband died of tuberculosis, Taube was devastated. Goldman later wrote: "Whatever love she had had died with the young man to whom she had been married at the age of fifteen." Taube's second marriage was arranged by her family and, as Goldman puts it, "mismated from the first". Her second husband, Abraham Goldman, invested Taube's inheritance in a business that quickly failed. The ensuing hardship, combined with the emotional distance between husband and wife, made the household a tense place for the children. When Taube became pregnant, Abraham hoped desperately for a son; a daughter, he believed, would be one more sign of failure. They eventually had three sons, but their first child was Emma. Emma Goldman was born on June 27, 1869. Her father used violence to punish his children, beating them when they disobeyed him. He used a whip on Emma, the most rebellious of them. Her mother provided scarce comfort, rarely calling on Abraham to tone down his beatings. Goldman later speculated that her father's furious temper was at least partly a result of sexual frustration. Goldman's relationships with her elder half-sisters, Helena and Lena, were a study in contrasts. Helena, the oldest, provided the comfort the children lacked from their mother and filled Goldman's childhood with "whatever joy it had". Lena, however, was distant and uncharitable. The three sisters were joined by brothers Louis (who died at the age of six), Herman (born in 1872), and Moishe (born in 1879). Adolescence When Emma Goldman was a young girl, the Goldman family moved to the village of Papilė, where her father ran an inn. While her sisters worked, she became friends with a servant named Petrushka, who excited her "first erotic sensations". Later in Papilė she witnessed a peasant being whipped with a knout in the street. This event traumatized her and contributed to her lifelong distaste for violent authority. At the age of seven, Goldman moved with her family to the Prussian city of Königsberg (then part of the German Empire), and she was enrolled in a Realschule. One teacher punished disobedient students—targeting Goldman in particular—by beating their hands with a ruler. Another teacher tried to molest his female students and was fired when Goldman fought back. She found a sympathetic mentor in her German-language teacher, who loaned her books and took her to an opera. A passionate student, Goldman passed the exam for admission into a gymnasium, but her religion teacher refused to provide a certificate of good behavior and she was unable to attend. The family moved to the Russian capital of Saint Petersburg, where her father opened one unsuccessful store after another. Their poverty forced the children to work, and Goldman took an assortment of jobs, including one in a corset shop. As a teenager Goldman begged her father to allow her to return to school, but instead he threw her French book into the fire and shouted: "Girls do not have to learn much! All a Jewish daughter needs to know is how to prepare gefilte fish, cut noodles fine, and give the man plenty of children." Goldman pursued an independent education on her own. She studied the political turmoil around her, particularly the Nihilists responsible for assassinating Alexander II of Russia. The ensuing turmoil intrigued Goldman, although she did not fully understand it at the time. When she read Nikolai Chernyshevsky's novel, What Is to Be Done? 
(1863), she found a role model in the protagonist Vera, who adopts a Nihilist philosophy and escapes her repressive family to live freely and organize a sewing cooperative. The book enthralled Goldman and remained a source of inspiration throughout her life. Her father, meanwhile, continued to insist on a domestic future for her, and he tried to arrange for her to be married at the age of fifteen. They fought about the issue constantly; he complained that she was becoming a "loose" woman, and she insisted that she would marry for love alone. At the corset shop, she was forced to fend off unwelcome advances from Russian officers and other men. One man took her into a hotel room and committed what Goldman described as "violent contact"; two biographers call it rape. She was stunned by the experience, overcome by "shock at the discovery that the contact between man and woman could be so brutal and painful." Goldman felt that the encounter forever soured her interactions with men. Rochester, New York In 1885, her sister Helena made plans to move to New York in the United States to join her sister Lena and her husband. Goldman wanted to join her sister, but their father refused to allow it. Despite Helena's offer to pay for the trip, Abraham turned a deaf ear to their pleas. Desperate, Goldman threatened to throw herself into the Neva River if she could not go. Their father finally agreed. On December 29, 1885, Helena and Emma arrived at New York City's Castle Garden, the entry for immigrants. They settled upstate, living in the Rochester home which Lena had made with her husband Samuel. Fleeing the rising antisemitism of Saint Petersburg, their parents and brothers joined them a year later. Goldman began working as a seamstress, sewing overcoats for more than ten hours a day, earning two and a half dollars a week. She asked for a raise and was denied; she quit and took work at a smaller shop nearby. At her new job, Goldman met a fellow worker named Jacob Kershner, who shared her love for books, dancing, and traveling, as well as her frustration with the monotony of factory work. After four months, they married in February 1887. Once he moved in with Goldman's family, their relationship faltered. On their wedding night she discovered that he was impotent; they became emotionally and physically distant. Before long he became jealous and suspicious and threatened to commit suicide if she left him. Meanwhile, Goldman was becoming more engaged with the political turmoil around her, particularly the aftermath of executions related to the 1886 Haymarket affair in Chicago and the anti-authoritarian political philosophy of anarchism. Less than a year after the wedding, the couple were divorced; Kershner begged Goldman to return and threatened to poison himself if she did not. They reunited, but after three months she left once again. Her parents considered her behavior "loose" and refused to allow Goldman into their home. Carrying her sewing machine in one hand and a bag with five dollars in the other, she left Rochester and headed southeast to New York City. Most and Berkman On her first day in the city, Goldman met two men who greatly changed her life. At Sachs' Café, a gathering place for radicals, she was introduced to Alexander Berkman, an anarchist who invited her to a public speech that evening. They went to hear Johann Most, editor of a radical publication called Freiheit and an advocate of "propaganda of the deed"—the use of violence to instigate change.
She was impressed by his fiery oration, and Most took her under his wing, training her in methods of public speaking. He encouraged her vigorously, telling her that she was "to take my place when I am gone." One of her first public talks in support of "the Cause" was in Rochester. After convincing Helena not to tell their parents of her speech, Goldman found her mind a blank once on stage. She later wrote, suddenly: Excited by the experience, Goldman refined her public persona during subsequent engagements. She quickly found herself arguing with Most over her independence. After a momentous speech in Cleveland, she felt as though she had become "a parrot repeating Most's views" and resolved to express herself on the stage. When she returned to New York, Most became furious and told her: "Who is not with me is against me!" She left Freiheit and joined another publication, Die Autonomie. Meanwhile, Goldman had begun a friendship with Berkman, whom she affectionately called Sasha. Before long they became lovers and moved into a communal apartment with his cousin Modest "Fedya" Stein and Goldman's friend, Helen Minkin, on 42nd Street. Although their relationship had numerous difficulties, Goldman and Berkman would share a close bond for decades, united by their anarchist principles and commitment to personal equality. In 1892, Goldman joined with Berkman and Stein in opening an ice cream shop in Worcester, Massachusetts. After a few months of operating the shop, Goldman and Berkman were diverted to participate in the Homestead Strike near Pittsburgh. Homestead plot Berkman and Goldman came together through the Homestead Strike. In June 1892, a steel plant in Homestead, Pennsylvania, owned by Andrew Carnegie became the focus of national attention when talks between the Carnegie Steel Company and the Amalgamated Association of Iron and Steel Workers (AA) broke down. The factory's manager was Henry Clay Frick, a fierce opponent of the union. When a final round of talks failed at the end of June, management closed the plant and locked out the workers, who immediately went on strike. Strikebreakers were brought in and the company hired Pinkerton guards to protect them. On July 6, a fight broke out between 300 Pinkerton guards and a crowd of armed union workers. During the twelve-hour gunfight, seven guards and nine strikers were killed. When a majority of the nation's newspapers expressed support of the strikers, Goldman and Berkman resolved to assassinate Frick, an action they expected would inspire the workers to revolt against the capitalist system. Berkman chose to carry out the assassination, and ordered Goldman to stay behind in order to explain his motives after he went to jail. He would be in charge of "the deed"; she of the associated propaganda. Berkman set off for Pittsburgh on his way to Homestead, where he planned to shoot Frick. Goldman, meanwhile, decided to help fund the scheme through prostitution. Remembering the character of Sonya in Fyodor Dostoevsky's novel Crime and Punishment (1866), she mused: "She had become a prostitute in order to support her little brothers and sisters...Sensitive Sonya could sell her body; why not I?" Once on the street, Goldman caught the eye of a man who took her into a saloon, bought her a beer, gave her ten dollars, informed her she did not have "the knack," and told her to quit the business. She was "too astounded for speech". She wrote to Helena, claiming illness, and asked her for fifteen dollars. 
On July 23, Berkman gained access to Frick's office while carrying a concealed handgun; he shot Frick three times, and stabbed him in the leg. A group of workers—far from joining in his attentat—beat Berkman unconscious, and he was carried away by the police. Berkman was convicted of attempted murder and sentenced to 22 years in prison. Goldman suffered during his long absence. Convinced Goldman was involved in the plot, police raided her apartment. Although they found no evidence, they pressured her landlord into evicting her. Worse, the attentat had failed to rouse the masses: workers and anarchists alike condemned Berkman's action. Johann Most, their former mentor, lashed out at Berkman and the assassination attempt. Furious at these attacks, Goldman brought a toy horsewhip to a public lecture and demanded, onstage, that Most explain his betrayal. He dismissed her, whereupon she struck him with the whip, broke it on her knee, and hurled the pieces at him. She later regretted her assault, confiding to a friend: "At the age of twenty-three, one does not reason." "Inciting to riot" When the Panic of 1893 struck in the following year, the United States suffered one of its worst economic crises. By year's end, the unemployment rate was higher than 20%, and "hunger demonstrations" sometimes gave way to riots. Goldman began speaking to crowds of frustrated men and women in New York City. On August 21, she spoke to a crowd of nearly 3,000 people in Union Square, where she encouraged unemployed workers to take immediate action. Her exact words are unclear: undercover agents insist she ordered the crowd to "take everything ... by force". But Goldman later recounted this message: "Well then, demonstrate before the palaces of the rich; demand work. If they do not give you work, demand bread. If they deny you both, take bread." Later in court, Detective-Sergeant Charles Jacobs offered yet another version of her speech. A week later, Goldman was arrested in Philadelphia and returned to New York City for trial, charged with "inciting to riot". During the train ride, Jacobs offered to drop the charges against her if she would inform on other radicals in the area. She responded by throwing a glass of ice water in his face. As she awaited trial, Goldman was visited by Nellie Bly, a reporter for the New York World. She spent two hours talking to Goldman and wrote a positive article about the woman she described as a "modern Joan of Arc." Despite this positive publicity, the jury was persuaded by Jacobs' testimony and frightened by Goldman's politics. The assistant District Attorney questioned Goldman about her anarchism, as well as her atheism; the judge spoke of her as "a dangerous woman". She was sentenced to one year in the Blackwell's Island Penitentiary. Once inside she suffered an attack of rheumatism and was sent to the infirmary; there she befriended a visiting doctor and began studying medicine. She also read dozens of books, including works by the American activist-writers Ralph Waldo Emerson and Henry David Thoreau; novelist Nathaniel Hawthorne; poet Walt Whitman, and philosopher John Stuart Mill. When Goldman was released after ten months, a raucous crowd of nearly 3,000 people greeted her at the Thalia Theater in New York City. She soon became swamped with requests for interviews and lectures. To make money, Goldman decided to continue the medical studies she had started in prison but her preferred fields of specialization—midwifery and massage—were unavailable to nursing students in the US. 
She sailed to Europe, lecturing in London, Glasgow, and Edinburgh. She met with renowned anarchists such as Errico Malatesta, Louise Michel, and Peter Kropotkin. In Vienna, she received two diplomas for midwifery and put them immediately to use back in the US. Alternating between lectures and midwifery, Goldman conducted the first cross-country tour by an anarchist speaker. In November 1899 she returned to Europe to speak, where she met the Czech anarchist Hippolyte Havel in London. They went together to France and helped organize the 1900 International Anarchist Congress on the outskirts of Paris. Afterward Havel immigrated to the United States, traveling with Goldman to Chicago. They shared a residence there with friends of Goldman. McKinley assassination On September 6, 1901, Leon Czolgosz, an unemployed factory worker and registered Republican with a history of mental illness, shot US President William McKinley twice during a public speaking event in Buffalo, New York. McKinley was hit in the breastbone and stomach, and died eight days later. Czolgosz was arrested, and interrogated around the clock. During interrogation he claimed to be an anarchist and said he had been inspired to act after attending a speech by Goldman. The authorities used this as a pretext to charge Goldman with planning McKinley's assassination. They tracked her to the residence in Chicago she shared with Havel, as well as with Mary and Abe Isaak, an anarchist couple and their family. Goldman was arrested, along with Isaak, Havel, and ten other anarchists. Earlier, Czolgosz had tried but failed to become friends with Goldman and her companions. During a talk in Cleveland, Czolgosz had approached Goldman and asked her advice on which books he should read. In July 1901, he had appeared at the Isaak house, asking a series of unusual questions. They assumed he was an infiltrator, like a number of police agents sent to spy on radical groups. They had remained distant from him, and Abe Isaak sent a notice to associates warning of "another spy". Although Czolgosz repeatedly denied Goldman's involvement, the police held her in close custody, subjecting her to what she called the "third degree". She explained her housemates' distrust of Czolgosz, and the police finally recognized that she had not had any significant contact with the attacker. No evidence was found linking Goldman to the attack, and she was released after two weeks of detention. Before McKinley died, Goldman offered to provide nursing care, referring to him as "merely a human being". Czolgosz, despite considerable evidence of mental illness, was convicted of murder and executed. Throughout her detention and after her release, Goldman steadfastly refused to condemn Czolgosz's actions, standing virtually alone in doing so. Friends and supporters—including Berkman—urged her to quit his cause. But Goldman defended Czolgosz as a "supersensitive being" and chastised other anarchists for abandoning him. She was vilified in the press as the "high priestess of anarchy", while many newspapers declared the anarchist movement responsible for the murder. In the wake of these events, socialism gained support over anarchism among US radicals. McKinley's successor, Theodore Roosevelt, declared his intent to crack down "not only against anarchists, but against all active and passive sympathizers with anarchists". 
Mother Earth and Berkman's release After Czolgosz was executed, Goldman withdrew from the world and, from 1903 to 1913, lived at 208-210 East 13th Street, New York City. Scorned by her fellow anarchists, vilified by the press, and separated from her love, Berkman, she retreated into anonymity and nursing. "It was bitter and hard to face life anew," she wrote later. Using the name E. G. Smith, she left public life and took on a series of private nursing jobs while suffering from severe depression. The US Congress' passage of the Anarchist
Exclusion Act (1903) stirred a new wave of oppositional activism, pulling Goldman back into the movement. A coalition of people and organizations across the left end of the political spectrum opposed the law on grounds that it violated freedom of speech, and she had the nation's ear once again.
After an English anarchist named John Turner was arrested under the Anarchist Exclusion Act and threatened with deportation, Goldman joined forces with the Free Speech League to champion his cause. The league enlisted the aid of noted attorneys Clarence Darrow and Edgar Lee Masters, who took Turner's case to the US Supreme Court. Although Turner and the League lost, Goldman considered it a victory of propaganda. She had returned to anarchist activism, but it was taking its toll on her. "I never felt so weighed down," she wrote to Berkman. "I fear I am forever doomed to remain public property and to have my life worn out through the care for the lives
the fourth magnitude. Notable features Stars The brightest star in Equuleus is Alpha Equulei, traditionally called Kitalpha, a yellow star of magnitude 3.9, 186 light-years from Earth. Its traditional name means "the section of the horse". There are few variable stars in Equuleus. Only around 25 are known, most of which are faint. Gamma Equulei is an alpha CVn star, ranging between magnitudes 4.58 and 4.77 over a period of around 12½ minutes. It is a white star 115 light-years from Earth, and has an optical companion of magnitude 6.1, 6 Equulei. It is divisible in binoculars. R Equulei is a Mira variable that ranges between magnitudes 8.0 and 15.7 over nearly 261 days. Equuleus contains some double stars of interest. γ Equ consists of a primary star with a magnitude around 4.7 (slightly variable) and a secondary star of magnitude 11.6, separated by 2 arcseconds. Epsilon Equulei is a triple star also designated 1 Equulei. The system, 197 light-years away, has a primary of magnitude 5.4 that is itself a binary star; its components are of magnitude 6.0 and 6.3 and have a period of 101 years. The secondary is of magnitude 7.4 and is visible in small telescopes. The components of the primary are becoming closer together and will not be divisible in amateur telescopes beginning in 2015. δ Equ is a binary star with an orbital period of 5.7 years, which at one time was the shortest known orbital period for an optical binary. The two components of the system are never more than 0.35 arcseconds apart. Deep-sky objects Due to its small size
refer to:

Rivers
Eridanos (mythology) (or Eridanus), a river in Greek mythology, somewhere in Central Europe, which was territory that Ancient Greeks knew only vaguely
The Po River, according to Roman word usage
Eridanos (Athens), a former river near Athens, now subterranean
Eridanos (geology), a former large river that flowed through where the Baltic Sea is now

Astronomy
Eridanus (constellation), a southern constellation
Eridanus Cluster of galaxies in the constellation Eridanus
Eridanus II, a low-surface brightness dwarf galaxy in the constellation Eridanus
List of stars in Eridanus
Delta Eridani,
for 'remembrance' is "anamnesis", which itself has a much richer theological history than the English word "remember". Gospels The synoptic gospels, Mark 14:22–25, Matthew 26:26–29 and Luke 22:13–20, depict Jesus as presiding over the Last Supper prior to his crucifixion. The versions in Matthew and Mark are almost identical, but the Gospel of Luke presents a textual difference, in that a few manuscripts omit the second half of verse 19 and all of verse 20 ("given for you … poured out for you"), which are found in the vast majority of ancient witnesses to the text. If the shorter text is the original one, then Luke's account is independent of both that of Paul and that of Matthew/Mark. If the majority longer text comes from the author of the third gospel, then this version is very similar to that of Paul in 1 Corinthians, being somewhat fuller in its description of the early part of the Supper, particularly in making specific mention of a cup being blessed before the bread was broken. Uniquely, in the one prayer given to posterity by Jesus, the Lord's Prayer, the word epiousios—which does not exist in Classical Greek literature—has been interpreted by some as meaning "super-substantial", a reference to the Bread of Life, the Eucharist. In the Gospel of John, however, the account of the Last Supper does not mention Jesus taking bread and "the cup" and speaking of them as his body and blood; instead, it recounts other events: his humble act of washing the disciples' feet, the prophecy of the betrayal, which set in motion the events that would lead to the cross, and his long discourse in response to some questions posed by his followers, in which he went on to speak of the importance of the unity of the disciples with him, with each other, and with God. Some would find in this unity and in the washing of the feet the deeper meaning of the Communion bread in the other three gospels. In John 6:26–65, a long discourse is attributed to Jesus that deals with the subject of the living bread, and John 6:51–59 contains echoes of Eucharistic language. The interpretation of the whole passage has been extensively debated due to theological and scholarly disagreements. Agape feast The expression The Lord's Supper, derived from Paul's usage in 1 Corinthians 11:17–34, may have originally referred to the Agape feast (or love feast), the shared communal meal with which the Eucharist was originally associated. The Agape feast is mentioned in Jude 12, but The Lord's Supper is now commonly used in reference to a celebration involving no food other than the sacramental bread and wine. Early Christian sources The Didache (Greek for "teaching") is an Early Church treatise that includes instructions for baptism and the Eucharist. Most scholars date it to the late 1st century, and distinguish in it two separate Eucharistic traditions, the earlier tradition in chapter 10 and the later one preceding it in chapter 9. The Eucharist is mentioned again in chapter 14. Ignatius of Antioch (died between 98 and 117), one of the Apostolic Fathers, mentions the Eucharist as "the flesh of our Saviour Jesus Christ". Justin Martyr (born c. 100, died c. 165) also mentions the Eucharist in this regard. Paschasius Radbertus (785–865) was a Carolingian theologian, and the abbot of Corbie, whose best-known and most influential work is an exposition on the nature of the Eucharist written around 831, entitled De Corpore et Sanguine Domini.
In it, Paschasius agrees with Ambrose in affirming that the Eucharist contains the true, historical body of Jesus Christ. According to Paschasius, God is truth itself, and therefore, his words and actions must be true. Christ's proclamation at the Last Supper that the bread and wine were his body and blood must be taken literally, since God is truth. He thus believes that the transubstantiation of the bread and wine offered in the Eucharist really occurs. Only if the Eucharist is the actual body and blood of Christ can a Christian know it is salvific. The Gnostic Gospel of Judas refers to a meal in which the disciples of Jesus put a blessing over bread with a prayer of thanks, using terminology that can bring the Eucharist to mind. Eucharistic theology Most Christians, even those who deny that there is any real change in the elements used, recognize a special presence of Christ in this rite. But Christians differ about exactly how, where and how long Christ is present in it. Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Church of the East teach that the reality (the "substance") of the elements of bread and wine is wholly changed into the body and blood of Jesus Christ, while the appearances (the "species") remain. Transubstantiation ("change of the substance") is the term used by Catholics to denote what is changed, not to explain how the change occurs, since the Catholic Church teaches that "the signs of bread and wine become, in a way surpassing understanding, the Body and Blood of Christ". The Orthodox use various terms such as transelementation, but no explanation is official as they prefer to leave it a mystery. Lutherans believe Christ to be "truly and substantially present" with the bread and wine that are seen in the Eucharist. They attribute the real presence of Jesus' living body to His word spoken in the Eucharist, and not to the faith of those receiving it. They also believe that "forgiveness of sins, life, and salvation" are given through the words of Christ in the Eucharist to those who believe his words ("given and shed for you"). Reformed Christians believe Christ to be present and may both use the term "sacramental union" to describe this. Although Lutherans will also use this phrase, the Reformed generally describe the presence as a "spiritual presence", not a physical one. Anglicans adhere to a range of views depending on churchmanship although the teaching in the Anglican Thirty-Nine Articles holds that the body of Christ is received by the faithful only in a heavenly and spiritual manner, a doctrine also taught in the Methodist Articles of Religion. Unlike Catholics and Lutherans, Reformed Christians do not believe forgiveness and eternal life are given in the Eucharist. Christians adhering to the theology of Memorialism, such as the Anabaptist Churches, do not believe in the concept of the real presence, believing that the Eucharist is only a ceremonial remembrance or memorial of the death of Christ. 
The Baptism, Eucharist and Ministry document of the World Council of Churches, attempting to present the common understanding of the Eucharist on the part of the generality of Christians, describes it as "essentially the sacrament of the gift which God makes to us in Christ through the power of the Holy Spirit", "Thanksgiving to the Father", "Anamnesis or Memorial of Christ", "the sacrament of the unique sacrifice of Christ, who ever lives to make intercession for us", "the sacrament of the body and blood of Christ, the sacrament of his real presence", "Invocation of the Spirit", "Communion of the Faithful", and "Meal of the Kingdom". Ritual and liturgy Many Christian denominations classify the Eucharist as a sacrament. Some Protestants (though not all) prefer instead to call it an ordinance, viewing it not as a specific channel of divine grace but as an expression of faith and of obedience to Christ. Catholic Church In the Catholic Church the Eucharist is considered a sacrament; according to the church, the Eucharist is "the source and summit of the Christian life." "The other sacraments, and indeed all ecclesiastical ministries and works of the apostolate, are bound up with the Eucharist and are oriented toward it. For in the blessed Eucharist is contained the whole spiritual good of the Church, namely Christ himself, our Pasch." ("Pasch" is a word that sometimes means Easter, sometimes Passover.) As a sacrifice In the Eucharist the same sacrifice that Jesus made only once on the cross is made present at every Mass. According to the Compendium of the Catechism of the Catholic Church, "The Eucharist is the very sacrifice of the Body and Blood of the Lord Jesus which he instituted to perpetuate the sacrifice of the cross throughout the ages until his return in glory. Thus he entrusted to his Church this memorial of his death and Resurrection. It is a sign of unity, a bond of charity, a paschal banquet, in which Christ is consumed, the mind is filled with grace, and a pledge of future glory is given to us." For the Catholic Church, "the Eucharist is the memorial of Christ's Passover, the making present and the sacramental offering of his unique sacrifice, in the liturgy of the Church which is his Body. ... The memorial is not merely the recollection of past events but ... they become in a certain way present and real. ... When the Church celebrates the Eucharist, she commemorates Christ's Passover, and it is made present: the sacrifice Christ offered once for all on the cross remains ever present. ... The Eucharist is thus a sacrifice because it re-presents (makes present) the same and only sacrifice offered once for all on the cross, because it is its memorial and because it applies its fruit. The sacrifice of Christ and the sacrifice of the Eucharist are one single sacrifice: 'The victim is one and the same: the same now offers through the ministry of priests, who then offered himself on the cross; only the manner of offering is different.' In the holy sacrifice of the Mass, "it is Christ himself, the eternal high priest of the New Covenant who, acting through the ministry of the priests, offers the Eucharistic sacrifice. And it is the same Christ, really present under the species of bread and wine, who is the offering of the Eucharistic sacrifice." 'And since in this divine sacrifice which is celebrated in the Mass, the same Christ who offered himself once in a bloody manner on the altar of the cross is contained and is offered in an unbloody manner... this sacrifice is truly propitiatory.'
The only ministers who can officiate at the Eucharist and consecrate the sacrament are validly ordained priests (either bishops or presbyters) acting in the person of Christ ("in persona Christi"). In other words, the priest celebrant represents Christ, who is the head of the church, and acts before God the Father in the name of the church, always using "we" not "I" during the Eucharistic prayer. The matter used must be wheaten bread and grape wine; this is considered essential for validity. As sacrifice, the Eucharist is also offered in reparation for the sins of the living and the dead and to obtain spiritual or temporal benefits from God. As a real presence According to the Catholic Church Jesus Christ is present in the Eucharist in a true, real and substantial way, with his Body and his Blood, with his Soul and his Divinity. By the consecration, the substances of the bread and wine actually become the substances of the body and blood of Christ (transubstantiation) while the appearances or "species" of the bread and wine remain unaltered (e.g. colour, taste, feel, and smell). This change is brought about in the eucharistic prayer through the efficacy of the word of Christ and by the action of the Holy Spirit. The Eucharistic presence of Christ begins at the moment of the consecration and endures as long as the Eucharistic species subsist, that is, until the Eucharist is digested, physically destroyed, or decays by some natural process (at which point, theologian Thomas Aquinas argued, the substance of the bread and wine cannot return). The Fourth Council of the Lateran in 1215 spoke of the bread and wine as "transubstantiated" into the body and blood of Christ: "His body and blood are truly contained in the sacrament of the altar under the forms of bread and wine, the bread and wine having been transubstantiated, by God's power, into his body and blood". In 1551, the Council of Trent definitively declared: "Because Christ our Redeemer said that it was truly his body that he was offering under the species of bread, it has always been the conviction of the Church of God, and this holy Council now declares again that by the consecration of the bread and wine there takes place a change of the whole substance of the bread into the substance of the body of Christ and of the whole substance of the wine into the substance of his blood. This change the holy Catholic Church has fittingly and properly called transubstantiation." The church holds that the body and blood of Jesus can no longer be truly separated. Where one is, the other must be. Therefore, although the priest (or extraordinary minister of Holy Communion) says "The Body of Christ" when administering the Host and "The Blood of Christ" when presenting the chalice, the communicant who receives either one receives Christ, whole and entire. "Christ is present whole and entire in each of the species and whole and entire in each of their parts, in such a way that the breaking of the bread does not divide Christ." The Catholic Church sees as the main basis for this belief the words of Jesus himself at his Last Supper: the Synoptic Gospels and Paul's recount that Jesus at the time of taking the bread and the cup said: "This is my body … this is my blood." The Catholic understanding of these words, from the Patristic authors onward, has emphasized their roots in the covenantal history of the Old Testament. 
The interpretation of Christ's words against this Old Testament background coheres with and supports belief in the Real presence of Christ in the Eucharist. Since the Eucharist is the body and blood of Christ, "the worship due to the sacrament of the Eucharist, whether during the celebration of the Mass or outside it, is the worship of latria, that is, the adoration given to God alone. The Church guards with the greatest care Hosts that have been consecrated. She brings them to the sick and to other persons who find it impossible to participate at Mass. She also presents them for the solemn adoration of the faithful and she bears them in processions. The Church encourages the faithful to make frequent visits to adore the Blessed Sacrament reserved in the tabernacle." According to the Catholic Church doctrine receiving the Eucharist in a state of mortal sin is a sacrilege and only those who are in a state of grace, that is, without any mortal sin, can receive it. Based on 1 Corinthians 11:27–29, it affirms the following: "Anyone who is aware of having committed a mortal sin must not receive Holy Communion, even if he experiences deep contrition, without having first received sacramental absolution, unless he has a grave reason for receiving Communion and there is no possibility of going to confession." Eastern Orthodoxy Within Eastern Christianity, the Eucharistic service is called the Divine Liturgy (Byzantine Rite) or similar names in other rites. It comprises two main divisions: the first is the Liturgy of the Catechumens which consists of introductory litanies, antiphons and scripture readings, culminating in a reading from one of the Gospels and, often, a homily; the second is the Liturgy of the Faithful in which the Eucharist is offered, consecrated, and received as Holy Communion. Within the latter, the actual Eucharistic prayer is called the anaphora, literally: "offering" or "carrying up" (). In the Rite of Constantinople, two different anaphoras are currently used: one is attributed to John Chrysostom, the other to Basil the Great. In the Oriental Orthodox Church, a variety of anaphoras are used, but all are similar in structure to those of the Constantinopolitan Rite, in which the Anaphora of Saint John Chrysostom is used most days of the year; Saint Basil's is offered on the Sundays of Great Lent, the eves of Christmas and Theophany, Holy Thursday, Holy Saturday, and upon his feast day (1 January). At the conclusion of the Anaphora the bread and wine are held to be the Body and Blood of Christ. Unlike the Latin Church, the Byzantine Rite uses leavened bread, with the leaven symbolizing the presence of the Holy Spirit. The Armenian Apostolic Church, like the Latin Church, uses unleavened bread, whereas the Greek Orthodox Church utilizes leavened bread in their celebration. Conventionally this change in the elements is understood to be accomplished at the Epiclesis ("invocation") by which the Holy Spirit is invoked and the consecration of the bread and wine as the true and genuine Body and Blood of Christ is specifically requested, but since the anaphora as a whole is considered a unitary (albeit lengthy) prayer, no one moment within it can readily be singled out. Protestantism Anglican Anglican eucharistic theology on the matter is nuanced. The Eucharist is neither wholly a matter of transubstantiation nor simply devotional and memorialist in orientation. The Anglican church does not adhere to the belief that the Lord's Supper is merely a devotional reflection on Christ's death. 
For some Anglicans, "Christ" is spiritually present in the fullness of his person in the Eucharist. The Church of England itself has repeatedly refused to make official any definition of "the Presence of Christ". Church authorities prefer to leave it a mystery while proclaiming the consecrated bread and wine to be "spiritual food" of "Christ's Most Precious Body and Blood". The bread and wine are an "outward sign of an inner grace," BCP Catechism, p. 859. The Words of Administration at Communion allow for Real Presence or for a real but spiritual Presence (Calvinist Receptionism and Virtualism). This concept was congenial to most Anglicans well into the 19th century. From the 1840s, the Tractarians re-introduced the idea of "the Real Presence" to suggest a corporeal presence, which could be done since the language of the BCP Rite referred to the Body and Blood of Christ without details, as well as referring to these as spiritual food at other places in the text. Both are found in the Latin and other Rites, but in the former, a definite interpretation as corporeal is applied. Receptionism and Virtualism assert the Real Presence. The former places emphasis on the recipient and the latter states that "the Presence" is confected by the power of the Holy Spirit but not in Christ's natural body. His presence is objective and does not depend for its existence on the faith of the recipient. The liturgy petitions that the elements 'be' rather than 'become' the Body and Blood of Christ, leaving aside any theory of a change in the natural elements: bread and wine are the outer reality and "the Presence" is the inner, invisible except as perceived in faith. In 1789 the Protestant Episcopal Church of the USA restored explicit language that the Eucharist is an oblation (sacrifice) to God. Subsequent revisions of the Prayer Book by member churches of the Anglican Communion have done likewise (the Church of England did so in the 1928 Prayer Book). The so-called 'Black Rubric' in the 1552 Prayer Book, which allowed kneeling for communion but denied the real and essential presence of Christ in the elements, was omitted in the 1559 edition at the Queen's insistence. It was re-instated in the 1662 Book, modified to deny any corporeal presence that would suggest Christ was present in his natural body. In most parishes of the Anglican Communion the Eucharist is celebrated every Sunday, having replaced Morning Prayer as the principal service. The rites for the Eucharist are found in the various prayer books of the Anglican churches. Wine and unleavened wafers or unleavened bread are used. Daily celebrations are the norm in many cathedrals, and parish churches sometimes offer one or more services of Holy Communion during the week. The nature of the liturgy varies according to the theological tradition of the priests, parishes, dioceses and regional churches. Leavened or unleavened bread may be used. Baptist groups The bread and "fruit of the vine" indicated in Matthew, Mark and Luke as the elements of the Lord's Supper are interpreted by many Baptists as unleavened bread (although leavened bread is often used) and, in line with the historical stance of some Baptist groups (since the mid-19th century) against partaking of alcoholic beverages, grape juice, which they commonly refer to simply as "the Cup". The unleavened bread also underscores the symbolic belief attributed to Christ's breaking the bread and saying that it was his body. A soda cracker is often used.
Most Baptists consider the Communion to be primarily an act of remembrance of Christ's atonement, and a time of renewal of personal commitment. However, with the rise of confessionalism, some Baptists have denied the Zwinglian doctrine of mere memorialism and have taken up a Reformed view of Communion. Confessional Baptists believe in pneumatic presence, which is expressed in the Second London Baptist Confession, specifically in Chapter 30, Articles 3 and 7. This view is prevalent among Southern Baptists, those in the Founders movement (a Calvinistic movement among some Independent Baptists), Freewill Baptists, and several individuals in other Baptist associations. Communion practices and frequency vary among congregations. A typical practice is to have small cups of juice and plates of broken bread distributed to the seated congregation. In other congregations, communicants may proceed to the altar to receive the elements, then return to their seats. A widely accepted practice is for all to receive and hold the elements until everyone is served, then consume the bread and cup in unison. Usually, music is performed and Scripture is read during the receiving of the elements. Some Baptist churches are closed-Communionists (even requiring full membership in the church before partaking), with others being partially or fully open-Communionists. It is rare to find a Baptist church where the Lord's Supper is observed every Sunday; most observe monthly or quarterly, with some holding Communion only during a designated Communion service or following a worship service. Adults and children in attendance who have not made a profession of faith in Christ are expected not to participate. Lutheran Lutherans believe that the body and blood of Christ are "truly and substantially present in, with, and under the forms" of the consecrated bread and wine (the elements), so that communicants eat and drink the body and blood of Christ himself as well as the bread and wine in this sacrament. The Lutheran doctrine of the Real Presence is more accurately and formally known as the "sacramental union". Others have erroneously called this consubstantiation, a Lollardist doctrine, though this term is specifically rejected by Lutheran churches and theologians since it creates confusion about the actual doctrine and subjects the doctrine to the control of a non-biblical philosophical concept in the same manner as, in their view, does the term "transubstantiation". While an official movement exists in Lutheran congregations to celebrate the Eucharist weekly, using formal rites very similar to the Catholic and "high" Anglican services, it was historically common for congregations to celebrate monthly or even quarterly. Even in congregations where the Eucharist is offered weekly, there is not a requirement that every church service be a Eucharistic service, nor that all members of a congregation must receive it weekly. Mennonites and Anabaptists Traditional Mennonite and German Baptist Brethren churches and congregations, such as the Church of the Brethren, have the Agape Meal, footwashing, and the serving of the bread and wine as two parts of the Communion service in the Lovefeast. In the more modern groups, Communion is only the serving of the Lord's Supper. In the communion meal, the members of the Mennonite churches renew their covenant with God and with each other.
Open Brethren and Exclusive Brethren Among Open assemblies, also termed Plymouth Brethren, the Eucharist is more commonly called the Breaking of Bread or the Lord's Supper. It is seen as a symbolic memorial and is
Qadisho refers to the Eucharist as celebrated in the West Syrian traditions of Syriac Christianity. The anaphora of the East Syriac tradition is the Liturgy of Addai and Mari, while that of the West Syrian tradition is the Liturgy of Saint James. Both are extremely old, going back at least to the third century, and are among the oldest extant liturgies continually in use. Restorationism Seventh-day Adventists In the Seventh-day Adventist Church the Holy Communion service customarily is celebrated once per quarter. The service includes the ordinance of footwashing and the Lord's Supper. Unleavened bread and unfermented (non-alcoholic) grape juice are used. Open communion is practised: all who have committed their lives to the Saviour may participate. The communion service must be conducted by an ordained pastor, minister or church elder. Jehovah's Witnesses The Christian Congregation of Jehovah's Witnesses commemorates Christ's death as a ransom or propitiatory sacrifice by observing a Memorial annually on the evening that corresponds to the Passover, Nisan 14, according to the ancient Jewish calendar. They refer to this observance generally as "the Lord's Evening Meal" or the "Memorial of Christ's Death", taken from Jesus' words to his Apostles "do this as a memorial of me". (Luke 22:19) They believe that this is the only annual religious observance commanded for Christians in the Bible. Of those who attend the Memorial, a small minority worldwide partake of the wine and unleavened bread. Jehovah's Witnesses believe that only 144,000 people will receive heavenly salvation and immortal life and thus spend eternity with God and Christ in heaven, with glorified bodies, as under-priests and co-rulers under Christ the King and High Priest, in Jehovah's Kingdom. Paralleling the anointing of kings and priests, they are referred to as the "anointed" class and are the only ones who should partake of the bread and wine. They believe that the baptized "other sheep" of Christ's flock, or the "great crowd", also benefit from the ransom sacrifice and attend the Lord's Supper remembrance as respectful observers, with the hope of receiving salvation through Christ's atoning sacrifice, which is memorialized by the Lord's Evening Meal, and of obtaining everlasting life in Paradise restored on a prophesied "New Earth", under Christ as Redeemer and Ruler. The Memorial, held after sundown, includes a sermon on the meaning and importance of the celebration and gathering, and includes the circulation and viewing among the audience of unadulterated red wine and unleavened bread (matzo). Jehovah's Witnesses believe that the bread symbolizes and represents Jesus Christ's perfect body which he gave on behalf of mankind, and that the wine represents his perfect blood which he shed at Calvary and which redeems fallen man from inherited sin and death. The wine and the bread (sometimes referred to as "emblems") are viewed as symbolic and commemorative; the Witnesses do not believe in transubstantiation or consubstantiation, and so do not hold that there is a literal presence of flesh and blood in the emblems, but rather that the emblems are simply sacred symbols and representations, denoting what was used in the first Lord's Supper and figuratively representing the ransom sacrifice of Jesus and sacred realities.
Latter-day Saints In The Church of Jesus Christ of Latter-day Saints (LDS Church), the "Holy Sacrament of the Lord's Supper", more simply referred to as the Sacrament, is administered every Sunday (except General Conference or other special Sunday meeting) in each LDS Ward or branch worldwide at the beginning of Sacrament meeting. The Sacrament, which consists of both ordinary bread and water (rather than wine or grape juice), is prepared by priesthood holders prior to the beginning of the meeting. At the beginning of the Sacrament, priests say specific prayers to bless the bread and water. The Sacrament is passed row-by-row to the congregation by priesthood holders (typically deacons). The prayer recited for the bread and the water is found in the Book of Mormon and Doctrine and Covenants. The prayer contains the above essentials given by Jesus: "Always remember him, and keep his commandments …, that they may always have his Spirit to be with them." (Moroni, 4:3.) Non-observing denominations While the Salvation Army does not reject the Eucharistic practices of other churches or deny that their members truly receive grace through this sacrament, it does not practice the sacraments of Communion or baptism. This is because they believe that these are unnecessary for the living of a Christian life, and because in the opinion of Salvation Army founders William and Catherine Booth, the sacrament placed too much stress on outward ritual and too little on inward spiritual conversion. Emphasizing the inward spiritual experience of their adherents over any outward ritual, Quakers (members of the Religious Society of Friends) generally do not baptize or observe Communion. Although the early Church of Christ, Scientist observed communion, founder Mary Baker Eddy eventually discouraged the physical ritual as she believed it distracted from the true spiritual nature of the sacrament. As such, Christian Scientists generally do not observe communion. The United Society of Believers (commonly known as Shakers) do not take communion, instead viewing every meal as a Eucharistic feast. Practice and customs Open and closed communion Christian denominations differ in their understanding of whether they may celebrate the Eucharist with those with whom they are not in full communion. The apologist Justin Martyr (c. 150) wrote of the Eucharist "of which no one is allowed to partake but the man who believes that we teach are true, and who has been washed with the washing that is for the remission of sins and unto regeneration, and who is so living as Christ has enjoined." This was continued in the practice of dismissing the catechumens (those still undergoing instruction and not yet baptized) before the sacramental part of the liturgy, a custom which has left traces in the expression "Mass of the Catechumens" and in the Byzantine Rite exclamation by the deacon or priest, "The doors! The doors!", just before recitation of the Creed. Churches such as the Catholic and the Eastern Orthodox Churches practice closed communion under normal circumstances. 
However, the Catholic Church allows administration of the Eucharist, at their spontaneous request, to properly disposed members of the eastern churches (Eastern Orthodox, Oriental Orthodox and Church of the East) not in full communion with it and of other churches that the Holy See judges to be sacramentally in the same position as these churches; and in grave and pressing need, such as danger of death, it allows the Eucharist to be administered also to individuals who do not belong to these churches but who share the Catholic Church's faith in the reality of the Eucharist and have no access to a minister of their own community. Some Protestant communities exclude non-members from Communion. The Evangelical Lutheran Church in America (ELCA) practices open communion, provided those who receive are baptized, but the Lutheran Church–Missouri Synod and the Wisconsin Evangelical Lutheran Synod (WELS) practice closed communion, excluding non-members and requiring communicants to have been given catechetical instruction. The Evangelical Lutheran Church in Canada, the Evangelical Church in Germany, the Church of Sweden, and many other Lutheran churches outside of the US also practice open communion. Some use the term "close communion" for restriction to members of the same denomination, and "closed communion" for restriction to members of the local congregation alone. Most Protestant communities including Congregational churches, the Church of the Nazarene, the Assemblies of God, Methodists, most Presbyterians and Baptists, Anglicans, and Churches of Christ and other non-denominational churches practice various forms of open communion. Some churches do not limit it to only members of the congregation, but to any person in attendance (regardless of Christian affiliation) who considers himself/herself to be a Christian. Others require that the communicant be a baptized person, or a member of a church of that denomination or a denomination of "like faith and practice". Some Progressive Christian congregations offer communion to any individual who wishes to commemorate the life and teachings of Christ, regardless of religious affiliation. In the Episcopal Church (United States), those who do not receive Holy Communion may enter the communion line with their arms crossed over their chest, in order to receive a blessing from the priest, instead of receiving Holy Communion. As a matter of local convention, this practice can also be found in Catholic churches in the United States for Catholics who find themselves, for whatever reason, not in a position to receive the Eucharist itself, as well as for non-Catholics, who are not permitted to receive it. Most Latter-Day Saint churches practice closed communion; one notable exception is the Community of Christ, the second-largest denomination in this movement. While The Church of Jesus Christ of Latter-day Saints (the largest of the LDS denominations) technically practice a closed communion, their official direction to local Church leaders (in Handbook 2, section 20.4.1, last paragraph) is as follows: "Although the sacrament is for Church members, the bishopric should not announce that it will be passed to members only, and nothing should be done to prevent nonmembers from partaking of it." Preparation Catholic The Catholic Church requires its members to receive the sacrament of Penance or Reconciliation before taking Communion if they are aware of having committed a mortal sin and to prepare by fasting, prayer, and other works of piety. 
Eastern Orthodox Traditionally, the Eastern Orthodox church has required its members to have observed all church-appointed fasts (most weeks, this will be at least Wednesday and Friday) for the week prior to partaking of communion, and to fast from all food and water from midnight the night before. In addition, Orthodox Christians are to have made a recent confession to their priest (the frequency varying with one's particular priest), and they must be at peace with all others, meaning that they hold no grudges or anger against anyone. In addition, one is expected to attend Vespers or the All-Night Vigil, if offered, on the night before receiving communion. Furthermore, various pre-communion prayers have been composed, which many (but not all) Orthodox churches require or at least strongly encourage members to say privately before coming to the Eucharist. Protestant confessions Many Protestant congregations generally reserve a period of time for self-examination and private, silent confession just before partaking in the Lord's Supper. Footwashing Seventh-day Adventists, Mennonites, and some other groups participate in "foot washing" as a preparation for partaking in the Lord's Supper. At that time they are to individually examine themselves, and confess any sins they may have between one and another. Malankara Orthodox Syrian Church In the Malankara Orthodox Syrian Church the Eucharist is only given to those who have come prepared to receive the life giving body and blood. Therefore, in a manner to worthily receive, believers will fast from the night before the liturgy from around 6pm or the conclusion of evening prayer and will remain fasting until they receive Holy Qurbana the next morning. Additionally, members who plan on receiving the holy communion have to follow a strict guide of prescribed prayers from the Shehimo or the book of common prayers for the week. Adoration Eucharistic adoration is a practice in the Western (or "Roman") Catholic, Anglo-Catholic and some Lutheran traditions, in which the Blessed Sacrament is exposed to and adored by the faithful. When this exposure and adoration is constant (twenty-four hours a day), it is called Perpetual Adoration. In a parish, this is usually done by volunteer parishioners; in a monastery or convent, it is done by the resident monks or nuns. In the Exposition of the Blessed Sacrament, the Eucharist is displayed in a monstrance, typically placed on an altar, at times with a light focused on it, or with candles flanking it. Health issues Gluten The gluten in wheat bread is dangerous to people with celiac disease and other gluten-related disorders, such as non-celiac gluten sensitivity and wheat allergy. For the Catholic Church, this issue was addressed in the 24 July 2003 letter of the Congregation for the Doctrine of the Faith, which summarized and clarified earlier declarations. The Catholic Church believes that the matter for the Eucharist must be wheaten bread and fermented wine from grapes: it holds that, if the gluten has been entirely removed, the result is not true wheaten bread. For celiacs, but not generally, it allows low-gluten bread. It also permits Holy Communion to be received under the form of either bread or wine alone, except by a priest who is celebrating Mass without other priests or as principal celebrant. Many Protestant churches offer communicants gluten-free alternatives to wheaten bread, usually in the form of a rice-based cracker or gluten-free bread. 
Alcohol The Catholic Church believes that grape juice that has not begun even minimally to ferment cannot be accepted as wine, which it sees as essential for celebration of the Eucharist. For non-alcoholics, but not generally, it allows the use of mustum (grape juice in which fermentation has begun but has been suspended without altering the nature of the juice), and it holds that "since Christ is sacramentally present under each of the species, communion under the species of bread alone makes it possible to receive all the fruit of Eucharistic grace. For pastoral reasons, this manner of receiving communion has been legitimately established as the most common form in the Latin rite." As already indicated, the one exception is in the case of a priest celebrating Mass without other priests or as principal celebrant. The water that in the Roman Rite is prescribed to be mixed with the wine must be only a relatively small quantity. The practice of the Coptic Church is that the mixture should be two parts wine to one part water. Many Protestant churches allow clergy and communicants to take mustum instead of wine. In addition to, or in replacement of wine, some churches offer grape juice which has been pasteurized to stop the fermentation process the juice naturally undergoes; de-alcoholized wine from which most of the alcohol has been removed (between 0.5% and 2% remains), or water. Exclusive use of unfermented grape juice is common in Baptist churches, the United Methodist Church, Seventh-day Adventists, Christian Churches/Churches of Christ, Churches of Christ, Church of God (Anderson, Indiana), some Lutherans, Assemblies of God, Pentecostals, Evangelicals, the Christian Missionary Alliance, and other American independent Protestant churches. Transmission of diseases Risk of infectious disease transmission related to use of a common communion cup exists but it is low. No case of transmission of an infectious disease related to a common communion cup has ever been documented. Experimental studies have demonstrated that infectious diseases can be transmitted. The most likely diseases to be transmitted would be common viral illnesses such as the common cold. A study of 681 individuals found that taking communion up to daily from a common cup did not increase the risk of infection beyond that of those who did not attend services at all. In influenza epidemics, some churches suspend the giving wine at communion, for fear of spreading the disease. This is in full accord with Catholic Church belief that communion under the form of bread alone makes it possible to receive all the fruit of Eucharistic grace. However, the same measure has also been taken by churches that normally insist on the importance of receiving communion under both forms. This was done in 2009 by the Church of England. Some fear contagion through the handling involved in distributing the hosts to the communicants, even if they are placed on the hand rather than on the tongue. Accordingly, some churches use mechanical wafer dispensers or "pillow packs" (communion wafers with
Historical record Records of solar eclipses have been kept since ancient times. Eclipse dates can be used for chronological dating of historical records. A Syrian clay tablet, in the Ugaritic language, records a solar eclipse which occurred on March 5, 1223 B.C., while Paul Griffin argues that a stone in Ireland records an eclipse on November 30, 3340 B.C. Positing classical-era astronomers' use of Babylonian eclipse records mostly from the 13th century BC provides a feasible and mathematically consistent explanation for the Greeks' finding of all three lunar mean motions (synodic, anomalistic, draconitic) to a precision of about one part in a million or better. Chinese historical records of solar eclipses date back over 3,000 years and have been used to measure changes in the Earth's rate of spin. In the 5th century AD, solar and lunar eclipses were scientifically explained by Aryabhata in his book, the Aryabhatia. Aryabhata states that the Moon and planets shine by reflected sunlight and explains eclipses in terms of shadows cast by and falling on Earth. Aryabhata provides the computation and the size of the eclipsed part during an eclipse. Aryabhata's computations were so accurate that 18th-century scientist Guillaume Le Gentil, during a visit to Pondicherry, India, found the Indian computations of the duration of the lunar eclipse of 30 August 1765 to be short by 41 seconds, whereas Le Gentil's charts were long by 68 seconds. By the 1600s, European astronomers were publishing books with diagrams explaining how lunar and solar eclipses occurred. In order to disseminate this information to a broader audience and decrease fear of the consequences of eclipses, booksellers printed broadsides explaining the event either using the science or via astrology. Eclipses in Mythology & Religion Before eclipses were understood as well as they are today, a much more fearful connotation surrounded these seemingly inexplicable events. Considerable confusion regarding eclipses persisted until Johannes Kepler provided a scientific explanation for them in the early seventeenth century. Typically in mythology, eclipses were understood to be one variation or another of a spiritual battle between the sun and evil forces or spirits of darkness. The phenomenon of the sun seeming to disappear was a fearful sight to all who did not understand the science of eclipses, as well as to those who believed in mythological gods. The sun was highly regarded as divine by many old religions, and some even viewed eclipses as the sun god being overwhelmed by evil spirits. In Norse mythology, for example, a wolf named Fenrir is in constant pursuit of the sun, and eclipses are thought to occur when the wolf successfully devours it. Other Norse tribes believed in two wolves, Sköll and Hati, that pursue the sun and the moon, known as Sol and Mani, and that an eclipse occurs when one of the wolves successfully eats either the sun or the moon. These mythical explanations were a common source of fear for people who regarded the sun as a divine power or god, because an eclipse was frequently viewed as the downfall of their highly regarded god.
Similarly, other mythological explanations of eclipses describe the phenomenon of darkness covering the sky during the day as a war between the gods of the sun and the moon. In most types of mythologies and certain religions, eclipses were seen as a sign that the gods were angry and that danger was soon to come, so people often altered their actions in an effort to dissuade the gods from unleashing their wrath. In the Hindu religion, for example, people often sing religious hymns for protection from the evil spirits of the eclipse, and many people of the Hindu religion refuse to eat during an eclipse to avoid the effects of the evil spirits. All food that had been stored before the eclipse is to be thrown out to avoid contamination by spirits, and Hindu people living in India will also wash off in the Ganges River, which is believed to be spiritually cleansing, directly following an eclipse to clean themselves of the evil spirits. In early Judaism and Christianity, eclipses were viewed as signs from God, and some eclipses were seen as a display of God's greatness or even signs of cycles of life and death. However, more ominous eclipses such as a blood moon were believed to be a divine sign that God would soon destroy their enemies. Other planets and dwarf planets Gas giants The gas giant planets have many moons and thus frequently display eclipses. The most striking involve Jupiter, which has four large moons and a low axial tilt, making eclipses more frequent as these bodies pass through the shadow of the larger planet. Transits occur with equal frequency. It is common to see the larger moons casting circular shadows upon Jupiter's cloudtops. The eclipses of the Galilean moons by Jupiter became accurately predictable once their orbital elements were known. During the 1670s, it was discovered that these events were occurring about 17 minutes later than expected when Jupiter was on the far side of the Sun. Ole Rømer deduced that the delay was caused by the time needed for light to travel from Jupiter to the Earth. This was used to produce the first estimate of the speed of light. On the other three gas giants (Saturn, Uranus and Neptune) eclipses only occur at certain periods during the planet's orbit, due to their higher inclination between the orbits of the moon and the orbital plane of the planet. The moon Titan, for example, has an orbital plane tilted about 1.6° to Saturn's equatorial plane. But Saturn has an axial tilt of nearly 27°. The orbital plane of Titan only crosses the line of sight to the Sun at two points along Saturn's orbit. As the orbital period of Saturn is 29.7 years, an eclipse is only possible about every 15 years. The timing of the Jovian satellite eclipses was also used to calculate an observer's longitude upon the Earth. By knowing the expected time when an eclipse would be observed at a standard longitude (such as Greenwich), the time difference could be computed by accurately observing the local time of the eclipse. The time difference gives the longitude of the observer because every hour of difference corresponded to 15° around the Earth's equator. This technique was used, for example, by Giovanni D. Cassini in 1679 to re-map France. Mars On Mars, only partial solar eclipses (transits) are possible, because neither of its moons is large enough, at their respective orbital radii, to cover the Sun's disc as seen from the surface of the planet. Eclipses of the moons by Mars are not only possible, but commonplace, with hundreds occurring each Earth year. 
There are also rare occasions when Deimos is eclipsed by Phobos. Martian eclipses have been photographed from both the surface of Mars and from orbit. Pluto Pluto, with its proportionately largest moon Charon, is also the site of many eclipses. A series
passing through any particular location in the region for a fixed interval of time. As viewed from such a location, this shadowing event is known as an eclipse. Typically the cross-sections of the objects involved in an astronomical eclipse are roughly disk shaped. The region of an object's shadow during an eclipse is divided into three parts:
The umbra, within which the object completely covers the light source. For the Sun, this light source is the photosphere.
The antumbra, extending beyond the tip of the umbra, within which the object is completely in front of the light source but too small to completely cover it.
The penumbra, within which the object is only partially in front of the light source.
A total eclipse occurs when the observer is within the umbra, an annular eclipse when the observer is within the antumbra, and a partial eclipse when the observer is within the penumbra. During a lunar eclipse only the umbra and penumbra are applicable, because the antumbra of the Sun-Earth system lies far beyond the Moon. Analogously, Earth's apparent diameter from the viewpoint of the Moon is nearly four times that of the Sun and thus cannot produce an annular eclipse. The same terms may be used analogously in describing other eclipses, e.g., the antumbra of Deimos crossing Mars, or Phobos entering Mars's penumbra. The first contact occurs when the eclipsing object's disc first starts to impinge on the light source; second contact is when the disc moves completely within the light source; third contact when it starts to move out of the light; and fourth or last contact when it finally leaves the light source's disc entirely. For spherical bodies, when the occulting object is smaller than the star, the length (L) of the umbra's cone-shaped shadow is given by L = (r · Ro) / (Rs − Ro), where Rs is the radius of the star, Ro is the occulting object's radius, and r is the distance from the star to the occulting object. For Earth, on average L is equal to 1.384 × 10^6 km, which is much larger than the Moon's semimajor axis of 3.844 × 10^5 km. Hence the umbral cone of the Earth can completely envelop the Moon during a lunar eclipse. If the occulting object has an atmosphere, however, some of the luminosity of the star can be refracted into the volume of the umbra. This occurs, for example, during an eclipse of the Moon by the Earth—producing a faint, ruddy illumination of the Moon even at totality. On Earth, the shadow cast during an eclipse moves very approximately at 1 km per sec. This depends on the location of the shadow on the Earth and the angle at which it is moving. Eclipse cycles An eclipse cycle takes place when eclipses in a series are separated by a certain interval of time. This happens when the orbital motions of the bodies form repeating harmonic patterns. A particular instance is the saros, which results in a repetition of a solar or lunar eclipse every 6,585.3 days, or a little over 18 years. Because this is not a whole number of days, successive eclipses will be visible from different parts of the world. In one saros period there are 239.0 anomalistic periods, 241.0 sidereal periods, 242.0 nodical periods, and 223.0 synodic periods. Although the orbit of the Moon does not give exact integers, the numbers of orbit cycles are close enough to integers to give strong similarity for eclipses spaced at 18.03 yr intervals. Earth–Moon system An eclipse involving the Sun, Earth, and Moon can occur only when they are nearly in a straight line, allowing one to be hidden behind another, viewed from the third.
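As a check on the umbra-length formula above, one can substitute approximate round values (the particular figures below are illustrative, not taken from the source): Rs ≈ 6.96 × 10^5 km for the Sun's radius, Ro ≈ 6.37 × 10^3 km for the Earth's radius, and r ≈ 1.496 × 10^8 km for the Earth–Sun distance. Then L = (r · Ro) / (Rs − Ro) ≈ (1.496 × 10^8 km × 6.37 × 10^3 km) / (6.89 × 10^5 km) ≈ 1.38 × 10^6 km, in agreement with the average value quoted above. The same kind of arithmetic lies behind the saros figure: 223 synodic months of about 29.53 days each come to roughly 6,585 days, a little over 18 years.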
Because the orbital plane of the Moon is tilted with respect to the orbital plane of the Earth (the ecliptic), eclipses can occur only when the Moon is close to the intersection of these two planes (the nodes). The Sun, Earth and nodes are aligned twice a year (during an eclipse season), and eclipses can occur during a period of about two months around these times. There can be from four to seven eclipses in a calendar year, which repeat according to various eclipse cycles, such as a saros. Between 1901 and 2100 there are the maximum of seven eclipses in: four (penumbral) lunar and three solar eclipses: 1908, 2038. four solar and three lunar eclipses: 1918, 1973, 2094. five solar and two lunar eclipses: 1934. Excluding penumbral lunar eclipses, there are a maximum of seven eclipses in: 1591, 1656, 1787, 1805, 1918, 1935, 1982, and 2094. Solar eclipse As observed from the Earth, a solar eclipse occurs when the Moon passes in front of the Sun. The type of solar eclipse event depends on the distance of the Moon from the Earth during the event. A total solar eclipse occurs when the Earth intersects the umbra portion of the Moon's shadow. When the umbra does not reach the surface of the Earth, the Sun is only partially occulted, resulting in an annular eclipse. Partial solar eclipses occur when the viewer is inside the penumbra. The eclipse magnitude is the fraction of the Sun's diameter that is covered by the Moon. For a total eclipse, this value is always greater than or equal to one. In both annular and total eclipses, the eclipse magnitude is the ratio of the angular sizes of the Moon to the Sun. Solar eclipses are relatively brief events that can only be viewed in totality along a relatively narrow track. Under the most favorable circumstances, a total solar eclipse can last for 7 minutes, 31 seconds, and can be viewed along a track that is up to 250 km wide. However, the region where a partial eclipse can be observed is much larger. The Moon's umbra will advance eastward at a rate of 1,700 km/h, until it no longer intersects the Earth's surface. During a solar eclipse, the Moon can sometimes perfectly cover the Sun because its apparent size is nearly the same as the Sun's when viewed from the Earth. A total solar eclipse is in fact an occultation while an annular solar eclipse is a transit. When observed at points in space other than from the Earth's surface, the Sun can be eclipsed by bodies other than the Moon. Two examples include when the crew of Apollo 12 observed the Earth to eclipse the Sun in 1969 and when the Cassini probe observed Saturn to eclipse the Sun in 2006. Lunar eclipse Lunar eclipses occur when the Moon passes through the Earth's shadow. This happens only during a full moon, when the Moon is on the far side of the Earth from the Sun. Unlike a solar eclipse, an eclipse of the Moon can be observed from nearly an entire hemisphere. For this reason it is much more common to observe a lunar eclipse from a given location. A lunar eclipse lasts longer, taking several hours to complete, with totality itself usually averaging anywhere from about 30 minutes to over an hour. There are three types of lunar eclipses: penumbral, when the Moon crosses only the Earth's penumbra; partial, when the Moon crosses partially into the Earth's umbra; and total, when the Moon crosses entirely into the Earth's umbra. Total lunar eclipses pass through all three phases. Even during a total lunar eclipse, however, the Moon is not completely dark. 
Sunlight refracted through the Earth's atmosphere enters the umbra and provides a faint illumination. Much as in a sunset, the atmosphere tends to more strongly scatter light with shorter wavelengths, so the illumination of the Moon by refracted light has a red hue, thus the phrase 'Blood Moon' is often found in descriptions of such lunar events as far back as eclipses are recorded.
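The wavelength dependence mentioned above can be made roughly quantitative with the Rayleigh scattering law, an approximation introduced here for illustration rather than taken from the source: scattered intensity varies as about 1/λ^4, so blue light near 450 nm is scattered roughly (700/450)^4 ≈ 6 times more strongly than red light near 700 nm. The shorter wavelengths are therefore preferentially removed from the sunlight that skims through the atmosphere, and mainly the redder light is refracted onto the eclipsed Moon.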
ed can be found on virtually every version of Unix and Linux available, and as such is useful for people who have to work with multiple versions of Unix. On Unix-based operating systems, some utilities like SQL*Plus run ed as the editor if the EDITOR and VISUAL environment variables are not defined. If something goes wrong, ed is sometimes the only editor available. This is often the only time when it is used interactively. The version of ed provided by GNU has a few switches to enhance the feedback. Using the -p switch to set a command prompt provides a simple prompt and more useful feedback. The -p switch has been defined in POSIX since XPG2 (1987). The ed commands are often imitated in other line-based editors. For example, EDLIN in early MS-DOS versions and 32-bit versions of Windows NT has a somewhat similar syntax, and text editors in many MUDs (LPMud and descendants, for example) use ed-like syntax. These editors, however, are typically more limited in function. Example Here is an example transcript of an ed session. For clarity, commands and text typed by the user are in normal face, and output from ed is emphasized.
a
ed is the standard Unix text editor.
This is line number two.
.
2i

.
,l
ed is the standard Unix text editor.$
$
This is line number two.$
3s/two/three/
,l
ed is the standard Unix text editor.$
$
This is line number three.$
w text
65
q
The end result is a simple text file containing the following text:
ed is the standard Unix text editor.

This is line number three.
Starting with an empty file, the a command appends text (all ed commands are single letters). The command puts ed in insert mode, inserting the characters that follow, and is terminated by a single dot on a line. The two lines that are entered before the dot end up in the file buffer. The 2i command also goes into insert mode, and will insert the entered text (a single empty line in our case) before line two. All commands may be prefixed by a line number to operate on that line. In the line ,l, the lowercase L stands for the list command. The command is prefixed by a range, in this case "," (a bare comma), which is a shortcut for 1,$. A range is two line numbers separated by a comma ($ means the last line). In return, ed lists all lines, from first to last. These lines are ended with dollar signs, so that white space at the end of lines is clearly visible. Once the empty line is inserted in line 2, the line which reads "This is line number two." is now actually the third line. This error is corrected with 3s/two/three/, a substitution command. The 3 will apply it to the correct line; following the command is the text to be replaced, and then the replacement. Listing all lines with ,l, the line is now shown to be correct. w text writes the buffer to the file "text", making ed respond with 65, the number of characters written to the file. q will end an ed session. Cultural references The GNU project has numerous jokes around ed hosted on its website. In addition, an error code called is defined in glibc: when asked to print out its description (errorstr), the library returns a single question mark. The documentation is simply "the experienced user
error, and when it wants to make sure the user wishes to quit without saving, is "?". It does not report the current filename or line number, or even display the results of a change to the text, unless requested. Older versions (c. 1981) did not even ask for confirmation when a quit command was issued without the user saving changes. This terseness was appropriate in the early versions of Unix, when consoles were teletypes, modems were slow, and memory was precious. As computer technology improved and these constraints were loosened, editors with more visual feedback became the norm. In current practice, ed is rarely used interactively, but does find use in some shell scripts. For interactive use, ed was subsumed by the sam, vi and Emacs editors in the 1980s.
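To illustrate the scripted, non-interactive use mentioned above, a short command file can be redirected to ed's standard input, much like the edlin scripting shown later; the file names and the substitution here are hypothetical examples, not taken from the source. Invoked as, say, ed -s notes.txt < fix.ed (the -s switch suppresses the byte counts ed normally prints), a command file such as
,s/teh/the/g
w
q
replaces every occurrence of "teh" with "the" throughout the buffer, writes the result back to notes.txt, and quits, using the same s, w and q commands demonstrated in the interactive example above.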
in the same way as replace, but without the replacement text. A search for 'apple' in the first 20 lines of a file is typed 1,20?sapple (no space, unless that is part of the search) followed by a press of enter. For each match, it asks if it is the correct one, and accepts n or y (or Enter). P - displays a listing of a range of lines. If no range is specified, P displays the complete file from the * to the end. This is different from L in that P changes the current line to be the last line in the range. T - transfers another file into the one being edited, with this syntax: [line to insert at]t[full path to file]. W - (write) saves the file. E - saves the file and quits edlin. Q - quits edlin without saving. Scripts Edlin may be used as a non-interactive file editor in scripts by redirecting a series of edlin commands. edlin < script FreeDOS Edlin A GPL-licensed clone of Edlin that includes long filename support is available for download as part of the FreeDOS project. This runs on operating systems such as Linux or Unix as well as
1: Edlin: The only text editor in early versions of DOS.
2:
3: Back in the day, I remember seeing web pages
4: branded with a logo at the bottom:
5: "This page created in edlin."
6: The things that some people put themselves through. ;-)
*
The currently selected line has a *. To replace the contents of any line, the line number is entered and any text entered replaces the original. While editing a line, pressing Ctrl-C cancels any changes. The * marker remains on that line. Entering I (optionally preceded with a line number) inserts one or more lines before the * line or the line given. When finished entering lines, Ctrl-C returns to the edlin command prompt.
*6I
6:*(...or similar)
7:*^C
*7D
*L
1: Edlin: The only text editor in early versions of DOS.
2:
3: Back in the day, I remember seeing web pages
4: branded with a logo at the bottom:
5: "This page created in edlin."
6: (...or similar)
*
I - Inserts lines of text. D - deletes the specified line, again optionally starting with the number of a line, or a range of lines. E.g.: 2,4d deletes lines 2 through 4. In the above example, line 7 was deleted. R - is used to replace
The distinct encoding of 's' and 'S' (using position 2 instead of 1) was maintained from punched cards where it was desirable not to have hole punches too close to each other to ensure the integrity of the physical card. While IBM was a chief proponent of the ASCII standardization committee, the company did not have time to prepare ASCII peripherals (such as card punch machines) to ship with its System/360 computers, so the company settled on EBCDIC. The System/360 became wildly successful, together with clones such as RCA Spectra 70, ICL System 4, and Fujitsu FACOM, and thus so did EBCDIC. All IBM mainframe and midrange peripherals and operating systems use EBCDIC as their inherent encoding (with toleration for ASCII; for example, ISPF in z/OS can browse and edit both EBCDIC and ASCII encoded files). Software and many hardware peripherals can translate to and from encodings, and modern mainframes (such as IBM Z) include processor instructions, at the hardware level, to accelerate translation between character sets. There is an EBCDIC-oriented Unicode Transformation Format called UTF-EBCDIC proposed by the Unicode consortium, designed to allow easy updating of EBCDIC software to handle Unicode, but not intended to be used in open interchange environments. Even on systems with extensive EBCDIC support, it has not been popular. For example, z/OS supports Unicode (preferring UTF-16 specifically), but z/OS only has limited support for UTF-EBCDIC. Not all IBM products use EBCDIC; IBM AIX, Linux on IBM Z, and Linux on Power all use ASCII. Compatibility with ASCII There were numerous difficulties in writing software that would work in both ASCII and EBCDIC. The gaps between letters made simple code that worked in ASCII fail on EBCDIC. For example, a loop that prints every character code from 'A' through 'Z' would print the 26 letters of the alphabet if ASCII is used, but print 41 characters (including a number of unassigned ones) in EBCDIC. Fixing this required complicating the code with function calls, which was greatly resisted by programmers. Sorting EBCDIC put lowercase letters before uppercase letters and letters before numbers, exactly the opposite of ASCII. Programming languages, file formats, and network protocols designed for ASCII quickly made use of available punctuation marks (such as the curly braces { and }) that did not exist in EBCDIC.
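A minimal sketch of the kind of loop meant above (the original snippet is not preserved here, so this C reconstruction is illustrative only):
#include <stdio.h>

int main(void)
{
    /* On an ASCII system this prints the 26 letters A through Z.
       In EBCDIC, 'A' is 0xC1 and 'Z' is 0xE9, so the same loop steps
       through 41 code points, not all of which are letters. */
    for (int c = 'A'; c <= 'Z'; ++c)
        putchar(c);
    putchar('\n');
    return 0;
}
The non-letter gaps fall between 'I' and 'J' and between 'R' and 'S', a legacy of the punched-card zone layout, which is why portable code checks characters with functions such as isalpha() rather than relying on contiguous letter codes.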
IBM mainframe and IBM midrange computer operating systems. It descended from the code used with punched cards and the corresponding six-bit binary-coded decimal code used with most of IBM's computer peripherals of the late 1950s and early 1960s. It is supported by various non-IBM platforms, such as Fujitsu-Siemens' BS2000/OSD, OS-IV, MSP, and MSP-EX, the SDS Sigma series, Unisys VS/9, Unisys MCP and ICL VME. History EBCDIC was devised in 1963 and 1964 by IBM and was announced with the release of the IBM System/360 line of mainframe computers. It is an eight-bit character encoding, developed separately from the seven-bit ASCII encoding scheme. It was created to extend the existing Binary-Coded Decimal (BCD) Interchange Code, or BCDIC, which itself was devised as an efficient means of encoding the two zone and number punches on punched cards into six bits. The distinct encoding of 's' and 'S' (using position 2 instead of 1) was maintained from punched cards where it was desirable not to have hole punches too close to each other to ensure the integrity of the physical card. While IBM was a chief proponent of the ASCII standardization committee, the company did not have time to prepare ASCII peripherals (such as card punch machines) to ship with its System/360 computers, so the company settled on EBCDIC. The System/360 became wildly successful, together with clones such as RCA Spectra 70, ICL System 4, and Fujitsu FACOM, thus so did EBCDIC. All IBM mainframe and midrange peripherals and operating systems use EBCDIC as their inherent encoding (with toleration for ASCII, for example, ISPF in z/OS can browse and edit both EBCDIC and ASCII encoded files). Software and many hardware peripherals can translate to and from encodings, and modern mainframes (such as IBM Z) include processor instructions, at the hardware level, to accelerate translation between character sets. There is an EBCDIC-oriented Unicode Transformation Format called UTF-EBCDIC proposed by the Unicode consortium, designed to allow easy updating of EBCDIC software to handle Unicode, but not intended to be used in open interchange environments. Even on systems with extensive EBCDIC support, it has not been popular. For example, z/OS supports Unicode (preferring UTF-16 specifically), but z/OS only has limited support for UTF-EBCDIC. Not all IBM products use EBCDIC; IBM AIX, Linux on IBM Z, and Linux on Power all use ASCII. Compatibility with ASCII There were numerous difficulties to writing software that would work in both ASCII and EBCDIC. The gaps between letters made simple code that worked in ASCII fail on EBCDIC. For example would print the alphabet from A to Z if ASCII is used, but print 41 characters (including a number of unassigned ones) in EBCDIC. Fixing this required complicating the code with function calls which was greatly resisted by
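The portability and collation problems described above can be illustrated with a short sketch that uses Python's built-in "cp037" codec as a stand-in for EBCDIC (cp037 is one common EBCDIC code page; the text does not name a specific one):

```python
def run_from_A_to_Z(encoding: str) -> list[str]:
    """Decode every code point between 'A' and 'Z' in the given encoding."""
    start, end = "A".encode(encoding)[0], "Z".encode(encoding)[0]
    return [bytes([code]).decode(encoding) for code in range(start, end + 1)]

print(len(run_from_A_to_Z("ascii")))   # 26 -- the letters are contiguous
print(len(run_from_A_to_Z("cp037")))   # 41 -- the letter codes are not contiguous

# Collation also differs: EBCDIC puts lowercase before uppercase and letters
# before digits, the reverse of ASCII.
print(sorted(["a", "A", "0"], key=lambda s: s.encode("cp037")))   # ['a', 'A', '0']
print(sorted(["a", "A", "0"], key=lambda s: s.encode("ascii")))   # ['0', 'A', 'a']
```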
of the rough endoplasmic reticulum forms large double-membrane sheets that are located near, and continuous with, the outer layer of the nuclear envelope. The double membrane sheets are stacked and connected through several right- or left-handed helical ramps, the "Terasaki ramps", giving rise to a structure resembling a multi-story car park. Although there is no continuous membrane between the endoplasmic reticulum and the Golgi apparatus, membrane-bound transport vesicles shuttle proteins between these two compartments. Vesicles are surrounded by coating proteins called COPI and COPII. COPII targets vesicles to the Golgi apparatus and COPI marks them to be brought back to the rough endoplasmic reticulum. The rough endoplasmic reticulum works in concert with the Golgi complex to target new proteins to their proper destinations. The second method of transport out of the endoplasmic reticulum involves areas called membrane contact sites, where the membranes of the endoplasmic reticulum and other organelles are held closely together, allowing the transfer of lipids and other small molecules. The rough endoplasmic reticulum is key in multiple functions: Manufacture of lysosomal enzymes with a mannose-6-phosphate marker added in the cis-Golgi network. Manufacture of secreted proteins, either secreted constitutively with no tag or secreted in a regulatory manner involving clathrin and paired basic amino acids in the signal peptide. Integral membrane proteins that stay embedded in the membrane as vesicles exit and bind to new membranes. Rab proteins are key in targeting the membrane; SNAP and SNARE proteins are key in the fusion event. Initial glycosylation as assembly continues. This is N-linked (O-linking occurs in the Golgi). N-linked glycosylation: If the protein is properly folded, oligosaccharyltransferase recognizes the AA sequence NXS or NXT (with the S/T residue phosphorylated) and adds a 14-sugar backbone (2-N-acetylglucosamine, 9-branching mannose, and 3-glucose at the end) to the side-chain nitrogen of Asn. Smooth endoplasmic reticulum In most cells the smooth endoplasmic reticulum (abbreviated SER) is scarce. Instead there are areas where the ER is partly smooth and partly rough, this area is called the transitional ER. The transitional ER gets its name because it contains ER exit sites. These are areas where the transport vesicles that contain lipids and proteins made in the ER, detach from the ER and start moving to the Golgi apparatus. Specialized cells can have a lot of smooth endoplasmic reticulum and in these cells the smooth ER has many functions. It synthesizes lipids, phospholipids, and steroids. Cells which secrete these products, such as those in the testes, ovaries, and sebaceous glands have an abundance of smooth endoplasmic reticulum. It also carries out the metabolism of carbohydrates, detoxification of natural metabolism products and of alcohol and drugs, attachment of receptors on cell membrane proteins, and steroid metabolism. In muscle cells, it regulates calcium ion concentration. Smooth endoplasmic reticulum is found in a variety of cell types (both animal and plant), and it serves different functions in each. The smooth endoplasmic reticulum also contains the enzyme glucose-6-phosphatase, which converts glucose-6-phosphate to glucose, a step in gluconeogenesis. It is connected to the nuclear envelope and consists of tubules that are located near the cell periphery. These tubes sometimes branch forming a network that is reticular in appearance. 
In some cells, there are dilated areas like the sacs of rough endoplasmic reticulum. The network of smooth endoplasmic reticulum allows for an increased surface area to be devoted to the action or storage of key enzymes and the products of these enzymes. Sarcoplasmic reticulum The sarcoplasmic reticulum (SR), from the Greek σάρξ sarx ("flesh"), is smooth ER found in muscle cells. The only structural difference between this organelle and the smooth endoplasmic reticulum is the medley of proteins they have, both bound to their membranes and drifting within the confines of their lumens. This fundamental difference is indicative of their functions: The endoplasmic reticulum synthesizes molecules, while the sarcoplasmic reticulum stores calcium ions and pumps them out into the sarcoplasm when the muscle fiber is stimulated. After their release from the sarcoplasmic reticulum, calcium ions interact with contractile proteins that utilize ATP to shorten the muscle fiber. The sarcoplasmic reticulum plays a major role in excitation-contraction coupling. Functions The endoplasmic reticulum serves many general functions, including the folding of protein molecules in sacs called cisternae and the transport of synthesized proteins in vesicles to the Golgi apparatus. Rough endoplasmic reticulum is also involved in protein synthesis. Correct folding of newly made proteins is made possible by several endoplasmic reticulum chaperone proteins, including protein disulfide isomerase (PDI), ERp29, the Hsp70 family member BiP/Grp78, calnexin, calreticulin, and the peptidylprolyl isomerase family. Only properly folded proteins are transported from the rough ER to the Golgi apparatus – unfolded proteins cause an unfolded protein response as a stress response in the
back to the rough endoplasmic reticulum. The rough endoplasmic reticulum works in concert with the Golgi complex to target new proteins to their proper destinations. The second method of transport out of the endoplasmic reticulum involves areas called membrane contact sites, where the membranes of the endoplasmic reticulum and other organelles are held closely together, allowing the transfer of lipids and other small molecules. The rough endoplasmic reticulum is key in multiple functions: Manufacture of lysosomal enzymes with a mannose-6-phosphate marker added in the cis-Golgi network. Manufacture of secreted proteins, either secreted constitutively with no tag or secreted in a regulatory manner involving clathrin and paired basic amino acids in the signal peptide. Integral membrane proteins that stay embedded in the membrane as vesicles exit and bind to new membranes. Rab proteins are key in targeting the membrane; SNAP and SNARE proteins are key in the fusion event. Initial glycosylation as assembly continues. This is N-linked (O-linking occurs in the Golgi). N-linked glycosylation: If the protein is properly folded, oligosaccharyltransferase recognizes the AA sequence NXS or NXT (with the S/T residue phosphorylated) and adds a 14-sugar backbone (2-N-acetylglucosamine, 9-branching mannose, and 3-glucose at the end) to the side-chain nitrogen of Asn. Smooth endoplasmic reticulum In most cells the smooth endoplasmic reticulum (abbreviated SER) is scarce. Instead there are areas where the ER is partly smooth and partly rough, this area is called the transitional ER. The transitional ER gets its name because it contains ER exit sites. These are areas where the transport vesicles that contain lipids and proteins made in the ER, detach from the ER and start moving to the Golgi apparatus. Specialized cells can have a lot of smooth endoplasmic reticulum and in these cells the smooth ER has many functions. It synthesizes lipids, phospholipids, and steroids. Cells which secrete these products, such as those in the testes, ovaries, and sebaceous glands have an abundance of smooth endoplasmic reticulum. It also carries out the metabolism of carbohydrates, detoxification of natural metabolism products and of alcohol and drugs, attachment of receptors on cell membrane proteins, and steroid metabolism. In muscle cells, it regulates calcium ion concentration. Smooth endoplasmic reticulum is found in a variety of cell types (both animal and plant), and it serves different functions in each. The smooth endoplasmic reticulum also contains the enzyme glucose-6-phosphatase, which converts glucose-6-phosphate to glucose, a step in gluconeogenesis. It is connected to the nuclear envelope and consists of tubules that are located near the cell periphery. These tubes sometimes branch forming a network that is reticular in appearance. In some cells, there are dilated areas like the sacs of rough endoplasmic reticulum. The network of smooth endoplasmic reticulum allows for an increased surface area to be devoted to the action or storage of key enzymes and the products of these enzymes. Sarcoplasmic reticulum The sarcoplasmic reticulum (SR), from the Greek σάρξ sarx ("flesh"), is smooth ER found in muscle cells. The only structural difference between this organelle and the smooth endoplasmic reticulum is the medley of proteins they have, both bound to their membranes and drifting within the confines of their lumens. 
This fundamental difference is indicative of their functions: The endoplasmic reticulum synthesizes molecules, while the sarcoplasmic reticulum stores calcium ions and pumps them out into the sarcoplasm when the muscle fiber is stimulated. After their release from the sarcoplasmic reticulum, calcium ions interact with contractile proteins that utilize ATP to shorten the muscle fiber. The sarcoplasmic reticulum plays a major role in excitation-contraction coupling. Functions The endoplasmic reticulum serves many general functions, including the folding of protein molecules in sacs called cisternae and the transport of synthesized proteins in vesicles to the Golgi apparatus. Rough endoplasmic reticulum is also involved in protein synthesis. Correct folding of newly made proteins is made possible by several endoplasmic reticulum chaperone proteins, including protein disulfide isomerase (PDI), ERp29, the Hsp70 family member BiP/Grp78, calnexin, calreticulin, and the peptidylprolyl isomerase family. Only properly folded proteins are transported from the rough ER to the Golgi apparatus – unfolded proteins cause an unfolded protein response as a stress response in the ER. Disturbances in redox regulation, calcium regulation, glucose deprivation, and viral infection or the over-expression of proteins can lead to endoplasmic reticulum stress response (ER stress), a state in which the folding of proteins slows, leading to an increase in unfolded proteins. This stress is emerging as a potential cause of damage in hypoxia/ischemia, insulin resistance, and other disorders. Protein transport Secretory proteins, mostly glycoproteins, are moved across the endoplasmic reticulum membrane. Proteins that are transported by the endoplasmic reticulum throughout the cell are marked with an address tag called a signal sequence. The N-terminus (one end) of a polypeptide chain (i.e., a protein) contains a few amino acids that work as an address tag, which are removed when the polypeptide reaches its destination. Nascent peptides reach the ER via the translocon, a membrane-embedded multiprotein complex. Proteins that are destined for places outside the endoplasmic reticulum are packed into transport vesicles and moved along the cytoskeleton toward their destination. In human fibroblasts, the ER is always co-distributed with microtubules and the depolymerisation of the latter cause its co-aggregation
may refer to:

Enemy combatant

Art, entertainment, and media

Fictional entities
The Enemy, an alias of Morgoth, a fictional character in Tolkien's legendarium

Films
The Enemy (1916 film), directed by Paul Scardon, with Julia Swayne Gordon and Charles Kent
The Enemy (1927 film), directed by Fred Niblo, starring Lillian Gish
The Enemy (1979 film), directed by Yılmaz Güney and Zeki Ökten, starring Aytaç Arman
Enemy (1990 film)
The Enemy (2001 film), directed by Tom Kinninmont, starring Roger Moore and Luke Perry
Enemy (2013 film), a 2013 Canadian film starring Jake Gyllenhaal
Enemy (2015 film), a 2015 Indian film
Enemy (2021 film), a 2021 Indian Tamil-language film

Literature
The Enemy (Bagley novel), a 1977 espionage thriller by Desmond Bagley
The Enemy (Child novel), a 2004 novel by Lee Child; the eighth book in the Jack Reacher thriller series
The Enemy (Higson novel), a 2009 young-adult horror novel by Charlie Higson, the first book in an eponymous seven-book series
The Enemy (short story), a 1958 science-fiction short story by Damon Knight
The Enemy, a short story by Pearl S. Buck
Japanese Americans had to rebuild their lives but had lost a lot. United States citizens and long-time residents who had been incarcerated lost their personal liberties; many also lost their homes, businesses, property, and savings. Individuals born in Japan were not allowed to become naturalized US citizens until after passage of the Immigration and Nationality Act of 1952. On February 19, 1976, President Gerald Ford signed a proclamation formally terminating Executive Order 9066 and apologizing for the internment, stating: "We now know what we should have known then — not only was that evacuation wrong but Japanese-Americans were and are loyal Americans. On the battlefield and at home the names of Japanese-Americans have been and continue to be written in history for the sacrifices and the contributions they have made to the well-being and to the security of this, our common Nation." In 1980, President Jimmy Carter signed legislation to create the Commission on Wartime Relocation and Internment of Civilians (CWRIC). The CWRIC was appointed to conduct an official governmental study of Executive Order 9066, related wartime orders, and their effects on Japanese Americans in the West and Alaska Natives in the Pribilof Islands. In December 1982, the CWRIC issued its findings in Personal Justice Denied, concluding that the incarceration of Japanese Americans had not been justified by military necessity. The report determined that the decision to incarcerate was based on "race prejudice, war hysteria, and a failure of political leadership". The Commission recommended legislative remedies consisting of an official Government apology and redress payments of $20,000 to each of the survivors; a public education fund was set up to help ensure that this would not happen again. On August 10, 1988, the Civil Liberties Act of 1988, based on the CWRIC recommendations, was signed into law by Ronald Reagan. On November 21, 1989, George H. W. Bush signed an appropriation bill authorizing payments to be paid out between 1990 and 1998. In 1990, surviving internees began to receive individual redress payments and a letter of apology. This bill applied to Japanese Americans and to members of the Aleut people inhabiting the strategic Aleutian Islands in Alaska who had also been relocated. Legacy February 19, the anniversary of the signing of Executive Order 9066, is now the Day of Remembrance, an annual commemoration of the unjust incarceration of the Japanese-American community. In 2017, the Smithsonian launched an exhibit about these events with artwork by Roger Shimomura. It provides context and interprets the treatment of Japanese Americans during World War II. In February 2022, for the 80th anniversary of the signing of the order, supporters lobbied to pass the Amache National Historic Site Act, which would grant historical designation to the Granada War Relocation Center in Colorado. See also Bob Emmett Fletcher Fred Korematsu Day Executive Order 9102 War Relocation Authority Hirabayashi v. United States Korematsu
order On March 21, 1942, Roosevelt signed Public Law 77-503 (approved after only an hour of discussion in the Senate and thirty minutes in the House) in order to provide for the enforcement of his executive order. Authored by War Department official Karl Bendetsen — who would later be promoted to Director of the Wartime Civilian Control Administration and oversee the incarceration of Japanese Americans — the law made violations of military orders a misdemeanor punishable by up to $5,000 in fines and one year in prison. Using a broad interpretation of EO 9066, Lieutenant General John L. DeWitt issued orders declaring certain areas of the western United States as zones of exclusion under the Executive Order. As a result, approximately 112,000 men, women, and children of Japanese ancestry were evicted from the West Coast of the continental United States and held in American relocation camps and other confinement sites across the country. EO 9066 was not applied in such a sweeping manner to persons of non-Japanese descent. Notably, in a 1943 letter, Attorney General Francis Biddle reminded Roosevelt that "You signed the original Executive Order permitting the exclusions so the Army could handle the Japs. It was never intended to apply to Italians and Germans." Japanese Americans in Hawaii were not incarcerated in the same way, despite the attack on Pearl Harbor. Although the Japanese-American population in Hawaii was nearly 40% of the population of the territory, only a few thousand people were detained there. This fact supported the government's eventual conclusion that the mass removal of ethnic Japanese from the West Coast was motivated by reasons other than "military necessity." Japanese Americans and other Asians in the U.S. had suffered for decades from prejudice and racially motivated fears. Racially discriminatory laws prevented Asian Americans from owning land, voting, testifying against whites in court, and set up other restrictions. Additionally, the FBI, Office of Naval Intelligence and Military Intelligence Division had been conducting surveillance on Japanese-American communities in Hawaii and the continental U.S. from the early 1930s. In early 1941, President Roosevelt secretly commissioned a study to assess the possibility that Japanese Americans would pose a threat to U.S. security. The report, submitted one month before the Japanese bombing of Pearl Harbor, found that, "There will be no armed uprising of Japanese" in the United States. "For the most part," the Munson Report said, "the local Japanese are loyal to the United States or, at worst, hope that by remaining quiet they can avoid concentration camps or irresponsible mobs." A second investigation started in 1940, written by Naval Intelligence officer Kenneth Ringle and submitted in January 1942, likewise found no evidence of fifth column activity and urged against mass incarceration. Both were ignored by military and political leaders. Over two-thirds of the people of Japanese ethnicity who were incarcerated — almost 70,000 — were American citizens. Many of the rest had lived in the country between 20 and 40 years. Most Japanese Americans, particularly the first generation born in the United States (the Nisei), identified as loyal to the United States of America. No Japanese-American citizen or Japanese national residing in the United States was ever found guilty of sabotage or espionage. There were 10 of these internment camps across the country called “relocation centers”. 
There were two in Arkansas, two in Arizona, two in California, one in Idaho, one in Utah, one in Wyoming, and one in Colorado. World War II camps under the order Secretary of War Henry L. Stimson was responsible for assisting relocated people with transport, food, shelter, and other accommodations and delegated Colonel Karl Bendetsen to administer the removal of West Coast Japanese. Over the spring of 1942, General John L. DeWitt issued Western Defense Command orders for Japanese Americans to present themselves for removal. The "evacuees" were taken first to temporary assembly centers, requisitioned fairgrounds and horse racing tracks where living quarters were often converted livestock stalls. As construction on the more permanent and isolated War Relocation Authority camps was completed, the population was transferred by truck or train. These accommodations consisted of tar paper-walled frame buildings in parts of the country with bitter winters and often hot summers. The camps were guarded by armed soldiers and fenced with barbed wire (security measures not shown in published photographs of the camps). Camps held up to 18,000 people, and were small cities, with medical care, food, and education provided by the government. Adults were offered "camp jobs" with wages of $12 to $19 per month, and many camp services such as medical
other bohemians, Munch was still respectful of women, as well as reserved and well-mannered, but he began to give in to the binge drinking and brawling of his circle. He was unsettled by the sexual revolution going on at the time and by the independent women around him. He later turned cynical concerning sexual matters, expressed not only in his behavior and his art, but in his writings as well, an example being a long poem called The City of Free Love. Still dependent on his family for many of his meals, Munch's relationship with his father remained tense over concerns about his bohemian life. After numerous experiments, Munch concluded that the Impressionist idiom did not allow sufficient expression. He found it superficial and too akin to scientific experimentation. He felt a need to go deeper and explore situations brimming with emotional content and expressive energy. Under Jæger's commandment that Munch should "write his life", meaning that Munch should explore his own emotional and psychological state, the young artist began a period of reflection and self-examination, recording his thoughts in his "soul's diary". This deeper perspective helped move him to a new view of his art. He wrote that his painting The Sick Child (1886), based on his sister's death, was his first "soul painting", his first break from Impressionism. The painting received a negative response from critics and from his family, and caused another "violent outburst of moral indignation" from the community. Only his friend Christian Krohg defended him: He paints, or rather regards, things in a way that is different from that of other artists. He sees only the essential, and that, naturally, is all he paints. For this reason Munch's pictures are as a rule "not complete", as people are so delighted to discover for themselves. Oh, yes, they are complete. His complete handiwork. Art is complete once the artist has really said everything that was on his mind, and this is precisely the advantage Munch has over painters of the other generation, that he really knows how to show us what he has felt, and what has gripped him, and to this he subordinates everything else. Munch continued to employ a variety of brushstroke techniques and color palettes throughout the 1880s and early 1890s, as he struggled to define his style. His idiom continued to veer between naturalistic, as seen in Portrait of Hans Jæger, and impressionistic, as in Rue Lafayette. His Inger On the Beach (1889), which caused another storm of confusion and controversy, hints at the simplified forms, heavy outlines, sharp contrasts, and emotional content of his mature style to come. He began to carefully calculate his compositions to create tension and emotion. While stylistically influenced by the Post-Impressionists, what evolved was a subject matter which was symbolist in content, depicting a state of mind rather than an external reality. In 1889, Munch presented his first one-man show of nearly all his works to date. The recognition it received led to a two-year state scholarship to study in Paris under French painter Léon Bonnat. Munch seems to have been an early critic of photography as an art form, and remarked that it "will never compete with the brush and the palette, until such time as photographs can be taken in Heaven or Hell!" Munch's younger sister Laura was the subject of his 1899 interior Melancholy: Laura. 
Amanda O'Neill says of the work, "In this heated claustrophobic scene Munch not only portrays Laura's tragedy, but his own dread of the madness he might have inherited." Paris Munch arrived in Paris during the festivities of the Exposition Universelle (1889) and roomed with two fellow Norwegian artists. His picture Morning (1884) was displayed at the Norwegian pavilion. He spent his mornings at Bonnat's busy studio (which included female models) and afternoons at the exhibition, galleries, and museums (where students were expected to make copies as a way of learning technique and observation). Munch recorded little enthusiasm for Bonnat's drawing lessons—"It tires and bores me—it's numbing"—but enjoyed the master's commentary during museum trips. Munch was enthralled by the vast display of modern European art, including the works of three artists who would prove influential: Paul Gauguin, Vincent van Gogh, and Henri de Toulouse-Lautrec—all notable for how they used color to convey emotion. Munch was particularly inspired by Gauguin's "reaction against realism" and his credo that "art was human work and not an imitation of Nature", a belief earlier stated by Whistler. As one of his Berlin friends said later of Munch, "he need not make his way to Tahiti to see and experience the primitive in human nature. He carries his own Tahiti within him." Influenced by Gauguin, as well as the etchings of German artist Max Klinger, Munch experimented with prints as a medium to create graphic versions of his works. In 1896 he created his first woodcuts—a medium that proved ideal to Munch's symbolic imagery. Together with his contemporary Nikolai Astrup, Munch is considered an innovator of the woodcut medium in Norway. In December 1889 his father died, leaving Munch's family destitute. He returned home and arranged a large loan from a wealthy Norwegian collector when wealthy relatives failed to help, and assumed financial responsibility for his family from then on. Christian's death depressed him and he was plagued by suicidal thoughts: "I live with the dead—my mother, my sister, my grandfather, my father…Kill yourself and then it's over. Why live?" Munch's paintings of the following year included sketchy tavern scenes and a series of bright cityscapes in which he experimented with the pointillist style of Georges Seurat. Berlin By 1892, Munch formulated his characteristic, and original, Synthetist aesthetic, as seen in Melancholy (1891), in which color is the symbol-laden element. Considered by the artist and journalist Christian Krohg as the first Symbolist painting by a Norwegian artist, Melancholy was exhibited in 1891 at the Autumn Exhibition in Oslo. In 1892, Adelsteen Normann, on behalf of the Union of Berlin Artists, invited Munch to exhibit at its November exhibition, the society's first one-man exhibition. However, his paintings evoked bitter controversy (dubbed "The Munch Affair"), and after one week the exhibition closed. Munch was pleased with the "great commotion", and wrote in a letter: "Never have I had such an amusing time—it's incredible that something as innocent as painting should have created such a stir." In Berlin, Munch became involved in an international circle of writers, artists and critics, including the Swedish dramatist and leading intellectual August Strindberg, whom he painted in 1892. He also met Danish writer and painter Holger Drachmann, whom he painted in 1898. Drachmann was 17 years Munch's senior and a drinking companion at Zum schwarzen Ferkel in 1893–94. 
In 1894 Drachmann wrote of Munch: "He struggles hard. Good luck with your struggles, lonely Norwegian." During his four years in Berlin, Munch sketched out most of the ideas that would comprise his major work, The Frieze of Life, first designed for book illustration but later expressed in paintings. He sold little, but made some income from charging entrance fees to view his controversial paintings. Already, Munch was showing a reluctance to part with his paintings, which he termed his "children". His other paintings, including casino scenes, show a simplification of form and detail which marked his early mature style. Munch also began to favor a shallow pictorial space and a minimal backdrop for his frontal figures. Since poses were chosen to produce the most convincing images of states of mind and psychological conditions, as in Ashes, the figures impart a monumental, static quality. Munch's figures appear to play roles on a theatre stage (Death in the Sick-Room), whose pantomime of fixed postures signify various emotions; since each character embodies a single psychological dimension, as in The Scream, Munch's men and women began to appear more symbolic than realistic. He wrote, "No longer should interiors be painted, people reading and women knitting: there would be living people, breathing and feeling, suffering and loving." The Scream The Scream exists in four versions: two pastels (1893 and 1895) and two paintings (1893 and 1910). There are also several lithographs of The Scream (1895 and later). The 1895 pastel sold at auction on 2 May 2012 for US$119,922,500, including commission. It is the most colorful of the versions and is distinctive for the downward-looking stance of one of its background figures. It is also the only version not held by a Norwegian museum. The 1893 version was stolen from the National Gallery in Oslo in 1994 and was recovered. The 1910 painting was stolen in 2004 from The Munch Museum in Oslo, but recovered in 2006 with limited damage. The Scream is Munch's most famous work, and one of the most recognizable paintings in all art. It has been widely interpreted as representing the universal anxiety of modern man. Painted with broad bands of garish color and highly simplified forms, and employing a high viewpoint, it reduces the agonized figure to a garbed skull in the throes of an emotional crisis. With this painting, Munch met his stated goal of "the study of the soul, that is to say the study of my own self". Munch wrote of how the painting came to be: "I was walking down the road with two friends when the sun set; suddenly, the sky turned as red as blood. I stopped and leaned against the fence, feeling unspeakably tired. Tongues of fire and blood stretched over the bluish black fjord. My friends went on walking, while I lagged behind, shivering with fear. Then I heard the enormous, infinite scream of nature." He later described the personal anguish behind the painting, "for several years I was almost mad… You know my picture, 'The Scream?' I was stretched to the limit—nature was screaming in my blood… After that I gave up hope ever of being able to love again." In summing up the painting's effects, author Martha Tedeschi has stated: Whistler's Mother, Wood's American Gothic, Leonardo da Vinci's Mona Lisa and Edvard Munch's The Scream have all achieved something that most paintings—regardless of their art historical importance, beauty, or monetary value—have not: they communicate a specific meaning almost immediately to almost every viewer. 
These few works have successfully made the transition from the elite realm of the museum visitor to the enormous venue of popular culture. Frieze of Life—A Poem about Life, Love and Death In December 1893, Unter den Linden in Berlin was the location of an exhibition of Munch's work, showing, among other pieces, six paintings entitled Study for a Series: Love. This began a cycle he later called the Frieze of Life—A Poem about Life, Love and Death. Frieze of Life motifs, such as The Storm and Moonlight, are steeped in atmosphere. Other motifs illuminate the nocturnal side of love, such as Rose and Amelie and Vampire. In Death in the Sickroom, the subject is the death of his sister Sophie, which he re-worked in many future variations. The dramatic focus of the painting, portraying his entire family, is dispersed in the separate and disconnected figures of sorrow. In 1894, he enlarged the spectrum of motifs by adding Anxiety, Ashes, Madonna and Women in Three Stages (from innocence to old age). Around the start of the 20th century, Munch worked to finish the "Frieze". He painted a number of pictures, several of them in bigger format and to some extent featuring the Art Nouveau aesthetics of the time. He made a wooden frame with carved reliefs for the large painting Metabolism (1898), initially called Adam and Eve. This work reveals Munch's preoccupation with the "fall of man" and his pessimistic philosophy of love. Motifs such as The Empty Cross and Golgotha (both ) reflect a metaphysical orientation, and also reflect Munch's pietistic upbringing. The entire Frieze was shown for the first time at the secessionist exhibition in Berlin in 1902. "The Frieze of Life" themes recur throughout Munch's work but he especially focused on them in the mid-1890s. In sketches, paintings, pastels and prints, he tapped the depths of his feelings to examine his major motifs: the stages of life, the femme fatale, the hopelessness of love, anxiety, infidelity, jealousy, sexual humiliation, and separation in life and death. These themes are expressed in paintings such as The Sick Child (1885), Love and Pain (retitled Vampire; 1893–94), Ashes (1894), and The Bridge. The latter shows limp figures with featureless or hidden faces, over which loom the threatening shapes of heavy trees and brooding houses. Munch portrayed women either as frail, innocent sufferers (see Puberty and Love and Pain) or as the cause of great longing, jealousy and despair (see Separation, Jealousy, and Ashes). Munch often uses shadows and rings of color around his figures to emphasize an aura of fear, menace, anxiety, or sexual intensity. These paintings have been interpreted as reflections of the artist's sexual anxieties, though it could also be argued that they represent his turbulent relationship with love itself and his general pessimism regarding human existence. Many of these sketches and paintings were done in several versions, such as Madonna, Hands and Puberty, and also transcribed as wood-block prints and lithographs. Munch hated to part with his paintings because he thought of his work as a single body of expression. So to capitalize on his production and make some income, he turned to graphic arts to reproduce many of his paintings, including those in this series. 
Munch admitted to the personal goals of his work but he also offered his art to a wider purpose, "My art is really a voluntary confession and an attempt to explain to myself my relationship with life—it is, therefore, actually a sort of egoism, but I am constantly hoping that through this I can help others achieve clarity." While attracting strongly negative reactions, in the 1890s Munch began to receive some understanding of his artistic goals, as one critic wrote, "With ruthless contempt for form, clarity, elegance, wholeness, and realism, he paints with intuitive strength of talent the most subtle visions of the soul." One of his great supporters in Berlin was Walther Rathenau, later the German foreign minister, who strongly contributed to his success. Paris, Berlin and Kristiania In 1896, Munch moved to Paris, where he focused on graphic representations of his Frieze of Life themes. He further developed his woodcut and lithographic technique. Munch's Self-Portrait with Skeleton Arm (1895) is done with an etching needle-and-ink method also used by Paul Klee. Munch also produced multi-colored versions of The Sick Child, concerning tuberculosis, which sold well, as well as several nudes and multiple versions of Kiss (1892). In May 1896, Siegfried Bing held an exhibition of Munch's work inside Bing's Maison de l'Art Nouveau. The exhibition displayed sixty works, including The Kiss, The Scream, Madonna, The Sick Child, The Death Chamber, and The Day After. Bing's exhibition helped to introduce Munch to a French audience. Still, many of the Parisian critics still considered Munch's work "violent and brutal" even if his exhibitions received serious attention and good attendance. His financial situation improved considerably and in 1897, Munch bought himself a summer house facing the fjords of Kristiania, a small fisherman's cabin built in the late 18th century, in the small town of Åsgårdstrand in Norway. He dubbed this home the "Happy House" and returned here almost every summer for the next 20 years. It was this place he missed when he was abroad and when he felt depressed and exhausted. "To walk in Åsgårdstrand is like walking among my paintings—I get so inspired to paint when I am here". In 1897 Munch returned to Kristiania, where he also received grudging acceptance—one critic wrote, "A fair number of these pictures have been exhibited before. In my opinion these improve on acquaintance." In 1899, Munch began an intimate relationship with Tulla Larsen, a "liberated" upper-class woman. They traveled to Italy together and upon returning, Munch began another fertile period in his art, which included landscapes and his final painting in "The Frieze of Life" series, The Dance of Life (1899). Larsen was eager for marriage, and Munch begged off. His drinking and poor health reinforced his fears, as he wrote in the third person: "Ever since he was a child he had hated marriage. His sick and nervous home had given him the feeling that he had no right to get married." Munch almost gave in to Tulla, but fled from her in 1900, also turning away from her considerable fortune, and moved to Berlin. His Girls on the Jetty, created in eighteen different versions, demonstrated the theme of feminine youth without negative connotations. In 1902, he displayed his works thematically at the hall of the Berlin Secession, producing "a symphonic effect—it made a great stir—a lot of antagonism—and a lot of approval." 
The Berlin critics were beginning to appreciate Munch's work even though the public still found his work alien and strange. The good press coverage gained Munch the attention of influential patrons Albert Kollman and Max Linde. He described the turn of events in his diary, "After twenty years of struggle and misery forces of good finally come to my aid in Germany—and a bright door opens up for me." However, despite this positive change, Munch's self-destructive and erratic behavior involved him first with a violent quarrel with another artist, then with an accidental shooting in the presence of Tulla Larsen, who had returned for a
brief reconciliation, which injured two of his fingers. Munch later sawed a self-portrait depicting him and Larsen in half as a consequence of the shooting and subsequent events. She finally left him and married a younger
However, in an effort to reassert its dominant role, IBM patented the bus and placed stringent licensing and royalty policies on its use. A few manufacturers did produce licensed MCA machines (most notably, NCR), but overall the industry balked at IBM's restrictions. Steve Gibson proposed that clone makers adopt NuBus. Instead, a group (the "Gang of Nine"), led by Compaq, created a new bus, which was named the Extended (or Enhanced) Industry Standard Architecture, or "EISA" (and the 16-bit bus became known as Industry Standard Architecture, or "ISA"). This provided virtually all of the technical advantages of MCA, while remaining compatible with existing 8-bit and 16-bit cards, and (most enticing to system and card makers) minimal licensing cost. The EISA bus slot is a two-level staggered pin system, with the upper part of the slot corresponding to the standard ISA bus pin layout. The additional features of the EISA bus are implemented on the lower part of the slot connector, using thin traces inserted into the insulating gap of the upper / ISA card card edge connector. Additionally, the lower part of the bus has five keying notches, so an ISA card with unusually long traces cannot accidentally extend down into the lower part of the slot. Intel introduced their first EISA chipset (and also their first chipset in the modern sense of the word) as the 82350 in September 1989. Intel introduced a lower-cost variant as the 82350DT, announced in April 1991; it began shipping in June of that year. The first EISA computer announced was the HP Vectra 486 in October 1989. The first EISA computers to hit the market were the Compaq Deskpro 486 and the SystemPro. The SystemPro, being one of the first PC-style systems designed as a network server, was built from the ground up to take full advantage of the EISA bus. It included such features as multiprocessing, hardware RAID, and bus-mastering network cards. One of the benefits to come out of the EISA standard was a final codification of the standard to which ISA slots and cards should be held (in particular, clock speed was fixed at an industry standard of 8.33 MHz). Thus, even systems that didn't use the EISA bus gained the advantage of having the ISA standardized, which contributed to its longevity. The "Gang of Nine" The Gang of Nine was the informal name given to the consortium of personal computer manufacturing companies that together created the EISA bus. Rival members generally acknowledged Compaq's leadership, with one stating in 1989 that within the Gang of Nine "when you have 10 people sit down before a table to write a letter to the president, someone has to write the letter. Compaq is sitting down at the typewriter". The members were: AST Research, Inc. Compaq Computer Corporation Seiko Epson Corporation Hewlett-Packard Company NEC Corporation Olivetti Tandy Corporation WYSE Zenith Data Systems Technical data Although the MCA bus had a slight performance advantage over EISA (bus speed of 10 MHz, compared to 8.33 MHz), EISA contained almost all of the technological benefits that MCA boasted, including bus mastering, burst mode, software-configurable resources, and 32-bit data/address buses. These brought EISA nearly to par with MCA from a performance standpoint, and EISA easily defeated MCA in industry support. EISA replaced the tedious jumper configuration common with ISA cards with software-based configuration. 
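As a rough illustration of the figures quoted above, the following sketch computes theoretical peak burst transfer rates, assuming one full-width transfer per bus clock (a simplification; actual cycle counts depend on the transfer type and are not given in the text):

```python
def peak_mb_per_s(clock_mhz: float, width_bits: int) -> float:
    # MHz (millions of transfers/s) times bytes per transfer gives MB/s.
    return clock_mhz * (width_bits // 8)

print(f"16-bit ISA  @ 8.33 MHz: {peak_mb_per_s(8.33, 16):5.1f} MB/s")
print(f"32-bit EISA @ 8.33 MHz: {peak_mb_per_s(8.33, 32):5.1f} MB/s")
print(f"32-bit MCA  @ 10.0 MHz: {peak_mb_per_s(10.0, 32):5.1f} MB/s")
```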
Every EISA system shipped with an EISA configuration utility; this was usually a slightly customized version of the standard utilities written by the EISA chipset makers. The user would boot into this utility, either from floppy disk or on a dedicated hard-drive partition. The utility software would detect all EISA cards in the system and could configure any hardware resources (interrupts, memory ports, etc.) on any EISA card (each EISA card would include a disk with information that described the available options on the card) or on the EISA system motherboard. The user could also enter information about ISA cards in the system, allowing the utility to automatically reconfigure EISA cards to avoid resource conflicts. Similarly, Windows 95, with its Plug-and-Play capability, was not able to change the configuration of EISA cards, but it could detect the cards, read their configuration, and reconfigure Plug-and-Play hardware to avoid resource conflicts. Windows 95 would also automatically attempt to install appropriate drivers for detected EISA cards. Industry acceptance EISA's success was far from guaranteed. Many manufacturers, including those in the "Gang of Nine", researched the possibility of using MCA. For example, Compaq actually produced prototype DeskPro systems using the bus. However, these were never put into production, and when it was clear that MCA had lost, Compaq allowed its MCA license to expire (the license actually cost relatively little; the primary costs associated with MCA, and at which the industry revolted, were royalties to be paid per system
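The following toy sketch illustrates the kind of resource bookkeeping such a configuration utility performed: it picks an interrupt for each software-configurable (EISA) card from that card's advertised options while avoiding resources already claimed by manually entered ISA cards. The card names, IRQ options, and data structures are hypothetical and do not reflect the real EISA configuration file format:

```python
# IRQs the user entered for fixed, jumper-configured ISA cards (hypothetical).
fixed_isa_cards = {"sound card": 5, "printer port": 7}

# Options advertised by each EISA card's configuration data (hypothetical).
eisa_card_options = {"SCSI host adapter": [5, 10, 11], "NIC": [5, 7, 9]}

def assign_irqs(fixed: dict, configurable: dict) -> dict:
    used = set(fixed.values())
    assignment = {}
    for card, options in configurable.items():
        free = next((irq for irq in options if irq not in used), None)
        if free is None:
            raise RuntimeError(f"no conflict-free IRQ for {card}")
        assignment[card] = free
        used.add(free)
    return assignment

print(assign_irqs(fixed_isa_cards, eisa_card_options))
# {'SCSI host adapter': 10, 'NIC': 9}
```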
system clock speed of 6 MHz in the earlier models and 8 MHz in the last version of the computer. The 16-bit slots were a superset of the 8-bit configuration, so most 8-bit cards were able to plug into a 16-bit slot (some cards used a "skirt" design that physically interfered with the extended portion of the slot) and continue to run in 8-bit mode. One of the key reasons for the success of the IBM PC (and the PC clones that followed it) was the active ecosystem of third-party expansion cards available for the machines. IBM was restricted from patenting the bus and widely published the bus specifications. As the PC-clone industry continued to build momentum in the mid- to late-1980s, several problems with the bus began to be apparent. First, because the "AT slot" (as it was known at the time) was not managed by any central standards group, there was nothing to prevent a manufacturer from "pushing" the standard. One of the most common issues was that as PC clones became more common, PC manufacturers began increasing the processor speed to maintain a competitive advantage. Unfortunately, because the ISA bus was originally locked to the processor clock, this meant that some 286 machines had ISA buses that ran at 10, 12, or even 16 MHz. In fact, the first systems to clock the ISA bus at 8 MHz were the turbo 8088 clones, which clocked their processors at 8 MHz. This caused many compatibility issues, where a true IBM-compatible third-party card (designed for an 8 MHz or 4.77 MHz bus) might not work in a higher-speed system (or even worse, would work unreliably). Most PC makers eventually decoupled the slot clock from the system clock, but there was still no standards body to "police" the industry. Although companies like Dell modified the AT bus design, the architecture was so well entrenched that no single clone manufacturer had the leverage to create a standardized alternative, and there was no compelling reason for them to cooperate on a new standard. Because of this, when the first 386-based system (the Compaq Deskpro 386) hit the market in 1986, it still supported 16-bit slots. Other 386 PCs followed suit, and the AT (later ISA) bus remained a part of most systems even into the late 1990s. Meanwhile, IBM began to worry that it was losing control of the industry it had created. In 1987, IBM released the PS/2 line of computers, which included the MCA bus. MCA included numerous enhancements over the 16-bit AT bus, including bus mastering, burst mode, software-configurable resources, and 32-bit capabilities.
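To put the bus widths and clock rates in this comparison into rough numbers, the following sketch computes naive theoretical peak transfer rates, assuming one transfer per bus clock; real sustained throughput was well below these figures because of wait states, arbitration, and command overhead, so treat them only as illustrative upper bounds.

```python
# Naive peak-bandwidth comparison using the clock rates and bus widths
# mentioned above. Assumes one transfer per bus clock (burst transfers);
# real-world throughput is lower due to wait states and arbitration.

def peak_mb_per_s(width_bits, clock_mhz, cycles_per_transfer=1):
    bytes_per_transfer = width_bits // 8
    transfers_per_s = clock_mhz * 1_000_000 / cycles_per_transfer
    return bytes_per_transfer * transfers_per_s / 1_000_000

buses = {
    "ISA (16-bit, 8.33 MHz)":  (16, 8.33),
    "EISA (32-bit, 8.33 MHz)": (32, 8.33),
    "MCA (32-bit, 10 MHz)":    (32, 10.0),
}

for name, (width, clock) in buses.items():
    print(f"{name}: about {peak_mb_per_s(width, clock):.1f} MB/s peak")
# ISA  -> ~16.7 MB/s, EISA -> ~33.3 MB/s, MCA -> ~40.0 MB/s (theoretical bursts)
```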
goal of creating a rich, logical fantasy world. Like many role-playing games from the nineties, Earthdawn focuses much of its detail on its setting, a province called Barsaive. It was also originally written as a prequel to Shadowrun, mirroring its setting of returning magic with one where magic has just recently dropped from its peak. However, after Shadowrun was licensed out to a different publisher the ties between the two were deliberately severed (See Setting) and remain so. History Starting in 1993, FASA released over 20 gaming supplements describing this universe; however, it closed down production of Earthdawn in January 1999. During that time several novels and short-story anthologies set in the Earthdawn universe were also released. In late 1999, FASA granted Living Room Games a licensing agreement to produce new material for the game. The Second Edition did not alter the setting, though it did update the timeline to include events that took place in Barsaive. There were a few changes to the rules in the Second Edition; some classes were slightly different or altered abilities from the original. The changes were meant to allow for more rounded characters and better balance of play. Living Room Games last published in 2005 and they no longer have a license with FASA to publish Earthdawn material. In 2003 a second license was granted to RedBrick, who developed their own edition based on the FASA products, in addition to releasing the original FASA books in PDF form. The Earthdawn Classic Player's Compendium and Earthdawn Classic Gamemaster's Compendium are essentially an alternative Second Edition, but without a version designation (since the material is compatible anyway). Each book has 524 pages and summarizes much of what FASA published—not only the game mechanics, but also the setting, narrations, and stories. For example, each Discipline has its own chapter, describing it from the point of view of different adepts. Likewise, Barsaive gets a complete treatment, and the chapters contain a lot of log entries and stories in addition to the setting descriptions; the same applies to Horrors and Dragons. Errata was incorporated into the text, correcting previous edition errors and providing rules clarifications. While RedBrick tried to remain faithful to FASA's vision and visual style, they revised almost everything and introduced new material to fill the gaps. RedBrick began publishing new Earthdawn novels in 2007. In 2009, RedBrick announced the Third Edition of the game. To gain a larger audience for this edition, RedBrick published the book through Mongoose Publishing's Flaming Cobra imprint. The first two books were released in July 2009. In 2012, RedBrick ceased publishing and announced the transfer of the Earthdawn license to FASA Games, Inc. In 2014, FASA Games announced the forthcoming publication of Earthdawn Fourth Edition and launched a successful Kickstarter to support the project. Fourth Edition is described as a reworking of the game mechanics, with redundancies eliminated, and a simpler success level system. The game world is advanced five years, past the end of the Barsaive-Thera War, in order to clear dangling threads in the metaplot and open the game world to new stories. The first Fourth Edition title—the Player's Guide—was released in early 2015. In 2014 FASA Corporation also gave permission for Impact Miniatures to return the original Heartbreaker Hobbies & Games Official Earthdawn Miniatures range to production. 
In order to fund this, Impact Miniatures launched a successful Kickstarter project. Setting In Barsaive, magic, like many things in nature, goes through cycles. As the magic level rises, it allows alien creatures called Horrors to cross from their distant, otherworldly dimension into our own. The Horrors come in an almost infinite variety—from simple eating machines that devour all they encounter, to incredibly intelligent and cunning foes that feed off the negative emotions they inspire in their prey. In the distant past of Earthdawns setting, an elf scholar discovered that the time of the Horrors was approaching, and founded the Eternal Library in order to discover a way to defeat them — or at the very least, survive them. The community that grew up around the library developed wards and protections against the Horrors, which they traded to other lands and eventually became the powerful Theran Empire, an extremely magically advanced civilization and the main antagonist of the Earthdawn setting. The peoples of the world built kaers, underground towns and cities, which they sealed with the Theran wards to wait out the time of the Horrors, which was called the Scourge. Theran wizards and politicians warned many of the outlying nations around Thera of the coming of the Horrors, offering the protection of the kaers to those who would pledge their loyalty to the Empire. Most of these nations agreed at first though some became unwilling to fulfill their end of the bargain after the end of the Scourge, wanting to have nothing to do with the bureaucratic nation run on political conflict and powered by slavery. After four hundred years of hiding, the Scourge ended, and the people emerged to a world changed by the Horrors. The player characters explore this new world, discovering lost secrets of the past, and fighting Horrors that remain. The primary setting of Earthdawn is Barsaive, a former province of the Theran Empire. Barsaive is a region of city-states, independent from the Therans since the dwarven Kingdom of Throal led a rebellion against their former overlords. The Theran presence in Barsaive has been limited to a small part of south-western Barsaive, located around the magical fortress of Sky Point and the city of Vivane. The setting of Earthdawn is the same world as Shadowrun (i.e. a fictionalized version of Earth), but takes place millennia earlier. The map of Barsaive and its neighboring regions established that most of the game takes place where Ukraine and Russia are in our world. However, the topography other than coastlines and major rivers is quite different, and the only apparent reference to the real world besides the map may be the Blood Wood, known as "Wyrm Wood" before the Scourge and similar in location and extent to the Chernobyl (Ukrainian for "wormwood") zone of alienation. Note should be made that game world links between Earthdawn and Shadowrun were deliberately broken by the publisher when the Shadowrun property was licensed out, in order to avoid the necessity for coordination between publishing companies. FASA has announced since then, that there are no plans to return Shadowrun to in-house publication, nor to restore the links between the game worlds. Two Earthdawn supplements cover territories outside Barsaive. The Theran Empire book (First Edition) covers the Theran Empire and its provinces (which roughly correspond to the territories of the Roman Empire, plus colonies in America and India). 
Cathay: The Five Kingdoms (Third Edition) covers the lands of Cathay (Far East). Races The setting of Earthdawn features several fantasy races for characters and NPCs: Dwarf: Dwarfs in Earthdawn are similar in appearance to the classic D&D or Tolkien dwarfs. They are the predominant race in Barsaive, and the dwarf language is considered the common language. Their culture, especially of the dominant Throal Kingdom, can be considered more of a Renaissance-level culture than in most other fantasy settings, and forms the main source of resistance to a return of Thera's rule in Barsaive. Elf: Elves in Earthdawn fit the common fantasy role playing convention; they are tall, lithe, pointy-eared humanoids who prefer living in nature. Elves in Earthdawn naturally live a very long time; some are thought to be immortal. Such immortal Elves feature in many cross-pollinated storylines with Shadowrun. A subrace of Earthdawn elves are called the Blood Elves. The blood elves rejected the Theran protective magic, and attempted their own warding spells. These wards failed, and a last-ditch ritual caused thorns to thrust through the skin of the blood elves. These ever-bleeding wounds caused constant pain, but the self-inflicted suffering was enough to protect the blood elves from the worst of the Horrors. Human: Humans in Earthdawn are physically similar to humans in our own real world. Human adepts are granted a special Versatility talent to make them more mechanically appealing. Humans in Earthdawn are considered to be somewhat warlike in general outlook. Obsidiman: Obsidimen are a race of large, rock-based humanoids. They stand over tall and weigh over 900 pounds. Their primary connection is to their Liferock, which is a large formation of stone that they emerge from. Obsidimen are loyal to the community around their Liferock, and eventually return to and re-merge with it. Obsidimen can live around 500 years away from their Liferock, and their ultimate lifespan is unknown, as they generally return to it and remain there. Due to their rocky nature and long lives, obsidimen are rather slow moving and deliberate in both speech and action, and can have difficulty understanding the smaller races' need for haste. However, if aroused by a threat to self, friend, or community, obsidimen are fearsome to behold. Ork: The ork race in Earthdawn is physically similar to other depictions of orcs in fantasy role-playing. They are tribal, nomadic and often barbaric humanoids, with olive, tan, beige or ebony skin. They are relatively short-lived, and as a result many attempt to leave a legacy marked by a memorable death—preferably one that leaves no corpse. Before the Scourge almost all orks were enslaved by other races. Troll: The troll race in Earthdawn is also similar in appearance to many other fantasy role playing depictions of trolls. They are very tall humanoids, with a hardened skin and horns. Socially, they form clans to which they are fiercely loyal. Troll clans often raid one another, and a significant subset of the troll race are crystal raiders, which command many of the airships of Barsaive. Other trolls, known as lowland trolls, have merged with mixed communities around Barsaive, although most retain the fierce cultural
and personal pride of their less-civilized cousins. T'skrang: The t'skrang are lizard-like amphibious humanoids with long tails and a flair for dramatics. Many of them exhibit the behaviors and characteristics which are stereotypical to a "swashbuckler". T'skrang are often sailors, and many t'skrang families run ships up and down the rivers of Barsaive.
A rare subrace of t'skrang, the k'stulaami, possess a flap of skin much like a flying squirrel's patagium, allowing them to glide. While k'stulaami can be born as a random mutation in any t'skrang line, they tend to congregate into communities filled with their own kind. Windling: The windlings are small, winged humanoids; similar to many depictions of fae creatures, they resemble small elves with insect-like wings. They have the ability to see into the astral plane, and are considerably luckier than the other races. Windlings are often somewhat mischievous, hedonistic, and eager for new experiences, and are culturally similar to the Kender of Krynn, but without the same kleptomaniacal tendencies. They have wings similar to those of a dragonfly and are one to two feet in height. Leafer: A race native to the Dark Forest of Vasgothia, leafers are sentient plant people. Ulkmen: Another race unique to Vasgothia, the ulkmen have been merged with Horrors. In addition to their talents, ulkman adepts gain a Horror power every four Circles. Despite their origins and horrific appearance, the ulkmen are a largely peaceful people. Jubruq: The only 'half-race' in Earthdawn, jubruq are half human or ork and half elemental spirit. They are native to the Sufik tribes of Marac. Jackalmen: Native to Creana, jackalmen have the body of a human and the head of a jackal. They are a warrior people and are thought to practice cannibalism. Political entities Barsaive Barsaive was once one of the Theran Empire's many provinces, but a series of post-Scourge wars between Thera and various city-states of Barsaive have seen the former province secure its independence. Barsaive's people and governments represent a varied number of individual powers. Kingdom of Throal (dwarfs, monarchy) Iopos (city state, autocracy) Blood Wood (elves, monarchy) Kratas (city of thieves, kleptocracy) Urupa (city-state, important port) Jerris (city-state) Travar (city-state) Troll clans of mountains (Raiders) T'skrang clans (Aropagoi) of the Serpent River (traders) Vivane (city-state, under occupation by Thera) Haven and Parlainth (ruins) Great Dragons Various secret societies Provinces of the Theran Empire Creana: An ancient land far to the South of Barsaive, Creana was once a mighty empire when Thera was still in its infancy. Ruled over by a living Passion known as the Pharon, Creana is plagued by magical multi-coloured sandstorms and Horror-corrupted Mummies. Creana also includes several conquered cities from other parts of the Selestrean Basin including the Ulustan city of Okonopolis and Issyr as well as cities from deeper within Fekara such as Nuboz. Indrisa: Thera's newest province, Indrisa was discovered just before the Scourge. A land rich in resources and culture, the Indrisans have a complicated relationship with their Passions, who often send powerful creatures called Dhuna to punish those that transgress against them. Indrisa survived the Scourge using an ancient magical method that harnessed positive energy against the Horrors. Marac: A land of polished brass towers where science is as praised as magic, Marac is currently in the grip of a bloody revolt known as the Jinari Rebellion. The Sufik tribes of the desert have discovered how to control Horrors and have weaponized them against the invading Therans. Rugaria: The lands immediately north of Thera, Rugaria is one of the empire's earliest provinces. The people of Rugaria are described as grim and dour and submitted to Theran rule without much resistance.
Talea: Talea is a province of political intrigue and bizarre religious practices. Dozens of Dukes and Kings make war upon each other whilst waiting for the birth of Prima – the Passion yet to be. Vasgothia: The empire's most western province, Vasgothia is where the Therans produce the crops that feed their vast empire; it is also home to savage tribes that hate the empire deeply. Vasgothia survived the Scourge because its Passions fought directly against the Horrors, dying in the process. The Scourge has affected Vasgothia in drastic ways, producing dozens of magical oddities in the process. Vivane: The lands around the city of Vivane are also known as Vivane province. Whilst not a true province in an administrative or geographical sense, this portion of Southwest Barsaive is an important bulwark between Rugaria and those rebellious nations found in Barsaive proper. Other Lands The Western Kingdoms / Gwydenro: One of the Elven Nations, the Gwydenro once spread throughout the entire Roheline Wood, but the area was destroyed during the Scourge and it is now known as The Wastes. The Gwydenro consists of dozens of kingdoms known as gerryth that are bound together by the oaths of the lew teryn. The largest and most powerful kingdom is Sereatha – The City of Spires. Shosara: Another Elven Nation, Shosara was formally separated from the Elven Court in pre-Scourge times for adopting Human culture. Largely isolated from the rest of the world, Shosara is a culture of seafarers and traders with a 'relaxed' attitude to the Theran Empire. Arancia: An independent nation next to the Theran province of Talea, very little is known or written about Arancia. (This land will be explored fully in the upcoming 4th edition regional source-book; Arancia) The Slithering Wastes: The name given to a large region west of Arancia and north of Marac. Very little is known about the Slithering Wastes, but presumably it suffered greatly during the Scourge, leading to its current name. Aznan: A land located to the south of Creana, Aznan is renowned for its huge Cloud Mountain and various medicinal plants that possess magical properties. Aruacania: Aruacania lies to the far west of the Theran Empire and is a land of Feathered Dragons and unknown magical mysteries. Fekara: The name of the continent where Creana, Marac, Nuboz and Aznan are located. Cathay: A large and powerful group of kingdoms to the far East of Indrisa, Cathay was fully explored in Earthdawn: Third Edition with the Cathay: Player's Guide & Cathay: Gamemaster's Guide. Magic in Earthdawn Earthdawn's magic system is highly varied, but the essential idea is that all player characters (called Adepts) have access to magic, used to perform abilities attained through their Disciplines. Each Discipline is given a unique set of Talents which are used to access the world's magic. Legend points (the Earthdawn equivalent of experience points) can be spent to raise the character's level in a Talent, increasing his step level for the ability and making the user more proficient at using that specific type of magic. Caster Disciplines use the same Talent system as others, but also have access to spells. How a player character obtains spells varies depending on his Game Master, but how they are used is universal. Casters all have special Talents called spell matrixes which they can place spells into. A spell attuned to (placed into) a matrix is easily accessible and can be cast at any time. Spells can be switched at the player's will while out of combat.
Once engaged in combat, however, they must use an action to do so (called re-attuning on the fly), which requires a set difficulty they must achieve, or risk losing their turn. It is generally recommended that Casters only use attuned spells, but this is not required. Casting a spell that is not in a matrix is referred to as raw casting. Raw casting is perhaps the most dangerous aspect of the Earthdawn magic system. If the spell is successfully cast, it has its normal effects along with added consequences. Raw casting has a very good chance of drawing the attention of a Horror, which can quickly turn into death for low level characters (and for high level characters as well in some cases). One of the most innovative ideas in Earthdawn is how magical items
additional services, e.g. retransmitting documents, providing third party audit information, acting as a gateway for different transmission methods, and handling telecommunications support. Because of these and other services VANs provide, businesses frequently use a VAN even when both trading partners are using Internet-based protocols. Healthcare clearinghouses perform many of the same functions as a VAN, but have additional legal restrictions. VANs may be operated by various entities: telecommunication companies; industry group consortia; a large company interacting with its suppliers/vendors; managed services providers. Costs, trade-offs and implementation It is important to note that there are key trade-offs between VANs and Direct EDI, and in many instances, organizations exchanging EDI documents can in fact use both in concert, for different aspects of their EDI implementations. For example, in the U.S., the majority of EDI document exchanges use AS2, so a direct EDI setup for AS2 may make sense for a U.S.-based organization. But adding OFTP2 capabilities to communicate with a European partner may be difficult, so a VAN might make sense to handle those specific transactions, while direct EDI is used for the AS2 transactions. In many ways, a VAN acts as a service provider, simplifying much of the setup for organizations looking to initiate EDI. Due to the fact that many organizations first starting out with EDI often do so to meet a customer or partner requirement and therefore lack in-house EDI expertise, a VAN can be a valuable asset. However, VANs may come with high costs. VANs typically charge a per-document or even per-line-item transaction fee to process EDI transactions as a service on behalf of their customers. This is the predominant reason why many organizations also implement an EDI software solution or eventually migrate to one for some or all of their EDI. On the other hand, implementing EDI software can be a challenging process, depending on the complexity of the use case, technologies involved and availability of EDI expertise. In addition, there are ongoing maintenance requirements and updates to consider. For example, EDI mapping is one of the most challenging EDI management tasks. Companies must develop and maintain EDI maps for each of their trading partners (and sometimes multiple EDI maps for each trading partner based on their order fulfilment requirements). Interpreting data EDI translation software provides the interface between internal systems and the EDI format sent/received. For an "inbound" document, the EDI solution will receive the file (either via a value-added network or directly using protocols such as FTP or AS2), take the received EDI file (commonly referred to as an "envelope"), and validate that the trading partner who is sending the file is a valid trading partner, that the structure of the file meets the EDI standards, and that the individual fields of information conform to the agreed-upon standards. Typically, the translator will either create a file of either fixed length, variable length or XML tagged format or "print" the received EDI document (for non-integrated EDI environments). The next step is to convert/transform the file that the translator creates into a format that can be imported into a company's back-end business systems, applications or ERP. 
This can be accomplished by using a custom program, an integrated proprietary "mapper" or an integrated standards-based graphical "mapper," using a standard data transformation language such as XSLT. The final step is to import the transformed file (or database) into the company's back-end system. For an "outbound" document, the process for integrated EDI is to export a file (or read a database) from a company's information systems and transform the file to the appropriate format for the translator. The translation software will then "validate" the EDI file sent to ensure that it meets the standard agreed upon by the trading partners, convert the file into "EDI" format (adding the appropriate identifiers and control structures) and send the file to the trading partner (using the appropriate communications protocol). Another critical component of any EDI translation software is a complete "audit" of all the steps to move business documents between trading partners. The audit ensures that any transaction (which in reality is a business document) can be tracked to ensure that they are not lost. In the case of a retailer sending a Purchase Order to a supplier, if the Purchase Order is "lost" anywhere in the business process, the effect is devastating to both businesses. To the supplier, they do not fulfil the order as they have not received it thereby losing business and damaging the business relationship with their retail client. For the retailer, they have a stock outage and the effect is lost sales, reduced customer service and ultimately lower profits. In EDI terminology, "inbound" and "outbound" refer to the direction of transmission of an EDI document in relation to a particular system, not the direction of merchandise, money or other things represented by the document. For example, an EDI document that tells a warehouse to perform an outbound shipment is an inbound document in relation to the warehouse computer system. It is an outbound document in relation to the manufacturer or dealer that transmitted the document. Advantages over paper systems EDI and other similar technologies save the company money by providing an alternative to or replacing, information flows that require a great deal of human interaction and paper documents. Even when paper documents are maintained in parallel with EDI exchange, e.g. printed shipping manifests, electronic exchange and the use of data from that exchange reduces the handling costs of sorting, distributing, organizing, and searching paper documents. EDI and similar technologies allow a company to take advantage of the benefits of storing and manipulating data electronically without the cost of manual entry. Another advantage of EDI is the opportunity to reduce or eliminate manual data entry errors, such as shipping and billing errors, because EDI eliminates the need to re-key documents on the destination side. One very important advantage of EDI over paper documents is the speed at which the trading partner receives and incorporates the information into their system greatly reducing cycle times. For this reason, EDI can be an important component of just-in-time production systems. According to the 2008 Aberdeen report "A Comparison of Supplier Enablement around the World", only 34% of purchase orders are transmitted electronically in North America. In EMEA, 36% of orders are transmitted electronically and in APAC, 41% of orders are transmitted electronically. 
They also report that the average paper requisition to order costs a company $37.45 in North America, $42.90 in EMEA and $23.90 in APAC. With an EDI requisition to order, costs are reduced to $23.83 in North America, $34.05 in EMEA and $14.78 in APAC. Barriers to implementation There are a few barriers to adopting electronic data interchange. One of the most significant barriers is the accompanying business process change. Existing business processes built around paper handling may not be suited for EDI and would require changes to accommodate automated processing of business documents. For example, a business may receive the bulk of their goods by 1 or 2-day shipping and all of their invoices by mail. The existing process may, therefore, assume that goods are typically received before the invoice. With EDI, the invoice will typically be sent when the goods ship and will, therefore, require a process that handles large numbers of invoices whose corresponding goods have not yet been received. Another significant barrier is the cost in time and money in the initial setup. The preliminary expenses and time that arise from the implementation, customization and training can be costly. It is important to select the correct level of integration to match the business requirement. For a business with relatively few transactions with EDI-based partners, it may make sense for businesses to implement inexpensive "rip and read" solutions, where the EDI format is printed out in human-readable form, and people — rather than computers — respond to the transaction. Another alternative is outsourced EDI solutions provided by EDI "Service Bureaus". For other businesses, the implementation of an integrated EDI solution may be necessary as increases in trading volumes brought on by EDI force them to re-implement their order processing business processes. The key hindrance to a successful implementation of EDI is the perception many businesses have of the nature of EDI. Many view EDI from the technical perspective that EDI is a data format; it would be more accurate to take the business view that EDI is a system for exchanging business documents with external entities, and integrating the data from those documents into the company's internal systems. Successful implementations of EDI take into account the effect externally generated information will have on their internal systems and validate the business information received. For example, allowing a supplier to update a retailer's accounts payable system without appropriate checks and balances would put the company at significant risk. Businesses new to the implementation of EDI must understand the underlying business process and apply proper judgment. Acknowledgement Below are common EDI acknowledgement Communication Status – Indicate the transmission completed MDN (Message Disposition Notification) – In AS2 only, indicate the message is readable Functional Acknowledgement – typically "997" in ANSI, or "CONTRL" in EDIFACT, which indicate the message content is verified against its template, and tell
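The receive, validate, and acknowledge steps described above can be tied together in a deliberately over-simplified sketch. The segment layout, element positions, and partner identifiers below are hypothetical and much simpler than a real X12 or EDIFACT envelope; the point is only the shape of the validate-then-acknowledge decision.

```python
# Heavily simplified illustration of inbound EDI processing: split an
# interchange into segments, check the sender against known trading
# partners, and summarize what a functional acknowledgement would report.
# Real X12/EDIFACT envelopes are far more involved than this.

KNOWN_PARTNERS = {"ACMECORP", "RETAILER1"}          # hypothetical partner IDs

RAW_INTERCHANGE = (
    "ISA*ACMECORP*RETAILER1~"
    "ST*850*0001~"
    "BEG*00*SA*PO12345~"
    "SE*3*0001~"
    "IEA*1~"
)

def parse_segments(raw, seg_term="~", elem_sep="*"):
    """Split an interchange into segments, each a list of elements."""
    return [seg.split(elem_sep) for seg in raw.strip(seg_term).split(seg_term)]

def process_inbound(raw):
    segments = parse_segments(raw)
    envelope = {seg[0]: seg for seg in segments}

    sender = envelope["ISA"][1]
    if sender not in KNOWN_PARTNERS:
        return {"ack": "rejected", "reason": f"unknown trading partner {sender}"}

    if "SE" not in envelope or "IEA" not in envelope:
        return {"ack": "rejected", "reason": "missing trailer segments"}

    doc_type = envelope["ST"][1]                    # e.g. "850" purchase order
    return {"ack": "accepted", "sender": sender, "document": doc_type}

print(process_inbound(RAW_INTERCHANGE))
# {'ack': 'accepted', 'sender': 'ACMECORP', 'document': '850'}
```

A production translator would validate every segment against the agreed implementation guideline, record each step in the audit trail discussed earlier, and return a formal functional acknowledgement such as a 997 or CONTRL rather than a simple status dictionary.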
untethered. Untethered spacewalks were only performed on three missions in 1984 using the Manned Maneuvering Unit (MMU), and on a flight test in 1994 of the Simplified Aid For EVA Rescue (SAFER), a safety device worn on tethered U.S. EVAs. Development history NASA planners invented the term extravehicular activity (abbreviated with the acronym EVA) in the early 1960s for the Apollo program to land men on the Moon, because the astronauts would leave the spacecraft to collect lunar material samples and deploy scientific experiments. To support this, and other Apollo objectives, the Gemini program was spun off to develop the capability for astronauts to work outside a two-man Earth orbiting spacecraft. However, the Soviet Union was fiercely competitive in holding the early lead it had gained in crewed spaceflight, so the Soviet Communist Party, led by Nikita Khrushchev, ordered the conversion of its single-pilot Vostok capsule into a two- or three-person craft named Voskhod, in order to compete with Gemini and Apollo. The Soviets were able to launch two Voskhod capsules before U.S. was able to launch its first crewed Gemini. The Voskhod's avionics required cooling by cabin air to prevent overheating, therefore an airlock was required for the spacewalking cosmonaut to exit and re-enter the cabin while it remained pressurized. By contrast, the Gemini avionics did not require air cooling, allowing the spacewalking astronaut to exit and re-enter the depressurized cabin through an open hatch. Because of this, the American and Soviet space programs developed different definitions for the duration of an EVA. The Soviet (now Russian) definition begins when the outer airlock hatch is open and the cosmonaut is in vacuum. An American EVA began when the astronaut had at least his head outside the spacecraft. The USA has changed its EVA definition since. First spacewalk The first EVA was performed on March 18, 1965, by Soviet cosmonaut Alexei Leonov, who spent 12 minutes and 9 seconds outside the Voskhod 2 spacecraft. Carrying a white metal backpack containing 45 minutes' worth of breathing and pressurization oxygen, Leonov had no means to control his motion other than pulling on his tether. After the flight, he claimed this was easy, but his space suit ballooned from its internal pressure against the vacuum of space, stiffening so much that he could not activate the shutter on his chest-mounted camera. At the end of his space walk, the suit stiffening caused a more serious problem: Leonov had to re-enter the capsule through the inflatable cloth airlock, in diameter and long. He improperly entered the airlock head-first and got stuck sideways. He could not get back in without reducing the pressure in his suit, risking "the bends". This added another 12 minutes to his time in vacuum, and he was overheated by from the exertion. It would be almost four years before the Soviets tried another EVA. They misrepresented to the press how difficult Leonov found it to work in weightlessness and concealed the problems encountered until after the end of the Cold War. Project Gemini The first American spacewalk was performed on June 3, 1965, by Ed White from the second crewed Gemini flight, Gemini IV, for 21 minutes. White was tethered to the spacecraft, and his oxygen was supplied through a umbilical, which also carried communications and biomedical instrumentation. He was the first to control his motion in space with a Hand-Held Maneuvering Unit, which worked well but only carried enough propellant for 20 seconds. 
White found his tether useful for limiting his distance from the spacecraft but difficult to use for moving around, contrary to Leonov's claim. However, a defect in the capsule's hatch latching mechanism caused difficulties opening and closing the hatch, which delayed the start of the EVA and put White and his crewmate at risk of not getting back to Earth alive. No EVAs were planned on the next three Gemini flights. The next EVA was planned to be made by David Scott on Gemini VIII, but that mission had to be aborted due to a critical spacecraft malfunction before the EVA could be conducted. Astronauts on the next three Gemini flights (Eugene Cernan, Michael Collins, and Richard Gordon), performed several EVAs, but none was able to successfully work for long periods outside the spacecraft without tiring and overheating. Cernan attempted but failed to test an Air Force Astronaut Maneuvering Unit which included a self-contained oxygen system. On November 13, 1966, Edwin "Buzz" Aldrin became the first to successfully work in space without tiring during Gemini XII, the last Gemini mission. Aldrin worked outside the spacecraft for 2 hours and 6 minutes, in addition to two stand-up EVAs in the spacecraft hatch for an additional 3 hours and 24 minutes. Aldrin's interest in scuba diving inspired the use of underwater EVA training to simulate weightlessness, which has been used ever since to allow astronauts to practice techniques of avoiding wasted muscle energy. First EVA crew transfer On January 16,
1969, Soviet cosmonauts Aleksei Yeliseyev and Yevgeny Khrunov transferred from Soyuz 5 to Soyuz 4, which were docked together. This was the second Soviet EVA, and it would be almost another nine years before the Soviets performed their third. Apollo lunar EVA American astronauts Neil Armstrong and Buzz Aldrin performed the first EVA on the lunar surface on July 21, 1969 (UTC), after landing their Apollo 11 Lunar Module spacecraft. This first Moon walk, using self-contained portable life support systems, lasted 2 hours and 36 minutes. A total of fifteen Moon walks were performed among six Apollo crews, including Charles "Pete" Conrad, Alan Bean, Alan Shepard, Edgar Mitchell, David Scott, James Irwin, John Young, Charles Duke, Eugene Cernan, and Harrison
author's work on the later epidemiological studies. , average Cr-6 levels in Hinkley were recorded as 1.19 ppb with a peak of 3.09 ppb. For comparison, the PG&E Topock Compressor Station on the California-Arizona border averaged 7.8 ppb with peaks of 31.8 ppb based on a PG&E Background Study. Other litigation Working with Edward L. Masry, a lawyer based in Thousand Oaks, California, Brockovich went on to participate in other anti-pollution lawsuits. One suit accused the Whitman Corporation of chromium contamination in Willits, California. Another, which listed 1,200 plaintiffs, alleged contamination near PG&E's Kettleman Hills compressor station in Kings County, California, along the same pipeline as the Hinkley site. The Kettleman suit was settled for $335 million in 2006. In 2003, after experiencing problems with mold contamination in her own home in the Conejo Valley, Brockovich received settlements of $430,000 from two parties, and an undisclosed amount from a third party, to settle her lawsuit alleging toxic mold in her Agoura Hills, California, home. Brockovich then became a prominent activist and educator in the area as well. Brockovich and Masry filed suit against the Beverly Hills Unified School District in 2003, in which the district was accused of harming the health and safety of its students by allowing a contractor to operate a cluster of oil wells on campus. Brockovich and Masry alleged that 300 cancer cases were linked to the oil wells. Subsequent testing and epidemiological investigation failed to corroborate a substantial link, and Los Angeles County Superior Court Judge Wendell Mortimer granted summary judgment against the plaintiffs. In May 2007, the School District announced that it was to be paid $450,000 as reimbursement for legal expenses. Brockovich assisted in the filing of a lawsuit against Prime Tanning Corp. of St. Joseph, Missouri in April 2009. The lawsuit claims that waste sludge from the production of leather, containing high levels of hexavalent chromium, was distributed to farmers in northwest Missouri to use as fertilizer on their fields. It is believed to be a potential cause of an abnormally high number of brain tumors (70 since 1996) around the town of Cameron, Missouri. The site was investigated by the EPA and the agency found "no detections of total chromium" and further stated that the 70 brain tumors were not abnormally high for the population size. In June 2009, Brockovich began investigating a case of contaminated water in Midland, Texas. "Significant amounts" of hexavalent chromium were found in the water of more than 40 homes in the area, some of which have now been fitted with state-monitored filters on their water supply. Brockovich said: "The only difference between here and Hinkley is that I saw higher levels here than I saw in Hinkley." In 2012, Brockovich became involved in the mysterious case of 14 students from LeRoy, New York, who began reporting perplexing medical symptoms, including tics and speech difficulties. Brockovich believed environmental pollution from the 1970 Lehigh Valley Railroad derailment was the cause, and conducted testing in the area. Brockovich was supposed to return to LeRoy to present her findings, but never did; in the meantime, the students' doctors determined the cause was mass psychogenic illness, and that the media exposure was exacerbating the symptoms. No environmental causes were found after repeat testing, and the students improved once the media attention died down. 
In early 2016, Brockovich became involved in potential litigation against Southern California Gas for a large methane leak
from its underground storage facility near the community of Porter Ranch north of Los Angeles.
Awards Honorary Doctor of Laws and commencement speaker at Lewis & Clark Law School, Portland, Oregon, in May 2005 Honorary Doctor of Humane Letters and commencement speaker at Loyola Marymount University, Los Angeles, California, on May 5, 2007 Honorary Master of Arts, Business Communication, from Jones International University, Centennial, Colorado Movies and television Brockovich's work in bringing litigation against Pacific Gas & Electric was the focus of the 2000 feature film, Erin Brockovich, starring Julia Roberts in the title role. The film was nominated for five Academy Awards: Best Actress in a Leading Role, Best Actor in a Supporting Role, Best Director, Best Picture, and Best Writing (Screenplay Written Directly for the Screen). Roberts won the Academy Award for Best Actress for her portrayal of Erin Brockovich. Erin Brockovich herself had a cameo role as a waitress named Julia R. Brockovich originally recorded a cameo role in the 2007 animated film The Simpsons Movie, based on the long-running animated sitcom The Simpsons. However, Brockovich's role was ultimately cut from the film. Brockovich had a more extensive role in the 2012 documentary Last Call at the Oasis, which focused not only on water pollution but also on the overall state of water scarcity as it relates to water policy in the United States. On April 8, 2021, Rebel, a television series which creator Krista Vernoff loosely based on Brockovich's life, premiered on ABC. Books and articles Brockovich's first book, Take It from Me: Life's a Struggle But You Can Win (), was published in 2001. A second book, Superman's Not Coming, was released on August 25, 2020. In 2021, Brockovich wrote about hormone-disrupting chemicals (such as PFAS) decimating human fertility at an alarming rate. On February 8, 2022, Brockovich wrote an article about the case of Steven Donziger, a lawyer who won an $18 billion judgment against Chevron before being jailed for contempt of court after refusing to turn his
a quantity of electricity or charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer. The amount of charge in 1 electron (elementary charge) is defined as a fundamental constant in the SI system of units, (effective from 20 May 2019). The value for elementary charge, when expressed in the SI unit for electric charge (coulomb), is exactly . After finding the quantized character of charge, in 1891 George Stoney proposed the unit 'electron' for this fundamental unit of electrical charge. This was before the discovery of the particle by J. J. Thomson in 1897. The unit is today referred to as , , or simply as . A measure of charge should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a real quantity. In some contexts it is meaningful to speak of fractions of a charge; for example in the charging of a capacitor, or in the fractional quantum Hall effect. The unit faraday is sometimes used in electrochemistry. One faraday of charge is the magnitude of the charge of one mole of electrons, i.e. 96485.33289(59) C. In systems of units other than SI such as cgs, electric charge is expressed as combination of only three fundamental quantities (length, mass, and time), and not four, as in SI, where electric charge is a combination of length, mass, time, and electric current. History From ancient times, people were familiar with four types of phenomena that today would all be explained using the concept of electric charge: (a) lightning, (b) the torpedo fish (or electric ray), (c) St Elmo's Fire, and (d) that amber rubbed with fur would attract small, light objects. The first account of the is often attributed to the ancient Greek mathematician Thales of Miletus, who lived from c. 624 – c. 546 BC, but there are doubts about whether Thales left any writings; his account about amber is known from an account from early 200s. This account can be taken as evidence that the phenomenon was known since at least c. 600 BC, but Thales explained this phenomenon as evidence for inanimate objects having a soul. In other words, there was no indication of any conception of electric charge. More generally, the ancient Greeks did not understand the connections among these four kinds of phenomena. The Greeks observed that the charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump, but there is also a claim that no mention of electric sparks appeared until late 17th century. This property derives from the triboelectric effect. In late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s, Girolamo Fracastoro, discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano to develop explanations for this phenomenon. In contrast to astronomy, mechanics, and optics, which had been studied quantitatively since antiquity, the start of ongoing qualitative and quantitative research into electrical phenomena can be marked with the publication of De Magnete by the English scientist William Gilbert in 1600. In this book, there was a small section where Gilbert returned to the amber effect (as he called it) in addressing many of the earlier theories, and coined the New Latin word electrica (from (ēlektron), the Greek word for amber). 
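The relationship quoted above between the elementary charge and the faraday can be checked with a one-line calculation: one faraday is the Avogadro constant multiplied by the elementary charge. The short Python sketch below does exactly that; the numerical constants are the exact values fixed by the 2019 SI redefinition, which are an assumption of this sketch rather than something taken from the passage (the passage itself quotes the earlier CODATA figure 96485.33289(59) C).

# Minimal sketch (not from the source text): one faraday equals the charge
# of one mole of electrons, i.e. the Avogadro constant times the elementary charge.
ELEMENTARY_CHARGE = 1.602176634e-19   # C per electron, exact since 20 May 2019
AVOGADRO_CONSTANT = 6.02214076e23     # electrons per mole, exact since 20 May 2019

faraday = ELEMENTARY_CHARGE * AVOGADRO_CONSTANT
print(f"1 faraday is about {faraday:.5f} C")   # prints roughly 96485.33212 C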
The Latin word was translated into English as . Gilbert is also credited with the term electrical, while the term electricity came later, first attributed to Sir Thomas Browne in his Pseudodoxia Epidemica from 1646. (For more linguistic details see Etymology of electricity.) Gilbert hypothesized that this amber effect could be explained by an effluvium (a small stream of particles that flows from the electric object, without diminishing its bulk or weight) that acts on other objects. This idea of a material electrical effluvium was influential in the 17th and 18th centuries. It was a precursor to ideas developed in the 18th century about "electric fluid" (Dufay, Nollet, Franklin) and "electric charge". Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Other European pioneers were Robert Boyle, who in 1675 published the first book in English that was devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more "electrics", and noted mutual attraction between two bodies. In 1729 Stephen Gray was experimenting with static electricity, which he generated using a glass tube. He noticed that a cork, used to protect the tube from dust and moisture, also became electrified (charged). Further experiments (e.g., extending the cork by putting thin sticks into it) showed—for the first time—that electrical effluvia (as Gray called it) could be transmitted (conducted) over a distance. Gray managed to transmit charge with twine (765 feet) and wire (865 feet). Through these experiments, Gray discovered the importance of different materials, which facilitated or hindered the conduction of electrical effluvia. John Theophilus Desaguliers, who repeated many of Gray's experiments, is credited with coining the terms conductors and insulators to refer to the effects of different materials in these experiments. Gray also discovered electrical induction (i.e., where charge could be transmitted from one object to another without any direct physical contact). For example, he showed that by bringing a charged glass tube close to, but not touching, a lump of lead that was sustained by a thread, it was possible to make the lead become electrified (e.g., to attract and repel brass filings). He attempted to explain this phenomenon with the idea of electrical effluvia. Gray's discoveries introduced an important shift in the historical development of knowledge about electric charge. The fact that electrical effluvia could be transferred from one object to another, opened the theoretical possibility that this property was not inseparably connected to the bodies that were electrified by rubbing. In 1733 Charles François de Cisternay du Fay, inspired by Gray's work, made a series of experiments (reported in Mémoires de l'Académie Royale des Sciences), showing that more or less all substances could be 'electrified' by rubbing, except for metals and fluids and proposed that electricity comes in two varieties that cancel each other, which he expressed in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and, when amber was rubbed with fur, the amber was charged with resinous electricity. 
In contemporary understanding, positive charge is now defined as the charge of a glass rod after being rubbed with a silk cloth, but it is arbitrary which type of charge is called positive and which is called negative. Another important two-fluid theory from this time was proposed by Jean-Antoine Nollet (1745). Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium. Benjamin Franklin started electrical experiments in late 1746, and by 1750 had developed a one-fluid theory of electricity, based on an experiment that showed that a rubbed glass received the same, but opposite, charge strength as the cloth used to rub the glass. Franklin imagined electricity as being a type of invisible fluid present in all matter; for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained an excess of the fluid it was positively charged and when it had a deficit it was negatively charged. He identified the term positive with vitreous electricity and the term negative with resinous electricity after performing an experiment with a glass tube he had received from his overseas colleague Peter Collinson. The experiment had participant A charge the glass tube and participant B receive a shock to the knuckle from the charged tube. Franklin identified participant B to be positively charged after having been shocked by the tube. There is some ambiguity about whether William Watson independently arrived at the same one-fluid explanation around the same time (1747). Watson, after seeing Franklin's letter to Collinson, claimed that he had presented the same explanation as Franklin in spring 1747. Franklin had studied some of Watson's works prior to making his own experiments and analysis, which was probably significant for Franklin's own theorizing. One physicist suggests that Watson first proposed a one-fluid theory, which Franklin then elaborated further and more influentially. A historian of science argues that Watson missed a subtle difference between his ideas and Franklin's, so that Watson misinterpreted his ideas as being similar to Franklin's. In any case, there was no animosity between Watson and Franklin, and the Franklin model of electrical action, formulated in early 1747, eventually became widely accepted at that time. After Franklin's work, effluvia-based explanations were rarely put forward. It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge. Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path. In 1833, Michael Faraday sought to remove any doubt that electricity is identical, regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity). In 1838, Faraday raised a question about whether electricity was a fluid or fluids or a property of matter, like gravity.
He investigated whether matter could be charged with one kind of charge independently of the other. He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body. In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state. In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stops considering electric charge as a special substance that accumulates in objects, and starts to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered magnitude of electric charge to be a continuous quantity, even at
bakery and carpentry shop buildings, as well as two sheds and a frame waiting room. There are no exterior entrances, and the only access is via the kitchen and laundry. The first floor generally contained oven rooms, baking areas and storage while the second floor contained the carpentry shop. Baggage and dormitory The baggage and dormitory structure is a three-story structure located north of the main building. It is made of a steel frame and terracotta blocks, with a limestone base and a facade of brick in Flemish bond. Completed as a two-story structure , the baggage and dormitory building replaced a 700-bed wooden barracks nearby that operated between 1903 and 1911. The baggage and dormitory initially had baggage collection on its first floor, dormitories and detention rooms on its second floor, and a tiled garden on its roof. The building received a third story, and a two-story annex to the north side, in 1913–1914. Initially, the third floor included additional dormitory space while the annex provided detainees with outdoor porch space. A detainee dining room on the first floor was expanded in 1951. The building is mostly rectangular except for its northern annex and contains an interior courtyard, skylighted at the second floor. On its facade the first story has rectangular windows in arched window openings while the second and third stories have rectangular windows and window openings. There are cornices below the second and third stories. The annex contains wide window openings with narrow brick piers outside them. The roof's northwest corner contains a one-story extension. Multiple wings connect the baggage and laundry to its adjacent buildings. Powerhouse The powerhouse of Ellis Island is a two-story structure located north of the kitchen and laundry building and west of the baggage and dormitory building. It is roughly rectangular and oriented north–south. Like the kitchen and laundry, it was completed in 1901. It is made of a steel frame with a granite base, a facade of brick in Flemish bond, and decorative bluestone and limestone elements. The hip roof contains dormers and is covered with terracotta tiling. A brick smokestack rises from ground level. Formerly, the powerhouse provided almost all power for Ellis Island. A coal trestle at the northwest end was used to transport coal for power generation from 1901 to 1932, when the powerhouse started using fuel oil. The powerhouse also generated steam for the island. After the immigration station closed, the powerhouse deteriorated and was left unrepaired until the 1980s renovation. The powerhouse is no longer operational; instead, the island receives power from 13,200-volt cables that lead from a Public Service Electric & Gas substation in Liberty State Park. The powerhouse contains sewage pumps that can dispose of up to to the Jersey City Sewage Authority sewage system. A central heating plant was installed during the 1980s renovation. South side The southern side of Ellis Island, located across the ferry basin from the northern side, is composed of island 2 (created in 1899) and island 3 (created in 1906). The entire southern side of the island is in New Jersey, and the majority of the site is occupied by the hospital buildings. A central corridor runs southward from the ferry building on the west side of the island. Two additional corridors split eastward down the centers of islands 2 and 3. Island 2 Island 2 comprises the northern part of Ellis Island's southern portion. 
The structures share the same design: a brick facade in Flemish bond, quoins, and limestone ornamentation. All structures were internally connected via covered passageways.The laundry-hospital outbuilding is south of the ferry terminal, and was constructed in 1900–1901 along with the now-demolished surgeon's house. The structure is one and a half stories tall with a hip roof and skylights facing to the north and south. Repaired repeatedly throughout its history, the laundry-outbuilding was last restored in 2002. It had linen, laundry, and disinfecting rooms; a boiler room; a morgue with autopsy room; and quarters for the laundry staff on the second floor. To the east is the psychopathic ward, a two-story building erected 1906–1907. The building is the only structure in the hospital complex to have a flat roof, and formerly also had a porch to its south. It housed 25 to 30 beds and was intended for the temporary treatment of immigrants suspected of being insane or having mental disorders, pending their deportation, hospitalization, or commitment to sanatoria. Male and female patients were segregated, and there were also a dayroom, veranda, nurse's office, and small pantry on each floor. In 1952 the psychopathic ward was converted into a Coast Guard brig. The main building is directly east of the psychopathic ward. It is composed of three similarly designed structures: from west to east, they are Hospital Building No. 1 (built 1900–1901), the Administration Building (1905–1907), and Hospital Building No. 2 (1908–1909). The 3.5-story building no. 1 is shaped like an inverted "C" with two 2.5-story rectangular wings facing southward; the wings contain two-story-tall porches. The administration building is smaller but also 3.5 stories. The 3.5-story building no. 2 is similar to building no. 1, but also has a three-story porch at the south elevation of the central pavilion. All three buildings have stone-stoop entrances on their north facades and courtyards on their south. Recreation hall The recreation hall and one of the island's two recreation shelters are located between islands 2 and 3 on the western side of Ellis Island, at the head of the former ferry basin between the two landmasses. Built in 1937 in the Colonial Revival style, the structures replaced an earlier recreation building at the northeast corner of island 2. The recreation hall is a two-story building with a limestone base, a facade of brick in Flemish bond, a gable roof, and terracotta ornamentation. The first floor contained recreational facilities, while the second floor was used mostly for offices. It contains wings on the north, south, and west. The recreation shelter, a one-story brick pavilion, is located directly to the east. A second shelter of similar design was located adjacent to the power plant on the island's north side. Island 3 As part of the Ellis Island Immigrant Hospital, the contagious disease hospital comprised 17 pavilions, connected with a central connecting corridor. Each pavilion contained separate hospital functions that could be sealed off from each other. Most of the structures were completed in 1911. The pavilions included eight measles wards, three isolation wards, a power house/sterilizer/autopsy theater, mortuary, laboratory, administration building, kitchen, and staff house. All structures were designed by James Knox Taylor in the Italian Renaissance style and are distinguished by red-tiled hip roofs, roughcast walls of stucco, and ornamentation of brick and limestone. 
The office building and laboratory is a 2.5-story structure located at the west end of island 3. It housed doctors' offices and a dispensary on the first floor, along with a laboratory and pharmacists' quarters on the second floor. In 1924, the first floor offices were converted into male nurses' quarters. A one-story morgue is located east of the office building, and was converted to the "Animal House" circa 1919. An "L"-shaped powerhouse and laundry building, built in 1908, is also located on the west side of island 3. It has a square north wing with boiler, coal, and pump rooms, as well as a rectangular south wing with laundry and disinfection rooms, staff kitchen, and staff pantry. The powerhouse and laundry also had a distinctive yellow-brick smokestack. Part of the building was converted into a morgue and autopsy room in the 1930s. To the east are the eight measles pavilions (also known as wards A-H), built in phases from 1906 to 1909 and located near the center of island 3. There are four pavilions each to the west and east of island 3's administration building. All of the pavilions are identical, two-story rectangular structures. Each pavilion floor had a spacious open ward with large windows on three sides and independent ventilation ducts. A hall leading to the connecting corridor was flanked by bathrooms, nurses' duty room, offices, and a serving kitchen. The administration building is a 3.5-story structure located on the north side of island 3's connecting corridor, in the center of the landmass. It included reception rooms, offices, and a staff kitchen on the first floor; nurses' quarters and operating rooms on the second floor; and additional staff quarters on the third floor. A one-story kitchen with a smokestack is located opposite the administration building to the south. The eastern end of island 3 contained three isolation pavilions (wards I-K) and a staff building. The isolation pavilions were intended for patients for more serious diseases, including scarlet fever, diphtheria, and a combination of either of these diseases with measles and whooping cough. Each pavilion is a 1.5-story rectangular structure. Wards I and K are located to the south of the connecting corridor while ward J is located to the north; originally, all three pavilions were freestanding structures, but covered ways were built between wards I and K and the center corridor in 1914. There were also nurses' quarters in each attic. The staff building. located at the extreme east end of island 3's connecting corridor, is a 2.5-story building for high-ranking hospital staff. Living and dining rooms, a kitchen, and a library were located on the first floor while bedrooms were located on the second floor. Ferry building The ferry building is at the western end of the ferry basin, within New Jersey. The current structure was built in 1936 and is the third ferry landing to occupy the site. It is made of a steel-and-concrete frame with a facade of red brick in Flemish bond, and limestone and terracotta ornamentation, in the Moderne architectural style. The building's central pavilion is mostly one story tall, except for a two-story central section that is covered by a hip roof with cupola. Two rectangular wings are located to the north and south and are oriented east–west. The south wing was originally reserved for U.S. Customs while the north wing contained a lunchroom and restrooms. A wooden dock extends east from the ferry building. 
The ferry building is connected to the kitchen and laundry to the north, and the hospital to the south, via covered walkways. The structure was completely restored in 2007. Immigration procedures By the time Ellis Island's immigration station closed, almost 12 million immigrants had been processed by the U.S. Bureau of Immigration. It is estimated that 10.5 million immigrants departed for points across the United States from the Central Railroad of New Jersey Terminal nearby. Others would have used one of the other terminals along the North River/Hudson River at that time. At the time of closure, it was estimated that closer to 20 million immigrants had been processed or detained at Ellis Island. According to an estimate by The History Channel, about 40% of the population of the United States can trace their ancestry to immigrants who arrived in America at Ellis Island. Initial immigration policy provided for the admission of most immigrants to the United States, other than those with mental or physical disabilities, or a moral, racial, religious, or economic reason for exclusion. At first, the majority of immigrants arriving were Northern and Western Europeans, with the largest numbers coming from the German Empire, the Russian Empire and Finland, the United Kingdom, and Italy. Eventually, these groups of peoples slowed in the rates that they were coming in, and immigrants came in from Southern and Eastern Europe, including Jews. These people immigrated for a variety of reasons including escaping political and economic oppression, as well as persecution, destitution, and violence. Other groups of peoples being processed through the station were Poles, Hungarians, Czechs, Serbs, Slovaks, Greeks, Syrians, Turks, and Armenians. Immigration through Ellis Island peaked in the first decade of the 20th century. Between 1905 and 1914, an average of one million immigrants per year arrived in the United States. Immigration officials reviewed about 5,000 immigrants per day during peak times at Ellis Island. Two-thirds of those individuals emigrated from eastern, southern and central Europe. The peak year for immigration at Ellis Island was 1907, with 1,004,756 immigrants processed, and the all-time daily high occurred on April 17 of that year, when 11,747 immigrants arrived. Following the Immigration Act of 1924, which both greatly reduced immigration and allowed processing overseas, Ellis Island was only used by those who had problems with their immigration paperwork, as well as displaced persons and war refugees. This affected both nationwide and regional immigration processing: only 2.34 million immigrants passed through the Port of New York from 1925 to 1954, compared to the 12 million immigrants processed from 1900 to 1924. Average annual immigration through the Port of New York from 1892 to 1924 typically numbered in the hundreds of thousands, though after 1924, annual immigration through the port was usually in the tens of thousands. Inspections Medical inspection Beginning in the 1890s, initial medical inspections were conducted by steamship companies at the European ports of embarkation; further examinations and vaccinations occurred on board ship during the voyage to New York. On arrival at the port of New York, ships halted at the New York state quarantine station near the Narrows. Those with serious contagious diseases (such as cholera and typhus) were quarantined at Hoffman Island or Swinburne Island, two artificial islands off the shore of Staten Island to the south. 
The islands ceased to be used for quarantine by the 1920s due to the decline in inspections at Ellis Island. For the vast majority of passengers, since most transatlantic ships could not dock at Ellis Island due to shallow water, the ships unloaded at Manhattan first, and steerage passengers were then taken to Ellis Island for processing. First- and second-class passengers typically bypassed the Ellis Island processing altogether. To support the activities of the United States Bureau of Immigration, the United States Public Health Service operated an extensive medical service. The medical force at Ellis Island started operating when the first immigration station opened in 1892, and was suspended when the station burned down in 1897. Between 1897 and 1902, medical inspections took place both at other facilities in New York City and on ships in the New York Harbor. A second hospital called U.S. Marine Hospital Number 43 or the Ellis Island Immigrant Hospital was built in 1902 and operated through 1930. Uniformed military surgeons staffed the medical division, which was active in the hospital wards, the Battery's Barge Office, and Ellis Island's Main Building. Immigrants were brought to the island via barge from their transatlantic ships. A "line inspection" was conducted in the main building. In the line inspection, the immigrants were split into several single-file lines, and inspectors would first check for any visible physical disabilities. Each immigrant would be inspected by two inspectors: one to catch any initial physical disabilities, and another to check for any other ailments that the first inspector did not notice. The doctors would then observe immigrants as they walked, to determine any irregularities in their gait. Immigrants were asked to drop their baggage and walk up the stairs to the second floor. The line inspection at Ellis Island was unique because of the volume of people it processed, and as such, used several unconventional methods of medical examination. For example, after an initial check for physical disabilities, inspectors would use special forceps or the buttonhook to examine immigrants for signs of eye diseases such as trachoma. Following each examination, inspectors used chalk to draw symbols on immigrants who were suspected to be sick. Some immigrants supposedly wiped the chalk marks off surreptitiously or inverted their clothes to avoid medical detention. Chalk-marked immigrants and those with suspected mental disabilities were then sent to rooms for further inspection, according to a 1917 account. The symbols used for chalk markings were: B – Back C – Conjunctivitis TC – Trachoma E – Eyes F – Face FT – Feet G – Goiter H – Heart K – Hernia L – Lameness N – Neck P – Physical and Lungs PG – Pregnancy S – Senility SC – Scalp (favus) X – Suspected mental defect ⓧ – Definite signs of mental defect Primary inspection Once immigrants had completed and passed the medical examination, they were sent to the Registry Room to undergo what was called primary inspection. This consisted of interrogations conducted by U.S. Immigrant Inspectors to determine if each newcomer was eligible for admission. In addition, any medical certificates issued by physicians were taken into account. Aside from the U.S. immigrant inspectors, the Bureau of Immigration work force included interpreters, watchmen, matrons, clerks and stenographers. 
According to a reconstruction of immigration processes in 1907, immigrants who passed the initial inspections spent two to five hours at Ellis Island to do these interviews. Arrivals were asked a couple dozen questions, including name, occupation, and the amount of money they carried. The government wanted to determine whether new arrivals would be self-sufficient upon arrival, and on average, wanted the immigrants to have between $18 and $25 (worth between $ and $ as of ). Some immigrants were also given literacy tests in their native languages, though children under 16 were exempt. The determination of admissibility was relatively arbitrary and determined by the individual inspector. U.S. Immigrant Inspectors used some other symbols or marks as they interrogated immigrants in the Registry Room to determine whether to admit or detain them, including: SI – Special Inquiry IV – Immigrant Visa LPC – Likely or Liable to become a Public Charge Med. Cert. – Medical certificate issued Those who were cleared were given a medical certificate or an affidavit. According to a 1912 account by physician Alfred C. Reed, immigrants were medically cleared only after three on-duty physicians signed an affidavit. Those with visible illnesses were deported or held in the island's hospital. Those who were admitted often met with relatives and friends at the Kissing Post, a wooden column outside the registry room. Between 1891 and 1930, Ellis Island reviewed over 25 million attempted immigrants, of which 700,000 were given certificates of disability or disease and of these 79,000 were barred from entry. Approximately 4.4% of immigrants between 1909 and 1930 were classified as disabled or diseased, and one percent of immigrants were deported yearly due to medical causes. The proportion of "diseased" increased to 8.0% during the Spanish flu of 1918–1919. More than 3,000 attempted immigrants died in the island's hospital. Some unskilled workers were deemed "likely to become a public charge" and so were rejected; about 2% of immigrants were deported. Immigrants could also be excluded if they were disabled and previously rejected; if they were Chinese, regardless of their citizenship status; or if they were contract laborers, stowaways, and workaways. However, immigrants were exempt from deportation if they had close family ties to a U.S. permanent resident or citizen, or if they were seamen. Ellis Island was sometimes known as the "Island of Tears" or "Heartbreak Island" for these deportees. If immigrants were rejected, appeals could be made to a three-member board of inquiry. Mass detentions and deportations Ellis Island's use as a detention center dates from World War I, when it was used to house those who were suspected of being enemy soldiers. During the war, six classes of "enemy aliens" were established, including officers and crewmen from interned ships; three classes of Germans; and suspected spies. After the American entry into World War I, about 1,100 German and Austrian naval officers and crewmen in the Ports of New York and New London were seized and held in Ellis Island's baggage and dormitory building. A commodious stockade was built for the seized officers. A 1917 New York Times article depicted the conditions of the detention center as being relatively hospitable. Anti-immigrant sentiments developed in the U.S. during and after World War I, especially toward Southern and Eastern Europeans who were entering the country in large numbers. 
Following the Immigration Act of 1924, primary inspection was moved to New York Harbor, and Ellis Island only hosted immigrants that were to be detained or deported. After the passage of the 1924 act, the Immigration Service established multiple classes of people who were said to be "deportable". This included immigrants who entered in violation of previous exclusion acts; Chinese immigrants in violation of the 1924 act; those convicted of felonies or other "crimes of moral turpitude"; and those involved in prostitution. During and immediately following World War II, Ellis Island was used to hold German merchant mariners and "enemy aliens"—Axis nationals detained for fear of spying, sabotage, and other fifth column activity. When the U.S. entered the war in December 1941, Ellis Island held 279 Japanese, 248 Germans, and 81 Italians removed from the East Coast. Unlike other wartime immigration detention stations, Ellis Island was designated as a permanent holding facility and was used to hold foreign nationals throughout the war. A total of 7,000 Germans, Italians and Japanese would be ultimately detained at Ellis Island. The Internal Security Act of 1950 barred members of communist or fascist organizations from immigrating to the United States. Ellis Island saw detention peak at 1,500,
and disagreements between the federal government and the Hood Company. A separate contract to build the island 2 had to be approved by the War Department because it was in New Jersey's waters; that contract was completed in December 1898. The construction costs ultimately totaled $1.5 million. Early expansions The new immigration station opened on December 17, 1900, without ceremony. On that day, 2,251 immigrants were processed. Almost immediately, additional projects commenced to improve the main structure, including an entrance canopy, baggage conveyor, and railroad ticket office. The kitchen/laundry and powerhouse started construction in May 1900 and were completed by the end of 1901. A ferry house was also built between islands 1 and 2 . The hospital, originally slated to be opened in 1899, was not completed until November 1901, mainly due to various funding delays and construction disputes. The facilities proved barely able to handle the flood of immigrants that arrived, and as early as 1903, immigrants had to remain in their transatlantic boats for several days due to inspection backlogs. Several wooden buildings were erected by 1903, including waiting rooms and a 700-bed barracks, and by 1904, over a million dollars' worth of improvements were proposed. The hospital was expanded from 125 to 250 beds in February 1907, and a new psychopathic ward debuted in November of the same year. Also constructed was an administration building adjacent to the hospital. Immigration commissioner William Williams made substantial changes to Ellis Island's operations, and during his tenure from 1902–1905 and 1909–1913, Ellis Island processed its peak number of immigrants. Williams also made changes to the island's appearance, adding plants and grading paths upon the once-barren landscape of Ellis Island. Under Williams's supervision, a third island was built to accommodate a proposed contagious-diseases ward, separated from existing facilities by of water. Island 3, as it was called, was located to the south of island 2 and separated from that island by a now-infilled ferry basin. The government bought the underwater area for island 3 from New Jersey in 1904, and a contract was awarded in April 1905. The islands were all connected via a cribwalk on their western sides (later covered with wood canopy), giving Ellis Island an overall "E"-shape. Upon the completion of island 3 in 1906, Ellis Island covered . A baggage and dormitory building was completed , and the main hospital was expanded in 1909. Alterations were made to the registry building and dormitories as well, but even this was insufficient to accommodate the high volume of immigrants. In 1911, Williams alleged that Congress had allocated too little for improvements to Ellis Island, even though the improvement budget that year was $868,000. Additional improvements and routine maintenance work were completed in the early 1910s. A greenhouse was built in 1910, and the contagious-diseases ward on island 3 opened the following June. In addition, the incinerator was replaced in 1911, and a recreation center operated by the American Red Cross was also built on island 2 by 1915. These facilities generally followed the design set by Tilton and Boring. When the Black Tom explosion occurred on Black Tom Island in 1916, the complex suffered moderate damage; though all immigrants were evacuated safely, the main building's roof collapsed, and windows were broken. The main building's roof was replaced with a Guastavino-tiled arched ceiling by 1918. 
The immigration station was temporarily closed during World War I in 1917–1919, during which the facilities were used as a jail for suspected enemy combatants, and later as a treatment center for wounded American soldiers. Immigration inspections were conducted aboard ships or at docks. During the war, immigration processing at Ellis Island declined by 97%, from 878,000 immigrants per year in 1914 to 26,000 per year in 1919. Ellis Island's immigration station was reopened in 1920, and processing had rebounded to 560,000 immigrants per year by 1921. There were still ample complaints about the inadequate condition of Ellis Island's facilities. However, despite a request for $5.6 million in appropriations in 1921, aid was slow to materialize, and initial improvement work was restricted to smaller projects such as the infilling of the basin between islands 2 and 3. Other improvements included rearranging features such as staircases to improve pedestrian flow. These projects were supported by president Calvin Coolidge, who in 1924 requested that Congress approve $300,000 in appropriations for the island. The allocations were not received until the late 1920s. Conversion to detention center With the passing of the Emergency Quota Act of 1921, the number of immigrants being allowed into the United States declined greatly, ending the era of mass immigration. Following the Immigration Act of 1924, strict immigration quotas were enacted, and Ellis Island was downgraded from a primary inspection center to an immigrant-detention center, hosting only those that were to be detained or deported (see ). Final inspections were now instead conducted on board ships in New York Harbor. The Wall Street Crash of 1929 further decreased immigration, as people were now discouraged from immigrating to the U.S. Because of the resulting decline in patient counts, the hospital closed in 1930. Edward Corsi, who himself was an immigrant, became Ellis Island commissioner in 1931 and commenced an improvement program for the island. The initial improvements were utilitarian, focusing on such aspects as sewage, incineration, and power generation. In 1933, a federal committee led by the Secretary of Labor, Frances Perkins, was established to determine what operations and facilities needed improvement. The committee's report, released in 1934, suggested the construction of a new class-segregated immigration building, recreation center, ferry house, verandas, and doctors/nurses' quarters, as well as the installation of a new seawall around the island. These works were undertaken using Public Works Administration funding and Works Progress Administration labor, and were completed by the late 1930s. As part of the project, the surgeon's house and recreation center were demolished, and Edward Laning commissioned some murals for the island's buildings. Other improvements included the demolition of the greenhouse, the completion of the infilling of the basin between islands 2 and 3, and various landscaping activities such as the installation of walkways and plants. However, because of the steep decline in immigration, the immigration building went underused for several years, and it started to deteriorate. With the start of World War II in 1939, Ellis Island was again utilized by the military, this time being used as a United States Coast Guard base. As during World War I, the facilities were used to detain enemy soldiers in addition to immigrants, and the hospital was used for treating injured American soldiers. 
So many combatants were detained at Ellis Island that administrative offices were moved to mainland Manhattan in 1943, and Ellis Island was used solely for detainment. By 1947, shortly after the end of World War II, there were proposals to close Ellis Island due to the massive expenses needed for the upkeep of a relatively small detention center. The hospital was closed in 1950–1951 by the United States Public Health Service, and by the early 1950s, there were only 30 to 40 detainees left on the island. The island's closure was announced in mid-1954, when the federal government said it would construct a replacement facility in Manhattan. Ellis Island closed on November 12, 1954, with the departure of its last detainee, Norwegian merchant seaman Arne Pettersen. At the time, it was estimated that the government would save $900,000 a year from closing the island. The ferryboat Ellis Island, which had operated since 1904, stopped operating two weeks later. Post-closure Initial redevelopment plans After the immigration station closed, the buildings fell into disrepair and were abandoned, and the General Services Administration (GSA) took over the island in March 1955. The GSA wanted to sell off the island as "surplus property" and contemplated several options, including selling the island back to the city of New York or auctioning it to a private buyer. In 1959, real estate developer Sol Atlas unsuccessfully bid for the island, with plans to turn it into a $55 million resort with a hotel, marina, music shell, tennis courts, swimming pools, and skating rinks. The same year, Frank Lloyd Wright designed the $100 million "Key Project", which included housing, hotels, and large domes along the edges. However, Wright died before presenting the project. Other attempts at redeveloping the site, including a college, a retirement home, an alcoholics' rehabilitation center, and a world trade center, were all unsuccessful. In 1963, the Jersey City Council voted to rezone the island's area within New Jersey for high-rise residential, monument/museum, or recreational use, though the new zoning ordinance banned "Coney Island"-style amusement parks. In June 1964, the National Park Service published a report that proposed making Ellis Island part of a national monument. This idea was approved by Secretary of the Interior Stewart Udall in October 1964. Ellis Island was added to the Statue of Liberty National Monument on May 11, 1965, and that August, President Lyndon B. Johnson approved the redevelopment of the island as a museum and park. The initial master plan for the redevelopment of Ellis Island, designed by Philip Johnson, called for the construction of the Wall, a large "stadium"-shaped monument to replace the structures on the island's northwest side, while preserving the main building and hospital. However, no appropriations were immediately made, other than a $250,000 allocation for emergency repairs in 1967. By the late 1960s, the abandoned buildings were deteriorating severely. Johnson's plan was never implemented due to public opposition and a lack of funds. Another master plan was proposed in 1968, which called for the rehabilitation of the island's northern side and the demolition of all buildings, including the hospital, on the southern side. The Jersey City Jobs Corpsmen started rehabilitating part of Ellis Island the same year, in accordance with this plan. This was soon halted indefinitely because of a lack of funding.
In 1970, a squatters' club called the National Economic Growth and Reconstruction Organization (NEGRO) started refurbishing buildings as part of a plan to turn the island into an addiction rehabilitation center, but were evicted after less than two weeks. NEGRO's permit to renovate the island were ultimately terminated in 1973. Restoration and reopening of north side In the 1970s, the NPS started restoring the island by repairing seawalls, eliminating weeds, and building a new ferry dock. Simultaneously, Peter Sammartino launched the Restore Ellis Island Committee to raise awareness and money for repairs. The north side of the island, comprising the main building and surrounding structures, was rehabilitated and partially reopened for public tours in May 1976. The plant was left unrepaired to show the visitors the extent of the deterioration. The NPS limited visits to 130 visitors per boat, or less than 75,000 visitors a year. Initially, only parts of three buildings were open to visitors. Further repairs were stymied by a lack of funding, and by 1982, the NPS was turning to private sources for funds. In May 1982, President Ronald Reagan announced the formation of the Statue of Liberty–Ellis Island Centennial Commission, led by Chrysler Corporation chair Lee Iacocca with former President Gerald Ford as honorary chairman, to raise the funds needed to complete the work. The plan for Ellis Island was to cost $128 million, and by the time work commenced in 1984, about $40 million had been raised. Through its fundraising arm, the Statue of Liberty–Ellis Island Foundation, Inc., the group eventually raised more than $350 million in donations for the renovations of both the Statue of Liberty and Ellis Island. Initial restoration plans included renovating the main building, baggage and dormitory building, and the hospital, as well as possibly adding a bandshell, restaurant, and exhibits. Two firms, Finegold Alexander + Associates Inc and Beyer Blinder Belle, designed the renovation. In advance of the renovation, public tours ceased in 1984, and work started the following year. As part of the restoration, the powerhouse was renovated, while the incinerator, greenhouse, and water towers were removed. The kitchen/laundry and baggage/dormitory buildings were restored to their original condition while the main building was restored to its 1918–1924 appearance. The main building opened as a museum on September 10, 1990. Further improvements were made after the north side's renovation was completed. The Wall of Honor, a monument to raise money for the restoration, was completed in 1990 and reconstructed starting in 1993. A research facility with online database, the American Family Immigration History Center, was opened in April 2001. Subsequently, the ferry building was restored for $6.4 million and reopened in 2007. The north side was temporarily closed after being damaged in Hurricane Sandy in October 2012, though the island and part of the museum reopened exactly a year later, after major renovations. Structures The current complex was designed by Edward Lippincott Tilton and William A. Boring, who performed the commission under the direction of the Supervising Architect for the U.S. Treasury, James Knox Taylor. Their plan, submitted in 1898, called for structures to be located on both the northern and southern portions of Ellis Island. 
The plan stipulated a large main building, a powerhouse, and a new baggage/dormitory and kitchen building on the north side of Ellis Island; a hospital on the south side; and a ferry dock with covered walkways at the head of the ferry basin, on the west side of the island. The plan roughly corresponds to what was ultimately built. North side The northern half of Ellis Island is composed of the former island 1. Only the areas associated with the original island, including much of the main building, are in New York; the remaining area is in New Jersey. Main building The present three-story main structure was designed in French Renaissance style. It is made of a steel frame, with a facade of red brick in Flemish bond ornamented with limestone trim. The structure is located above the mean waterline to prevent flooding. The building was initially composed of a three-story center section with two-story east and west wings, though the third story of each wing was completed in the early 1910s. Atop the corners of the building's central section are four towers capped by cupolas of copper cladding. Some 160 rooms were included within the original design to separate the different functions of the building. Namely, the first floor was initially designed to handle baggage, detention, offices, storage and waiting rooms; the second floor, primary inspection; and the third floor, dormitories. However, in practice, these spaces generally served multiple functions throughout the immigration station's operating history. At opening, it was estimated that the main building could inspect 5,000 immigrants per day. The main building's design was highly acclaimed; at the 1900 Paris Exposition, it received a gold medal, and architectural publications such as the Architectural Record lauded the design. The first floor contained detention rooms, social service offices, and waiting rooms in its west wing, a use that remained relatively unchanged. The central space was initially a baggage room until 1907, but was subsequently subdivided and later re-combined into a single records room. The first floor's east wing also contained a railroad waiting room and medical offices, though much of the wing was later converted to record rooms. A railroad ticket office annex was added to the north side of the first floor in 1905–1906. The south elevation of the first floor contains the current immigration museum's main entrance, approached by a slightly sloped passageway covered by a glass canopy. Though the canopy was added in the 1980s, it evokes the design of an earlier glass canopy on the site that existed from 1902 to 1932. A registry room, with a ceiling, is located on the central section of the second floor. The room was used for primary inspections. Initially, there were handrails within the registry room that separated the primary inspection into several queues, but in 1911 these were replaced with benches. A staircase from the first floor formerly rose into the middle of the registry room, but this was also removed around 1911. When the room's roof collapsed during the Black Tom explosion of 1916, the current Guastavino-tiled arched ceiling was installed, and the asphalt floor was replaced with red Ludowici tile. There are three large arched openings on each of the northern and southern walls, filled in with metal-and-glass grilles. 
The southern elevation retains its original double-height arches, while the lower sections of the arches on the northern elevation were modified to make way for the railroad ticket office. On all four sides of the room, above the level of the third floor, is a clerestory of semicircular windows. The east wing of the second floor was used for administrative offices, while the west wing housed the special inquiry and deportation divisions, as well as dormitories. On the third floor is a balcony surrounding the entire registry room. There were also dormitories for 600 people on the third floor. Between 1914 and 1918, several rooms were added to the third floor. These rooms included offices as well as an assembly room, which were later converted to detention space. The remnants of Fort Gibson still exist outside the main building. Two portions are visible to the public, including the remnants of the lower walls around the fort. Kitchen and laundry The kitchen and laundry structure is a two-and-a-half-story structure located west of the main building. It is made of a steel frame and terracotta blocks, with a granite base and a facade of brick in Flemish bond. Originally designed as two separate structures, it was redesigned in 1899 as a single structure with kitchen-restaurant and laundry-bathhouse components, and was subsequently completed in 1901. A one-and-a-half-story ice plant on the northern elevation was built between 1903 and 1908, and was converted into a ticket office in 1935. It has a facade of brick in English and stretcher bond. Today, the kitchen and laundry contains NPS offices as well as the museum's Peopling of America exhibit. The laundry facility is part of Save Ellis Island's hard-hat tour. The building has a central portion with a narrow gable roof, as well as pavilions on the western and eastern sides with hip roofs; the roof tiling, formerly of slate, is currently of terracotta. The larger eastern pavilion, which contained the laundry-bathhouse, had hipped dormers. The exterior-facing window and door openings contain limestone features on the facade, while the top of the building has a modillioned copper cornice. Formerly, there was also a two-story porch on the southern elevation. Multiple enclosed passageways connect the kitchen and laundry to adjacent structures. Bakery and carpentry shop The bakery and carpentry shop is a two-story structure located west of the kitchen and laundry building. It is roughly rectangular and oriented north–south. It is made of a steel frame with a granite base, a flat roof, and a facade of brick in Flemish bond. The building was constructed in 1914–1915 to replace the separate wooden bakery and carpentry shop buildings, as well as two sheds and a frame waiting room. There are no exterior entrances, and the only access is via the kitchen and laundry. The first floor generally contained oven rooms, baking areas, and storage, while the second floor contained the carpentry shop. Baggage and dormitory The baggage and dormitory structure is a three-story structure located north of the main building. It is made of a steel frame and terracotta blocks, with a limestone base and a facade of brick in Flemish bond. Completed as a two-story structure , the baggage and dormitory building replaced a 700-bed wooden barracks nearby that operated between 1903 and 1911. The baggage and dormitory initially had baggage collection on its first floor, dormitories and detention rooms on its second floor, and a tiled garden on its roof. 
The building received a third story, and a two-story annex to the north side, in 1913–1914. Initially, the third floor included additional dormitory space while the annex provided detainees with outdoor porch space. A detainee dining room on the first floor was expanded in 1951. The building is mostly rectangular except for its northern annex and contains an interior courtyard, skylighted at the second floor. On its facade, the first story has rectangular windows in arched window openings, while the second and third stories have rectangular windows and window openings. There are cornices below the second and third stories. The annex contains wide window openings with narrow brick piers outside them. The roof's northwest corner contains a one-story extension. Multiple wings connect the baggage and dormitory building to its adjacent buildings. Powerhouse The powerhouse of Ellis Island is a two-story structure located north of the kitchen and laundry building and west of the baggage and dormitory building. It is roughly rectangular and oriented north–south. Like the kitchen and laundry, it was completed in 1901. It is made of a steel frame with a granite base, a facade of brick in Flemish bond, and decorative bluestone and limestone elements. The hip roof contains dormers and is covered with terracotta tiling. A brick smokestack rises from ground level. Formerly, the powerhouse provided almost all power for Ellis Island. A coal trestle at the northwest end was used to transport coal for power generation from 1901 to 1932, when the powerhouse started using fuel oil. The powerhouse also generated steam for the island. After the immigration station closed, the powerhouse deteriorated and was left unrepaired until the 1980s renovation. The powerhouse is no longer operational; instead, the island receives power from 13,200-volt cables that lead from a Public Service Electric & Gas substation in Liberty State Park. The powerhouse contains sewage pumps that can dispose of up to to the Jersey City Sewage Authority sewage system. A central heating plant was installed during the 1980s renovation. South side The southern side of Ellis Island, located across the ferry basin from the northern side, is composed of island 2 (created in 1899) and island 3 (created in 1906). The
only to entertain but also to educate fellow citizens: he was expected to have a message. Traditional myth provided the subject matter, but the dramatist was meant to be innovative, which led to novel characterizations of heroic figures and use of the mythical past as a tool for discussing present issues. The difference between Euripides and his older colleagues was one of degree: his characters talked about the present more controversially and pointedly than those of Aeschylus and Sophocles, sometimes even challenging the democratic order. Thus, for example, Odysseus is represented in Hecuba (lines 131–32) as "agile-minded, sweet-talking, demos-pleasing", i.e. similar to the war-time demagogues that were active in Athens during the Peloponnesian War. Speakers in the plays of Aeschylus and Sophocles sometimes distinguish between slaves who are servile by nature and those servile by circumstance, but Euripides' speakers go further, positing an individual's mental, rather than social or physical, state as a true indication of worth. For example, in Hippolytus, a love-sick queen rationalizes her position and, reflecting on adultery, arrives at this comment on intrinsic merit: Euripides' characters resembled contemporary Athenians rather than heroic figures of myth. As mouthpieces for contemporary issues, they "all seem to have had at least an elementary course in public speaking". The dialogue often contrasts so strongly with the mythical and heroic setting that it can seem as if Euripides aimed at parody. For example, in The Trojan Women, the heroine's rationalized prayer elicits comment from Menelaus: Athenian citizens were familiar with rhetoric in the assembly and law courts, and some scholars believe that Euripides was more interested in his characters as speakers with cases to argue than as characters with lifelike personalities. They are self-conscious about speaking formally, and their rhetoric is shown to be flawed, as if Euripides were exploring the problematical nature of language and communication: "For speech points in three different directions at once, to the speaker, to the person addressed, to the features in the world it describes, and each of these directions can be felt as skewed". For example, in the quotation above, Hecuba presents herself as a sophisticated intellectual describing a rationalized cosmos, but the speech is ill-suited to her audience, the unsophisticated listener Menelaus, and is found not to suit the cosmos either (her grandson is murdered by the Greeks). In Hippolytus, speeches appear verbose and ungainly, as if to underscore the limitations of language. Like Euripides, both Aeschylus and Sophocles created comic effects, contrasting the heroic with the mundane, but they employed minor supporting characters for that purpose. Euripides was more insistent, using major characters as well. His comic touches can be thought to intensify the overall tragic effect, and his realism, which often threatens to make his heroes look ridiculous, marks a world of debased heroism: "The loss of intellectual and moral substance becomes a central tragic statement". Psychological reversals are common and sometimes happen so suddenly that inconsistency in characterization is an issue for many critics, such as Aristotle, who cited Iphigenia in Aulis as an example (Poetics 1454a32). 
For others, psychological inconsistency is not a stumbling block to good drama: "Euripides is in pursuit of a larger insight: he aims to set forth the two modes, emotional and rational, with which human beings confront their own mortality." Some think unpredictable behaviour realistic in tragedy: "everywhere in Euripides a preoccupation with individual psychology and its irrational aspects is evident....In his hands tragedy for the first time probed the inner recesses of the human soul and let passions spin the plot." The tension between reason and passion is symbolized by his characters' relationship with the gods: for example, Hecuba's prayer is answered not by Zeus, nor by the law of reason, but by Menelaus, as if speaking for the old gods. And perhaps the most famous example is in the Bacchae, where the god Dionysus savages his own converts. When the gods do appear (in eight of the extant plays), they appear "lifeless and mechanical". Sometimes condemned by critics as an unimaginative way to end a story, the spectacle of a "god" making a judgement or announcement from a theatrical crane might actually have been intended to provoke scepticism about the religious and heroic dimension of his plays. Similarly, his plays often begin in a banal manner that undermines theatrical illusion. Unlike Sophocles, who established the setting and background of his plays in the introductory dialogue, Euripides used a monologue in which a divinity or human character simply tells the audience all it needs to know to understand what follows. Aeschylus and Sophocles were innovative, but Euripides had arrived at a position in the "ever-changing genre" where he could easily move between tragic, comic, romantic, and political effects. This versatility appears in individual plays and also over the course of his career. Potential for comedy lay in his use of 'contemporary' characters, in his sophisticated tone, in his relatively informal Greek (see In Greek below), and in his ingenious use of plots centred on motifs that later became standard in Menander's New Comedy (for example the 'recognition scene'). Other tragedians also used recognition scenes, but they were heroic in emphasis, as in Aeschylus's The Libation Bearers, which Euripides parodied in Electra (Euripides was unique among the tragedians in incorporating theatrical criticism in his plays). Traditional myth with its exotic settings, heroic adventures, and epic battles offered potential for romantic melodrama as well as for political comments on a war theme, so that his plays are an extraordinary mix of elements. The Trojan Women, for example, is a powerfully disturbing play on the theme of war's horrors, apparently critical of Athenian imperialism (it was composed in the aftermath of the Melian massacre and during the preparations for the Sicilian Expedition), yet it features the comic exchange between Menelaus and Hecuba quoted above, and the chorus considers Athens, the "blessed land of Theseus", to be a desirable refuge; such complexity and ambiguity are typical both of his "patriotic" and "anti-war" plays. Tragic poets in the fifth century competed against one another at the City Dionysia, each with a tetralogy of three tragedies and a satyr play. The few extant fragments of satyr plays attributed to Aeschylus and Sophocles indicate that these were a loosely structured, simple, and jovial form of entertainment. 
But in Cyclops (the only complete satyr-play that survives), Euripides structured the entertainment more like a tragedy and introduced a note of critical irony typical of his other work. His genre-bending inventiveness is shown above all in Alcestis, a blend of tragic and satyric elements. This fourth play in his tetralogy for 438 BC (i.e., it occupied the position conventionally reserved for satyr plays) is a "tragedy", featuring Heracles as a satyric hero in conventional satyr-play scenes: an arrival, a banquet, a victory over an ogre (in this case, death), a happy ending, a feast, and a departure for new adventures. Most of the big innovations in tragedy were made by Aeschylus and Sophocles, but "Euripides made innovations on a smaller scale that have impressed some critics as cumulatively leading to a radical change of direction". Euripides is also known for his use of irony. Many Greek tragedians make use of dramatic irony to bring out the emotion and realism of their characters or plays, but Euripides uses irony to foreshadow events and occasionally amuse his audience. For example, in his play Heracles, Heracles comments that all men love their children and wish to see them grow. The irony here is that Heracles will be driven into madness by Hera and will kill his children. Similarly, in Helen, Theoclymenus remarks how happy he is that his sister has the gift of prophecy and will warn him of any plots or tricks against him (the audience already knows that she has betrayed him). In this instance, Euripides uses irony not only for foreshadowing but also for comic effect—which few tragedians did. Likewise, in the Bacchae, Pentheus's first threat to the god Dionysus is that if Pentheus catches him in his city, he will 'chop off his head', whereas it is Pentheus who is beheaded at the end of the play. In Greek The spoken language of the plays is not fundamentally different in style from that of Aeschylus or Sophocles: it employs poetic meters, a rarefied vocabulary, fullness of expression, complex syntax, and ornamental figures, all aimed at representing an elevated style. But its rhythms are somewhat freer, and more natural, than those of his predecessors, and the vocabulary has been expanded to allow for intellectual and psychological subtleties. Euripides was also a great lyric poet. In Medea, for example, he composed for his city, Athens, "the noblest of her songs of praise". His lyrical skills are not just confined to individual poems: "A play of Euripides is a musical whole...one song echoes motifs from the preceding song, while introducing new ones." For some critics, the lyrics often seem dislocated from the action, but the extent and significance of this is "a matter of scholarly debate". See Chronology for details about his style. Reception Euripides has aroused, and continues to arouse, strong opinions for and against his work: Aeschylus gained thirteen victories as a dramatist; Sophocles at least twenty; Euripides only four in his lifetime; and this has often been taken as an indication of the latter's unpopularity. But a first place might not have been the main criterion for success (the system of selecting judges appears to have been flawed), and merely being chosen to compete was a mark of distinction. Moreover, to have been singled out by Aristophanes for so much comic attention is proof of popular interest in his work. Sophocles was appreciative enough of the younger poet to be influenced by him, as is evident in his later plays Philoctetes and Oedipus at Colonus. 
According to Plutarch, Euripides had been very well received in Sicily, to the extent that after the failure of the Sicilian Expedition, many Athenian captives were released, simply for being able to teach their captors whatever fragments they could remember of his work. Less than a hundred years later, Aristotle developed an almost 'biological' theory of the development of tragedy in Athens: the art form grew under the influence of Aeschylus, matured in the hands of Sophocles, then began its precipitous decline with Euripides. However, "his plays continued to be applauded even after those of Aeschylus and Sophocles had come to seem remote and irrelevant"; they became school classics in the Hellenistic period (as mentioned in the introduction) and, due to Seneca's adaptation of his work for Roman audiences, "it was Euripides, not Aeschylus or Sophocles, whose tragic muse presided over the rebirth of tragedy in Renaissance Europe." In the seventeenth century, Racine expressed admiration for Sophocles, but was more influenced by Euripides (Iphigenia in Aulis and Hippolytus were the models for his plays Iphigénie and Phèdre). Euripides' reputation was to take a beating in the early 19th century, when Friedrich Schlegel and his brother August Wilhelm Schlegel championed Aristotle's 'biological' model of theatre history, identifying Euripides with the moral, political, and artistic degeneration of Athens. August Wilhelm's Vienna lectures on dramatic art and literature went through four editions between 1809 and 1846; and, in them, he opined that Euripides "not only destroyed the external order of tragedy but missed its entire meaning". This view influenced Friedrich Nietzsche, who seems, however, not to have known the Euripidean plays well. But literary figures, such as the poet Robert Browning and his wife Elizabeth Barrett Browning, could study and admire the Schlegels, while still appreciating Euripides as "our Euripides the human" (Wine of Cyprus stanza 12). Classicists such as Arthur Verrall and Ulrich von Wilamowitz-Moellendorff reacted against the views of the Schlegels and Nietzsche, constructing arguments sympathetic to Euripides, which involved Wilamowitz in this restatement of Greek tragedy as a genre: "A [Greek] tragedy does not have to end 'tragically' or be 'tragic'. The only requirement is a serious treatment." In the English-speaking world, the pacifist Gilbert Murray played an important role in popularizing Euripides, influenced perhaps by his anti-war plays. Today, as in the time of Euripides, traditional assumptions are constantly under challenge, and audiences therefore have a natural affinity with the Euripidean outlook, which seems nearer to ours, for example, than the Elizabethan. As stated above, however, opinions continue to diverge, so that modern readers might actually "seem to feel a special affinity with Sophocles"; one recent critic might dismiss the debates in Euripides' plays as "self-indulgent digression for the sake of rhetorical display"; and another might spring to the defence: "His plays are remarkable for their range of tones and the gleeful inventiveness, which morose critics call cynical artificiality, of their construction." Unique among writers of ancient Athens, Euripides demonstrated sympathy towards the underrepresented members of society. 
His male contemporaries were frequently shocked by the heresies he put into the mouths of characters, such as these words of his heroine Medea: Texts Transmission The textual transmission of the plays, from the 5th century BC, when they were first written, until the era of the printing press, was a largely haphazard process. Much of Euripides' work was lost and corrupted; but the period also included triumphs by scholars and copyists, thanks to whom much was recovered and preserved. Summaries of the transmission are often found in modern editions of the plays, three of which are used as sources for this summary. The plays of Euripides, like those of Aeschylus and Sophocles, circulated in written form. But literary conventions that we take for granted today had not been invented: there was no spacing between words; no consistency in punctuation or elisions; no marks for breathings and accents (guides to pronunciation, and word recognition); no convention to denote change of speaker; no stage directions; and verse was written straight across the page, like prose. Possibly, those who bought texts supplied their own interpretative markings. Papyri discoveries have indicated, for example, that a change in speakers was loosely denoted with a variety of signs, such as equivalents of the modern dash, colon, and full-stop. The absence of modern literary conventions (which aid comprehension) was an early and persistent source of errors affecting transmission. Errors were also introduced when Athens replaced its old Attic alphabet with the Ionian alphabet, a change sanctioned by law in 403–402 BC, adding a new complication to the task of copying. Many more errors came from the tendency of actors to interpolate words and sentences, producing so many corruptions and variations that a law was proposed by Lycurgus of Athens in 330 BC "that the plays of Aeschylus, Sophocles and Euripides should be written down and preserved in a public office; and that the town clerk should read the text over with the actors; and that all performances which did not comply with this regulation should be illegal." The law was soon disregarded, and actors continued to make changes until about 200 BC, after which the habit ceased. It was about then that Aristophanes of Byzantium compiled an edition of all the extant plays of Euripides, collated from pre-Alexandrian texts, furnished with introductions and accompanied by a commentary that was "published" separately. This became the "standard edition" for the future, and it featured some of the literary conventions that modern readers expect: there was still no spacing between words; little or no punctuation; and no stage directions; but abbreviated names denoted changes of speaker; lyrics were broken into "cola" and "strophai", or lines and stanzas; and a system of accentuation was introduced. After this creation of a standard edition, the text was fairly safe from errors, aside from slight and gradual corruption introduced by tedious copying. Many of these trivial errors occurred in the Byzantine period, following a change in script (from uncial to minuscule), and many were "homophonic" errors, equivalent, in English, to substituting "right" for "write"; except that there were more opportunities for Byzantine scribes to make these errors, because η, ι, οι and ει were pronounced similarly in the Byzantine period. 
Around 200 AD, ten of the plays of Euripides began to be circulated in a select edition, possibly for use in schools, with some commentaries or scholia recorded in the margins. Similar editions had appeared for Aeschylus and Sophocles; these are the only plays of theirs that survive today. Euripides, however, was more fortunate than the other tragedians, with a second edition of his work surviving, compiled in alphabetical order as if from a set of his collected works, but without scholia attached. This "Alphabetical" edition was combined with the "Select" edition by some unknown Byzantine scholar, bringing together all the nineteen plays that survive today. The "Select" plays are found in many medieval manuscripts, but only two manuscripts preserve the "Alphabetical" plays; these are often denoted L and P, after the Laurentian Library at Florence, and the Bibliotheca Palatina in the Vatican, where they are stored. It is believed that P derived its Alphabet plays and some Select plays from copies of an ancestor of L, but the remainder is derived from elsewhere. P contains all the extant plays of Euripides, while L is missing The Trojan Women and the latter part of The Bacchae. In addition to L, P, and many other medieval manuscripts, there are fragments of plays on papyrus. These papyrus fragments are often recovered only with modern technology. In June 2005, for example, classicists at the University of Oxford worked on a joint project with Brigham Young University, using multi-spectral imaging technology to retrieve previously illegible writing (see References). Some of this work employed infrared technology—previously used for satellite imaging—to detect previously unknown material by Euripides, in fragments of the Oxyrhynchus papyri, a collection of ancient manuscripts held by the university. It is from such materials that modern scholars try to piece together copies of the original plays. Sometimes the picture is almost lost. Thus, for example, two extant plays, The Phoenician Women and Iphigenia in Aulis, are significantly corrupted by interpolations (the latter possibly being completed post mortem by the poet's son); and the very authorship of Rhesus is a matter of dispute. In fact, the very existence of the Alphabet plays, or rather the absence of an equivalent edition for Sophocles and Aeschylus, could distort our notions of distinctive Euripidean qualities: most of his
The "Select" plays are found in many medieval manuscripts, but only two manuscripts preserve the "Alphabetical" playsoften denoted L and P, after the Laurentian Library at Florence, and the Bibliotheca Palatina in the Vatican, where they are stored. It is believed that P derived its Alphabet plays and some Select plays from copies of an ancestor of L, but the remainder is derived from elsewhere. P contains all the extant plays of Euripides, L is missing The Trojan Women and latter part of The Bacchae. In addition to L, P, and many other medieval manuscripts, there are fragments of plays on papyrus. These papyrus fragments are often recovered only with modern technology. In June 2005, for example, classicists at the University of Oxford worked on a joint project with Brigham Young University, using multi-spectral imaging technology to retrieve previously illegible writing (see References). Some of this work employed infrared technology—previously used for satellite imaging—to detect previously unknown material by Euripides, in fragments of the Oxyrhynchus papyri, a collection of ancient manuscripts held by the university. It is from such materials that modern scholars try to piece together copies of the original plays. Sometimes the picture is almost lost. Thus, for example, two extant plays, The Phoenician Women and Iphigenia in Aulis, are significantly corrupted by interpolations (the latter possibly being completed post mortem by the poet's son); and the very authorship of Rhesus is a matter of dispute. In fact, the very existence of the Alphabet plays, or rather the absence of an equivalent edition for Sophocles and Aeschylus, could distort our notions of distinctive Euripidean qualitiesmost of his least "tragic" plays are in the Alphabet edition; and, possibly, the other two tragedians would appear just as genre-bending as this "restless experimenter", if we possessed more than their "select" editions. See Extant plays below for listing of "Select" and "Alphabetical" plays. Chronology Original production dates for some of Euripides' plays are known from ancient records, such as lists of prize-winners at the Dionysia; and approximations are obtained for the remainder by various means. Both the playwright and his work were travestied by comic poets such as Aristophanes, the known dates of whose own plays can serve as a terminus ad quem for those of Euripides (though the gap can be considerable: twenty-seven years separate Telephus, known to have been produced in 438 BC, from its parody in Thesmophoriazusae in 411 BC.). References in Euripides' plays to contemporary events provide a terminus a quo, though sometimes the references might even precede a datable event (e.g. lines 1074–89 in Ion describe a procession to Eleusis, which was probably written before the Spartans occupied it during the Peloponnesian War). Other indications of dating are obtained by stylometry. Greek tragedy comprised lyric and dialogue, the latter mostly in iambic trimeter (three pairs of iambic feet per line). Euripides sometimes 'resolved' the two syllables of the iamb (˘¯) into three syllables (˘˘˘), and this tendency increased so steadily over time that the number of resolved feet in a play can indicate an approximate date of composition (see Extant plays below for one scholar's list of resolutions per hundred trimeters). 
Associated with this increase in resolutions was an increasing vocabulary, often involving prefixes to refine meanings, allowing the language to assume a more natural rhythm, while also becoming ever more capable of psychological and philosophical subtlety. The trochaic tetrameter catalectic (four pairs of trochees per line, with the final syllable omitted) was identified by Aristotle as the original meter of tragic dialogue (Poetics 1449a21). Euripides employs it here and there in his later plays, but seems not to have used it in his early plays at all, with The Trojan Women being the earliest appearance of it in an extant play—it is symptomatic of an archaizing tendency in his later works. The later plays also feature extensive use of stichomythia (i.e. a series of one-liners). The longest such scene comprises one hundred and five lines in Ion (lines 264–369). In contrast, Aeschylus never exceeded twenty lines of stichomythia; Sophocles' longest such scene was fifty lines, and that is interrupted several times by αντιλαβή (Electra, lines 1176–1226). Euripides' use of lyrics in sung parts shows the influence of Timotheus of Miletus in the later plays: the individual singer gained prominence, and was given additional scope to demonstrate his virtuosity in lyrical duets, as well as replacing some of the chorus's functions with monodies. At the same time, choral odes began to take on something of the form of dithyrambs reminiscent of the poetry of Bacchylides, featuring elaborate treatment of myths. Sometimes these later choral odes seem to have only a tenuous connection with the plot, linked to the action only in their mood. The Bacchae, however, shows a reversion to old forms, possibly as a deliberate archaic effect, or because there were no virtuoso choristers in Macedonia (where it is said to have been written). Extant plays Key: Date indicates date of first production. Prize indicates a place known to have been awarded in festival competition. Lineage: S denotes plays surviving from a 'Select' or 'School' edition, A plays surviving from an 'Alphabetical' edition; see Transmission above for details. Resolutions: Number of resolved feet per 100 trimeters (Ceadel's list); see Chronology above for details. Genre: Generic orientation (see 'Transmission' section) with additional notes in brackets. Lost and fragmentary plays The following plays have come down to us in fragmentary form, if at all. They are known through quotations in other works (sometimes as little as a single line); pieces of papyrus; partial copies in manuscript; part of a collection of hypotheses (or summaries); and through being parodied in the works of Aristophanes. Some of the fragments, such as those of Hypsipyle, are extensive enough to allow tentative reconstructions to be proposed. A two-volume selection from the fragments, with facing-page translation, introductions, and notes, was published by Collard, Cropp, Lee, and Gibert; as were two Loeb Classical Library volumes derived from them; and there are critical studies in T. B. L. Webster's older The Tragedies of Euripides, based on what were then believed to be the most likely reconstructions of the plays. The following lost and fragmentary plays can be dated, and are arranged in roughly chronological order: Peliades (455 BC) Telephus (438 BC with Alcestis) Alcmaeon in Psophis (438 BC with Alcestis) Cretan Women (438 BC with Alcestis) Cretans (c. 
435 BC) Philoctetes (431 BC with Medea) Dictys (431 BC with Medea) Theristai (Reapers, satyr play, 431 BC with Medea) Stheneboea (before 429 BC) Bellerophon (c. 430 BC) Cresphontes (c. 425 BC) Erechtheus (422 BC) Phaethon (c. 420 BC) Wise Melanippe (c. 420 BC) Alexandros (415 BC with Trojan Women) Palamedes (415 BC with Trojan Women) Sisyphus (satyr play, 415 BC with Trojan Women) Captive Melanippe (c. 412 BC) Andromeda (412 BC with Helen) Antiope (c. 410 BC) Archelaus (c. 410 BC) Hypsipyle (c. 410 BC) Alcmaeon in Corinth (c. 405 BC; won first prize as part of a trilogy with The Bacchae and Iphigenia in Aulis) The following lost and fragmentary plays are of uncertain date, and are arranged in English
the basis for Lowood School in Jane Eyre. The three remaining sisters and their brother Branwell were thereafter educated at home by their father and aunt Elizabeth Branwell. A shy girl, Emily was very close to her siblings and was known as a great animal lover, especially for befriending stray dogs she found wandering around the countryside. Despite the lack of formal education, Emily and her siblings had access to a wide range of published material; favourites included Sir Walter Scott, Byron, Shelley, and Blackwood's Magazine. Inspired by a box of toy soldiers Branwell had received as a gift, the children began to write stories which they set in a number of imaginary worlds peopled by their soldiers as well as their heroes the Duke of Wellington and his sons, Charles and Arthur Wellesley. Little of Emily's work from this period survives, except for poems spoken by characters. Initially, all four children shared in creating stories about a world called Angria. However, when Emily was 13, she and Anne withdrew from participation in the Angria story and began a new one about Gondal, a fictional island whose myths and legends were to preoccupy the two sisters throughout their lives. With the exception of their Gondal poems and Anne's lists of Gondal's characters and place-names, Emily and Anne's Gondal writings were largely not preserved. Among those that did survive are some "diary papers," written by Emily in her twenties, which describe current events in Gondal. The heroes of Gondal tended to resemble the popular image of the Scottish Highlander, a sort of British version of the "noble savage": romantic outlaws capable of more nobility, passion, and bravery than the denizens of "civilization". Similar themes of romanticism and noble savagery are apparent across the Brontës' juvenilia, notably in Branwell's The Life of Alexander Percy, which tells the story of an all-consuming, death-defying, and ultimately self-destructive love and is generally considered an inspiration for Wuthering Heights. At seventeen, Emily began to attend the Roe Head Girls' School, where Charlotte was a teacher, but suffered from extreme homesickness and left after only a few months. Charlotte wrote later that "Liberty was the breath of Emily's nostrils; without it, she perished. The change from her own home to a school and from her own very noiseless, very secluded but unrestricted and unartificial mode of life, to one of disciplined routine (though under the kindest auspices), was what she failed in enduring... I felt in my heart she would die if she did not go home, and with this conviction obtained her recall." Emily returned home and Anne took her place. At this time, the girls' objective was to obtain sufficient education to open a small school of their own. Adulthood Emily became a teacher at Law Hill School in Halifax beginning in September 1838, when she was twenty. Her always fragile health soon broke under the stress of the 17-hour work day and she returned home in April 1839. Thereafter she remained at home, doing most of the cooking, ironing, and cleaning at Haworth. She taught herself German out of books and also practised the piano. In 1842, Emily accompanied Charlotte to the Héger Pensionnat in Brussels, Belgium, where they attended the girls' academy run by Constantin Héger in the hope of perfecting their French and German before opening their school. 
Unlike Charlotte, Emily was uncomfortable in Brussels, and refused to adopt Belgian fashions, saying "I wish to be as God made me", which rendered her something of an outcast. Nine of Emily's French essays survive from this period. Héger seems to have been impressed with the strength of Emily's character, writing that: She should have been a man – a great navigator. Her powerful reason would have deduced new spheres of discovery from the knowledge of the old; and her strong imperious will would never have been daunted by opposition or difficulty, never have given way but with life. She had a head for logic, and a capability of argument unusual in a man and rarer indeed in a woman... impairing this gift was her stubborn tenacity of will which rendered her obtuse to all reasoning where her own wishes, or her own sense of right, was concerned. The two sisters were committed to their studies and by the end of the term had become so competent in French that Madame Héger proposed that they both stay another half-year, even, according to Charlotte, offering to dismiss the English master so that she could take his place. Emily had, by this time, become a competent pianist and teacher and it was suggested that she might stay on to teach music. However, the illness and death of their aunt drove them to return to their father and Haworth. In 1844, the sisters attempted to open a school in their house, but their plans were stymied by an inability to attract students to the remote area. In 1844, Emily began going through all the poems she had written, recopying them neatly into two notebooks. One was labelled "Gondal Poems"; the other was unlabelled. Scholars such as Fannie Ratchford and Derek Roper have attempted to piece together a Gondal storyline and chronology from these poems. In the autumn of 1845, Charlotte discovered the notebooks and insisted that the poems be published. Emily, furious at the invasion of her privacy, at first refused but relented when Anne brought out her own manuscripts and revealed to Charlotte that she had been writing poems in secret as well. As co-authors of Gondal stories, Anne and Emily were accustomed to read their Gondal stories and poems to each other, while Charlotte was excluded from their privacy. Around this time Emily had written one of her most famous poems "No coward soul is mine", probably as an answer to the violation of her privacy and her own transformation into a published writer. Despite Charlotte's later claim, it was not her last poem. In 1846, the sisters' poems were published in one volume as Poems by Currer, Ellis, and Acton Bell. The Brontë sisters had adopted pseudonyms for publication, preserving their initials: Charlotte was "Currer Bell", Emily was "Ellis Bell" and Anne was "Acton Bell". Charlotte wrote in the 'Biographical Notice of Ellis and Acton Bell' that their "ambiguous choice" was "dictated by a sort of conscientious scruple at assuming Christian names positively masculine, while we did not like to declare ourselves women, because... we had a vague impression that authoresses are liable to be looked on with prejudice". Charlotte contributed 19 poems, and Emily and Anne each contributed 21. Although the sisters were told several months after publication that only two copies had sold, they were not discouraged (of their two readers, one was impressed enough to request their autographs). 
The Athenaeum reviewer praised Ellis Bell's work for its music and power, singling out his poems as the best: "Ellis possesses a fine, quaint spirit and an evident power of wing that may reach heights not here attempted", and The Critic reviewer recognised "the presence of more genius than it was supposed this utilitarian age had devoted to the loftier exercises of the intellect." Personality and character Emily Brontë's solitary and reclusive nature has made her a mysterious figure and a challenge for biographers to assess. Except for Ellen Nussey and Louise de Bassompierre, Emily's fellow student in Brussels, she does not seem to have made any friends outside her family. Her closest friend was her sister Anne. Together they shared their own fantasy world, Gondal, and, according to Ellen Nussey, in childhood they were "like twins", "inseparable companions" and "in the very closest sympathy which never had any interruption". In 1845 Anne took Emily to visit some of the places she had come to know and love in the five years she spent as governess. A plan to visit Scarborough fell through and instead the sisters went to York where Anne showed Emily York Minster. During the trip the sisters acted out some of their Gondal characters. Charlotte Brontë remains the primary source of information about Emily, although as an elder sister, writing publicly about her only shortly after her death, she is considered by certain scholars not to be a neutral witness. Stevie Davies believes that there is what might be called Charlotte's smoke-screen and argues that Emily evidently shocked her, to the point where she may even have doubted her sister's sanity. After Emily's death, Charlotte rewrote her character, history and even poems on a more acceptable (to her and the bourgeois reading public) model. Biographer Claire O'Callaghan suggests that the trajectory of Brontë's legacy was altered significantly by Elizabeth Gaskell's biography of Charlotte, which is concerning not only because Gaskell did not visit Haworth until after Emily's death, but also because Gaskell admits in that biography to disliking what she did know of Emily. As O'Callaghan and others have noted, Charlotte was Gaskell's primary source of information on Emily's life and may have exaggerated or fabricated Emily's frailty and shyness to
Riding of Yorkshire, England. Emily was the second youngest of six siblings, preceded by Maria, Elizabeth, Charlotte and Branwell. In 1820, Emily's younger sister Anne, the last Brontë child, was born. Shortly thereafter, the family moved eight miles away to Haworth, where Patrick was employed as perpetual curate. In Haworth, the children would have opportunities to develop their literary talents. When Emily was only three, and all six children under the age of eight, she and her siblings lost their mother, Maria, to cancer on 15 September 1821. The younger children were to be cared for by Elizabeth Branwell, their aunt and Maria's sister. Emily's three elder sisters, Maria, Elizabeth, and Charlotte, were sent to the Clergy Daughters' School at Cowan Bridge. At the age of six, on 25 November 1824, Emily joined her sisters at school for a brief period. At school, however, the children suffered abuse and privations, and when a typhoid epidemic swept the school, Maria and Elizabeth became ill. Maria, who may actually have had tuberculosis, was sent home, where she died. Elizabeth died shortly after. The four youngest Brontë children, all under ten years of age, had suffered the loss of the three eldest females in their immediate family. Charlotte maintained that the school's poor conditions permanently affected her health and physical development and that it had hastened the deaths of Maria (born 1814) and Elizabeth (born 1815), who both died in 1825. After the deaths of his older daughters, Patrick removed Charlotte and Emily from the school. Charlotte would use her experiences and knowledge of the school as the basis for Lowood School in Jane Eyre.
Similar themes of romanticism and noble savagery are apparent across the Brontës' juvenilia, notably in Branwell's The Life of Alexander Percy, which tells the story of an all-consuming, death-defying, and ultimately self-destructive love and is generally considered an inspiration for Wuthering Heights. At seventeen, Emily began to attend the Roe Head Girls' School, where Charlotte was a teacher, but suffered from extreme homesickness and left after only a few months. Charlotte wrote later that "Liberty was the breath of Emily's nostrils; without it, she perished. The change from her own home to a school and from her own very noiseless, very secluded but unrestricted and unartificial mode of life, to one of disciplined routine (though under the kindest auspices), was what she failed in enduring... I felt in my heart she would die if she did not go home, and with this conviction obtained her recall." Emily returned home and Anne took her place. At this time, the girls' objective was to obtain sufficient education to open a small school of their own. Adulthood Emily became a teacher at Law Hill School in Halifax beginning in September 1838, when she was twenty. Her always fragile health soon broke under the stress of the 17-hour work day and she returned home in April 1839. Thereafter she remained at home, doing most of the cooking, ironing, and cleaning at Haworth. She taught herself German out of books and also practised the piano. In 1842, Emily accompanied Charlotte to the Héger Pensionnat in Brussels, Belgium, where they attended the girls' academy run by Constantin Héger in the hope of perfecting their French and German before opening their school. Unlike Charlotte, Emily was uncomfortable in Brussels, and refused to adopt Belgian fashions, saying "I wish to be as God made me", which rendered her something of an outcast. Nine of Emily's French essays survive from this period. Héger seems to have been impressed with the strength of Emily's character, writing that: She should have been a man – a great navigator. Her powerful reason would have deduced new spheres of discovery from the knowledge of the old; and her strong imperious will would never have been daunted by opposition or difficulty, never have given way but with life. She had a head for logic, and a capability of argument unusual in a man and rarer indeed in a woman... impairing this gift was her stubborn tenacity of will which rendered her obtuse to all reasoning where her own wishes, or her own sense of right, was concerned. The two sisters were committed to their studies and by the end of the term had become so competent in French that Madame Héger proposed that they both stay another half-year, even, according to Charlotte, offering to dismiss the English master so that Charlotte could take his place. Emily had, by this time, become a competent pianist and teacher and it was suggested that she might stay on to teach music. However, the illness and death of their aunt drove them to return to their father and Haworth. In 1844, the sisters attempted to open a school in their house, but their plans were stymied by an inability to attract students to the remote area. In 1844, Emily began going through all the poems she had written, recopying them neatly into two notebooks. One was labelled "Gondal Poems"; the other was unlabelled. Scholars such as Fannie Ratchford and Derek Roper have attempted to piece together a Gondal storyline and chronology from these poems.
In the autumn of 1845, Charlotte discovered the notebooks and insisted that the poems be published. Emily, furious at the invasion of her privacy, at first refused but relented when Anne brought out her own manuscripts and revealed to Charlotte that she had been writing poems in secret as well. As co-authors of Gondal stories, Anne
List of extinction events Evolutionary importance Mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the newly dominant group is "superior" to the old but usually because an extinction event eliminates the old, dominant group and makes way for the new one, a process known as adaptive radiation. For example, mammaliformes ("almost mammals") and then mammals existed throughout the reign of the dinosaurs, but could not compete in the large terrestrial vertebrate niches that dinosaurs monopolized. The end-Cretaceous mass extinction removed the non-avian dinosaurs and made it possible for mammals to expand into the large terrestrial vertebrate niches. The dinosaurs themselves had been beneficiaries of a previous mass extinction, the end-Triassic, which eliminated most of their chief rivals, the crurotarsans. Another point of view, put forward in the Escalation hypothesis, predicts that species in ecological niches with more organism-to-organism conflict will be less likely to survive extinctions. This is because the very traits that keep a species numerous and viable under fairly static conditions become a burden once population levels fall among competing organisms during the dynamics of an extinction event. Furthermore, many groups that survive mass extinctions do not recover in numbers or diversity; many of these go into long-term decline and are often referred to as "Dead Clades Walking". However, clades that survive for a considerable period of time after a mass extinction, and which were reduced to only a few species, are likely to have experienced a rebound effect called the "push of the past". Darwin was firmly of the opinion that biotic interactions, such as competition for food and space – the "struggle for existence" – were of considerably greater importance in promoting evolution and extinction than changes in the physical environment. He expressed this in The Origin of Species: "Species are produced and exterminated by slowly acting causes ... and the most important of all causes of organic change is one which is almost independent of altered ... physical conditions, namely the mutual relation of organism to organism – the improvement of one organism entailing the improvement or extermination of others". Patterns in frequency It has been suggested variously that extinction events occurred periodically, every 26 to 30 million years, or that diversity fluctuates episodically about every 62 million years. Various ideas attempt to explain the supposed pattern, including the presence of a hypothetical companion star to the Sun, oscillations in the galactic plane, or passage through the Milky Way's spiral arms. However, other authors have concluded that the data on marine mass extinctions do not fit with the idea that mass extinctions are periodic, or that ecosystems gradually build up to a point at which a mass extinction is inevitable. Many of the proposed correlations have been argued to be spurious. 
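As an aside on how such periodicity claims are evaluated: the sketch below is not drawn from any study cited here; the data and every name in it are illustrative assumptions. It shows the kind of spectral test typically applied to an extinction-intensity series, a Lomb–Scargle periodogram in Python, which copes with the unevenly spaced interval boundaries of the geological record and scans trial periods around the proposed 26–30-million-year cycle.

import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Hypothetical, unevenly spaced sample: ages of interval boundaries (Ma) and an
# extinction-intensity proxy at each one. Real analyses use compilations such as
# Sepkoski's genus-level data; these numbers are synthetic stand-ins only.
ages = np.sort(rng.uniform(0.0, 500.0, size=80))            # Ma before present
intensity = 0.10 + 0.05 * np.sin(2 * np.pi * ages / 27.0)   # injected 27 Myr cycle
intensity += rng.normal(0.0, 0.03, size=ages.size)          # observational noise

# Scan trial periods of 10-100 Myr; scipy's lombscargle expects angular frequencies.
periods = np.linspace(10.0, 100.0, 500)
power = lombscargle(ages, intensity - intensity.mean(), 2 * np.pi / periods)

print(f"Strongest trial period: {periods[np.argmax(power)]:.1f} Myr")
# A real claim of periodicity would also need a significance test, for example
# comparing the peak against periodograms of shuffled or phase-randomised surrogates.

In practice, the disagreement in the literature largely turns on whether such periodogram peaks remain significant once uneven sampling, the choice of geological time scale, and multiple testing are taken into account.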
Others have argued that there is strong evidence supporting periodicity in a variety of records, and additional evidence in the form of coincident periodic variation in nonbiological geochemical variables. Mass extinctions are thought to result when a long-term stress is compounded by a short-term shock. Over the course of the Phanerozoic, individual taxa appear to have become less likely to suffer extinction, which may reflect more robust food webs, as well as fewer extinction-prone species, and other factors such as continental distribution. However, even after accounting for sampling bias, there does appear to be a gradual decrease in extinction and origination rates during the Phanerozoic. This may represent the fact that groups with higher turnover rates are more likely to become extinct by chance; or it may be an artefact of taxonomy: families tend to become more speciose, therefore less prone to extinction, over time; and larger taxonomic groups (by definition) appear earlier in geological time. It has also been suggested that the oceans have gradually become more hospitable to life over the last 500 million years, and thus less vulnerable to mass extinctions, but susceptibility to extinction at a taxonomic level does not appear to make mass extinctions more or less probable. Causes There is still debate about the causes of all mass extinctions. In general, large extinctions may result when a biosphere under long-term stress undergoes a short-term shock. An underlying mechanism appears to be present in the correlation of extinction and origination rates to diversity. High diversity leads to a persistent increase in extinction rate; low diversity to a persistent increase in origination rate. These presumably ecologically controlled relationships likely amplify smaller perturbations (asteroid impacts, etc.) to produce the global effects observed. Identifying causes of specific mass extinctions A good theory for a particular mass extinction should: explain all of the losses, not just focus on a few groups (such as dinosaurs); explain why particular groups of organisms died out and why others survived; provide mechanisms that are strong enough to cause a mass extinction but not a total extinction; be based on events or processes that can be shown to have happened, not just inferred from the extinction. It may be necessary to consider combinations of causes. For example, the marine aspect of the end-Cretaceous extinction appears to have been caused by several processes that partially overlapped in time and may have had different levels of significance in different parts of the world. Arens and West (2006) proposed a "press / pulse" model in which mass extinctions generally require two types of cause: long-term pressure on the eco-system ("press") and a sudden catastrophe ("pulse") towards the end of the period of pressure. Their statistical analysis of marine extinction rates throughout the Phanerozoic suggested that neither long-term pressure alone nor a catastrophe alone was sufficient to cause a significant increase in the extinction rate. Most widely supported explanations MacLeod (2001) summarized the relationship between mass extinctions and events that are most often cited as causes of mass extinctions, using data from Courtillot, Jaeger & Yang et al. 
(1996), Hallam (1992) and Grieve & Pesonen (1992): Flood basalt events: 11 occurrences, all associated with significant extinctions. But Wignall (2001) concluded that only five of the major extinctions coincided with flood basalt eruptions and that the main phase of extinctions started before the eruptions. Sea-level falls: 12, of which seven were associated with significant extinctions. Asteroid impacts: one large impact is associated with a mass extinction, that is, the Cretaceous–Paleogene extinction event; there have been many smaller impacts but they are not associated with significant extinctions. The most commonly suggested causes of mass extinctions are listed below. Flood basalt events The formation of large igneous provinces by flood basalt events could have: produced dust and particulate aerosols, which inhibited photosynthesis and thus caused food chains to collapse both on land and at sea; emitted sulfur oxides that were precipitated as acid rain and poisoned many organisms, contributing further to the collapse of food chains; and emitted carbon dioxide, thus possibly causing sustained global warming once the dust and particulate aerosols dissipated. Flood basalt events occur as pulses of activity punctuated by dormant periods. As a result, they are likely to cause the climate to oscillate between cooling and warming, but with an overall trend towards warming as the carbon dioxide they emit can stay in the atmosphere for hundreds of years. It is speculated that massive volcanism caused or contributed to the End-Permian, End-Triassic and End-Cretaceous extinctions. The correlation between gigantic volcanic events expressed in the large igneous provinces and mass extinctions was shown for the last 260 million years. Recently, such a correlation was extended across the whole Phanerozoic Eon. Sea-level falls These are often clearly marked by worldwide sequences of contemporaneous sediments that show all or part of a transition from sea-bed to tidal zone to beach to dry land – and where there is no evidence that the rocks in the relevant areas were raised by geological processes such as orogeny. Sea-level falls could reduce the continental shelf area (the most productive part of the oceans) sufficiently to cause a marine mass extinction, and could disrupt weather patterns enough to cause extinctions on land. But sea-level falls are very probably the result of other events, such as sustained global cooling or the sinking of the mid-ocean ridges. Sea-level falls are associated with most of the mass extinctions, including all of the "Big Five"—End-Ordovician, Late Devonian, End-Permian, End-Triassic, and End-Cretaceous. A 2008 study, published in the journal Nature, established a relationship between the speed of mass extinction events and changes in sea level and sediment. The study suggests changes in ocean environments related to sea level exert a driving influence on rates of extinction, and generally determine the composition of life in the oceans. Impact events The impact of a sufficiently large asteroid or comet could have caused food chains to collapse both on land and at sea by producing dust and particulate aerosols and thus inhibiting photosynthesis. Impacts on sulfur-rich rocks could have emitted sulfur oxides precipitating as poisonous acid rain, contributing further to the collapse of food chains. Such impacts could also have caused megatsunamis and/or global forest fires. 
Most paleontologists now agree that an asteroid did hit the Earth about 66 Ma, but there is an ongoing dispute over whether the impact was the sole cause of the Cretaceous–Paleogene extinction event. Nonetheless, in October 2019, researchers reported that the Chicxulub asteroid impact, which resulted in the extinction of the non-avian dinosaurs 66 Ma, also rapidly acidified the oceans, producing ecological collapse and long-lasting effects on the climate, and was a key reason for the end-Cretaceous mass extinction. According to the Shiva Hypothesis, the Earth is subject to increased asteroid impacts about once every 27 million years because of the Sun's passage through the plane of the Milky Way galaxy, thus causing extinction events at 27-million-year intervals. Some evidence for this hypothesis has emerged in both marine and non-marine contexts. Alternatively, the Sun's passage through the higher-density spiral arms of the galaxy could coincide with mass extinctions on Earth, perhaps due to increased impact events. However, a reanalysis of the effects of the Sun's transit through the spiral structure based on CO data has failed to find a correlation. Global cooling Sustained and significant global cooling could kill many polar and temperate species and force others to migrate towards the equator; reduce the area available for tropical species; and often make the Earth's climate more arid on average, mainly by locking up more of the planet's water in ice and snow. The glaciation cycles of the current ice age are believed to have had only a very mild impact on biodiversity, so the mere existence of a significant cooling is not sufficient on its own to explain a mass extinction. It has been suggested that global cooling caused or contributed to the End-Ordovician, Permian–Triassic, Late Devonian extinctions, and possibly others. Sustained global cooling is distinguished from the temporary climatic effects of flood basalt events or impacts. Global warming This would have the opposite effects: expand the area available for tropical species; kill temperate species or force them to migrate towards the poles; possibly cause severe extinctions of polar species; and often make the Earth's climate wetter on average, mainly by melting ice and snow and thus increasing the volume of the water cycle. It might also cause anoxic events in the oceans (see below). Global warming as a cause of mass extinction is supported by several recent studies. The most dramatic example of sustained warming is the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. It has also been suggested to have caused the Triassic–Jurassic extinction event, during which 20% of
and birds, the former descended from the synapsids and the latter from theropod dinosaurs, emerged as dominant terrestrial animals. Despite the popularization of these five events, there is no definite line separating them from other extinction events; using different methods of calculating an extinction's impact can lead to other events featuring in the top five. Older fossil records are more difficult to interpret. This is because: older fossils are harder to find, as they are usually buried at a considerable depth; dating of older fossils is more difficult; productive fossil beds are researched more than unproductive ones, therefore leaving certain periods unresearched; prehistoric environmental events can disturb the deposition process; and the preservation of fossils varies on land, but marine fossils tend to be better preserved than their sought-after land-based counterparts. It has been suggested that the apparent variations in marine biodiversity may actually be an artifact, with abundance estimates directly related to the quantity of rock available for sampling from different time periods. However, statistical analysis shows that this can only account for 50% of the observed pattern, and other evidence such as fungal spikes (geologically rapid increase in fungal abundance) provides reassurance that most widely accepted extinction events are real. A quantification of the rock exposure of Western Europe indicates that many of the minor events for which a biological explanation has been sought are most readily explained by sampling bias. Research completed after the seminal 1982 paper (Sepkoski and Raup) has concluded that a sixth mass extinction event is ongoing: 6. Holocene extinction, currently ongoing. Extinctions have occurred at over 1000 times the background extinction rate since 1900, and the rate is increasing. The mass extinction is a result of human activity, driven by population growth and overconsumption of the Earth's natural resources. The 2019 global biodiversity assessment by IPBES asserts that out of an estimated 8 million species, 1 million plant and animal species are currently threatened with extinction. In late 2021, WWF Germany suggested that over a million species could go extinct within a decade in the "largest mass extinction event since the end of the dinosaur age."